
Should Artificial General Intelligence (AGI) follow the dualistic path?


An AGI will need to be able to choose between two or more actions, because many real-world situations require that, for example:

  • Which team do you support in this football match?
  • In World War II, there were two main coalitions; which one would you have supported?
  • Should I harm this human or myself?

You can come up with many other such situations.

I don’t know whether our emotions or feelings are dualistic, in the sense that we can split our feelings into just two categories. I don’t think that’s the case; there might be a spectrum instead, but that’s something to ask a neuroscientist or psychologist.

However, even if there were two opposites or extremes, we might not be able to decide between them. For example, in politics, there’s left and right, but there’s also the center and positions in between. And some people may still not vote because they don’t agree with any of them.

In general, I think that categorization is something humans, and even other animals, do to simplify our lives, but there can be many categories, or the categories might not always be clear (at first). From an ethical standpoint, it’s also true that being able to distinguish between good and bad can be useful. The problem in this case is the definition of good and bad, which can be subjective.

This article talks more about dualism, which can have different meanings. For example, mind–body dualism doesn’t seem to have much to do with your definition, but it’s still relevant to AI, if you’re interested.

(Btw, reinforcement learning is all about actions too. You might also be interested in this question.)
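To illustrate that last point with a minimal sketch (not from the original answer, and the values are made up): a reinforcement-learning agent typically chooses among an arbitrary finite set of actions, not just two, for instance with an epsilon-greedy rule over estimated action values:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """Pick an action index: explore with probability epsilon, else exploit."""
    if random.random() < epsilon:
        # Explore: pick any action uniformly at random.
        return random.randrange(len(q_values))
    # Exploit: pick the action with the highest estimated value.
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Three candidate actions with (hypothetical) value estimates; nothing
# restricts the agent to a binary choice.
q = [0.3, 0.7, 0.5]
action = epsilon_greedy(q, epsilon=0.0)  # with epsilon=0, always the argmax
print(action)  # → 1
```

So the "two or more actions" requirement is already the standard setting in RL: the action space can have any number of discrete options.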



