To simplify, AI can be AGI (Artificial General Intelligence) or ANI (Artificial Narrow Intelligence). We see lots of examples of ANI, e.g. self-driving cars and AlphaGo. We used to think that an AI that could pass the Turing test would be AGI, but GenAIs (e.g. ChatGPT) can already pass the Turing test and we still don’t think they are good enough to be called AGI.
What most people mean when they talk about AGI is an AI system that has capabilities similar to those of most (average) humans and hence can start replacing humans in significant ways.
The AI models of today are getting remarkably capable. We couldn’t have imagined the capability of today’s AI chatbots, image generators and song generators a couple of years ago. So how will we know an AGI when we see one, i.e. how will we know it has arrived?
We need to see each of the criteria below satisfied before we can say that AGI has arrived:
- Is replacing humans at work enough, or does it need to replace humans beyond work? To avoid setting the bar too high, for now let’s restrict AGI’s application to work only.
- What type of work should it be replacing?
- If it’s white-collar work, then there is no need for physical interfaces to the world – it just needs the interfaces a human has on a laptop or mobile, i.e. text, audio, video, mouse, touch and internet access. The constraint on such an AGI is the set of interfaces this device has with the outside world. It could control a factory that manufactures robots, but if such an interface doesn’t exist, it can’t control it.
- If we are talking about blue-collar work, then we are talking about an android. The development of such an android with AGI will face more constraints because it is no longer a software-only problem but also a hardware problem. On the other hand, ANIs are better suited to replacing some blue-collar work (e.g. self-driving taxis).
- Because of the above two reasons, and because AGI primarily has to deal with intelligence, let’s restrict AGI’s application to white-collar work only for now.
- What has it been trained on – If it has to replace humans, it should be able to do a task it has not been trained on, i.e. it has to be able to figure it out and hence needs logic. This doesn’t mean zero training. It means the ability to “learn” and build on its initial training. This is not possible with the current crop of GenAIs.
- How long would the “learning” take – A human being, and hence his brain, is trained over years. It is trained (i.e. learns) by interacting with the real world. A human being will have limited capability if he grows up not surrounded by other human beings. If we can only develop an AI that is better than a human after many years of “training”, no one would care. Hence the AGI should be able to “learn” over a drastically shorter duration, and hence must have a very high IQ, i.e. it has to be more intelligent than the majority of humans in most common knowledge domains. The limiting factor might be how much exposure it has to the real world and the rate at which it gets feedback.
- Generalist vs specialist – One AI that can do many different kinds of work won’t happen – humans themselves started to specialise, which is why we have doctors, lawyers, coders, etc. An AGI should be able to specialise in more than one area, but it will end up specialising in one, just as an average human could become a doctor or an engineer but usually ends up as one. This is because, in addition to the online and observable training data it can be trained on, it will also need to learn by “doing”, and given the physical constraints of the world it will take time to do this type of learning (although it will learn at a much faster rate than humans).
- What type of world does it operate in – Would AGI have to function in the human world (e.g. a self-driving car running on our roads today), or would there be worlds built for AI (e.g. a city designed only for self-driving cars, or a compound where automated manufacturing happens, all controlled by the AI)? As AGI will initially work with humans, it will have to perform in the human world.
- What’s its purpose and why does it need a purpose – The foundational purpose of a human in his natural state is to preserve himself and procreate; it has been biologically programmed into him. Beyond that foundational purpose, each person tries to find or develop his or her own purpose in life. Human purpose is the result of a reward feedback loop, i.e. humans do those things that give them a reward (long-term or short-term). If an AGI has to perform well, a reward mechanism has to be built in. As AGI is designed by humans, its purpose will be determined by how it was designed. It is also very easy to create an ANI whose sole purpose is to monitor the output of an AGI and ensure it stays aligned with the designed purpose (see the sketch after this list).
- Does it have to be conscious? I am frankly not sure. More on this later.
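To make the reward-feedback-loop and monitoring idea above concrete, here is a minimal, hypothetical sketch in Python. The `Agent`, `Monitor`, action names and reward values are all made-up assumptions for illustration; the point is only that the designer builds the reward in, the agent learns from that feedback, and a separate narrow monitor checks that outputs stay within the designed purpose.

```python
# Toy sketch (not a real system): an agent learns which action earns the most
# designed reward, while a narrow "monitor" ANI checks each output against the
# designed purpose. All names and reward values here are illustrative assumptions.

import random

ALLOWED_ACTIONS = ["draft_report", "send_email", "browse_web"]

def designed_reward(action: str) -> float:
    """Reward mechanism built in by the designer: favour productive work."""
    return {"draft_report": 1.0, "send_email": 0.5, "browse_web": 0.1}.get(action, -1.0)

class Agent:
    """Toy 'AGI' that learns action values from reward feedback (epsilon-greedy)."""
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}

    def act(self, epsilon: float = 0.2) -> str:
        # Mostly pick the best-known action, sometimes explore.
        if random.random() < epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action: str, reward: float) -> None:
        # Incremental average of observed rewards: the feedback loop.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

class Monitor:
    """Narrow ANI whose only job is to check outputs against the designed purpose."""
    def check(self, action: str) -> bool:
        return action in ALLOWED_ACTIONS

agent, monitor = Agent(ALLOWED_ACTIONS), Monitor()
for step in range(100):
    action = agent.act()
    if not monitor.check(action):   # in this toy the agent can only pick allowed
        continue                    # actions, so nothing is ever blocked here
    agent.learn(action, designed_reward(action))

print(agent.values)  # 'draft_report' ends up with the highest estimated value
```

Running the loop, the agent’s value estimates converge towards the rewards its designer chose, which is the sense in which its purpose is determined by how it was designed.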
In summary, we will be able to say that AGI has arrived when an AI system can completely replace a white-collar specialist role in the real world with far fewer years of training/learning than a human would have taken.