In today’s column, I am going to do a deep dive into what is meant by the oft-mentioned terms known as Artificial Intelligence (AI) and Artificial General Intelligence (AGI). You might be under the assumption that both AI and AGI are tightly defined and that everyone agrees on what each such term signifies and consists of. Regrettably, that is not at all the case. The reality is that the definitions of AI and AGI are all over the map.
At first glance, a definitional disparity might seem of little consequence. A rose is a rose, one might insist, no matter how it is defined. The problem though is that if you don’t know what a rose is then there is ample room to talk past each other when discussing roses. For some, a rose might really be an apple, while others might believe it to be an orange.
The practical concerns are plentiful when it comes to the vagueness and variability of what AI and AGI denote. For example, there is a great deal of concern these days about AI or AGI being an existential risk, namely that we might all be wiped out or enslaved by intelligent machines, see my discussion at the link here. This has brought forth calls to stridently rein in AI and AGI. Indeed, there are AI ethics guidelines or soft laws that attempt to ensure that AI or AGI is devised in a responsible manner and aligned with human values, generally referred to as Responsible AI, see my coverage at the link here. There are new executive orders by the White House and proposed or passed legislation entailing so-called hard laws to govern and control AI, see my analyses at the link here and the link here, just to name a few.
But tons of loopholes make this a very leaky ship.
The vexing issue is that the definition or meaning of AI or AGI in those governing efforts could be exasperatingly off-target. If a devised definition is overly broad, we could have all kinds of non-AI apps and systems that suddenly are being burdened by new laws or stipulated guidelines that were supposed to cover just AI or AGI. Innovation could be inadvertently stifled. If the definitions undershoot what AI or AGI is, the result could be that threatening types of AI and AGI might escape the stipulated governance constraints by being considered outside the scope of the freshly devised AI laws. They will run free, as it were, unencumbered by laws that lamentably rest on slipshod or loosey-goosey definitions.
Furthermore, I’ve predicted that we are going to get into a long-term and excruciatingly costly legal battle once these new AI laws begin to be enforced.
Here’s the deal.
A company that allegedly has devised an AI or AGI app will be dragged into the courts under the claim that the app violates some particular new AI law. The lawyers for that company will strenuously argue that the definition of AI or AGI in that passed law does not squarely match the nature of the app that the firm has devised. Back and forth the protracted legal arguments will go. You can bet that these issues will eventually climb up the legal ladder in the appeals process (taking years to slog forward). It is conceivable that at some point the U.S. Supreme Court might need to weigh in, especially if these laws convey some form of legal agency to AI or AGI of their own standing (see my analysis at the link here).
Digging Into AI And AGI
You certainly have heard or seen the AI moniker. It’s everywhere these days. You might sometimes have seen or heard the AGI moniker. It is slowly gaining momentum.
The phrase “Artificial Intelligence” and its abbreviation of AI have been around since the start of this field of study in the 1950s. In contrast, the phrase “Artificial General Intelligence”, abbreviated as AGI, has only been around for roughly twenty years or so (there are debates about the precise moment that the AGI naming originated, though most would likely concede that the AGI moniker wasn’t a widely known and popularized phrase until the last five to ten years).
If you are unsure of the difference between referring to something as AI versus saying that it is AGI, you can rest assured that you are not alone in that bewilderment. As I said, these definitions are all over the map. That being said, I am going to try and somewhat straighten that out for you. Hang in there and you’ll get some semblance of clarity on this.
On the matter of what is AGI, a recent research paper posted by Google DeepMind provides an intriguing proposal of considering that AGI ought to include levels of autonomy. Readers of my column will recognize that I have frequently examined and discussed levels of autonomy associated with autonomous vehicles and self-driving cars, see the link here and the link here. I will address this proposed use of LoA (levels of autonomy) in the context of AGI as advocated by the AI researchers at DeepMind.
The approach I will take in this discussion is as follows.
First, let’s begin at the beginning by noting what is meant by the AI moniker and how it came to arise as an everyday parlance. Second, I will shift into examining the somewhat more recently devised AGI moniker and explain how that naming came to be. Third, I’ll explore the DeepMind proposal and proffer insights about what it portends and how AGI might be better understood as a construct by utilizing levels of autonomy. Toward the end, I’ll provide final commentary and my concluding remarks concerning the AI and AGI definitional and ontological adventure.
Get yourself ready for quite a journey.
As Socrates notably said: “The beginning of wisdom is the definition of terms.”
This is extremely important stuff and not something of a trifling nature. I realize that some contend that definitions are of little or no consequence. They would be wrong. Dead wrong, especially if one believes that AI or AGI might be a grave existential risk.
How can we manage or govern something if we cannot even define it?
Keep that steadfastly in mind.
On The Meaning Of AI
During my many speaking engagements at AI conferences, I am frequently asked a rather pointed question: what is AI, or what is the definition of AI? This query is certainly understandable. Such a question vividly showcases the prevailing confusion over the matter. It seems that each resource that you might refer to online has a different definition of AI. A colossally frustrating circumstance for all.
I usually try to be as succinct as I reasonably can and simply indicate that AI refers to systems that exhibit intelligent behavior. Period, end of story. Well, of course, that’s not really the end of the story and there is a lot more involved. I find that giving a rough-and-tumble definition is typically all that most people seem to informally want to know. In that sense, if you like, you can just declare that AI is about systems that exhibit intelligent behavior and then drop the mic. I doubt though that this will be entirely satisfying for those with an innate sense of curiosity.
We shall go ahead and use this brisk definition to get things started:
- “AI is a system that exhibits intelligent behavior.”
Let’s parse this basic conception.
Right away you can ask a provocative and logically important question as to what is meant by the notion of intelligent behavior. Does a toaster exhibit intelligent behavior? A modern-day toaster seems to be able to toast bread and stop doing so when the bread is toasted. Isn’t that intelligence at work? I doubt that we would generally say that a conventional toaster is intelligent enough to be within the realm of any overall sense of what might be construed as AI.
The question then arises as to what we mean by intelligence, intelligent behavior, and the like.
Another important parsing is that the short and sweet definition says that AI exhibits intelligent behavior, which is a stark contrast to saying that AI embodies intelligent behavior. Here’s why that is a crucial distinction. We usually indicate that humans embody intelligence. Somehow, and we don’t truly yet know how, humans in their brains are able to embody intelligence. AI as we conceive of it today is a mathematical and computational formulation consisting of hardware and software. It is not the same as the wetware of our brains and our minds.
Thus, rather than saying that AI embodies intelligence per se, which might be anthropomorphizing the AI, we would be better off saying that AI exhibits intelligent behavior. The AI might be of an entirely different construct than what happens in the human brain. And that’s okay. We don’t necessarily have to do precisely what the underlying mechanism of the brain does. If we can arrive at the same or similar outcomes, namely the exhibiting of intelligent behavior, we have won the brass ring.
Some AI researchers want to exactly create in computers the same constructs of what the brain does, either feature-for-feature or via a simulation of it. They aim to reverse engineer the brain and the mind. Doing so will hopefully make it more apparent how to devise AI that does the same kinds of things.
Other AI researchers believe that we don’t necessarily have to crack the secret code on how the brain and the human mind work. They point out that if you are relying on exposing the inner secrets of the biological and chemical mysteries of the brain, you might be bottlenecked trying to arrive at AI. The idea is that maybe we can come up with intelligent behaviors by doing something akin to the brain or perhaps even something utterly unlike the brain. If we could piece together Legos and get intelligent behavior, this would be miraculous and presumably not dependent on divining how the human mind functions.
Okay, so the shortest kind of definition for AI is that systems exhibiting intelligent behavior are construed as being what we will label as AI. For the moment, we’ll leave vague the understanding of what intelligent behavior consists of. The overall emphasis is that the AI just has to exhibit and not necessarily embody whatever mechanisms give rise to showcasing intelligent behavior.
Shift gears and let’s proceed toward lengthier definitions of AI.
Mentally go back in time to the 1950s.
The hallmark origin of the AI phrase is generally traced to a proposal for a research project in 1955 that sought to explore something new that was coined as artificial intelligence. The proposal was put together by computer scientist John McCarthy and had this subject title (note that the title mentions “artificial intelligence”): “A Proposal for the Dartmouth Summer Research Project On Artificial Intelligence”, by John McCarthy, M. L. Minsky, N. Rochester, and C.E. Shannon, dated August 31, 1955. John McCarthy is often considered the father of AI.
Here is a notable excerpt from the cited proposal:
- “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”
The excerpt implies that AI is that which consists of a machine that can be made to simulate intelligence.
The simulation doesn’t have to be precisely what the brain or mind does. We can proceed on the basis that alternative means are satisfactory. The overarching goal is to devise machines that can do things that humans rely upon their respective intelligence to do, such as making use of natural languages, devising and utilizing abstractions and concepts, solving problems, and learning or improving upon the intelligence that we embody.
We can also consider what today’s dictionaries say when it comes to defining AI.
Here is a smattering of AI definitions that I plucked from the Internet for today’s column:
- “The capability of computer systems or algorithms to imitate intelligent human behavior” (source: Merriam-Webster Dictionary).
- “Artificial intelligence is a type of computer technology concerned with making machines work in an intelligent way, similar to the way that the human mind works” (source: Collins Dictionary).
- “The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages” (source: Oxford Languages Dictionary).
- “The use or study of computer systems or machines that have some of the qualities that the human brain has, such as the ability to interpret and produce language in a way that seems human, recognize or create images, solve problems, and learn from data supplied to them” (source: Cambridge Dictionary).
I trust that you can discern some important characteristics of those short definitions of AI. By and large, they steer clear of insisting that AI must be exactly composed as the human brain or mind is composed. The indication is that exhibiting intelligence or simulating intelligence is sufficient.
Another common element of AI definitions involves trying to explain what is meant by intelligent behaviors. For example, one or more of those dictionary definitions attempts to explain that intelligent behavior might be the ability to recognize or create images. The 1955 proposal also laid out some explanatory examples such as being able to solve problems and learn languages. Giving examples of what is meant by intelligent behavior is handy and somewhat gets us around the toaster conundrum. Currently, toasters aren’t able to generally solve problems (other than toasting!), though in the future you are likely to see that AI-infused toasters will gradually become more common — I’ll have to change up my toaster comparison accordingly.
Maybe I’ll just ask my toaster to tell me what AI is and then relate that to you.
Alleged Robust Definitions Of AI
You are now familiar with and hopefully feel conversant in the short-form definitions of AI. I’ve presented you with AI definitions that range from a few words to a sentence or two in length.
We are ready to move up to the bigwigs.
Attempts to define AI for legal purposes are often done by expanding the length of the definition. The keystone assumption is that a few words or a sentence won’t cut the mustard. We need to be more detailed about what AI is. The trouble with expanding the size of the wording is that you don’t necessarily make things any tighter. There can be just as much variability in a lengthy definition as there can be in a shorter one.
Do not be fooled into thinking that the length of the definition alone imparts preciseness. It does not. All in all, words that we use in our natural language such as English are said to be semantically ambiguous. The upshot is that each word that we use can have a multitude of meanings. The more words that you toss into a definition, the more semantic ambiguity you will likely have.
Sometimes a kind of wording-oriented death spiral occurs. A definition is expanded with more and more words to try and pin down the semantic ambiguities and get a tighter definition. Meanwhile, for each additional word included, you are also increasing the semantic ambiguity. You can’t win. There is some point at which a kind of crossover occurs. You have to settle on some final set of wording that aims to have enough words to reduce the ambiguities while at the same time not opening the door to even more ambiguities.
Makes one’s head hurt to think about it.
In the United States, a frequently utilized legalese definition for AI is one that can be found in the U.S. National Defense Authorization Act and has been carried around for several years from its initial use to today. I’ll show it to you in a moment. Many other federal agencies use the same definition. Some states use it too. The thinking is that this is maybe a de facto AI definition that can be used on a copy-and-paste basis.
It is unfortunately a bit of a house of cards, as I explain at the link here. We do not yet know whether the definition will stand up to legal scrutiny. Just because it is often used doesn’t make it airtight. The chances are that at some future time, the definition will get vetted in our courts. The result could be that all those others who have relied upon the definition will then be subject to whatever legally determined deficiency exists. They will all be in the same boat, rightly or wrongly. A rippling effect will occur. The dominos might fall all at once.
This doesn’t imply or suggest that other AI definitions are any better off. Those who try to reinvent the wheel of AI definitions are equally subject to legal exposure. Until or if we somehow arrive collectively at a universally agreed definition for AI, everyone is facing the same looming sword hanging over their heads.
Here then is the semi-default widely-copied AI definition as shown in the US National Defense Authorization Act:
- “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to: (a) Perceive real and virtual environments, (b) Abstract such perceptions into models through analysis in an automated manner, (c) Use model inference to formulate options for information or action” (source: United States National Defense Authorization Act, Fiscal Year 2024).
Take a close look at it.
You can readily see that it has the same hallmarks as the shorter AI definitions. The length is for sure longer. Does the length help or hurt? Well, there is a lot of debate about that. For example, the definition includes the phrasing “for a given set of human-defined objectives.”
Ponder that segment.
If we have an AI system that is not provided with a set of human-defined objectives, does that mean that the AI system is not construed as an AI system since it evades the given definition? In other words, we let loose something that most of us would likely agree is in fact AI, but we do not feed it any stated objectives. Instead, we let it self-determine its objectives. Ergo, this would seem to no longer be an AI system as defined by this particular definition. It falls outside of the stated definition.
Do you see the quandary at hand?
I hope so.
You can pick apart the rest of the definitions in a similar fashion. This isn’t being nit-picky. Keep in mind that the legal definition will have a wide significance in terms of what is construed as within the scope of AI. This in turn will determine what legally is susceptible to the constraints or limits associated with the law at hand. And so on.
You might be wondering whether we could arrive at a universal definition of AI that would cut across all borders and geographies. Does the United States really need to have its own definition of AI? The same could be said of Canada, France, the UK, and the like.
Does each country need to have its own definition?
Your gut instinct would be that there is no need to have a nation-specific AI definition. AI is AI, regardless of which country is contending with it. A rose is a rose, no matter which country the rose is found in. The problem though is that since AI is loosey-goosey at the get-go, each jurisdiction seems to want to define AI in its own preferred manner. This applies within the United States too, such that there are a multitude of federal definitions, and there are a multitude of AI definitions at the state level, and even the same can be said at the county level, the city level, and so on.
There are enough definitions of AI to shake a stick at.
The EU has been working on a piece of legislation known as the EU Artificial Intelligence Act (AIA) and it too contains a definition of AI. This would be an AI definition that would presumably apply to a plethora of countries since it aims to be passed for the EU as a whole.
Here is the EU AIA definition of AI (subject to change since it is a draft):
- “A system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of human-defined objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts” (EU Artificial Intelligence Act, Draft 2022).
What do you think of that AI definition?
It certainly seems more laden with technological terminology. The notion of generative AI is mentioned. The phrasing of knowledge-based systems is indicated. Etc. You might observe that the word “autonomy” is prominently noted too. We will return to the autonomy aspects later on in this discussion.
Despite the heavy use of terminology, be assured there are plenty of loopholes and gotchas in the drafted AI definition. It too is subject to semantic ambiguity, as are all definitions.
You might be wondering whether the United Nations has any AI definitions. The hope might be that a universal definition of AI could be landed upon at the UN level. Perhaps all other countries would then merely copy and paste that definition. Turns out that the UN has a multitude of AI definitions and confronts the same conundrum as the rest of us.
Here is an example of an AI definition as noted in the UNESCO document that depicts AI ethics considerations (for my in-depth analysis of this, see the link here):
- “Information-processing technologies that integrate models and algorithms that produce a capacity to learn and to perform cognitive tasks leading to outcomes such as prediction and decision-making in material and virtual environments. AI systems are designed to operate with varying degrees of autonomy by means of knowledge modeling and representation and by exploiting data and calculating correlations. AI systems may include several methods, such as but not limited to: (i) machine learning, including deep learning and reinforcement learning; (ii) machine reasoning, including planning, scheduling, knowledge representation and reasoning, search, and optimization” (United Nations UNESCO, Recommendation on the Ethics of AI, 2021).
I realize you might be nearing a point of numbness about these various AI definitions.
I’ll share one last one for now and encourage you to go online to find more. Plenty can be found.
This is what the OECD has offered as an AI definition (they have other AI definitions too):
- “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy” (OECD Organization for Economic Co-operation and Development, Council on Artificial Intelligence).
Note again the use of the word autonomy.
We shall next consider additional caveats about AI definitions, including leading up to the autonomy elements.
Different Ways Of Referring To AI
One of the difficulties in nailing down what AI consists of involves the different ways in which AI as a moniker is typically used in society. This is reminiscent of the parable about the blind persons trying to describe an elephant. One person touches the tusk and insists that is what the elephant entails. Another person touches a stout leg and insists that is what the elephant entails. And on this goes.
Let’s go ahead and get the mainstay elephant-like depictions of AI out onto the open table.
Here are the four mainstays:
- (1) AI is a computer system or machine (as an artifact).
- (2) AI is a field of study.
- (3) AI is a set of techniques and technologies.
- (4) AI is an aspiration or goal.
Time to unpack those.
The first mainstay is akin to the definitions of AI that I’ve already covered, namely that AI is a computer system or machine. It is in a broad sense an artifact that can exhibit intelligent behavior.
The second mainstay considers AI to be a field of study. Similar to considering law or medicine as fields of study, one might liken AI to a domain or specialty that people can learn about, research on, teach, practice, and advance as an endeavor. A college student these days might get asked what they are majoring in and will sometimes say that they are majoring in AI.
The third mainstay asserts that AI can be described as numerous techniques or technologies. For example, AI is said to consist of machine learning (ML), large language models (LLM), natural language processing (NLP), generative AI (GenAI), robotics, knowledge-based systems (KBS), common-sense reasoning (CSR), etc. By using one or more of those techniques or technologies, you are presumably doing AI work or devising AI.
The fourth mainstay is a standout because it refers to the aspirations or seemingly ultimate goal of the AI field, aiming to attain AI that fully and convincingly exhibits intelligent behavior in a manner normally exhibited by humans. Not everyone necessarily agrees that this is the final goal of the AI field or that it is even a necessity underlying the crux of AI. In any case, the attainment of such an aspiration would suggest that a “true” AI system meets or exceeds the exhibited intelligence of humans. If the AI meets or is on par with the intelligence of humans, we have achieved an amazing feat. On top of this, the belief is that we might be able to attain AI that exceeds the intelligence of humans and ergo be labeled as superhuman or exhibiting superintelligence.
Make a mental note of that fourth mainstay since we will be leaning into it when we get to discussing AGI.
The Neglected Or Omitted Confabulations
I’d like to briefly identify a host of other touchy subjects that come up when deeply thinking about what AI is. You are welcome to mull them over. Some are fastidiously philosophical. All of these additional factors though are one way or another bound to have a material impact on what the definition of AI is said to be.
Here they are:
- (a) Physical. Does AI have to entail physicality or can AI be purely “intellectual”?
- (b) Soul. Does AI have to have a “soul” or can AI be considered attained even if lacking this?
- (c) Sentience. Does AI have to exhibit sentience in order to be considered true AI?
- (d) Types. What type of intelligence would AI have to exhibit such as brainiac style, emotional IQ, etc.?
- (e) Levels. What are the levels of intelligence that need to be exhibited?
- (f) Animals. Does animal intelligence count or is AI only with respect to the intelligence of humans?
- (g) Wetware. If the computer or machine is using or relying upon wetware does this negate being anointed as AI?
- Etc.
I’ve covered these considerations in my column postings and if you are interested in the nuances of each you can take a look at the link here.
The Emergence Of The AGI Moniker
Congratulations for having slogged through the definitions and variations of what is meant by referring to AI. I had to take you through that murky territory in order to sensibly bring up the more recent moniker of Artificial General Intelligence or AGI.
The emergence of the AGI moniker is relatively straightforward and somewhat of a tale of woe.
First, back up in time. People were annoyingly using the AI moniker in ways that seemed to dilute the aspirational goal of the presumed definition. Just about any kind of software was being labeled as AI. The same can be said of today’s systems. The immense popularity of the AI moniker has led many to eagerly slap the phrase onto anything that moves. Well, even things that don’t move.
You could say that AI as a meaningful form of phraseology was getting watered down. Any system that might have a mediocre amount of “AI” in it, despite not containing much of what we might call true AI, was nonetheless being touted as AI. This is something that people can get away with when there isn’t an ironclad definition to hold their feet to the fire. It is extraordinarily difficult to police the use of the AI moniker when just about anyone can willy-nilly use the phrase in whatever way they so desire.
The marketing possibilities are endless.
That was one major concern about the vanilla-flavored AI phrase.
Another concern was that many AI systems were focused on specific tasks in a rather narrow fashion.
A superb AI-based chess-playing app could do only one thing, namely play chess. The same app could not tackle some other realm such as being able to play checkers. Humans can play chess and readily learn to play checkers. It is easy-peasy. An AI app for chess playing was one-and-done. You could not readily alter the AI to do anything other than play great chess.
There was a grating feeling amongst AI researchers and AI developers that the watering down of the AI moniker and the narrowness of the prevailing AI systems was losing sight of the aspirational side of AI. We might entirely sub-optimize and not try to reach for the stars in terms of attaining true AI. True AI would presumably be AI that can exhibit intelligence on par with humans. This implies a kind of completeness encompassing generalizability. Humans can learn multiple things and switch from task to task. If AI is going to exhibit the intelligence of humans, we would be remiss to allow AI as a field of study to be reduced to being myopically narrow in what AI does.
What was to be done about this sticky situation?
You could try to get the entire planet to rethink what is meant by AI. I doubt that would be successful. The horse was already out of the barn. Plus, sternly lecturing people that they are misusing the AI moniker would not get you very far. It would be irritating to those who got lectured and exhausting to those who had to continually render the lecture (you can hear the complaints, i.e., “those darned AI snobs are at it again!”).
The more friction-free path would be to coin a new term. Yes, that’s it, come up with a new phraseology that would convey the aspirational side of AI and overcome the tendency to plug away at AI that is disappointingly “only” narrow in scope (that’s not to imply that narrow is somehow misguided, we can do a lot with narrow AI, including that narrow AI can seemingly get us to the broader AI too, hopefully).
The new name had to be catchy. If the phrase doesn’t roll off the tongue or if it doesn’t instantly convey the purpose, such a moniker will not be embraced or widely adopted. You could come up with something off the wall, but that wouldn’t be pleasing. The notion would be to leverage the already popular AI moniker and do just enough to the wording to showcase that an additional direction was being taken.
Voila, Artificial General Intelligence, or AGI, seems to fit that bill.
The abbreviation is spiffy and similar to the AI abbreviation. By merely adding one word into the middle of the expression you can ride on the coattails of the AI moniker. People will right away realize that something is afoot but still get the gist of things. Some might even think that you goofed and meant to say AI, perhaps accidentally inserting the word “general” into the composition. That’s fine. Not everyone has to know what AGI is. Those in the know will know. Meanwhile, those not in the know will simply assume it is the same thing as AI or maybe a cousin of the AI moniker.
A wondrous solution to a problem that had been disturbing AI insiders.
For AI insiders, the inserted word “general” is intended to say volumes. It says that we need to go far beyond the narrow AI of the past and the present. We need to be able to devise AI that can do all manner of tasks and be as much a generalist as a specialist. AGI is what the aspirational side of AI is all about. AI researchers who are pursuing an aspirational goal can therefore refer to their work as AI or refer to it as AGI, especially if they are undertaking the general path. I will soon be posting a column for example about the recent efforts of devising AI game-playing apps that can play a wide variety of games and learn on-the-fly to play new games that the AI hasn’t encountered before. You could contend that this is along the lines of the AGI ambitions.
There is a downside to the AGI moniker. First, not everyone knows what it means. Second, we might once again experience a watering down. Suppose that people start to use AGI to refer to AI that is marginally of a generalized nature. This could eventually dominate the AGI realm. I suppose we might need to come up with yet another name to reboot or refocus things. Perhaps Artificial General General Intelligence (AGGI) or Artificial We-Really-Mean-It General Intelligence (AWRMIGI) might be floated as possibilities (that’s a joke, so please laugh).
I’ve got a tough question for you.
Are you ready?
Do the prevailing definitions of AI, such as the ones I’ve provided at the earlier stage of this discussion, encompass AGI or do they somehow exclude AGI?
Some exhort that AGI is beyond the conventional moniker of AI. Others would claim that AGI is really within the framing of AI and we just don’t happen to call it out or note things in that way. Once again, a definitional conundrum arises. A battle is being waged quietly about where AGI and AI either intersect or perhaps one is the subset of the other. It’s sorrowfully messy.
Contemplate the permutations and combinations.
I began this discussion by suggesting that a succinct way to define AI is this:
- “AI is a system that exhibits intelligent behavior.”
My go-to succinct way to define AGI is this:
- “AGI is a system that exhibits intelligent behavior of both a narrow and general manner on par with that of humans.”
Notice that I got the narrow and general characteristics in there.
I also lifted the level of performance by stating that whatever intelligent behavior is exhibited is supposed to be on par with that of humans. Not all is well though. I realize this will create confusion regarding AGI that is superhuman, since that would by definition be beyond the level of being on par with humans. We could add further wording or simply reserve the moniker of ASI (Artificial Superintelligence) for going beyond the AGI moniker. ASI is yet another floating moniker, which I’ve covered at the link here.
One especially troubling concern about the above succinct AGI definition is that it doesn’t mention AI and therefore by omission begs (and leaves unanswered) the question of where this fits regarding AI all told. Yikes, more problems.
Here’s another version that tries to deal with that:
- “AGI is an AI system that exhibits intelligent behavior of both a narrow and general manner on par with that of humans.”
There are more controversies even about that slightly refined version. For example, if an AI researcher or AI developer comes up with an AGI that doesn’t do anything special in a narrow way, but does exceedingly well in a general way, does that count as an AGI? Maybe the stated AGI definition would preclude such an instance.
It is tempting to drop out the narrow aspects altogether:
- “AGI is an AI system that exhibits intelligent behavior of a general manner on par with that of humans.”
Unfortunately, the omission of the narrow element might get some people upset because you could compellingly claim that people can do both, namely perform narrow things and do general things. The omission of the word “narrow” perhaps suggests that AGI will only do generalized things and not be good at narrow things. Round and round this goes.
Semantic ambiguity is one of those darned-if-you-do and darned-if-you-don’t types of irksome predicaments.
Perhaps we can try to work on better ways to define and articulate what is meant by AGI. Yes, we can, and that takes us to the next step of this hearty discussion.
Introduction To Levels Of Autonomy
In my above variations of the AGI moniker, there was an attempt to establish a level of performance. This is quite important.
I say that because the conventional definition of AI seems to be vague about the attainment of performance levels. To merely indicate that a system must exhibit intelligent behavior is wildly incomplete. Recall that I initially noted that we can readily carp about what is intelligent behavior and whether or not AI can be considered AI even if the AI is below the level of performance of humans.
If we stipulate that AGI must be on par with human-level performance this helps but can be a mighty hardship when devising AGI. You could come up with an AGI that is very impressive but not quite yet at a human level of performance. Should we chuck this AI system out of the AGI sphere, or can we allow it to remain there as a stepping stone toward greater and higher levels of performance?
One key aspect of performance consists of whether the AI or the AGI needs to have humans intellectually aiding the AI or AGI. As a rule of thumb, if the AI or AGI requires a human to perform some of the designated spheres of intelligence then we would be somewhat disappointed about the level of performance of the AI or AGI. Our highest aim is AI or AGI which is fully autonomous. No human is needed for the AI or AGI to do whatever it does.
This does not preclude the AI or AGI from interacting with humans. Please keep that notably in mind. We are likely going to want the AI or AGI to indeed interact with humans. The thing is, the AI or AGI should not be dependent upon humans to do the heavy intellectual lifting. That is supposed to be the purview of the AI or AGI.
For example, you decide to access an AI or AGI that is able to exhibit intelligent behavior in the medical field and can do so on par with a seasoned medical doctor. You aren’t supposed to be the one bringing medical knowledge to the table. That’s what the AI or AGI is supposed to be doing. At the same time, we would naturally expect that the AI or AGI would interact with you to determine your symptoms and seek a diagnosis of your ailment. Furthermore, it could be that you too are a medical doctor and therefore want to confer with the AI or AGI on a doctor-to-doctor basis.
There is already a field of endeavor that leverages the notion of autonomy and is a realm that you probably already have heard about. I am referring to autonomous vehicles (AVs) such as self-driving cars. I’ve extensively covered the levels of autonomy associated with AVs and self-driving cars, such as this in-depth analysis of Level 4 and Level 5 (see the link here), and this overview discussion about the levels overall (see the link here).
Let’s briefly touch upon the conventional levels of autonomy (LoA) as specified for AVs.
There is a handy dandy standard developed by the SAE (Society of Automotive Engineers) that lays out a set of six levels of autonomy, ranging from a numbering of zero to a top level of five. Most people assume that the standard LoA is only intended for self-driving cars. Nope, it is a broad framework that was devised to intentionally be reused in other domains.
The topmost level of five is considered a fully autonomous agent, such as a self-driving car that can drive by itself in whatever situation a human driver could do so. The idea is that a self-driving car at Level 5 is an AV that must be able to perform the driving task without needing a human driver at the wheel. Level 4 is similar, but the autonomous capability is only within an identified ODD (operational design domain). For example, a self-driving car at Level 4 might be set up to drive in San Francisco but cannot safely drive when in another domain such as Los Angeles or Chicago. The levels below the fourth level are pretty much circumstances involving the need for a human driver to be at the wheel. Again, see the details at my links noted above.
I have at times commented that one weakness of the SAE standard is that the topmost level, five, refers only to human capacities. My beef is that there should be an additional level above the existing topmost one. This added level would encompass the use case of superhuman capabilities. When considering levels of autonomy, we should account for the possibility of being able to devise AI or AGI that is “superhuman” and able to exceed humans. I would argue that a Level 6 encompassing “superhuman” autonomy would be helpful and bring the number of levels up to a total of seven (I’ve used that self-adjusted scale in various of my AI research frameworks).
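To make the layering a bit more tangible, here is a minimal illustrative sketch in Python that treats the driving-oriented levels of autonomy as a simple lookup table. The level names and descriptions are my own paraphrase for illustration, not the official SAE wording, and Level 6 is the hypothetical “superhuman” extension I just argued for:

```python
# Minimal illustrative sketch only: the names and descriptions below are my own
# paraphrase of the driving-oriented levels of autonomy, not the official SAE text.
# Level 6 is the hypothetical "superhuman" extension argued for above.
DRIVING_LEVELS_OF_AUTONOMY = {
    0: "No automation - the human driver does all of the driving",
    1: "Driver assistance - automation aids a single aspect such as steering or speed",
    2: "Partial automation - automation handles some tasks, human must remain engaged",
    3: "Conditional automation - system drives at times, human must take over on request",
    4: "High automation - fully self-driving, but only within a defined ODD (e.g., one city)",
    5: "Full automation - self-driving in whatever situations a human driver could handle",
    6: "Superhuman (proposed extension) - driving capability exceeding that of any human",
}

def requires_human_driver(level: int) -> bool:
    """Rule of thumb from the discussion above: below Level 4, a human driver is still needed."""
    return level < 4

if __name__ == "__main__":
    for level, description in DRIVING_LEVELS_OF_AUTONOMY.items():
        status = "human driver needed" if requires_human_driver(level) else "no human driver needed"
        print(f"Level {level}: {description} [{status}]")
```

The point of the sketch is simply that a level of autonomy is a graded scale with a clear cutoff for when a human is still in the loop, which is exactly the property that makes the framework reusable beyond driving.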
You are now up-to-speed about levels of autonomy and we are ready to tie this into the definitions of AI and AGI.
Google DeepMind Makes A Proposal About AGI
I am going to walk you through an intriguing and notable proposal about AGI that was authored by AI researchers at Google DeepMind and recently posted online. I will be quoting from the paper entitled “Levels of AGI: Operationalizing Progress on the Path to AGI” by Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, and Shane Legg, which was posted online by Google DeepMind on November 4, 2023. I urge you to consider reading the full paper if this is a topic of keen interest to you.
One of my favorite quotes from the paper is this one and demonstratively echoes my discussion herein about how the contemporary definitions of AGI are all over the map:
- “However, if you were to ask 100 AI experts to define what they mean by ‘AGI,’ you would likely get 100 related but different definitions.”
They nailed it.
Now that we agree on the problem at hand, let’s see what they propose to do about the vexing situation:
- “We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors. This framework introduces levels of AGI performance, generality, and autonomy. It is our hope that this framework will be useful in an analogous way to the levels of autonomous driving, by providing a common language to compare models, assess risks, and measure progress along the path to AGI.”
Aha, as you can plainly see, they propose that we can beneficially clarify the nature of AGI by utilizing levels of autonomy. That’s why I sneakily dragged you through the discussion about LoA for AVs and self-driving cars. Had to get you ready for this big reveal.
I had already noted that having tighter definitions for things is useful for a wide variety of sensible and crucial reasons. They point out that for AGI there are immensely bona fide reasons and payoffs from specifying more precisely what AGI is:
- “Shared operationalizable definitions for these concepts will support: comparisons between models; risk assessments and mitigation strategies; clear criteria from policymakers and regulators; identifying goals, predictions, and risks for research and development; and the ability to understand and communicate where we are along the path to AGI.”
Moving on, I liked that they opted to examine the numerous competing approaches that purport to aid in clarifying what AI and AGI consist of. For example, I’ve covered the Turing Test many times in my columns, such as the analysis at the link here; it has become the conventional means of trying to figure out whether true AI has been attained or not. The problem is that the Turing Test has various twists and turns that tend to make it not quite as foolproof as we might all desire.
In their paper, they briefly touch upon these nine conventional attempts at trying to ascertain true AI:
- Case Study 1: The Turing Test.
- Case Study 2: Strong AI – Systems Possessing Consciousness.
- Case Study 3: Analogies to the Human Brain.
- Case Study 4: Human-Level Performance on Cognitive Tasks.
- Case Study 5: Ability to Learn Tasks.
- Case Study 6: Economically Valuable Work.
- Case Study 7: Flexible and General – The ‘Coffee Test’ and Related Challenges.
- Case Study 8: Artificial Capable Intelligence.
- Case Study 9: SOTA LLMs as Generalists.
You might opt to take a look at the paper if those are facets that you aren’t familiar with, or that you kind of know but hadn’t thought about why they have various fractures and rough edges to them. I won’t repeat the details of those facets here. You can find them covered across many of my columns and if reader interest arises, I’ll be happy to do a dedicated column posting about those in one place at one time. Happy to oblige.
Another important consideration to get in front of people is to lay out what type of criteria or characteristics we all should be thinking about when it comes to defining true AI, namely AGI. The paper covers these six criteria:
- Criterion #1: Focus on Capabilities, not Processes.
- Criterion #2: Focus on Generality and Performance.
- Criterion #3: Focus on Cognitive and Metacognitive Tasks.
- Criterion #4: Focus on Potential, not Deployment.
- Criterion #5: Focus on Ecological Validity.
- Criterion #6: Focus on the Path to AGI, not a Single Endpoint.
In a bottom-line-oriented excerpt, they bring up these pointed considerations underlying the defining of AGI:
- “What is the set of tasks that constitute the generality criteria? What proportion of such tasks must an AI system master to achieve a given level of generality in our schema? Are there some tasks that must always be performed to meet the criteria for certain generality levels, such as metacognitive tasks?”
In case you were anticipating a benchmark that could be used to gauge whether a particular system has reached AGI or ascertain what level it has attained, the paper notes that more work needs to be done on deriving benchmarks (thus, I hope you will be inspired to pursue such efforts, so please put this on your list of things to do, thanks):
- “Because of the immense complexity of this process, as well as the importance of including a wide range of perspectives (including cross-organizational and multi-disciplinary viewpoints), we do not propose a benchmark in this paper. Instead, we work to clarify the ontology a benchmark should attempt to measure. We also discuss properties an AGI benchmark should possess.”
I believe that gives you an overall sense of the landscape of the paper. We can now dive into the proposed levels of autonomy.
Please know that my recap herein won’t do full justice to the nuances. I aim to whet your appetite. Here goes.
The Levels of Autonomy Proposed For AGI
First, as you now are assuredly aware, one big complication is the issue of narrow versus general. The proposed approach says that we ought to encompass both of those factors. We can thus appropriately stipulate that for each level of the levels of autonomy, there will be both an allied category for the narrow AI and a side-by-side allied category for the general AI. This escapes the usual dilemma of being only narrow-oriented or only general-oriented.
We can have our cake and eat it too.
Second, the number of levels will be six, thus somewhat akin to the SAE standard that I mentioned. Numbering will again start with zero and go to five.
As a side note, many people complain that the SAE standard doesn’t count from a starting point of 1, which would seem easier since the topmost number would be six and ergo match the notion that there are six levels. All I can say is that computer people love to start numbering with the number zero. Furthermore, you could persuasively argue that zero immediately suggests an absence of autonomy, while if the numbering started with a one you might be tempted to assume that there is some semblance of autonomy at the starting point of things. See, it almost does make sense.
Back to our main focus. The topmost level is going to differ from the SAE standard by denoting that Level 5 is considered superhuman. This nicely aligns with my earlier point about trying to ensure that a superhuman category ought to be encompassed in a LoA. I’m sure that some might argue that there should be an added Level 6 that would be superhuman. Doing so would more neatly align with the SAE standard of having six base levels (absent of superhuman). Anyway, these are debates worth considering and there are tradeoffs as to how many levels you devise and what each level constitutes.
Another key consideration is what name you give to each level.
The typical idea is that you should just refer to the levels by their respective numbers. That is hard for some people to readily grasp and remember. As such, it is useful to give a word that conveys what the level constitutes. The downside is that the word might be misleading or only a rough guide. Words have semantic ambiguity, as I’ve noted several times. A number for a level is a number. Anyway, if you want a LoA to catch on, you usually have to painfully come up with words that designate what each level represents and then live with it (or change the wording if the wording gets lambasted or doesn’t take hold).
Here then are the six levels laid out in the paper:
- Level 0: No AI
- Level 1: Emerging (equal to or somewhat better than an unskilled human)
- Level 2: Competent (at least 50th percentile of skilled adults)
- Level 3: Expert (at least 90th percentile of skilled adults)
- Level 4: Virtuoso (at least 99th percentile of skilled adults)
- Level 5: Superhuman (outperforms 100% of humans)
The parenthetical portions are noteworthy.
Here’s why.
If I tell you that an AI or AGI can perform on par with humans, a smarmy but suitable retort is to ask which humans you are referring to. Not all humans are the same. Some humans are better at some things than others. If I tell you that I have a superb AI-infused chess-playing app, and I further claim it can beat humans in chess, the question arises as to which humans I am referring to. All humans? Or, maybe the world’s top chess players can at times win over the AI. In that case, perhaps 99.999% of humans on Earth can be beaten by the AI in chess. Still, there is that small segment of humans that can potentially win.
Take a look at how the paper divides up the percentiles of humans. There is ample room for discussion and debate about those percentiles. That’s fine and we can expect that such a strawman will get the creative juices going so that we can collect further input on the stratification approaches and their tradeoffs.
I will next show you the overlay of the respective narrow category and the respective general category. The paper shows this as a spreadsheet grid which is easier to visually explore.
Overlay of the proposed levels of AGI autonomy with the narrow component:
- Level 0: No AI – Narrow Non-AI (e.g., calculator app)
- Level 1: Emerging – Narrow AI (e.g., rules-based systems)
- Level 2: Competent – Narrow AI (e.g., Siri, Alexa)
- Level 3: Expert – Narrow AI (e.g., Dall-E 2)
- Level 4: Virtuoso – Narrow AI (e.g., AlphaGo)
- Level 5: Superhuman – Narrow AI (e.g., AlphaFold)
Overlay of the proposed levels of AGI autonomy with the general component:
- Level 0: No AI – General Non-AI (e.g., Amazon Mechanical Turk)
- Level 1: Emerging – General AGI (e.g., ChatGPT, Bard, Llama 2)
- Level 2: Competent – General AGI (not yet achieved)
- Level 3: Expert – General AGI (not yet achieved)
- Level 4: Virtuoso – General AGI (not yet achieved)
- Level 5: Superhuman – General AGI (aka ASI or Artificial Superintelligence, not yet achieved)
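To tie the two overlays together, here is a small illustrative Python sketch that captures the paper’s grid as a single data structure, pairing each level with its rough performance threshold and the narrow and general examples cited above. The field names and structure are my own choices for illustration and are not anything specified by DeepMind:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AGILevel:
    """One row of the proposed Levels-of-AGI grid (illustrative structure, my own field names)."""
    level: int
    name: str
    performance: str                 # rough human-percentile threshold described in the paper
    narrow_example: str              # example cited for the narrow column
    general_example: Optional[str]   # example cited for the general column, None if not yet achieved

LEVELS_OF_AGI = [
    AGILevel(0, "No AI", "no machine intelligence involved", "calculator app", "Amazon Mechanical Turk"),
    AGILevel(1, "Emerging", "equal to or somewhat better than an unskilled human", "rules-based systems", "ChatGPT, Bard, Llama 2"),
    AGILevel(2, "Competent", "at least 50th percentile of skilled adults", "Siri, Alexa", None),
    AGILevel(3, "Expert", "at least 90th percentile of skilled adults", "Dall-E 2", None),
    AGILevel(4, "Virtuoso", "at least 99th percentile of skilled adults", "AlphaGo", None),
    AGILevel(5, "Superhuman", "outperforms 100% of humans", "AlphaFold", None),  # general side would be ASI
]

if __name__ == "__main__":
    for row in LEVELS_OF_AGI:
        general = row.general_example or "not yet achieved"
        print(f"Level {row.level} ({row.name}): {row.performance}")
        print(f"  Narrow example:  {row.narrow_example}")
        print(f"  General example: {general}")
```

Laid out this way, the narrow column already has example occupants from top to bottom, while the general column thins out above Level 1, which is precisely the gap along which the paper suggests progress toward AGI can be measured.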
The overlay showing the narrow component has a list of examples that the paper highlighted to showcase what the level is said to encompass. Doing so was a nice touch to make the levels more tangibly comprehensible. For example, they list Siri and Alexa as Level 2 in the narrow AI category, which then gives you an immediate sense of what Level 2 in narrow encompasses. The famous AlphaGo is listed as an example of the narrow AI category of Level 4. And so on.
The overlay for the general component of the proposed LoA also includes some examples. According to the paper, the assertion is made that for Level 2 and higher there aren’t any AGI examples today in the general component. This seems to make sense. Whether Level 1 suitably lists some of today’s generative AI apps is something I’m sure will cause heartburn for some, while others might believe that Level 1 is an apt spot or even try to contend that a Level 2 would be suitable too. There is plenty here to get AI insiders’ blood boiling.
I would politely ask that blood boiling not be undertaken.
Whatever you do, do not judge the levels by the examples. This would be akin to letting the tail wag the dog. The focus should be on the levels. Whether we agree on which of the various example AI apps fit in this or that level is not especially consequential at this stage. Let’s get the keystone fundamentals figured out and we can always move around the examples as needed.
Kudos to DeepMind for their proposal and for helping to get the ball rolling on adding levels of autonomy to what is meant by AGI and also covering a slew of additional vital considerations that are well worth due attention when formulating AGI definitionally.
Conclusion
You now have the skinny on the terms AI and AGI and where things sit today.
Welcome to the AI insider club. You are ready to do your part. As you can see, we are just beginning to figure out what AGI is. The proposal that I covered provides a useful strawman and showcases a taste of what is involved. The world is ready to further the pursuit of defining AGI. Likewise, we have a lot of work to do on defining AI.
I’d like to wrap up things by citing Socrates again.
Socrates said this: “Wisdom begins in wonder.”
My goal in this discussion was to inform and engage, such that you might be wondering what will happen next. The answer is straightforward. Be inspired and motivated to join in the maturation and evolution of the AI and AGI field.
Contributing your wisdom will be appreciated by AI insiders and likewise the world at large.