We are living in the era of Artificial Intelligence: the global AI market has already crossed US$240 billion and is now estimated to surpass US$730 billion by 2030, growing at an annual rate of more than 17%.
Amid all the hullabaloo surrounding AI, one paradigm that has captured everyone’s attention is AGI, or Artificial General Intelligence.
But what is it? Why is everyone talking about it, and why is the scientific and tech community so keen on observing each turn it takes? Let’s delve deeper to understand better.
Before jumping straight into Artificial General Intelligence, though, let us first ascertain what general intelligence implies.
Setting the Contours of General Intelligence
General intelligence implies the capability to achieve a range of goals and carry out diverse tasks across differing contexts and environments. Systems that are ‘generally intelligent’ must:
- Handle problems and situations that are substantially different from what could be anticipated
- Be able to generalize the knowledge they have gained so it can be transferred from one problem context to others.
The scientific community also expects various real-world general intelligences to share certain common properties, though it is not yet sure what those properties might be.
The premise of Artificial General Intelligence takes off from these characteristics of general intelligence and attempts to build on and go beyond them.
The Core AGI Hypothesis
The hypothesis received its first formal articulation from Ben Goertzel in a paper titled “Artificial General Intelligence: Concept, State of the Art, and Future Prospects,” published in the Journal of Artificial General Intelligence. The hypothesis reads as follows:
“The creation and study of synthetic intelligence with sufficiently broad (e.g., human-level) scope and strong generalization capability, is at bottom qualitatively different from the creation and study of synthetic intelligence with significantly narrower scope and weaker generalization capability.”
To flesh out its traits: AGI will be sufficiently broad in scope and will have strong generalization capability.
Put in more popular terms, AGI is a stream of theoretical AI research aimed at developing AI with a human level of cognitive function, including the ability to teach itself.
Many researchers believe it is practically impossible to lift AI to a “human level of cognitive function.” Even so, AGI is seen as a stronger form of AI than the weak or narrow AIs we have seen so far.
Artificial General Intelligence as “Strong AI”
According to IBM, “Strong artificial intelligence (AI), also known as artificial general intelligence (AGI) or general AI, is a theoretical form of AI used to describe a certain mindset of AI development. If researchers can develop Strong AI, the machine would require an intelligence equal to humans; it would have a self-aware consciousness that can solve problems, learn, and plan for the future.”
Strong AI would essentially work like a human being at the cognitive level. It would start like a child, learn through input and experience, and progressively improve its abilities until it becomes a machine so intelligent that it cannot be distinguished from a human mind.
Functionally, Strong AI would differ from narrow or weak AI in the number and diversity of tasks it can handle.
To put this distinction into perspective, weak or narrow AI focuses on carrying out one repetitive task, while Strong AI can perform various tasks simultaneously. More importantly, while weak or narrow AI will always depend on human input, AGI or Strong AI, once past its initial growth and learning phases, will no longer depend on instructions from humans. It would develop a consciousness resembling that of humans rather than merely simulating it.
With these understandings serving as the theoretical foundation for AGI, the question is how to approach it, especially when some researchers say an ideal AGI can never be achieved. There are four broad approaches to AGI: Symbolic, Emergentist, Hybrid, and Universalist.
The Symbolic Approach to AGI: This approach assumes that minds exist primarily to manipulate symbols representing different aspects of the world or of themselves. It also assumes that a physical symbol system has the ability to input, output, store, and alter symbolic entities, and can therefore carry out appropriate executable actions to reach its end goal.
The symbolic cognitive architecture therefore centers on the concept of a ‘working memory’ that draws on long-term memory as required and exercises centralized control over perception, cognition, and action.
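To make the idea concrete, below is a toy, purely illustrative sketch in Python of a symbolic system: a working memory of facts, a long-term memory of hand-written if-then rules, and a central control loop that fires rules until an action is derived. The facts, rules, and goal are invented for this example and are not taken from any particular AGI architecture.

```python
# Toy sketch of a symbolic architecture (illustrative only): working memory
# holds facts, long-term memory holds if-then rules, and a central loop
# fires rules until no new facts (including actions) can be derived.

long_term_memory = [
    # (conditions, conclusion): if all conditions are in working memory,
    # add the conclusion to working memory.
    ({"it_is_raining"}, "ground_is_wet"),
    ({"it_is_raining", "going_outside"}, "take_umbrella"),
]

working_memory = {"it_is_raining", "going_outside"}  # current percepts

changed = True
while changed:  # centralized control: match rules against memory, repeat
    changed = False
    for conditions, conclusion in long_term_memory:
        if conditions <= working_memory and conclusion not in working_memory:
            working_memory.add(conclusion)
            changed = True

print("take_umbrella" in working_memory)  # True: an executable action was derived
```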
The Emergentist Approach to AGI: The emergentist approach to AGI assumes that the capability to process abstract symbols would emerge from lower-level, subsymbolic dynamics. In simpler terms, this approach treats the human brain as a set of simple elements that can self-organize into complex structures when required.
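As a loose, toy illustration of that idea, the sketch below connects a handful of simple units that carry no built-in symbols and strengthens the links between units that repeatedly activate together (a Hebbian-style rule). Whatever structure appears in the weights is emergent rather than programmed in; the patterns and numbers are invented for this example.

```python
# Toy sketch of emergence from simple elements (illustrative only): units have
# no symbols, just weights that grow when two units are active at the same time.

n_units = 8
weights = [[0.0] * n_units for _ in range(n_units)]

def present(pattern, steps=100, rate=0.1):
    """Repeatedly show a binary activation pattern and apply a Hebbian update."""
    for _ in range(steps):
        for i in range(n_units):
            for j in range(n_units):
                if i != j:
                    weights[i][j] += rate * pattern[i] * pattern[j]

# Two recurring "experiences": units 0-3 fire together, then units 4-7 do.
present([1, 1, 1, 1, 0, 0, 0, 0])
present([0, 0, 0, 0, 1, 1, 1, 1])

# Units that fired together are now strongly linked; unrelated units are not.
print(round(weights[0][1], 1), round(weights[0][5], 1))  # 10.0 0.0
```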
The Hybrid Approach to AGI: The hybrid approach to AGI takes the “whole is greater than the sum of the parts” principle as its guiding philosophy. It seeks to address both the strengths and weaknesses of the symbolic and emergentist approaches through an integrative, hybrid architecture that combines subsystems operating according to the two paradigms.
The combination can be that of a symbolic subsystem with a large subsymbolic system or a population of small agents, each of which is both symbolic and subsymbolic in nature.
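Staying in the same toy spirit, the sketch below combines the two previous ingredients: a subsymbolic scorer with numeric weights decides which raw signals become symbolic facts, and hand-written rules then act on those facts. The signals, weights, and rules are all invented for illustration.

```python
# Toy hybrid sketch (illustrative only): a subsymbolic scorer filters raw
# signals into symbolic facts; a symbolic rule layer then chooses actions.

signal_weights = {"dark_pixels": 0.9, "sensor_noise": 0.1}   # subsymbolic side

def perceive(signals, threshold=0.5):
    """Turn raw signals into symbolic facts when their weighted score is high."""
    return {name for name, strength in signals.items()
            if signal_weights.get(name, 0.0) * strength > threshold}

rules = [({"dark_pixels"}, "turn_on_lights")]                # symbolic side

facts = perceive({"dark_pixels": 0.8, "sensor_noise": 0.9})
actions = {conclusion for conditions, conclusion in rules if conditions <= facts}
print(actions)  # {'turn_on_lights'}
```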
The Universalist Approach to AGI: The universalist approach to AGI begins with algorithms that could yield immensely powerful general intelligence if supplied with unrealistically vast computing power. The objective is to eventually scale them down so that they can work with feasible computational resources.
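The flavor of the universalist idea can be caricatured in a few lines: with unlimited compute, simply enumerating every possible plan and keeping one that reaches the goal is a perfectly “general” solver; the catch is the exponential cost, which is precisely what would need to be scaled down. The tiny grid world below is invented purely for illustration.

```python
from itertools import product

# Toy caricature of the universalist idea (illustrative only): brute-force
# search over all plans is completely general but needs exponential compute.

GOAL = (2, 2)
MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def run(plan, start=(0, 0)):
    """Apply a sequence of moves on a grid and return the final position."""
    x, y = start
    for step in plan:
        dx, dy = MOVES[step]
        x, y = x + dx, y + dy
    return (x, y)

# Enumerate every plan of length 4: the search space grows as 4**length,
# which is exactly why unscaled universalist methods are infeasible.
best = next(p for p in product(MOVES, repeat=4) if run(p) == GOAL)
print("".join(best))  # e.g. "UURR"
```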
While all these approaches have evolved, with research around the paradigm advancing at a steady rate, many technology organizations have been building practical solutions aimed at AGI. The most well-known of them has been OpenAI.
Leading Companies Working on AGI
1. OpenAI
OpenAI, most famous for its solution ChatGPT, has an AGI-centric vision. In a blog post published on February 24, 2023, the company stated its plans without any ambiguity. It said its mission was to ‘ensure that artificial general intelligence – AI systems that are generally smarter than humans – benefits all humanity.’
The company said that with its systems getting closer to AGI, it was ‘becoming increasingly cautious with the creation and deployment’ of its models. As examples, it highlighted the deployment of InstructGPT and ChatGPT.
What is ChatGPT?
Created by OpenAI and launched on November 30, 2022, ChatGPT is, at its core, an AI-driven natural language processing tool that allows human-like conversations with a chatbot. Not only can it answer questions, but it can also help in composing emails, essays, code, and more.
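For developers, the same kind of conversation can also be driven programmatically. The snippet below is a minimal sketch that assumes the official `openai` Python client (v1.x) and an `OPENAI_API_KEY` environment variable; the model name and prompt are illustrative.

```python
# Minimal sketch of a scripted ChatGPT-style exchange, assuming the official
# OpenAI Python client (v1.x) with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Draft a two-line email declining a meeting."},
    ],
)

print(response.choices[0].message.content)
```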
What is InstructGPT?
InstructGPT, a term coined by OpenAI, is an advanced AI-driven language model that can follow instructions given in a text prompt. Its ability to understand text-based requirements and generate text-based replies accordingly makes it a powerful tool for a host of service and content purposes.
OpenAI ChatGPT in Numbers
According to the latest available numbers, ChatGPT by OpenAI has more than 100 million weekly active users. Reports published on October 12, 2023, said that OpenAI was generating revenue at an annualized pace of US$1.3 billion, or more than US$100 million per month, up 30% from its summer 2023 figures.
OpenAI Organizational Shake-Up
However, OpenAI was recently all over the news for an organizational shake-up that created a stir across the tech and tech investment communities worldwide. The board of directors had decided to oust its CEO, Sam Altman, for not being “consistently candid” in his communications. However, five days after he was suddenly sacked, the board decided to bring Altman back to his old job.
Reportedly, nearly every member of OpenAI’s staff co-signed a letter saying they would consider quitting if Mr. Altman were not brought back to his earlier position. One of the board members responsible for the ouster, OpenAI’s chief scientist Ilya Sutskever, wrote the following on X: “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together, and I will do everything I can to reunite the company.”
With Mr. Altman back as OpenAI CEO, the controversy seems to have become a thing of the past.
While OpenAI has been the most prominent in gaining traction and eyeballs in the AGI space, other players have existed for a significant time. DeepMind is one such player that has been at work since 2010.
2. DeepMind
DeepMind’s official long-term aim is ‘to solve intelligence, developing more general and capable problem-solving systems, known as artificial general intelligence.’ The company was acquired by Google in 2014.
The operating principle of DeepMind has always been to advance science to benefit humanity. As such, it wants to leverage artificial intelligence to serve societal needs and expectations.
Among the successes achieved by DeepMind so far: its programs have learned to diagnose eye diseases as effectively as the world’s top doctors, saved 30% of the energy used to keep data centers cool, and predicted the complex three-dimensional shapes of proteins, opening a revolutionary path for the pharma industry to advance.
According to available data, DeepMind had raised only one funding round, on February 1, 2011, before joining Google. The investors were Founders Fund and Horizons Ventures, and the round was reportedly worth US$50 million.
3. Adept
Another emerging player in the field of AGI is Adept. In March 2023, the barely one-year-old startup, with just 25 employees, raised US$350 million in venture capital after demonstrating a rudimentary version of a digital assistant.
Adept has studied how humans use computers in order to build an AI model that can turn a text command into a set of actions. The funding round was completed at a post-money valuation of US$1 billion.
According to Adept co-founder David Luan, the company wants its model to do for computing what a synthesizer does for music: let a musician play the sounds of many instruments without having to learn each original instrument.
AGI: The Road Ahead
People have differing views on AGI’s potential. Some believe that AGI might have dangerous consequences for humanity, while others believe that AGI will never achieve what we think it might.
According to Diego Klabjan, a professor at Northwestern University and founding director of the school’s Master of Science in Analytics Program:
“Human brains have billions of neurons that are connected in a very intriguing and complex way, and the current state-of-the-art [technology] is just straightforward connections following very easy patterns. So going from a few million neurons to billions of neurons with current hardware and software technologies – I don’t see that happening.”
Oxford University’s Future of Humanity Institute surveyed 352 machine learning researchers about how AI might evolve. The respondents’ median predictions, year by year, were noteworthy:
- By 2026: machines may become capable of writing school essays
- By 2027: self-driving trucks may render drivers unnecessary
- By 2031: AI may outperform humans in the retail sector
- By 2049: AI could become the next Stephen King
- By 2137: all human jobs might be automated
It is much too early to predict where AI will eventually take us. But, like all technologies, it can be used for good or bad.
OpenAI CEO Sam Altman believes “there should be great scrutiny of all efforts attempting to build AGI and public consultation for major decisions.” He goes even further to say:
“The world could become extremely different from how it is today, and the risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that, too.”
Yet there are AGI developers, such as DeepMind, that believe AGI will be the harbinger of many scientific breakthroughs and bring real-world change for the better. AGI will certainly have a significant influence on our pursuits in research, engineering, science, and safety for the world. A large part of that success will depend on how cautious and calibrated human actions are.