Artificial General Intelligence or AGI: A Very Short History



AGI is the new AI, promoted by tech leaders and AI experts, all promising its imminent arrival, for better or for worse. Anyone frightened by Elon Musk’s warning that “AGI poses a grave threat to humanity, perhaps the greatest existential threat we face today,” should first study the evolution of AGI from science fiction to real-world fiction.

The term AGI was coined in 2007, when a collection of essays on the subject was published. The book, titled Artificial General Intelligence, was co-edited by Ben Goertzel and Cassio Pennachin. In their introduction, they provided a definition:

“AGI is, loosely speaking, AI systems that possess a reasonable degree of self-understanding and autonomous self-control, and have the ability to solve a variety of complex problems in a variety of contexts, and to learn to solve new problems that they didn’t know about at the time of their creation.” For Goertzel and Pennachin, the rationale for “christening” AGI was to distinguish it from “run-of-the-mill ‘artificial intelligence’ research,” as AGI is “explicitly focused on engineering general intelligence in the short term.”

In 2007, “run-of-the-mill” research focused on narrow challenges, and the AI programs of the time could only “generalize within their limited context.” While “work on AGI has gotten a bit of a bad reputation,” according to Goertzel and Pennachin, “AGI appears by all known science to be quite possible. Like nanotechnology, it is ‘merely an engineering problem’, though certainly a very difficult one.”

Goertzel and Pennachin regarded AGI as merely an engineering challenge because “we know that general intelligence is possible, in the sense that humans – particular configurations of atoms – display it. We just need to analyze these atom configurations in detail and replicate them in the computer.”

Goertzel and Pennachin seem to contradict themselves when they also assert that the Japanese Fifth Generation Computer Systems project “was doomed by its pure engineering approach, by its lack of an underlying theory of mind.” But maybe there is no contradiction here, because they assume that the mind, too, is a collection of atoms that can be emulated in a computer by the right engineering approach: “We have several contributions in this book that are heavily based on cognitive psychology and its ideas about how the mind works. These contributions pay greater than zero attention to neuroscience, but they are clearly more mind-focused than brain-focused.”

As for brain-focused approaches, the book presents “a neural net based approach, trying to model the behavior of nerve cells in the brain and the emergence of intelligence therefrom. Or one can proceed at a higher level, looking at the general ways that information processing is carried out in the brain, and seeking to emulate these in software.”

This was written, of course, when the real fringe of the AI community (ignored in this 2007 book) was the handful of people, among them 2018 Turing Award winners Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, who in 2007 coined the term “deep learning” to describe their machine learning approach to finding patterns in lots of data using statistical analysis algorithms. Since the 1950s, these algorithms have been called “artificial neural networks,” and they have been presented over the years, with no empirical evidence, as “mimicking the brain.”

In 2007, the people who were the first to discuss various approaches to achieving the newly termed “AGI” completely ignored the fringe approach to “AI” that in 2012 became the mainstream approach, thanks to the successful marriage of GPUs, lots of data, and artificial neural networks. Still, the researchers previously on the fringe of AI, and now the kings of the data mountain, understood well the branding and marketing power of “AGI” and continued in the exalted tradition of promising the imminent arrival of machines with human-like intelligence (or superintelligence) and the possible extinction of humanity at the hands of these possibly malevolent machines.

The key person in importing this tradition into the newly successful approach to AI was apparently Shane Legg, a co-founder of DeepMind. Legg suggested the term “Artificial General Intelligence” to Goertzel and described to Cade Metz (who quoted him in his book Genius Makers) the general attitude toward the subject in the AI community around 2007: “If you talked to anybody about general AI, you would be considered at best eccentric, at worst some kind of delusional, nonscientific character.”

Aspiring to build superintelligence while worrying about what it could do to humanity, Legg joined his colleague Demis Hassabis (the two were exploring the connections between the brain and machine learning at UCL) to establish DeepMind. Hassabis told Legg that “they could raise far more money from venture capitalists than they ever could writing grant proposals as professors,” Metz reports. With AGI as DeepMind’s stated aim, mentioned in the first line of its business plan, “they told anyone who would listen, including potential investors, that this research could be dangerous.”

To get to Peter Thiel, their first investor, Hassabis gave a presentation at the 2010 Singularity Summit, arguing that the best way to build artificial intelligence was to mimic the way the brain worked: “We should be focusing on the algorithmic level of the brain, extracting the kind of representations and algorithms the brain uses to solve the kind of problems we want to solve with AGI.”

There you have it: using the term “AGI,” with its exciting connotations of both saving and destroying humanity, to get the attention and the deep pockets of investors, while claiming to replicate the human brain in the computer and pursuing a statistical analysis method that, empirically speaking, has nothing to do with how the human brain works.

Whether insisting that their approach to AI resembles the biological processes of the human brain (“connectionism”) or that they can replicate the process of human thinking in the computer (“symbolic AI”), the two key approaches to AI since the term was coined in 1955 have banked on the widely accepted notion that “we are as gods.” This belief in modern man’s ability to conquer all frontiers, even to replicate man in the machine, rests on the centuries-old idea that humans are “particular configurations of atoms.”

Next in my AGI Washing Series, I will offer a short pre-history of AGI.


