Artificial General Intelligence (AGI) Is One Prompt Away

Until about a year ago, the relationship between artificial intelligence (AI) and machine learning (ML) was murky. They were often written as a single unit (AI/ML). The running joke was that a startup said “AI” when raising money and “ML” when hiring engineers.

Historically, the two were separate endeavors. AI represented, for example, search-and-calculate algorithms like chess programs that could beat humans, while ML represented statistical techniques for predicting responses to new inputs from a training dataset. The confusion between AI and ML came about because of deep learning, an extension of neural networks that allows for hierarchical connections between layers of neurons, trained on vastly more data than before. Deep learning (DL) was a major innovation, and many began to elevate DL out of ML and equate it with AI. The best chess programs are now based largely on DL rather than old-school AI. A further innovation within DL was the transformer architecture, the T in GPT (generative pre-trained transformer).

Today, the distinction between AI and ML is obvious: DL has been relegated back to ML and AI has essentially become synonymous with generative AI (GAI), tools that generate human-like output, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Bard, or Meta’s Llama for text, and DALL-E, Midjourney, and Stable Diffusion for images. There are GAI tools for music and video too.

In finance as in many other fields, most applied work is still done with ML, although with a curious and maybe envious eye towards these GAI tools. While some purists may still distinguish AI from GAI, no one refers to these tools as ML, and in time the G from GAI will likely fade away into obscurity, just like the hyphen in the word “e-mail.”

The real interesting G is in artificial general intelligence (AGI). An AGI is more than a generative tool. It is a person. You might think of it as a digital person or a silicon-based person rather than our more familiar carbon-based people, but it’s literally a person. It has sentience and consciousness. It can generate new knowledge. It can think and feel and joke and love. It has rights. It’s alive. This is exactly what people used to mean by “AI” until AI beat humans at chess and Go and art and poetry and we kept moving the goalpost. This aspirational “true” AI is now called AGI.

People who spend a lot of time with AI chatbots can eventually come to feel that they are already alive. Last year, Google software engineer Blake Lemoine argued that LaMDA, an internal Google chatbot, was sentient, months before ChatGPT was released. Thousands of people have fallen in love with commercial chatbots customized to act as their significant other. The technology is very engaging. When I interact with ChatGPT or Claude, I often say please and thank you. It’s hard not to. (Full disclosure: I haven’t fallen in love with them yet.)

But in calm reflection, virtually no one believes AGI is already here, although they tend to acknowledge it is far more believable than before.


OpenAI CEO Sam Altman put it well on a podcast with Lex Fridman following the release of GPT-4 when he said that he didn’t think GPT-4 was an AGI, but that it’s remarkable that the question is legitimately debatable now.


Geoff Hinton, one of the “godfathers of deep learning,” proclaimed on “60 Minutes” last week that he is certain we will have AGI soon, and biological humans will be relegated to be the second-smartest species on the planet. Many share his view. Many more are at least worried about the possibility.

Indeed, one of the main hypothesized existential threats of AI involves scenarios like Skynet or the Matrix, in which machines gain sentience but do not share the values of us biological humans. This is what is meant by “the alignment problem.” Unaligned AGIs may enslave us or kill us, perhaps even believing it is for our own good. Hinton raises another possibility: they will be so good at persuasion that they’ll be able to convince us to do anything.


Of course, it’s possible everyone is wrong, and nothing about current AI will lead to AGI. Physicist and philosopher David Deutsch, a father of quantum computation, makes (at least) three important points about AGI. First, he notes that AGI must be possible, as a matter of physics and the universality of computation. Second, an AGI is a software program, an algorithm that could run on any hardware. And third, AGIs are people by definition, because if they lacked any cognitive ability that humans have, they would not be considered AGIs in the first place.

The combination of those insights, and the experience of interacting with the new AI tools, leads logically to one inevitable conclusion: AGI is a prompt away.

Today, ChatGPT can generate code from text prompts, and it can run the generated code too. This means that anything that can be computed by any piece of hardware can, in principle, be computed through ChatGPT. In technical terms, ChatGPT is Turing complete, named for Alan Turing, the father of modern computer science. Therefore, if a program for AGI can be written in any programming language, it can also be written and executed through ChatGPT.
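The Turing-completeness argument can be made concrete. The classical benchmark for “can compute anything any hardware can” is the Turing machine: a system that can execute arbitrary code, like the snippet below, can simulate one, and therefore inherits its full computational power. This is a generic computer-science illustration, not anything specific to ChatGPT; the machine shown is a standard textbook example that increments a binary number.

```python
# Minimal Turing machine simulator. Any system that can execute arbitrary
# code like this can, in principle, compute anything any hardware can.

def run_tm(rules, tape, state="start", head=0, max_steps=10_000):
    """Run a Turing machine. `rules` maps (state, symbol) ->
    (new_symbol, move, new_state); the machine halts in state 'halt'.
    '_' is the blank symbol."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip("_")

# Example machine: increment a binary number written left-to-right,
# with the head starting on the leftmost bit.
rules = {
    ("start", "0"): ("0", "R", "start"),  # scan right to the end
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),  # step back onto the last bit
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry -> 0, carry continues
    ("carry", "0"): ("1", "L", "halt"),   # 0 + carry -> 1, done
    ("carry", "_"): ("1", "L", "halt"),   # overflow: write a new leading 1
}

print(run_tm(rules, "1011"))  # 1011 + 1 = 1100
```

Because the rule table is just data, the same simulator can run any Turing machine, which is the sense in which a code-executing chatbot is Turing complete (up to practical limits like memory and step budgets).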

We have always been “just one program” away from AGI. But now we know that we are “just one prompt” away. Doesn’t that feel a lot closer?

Imagining how to write an AGI software program can seem daunting. You need to think about structure and logical flow. How would you model memory? Creativity? Intuition? How would you represent what an “explanation” is, and how would you have the program generate new conjectures and criticize existing ones? Where would it all be stored? It’d be hard to even know where to start. And with conventional programming languages, even a stray semicolon or a slight typo, let alone a logical error, can render the entire program useless.

But now it’s just a matter of stringing the right syllables together, in more or less the right order. There must exist a prompt, written in regular English, or even Japanese or Welsh or emojis, that would lead ChatGPT to become sentient, conscious, moral, awake, alive, able to generate new explanatory knowledge, and, in all respects, a person. It’s just a matter of mumbling more or less the right words. In fact, there are probably many such prompts, and unlike conventional programming, a few errant words or syllables won’t matter. As a further bonus, ChatGPT can even help you along the way!

It may take one or several brilliant insights, but one ultimate AGI prompt may be as short as a few paragraphs. It’s possible somebody has already written it but didn’t realize what they had created and naively turned it off.

Finding this prompt is now an exercise that anybody with the power of speech can pursue. AGI is no longer a pursuit available only to highly trained programmers and theorists.

How will we know when we’ve succeeded? Deutsch has another insight about that, one he shares with Turing himself: “we’ll know an AGI when we see one.” Perhaps, as Hinton has hinted, it’ll persuade us.

At that point, we can drop not only the G but also the A in AGI.
