
The hype and the slow road ahead


ARC, a non-profit working on open artificial intelligence, has announced a $1 million prize for a competition aimed at achieving AGI, or Artificial General Intelligence. AGI is a hypothetical AI system in which a machine can learn and think like a human. To appreciate the difficulty of the challenge, consider a comparison from mathematics. The Millennium Prize Problems, selected by the Clay Institute, are seven notoriously hard mathematical problems, each carrying a $1 million prize for the first correct solution. To date, only one has been solved: the Poincaré conjecture, proved by the Russian mathematician Grigori Perelman, who was awarded the prize in 2010 (and famously declined it).

While comparing the AGI problem to mathematical problems may not be entirely accurate, both share unpredictable timelines: no one knows when, or whether, they will be resolved. The excitement around AGI is far greater, however, because its potential impact is akin to discovering El Dorado. This has divided the AI community into two camps: one that believes AGI will be achieved within a few years, and another that does not foresee it happening in the near future.

According to leading figures like Sam Altman, CEO of OpenAI, and Demis Hassabis, CEO of DeepMind (a subsidiary of Google’s parent company Alphabet), existing large language models are sophisticated enough to eventually achieve AGI. Their belief rests on the scaling hypothesis: the idea that, even with only incremental technical advances, continually training AI models with ever more computational power and data will inevitably lead to AGI. Rather than needing radically new algorithms, we may just need to significantly scale up existing models.

Leopold Aschenbrenner, a former OpenAI researcher, recently published a 165-page essay on the future of AI. He predicts that by 2027, AI models could match the capabilities of human AI researchers and engineers, potentially triggering an intelligence explosion in which AI surpasses human intelligence. This aligns closely with futurist Ray Kurzweil’s 1999 prediction that AI would surpass human intelligence by 2029. Once AGI is achieved, hundreds of thousands of AGI bots could be put to work on AI research itself, exponentially accelerating progress and yielding superintelligence in short order. Artificial superintelligence, an AI system with intellect far beyond human capacity, is the point at which science-fiction scenarios like those in the Terminator movies could begin to look plausible.

In such a scenario, AI might not only take away our jobs but could also end up interfering in our lives in ways we can’t imagine. Future possibilities range from the apocalyptic to the humorous. We might find ourselves in a machine-ruled world like “The Matrix,” or as obese, lazy humans glued to screens as depicted in “Wall-E.” On a more optimistic note, we could be hitchhiking across galaxies. And virtual companions, as shown in the movie “Her,” already seem like a near-future reality.

However, one must be cautious when making predictions about AI. Historically, those who have placed big bets on AI have often been proven wrong. The history of AI is characterized by periods of intense optimism followed by intervals of disappointment, known as “AI winters.” For instance, AI pioneers Herbert Simon and Allen Newell predicted in 1957 that “within ten years a digital computer will be the world’s chess champion.” Similarly, Marvin Minsky, a founding figure in AI, claimed in 1967 that “within a generation, the problem of creating artificial intelligence will substantially be solved.” Even today, high expectations for autonomous vehicles have given way to repeated delays and setbacks as researchers grapple with the complexities of real-world driving environments.

In a recent survey of 1,712 AI experts, most respondents were not overly optimistic when asked when AI would be able to perform every task better and more cheaply than human workers. A separate survey of elite forecasters with exceptional track records found them even less optimistic.
Yoshua Bengio, a Turing Award laureate and a pioneer in deep learning, remains skeptical about achieving AGI in the near future. He highlights the complexity of human cognition and the significant gaps in our current understanding of how to replicate it in machines. Similarly, Yann LeCun, a leading AI researcher and Chief AI Scientist at Meta, while not entirely dismissive, frequently points out the gaps in our current understanding and capabilities needed to achieve AGI. He argues that large language models are insufficient for achieving AGI because they lack essential capabilities of intelligent beings, such as understanding and reasoning about the physical world.

Andrew Ng, a leading AI researcher and co-founder of Google Brain, adopts a more measured perspective. While optimistic about the long-term potential of AI, Ng emphasizes the need for a practical focus on AI applications that can have immediate, positive impacts. He suggests that discussions about AGI can sometimes distract from the valuable progress being made in more specific domains. In his view, progress toward AGI, if it happens, will likely be incremental rather than a sudden breakthrough, coming through continuous improvements in algorithms, data, and computing power.

The group that believes AGI will be achieved soon requires substantial resources—capital, data, energy, and more—to prove its hypothesis. Its members are therefore likely to be vocal about the inevitability of achieving AGI sooner rather than later, since they need to convince investors of the value proposition. Consequently, we may keep hearing claims that AGI is just a few years away. The reality, however, may be closer to the view of researchers like Andrew Ng: it is very difficult to predict when AGI will arrive, and it is more likely to emerge incrementally over the years than through an overnight breakthrough.



Disclaimer

Views expressed above are the author’s own.

