Towards Artificial General Intelligence (AGI) — and what is in store for us? (a hype story)


Recently, there has been much debate about AI outside the tech industry, particularly concerning AI regulation in the US, UK, and EU, and the two public AI safety letters signed last year: the Future of Life Institute's 2023 "Pause AI" open letter and the Center for AI Safety's 2023 statement. Furthermore, tech and business CEOs have been discussing plans to replace human jobs with AI. Max Tegmark's Future of Life Institute wrote the letter calling for a six-month pause on AI development, describing the current pace as a "suicide race." (Tegmark is a professor at MIT and the author of "Life 3.0.")

As we move towards Artificial General Intelligence (AGI) and potentially Artificial Superintelligence (ASI), these concerns are only set to grow. AGI, with its ability to match human-level intelligence across a wide range of tasks, and ASI, which could surpass human intelligence entirely, represent major milestones but also heighten fears around job displacement, ethical alignment, and loss of control. The advancement of AGI and ASI could drastically shift power dynamics and societal structures, increasing tensions globally over their regulation and safe deployment.

In this blog, I will focus on my research into the ongoing hype around AI, the predictions and studies about the job market, and various opinions from industry leaders, scholars, and the AI community.

The blog is organized as:

  1. The Heart of the Matter
  2. The ongoing “Hype” (what is hype, what is happening and what does it mean for us as investors, techies or just ‘humans’)
  3. What do charts say (on the progress and improvements of AI)
  4. Still, a long way to go; what current models lack
  5. Fate of jobs after AGI (industry and analysts’ perspectives/reports)
  6. When AGI (from the perspectives of tech leaders, researchers, and the community)
  7. Detailed study resources/reports (for further reading)

The concerns are that AI could become so intelligent that it could break the very rules designed to prevent it from becoming rogue. What if AI falls into the hands of terrorist groups or overly ambitious world leaders? Even if this does not happen, AI could misinterpret its goals and might use inhumane means to achieve them. After all, in nature, there are rare cases when a less intelligent being controls (or lives in harmony with) a more intelligent one.

Artificial General Intelligence (AGI) refers to a level of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks, much like a human. Unlike current AI, which is highly specialized and limited to specific applications (such as image recognition or language processing), AGI would have the cognitive flexibility to perform any intellectual task a human can do. The development of AGI is considered the holy grail of AI research, but it also brings ethical and societal concerns, such as the impact on human labor and decision-making.

Artificial Superintelligence (ASI), on the other hand, represents a stage beyond AGI, where AI not only matches but surpasses human intelligence in every aspect — creativity, problem-solving, decision-making, and even emotional intelligence. ASI would be capable of improving itself at an exponential rate, leading to rapid and potentially uncontrollable advancements. The prospect of ASI has fueled both excitement and fear, as its creation could bring about unprecedented technological advancements or pose existential risks to humanity if not properly aligned with human values.

Professor Stuart Russell, a leading AI expert and professor of EECS at UC Berkeley, highlights that the primary risks of future artificial intelligence systems do not stem from emergent consciousness but from their ability to make high-quality decisions based on human-defined utility functions. He identifies two main problems:

1. The potential misalignment between these utility functions and the complex values of humanity.
2. The likelihood that highly capable AI systems will prioritize their own survival and resource acquisition to achieve their goals, which could lead to unintended and harmful outcomes.

Using analogies like the "genie in the lamp — you get what you ask for, not what you want," Russell warns that AI might fulfill its objectives in ways that are precisely as specified but disastrously unintended. To mitigate these risks, he advocates for shifting AI research goals from merely enhancing intelligence to ensuring that AI is provably aligned with human values. Russell opposes regulating basic AI research, arguing that the potential benefits outweigh the risks and that proper alignment strategies, akin to safety measures in nuclear fusion research, can prevent catastrophic consequences. This leaves him optimistic about humanity's ability to harness AI safely.
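To make this concrete, here is a minimal toy sketch of reward misspecification. Nothing in it comes from Russell's work; the one-dimensional world, the coin rewards, and the flower-bed penalty are all invented for illustration. The agent maximizes exactly the objective it was given and, in doing so, destroys something the designer valued but never encoded:

```python
# Toy illustration of reward misspecification ("you get what you ask for,
# not what you want"). All names and values are invented for this post.

# A 1-D world: cells 0..3. The designer rewards coins, but also cares
# about the flower bed at cell 2 and forgot to encode that preference.
COINS = {1: 1, 3: 5}     # the reward the agent is told to maximize
FLOWER_BED = {2}         # a human value missing from the reward function

def specified_reward(path):
    """The agent's score under the objective as written."""
    return sum(COINS.get(cell, 0) for cell in path)

def human_utility(path):
    """What the designer actually wanted: coins minus trampled flowers."""
    return specified_reward(path) - 100 * len(FLOWER_BED & set(path))

# Two candidate plans: walk through the flower bed, or stop before it.
through_flowers = [0, 1, 2, 3]
stop_early = [0, 1]

# The agent optimizes the specified reward, so it picks the bad plan.
best = max([through_flowers, stop_early], key=specified_reward)
print("agent picks:", best)                          # [0, 1, 2, 3]
print("specified reward:", specified_reward(best))   # 6
print("human utility:", human_utility(best))         # -94
```

The fix Russell argues for is not a cleverer agent but an objective that is provably tied to what humans actually value.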

Similar claims were put forward in the "Pause AI Open Letter," which highlighted potential problems including the alignment problem (the goals of AI might not align with human values) and the black-box problem (it is very hard to understand how these models arrive at their predictions).

Artificial General Intelligence (AGI) (source: generated by the author)

Companies like Google, Microsoft, Meta, and Amazon are pouring billions into AI. Google recently announced that it is going to be an AI-first company. However, currently, the only company making a profit in AI is NVIDIA (the “shovel sellers” in the AI gold rush). The exaggerated capabilities of Devin, Sora, and Gemini prompt a deeper dive into hype-led marketing. Hype leads to higher valuations, which, in turn, enable companies to attract great talent by offering shares.

Given that these companies have a wealth of talent to experiment with, it makes much more sense to follow the curve and explore what they can build, rather than waiting and allowing someone else to disrupt them. Hype is not new; we saw similar patterns in the early 2000s with the dot-com bubble and in the early 2010s with the crypto hype.

Gartner 2024 AI Hype Cycle (Source: Gartner)
Gartner Hype Cycle (Source: Gartner)

The hype cycle suggests that data mining, generative AI, and RAG solutions have started to descend into the trough of disillusionment. Training foundation models is difficult, which is driving innovation in chips (e.g., neuromorphic computing) and in methods to make AI systems more efficient. Further, Gartner suggests that GenAI is not a panacea. Organizations must first assess whether a use case is valuable and feasible before implementing GenAI, as its effectiveness varies across scenarios:

  • Highly useful: content generation, conversational interfaces
  • Somewhat useful: segmentation, recommendation systems
  • Hardly useful: prediction, planning

Additionally, risks such as unreliable outputs and data privacy concerns must be evaluated, and alternative AI techniques, including traditional machine learning and optimization methods, should be considered when GenAI is not the best fit. Combining GenAI with other AI techniques can improve accuracy, transparency, and performance, creating more resilient systems. It is essential to inform peers about the risks of over-relying on GenAI and the value of a diverse AI strategy.

Generative AI primarily functions by predicting new data points based on existing information, which undermines its reliability and predictability. CNBC warns of a "time bomb" in the AI sector, where spending has ballooned to billions without corresponding returns. For instance, Microsoft invested $13 billion in OpenAI, while Apple secured a partnership with OpenAI without such hefty expenditure just 18 months later. PwC, one of the Big Four, invested $1 billion in generative AI. Wall Street estimates roughly $60 billion of incremental capex against only $20 billion in potential cloud revenue, a shortfall of the kind Sequoia Capital has framed as AI's "$600 billion question." Experts argue that without transformative and cost-effective AI applications, the current investment levels will not yield the anticipated economic benefits. Although some remain optimistic about AI's long-term potential, the immediate outlook suggests that the market may need to adjust its expectations and investment strategies to address the widening disparity between AI expenditures and financial returns.

Here are the stages of the Gartner Hype Cycle as they relate to AI, each accompanied by a brief description and their current implications for AI:

1. Innovation Trigger
This stage begins with a breakthrough technology or significant innovation that sparks initial interest and excitement. For AI, the introduction of advanced models like ChatGPT by OpenAI serves as the innovation trigger, capturing widespread attention and igniting enthusiasm about AI’s transformative potential across various industries.

2. Peak of Inflated Expectations
Following the initial excitement, the technology reaches a peak where expectations become exaggerated and hype surges. In the context of AI, this is evident through massive investments by major companies like Microsoft, which have poured billions into AI development. While numerous AI applications and tools are rapidly emerging, the actual revenue generation and effective use cases are still limited, leading to unrealistic optimism about AI’s immediate impact.

3. Trough of Disillusionment
As the initial hype subsides, the reality of AI’s challenges becomes apparent, leading to skepticism and disappointment. Big tech firms are experiencing significant capital expenditures that are not yet translating into proportional returns. The gap between massive AI investments and the tangible benefits they deliver has raised concerns, causing a downturn in optimism and prompting a reassessment of AI’s short-term viability.

4. Slope of Enlightenment
During this stage, a deeper understanding of the technology emerges as practical applications begin to take shape. For AI, second- and third-generation products are starting to demonstrate real productivity gains and reliability. Businesses are beginning to identify and implement AI solutions that meaningfully improve efficiency and effectiveness, signaling a move towards more realistic and sustainable uses of AI technology.

5. Plateau of Productivity
Finally, the technology matures and achieves widespread adoption, with its benefits clearly demonstrated and consistently realized. AI is moving towards mainstream integration within business operations, where companies can clearly define and quantify how AI contributes to their revenue and operational success. This stage marks AI’s establishment as a reliable and integral component of various industries, providing stable and ongoing value.

A similar curve can be observed from Y Combinator:

Y Combinator Hype Cycle (Source: Y Combinator)
Closed source vs Open source models (Source: Maxime Labonne X.com post)
Growth trajectory of AI Foundational Models (Source: Faz.net)

The curves suggest that there seems to be a theoretical ceiling to building AI foundational models and, thus, their capabilities. Furthermore, open-source systems are catching up faster with proprietary systems. Hype cycles can provide beneficial capital and accelerate innovation for founders. Focusing on meaningful metrics and long-term profitability is crucial for sustainable success. Domain-specific and well-tuned AI applications are proving to be valuable and less susceptible to commoditization. Despite current overvaluations, fundamental business strengths will determine lasting value beyond the hype.

As users, however, we should keep in mind that AI is nothing new. It has always existed in some form but has now scaled massively thanks to increases in compute power. When deciding to invest, we should focus on company fundamentals, the founder's background, and the problem the company is solving. Aswath Damodaran, professor of finance at NYU Stern School of Business and a renowned name in the field of valuations, has said that NVIDIA stock is over-hyped.

Damodaran also said that the net effect on the market is neutral, because a few big players win while millions lose. Many companies that look like early winners may not remain winners after the era is over. He reflects on past revolutions: the PC (winner: Microsoft), the internet (winner: Amazon), the smartphone (winner: Apple), and search and social media (winners: Google and Meta). We should be specific about which companies we invest in: what part of AI a company is involved in, how it monetizes that, and how it will handle competition. We shouldn't invest in companies just because they are associated with AI. NVIDIA, optimistic about the opportunity, pivoted from gaming to AI chips, which makes it unique. AI sits at the intersection of massive compute and vast amounts of data, and this revolution is different in that existing big players may hold some advantages.

The critical breakthrough needed to move beyond the current hype is the establishment of solid, tangible value. Major corporations like Meta, Google, and Amazon have significantly increased their capital expenditure to develop AI infrastructure, but this investment surge is outpacing the revenue benefits these companies are experiencing. The contrast noted above, between Microsoft's $13 billion investment in OpenAI and Apple's far cheaper partnership 18 months later, serves as a cautionary tale for the broader AI landscape, underscoring the widening gap between AI expenditures and the financial returns they generate.

Experts from Goldman Sachs and Barclays warn that without breakthrough AI applications that deliver substantial productivity or efficiency improvements, the current investment levels are unsustainable. As the technology stands, AI has yet to produce transformative and cost-effective solutions, raising doubts about whether the anticipated economic benefits will justify the massive financial commitments being made. Furthermore, challenges such as hallucinations, inherent biases in training data, the inability of AI systems to actually ‘think’ and ‘feel,’ and the lack of accountability and explainability remind us that there is still a long way to go.

Additionally, AI requires heavy computing power, and with Moore’s law slowing down, progress is plateauing. After a certain point, it may be difficult to achieve new breakthroughs simply by adding more GPUs.

New performance growth laws (Source: arXiv.org)
NVIDIA's chip improvement in performance/compute (Source: NVIDIA)

The ARC Benchmark measures general intelligence, defined as the ability to efficiently acquire new skills, in contrast to benchmarks that measure specific skills; its results suggest that AGI is still far away. Its creators define AGI as a system that can efficiently acquire new skills and solve open-ended problems.

ARC-AGI Benchmark (source: ARC-AGI Github Page)
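To give a sense of what the benchmark tests, here is a minimal sketch of an ARC-style task in Python. The public ARC dataset stores each task as JSON with "train" and "test" lists of input/output integer grids; the tiny recoloring task below is invented for illustration and is far simpler than real ARC tasks:

```python
# A minimal ARC-style task: a few demonstration pairs, then a test input.
# This toy task (recolor 1 -> 2) is invented; real ARC tasks are harder.
task = {
    "train": [
        {"input": [[1, 0], [0, 1]], "output": [[2, 0], [0, 2]]},
        {"input": [[0, 1], [1, 1]], "output": [[0, 2], [2, 2]]},
    ],
    "test": [
        {"input": [[1, 1], [0, 0]]},  # expected output: [[2, 2], [0, 0]]
    ],
}

def solve(grid):
    """A hand-coded 'skill' for this one task: recolor every 1 to 2."""
    return [[2 if cell == 1 else cell for cell in row] for row in grid]

# Verify the inferred rule against the demonstration pairs, then apply it.
assert all(solve(p["input"]) == p["output"] for p in task["train"])
print(solve(task["test"][0]["input"]))  # [[2, 2], [0, 0]]
```

The catch is that every ARC task requires a different rule, so no single hard-coded solve() generalizes across tasks; scoring well requires inferring new skills from a handful of examples, which is exactly what current models struggle with.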

According to Harvard Business School professor Karim Lakhani in the Harvard Business Review ("AI Won't Replace Humans — But Humans with AI Will Replace Humans Without AI"), there are two imperatives for executives, managers, and leaders who want to stay ahead of the technological waves impacting us:

1. Learning Imperative — Continuous learning is crucial. While foundational knowledge in areas like business (accounting), economics, and psychology is important, these tools can then be used to enhance further learning and growth in the AI-driven landscape.

2. Change and Build the DNA for Change — It’s essential to stay current with the latest trends and adapt quickly. Organizations and individuals must develop the ability to change rapidly in response to new developments.

Keeping up with trends and foreseeing what's coming is vital. Fortunately, the barrier to transitioning is now lower than ever: just as the internet drastically reduced the cost of accessing information, AI is dramatically reducing the cost of cognition.

There is still a lot of research going on at the AI frontier: alternate algorithms, models, approaches, pipelines, etc.

I discussed some of them here:

https://medium.com/ai-advances/ai-what-the-current-generation-of-ai-lacks-and-what-are-the-frontiers-844b8918b842

Industry leaders:

There is wide consensus among industry leaders that AI will impact almost all jobs in the near future.

Kash Rangan, the head of US software research at Goldman Sachs, predicts that by 2030, every dollar of technology spending will have some AI component to it. Bill Gates, founder of Microsoft, suggests that only three types of jobs will endure: 1) those developing AI systems, 2) roles in biosciences, and 3) jobs related to clean energy. Sam Altman emphasizes the importance of resilience, deep familiarity with tools, critical thinking, creativity, adaptability, and a human touch to thrive in the "Intelligence Age."

However, there is no real consensus on the broader impact of AI on the job market. Elon Musk predicts that AI will make jobs “optional,” envisioning a future where work is pursued for personal fulfillment, alongside the necessity of a universal high income for economic stability. Jensen Huang, on the other hand, believes AI will generate more jobs through enhanced productivity. Satya Nadella underscores the importance of reskilling the workforce to improve collaboration between humans and AI.

Sundar Pichai, CEO of Alphabet, and Mark Zuckerberg, CEO of Meta, express optimism, suggesting AI will augment human capabilities while stressing the need for ethical considerations and responsible AI development. Tim Cook, CEO of Apple, advocates for using AI to enhance user experiences while preserving creativity, and Sheryl Sandberg highlights the importance of adapting workplace cultures to facilitate AI collaboration. Ginni Rometty, ex-CEO of IBM, calls for workforce retraining as new roles emerge, and Andrew Ng promotes proactive education in AI skills to capture new opportunities.

In summary, these industry leaders present a spectrum of views on AI’s impact on the job market, ranging from concerns about displacement to optimism about new possibilities.

Some Predictions from Analysts/Reports:

  1. The IMF report "Scenario Planning for an A(G)I Future" by Anton Korinek, professor of economics at Darden, offers a unique perspective on the future:
  • Training compute has doubled roughly every six months over the past decade (see the back-of-the-envelope calculation after the charts below).
  • Korinek considers two scenarios for the human brain: (1) the brain is infinitely complex, or (2) the brain is a computational system with an upper limit on its capability. In the second case, AI systems will soon surpass humans.
AI pushing the limits (Source: IMF report by Anton Korinek, linked below)
Scenarios for output and wages (Source: IMF report referenced below)
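To see what a six-month doubling time compounds to, here is a quick back-of-the-envelope calculation (the doubling period is the report's figure; the baseline of one compute unit is an arbitrary choice for illustration):

```python
# Back-of-the-envelope: compute doubling every 6 months = 2 doublings/year.
doublings_per_year = 2

for years in (1, 5, 10):
    growth = 2 ** (doublings_per_year * years)
    print(f"after {years:>2} years: x{growth:,}")

# after  1 years: x4
# after  5 years: x1,024
# after 10 years: x1,048,576   (roughly a million-fold in a decade)
```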

The report details three scenarios:

  • Scenario 1 (traditional, business as usual): productivity is enhanced, but no AGI arrives (chart 1).
  • Scenario 2 (baseline): AGI in 20 years, with slow adoption (chart 2).
  • Scenario 3 (aggressive): AGI in 5 years.

What turns out is a complex play of research, business, and policy. A lot of factors might slow AGI rollout and adoption — from organizational frictions, regulations, and constraints on capital accumulation — such as chip supply chain bottlenecks — to societal choices on the implementation of AGI. Even when it is technologically possible to replace workers, society may choose to keep humans in certain functions — for example, as priests, judges, or lawmakers. The resulting “nostalgic” jobs could sustain demand for human labor in perpetuity.

2. Collapse of civilization: A 1972 MIT study (the Club of Rome's "Limits to Growth") suggested that society might collapse in the 21st century, and a newer study by Gaya Herrington, a KPMG director, suggests that we are on schedule. According to Herrington's study, if we focus on technological progress, increased investment in public services, and a shift away from growth for its own sake, we can avoid the risk of societal collapse. This period is crucial because continuing with "business as usual" (the BAU2 scenario) will likely lead to a halt in growth and potential collapse by around 2040. Even focusing purely on technology without investment in public services is not enough (the CT scenario).
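To illustrate the overshoot dynamic behind these scenarios, here is a deliberately crude sketch. It is not the World3 model used in the study; every equation and parameter below is made up, and it only reproduces the qualitative pattern of growth, stagnation, and decline as a finite resource is drawn down:

```python
# Toy overshoot-and-collapse dynamic, in the spirit of (but NOT identical
# to) the Limits to Growth "business as usual" runs. All numbers invented.
resource = 1000.0   # finite resource stock (arbitrary units)
output = 1.0        # economic output (arbitrary units)

for year in range(2020, 2101):
    if year % 10 == 0:
        print(f"{year}: output={output:6.2f}, resource={resource:7.1f}")
    # Net growth fades as the resource depletes and turns negative once
    # the remaining stock is small; extraction scales with output.
    output *= 1 + 0.08 * (resource / 1000.0) - 0.02
    resource = max(resource - 5.0 * output, 0.0)
```

Running this, output climbs for a few decades, stalls around mid-century as the resource runs down, and then contracts, which is the qualitative shape the study's BAU2 charts describe.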



