Recently there has been much debate about AI outside the tech industry, particularly with AI regulations in the US, UK, and EU and two publicly signed AI safety letters: the Future of Life Institute letter (2023) and the Center for AI Safety statement (2023).
According to the Harvard Business Review piece by Prof. Karim Lakhani ("AI Won’t Replace Humans — But Humans With AI Will Replace Humans Without AI"), there are two imperatives for most executives, managers, and leaders to keep up with the tech waves hitting us:
- Learning Imperative — a lot of learning needs to be done, and it must be continuous. There are some basic skills like business (accounting), economics, and psychology, after which you can use the tools to enhance your learning
- Change and build the DNA for change — keep up with the latest trends and change fast
Keep up with the trends and see them coming
Fortunately, the barrier to transitioning is low now
Just as the internet was a drop in the cost of information, AI is a drop in the cost of cognition
When AGI?
AGI timelines from the big daddies of tech:
- Ray Kurzweil — 2029 for AGI; 2045 for Singularity
- Sam Altman — within the next decade
- Elon Musk — 2029
- Shane Legg — 2028
- Geoffrey Hinton — 5–20 years away (same as IMF)
- Yann LeCun — AGI soon but ASI still far away
- Demis Hassabis — within a decade
A survey of 2,778 AI authors who published in top-tier conferences (Grace et al., 2024) gives some estimated timelines:
- The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model.
- If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027 and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey conducted only one year before.
- The chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116.
- Most respondents expressed substantial uncertainty about the long-term value of AI progress: while 68.3% thought good outcomes from superhuman AI are more likely than bad, 48% of these net optimists gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes.
- Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction.
- More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios, including the spread of false information, authoritarian population control, and worsened inequality.
- There was disagreement about whether faster or slower AI progress would be better for the future of humanity, but broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.
Earlier it was thought that blue-collar jobs would go first, then white-collar, and then managerial jobs. It turns out the trend is reversed: generative AI is good at producing knowledge work, while blue-collar jobs are not yet replaceable unless there is a cost-effective breakthrough in robotics.
Fate of jobs after AGI:
- Bill Gates — only three kinds of jobs will remain: 1) those who develop AI systems, 2) those in biosciences, and 3) those associated with clean energy
- Sam Altman — resilience, deep familiarity with tools, critical thinking, creativity, adaptability, and the human touch
The IMF report (Scenario Planning for an A(G)I Future by Anton Korinek) offers a unique perspective on the future:
- Compute has doubled roughly every 6 months over the past decade (see the sketch after this list for what that compounding implies)
- Of the two views of the brain (1. the brain is infinitely complex; 2. the brain is a computational system with an upper limit), AI systems are soon to surpass humans in the second case
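To put that doubling rate in perspective, here is a minimal back-of-the-envelope sketch. It assumes a clean, constant 6-month doubling period, which is an idealization of the trend the report cites: such a rate compounds to roughly 4x per year and about a million-fold over a decade.

```python
# Back-of-the-envelope: what a 6-month doubling time in compute compounds to.
# Assumes a constant doubling period; the real trend is noisier.

def growth_factor(years: float, doubling_months: float = 6.0) -> float:
    """Total multiplicative growth after `years`, given a fixed doubling period."""
    doublings = years * 12.0 / doubling_months
    return 2.0 ** doublings

for years in (1, 5, 10):
    print(f"{years:>2} years -> ~{growth_factor(years):,.0f}x compute")

# Output:
#  1 years -> ~4x compute
#  5 years -> ~1,024x compute
# 10 years -> ~1,048,576x compute
```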
The report details 3 scenarios:
- Scenario 1: traditional, business as usual (Chart 1), where AI simply enhances productivity
- Scenario 2: baseline, AGI in 20 years (Chart 2), arriving gradually
- Scenario 3: aggressive, AGI in 5 years
What plays out is a complex interplay of research, business, and policy. A lot of factors might slow AGI rollout and adoption — from organizational frictions, regulations, and constraints on capital accumulation — such as chip supply chain bottlenecks — to societal choices on the implementation of AGI. Even when it is technologically possible to replace workers, society may choose to keep humans in certain functions — for example, as priests, judges, or lawmakers. The resulting "nostalgic" jobs could sustain demand for human labor in perpetuity.
The AI Hype:
Companies like Google, Microsoft, Meta, and Amazon are pouring billions into AI. Google recently announced that it is going to be an AI-first company. But currently the only company making a clear profit from AI is NVIDIA (the shovel seller in the AI gold rush), and Devin, Sora, and Gemini exaggerating their capabilities prompts us to take a deeper look at hype-led marketing. Hype leads to higher valuations, which in turn let companies hire great talent by paying them in shares.
Given that these companies have a lot of talent on hand to experiment, it makes much more sense for them to follow the curve and explore what they can build, instead of waiting and watching someone else disrupt them.
Some Predictions:
- Collapse of civilisation: an MIT study from 1972 suggested that society might collapse in the mid-21st century. A new study by Gaya Herrington, a KPMG director, suggests that we are on schedule. According to her study, if we focus on technological progress, increased investment in public services, and a shift away from growth for its own sake, we can avoid the risk of societal collapse. This period is crucial because continuing with "business as usual" (the BAU2 scenario) will likely lead to a halt in growth and potential collapse by around 2040. Even a pure focus on technology without investment in public services is not enough (the CT scenario). The following graphs highlight the cases: