There are different definitions of “AGI” (Artificial General Intelligence). Some people focus on AI’s understanding and possibly even sentience, while many focus on what it can do. Some people define it as equivalent to the abilities of the average person; others, as equivalent to the abilities of experts.
Part of the challenge is that intelligence comes in many forms. For instance, the ability to grasp objects is a form of intelligence, though it’s not something people generally think of as a business-related skill. At the same time, Moravec’s paradox observes that computers are great at things humans are not, and vice versa (e.g. computers have a hard time grasping objects but can do advanced maths in milliseconds). So, comparing human and machine intelligence is challenging.
That said, I favor the “what it can do” approach because that has the most immediate impact on people’s lives. That is, if we have AI systems that can do economically useful work as well as the average person (or, better yet, the average expert), that means a few things:
- People won’t be needed for that work. (What happens to jobs? To the economy?)
- Total economic output could increase several times over. For instance, AI may accelerate technological progress. At a minimum, robots can work 24/7/365, whereas humans work a fraction of that. Imagine our ability to fabricate advanced computing chips doubling; those chips can then be used to make more chips, and so on.
- We may have begun the “singularity”, where digitally encoded knowledge and skills skyrocket. This is because we will have reached a point where the AI can improve itself: expanding the types of jobs it can perform, improving its own performance, and likely innovating new techniques or technologies to assist with its goals.
(Of course, that could have tremendously good or tremendously bad outcomes – e.g. global retirement and a healthy ecosystem vs. literal doom – but that’s another discussion.)
This vid argues that we’ve hit AGI by this definition, and by some narrow definitions, I think that may be the case. (I still think we need more accuracy, a better “ecosystem” for it to function in, more real-world modeling, etc. OTOH, this isn’t preventing it from being massively useful right now.)
So, this doesn’t mean that the things I just listed will happen tomorrow – but it does mean that we should expect more enormous advances in the lab and start to see real-world applications slowly emerging. The line between AI and AGI is quickly blurring. Buckle up.
P.S. I know casual readers probably hear about AI here and there but may still have a picture in their head of AI as basically just a tool for making crappy pictures. I’m begging y’all to see that AI is both way beyond that (e.g. it’s now making literal movies and rapidly approaching market-ready results) and, more importantly, much more than that. AI is advancing every field of science, from fusion energy to quantum computing to curing diseases, and so much more. This is no longer a curiosity. This is real and it’s here.