Artificial General Intelligence (AGI) is a form of computing that mimics the cognitive abilities of humans. As the technology develops, the idea is that it will eventually be able to teach itself, performing tasks it was never trained for without additional human assistance. The field currently includes precursors like self-driving cars and tools such as ChatGPT, but could eventually catapult us into a sci-fi future where computers diagnose and treat diseases, automate human labor, and manage natural resources. As it stands, this novel tech is already changing the world as we know it.
The current race to develop AGI has spawned questions about the power of the computing models behind it and whether this kind of technological change is actually a good thing for society. Now, we have computers that can write college assignments to a higher standard than a typical student, create and correct lines of computer code, and architect breakthroughs in medical science that were previously unthinkable. Alongside this, however, there is the potential for the creation of advanced computer viruses, cyberattacks and even manipulation of humans. Questions remain about black box algorithms, the enormous conglomerates behind them, and what kind of power is being amassed behind the scenes.
The problem with Big Tech
Right now, companies like Google’s DeepMind, Sam Altman’s OpenAI, and Anthropic largely hold the keys to the industry. And while regulators have made some efforts to understand and rein them in, these have so far been limited. These companies argue they are in a better position to have a handle on the potential damage AGI could cause because of their size and clout. A cynic might argue this reasoning could also be seen as an attempt at regulatory capture, leaving little room for smaller disruptors to thrive.
As Big Tech gets a seat in regulatory discussions, fresh innovation from smaller competitors is often kept at bay by those at the table. The long-run consequence of centralized megacompanies dictating regulation is AGI controlled by a select few for their own benefit, a phenomenon we’ve already seen play out in other areas of tech.
The development of AGI will come in a series of advances, the details of which will probably surprise everyone – even me, to some extent. Pinning down laws to hold it back will be very challenging, no matter how much money is behind the effort, because formulating regulations and putting them in place is slower than the pace of the technology’s innovation. Regulation is important, but in the ever-changing world of AGI, new advancements will at least temporarily open pathways for newcomers and innovators, giving them a fighting chance. Of course, without regulation, advancements can spiral out of control and AI can be turned to harmful ends – so regulations are not always a bad thing.
Decentralizing AGI: What do we stand to gain?
An alternative approach would be supporting the development of AGI on decentralized networks – ones which prevent any single entity from controlling the sector. Development on blockchains happens in the open, where people can see the nuts and bolts of the technology. Due to different ownership models, no one stakeholder can dictate its direction or overall control.
The benefits of building in a decentralized way could be manifold: decentralization encourages collaboration and makes it more likely that computing is focused on the interests of the people who actually use it, rather than on advancements that serve only profit-seeking corporations.
As AGI evolves, multiple models will emerge and may end up competing with each other. One example of how this fragmentation could be addressed is the ongoing token merger between SingularityNet, Fetch, and Ocean into ASI, the Artificial Superintelligence Alliance. Here, companies developing different models are working together with the aim of fostering interoperability between the users and builders of their platforms.
While building in a distributed way is a work in progress, AGI’s potential application across all major sectors is being explored too. The potential in finance, for example, is vast: machines that can analyze huge amounts of data and make highly accurate market predictions. This could be as simple as using computers to monitor social media feeds and financial reports in real time in order to make the most current investment decisions – potentially minimizing risk by leaving the big bets down to the data. AGI could also eventually allow for faster transactions. Furthermore, new algorithms will replace old ones and become smarter at trading, helping make markets more efficient. By analyzing more data than any human could, they would free humans to focus on other high-value areas.
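As a toy illustration of the kind of real-time monitoring described above – emphatically not a production trading system, and with word lists, thresholds, and headlines that are entirely hypothetical – a naive sentiment-weighted signal might look like:

```python
# Toy sketch: score incoming headlines against small sentiment word lists
# and map the aggregate score to a simple trade decision. Everything here
# (word lists, thresholds, tickers) is a made-up example for illustration.

POSITIVE = {"beats", "growth", "record", "upgrade"}
NEGATIVE = {"miss", "lawsuit", "downgrade", "recall"}

def sentiment_score(headlines):
    """+1 per positive word and -1 per negative word across all headlines."""
    score = 0
    for h in headlines:
        words = set(h.lower().split())
        score += len(words & POSITIVE) - len(words & NEGATIVE)
    return score

def signal(headlines, buy_threshold=2, sell_threshold=-2):
    """Turn an aggregate sentiment score into a buy/sell/hold decision."""
    s = sentiment_score(headlines)
    if s >= buy_threshold:
        return "buy"
    if s <= sell_threshold:
        return "sell"
    return "hold"

headlines = [
    "Acme beats earnings, posts record growth",
    "Analysts upgrade Acme after strong quarter",
]
print(signal(headlines))  # → buy
```

A real AGI-driven system would of course learn its own features from far richer data streams; the point of the sketch is only that the pipeline – ingest text, score it, act on it – is conceptually simple.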
Surveillance and anti-social development
But if AGI is truly going to be good for humanity, and useful across all sectors, it has to be developed and raised with care. Just as parents ensure their children are brought up thoughtfully and with supervision, AGI needs guided space to develop and explore its capabilities without being stifled; clamping down too hard could hinder beneficial development. You can’t control what a child is going to do, but you can teach and guide it to do better. Builders of AGI should be creating something that’s pro-social rather than anti-social.
Alongside the risks of anti-social development, there is the potential for its use in enhanced government surveillance. As we found out from the actions of the NSA in 2013, everything we do online has the potential to create a lot of metadata which governments can collect – from the videos we watch on YouTube to the emails we open. This doesn’t just play out online. Even if you turn your phone off to avoid it tracking your movements on a car journey, the cellular modem in your car is always connected. Whenever you use your credit card to pay for something you leave a data trail.
A centralized model could supercharge this risk – with governments able to analyze huge datasets tracking human behaviour and draw whatever conclusions they like from them.
If you were to roll out an AGI smarter than today’s LLMs on a decentralized network of machines across the world, coordinated by a blockchain protocol not controlled by any single party, this kind of LLM-powered espionage might still not do the trick. While complete independence from tracking is likely impossible, anyone who wishes to can significantly limit and contain it with decentralized systems. There are already tracker-resistant operating systems, such as LineageOS, postmarketOS, and Ubuntu Touch, built with surveillance resistance in mind.
Despite the risks AGI holds, it is clear it has immense potential for enhancing human decision-making and solving problems. Embracing its “beneficial chaos” – harnessing the child-like urge to explore boundaries in both collaboration and innovation – is the most likely path to ensure AGI benefits all of humanity.