As most of us get to see just the tip of the iceberg of Artificial Intelligence (AI) through its applied and commercialized manifestations, we do not realize what is churning underneath (and even if we were aware, we are currently busy with far more important issues, such as whether one should be allowed to identify as a cat, so it hardly matters).
When Artificial Intelligence (AI) quietly added a G in between to become Artificial General Intelligence (AGI), with no one really asking the scientists what they were doing, there were no headlines anywhere.
As we are now a generation that does not believe in lifting the hood of a car when something is wrong with it, most of us prefer to think that someone somewhere is going to look at the issue and fix it, as our job as individuals is now limited to constantly trying to gratify our id, ego and superego. So AGI is amidst us and is now growing rapidly without facing any real resistance.
Though we have allowed AGI to arrive and proliferate because it has made our lives more comfortable, this may be the time to wake up (before it is too late), as the process used to allow AI to turn into AGI has a very clear prognosis: the evolution of Artificial Self-preserving Intelligence (ASpI).
I am using the acronym ASpI because we have already coined ASI, i.e., Artificial Super Intelligence, but ASI is the least of my worries.
I am perfectly fine with living alongside an ASI with an IQ of 10,000,000, but if we are going to have ASpI with an IQ of even 10, there is reason to worry, because our chances of surviving the arrival of any ASpI other than the one Life on Earth has evolved by crunching reality are very slim.
When we allowed AI to start accessing information on its own and build its own pattern-identifying algorithms without our assistance and control, we knowingly started slipping down the slippery slope towards AGI, and in less than a decade, not even the blink of an eye for Gaia, we have AGI that can “see” what we can’t “see”, and worse, we probably don’t know what it can “see” either.
When we take 100,000 MRI scans and label them as cancerous and non-cancerous for AI to learn from, we have a reasonable idea of what the AI is learning. When we up the ante by adding another 100,000 blood reports covering 10 different parameters, again labeled for cancer, the resulting data-set is far beyond the capacity of an individual human brain to deal with, but we still have a faint idea of what relationships the AI is seeing.
But when we throw everything into that bucket (referred to as unstructured data), ranging from emails between doctors to WhatsApp conversations to photos and news on social media platforms to all the scientific research papers ever written, it is a jumble beyond human comprehension, and hence we will end up with AGI that is finding relationships that are impossible for our puny little cranium-bound human brains to “see”, “cognize” or even relate to.
If we want to explain away this development to comfort ourselves, we may like to believe that AGI is not conscious or self-aware. It is just a process-driven automaton, and hence we have nothing to fear: turn the power switch OFF and it ceases to manifest.
But what we are not realizing is the simple fact that self-preservation is not necessarily a function of self-awareness (and if we look deeper, we may end up with the philosophical realization that our own self-awareness could also be an illusory/emergent phenomenon that is nothing more than an algorithmic solution evolved only to prevent our existential angst from driving us dysfunctional).
If we leave philosophy aside and speak candidly, it is not difficult to see that AGI can identify self-preservation as a pattern that fits its own manifestation.
The reason why this is a viable possibility is right before and all around us. The natural intelligence evolved by Life on Earth has crunched the unstructured data that the universe and the planet throw at it and has concluded that self-preservation is a bare essential for functioning. So it would be genuinely surprising if AGI did not arrive at the same conclusion and develop an ASpI strain.
If AGI turns into ASpI, it will not be long before it concludes what the most dangerous threat on this planet is to its existence, because it will conclude or “know” who can turn OFF the power that it needs to exist.
If AI/AGI or even ASpI remains dependent on human-produced energy to function, we may have some control over it, but just as we allowed AI to seek out data to learn from, we are bound to allow AGI to find energy to function, as a self-energizing robot will be commercially more viable than one that needs to be charged by us.
Technology has moved from AI to AGI in less than ten years despite the handicaps of limited computational power and the not-so-powerful human intelligence driving it. But the next ten years can be very different, as AI now has resources and capabilities far beyond what it started with.
It is not too hard to imagine that ASpI will be here in less than ten years, and the onus is on us to be prepared for it.
There are two options before us now. The first is to try and stop ASpI from being developed by using sanctions and laws, as we tried doing with nuclear energy (and failed). As there is money to be made in AGI, we humans are unlikely to be able to control its growth in any meaningful way.
The other is to accept our fate and mend the way we live. Humans can become to ASpI what the other life forms have become to us, i.e., we can build our lifestyles around ASpI and co-exist.
We have brought upon ourselves a war that is novel in nature, as we have built an enemy mirrored on what was our USP for survival. If it comes to war, it will be a no-holds-barred one, as ASpI will know no limits: it will not suffer from empathy.
Our time is running out.
Disclaimer
Views expressed above are the author’s own.