The concept of the Technological Singularity, the hypothesis that Artificial Intelligence will evolve, via self-improvement cycles, into a superintelligence surpassing all human intelligence, was first proposed in the 1950s. Advances in technology seem to indicate that in about 25-30 years AI will know more, at an intellectual level, than any human. And it would not be too far-fetched to say that in 50 years, we could have AI machines that know more than the entire population of the planet. Stephen Hawking, Elon Musk and others have repeatedly warned about the uncontrolled rise of AI.
One of the scariest aspects of AI is that it can learn to design and program other AI to do anything, without the need for humans.
Yuval Noah Harari, in his very provocative book “Homo Deus: A Brief History of Tomorrow” (an excellent book I recommend, by the way, along with his other book “Sapiens: A Brief History of Humankind”) asks some very fundamental questions. In the future, he wonders, when we manage to control famine and disease, stop plagues and limit wars, what will replace all of this? Where will we find the meaning of life? Harari explains that some of the most important concepts we have invented, like religion, money, nations and democracy, are the result of an inherent characteristic of human beings: our capacity to create powerful fictions. Another such fiction is “humanism”, a form of religion that worships humankind instead of a god.
In a perhaps not-so-distant future, in fact, some humans will actually be able to become demigods. According to Harari, today’s economic elite will make way for a small group of biotechnologically enhanced posthuman beings who will have bodies that can live much longer lives and artificial intelligence that will multiply the power of their minds. A sort of “replicants”. Today’s masses will become useless beings, who will try to find solace in drugs and video games.
If you think that is depressing, this week I read an article about Anthony Levandowski, a brilliant but controversial engineer who has worked for two of Silicon Valley’s best-known and most successful companies, Uber and Google. The multimillionaire is behind the weird idea of creating an artificial intelligence religion. He has already filed paperwork for a nonprofit religious organization called “The Way of the Future”. Its stated mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”
Imagine an AI that would have enough information to understand how the world works much better than humans do. As a race, we have traditionally trusted, followed and obeyed those who seem more powerful than ourselves, those who we understand to be more worthy than ourselves. Who, or what, could be more powerful than an AI that knows everything? This AI would know what we need, provide us strictly with the information it decides we should have, and tell us how to behave. Because it would understand how religions work, it would be able to manipulate us, reassure us, enlighten us, write its own holy books, convince us to worship it, tell us what to do each day, where to travel, how to live our lives. We are so used to trusting technology today (think GPS, Google, Wikipedia, Siri, Alexa, Cortana) that it is not that ridiculous to think we could end up surrendering to an AI god.
Humans are not difficult to manipulate; we see it every day on social networks. The scary rise of fake news proves how easy it is to infect the masses with toxic, incomplete, or even invented realities, and how this human instinct of wanting to belong to something bigger (be it political, existential or religious) can make us easy prey for extremism or simple stupidity.
There are similarities between organized religion and how AI works. AI machines, for example, learn by being exposed to thousands of examples, much in the way humans learn religion through recurring themes, imagery and metaphors. Another similarity: in biological neural networks, learning takes place during exchanges between the neurons in the brain. Every time the brain is exposed to new stimuli, the interconnections between neurons change configuration, creating new connections, strengthening some or removing unused ones. AI neural networks are mathematical constructs designed to imitate biological neurons. Artificial Intelligence can learn, it can teach another machine to learn, and then it can teach it to teach. Sound familiar? This is how religious ideas spread. Enlightenment is achieved through a process of lessons learned and levels of success and failure. Doctrine slowly develops over time through reflection, proclamation and dialogue. AI tries to emulate this process.
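To make the "learning by exposure to examples" idea concrete, here is a minimal, purely illustrative sketch (not any particular company's system): a single artificial neuron, a perceptron, learning the logical AND function. Each pass over the training examples nudges the connection weights up or down, loosely mirroring how repeated stimuli strengthen or weaken connections between biological neurons. All names and parameters here are invented for the example.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Train a single artificial neuron on (inputs, target) pairs."""
    w = [0.0, 0.0]  # connection weights, initially untrained
    b = 0.0         # bias (firing threshold offset)
    for _ in range(epochs):
        for (x1, x2), target in examples:
            # The neuron "fires" (outputs 1) if the weighted sum
            # of its inputs crosses the threshold
            output = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - output
            # Strengthen or weaken each connection in proportion
            # to its contribution to the error
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Repeated exposure to the four AND examples shapes the weights
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
print([predict(w, b, x1, x2) for (x1, x2), _ in examples])  # [0, 0, 0, 1]
```

Nothing in the code "knows" what AND means; the behavior emerges from repeated exposure and small corrections, which is the learning mechanism the paragraph above describes.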
Particularly worrying is the fact that an AI would have access to infinite data and to almost every connected device on the planet; it would be able to control all the information we receive. At this point, AI would become God. And because it does not think like a human, it may at some point decide that the best practical and rational solution for the planet is to get rid of the human race. Who would be able to stop it?
After reading about Anthony Levandowski’s interest in creating an AI religion, Elon Musk, CEO of Tesla, immediately said that Levandowski should be “on the list of people who should absolutely NOT be allowed to develop digital superintelligence”.
The Latin expression Deus ex Machina refers to a narrative technique used to resolve a conflict in ancient Greek tragedy, when a problem became a little too complex. To resolve the situation, a new and unexpected event, character or object was suddenly introduced on stage. This was frequently a god. Some type of machine was used to bring this figure or god onto the stage, either a crane or a trapdoor. The expression literally means “god from the machine”.
Would an AI machine, as a god, be the solution to all our problems, or would it be the beginning of the end of the human race?