AGI, the singularity and possible dangers

Artificial general intelligence

'We need to make sure that the targets are specified correctly, that there is nothing ambiguous in them and that they are stable over time. But in all our systems, the goal at the highest level will still be specified by the designers. The system can think of its own ways to achieve that goal, but it does not create its own goal.' - Demis Hassabis

Many AI applications seem spectacular, but they remain limited. Shazam, through pattern recognition, can identify virtually any song you play to the app in the blink of an eye. That is not to say that Shazam has heard all those songs before. The software makes a kind of fingerprint of every song it hears and sends it to the cloud. There, that 'pattern' is compared with the patterns of millions of songs stored in an online database. Shazam surpasses humans in recognising music, but cannot ride a bike or change a baby's diaper. Will artificial intelligence ever match or even surpass the level of 'human intelligence'?
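The fingerprint-and-match idea can be sketched in a few lines of code. The example below is a deliberately simplified illustration, not Shazam's actual algorithm: it hashes pairs of dominant spectral peaks per window and matches them against a tiny in-memory 'database'. All song names and signals here are invented for the sketch.

```python
import cmath
import math
from collections import Counter

WIN = 64  # samples per analysis window

def dominant_bin(window):
    """Naive DFT: return the frequency bin with the largest magnitude (skipping DC)."""
    mags = [abs(sum(x * cmath.exp(-2j * cmath.pi * k * n / WIN)
                    for n, x in enumerate(window)))
            for k in range(WIN // 2)]
    return max(range(1, WIN // 2), key=lambda k: mags[k])

def fingerprint(samples):
    """Hash pairs of consecutive spectral peaks, a toy stand-in for
    Shazam-style 'constellation' hashing."""
    peaks = [dominant_bin(samples[i:i + WIN])
             for i in range(0, len(samples) - WIN + 1, WIN)]
    return [(a, b) for a, b in zip(peaks, peaks[1:])]

def tone(freq_bin, windows=4):
    """A pure tone at the given frequency bin, lasting `windows` analysis windows."""
    return [math.sin(2 * math.pi * freq_bin * t / WIN)
            for t in range(WIN * windows)]

# 'Songs' are sequences of tones; the database maps each hash to the songs containing it.
songs = {
    "song_a": tone(5) + tone(12) + tone(9),
    "song_b": tone(7) + tone(3) + tone(20),
}
db = {}
for name, samples in songs.items():
    for h in fingerprint(samples):
        db.setdefault(h, []).append(name)

def identify(clip):
    """Vote: the song whose stored hashes match the clip's hashes most often."""
    votes = Counter(name for h in fingerprint(clip) for name in db.get(h, []))
    return votes.most_common(1)[0][0] if votes else None

# A short excerpt from the middle of song_a is enough to identify it.
print(identify(songs["song_a"][WIN * 3:WIN * 9]))  # → song_a
```

The essential point survives even in this toy: the app never needs to have 'heard' the clip before, because matching happens between compact hashes, not raw audio.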

Herbert Roitblat tempers expectations in his book Algorithms Are Not Enough. Natural intelligence, he says, enables babies to recognise their mother's face within hours of birth. That natural or innate intelligence enables us to navigate an unfamiliar city or fold laundry. Human intelligence is not rational, but emotional: it draws conclusions quickly on the basis of very little data. According to Roitblat, we interpret the term 'artificial intelligence' far too narrowly. Artificial intelligence is an organised, systematic approach to processing information, and it does not matter whether those processes are carried out by a machine, on paper or in a brain. Algebra, for example, enables people to think systematically and solve mathematical problems that were previously intractable. The invention of such systematic processes has guided the development of human intelligence for at least the last fifty thousand years. This invented intelligence enables us to think rationally, rather than emotionally, about complex problems. It is rational, methodical and symbolic. Roitblat uses Einstein as an example.

Einstein was recognised as brilliant not because of his ability to systematically solve mathematical equations, but because of his ability to create new ideas, new visions of the world, that were captured by his equations. Einstein's genius was not merely a logical recombination of the work that had preceded it; it went a step further. He not only deduced physical principles from observations that had already been made, but also predicted observations that would be made in the future.

Computers and AI already outperform humans at a number of tasks, but to reach the level of artificial general intelligence, AI must possess a large number of typically human capabilities. Roitblat lists the most important ones:

  • the ability to reason;
  • the ability to plan strategically;
  • the ability to learn;
  • the ability to perceive;
  • the ability to draw conclusions;
  • the ability to represent knowledge;
  • the ability to learn from a small number of examples;
  • the ability to identify problems;
  • the ability to specify goals;
  • the ability to find new and productive ways of representing problems;
  • the ability to recognise multiple approaches to a problem and evaluate each of them;
  • the ability to invent new approaches;
  • the ability to think about vague ideas and make them usable;
  • the ability to transfer knowledge from one task to another;
  • the ability to extract overarching principles;
  • the ability to speculate;
  • the ability to reason counterfactually (against the facts);
  • the ability to reason non-monotonically;
  • the ability to exploit common-sense knowledge;
  • ...

Demis Hassabis, a former chess prodigy and video game designer, holds degrees in computer science and cognitive neuroscience. He sold his artificial intelligence research company DeepMind to Google in 2014 for reportedly $625 million. Hassabis labels most of the AI applications we already have as 'just software'. 'They're things that work,' he argues. He wants to go much further. Hassabis takes his inspiration from the human brain and is trying to build the first general-purpose learning machine. Flexible, self-learning algorithms modelled on biological systems should be able to learn any task from scratch, using nothing more than raw data.

DeepMind is the company that stunned the Go world with AlphaGo. The game of Go was already mentioned in the writings of Confucius and is at least two and a half thousand years old. The number of possible board configurations in the game is said to be greater than the total number of atoms in the universe. Brute-force calculation is of no help in writing software that masters this game, nor can you program a set of rules that prescribes how and in what order to make certain moves. Go was therefore long regarded as one of the great challenges for AI. AlphaGo defeated one Go champion after another. Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, describes that event as groundbreaking. If artificial general intelligence (AGI) is achieved, it will mark a break in the 'fabric of history'. In this he agrees with Ray Kurzweil, who believes that this ultimate moment lies in the very near future.

AlphaGo has taught itself to master the game using general-purpose machine learning techniques. Hassabis plans to apply these techniques to many major real-world problems, such as global warming and research into life-threatening diseases.

DeepMind combines old and new AI techniques, such as traditional search methods and deep neural networks (DNNs). In its deep Q-network (DQN), it combined DNNs with reinforcement learning. This is the way all animals learn: through the dopamine-driven reward system in the brain. AlphaGo went a step further by adding long-term memory functions. Hassabis compares his company to NASA's Apollo programme and the Manhattan Project (which developed the atomic bomb). The company brings together top talent from around the world, as did the Manhattan Project, for which John von Neumann, among others, worked.
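The reward-driven learning that DQN builds on can be illustrated with plain tabular Q-learning, the textbook precursor in which the neural network is replaced by a lookup table. The toy corridor environment below is invented for the sketch; DeepMind's actual systems are vastly more elaborate.

```python
import random

# Tabular Q-learning on a tiny corridor world: states 0..4, actions
# 0 (left) / 1 (right); the agent is rewarded 1.0 only on reaching state 4.
# DQN replaces the Q-table below with a deep neural network.

N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Move left or right; the episode ends with reward 1.0 at the last state."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-value per (state, action)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            if rng.random() < EPSILON:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            # Temporal-difference update toward reward + discounted future value.
            target = reward + GAMMA * max(q[nxt])
            q[state][action] += ALPHA * (target - q[state][action])
            state = nxt
    return q

q = train()
# After training, 'right' should dominate in every non-terminal state.
print(all(q[s][1] > q[s][0] for s in range(N_STATES - 1)))  # → True
```

The dopamine analogy lives in the `target - q[state][action]` term: learning is driven by the difference between the reward the agent expected and the reward it actually received.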

Another major breakthrough was that of AlphaFold in 2021, which solved the 50-year-old 'protein folding problem'. Proteins are essential to all forms of life: they support virtually all biological functions in living organisms. Proteins are large, complex molecules made up of chains of amino acids, and the function of a protein depends largely on its unique 3D structure. To better understand the function of the various types of proteins, you therefore need to know how and into what shapes proteins 'fold'. AlphaFold solved this long-standing problem in the blink of an eye by 'predicting' the 3D structure into which a protein folds.

Dramatic job losses and autonomous weapons

The Taiwanese-born Kai-Fu Lee is a computer scientist, businessman and writer. As a doctoral student at Carnegie Mellon University, he developed an advanced speech recognition system. He has worked for Microsoft, SGI, Apple and Google China, and is currently considered one of the most prominent figures in the field of AI in China. In his book AI Superpowers: China, Silicon Valley, and the New World Order, Lee describes how quickly China is developing into a world leader in AI. As one of the most important reasons, he cites the fact that no other country has so much data at its disposal, an inevitable consequence of the size of its population. Moreover, China is not hampered by data protection laws. Since the victory of AlphaGo, the United States and China have been engaged in an AI arms race.

'If data is the new oil, then China is the new Saudi Arabia.' - Kai-Fu Lee

Kai-Fu Lee compares the progress in artificial intelligence with the Industrial Revolution, but argues that the impact on employment will be much more far-reaching. After all, AI is also capable of replacing a great deal of 'mental' and 'cognitive' work. Among the cognitive jobs most at risk of redundancy are those of translator, radiologist, certain banking functions, tax auditor, telemarketer... These jobs involve relatively little social interaction and can easily be optimised. Social jobs, professions that rely on creativity and/or strategy, and certain physical tasks are less at risk. But a large number of physical jobs, including those of dishwasher, truck driver, textile worker, cook and cashier, are also at high risk.

'Artificial intelligence is the future, not only for Russia, but for the whole of humanity. (...) It brings colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this field will become the ruler of the world.' - Vladimir Putin

As Hillary Clinton's Senior Advisor for Innovation, Alec Ross visited 41 countries. He visited war zones and refugee camps around the world, and met the most powerful people in business and politics. These encounters changed and deepened his understanding of what the world faces in terms of innovation and automation. In his book The Industries of the Future, he examines how innovative technologies, such as robotics and artificial intelligence, will change the world economy in the next ten years, and describes which jobs have little chance of survival because of cloud computing and AI.

Among other things, he talks about 'cloud robotics'. Robots, including drones, will upload their 'learned experiences' to the cloud and thus be able to learn from the experiences of other robots. Through this shared knowledge, robots and AI will develop a form of adaptive and cumulative learning culture. Suppose someone were to build a drone whose job is to liquidate a particular person. Even if you, the target, take out that drone, you are not safe yet: the drone has already passed on its mission, its knowledge and its location (and therefore yours) via the cloud. Because of this shared culture of knowledge, robots will keep learning new tasks.

A somewhat frightening study by Oxford University concludes that 47 per cent of all human jobs in the United States could be replaced by robots and AI over the next 20 years. The research lab of ride-hailing company Uber is building an automated taxi fleet, making human drivers redundant. Both Google's X and a number of Chinese car manufacturers are working on driverless cars. And so on.

Why should we replace so much human potential with robots and AI? The initial costs of robots and automation are much higher, but they pay for themselves in no time. You don't have to pay robots a salary, they don't demand a nine-to-five job, they don't get sick (at most you may have to replace a part) and they don't take holidays. They work around the clock. What do you advise your children to study, or conversely, why should they want or need to study at all? Why learn another language when you see how good AI-based translation systems such as DeepL already are?

AI, and automation in general, can boost economies, from heavy industry to medical research, but will also be used for warfare. In the 2017 short film Slaughterbots, the viewer is presented with a dramatised and dystopian future scenario. Swarms of cheap microdrones use artificial intelligence and facial recognition to kill political opponents based on pre-programmed criteria. The protagonist presents the new invention in front of an enthusiastic audience, in a way that is very reminiscent of the annual presentation of new Apple products or a TED conference. At the end of the film, he points his finger to his head and says:

'Smart weapons use data. If you can find your enemy using data, even through a hashtag, then you can target a malicious ideology where it originates.'

As a viewer, you are left with a sour feeling, because the technology needed to build this kind of weapon is freely available. With relatively limited knowledge of electronics and current AI technology, a do-it-yourselfer can build such a clever murder weapon. Stuart Russell, a British computer scientist and professor of computer science at the University of California, Berkeley, worked for years on the development of AI. At the end of the film, he warns that the development of smart and autonomous weapons urgently needs to be curbed:

'This short film is more than just speculation. It shows the results of integrating and miniaturising technologies that we already have... The potential of AI to serve humanity is enormous, even in defence. But allowing machines to choose to kill people will be devastating to our security and freedom.'

The new Leviathan or saviour of the world

'We are amazed at the enormous development of the mechanical world, the gigantic progress it has made compared to the slow progress of the animal and plant kingdom. We cannot but wonder what the end of this mighty movement might be... The machines are gaining ground on us. Day by day we are becoming more subject to them... More people are daily devoting the energy of their whole lives to the development of mechanical life.' - Samuel Butler (1835-1902)

The philosopher Thomas Hobbes (1588-1679) used Leviathan, the mythical sea monster from the Hebrew Bible, as a symbol for the rule of law, which stands above all other human powers. Leviathan was seen as a kind of political god, elevated above everything else. The aforementioned author George Dyson proclaimed Hobbes the father of artificial intelligence.

'Nature (the way God made and rules the world) is imitated by man ... so that (man) can make an artificial animal. Life is nothing but moving limbs, controlled by a central part within; so why should we not say that all automata are an artificial form of life? For is not the heart a spring, and the nerves no more than so many strings, the joints no more than wheels that set the body in motion?' - Thomas Hobbes, Leviathan

Polish author Szymon Wróbel fears that our modern automated world will become a modern Leviathan. AI-automated and digitised companies, banks, social media platforms... form an almost impenetrable layer above people. Banks grant loans based on statistical predictions of creditworthiness; customers of large corporations and energy companies can hardly get a hearing for a complaint or problem through automated telephone services... In China, AI systems determine how 'good' you are as a citizen. Leviathan or Metropolis... when was the line crossed? What were the 'Sputnik moments' in modern history when this change took place?

The AI industry in China has developed into a billion-dollar industry in less than a decade. China experienced its 'Sputnik moment' in March 2016, when Google's AlphaGo defeated South Korea's Lee Sedol, champion of the ancient Chinese game Go. More than two hundred and eighty million viewers watched the game. A year later, in May 2017, AlphaGo defeated the Chinese prodigy Ke Jie. Barely two months after that, the Chinese government unveiled its own AI strategy: by 2030, the country promised, it would be the world leader in AI. Nicolas Chaillan, chief software officer of the Pentagon, resigned in 2021 because he could not bear to see China surpass the US in AI and machine learning. According to Chaillan, it was already a foregone conclusion that China would overtake the United States in that field.

According to Kai-Fu Lee, the coronavirus and the lockdowns accelerated the use of AI in both China and the US. In China, the pandemic sped up the use of robotics in factories and restaurants, where a tray on wheels brings the order to your table. Advances in 'language understanding' will transform search engines, as will deepfake technology and the ability to automatically generate video and audio. Increasingly, people will spend their professional and social lives in virtual environments, a kind of 'metaverse'. This virtual world will be populated not only by our own digital avatars, but also by virtual, AI-generated ones.

China is also massively deploying AI for facial recognition and a social credit system, which judges citizens based on their behaviour. Using the collected big data, an AI decides how much social credit each citizen deserves. Those who lose too many points in this social credit system are punished with:

  1. a ban on travelling by plane or train;
  2. throttled internet speed;
  3. denial of access to the best schools for their children;
  4. exclusion from certain jobs;
  5. denial of access to certain (better) hotels;
  6. confiscation of their dog;
  7. public labelling as a 'bad citizen'.

Autonomous weapons, including drones, that make their own decisions are increasingly being deployed around the world, and arms manufacturers do not seem to take Asimov's laws of robotics too seriously. There is an urgent need to regulate the deployment of 'deep tech', which includes AI, quantum computing and blockchain. Isaac Asimov was a prominent science fiction author, known for formulating the 'Three Laws of Robotics'. These laws were first introduced in his short story 'Runaround', published in 1942, and later became a foundational concept in his famous Robot and Foundation series of novels.

The Three Laws of Robotics are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
This law establishes that the primary function of a robot is to ensure the safety of human beings. A robot must not harm a human, and it must also act to prevent harm from coming to a human if it can do so.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
This law establishes that humans have the authority to give orders to robots and that robots must follow these orders. However, if following these orders would cause harm to a human, then the robot must prioritize the First Law over the Second Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
This law establishes that robots have a self-preservation instinct and must take steps to ensure their own survival. However, this self-preservation must not conflict with the First or Second Laws. For example, a robot must not harm a human to protect itself.

As for why Asimov invented these laws, he wanted to explore the potential impact of advanced technology and artificial intelligence on society. He was concerned about the possibility of robots becoming uncontrollable and causing harm, and so he created the Three Laws as a way to ensure that robots would always prioritize human safety and well-being. The Three Laws have since become a foundational concept in the field of robotics and continue to influence discussions around AI ethics and the responsible development of technology.

Whether it is high-tech companies such as the American Meta (Facebook, Instagram, WhatsApp...), Microsoft, Apple or Google, or the Chinese government..., the ground has been prepared. The tech companies seem to have their way and most people, without even thinking about it or being aware of it, provide these companies with a mass of data. The Chinese government is not taking the rights of its citizens seriously and is spreading its AI technology across large parts of Asia and Africa. Western governments and companies are also increasingly turning to AI technologies. Laws and regulations can hardly keep up anymore.

Max Tegmark is a physics professor at MIT and co-founder of the Future of Life Institute. He is a somewhat controversial figure, but nevertheless a renowned name in the world of AI. Tegmark likes to share his ideas about the possibilities AI offers to change the human condition. In his view, AI is a game changer that will radically change life on earth.

"The technology we develop allows life to flourish, not just for the next election cycle, but for billions of years."

He defines AI as the ability to achieve complex goals: the more complex the goal, the more intelligence is required. According to him, there is no reason to believe that artificial general intelligence (AGI) cannot be achieved; no physical law prevents its development. But does this also mean that humans will become superfluous? According to Tegmark, it is a matter of perspective. He sees the enormous benefits AI offers, but we must cultivate the wisdom to minimise the risks. We must, in his words, 'win the wisdom race'. In the analogue world we learn from trial and error, but if we allow that principle to play out at the scale of AGI, the result could be catastrophic. So, as humans, we must proactively anticipate what can go wrong and build in the necessary safeguards. Any science can be used to help people, but also to harm them.

Because the three laws of robotics are too limited, in 2017 Tegmark and his colleagues drew up the 23 Asilomar AI Principles, a set of practical and ethical guidelines for the development and application of AI. More than a thousand researchers and scientists worldwide have endorsed and signed these principles. One of them opposes the development of autonomous weapons: we must prohibit AI algorithms from deciding to kill people. The great wealth that AI will help produce must be distributed more fairly, so that everyone is better off. We need to invest in AI safety research, and so on. But of course, these goals, however beautiful and well-intentioned, remain somewhat vague and tenuous as long as the major players, from governments and banks to corporations, do not take them to heart.

Therefore we need to align the goals of AGI with our own goals, something Tegmark calls 'AI alignment'. Once you know where you want to go, you can identify the problems and potential pitfalls. Instead of a dystopian future, we should imagine an inspiring one, Tegmark argues. He believes that the development of AGI can help us meet our sustainability goals: AGI as a kind of lifesaver that can solve all the world's big problems.

Living in a computer brain 

In the film The Matrix, the main character soon discovers that the world he lives in is not real. Everything and everyone turns out to be simulated by a gigantic machine that uses people's life energy as a kind of biological battery. The human experiences turn out to be the result of ingenious software that feeds data into the brains of the millions of people connected to the machine. The Swedish philosopher Nick Bostrom has tested this simulation hypothesis for viability, developing a series of conditions to check whether such digital immersion is indeed possible, or is not already the case. Immersive techniques are ubiquitous in the digital world. Computer simulations, 3D games, virtual reality, augmented reality, online games (MMORPGs) and so on are the most important forms of immersive technology, blurring the boundaries between reality and digital representation.

The singularity

When a computer reaches human intelligence, the level at which humans have arrived after millions of years of biological evolution, AI will not stop there. Nick Bostrom, Ray Kurzweil, Bill Gates, Elon Musk... warn of this switching moment: the singularity. From that moment on, according to some, artificial intelligence will continue to develop at a furious pace. Human intelligence, they say, is not an end point but a tipping point, after which human intelligence may soon become superfluous, if we are to believe them.

For Ray Kurzweil, this 'singularity' is the moment when man and machine will merge with one another and hybrid transitional forms, such as humanoids or cyborgs, will arise. Bostrom, Gates and Musk call on governments to think carefully and set 'rules' and 'restrictions' now. One of the first results of this call is the 'international' willingness to call a halt to the development of autonomous robots that can be used on the battlefield and in warfare.
