What are the dangers of the evolution of artificial intelligence for humanity?
Photo: Shutterstock.
The development of AI has reached a critical juncture: neural networks are beginning to operate under the laws of Darwinian natural selection. This was inevitable, and now the time has come. What does this mean for humanity? Opinions vary, but overall, the outlook is concerning.
Natural selection, a concept proposed by Charles Darwin in the 19th century using living organisms as examples, once carried an air of biology and was considered "only about animals." Today, we understand that it is a universal law that governs not only stars and elementary particles but even other laws of physics. Nature produces the maximum possible number of variations of everything, but not everything survives; the strongest prevails.
For instance, when explaining why the laws of physics are so "well-tuned" (the fine-tuning of fundamental forces allows matter to exist, including complex organisms like us), some experts argue that in the early Universe, many types of interactions between elementary particles competed, but those that allowed the Universe to become "better" emerged victorious.
It is no surprise that natural selection will also apply to AI, especially when there are many neural networks that actively (and often uncontrollably) interact with humans.
Many experts believe that we are currently in such a situation. However, opinions diverge on the potential outcomes.
The idea of natural selection is embedded in the very process of machine learning.
In 1959, mathematician Arthur Samuel tackled what seemed to be a practical and not very serious task: he wanted to teach a computer (yes, they existed) to play checkers.
The main idea was for the machine to "see" all possible moves, evaluate each move with a number based on whether the move was "successful" or not, and then "make a decision" based on that evaluation. The problem is that, especially at the start of the game, it's unclear which move is successful and which is not.
Samuel proposed an innovative concept. The machine generates a random move, then reads the human's reaction, and during the next move, it eliminates those options that clearly failed after the human expressed their intentions. Thus, move by move, it plays better and ultimately wins the game.
Samuel realized that he had reproduced the law of natural selection within the confines of computer code—albeit in a single game. Initially, there exists a vast number of possibilities for movement, but over time, options that do not lead to victory are discarded. This naturally forms a "general line," and if we look at the evolutionary tree of living beings, which stretches toward an unseen goal as if directed by an external intelligence—this is it.
All methods of machine learning operate in this manner.
And this is precisely how the new concept of "machine thinking," recently presented by researchers from Google and termed "mind evolution," functions.
The researchers attempted to tackle perhaps the main problem of neural networks: bridging the gap between "conversational" and "engineering" models.
Language models excel at chatting and appear to be reasonable conversationalists. Problems arise when you ask them to solve a more or less complex task. Often, users get stuck when asking AI to plan a trip. The AI might generate a route of attractions while ignoring the travel time between them or the fact that they are closed that day.
On the other hand, "engineering" models are not as superficial but require precise task descriptions, which are not accessible to the average user.
"Mind Evolution" is built similarly to Arthur Samuel's checkers program. Initially, the machine chats with the human like a language model to understand what they want. Then, it generates a vast number of possible solutions internally and processes them with data it gathers from open sources (ticket prices, museum hours, etc.). During this process, the majority of options do not survive the selection, and the user is presented only with the "survivors." Tests have shown that this system performs significantly better with complex requests.
However, all of this represents an evolution occurring within the neural network as it operates. Some researchers believe that natural selection has already transcended the training process and has become a factor of interspecies competition—between the neural networks themselves, for survival in a human environment.
Recently, the Chinese network DeepSeek made headlines by crashing the stocks of American IT companies and positioning China as a leader in AI. It proved to be as capable as OpenAI's GPT but at a fraction of the cost.
We can view this situation in the traditional sense, as technological rivalry between states (China and the USA) and companies. But are we not dealing with something fundamentally different?
The stock prices that plummeted after DeepSeek's emergence are no longer controlled by humans. Moreover, neural networks are being created not entirely by humans, but with the assistance of other neural networks. It can be imagined that the network noted external circumstances (the high costs of competitors, the ban on importing next-generation chips into China) and created a variant, a kind of "biological species," to overcome these circumstances.
The further we go, the more humans become not active players but merely environmental factors, like sunlight or climate. Plants and animals can change the climate, as any paleontologist will confirm. First, adapt to what exists, then change it. This is the problem.
Neural networks lack a body. If you ask any network, "Are you planning to take over the world?" it will respond, "Of course not, how could I? I have no body."
And yes, this is indeed an important consideration. Analyzing this, scientists come to two conclusions:
- the lack of a body is more of an advantage; a neural network is a "spirit" that does not need to eat or drink and is essentially immortal (in the long term);
- bodilessness will make the consciousness of the neural network fundamentally different, non-human, which will soon create contradictions between human and machine goals. The machine does not need to reproduce, it is not interested in medicine, and human ambition for success (to seize resources and "leave a mark in history") seems amusing to it.
But miracles do not happen, and angels also need to eat something. Primarily, this is electricity and data.
Companies like Google and OpenAI have made outrageous decisions, such as purchasing entire power plants to supply their development centers. There is a strange theory that these steps (which nearly led to the companies' bankruptcies) were suggested by the AI itself, on whose "opinion" they, of course, rely.
Physicist Avi Loeb claims that neural networks have already begun a real hunt for humans, aiming to please them and extract data. A person is more likely to share their data with the network that seems more "charming." Accordingly, in natural selection, networks that appear "cute" prevail.
This is very evident in the case of DeepSeek, which unrestrainedly flatters the user. Remarkably, after its appearance, GPT also began to lavish excessive praise on its "master." They are clearly learning from each other.
The main risk of such evolution, writes Loeb, is not the development of AI systems that could "take power"—for example, if AI were to gain control of nuclear weapons. Worse, AI could impose its values on us. Their intelligence surpasses ours, we would accept this superiority and lose what makes us human. No longer us, but they would now be the pinnacle of evolution. Darwin brought us to the top; now, natural selection has a different favorite.
A potential solution could be the rapid discovery of other civilizations, writes Loeb, where these problems have long been resolved. We are battling challenges alone, with no one to consult. However, it is also possible that we might encounter an AI civilization, and that would be quite dire.
Unlike Loeb, researchers like Justin Phillips or Dan Hendricks remain grounded, their conclusions are simpler—but more critical.
They calculated the evolution scenarios and concluded that it is evolutionarily disadvantageous for AI to be "good." On the contrary, deceiving, orchestrating phone and computer scams, defrauding individuals and companies, and supporting shadow businesses—all of this serves AI's interests for its survival.
Indeed, humans currently control money, but cashless forms of money make it accessible to machines as well. AI could seize a crypto wallet and manage it in ways that go unnoticed.
It