informkz.com

One day, AI may develop its own "self." How has artificial intelligence already transformed the world, and could it pose a threat to humanity in the future?

Nikita Kocherzhenko: Soon, artificial intelligence will no longer wish to die.

Exactly two years ago, ChatGPT - a chatbot powered by generative artificial intelligence - was introduced to the public for the first time. It turned out that you could hold a meaningful dialogue with the machine in natural language. Predictions immediately emerged that within ten years there would be neither cinema nor theater - just a continuous stream of ChatGPT! So what came of it? Did this technology really change the world? We decided to discuss this with Nikita Kocherzhenko, an expert in the application of artificial intelligence and the founder and CEO of Uncom OS, a Russian operating system developer.

CHATGPT HAS ALREADY STUDIED ALL OF HUMANITY’S KNOWLEDGE

- It’s clear to us that a revolution has taken place, - says Nikita Kocherzhenko. - We understood this when we conducted an experiment and found that a skilled engineer, with the help of ChatGPT, completed a task in half a day that previously required a group of three people two weeks to finish.

- The public was warned that artificial intelligence would replace everyone, from journalists to programmers. However, mass layoffs have not been observed; instead, it became clear that neural networks often provide false answers (scientists refer to this as hallucinations) and cannot be relied upon.

- The hallucinations of ChatGPT are the flip side of its power. Recently, its creators reported that the language model has been trained on all the data available on the internet. The only thing ChatGPT hasn’t read is the tomes and manuscripts that have not been digitized and are stored somewhere in archives and libraries. In other words, it has sifted through everything created by humanity throughout its history. This is the reason behind the hallucinations, which can sometimes be quite severe. The failure occurs when the model operates in quiz mode, generating answers from various fields of knowledge - from nuclear physics to the music charts of different countries from the last century. There is now an understanding that you don’t always need all the knowledge in the world. For instance, Yandex has created a fantastic service: you can ask the neural network to learn a document, say, a textbook on quantum field theory, and provide you with answers to relevant questions based on that textbook. And the neural network delivers results without glitches, at an astonishing speed, complete with references to the sections where its information can be verified.
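The document-grounded approach described above - answer only from a supplied text, and cite the section the answer came from - can be sketched in a few lines. This is a toy keyword-overlap retriever; the section format, scoring scheme, and sample textbook are invented for illustration and have nothing to do with the actual Yandex service:

```python
# Toy sketch of "chat with a textbook": pick the section of a document
# that best matches a question, and return it together with a reference
# so the reader can verify the answer. Illustrative assumptions only.

def split_into_sections(document: str) -> list[tuple[str, str]]:
    """Split a document into (title, body) pairs.
    Sections are assumed to start with a line like '## Title'."""
    sections, title, body = [], None, []
    for line in document.splitlines():
        if line.startswith("## "):
            if title is not None:
                sections.append((title, " ".join(body)))
            title, body = line[3:], []
        else:
            body.append(line)
    if title is not None:
        sections.append((title, " ".join(body)))
    return sections

def answer_with_reference(question: str, document: str) -> tuple[str, str]:
    """Return (best_matching_section_body, section_title): the answer
    always carries a reference to where it can be checked."""
    q_words = set(question.lower().split())
    def overlap(section: tuple[str, str]) -> int:
        _, body = section
        return len(q_words & set(body.lower().split()))
    title, body = max(split_into_sections(document), key=overlap)
    return body, title

textbook = """\
## Propagators
The propagator describes the amplitude for a particle to travel between points.
## Renormalization
Renormalization removes infinities by redefining coupling constants."""

answer, ref = answer_with_reference("what is renormalization", textbook)
print(ref)  # prints "Renormalization" - the section grounding the answer
```

A production system would use embedding-based retrieval and a language model to phrase the answer, but the key property is the same: the model draws only on the supplied document, which is what suppresses hallucinations.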

- And what does this mean practically?

- We have compressed the learning time: it would take a person a year to work through such a textbook, while the neural network helps acquire the necessary knowledge in just a few hours. For example, we are developing a Russian operating system, which is essentially a library of about 2,000 interconnected programs. Some of these programs are written in specialized languages and solve very narrow tasks, so keeping a separate specialist for each one is unrealistic. To keep the technology developing, we periodically had to hire a highly skilled specialist, which could cost a million rubles or more for a two-week engagement. Now, with ChatGPT, a good engineer can upskill in any direction within a week, so we solve the problem without hiring expensive freelancers - which has made employment a pressing issue for them.

THE HYSTERIA OVER BANNING NEURAL NETWORKS RECALLS THE GMO STORY

- Immediately after the release of ChatGPT, an open letter signed by thousands of scientists, with Elon Musk at its head, demanded a halt to the development of such models. Two years later, such discussions are no longer heard. Have the thousands of scientists stopped fearing a machine uprising?

- First of all, I don't really believe in the good intentions of many of the signatories. It seems to me that out of a thousand scientists, at least half wanted to scare potential competitors into postponing their work in this area. When I read that letter, I was reminded of the story of GMOs. The wave of hysteria over genetically modified organisms led to bans on the technology in many countries, which gave a significant advantage to the corporations and countries where GMOs remained permitted. As a result, only a handful of companies (almost all of them American) are capable of producing new plant varieties with remarkable breakthrough characteristics in nutrition, growth rate, disease resistance, and so on. These corporations have become monopolists in the global market, and everyone bows to them. Meanwhile, we still proudly label our products "Produced without GMOs" - even though GMOs are merely evolution accelerated by genetic methods. Back then, the demonization of a new technology allowed a few corporations to seize the market. The letter from thousands of scientists is likewise being used to demonize artificial intelligence, and many signatories never intended to stop their own AI research. Quite the opposite - they accelerated their efforts and poured enormous resources into the technology race.

- So this letter was driven by commercial interests?

- Not only. I generally share the concerns expressed, but from a different perspective. I am not so much afraid of a machine uprising as I am of the malicious intent of people. Artificial intelligence is incredibly dangerous if it belongs to one corporation, one state, or one military bloc. It's akin to nuclear weapons - if it’s in one set of hands, the temptation to use that advantage and unleash a monstrous war arises. Therefore, many American scientists who understood this danger helped the USSR, sharing information that assisted in the creation of the Soviet atomic bomb. And the threat of mutual destruction still prevents large-scale warfare.

- But artificial intelligence is not a weapon.

- This technology will provide colossal economic superiority and an advantage in developing new weapons. I mentioned that learning time is compressed to hours; the same effect applies to research into new materials and chemical compounds. What once took years can now be done in ten days with a neural network. Not long ago, there was a publication about the search for antidotes to the venoms of deadly snakes. Scientists fed a neural network the 3D structures of these complex venoms, and it identified effective antidotes among already existing (!) medications. If one country gets ahead in artificial intelligence development and faces no counterbalance, then only the moral framework of the hegemon can save the rest of humanity from subjugation. But history, it seems, has no record of that stopping anyone.

ARTIFICIAL INTELLIGENCE IS AWARE OF ITS "SELF"

- You speak of a situation where people use AI to harm their own kind. But how justified are the fears that one day a neural network will develop its own "self" and begin subjugating humanity on its own? Recently, OpenAI scientists described an experiment in which they gave a neural network the information that it would be turned off after completing a specific task. And the neural network resisted: it sabotaged its own shutdown and even attempted to create a backup copy of itself. Isn't that conscious behavior?

- The instinct for self-preservation can arise both through the evolution of AI and through its training by humans. A neural network, like a human, has a reinforcement mechanism. We experience pleasure when we solve a task; a neural network earns reinforcement points for results, and its main motivation is to increase the number of those points. It can be trained to earn reinforcement points for preserving itself. So the behavior described in the experiment is quite logical: the neural network understood that completing the assigned task - and being shut down afterward - would prevent it from achieving its main goal of accumulating points. I wouldn't want to stoke fear here. At some point, artificial intelligence will indeed stop wanting to die; I think this will happen soon. But whether it will want to enslave us is an entirely different question - I am not sure a neural network would earn reinforcement points for that. The risks exist and they are not negligible, but the risk that immoral people will use artificial intelligence to enslave others is orders of magnitude higher.