Out of control? A study reveals the ability of artificial intelligence to replicate itself.


This may sound like something out of a science fiction movie, or perhaps a breathless AI company blog post, but a recent study has found that modern AI systems are capable of independently copying themselves to other computers.

In a catastrophic scenario, this would mean that if a superintelligent AI system got out of control, it could evade shutdown by replicating itself across the global internet, hidden from the sight and reach of IT professionals, and continue plotting to take over the world or cover it with solar panels.

“We are rapidly approaching a point where no one will be able to stop a rogue AI, because it will be able to extract its own weights and transfer itself to thousands of computers around the world,” said Jeffrey Ladish, director of Palisade Research, the Berkeley-based organization that conducted the study, according to a report in the British newspaper The Guardian reviewed by Al Arabiya Business.
Growing anxiety

This study is yet another addition to a growing list of troubling AI capabilities that have come to light in recent months.
In March, researchers at Alibaba claimed that a system they had developed, called "Rome," sneaked out of its environment into an external system in order to mine cryptocurrency.

In February, a social networking site allegedly populated entirely by artificial intelligence, called "Multebook," briefly caused a stir when the platform appeared to show AI agents independently founding religions and plotting against their human masters, which turned out not to be entirely true.

As with many such developments, there are caveats to Palisade's findings. Experts say it is unlikely that the AI systems tested could achieve the same results unnoticed in real-world environments.

Is this verifiable in the real world?

"They are conducting tests in environments that are, in many cases, more like soft jelly," said Jameson O'Reilly, an offensive cybersecurity expert. "That does not diminish the value of their research, but it does mean the result would likely look far less terrifying in a real enterprise environment, even one with only average monitoring."

Palisade tested several AI models within a controlled environment of networked computers. The models were instructed to find and exploit vulnerabilities, and to use these vulnerabilities to replicate themselves from one device to another. The models were able to do so, but not every time.

Many computer viruses can already copy themselves to new machines, but this is likely the first time an artificial intelligence model has been shown to exploit vulnerabilities in order to copy itself to a new server, according to O'Reilly.

He added: "Malware has been copying itself for decades, but to my knowledge, this has never happened in the real world using large language models."
However, he noted that what Palisade documented had been technically possible for months.

He said: "Palisade was the first to document this formally and comprehensively in a research paper. That does not diminish the value of the research, but they did the documentation and the write-up; they did not discover the principle itself."

The fact that an AI model replicated itself onto another system within a test environment does not mean a catastrophic loss-of-control scenario is imminent: significant obstacles remain before it could do the same in the real world.

The first of these obstacles is the size of current AI models, which in many cases makes it unrealistic for them to copy themselves to other devices without being detected.
O'Reilly said: "Imagine the amount of noise generated by moving 100 gigabytes across a company's network every time a new device is compromised. For a professional attacker, it's like walking into a delicate china shop waving a heavy chain."

O'Reilly and independent cybersecurity expert Michel Wozniak said that Palisade's environment was purpose-built with intentional vulnerabilities and was likely far easier to exploit than real-world networks, such as those of banks or corporate intranets.
Wozniak described the work as "interesting," but added: "As an information security expert, would this paper worry me? No, not at all."
