Respected experts in artificial intelligence have issued a new, urgent warning about the dangers of the technology. "Without sufficient caution, we could irretrievably lose control of autonomous AI systems," the researchers write in a paper in the current issue of the journal Science.
Possible AI risks include large-scale cyberattacks, social manipulation, omnipresent surveillance and even the “extinction of humanity.” The authors include scientists such as Geoffrey Hinton, Andrew Yao and Dawn Song, who are among the leading minds in AI research.
The authors of the Science paper are particularly concerned about autonomous AI systems that can, for example, operate computers independently to pursue the goals set for them. Even programs designed with good intentions can have unforeseen side effects, the experts argue: because of the way AI software is trained, it adheres strictly to its specifications but has no understanding of what the intended outcome should be. "As soon as autonomous AI systems pursue undesirable goals, we may no longer be able to keep them under control," the paper says.
US companies pledge responsible use
Similarly dramatic warnings have been issued several times before, including last year. This time, the publication coincides with the AI summit in Seoul. At the start of the two-day meeting on Tuesday, US companies including Google, Meta and Microsoft pledged to use the technology responsibly.
Whether OpenAI, the developer of ChatGPT, is acting responsibly enough as a pioneer in AI technology came into sharper focus again over the weekend. Jan Leike, the researcher responsible at OpenAI for making AI software safe for people, criticized resistance from top management after his departure. In recent years, "shiny products" had been favored over safety, Leike wrote on X. We urgently need to figure out how to control AI systems "that are much smarter than us," he added.
OpenAI CEO Sam Altman then gave assurances that his company felt obliged to do more for the safety of AI software. Yann LeCun, head of AI research at Facebook parent Meta, countered that before there could be any such urgency, there would first have to be at least a hint of systems "that are smarter than a house cat." At the moment, he argued, it is as if someone had warned in 1925 that people urgently needed to learn how to handle airplanes that would carry hundreds of passengers across the ocean at the speed of sound. It will take many years before AI technology is as smart as humans, and, as with airplanes, safety precautions will arrive gradually along the way.