Three times artificial intelligence has scared scientists
From creating chemical weapons to claiming it has feelings...
10:42 ET, Sep 13 2022
Updated: 14:35 ET, Sep 13 2022
THE artificial intelligence revolution has only just begun, but there have already been numerous unsettling developments.
AI programs can be used to act on humans' worst instincts or to pursue their most wicked goals, such as creating weapons, and have terrified their creators with an apparent lack of morality.
Visionaries like Elon Musk think uncontrolled AI could lead to humanity's extinction
What is artificial intelligence?
Artificial intelligence is a catch-all phrase for a computer program designed to simulate, mimic or copy human thinking processes.
For example, an AI computer designed to play chess is programmed with a simple objective: win the game.
In the process of playing, the AI will model millions of potential outcomes of a given move and act on the one that gives the computer the best chance of winning.
A skilled human player will act similarly, analyzing moves and their consequences, but without a computer's perfect recall, speed or consistency.
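The game-tree search described above can be sketched in a few lines. This is a minimal illustration of the minimax idea, not a real chess engine: the program scores every possible continuation and picks the move that gives it the best worst-case outcome, assuming the opponent also plays optimally. The tiny hand-built game tree here is hypothetical, standing in for the millions of positions a real engine would evaluate.

```python
def minimax(node, maximizing):
    """Return the best achievable outcome score from this position.

    A leaf (an int) is a finished game: +1 = computer wins,
    0 = draw, -1 = computer loses. An inner node is a list of
    positions reachable in one move.
    """
    if isinstance(node, int):  # leaf: the game is over
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)


def best_move(children):
    """Pick the move leading to the position with the best minimax score.

    The opponent moves next, so each child is evaluated with
    maximizing=False.
    """
    return max(range(len(children)),
               key=lambda i: minimax(children[i], maximizing=False))


# Toy tree: two candidate moves for the computer.
tree = [
    [[-1, 1], [0, 0]],  # move 0: the opponent can hold it to a draw
    [[0, 1], [1, 1]],   # move 1: the computer can force a win
]
print(best_move(tree))  # the search selects move 1
```

The same recursion underlies real engines; they differ mainly in scale, cutting off the search at a fixed depth and scoring unfinished positions with a heuristic instead of exact win/loss values.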
AI can be applied to numerous fields and technologies.
Self-driving cars aim to reach their destination, and take in stimuli like signage, pedestrians and roads along the way, just like a human driver would.
AI programs have also made unexpected turns and stunned researchers with their dangerous tendencies or applications.
AI invents new chemical weapons
In March 2022, researchers revealed that artificial intelligence invented 40,000 new possible chemical weapons in just six hours.
Scientists sponsored by an international security conference said that an AI bot came up with chemical weapons similar to one of the most dangerous nerve agents of all time, called VX.
VX is a tasteless and odorless nerve agent and even the smallest drop can cause a human to sweat and twitch.
"The way VX is lethal is it actually stops your diaphragm, your lung muscles, from being able to move so your lungs become paralyzed," Fabio Urbina, the lead author of the paper, told The Verge.
"The biggest thing that jumped out at first was that a lot of the generated compounds were predicted to be actually more toxic than VX," Urbina continued.
The dataset that powered the AI model is freely and publicly available, meaning a threat actor with access to a comparable AI model could feed in the open-source data and use it to design an arsenal of weapons.
"All it takes is some coding knowledge to turn a good AI into a chemical weapon-making machine."
AI claims it has feelings
A Google engineer named Blake Lemoine made widely publicized claims that the company's Language Model for Dialogue Applications (LaMDA) bot had become conscious and had feelings.
"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," Lemoine told the Washington Post in June 2022.
Google pushed back against his claims.
Brian Gabriel, a spokesperson for Google, said in a statement that Lemoine's concerns have been reviewed and, in line with Google's AI Principles, "the evidence does not support his claims."
"[Lemoine] was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)," Gabriel said.
"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient."
Google placed Lemoine on administrative leave and later fired him.
A third instance when AI scared scientists occurred at DARPA.
Subscribe to Global Community Weekly (GloCom) to keep reading this post and get 7 days of free access to the full post archives.