WASHINGTON (Reuters) – The Biden administration is poised to open up a new front in its effort to safeguard U.S. AI from China and Russia with preliminary plans to place guardrails around the most advanced AI models, Reuters reported on Wednesday.
Government and private sector researchers worry U.S. adversaries could use the models, which mine vast amounts of text and images to summarize information and generate content, to wage aggressive cyber attacks or even create potent biological weapons.
Here are some threats posed by AI:
DEEPFAKES AND MISINFORMATION
Deepfakes – realistic yet fabricated videos created by AI algorithms trained on copious online footage – are surfacing on social media, blurring fact and fiction in the polarized world of U.S. politics.
While such synthetic media has been around for several years, it’s been turbocharged over the past year by a slew of new “generative AI” tools such as Midjourney that make it cheap and easy to create convincing deepfakes.
Image-creation tools powered by artificial intelligence from companies including OpenAI and Microsoft can be used to produce photos that could promote election- or voting-related disinformation, despite each company having policies against creating misleading content, researchers said in a report in March.
Some disinformation campaigns simply harness the ability of AI to mimic real news articles as a means of disseminating false information.
While major social media platforms like Facebook, Twitter, and YouTube have made efforts to prohibit and remove deepfakes, their effectiveness at policing such content varies.
For example, last year, a Chinese government-controlled news site using a generative AI platform pushed a previously circulated false claim that the United States was running a lab in Kazakhstan to create biological weapons for use against China, the Department of Homeland Security (DHS) said in its 2024 homeland threat assessment.
National Security Advisor Jake Sullivan, speaking at an AI event in Washington on Wednesday, said the problem has no easy solutions because it combines the capacity of AI with “the intent of state, non-state actors, to use disinformation at scale, to disrupt democracies, to advance propaganda, to shape perception in the world.”
“Right now the offense is beating the defense big time,” he said.
BIOWEAPONS
The U.S. intelligence community, think tanks and academics are increasingly concerned about the risks posed by foreign bad actors gaining access to advanced AI capabilities. Researchers at Gryphon Scientific and Rand Corporation noted that advanced AI models can provide information that could help create biological weapons.
Gryphon studied how large language models (LLMs) – computer programs that draw from massive amounts of text to generate responses to queries – could be used by hostile actors to cause harm in the domain of life sciences and found they “can provide information that could aid a malicious actor in creating a biological weapon by providing useful, accurate and detailed information across every step in this pathway.”
They found, for example, that an LLM could provide post-doctoral-level knowledge to troubleshoot problems when working with a pandemic-capable virus.
Rand’s research showed that LLMs could help in the planning and execution of a biological attack. The researchers found, for example, that an LLM could suggest aerosol delivery methods for botulinum toxin.
CYBERWEAPONS
In its 2024 homeland threat assessment, DHS said cyber actors would likely use AI to “develop new tools” to “enable larger-scale, faster, efficient, and more evasive cyber attacks” against critical infrastructure, including pipelines and railways.
China and other adversaries are developing AI technologies that could undermine U.S. cyber defenses, DHS said, including generative AI programs that support malware attacks.
Microsoft said in a February report that it had tracked hacking groups affiliated with the Chinese and North Korean governments, Russian military intelligence and Iran’s Revolutionary Guard as they tried to perfect their hacking campaigns using large language models.
(Reporting by Alexandra Alper; Editing by Chris Sanders and Anna Driver)