Generative AI poses threat to election security, federal intelligence agencies warn
Generative artificial intelligence could threaten election security this November, intelligence agencies warned in a new federal bulletin.
Generative AI produces new images, audio, video and text, including so-called “deep fake” videos in which a person appears to say something they never said.
Both foreign and domestic actors could harness the technology to create serious challenges heading into the 2024 election cycle, according to the analysis compiled by the Department of Homeland Security and sent to law enforcement partners nationwide. Federal bulletins are infrequent messages to law enforcement partners, meant to call attention to specific threats and concerns.
The bulletin cited as an example a fake robocall impersonating the voice of President Joe Biden on the eve of the New Hampshire primary in January. The fake audio message was circulated, encouraging recipients of the call to “save your vote” for the November general election instead of participating in the state’s primary.
The “timing of election-specific AI-generated media can be just as critical as the content itself, as it may take time to counter-message or debunk the false content permeating online,” the bulletin said.
The memo also noted the lingering threat overseas, adding that in November 2023, an AI-generated video encouraged voters in a southern Indian state to vote for a specific candidate on election day, giving officials no time to discredit it.
The bulletin goes on to warn about the potential use of artificial intelligence to target election infrastructure.
“Generative AI could also be leveraged to augment attack plotting if a threat actor, namely a violent extremist, sought to target U.S. election symbols or critical infrastructure,” the bulletin read. “This may include helping to understand U.S. elections and associated infrastructure, scanning internet-facing election infrastructure for potential vulnerabilities, identifying and aggregating a list of election targets or events, and providing new or improved tactical guidance for an attack.”
Some violent extremists have even experimented with AI chatbots to fill gaps in tactical and weapons guidance, DHS said, although the department noted it has not yet observed violent extremists using that technology to supplement election-related target information.
Nicole Sganga
Source: cbsnews.com