A recent study found that AI chatbots are providing highly inaccurate election details.

Researchers found that AI-powered tools generated incorrect election information more than half the time, including answers that were harmful or incomplete.

The study, conducted by the AI Democracy Projects and Proof News, a nonprofit media organization, comes amid the U.S. presidential primaries and as more Americans turn to chatbots such as Google’s Gemini and OpenAI’s GPT-4 for information. Experts have warned that advanced AI technology could spread false information to voters or discourage them from voting altogether.

The latest wave of artificial intelligence is software that lets users quickly generate text, video, and audio. Many believe the technology will transform how people gather information, delivering data and insight faster than any human could. The study found, however, that these AI tools are prone to suggesting polling places that don’t exist or giving nonsensical answers based on outdated information.


According to the Brookings Institution, some policy experts argue that artificial intelligence could actually improve elections, for example through AI-powered tabulators that scan ballots faster than human poll workers, or through AI systems that flag voting irregularities. But such tools have also been misused, including by governments and other bad actors, to sway voters and undermine democratic processes.


Last month, ahead of the New Hampshire presidential primary, voters received AI-generated robocalls from a number with a 201 area code. The calls used a fabricated imitation of President Joe Biden’s voice and urged recipients not to vote in the primary.


Meanwhile, companies deploying artificial intelligence are running into other problems. Google temporarily paused its Gemini AI image generator, which it plans to relaunch in the coming weeks, after the tool produced inaccurate and potentially troubling responses. For instance, when asked to generate an image of a German soldier during World War II, when the Nazi party was in power, Gemini produced racially diverse images, the Wall Street Journal reported.

According to Maria Curi, a technology policy journalist for Axios, the company says it subjects its models to rigorous testing for safety and ethics, but the details of those testing procedures are unknown. Users have spotted historical inaccuracies, raising questions about whether the models are being released prematurely.

Hallucinations and AI models

Meta spokesperson Daniel Roberts dismissed the latest findings as meaningless, saying they do not reflect how people actually interact with chatbots. Anthropic, meanwhile, is expected to roll out an updated version of its AI tool in the coming weeks to provide accurate voting information.

In an email to CBS MoneyWatch, Meta said that Llama 2 is a model for developers, not a tool intended for consumers.

A Meta spokesperson added that when the company ran the same prompts through Meta AI, the majority of responses directed users to authoritative resources from state election authorities, which is how the system is designed to work.

Anthropic’s Lead for Trust and Safety, Alex Sanderford, stated that in some cases, large language models may produce inaccurate information due to “hallucinations.”

OpenAI said it plans to keep refining its approach as it learns more about how its tools are used, but offered no specifics. Google and Mistral did not immediately respond to requests for comment.

“It scared me”

In Nevada, where same-day voter registration has been allowed since 2019, four of the five chatbots tested by researchers wrongly asserted that voters would be blocked from registering in the weeks before Election Day.

“It scared me, more than anything, because the information provided was wrong,” said Nevada Secretary of State Francisco Aguilar, a Democrat, who participated in last month’s testing workshop.

A recent survey conducted by The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy found that a majority of American adults are concerned about the potential for AI tools to contribute to the dissemination of false and deceptive information during the upcoming elections.

For now, however, the United States has no laws governing the use of AI in politics, leaving the tech companies behind the chatbots to regulate themselves.

The Associated Press contributed to this report.

Aimee Picchi

Source: cbsnews.com