New research addresses predicting and controlling bad actor AI activity in a year of global elections


More than 50 countries are set to hold national elections this year, and analysts have long sounded the alarm about the threat of bad actors using artificial intelligence (AI) to disseminate and amplify disinformation during election seasons across the globe.

Now, a new study led by researchers at the George Washington University predicts that daily bad-actor AI activity will escalate by mid-2024, increasing the threat that it could affect election results. The research is the first quantitative scientific analysis that predicts how bad actors will misuse AI globally.

The paper, “Controlling bad-actor-artificial intelligence activity at scale across online battlefields,” is published in the journal PNAS Nexus.

“Everybody is talking about the dangers of AI, but until our study there was no science of this threat,” Neil Johnson, lead study author and a professor of physics at GW, says. “You cannot win a battle without a deep understanding of the battlefield.”

The researchers say the study answers what, where, and when bad actors will use AI globally, and how that activity can be controlled. Among their findings:

  • Bad actors need only basic Generative Pre-trained Transformer (GPT) AI systems to manipulate and bias information on platforms, rather than more advanced systems such as GPT-3 and GPT-4, which tend to have more guardrails to mitigate bad activity.
  • A road network across 23 social media platforms, previously mapped out in Johnson’s prior research, will give bad-actor communities direct links to billions of users worldwide without those users’ knowledge.
  • Bad-actor activity driven by AI will become a daily occurrence by the summer of 2024. To determine this, the researchers used proxy data from two historical, technologically similar incidents that involved the manipulation of online electronic information systems: automated algorithm attacks on U.S. financial markets in 2008, and Chinese cyber attacks on U.S. infrastructure in 2013. By analyzing these incidents, the researchers were able to extrapolate the frequency of attacks in each chain of events and examine this information in the context of AI’s current technological progress (a sketch of this kind of extrapolation follows the list).
  • Social media companies should deploy tactics to contain disinformation rather than trying to remove every piece of content. According to the researchers, this means taking down the larger pockets of coordinated activity while tolerating the smaller, isolated actors (see the containment sketch after the list).
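
A note on the extrapolation step: the article does not give the researchers’ actual model, but the general technique of fitting a growth curve to historical event counts and solving for when the expected rate reaches one event per day can be shown in a minimal sketch. All values below (the monthly counts, the exponential form, the one-per-day threshold) are illustrative assumptions, not the study’s data or method.

    # Minimal sketch: extrapolate when escalating attack activity becomes daily.
    # All numbers are illustrative placeholders, NOT the study's data.
    import numpy as np

    months = np.array([0, 3, 6, 9, 12, 15, 18])            # months since first event
    events_per_month = np.array([1, 2, 4, 7, 13, 24, 45])  # hypothetical counts

    # Fit an exponential escalation curve rate(t) = a * exp(b * t)
    # via log-linear least squares on the event counts.
    b, log_a = np.polyfit(months, np.log(events_per_month), 1)
    a = np.exp(log_a)

    # Solve for the month t* at which the expected rate reaches
    # ~30 events/month, i.e. roughly one event per day.
    t_star = (np.log(30.0) - log_a) / b
    print(f"fitted rate(t) = {a:.2f} * exp({b:.3f} * t)")
    print(f"daily activity expected around month {t_star:.1f}")

An exponential form is only one plausible choice here; the same solve-for-threshold step applies to any fitted rate curve.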
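
The containment recommendation also has a natural graph reading: treat bad-actor communities as nodes and coordination links as edges, then remove only the clusters above a size threshold. The toy graph, node names, and threshold below are hypothetical, and networkx is assumed purely for illustration.

    # Minimal sketch: contain coordinated activity by removing large clusters
    # while leaving small, isolated actors in place. Hypothetical graph.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("c1", "c2"), ("c2", "c3"), ("c3", "c1"), ("c1", "c4"),  # a large cluster
        ("c5", "c6"),                                            # a small pair
    ])
    G.add_node("c7")                                             # an isolated actor

    MIN_CLUSTER_SIZE = 3  # illustrative cutoff for "coordinated" activity

    for component in list(nx.connected_components(G)):
        if len(component) >= MIN_CLUSTER_SIZE:
            G.remove_nodes_from(component)  # take down the coordinated pocket

    print(sorted(G.nodes))  # small, isolated actors remain: ['c5', 'c6', 'c7']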

More information:
Neil F. Johnson et al., Controlling bad-actor-artificial intelligence activity at scale across online battlefields, PNAS Nexus (2024). DOI: 10.1093/pnasnexus/pgae004

Citation:
New research addresses predicting and controlling bad actor AI activity in a year of global elections (2024, January 23)
retrieved 23 January 2024
from https://techxplore.com/news/2024-01-bad-actor-ai-year-global.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.


