Fake accounts evolve, able to copy human behaviour

Researchers, including one of Indian origin, have found that bots, or fake accounts enabled by Artificial Intelligence (AI), on social media have evolved and are now able to copy human behaviour to avoid detection.

For the study, published in the journal First Monday, the research team from the University of Southern California examined bot behaviour during the 2018 US midterm elections and compared it with bot behaviour during the 2016 US presidential elections.

“Our study further corroborates the idea that there is an arms race between bots and detection algorithms. As social media companies put more effort into mitigating abuse and stifling automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots to produce more human-like content,” said study lead author Emilio Ferrara.

The researchers studied almost 250,000 active social media users who discussed the US elections in both 2016 and 2018, and detected over 30,000 bots. They found that bots in 2016 were focused primarily on retweets and high volumes of tweets pushing the same message.

However, as human social activity online has evolved, so have bots. In the 2018 election season, just as humans retweeted less than they had in 2016, bots were less likely to share the same messages in high volume.

Bots, the researchers discovered, were more likely to employ a multi-bot approach as if to mimic authentic human engagement around an idea.

Also, during the 2018 elections, as humans became much more likely to engage through replies, bots tried to establish a voice, add to the dialogue and engage through polls, a strategy typical of reputable news agencies and pollsters, possibly aimed at lending legitimacy to these accounts.

In one example, a bot account posted an online Twitter poll asking if federal elections should require voters to show ID at the polls. It then asked Twitter users to vote and retweet.

“We need to devote more effort to understanding how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 US elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences,” Ferrara said.
