With the federal election in full swing, Canadians are increasingly engaging in political debates and messaging online, with hot-button issues being shared, “liked” and retweeted across social media platforms.
But Fenwick McKelvey is urging people to be more sceptical of issues that suddenly gain prominence on social media.
McKelvey, an associate professor of communication studies in the Faculty of Arts and Science at Concordia University in Montreal, says that’s because social media analytics can be easily “gamed” by political bots, a loose term that refers to automated programs or online agents designed to mimic human behavior.
“I think the big takeaway is be sceptical of social media analytics,” McKelvey said in a phone interview with Radio Canada International. “I think that requires a healthy distrust, you know making judgements of how popular, important something is based on its retweets, ‘likes’ or friend count.”
The use of bots to manipulate public opinion became a subject of particular interest to academics, journalists and security experts following the 2016 presidential election in the United States and the Brexit referendum in the United Kingdom.
Agents of disruption
A recent study written by McKelvey and Elizabeth Dubois of the University of Ottawa examines the role of bots in politics and possible solutions to problems they create.
The paper, published in the Canadian Journal of Communication, identified four types of political bots: amplifier, dampener, transparency and servant bots.
Amplifier bots are used to manipulate or game social media analytics, while dampener bots might be part of coordinated harassment online, McKelvey said.
On the other hand, transparency bots could be used by journalists to produce or release public information, while servant bots are used by platforms like Wikipedia to help maintain their systems, McKelvey said.
Some of the most concerning political bots are those associated with the rise of astroturfing, a term for fake grassroots campaigning online, a practice that can involve both amplifier and dampener bots, McKelvey said.
In their paper, McKelvey and Dubois argue that political bots often act as disruptive agents, breaking down public trust not only in democratic institutions but also in the reliability of information from online sources.
“I think the idea is that first, that parties are using these to potentially be deceptive about their levels of online support or engagement,” McKelvey said. “The fact that you can just buy ‘likes’ or buy retweets I think speaks to the way that bots might be used kind of adversarially to gain social media popularity and manipulate social media trends.”
Questions about reliability of analytics
There is a genuine concern among experts and academics who question the reliability of social media analytics, McKelvey said.
“We know that ad fraud – manipulating ad numbers – is a multibillion dollar problem,” McKelvey said. “And we know that, for example, Facebook is dedicating constant resources to detecting and mitigating spam.”
The consequence for politics, he added, is the need to be more mindful that these problems with information quality elsewhere have an impact on politics too.
The good news, McKelvey said, is that there is a solution to the problem of political bots.
He suggests that institutions — including political parties and platforms like Twitter — adopt codes of conduct that require disclosing the use of bots in their campaign advertising efforts.