
How are social media companies stopping interference in the 2020 US election?


Scott Clare | 12 Sep 2020

In the lead-up to the 2016 US election, foreign actors, Russia in particular, used social media agitation to create resentment and distrust between groups across the political spectrum. Bot accounts attempted to sow division among citizens by posting about controversial topics such as immigration and Islamophobia. In one notable example, Russian influence specialists staged pro- and anti-Muslim rallies at the same place and time, directly across from each other. The office of former Justice Department special counsel Robert Mueller identified “dozens” of these rallies around the United States, starting from November 2015. It is argued that the divisions created along lines of race, gender, class and creed benefited the populist candidate Donald Trump, allowing him to win the election and become President. In addition, certain social media accounts attempted to suppress voter turnout through behavioural targeting of voters with disinformation. For example, in the days leading up to the election, messages circulated on social media claiming that Hillary Clinton had died, and in some key battlegrounds Democratic voters were targeted with messages claiming that the date of the election had changed.

This foreign interference threatens the United States’ democratic system, as it undermines the legitimacy of the Presidency. President Abraham Lincoln described the characteristics of popular government as being a ‘Government of the People, by the People, and for the People’; if a government is elected into power with the aid of foreign interference, it no longer satisfies these conditions.

Therefore, it is important that the electoral system is protected from these foreign entities acting in their own interests. The social media platforms themselves have been pressured to take responsibility for the posts made by influence specialists. But what can these organisations do to suppress disinformation from foreign sources?

In the aftermath of the 2016 election it was discovered that the scale of Russia’s infiltration of Facebook was far more severe than senior staff at the company had understood. Academic researchers found that in the lead-up to the election Russia-linked imposters had hundreds of millions of interactions with Americans while posing as fellow Americans. It is possible that these interactions moulded voters’ political views. The imposter accounts were found to present sympathetic views with counterintuitive, politically leading twists.

An example of this sort of activity was found on the account “Being Patriotic”, which used hot-button words such as “illegal”, “country” and “American” and phrases such as “illegal alien”, “Sharia law” and “welfare state”. Notable posts included “Do liberals still think it is better to accept thousands Syrian refugees than to help our veterans?” and “More than 300,000 vets died awaiting care”. The page garnered some 4.4 million interactions, peaking between mid-2016 and early 2017.

Facebook has said it plans to bar any new political ads in the week leading up to the election, in the hope of stopping influence specialists from spreading misinformation in the days before the vote. However, banning politicised adverts only a week before the election seems a little too late, as the misinformation circulated up to that point is likely to have already cemented voters’ political views. Nonetheless, Facebook chief Mark Zuckerberg outlined other election security measures, including expanded work with state election authorities to counter false information about voting. Facebook also declared that it would remove posts claiming that Americans risk catching Covid-19 if they attend the polls; such attempts at voter suppression echo those made in the run-up to the previous election. Facebook is partnering with Reuters and the National Election Pool to provide election results in real time, and it will label posts that prematurely claim victory and direct users to the platform’s voting information centre.

Twitter is also taking strides to limit disinformation from all sources. As of last year, political ads on the platform are banned altogether, in an attempt to prevent gaming of the system. Furthermore, in March the company introduced a policy on “synthetic and manipulated media”, which attempts to flag and provide greater context for content that it believes has been “significantly and deceptively altered or fabricated”. Since then, the platform has been labelling videos, photos and other posts that it believes have been tampered with as “manipulated media”. As long as a post does not threaten the physical safety of a person or group, or an individual’s ability to exercise their human rights, such as participating in elections, it remains on the site with that label. Once labelled, the post links to “expert content” explaining to users why it is untrustworthy, and users who attempt to share a labelled post receive a message asking whether they really want to amplify an item that is likely to mislead others.
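To make the logic of such a labelling workflow concrete, the short Python sketch below models the decision flow as it is summarised above. It is purely illustrative: the Post fields, rule names and returned actions are assumptions made for this example, not Twitter’s actual policy engine or code.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    significantly_altered: bool      # judged "significantly and deceptively altered or fabricated"
    threatens_physical_safety: bool  # endangers a person or group
    suppresses_voting_rights: bool   # impedes the ability to participate in elections

def moderate(post: Post) -> str:
    """Conceptual sketch (not Twitter's implementation) of the manipulated-media policy."""
    if not post.significantly_altered:
        return "leave post untouched"
    # Manipulated content that also endangers safety or voting rights is removed outright.
    if post.threatens_physical_safety or post.suppresses_voting_rights:
        return "remove post"
    # Otherwise the post stays up, labelled and linked to expert context.
    return "label as 'manipulated media' and attach expert content"

def on_share(post: Post) -> str:
    """Users sharing a labelled post are first asked to confirm before amplifying it."""
    if moderate(post).startswith("label"):
        return "prompt: this media has been labelled as manipulated. Share anyway?"
    return "share normally"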

Google, too, is attempting to limit the impact of security threats from foreign state-based actors. In June, hackers from China and Iran targeted the email accounts of campaign staffers for President Trump and former Vice President Joe Biden. Although none of these attacks succeeded, Google’s Threat Analysis Group is working to identify and prevent such government-backed attacks against Google and its users, and the company has launched enhanced security for Gmail and G Suite users.

These internet companies find themselves in the difficult position of protecting both freedom of speech, on which their platforms depend for survival, and the democratic institutions that those same platforms jeopardise. However, cracking down on influence specialists masquerading as Americans does not impinge on freedom of speech: posts made fraudulently or deliberately filled with false information are not worthy of protection. If this disinformation is stopped, the freedom of speech enjoyed by individuals who are not posing as Americans will in fact be strengthened, as the exchange of information and knowledge becomes more legitimate.
