There is little argument about the vast impact that social media platforms have on the lives of hundreds of millions of people around the world. Social media has also had a profound influence on elections.
In a session at the DEF CON 29 conference on August 7, Sebastian Bay, a researcher at the Swedish Defence Research Agency (FOI), outlined how social media platforms are failing to limit the spread of false information because their security policies for removing fake accounts are inadequate. In Bay’s view, blocking false information should be considered a critical component of election security.
Bay explained that there are essentially two related types of issues that can lead to false information: the content itself, and inauthentic behavior, which involves bots and other automated mechanisms designed to appear as real human activity.
In Bay’s view, the major social media platforms have made concerted efforts in recent years regarding election-related content. That said, he noted that it is tricky to develop clear policies for inauthentic behavior and other forms of social media manipulation. To be clear, Bay emphasized that inauthentic behavior is not permitted by the social media companies, though it continues to occur.
“The European Union has long underscored the need for social media companies to intensify and demonstrate effective methods to close fake accounts,” Bay said.
The social media companies do in fact report to the European Commission about the number of fake accounts that have been closed, and it’s a huge problem that Bay said is measured in the billions.
For 300 Euros Anyone Can Buy Influence
Underlying the continued challenge of fake accounts is a whole industry that provides manipulation services for hire, including the infrastructure needed for those services to work.
Bay noted that the infrastructure ranges from fake SIM cards to services used to generate and maintain fake accounts. While Bay’s agency initially referred to the market for fake accounts as a ‘black market,’ the reality is that much of the activity is happening in the open.
“It’s extremely easy to find, and the openness of this industry is still today quite striking,” he said. “We see that the larger social media manipulation service providers fearlessly promote their services on their own websites, mobile app stores and on the social media platforms themselves.”
Bay’s agency has conducted multiple experiments to see how easy it is to buy fake accounts and influence. In both 2019 and 2020, for the small sum of 300 euros, he was able to buy up to 335,000 fake engagements across different social media providers. All of the activity was reported afterward to the social media companies, with little impact.
“Our conclusion last year was that Facebook, Instagram, Twitter, YouTube and TikTok are still failing to sufficiently combat inauthentic behavior on their platforms, enabling the widespread dissemination of false information,” Bay stated pointedly.
Bay and his team are conducting the same type of research in 2021, and he sees little change. Overall, he noted that Twitter is still the industry leader when it comes to countering abuse of its system. Facebook is making progress but still has work to do, and TikTok doesn’t seem to be moving forward much.
“Social media cyber-security equals election security, because we’re seeing that the spread of misinformation undermines the will and ability of voters to vote on Election Day,” Bay said. “We’re seeing that the intentional manipulation of political conversations on social media platforms is happening online, and some of it also happens using technical manipulation, and that can be prevented with additional cybersecurity from the social media companies.”