Trolling Elections: How Fake Accounts Actually Work

May 26, 2020

Featured article by Ellis Burke, Independent Technology Author

Ever since the 2016 presidential election, and the subsequent revelations of foreign interference, the issue of bots and trolls on social media platforms has been thrust into the mainstream. There is now a general awareness of fake accounts on social media and of the potential consequences of allowing those accounts to run rampant, unchecked. Despite this awareness, however, fake accounts continue to plague social media platforms.

Now that we know what kind of damage these networks of fake users can inflict on the democratic process, it’s time to start demanding greater action.

Fake Accounts On Social Media

The issue of fake accounts on social media platforms used to be a niche one that mostly concerned a small cabal of marketers. However, this is no longer the case. As we have seen in recent years, fake social media accounts can ultimately be used to spread political propaganda or extremist messages.

Platforms have cracked down on fake accounts in the past, usually at the behest of advertisers who felt that they were not getting their money’s worth. But while the financial costs of fake accounts are not to be scoffed at, they are no longer the overriding concern when we consider how social media platforms can be exploited by nefarious actors. Election interference appears to be the new norm, and there remain serious questions about how much social media platforms can realistically do to stem the flow of false accounts that they now find themselves inundated with.

In order to effectively combat these fake accounts, we need to better educate the average social media user as to how fake accounts actually work, and how they can spot them.

Social Media Bots

Fake accounts today don’t just encompass people who are lying about their identity and impersonating someone else or creating a fictitious persona to use on social media. Many of the fake accounts on social media platforms today are bots, automated accounts that can be directed to like and share specific content, thereby amplifying it and making it seem more popular than it really is.

Combating these social media bots is currently a significant challenge for social media platforms. While many platforms have taken measures to curtail the use of bots, there is ultimately only so much that can be done, and bot makers are building increasingly sophisticated bots that impersonate human users far more convincingly than previous generations did.

With a bot network in place, the operator of a propaganda account can simply post a message and sit back while the bots spread it around. By having a large number of dummy bot accounts liking and commenting on content, perhaps even sharing it with followers of their own, low-quality information and misinformation can spread far and wide, uninhibited. Social media algorithms will then do the rest of the work on behalf of the botnet operator.

Once those algorithms detect content being liked and shared at a greater rate than usual, they kick in and promote it to a wider audience. Before long, a post that is deliberately misleading, false or inflammatory can be disseminated among a large number of social media users.
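To make that dynamic concrete, here is a minimal, purely illustrative simulation of how a cluster of coordinated accounts can push a post past a promotion threshold that organic engagement alone would rarely reach. The threshold, the organic interaction rate and the promotion rule are all invented for this sketch; they do not reflect any real platform's ranking algorithm.

import random

# Hypothetical numbers for illustration only, not any platform's real values.
PROMOTION_THRESHOLD = 50   # engagement level at which the feed promotes a post more widely
ORGANIC_LIKE_RATE = 0.02   # chance that an ordinary viewer interacts with the post

def simulate_post(bot_cluster_size, organic_audience):
    """Model early engagement on one post, with or without a coordinated bot cluster."""
    bot_engagement = bot_cluster_size  # every bot likes/shares the post immediately
    organic_engagement = sum(
        1 for _ in range(organic_audience) if random.random() < ORGANIC_LIKE_RATE
    )
    total = bot_engagement + organic_engagement
    return {"engagement": total, "promoted": total >= PROMOTION_THRESHOLD}

random.seed(1)
print("no bots:  ", simulate_post(bot_cluster_size=0, organic_audience=500))
print("with bots:", simulate_post(bot_cluster_size=60, organic_audience=500))

Even in this toy model, a cluster of sixty coordinated accounts trips the promotion threshold that five hundred ordinary viewers on their own almost never would.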

These bots can be set up to wait for content to be posted with a particular hashtag, or they can be set to automatically interact with content that is posted from specific accounts.

Bots can also be set to log on and run their routines at particular times of the day. By introducing a time delay between different clusters of bots interacting with content, operators can disguise the fact that bots are being used to artificially amplify it.
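That staggering matters because coordinated behavior is most easily spotted through timing. The sketch below, again purely illustrative, shows a simple defender-side heuristic: interactions from many accounts landing within seconds of one another look coordinated, while the same interactions spread across clusters hours apart do not. The 30-second window is an assumption made for this example, not a rule any platform has published.

from statistics import pstdev

def looks_coordinated(timestamps, window_seconds=30.0):
    """Flag a group of interactions whose timing is suspiciously tight."""
    if len(timestamps) < 3:
        return False
    spread = max(timestamps) - min(timestamps)
    return spread <= window_seconds or pstdev(timestamps) < window_seconds / 4

# A burst of likes arriving within a few seconds of each other is easy to flag...
burst = [0.0, 1.2, 2.5, 3.1, 4.0]              # seconds after the post went up
print(looks_coordinated(burst))                # True

# ...while the same bots split into clusters firing hours apart are not.
staggered = [0.0, 2.0, 3600.0, 3605.0, 7200.0]
print(looks_coordinated(staggered))            # False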

Avoiding Detection

You might be wondering how these bot accounts are able to survive on social media platforms. Given all of the resources that are available to them, it should be easy for the tech giants to root out fake accounts. However, the picture is more complicated than this, as bot makers have found a number of ways to disguise their tools.

One of the most important tools for operators who want to avoid detection is a proxy service. There are now a number of proxies on the market aimed specifically at social media bot operators. These proxies rotate the user's IP address on a regular basis, enabling fake accounts to avoid detection and the usual account limits that other users are subject to.

What Can We Do?

While there have been a number of advances in AI, and even attempts to apply machine learning to the problem, it is still very difficult for a social media platform to accurately and reliably identify fake accounts. The best method available is still having human users report accounts that are clearly not legitimate.

For social media platforms that want to get serious about ridding themselves of fake accounts, educating regular users on how to spot these accounts and providing the tools they need to report them is a step in the right direction. Combining this with an AI-driven approach would also improve the reliability of otherwise unreliable algorithms: instead of having to craft an algorithm that can tell whether an account is fake, platforms would only need one that assesses user reports and determines whether they should be passed on to a human for review.
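A hypothetical sketch of that triage idea follows: rather than judging an account directly, the system weights each incoming user report by how credible the reporter appears to be and escalates the account to a human reviewer once the combined score crosses a threshold. The features, weights and threshold here are invented purely for illustration.

from dataclasses import dataclass

@dataclass
class Report:
    reporter_account_age_days: int   # older reporting accounts are weighted more heavily
    reporter_prior_accuracy: float   # share of this reporter's past reports upheld (0-1)

ESCALATION_THRESHOLD = 3.0  # assumed cutoff for sending an account to a human reviewer

def report_weight(report):
    """Weight a single report by how trustworthy the reporter appears to be."""
    age_factor = min(report.reporter_account_age_days / 365, 1.0)
    return 0.5 * age_factor + 1.5 * report.reporter_prior_accuracy

def needs_human_review(reports):
    """Escalate once the combined, credibility-weighted reports cross the threshold."""
    return sum(report_weight(r) for r in reports) >= ESCALATION_THRESHOLD

# Three reports from established, historically accurate reporters are enough to
# escalate the account; a single report from a brand-new account is not.
print(needs_human_review([Report(900, 0.9), Report(400, 0.8), Report(2000, 0.95)]))  # True
print(needs_human_review([Report(3, 0.0)]))                                          # False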

Our capabilities for fighting the deluge of fake social media accounts that have started to threaten our very democracy have advanced considerably in recent years. However, social media platforms have been slow to shore up their defenses against fake accounts. With a concerted effort, and a willingness to dedicate a significant share of the resources at their disposal, there is little doubt that these platforms could be doing more to address this serious problem.

DATA and ANALYTICS, SECURITY, SOCIAL BUSINESS
