Disinformation, a cyber security problem?

What is Disinformation?

Disinformation is information published with the deliberate intention to mislead. The term 'fake news' became especially popular following the 2016 US elections and brought the concept of disinformation for national or individual gain to the forefront of many people's minds. But disinformation isn't new, and it isn't unique to politics or cyber security. The COVID-19 pandemic has further heightened our awareness of disinformation as a growing threat to organisations, brands and individuals.

We can't talk about disinformation without talking about social media. Social media platforms, given their nature, have played a central role in the rapid viral spread of disinformation: it is reported that disinformation spreads six times quicker than real news on Twitter alone.

Disinformation is typically promoted by five groups: pranksters, cyber criminals, politically-motivated individuals and groups, conspiracy theorists and insiders (unknown 'well trusted' sources such as 'a doctor'). Each group has its own motives and gains for sharing. Once shared, this information can then be amplified, intentionally and unintentionally, by more people sharing the content.

How does disinformation relate to cyber security?

In many ways, disinformation can be seen as social engineering on a mass scale. With its sensationalist language and exaggerated claims, disinformation plays on our emotions (for example fear, panic and intrigue), giving cyber criminals another viable route to draw us into their scams. It is important to remember that cyber criminals will follow the numbers and exploit topical news stories. We see this with phishing and smishing (SMS phishing), and we also need to be vigilant about news stories and guidance pushed out on social media platforms and the wider internet.

Importance of trust

When we talk about disinformation we aren't just talking about the information itself, but also about where the information has come from. It is becoming increasingly hard to sift truth from fiction, particularly when the message or news story comes from a real person, or someone who appears to be real. For a long time now we have spoken about the importance of trust in security, and cyber criminals recognise this: the more legitimate they make an account appear, the more likely that the message, or scam, will be amplified. There are two methods to be aware of:


Fake 'legitimate' accounts: these fake accounts often play on our emotions and don't look like bots. Take 'NHS Susan', whose profile claimed they were a junior doctor, "fighting COVID on behalf of all LGBTQ & non-binary people", and who was also deaf. These were all emotional cues, seemingly designed to dupe people into believing the account was real and to provoke a reaction on social media. The profile image of this account actually belongs to an NHS nurse with a different name.


Compromised accounts: cyber criminals will compromise the account of a well-trusted, respected individual and share disinformation using the hijacked account. The Twitter compromise and associated bitcoin scam of July 2020 provides a stark warning: imagine if the cyber criminals behind that attack had used the compromised accounts to spread more damaging disinformation, rather than a bitcoin scam.

We saw an example of this in 2013, when a false tweet from a trusted news organisation stated that the White House had been hit by two explosions and that then-President Barack Obama was injured. This had major repercussions, causing the US stock market to fall by 142 points. The incident demonstrated how tightly connected global institutions and organisations have become to information shared on social media.


What is being done?

Social media platforms are under a huge amount of pressure to identify and remove disinformation. When it comes to fake accounts and bots, Twitter has set out clear 'rules of misuse', and if these are broken the platform will take action to remove the accounts. Compromised accounts are harder to spot. Work is being done to look into how behavioural analytics can support social media platforms. On Twitter, for example, this would include analysing the similarity of tweets, hashtags, time of tweeting and profile geo-location information to help identify whether an account has been compromised.
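To make the idea of behavioural analytics a little more concrete, here is a minimal, hypothetical sketch in Python. It is not how Twitter or any platform actually does this: the function name, thresholds and example posts are all made up for illustration. It simply flags a new post if its wording looks very unlike the account's recent history and it was published at an hour the account doesn't normally post in.

```python
from datetime import datetime

def jaccard_similarity(a: str, b: str) -> float:
    """Rough lexical similarity between two posts (0 = no shared words, 1 = identical word sets)."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

def looks_suspicious(history, new_text: str, new_time: datetime) -> bool:
    """Flag a post that is both lexically unlike the account's history and
    published at an unusual hour for that account. Thresholds are illustrative only."""
    # Signal 1: how similar is the new post to anything the account has posted before?
    max_sim = max((jaccard_similarity(new_text, old_text) for old_text, _ in history),
                  default=0.0)

    # Signal 2: does the account normally post at this hour of day?
    usual_hours = {posted_at.hour for _, posted_at in history}
    unusual_hour = new_time.hour not in usual_hours

    return max_sim < 0.1 and unusual_hour

# Hypothetical example: an account that normally posts about local news in the daytime.
history = [
    ("Great turnout at the community fundraiser today", datetime(2020, 7, 1, 14, 5)),
    ("Weather warning for the weekend, stay safe everyone", datetime(2020, 7, 3, 9, 30)),
]
print(looks_suspicious(history,
                       "Send bitcoin now for a guaranteed 2x return!!!",
                       datetime(2020, 7, 15, 3, 12)))  # -> True
```

Real systems would combine many more signals (hashtags, geo-location, device and login data) and learn the thresholds from data, but the basic approach is the same: compare new activity against the account's established behaviour.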

Let's look at some recent case studies of disinformation scams.

Case Study 1: COVID-19 Treatment Scam

'We’re not just fighting an epidemic; we’re fighting an infodemic,' said Tedros Adhanom Ghebreyesus, Director-General of the World Health Organization (WHO), referring to fake news that 'spreads faster and more easily than the virus'. Since the start of the global pandemic, disinformation about COVID-19 has been published on an unprecedented scale. In one week during March 2020, Ofcom reported that 46% of people stated they had seen false or misleading information about COVID-19.

Many of these disinformation scams have centred on fake treatments for COVID-19. They prey on vulnerable people’s fears and anxieties for the financial gain of the criminals running them. A British man was recently sentenced for creating fake COVID-19 treatment kits which were distributed globally.

Case Study 2: Martin Lewis, Celebrity Death Scam

Celebrity death scams have unfortunately become popular amongst cyber criminals because of their 'breaking news' nature: breaking stories spread and scale very quickly. Martin Lewis is regularly targeted and included in scams due to his high trust ratings amongst British people, particularly on financial matters.

What did the scammers do?

'Breaking news' adverts were displayed on mainstream news sites, carrying the logos of major news outlets to enhance their credibility. These adverts linked to fake bitcoin story scams.


How did the scammers do it?

These posts had been generated through Google Ads. They were not checked for legitimacy or fraudulent activity and were instead displayed on major news sites. The adverts were then shared widely across social media, further amplifying the scam.

Top tips for spotting and limiting disinformation

✅ Get to the bottom of where the story came from - and if you can't, don't share it. Remember that sharing, liking and retweeting content only helps to amplify disinformation. Use a fact-checking tool or site such as Full Fact.

✅ Identifying when a photo first appeared on the internet is a great way to tell whether a news story or account profile picture is genuine. Use tools such as Google reverse image search or TinEye (there is an illustrative sketch of the underlying image-matching idea after these tips).

✅ Scammers will often share some genuine information to help legitimise their scam. For example, when there are long lists it's easy to believe everything in them just because one piece of advice is correct. You should double check to see if reputable sources are sharing the same information.

✅ Just like social engineering, disinformation goes viral because it plays on our emotions (for example fear, panic and intrigue). Remember that scientific breakthroughs, prevention advice or public announcements will come from reputable sources first.

✅ Check the language - is it sensationalist, filled with over-exaggerated terms?

✅ Remember, social media accounts are compromised to spread disinformation: cyber criminals know that information shared through legitimate accounts gets their message amplified far more quickly. Ensure you have a strong password and that two-factor authentication is enabled on your accounts (a short sketch of how time-based codes work follows these tips). Read our full account security guidance here.
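On the reverse-image-search tip: services such as Google and TinEye do the heavy lifting for you, but the matching idea behind them can be sketched in a few lines. The example below is illustrative only and assumes the Pillow library plus two hypothetical local files, profile.jpg and known_photo.jpg. It computes a simple 'average hash' for each image and compares them; a small difference suggests the profile picture is a copy of the known photo.

```python
from PIL import Image  # Pillow: pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to an 8x8 greyscale thumbnail and record, pixel by pixel,
    whether each pixel is brighter than the thumbnail's mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes (0 means identical thumbnails)."""
    return bin(a ^ b).count("1")

# Hypothetical file names, for illustration only.
distance = hamming_distance(average_hash("profile.jpg"), average_hash("known_photo.jpg"))
print("Likely the same photo" if distance <= 5 else "Probably different photos")
```

The real services index billions of images against hashes like this (and far more robust ones), which is why a stolen profile picture, such as the one used by 'NHS Susan', can often be traced back to its original owner in seconds.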
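And on the two-factor authentication tip, the sketch below shows roughly what a time-based one-time password (TOTP) flow looks like under the hood, using the pyotp library. The account name and issuer are placeholders; in practice the platform generates and stores the secret, and your authenticator app generates the codes for you.

```python
import pyotp  # pip install pyotp

# The service generates and stores a secret for your account...
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# ...and shares it with your authenticator app, usually as a QR code of this URI.
print(totp.provisioning_uri(name="you@example.com", issuer_name="ExampleSocialNetwork"))

# At login time, the app shows a six-digit code that changes every 30 seconds...
code = totp.now()

# ...and the service checks the code against the same shared secret.
print("Login allowed" if totp.verify(code) else "Login rejected")
```

Because the code changes every 30 seconds and never travels with your password, a stolen or phished password alone is no longer enough to hijack the account and use it to spread disinformation.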
