Uncovering fake news bots


Having the right information at the right time can make you rich or save your life, but in today's grim reality, the global information space is being poisoned by fake news. Nowadays there are people who make money by producing fake information and spreading it online. And that is not the whole story: in addition to real people, thousands of social media bots are operated to maximise the impact.
At this year's Security Analyst Summit conference, researchers from Recorded Future gave a talk on how to uncover these bots.

Why are fake news bots a problem?
Nobody likes bots, and social media companies in particular hate them, because bots make social media less attractive in the eyes of real people. For example, Twitter periodically detects and removes an enormous number of bots (followed by people complaining that they are losing followers). This suggests that social networks have their own ways of detecting these bots. These attempts, however, are not enough to eliminate the bots completely.

Social networks don't explain the algorithms they use, but it is safe to say that their attempts are based on detecting abnormal behavior. One of the most obvious examples: if an account posts a hundred times per minute, that account is definitely a bot. Or, say, an account only retweets from other accounts and never shares anything of its own; such an account is also very likely a bot.
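The two rules above can be sketched in a few lines of code. This is a hypothetical illustration of the kind of behavioural heuristics described, not the actual logic any social network uses; the `Account` structure and the thresholds are assumptions for the example.

```python
from dataclasses import dataclass


@dataclass
class Account:
    posts_last_minute: int  # posts created in the last minute
    total_posts: int        # lifetime post count
    retweets: int           # how many of those posts are retweets


def looks_like_bot(acc: Account) -> bool:
    # Rule 1: nobody types a hundred posts per minute by hand.
    if acc.posts_last_minute >= 100:
        return True
    # Rule 2: an account that only ever retweets and never shares
    # anything of its own is very likely a bot.
    if acc.total_posts > 0 and acc.retweets == acc.total_posts:
        return True
    return False


print(looks_like_bot(Account(posts_last_minute=120, total_posts=500, retweets=10)))  # True
print(looks_like_bot(Account(posts_last_minute=2, total_posts=300, retweets=300)))   # True
print(looks_like_bot(Account(posts_last_minute=1, total_posts=300, retweets=40)))    # False
```

Real detection systems combine many more such signals, precisely because bot operators can tune their bots to stay just under any single threshold.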

But the people who operate the bots constantly modify them to circumvent the techniques used by social media services. At the same time, social media services can't afford too many false positives: inadvertently blocking a large number of real people's accounts would provoke a huge backlash, so they have to be careful. That means not all bots can be detected.

To delve more deeply into how bots behave, Recorded Future picked a single attribute to isolate a particular group of bots: in this case, tweeting about terrorist incidents. If an account does this, it is most likely a bot (or is retweeting one). Now let's look at other common features of these accounts.
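A toy version of such an attribute filter might look like this. The keyword list is an invented, simplified stand-in for whatever criteria the researchers actually used to match tweets about terrorist incidents.

```python
# Hypothetical keyword list; the real study's matching criteria are not public.
INCIDENT_KEYWORDS = {"terror attack", "bombing", "shooting", "explosion"}


def matches_attribute(tweets: list[str]) -> bool:
    """Flag an account whose tweets mention terrorist incidents."""
    return any(
        kw in tweet.lower()
        for tweet in tweets
        for kw in INCIDENT_KEYWORDS
    )
```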

How do fake news bots behave?
First of all, the terrorist incidents these accounts were tweeting about had actually happened, and articles about them appeared on reputable websites (websites that clearly do not spread fake news). One small but important detail: the events took place years ago, but the accounts in question did not mention this. Linking to reputable media outlets helps confuse Twitter's bot-detection algorithms, which is why the people running the bots chose this strategy.

Second, in this particular bot network, the account holders claimed to live in the US, while their tweets mostly talked about European countries. Using this information, Recorded Future identified more than 200 accounts sharing this trait, and was able to dig deeper into other similarities and links between them.
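The location-mismatch check described above could be sketched as follows. The country list, the "more than half" threshold, and the profile-matching logic are all illustrative assumptions.

```python
# Illustrative subset; a real check would cover far more place names.
EUROPEAN_COUNTRIES = {"germany", "france", "spain", "italy", "sweden", "belgium"}


def location_mismatch(profile_location: str, tweets: list[str]) -> bool:
    """True if a profile claims a US location but mostly tweets about Europe."""
    loc = profile_location.lower()
    claims_us = "usa" in loc or "united states" in loc
    if not claims_us or not tweets:
        return False
    mentioning = sum(
        1 for tweet in tweets
        if any(country in tweet.lower() for country in EUROPEAN_COUNTRIES)
    )
    # "Mostly talking about European countries": more than half the tweets.
    return mentioning / len(tweets) > 0.5
```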

For example, the researchers charted the accounts' activity and realized that most bots were active only during certain periods, which overlapped with each other. Although some accounts appear to have been blocked in May, new accounts with the same behaviour were created and are still operating.
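One simple way to measure that kind of overlap, sketched here as an assumption rather than the researchers' actual method, is to represent each account by the set of hours in which it was active and compute the Jaccard similarity of those sets.

```python
def activity_overlap(hours_a: set[int], hours_b: set[int]) -> float:
    """Jaccard similarity of two accounts' active-hour sets (0.0 to 1.0)."""
    if not hours_a or not hours_b:
        return 0.0
    return len(hours_a & hours_b) / len(hours_a | hours_b)


# Invented example data: two accounts active in almost the same window
# score high, while an account with a different schedule scores low.
bot_1 = {9, 10, 11, 12, 13}
bot_2 = {10, 11, 12, 13, 14}
human = {7, 19, 20, 22}

print(activity_overlap(bot_1, bot_2))  # ~0.67: suspiciously similar schedules
print(activity_overlap(bot_1, human))  # 0.0
```

A cluster of accounts that all score high against each other is a candidate bot network.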

Another similarity: all of the accounts used several URL shorteners to share what we might call semi-fake news. URL shorteners offer some analytics to the people managing the bots, such as how many times each link was clicked. These were not the familiar shorteners such as t.co or goo.gl, but custom shorteners created specifically for collecting analytics. Strikingly, all of these custom shorteners had a similar minimalist design in orange and white. The shorteners also made it possible to link the accounts to one another.
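Linking accounts through shared shortener domains can be sketched like this. The idea, under the assumption that accounts posting through the same obscure custom shortener are probably run by the same operator, is to ignore the well-known services and group accounts by the unfamiliar domains they share. All domains and account names below are invented.

```python
from collections import defaultdict
from urllib.parse import urlparse

# Familiar public shorteners to ignore when looking for custom ones.
KNOWN_SHORTENERS = {"t.co", "goo.gl", "bit.ly"}


def group_by_custom_shortener(account_links: dict[str, list[str]]) -> dict[str, set[str]]:
    """Map each unfamiliar shortener domain to the accounts that used it."""
    groups: dict[str, set[str]] = defaultdict(set)
    for account, urls in account_links.items():
        for url in urls:
            domain = urlparse(url).netloc.lower()
            if domain and domain not in KNOWN_SHORTENERS:
                groups[domain].add(account)
    # Domains shared by two or more accounts are candidate bot-network links.
    return {d: accs for d, accs in groups.items() if len(accs) > 1}


# Invented example: acctA and acctB both use the same custom shortener.
links = {
    "acctA": ["http://orng.example/x", "https://t.co/abc"],
    "acctB": ["http://orng.example/y"],
    "acctC": ["https://goo.gl/z"],
}
print(group_by_custom_shortener(links))  # {'orng.example': {'acctA', 'acctB'}}
```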

The WHOIS data for these shorteners' websites showed that they were all hosted on the Microsoft Azure cloud platform and registered anonymously. Coincidence? Probably not. There are, of course, other similarities between these accounts, even though the campaigns differ. In general, though, examining a bot account, spotting quirks in it, and then searching for other accounts with the same quirks is an effective method of uncovering bot networks.