Nowhere do statements and opinions spread as quickly as on social media. Facebook, Twitter and company have become a barometer of the voice and mood of modern society. But this also carries risks. As early as the 2016 US presidential election campaign, it became clear that it was not only politicians and their followers who stirred up sentiment for or against the candidates on social media – Russian trolls and bots were also involved, as studies have shown. “The very features that make social media so useful to activists – low barriers to entry, scalability, easy division of labour and the ability to post media from almost anywhere, including inside a targeted country – also make the networks susceptible to industrialised manipulation campaigns, among others by their own or foreign governments,” explain Meysam Alizadeh of Princeton University and his colleagues. Between 2013 and 2018 alone, there were at least 53 such large-scale influence operations in 24 countries.
A learning algorithm goes troll hunting
Given the sheer volume of troll and bot posts and the deliberate manipulation through false statements, the operators of social media platforms can hardly keep up with finding suspicious posts and flagging or deleting them. Machine-learning algorithms are already used to detect and filter such messages, but so far with only limited success. “The key question is how industrialised information campaigns can be distinguished from organic, normal activity,” say the researchers. It is also important to recognize cross-platform features, because the campaigns are usually active in more than one social network. To find out, Alizadeh and his colleagues trained a content-based learning algorithm in a particular way and then exposed it to various test situations.
The basis for the study was a specific, particularly common type of social media post – a short text combined with a link. Data sets from platforms such as Twitter, Reddit and Facebook, comprising a total of 7.2 million posts – from trolls as well as from normal users – served as learning material. In one test, the AI system was given one month of data for learning; in this data set, the troll posts were labeled. It then had to use the characteristics it had learned to identify posts by the same or other trolls in the data set for the following month or the following year. In a complementary experiment, the AI was first trained on Twitter and then had to search for trolls on Reddit, and vice versa. The researchers conducted the tests in the English-language areas of the platforms and looked specifically for influence campaigns from Russian, Chinese and Venezuelan sources.
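The train-on-one-month, test-on-the-next setup can be illustrated with a deliberately simplified sketch. The study's actual features and model are more sophisticated; here, purely as an assumed stand-in, a post is represented only by the domain it links to, and the classifier scores how often each domain appeared in labeled troll posts during the training month. All data and domain names below are made up for illustration.

```python
from collections import Counter

# Hypothetical labeled training month: (post text, linked domain, is_troll)
train_month = [
    ("breaking news about the election", "troll-news.example", True),
    ("another shocking election story", "troll-news.example", True),
    ("my cat did a funny thing today", "imgur.com", False),
    ("local weather update for tuesday", "weather.example", False),
]

def domain_troll_scores(posts):
    """For each domain, the fraction of training posts linking to it
    that were labeled as troll posts."""
    troll, total = Counter(), Counter()
    for _text, domain, is_troll in posts:
        total[domain] += 1
        if is_troll:
            troll[domain] += 1
    return {d: troll[d] / total[d] for d in total}

def classify(domain, scores, default=0.5, threshold=0.5):
    """Flag a post from the following month as troll content if its
    linked domain was predominantly used by trolls during training.
    Domains never seen in training fall back to a neutral prior."""
    return scores.get(domain, default) > threshold

scores = domain_troll_scores(train_month)
# "Next month": campaign accounts keep reusing the same promoted domains.
classify("troll-news.example", scores)  # → True
classify("imgur.com", scores)           # → False
```

A real content-based detector would combine many more signals (word usage, timing, context of the link), but the transfer idea is the same: the signature learned in one month, or on one platform, is applied to unseen posts from another.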
Recognizable across accounts and platforms
The tests showed that in almost all test variants, the algorithm was able to recognize which posts were part of a foreign-controlled influence campaign and which were not, the scientists report. It achieved this detection even when it had been trained on other trolls or other campaigns and therefore had to transfer what it had learned. “The industrialised campaigns leave a characteristic signal in the content that makes it possible to track them from month to month and across different accounts,” say the researchers. Often the posts gave themselves away through the nature of their links – they pointed to sites that were promoted by countless other trolls, or that fitted the content or context of the posts only loosely, both politically and thematically. Some mentioned URLs of local sites but addressed people who did not match that region. Overall, the Venezuelan trolls were the easiest to detect, while the Chinese and Russian ones camouflaged themselves more skilfully.
According to the scientists, such content-based search algorithms open up a way to counter the flood of foreign-controlled influence campaigns – and to do so across platforms. “It is possible to estimate in real time how many of these trolls are out there and what they are talking about,” says co-author Jacob Shapiro of Princeton University. “The detection is not perfect, but it could force the actors to get more creative or even stop their campaigns.” However, the tests also showed that the influence campaigns have learned and refined their methods over time. If the trolls change their strategy and characteristics, the algorithm can only recognize them again once it has received enough new training material. This AI-based troll tracing is therefore not a panacea, stress Alizadeh and his colleagues. But it could help in the fight against trolls – provided there is appropriate funding and the will of the platform operators.
Source: Meysam Alizadeh (Princeton University, USA) et al., Science Advances, doi: 10.1126/sciadv.abb5824
*The article “AI system recognizes posts from trolls and bots” was first published by Wissenschaft.de.