
Robots in Disguise: Human Deception through Social Bots


© Studiostocks

“They're bringing drugs. They're bringing crime. They're rapists.” American presidential candidate Donald Trump has never been known for mild speeches. In June 2015, when he delivered his presidential campaign announcement, he spoke harshly about the kind of people he claimed were coming to the United States of America (USA) from Mexico each year. It would not be the only occasion on which he attacked Mexicans and people from across Latin America. On February 24, 2016, as recorded by Alex Griswold of the online magazine Mediaite, dozens of seemingly Latino users with names like Pedro Villas, Carlos Lera or Juan Garcia all posted an identical message favoring Trump after he won the Nevada caucuses. Does it make sense for numerous Latinos, constantly stigmatized by Trump, to side with him? It does not. Niraj Warikoo, writing for the Detroit Free Press, solved the mystery: they are not people. They are social bots, robots disguised as real human beings. And what do they want? To deceive you.

© Screenshot Warikoo

Social bots are computer algorithms able to produce their own content and interact with other users on social media. Their creators pursue different agendas: political, terrorist or economic, trying to manipulate public opinion, spread false or divisive information, boost or damage the popularity of users, or steal personal information. Benign bots exist as well, of course, such as the Big Ben Twitterbot, which simply posts the “BONGS” of the famous London clock every hour. Those bots, however, do not intend to hide. But just as in the Trump case, bots run by malicious sponsors want to remain obscure, so that the online community believes it is interacting with real members of the public. The danger is plain: we are being deceived without our knowledge. The question is: why can social bots interact with us without us realizing that we are communicating with a robot? Why can social bots deceive us?

Detected Bot Activity

© Screenshot Spehr

Another example is of an economic nature. In May 2014, as reported by the Frankfurter Allgemeine Zeitung, the German big-data analysis company Stockpulse detected bot activity on Twitter when the false message that Amazon was trying to take over Groupon was retweeted 65,000 times: thousands of retweets coming from bot accounts. Although the Amazon stock showed only a slight rise in reaction to the tweet, the case demonstrates that bots can cause price fluctuations, because the stock exchange reacts to Twitter. To illustrate further: on April 23, 2013, the Twitter account of the Associated Press was hacked and falsely tweeted that a bomb had exploded in the White House. Within two minutes, the Dow Jones index fell by around 145 points. This shows that bots are potentially able to manipulate the economy to the extent of affecting the stock exchange. Malicious sponsors with economic interests can do great harm.

Why are we not always aware of this fraud? In 2011, the Canadian researchers Boshmaf, Muslukhov, Beznosov and Ripeanu of the University of British Columbia created bots designed to infiltrate Facebook and connect with a high number of users. Of the 8,570 friendship requests their bots sent out, 3,055 were accepted: 36% of the people accepted an unknown user, an obscure machine, as a friend! The researchers could then collect users' private data, such as email addresses and phone numbers, which has monetary value. With this data in the hands of the wrong people, great criminal damage could be done. Why is this possible? Do we place too much trust in our interactions on social media and therefore behave incautiously?

Computers and Bots as Social Actors

In 1996, the communication researchers Reeves and Nass of Stanford University, USA, developed the Media Equation theory on the basis of more than thirty-five studies. It indicates that people interact with computers, televisions and new media as if they were real people or real places; their behavior towards these media was found to be essentially natural and social. In 2000, Nass, together with the marketing professor Moon of Harvard University, published a paper stating that humans apply the same social scripts to computers that they use in human-to-human interaction, ignoring the signals that display the asocial attributes of a machine.

Expanding on this research, which became known as the Computers as Social Actors (CASA) paradigm, the communication researchers Edwards C., Edwards A., Spence and Shelton of Western Michigan University, the University of Kentucky and the University of Massachusetts demonstrated in 2014 that a Twitterbot is perceived by individuals as a credible source of information. Students were shown exactly the same tweets but were told either that they were authored by a bot or that they were authored by a human; they then answered questions on credibility and interpersonal attraction. The results fell in line with CASA: individuals relied on the same social rules in their interaction with bots as they apply in their interaction with humans.

Taken together, these studies suggest that if humans, in the words of Nass and Moon, respond “mindlessly” to computers and bots even when they know they are interacting with a machine, we may simply default to incautious behavior while we are online: responding socially to the unknown, regardless of whether it is a machine or a human.

Trust within Social Media

While interacting on social media, a determining factor for our behavior, cautious or incautious, is our level of trust in the communicator. According to the communication researchers Valenzuela, Kee and Park of the University of Texas, trust can be understood as the confidence that a stranger will not consciously or willingly harm us. That trust is established through our knowledge about the identity of the other. Within social media, we can gain such information from the user profiles in our social networks, for example personal background or interests. As early as 1975, the communication researchers Berger and Calabrese found this kind of information to be necessary to decrease skepticism and doubt about the motives and behavior of unknown users, allowing humans to develop trust.

Incautious behavior online is compounded when people do not know about the existence of bots at all. By the definition of the Merriam-Webster dictionary, “the state of being ignorant” relates to “a lack of knowledge.” Thus, we behave incautiously if we have no knowledge about bots. Our ignorance can be inferred from the fact that not even experts know, or are able to estimate, how many bots are active on social media. For instance, Twitter writes in its first quarterly report of 2016 that its figures on monthly active users could be distorted by third-party applications that automatically contact Twitter's servers without the help of a human. As explained by the editor Knoke of the web magazine Engadget, this means that 8.5% of Twitter's active users could be bots, analysis programs or ordinary user apps. In simpler words: for roughly one in twelve of its members, Twitter cannot guarantee that they are human. Yet there are no official statistics that reveal an exact number of bots active on social media. Dr. Hegelich emphasizes: “An exact number can never be found, as most bots will never be detected. However, very many bots exist. They can be found in every relevant discussion on Instagram, Twitter and Facebook.”

Here we are left to our own intuition, because the information provided on a bot's profile will not necessarily reveal it to us as fake. And recalling the research showing that users tend to ignore the cues that identify a computer or a Twitterbot as a machine, detecting a seemingly human user on social media as a bot becomes even harder. Psychology professor Dr. John Suler of Rider University, an expert on the psychology of cyberspace, writes in an e-mail that “some people will be able to identify sophisticated bots, people that are sophisticated about psychology and/or bots. Other people will not be able to identify even a primitive bot. People differ in the degree to which they will perceive it as a human with human characteristics.” What is certain: as the quality of bots increases, so does their influence on our trust in social media.

High Quality of Social Bots

Ever since the development of computers, scientists have tried to design a computer algorithm that passes the Turing Test, named after Alan Turing, the mathematician who asked in a journal article published in 1950: “Can a machine think?” His idea was that if the answers of a computer could not be differentiated from the responses of a human, then the machine could be said to be thinking. The Loebner Prize for Artificial Intelligence, a formal instantiation of the Turing Test to be awarded to the first computer whose answers are indistinguishable from a human's, has never been won, yet the quality of the machines keeps growing. The same holds for bots active on social media.

In a written exchange, Dr. Hegelich emphasizes that bots follow simple but effective rules which lend them their apparent authenticity: “Bots post content that they often find on pre-defined websites, they have a ‘daily routine’, meaning that they post following a specific time pattern, and they can autonomously follow users.” He adds that bots often use avatar profile pictures of attractive individuals. So bots not only behave like a human but can also ‘look’ like one. Who is to tell them apart from a real person?
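To see how little is needed for such a “daily routine”, here is a minimal sketch in Python of the behavior Dr. Hegelich describes: recycling text from pre-defined websites, posting on a fixed time pattern and occasionally following users. The endpoints, credentials and source list are invented placeholders, not the interface of any real network; an actual bot would use a platform's own API, but the underlying logic is hardly more complex than this.

```python
import random
import time

import requests

# Invented placeholders: no real network exposes these exact endpoints.
API_POST_URL   = "https://social.example/api/post"
API_FOLLOW_URL = "https://social.example/api/follow"
SOURCE_SITES   = ["https://news.example/latest", "https://blog.example/feed"]
POSTING_HOURS  = [8, 12, 18, 22]    # the bot's fixed "daily routine"


def fetch_snippet() -> str:
    """Grab text from one of the pre-defined websites to recycle as a post."""
    page = requests.get(random.choice(SOURCE_SITES), timeout=10)
    return page.text[:140]          # crude: reuse the first 140 characters


def run_bot() -> None:
    """Post on a fixed time pattern and occasionally follow another user."""
    while True:
        now = time.localtime()
        if now.tm_hour in POSTING_HOURS and now.tm_min == 0:
            requests.post(API_POST_URL, json={"text": fetch_snippet()}, timeout=10)
            if random.random() < 0.3:   # now and then, follow someone to seem alive
                requests.post(API_FOLLOW_URL, json={"user": "some_account"}, timeout=10)
            time.sleep(60)              # avoid posting twice in the same minute
        time.sleep(30)                  # otherwise just check the clock again


if __name__ == "__main__":
    run_bot()
```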

Today, computer-generated pictures look almost perfectly real. In a 2016 study, the computer science researchers Holmes and Farid of Dartmouth College and the psychology professor Banks of the University of California found that humans have difficulty telling computer-generated images apart from photographic ones. The study's participants were shown 60 images, ten at each of six resolutions. Across resolutions, computer-generated pictures were correctly identified as such only 50 to 65% of the time. While professional bot-detection analyses, such as Dr. Hegelich's algorithm, include an examination of the color scheme of a profile picture, regular users have to depend on their own ability to differentiate between real and unreal.

Top picture: computer-generated image

Bottom picture: photograph © Holmes/Banks/Farid
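Dr. Hegelich's detection method is not spelled out here, so the following is only a toy illustration of what “examining the color scheme” of a profile picture could mean in practice: measuring how much of an image is covered by a handful of dominant colors, since rendered or stock-like avatars often have a flatter palette than ordinary photographs. The file name and the choice of eight colors are arbitrary assumptions; the sketch uses the Pillow imaging library.

```python
from collections import Counter

from PIL import Image   # Pillow imaging library


def dominant_color_share(path: str, top_n: int = 8) -> float:
    """Return the share of pixels covered by the top_n most common colors.

    A very high share hints at a flat, synthetic color scheme; ordinary
    photographs usually spread their pixels over far more colors.
    """
    img = Image.open(path).convert("RGB").resize((64, 64))
    counts = Counter(img.getdata())
    covered = sum(n for _, n in counts.most_common(top_n))
    return covered / (64 * 64)


if __name__ == "__main__":
    share = dominant_color_share("profile_picture.jpg")   # hypothetical file name
    print(f"Top-8 colors cover {share:.0%} of the image")
```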

Dr. Oliver Bendel, professor of information systems at the Fachhochschule Nordwestschweiz in Switzerland, who specializes in social media and social robotics, adds in an e-mail exchange: “Many users today are no longer accustomed only to high-quality texts; they also value inferior texts. Here, bots can play a major role, as they can operate effortlessly on a comparable level.” Hence, as bots improve both in visual appearance and in human-like behavior, it becomes more and more difficult for us to know when to distrust users on the social web.

A Look Into the Future

Both Dr. Hegelich and Oliver Dziemba, a behavior researcher at the Institute for Trend and Future Research at Heilbronn University, Germany, foresee a change in the infrastructure of the social media landscape. Dziemba states in writing that future changes depend on human decisions about to what extent and for how long we allow bots to influence us: “When social media eventually becomes overfilled with bots that write and like, individuals need to decide whether they are fond of that, whether that is enough for them or not.” Dr. Hegelich says that as soon as users change their social media behavior, the operators of the networks will change their frameworks. “I am confident that social media is going to function completely differently in three years compared to today,” as he puts it. Dr. Bendel expects individuals to develop and apply their own Turing Tests. Perhaps humans will also withdraw into places in which they can clearly identify with whom they are in contact: a human or a bot.

One way to deal with the existence of social bots is to critically question everything that seems suspicious: Why does a user have no profile description? Why does somebody post 100 messages a day? Why does a profile picture look unrealistic? Dr. Hegelich explains that in times of big data and social bots, our deeply rooted assumption that quantity can be an indicator of quality is simply wrong. Even when something is claimed a million times on the Internet, that does not have to mean anything. Via e-mail, Max Braun, a media psychology research associate at the University of Hohenheim, Germany, who researches human-computer interaction, points to the human ability to identify patterns: “Recipients need enough time, motivation and the ability to check user profiles. When these requirements are met, we are also more critical towards specific posts.”
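The suspicious signs listed above can even be turned into a rough, purely illustrative checklist. The profile fields and thresholds below are invented for the example and would prove nothing on their own; a high count only means “look more closely”, not “this is a bot”.

```python
from dataclasses import dataclass


@dataclass
class Profile:
    # Invented fields for the example; real profiles would have to be mapped onto them.
    description: str
    posts_per_day: float
    account_age_days: int
    followers: int
    following: int


def red_flags(p: Profile) -> int:
    """Count simple warning signs; a high count means 'look closer', not 'is a bot'."""
    flags = 0
    if not p.description.strip():
        flags += 1                              # no profile description at all
    if p.posts_per_day > 100:
        flags += 1                              # e.g. 100 messages a day or more
    if p.account_age_days < 30 and p.posts_per_day > 20:
        flags += 1                              # brand new, yet hyperactive
    if p.following > 10 * max(p.followers, 1):
        flags += 1                              # follows far more accounts than follow it back
    return flags


if __name__ == "__main__":
    print(red_flags(Profile("", 150, 12, 5, 800)))   # prints 4
```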

Our remedy, our human weapon against the bots on social media, is to question everything, not to rely solely on our trust, and to take our time. Just like the judges testing the machines in the Turing Tests of the Loebner Prize, we have to look out for things that do not make sense.

During the 2015 Loebner Prize competition, Rory Cellan-Jones, technology correspondent of the BBC, asked a conversation partner named Rose: “Could you afford to get on the housing ladder in London?” Her answer was: “When I go to foreign places, I like to study the designs of buildings. How about you? So clearly you are English.” Here it became clear that Rose wasn't a human. It was a bot.
