The anti-Islam invective spread beyond the social media platform.
Nov. 5, 2019
Muslim candidates, including Reps. Ilhan Omar and Rashida Tlaib, endured torrents of hateful, xenophobic and threatening tweets during last year’s campaign season, much of it amplified through bots and other fake accounts, according to a study to be released Tuesday.
The study, by the Social Science Research Council, analyzed 113,000 Twitter messages directed at Muslim candidates. The tweets called the candidates “dogs” and “pieces of garbage” and accused them of marrying siblings, being terrorists and seeking to impose the values of a “demonic” faith on Americans.
The threats and verbal attacks flowed so heavily toward Omar (D-Minn.) — who came to the United States as a refugee from Somalia and has become a visible symbol of Muslim political aspirations — that the report categorized more than half of all accounts that mentioned Omar as “trolls” because they tweeted or retweeted hateful, Islamophobic or xenophobic content.
The vitriol of the tweets far surpassed what Muslim candidates reported encountering on the campaign trail in their own districts, evidence, the report said, that Twitter was responsible for spreading images and words from a small number of influential voices to a national and international audience.
“We ended up with manufactured outrage that was amplified by faceless individuals, organizations and governments,” said Lawrence Pintak, lead author of the report and a professor at the Edward R. Murrow College of Communication at Washington State University. The study is called “#Islamophobia: Stoking Fear and Prejudice in the 2018 Midterms.”
As a result of this social media blitz, Pintak said, “you create a sector of society that buys into this exaggeration of lies and exaggeration of hate in this online echo chamber, and it spills into the mainstream media and into mainstream consciousness.”
Many of the tweets cited by the report appear to violate Twitter’s terms of service, which prohibit violent threats and attacks based on religious affiliation, and the researchers found that a large number of the accounts they studied were eventually closed or deleted by the user, which can be a tactic to remove evidence of disinformation campaigns.
“Death threats, incitement to violence, and hateful conduct have no place on Twitter,” said company spokeswoman Katie Rosborough after reviewing an advance copy of the report. “We believe this behavior undermines freedom of expression and the power of healthy public conversation. People using their accounts to spread this type of content will face enforcement action.”
Omar complained publicly Sunday about the threats against her life on Twitter by retweeting a compilation of them and saying, “Yo @Twitter this is unacceptable!” That prompted talks between her office and the company.
After reviewing an advance copy of Tuesday’s report, she called it “a wake-up call.”
“It has become clear that these platforms do not take seriously their role providing a platform for white nationalist hate and dangerous misinformation in this country,” Omar said. “We as a nation need to think seriously about ways to address online threats to our safety and our democracy.”
The office of Tlaib (D-Mich.), who was born in Detroit to Palestinian immigrant parents, did not respond to a request for comment about the report.
Omar’s wearing of a hijab, the traditional Muslim head covering, was a particular source of anger in tweets reviewed for the study, as were unfounded claims that she sought to impose Islamic sharia law on Americans and was complicit in the synagogue shooting in Pittsburgh last year.
“No one that wears a #Hijab should be running for office in America. The #Quran #Islam and our #Constitution are Not compatible in any way,” said one tweet.
A set of three identical tweets said of Omar, “No way she belongs in this country. No way she should be involved in anything! Or breathing.”
The researchers examined and categorized 113,000 tweets directed toward Omar, Tlaib and a third, unsuccessful Muslim congressional candidate in the two months before the November midterm election. Overall, Muslim women were more likely to be targeted online than men. And a relatively small number of influential accounts had outsize reach, thanks to accounts that tended to retweet, quote or comment rather than author tweets themselves.
“All these things that happened online — all this hate, all this controversy — were manufactured,” said Jonathan Albright, a social media researcher at Columbia University’s Tow Center for Digital Journalism and a co-author of the report. “They wouldn’t exist if somebody hadn’t built a platform like this to amplify them.”
Particularly potent was right-wing journalist Laura Loomer, whom the report dubbed “Queen of the Trolls” for her ability to shape anti-Muslim online narratives. One tweet quoted in the report said, “MUST WATCH: I confronted Rashida Tlaib and Ilhan Omar, two Jihadi U.S. candidates with connections to terror organizations.” It also claimed that they hated Jewish people.
Loomer, who was banned from Twitter and Facebook in 2018, did not respond to emails seeking comment.
The account for President Trump, @realdonaldtrump, also played an influential role, in part because people seeking to spread anti-Muslim sentiment would direct their messages to Trump, potentially increasing their reach, the report found. The study covered a period last year, before Trump said in July that Omar, Tlaib and two other congresswomen of color should “go back” to their countries, even though three of them were born in the United States.
The report found that automated “bot” accounts — along with so-called “sock puppets,” which are controlled by people disguising their identities — played crucial roles in spreading hateful content directed toward Muslim politicians. Of the top 20 conservative accounts that spread messages about Omar, at least nine were bots, the report found.
Albright said the report also attempted to assess the impact of anti-Muslim messages on Facebook but failed because posts typically were deleted or otherwise made inaccessible before they could be collected for analysis.