Missouri episode exposes motives and methods of Russian propagandists

Throwing a gas can onto a tiny campfire:

Russian Twitter trolls pounced on the University of Missouri’s woes in 2015 using the same techniques they applied to disrupt the 2016 presidential election, U.S. Air Force officer Jarred Prier wrote in an article published recently in Strategic Studies Quarterly. In the aftermath of the Nov. 9, 2015, resignation of University of Missouri System President Tim Wolfe during protests over racial issues, some feared a violent white backlash.

It was fueled in part by a real post on the anonymous social app Yik Yak in which Hunter Park, then a student at Missouri University of Science and Technology in Rolla, threatened to “shoot every black person I see.” The fear was amplified and spread by a now-suspended Twitter account that warned, “The cops are marching with the KKK! They beat up my little brother! Watch out!” alongside a photo of a black child with a severely bruised face and the hashtag #PrayForMizzou.

This might seem like an inappropriate or way off-topic post for BlueNC, but (imo) it is actually critical moving into the 2018 election season. While social media has completely changed the game on organizing and activism, turning out crowds that number in the thousands in a short period of time, it has also become a minefield of clickbait and disinformation. We (each) have to be our own gatekeepers on Facebook and Twitter, taking that extra ten minutes to vet and verify stories before we aid and abet disinformation by sharing or retweeting it. It's not a conspiracy theory that people are pushing conspiracy theories: there is a concerted effort to undermine and/or redirect the energies of well-meaning activists:

The Twitter account, with the handle @FanFan1911 and the username Jermaine while tweeting about Mizzou, was used to spread panic about a fake chemical factory fire in St. Mary Parish, La., in 2014 and fear of Syrian refugees in Germany in 2016, Prier wrote. The account’s original Missouri tweets were retweeted by an army of 70 bot accounts and hundreds of legitimate users, becoming part of the huge volume of tweets about the university at that time, he wrote.

“The rapidly spreading image of a bruised little boy was generating legitimate outrage across the country and around the world,” Prier wrote. “However, a quick Google image search for ‘bruised black child’ revealed the picture that ‘Jermaine’ attached to the tweet was a picture of an African-American child who was beaten by police in Ohio over one year earlier. The image and the narrative were part of a larger plot to spread fear and distrust. It worked.”

Payton Head, then-president of the Missouri Students Association, took the bait, Prier notes in his article. In a Facebook post, Head warned students to stay away from windows in residence halls. “The KKK has been confirmed to be sighted on campus. I’m working with the MUPD, the state trooper and National Guard,” Head wrote.

In the past, many of us have worked from the assumption that the more times we see a story referenced, the more legitimate it probably is. That's because we've assumed several people have done at least a minimum of due diligence on the subject and found it reliable. But with bots, and people who automatically share stuff without checking its validity, that old paradigm is no longer tenable. And here's the unfortunate truth: nobody is going to fix that for us.

This election season could be a game-changer. It could produce results that would not only improve the quality of life of millions in this state and country, it could actually save lives. The attacks on Medicaid, Medicare, SNAP benefits, and a whole laundry list of other safety net programs will only be stopped at the ballot box. And we simply can't afford to have our energies wasted on non-existent crises. Check it before you share it.