How white supremacists evade Facebook bans

Yesterday, in response to years of complaints that it lets white nationalists keep using the platform even after they have been banned, Twitter said that it plans to conduct academic research on the subject. At HuffPost, Luke O’Brien took the occasion to note just how many white nationalists are using the platform:

The white supremacist accused of murdering 51 people in Christchurch, New Zealand, in March was also on Twitter, where he spread Islamophobia, white supremacist propaganda and articles about terrorist attacks. He tweeted pictures of his weapons and posted links to a disturbing manifesto he wrote, apparently in anticipation of the deadly rampage. Only after he was charged in a mass murder did Twitter act.

The shooter may have carried out the genocidal end goal of white supremacy, but there are thousands of white supremacists on Twitter with the same mindset, most of them anonymous and working in concert. In a 2018 study, extremism expert J.M. Berger offered an “extremely conservative” estimate that at least 100,000 alt-right users are on Twitter. The repercussions for these bad actors are practically nonexistent.

But Twitter isn’t the only platform with a whack-a-troll problem. In BuzzFeed, Jane Lytvynenko, Craig Silverman, and Alex Boutilier examine the aftermath of Facebook’s effort to eliminate white nationalist groups. Working with extremism researcher Megan Squire, they find that banned groups, including the Proud Boys, have been able to set up shop on Facebook again rather easily:

Squire said the group was able to return to Facebook by slightly altering its name. One of their new pages was called PB Canada and included a link to a Telegram channel used to communicate with supporters.

They go on to document other instances of banned groups continuing to exploit various parts of Facebook to recruit new members. Sometimes groups are banned but their constituent members are not, and those members simply create new pages and spread their racist ideology there.

Facebook removed the groups found by BuzzFeed. But elsewhere on the platform, militias are organizing, Samira Sadeque reports:

Granted, since its payment accounts were suspended, UCP has taken steps to “secure” itself. On Facebook, the group used to be open but has since become closed. And yet the group, which had 3,000 members in April, now has more than 5,800. [...]

Individual users are also posting videos on their personal Facebook pages and live streaming themselves intimidating, harassing, and haranguing asylum seekers. For example, a woman named Debbie Collins Farnsworth has posted numerous live videos confronting incoming migrants. (Currently, when you click on the videos, a notification says they aren’t available right now, signaling they may have been set to private or have been removed.) Her videos have garnered thousands of views and hundreds of shares and receive much fanfare, offering a glimpse at how well this kind of anti-immigrant sentiment and intimidation is received on Facebook, where this rhetoric can spread.

Opposing immigration might not automatically make you a white nationalist, but forming a militia to harass terrified asylum seekers probably qualifies. And at a time when Facebook is pushing its users to spend more time in private groups, it seems notable that the company has no comment on a militia of more than 5,800 members that’s coordinating on its platform.

Saying that you’ve banned white supremacists is obviously much easier than doing so. But given the way that tech platforms now reflexively laud their artificial intelligence efforts whenever questions of moderation come up, I’m struck by how easily white supremacists have managed to evade these purported bans.

If a Nazi can rejoin Facebook just by slightly altering a group’s name, a different approach would seem to be warranted. And the next time a platform tells us that it has banned white supremacists, it’s up to us to ask exactly how it plans to do it.

The Pelosi video again (sorry)

The distorted video of Nancy Pelosi managed to stay in the news cycle for a sixth day today, so here are three quick things about it.

One, I mentioned yesterday that one reason Facebook didn’t explicitly label the video phony was that doing so might lead people to share it more. Alex Kantrowitz takes issue with this idea in BuzzFeed, interviewing the professor who first studied the so-called “backfire effect.”

“Under some circumstances, i.e. with world-view challenging material in particular, there is some evidence for a backfire effect. However, the keywords here are some and some,” Lewandowsky said. “So the question then becomes whether you gain more by being explicit and taking the risk with a backfire effect, or by sticking to ‘additional information’ (thus avoiding backfire) but being insufficiently explicit for the majority of people who might not be susceptible to a backfire effect. This is a difficult question that does not have a one-size-fits-all answer. However, given the relatively infrequent occurrence of backfire effects—they occur less frequently than initially thought—I would lean towards being more explicit as this might maximize the overall impact even if the occasional person backfires.”

Two, I got some questions about why YouTube banned the video when Facebook did not. YouTube’s answer is that the video was banned under its deceptive practices policy. The policy was initially designed to counter spam but has since expanded to include political deception, I’m told. The company is being a little slippery here, I think: YouTube pointed me to the part of the policy that bans “misleading metadata or thumbnails,” which would not seem relevant to the Pelosi video at all. In any case, I’ll be interested to see how Google applies this policy as more deceptive videos appear.

Finally, here’s Hillary Clinton calling the video “sexist trash.”

Democracy

Elizabeth Warren puts a giant tech breakup billboard in San Francisco’s face

While some Democratic presidential candidates are coming to Silicon Valley to raise money, Warren is spending money to remind Silicon Valley that she wants to break up its biggest companies. Kind of amazing! Makena Kelly and Nick Statt report:

On Wednesday, 2020 presidential candidate Sen. Elizabeth Warren (D-MA) put up a billboard in the heart of Silicon Valley pressing for big tech companies like Facebook, Amazon, and Google to be broken up.

The billboard is located at 4th and Townsend, right next to the city’s primary Caltrain stop, where a substantial chunk of South Bay technology workers arrive each morning. It’s not exactly prime placement — considering it’s neither facing the Caltrain station nor along the most traffic’d sidewalks for employees commuting back to the South Bay — but the billboard is just blocks from the headquarters for Lyft and Dropbox, among other startups. Alongside the call for antitrust action, the billboard includes a short-code number for passersby to subscribe to updates from the Warren campaign, a common fundraising tactic. The billboard is scheduled to run until next Wednesday.

Clock runs down for privacy legislation

2019 was supposed to be the year that Congress passed a national privacy law, but it’s not looking good, David McCabe reports:

The Senate Judiciary Committee sent letters earlier this year to companies asking about their data collection practices, according to a source. But there’s no indication of plans to move forward with a specific bill.

Democrats had signaled privacy legislation would be a priority when they retook the House last year. But major House committees haven’t moved forward with a bill, either, and Speaker Nancy Pelosi (D) has indicated she’s wary of overriding state rules like those soon to take effect in her home state of California.

US Universities And Retirees Are Funding The Technology Behind China’s Surveillance State

BuzzFeed explores how US money is funding the creation of the Chinese dystopian panopticon:

Since 2017, Chinese authorities have detained more than a million Uighur Muslims and other ethnic minorities in political reeducation camps in the country’s northwest region of Xinjiang, identifying them, in part, with facial recognition software created by two companies: SenseTime, based in Hong Kong, and Beijing’s Megvii. A BuzzFeed News investigation has found that US universities, private foundations, and retirement funds entrusted their money to investors that, in turn, plowed hundreds of millions of dollars into these two startups over the last three years. Using that capital, SenseTime and Megvii have grown into billion-dollar industry leaders, partnering with government agencies and other private companies to develop tools for the Communist Party’s social control of its citizens.

DIY Facial Recognition for Porn Is a Dystopian Disaster

Samantha Cole examines the case of the person who claimed to have used facial recognition technology to identify more than 100,000 women who appear in porn:

In a Monday post on Weibo, the user, who says he’s based in Germany, claimed to have “successfully identified more than 100,000 young ladies” in the adult industry “on a global scale.”

To be clear, the user has posted no proof that he’s actually been able to do this, and hasn’t published any code, databases, or anything else besides an empty GitLab page to verify this is real. When Motherboard contacted the user over Weibo chat, he said they will release “database schema” and “technical details” next week, and did not comment further.

Apple, Google and WhatsApp condemn GCHQ proposal to eavesdrop on encrypted messages

Tech companies want to stop the United Kingdom’s Government Communications Headquarters from implementing a proposal to let the government snoop on encrypted messages, Sam Meredith reports:

In an open letter to GCHQ (Government Communications Headquarters), 47 signatories including Apple, Google and WhatsApp have jointly urged the U.K. cybersecurity agency to abandon its plans for a so-called “ghost protocol.”

It comes after intelligence officials at GCHQ proposed a way in which they believed law enforcement could access end-to-end encrypted communications without undermining the privacy, security or confidence of other users.

Disclaiming responsibility: How platforms deadlocked the Federal Election Commission’s efforts to regulate digital political advertising

Here’s a paper from Katherine Haenschen and Jordan Wolf that analyzes platforms’ efforts to avoid the regulation of political advertising ahead of the 2016 election. From the abstract:

An analysis of documents submitted to the FEC demonstrates that Facebook and Google put profit ahead of the public interest in seeking exemptions from disclaimer requirements, refusing to change the size of their advertisements and downplaying the deceptive potential of political ads. Due to partisan gridlock and a lack of technological expertise, the FEC failed to rule decisively on exemptions or agree on alternative means of disclaimers for Facebook and mobile app ads, setting the stage for electoral interference in 2016. Implications for current regulatory efforts are discussed.

Sen. Josh Hawley calls out Facebook over ‘encrypted’ messaging plans

The Missouri Republican doesn’t like Facebook’s plans for encrypted messaging, Makena Kelly reports:

“If you share a link in encrypted messenger with a friend who clicks it, Facebook reserves the right to use cookies to figure out what that link was and what you two might have been discussing in your encrypted chat,” Hawley said in a statement. “If you send a roommate your rent money in encrypted messenger, Facebook reserves the right to use the payment metadata to figure out you might live together. And they call this ‘encrypted’ private messaging.”

“My advice to consumers is simple,” Hawley continued. “When Facebook tells you its messaging services are private, you can’t trust them. I’d love to know what Brian Acton and Jan Koum [WhatsApp co-founders] are thinking as they read this response.”

Ro Khanna’s quest to marry Silicon Valley capitalism with progressive populism

Tal Kopan profiles the Bay Area’s congressman during a time of growing pressure on tech companies:

The Fremont Democrat’s success has largely come from his ability to fit in with the culture of the tech world. His ability to appeal to some of big tech’s wealthiest political donors helped propel his election to Congress over an eight-term Democratic incumbent.

But the same traits that help him connect with the valley are at the root of skepticism he faces, as he gains prominence nationally and leans into an independent streak that puts him at odds with some Democratic colleagues. His closeness with the epicenter of American wealth has also raised eyebrows as Khanna takes a progressive line, including joining the presidential campaign of a self-described democratic socialist who laments the “proliferation of millionaires and billionaires” in an unequal society and has embraced calls to break up Facebook.

Elsewhere

Mark Zuckerberg’s personal security chief accused of sexual harassment and making racist remarks about Priscilla Chan by 2 former staffers

An aide to Zuckerberg has been accused of all manner of racist, homophobic, and transphobic comments, Rob Price and Jake Kanter report. He has been placed on leave while Zuckerberg’s family office investigates.

Facebook Shareholders Challenged Zuckerberg, Left Empty-Handed

Kurt Wagner attended Facebook’s annual shareholder meeting so you didn’t have to:

One investor described Facebook as “Zuckerberg’s failing autocracy.” Another said the company “destroyed journalism.” SumOfUs, an organization that runs digital campaigns intended to apply pressure on powerful corporations, had members standing outside holding signs that read “Vote No On Zuckerberg” and “Break Up Facebook.” They also brought a large, inflatable balloon shaped like the angry face emoji on Facebook’s website.

It was a strong show of force. But though shareholders came out swinging, they walked away without anything tangible to show for it.

An initiative to combat fake news raised $2.25 million from Craigslist’s founder and Facebook

The Trust Project, which helps platforms identify high-quality news, raised money and is becoming a full-fledged nonprofit, Sara Fischer reports:

The Trust Project, a technology-backed news initiative made up of dozens of global news companies, announced Thursday that it raised an additional $2.25 million from Craig Newmark Philanthropies, Facebook and the Democracy Fund.

Why it matters: With the funding, The Trust Project can establish itself as an independent nonprofit, which will help it scale its news partnerships globally.

Twitter is not making you smarter and hurting your intelligence, new study finds

Really misleading (and confusing) headline about a comically dumb study in Italy about teaching methods. It turns out that teaching students about a book in a classroom is more effective than having them post tweets about it.

The investigation drew on a sample of roughly 1,500 students attending 70 Italian high schools during the 2016-2017 academic year. Half of the students used Twitter to analyze “The Late Mattia Pascal,” the 1904 novel by Italian Nobel laureate Luigi Pirandello, which satirizes issues of self-knowledge and self-destruction. They posted quotes and their own reflections, commenting on tweets written by their classmates. Teachers weighed in to stimulate the online discussion.

The other half relied on traditional classroom teaching methods. Performance was assessed based on a test measuring understanding, comprehension and memorization of the book.

Using Twitter reduced performance on the test by about 25 to 40 percent of a standard deviation from the average result, as the paper explains. Jeff Hancock, the founding director of the Stanford Social Media Lab, described these as “pretty big effects.”

Another study finds teen suicide rates rose just after 13 Reasons Why debut

I found this deeply unsettling. From Mary Beth Griggs:

After the release of the controversial Netflix show 13 Reasons Why, scientists found a 13.3 percent increase in teenagers’ deaths from suicide. This is the second study released this month that found a rise in youth suicides around the time the show premiered. Mental health researchers are, as a result, more concerned than ever about how suicide is portrayed in the media — because suicide can be “contagious.”

About 94 more kids ages 10 to 19 died than expected during the period of this study, which was published this week in JAMA Psychiatry. Because there’s no way to tell whether the people who died by suicide during this time actually watched the show, the study “does not provide definitive proof” that 13 Reasons Why, which focuses on a teenage girl’s death by suicide, “is associated with harmful outcomes,” the authors note in the paper. They did, however, find the increase in death “concerning.”

YouTube could help the planet by throwing out its digital waste

YouTube wastes a lot of energy, and the impact on the environment is significant, according to a study published this month. Mary Beth Griggs again:

If people who are only listening to videos don’t have images playing, companies like YouTube might make themselves more Earth-friendly, a new study finds. That’s because a lot of the energy used to get that video to your eyeballs happens at the network and device level.

By sending only sound to users who aren’t watching, the company could reduce its annual carbon footprint by the equivalent of about 300,000 metric tons of carbon dioxide per year. That’s about the same amount of the greenhouse gas produced by 30,000 homes in the UK every year, according to a University of Bristol press release.

Launches

Facebook filed a patent for a drone made of kites

Finally, a patent filing that could lead to a literal product launch. From Adi Robertson:

Facebook filed a patent for an unusual drone that would use kites to stay aloft. The “dual-kite aerial vehicle” is composed of two kites tethered together and floating at different altitudes. Each kite could be directed independently, and the drone could generate its own energy to extend its flight time. As with all patents, we don’t know whether Facebook is building this system. But it indicates a continuing interest in experimental aerial vehicles, even after Facebook scaled back its earlier, well-publicized Aquila project.

Takes

About That Pelosi Video: What to Do About ‘Cheapfakes’ in 2020

Robert Chesney, Danielle Citron, and Quinta Jurecic take on the Pelosi video:

In the bulk of cases, then, it may be best to embrace a more-aggressive combination of demotion and flagging that allows the content to stay posted, yet sends a much louder message than the example set by Facebook’s current flagged-by-third-party approach. For example, anyone who clicks on the video might first be presented with a click-through screen directly stating that the platform itself has determined that the video has been meaningfully altered, and asking the user to acknowledge reading that statement before the video can be watched. If accompanied by a robust and transparent mechanism through which such categorizations can be challenged, some such “nudge” solution might prove to be the best available option in the edge cases.

It is increasingly clear that the integrity of the 2020 presidential election will be challenged by misinformation just as much, if not more so, than the 2016 election was. This time, though, the country has advance warning. There is no easy solution to the challenge of cheapfakes. This is precisely why both platforms and campaigns need to begin thinking seriously about how to address the problem now.

Google Should Google the Definition of ‘Employee’

The Times editorial board says platforms are exploiting contract labor:

The inferior treatment of contractors is both the point of the system and necessary to maintain the system. Companies that treat contractors too much like employees can be held legally liable for treating those contractors like employees. Google explicitly describes this possibility as a “risk” in internal training documents — it does not want to treat its contractors like employees. Contractors are not allowed to attend internal meetings or holiday parties. They cannot participate in the company’s career advancement programs. Google, like many other large employers, also emphasizes ceremonial distinctions, for example giving red badges to contractors and white badges to conventional employees.

And finally ...

Talk to me

Send me tips, comments, questions, and leads on white nationalists hiding in plain sight: casey@theverge.com.
