It’s time to regulate tech platforms with laws, not fines
It was, in retrospect, perhaps not the best two weeks to go on vacation.
Each time I checked in with the news, there was some startling new collision between Facebook and democracy.
There was that $5 billion settlement with the Federal Trade Commission.
There was that $100 million settlement with the Securities and Exchange Commission.
There was an announcement that the FTC would open an antitrust investigation against the company. (Facebook co-founder Chris Hughes volunteered his help to build the case.)
There was an announcement that the Department of Justice would open up an antitrust probe against Facebook, Google, and other large tech platforms.
And that was only what happened last Wednesday!
(OK, the DOJ thing broke late the day before.)
In some ways, last week’s news represented a culmination of the threads that have unspooled over this newsletter’s not-quite-two years: A dawning awareness of internet platforms’ size and power. A reckoning over their excesses and unintended consequences. And belated but meaningful government action.
The question that ricocheted out of the various settlements and probes was, of course, just how meaningful that action actually was. No one was more invested in the idea that the settlement was meaningful than Facebook, which promoted it aggressively. The company talked of the “major changes to how it builds products and operates as a company,” and of the “fundamental shift in the way we approach our work.”
These changes and shifts were illustrated in an accompanying graphic, which touted efforts to “build privacy into every product” and generate various quarterly reports that must be signed by the CEO. An independent privacy oversight board will be spun up to frown thoughtfully at Facebook’s future initiatives surrounding the creative acquisition of user data. The company distributed video clips of Mark Zuckerberg discussing these changes — rather gravely — with employees.
Facebook’s enthusiasm for the agreement was, frankly, suspicious. But it was understandable once you learned, thanks to Tony Romm at the Washington Post, that the company had essentially dictated the settlement’s terms.
Facebook had a different understanding of its own errors: The tech giant internally believed at most it should be paying into the hundreds of millions of dollars, and the company felt it could easily prevail in court if it had to battle the FTC over how it calculates fines and what qualifies as a violation. In the end, Facebook still offered to pay more than it believed was required in a bid to assuage regulators and win other concessions from the feds.
Those concessions were numerous, and some observers — including the two Democratic FTC commissioners who voted against the settlement — found them outrageous. Facebook admitted no guilt; it agreed to make no changes to the way it collects user data; and it got the FTC to promise the agency would not hold the company or its executives liable for any as-yet undiscovered violations of its previous consent decree.
As Rebecca Slaughter pointed out in her dissent, which is very much worth reading, one problem with exempting Facebook executives from all other liability is that executives’ actions during this time were never fully investigated. And now there’s a $5 billion speeding ticket to ensure they never will be.
Members of Congress were quick to criticize the settlement — just as they have been since details first emerged. But as Makena Kelly pointed out, Congress has powerful regulatory authority in this area:
In the long term, the only way for the FTC to quickly and effectively punish tech companies for harming consumers is if Congress were to step up and empower the agency with heightened authority in a new privacy law. The FTC already has a similar power provided to it through the Children’s Online Privacy Protection Act (COPPA) to fine companies when they’re found to have abused the privacy of children under 13, but it’s largely inapplicable to tech companies, and there’s no equivalent protection for adults. [...]
For months, Chairman Simons has been pleading for a new law. But negotiations over the past few weeks have stalled, and it looks like Facebook was able to get off relatively easy, harming user privacy soon enough to skate by without harsher penalties imposed by Congress.
Matt Levine makes a great related point about why we got a settlement here rather than legislation:
I actually think there’s a deeper and stranger explanation here. Facebook did some things that a lot of people are upset about, some of which (certain sorts of data sharing) probably violated the laws or its earlier consent decrees, and others of which (certain sorts of data collection) didn’t. We want to stop it from doing all those things again, and the most straightforward way to do that is to pass a law saying which things you can’t do. But Americans are biased toward thinking of bad things as being already illegal, always illegal, illegal by definition and by nature and in themselves. If the thing that Facebook did was so bad, then it must have been illegal, so there is no need for a new law against it. At most we need a settlement with Facebook clarifying exactly which things it did were illegal and specifying that it won’t do them again. People are angry at Facebook, and that anger takes essentially punitive rather than legislative forms; we want to regulate Facebook’s future conduct as punishment for its past conduct, not as part of a general law. It is hard to imagine that a company could have done a bad thing without also breaking the law—which makes it hard to write new laws to prevent future bad things.
Amid all this, Facebook reported its quarterly earnings, which were sterling as usual. The stock price rose despite news that the FTC had started a new investigation of the company related to antitrust, presumably because recent history suggests to investors that such investigations are essentially toothless. The DOJ investigation may add firepower to the cause — but, as Matt Stoller argues persuasively here, it seems much more likely that the DOJ investigation will punish the president’s political enemies (“biased” Google and Jeff Bezos-run Amazon) than require Facebook to (say) spin off Instagram and WhatsApp.
All of which suggests that a foundational question of this newsletter — How will our government regulate tech platforms, and what will be the effects of those regulations? — may be cruising toward a deeply cynical conclusion. What if the United States ultimately does all its regulation of big platforms not in law, but in fines? What if, after years of investigation, all we have to show for it is theater — a tedious going through the motions?
What if, at the end of this strange era of “regulation,” we find that our biggest platforms aren’t much regulated at all?
Democracy
Big Tech’s liability shield under fire yet again from Republicans
Here’s a new Republican bill that seeks to eliminate all content moderation on the web that goes beyond what is legal under the First Amendment. Makena Kelly reports:
The populist wing of the Republican party introduced yet another bill to remove the tech industry’s largest liability shield last week.
The Stop the Censorship Act, sponsored by Rep. Paul Gosar (R-AZ), would strike language in Section 230 of the Communications Decency Act that allows platforms to moderate content they deem as “objectionable.” Gosar argues that this language makes it easy for platforms like Facebook and Twitter to remove content grounded in conservative ideology, a Republican censorship theory that has yet to be proven outside of individual remarks made by Big Tech “whistleblowers” like what we’ve seen from organizations like Project Veritas.
Sites could be liable for helping Facebook secretly track your web browsing, says EU court
Adi Robertson reports on a ruling that could reduce the number of “Like” buttons on the web:
The European Union’s top court says website owners could face legal risk over Facebook’s ubiquitous “Like” buttons. The Court of Justice of the European Union ruled today that site owners could be held liable for transmitting data to Facebook without users’ consent — which appears to be exactly what happens when users visit a site with a Like button, whether or not they click it.
The ruling doesn’t stop Facebook, or other companies with similar widgets, from offering these options. But sites must obtain consent from users before sending data to Facebook, unless they can demonstrate a “legitimate interest” in doing otherwise. Right now, data gets seemingly sent to Facebook as the page loads — before users have a chance to opt out. So in the future, sites might have to approach Like buttons differently.
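To make the practical upshot concrete, here is a minimal sketch of one way a site might comply — loading Facebook’s Like-button code only after a visitor opts in, so nothing is sent to Facebook when the page first loads. This is an illustration, not guidance from the ruling or from Facebook; the element ID, consent flow, and SDK version are assumptions.

```typescript
// Hypothetical sketch: hold back Facebook's Like-button SDK until the visitor consents.
// Injecting the SDK script is what triggers the browser's request to Facebook,
// so deferring it means no data leaves the page on load.

function loadLikeButtonSdk(): void {
  const script = document.createElement("script");
  script.async = true;
  // Standard SDK URL; the version parameter here is an assumption for illustration.
  script.src = "https://connect.facebook.net/en_US/sdk.js#xfbml=1&version=v3.3";
  document.body.appendChild(script);
}

// The Like-button markup stays inert until the SDK parses it:
// <div class="fb-like" data-href="https://example.com/article"></div>

// "consent-button" is a hypothetical element representing the site's consent prompt.
document.getElementById("consent-button")?.addEventListener("click", () => {
  loadLikeButtonSdk();
});
```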
The Unsexy Threat to Election Security
Brian Krebs highlights an under-discussed vulnerability in our election system: county officials’ social media accounts:
California has a civil grand jury system designed to serve as an independent oversight of local government functions, and each county impanels jurors to perform this service annually. On Wednesday, a grand jury from San Mateo County in northern California released a report which envisions the havoc that might be wrought on the election process if malicious hackers were able to hijack social media and/or email accounts and disseminate false voting instructions or phony election results.
“Imagine that a hacker hijacks one of the County’s official social media accounts and uses it to report false results on election night and that local news outlets then redistribute those fraudulent election results to the public,” the report reads.
How the West Got China’s Social Credit System Wrong
Here is a bizarre article that charts the rise of various interconnected surveillance systems in China and then complains that Western journalists are making too much of the potential harms they pose. If there’s a benevolent use for any of these systems, which are emerging at a time when China is cracking down on dissidents even more than usual, the authors never say.
Elsewhere
China’s ByteDance, after Smartisan deal, says developing smartphone
The TikTok phone is coming, Josh Horwitz reports. Should be good for anyone who wants a direct line to the Chinese government!
Chinese social media firm ByteDance Ltd said on Monday it is developing a smartphone, following a deal it made with device maker Smartisan Technology.
The plans come as the tech firm expands into new sectors beyond video and news apps.
YouTube Said It Was Getting Serious About Hate Speech. Why Is It Still Full of Extremists?
Aaron Sankin reports that YouTube’s enforcement of a new anti-hate speech policy lags far behind its public comments on the subject.
While less than scientific (and suffering from a definite selection bias), this list of channels provided a hazy window to watch what YouTube’s promises to counteract hate looked like in practice. And since June 5th, just 31 channels from our list of more than 200 have been terminated for hate speech. (Eight others were either banned before this date or went offline for unspecified reasons.)
Before publishing this story, we shared our list with Google, which told us almost 60 percent of the channels on it have had at least one video removed, with more than 3,000 individual videos removed from them in total. The company also emphasized it was still ramping up enforcement. These numbers, however, suggest YouTube is aware of many of the hate speech issues concerning the remaining 187 channels—and has allowed them to stay active.
Young Instagram Users Give Up Privacy in Search of Metrics
Sarah Frier reports that “millions of young people are turning their personal Instagram accounts into ‘business’ profiles to learn more about how their posts are performing.”
In order to be classified as a business on Facebook Inc.’s Instagram, users agree to provide their phone number or email to the public on the app. Their choice – made much easier by Instagram’s design and prompting – can endanger their privacy and that of their friends, according to David Stier, an independent data scientist who reported the issue to the company, and conducted a broad analysis on 200,000 accounts around the world with several different sampling techniques.
“I’ll talk to parents and say, ‘Did you know that if your 13-year-old turns their Instagram account into a business account, more than 1 billion people have access to their contact information?’” Stier said. “Every parent I talk to is like, ‘Are you kidding?’”
Facebook urges gay men to give blood, which can be a painful reminder they aren’t allowed to
Christina Farr and Salvador Rodriguez report that Facebook keeps urging gay men to give blood, even in countries where they aren’t allowed to donate.
The Terrible Anxiety of Location Sharing Apps
Boone Ashworth suggests not constantly tracking the whereabouts of your romantic partner, lest it fill you with perpetual dread:
Location sharing is best used sparingly. Leaving it on forever just invites endless dread and obsession. Within a year of using the service, I’ve grown accustomed to relying on that little blip in Google Maps to tell me that everything is all right. But as soon as it goes dark, my sense of safety and control becomes as lost as the person I can no longer keep track of. (God forbid I ever become a parent.)
Schüll had a similar experience, back when she and her husband shared their locations via Find My Friends. “I developed a sort of habit of always checking and it was distracting,” she says. Schüll only stopped because the service disengaged when they switched phone platforms. “I suddenly didn’t have the option anymore, and I felt so happy and relieved about it.”
Playlist
(This new feature will highlight more podcasts, books, TV show episodes and movies that might be of interest to Interface readers. Thanks to Hunter Walk for the suggestion!)
Facebook Love Scams: Who’s Really Behind That Friend Request?
In a long story and an episode of the Times’ new Hulu show, Jack Nicas investigates how overseas scammers posing as American servicemen bilk women out of their savings. (Related: this Tenable report on how Instagram dating spam is evolving.)
Ms. Holland and Mr. Anonsen represent two sides of a fraud that has flourished on Facebook and Instagram, where scammers impersonate real American service members to cheat vulnerable and lonely women out of their money. The deception has entangled the United States military, defrauded thousands of victims and smeared the reputations of soldiers, airmen, sailors and Marines. It has also sometimes led to tragedy.
The scheme stands out for its audacity. While fraud has proliferated on Facebook for years, those running the military romance scams are taking on not only one of the world’s most influential companies, but also the most powerful military — and succeeding. Many scammers operate from their phones in Nigeria and other African nations, working several victims at the same time. In interviews in Nigeria, six men told The New York Times that the love hoaxes were lucrative and low risk.
Launches
Facebook’s Ex-Security Chief Details His ‘Observatory’ for Internet Abuse
Andy Greenberg profiles Alex Stamos’ new project at Stanford:
When it comes to tackling internet abuse ranging from extremism to disinformation to child exploitation, Stamos argues, Silicon Valley companies and academics are still trying to build their own telescopes. What if, instead, they shared their tools—and more importantly, the massive data sets they’ve assembled?
That’s the idea behind the Stanford Internet Observatory, part of the Stanford Cyber Policy Center where Stamos is a visiting professor. Founded with a $5 million donation from Craigslist creator Craig Newmark, the Internet Observatory aspires to be a central outlet for the study of all manner of internet abuse, assembling for visiting researchers the necessary machine learning tools, big data analysts, and perhaps most importantly, access to major tech platforms’ user data—a key to the project that may hinge on which tech firms cooperate and to what degree.
Takes
The stubborn, nonsensical myth that Internet platforms must be ‘neutral’
Daphne Keller, a former Google lawyer, takes apart conservatives’ favorite argument about Facebook, Google, and Twitter:
Requiring platforms to address these concerns by carrying everything the law permits won’t solve our problems, though. After all, platform users and policymakers of all political stripes often call for platforms to take down more content — including speech that is legal under the First Amendment. That category can include Holocaust denial, bullying, anti-vaccine material and encouragement of teen suicide.
U.S. law permits people to post the horrific video of the March 15 massacre in Christchurch, New Zealand, and the doctored video of Nancy Pelosi. There may be ethical or policy reasons to urge platforms to ban such content, but there aren’t legal reasons. If we want platforms to enforce values-based speech prohibitions in cases like these, they’re going to have to choose and apply some values. By definition, those values won’t be neutral.
For Tech, We’re the Gift That Keeps on Giving. But We Get Prime!
Kara Swisher argues that lately we have been giving big tech companies more than we get from them:
I’ll take a step further by saying that the way that the tech giants have been responsive to consumer demands has lulled us all into a state of continuous partial satisfaction. After all, who doesn’t love free email and maps and adorable photo posting and instant information gratification and getting your heart’s desire delivered in a flash?
But the fact is that we have all become cheap dates to these tech platforms, making a trade-off in which they get all the real value and we get some free stuff that is inexpensive and easy for them to provide.
I missed the controversy surrounding FaceApp and its viral old-person face filters, and John Herrman captures it thoughtfully here:
Discussion about the dangers of an app like FaceApp have revolved around competing possible future violations: users’ images being sold as stock photos, or used in an ad; a massive data set being sold to a company with different ambitions; a hack. But the real violation is right there in the concept, and in the name.
FaceApp, in order to do the innocent thing that it advertises, must collect data so personal that its frequent surrender and seizure could soon result in the end of anonymous free movement on Earth. This is what the app economy, often a synonym for the new economy, demands. You can make the most innocent assumptions about FaceApp and its creators and still arrive at the conclusion that it should not exist, and yet here it is, the perfect smartphone toy, with nearly a million reviews in the App Store, and a rating of 4.7/5 stars.
And finally ...
Lil Nas X became Twitter’s CEO for a day and didn’t ban the Nazis
“Old Town Road” artist Lil Nas X is legitimately great at Twitter, and so naturally Twitter decided to (checks notes) um, make him the CEO on Monday? Bijan Stephen reports:
Twitter also posted a video on its music account, Twitter Music, starring the young rapper and meme impresario, in which he grabs Jack Dorsey’s badge and becomes CEO for the day. His first act? Firing @Jack. His second? Demanding an edit button, and then firing a roomful of engineers when they didn’t begin typing fast enough.
The two-minute clip was fun, and funny; honestly, seeing someone ask Twitter for something that users have been demanding for years, and the company stubbornly refusing to deliver, was cathartic. I enjoyed it! Until I remembered that Twitter doesn’t listen to its users, really, when it comes to anything more serious than an edit button.
In any case, nice that Twitter had a full-time CEO, if only for a day.
Talk to me
Send me tips, comments, questions, and your suggestions for easing back into work after a nice vacation: casey@theverge.com.