Facebook, Twitter decide on types of content

The New York Post published a story Wednesday about Joe Biden and his son Hunter that read suspiciously like disinformation. More important, the piece included images of Hunter Biden and emails almost certainly obtained without his permission, purportedly from a laptop abandoned at a Delaware repair shop.

The hacked material is what got the Post story into trouble with Twitter, which has a policy against publishing links to such content. Meanwhile, Facebook had already slowed the piece’s redistribution out of concern that it might violate its policy against “misinformation.”

These moves drew a chorus of boos from Republicans, along with a demand for a federal investigation. The denunciations multiplied after Twitter temporarily suspended the accounts of White House Press Secretary Kayleigh McEnany on Wednesday afternoon and the Trump campaign Thursday morning for tweets related to the Post story that violated its terms of service. In McEnany’s case, liberal media critic Parker Molloy said, the offending tweet included an image of an email address, a clear Twitter no-no.

And herein lies the free speech dilemma posed by the emergence of a handful of globally dominant communications platforms. Facebook, Twitter, YouTube and Google have accumulated such enormous audiences that many users (and policymakers) consider them the digital equivalent of the public square of yore — indispensable places to speak and be heard. Yet they are not public forums; they are privately operated networks with rules set by the owners to serve their business interests. Presumably, the rules are intended to help the platforms attract and retain as many users as possible.

The basic problem here is that it took these platforms far too long to recognize how they were being used to amplify misinformation and to start enforcing their rules against campaign-related content and major political figures. Right after the 2016 election, Facebook CEO Mark Zuckerberg famously said that it was a “pretty crazy idea” that fake news on the social network had influenced voters. He and other top Facebook officials have since been hauled before Congress to testify about how seriously they took the misinformation problem and what they were doing to stop their platform from being used to amplify it.

The free rein the platforms gave Donald Trump for several years led his supporters to believe that he could do whatever he liked there. It also set the stage for the sort of outrage Republicans displayed Wednesday, when they assumed Twitter and Facebook were objecting to the Post’s piece because it criticized Biden — which both services credibly argued they weren’t doing.

Here’s one concrete way the lack of early and consistent enforcement has hurt the platforms. As Sen. Mike Lee (R-Utah) pointedly noted, Facebook and Twitter freely spread links to the BuzzFeed story in 2016 that revealed the contents of a notorious dossier of wild and damaging allegations about then-candidate Trump.

The dossier has always seemed sketchy, which is why many news organizations decided not to publish its contents. We have since learned that the primary source for the dossier’s author was someone suspected of being a Russian spy.

I’m not going to litigate the value of the Post piece here, although I share concerns voiced by Judd Legum of Popular Information, among others, about its validity. What’s most interesting to me is the outrage triggered by Facebook and Twitter seeking to enforce their terms of service in the context of a politically explosive article.

It’s worth bearing in mind that nothing Facebook or Twitter did or can do affects what the Post publishes on its own site. They cannot “censor” the Post; they can only censor people who use their platforms. Sure enough, discussion of the piece on Twitter seemed to pick up after the social network blocked people from posting a link to it. Many responded to Twitter’s ban by quoting the story or by sharing pictures of its text. And the Post, savvily, wrote and tweeted links to other stories following up on its original story, driving more traffic to the issue that way.

The free speech distinction here is important. The companies aren’t interfering with people discussing the issue. They are interfering with just one specific way of pointing to the story: its URL. Nevertheless, President Trump and his supporters complained that Facebook and Twitter were meddling in the election by preventing links to the Post story from going viral on their platforms. It was an entirely predictable response — the sort of Big-Tech-is-biased-against-conservatives victimization narrative that they’ve been spooling out for a couple of years — that illustrates the no-win position the companies are in because of their scale.

Platforms should have rules against distributing hacked material and misinformation, and there’s no good argument for enforcing or not enforcing them according to how much it might affect a campaign. That’s a judgment call the platforms are completely unqualified to make.

On the other hand, the platforms are entirely qualified — even uniquely qualified — to decide what sorts of content and behavior violate their terms of service. Which is not to say that they enforce those rules fairly; it’s just to say that it’s their right.

The episode makes it more likely that lawmakers will alter Section 230 of the Communications Decency Act, the federal law that says websites and services aren’t liable for the content their users upload. It’s a hugely important provision, one that was instrumental in the development of open platforms where content creators and publishers can find an audience for their work.

Numerous Republicans, most notably Trump and Sen. Josh Hawley (R-Mo.), have called repeatedly for Section 230 to be repealed or rolled back, hoping to coerce Big Tech into meeting some federal standard for political neutrality. That’s not just a terrible idea that runs sharply counter to the First Amendment; it’s also self-defeating. If the protections provided by Section 230 are weakened, sites would have a strong incentive to censor more content, not less, in order to avoid liability. Beyond that, liability protections mean far more to the small companies that want to be the next Facebook than they do to the current Facebook. Huge companies can shoulder the risk of lawsuits; small ones can’t.

Jon Healey is the Los Angeles Times’ deputy editorial page editor.