How Facebook shot themselves in the foot in their Elizabeth Warren spat

When Elizabeth Warren criticised Facebook over their decision to let Trump run false ads, Facebook compared themselves to a broadcaster. That was a big mistake.

Senator Elizabeth Warren, a Democrat from Massachusetts and 2020 presidential candidate (L), and Mark Zuckerberg, CEO of Facebook. Bridgett Bennet | Bloomberg | Getty Images

Over the weekend, Facebook likened itself to a broadcaster - inadvertently asking to be regulated. The comparison came in the third round of an argument with Elizabeth Warren over the company’s decision to run Donald Trump’s $1m advertising campaign containing lies about Joe Biden. Facebook had a stated policy of not running deceptive ads, but changed it - for politicians’ messages only - just before the ad ran. Warren took aim at the practice by headlining her own Facebook ad with the cheeky claim that Facebook CEO Mark Zuckerberg had just endorsed Trump, arguing that choosing to profit from lies amounts to an endorsement of a particular kind of candidate.

Facebook took the highly unusual step of tweeting a public response to Warren by name, comparing itself to a local broadcaster, which is required by law to carry political ads, and even citing Federal Communications Commission rules as a rationale. One wonders whether any DC lawyers took a look at that argument.

Because, of course, giving federal candidates “reasonable access” to air political ads is only one of many “public interest” requirements imposed on broadcasters - from charging all candidates the same rates and disclosing ad reach to children’s programming obligations and ownership caps. Broadcasters have fiduciary obligations to the public, not just shareholders, on account of their control over the flow of information.

We have floated the idea that digital platforms’ gatekeeping power entails responsibilities to the public and that self-regulation appears inadequate. We did not expect Facebook to make our arguments for us.

Facebook seems to concede that it - like broadcasters - exercises gatekeeping control over attention, advertising dollars, and political debate, and therefore has a fiduciary responsibility of some kind. But the platform wants to cherry-pick only the permissive aspects of regulation: carrying political ads without moderating them for disinformation. What Facebook fails to acknowledge is that it isn’t neutral. It is favoring candidates who smear their opponents and amplify baseless conspiracies. It’s not just that the platform takes these ads; its algorithmic design juices their circulation, advantaging the incendiary over the informative to increase engagement.

There is a more serious risk in platforms operating without obligations. What’s to prevent a platform from demoting one candidate’s ad and promoting another’s? The Facebook-Warren dispute makes this danger concrete. In a leaked address to employees two weeks ago, Zuckerberg explicitly called Warren an “existential threat” to the company and promised Facebook would take a Warren administration “to the mat” if it tried to enforce antitrust laws against the platform. Of course, no rules or accountability mechanisms stop the company from taking her out now, ensuring there can never be a Warren administration.

Harvard’s Jonathan Zittrain has described Facebook’s power to conduct “digital gerrymandering”, favoring some candidates or political parties over others. The platform could theoretically depress the circulation of ads or unpaid content favorable to a candidate; charge her opponents less for ads; or target reminders to vote at her opponents’ likely base rather than hers. Our understanding of Russia’s interference in the 2016 election is only possible because the Senate Intelligence Committee forced the platforms to hand over data.

Of course, candidates could try to combat any platform bias through their own digital astroturf campaigns - by buying even more ads, building audiences for content sites and affinity groups designed to spread viral outrage, and renting networks of bots and trolls. Competition over who can generate the most virality in an algorithmic system that promotes conspiracy and rage would fuel the kind of disinformation arms race that can only further weaken democracy.

A regulatory regime that takes platform gatekeeping power seriously would entail clear principles to protect the public interest online, especially in the form of transparency, user safety and control, and platform accountability. Political ads and bots would be more clearly labeled and their funding and reach disclosed; after-action reports would be available to researchers and the government. Platforms would need to comply with online versions of discrimination and harassment laws, adopt a code of conduct for hate speech, and grant users control of their own newsfeeds and data. Platforms would also need user consent to exploit personal data to micro-target content or run experiments, and might even pay a tax for certain data practices. There could also be a public network alternative and funding for a PBS for the internet.

American lawmakers of both parties have long recognized the danger that an information chokehold poses to democratic self-government. That recognition led to public rules ranging from the Radio Act of 1927 to the Communications Act of 1934 to the 1967 Public Broadcasting Act. Those rules have forced broadcasters to air political ads even when negative or false. But they have also required broadcasters to operate with transparency, concern for the public, and some degree of accountability. We should expect as much from our new media gatekeepers.

Ellen Goodman is a professor at Rutgers Law School, where she is co-director and co-founder of the Rutgers Institute for Information Policy & Law. Karen Kornbluh is a senior fellow and director of the Digital Innovation & Democracy Initiative at the German Marshall Fund of the US.

Guardian Service