What to do about Facebook (Meta)?
One Potentially Worthy Proposal, But Otherwise Nothing Easy and Likely Stalemate
Wow, what a difference a few years makes. I remember only a few years ago the chatter about Facebook CEO Mark Zuckerberg running for President. Now you can hardly read a newspaper, magazine, or blog post without encountering the harms that the newly renamed Meta (replacing the out-of-style Facebook name) is inflicting on society, much of it amplified through the recent testimony of former Facebook employee and now whistleblower Frances Haugen, and the multiple stories by The Wall Street Journal based largely on her account.
Three alleged harms seem paramount. The first two are acknowledged in Facebook’s own research reports or other internal company documents.
First, Facebook’s algorithms are driving people into “filter bubbles”, where like-minded users reinforce each other’s beliefs and biases, fragmenting truth, weakening our social ties, and intensifying political polarization. On this, there seems to be a consensus across the political spectrum.
Second, Instagram makes it all too easy for young children to become addicted and for young girls especially to be psychologically harmed by its platform. At this writing, the attorneys general of eight states have launched investigations into this matter.
Third, members of both political parties believe that Facebook’s news feeds and/or content practices are biased against them. Republican members of Congress have even pushed back on Haugen’s claims and put forward their own former Facebook employee, Kara Frederick, to back their claims of Facebook’s anti-conservative bias: https://www.politico.com/news/2021/12/01/frances-haugen-congress-big-tech-523632.
There seems to be a consensus that the “government” should do “something” to address at least the first two of these harms. Even Facebook itself has called for regulation, at least in principle. Totally apart from the unique issues raised by Facebook, there appears to be mounting bipartisan support for regulating “Big Tech” more broadly.
But as they say, the devil is in the details. Even if Congress and the country were not so politically divided, it is now clear there is no consensus, at least in Congress, on any specific reform, or set of reforms, to address the various problems attributed to Facebook. And it’s not just a matter of politics. With one exception I discuss below, every “solution” I have seen entails its own possible, if not likely, set of unintended consequences whose harms could easily outweigh any good they would do.
No wonder the paralysis that exists over “what to do about Facebook.” I now go through several of the more widely discussed suggestions and highlight their drawbacks. I may have missed some ideas, or readers may disagree with my assessments, in which case I’d very much appreciate your feedback in the comments section.
Require Online Platforms to Pay for User Data
One of the leading critics of all tech platforms, not just Facebook, is Emeritus Harvard Business School professor Shoshana Zuboff, whose book Surveillance Capitalism shows in much detail how all the platforms hoover up our online data: both the data we willingly and knowingly provide about ourselves, such as our profiles and comments on Facebook, Twitter, and Instagram, and the “data exhaust” that platforms like Google, but also many websites, collect from the sites we visit. Here’s a summary of her critique: https://www.nytimes.com/2021/11/12/opinion/facebook-privacy.html.
Encouraging more content moderation through liability rules or through formal government regulation or breaking up the company in some manner – all discussed shortly – only would treat the symptoms of the “Facebook problem,” in Zuboff’s view. The root cause, she argues, is Facebook’s ability to extract our data without our express permission.
Presumably, the answer to that problem, which many others before Zuboff have highlighted -- see the extensive writings of Jaron Lanier, for example -- is to require Facebook (and why not other information collectors, namely all businesses?) to gain “opt-in” consent for such collection, including its sale to third parties. With such a requirement in place, Facebook might even have to pay users for that right. (California has had a privacy law in place since 2020 requiring all businesses in that state earning at least $25 million in annual revenue, among other things, to give people a right to opt out of data collection. That is a weaker protection than an opt-in regime, under which personal data cannot be collected unless consumers affirmatively say “yes” to its collection.)
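The difference between the two regimes is easiest to see in the default that applies to a user who never answers the consent question. Here is a minimal sketch in Python; the function name and the three-state choice model are my own illustrative assumptions, not drawn from any statute or platform code.

```python
# Hypothetical consent check contrasting opt-out vs. opt-in defaults.
# Under opt-out (California-style), collection proceeds unless the user
# objects; under opt-in, it proceeds only with affirmative consent.

def may_collect(user_choice, regime):
    """user_choice: 'yes', 'no', or None (user never answered)."""
    if regime == "opt-out":
        return user_choice != "no"    # silence counts as consent
    if regime == "opt-in":
        return user_choice == "yes"   # silence blocks collection
    raise ValueError(f"unknown regime: {regime}")

# The regimes differ only for the (very common) silent user:
print(may_collect(None, "opt-out"))  # True: data may be collected
print(may_collect(None, "opt-in"))   # False: data may not be collected
```

Because most users never change a default, which default the law picks largely determines how much data gets collected in practice.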
I am sympathetic to a broader opt-in requirement, at least for companies like Facebook whose services or products reach close to public utility status, but I also recognize the limits of this idea. Even in an “opt-in” world, given how dependent users are on Facebook’s platform – think of all your Facebook friends accumulated over the years – it is not all that likely, at least in my view, that many users, even without payment, would refuse to grant Facebook consent to use their information to hone the company’s targeted advertising business (ad targeting is how Facebook, Google, and many other online businesses make money). I am more optimistic, however, that many, perhaps most, consumers would not consent to have their data sold to third parties, unless they were paid for it.
Zuboff argues that “our lawmakers” must address some basic questions about Facebook because it plays such a central role in our lives. How should we “structure and govern information, connection and communication in a democratic digital century? What new charters of rights, legislative frameworks and institutions are required …” And so on. But with no answers! That’s precisely the problem with all the Facebook critiques: lots of problems, and no clear answers that don’t carry a lot of potential unintended consequences.
Government Oversight of Facebook’s Content
Facebook knows it has a content problem and so created an independent (some question how independent) “Oversight Board” to deal with it. A private-sector Supreme Court, populated by a diverse range of superstar experts from all over the world: https://www.oversightboard.com/meet-the-board/.
The company created the Board not only to respond to fierce criticism from all parts of the political spectrum, but to get its CEO and founder, Mark Zuckerberg, out of the hugely difficult and time-consuming (no win) process of deciding what content should be eliminated, and who should be banned from its platform.
The Board also headed off calls for direct government content oversight – for example, Senator Josh Hawley’s 2019 proposal to eliminate liability protection for online platforms (like Facebook, Twitter and Google) unless 4 out of 5 Commissioners of the Federal Trade Commission determine that the platforms are not “politically biased.” More about the liability issue later, but for now, one rebuttal to the Hawley idea, or variations thereof, should suffice.
As others have highlighted, there is no objective way any government agency can decide whether a platform has political bias: https://www.vox.com/recode/2019/6/21/18693505/facebook-google-twitter-regulate-big-tech-hawley-bill-congress. For this reason, vesting any government agency with the authority to make that determination almost surely violates the First Amendment. Reminder, here is what the Amendment says: “Congress shall make no law …abridging the freedom of speech.” (The Supreme Court has carved out limited exceptions, such as shouting fire in a crowded theater, or libel or slander, but it would be more than a stretch to say that “political bias” falls within these exceptions).
One content oversight idea has more merit, though it comes with one important downside. As the Haugen disclosures reveal, the Facebook algorithm that drives traffic to your news feed is five times more likely to pass something on to you that has drawn an “angry” reaction than a mere “like.” Haugen urges that if Facebook won’t eliminate this algorithmic bias toward disseminating content more likely to inflame than to inform, then a government agency ought to be able to step in and halt the practice.
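To make the mechanism concrete, here is a minimal Python sketch of engagement-weighted feed ranking. The 5:1 weight on “angry” reactions follows the figure from the Haugen disclosures; everything else (the data shapes, the scoring formula) is an illustrative assumption, not Facebook’s actual code.

```python
# Hypothetical sketch of engagement-weighted feed ranking.
# The 5x "angry" weight mirrors the reported disclosure; the rest
# of the structure is an assumption for illustration only.

REACTION_WEIGHTS = {"like": 1, "angry": 5}  # reported 5:1 ratio

def score(post):
    """Score a post by weighted reactions; default weight is 1."""
    return sum(REACTION_WEIGHTS.get(reaction, 1) * count
               for reaction, count in post["reactions"].items())

def rank_feed(posts):
    """Order the feed by engagement score, highest first."""
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "calm",  "reactions": {"like": 100}},
    {"id": "angry", "reactions": {"angry": 30}},
]

# 30 angry reactions (score 150) outrank 100 likes (score 100).
print([p["id"] for p in rank_feed(posts)])  # ['angry', 'calm']
```

The point of the sketch is that nothing about the content itself matters to the ranking: a post that draws a modest number of angry reactions beats one with far broader, calmer approval, which is exactly the bias Haugen describes.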
There is a broader principle here: a government agency, perhaps the FTC, should have the power to regulate the product design of online platforms (again, not the content itself) where that design poses an undue danger to the public that outweighs the social costs of the regulation, just as the National Highway Traffic Safety Administration has long regulated car design and the Consumer Product Safety Commission regulates consumer products.
I realize having a regulatory agency regulating software algorithms could have counter-productive effects, which is one reason the courts in antitrust suits have been reluctant to regulate software design. But things are different when clear harm has already been demonstrated and a reasonable fix is available.
Here’s the downside. Punishing firms like Facebook for the results of internal research that revealed the danger almost certainly would chill future internal research efforts, especially if there is no satisfactory way to subject online platforms to liability suits, which I suggest below is likely. (For all the criticism of lawyers and lawsuits, exposing firms to liability for the harms they cause gives them an incentive to conduct such research in order to avoid lawsuits.)
Nonetheless, the tradeoffs deserve more debate. The algorithm problem is very real and worries people across the political spectrum. Are we willing as a society to accept the downsides of losing internal company research that would reveal these problems? I don’t know, but it is a question worth more debate. At the very least, the algorithm problem highlights the point I started with: there are no problem-free answers to the problems created by online platforms.
Some advocate taking one further step beyond the after-the-fact government review that Haugen has suggested – too far in my view – by having the features of online platforms subject to pre-approval, much as the FDA now does with new drugs, or as the European Union regulates new products by using the “precautionary principle.” We in the US have always operated differently: we wait for something bad to show up and then regulate it, rather than trying to anticipate every bad thing in advance and not letting something onto the market unless safety is assured beforehand. That makes sense in the case of drugs, whose side effects can be deadly, but less so, or not at all, with other new products and services, such as online services, whose consequences cannot be fully anticipated and tested through randomized clinical trials in the way that new drugs can.
Breaking Up Facebook
There have been many calls to use existing antitrust laws, or enact a new one, to break up the big online platforms, not just Facebook, but also Amazon and Google. In fact, at this writing, the FTC and 46 state attorneys general are seeking Facebook’s breakup under Section 2 of the Sherman Act, which prohibits “monopolization” – not merely having a monopoly but taking various anti-competitive acts to achieve or maintain it. This Act is hardly new. It was enacted in 1890 and has been used to break up Standard Oil and AT&T, among other monopolies.
The FTC’s case alleges that Facebook has cemented its monopoly in social media through its acquisitions of Instagram and WhatsApp, among many other companies, even though the agency did not earlier challenge those transactions under the anti-merger provisions of the Clayton Act of 1914. A federal court dismissed the first lawsuit but several weeks ago the FTC refiled an updated version of its complaint.
Even if the government eventually “wins” its Facebook lawsuit – and, if history is any guide, it could take several years to resolve – it is not clear that a court would order Facebook to spin off Instagram and WhatsApp now that their operations and software have been fully integrated with Facebook’s main platform. In other words, a court may find that now that the corporate eggs have been fully scrambled, it is impossible to unscramble them.
But suppose several years from now somehow, some way Facebook is ordered to unscramble its eggs. Will that solve Zuboff’s privacy problem? The dissemination of misinformation problem? Or the alleged political bias problem? Not really. There would be more competition, which is the proper goal of the antitrust laws. But breakup does not solve the other problems associated with Facebook.
Indeed, it is quite possible that with more competition, the social fragmentation fueled by information silos could even get worse. Former President Trump launched a shell company – a “SPAC” – in October, whose main purpose ostensibly is to launch a Trump-friendly social network to counter Facebook’s. Already, the SPAC is under a legal cloud: https://news.yahoo.com/trumps-300-million-spac-deal-140914411.html. But suppose, for argument’s sake, the SPAC finds and buys (or creates) a real company that can build out such an alternative network, and suppose it succeeds. Trump-friendly Facebook users might then be tempted to jump to the new Trump network, and the information silo problem would get a bit worse than it is now. I say only a “bit” worse because progressives and Trump supporters already largely live in their own silos on the existing Facebook platform.
There is one unusual possible plot twist if the Trump network becomes real. The major challenge any such network faces is prying away Trump-leaning users from Facebook. That is because however much those individuals may be attracted to an alternative network, in principle, many are already “locked-in” to some degree to their Facebook account, where they have all their friends and family, some of whom they would “lose” if they moved to another network.
For that reason, with much irony, expect to see any Trump network backing one major “progressive” antitrust idea: “mandatory interoperability” with existing platforms, which means that Facebook could be required to connect users from other platforms, such as a new Trump network, with its users. Thus, a post from a Trump network user could be seen by that person’s “friends” on Facebook as well as on the new Trump network, and likewise the posts of Trump users’ Facebook friends would show up on the Trump network.
Of course, a new law would have to be passed to make all that happen, and at least during the three years that President Biden is in office, he could be expected to veto any such bill, even if passed by a Republican-controlled Congress after the mid-term elections (which many expect to happen). But if Republicans take back the White House and continue to control Congress after the 2024 election, and if the filibuster rule is modified to allow such a bill to pass over Democratic objections, then the new Trump network would be much more commercially viable than it might be otherwise.
Revise/Eliminate Section 230 Immunity
Finally, I turn to the most widely discussed Facebook-related reform of all (and it applies equally to all large online platforms): a scaling back or elimination of the current immunity from liability lawsuits that online networks and Internet Service Providers (such as your cable, telecom or satellite internet company) enjoy under Section 230 of the Communications Decency Act. Section 230 treats these providers differently from newspapers: they are not subject to suit for the comments made on their platforms.
Section 230 was enacted in 1996 when the internet was in its infancy and the major online networks of today – Facebook, Twitter, Instagram, and so on – hadn’t been invented. The thought was that because the ISPs weren’t speaking, but just providing internet service, they shouldn’t be liable for what others might say on the Internet. Moreover, if online platforms were held liable for what others might say on their networks, that could have killed future platforms while they were infants in the cradle, or kept them from being conceived at all.
Clearly, those days are long gone, and while it seems that everyone wants to cut back the Section 230 liability immunity in some manner, there is no consensus yet on how: https://deadline.com/2021/12/section-230-facebook-big-tech-censorship-frances-haugen-1234883163/.
I am not going to advance my own favorite Section 230 reform idea because I’m not sure I have one that I am convinced is right, will not have undue unintended consequences, and doesn’t violate the First Amendment.
But I have my reservations about the effectiveness of any enforcement of a cutback in Section 230 lawsuit immunity that might eventually be passed (one day) by Congress.
First, unless individuals can show they have been economically harmed in some fashion by the “speech” for which an online platform may be liable – for example, a teenager who can prove damage to her health from body shaming on Instagram – they won’t have constitutional “standing” under Article III of the Constitution to file a lawsuit. This may be the case even if Congress says they can sue. Twice in recent years the Supreme Court has ruled that even where Congress has passed a law enabling people to sue, they don’t automatically have that right under the Constitution unless they can show that they have suffered some “concrete” harm. https://datamatters.sidley.com/u-s-supreme-court-tightens-standing-requirements-in-transunion-decision.
In practice, this means that if Congress were to strip Facebook of its lawsuit protection for “harmful” algorithms – harmful in this context meaning harm to the “body politic” that, say, increases political division – that doesn’t mean that you or I can march into court and file a lawsuit about the algorithm. The “body politic” doesn’t sue; people do. And even though an algorithm may be ripping our country apart, that doesn’t mean that you or I suffer “concrete harm,” namely an economic loss or a loss of our rights. No standing.
Second, if ordinary people can’t sue a less-legally-protected Facebook unless they can show actual harm, then what about the government? Why can’t Congress enable the Justice Department to force Facebook or any other online platform to weed out dangerous content? As implied by a point I made earlier, such a law might be constitutional if the “speech” being stopped is not protected by the First Amendment – such as shouting fire in a crowded theatre. But most of the speech that politicians want stopped – censorship to some, stoking hate to others – does not, at least in my view, rise to that standard consistent with the First Amendment.
Of course, the online platforms themselves have taken it upon themselves to try to rid their platforms of various forms of objectionable content. But that doesn’t mean that Congress has a free hand in forcing them to do so, in one fashion or another.
Bottom line: You know that song “Breaking Up Is Hard to Do?” So is figuring out what to do about Facebook.
PS – For those who have been loyally reading my posts, thank you, and I apologize for having been off the grid for Thanksgiving. Now back on.
The remarkably simple thing Congress could do is force social media platforms to disable (or, at minimum, offer consumers the option to disable) their algorithms. There is no reason I cannot have a social media feed that is purely chronological, showing only those accounts I choose to follow, rather than having Facebook's algorithm choose what to show me. Indeed, a chronological feed of those you follow is how Twitter is constructed. Turn off the algorithms.
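Part of this proposal's appeal is that a chronological feed is trivial to implement. The following Python sketch, with hypothetical data shapes of my own invention, is essentially the entire "algorithm":

```python
# Minimal sketch of a purely chronological feed: only accounts the
# user follows, newest first, no engagement scoring of any kind.
# The post/record structure here is hypothetical, for illustration.
from datetime import datetime, timezone

posts = [
    {"author": "alice", "text": "hello", "ts": datetime(2021, 12, 1, tzinfo=timezone.utc)},
    {"author": "bob",   "text": "news",  "ts": datetime(2021, 12, 2, tzinfo=timezone.utc)},
    {"author": "carol", "text": "viral", "ts": datetime(2021, 12, 3, tzinfo=timezone.utc)},
]

following = {"alice", "bob"}  # accounts this user chose to follow

def chronological_feed(posts, following):
    """Filter to followed accounts, then sort by timestamp, newest first."""
    mine = [p for p in posts if p["author"] in following]
    return sorted(mine, key=lambda p: p["ts"], reverse=True)

# carol is excluded (not followed); bob's newer post comes first.
print([p["author"] for p in chronological_feed(posts, following)])  # ['bob', 'alice']
```

A filter and a sort: no reaction weights, no amplification, and nothing for a platform to tune toward outrage, which is exactly why the option is so simple to offer.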