Illustration by Eleanor Shakespeare for TIME
The Trump era ended in 2021 with a violent mob storming the seat of American democracy. Among the many factors behind the riot—from white supremacy to President Trump’s inflammatory rhetoric—experts largely agree that the flourishing of misinformation online played a major part. But when we look back on the 2020s, will that dark day in January be seen as a crescendo, or as an omen?
For many of the most thoughtful analysts of 21st century democracy, any answer to that question runs through the terrain of Silicon Valley. Social media has connected families across oceans, allowed political movements to blossom and reduced friction in many parts of our lives. It has also led to the rise of industrial-scale misinformation and hate speech, left many of us depressed or addicted, and thrust several corporations into unprecedented roles as the arbiters of our new online public square. Our relationships, the way we’re governed and the fates of businesses large and small all hinge on algorithms understood by few and accountable to even fewer.
This was made clear to many Americans in the days after the Capitol riot, when Trump was suspended from Twitter, Facebook and eventually YouTube for his role in inciting the violence. Some denounced the moves as censorship; others wondered why it had taken so long. One thing most agreed on: Silicon Valley CEOs should not be the ones making such momentous decisions.
Under President Joe Biden, tech reform will take on a new, almost existential urgency for American democracy. With the new Congress, his Administration can set the terms to regulate an industry that, since its birth amid the tech optimism of the 1990s, has produced the most powerful corporations on earth while escaping almost all oversight. “This decade is critical,” says Shoshana Zuboff, a professor emerita at Harvard Business School and the author of The Age of Surveillance Capitalism. “Our sense of the possible has been crippled by two decades of helplessness and resignation under the thumb of the tech giants. That is changing as we come to understand the wizard behind the curtain. Right now, we are eager for change.”
The global tide of public opinion is turning against the tech companies. Activists have been sounding the alarm for years, particularly after the 2017 Rohingya genocide in Myanmar, when hate speech shared by extremists on Facebook fanned the flames of ethnic cleansing, which the platform later admitted it did not do enough to prevent. The Cambridge Analytica scandal of 2018 brought home to people that personal data they had given up so freely to Facebook could be used against them. And while the platforms say they are doing all they can to scrub misinformation, hate speech and organized violence from their sites at scale, they’re often failing. The consequences of those so-called online harms appear in the offline world, in ISIS recruitment, white-supremacist terror, vaccine skepticism and the mainstreaming of conspiracy theories like QAnon.
“Social media has introduced this large-scale vulnerability into our media ecosystem,” says Joan Donovan, research director of the Shorenstein Center on Media, Politics and Public Policy at Harvard University. “And they want to deal with it on a case-by-case basis rather than looking at the design of their products.” Like other experts, Donovan says the core problem with the social platforms lies in the algorithms that decide which content to amplify, based on the amount of “engagement” it provokes. Posts that are hateful or controversial or play into preconceived biases tend to gain more likes, comments and shares than those that are thoughtful, lengthy or nuanced.
“The platforms want people to stay on their sites as long as possible, and so there was always an incentive for content that was going to be emotionally resonant,” says Whitney Phillips, an assistant professor at the Syracuse University department of communication and rhetorical studies. “It’s not that these platforms love hate speech. It’s that their algorithms were designed to make sure people were seeing the kinds of things that were going to keep them on the site.”
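The ranking logic Donovan and Phillips describe can be caricatured in a few lines of code. This is a deliberately simplified sketch, not any platform's actual system: real feed rankers weigh hundreds of signals, and the weights and sample posts below are invented for illustration.

```python
# Hypothetical sketch of engagement-weighted feed ranking.
# Weights are invented; real systems use many more signals.

def engagement_score(post):
    """Score a post by the reactions it provokes, not by its accuracy."""
    return (post["likes"] * 1.0
            + post["comments"] * 2.0   # arguments drive comment threads
            + post["shares"] * 3.0)    # outrage travels via shares

def rank_feed(posts):
    """Order a feed so the most engaging posts appear first."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"text": "Nuanced policy analysis", "likes": 40, "comments": 5,  "shares": 2},
    {"text": "Inflammatory rumor",      "likes": 30, "comments": 50, "shares": 45},
]

# The rumor outranks the analysis despite earning fewer likes,
# because comments and shares count for more.
print(rank_feed(posts)[0]["text"])
```

Under this scoring, the inflammatory post wins not because anyone chose to promote falsehood, but because the objective being optimized is attention, which is the dynamic the researchers quoted here are describing.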
The unaccountable power of the tech platforms lies not just in the algorithms that dictate what posts we see, but also in how that translates to profits. As Zuboff argues, the wealth of the Big Tech companies has come from extracting data about our behaviors and using the insights from those data to manipulate us in ways that are fundamentally incompatible with democratic values. Our emotions and behavior can now be intentionally, and covertly, manipulated by the platforms: in 2014, Facebook revealed it had conducted a study that found it could successfully make people more happy, or sad, by weighting posts differently in the News Feed. And it has long boasted that its I VOTED sticker increases voter turnout.
The business models of Facebook and Google are grounded in that manipulative power, because advertisers who want to know how people will behave will pay handsome sums to steer that behavior. But the same business model has produced personal news feeds that let anyone choose their own reality, and a shared delusion that propelled thousands of Americans toward the U.S. Capitol gates.
In the brighter future proposed by Donovan, Phillips and others, the deplatforming of a figure like Trump would be a last resort—something that could be avoided entirely, given time, by tweaking the algorithms so as to push misinformation, hatred and violence out of the center of our political discourse and back to the fringes where it came from.
Movement is already under way to build that future. Even before Biden took office, federal and state enforcers were pursuing new antitrust cases against Facebook and Google with renewed vigor, which could result in the platforms facing massive fines or even being broken up. Still, experts say, breaking up the firms will do nothing unless fundamental safeguards are put in place to limit their business models, much as strict regulations govern food safety and aviation.
One measure touted by President Biden is the repeal of Section 230 of the Communications Decency Act, the federal law that protects social media companies from being sued for hosting illegal content (a provision that allowed them to scale quickly and without risk). The platforms oppose the outright repeal of Section 230, arguing that it would force them to censor more content. Experts say Biden will need not just to repeal the law but to replace it with a progressive, future-oriented version that gives social platforms the protections they need to exist but also offers built-in mechanisms to hold them to account for their worst excesses.
Across the Atlantic, there is already a model that American reformers could choose to follow. In December, the E.U. and U.K. each proposed sweeping new laws that would force tech companies to make their algorithms more transparent and, eventually, accountable to democratically elected lawmakers. A key part of the E.U.’s proposal is that large tech companies could be fined up to 6% of their annual global revenue (several billions of dollars) if they don’t open up their algorithms to public scrutiny and act swiftly to counter societal harms stemming from their business models. Crucially, however, the law would safeguard platforms from liability for hosting illegal content unless they were shown to be consistently negligent about removing it.
The architects of those laws say the U.S. should not act too hastily. “Repealing Section 230 would probably have massive unintended consequences for privacy and free speech,” says Felix Kartte, a former E.U. official who helped draft the bloc’s proposed Big Tech regulation. “Instead, U.S. lawmakers should join the E.U. in forcing tech companies to protect the rights and safety of their users and to mitigate large-scale threats to democracy. Rather than regulating content, this would imply holding Big Tech to account for the ways in which their design features curate and amplify content.” Welcoming Biden’s Inauguration as a “new dawn” on Jan. 20, European Commission president Ursula von der Leyen said she would like to work with the new U.S. President to write a common digital “rule book” to rein in the “unbridled power held by the big Internet giants.”
Even under Biden, one significant roadblock remains: the fact that business as usual is highly profitable for the Big Tech companies. Social media’s appeal is in creating community, Zuboff notes. “But Facebook’s $724 billion market capitalization doesn’t come from connecting us,” she says. “It comes from extracting from our connection.”
While tech CEOs say they welcome regulation, Silicon Valley lobbyists in Washington are frantically working to pre-empt any restrictive legislation that might affect their burgeoning wallets. (Facebook and Amazon each spent more money on federal lobbying in the first three quarters of 2020 than in the same period in any previous year, according to the nonpartisan Center for Responsive Politics.) “It’s really going to matter who specifically the Biden Administration listens to,” Donovan says. “If you get the platform companies’ conceptualization of the problem, it’s individual pieces of content. If you get the researchers’ approach to the problem, it’s the design of the service.”
Fixes to these problems won’t happen overnight. Phillips, the Syracuse professor, offers a metaphor of the platforms as factories leaking toxic waste into our democracies. Beyond plugging the leak, regulating the factories and perhaps seeking a cleaner form of energy altogether, detoxifying more than a decade’s worth of pollution will take time. Still, if democracy reasserts itself over the tech giants in the years to come, the year 2030 will be a brighter one for our humanity and our democracies.
This appears in the February 1, 2021 issue of TIME.