On February 21, the nine justices heard oral arguments in Gonzalez v. Google, a case brought by Reynaldo Gonzalez, whose daughter was killed in a 2015 ISIS terror attack in Paris. Gonzalez alleges that YouTube's recommendation algorithm aided the attack by pushing the group's recruitment videos to the people most susceptible to their message. The outcome of the case could decide the future of social media platforms worldwide.

At the heart of the case is the question of whether tech companies should be held liable for harmful content posted on their platforms by their users. They are currently protected from such liability by Section 230 of the Communications Decency Act, passed as part of the Telecommunications Act of 1996, legislation whose primary purpose was to increase competition in broadcasting and telecom markets. It is a protection that has shielded companies whose platforms have enormous reach and influence from being held responsible for harms caused by extremist content and disinformation. But it is also a fundamental underpinning of free speech online.

"The purpose of Section 230 was to try to prevent platforms from becoming the soccer ball that gets kicked around whenever people disagree about what appropriate free expression on the internet is," says Andrew Sullivan, president and CEO of the Internet Society, which filed an amicus brief in support of Section 230. "If you start to mess with this, you're fundamentally messing with the design of the internet. And that is going to lead to a splintering of the network."

Debates over Section 230 were largely confined to the circuit courts, the lower rungs of the US federal court system, for nearly two decades. That changed after the 2016 presidential election, when Republican lawmakers began to seize on and amplify often spurious claims that platforms were censoring conservative users.
That message proved effective in galvanizing elements of their base, and Republican figures have continued to accuse major tech firms, such as Meta and Twitter, of bias.

One prominent example of this supposedly "biased" enforcement is Facebook's 2018 decision to ban Alex Jones, host of the right-wing Infowars website, who was later ordered to pay $1.5 billion in damages over his harassment of the families of the victims of a mass shooting. Many of the actions that infuriated Republicans were shielded by the First Amendment to the US Constitution, which guarantees free speech. Those protections are essentially unassailable legislatively, so lawmakers targeted Section 230 instead.

Starting in 2018, prominent conservatives began demanding changes to the law that would expressly hinge Section 230's liability protections on how companies treat political speech. High-profile Republicans, including Missouri senator Josh Hawley and Texas senator Ted Cruz, frequently misconstrued the section's language. "The predicate for Section 230 immunity … is that you're a neutral public forum," Cruz said in 2018, interpreting the law as shielding only websites that treat left- and right-wing political views equally.

Gonzalez v. Google takes a different tack, focusing on platforms' failure to deal with extremist content. Social media platforms have been accused of facilitating hate speech and calls to violence that have resulted in real-world harm, from a genocide in Myanmar to killings in Ethiopia and a coup attempt in Brazil. "The content at issue is obviously horrible and objectionable," says G. S. Hans, an associate law professor at Cornell University in New York. "But that's part of what online speech is.
And I fear that the sort of extremity of the content will lead to some conclusions or doctrinal implications that I don't think are really reflective of the larger dynamic of the internet."

The Internet Society's Sullivan says that the arguments around Section 230 conflate Big Tech companies, which as private companies can decide what content is allowed on their platforms, with the internet as a whole.

"People have forgotten the way the internet works," says Sullivan. "Because we've had an economic reality that has meant that certain platforms have become overwhelming successes, we have started to confuse social issues that have to do with the overwhelming dominance by an individual player or a small handful of players with problems to do with the internet."

Sullivan worries that the only companies able to survive tighter liability rules would be the largest platforms, further calcifying the hold that Big Tech already has.

Decisions made in the US on internet regulation are also likely to reverberate around the world. Prateek Waghre, policy director at the Internet Freedom Foundation in India, says a ruling on Section 230 could set a precedent for other countries.

"It's less about the specifics of the case," says Waghre. "It's more about [how] once you have a prescriptive regulation or precedent coming out of the United States, that is when other countries, especially those that are authoritarian-leaning, are going to use it to justify their own interventions."

India's government is already moving to take more control over content within the country, including establishing a government-appointed committee on content moderation and stepping up enforcement of the country's IT rules. Waghre suspects that if platforms have to implement policies and tools to comply with an amended, or entirely obliterated, Section 230, they will likely apply those same methods and standards to other markets as well.
In many countries around the world, big platforms, particularly Facebook, are so ubiquitous as to essentially function as the internet for millions of people.

"Once you start doing something in one country, then that's used as precedent or reasoning to do the same thing in another country," he says.