In legal disputes as in life, sometimes what is not said reveals more than what is. Consider the briefs filed with the Supreme Court in defense of a law that gives Google and other tech companies limited immunity from lawsuits.
Gonzalez v. Google, scheduled for oral argument before the Supreme Court on Tuesday, concerns Section 230 of the Communications Decency Act, a law enacted in 1996 to regulate the then-new Internet. Children’s advocacy groups that filed amicus briefs in the case noted that social media platforms intentionally harm children by presenting dangerous content in addictive ways. It is worth noting that none of the dozens of briefs submitted by technology companies addressed these harms.
One of Congress’s primary purposes in enacting Section 230 was, as some senators put it, “much-needed protection for children,” not only from explicit content but also from abuse. Ironically, the platforms now argue that Congress actually intended to give them immunity for business decisions they know will harm children.
The Gonzalez case was brought by the family of an American who was killed by ISIS in the 2015 Paris terrorist attacks. The family claims that, as a foreseeable result of Google’s efforts to keep as many eyes as possible on YouTube, terrorist recruitment videos were served to people likely to be interested in terrorism. In a similar case on Wednesday, Twitter v. Taamneh, the court will examine whether the platforms’ alleged failure to take “meaningful steps” to remove terrorist content violates federal anti-terrorism law.
The implications of the rise of social media go beyond increased access to terrorist content. During the years when Instagram grew from 1 million to 1 billion users, the United States saw a staggering 146% increase in gun suicides among children between the ages of 10 and 14, and an unprecedented 57% increase in suicides among young people overall. Although the correlation between the platforms’ growth and the youth mental health crisis does not prove causation, Facebook’s own leaked internal research noted that 6% of teenage American Instagram users “trace their desire to kill themselves” to the platform.
Similarly, researchers and clinicians have repeatedly documented the social media-related mental and physical harms to children. Last Monday, the U.S. Centers for Disease Control and Prevention reported that teenage girls are suffering record levels of sadness and suicide risk, which some experts attribute in part to the rise of social media. And on Tuesday, a U.S. Senate committee heard heartbreaking stories about the dangers of, as one grieving parent put it, “the sheer power of the social media industry.”
Social media platforms make money by selling ads. More time spent on a platform means more attention to its ads, which means it can charge more for those ads. And the more time a user spends on the platform, the more data the platform collects about that user, which in turn can be used to keep the user on the platform even longer.
Humans do not personally decide who sees what on these platforms. Instead, humans instruct AI technologies to maximize what the platforms call “user engagement.” The AI does this at breakneck speed by testing which recommendations work best across billions of users. It then delivers content based not only on what a child says she wants but on what is statistically most likely to keep kids like her glued to the screen. Often, the answer is whatever exploits her fears and anxieties.
This means that, with grim regularity, advice about suicide is served to depressed teens, anxious girls get body-image content that promotes eating disorders, and young people curious about drugs get opportunities to buy fatal fentanyl-laced pills. Moreover, the platforms use neuroscientifically engineered tricks, such as automatic scrolling, constant reminders to return to the platform and dopamine-triggering “likes,” that can be addictive for children. Often, kids who genuinely want to turn the platform off can’t; their brains are not developed enough to resist addiction to the degree adults can.
To maintain growth every quarter, platforms have to find ways to attract more users and keep them longer. If platforms are allowed to keep deploying technology that they know will harm large numbers of children, without fear of financial consequences, they will continue to perfect that technology, and more children will be harmed. The child suicide and mental health crisis we are living through will only get worse, with no end in sight.
It doesn’t have to be this way. Google’s own method of prioritizing content for searchers, designed to rank websites by their expertise, credibility and reliability, shows that there are ways of deciding who sees what that are far less dangerous to children, and to everyone else.
The court’s decision won’t end the controversy over Section 230, but it may begin to return the law to its original goal of protecting children. Whether it should be legal to intentionally weaponize children’s vulnerabilities against them ought not to be a matter of debate at all.
And if we can’t agree on that, then anyone who believes that unprecedented harm to children is the price society must pay for freedom on the Internet should at least acknowledge that harm.
Ed Howard is a senior advisor at the University of San Diego School of Law’s Children’s Advocacy Institute.