HIGH SCHOOL ESSAY CONTEST WINNER: First Amendment and Artificial Intelligence: Social Media Regulation

By: Chloe Kim

When debating the political influence of social media, one cannot overlook the pervasive spread of viral misinformation across billions of social media posts. Behind the scenes are the algorithms tasked with policing such powerful, politically motivated content: artificial intelligence programs that can manipulate, suppress, and censor content at unprecedented speed and scale, potentially becoming a covert tool for social engineering. This technology has made the intersection between false information and the First Amendment far more complex. In light of the upcoming 2024 presidential elections, American lawmakers need to reconsider legislation on the role of artificial intelligence in social media regulation.

During the 2020 election cycle, American democracy grappled fiercely with false information on social media. Russian operatives weaponized social media platforms to disseminate harmful information, deepening social unrest and political polarization. Platforms raced to counter such interference with content regulation; Facebook alone removed about 75,000 posts tied to the inauthentic accounts behind these campaigns. (Gleicher 2019)

In 2024, the threat misinformation poses to the integrity of American elections is even more apparent with generative AI, which allows anyone to produce realistic malicious content. In January 2024, a fake AI-generated image of Donald Trump sitting next to Jeffrey Epstein on the disgraced financier and sex offender’s private jet went viral on Facebook. The following month, a “Democratic consultant working for a long-shot rival admitted that he commissioned an AI-generated robocall impersonating President Joe Biden that sought to discourage thousands of voters from participating in New Hampshire’s primary.” (Barrett and Hendrix 2024)

Consequently, social media companies are being urged to adopt more rigorous measures to combat misinformation. In a call to action, the Global Coalition for Tech Justice criticized tech companies for failing to “implement adequate measures to protect people and democratic processes from tech harms, including disinformation, hate speech, and influence operations that ruin lives and undermine democratic integrity.” (Digital Action 2024) In response, the government and social media companies are developing projects to build artificial intelligence services that combat false information. As part of the Convergence Accelerator program, the Biden Administration launched Track F to “prevent, mitigate, and adapt to critical threats in our communications system,” garnering millions in funding. (Convergence Accelerator Office, n.d.)

Under Section 230 of the federal Communications Decency Act, social media companies are legally protected when they employ such artificial intelligence to regulate content. Section 230 shields providers of interactive computer services from liability for third-party content generated by their users. (“47 U.S.C. 230,” n.d.) The statute was drafted in the 1990s, when newly developing internet platforms faced a dilemma: moderate third-party content and risk being held liable for it, or refuse to moderate and risk eroding the user experience with obscene content. Section 230 has been touted as “the twenty-six words that created the internet,” shielding growing internet corporations from destructive legal ramifications and nurturing them into the giant industry they constitute today. (Kosseff 2019)

However, many fear that Section 230 is unable to keep up with the times, and that the unprecedented advancement of social media and artificial intelligence calls for a re-evaluation of an often overlooked provision of Section 230: "to remove disincentives for the development and utilization of blocking and filtering technologies." Current artificial intelligence algorithms are proving inconsistent and inaccurate, interfering with First Amendment freedoms on unreasonable grounds. Without human input, even the world's leading artificial intelligence systems struggle with unintended censorship. Michael J. Abramowitz, president of Freedom House, points out that "AI can be used to supercharge censorship, surveillance, and the creation and spread of disinformation.” (Chandran and Tabary 2023) Through automated filtering, artificial intelligence can censor information before it is ever posted, which poses even greater risks to the First Amendment because it suppresses speech before it can enter public discourse. (Simon 2020)

Overall, a valid argument can be made that the vast power of artificial intelligence erodes the core of the First Amendment. Under the libertarian approach, citizens organically adopt a self-regulatory framework known as the "marketplace of ideas," which "[lets] truth and falsity grapple on the open market, and the truth will rise to the top.” (Lenard, Lam, and Kosseff 2023) Rather than empower people to cast independent, informed votes, automated censorship may manipulate mass sentiment by arbitrarily restricting access to information. Such criticism has compelled lawmakers to introduce amendments to Section 230. One such bill, the Stop the Censorship Act, introduced in 2022, attempts to “limit a social media company's immunity from liability for screening and blocking offensive content on its platform.” (117th Congress, n.d.)

At the same time, it is important to recognize that the complex issue of social media censorship is not black and white: repealing or excessively amending Section 230 would itself restrict freedom of speech by forcing companies to over-moderate content for fear of legal liability. Although more discussion is needed to weigh both perspectives fully, it is clear that the repercussions of artificial intelligence censorship threaten the First Amendment. As one commentator warns, "Imagine a world where your word processor prevents you from analyzing, criticizing, lauding, or reporting on a topic deemed 'harmful' by an AI programmed only to process ideas that are 'respectful and appropriate for all.'"

The dangers of false digital information have only been exacerbated by the emergence of artificial intelligence. With the 2024 elections approaching, American lawmakers must now consider how artificial intelligence has arguably changed the context of Section 230. The future of over 300 million Americans lies in the hands of legislators and how they decide to regulate the use of artificial intelligence in social media algorithms.


Bibliography:

Barrett, Paul, and Justin Hendrix. 2024. “AI Isn't Our Election Safety Problem, Disinformation Is.” Time. https://time.com/6914961/ai-2024-election-disinformation/.

Chandran, Rina, and Zoe Tabary. 2023. “AI 'supercharges' online disinformation and censorship, report warns.” Reuters. https://www.reuters.com/article/idUSL8N3B53P6/.

Convergence Accelerator Office. n.d. “NSF Convergence Accelerator Phases I and II for the 2021 Cohort.” National Science Foundation. Accessed April 22, 2024. https://nsf-gov-resources.nsf.gov/solicitations/pubs/2021/nsf21572/nsf21572.pdf?VersionId=YB5eK0j6Izanw9fLsDNxxrpz9TF5bhSB.

Digital Action. 2024. “A Hundred Days into the Elections Megacycle and Tech Platforms Are Failing the Biggest Test of 2024.” Global Coalition for Tech Justice. https://yearofdemocracy.org/a-hundred-days-into-the-elections-megacycle-and-tech-platforms-are-failing-the-biggest-test-of-2024/.

“47 U.S.C. 230 - Protection for Private Blocking and Screening of Offensive Material.” n.d. GovInfo. Accessed April 22, 2024. https://www.govinfo.gov/app/details/USCODE-2011-title47/USCODE-2011-title47-chap5-subchapII-partI-sec230/summary.

Gleicher, Nathaniel. 2019. “Removing More Coordinated Inauthentic Behavior From Iran and Russia.” Meta. https://about.fb.com/news/2019/10/removing-more-coordinated-inauthentic-behavior-from-iran-and-russia/.

Kosseff, Jeff. 2019. The Twenty-Six Words That Created the Internet. Ithaca, NY: Cornell University Press.

Lenard, Thomas, Sarah Lam, and Jeff Kosseff. 2023. “Freedom of Speech in the Digital Age with Professor Jeff Kosseff.” The Technology Policy Institute. https://techpolicyinstitute.org/publications/privacy-and-security/freedom-of-speech-in-the-digital-age-with-professor-jeff-kosseff/.

117th Congress. n.d. “H.R. 8612 - Stop the Censorship Act of 2022.” Congress.gov. Accessed April 22, 2024. https://www.congress.gov/bill/117th-congress/house-bill/8612.

Simon, Eva. 2020. “Upload Filters Are Back, and We Are Still Strongly Against Them.” Civil Liberties Union for Europe. https://www.liberties.eu/en/stories/uploa-filter-back-eu-2020/18938.