Abstract
Excerpted From: Sikudhani Foster-McCray, When Algorithms See Us: An Analysis of Biased Corporate Social Media Algorithm Programming and the Adverse Effects These Social Media Algorithms Create When They Recommend Harmful Content to Unwitting Users, 18 Southern Journal of Policy and Justice 1 (May 2024) (268 Footnotes)
Algorithms are built into many programs, including those that transmit vast databases of information and files for human consumption. Prominent platforms that run on the millions of rules comprising their algorithms include Google's search engine, Instagram's “Reels” function, and Twitter's “Suggested tweets.” These platforms all operate on rules programmed into billions of lines of rapidly executing computer code. Because the code and algorithmic systems are incapable of self-critique for racial bias, problems arise when the algorithms make unchecked choices based on bias deeply embedded in their code.
The humans who write the rules and code that comprise these algorithms are themselves often unaware of, and ill-equipped to assess, diagnose, and rework, their own racial biases, leaving them unable to stem the harmful effects of their conscious and subconscious choices.
According to Dr. Safiya Noble, a professor of internet, African-American, and gender studies, “Human beings are developing the digital platforms we use, and as I present evidence of the recklessness and lack of regard that is often shown to women and people of color in some of the output of these systems, it will become increasingly difficult for technology companies to separate their systematic and inequitable employment practices, and the far-right ideological bents of some of their employees, from the products they make for the public.” Noble wrote of this “recklessness and lack of regard” among corporate employees based on research dating back more than a decade. She described both early-stage discriminatory practices, including hiring practices and algorithm programming by racially homogenous employees, and final-stage practices, including corporate refusals to fix inaccurate algorithms and to remove racist, harmful content from search platforms. Since Noble released her text, which focused on Google Search algorithms, other information-sharing platforms have risen to prominence, including Facebook, Instagram, and Twitter.
This Paper will expand on Noble's inquiry, pivoting from Google Search algorithmic discrimination to an examination of discrimination by dominant social media algorithms. These newer algorithms not only produce a result when prompted by a user but also independently suggest content to users unprompted. An analysis of this novel feature is especially important because it will reveal how algorithms are being programmed to make connections, and possibly racist assumptions, about content, user desires, and promotion priorities. This Paper will critically analyze Noble's arguments in Algorithms of Oppression, propose a contemporary thesis concerning Google's social media counterparts, assess the current legal issues these social media platform algorithms have created, and, finally, suggest possible future legal consequences social media corporations should face.
[. . .]
At present, individuals who have suffered tortious or criminal harms rooted in algorithmic recommendations have only narrow claims available against social media corporations. Although individuals may seek limited, work-related damages in tort, the legal immunities afforded to internet service providers under §230 are broad and forgiving where violent harms affect large groups. Section 230 must be modified to provide large-scale recourse that protects classes who have experienced violent harms from algorithmic wrongs.
Both the plaintiffs in Gonzalez and Judge Berzon of the Ninth Circuit have advocated for an overhaul of §230 to better protect the rights of targeted communities victimized by the polarizing ideologies supported and facilitated by internet service providers. Judge Berzon, in her Gonzalez concurrence, called “for a more limited reading of §230 immunity ... urging the court to reconsider its precedent en banc to the extent that it holds that §230 immunity extends to the use of machine-learning algorithms to recommend content and connections to users.”
Specific changes to §230 limiting internet service providers' immunity have already been developed and proposed by the Department of Justice. These proposed limitations would appropriately hold entities like social media corporations liable for their “Bad Samaritan” acts under a new §230(d), including acts that “(1) purposefully promote, facilitate, or solicit third-party content [and behavior] that would violate federal criminal law; (2) have actual knowledge that specific content it is hosting violates federal law; and (3) fail to remove unlawful content after receiving notice by way of a final court judgement.” These additions to the current §230 immunity would allow victims to seek redress for specific harms experienced due to “Bad Samaritan” algorithm actions that have improperly recommended or failed to restrict harmful content. Of the harms listed in the proposal, terrorism most aptly applies both to the harms claimed in Gonzalez and to the domestic hate crime terrorist cases discussed.
These immunity changes, and more, are necessary today, when racialized violence is encouraged and permanently preserved on internet platforms. With young people active on social media, forming their worldviews from popularized information and disinformation, social media corporations must be made to recognize the responsibility tied to their platforms' global influence on what is perceived as normal. In Dr. Noble's words, “[w]e cannot ignore the long-term consequences ... of the harmful effects of deep machine-learning algorithms, or artificial intelligence, on society.” These corporations must be incentivized to formulate algorithms that understand and promote the true realities of all communities and to stop misinformation before it morphs into mass murder.
Sikudhani Foster-McCray is a JD candidate at Emory University School of Law. Foster-McCray completed a B.A. in Business Management and Analytics in 2021 at Loyola University of New Orleans.