Abstract

Excerpted From: Rashida Richardson, Defining and Demystifying Automated Decision Systems, 81 Maryland Law Review 785 (2022) (274 Footnotes) (Full Document)

Over the last two decades, the increased accessibility of vast amounts of data and advancements in computational techniques and resources have fueled what some call a “technological renaissance,” in which industry and governments alike seek to use “big data” for a variety of tasks and interests. Yet recurring public relations failures, in which these technologies do not work as marketed, produce stereotypical or biased outcomes, and lead to unintended and sometimes fatal consequences, have forced governments to consider policy interventions to address the variety of challenges presented by the recent explosion in technological adoption. There is growing recognition that the common practice of deploying technologies without concomitant legal mechanisms to detect and mitigate attendant risks and harms can no longer suffice. Yet policymakers' attempts at developing laws and regulations are often stymied by the difficulty of defining these technologies.

Artificial intelligence (“AI”) and automated decision systems (“ADS”) have become the most prominent categorical terms used to refer to the suite of “big data” technologies and applications for legal and regulatory purposes. Though “algorithm” is the term commonly used in public discourse to refer to a variety of technologies and applications, this usage is a misnomer because algorithms are computer-implementable methods that are inherent in most technologies and applications, only some of which fit within the AI or ADS categorical label. For example, a procedure for solving a Rubik's Cube is an algorithm, but it is neither AI nor an ADS.
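
To make the distinction concrete, the following is a minimal, hypothetical sketch (my own illustration, not drawn from the Article; the function names and weights are invented). A deterministic sorting routine is an algorithm in the ordinary computer-science sense but plainly neither AI nor an ADS, whereas even a mathematically trivial scoring formula would fall within the ADS label if a government agency used it to rank applicants for a benefit, because it automates or aids a consequential decision.

```python
# A plain algorithm: deterministic, and it automates no decision about people.
def bubble_sort(values):
    """Return a sorted copy of a list of numbers; an algorithm, but not AI or an ADS."""
    items = list(values)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

# A trivial scoring formula: simple arithmetic, yet it would fit the ADS label
# if an agency used it to rank applicants for a public benefit. Weights are invented.
def eligibility_score(income, household_size, months_waiting):
    return (-0.5 * income / 1000) + (2.0 * household_size) + (0.25 * months_waiting)

if __name__ == "__main__":
    print(bubble_sort([3, 1, 2]))           # [1, 2, 3]
    print(eligibility_score(18000, 4, 10))  # 1.5
```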

Some policymakers evade the difficulty of defining these terms by focusing on particularly concerning functions or systems, categories of risks, or specific effects and outcomes. Other policymakers have relied on technical and mathematical terms or descriptions to define or explain the meaning of AI and ADS in order to avoid inclusion of seemingly mundane or routinely used technologies. While the functionality of such systems is typically communicated in mathematical or technical terms, technical language is informed by and meant for discipline-specific contexts because it enables those who use the language to “say more in a more comprehensible, thorough, and exact way, using less time and fewer words than ... ordinary English.” Thus, when technical language is heedlessly used in statutory or regulatory text, its misapplication can lead to misinterpretations that frustrate the law's purpose. It can also pose challenges for legal compliance, enforcement, and judicial interpretation due to sector- or discipline-related semantic ambiguities.

AI and ADS are socio-technical systems that depend on and must be responsive to the contextual settings in which they function. Yet, the failure to incorporate such reflexivity in legal definitions reinforces the mythology of mathematics and algorithm-based technologies by shrouding these technologies with a veneer of legitimacy because their primary functions are expressed in mathematical or technical terms. For example, in the criminal justice context, whether the constitutional standard of probable cause is met can hinge on the accuracy and reliability of a technology used to determine issues of fact (e.g., the use of facial recognition to determine the identity of a suspect in a crime scene image). Accuracy and reliability are typically represented through mathematical terminology such as “true positive” or “false positive,” but these metrics alone lack the context needed to interpret their true implications under situational circumstances and can mislead decisionmakers into assuming that accuracy is a simple binary rather than a spectrum.
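
A short numerical sketch (my own illustration; every figure is hypothetical and not drawn from the Article or any actual system) shows why a “true positive” rate alone can mislead. When the condition being searched for is rare, even a system with a high true positive rate and a low false positive rate produces mostly false matches, which is precisely the situational context a bare metric does not convey.

```python
# Hypothetical figures chosen only to illustrate the base-rate problem.
population = 100_000          # images searched
true_matches = 10             # images in which the suspect actually appears
true_positive_rate = 0.95     # sensitivity claimed for the hypothetical system
false_positive_rate = 0.01    # 1% of non-matching images are flagged anyway

hits_correct = true_matches * true_positive_rate                # ~9.5
hits_false = (population - true_matches) * false_positive_rate  # ~999.9

precision = hits_correct / (hits_correct + hits_false)
print(f"Share of flagged images that actually show the suspect: {precision:.1%}")
# Roughly 0.9%: despite a 95% true positive rate, almost every flag is wrong.
```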

Despite their integral role in our understanding of, and the success of, legislation and regulations, legal definitions remain under-examined by legal and social science scholarship, and legislative drafting manuals pay scant attention to this part of the drafting process, with few manuals offering tactical or substantive guidance. A review of state legislative drafting manuals revealed that most manuals provide only generic advice on drafting or the purpose of definitions, and some were completely silent on definitions. This lack of attention and guidance is significant because the scope, application, and meaning of statutory definitions are a frequent source of federal litigation. When definitions are absent or poorly constructed, statutes and regulations are susceptible to normal evolutions in word meaning and varying interpretations, which can ultimately lead to invalidation.

Nonetheless, legal definitions remain important instruments of governance.  By giving meaning to terms as applied to factual circumstances, legal definitions can resolve ambiguity and communicate meaning to various audiences that interact with and relate to statutes and regulations differently (e.g., lawyers, judges, civil servants, corporations, the public).  Definitions create constraints for both legal and normative inquiries, designating the relevant contexts or circumstances for applying statutes and regulations and establishing limits of legal liability and coercive outcomes.  It is through this authoritative and inherently political function that legal definitions help provide legal certainty and uniformity because they limit the scope of areas where a law seeks to regulate, where a law's normative provisions have effect, and where interpreters can venture. 

Creating legal definitions pertaining to technology is particularly vexed because of the co-constitutive nature of technology and society--they enable and influence as much as they limit and catechize one another.  Throughout history, various kinds of technologies have become embedded in the conditions of modern politics and society, often without regard to their attendant consequences. Policymakers and consumers alike narrowly focus on the stated or professed uses and outcomes of a technology, which diverts attention from tacit functions, such as managing power and social dynamics or facilitating exclusionary practices that privilege some over others. 

Such parochial conceptions of technology can also foster two problematic tendencies, particularly amongst policymakers. First, policymakers tend to focus on “the retreating horizon of systems still-to-be-created at the risk of passing over autonomous systems already in place.” Second, policymakers tend to undervalue or misconstrue demonstrable risks and harms by assuming that flaws are a necessary social cost for innovation, which normalizes problems rather than regulating them.

AI and ADS are similar to laws in that they both can construct social reality by reflecting and preserving power relations and social conditions. Therefore, legal definitions of AI and ADS that demonstrate awareness of the social and political dimensions of the policymaking process and of the technology itself can serve as an important public policy intervention. “[D]efinition inevitably--sometimes subtly, sometimes radically--changes meaning even as it tries to accurately reflect it.” So, modernizing the meanings of AI and ADS for legislation and regulation can “fundamentally change[] the exercise of power and the experience of citizenship.”

In this Article, I focus on defining automated decision systems used by government agencies and actors, but the definitions can also apply to private uses and actors. This particular domain is both an active area of public policy development and an area ripe for intervention in light of how modern governance operates. In 2018, the Canadian federal government issued a directive on ADS and implemented an Algorithmic Impact Assessment (“AIA”) questionnaire through the Treasury Board of Canada Secretariat, the federal government body that reviews and approves spending by the Government of Canada, including procurement of technologies. This AIA was designed to help government agencies “assess and mitigate the impacts associated with deploying an automated decision system.” In the United States, ADS have been the focus of governmental task forces or commissions seeking to evaluate current uses and identify necessary legislative or regulatory reforms, litigation challenging biased and harmful outcomes produced by the use of ADS, and proposed legislation or regulations seeking to provide transparency and accountability regarding current and future ADS use. In Europe, most relevant laws focus on the outcomes of automated processing or high-risk AI rather than ADS specifically, but there have been legal challenges to government use of specific ADS and government-commissioned research on ADS policy frameworks. Currently, no countries in the Global South have laws or regulations focused on ADS, but there is a growing body of research and public scrutiny regarding government use of some ADS.

Government use of ADS is also a ripe area for intervention not only because it implicates particular legal interests and concerns, but also because unfettered and unexamined use of ADS can distort perceptions of government operations, thus making deferred reform or regulation difficult and deficient. For example, in 2014, Boston Public Schools attempted to address decades of de facto racial and socioeconomic segregation in public schools by implementing a “home-based assignment” ADS. This ADS was geographically driven and attempted to improve school choice options closer to the student's home address. However, a 2018 evaluation of this ADS project revealed that it failed to achieve most of its goals and actually intensified segregation across the city's public schools. Luckily, in this case, Boston Public Schools commissioned an evaluation that exposed the failure, but most current government ADS projects lack meaningful transparency and retrospective evaluations. This means that ADS can be implemented and fail without public awareness or scrutiny, and government officials can leverage this information asymmetry to advance narratives of progress as structural conditions worsen or to avoid necessary reforms.

Decades of research suggest that statistical models, like those commonly employed in ADS, outperform human experts on prognostic and optimization tasks. These findings, along with ADS marketing claims of increased efficiency, cost savings, and even bias reduction, make their integration into modern governance seem like a logical progression. Modern government decision-making is significantly diffused yet structured, with decisions delegated and distributed across multiple actors within a hierarchical organizational structure, so ADS should ideally “improve consistency, decrease bias, and lower costs.” Yet, this logic is not normatively grounded because it ignores the role of pre-existing social inequities and how discretion and power dynamics operate within this evolved governance structure, and because it assumes that technologically mediated decision-making is neutral. Such oversights can conceal inherent tradeoffs associated with ADS use or belie government decision-making and policy implementation, both of which are pertinent to evaluating the value and performance of ADS in the government context. Neglecting these concerns also eschews questions related to capitalism, imperialism, and other subjugating phenomena that are aligned with market interests. Thus, creating a normatively grounded and reflexive definition of ADS is the necessary premise for any meaningful legislative or regulatory reform.
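
A minimal sketch of why this logic falters (the data, group labels, and code are entirely hypothetical and my own illustration, not a description of any system discussed in the Article): a statistical model can “outperform” human reviewers on its training objective while simply learning pre-existing inequities, because it is fit to historical outcomes that already encode unequal treatment.

```python
# Hypothetical historical records: (neighborhood, outcome) pairs in which past
# outcomes reflect unequal treatment rather than underlying need or merit.
historical = [
    ("north", 1), ("north", 1), ("north", 1), ("north", 0),
    ("south", 1), ("south", 0), ("south", 0), ("south", 0),
]

# A "model" fit to this history: it predicts the base rate observed per group,
# which is how it maximizes accuracy on past data and inherits its disparity.
def fit_base_rates(records):
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {group: positives[group] / totals[group] for group in totals}

model = fit_base_rates(historical)
print(model)  # {'north': 0.75, 'south': 0.25}
# Two otherwise identical applicants receive very different predicted scores
# solely because of where they live: the model is "accurate" with respect to a
# biased history, not neutral with respect to the people being scored.
```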

In this Article, I offer two nested definitions of ADS--one comprehensive and one narrow--developed through a series of workshops with a group of interdisciplinary scholars and practitioners that can be used in legislation and proposed regulations:

Comprehensive ADS Definition: “Automated Decision System” is any tool, software, system, process, function, program, method, model, and/or formula designed with or using computation to automate, analyze, aid, augment, and/or replace government decisions, judgments, and/or policy implementation. Automated decision systems impact opportunities, access, liberties, safety, rights, needs, behavior, residence, and/or status by predicting, scoring, analyzing, classifying, demarcating, recommending, allocating, listing, ranking, tracking, mapping, optimizing, imputing, inferring, labeling, identifying, clustering, excluding, simulating, modeling, assessing, merging, processing, aggregating, and/or calculating.

Narrow ADS Definition: “Automated Decision Systems” are any systems, software, or processes that use computation to aid or replace government decisions, judgments, and/or policy implementation that impact opportunities, access, liberties, rights, and/or safety. Automated Decision Systems can involve predicting, classifying, optimizing, identifying, and/or recommending.

Two definitions are warranted because the current ADS policy landscape is oriented around two distinct goals that require different assumptions, approaches, and definitional constraints. One policy goal assumes uncertainty or incompleteness regarding the complexities of the problem and seeks to better understand ADS as currently and prospectively implemented in order to inform subsequent reform. This policy goal requires a descriptive definition that aims to expand ordinary meanings or usage of terms by depicting attributes of what is defined, not to rigidly establish boundaries of the definition. The comprehensive definition meets this goal because it is an intentionally inclusive definition designed for legislation and regulations that are investigatory or diagnostic in purpose. The comprehensive definition can be used in legislation that seeks to create a task force, commission, or other quasi-governmental body, or a government-commissioned study, to understand ADS use and its implications. The comprehensive definition can also be used in legislation or regulations mandating the enumeration of ADS in use. These types of legislative or regulatory approaches are typically created to inform more prescriptive interventions, which is where the second policy goal and definition take effect.

The second policy goal makes some assumptions regarding the nature of the problem and conditions relevant to governance, and it seeks to assign obligations, invest rights, mitigate risks, and create greater accountability and responsibility regarding the development and use of ADS. This policy goal requires a prescriptive definition that consists of a set of conditions, where compliance with each is necessary to fall within the scope of the definition and therefore the reach of relevant laws. The narrow definition is honed for legislation and regulations that are restrictive in purpose or onerous in practice. This narrow definition can be used in legislation that seeks to ban or limit uses of ADS (generally or in specific sectors) or in regulations and laws that mandate stringent requirements for ADS use, such as disclosure or audit requirements.

Prevailing statutory and regulatory ADS definitions fall short in meeting these policy goals because they are neither precise nor clarifying, which leads to two significant problems for any ADS legislation or regulation seeking to succeed. First, prevailing definitions infer cultural baselines of expectations and presume knowledge, or at least a shared level of comprehension, amongst the various audiences that must interpret the definitions and relevant laws. For instance, definitions that merely adopt mathematical or technical terms like “linear regression” or “neural networks” assume that the public, judges, lawyers, and government actors charged with enforcing, conforming to, and interpreting relevant laws or regulations know what these terms mean or can reasonably ascertain the correct meaning consistently.

Second, prevailing definitions present a time-bound conceptual framing of ADS that is limited to current capabilities and stripped of social, political, and economic forces and contexts. Some definitions suggest that ADS are technologies that merely aid human decision-makers using a range of techniques, but such characterizations often fail to anticipate that current techniques and technical capabilities can and will evolve. The omissions in these definitions also downplay the fact that many of the technical actions or functions performed by ADS are inherently normative or value-laden, and they tend to efface the nature of decisions made using ADS and thus the significance of their impact.
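
To illustrate how an ostensibly technical function embeds value judgments, consider a minimal hypothetical sketch (invented scores and names, my own illustration): the risk threshold in a screening tool appears in code as a single numeric constant, yet choosing it is a policy decision about whose errors matter more.

```python
# Hypothetical risk scores from a screening ADS, paired with ground truth about
# whether intervention was actually warranted. All values are invented.
cases = [  # (risk_score, intervention_warranted)
    (0.92, True), (0.81, False), (0.74, True), (0.66, False),
    (0.58, False), (0.51, True), (0.43, False), (0.12, False),
]

def error_profile(threshold):
    false_positives = sum(1 for score, needed in cases if score >= threshold and not needed)
    false_negatives = sum(1 for score, needed in cases if score < threshold and needed)
    return false_positives, false_negatives

# The "technical" act of picking a cutoff decides whether to burden more people
# who did not need intervention (false positives) or to miss more people who
# did (false negatives): a normative tradeoff, not a neutral calculation.
for threshold in (0.4, 0.6, 0.8):
    fp, fn = error_profile(threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```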

A major task of this Article is to change the meaning of ADS, and therefore the impact of relevant statutes and regulations, by accurately reflecting what ADS are actually doing and their impact in a sector-agnostic manner. This Article proceeds in four parts. Part I further situates the comprehensive and narrow definitions of ADS. It describes the key components of each definition and their relevance for ADS regulation. This Part clarifies why my definitional project creates a new modality of regulation that does not presume knowledge or expertise amongst the various stakeholders and audiences within or affected by the broader ADS policy and regulatory landscape.

Part II explores two examples of ADS currently used by government agencies in the United States: teacher evaluation systems and gang databases. Each use case details the social and political history that engendered the development of these particular ADS and how these technologies are practically implemented. I apply each ADS case study to the narrow definition to demonstrate how these ADS could otherwise evade scrutiny in the absence of the definitions and how the clarity offered through the definitions is valuable within each sectoral context. This Part is intended to clarify the political and social dimensions of ADS within their sectoral contexts as well as demonstrate how the definitions bring new meaning and urgency to ADS that are often misconstrued as neutral or passive.

Part III evaluates potential exemptions to the ADS definitions. This Part examines three technologies commonly used by government agencies and demonstrates how policymakers should holistically analyze exemptions for ADS legislation and regulations.

The Conclusion fastens the analytical threads developed in the preceding Parts to reveal how the definitions and analysis bring new understandings to the problems of ADS. While some ADS appear to be new or novel, the problems and concerns they present are not, and this Article provides policymakers, advocates, and the public with a new framework and insights for addressing them.

[. . .]

For an ADS law to be successfully complied with, enforced, and interpreted, various audiences must understand what ADS are and what impact they have. Yet, this can only be accomplished if there is shared meaning that does not require or presume particular knowledge, expertise, or experience. The comprehensive and narrow ADS definitions achieve this goal by clarifying the various forms ADS can take, the role of computation, their relationship to governance, and the actions or functions they perform, and by naming their impact in a sector- and discipline-agnostic manner. In addition to providing shared meaning, this definitional approach makes legislative and regulatory definitions more adaptable, so that they can be adopted across jurisdictions that have different legal frameworks for addressing the legal issues presented by emergent technologies.

These definitions also help demystify ADS as objective or neutral tools. The definitions and the “real world” use cases demonstrate that ADS are social and political artifacts as much as they are technical, in that they reflect and concretize the public policies and practices that preceded their development and use. The education policies that gave rise to teacher evaluation systems reflect the misguided logic of the education accountability movement that preceded their development, and their continued use contributes to growing educational inequities in American public schools. It is also not a coincidence that gang database designations produce a perpetual blacklist effect, since the intelligence practices that preceded and influenced the development and use of gang databases, especially law enforcement targeting of political activists or alleged Communists, yielded similar outcomes. Since public discourse about ADS often ignores these histories and the full extent of their social consequences, the tendency for ADS to enable or facilitate government subjugation is often rendered banal or normalized. Therefore, the ADS definitions and analytical framework offered in this Article can serve as an important intervention in public policy and scholarly discourse by grounding future ADS policies in the world of practice they intend to govern.

The use cases and definitional analysis also demonstrate that the motivations to create ADS, their design, and how they are ultimately used are inextricably linked to policy, social interests, and how those interests are renegotiated over time. Both use cases reveal that despite good faith motivations, ADS can and do produce counterproductive and negative outcomes, and such outcomes are more likely when: (1) ADS development or use derives from public policies used to govern social marginality; and (2) the ADS disproportionately targets or affects communities of color and poor people. Thus, the ADS definitions help clarify the function and process of these technologies as a prominent mode of governance, and this understanding can better enable systemic evaluation of ADS, their relevant social domains, and broader public policy needs.

The definitions account for the variation and uncertainty of ADS in practice and for their embedded nature in modern politics and society. Often these systems are ill-defined and operate on already amorphous and subjective categories without regard for social or economic costs, such as the inconsistent definitions of “gangs” in gang databases and measures of “value added” in standardized teacher evaluations. Yet, my definitions ground these governing technologies by acknowledging and emphasizing their capacity to transform liberties, rights, access, safety, and other social outcomes. The application of the use cases to the narrow definition demonstrates that even a more constrained definition that prioritizes impact over process or technical specificity can do better work in securing accountability from the actors and institutions that support and use ADS. The analysis of ADS exemptions shows the technical, social, and political considerations that must be assessed to avoid excluding consequential technologies.
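
For readers unfamiliar with the term, a toy sketch of a “value added” estimate (invented scores and deliberately simplified models; my own illustration, not the method of any actual evaluation system) shows why the measure is so malleable: a teacher's “value” is the gap between students' actual scores and the scores a model predicts, so the choice of prediction model largely determines the result.

```python
# Invented test scores for one teacher's students: (prior_year, current_year).
students = [(62, 70), (75, 74), (81, 90), (55, 61)]

# Two equally plausible ways to predict "expected" growth.
def predict_flat_gain(prior):        # assume every student gains 5 points
    return prior + 5

def predict_toward_mean(prior):      # assume scores drift toward a mean of 70
    return 70 + 0.8 * (prior - 70)

def value_added(predict):
    gaps = [actual - predict(prior) for prior, actual in students]
    return sum(gaps) / len(gaps)

print(round(value_added(predict_flat_gain), 2))    # 0.5
print(round(value_added(predict_toward_mean), 2))  # 5.15
# The same teacher and the same students yield very different "value added"
# scores purely because of a modeling choice that the statute never sees.
```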

Though my analysis of ADS is critical, I remain optimistic about their potential as tools for social change. Data-driven solutions should not foreclose opportunities for systemic re-evaluation of how society is governed and of the roles of government and technology. Indeed, a more critical examination of the history, politics, and social dynamics associated with any ADS and its relationship to governance is crucial for identifying meaningful pathways forward. The definitions and analytical framework provided in this Article can aid in identifying appropriate laws, regulations, and other safeguards for ADS use, such as the types of training that government actors using ADS should receive to better mitigate errors from flawed ADS or consequences that stem from ADS-human interactions, cumulative disadvantage, or related social policies. This Article can also serve as an analytical guide for advocates and local communities seeking to evaluate which social problems can benefit from government or technological interventions versus community-based solutions.


Rashida Richardson is a Visiting Scholar at Rutgers Law School and incoming Assistant Professor of Law and Political Science at Northeastern University.