A Comprehensive Overview of AI Usage and Ethics in Legal Professions

By: Darya Tadlaoui

Edited By: Anna Westfall and Emily Yang

The past fifty years – and particularly the last decade – have seen previously unimaginable expansion of artificial intelligence (AI) technology into virtually every sector of the economy. Machines once equipped only to automate the standardized tasks of factory work can now be programmed to deduce the meaning of, and intention behind, particular instructions. [1] With such drastic advancements in the field, numerous jobs that were once thought impossible to replicate have been rendered obsolete.

Discourse surrounding AI within legal occupations predicts varying degrees of possibility for its full incorporation into the field. Some experts believe that law requires a human element that cannot be imitated by technology; others condemn current niche tools for perpetuating racialized stereotypes; still others see profitable potential in particular advancements. Understanding the implications of employing AI in legal work is crucial to evaluating its value to attorneys, judges, and clients alike.

Historic and present-day uses

On a basic level, AI has been well established in the field of law for the past decade. [2] Since the ruling in Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012) upheld the use of predictive coding to find electronically stored information for discovery purposes, conducting research and screening documents and contracts with AI has become a conventional method for most lawyers and their paralegals. [3] Keyword searches have been shortened by programs that, given a single term, can identify the range of files relevant to it. [4] Moreover, software companies such as Casetext and ROSS Intelligence are developing natural language processing (NLP) systems that can understand and interpret text in a quasi-human way, going beyond mere keyword searches to surface highly relevant information. [5]
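To make that distinction concrete, here is a toy sketch – not any vendor's actual method; the documents, synonym map, and function names are invented for illustration – of the gap between an exact keyword search and one expanded with related terms, the kind of gap NLP-based tools aim to close:

```python
# Two short contract snippets that say roughly the same thing
# in different vocabulary.
DOCS = {
    "doc1": "The lessee shall vacate the premises upon termination.",
    "doc2": "The tenant must leave the property when the lease ends.",
}

def keyword_search(docs, term):
    """Return ids of documents containing the exact term."""
    return [d for d, text in docs.items() if term in text.lower()]

# A hand-built synonym map stands in for the learned word
# associations an NLP system would supply.
SYNONYMS = {"tenant": {"tenant", "lessee", "renter"}}

def expanded_search(docs, term):
    """Return ids of documents containing the term or a related word."""
    terms = SYNONYMS.get(term, {term})
    return [d for d, text in docs.items()
            if any(t in text.lower() for t in terms)]

print(keyword_search(DOCS, "tenant"))   # exact match only: ['doc2']
print(expanded_search(DOCS, "tenant"))  # ['doc1', 'doc2']
```

The exact search misses the lessee clause entirely; the expanded search surfaces both documents, which is the behavior the article describes as going "beyond mere keyword searches."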

Apart from the more simplistic processes of completing research and scanning documents, AI has been and is currently exercised in two main areas: contract analytics and litigation prediction. [6] In the former, rudimentary NLP technology developed by companies like Kira Systems has made it possible to keep track of a multitude of contracts and their specifications, which is of particular use to corporate firms; this could mean compiling renewal dates or simplifying (or even initiating) the process of renegotiating existing terms. [7] The latter involves predicting the outcomes of pending cases from inputs of relevant precedent, which serves not only to aid attorneys in planning litigation strategy but also to guide litigation investors deciding which cases to back. [8]
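A rough sketch of the contract-tracking idea might look like the following. This is simplified, assumed logic – real systems handle many date formats and clause styles, and the contract snippets and function names here are invented:

```python
import re
from datetime import date

def extract_renewal_date(text):
    """Find the first YYYY-MM-DD date following a word like 'renew'."""
    m = re.search(r"renew\w*\D*?(\d{4})-(\d{2})-(\d{2})", text, re.IGNORECASE)
    if not m:
        return None
    return date(*map(int, m.groups()))

def due_for_review(contracts, today, window_days=90):
    """Return names of contracts whose renewal falls within the window."""
    flagged = []
    for name, text in contracts.items():
        d = extract_renewal_date(text)
        if d and 0 <= (d - today).days <= window_days:
            flagged.append(name)
    return flagged

contracts = {
    "supplier": "This agreement renews on 2024-03-01 unless terminated.",
    "lease": "Renewal date: 2025-01-15, subject to 60 days notice.",
}
print(due_for_review(contracts, today=date(2024, 1, 20)))  # ['supplier']
```

Even this toy version shows why the capability matters to corporate firms: scanning hundreds of agreements for approaching renewal dates is exactly the kind of rote review that automates cleanly.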

Drawbacks of access to predictive AI

Nevertheless, developed predictive technology, when used in individual rather than corporate cases, is doomed to reflect a flawed carceral system. Algorithms that rely on past administrative data to inform their decisions are only as fair and just as that data. This issue has become glaringly obvious in courtrooms' increased use of Northpointe's Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system, a program designed to predict the likelihood that a particular offender will recidivate. In fact, a two-year study conducted by ProPublica found major flaws in its code: only 20% of the people the system predicted would commit violent crimes actually went on to do so, pointing to an overly liberal methodology. [9] Further, the tool's racial profiling was abundantly clear – the formula wrongly labeled Black defendants as future criminals almost twice as often as it did white defendants, and Black defendants were 77% more likely than white defendants to be pegged as higher risk of committing a future violent crime, even after controlling for criminal history, age, and gender. [10]
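The disparity ProPublica measured is, at bottom, a gap in false positive rates – the share of defendants who never reoffended but were nonetheless labeled high risk. With made-up cohort numbers (these are illustrative figures, not ProPublica's data), the arithmetic looks like this:

```python
def false_positive_rate(wrongly_flagged, total_non_reoffenders):
    """Share of non-reoffending defendants labeled high risk anyway."""
    return wrongly_flagged / total_non_reoffenders

# Hypothetical cohorts of defendants who did NOT go on to reoffend:
group_a_fpr = false_positive_rate(45, 100)  # 45% wrongly flagged
group_b_fpr = false_positive_rate(23, 100)  # 23% wrongly flagged

# A tool can look balanced on overall accuracy while one group bears
# nearly twice the rate of wrong high-risk labels.
print(group_a_fpr / group_b_fpr)  # ratio of roughly 2
```

This is why "accurate overall" is a weak defense: the harm concentrates in whichever group the wrong labels fall on.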

The biased code has had such an effect on case outcomes that several defendants have sued various involved parties over Wisconsin's use of COMPAS in reaching a final judgment, with varying success. In Loomis v. Wisconsin, 137 S.Ct. 2290 (2017), Loomis claimed that factoring COMPAS's predictions into his sentencing violated his due process rights. [11] Loomis was denied post-conviction relief on the grounds that judges are made aware of COMPAS's shortcomings before reaching a decision and ultimately impose a sentence based on their total knowledge of the defendant. [12] On the other hand, in Henderson v. Steinsberg et al., No. 21-1586 (7th Cir. 2021), Henderson sued both Northpointe's leaders and the Wisconsin Parole Board after being denied parole during a 40-year sentence, claiming that all involved actors knew of COMPAS's racial bias and that the parole board chose to use it anyway rather than finance an improvement. [13] Though the case was dismissed, the judge was sympathetic to Henderson's grievances, contending that Henderson's equal protection claim should be distinguished from the due process precedent set by Loomis's appeal. [14]

Apart from COMPAS, other legal AI tools have played a significant role in bolstering racially charged decisions in the criminal justice system. United States v. Curry, 965 F.3d 313 (4th Cir. 2020) saw an abuse of police power made possible by an algorithm that flagged hotspots of criminal activity; Curry was stopped and arrested for possession of a firearm in one such hotspot and appealed, arguing he had been subjected to an unlawful search and seizure. [15] The Fourth Circuit agreed, with Judge Gregory calling the encounter a “high-tech version of racial profiling.” [16] The drawbacks of using AI for legal purposes are thus evident. Not only do these “judge bots” perpetuate inequality, they threaten to entrench it further, since prejudiced outcomes appear to be the product of “objective” computer analysis rather than of the system that analysis mirrors. [17]

Potential for making the law more efficient and accessible

Of course, there is no denying that the AI technology legal professionals use today has made particular tasks easier for attorneys and consequently more affordable for clients. According to researchers Dana Remus and Frank Levy, if a firm were to adopt all existing legal technology immediately, its lawyers' working hours would decrease by about 13%. [18] Even a more realistic adoption rate, they estimate, would produce a 2.5% annual decrease over five years. [19] Already, basic document review at large firms has become so automated that it takes up only about 4% of a given lawyer's time. [20] With emerging NLP technology, this trend could extend to more arduous tasks and streamline efficiency further. For example, ROSS, billed as the world's first AI “lawyer,” saves attorneys an estimated 20 to 30 hours per case simply by understanding the intent behind their questions and drafting memos detailing appropriate responses. [21]
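As a back-of-the-envelope check on these figures (illustrative arithmetic only, not Remus and Levy's actual model), a 2.5% cut in hours compounded annually over five years lands in the same ballpark as the 13% immediate-adoption estimate:

```python
def remaining_hours_fraction(annual_cut, years):
    """Fraction of working hours left after compounding annual cuts."""
    return (1 - annual_cut) ** years

# Five years of 2.5% annual reductions:
cumulative_cut = 1 - remaining_hours_fraction(0.025, 5)
print(f"{cumulative_cut:.1%}")  # prints 11.9%
```

The gradual path trims roughly 12% of hours, slightly under the 13% one-time figure – consistent with the idea that realistic adoption approaches, but does not quite reach, the all-at-once ceiling.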

This is good news for clients: less time spent on a particular case means fewer billable hours. There is a good chance that, should NLP-powered AI continue to evolve, those who previously could not afford legal representation will be able to utilize lawyer-AI duos at a fraction of the cost, thus creating fewer barriers to entry and more job opportunities for lawyers currently out of work. [22] The creator of ROSS has even pledged to offer the technology at no cost to deserving lawyers in order to stimulate the formation of new attorney-client relationships. [23]

The future landscape of law in an ever-advancing technological age

The unique capacity of legal AI to simultaneously empower and victimize marginalized groups has made its usage an ethical dilemma for the many individuals poised at the intersection of law and technology. It should come as some relief, then, that many legal scholars think we have at least a decade or two to iron out the kinks and debate the utility of current tools while more complex legal AI is developed. [24] Though today's algorithms can assess language and conflicts in principle, they cannot yet assume a professional role grounded in moral judgment. And, frankly, we do not know that they ever will; the human element of law may prove impossible to replicate. [25] Rather, the future law firm could look something like what Michael Mills, a lawyer and legal technology start-up strategist, outlines: the partner will remain fixed as the leader of a team, “and more than one of the players will be a machine.” [26]

Notes:

  1. Lohr, Steve. “A.I. Is Doing Legal Work. But It Won't Replace Lawyers, Yet.” The New York Times. The New York Times, March 19, 2017. https://www.nytimes.com/2017/03/19/technology/lawyers-artificial-intelligence.html?mcubz=0&_r=0.

  2. Donahue, Lauri. “A Primer on Using Artificial Intelligence in the Legal Profession.” Harvard Journal of Law & Technology, January 3, 2018. 

  3. Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012).

  4. Donahue, “A Primer on Using Artificial Intelligence in the Legal Profession.”

  5. Toews, Rob. “AI Will Transform The Field Of Law.” Forbes, December 19, 2019. https://www.forbes.com/sites/robtoews/2019/12/19/ai-will-transform-the-field-of-law/?sh=34e358a57f01.

  6. Ibid.

  7. Ibid.

  8. Ibid.

  9. Angwin, Julia, Jeff Larson, Lauren Kirchner, and Surya Mattu. “Machine Bias.” ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

  10. Ibid.

  11. Loomis v. Wisconsin, 137 S.Ct. 2290 (2017).

  12. Loomis v. Wisconsin (2017).

  13. Henderson v. Steinsberg, No. 21-1586 (7th Cir. 2021).

  14. Henderson v. Steinsberg (2021).

  15. United States v. Curry, 965 F.3d 313 (4th Cir. 2020).

  16. United States v. Curry (2020).

  17. Stepka, Matthew. “Law Bots: How AI Is Reshaping the Legal Profession.” Business Law Today from ABA. Business Law Today, February 21, 2022. https://businesslawtoday.org/2022/02/how-ai-is-reshaping-legal-profession/.

  18. Remus, Dana, and Frank S. Levy. “Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law.” SSRN Electronic Journal, 2015. https://doi.org/10.2139/ssrn.2701092.

  19. Ibid.

  20. Ibid.

  21. Nunez, Catherine. “Artificial Intelligence and Legal Ethics: Whether AI Lawyers Can Make Ethical Decisions.” Tulane University Journal of Technology and Intellectual Property 20 (August 27, 2019). 

  22. Ibid.

  23. Ibid.

  24. Lohr, “A.I. Is Doing Legal Work. But It Won't Replace Lawyers, Yet.”

  25. Nunez, “Artificial Intelligence and Legal Ethics.”

  26. Lohr, “A.I. Is Doing Legal Work. But It Won't Replace Lawyers, Yet.”

    Bibliography:

    Angwin, Julia, Jeff Larson, Lauren Kirchner, and Surya Mattu. “Machine Bias.” ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

    Donahue, Lauri. “A Primer on Using Artificial Intelligence in the Legal Profession.” Harvard Journal of Law & Technology, January 3, 2018. 

    Lohr, Steve. “A.I. Is Doing Legal Work. But It Won't Replace Lawyers, Yet.” The New York Times. The New York Times, March 19, 2017. https://www.nytimes.com/2017/03/19/technology/lawyers-artificial-intelligence.html?mcubz=0&_r=0.

    Nunez, Catherine. “Artificial Intelligence and Legal Ethics: Whether AI Lawyers Can Make Ethical Decisions.” Tulane University Journal of Technology and Intellectual Property 20 (August 27, 2019). 

    Remus, Dana, and Frank S. Levy. “Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law.” SSRN Electronic Journal, 2015. https://doi.org/10.2139/ssrn.2701092.

    Stepka, Matthew. “Law Bots: How AI Is Reshaping the Legal Profession.” Business Law Today from ABA. Business Law Today, February 21, 2022. https://businesslawtoday.org/2022/02/how-ai-is-reshaping-legal-profession/.

    Toews, Rob. “AI Will Transform The Field Of Law.” Forbes, December 19, 2019. https://www.forbes.com/sites/robtoews/2019/12/19/ai-will-transform-the-field-of-law/?sh=34e358a57f01.

    Wiggers, Kyle. “The Pitfalls of AI That Could Predict the Outcome of Court Cases.” VentureBeat, March 1, 2022. https://venturebeat.com/business/the-pitfalls-of-ai-that-could-predict-the-outcome-of-court-cases.