The Promise of AI for Positive Comparative Law

“For the rational study of the law the black-letter man may be the man of the present, but the man of the future is the man of statistics and the master of economics” (Oliver Wendell Holmes, 1897). Thus opens Benjamin Alarie’s presentation, “The Promise of AI for Positive Comparative Law”.

Benjamin Alarie, the CEO of Blue J Legal, stresses that a client always wants to know whether or not they will win in court. With Blue J Legal’s software and its use of machine learning, we can already answer this question remarkably well. Canada, as it turns out, is one of the world leaders on this front.

What we are witnessing is therefore an evolution of the legal information infrastructure: from books, paper, and loose-leaf publications, to digital ways of collecting information (such as mobile applications), and now to computational methods (machine learning, AI, and predictive analytics).

Benjamin Alarie gave a demonstration of his software (which for now focuses on tax and employment law). To use it, you click through answers to several questions about your case, and before you know it, the software predicts and explains your chances of winning in court. This prediction is based on every case that the courts have decided on the specific issue at hand. The machine-learning model is sophisticated enough to identify similarities and differences in how judges exercise discretion across many different contexts (e.g. federal versus provincial).
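In spirit, this is supervised classification over past decisions. The sketch below is illustrative only: the questionnaire features and toy data are hypothetical, and Blue J Legal’s actual (proprietary) model is certainly far more sophisticated.

```python
# Minimal sketch of outcome prediction as supervised classification.
# Features and data are hypothetical; this is NOT Blue J Legal's model.
from sklearn.linear_model import LogisticRegression

# Each row encodes a past case as yes/no answers to a questionnaire
# (e.g. "worker sets own hours?", "worker supplies own tools?").
past_cases = [
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
]
outcomes = [1, 0, 1, 0]  # 1 = client's position prevailed, 0 = it did not

model = LogisticRegression().fit(past_cases, outcomes)

# A new client's answers yield a probability of winning in court.
new_case = [[1, 0, 0]]
print(model.predict_proba(new_case)[0][1])  # probability of class 1 (win)
```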

Blue J Legal’s software will be extended to other jurisdictions, starting with the US market. Questions remain, however, as to its usefulness in civil law jurisdictions, although the way a civil code is applied can still vary from case to case.

Benjamin Alarie concluded with a few predictions as to what will happen over the next several decades as this kind of software matures: better predictability of legal outcomes, leading to faster and fairer dispute settlements; significant changes in legal education, with emerging methodologies in legal research; more productive provision of legal services; and, paradoxically, a harder task for judges, since the “easy” cases will settle, leaving the normative and policy debates to the courts.

From Facial Recognition to Moral Recognition: Early Experiences in AI, Ethics and Law

On Thursday, we had the honour of having David Robinson deliver the very first talk in our speaker series.

The talk, titled “From Facial Recognition to Moral Recognition: Early Experiences in AI, Ethics and Law”, began with two personal anecdotes. In the first, David spoke of the aid that technology provided to him as a child with mild cerebral palsy. In David’s case, mild cerebral palsy comes with consequences such as shaky handwriting and a slight wobble when walking. That’s where computers came in: despite his difficulties writing by hand, typing suddenly made David’s life much easier. The second anecdote concerned the importance of accessible design for people with disabilities (Twitter thread on that here: https://goo.gl/BB2E5Z).

David then went on to discuss his earlier belief that body cameras on police officers would boost accountability for both law enforcement and citizens. Later events, however, demonstrated that this new technology ultimately allowed the police to preserve their power over the community in cases of conflict. For example, footage from body-worn cameras is often hard for the public to obtain, or is tampered with. As a result, video evidence often ends up coming from bystanders instead.

The talk then transitioned to the limitations of facial recognition technology, which is widely available – you can even buy it off Amazon, under the name “Rekognition”! David highlighted that these systems often struggled to get the gender of the people in question right, and often flagged people of colour as dangerous, in part because they tended to be trained almost exclusively on white faces. David concluded that facial recognition is an inherently unethical technology, and successfully advocated against it in a letter written with the ACLU to Axon Investor Relations.
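Disparities like the ones David described are typically exposed by auditing a system’s error rate separately for each demographic group. A minimal sketch of such an audit follows; the groups, predictions, and labels are entirely hypothetical.

```python
# Sketch of a group-wise error audit for a classifier's outputs.
# All data here is made up, purely to illustrate the technique.
from collections import defaultdict

predictions = [  # (group, predicted_label, true_label)
    ("lighter-skinned", "male", "male"),
    ("lighter-skinned", "female", "female"),
    ("darker-skinned", "male", "female"),
    ("darker-skinned", "female", "female"),
]

errors = defaultdict(lambda: [0, 0])  # group -> [wrong, total]
for group, pred, true in predictions:
    errors[group][0] += int(pred != true)
    errors[group][1] += 1

# Unequal error rates across groups signal skewed training data.
for group, (wrong, total) in errors.items():
    print(f"{group}: {wrong / total:.0%} error rate")
```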

Next came a critique of the use of AI in pretrial risk assessment. David cited Megan Stevenson’s work, which finds no evidence that these algorithms actually change outcomes. Indeed, the algorithms predict only failure to appear (which usually means missing the bus, forgetting the court date, or being misinformed about it), and ignore dangerousness (that is, the risk of reoffending) in deciding whether to release the person in question. Furthermore, these algorithms are often based on old data that is no longer relevant (“zombie predictions”), or on unconstitutional patterns, making them legally and statistically untenable.
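To make the critique concrete, here is a toy caricature of such a score. Everything about it is hypothetical (the factors, the weights, the function name); the point is only that a failure-to-appear score is built from logistics, and says nothing about dangerousness, even though release decisions based on it are often read that way.

```python
# Toy caricature of a pretrial "risk" score: it models only failure
# to appear (FTA), not reoffending. Factors and weights are made up.
def fta_risk_score(prior_missed_dates: int, has_transport: bool,
                   was_notified: bool) -> float:
    """Estimate the chance of missing a court date -- pure logistics."""
    score = 0.2
    score += 0.25 * min(prior_missed_dates, 2)
    score += 0.15 if not has_transport else 0.0  # e.g. missed the bus
    score += 0.20 if not was_notified else 0.0   # misinformed of the date
    return min(round(score, 2), 1.0)

# Nothing in this score measures dangerousness, yet a high value can
# be treated as grounds for detention.
print(fta_risk_score(prior_missed_dates=1, has_transport=False,
                     was_notified=True))  # 0.6
```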

David concluded the talk by arguing that technology is above all an amplifier, and that what gets amplified is a matter of politics rather than of technology. He also stressed the need for morality and empathy as we adopt more technology: “With each step forward, the need for empathy grows. It does not shrink”.
