Robots in Public Spaces: Privacy & Design

Kristen Thomasen’s talk can be found here

Robot Rules: Regulating Artificial Intelligence

Jacob Turner’s talk on Monday was filmed – find the link here

His PowerPoint slides can be found here

Discrimination, AI & the Criminal Justice System

Our final event last semester was a roundtable with Fahad Diwan (Smartbail), Yuan Stevens (Centre for Comparative Criminology), Vincent Southerland (Center on Race, Inequality, and the Law, NYU), and Marie Manikis (McGill), on the theme “Discrimination, AI & the Criminal Justice System”. Here are some of the points raised in the almost two-hour discussion:

  • Fahad started us off by saying that AI is a powerful tool for the future, even more so than the internet was in the late 1990s. He stressed that the technology could do a great deal of harm or good, depending on how it is used.
  • Vincent echoed this point, arguing that criminal justice in the USA is rooted in slavery and oppression, and that although there is tremendous promise that technology can be used for good, the criminal system has often overwhelmed any good use of it. One example he gave was police body-worn cameras, which cast suspicion on anyone the officer happens to interact with.
  • Fahad jumped in to talk about his project, Smartbail. Smartbail is a risk-assessment tool that uses machine learning to assess defendants’ profiles and predict whether they will appear in court if released on bail (a purely illustrative sketch of such a classifier follows this list). Fahad believes that AI can be used in law for good, and that judges are more of a “black box”, so to speak, than AI.
  • Yuan asked how we could have safeguards in computer code to ensure the fairness of the process. Fahad thinks that the software ought to be transparent and open in order to ensure complete legitimacy.
  • Vincent thought that the argument that a judge is a black box is a red herring, since at the end of the day the judge is still the one making the decisions, AI tools or not. He believes it would be better to turn the tools on the judges, in order to monitor their racial bias.
  • Fahad believes that the data can be “sanitized” of racial bias; he doesn’t believe that data will necessarily discriminate.
  • Yuan responded that the internet is a mirror of what already exists, and that we need to scrutinize what technology is already enabling. She doesn’t think technology can solve our problems.
  • Vincent and Yuan both agreed that we need to make it more difficult for the system to lock people up, and warned that e-carceration (with ankle monitors, for example) is a real possibility. Vincent also believes that judges will only listen to tools like Smartbail when they confirm what the judges already believe.
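
Smartbail’s internals were not discussed in detail, but a risk-assessment tool of the kind Fahad described is, at its core, a binary classifier trained on historical outcomes. The sketch below is purely illustrative: the features, the synthetic data, and the use of scikit-learn’s logistic regression are all assumptions for the sake of the example, not a description of Smartbail itself.

```python
# Purely illustrative sketch of a pretrial risk-assessment classifier.
# Features, data, and model choice are hypothetical; this is NOT Smartbail.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical historical records: one row per past defendant
# (e.g. age, number of prior missed appearances, days since arrest).
X = rng.normal(size=(1000, 3))
# Label: 1 if the defendant appeared in court, 0 otherwise (synthetic).
y = (X @ np.array([0.5, -1.2, 0.1]) + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# For a new defendant, the model outputs a probability of appearing in
# court, which a decision-maker could weigh in a bail determination.
print("P(appears in court) =", model.predict_proba(X_test[:1])[0, 1])
print("Held-out accuracy:", model.score(X_test, y_test))
```

The roundtable’s disagreement over “sanitizing” data maps directly onto this sketch: the debate is about what goes into X and y, not about the classifier itself.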

The Promise of AI for Positive Comparative Law

“For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics” (Oliver Wendell Holmes, 1897). Thus opens Benjamin Alarie’s presentation, “The Promise of AI for Positive Comparative Law”.

Benjamin Alarie, CEO of Blue J Legal, stresses that a client always wants to know whether or not they will win in court. With Blue J Legal’s machine-learning software, this question can already be answered exceptionally well. Canada, as it turns out, happens to be one of the world leaders on this front.

What we are witnessing is therefore an evolution of the legal informational infrastructure: from books, paper, and loose-leaf publications, to digital ways of collecting information (such as mobile applications), and now to computational ones (machine learning, AI, and predictive analytics).

Benjamin Alarie gave a demonstration of his software (which for now focuses on tax and employment law). You answer a series of questions about your case, and before you know it, the software predicts and explains your chances of winning in court. The prediction is based on every case the courts have decided on the specific issue at hand. The machine-learning model is sophisticated enough to identify similarities and differences in how judges exercise discretion across many different contexts (e.g. federal versus provincial).
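
Blue J Legal has not published how its model works, but as a rough illustration only, a questionnaire-driven outcome predictor can be imagined as a set of learned weights over yes/no answers, passed through a logistic function. Everything below (the questions, the weights, even the legal issue) is invented for the sketch:

```python
# Rough illustration of a questionnaire-driven case-outcome predictor.
# Questions, weights, and model form are invented; Blue J Legal's actual
# system is proprietary and certainly more sophisticated.
import math

# Hypothetical yes/no questions for a worker-classification dispute, with
# weights that a real system would learn from decided cases.
QUESTIONS = {
    "worker_sets_own_hours": 1.1,
    "worker_supplies_own_tools": 0.8,
    "employer_controls_methods": -1.5,
    "worker_bears_financial_risk": 0.9,
}
BIAS = -0.2  # intercept, also learned in a real system

def predict(answers: dict) -> float:
    """Return P(court finds an independent-contractor relationship)."""
    score = BIAS + sum(w for q, w in QUESTIONS.items() if answers[q])
    return 1 / (1 + math.exp(-score))  # logistic link

def explain(answers: dict) -> None:
    """Show how each 'yes' answer pushed the prediction."""
    for q, w in QUESTIONS.items():
        if answers[q]:
            direction = "toward" if w > 0 else "against"
            print(f"  {q}: weighs {direction} contractor status ({w:+.1f})")

answers = {
    "worker_sets_own_hours": True,
    "worker_supplies_own_tools": True,
    "employer_controls_methods": False,
    "worker_bears_financial_risk": False,
}
print(f"P(contractor) = {predict(answers):.2f}")
explain(answers)
```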

Blue J Legal’s software will be extended to other jurisdictions, starting with the US market. Questions remain, however, about its usefulness in civil law jurisdictions, although the way a civil code is applied can still vary from case to case.

Benjamin Alarie concluded with several predictions about the coming decades: better predictability of legal outcomes, which would lead to faster and fairer dispute settlements; significant changes in legal education, with emerging methodologies in legal research; more productive provision of legal services; and, paradoxically, a more difficult task for judges, since the “easy” cases will be settled while the normative and policy debates will be handed to them.

From Facial Recognition to Moral Recognition: Early Experiences in AI, Ethics and Law

On Thursday, we had the honour of having David Robinson deliver the very first talk of our speaker series.

The talk, titled “From Facial Recognition to Moral Recognition: Early Experiences in AI, Ethics and Law”, began with two personal anecdotes. In the first, David spoke of the help technology gave him as a child with mild cerebral palsy, which in his case brought consequences such as shaky handwriting and a slight wobble when walking. That’s where computers came in: despite his difficulty writing by hand, typing suddenly made David’s life much easier. The second anecdote spoke to the importance of accessible design for people with disabilities (Twitter thread on that here: https://goo.gl/BB2E5Z).

David then went on to speak of his prior belief that body cameras on police officers would boost accountability between law enforcement and citizens. Later events, however, demonstrated that this new technology ultimately allowed the police to preserve their power over the community in cases of conflict. For example, footage from these body-worn cameras is often hard for the public to obtain, or is tampered with. As a result, video evidence often ends up coming from bystanders.

The talk then transitioned to the limitations of facial recognition technology, which is widely available – you can even buy it off Amazon, under the name “Rekognition”! David highlighted that these systems often struggled to correctly identify the gender of the people in question, and often flagged people of color as dangerous, in part because they tended to be trained predominantly on white faces. David concluded that facial recognition is an inherently unethical technology, and successfully advocated against it in a letter written with the ACLU to Axon Investor Relations.
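
The failure mode David described, that is, higher error rates for groups under-represented in the training data, is typically surfaced by disaggregated evaluation: scoring the system separately on each demographic group rather than reporting a single overall accuracy. A minimal sketch, with synthetic labels standing in for a real benchmark:

```python
# Minimal sketch of disaggregated evaluation: report a classifier's
# accuracy per demographic group, not just in aggregate.
# All data here is synthetic; a real audit would use a labelled benchmark.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])  # imbalanced groups
y_true = rng.integers(0, 2, size=n)

# Simulate a model that is less reliable on under-represented group B.
flip_prob = np.where(group == "A", 0.05, 0.25)
y_pred = np.where(rng.random(n) < flip_prob, 1 - y_true, y_true)

print(f"Overall accuracy: {(y_pred == y_true).mean():.1%}")  # looks fine
for g in ("A", "B"):
    mask = group == g
    acc = (y_pred[mask] == y_true[mask]).mean()
    print(f"Group {g} accuracy: {acc:.1%}")  # the disparity shows up here
```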

Next came a critique of the use of AI in pretrial risk assessment. David cited Megan Stevenson’s work, which found no evidence that these algorithms influence outcomes. Indeed, the algorithms only predict failure to appear (which usually comes down to missing the bus, forgetting the court date, or even being misinformed about the date) and ignore dangerousness (that is, the risk of reoffending) when deciding whether to release the person in question. Furthermore, these algorithms often rest on old data that is no longer relevant (“zombie predictions”), or on unconstitutional patterns, and so are legally and statistically untenable.
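
The “zombie prediction” problem is, in machine-learning terms, a distribution-shift problem: a model fitted under an old regime keeps issuing scores calibrated to that regime long after the underlying rates have changed. A toy illustration with entirely invented numbers:

```python
# Toy illustration of a "zombie prediction": a model trained on old data
# keeps predicting the old failure-to-appear (FTA) rate after reforms
# (e.g. court-date reminders) have changed the real-world rate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_data(n: int, fta_rate: float):
    X = rng.normal(size=(n, 2))                  # hypothetical features
    y = (rng.random(n) < fta_rate).astype(int)   # 1 = failed to appear
    return X, y

# Train on historical data where 40% of defendants failed to appear.
X_old, y_old = make_data(5000, fta_rate=0.40)
model = LogisticRegression().fit(X_old, y_old)

# Today the true rate is 10%, but the frozen model predicts the old world.
X_new, y_new = make_data(5000, fta_rate=0.10)
print(f"Actual FTA rate today: {y_new.mean():.0%}")
print(f"Model's mean predicted FTA risk: "
      f"{model.predict_proba(X_new)[:, 1].mean():.0%}")
```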

David concluded the talk by saying that technology is above all an amplifier, and that what gets amplified is a matter of politics rather than of technology. He also stressed the need for morality and empathy as we adopt more technology: “With each step forward, the need for empathy grows. It does not shrink”.
