From Facial Recognition to Moral Recognition: Early Experiences in AI, Ethics and Law

On Thursday, we had the honour of having David Robinson inaugurate our speaker series with its very first talk.

The talk, titled “From Facial Recognition to Moral Recognition: Early Experiences in AI, Ethics and Law”, began with two personal anecdotes. In the first, David spoke of how technology became an aid to him as a child with mild cerebral palsy. In David’s case, mild cerebral palsy comes with consequences such as shaky handwriting and a slight wobble when walking. That’s where computers came in: despite his difficulties with handwriting, typing suddenly made his life much easier. The second anecdote spoke to the importance of accessible design for people with disabilities (Twitter thread on that here: https://goo.gl/BB2E5Z).

David then went on to speak about his earlier belief that body cameras on police officers would boost accountability between law enforcement and citizens. However, later events demonstrated that this new technology ultimately allowed the police to preserve their power over that of the community in cases of conflict. For example, footage from these body-worn cameras is often hard for the public to obtain, or is tampered with. As a result, video evidence often ends up coming from bystanders instead.

The talk then transitioned to the limitations of facial recognition technology, which is widely available – you can even buy it off Amazon, under the name “Rekognition”! David highlighted that these systems often misclassify the gender of the people they analyze, and often flag people of colour as dangerous, in part because the systems tend to be trained almost exclusively on white faces. David concluded that facial recognition is an inherently unethical technology, and successfully advocated against it in a letter written with the ACLU to Axon Investor Relations.

Next came a critique of the use of AI in pretrial risk assessment. David cited Megan Stevenson’s work, which finds no evidence that these algorithms actually influence outcomes. Indeed, the algorithms only predict failure to appear (which usually comes down to missing the bus, forgetting the court date, or even being misinformed about the date), and ignore the question of dangerousness (that is, the risk of reoffending) when deciding whether to release the person in question. Furthermore, these algorithms are built on old data that is no longer relevant (“zombie predictions”), or on patterns that are unconstitutional to rely on, and so are legally and statistically untenable.

David concluded the talk by observing that technology is above all an amplifier, and that what gets amplified depends on politics rather than on the technology itself. He also stressed the need for morality and empathy as we rely ever more on technology: “With each step forward, the need for empathy grows. It does not shrink”.
