MCQLL, 10/9 — Siva Reddy

At the meeting of MCQLL this week, Siva Reddy will discuss ongoing work on Measuring Stereotypical Bias in Pretrained Neural Network Models of Language.

A key ingredient behind the success of neural network models for language is pretrained representations: word embeddings, contextual embeddings, and pretrained architectures. Since pretrained representations are obtained by learning from massive text corpora, there is a danger that unwanted societal biases are reflected in these models. I will discuss ideas on how to assess these biases in popular pretrained language models.
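As a rough illustration of what "assessing bias" in pretrained representations can mean (a minimal sketch, not the speaker's actual method), one simple family of tests measures whether a target word's embedding sits closer to one attribute set than another, e.g., male versus female terms. The toy vectors and word choices below are invented for demonstration; real tests would use embeddings from a pretrained model.

```python
# Illustrative sketch only: a WEAT-style association score quantifies bias as
# the difference in mean cosine similarity between a target embedding and two
# attribute sets. These 3-d vectors are toy stand-ins for real embeddings.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(target, attrs_a, attrs_b):
    """Mean similarity to set A minus mean similarity to set B.
    A positive score means the target leans toward A; negative, toward B."""
    sim_a = sum(cosine(target, a) for a in attrs_a) / len(attrs_a)
    sim_b = sum(cosine(target, b) for b in attrs_b) / len(attrs_b)
    return sim_a - sim_b

# Hypothetical embeddings: "nurse" plus two small attribute sets.
nurse = [0.1, 0.9, 0.2]
male_terms = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
female_terms = [[0.1, 0.8, 0.3], [0.2, 0.9, 0.1]]

# A negative score here indicates "nurse" is closer to the female set,
# which in a real model would flag a stereotypical association.
print(association(nurse, male_terms, female_terms))
```

A full evaluation would aggregate such scores over many target/attribute pairs and test them for statistical significance.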

This meeting will be in room 117 at 14:30 on Wednesday, October 9th.
