MCQLL Lab Meeting, 9/25 — Jacob Hoover

At next week’s MCQLL meeting, Jacob will present a new research project on finding syntactic structure in contextual embeddings.

A recent paper by Hewitt and Manning shows that the pretrained embeddings of contextual neural networks (ELMo, BERT) encode information about dependency structure. More concretely, a learned linear transformation (a “probe”) on top of the pretrained embeddings can reconstruct the dependency parses of the Penn Treebank. Several questions arise from this result: Does BERT have a theory of syntax? What would that even mean? What structure or information is the probe extracting from the embeddings? Jacob will introduce the paper’s results for a general audience and discuss some of these questions as current directions for research.
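To make the idea of a structural probe concrete, here is a minimal sketch of the core objective: learn a linear map B so that squared L2 distances between transformed word embeddings approximate parse-tree distances between the corresponding words. The embeddings, the chain-shaped toy parse, and the finite-difference training loop below are all illustrative assumptions, not the actual setup or optimizer from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_words, dim, rank = 6, 8, 4
H = rng.normal(size=(n_words, dim))          # toy stand-in for contextual embeddings
# toy gold tree distances for a chain-shaped parse: d(i, j) = |i - j|
idx = np.arange(n_words)
D_gold = np.abs(idx[:, None] - idx[None, :]).astype(float)

B = rng.normal(scale=0.1, size=(rank, dim))  # the probe's linear map

def probe_distances(B, H):
    """Squared L2 distance between B h_i and B h_j for every word pair."""
    diffs = H[:, None, :] - H[None, :, :]    # (n, n, dim) pairwise differences
    proj = diffs @ B.T                        # apply the probe to each difference
    return (proj ** 2).sum(-1)

def loss(B):
    """Mean absolute gap between probe distances and gold tree distances."""
    return np.abs(probe_distances(B, H) - D_gold).mean()

# crude finite-difference gradient descent, purely for illustration
init_loss = loss(B)
lr, eps = 0.01, 1e-5
for step in range(200):
    grad = np.zeros_like(B)
    for ij in np.ndindex(B.shape):
        Bp = B.copy()
        Bp[ij] += eps
        grad[ij] = (loss(Bp) - loss(B)) / eps
    B -= lr * grad
```

After training, `loss(B)` should be lower than `init_loss`: the probe's distances have moved toward the toy tree distances, which is the sense in which a linear probe "finds" syntactic structure in the embedding space.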

The meeting will be in room 117 at 14:30 on Wednesday.

