MCQLL, 2/20 – Seara Chen

At next week’s MCQLL meeting, Seara Chen will be presenting on computational models for phonotactics. She will survey two of the major types of computational models for phonotactics, drawing on a collection of papers. She will also give a short explanation of the experiment the lab is currently working on, which will be used to compare different automated phonotactic scorers.

The meeting will be Wednesday 2/20 5:30pm in room 117. All are welcome!

MCQLL, 2/13 – Vanna Willerton

At next week’s MCQLL meeting, Vanna will be presenting two short papers on the topic of language acquisition. Both papers use statistical methods to deduce interesting information regarding the role of data in early language learning:

  1. How Data Drive Early Word Learning: A Cross-Linguistic Waiting Time Analysis. Mollica & Piantadosi (2017)
  2. Humans store ~1.5 megabytes during language acquisition: information theoretic bounds. Mollica & Piantadosi (?)

You are not required to read them, but they are quite short, so you are welcome to read them ahead of time to make for a more interesting discussion. Please click the titles for the papers.

We will be meeting Wednesday 5:30pm in room 117.

MCQLL, 2/6 – Jacob Hoover

At next week’s MCQLL lab meeting, Jacob will present on non-projectivity and mild context-sensitivity, drawing on Marco Kuhlmann’s 2010 book “Dependency Structures and Lexicalized Grammars”. Word-to-word dependencies have a history in descriptive linguistics, based on the intuition that the structure of a sentence can be captured by the relationships between its words. Dependency structures can be sorted into different classes depending on the amount and form of crossing dependencies that are allowed. Examining classes of non-projective dependency structures and how they relate to grammar formalisms (starting with projective dependency structures, which correspond to lexicalized context-free grammars), as well as to dependency corpora, is a way to investigate what kind of limited context-sensitivity is best suited to the long-distance dependencies and free word order of natural languages.

We will meet Wednesday 2/6 5:30pm in room 117.
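
As a rough sketch of the projectivity distinction at the heart of the talk: two dependency arcs cross when exactly one endpoint of one arc lies strictly inside the span of the other. The encoding and example arcs below are our own illustration (not Kuhlmann’s), and the check covers only the crossing-arc condition, not the full definition of projectivity:

```python
def is_projective(arcs):
    """Check whether a set of dependency arcs is free of crossings.

    arcs: list of (head, dependent) pairs over word positions (integers).
    Two arcs cross if exactly one endpoint of one arc lies strictly
    inside the span of the other.
    """
    spans = [tuple(sorted(a)) for a in arcs]
    for i, (l1, r1) in enumerate(spans):
        for l2, r2 in spans[i + 1:]:
            # crossing: the spans interleave rather than nest
            if l1 < l2 < r1 < r2 or l2 < l1 < r2 < r1:
                return False
    return True

# A nested (projective) structure vs. a crossing (non-projective) one:
nested = [(2, 1), (2, 4), (4, 3)]   # arcs nest like brackets
crossing = [(1, 3), (2, 4)]         # arcs 1–3 and 2–4 interleave
```

Structures passing this check are the ones that line up with lexicalized context-free grammars; allowing limited kinds of crossing is where the mildly context-sensitive formalisms come in.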


MCQLL, 1/30 – Amy Bruno

Next Wednesday, Amy will present her project on Empirical Learnability and Inference with Minimalist Grammars, the second part of the presentation she gave at the end of last semester.

Abstract: This is a draft presentation of some of my current PhD research, intended for a more computationally-oriented audience. It contains collaborative work done over the past year with Eva Portelance (Stanford), Daniel Harasim (EPFL), and Leon Bergen (UCSD). Minimalist Grammars are a lexicalized grammar formalism inspired by Chomsky’s (1994) Minimalist Program, and as such are well suited to formalizing theories in contemporary syntactic theory. Our work formulates a learning model based on the technique of Variational Bayesian Inference and applies the model to pilot experiments. In this presentation, I focus on giving an introduction to the central issues in syntactic theory and motivating the problems we wish to address. I give an introduction to syntactic theory and formal grammars, and demonstrate why context-free grammars are insufficient to adequately characterize natural language. Minimalist Grammars, a lexicalized mildly context-sensitive formalism, are introduced as a more linguistically adequate alternative.

The meeting will be Wednesday 5:30pm in room 117.
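
As a quick illustration of the kind of argument the abstract mentions: the classic evidence against context-freeness (Shieber’s 1985 Swiss German data) involves cross-serial dependencies of the shape aⁿbᵐcⁿdᵐ, where dependencies interleave rather than nest. The toy generators below are our own illustration, not material from the talk:

```python
def nested(n):
    # The kind of matching a CFG can capture: pairs nest like brackets.
    return "a" * n + "b" * n

def cross_serial(n, m):
    # The Swiss German pattern a^n b^m c^n d^m: the i-th "a" depends on
    # the i-th "c" and the j-th "b" on the j-th "d", so the dependency
    # arcs cross. No context-free grammar generates exactly this language.
    return "a" * n + "b" * m + "c" * n + "d" * m
```

Mildly context-sensitive formalisms such as Minimalist Grammars are designed to capture exactly this limited amount of crossing without the full power of context-sensitive grammars.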

MCQLL, 12/5

At next week’s meeting, Amy and Benji will each give a presentation. Amy is going to present her project “Inference and Learnability over Minimalist Grammars” (abstract below). Benji is going to present the paper Parsing as Deduction (Pereira & Warren, 1983) (paper attached).

(Working) Title: Inference and Learnability over Minimalist Grammars

Abstract: This is a draft presentation of some of my current PhD research, intended for a more computationally-oriented audience. It contains collaborative work done over the past year with Eva Portelance (Stanford), Daniel Harasim (EPFL), and Leon Bergen (UCSD). Minimalist Grammars are a lexicalized grammar formalism inspired by Chomsky’s (1994) Minimalist Program, and as such are well suited to formalizing theories in contemporary syntactic theory. Our work formulates a learning model based on the technique of Variational Bayesian Inference and applies the model to pilot experiments. In this presentation, I focus on giving an introduction to the central issues in syntactic theory and motivating the problems we wish to address. I give an introduction to syntactic theory and formal grammars, and demonstrate why context-free grammars are insufficient to adequately characterize natural language. Minimalist Grammars, a lexicalized mildly context-sensitive formalism, are introduced as a more linguistically adequate alternative.

We will meet Wednesday at 5:30pm in room 117. Food will be provided.

MCQLL, 11/28

At next week’s meeting, Yves will be presenting the family of stochastic processes known as Dirichlet processes.

The Dirichlet distribution, a generalization of the Beta distribution, is a probability distribution over finite-dimensional categorical distributions. The Dirichlet process can be seen as an infinite-dimensional generalization of this, which balances the trade-off between partitioning random observations into fewer or additional categories. I will describe this through the metaphor of the “Chinese restaurant process” and talk about its use in the fragment grammar model of morphological productivity.

We will be meeting at 5:30pm Wednesday November 28th in room 117.
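
For a feel for the process ahead of the talk, here is a minimal sampler for the Chinese restaurant process — a sketch of the standard construction, with parameter names of our own choosing:

```python
import random

def crp(n, alpha, seed=0):
    """Sample a partition of n customers via the Chinese restaurant process.

    Customer i joins an existing table with probability proportional to
    its current occupancy, or opens a new table with probability
    proportional to the concentration parameter alpha.
    """
    rng = random.Random(seed)
    tables = []       # occupancy count per table
    assignment = []   # table index chosen by each customer
    for i in range(n):
        weights = tables + [alpha]      # last slot = open a new table
        r = rng.uniform(0, i + alpha)   # total weight is i + alpha
        acc = 0.0
        for t, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if t == len(tables):
            tables.append(1)            # new table
        else:
            tables[t] += 1              # join existing table
        assignment.append(t)
    return assignment, tables

seats, counts = crp(20, alpha=1.0)
```

Larger alpha favors opening new tables (more categories); smaller alpha favors the rich-get-richer clustering that makes the process useful as a prior over partitions.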

MCQLL Meeting Wednesday, 11/21

At this week’s MCQLL meeting, Bing’er Jiang will present Feldman et al.’s (2013) A Role for the Developing Lexicon in Phonetic Category Acquisition. Please find the abstract below:

Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning.


We will be meeting Wednesday November 21 at 5:00pm in room 117. Food will be provided. See you then!
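
The core intuition — word-level context disambiguating overlapping phonetic categories — can be sketched with a toy Bayesian computation. This is our simplified illustration, not Feldman et al.’s actual model; the categories, word frames, and probabilities below are all hypothetical:

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Two hypothetical overlapping vowel categories on a 1-D acoustic axis.
mu = {"A": 0.0, "B": 1.0}
sigma = 1.0

def p_cat_given_sound(x):
    """Posterior probability of category A from acoustics alone (equal priors)."""
    la, lb = gauss(x, mu["A"], sigma), gauss(x, mu["B"], sigma)
    return la / (la + lb)

# Hypothetical lexical knowledge: how often each word frame contains
# category A (as if learned from previously segmented words).
p_A_given_word = {"w1": 0.9, "w2": 0.1}

def p_cat_given_sound_and_word(x, word):
    """Posterior combining the acoustics with the word-level prior."""
    prior = p_A_given_word[word]
    la = prior * gauss(x, mu["A"], sigma)
    lb = (1 - prior) * gauss(x, mu["B"], sigma)
    return la / (la + lb)

ambiguous = 0.5  # a token midway between the two category means
```

Acoustics alone leave the midway token at 50/50, but hearing it inside a word that usually contains category A pushes the posterior sharply toward A — the kind of top-down constraint the paper argues for.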

MCQLL Meeting, 10/24

At next week’s meeting, Seara will present her project on the inverse relation between size of inflectional classes and word frequency. Here is the abstract:

In this project, we attempt to quantitatively demonstrate the inverse relation between the size of inflectional classes and word frequency. I will go over the background on productivity in inflection and word frequency, the stages in quantitatively demonstrating the relationship between word frequency and inflectional class size, and finally the next steps of the project moving forward.

The meeting will be next Wednesday from 5:30pm to 7:30pm at room 117.
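
As a toy illustration of the kind of quantitative relation at stake, one could compute a rank correlation between frequency and class size. The data below are entirely hypothetical, purely to show the computation:

```python
def rank(values):
    # 1-based ranks; this toy data has no ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(xs, ys):
    """Spearman rank correlation (no-ties formula)."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical lemmas: higher-frequency words in smaller classes.
freq = [1000, 300, 120, 40, 10]     # corpus frequency
class_size = [3, 8, 15, 40, 90]     # size of the inflectional class

rho = spearman(freq, class_size)    # negative rho = inverse relation
```

A strongly negative rho on real corpus data would be one way to quantify the inverse relation the project is after.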

MCQLL Meeting, 10/17

At next week’s meeting, Wilfred will be presenting the following paper: “Learning Semantic Correspondence with Less Supervision” by Liang et al. (2009). Please find the abstract below:

A central problem in grounded language acquisition is learning the correspondences between a rich world state and a stream of text which references that world state. To deal with the high degree of ambiguity present in this setting, we present a generative model that simultaneously segments the text into utterances and maps each utterance to a meaning representation grounded in the world state. We show that our model generalizes across three domains of increasing difficulty—RoboCup sportscasting, weather forecasts (a new domain), and NFL recaps.

Meeting will be Wednesday Oct 17 from 5:30pm to 7:30pm at room 117.
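
The correspondence problem itself can be illustrated with a far cruder heuristic than Liang et al.’s generative model: align each utterance to the world-state record it shares the most tokens with. The records and utterances below are invented for illustration only:

```python
# Hypothetical world state: records with token sequences (imagine a
# weather forecast), plus utterances referring to some of them.
records = {
    "temperature": ["high", "of", "71", "degrees"],
    "wind": ["wind", "from", "the", "north", "at", "10", "mph"],
}
utterances = [
    ["a", "high", "near", "71"],
    ["north", "wind", "around", "10", "mph"],
]

def align(utterance):
    """Map an utterance to the record sharing the most tokens with it."""
    def overlap(rec_tokens):
        return len(set(utterance) & set(rec_tokens))
    return max(records, key=lambda r: overlap(records[r]))

alignments = [align(u) for u in utterances]
```

The paper's model replaces this token-overlap shortcut with a proper generative story over segmentations and meaning representations, which is what lets it cope with genuine ambiguity.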

MCQLL Meeting, 10/10

For this week’s MCQLL meeting, James Tanner will present new data on individual speaker variability in the Tokyo Japanese voicing contrast. The meeting will be Wednesday Oct 10 from 5:30pm to 7:30pm in room 117.

MCQLL Meeting, 10/3

This week, MCQLL will meet Wednesday from 5:30pm to 7:30pm in room 117. Greg Theos will present his work analyzing data from lexical decision tasks.
