Archive for the 'MCQLL' Category

MCQLL, 12/5

At next week’s meeting, Amy and Benji will each give a presentation. Amy is going to present her project “Inference and Learnability over Minimalist Grammars” (abstract below). Benji is going to present the paper Parsing as Deduction (Pereira & Warren, 1983) (paper attached).

(Working) Title: Inference and Learnability over Minimalist Grammars

Abstract: This is a draft presentation of some of my current PhD research, intended for a more computationally-oriented audience. It contains collaborative work done over the past year with Eva Portelance (Stanford), Daniel Harasim (EPFL), and Leon Bergen (UCSD). Minimalist Grammars are a lexicalized grammar formalism inspired by Chomsky’s (1995) Minimalist Program, and as such are well suited to formalizing theories in contemporary syntax. Our work formulates a learning model based on the technique of Variational Bayesian Inference and applies the model to pilot experiments. In this presentation, I focus on giving an introduction to the central issues in syntactic theory and motivating the problems we wish to address. I give an introduction to syntactic theory and formal grammars, and demonstrate why context-free grammars are insufficient to adequately characterize natural language. Minimalist Grammars, a lexicalized, mildly context-sensitive formalism, are introduced as a more linguistically adequate alternative.
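The abstract’s point about the insufficiency of context-free grammars can be illustrated with a textbook example: the language a^n b^n c^n (triple-crossing dependencies, the formal analogue of cross-serial constructions in Swiss German) is provably not context-free by the pumping lemma, yet a recognizer for it is trivial to write. A minimal sketch, entirely my own illustration and not part of the presentation:

```python
import re

def is_anbncn(s: str) -> bool:
    """Recognize {a^n b^n c^n : n >= 0}, a standard example of a
    pattern no context-free grammar can generate (by the pumping
    lemma for context-free languages)."""
    m = re.fullmatch(r"(a*)(b*)(c*)", s)
    return bool(m) and len(m.group(1)) == len(m.group(2)) == len(m.group(3))

print(is_anbncn("aabbcc"))  # True
print(is_anbncn("aabbc"))   # False
```

Mildly context-sensitive formalisms such as Minimalist Grammars can generate exactly this kind of crossing-dependency pattern, which is one motivation for adopting them over CFGs.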

We will meet Wednesday at 5:30pm in room 117. Food will be provided.

MCQLL, 11/28

At next week’s meeting, Yves will be presenting the family of stochastic processes known as Dirichlet processes.

The Dirichlet distribution, a generalization of the Beta distribution, is a probability distribution over finite-dimensional categorical distributions. The Dirichlet process can be seen as an infinite-dimensional generalization of this, which balances the trade-off between partitioning random observations into fewer or more categories. I will describe this through the metaphor of the “Chinese restaurant process” and talk about its use in the fragment grammar model of morphological productivity.
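The Chinese restaurant process metaphor can be sketched directly: customer i joins an existing table with probability proportional to that table’s occupancy, or opens a new table with probability proportional to the concentration parameter α. A minimal sketch (the function name and α value are my own, purely illustrative):

```python
import random

def crp(n_customers: int, alpha: float, seed: int = 0) -> list[int]:
    """Sample a partition of n_customers via the Chinese restaurant
    process with concentration parameter alpha; returns a table index
    per customer. Larger alpha yields more tables on average."""
    rng = random.Random(seed)
    counts: list[int] = []  # counts[k] = number of customers at table k
    assignment: list[int] = []
    for i in range(n_customers):
        # Customer i sits at table k with prob counts[k] / (i + alpha),
        # or at a new table with prob alpha / (i + alpha).
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1
                assignment.append(k)
                break
        else:  # no existing table chosen: open a new one
            counts.append(1)
            assignment.append(len(counts) - 1)
    return assignment

tables = crp(20, alpha=1.0)
```

The “rich get richer” dynamic here is what lets the process trade off reusing existing categories against creating new ones without fixing their number in advance.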

We will be meeting at 5:30pm Wednesday November 28th in room 117.

MCQLL Meeting Wednesday, 11/21

At this week’s MCQLL meeting, Bing’er Jiang will present Feldman et al.’s (2013) A Role for the Developing Lexicon in Phonetic Category Acquisition. Please find the abstract below:

Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning.
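The abstract’s central idea can be shown with a toy illustration (my own sketch, not the authors’ actual model): two heavily overlapping acoustic “vowel” categories are hard to separate from the sounds alone, but because sounds co-occur within words, averaging tokens within each word type recovers the underlying category means.

```python
import random
import statistics

def simulate_tokens(n_per_word: int = 30, seed: int = 1):
    """Toy data: two 'vowel' categories with means 0.0 and 1.0 and a
    large shared sd (0.8), so individual tokens overlap heavily.
    Each category is tied to one (hypothetical) word type."""
    rng = random.Random(seed)
    words = {"wordA": 0.0, "wordB": 1.0}
    return [(w, rng.gauss(mu, 0.8)) for w, mu in words.items()
            for _ in range(n_per_word)]

def per_word_means(tokens):
    """Group tokens by the word they occur in and average each group."""
    by_word: dict[str, list[float]] = {}
    for w, x in tokens:
        by_word.setdefault(w, []).append(x)
    return {w: statistics.mean(xs) for w, xs in by_word.items()}

means = per_word_means(simulate_tokens())
# Averaging within words shrinks the acoustic overlap: the two word
# means land near the true category means even though the raw token
# distributions overlap.
```

This is only the statistical intuition behind the paper’s claim; the actual model is a joint Bayesian model of segmentation and category learning, not a per-word averaging scheme.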


We will be meeting Wednesday November 21 at 5:00pm in room 117. Food will be provided. See you then!

MCQLL Meeting, 10/24

At next week’s meeting, Seara will present her project on the inverse relation between size of inflectional classes and word frequency. Here is the abstract:

In this project, we attempt to quantitatively demonstrate the inverse relation between the size of inflectional classes and word frequency. I will go over the background on productivity in inflection and word frequency, the stages in quantitatively demonstrating the relationship between word frequency and inflectional class size, and, finally, the project’s next steps moving forward.

The meeting will be next Wednesday from 5:30pm to 7:30pm in room 117.

MCQLL Meeting, 10/17

At next week’s meeting, Wilfred will be presenting the following paper: “Learning Semantic Correspondence with Less Supervision” by Liang et al. (2009). Please find the abstract below:

A central problem in grounded language acquisition is learning the correspondences between a rich world state and a stream of text which references that world state. To deal with the high degree of ambiguity present in this setting, we present a generative model that simultaneously segments the text into utterances and maps each utterance to a meaning representation grounded in the world state. We show that our model generalizes across three domains of increasing difficulty—Robocup sportscasting, weather forecasts (a new domain), and NFL recaps.

The meeting will be Wednesday Oct 17 from 5:30pm to 7:30pm in room 117.

MCQLL Meeting, 10/10

For this week’s MCQLL meeting, James Tanner will present new data on individual speaker variability in the Tokyo Japanese voicing contrast. The meeting will be Wednesday Oct 10 from 5:30pm to 7:30pm in room 117.

MCQLL Meeting, 10/3

This week’s MCQLL meeting will be Wednesday from 5:30pm to 7:30pm in room 117. Greg Theos will present his work on analyzing data from lexical decision tasks.
