
MULL-Lab, 10/26 – Jessica Coon and Justin Royer

MULL-Lab will be meeting Tuesday, October 26, at 4:30pm. Jessica Coon and Justin Royer will be presenting their paper, titled “Object raising bleeds binding: A new correlate of high-absolutive syntax in Mayan” (see abstract below). If you have not registered for MULL-Lab but would like to attend, you can register here.

Abstract: A subset of Mayan languages prohibits the extraction of subjects from transitive sentences, a phenomenon known as the Ergative Extraction Constraint (EEC) (Aissen 2017, Coon et al. 2021). One family of accounts connects the EEC to object raising: the object consistently raises above the subject in transitive sentences, which consequently blocks extraction of the subject (Campana 1992, Coon et al. 2014). A second family of accounts leaves the object in its canonical position, but ties the EEC to optimality; in short, a construction other than a regular transitive sentence is available in cases of subject extraction, and a ranking of constraints enforces the use of that construction (Stiebels 2006, Erlewine 2016). In this talk, we provide new evidence for the object raising approach. We show that object raising leads to a configuration in which the subject does not bind into the object, with important repercussions for the distribution of coreferential nominals in Mayan languages that exhibit the EEC.

MULL-Lab, 10/21 – Sigwan Thivierge

MULL-Lab will be meeting this Tuesday at 4:30pm.  Sigwan Thivierge (Concordia University) will be presenting on community-led language reclamation.  If you would like to attend but haven’t registered, you can do so here.

MULL-Lab, 10/12 – Will Johnston

MULL-Lab will be meeting Tuesday, October 12, at 4:30pm.  This week, Will Johnston will be presenting on motion predicates in Hmong.

Montreal Underdocumented Languages Linguistics Lab (MULL-Lab) launched

We are happy to report that the former McGill Fieldwork Lab has been reconfigured into the Montreal Underdocumented Languages Linguistics Lab (MULL-Lab), led by McGill faculty members Jessica Coon, James Crippen, and Martina Martinović, together with Lisa Travis, Richard Compton (UQAM), and Sigwan Thivierge (Concordia).

Learn more at the new website.

MULL-Lab Meeting, 09/28 – Richard Compton

MULL-Lab (formerly Fieldwork Lab) will be meeting Tuesday, September 28, at 4:30pm.  Richard Compton (UQAM) will be presenting a paper titled On the structure of (personal) pronouns in Inuktitut.  If you would like to attend but still haven’t registered, you may do so here.

MULL-Lab Meeting

Montreal Underdocumented Languages Linguistics Lab (MULL-Lab, formerly Fieldwork Lab) will be having its first meeting this Tuesday, September 21, from 3:30-4:30pm.  If you would like to attend, please fill out this registration form.  We will be setting our schedule of talks for this semester, so please come with ideas!

MCQLL Lightning Talks, 9/14

MCQLL will be meeting this Tuesday, September 14, at 3:00pm on Zoom.

This week’s meeting will be a series of lightning talks by MCQLL lab members, giving brief introductions to their research. All are welcome to come learn more about current work being done in the lab.

If you haven’t already, please register here to get the meeting link.

Fieldwork group meetings

Fieldwork group meetings will start back up this semester. Please fill out the poll here to indicate your availability for a regular meeting: https://doodle.com/poll/36t4k8czky9e8bzz?utm_source=poll&utm_medium=link. Email Clint Parker with any questions.

Mpoke Mimpongo completes MA at UQAM

Congratulations to Mpoke Mimpongo, who recently completed his MA in Linguistics at UQAM under the supervision of Heather Newell (McGill PhD ’08), with a thesis entitled “Le statut phonologique des groupes NC en Bobangi/Mangala” (“The phonological status of NC clusters in Bobangi/Mangala”).

Mpoke served as the language consultant for the Field Methods class co-taught by Jessica and Morgan in 2017, and continued working with McGill students afterwards. Mpoke credits his experience as a language consultant with deepening his interest in studying Bantu linguistics.

Congratulations Mpoke!

Fieldwork Lab, 4/15 — Will Johnston

This week, Will Johnston will present a talk titled “Verb serialization as event-building: Evidence from Hmong”. (This is a 20-minute practice talk for MOTH; abstract follows.) Fieldwork Lab meets on Thursdays, but due to the unusual class schedule, it will exceptionally begin at 4:15 this week.

Abstract:  I examine two common and highly productive types of serial verb construction in Hmong (Hmong-Mien). These are the so-called ‘Attainment’ SVCs, which express telicity, and ‘Cause-Effect’ SVCs, which express direct causation. I argue that both are reflexes of the same underlying system: both are formed by merging multiple verbal roots within the event-building portion of the verbal projection. I then discuss the extent to which this treatment might apply to other types of SVCs in Hmong.

Fieldwork Lab, 4/8 — Hermann Keupdjio

This Thursday, during Fieldwork Lab, Hermann Keupdjio will talk to us about doing a virtual fieldtrip. Contact Carol-Rose Little if you would like to join.

Doing a virtual “fieldtrip”:

Collecting data from understudied languages is a vital enterprise that enriches our knowledge of the nature of human language. Accomplishing this through in-person visits is invaluable; however, in addition to the current pandemic situation, there is an urgent need for more data and only a limited number of linguists with the training and resources to conduct fieldwork. In this situation, online experiments provide a powerful supplementary tool for linguists and fieldworkers studying underdocumented languages. Rather than supplanting fieldwork, online experiments allow it to be expanded with pre-visit pilots and follow-up experiments. More importantly, they are a helpful tool for creating and enhancing global collaborations and capacity building between field linguists, members of understudied language communities, and linguists without field training.

MCQLL Meeting, 4/8 — Michaela Socolof

This week’s MCQLL meeting, on Thursday, April 8, 1:30-2:30pm, will feature a talk from Michaela Socolof, a third-year PhD student in the Linguistics department at McGill.

Abstract: I will be presenting an overview of issues relating to the syntax of relative clause constructions across languages. The purpose of this talk is to explore possibilities for computational projects in this area.

If you would like to attend the talk but have not yet signed up for the MCQLL meetings this semester, please send an email to mcqllmeetings@gmail.com.

MCQLL Meeting, 4/1 — Maya Watt

This week’s MCQLL meeting, Thursday, April 1, 1:30-2:30pm, will feature a talk from Maya Watt. Bio and talk abstract are below.

If you would like to attend the talk but have not yet signed up for the MCQLL meetings this semester, please send an email to mcqllmeetings@gmail.com.

Bio: Maya Watt is a U3 undergraduate student in Honours Linguistics with a minor in Computer Science.

Abstract: Theories of inflectional morphology differ in how they treat semi-productive inflection types, that is, inflections that apply to multiple words but are not completely productive (e.g. grow-grew, know-knew, but not clow-clowed). How such semi-regular classes generalize may help distinguish theories, but little work has explored this question due to the difficulty of finding overgeneralized uses of these inflectional classes in naturalistic corpora. We address this issue by conducting a prompted lexical decision study on English past tenses. Participants were shown a regular or irregular verb in the infinitive form (to snow, to grow), were then presented with either a correct inflection (snowed, grew) or an overgeneralization (snew, growed), and were asked to indicate whether it was the correct past tense form. We compare how various overgeneralized types (snow-snew, sneeze-snoze) differ in terms of reaction times and accuracy rates, finding differences between classes that may inform future theoretical comparisons.
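
As a concrete (and purely illustrative) picture of the comparison the abstract describes, here is a minimal Python sketch that summarizes accuracy and reaction times by inflection class. The results file and column names are hypothetical, not the study’s actual data or analysis code.

```python
import pandas as pd

# Hypothetical trial-level data from a prompted lexical decision study:
# one row per response, with the verb's inflection class (e.g. "ow-ew"),
# whether the displayed form was an overgeneralization (snew, growed),
# whether the participant responded correctly, and the reaction time in ms.
df = pd.read_csv("lexical_decision_results.csv")

# Compare overgeneralized types (snow-snew, sneeze-snoze, ...) on
# accuracy and mean reaction time, the contrast the abstract describes.
summary = (
    df[df["form_type"] == "overgeneralization"]
    .groupby("inflection_class")
    .agg(accuracy=("correct", "mean"),
         mean_rt_ms=("rt_ms", "mean"),
         n_trials=("correct", "size"))
)
print(summary)
```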

Fieldwork Lab Meeting, 4/1 — Eszter Ótott-Kovács

This week in Fieldwork Lab, Eszter Ótott-Kovács, a PhD candidate at Cornell University, will be presenting her work “Genitive-Nominative Case Alternation in the Nominal Domain in Kazakh”. Fieldwork Lab meets Thursday at 4pm. Contact Carol-Rose Little if you would like to attend.

Abstract:

It is well-known that Turkic languages have Differential Object Marking, where the specific (presuppositional) direct object is marked with the accusative, while the non-specific object is unmarked for case/nominative (Enç 1991, Diesing 1992, Kelepir 2001). Relying on (mostly) Turkish data, it has been assumed that specificity drives the genitive-nominative case “alternation” in a similar manner to DOM (Kornfilt 2009, a.o.).

The talk explores the genitive-nominative “alternation” in Kazakh (Turkic), found on (1) the possessor in possessive constructions and on the subjects of (2) nominalized argument clauses and (3) relative clauses, based on novel data elicited by the author. I show that, in contrast to DOM, the genitive-nominative alternation is not solely driven by specificity in this language. The genitive-nominative alternation on the possessor and the relative clause subject follows the pattern described for Turkish in terms of specificity. However, the genitive-nominative alternation on the argument clause subject is determined by the anaphoricity of the subject DP: genitive is marked on anaphoric DP subjects, while nominative is used otherwise (with unique definite or indefinite subjects).

MCQLL Meeting, 3/25 — Emily Goodwin

This week’s MCQLL meeting, Thursday, March 25, 1:30-2:30pm, will feature a talk from Emily Goodwin. Talk abstract is below.

If you would like to attend the talk but have not yet signed up for the MCQLL meetings this semester, please send an email to mcqllmeetings@gmail.com.

Abstract: Recent attention in neural natural language understanding models has focused on generalization that is compositional (the meanings of larger expressions are a function of the meanings of smaller expressions) and systematic (individual words mean the same thing when put in novel combinations). Datasets for compositional and systematic generalization often focus on testing classes of syntactic constructions (testing only on strings of a certain length or longer, or novel combinations of particular predicates). In contrast, the Compositional Freebase Queries (CFQ) training and test sets are automatically sampled. To measure the compositional challenge of a test set relative to its training set, the CFQ authors measure the divergence between the distribution of syntactic compounds in test and train. Training and test splits with maximum compound divergence (MCD) are highly challenging for semantic parsers, but (unlike other datasets designed to test compositional generalization) the splits do not specifically hold out human-recognizable classes of syntactic constructions from the training set. In this talk I will present preliminary results of a syntactic analysis of the MCD splits released in the CFQ dataset, and explore whether model failures on MCD splits can be explained in terms of phenomena familiar to syntactic theory.
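
For readers unfamiliar with the measure: to my understanding, the CFQ work quantifies the challenge of a split with a compound divergence based on a Chernoff coefficient between the train and test compound distributions. Here is a toy sketch; the compound representation and the α value are illustrative, not taken from the lab’s code.

```python
from collections import Counter

def compound_divergence(train_compounds, test_compounds, alpha=0.1):
    # Chernoff-style divergence between two compound distributions:
    # 1 - sum_c p(c)^alpha * q(c)^(1-alpha). Higher values mean the
    # test set combines familiar atoms in less familiar ways.
    p, q = Counter(train_compounds), Counter(test_compounds)
    n_p, n_q = sum(p.values()), sum(q.values())
    coeff = sum((p[c] / n_p) ** alpha * (q[c] / n_q) ** (1 - alpha)
                for c in p.keys() & q.keys())
    return 1.0 - coeff

# Toy "compounds": (predicate, argument) pairs standing in for the
# syntactic compounds extracted from CFQ queries.
train = [("direct", "movie"), ("direct", "movie"), ("produce", "film")]
test = [("direct", "movie"), ("produce", "movie")]
print(compound_divergence(train, test))  # ~0.49 on this toy split
```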

MCQLL Lab Meeting, 3/18 — Ben LeBrun

This week’s MCQLL meeting, Thursday, March 18, 1:30-2:30pm, will feature a talk from Ben LeBrun. Talk abstract is below.

If you would like to attend the talk but have not yet signed up for the MCQLL meetings this semester, please send an email to mcqllmeetings@gmail.com.

Abstract: The use of pre-trained Transformer language models (TLMs) has led to significant advances in the field of natural language processing. This success has typically been measured by quantifying model performance on downstream tasks, or through their ability to predict words in large samples of text. However, these benchmarks are biased in favour of frequent natural language constructions, measuring performance on common, recurring patterns in the data. The behaviour of TLMs on the large set of complex and infrequent linguistic constructions is, by comparison, understudied. In this talk, I will present preliminary results exploring GPT-2’s ability to reproduce this long tail of syntactic constructions, and how this ability is modulated by fine-tuning.
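
As a rough illustration of the kind of measurement involved (not the lab’s actual code), one can score a sentence’s average per-token log-probability under GPT-2 with the HuggingFace transformers library; infrequent constructions should tend to receive lower scores. The example sentences are illustrative.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_log_prob(sentence: str) -> float:
    # Average per-token log-probability GPT-2 assigns to a sentence.
    # Passing labels=input_ids makes the model return the mean
    # cross-entropy over (shifted) tokens, so we negate it.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return -loss.item()

# A comparative-correlative, one of the rarer construction types,
# versus a run-of-the-mill declarative:
print(mean_log_prob("The more I read, the less I understand."))
print(mean_log_prob("I read a book yesterday."))
```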

MCQLL Lab Meeting, 3/11 — Eva Portelance

This week’s MCQLL meeting (Thursday, March 11th, 1:30-2:30pm) will feature a talk from Eva Portelance. Abstract and bio are below.

If you would like to join the meeting but have not yet registered for this semester’s MCQLL meetings, please send an email to mcqllmeetings@gmail.com.

Bio: Eva is currently a Ph.D. candidate at Stanford University in Linguistics, working with Mike Frank and Dan Jurafsky. She completed a B.A. Honours in Linguistics and Computer Science at McGill University in 2017. She is interested in linguistic structure and language learning both in humans and machines. This work was started during an internship at Microsoft Research Montreal.

Abstract: Learning Strategies for the Emergence of Language in Iterated Learning

In emergent communication studies, agents play communication games in order to develop a set of linguistic conventions referred to as the emergent language. Here, we compare the effects of a variety of learning functions and play phases on the efficiency and effectiveness of emergent language learning. We do so both within a single generation of agents and across generations in an iterated learning setting. We find that allowing agents to engage in forms of self-play ultimately leads to more effective communication. In the iterated learning setting, we compare different approaches to intergenerational learning. We find that self-play used jointly with imitation can also lead to effective communication in this setting. Additionally, we find that encouraging agents to successfully communicate with previous generations rather than to successfully imitate them can lead to both effective language and efficient learning. Finally, we introduce a new dataset and a new agent architecture with split visual perception and representation modules in order to conduct our experiments.
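
For readers new to emergent communication, a minimal single-turn signaling game gives the flavor of the setup. The PyTorch sketch below, with illustrative sizes and a plain REINFORCE objective, is not the paper’s architecture (which uses split visual perception and representation modules); it only shows how a sender and receiver can converge on shared conventions from reward alone.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_OBJECTS, N_MESSAGES, BATCH = 10, 10, 32

sender = nn.Linear(N_OBJECTS, N_MESSAGES)    # object -> message logits
receiver = nn.Linear(N_MESSAGES, N_OBJECTS)  # message -> guess logits
opt = torch.optim.Adam(
    list(sender.parameters()) + list(receiver.parameters()), lr=1e-2)

for step in range(2000):
    obj = torch.randint(N_OBJECTS, (BATCH,))           # referents to convey
    msg_dist = torch.distributions.Categorical(
        logits=sender(F.one_hot(obj, N_OBJECTS).float()))
    msg = msg_dist.sample()                            # sender speaks
    guess_dist = torch.distributions.Categorical(
        logits=receiver(F.one_hot(msg, N_MESSAGES).float()))
    guess = guess_dist.sample()                        # receiver interprets

    reward = (guess == obj).float()                    # 1 if understood
    advantage = reward - reward.mean()                 # simple baseline
    # REINFORCE: reinforce both agents' sampled actions by the advantage.
    loss = -(advantage * (msg_dist.log_prob(msg)
                          + guess_dist.log_prob(guess))).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final batch accuracy:", reward.mean().item())
```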

Fieldwork Lab Meeting, 3/11 — Jaime Pérez González

This week during Fieldwork Lab, Jaime Pérez González, a PhD candidate in the Department of Linguistics at the University of Texas at Austin, will present “Grammatical Aspect in Mocho’ (Mayan)”. We meet at 4pm on Thursday. Contact Carol-Rose Little if you would like to join.

Abstract:

This talk addresses in detail the aspectual system in Mocho’, a highly endangered Mayan language. Its complexity has led to differing analyses by Kaufman (1967) and Palosaari (2011). The outcome of this research is an alternative analysis to those proposed in previous studies. I show that this language has a split aspectual system based on transitivity and partially on person. Mocho’ exhibits two sub-paradigms of aspect based on the type of verb that heads the clause. On the one hand, when the head of the predicate is an active transitive verb, or an intransitive underived verb that indicates its subject with the pronominal markers from Set A, the language displays three aspectual distinctions that contrast with one another in their temporal interpretations. On the other hand, inverse verbs and any intransitivized verbs with a suffix -(v)vn that take Set C to indicate their subject show a binary opposition. On top of this, the morphological ergative split alignment in Mocho’ leads to an aspectual marker distinction between Speech Act Participants (SAPs) and third person. Based on corpus data and elicitation sessions, this complex aspectual system is untangled here. Previous proposals have not been tested against corpus data, which can serve as a test-bed both for the linguistic analysis proposed and for the intuitions on which the proposal is based. Thus, I will show that grammatical aspect (viewpoint aspect) in Mocho’ cannot be understood solely through elicited data; rather, corpus data can tell us more about the nature of the language.

Fieldwork Lab Meeting, 2/25 — Victoria Chen

This week during our fieldwork lab meeting, Victoria Chen (Assistant Professor in Syntax at Victoria University of Wellington, New Zealand) will present “When Austronesian-type voice meets Indo-European-type voice: Insights from Puyuma”. See attached abstract! Contact Carol-Rose if you would like to join the fieldwork lab. We meet from 4-5pm on Thursdays.

MCQLL Meeting, 2/25 — Richard Futrell

This week’s MCQLL meeting, taking place Thursday, Feb 25th, 1:30-2:30pm, will feature a talk entitled “Information-theoretic models of natural language” by Professor Richard Futrell. Abstract and bio are below. If you would like to join the meeting and have not yet registered for this semester’s MCQLL meetings, please send an email to mcqllmeetings@gmail.com requesting the link.

Abstract: I claim that human languages can be modeled as information-theoretic codes, that is, systems that maximize information transfer under certain constraints. I argue that the relevant constraints for human language are those involving the cognitive resources used during language production and comprehension. Viewing human language in this way, it is possible to derive and test new quantitative predictions about the statistical, syntactic, and morphemic structure of human languages.

I start by reviewing some of the many ways that natural languages differ from optimal codes as studied in information theory. I argue that one distinguishing characteristic of human languages, as opposed to other natural and artificial codes, is a property I call “information locality”: information about particular aspects of meaning is localized in time within a linguistic utterance. I give evidence for information locality at multiple levels of linguistic structure, including the structure of words and the order of words in sentences.

Next, I state a theorem showing that information locality is a property of any communication system where the encoder and/or decoder are operating incrementally under memory constraints. The theorem yields a new, fully formal, and quantifiable definition of information locality, which leads to new predictions about word order and the structure of words across languages. I test these predictions in broad corpus studies of word order in over 50 languages, and in case studies of the order of morphemes within words in two languages.
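
To make “information locality” concrete: one simple way to operationalize it is the plug-in mutual information between words at a fixed distance, which should decay with distance if information about meaning is localized in time. The toy estimate below is purely for illustration, not the talk’s actual methodology.

```python
import math
from collections import Counter

def mutual_information(tokens, distance):
    # Plug-in estimate of the mutual information (in bits) between
    # words separated by `distance` positions. Under information
    # locality this should decay as `distance` grows. The estimator
    # is biased upward on tiny corpora; this is only an illustration.
    pairs = list(zip(tokens, tokens[distance:]))
    joint = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    n = len(pairs)
    return sum((c / n) * math.log2((c / n) /
               ((left[a] / n) * (right[b] / n)))
               for (a, b), c in joint.items())

corpus = "the cat sat on the mat and the dog sat on the rug".split()
for d in (1, 2, 3):
    print(d, round(mutual_information(corpus, d), 3))
```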

Bio: Richard Futrell is an Assistant Professor of Language Science at the University of California, Irvine. His research applies information theory to better understand human language and how humans and machines can learn and process it.
