
MULL-lab, 04/06 – David Shanks

MULL-lab will be meeting this Wednesday, April 6th at 4pm. David will be presenting: The Southern Tutchone NP. The abstract is attached below:

Abstract: Southern Tutchone is a critically endangered Dene (Athabaskan) language spoken in the southern Yukon. This talk focuses on two processes in the nominal domain: possession and nominalization. I will first outline the possessive system before focusing on innovative processes in Southern Tutchone that differ from nearby Dene languages. For example, binding in clausal subject and object pronouns appears to have influenced possession. I will then discuss nominalization, which can be divided into two forms: unmarked nominalizations, which are used to form most nouns in Southern Tutchone; and marked nominalizations, which are found in temporal subordinate constructions.

MCQLL, 04/05 – Michaela Socolof

At this week’s MCQLL meeting on Tuesday, April 5 at 3:00-4:00, Michaela Socolof will give a talk titled ‘Characterizing morphological systems using partial information decomposition.’ If you’d like to attend, please register for the Zoom meeting here if you haven’t already.
Abstract

Morphological systems across languages vary when it comes to the relation between form and meaning. In some languages, a single unit of meaning corresponds to a single morpheme, whereas in other languages, multiple units of meaning are bundled together into one morpheme. These two types of languages have been called agglutinative and fusional, respectively, but this distinction does not capture the continuous nature of the phenomenon. We provide a mathematically precise way of characterizing morphological systems using partial information decomposition, which is a framework for decomposing mutual information into three components: unique, redundant, and synergistic information. We show that highly fusional languages are characterized by high levels of synergy.
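For readers curious about how such a decomposition works in practice, here is a minimal, purely illustrative sketch (not code from the talk): it applies Williams & Beer's (2010) I_min-based partial information decomposition to a toy joint distribution over two meaning features and a surface form, where a fusional, XOR-like pairing of features and forms yields mostly synergistic information. The talk's actual measure, data, and implementation may differ.

```python
# Toy partial information decomposition (Williams & Beer 2010 I_min),
# illustrating how I(form; feature1, feature2) splits into unique,
# redundant, and synergistic parts. Purely illustrative.
import numpy as np

# p[f1, f2, form]: a "fusional" toy system where the surface form encodes
# the two meaning features jointly (XOR-like), so synergy should dominate.
p = np.zeros((2, 2, 2))
p[0, 0, 0] = p[1, 1, 0] = 0.25   # form 0 appears when the features agree
p[0, 1, 1] = p[1, 0, 1] = 0.25   # form 1 appears when they differ

def mi(joint):
    """Mutual information I(X; Y) in bits for a 2-D joint distribution."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

p_t = p.sum(axis=(0, 1))       # p(form)
p_s1t = p.sum(axis=1)          # p(feature1, form)
p_s2t = p.sum(axis=0)          # p(feature2, form)
p_s12t = p.reshape(4, 2)       # p((feature1, feature2), form)

def specific_info(joint_st, t):
    """Specific information a source S carries about the outcome T = t."""
    pt = joint_st[:, t].sum()
    ps = joint_st.sum(axis=1)
    total = 0.0
    for s in range(joint_st.shape[0]):
        if joint_st[s, t] > 0:
            p_s_given_t = joint_st[s, t] / pt
            p_t_given_s = joint_st[s, t] / ps[s]
            total += p_s_given_t * np.log2(p_t_given_s / pt)
    return total

# Redundancy = expected minimum specific information across sources (I_min).
redundancy = sum(p_t[t] * min(specific_info(p_s1t, t), specific_info(p_s2t, t))
                 for t in range(2))
unique1 = mi(p_s1t) - redundancy
unique2 = mi(p_s2t) - redundancy
synergy = mi(p_s12t) - mi(p_s1t) - mi(p_s2t) + redundancy

print(f"redundant={redundancy:.3f}, unique={unique1:.3f}/{unique2:.3f}, "
      f"synergistic={synergy:.3f}")
```

On this toy distribution, each feature alone carries no information about the form, so essentially all of the mutual information is synergistic, which is the signature the abstract associates with highly fusional systems.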

MULL-lab, 03/30 – Brandon Chaperon

MULL-lab will be meeting this Wednesday, March 30th at 4pm. Brandon will be presenting: Igala’s Dual Negation. Abstract is attached below:

Abstract: This talk presents a puzzle pertaining to two different surface forms for negation in Igala (Niger-Congo). Negation can either surface as a (super) high tone on the subject or as a pre-verbal particle. I will first go over the general distribution patterns of these two segments. Afterwards, I will lay out some tests that showcase the interaction and restrictions of negation with other phenomena (e.g., modals and conditionals). These will hopefully hint at what causes them to surface differently.

MULL-lab, 03/23 – Katya Morgunova

MULL-lab will be meeting this Wednesday, March 23rd at 4pm. 

Katya will be presenting: Augmenting the Kirundi augment. Abstract is attached below:

 

Abstract: The augment is a nominal prefix found in some Bantu languages. It is usually associated with the semantics of definiteness and is often argued to be a D-head. In this talk, I present some new data collected from ongoing fieldwork on the distribution of the augment in Kirundi (Great Lakes Bantu). Following a discussion of prior work in Kirundi and other Bantu languages, I also share my initial analysis of the syntax and semantics of the Kirundi augment.

MULL-lab, 03/16 – Terrance Gatchalian

MULL-lab will be meeting this Wednesday, March 16th at 4pm. 

Terrance will be presenting work on Ktunaxa causatives. Abstract is attached below:

 

Abstract: This talk presents data on the Ktunaxa causative construction, which is morphologically complex, consisting of a valency-preserving causative morpheme and a more general valency-increasing morpheme. I discuss various proposals for the structure of causatives, and show that Ktunaxa demonstrates the need for the syntactic separation of causativization and the introduction of an additional causer argument, along the lines of Pylkkänen’s (2008) theory of causatives. I end with a puzzle on the distribution of causatives.

MULL-lab, 03/09 – Yoann Léveillé

MULL-lab will be meeting this Wednesday, March 9th at 4pm. 

Yoann will be presenting his talk Some observations on the morphosyntax of Inuktitut demonstratives. Abstract is attached below:

Abstract: This talk presents a morphosyntactic overview of demonstratives in Nunavimmiutitut, a dialect of Eastern Canadian Inuktitut spoken in Nunavik, focusing on ongoing work with a consultant and data presented in Beach (2011). First, I present a quick overview of the internal structure of Inuktitut demonstratives. Second, I discuss a few distributional facts and co-occurrence restrictions: the ability to bear affixal attributive adjectives, cliticization onto nouns and verbs, etc. Third, I compare the incorporation of demonstratives to that of other nominals. Finally, I consider the implications of these facts for the categorial status of Inuktitut demonstratives.

MULL-lab, 2/16 – Lisa Travis

The MULL-lab will be meeting on Wednesday, February 16th at 4pm. Lisa Travis will be presenting the second part of Event structure through the lens of Malagasy morphology. Please see the abstract below.

Abstract:

This talk will present a particular view of event structure as suggested by the morphological breakdown of particular verb forms in Malagasy.  The proposal is that Malagasy has morphemes for v (state, inchoative, cause), a verbal base (V), as well as an intervening functional category, Inner Aspect, that encodes telicity. Further, an argument will be made that Achievements are formed by merging a verbal base with a telic Inner Aspect and a stative v.

 

Please register in advance at the link below to receive the Zoom link: https://mcgill.zoom.us/meeting/register/tZYpfumsrTsoGNy1hDF4Y6B-kGgUQXE_Zcr1

 

After registering, you will receive a confirmation email containing information about joining the meeting.

MULL-lab, 2/9 – Lisa Travis

The MULL-lab will be meeting on Wednesday, February 9th at 4pm. Lisa Travis will be presenting Event structure through the lens of Malagasy morphology. Please see the abstract below.

Abstract:

This talk will present a particular view of event structure as suggested by the morphological breakdown of particular verb forms in Malagasy.  The proposal is that Malagasy has morphemes for v (state, inchoative, cause), a verbal base (V), as well as an intervening functional category, Inner Aspect, that encodes telicity. Further, an argument will be made that Achievements are formed by merging a verbal base with a telic Inner Aspect and a stative v.

 

Please register in advance at the link below to receive the Zoom link: https://mcgill.zoom.us/meeting/register/tZYpfumsrTsoGNy1hDF4Y6B-kGgUQXE_Zcr1

 

After registering, you will receive a confirmation email containing information about joining the meeting.

MULL-lab, 2/2 – Willie Myers

The MULL-lab will be meeting on Wednesday, February 2nd at 4pm. Willie Myers will be presenting Nasals, Glides and Complex Onsets in Kirundi. Please see the abstract below.

Abstract:

This talk presents three phonology puzzles based on preliminary fieldwork in Kirundi. The first puzzle examines NC clusters with a focus on nasal + voiceless stop clusters (which are commonly prohibited in Bantu languages). The second puzzle looks at the distribution of glides in the language. The third puzzle brings nasals and glides together to analyze Kirundi’s complex onset clusters, which are unexpected given Proto-Bantu’s traditional (C)V syllable structure.

Please register in advance at the link below to receive the Zoom link: https://mcgill.zoom.us/meeting/register/tZYpfumsrTsoGNy1hDF4Y6B-kGgUQXE_Zcr1

After registering, you will receive a confirmation email containing information about joining the meeting.

MCQLL, 01/25 – Jacob Louis Hoover

At this week’s MCQLL meeting on Tuesday, January 25 at 3:00-4:00, Jacob Louis Hoover will give a talk titled ‘Processing time is a superlinear function of surprisal.’ If you’d like to attend, please register for the Zoom meeting here if you haven’t already.
Abstract:
The incremental processing difficulty of a linguistic item is related to its predictability. Surprisal theory (Hale, 2001; Levy, 2008) posits that the processing cost of a word in context is a linear function of its surprisal. This prediction has received considerable attention and broad support from empirical studies using a variety of language models to estimate surprisal. However, no algorithmic theory of processing has been proposed which scales linearly in surprisal. Additionally, recent empirical work has begun to raise questions about the assumption of linearity. We present a study specifically aimed at discerning the general shape of the linking function, using a collection of modern pretrained language models (LMs) to estimate surprisal. We find evidence of a superlinear effect on reading time. We also find that the better a language model’s predictions are on average, the more clearly superlinear the relationship between surprisal and processing time becomes. These results suggest revising the linearity hypothesis of surprisal theory and provide support for algorithmic theories of human language processing that scale faster than linearly in surprisal.
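As a purely illustrative aside (not the authors' pipeline), the sketch below shows the standard way per-word surprisal is estimated from a pretrained language model, here GPT-2 via the Hugging Face transformers library; reading-time studies then regress measured reading times on values like these to probe whether the linking function is linear or superlinear.

```python
# Minimal sketch: per-token surprisal, -log2 p(token | context),
# estimated with a pretrained GPT-2. Illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

sentence = "The old man the boats."
ids = tokenizer(sentence, return_tensors="pt").input_ids      # (1, seq_len)

with torch.no_grad():
    logits = model(ids).logits                                # (1, seq_len, vocab)

log_probs = torch.log_softmax(logits, dim=-1)
# Token i's surprisal comes from the model's prediction at position i-1,
# converted from nats to bits; the first token has no left context here.
surprisals = -log_probs[0, :-1].gather(1, ids[0, 1:, None]).squeeze(1)
surprisals = surprisals / torch.log(torch.tensor(2.0))

for tok, s in zip(tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist()), surprisals):
    print(f"{tok:>12}  {s.item():6.2f} bits")
```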

MULL-Lab, 01/26 – Will Johnston

The MULL-lab will be meeting on Wednesday, January 26th at 4pm. Will Johnston will be presenting ‘Disposal’ constructions in Hmong. Please see the abstract below.

Abstract:

In this talk, I present three interrelated puzzles involving so-called ‘Disposal’ serial verb constructions (SVCs) in Hmong. These involve the use of two or more transitive verbs (with shared subject and object arguments) to jointly describe a single event, as in (1). The first puzzle is the word order of Disposal SVCs, which, in contrast to other Hmong SVCs, (i) is flexible and (ii) takes into account semantic/temporal information. The second relates to a type of object shift unique to Disposal constructions (and unattested in the literature). The third is the unique behavior of examples involving a particular verb, muab ‘to take’, which suggests they require a separate treatment.

 

(1) nws   tsa        cov      taws       txhoov   pov     cia
    3SG   stand.up   CLF.PL   firewood   cut      throw   set.aside
    ‘He stood up the wood, chopped it, and threw it aside (into storage).’

 

Please register in advance at the link below to receive the Zoom link: https://mcgill.zoom.us/meeting/register/tZYpfumsrTsoGNy1hDF4Y6B-kGgUQXE_Zcr1

 

After registering, you will receive a confirmation email containing information about joining the meeting.

MULL-lab, 01/19 – First meeting

This semester, the MULL-lab will be meeting Wednesdays at 4 pm on Zoom. Our first meeting will be on Wednesday, January 19th.

Please register in advance at the link below to receive the Zoom link:

https://mcgill.zoom.us/meeting/register/tZYpfumsrTsoGNy1hDF4Y6B-kGgUQXE_Zcr1

After registering, you will receive a confirmation email containing information about joining the meeting.

MULL-Lab, 12/07 – David Shanks

MULL-Lab will be meeting Tuesday, December 7 at 4:30pm. David Shanks will give a talk titled Possession in Southern Tutchone. An abstract for the talk is below. If you would like to attend but haven’t registered for MULL-Lab, you can do so here.

Description: Southern Tutchone is an Indigenous language of the Dene (Athabaskan) family spoken in the Yukon. Like other Dene languages, it has morphologically rich verbs and simple nominals. This talk will outline the possessive system, which differentiates alienable, or optional, possession from inalienable, or obligatory, possession.

MULL-Lab, 11/30 – Julien Carrier

MULL-Lab will be meeting Tuesday, November 30 at 4:30pm. Julien Carrier (UQAM) will give a talk titled Topicality and referentiality in Inuktitut. An abstract for the talk is below. If you would like to attend but haven’t registered for MULL-Lab, you can do so here.

Abstract: In this talk, I argue that Inuktitut is a discourse-configurational language whose grammatical functions have distinct discourse qualities. Using a variationist approach with naturalistic data from speakers of North Baffin Inuktitut, I demonstrate that nominal arguments marked with the absolutive case or the ergative case are, respectively, aboutness topics and familiar topics, based on Frascarelli & Hinterhölzl’s (2007) typology of topics, while other nominal arguments, marked with an oblique case or incorporated into the verb, are non-topics. I also propose that D heads in Inuktitut are interpreted as choice functions whose existential closure may apply anywhere at LF (e.g., Reinhart 1992, 1997; Winter 1997), except in nominal arguments marked with the absolutive case or the ergative case, since the latter move overtly to the CP-domain and systematically have wide scope over operators like negation.

MCQLL Meeting, 11/23 – Vikash Mansinghka

At this week’s MCQLL meeting on Tuesday, November 23 at 4:00 PM, Vikash Mansinghka will give a talk titled “Scaling towards human-like AI via probabilistic programming.” An abstract and speaker bio follow.
This week we’ll be meeting an hour later, at 4:00, rather than 3:00. This time change only applies to this meeting.

If you haven’t already registered for the Zoom meeting, you can do so here.
Abstract
A great deal of enthusiasm has been focused on building increasingly large neural models. We believe it is now possible to pursue an alternate scaling roadmap based on probabilistic programming, to build AI systems that actually see, learn and think like people, with more human-like flexibility, data efficiency, robustness, and generalizability. The probabilistic source code for these AI systems is partly written by AI engineers and partly learned from data. This approach integrates the best of large-scale generative modeling and deep learning with probabilistic inference and symbolic programming. Unlike neural networks, probabilistic programs can report what they know and what they don’t; they model the world in terms of explainable, human-editable representations; they can be modularly trained & tested; and they can learn new symbolic code rapidly and accurately from sparse data.
This talk will introduce basic concepts in probabilistic programming, and survey AI applications where probabilistic programming has recently outperformed machine learning:
(i) 3D object & scene perception from cluttered indoor video, improving accuracy and robustness over deep learning
(ii) common-sense deduplication, linkage, and cleaning of databases with millions of records
(iii) automated model discovery for multivariate data streams
It will also briefly review larger MIT efforts to apply probabilistic programming to reverse-engineer human common sense, to engineer data-driven expert systems, and to scale to low-power, biologically plausible hardware implementations of probabilistic programming, via massively parallel circuits of stochastic spiking neurons.
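For readers unfamiliar with the paradigm, here is a minimal, illustrative example of probabilistic-programming-style inference in plain Python/NumPy. It is not one of the MIT systems discussed in the talk; it simply shows the workflow of writing a small generative model (a uniform prior over a coin's bias plus a Bernoulli likelihood), conditioning on data by likelihood weighting, and reporting calibrated uncertainty rather than a single point estimate.

```python
# Minimal probabilistic-programming-style sketch in plain Python/NumPy.
# Illustrative only; real systems provide general-purpose languages and
# inference engines rather than hand-written weighting code.
import numpy as np

rng = np.random.default_rng(0)

# Observed data: 9 heads out of 10 flips of a coin with unknown bias.
observed = np.array([1, 1, 1, 0, 1, 1, 1, 1, 1, 1], dtype=bool)
heads, tails = observed.sum(), observed.size - observed.sum()

# Inference by likelihood weighting: sample the latent bias from its
# uniform prior, then weight each sample by how well it explains the data.
n_samples = 100_000
biases = rng.uniform(1e-6, 1 - 1e-6, size=n_samples)
log_w = heads * np.log(biases) + tails * np.log1p(-biases)
w = np.exp(log_w - log_w.max())
w /= w.sum()

posterior_mean = float(np.sum(w * biases))

# Weighted quantiles give a credible interval: the program can report both
# what it believes and how uncertain it remains.
order = np.argsort(biases)
cdf = np.cumsum(w[order])
lo, hi = np.interp([0.05, 0.95], cdf, biases[order])
print(f"posterior mean bias = {posterior_mean:.2f}, "
      f"90% credible interval = [{lo:.2f}, {hi:.2f}]")
```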
Speaker Bio

Vikash Mansinghka is a Principal Research Scientist at MIT, where he leads the MIT Probabilistic Computing Project. Vikash holds S.B. degrees in Mathematics and in Computer Science from MIT, as well as an M.Eng. in Computer Science and a PhD in Computation from the Department of Brain & Cognitive Sciences. He also held graduate fellowships from the National Science Foundation and MIT’s Lincoln Laboratory. His PhD dissertation on natively probabilistic computation won the MIT George M. Sprowls dissertation award in computer science, and his research on the Picture probabilistic programming language won an award at CVPR. He co-founded three VC-backed startups: Prior Knowledge (acquired by Salesforce in 2012), Empirical Systems (acquired by Tableau in 2018), and Common Sense Machines (founded in 2020). He has also advised DeepMind and Intel on AI research, and helped leading companies in banking, insurance, IT, pharma, and healthcare apply open-source software implementing his lab’s research. He served on DARPA’s Information Science and Technology advisory board from 2010 to 2012, currently serves as an action editor for the Journal of Machine Learning Research, and co-founded the International Conference on Probabilistic Programming.

MULL-Lab, 11/16 – Clint Parker

MULL-Lab will be meeting Tuesday, November 16 at 4:30pm. Clint Parker will give a talk titled Toward an analysis of Shughni causative constructions. A description for the talk is below. If you would like to attend but haven’t registered for MULL-Lab, you can do so here.

Description: Shughni (Iranian; Tajikistan & Afghanistan) has at least three commonly used causative-like constructions: a morphological causative, a (biclausal) syntactic construction, and a causative~instrumental syncretism built off the locative postposition -ti. Each type of causative exhibits certain peculiarities. In this talk, I will present an overview of each type of causative along with findings from fieldwork and initial thoughts on possible analyses.

MULL-Lab, 11/09 – Terrance Gatchalian

MULL-Lab will be meeting Tuesday, November 9 at 4:30pm. Terrance Gatchalian will give a talk titled Some Notes on Ktunaxa causatives. A description for the talk is below. If you would like to attend but haven’t registered for MULL-Lab, you can do so here.

Description: I will be presenting some preliminary data and generalizations on Ktunaxa causatives, showing that the basic causative construction introduces causative semantics but no additional causer argument. To introduce the causer, Ktunaxa requires both the causative suffix and a separate valency-increasing morpheme.

MULL-Lab, 11/02 – Willie Myers

MULL-Lab will be meeting Tuesday, November 2 at 4:30pm.  Willie Myers will give a talk titled High, Low, and No Absolutive Mayan Syntax: Effects of No Object Raising in Heritage Mam.  An abstract for the talk is below.  If you would like to attend but haven’t registered for MULL-Lab, you can do so here.
Abstract: Coon et al. (2014) propose that absolutive morphemes come from two different sources in Mayan languages: finite Infl in HIGH-ABS languages like Mam, and v in LOW-ABS languages like Ch’ol. As a result, the languages also differ in a variety of properties related to syntactic ergativity. This talk brings in new data from a heritage speaker of Mam that do not demonstrate any of the expected HIGH-ABS properties in transitive clauses. I argue that this variation is a consequence of an underlying lack of object raising, and that heritage Mam requires a new parametric option – NO-ABS – in which absolutive morphemes are never licensed in transitive clauses. I show that NO-ABS Mam patterns with LOW-ABS languages with respect to this parameter, supporting Coon et al.’s (2021) claim that object raising is the source of HIGH-ABS syntax and providing additional evidence for the role of the transitive object in creating syntactic ergativity in Mayan.

MCQLL Meeting, 10/26 — Tom McCoy

At this week’s MCQLL meeting on October 26 at 3:00 PM, Tom McCoy will give a talk titled “Discovering implicit compositional representations in neural networks.” An abstract of the talk follows.
If you haven’t already registered for the Zoom meeting, you can do so here.
Abstract:

Neural networks excel at processing language, yet their inner workings are poorly understood. One particular puzzle is how these models can represent compositional structures (e.g., sequences or trees) within the continuous vectors that they use as their representations. We introduce an analysis technique called DISCOVER and use it to show that, when neural networks are trained to perform symbolic tasks, their vector representations can be closely approximated using a simple, interpretable type of symbolic structure. That is, even though these models have no explicit compositional representations, they still implicitly implement compositional structure. We verify the causal importance of the discovered symbolic structure by showing that, when we alter a model’s internal representations in ways motivated by our analysis, the model’s output changes accordingly.
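As a hypothetical illustration of the general idea (not the actual DISCOVER method or the models analyzed in the talk), the sketch below fits a symbolic approximation to a toy encoder's vector representations: each sequence encoding is approximated as a sum of symbol-in-position ("filler/role") vectors obtained by least squares, and the fit quality indicates how much of the representation that compositional structure explains.

```python
# Hypothetical sketch of approximating continuous sequence encodings with a
# simple symbolic structure (a sum of symbol-in-position vectors).
# Not the talk's method or code; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
vocab, seq_len, hidden = 10, 5, 64
n_seqs = 2000

# Stand-in for a trained encoder: a small random tanh RNN over token ids.
W_in = rng.normal(0, 0.5, (vocab, hidden))
W_rec = rng.normal(0, 0.1, (hidden, hidden))

def encode(seq):
    h = np.zeros(hidden)
    for tok in seq:
        h = np.tanh(W_in[tok] + W_rec @ h)
    return h

seqs = rng.integers(0, vocab, size=(n_seqs, seq_len))
encodings = np.stack([encode(s) for s in seqs])          # (n_seqs, hidden)

# Design matrix with one indicator per (token, position) pair, so the
# symbolic approximation is: encoding ~ sum_pos v[token_at_pos, pos].
X = np.zeros((n_seqs, vocab * seq_len))
for i, s in enumerate(seqs):
    for pos, tok in enumerate(s):
        X[i, tok * seq_len + pos] = 1.0

# Least-squares fit of the filler-in-role vectors, then measure fit quality.
V, *_ = np.linalg.lstsq(X, encodings, rcond=None)
approx = X @ V
resid = encodings - approx
r2 = 1.0 - resid.var() / encodings.var()
print(f"variance explained by the symbolic approximation: {r2:.3f}")
```

A causal check in the spirit of the abstract would then swap or alter individual filler-in-role components in the approximation and verify that the downstream model's outputs change in the corresponding way.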

MULL-Lab, 10/26 – Jessica Coon and Justin Royer

MULL-Lab will be meeting Tuesday, October 26, at 4:30pm.  Jessica Coon and Justin Royer will be presenting their paper titled Object raising bleeds binding: A new correlate of high-absolutive syntax in Mayan (see abstract below).  If you have not registered for MULL-Lab but would like to attend, you can register here.

Abstract: A subset of Mayan languages prohibit the extraction of subjects from transitive sentences, a phenomenon known as the Ergative Extraction Constraint (EEC) (Aissen 2017, Coon et al. 2021). One family of accounts connects the EEC to object raising: the object consistently raises above the subject in transitive sentences, which consequently blocks extraction of the subject (Campana 1992, Coon et al. 2014). A second family of accounts leaves the object in its canonical position, but ties the EEC to optimality; in short, a construction other than a regular transitive sentence is available in cases of subject extraction, and a ranking of constraints enforces the use of that construction (Stiebels 2006, Erlewine 2016). In this talk, we provide new evidence for the object raising approach. We show that object raising leads to a configuration in which the subject does not bind into the object, with important repercussions for the distribution of coreferential nominals in Mayan languages that exhibit the EEC.
