
Special talk, 4/25 – Stefan Keine and Ethan Poole

Who: Stefan Keine (USC) and Ethan Poole (UCLA)
When: Thursday April 25th, 3:00–4:30
Where: Linguistics room 117
Title: Not all reconstruction effects are syntactic

With the advent of the copy theory of movement (Chomsky 1995), reconstruction effects have typically been analyzed in terms of interpreting the lower copy of a movement chain (e.g. Fox 1999). In this talk, we present evidence from Hindi-Urdu that indicates that interpretation of a lower copy cannot be the only route to reconstruction effects. Our argument is based on the observation that some but not all reconstruction effects induce Condition C connectivity. We argue that Hindi-Urdu requires the hybrid approach to reconstruction developed on independent grounds by Lechner (1998, 2013, to appear), where both copy neglection (a syntactic mechanism) and higher-type traces (a semantic mechanism) are available as independent interpretation mechanisms.

Semantics Group, 4/5 – Daniel Hole (Stuttgart University)

This Friday, Daniel Hole (Stuttgart University) will be giving a talk titled “Arguments for a universal distributed syntax of evaluation, scalarity and basic focus quantification with ‘only’”.

Abstract: In this talk, I review the evidence that has been adduced for a multi-constituent syntax of focus particle constructions. Traditionally, those components that I model as independent morphemes with their own scope-taking properties have been analyzed as submorphemic components of focus particles. I use ‘only’ words to make this point. This work is based on Hole (2013, 2015, 2017), and it makes use of data from Chinese, Vietnamese, German and Dutch. However, many arguments carry over to English. Time allowing, I will also present novel data from the interaction of German nur with modals and the German NPI modal brauchen ‘need (+NPI)’. This approach to focus particles stands in stark contrast to Büring & Hartmann (2001) or Coppock & Beaver (2013) and follows trains of thought as laid out in Smeets and Wagner (2018).
We will meet at 3:30 (Room TBD, but likely R117). All are welcome to attend!

Linguistics/CS Seminar, 3/28 — Fatemeh Asr


Speaker: Fatemeh Asr
Date & Time: Thursday, March 28, 2019 9:30am
Place: RPHYS 114
Title: Relations between words in a distributional space: A cognitive and computational perspective.


Word embeddings obtained from neural networks trained on big text corpora have become popular representations of word meaning in computational linguistics. In this talk, we first take a look at the different types of semantic relations between two words in a language and ask whether these relations can be identified with the help of popular embedding models such as Word2Vec and GloVe. I propose different measures to obtain the degree of paradigmatic similarity vs. syntagmatic relatedness between two words. In order to evaluate these measures, we use two datasets obtained from experiments on human subjects: SimLex-999 (Hill et al., 2015), with explicitly instructed ratings for word similarity, and explicitly instructed production norms (Jouravlev & McRae, 2016) for word relatedness.
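As a concrete illustration of the kind of measure at issue (a textbook baseline, not Asr's own proposal), similarity between embeddings is standardly quantified with the cosine of the angle between vectors. The four-dimensional vectors below are invented stand-ins for real Word2Vec or GloVe output.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors; 1.0 = identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 4-dimensional "embeddings" (illustrative values, not real model output).
vec = {
    "cat":   [0.9, 0.8, 0.1, 0.0],
    "dog":   [0.8, 0.9, 0.2, 0.1],
    "leash": [0.1, 0.3, 0.9, 0.8],
}

# Paradigmatic neighbours ("cat"/"dog") point in similar directions, while a
# merely syntagmatically related pair ("dog"/"leash") does so to a lesser degree.
print(cosine_similarity(vec["cat"], vec["dog"]))    # high (close to 1)
print(cosine_similarity(vec["dog"], vec["leash"]))  # noticeably lower
```

Cosine similarity alone conflates similarity with relatedness; teasing the two apart typically requires an additional signal such as first-order co-occurrence, which is roughly the contrast the proposed measures target.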

In the second part of the talk, we look into the question of modeling the meaning of discourse connectives. Similarities between a pair of such particles, e.g., “but” and “although”, cannot be computed based directly on surrounding words. I explain however that discourse connectives can also be viewed from a distributional semantics perspective if a suitable abstraction of context is employed. For example, the slightest differences in the meaning of “but” and “although” can be revealed by studying their distribution in a corpus annotated with discourse relations. Finally, I draw some future directions for research based on our findings and the current developments in computational linguistics and natural language processing.

Special talk, 3/25 – Caroline Féry

Who: Caroline Féry (Goethe University Frankfurt)
Coordinates: Monday, March 25 2019, 1-2.30pm, in Room EDUC 338
Title: Prosody and information structure in European French
It has repeatedly been reported in the literature that French prosody reacts in a different way to changes in information structure as compared to Germanic languages (Delais-Roussarie 1995, Post 2000, Jun & Fougeron 2002, Portes et al. 2014, Vander Klok, Goad & Wagner 2018, among others). But not all authors agree on how to analyse this difference. Some propose that it is just a matter of degree across these languages (see the authors above), and thus that the same prosodic tools can be used in French and in English. I propose that French has a different intonation system altogether (Féry 2014), the most important clues being the absence of pitch accents and the emphasis on the boundaries of prosodic constituents. I will present two experiments on French prosody conducted in collaboration with Emilie Destruel. The first compares post-verbal given and new objects and adjuncts and finds that the phonetic correlates of phrasing are larger for adjuncts than for objects. The second investigates pairs of post-verbal objects and adjuncts in different information-structural conditions: all-new, only one of the two constituents focused, or both focused (dual focus). In both experiments, it is the correlates of phrasing that are variable, but these correlates do only a poor job of unambiguously expressing information-structural roles. The reason is that information structure cannot change the syntax-based phrasing, and the role of phonetic prominence is not clear in French. I will also briefly discuss Vander Klok et al.'s semantic proposal and assess it in comparison with my intonational one.

Linguistics/CS Seminar, 3/24 – Siva Reddy


Speaker: Siva Reddy
Date & Time: Monday, March 25, 2019 9:30am
Place: ARTS W-20
Title: Interacting with machines in natural language: A case for the interplay between linguistics and machine learning


Computing devices such as smartphones are ubiquitous, and smart home appliances, self-driving cars, and robots will soon be as well. Enabling these machines with natural language understanding abilities opens up potential opportunities for the broader society, e.g., in accessing the world's knowledge, or in controlling complex machines with little effort.

In this talk, we will focus on the task of accessing knowledge stored in knowledge-bases and text documents in a colloquial manner. First, we will see how brittle the current models are to compositional and conversational language. Then we will explore how linguistic knowledge and inductive biases on neural architectures can circumvent these problems.

The scientific questions we will address are 1) Are linguistically-informed models better than uninformed models? 2) How can inductive biases help machine learning? and 3) What are the challenges in enabling conversational interactions? For building linguistically-informed models, I will propose a novel syntax-semantics interface based on typed lambda calculus for converting dependency syntax into formal semantic representations.


Siva Reddy is a postdoc in the Computer Science Department at Stanford University working with Chris Manning. His research goal is to understand universal semantic structures in languages and build linguistically-informed machine learning models to enable natural language interaction between humans and machines. His research is supported by grants from Amazon and Facebook. Before his postdoc, he was a Google PhD Fellow at the University of Edinburgh working with Mirella Lapata and Mark Steedman. His work experience includes an internship at Google and a research position at Sketch Engine.

Amazigh Workshop talks, 3/21 – Achab, Baier, Ouali, Fahloune

This Thursday and Friday McGill will host a Workshop on Amazigh languages, featuring invited talks by Karim Achab (University of Ottawa), Hamid Ouali (University of Wisconsin-Milwaukee), and Khokha Fahloune (UQAM), as well as short presentations on Kabyle by the students in this semester's Field Methods course.
On Thursday, the talks will be held in Leacock 738.
All are welcome! Below, find the titles and times of the four long talks. For a detailed schedule and abstracts for the talks, please visit the workshop website.
Thursday, March 21st (Leacock 738)
1:00 — 2:00:  Karim Achab (University of Ottawa) — Diachronic and Synchronic Account of Anti-Agreement in Amazigh Languages
2:00 — 3:00: Hamid Ouali (University of Wisconsin-Milwaukee) — On Tense and Aspect in Tamazight
3:30 — 4:30:  Khokha Fahloune (UQAM) — Retour sur les marqueurs sujet et objet en kabyle
4:30 — 5:30: Nico Baier (McGill University) — Person Case Constraint Effects in Kabyle

Please feel free to drop by for any of the talks.

Linguistics/CS Seminar, 3/11 — Rachel Rudinger


Speaker: Rachel Rudinger, Center for Language and Speech Processing, Johns Hopkins University
Date & Time: Monday, March 11, 2019 9:30am  
Place: ARTS W-20
Title: Natural Language Understanding for Events and Participants in Text


Consider the difference between the two sentences “Pat didn’t remember to water the plants” and “Pat didn’t remember that she had watered the plants.” Fluent English speakers recognize that the former sentence implies that Pat did not water the plants, while the latter sentence implies she did. This distinction is crucial to understanding the meaning of these sentences, yet it is one that automated natural language processing (NLP) systems struggle to make. In this talk, I will discuss my work on developing state-of-the-art NLP models that make essential inferences about events (e.g., a “watering” event) and participants (e.g., “Pat” and “the plants”) in natural language sentences. In particular, I will focus on two supervised NLP tasks that serve as core tests of language understanding: Event Factuality Prediction and Semantic Proto-Role Labeling. I will also discuss my work on unsupervised acquisition of common-sense knowledge from large natural language text corpora, and the concomitant challenge of detecting problematic social biases in NLP models trained on such data.

Linguistics/CS Seminar, 3/13 — Kyle Mahowald


Speaker: Kyle Mahowald
Date & Time: Wednesday, March 13, 2019 9:30am  
Place: WILSON 105
Title: Cognitive and communicative pressures in natural language


There is enormous linguistic diversity within and across language families. But all languages must be efficient for their speakers’ needs and cognitively tractable for processing. Using ideas and techniques from computer science, we can generate hypotheses about what efficient languages should look like. Using large amounts of multilingual linguistic data, computational modeling, and online behavioral experiments, we can test these hypotheses and therein explain phenomena observed across and within languages. In particular, I will focus on the lexicon and explore why languages have the words they do instead of some other set of words. First, consistent with predictions from Shannon’s information theory, languages are optimized such that the words that convey less information are a) shorter and b) easier to pronounce. For instance, word shortenings like chimpanzee -> chimp are more likely to occur when the context is predictive. Second, across 97 languages, phonotactically probable words are more likely to also have high token frequency. Third, applying these ideas about efficiency to syntax, I show that, across 37 languages, the syntactic distances between dependent words are minimized. I will conclude with a discussion of my work in experimental methods and my directions for future research.
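The Shannon-style reasoning in the abstract can be made concrete with a toy corpus: a word's information content is its surprisal, -log2 p(word), so frequent, predictable words carry less information per occurrence. The miniature corpus below is invented purely for illustration.

```python
import math
from collections import Counter

# A made-up miniature corpus; the actual studies use large multilingual corpora.
corpus = ("the cat sat on the mat and the dog saw the cat "
          "while the chimpanzee watched the dog").split()
counts = Counter(corpus)
total = sum(counts.values())

def surprisal(word):
    """Shannon information content in bits: -log2 of the word's probability."""
    return -math.log2(counts[word] / total)

# "the" is frequent, hence low-information; "chimpanzee" is rare, hence
# informative -- and it is exactly such long, informative words that acquire
# short variants ("chimp") in predictive contexts.
print(f"the:        {surprisal('the'):.2f} bits")
print(f"chimpanzee: {surprisal('chimpanzee'):.2f} bits")
```

In the research described above, surprisal is estimated from context (conditional probability), not raw frequency; the unigram version here is only the simplest instance of the same quantity.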

Departmental talk, 2/12 – Michelle Yuan

Please join us for a talk by Michelle Yuan (University of Chicago).
Coordinates: Tuesday 2/12 at 3:30pm in Wilson Hall WPRoom (room 118)
Title: Pronoun movement and doubling in Inuktitut (and beyond)

A key working hypothesis in generative linguistic research is that the syntax of natural language is organized by a finite set of abstract principles with a constrained space for potential variation. A natural consequence of this view is that linguistic phenomena that appear unrelated on the surface may in fact be underlyingly linked—and, as such, are expected to interact in systematic ways. This talk offers a case study of this idea from Inuktitut (part of the Inuit dialect continuum), in which the underlying status of the object agreement morphemes predicts properties of seemingly independent aspects of the grammar, such as ergativity and the spell-out of movement copies.
I begin by establishing that the object agreement morphemes in Inuktitut are morphologically reduced pronouns doubling full DPs, rather than exponents of phi-agreement, and that the pronominal nature of these morphemes interacts fundamentally with other properties of Inuktitut syntax. First, I show that this idea may be subsumed within previously-noticed differences in the distribution of ergative case morphology across the Inuit dialect continuum (e.g. Johns 2001, Carrier 2017). From there, I present a novel analysis that links variation in ergative alignment in Inuit to variation in object movement. Second, the proposal that these object agreement forms are syntactically pronouns offers a new window into Cardinaletti & Starke’s (1994) strong vs. deficient pronoun distinction. I recast this well-known contrast as following from a small set of morphological conditions on chain pronunciation and copy spell-out (Landau 2006). As independent evidence for this approach, these conditions are shown in Inuktitut to both constrain the distribution of strong pronouns and extend straightforwardly to certain recalcitrant aspects of noun incorporation.

Departmental talk, 2/14 – Zheng Shen

Please join us for a talk by Zheng Shen (Goethe University Frankfurt).
Coordinates: Thursday 2/14 at 3:30pm in Peterson Hall, room 116
Title: What we can learn from Multi-valuation

Abstract: One of the major goals of syntax is to understand its basic building blocks and how they interact. Taking features to constitute one of these basic building blocks of syntax, I investigate how different agreement patterns can be derived from the nature of different types of features.

In this talk I use Multi-valuation as a tool to address such issues. Multi-valuation involves a probe acquiring multiple values. I will argue that multi-valued Ns can be observed in nominal Right Node Raising constructions (1), and multi-valued Ts in TP Right Node Raising constructions (2). In English, the noun valued by two singular features must be singular while the T head valued by two singular subjects can be singular or plural.
(1) This tall and that short student/*students are a couple.
(2) Sue’s proud that Bill, and Mary’s glad that John, has/have traveled to Cameroon.
A cross-linguistic survey reveals that three out of the four logically possible patterns of multi-valued Ns and Ts are attested as in (3), parallel to the Agreement Hierarchy observed for hybrid noun agreement (Corbett 1979). I argue that this pattern in Multi-valuation is also an instantiation of the Agreement Hierarchy.
(3) a. Multi-valued Ns – singular, Multi-valued Ts – singular: Slovenian.
b. Multi-valued Ns – plural, Multi-valued Ts – plural: Russian.
c. Multi-valued Ns – singular, Multi-valued Ts – plural: English.
d. Multi-valued Ns – plural, Multi-valued Ts – singular: unattested.
Furthermore, I argue that the plural pattern in Multi-valuation results from agreeing with semantic features while the singular pattern results from agreeing with morphological features. I show that this mapping falls out naturally if we assume a referential index theory of semantic features (Grosz 2015). Multi-valuation thus motivates two types of number features with distinct properties, shedding light on the inventory of the basic building blocks of syntax.

Departmental talk, 2/5 – Martina Martinović

Please join us for a talk by Martina Martinović (University of Florida).
Coordinates: Tuesday 2/5 at 3:30pm in Arts 160
Title: From syntax to postsyntax and back again
Abstract: A fairly widely adopted view of the syntax-postsyntax (PF) interface is that narrow syntactic processes precede any PF processes (Spell-out), meaning that, once a particular domain (commonly called a phase) is spelled out, it is no longer accessible to syntax (Chomsky 2000, 2001, 2004, etc.). This talk presents ongoing research on the interaction between these two modules of the grammar, and proposes that the boundary between them is much more permeable than traditionally assumed. Specifically, I argue that syntax and PF (postsyntax) can be interleaved in such a way that a syntactic phase first undergoes Spell-out, and then participates in further narrow syntactic computation. I provide two pieces of evidence for this claim from the Niger-Congo language Wolof. The first addresses a phenomenon in which elements that are separated by intervening syntactic material in the final structure nonetheless undergo vowel harmony (Ultra Long-distance Vowel Harmony; Sy 2005). I show that at the moment of Spell-out the harmonizing elements are in a local configuration, only to be separated by syntactic movement in a later step of the derivation, resulting in a surface opacity effect. The second argument comes from the behavior of the past tense morpheme, which is in one configuration affixed onto the verb and carried along with it up the clausal spine, and in another stranded by the moving verb, exhibiting a Mirror Principle violation. I show that the past tense morpheme is affixed onto the verb in postsyntax (Marantz 1988, Embick & Noyer 2001), and that the syntax/postsyntax interleaving explains its variable position. The architecture of the grammar in which syntax and postsyntax interact in the way proposed in this talk predicts precisely these types of surface opacity effects and removes the burden of accounting for them from narrow syntax.
This spares us from positing idiosyncratic syntactic operations to account for anomalous phenomena that are in fact the domain of morphology or phonology, and allows us to maintain a view of syntax as cross-linguistically relatively uniform.

Departmental talk, 2/7 – Emily Clem

Please join us for a talk by Emily Clem (UC Berkeley).
Coordinates: Thursday, 2/7 at 3:30pm in WILSON WPROOM (room 118)
Title: Cyclicity in Agree: Maximal projections as probes
The relationships between arguments that are morphologically tracked in switch-reference systems look challenging from the perspective of a constrained theory of syntactic dependency formation. In this talk, I argue that the challenge is only apparent. In particular, I propose that the adoption of Cyclic Agree (Rezac, 2003; Béjar and Rezac, 2009) provides the tools needed to handle the relevant syntactic dependencies in a strictly local way. Drawing on data from original fieldwork, the talk centers on a pattern of switch-reference in Amahuaca (Panoan; Peru), which is typologically unusual (and especially striking from a locality perspective) in that the reference of both objects and subjects in both matrix and dependent clauses is tracked. I argue that Amahuaca adjunct C, which is spelled out as a switch-reference marker, agrees directly with DPs in its own complement and with matrix DPs. This is possible because the maximal projection of this high adjunct C can probe its c-command domain, namely the matrix TP. I argue that this happens through cyclic expansion of C's probe in a manner consistent with the predictions of Cyclic Agree and Bare Phrase Structure (Chomsky, 1995). Not only is this account based on cyclic expansion able to accommodate object tracking in switch-reference, but it also provides a straightforward way to capture this apparently non-local pattern of agreement without loosening the conditions on locality in Agree. I conclude with a look at the typology of switch-reference systems and the syntactic and morphological sources of diversity in this domain.

Bernhard Schwarz at McGill Student Association of Cognitive Science

On November 13, Bernhard gave an invited presentation, "How and why: a case study in meaning", in the Cognitive Science speaker series organized by McGill's Student Association of Cognitive Science. The presentation was based on joint work with Alexandra Simonenko (McGill PhD '14).
Abstract: The body of literature on the semantics of questions, sparked by classic works from the 1970s and 1980s, is substantial, yet most of this literature focuses narrowly on questions about individuals (Who left?) or degrees (How long is it?). In this talk, I will offer some remarks about how- and why-questions like How did you open the door? or Why did the lights go out?. I will discuss why investigating the semantics of such questions is hard and what types of evidence are available to probe their meanings, and I will report on some surprising differences in logical behaviour between different types of how- and why-questions.

Michael Wagner in France

Michael is back from a talk at LINGUAE at the ENS in Paris and a joint keynote at the Workshop on Prosody & Meaning and SemDial in Aix-en-Provence. The talks reported on joint work with Dan Goodhue on their project ‘Toward an Intonational Bestiary’.

Kyle Gorman Visit

Kyle Gorman from Google AI and CUNY will be visiting the Department the week of November 12th. He will be giving a talk on Monday, 15:30–17:00, in Room 117, 1085 Dr. Penfield (title and abstract will be sent out soon), and a tutorial on Pynini, a Python library he developed for weighted finite-state grammar compilation, on Wednesday, 12:00–15:00, in Ferrier room 230.

(Talk, Monday)
Grammar engineering in text-to-speech synthesis
Many speech and language applications, including speech recognition and speech synthesis, require mappings between “written” and “spoken” representations of language. Despite substantial progress in applied machine learning, it is still the case that real-world industrial text-to-speech (TTS) synthesis systems largely depend on language-specific hand-written rules for these conversions. These may require a great deal of development effort and linguistic sophistication, and as such represent substantial barriers for quality control and internationalization. 
I first consider the case of number names, where the goal is to map written forms like 328 to three hundred twenty eight. I propose two computational models for learning this mapping. The first uses end-to-end recurrent neural networks. The second, inspired by prior literature on cross-linguistic variation in number naming, uses an induction strategy based on finite-state transducers. While both models achieve near-perfect performance, the latter is trained on several orders of magnitude less data, making it particularly useful for low-resource languages. This latter model is being used at Google to produce number grammars for dozens of languages and locales.
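To make the task concrete, here is a hand-written rule system for English number names of the kind such grammars replace or induce. This is only an illustrative sketch covering 0–999, not either of the models described in the talk.

```python
# Hand-written English number-name grammar, 0-999 (illustrative sketch only).
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def number_name(n: int) -> str:
    """Spell out 0-999 in American English (no 'and'), e.g. 328."""
    if n < 20:
        return ONES[n]
    if n < 100:
        tens, ones = divmod(n, 10)
        return TENS[tens] + (" " + ONES[ones] if ones else "")
    hundreds, rest = divmod(n, 100)
    name = ONES[hundreds] + " hundred"
    return name + (" " + number_name(rest) if rest else "")

print(number_name(328))  # -> three hundred twenty eight
```

Even this tiny fragment encodes language-specific facts (teen irregularity, the ordering of hundreds before tens), which is exactly the kind of cross-linguistic variation the finite-state induction strategy has to learn from data.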
I then consider the case of grapheme-to-phoneme conversion, where the task is to map written words onto their phonemic transcriptions. I describe a model in which the grammar engineering is performed by providing input and output vocabularies; in Spanish for instance, the input vocabulary includes digraphs like ll and rr, which denote single phonemes, and for Japanese kana, the output vocabulary includes entire syllables. This grammatical information, incorporated into a finite-state generative model, results in a significant improvement over a baseline system which lacks direct access to such information.
(Tutorial, Wednesday)
Pynini: Finite-state grammar development in Python
Finite-state transducers are abstract computational models of relations between sets of strings, widely used in speech and language technologies and studied as computational models of morphophonology. In this tutorial, I will introduce the finite-state transducer formalism and Pynini (Gorman 2016; http://pynini.opengrm.org), a Python library for compiling and processing finite-state grammars. In the first part of the tutorial, we will cover the finite-state formalism in detail. In the second part, we will install the Pynini library and survey its basic functionality. In the third, we will tackle case studies including Finnish vowel harmony rules and decoding ambiguous text messages. Participants are assumed to be familiar with the Python programming language, but I do not assume any experience with finite-state methods or natural language processing.
Note to participants: You are encouraged to bring a working laptop. We will reserve some time to install the necessary libraries so that you can follow along and participate in a few select exercises. This software has been tested on Linux, Mac OS X (with an up-to-date version of XCode), and Windows 10 (with the Ubuntu flavor of Windows Subsystem for Linux). In case you wish to get a head start, installation instructions are available here: http://wellformedness.com/courses/PyniniTutorial/installation-instructions.html
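For a flavour of the Finnish vowel-harmony case study, here is a plain-Python approximation of the rule a Pynini grammar would encode as a transducer: the inessive suffix surfaces as -ssa after back-vowel stems (a, o, u) and -ssä after front-vowel stems (ä, ö, y). The function and its treatment of neutral-vowel stems are illustrative simplifications that ignore consonant gradation and loanword complications.

```python
# Harmony classes of Finnish vowels (e and i are neutral).
BACK, FRONT = set("aou"), set("äöy")

def inessive(stem: str) -> str:
    """Attach the inessive ('in') suffix, choosing -ssa vs. -ssä
    according to the harmony class of the stem's last harmonic vowel."""
    for ch in reversed(stem):
        if ch in BACK:
            return stem + "ssa"
        if ch in FRONT:
            return stem + "ssä"
    return stem + "ssä"  # neutral-only stems take front suffixes

print(inessive("talo"))  # 'house' -> talossa
print(inessive("kylä"))  # 'village' -> kylässä
```

A Pynini version would express the same alternation as a context-dependent rewrite rule compiled into a finite-state transducer, which can then be composed with other rules and inverted for analysis as well as generation.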

Morgan Sonderegger at University of Oregon

Morgan Sonderegger was at University of Oregon’s Department of Linguistics October 25-26, where he gave a workshop entitled “Topics in fitting and using mixed-effects regression models” and a colloquium talk, “Towards larger-scale cross-linguistic and cross-variety studies of speech”.

Linguistics talk, today (10/22) – David Barner

Today David Barner (UCSD) will be giving a talk in the linguistics department. (He will also be giving a different talk tomorrow.)
Time/Date: Monday, 22 October, 2018, 15:30 – 17:00
Place: McGill Campus, 1085 Dr. Penfield, Room 117
Title: Access to alternatives and the acquisition of logical language
Though children begin to use logical connectives and quantifiers early in acquisition, studies in both linguistics and psychology have documented surprising failures in children's interpretation of these expressions. Early accounts, beginning with Piaget, ascribed these failures to children's still burgeoning semantic and conceptual representations, arguing that children acquire ever more powerful logical resources as they develop and acquire language. But more recent accounts, drawing on a Gricean divide between semantics and pragmatics, have argued that certain of these failures might not reflect semantic incompetence, but instead changes in children's pragmatic reasoning abilities. In particular, early studies argued that children might be more "logical" than adults, perhaps because of difficulties with Gricean reasoning, or theory of mind. In this talk, I investigate this question and argue that neither pragmatic incompetence nor conceptual/semantic change can explain children's behaviors, and that instead children's judgments stem from difficulties with "access to alternatives".
I show this in two parts. First, I consider the case study of scalar implicature, and show that when children hear an utterance like the one in (1) they fail to compute a scalar implicature like the one in (3) because they are unable to spontaneously generate the stronger scalemate in (2). But when scalar alternatives are provided contextually or are "unique" alternatives, children no longer struggle with implicatures. I show that children easily compute "ad hoc" implicatures and ignorance implicatures (where all relevant alternatives are provided in the original utterances), as well as inferences that exhibit similar computational structure, like mutual exclusivity. I also show that children's problems cannot be ascribed to difficulties with epistemic (theory of mind) reasoning, ruling out the idea that their problems are related to understanding other minds and intentions.
(1) I ate some of the cake
(2) I ate all of the cake
(3) I ate some (but not all) of the cake
In the second part, I discuss one variant of the "access to alternatives" hypothesis, which exploits Roberts' (1996) notion of the Question Under Discussion (QUD). On this hypothesis, there is a symmetrical relation between a speaker's intended QUD when uttering a statement and the alternative statements that are relevant to evaluating that QUD, such that (1) knowing a speaker's intended QUD specifies which alternatives are relevant, and (2) knowing which alternatives are relevant specifies the speaker's intended QUD. On this view, children's ability to make logical inferences should be affected either by making alternatives available in context or by narrowing the QUD. To explore this idea, I present data from three studies. First, I review evidence from a recent study by Skordos and Papafragou (2016) in which children's rate of implicature can be improved by either means (alternatives or direct QUD narrowing). Second, I present data regarding quantifier spreading in (4). As in past studies, I show that, in a context where three girls are riding 3 out of 4 available elephants (Context A), children judge (4) to be false (as though the intended question were "Is every elephant ridden by a girl?"). However, when the identical utterance is first probed in a context that renders it false (Context B), children subsequently judge (4) to be true in Context A (now understanding the question to be "Is every girl riding an elephant?"). I argue that Context B provides a state of affairs providing what Crain calls "plausible dissent", making clear the speaker's intended meaning (i.e., here, the QUD), which in the absence of Context B children must infer from other contextual cues – e.g., "What question is the speaker most likely to ask in this context?"
(4) Every girl is riding an elephant.
Context A: <g, e> <g, e> <g, e> <e>
Context B: <g, e> <g, e> <g> <e>
Also, I show that providing relevant states of affairs can likewise affect scalar implicature, and that when children are not provided with a context that makes the denial of a statement plausible (a la Crain), they fail to converge on the intended QUD, fail to generate relevant linguistic alternatives, and derive non-adult-like inferences – e.g., interpreting disjunction as conjunction. I show that, contrary to several recent reports, children do not interpret disjunction as conjunction if the context properly narrows the speaker’s intended QUD by providing states of affairs that render test statements deniable.

Jessica Coon to Liverpool Biennial

Jessica will give a public lecture at the UK Biennial of Contemporary Art in Liverpool later this week. The talk, "Aliens, Fieldwork, and Universal Grammar", is one of ten public lectures during the 15-week event.

Special talk, 10/23 – David Barner

Speaker: Dr. David Barner, UCSD
Place: Room 461, 2001 McGill College
Title: Linguistic origins of uniquely human abstract concepts
Abstract: Humans have a unique ability to organize experience via formal systems for measuring time, space, and number. Many such concepts – like minute, meter, or liter – rely on arbitrary divisions of phenomena using a system of exact numerical quantification, which first emerges in development in the form of number words (e.g., one, two, three, etc.). Critically, large exact numerical representations like "57" are neither universal among humans nor easy to acquire in childhood, raising significant questions as to their cognitive origins, both developmentally and in human cultural history. In this talk, I explore one significant source of such representations: natural language. In Part 1, I draw on evidence from six language groups, including French/English and Spanish/English bilinguals, to argue that children learn small number words using the same linguistic representations that support learning singular, dual, and plural representations in many of the world's languages. For example, I will argue that children's initial meaning for the word "one" is not unlike their meaning for "a". In Part 2, I investigate the idea that the logic of counting – and the intuition that numbers are infinite – also arises from a foundational property of language: recursion. In particular, I will present a series of new studies from Cantonese, Hindi, Gujarati, English, and Slovenian. Some of these languages – like Cantonese and Slovenian – exhibit relatively transparent morphological rules in their counting systems, which may allow children to readily infer that number words – and therefore numbers – can be freely generated from rules, and therefore are infinite. Other languages, like Hindi and Gujarati, have highly opaque counting systems, and may make it harder for children to infer such rules. I conclude that the fundamental logical properties that support learning mathematics can also be found in natural language.
I end by speculating about why number words are so difficult for children to acquire, and also why not all humans constructed count systems historically.
Bio: Dr. Barner’s research program engages three fundamental problems that confront the cognitive sciences. The first is how we can explain the acquisition of concepts that do not transparently reflect properties of the physical world, whether these express time, number, or logical content found in language. What are the first assumptions that children make about such words when they hear them in language, and what kinds of evidence do they use to decode their meanings? Second, he is interested in how linguistic structure affects learning, and whether grammatical differences between languages cause differences in conceptual development. Are there concepts that are easier to learn in some languages than in others? Or do cross-linguistic differences have little effect on the rate at which concepts emerge in language development? Dr. Barner pursues these questions through a cross-linguistic and cross-cultural developmental approach informed by methods in both psychology and linguistics, studying children learning Cantonese, Mandarin, Japanese, Hindi, Gujarati, Arabic, Slovenian, Spanish, French, and English, among others.

Jessica to Calgary

Jessica was at the University of Calgary last week where she gave a colloquium talk, “Feature Gluttony and the Syntax of Hierarchy Effects” (collaborative work with Stefan Keine, USC).
