Please join us for the next talk in our 2016–2017 colloquium series:
Speaker: Stephanie Shih (University of California, Merced)
Date & Time: March 17th at 3:30 pm
Place: Education Bldg. rm. 433
Title: A multilevel approach to lexically-conditioned phonology
Lexical classes often exhibit different phonological behaviours, in alternations or phonotactics. This talk takes up two interrelated issues for lexically-conditioned phonological patterns: (1) how the grammar captures the range of phonological variation that stems from lexical conditioning, and (2) whether the relevant lexical classes needed by the grammar can be learned from surface patterns. Previous approaches to lexically-sensitive phonology have focused largely on constraining it; however, only a limited understanding currently exists of the quantitative space of variation possible (i.e., entropy) within a coherent grammar.
In this talk, I present an approach that models lexically-conditioned phonological patterns as a multilevel grammar: each lexical class is a cophonology subgrammar of indexed constraint weight adjustments (i.e., varying slopes) in a multilevel Maximum Entropy Harmonic Grammar. This approach leverages the structure of multilevel statistical models to quantify the space of lexically-conditioned variation in natural language data. Moreover, the approach allows for the deployment of information-theoretic model comparison to assess competing hypotheses of what the phonologically-relevant lexical classes are. I’ll show that under this approach, the relevant lexical classes need not be a priori assumed but can instead be induced from noisy surface input via feature discovery.
Two case studies are examined: part of speech-conditioned tone patterns in Mende and content versus function word prosodification in English. Both case studies bring to bear new quantitative evidence on classic category-sensitive phenomena. The results illustrate how the multilevel approach proposed here can capture the probabilistic heterogeneity and learnability of lexical conditioning in a phonological system, with potential ramifications for understanding the structure of the developing lexicon in grammar acquisition.
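The abstract describes lexical classes as cophonology subgrammars of indexed weight adjustments ("varying slopes") in a Maximum Entropy Harmonic Grammar. As a rough illustration of that architecture only — the constraints, weights, classes, and violation counts below are invented and are not the Mende or English analyses from the talk — a minimal sketch might look like this:

```python
import math

# Toy MaxEnt Harmonic Grammar with per-class weight adjustments
# ("varying slopes"). All names and numbers are hypothetical.

BASE_WEIGHTS = {"Faith": 3.0, "Markedness": 2.0}

# One adjustment vector per lexical class: each class is a
# "cophonology" defined by offsets to the shared base weights.
CLASS_ADJUSTMENTS = {
    "noun": {"Faith": 0.0, "Markedness": 0.0},
    "verb": {"Faith": -1.5, "Markedness": 1.0},
}

def harmony(violations, lex_class):
    """Weighted sum of constraint violations under class-adjusted weights."""
    return sum(
        (BASE_WEIGHTS[c] + CLASS_ADJUSTMENTS[lex_class][c]) * v
        for c, v in violations.items()
    )

def candidate_probs(candidates, lex_class):
    """MaxEnt grammar: P(candidate) is proportional to exp(-harmony)."""
    scores = {cand: math.exp(-harmony(v, lex_class))
              for cand, v in candidates.items()}
    z = sum(scores.values())
    return {cand: s / z for cand, s in scores.items()}

# The same candidate set receives different probability distributions
# depending on the lexical class of the input word.
candidates = {
    "faithful": {"Faith": 0, "Markedness": 1},
    "altered":  {"Faith": 1, "Markedness": 0},
}
print(candidate_probs(candidates, "noun"))  # faithful candidate favoured
print(candidate_probs(candidates, "verb"))  # altered candidate favoured
```

Because the class-specific terms are offsets from shared weights rather than independent grammars, the amount of lexically conditioned variation is quantifiable as the size of those offsets — which is what makes the model-comparison step in the abstract possible.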
Speaker: Boris Harizanov (Stanford University)
Date & Time: February 17th at 3:30 pm
Place: Education Bldg. rm. 433
Title: On the nature of syntactic head movement
In Harizanov and Gribanova 2017, we argue that head movement phenomena having to do with word formation (affixation, compounding, etc.) must be empirically distinguished from head movement phenomena having to do purely with the displacement of heads or fully formed words (verb initiality, verb-second, etc.). We suggest that the former, word-formation type should be implemented as post-syntactic amalgamation, while the latter, displacement-type should be implemented as regular syntactic movement.
In this talk, I take this result as a starting point for an investigation of the latter, syntactic type of head movement. I show in some detail that such movement has the properties of (Internal) Merge and that it always targets the root. In addition, I suggest that, once a head is merged with the root, there are two available options (traditionally assumed to be incompatible with one another or with other grammatical principles): either (i) the target of movement projects or (ii) the moved head projects. The former scenario yields head movement to a specifier position, while the latter yields head reprojection. I offer participle fronting in Bulgarian as a case study of head movement to a specifier position and show how this analysis explains the apparently dual X- and XP-movement properties of participle fronting in Bulgarian, without stipulating a structure-preservation constraint on movement. As a case study of head reprojection, I discuss free relativization in Bulgarian. A treatment of this phenomenon in terms of reprojection allows for an understanding of why an element that has the distribution of a relative complementizer C in Bulgarian free relatives looks like a determiner D morphologically.
This work brings together and reconciles two strands of research, usually viewed, at least to some degree, as incompatible: head movement to specifier position and head movement as reprojection. Such synthesis is afforded, in large part, by the exclusion of the word-formation type of head movement phenomena from the purview of syntactic head movement, as in Harizanov and Gribanova 2017.
Speaker: Dan Lassiter (Stanford University)
Date & Time: January 27th at 3:30pm
Place: Education Bldg. rm. 433
Title: Epistemic language in indicative and counterfactual conditionals
Abstract: In this talk I’ll report on a series of experiments which examine judgments about epistemic modals, both in unembedded contexts and in indicative and counterfactual conditionals. Building on these results and recent probabilistic theories of epistemic language, I propose a probabilistic version of Kratzer’s restrictor theory of conditionals that identifies the indicative/counterfactual distinction with Pearl’s distinction between conditioning and intervening in probabilistic graphical models. Combining this theory with recent accounts of must, we can also derive a theory of bare conditionals; I describe the predictions and consider their plausibility in light of the experimental data.
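The distinction the abstract borrows from Pearl — conditioning versus intervening in a probabilistic graphical model — can be made concrete with a toy network. The three-variable network and all probabilities below are invented for illustration; they are not material from the talk:

```python
from itertools import product

# Minimal Bayesian network with a common cause: C -> A, C -> B.
# All variables are binary; the numbers are purely illustrative.

P_C = {True: 0.3, False: 0.7}
P_A_GIVEN_C = {True: 0.9, False: 0.2}   # P(A=1 | C)
P_B_GIVEN_C = {True: 0.8, False: 0.1}   # P(B=1 | C)

def joint(c, a, b):
    """Joint probability of one full assignment under the network."""
    pa = P_A_GIVEN_C[c] if a else 1 - P_A_GIVEN_C[c]
    pb = P_B_GIVEN_C[c] if b else 1 - P_B_GIVEN_C[c]
    return P_C[c] * pa * pb

def p_b_given_a(a_val):
    """Observational: P(B=1 | A=a), by ordinary conditioning."""
    num = sum(joint(c, a_val, True) for c in (True, False))
    den = sum(joint(c, a_val, b)
              for c, b in product((True, False), repeat=2))
    return num / den

def p_b_do_a(a_val):
    """Interventional: P(B=1 | do(A=a)) -- the edge into A is cut,
    so A carries no information about C and B keeps its marginal."""
    return sum(P_C[c] * P_B_GIVEN_C[c] for c in (True, False))

print(p_b_given_a(True))  # observing A=1 raises belief in C, hence in B
print(p_b_do_a(True))     # intervening on A leaves B at its marginal
```

Identifying indicatives with conditioning and counterfactuals with intervening, as the abstract proposes, then amounts to the difference between these two functions: the same antecedent value yields different consequent probabilities depending on whether it is observed or imposed.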
Speaker: Jackie Cheung (McGill University)
Date & Time: December 2nd at 3:30 pm
Place: Education Bldg. rm. 624
Title: Generalized Natural Language Generation
In popular language generation tasks such as machine translation, automatic systems are typically given pairs of expected input and output (e.g., a sentence in some source language and its translation in the target language). A single task-specific model is then learned from these samples using statistical techniques. However, such training data exists in sufficient quantity and quality for only a small number of high-profile, standardized generation tasks. In this talk, I argue for the need for generic tools in natural language generation, and discuss my lab’s work on developing generic generation tasks and methods to solve them. First, I discuss progress on defining a task in sentence aggregation, which involves predicting whether units of semantic content can be meaningfully expressed in the same sentence. Then, I present a system for predicting noun phrase definiteness, and show that an artificial neural network model achieves state-of-the-art performance on this task, learning relevant syntactic and semantic constraints.
Michael Wagner gave talks at colloquia at Princeton University (16th November) and Johns Hopkins University (17th November), in which he reported on his joint work with Meghan Clayards, Oriana Kilbourn-Ceron, Morgan Sonderegger, and James Tanner, titled “Allophonic variation and the locality of production planning”. The abstract is given below.
The application of allophonic processes across word boundaries (processes such as flapping (cf. De Jong, 1998; Patterson and Connine, 2001) and sibilant assimilation (cf. Holst and Nolan, 1995) in English, or liaison in French (Durand and Lyche, 2008)) is known to be subject to locality conditions. The same processes are also known to be variable. While a correlation between the locality of cross-word processes and their inherent variability is often observed (e.g. Kaisse, 1985), existing theories of either aspect usually do not make any predictions about the other. In this paper we report on several projects that pursue the hypothesis that the locality and variability of cross-word allophonic processes are tightly linked, and can both be understood as a consequence of the locality of production planning.
The basic idea is that flapping, sibilant assimilation, liaison and related processes are sensitive to the segmental environment in a following word, but the following segmental environment can only exert its effect if the relevant information is already available when the phonetic detail of the current word is being planned. Under this view, effects of syntax and prosody on the application of these processes are reducible to their indirect effects on production planning: For example, a speaker is less likely to plan ahead across a sentence boundary, and less likely to plan ahead across a prosodic juncture. This hypothesis makes the specific prediction that all factors affecting planning should affect the likelihood of cross-word allophonic processes (such as the predictability of the following word, the number of syllables of the following word, etc.). We report evidence from several experimental and corpus studies that test our hypothesis, which makes different predictions than accounts that tie allophonic processes to particular phonological domains. It also makes different predictions than accounts that try to explain sandhi processes as an effect of gestural overlap, or than currently popular accounts in terms of probabilistic reduction.
An account of the locality of sandhi processes in terms of the locality of production planning removes some of the motivation for categorically distinct phonological domains as they are assumed in the theory of the prosodic hierarchy. It also makes new predictions about what types of processes will necessarily have to be local and variable, and about the degree of locality/variability depending on which information their application relies on.
Please join us for the next colloquium in our fall colloquium series.
Speaker: Judith Degen (Stanford University)
Date & Time: November 4th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Beyond “overinformativeness”: rationally redundant referring expressions
Abstract: What guides the choice of a referring expression like “the box”, “the big box”, or “the big red box”? Speakers have a well-documented tendency to add redundant modifiers in referring expressions (e.g., “the big red box” when “the big box” would suffice for uniquely picking out the intended object). This “overinformativeness” poses a challenge for theories of language production, especially those positing rational language use (e.g., in the Gricean tradition). We present a novel production model of referring expressions in the Rational Speech Act framework. Speakers are modeled as rationally trading off the cost of additional modifiers with the amount of information added about the intended referent. The innovation is assuming that truth functions are probabilistic rather than deterministic.
This model captures a number of production phenomena in the realm of overinformativeness, including the color-size asymmetry in probability of overmodification (speakers overmodify more with color than size adjectives); visual scene variation effects on probability of overmodification (increased visual scene variation increases the probability of overmodifying with color); and color typicality effects on probability of overmodification (speakers overmodify less with more typical colors). In addition to demonstrating how the model accounts for these qualitative effects, we present fine-grained quantitative predictions that are beautifully borne out in data from interactive free production reference game experiments.
We conclude that the systematicity with which speakers redundantly use modifiers implicates a system geared towards communicative efficiency rather than towards wasteful overinformativeness.
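The core move in the abstract — a Rational Speech Act speaker trading off modifier cost against informativeness, with probabilistic rather than deterministic truth functions — can be sketched in a few lines. Everything below (the objects, the semantic values, the cost and rationality parameters) is invented for illustration and is not the experimental material from the talk; the only assumption carried over is that size adjectives are noisier than colour adjectives:

```python
import math

# Toy RSA speaker with probabilistic ("noisy") semantics.
OBJECTS = ["big_red_box", "small_red_box", "small_blue_box"]

# How well each utterance applies to each object. Colour is treated
# as near-deterministic (0.99 / 0.01); size is noisy (0.80 / 0.20).
SEMANTICS = {
    "box":         {"big_red_box": 0.99, "small_red_box": 0.99,
                    "small_blue_box": 0.99},
    "big box":     {"big_red_box": 0.80, "small_red_box": 0.20,
                    "small_blue_box": 0.20},
    "red box":     {"big_red_box": 0.99, "small_red_box": 0.99,
                    "small_blue_box": 0.01},
    "big red box": {"big_red_box": 0.80 * 0.99,
                    "small_red_box": 0.20 * 0.99,
                    "small_blue_box": 0.20 * 0.01},
}
COST = {"box": 0.0, "big box": 0.1, "red box": 0.1, "big red box": 0.2}
ALPHA = 5.0  # speaker rationality

def literal_listener(utterance):
    """L0: P(object | utterance), proportional to the semantic value
    under a uniform prior over objects."""
    vals = SEMANTICS[utterance]
    z = sum(vals.values())
    return {o: vals[o] / z for o in OBJECTS}

def speaker(target):
    """S1: softmax over utterances of informativeness minus cost."""
    utils = {u: ALPHA * (math.log(literal_listener(u)[target]) - COST[u])
             for u in SEMANTICS}
    z = sum(math.exp(v) for v in utils.values())
    return {u: math.exp(v) / z for u, v in utils.items()}

probs = speaker("big_red_box")
best = max(probs, key=probs.get)  # the redundant "big red box" wins
```

Because "big" is noisy, the bare "big box" leaves residual uncertainty at the literal listener, and the redundant colour modifier buys enough extra information to outweigh its cost — with deterministic semantics, the shorter utterance would be preferred.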
We are pleased to announce the second talk in our 2016–2017 McGill Linguistics Colloquium Series will be given by Yvan Rose (Memorial University of Newfoundland). For more information on upcoming events in the McGill Linguistics department, please see our website (http://www.mcgill.ca/linguistics/events).
Who: Yvan Rose
When: Friday 10/28 at 3:30pm
Where: Education room 433
Title: “Perceptual-Articulatory Relationships in Phonological Development: Implications for Feature Theory”
In this presentation, I discuss a series of asymmetries in phonological development, the nature of which is difficult to address from a strictly phonological perspective. In particular, I focus on transitional periods between developmental stages. I show that these transitions are best interpreted in terms of phonological categories at both prosodic and segmental levels of representation, including segmental features. Using computer-assisted methods of data classification, I describe the detail of these transitions, highlighting both perceptual and articulatory pressures on the child’s developing system of phonological representation. I discuss implications of these findings for Phonological Theory, in particular for traditional models of segmental representation relying on phonological features. While the data support the need for sub-segmental units of phonological representation, these units do not appear to match fully the set of features typically used in the analysis of adult phonological systems.
We are pleased to announce that the first talk in our 2016–2017 McGill Linguistics Colloquium Series will be given by our own Michael McAuliffe. For more information on upcoming events in the McGill Linguistics department, please see our website (http://www.mcgill.ca/linguistics/events).
Who: Michael McAuliffe
When: Friday 9/23 at 3:30pm
Where: Education room 433
Title: “Dual nature of perceptual learning: Robustness and specificity”
Abstract: “In perceiving speech and language, listeners need to both perceive specific, highly variable utterances, and generalize to larger linguistic categories. One large source of the variability is in how individual speakers produce sounds, but another source of variation is the way in which speech and language are used in a particular task to accomplish a goal. Perceptual learning is a phenomenon in which listeners update their perceptual sound categories when exposed to a novel speaker. Perceptual learning is robust in the sense that most listeners show perceptual learning effects, most sound categories can be easily updated, and most tasks involving speech facilitate perceptual learning. In this talk, I focus on the ways that perceptual learning can be task-specific. I present a series of perceptual learning experiments exposing listeners to a novel talker through single words or longer sentences, varying the tasks and the linguistic context. The instructions and goals of the task exert a sizeable influence over the amount of perceptual learning that listeners exhibit. In general, listeners adapt less in the course of an experiment if they do not have to rely as much on the acoustic signal. For instance, if listeners are presented with the orthography of the word along with the audio, they will not learn as much as if they had heard the audio alone. In sentence tasks, listeners matching pictures to a word at the end of a predictable sentence (e.g., “A deep moat protected the old castle”) will not learn as much from the final word as from an unpredictable sentence (e.g., “He dreaded the long walk to the castle”). However, the inverse is true for sentence transcription tasks, with larger perceptual learning effects from predictable sentences than unpredictable ones. Perceptual learning effects can generally be seen for all listeners and all tasks, but the size of the effects is dependent on the exposure task and how the linguistic system is engaged.”
Below is the finalized colloquium schedule for the upcoming academic year, also available here. As always, colloquia will take place Fridays at 3:30, rooms TBA. Mark your calendars!
Michael McAuliffe (McGill) – September 23
Yvan Rose (Memorial Univ. Newfoundland) – October 28
Judith Degen (Stanford) – November 4
Jackie Cheung (McGill) – December 2
Dan Lassiter (Stanford) – January 27
Jeremy Hartman (UMass Amherst) – February 3
Boris Harizanov (Stanford) – February 17
Stephanie Shih (UC Merced) – March 17
Lucie Ménard (UQAM) – March 31
Jessica Coon is returning from Stanford, where she gave a colloquium talk titled “Case Discrimination in Caseless Languages.”
Lisa Travis gave a colloquium talk at the University of Ottawa last week, titled: “Determining the position of Out of Control morphemes in Malagasy and Tagalog.”
Speaker: Pat Keating (UCLA)
Date & Time: Friday, April 8th at 3:30 pm
Place: ARTS Bldg. room 260
Title: Linguistic Voice Quality
Abstract: In this talk I will present several results concerning the production and perception of voice quality (phonation type), from a larger interdisciplinary project at UCLA. First, I compare the acoustic properties of phonation type distinctions in several languages, deriving a simple (low-dimensional) phonetic space for voice quality in which phonation types cluster across languages. Second, I discuss the relation between phonation and lexical tone. In some languages, phonation type is phonemic, and independent of tone, either because the languages are non-tonal (e.g. Gujarati), or because tones and phonation cross-classify (e.g. Mazatec, Yi languages). In other languages, phonation is non-phonemic, instead conditioned by voice pitch and segmental/prosodic contexts (e.g. English). In some such languages (e.g. Mandarin), this relation between voice pitch and voice quality gives voice quality a secondary role in tonal contrasts, increasing the effective size of the tone space. Still other tone languages have both independent phonation and pitch-related phonation (e.g. Hmongic languages); we show that in one such language, White Hmong, the perceptual role of phonation is different for different tones. These cases will be illustrated with acoustic and physiological measures of voice production, obtained with our freely-available tools for voice analysis.
Meghan Clayards returned from University of Maryland last week, where she gave a colloquium talk titled “Modulation of Phonetic Contrasts”. The abstract is available here.
Speaker: Lisa Pearl (UC Irvine)
Date & Time: Friday, March 18th at 3:30 pm
Place: ARTS Bldg. room 260
Title: How to know what’s necessary: Using computational modeling to specify Universal Grammar
One explicit motivation for Universal Grammar (UG) is that it’s what allows children to acquire language as effectively and as rapidly as they do. Proposals for the contents of UG typically come from characterizing a learning problem precisely and identifying a potential solution to that problem. One benefit of computational modeling is to see if that solution works when it’s embedded in a learning strategy used during the acquisition process. This includes specifying (i) what the child knows already, (ii) what data the child is learning from, (iii) how long the child has to learn, and (iv) what the child needs to learn along the way.
When we identify successful learning strategies this way, we can then examine their components to see if any are necessarily both innate and domain-specific (and so part of UG). I have previously used this approach to propose new UG components (and remove the necessity of others) for learning both syntactic islands and English anaphoric one. In this talk, I investigate what’s been called the Linking Problem, which concerns where event participants appear syntactically. I’ll discuss some initial findings about when prior (and likely UG) knowledge, such as the Uniformity of Theta Assignment Hypothesis (UTAH), is helpful for learning useful information about the Linking Problem.
Jessica Coon spent the last few days of break in Minneapolis, where she gave a colloquium talk, “Unergatives, antipassives, and Roots in Chuj” at the University of Minnesota. This Friday she will present joint work with Alan Bale at a colloquium at Concordia University. The title of their talk is “Counting banana trees in Ch’ol: Crosslinguistic consequences for the syntax and semantics of classifiers.” Stay tuned for a Ling-Mont announcement with details.
Speaker: Stefan Keine (UMass Amherst)
When: Monday February 8, 3:30pm
Where: Arts Building, 145
Title: Selective Opacity
In this talk, I develop a systematic account of selective opacity effects, wherein one and the same constituent is opaque for one operation but transparent for another. Classical observations of selective opacity lie in the realm of movement. Finite clauses, for instance, are opaque for A-movement but transparent for A’-movement. This pattern generalizes above and beyond the A/A’-distinction. Recent research has shown that locality mismatches between movement types are not arbitrary, but subject to systematic restrictions (Williams 2003, 2011, Abels 2007, 2012, Müller 2014). In particular, it has been argued that the locality of a movement type is related to the height of its landing site in the clausal spine: Movement that targets a structurally high position (like A’-movement) is able to escape more domains than movement that lands in a structurally low position (like A-movement).
I propose an account of selective opacity that not only allows for locality mismatches, but also derives restrictions on these mismatches. First, based on a case study of selective opacity in Hindi/Urdu, I show that the phenomenon is not restricted to movement, but also encompasses phi-agreement and in-situ wh-licensing. Second, I conclude from this insight that selective opacity involves a restriction in the operation Agree, not movement itself. In particular, I propose that Agree-probes differ in what constituents they may or may not search into. Third, I show how this account derives various restrictions on locality mismatches. For example, it derives in a principled way the connection between a probe’s structural height and its locality profile.
In this way, the account unifies, in a systematic and novel way, selective opacity across operations and constructions, mismatches between the locality of movement and agreement, and intricate interactions between movement types and agreement.
Speaker: Jim Wood (Yale)
When: Monday February 1, 3:30pm
Where: Arts 145
Title: What is Case?
Case marking, in languages that have it, is a bit of a mystery. It straddles the line between the systematic and the idiosyncratic. It follows regular rules, but allows a wide array of exceptions to those rules. It is trying to tell us something—even many things—about how natural language works, but what exactly is it telling us?
Standard treatments of case would have us believe that case tells us something about where a DP ends up—its final, licensing position (prior to any A’-dependencies). I will argue, to the contrary, that case tells us more about where a DP comes from than where it ends up, and that this holds even for “structural” cases like accusative.
I will make this point by probing the peculiar properties of accusative subjects in Icelandic. Although accusative subjects are often thought to be among the most idiosyncratic patterns of case marking, I will show that the various dimensions of idiosyncrasy coalesce under the following conclusion: accusative subjects are the promoted objects of hidden transitives.
This conclusion explains a range of facts that span the syntax, semantics and morphology. But it should force us to come to grips with its corollary: case can’t be about where a DP ends up, in the standard, licensing sense. A structural accusative object can, in the right circumstances, move to the subject position. What needs to be explained is why this doesn’t happen more often, and I will propose that the answer stems from the locality of A-dependencies.
Speaker: Aron Hirsch (MIT)
When: Thursday February 4, 3:30pm
Where: MAASS Building, room 217
Title: A case for conjunction reduction
And can apparently conjoin constituents of any syntactic category. This distribution seems at odds with a possible hypothesis about the semantics of and: that and has a parallel semantics to the connective ‘&’ of propositional logic and composes with arguments denoting truth-values (type t). Given this hypothesis, examples where and appears to conjoin constituents not of type t are puzzling. I focus on examples like (1), where and apparently conjoins object DPs.
(1) John saw [every student] and [every professor].
I provide new evidence that the grammar makes available a mechanism of conjunction reduction (‘CR’; e.g. Ross 1967, Schein 2014) by which and may conjoin constituents of type t, even when it appears to conjoin constituents not of type t. CR is supported empirically: the extra structure associated with CR is required to host adverbs, derive scope readings, and license ellipsis. CR is also supported theoretically: CR is a predicted epiphenomenon of independently needed syntactic mechanisms.
After arguing that CR is available, I discuss data which are most straightforwardly understood if (1) must be parsed with CR, i.e. consistent with the semantic hypothesis, every student and every professor cannot be directly conjoined. This result has implications for a broad set of constructions, as I illustrate in the final part of the talk with clefts and right node raising.
Speaker: Timothy J. O’Donnell (MIT)
When: Monday January 25th, 3:30pm
Where: Arts 145
Title: Productivity and Reuse in Language
A much-celebrated aspect of language is the way in which it allows us to express and comprehend an unbounded number of thoughts. This property is made possible because language consists of several combinatorial systems which can be used to productively build novel forms using a large inventory of stored, reusable parts: the lexicon.
For any given language, however, there are many more potentially storable units of structure than are actually used in practice — each giving rise to many ways of forming novel expressions. For example, English contains suffixes which are highly productive and generalizable (e.g., -ness; Lady-Gagaesqueness, pine-scentedness) and suffixes which can only be reused in specific words, and cannot be generalized (e.g., -th; truth, width, warmth). How are such differences in generalizability and reusability represented? What are the basic, stored building blocks at each level of linguistic structure? When is productive computation licensed and when is it not? How can the child acquire these systems of knowledge?
I will discuss a theoretical framework designed to address these questions. The approach is based on the idea that the problem of productivity and reuse can be solved by optimizing a tradeoff between a pressure to store fewer, more reusable lexical items and a pressure to account for each linguistic expression with as little computation as possible. I will show how this approach addresses a number of problems in English inflectional and derivational morphology, and briefly discuss its applications to other domains of linguistic structure.