My research focuses on showing, through computer simulations, how the production, comprehension and acquisition of a language’s phonetics, phonology and morphology, and how these change over the generations, can be explained by assuming multi-level representations in the language user’s mind. The following paper shows how, in a parallel multi-level model of production, phonological (i.e. ‘later’) considerations can influence morphological (i.e. ‘earlier’) choices:
|2017||Paul Boersma & Jan-Willem van Leussen:|
Efficient evaluation and learning in multi-level parallel constraint grammars.
Linguistic Inquiry 48: 349–388. [copyright]
Multi-level models allow us to regard the typologies of the world’s languages as emergent rather than as innate or synchronically functionalist. For example, markedness is an emergent property that follows from frequency differences in the learner’s input (and not from innate markedness constraints or from synchronically functionalist faithfulness rankings), and licensing by cue emerges from differences in auditory cue reliability in the learner’s input (and not from innate specific-over-general faithfulness rankings or from synchronically listener-oriented faithfulness rankings):
Emergent ranking of faithfulness explains markedness and licensing by cue.
Rutgers Optimality Archive 954. 30 pages.
Earlier version: Handout 14th Manchester Phonology Meeting, 2006/05/28.
As another example, auditory dispersion in inventories of phonemes emerges from the fact that learners use in production the same constraint rankings that have optimized their comprehension (and not from innate markedness constraints or synchronically functionalist dispersion constraints). The following two papers are the one- and two-dimensional cases, respectively:
|2008||Paul Boersma & Silke Hamann:|
The evolution of auditory dispersion in bidirectional constraint grammars.
Phonology 25: 217–270.
Material: scripts for the simulations and pictures.
Earlier version: Rutgers Optimality Archive 909, 2007/04/17.
Earlier version: Handout OCP 3, Budapest, 2006/01/17.
The emergence of auditory contrast.
Presentation GLOW 30, Tromsø. 24 slides.
The emergence of categories and constraints themselves is discussed both for Optimality Theory and for neural networks:
|2018/09/11||Paul Boersma, Titia Benders & Klaas Seinhorst:|
Neural networks for phonology and phonetics.
Manuscript, University of Amsterdam.
|2003/02/28||Paul Boersma, Paola Escudero &
Learning abstract phonological from auditory phonetic categories: An integrated model for the acquisition of language-specific sound categories.
Proceedings of the 15th International Congress of Phonetic Sciences, Barcelona, 3–9 August 2003, pp. 1013–1016 (= Rutgers Optimality Archive 585).
|1998||Functional phonology: Formalizing the interactions between articulatory and perceptual drives.|
Ph.D. dissertation, University of Amsterdam, 504 pages.
A hardcopy edition is available from the author for free!
For more detail on separate chapters, and scripts, see Functional Phonology (1998).
Such simulations make it possible to track languages over the generations (for more, see sound change):
|2007/10/27||Paul Boersma & Joe Pater:|
Constructing constraints from language data: the case of Canadian English diphthongs.
Handout NELS 38, Ottawa. 18 pages.
The odds of eternal optimization in Optimality Theory.
In D. Eric Holt (ed.): Optimality Theory and language change, 31–65. Dordrecht: Kluwer. [Abstract]
Earlier version: Rutgers Optimality Archive 429, 2000/12/13.
The above papers (those more recent than 2005) rely heavily on the framework of Parallel Bidirectional Phonology and Phonetics, i.e. on the idea that you use the same constraint ranking as a listener and as a speaker, and on the parallel multi-level evaluation of your phonology and your phonetics. Here is more information on that subject:
A programme for bidirectional phonology and phonetics and their acquisition and evolution.
In Anton Benz & Jason Mattausch (eds.), Bidirectional Optimality Theory, 33–72. Amsterdam: John Benjamins.
Earlier version: Handout LOT Summer School, June 2006, and Jadertina Summer School (Rutgers Optimality Archive 868), 2006/09/12.
|2009||Paul Boersma & Silke Hamann:|
Loanword adaptation as first-language phonological perception.
In Andrea Calabrese & W. Leo Wetzels (eds.), Loanword phonology, 11–58. Amsterdam: John Benjamins.
Earlier version: Rutgers Optimality Archive 975, 2008/06/15.
Earlier version: Presentation OCP 4, Rhodes, 2007/01/20.
Cue constraints and their interactions in phonological perception and production.
In Paul Boersma & Silke Hamann (eds.): Phonology in perception, 55–110. Berlin: Mouton de Gruyter.
Earlier version: Rutgers Optimality Archive 944, 2007/11/11.
|2007||Some listener-oriented accounts of h-aspiré in French.|
Lingua 117: 1989–2054.
Earlier version: Rutgers Optimality Archive 730, 2005/04/13.
The evolution of phonotactic distributions in the lexicon.
Presentation Workshop on Variation, Gradience and Frequency in Phonology, Stanford. 32 slides.
A constraint-based explanation of the McGurk effect.
In Roland Noske & Bert Botma (eds.): Phonological Architecture: Empirical, Theoretical and Conceptual Issues, 299–312. Berlin/New York: Mouton de Gruyter.
Preprint: 2011/07/03, 11 pages.
Earlier version: Rutgers Optimality Archive 869, 2006/09/15.
Prototypicality judgments as inverted perception.
In Gisbert Fanselow, Caroline Féry, Matthias Schlesewsky & Ralf Vogel (eds.): Gradience in Grammar, 167–184. Oxford: Oxford University Press. [Abstract]
Earlier version: Rutgers Optimality Archive 742, 2005/05/17.
Most of the papers with simulations use the Gradual Learning Algorithm (GLA) for Optimality Theory, which was defined in the following two papers:
|2001||Paul Boersma & Bruce Hayes:|
Empirical tests of the Gradual Learning Algorithm.
Linguistic Inquiry 32: 45–86. [copyright]
Earlier version: Rutgers Optimality Archive 348, 1999/09/29.
Additional material: the GLA web page.
|1997||How we learn variation, optionality, and probability.|
IFA Proceedings 21: 43–58.
Additional material: Simulation script.
Earlier version: Rutgers Optimality Archive 221, 1997/10/12 (incorrect!).
Also appeared as: chapter 15 of Functional Phonology (1998).
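The core of the GLA for stochastic OT can be sketched as follows (a minimal illustration in Python, not the published implementation; the candidate names, the evaluation-noise level of 2.0 and the plasticity of 0.1 are illustrative assumptions). Each constraint carries a real-valued ranking; at evaluation time Gaussian noise is added to every ranking, candidates are compared lexicographically under the resulting constraint order, and after an error the constraints that favoured the wrong winner are promoted while those that favoured it are demoted, each by a small plasticity:

```python
import random

def evaluate(candidates, rankings, noise=2.0):
    """Pick the optimal candidate under a noisy ranking.
    candidates: {form: [violations per constraint]}
    rankings: [ranking value per constraint]
    """
    # Each evaluation perturbs every ranking value with Gaussian noise.
    noisy = [r + random.gauss(0.0, noise) for r in rankings]
    # Sort constraint indices from highest to lowest noisy ranking value.
    order = sorted(range(len(noisy)), key=lambda i: -noisy[i])
    # The winner has the lexicographically smallest violation profile
    # when its violations are read off in ranking order.
    return min(candidates, key=lambda f: [candidates[f][i] for i in order])

def gla_update(candidates, rankings, correct, plasticity=0.1, noise=2.0):
    """One GLA learning step: compare the learner's own output with the
    correct adult form and nudge the ranking values on a mismatch."""
    winner = evaluate(candidates, rankings, noise)
    if winner != correct:
        for i in range(len(rankings)):
            if candidates[winner][i] > candidates[correct][i]:
                # Promote constraints that the erroneous winner violates more.
                rankings[i] += plasticity
            elif candidates[winner][i] < candidates[correct][i]:
                # Demote constraints that the correct form violates more.
                rankings[i] -= plasticity
    return winner
```

Because every update moves rankings by only a small step, variable data drive the grammar toward stable intermediate distances between ranking values, which is what lets the algorithm learn variation and output probabilities gradually.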
Since 2007 we have routinely checked how the simulations behave if we use Harmonic Grammar instead of Optimality Theory. The following paper gives a convergence proof for the learning algorithm:
|2016||Paul Boersma & Joe Pater:|
Convergence properties of a gradual learning algorithm for Harmonic Grammar. [preprint, 2013/03/13]
In John McCarthy & Joe Pater (eds.): Harmonic Serialism and Harmonic Grammar, 389–434. Sheffield: Equinox.
Earlier version: Rutgers Optimality Archive 970, 2008/05/21.
Additional material: the GLA web page.
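Switching from OT to Harmonic Grammar replaces the ranked, lexicographic comparison of candidates with a weighted sum of their constraint violations, and the corresponding gradual update becomes perceptron-like. A minimal sketch under that assumption (illustrative only; the plasticity value is invented here, and the noisy and probabilistic variants discussed in the paper are omitted):

```python
def harmony(violations, weights):
    """Harmony of a candidate: the negative weighted sum of its violations."""
    return -sum(w * v for w, v in zip(weights, violations))

def hg_step(candidates, weights, correct, plasticity=0.1):
    """One HG learning step: pick the highest-harmony candidate and, on an
    error, move each weight by plasticity times the violation difference
    between the erroneous winner and the correct form."""
    winner = max(candidates, key=lambda f: harmony(candidates[f], weights))
    if winner != correct:
        for i in range(len(weights)):
            # Constraints violated more by the winner gain weight,
            # constraints violated more by the correct form lose weight.
            weights[i] += plasticity * (candidates[winner][i] - candidates[correct][i])
    return winner
```

Repeated application of this step raises the weight of any constraint that penalizes the learner's errors until the correct form has the highest harmony, which is the kind of behaviour whose convergence the paper analyses.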