This framework regards the grammar as consisting of two phonological levels of representation, sandwiched between two semantic and two phonetic levels (there may well be more levels, but this seems to be the minimum). Some levels and some mappings can be evaluated with direction-free constraints:
              ‘Meaning’            ------ semantic constraints
         \ reference constraints /
             <Morphemes>           ------ morphemic constraints
          \ lexical constraints /
          |Underlying Form|
        \ faithfulness constraints /
           /Surface Form/          ------ structural constraints
            \ cue constraints /
           [Auditory Form]
       \ sensorimotor constraints /
         [Articulatory Form]       ------ articulatory constraints
Several linguistic processes can be defined on this grammar:
   comprehension (upward):

      [Auditory Form]     -->  /Surface Form/                      (perception)
      /Surface Form/      -->  <Morphemes> + |Underlying Form|     (recognition, two parallel mappings)
      <Morphemes>         -->  ‘Meaning’

   production (downward):

      ‘Meaning’           -->  <Morphemes>
      <Morphemes>         -->  |Underlying Form|
      |Underlying Form|   -->  /Surface Form/ + [Auditory Form]    (phonological production and
                               + [Articulatory Form]                phonetic implementation,
                                                                    three parallel mappings)
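As a purely illustrative aid, this architecture can be written down as data. The following minimal Python sketch encodes the levels, the constraint families, and the two processes shown in the figures above; the level and constraint names come from the figures, while the encoding itself is merely one convenient choice, not part of the framework.

    # Minimal sketch: the architecture of the two figures above as plain data.
    # Level and constraint names are from the figures; the encoding is invented.

    LEVELS = ["‘Meaning’", "<Morphemes>", "|Underlying Form|",
              "/Surface Form/", "[Auditory Form]", "[Articulatory Form]"]

    # Direction-free constraints that evaluate a single level.
    LEVEL_CONSTRAINTS = {
        "‘Meaning’":           "semantic constraints",
        "<Morphemes>":         "morphemic constraints",
        "/Surface Form/":      "structural constraints",
        "[Articulatory Form]": "articulatory constraints",
    }

    # Direction-free constraints that evaluate a mapping between two levels.
    MAPPING_CONSTRAINTS = {
        ("‘Meaning’", "<Morphemes>"):               "reference constraints",
        ("<Morphemes>", "|Underlying Form|"):       "lexical constraints",
        ("|Underlying Form|", "/Surface Form/"):    "faithfulness constraints",
        ("/Surface Form/", "[Auditory Form]"):      "cue constraints",
        ("[Auditory Form]", "[Articulatory Form]"): "sensorimotor constraints",
    }

    # Each process is a sequence of modules; a module is a set of mappings
    # that run in parallel. (This particular modularization is the one shown
    # in the figure; others are possible.)
    COMPREHENSION = [
        {("[Auditory Form]", "/Surface Form/")},            # perception
        {("/Surface Form/", "<Morphemes>"),
         ("/Surface Form/", "|Underlying Form|")},          # recognition
        {("<Morphemes>", "‘Meaning’")},                     # meaning access
    ]
    PRODUCTION = [
        {("‘Meaning’", "<Morphemes>")},
        {("<Morphemes>", "|Underlying Form|")},
        {("|Underlying Form|", "/Surface Form/"),           # phonological production
         ("|Underlying Form|", "[Auditory Form]"),          # phonetic
         ("|Underlying Form|", "[Articulatory Form]")},     #   implementation
    ]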
It is still an open question which of the mappings between these representations are serial (“modular”) and which run in parallel.
The figure shows comprehension as a sequence of three modules: pre-lexical perception, morpheme (or word) recognition, and the access of meaning. Here the recognition module consists of two parallel mappings, from Surface Form to Morphemes and from Surface Form to Underlying Form. Other kinds of seriality and parallelism seem possible and plausible, though.
The figure shows production as a sequence of three modules, the last of which is a parallel mapping from Underlying Form to the three lower levels. This particular parallelism allows continuous phonetic considerations, such as articulatory effort and auditory distinctiveness, to influence discrete phonological decisions. Effects accounted for include licensing by cue, enhancement, apparent teleology in sound change, and incomplete neutralization.
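A toy sketch can make this parallelism concrete. In the following Python fragment (an invented illustration, not a published BiPhon analysis), each output candidate pairs a Surface Form with an Auditory Form, and the ranked constraints are evaluated over the pair, so that a phonetic cue constraint helps decide a phonological choice; the example gestures at incomplete neutralization under final devoicing. All forms, constraints, and the ranking are made up.

    from typing import Callable, List, Tuple

    Candidate = Tuple[str, str]   # (Surface Form, Auditory Form)

    def evaluate(candidates: List[Candidate],
                 ranking: List[Callable[[Candidate], int]]) -> Candidate:
        # Standard OT choice: compare violation profiles lexicographically
        # under the strict ranking; the smallest profile wins.
        return min(candidates, key=lambda c: [con(c) for con in ranking])

    # Underlying Form: |bɛd|, with a word-final voiced obstruent.
    candidates = [
        ("/bɛd/", "[bɛːd]"),  # faithful: voiced stop, long-vowel cue intact
        ("/bɛt/", "[bɛːt]"),  # devoiced surface, long-vowel cue retained
        ("/bɛt/", "[bɛt]"),   # devoiced surface, all voicing cues lost
    ]

    # Ranked constraints, highest first (1 = violation, 0 = satisfaction).
    ranking = [
        # structural: no voiced obstruent word-finally in the Surface Form
        lambda c: 1 if c[0].endswith("d/") else 0,
        # cue: underlying voicing should leave a long-vowel cue in the Auditory Form
        lambda c: 1 if "ɛː" not in c[1] else 0,
        # faithfulness: the Surface Form should preserve underlying |d|
        lambda c: 1 if c[0] != "/bɛd/" else 0,
    ]

    print(evaluate(candidates, ranking))
    # -> ('/bɛt/', '[bɛːt]'): phonologically devoiced, yet the phonetics keeps
    #    a trace of the underlying voicing, because the phonological mapping and
    #    the phonetic mapping are evaluated in parallel rather than serially.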
The following examples work within the Optimality-Theoretic version of the framework (BiPhon-OT):
The following examples work within the neural-network version of the framework (BiPhon-NN); a toy illustration of their recurring theme, the emergence of discrete categories from distributional learning, follows the list:
2019   Paul Boersma: Simulated distributional learning in deep Boltzmann machines leads to the emergence of discrete categories. Proceedings of the 19th International Congress of Phonetic Sciences, Melbourne, 5–9 August 2019. 1520–1524.
2020   Paul Boersma, Titia Benders & Klaas Seinhorst: Neural networks for phonology and phonetics. Journal of Language Modelling 8: 103–177.
2022   Paul Boersma, Kateřina Chládková & Titia Benders: Phonological features emerge substance-freely from the phonetics and the morphology. Canadian Journal of Linguistics 67: 611–669 (a special issue on substance-free phonology).
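The first of these papers simulates distributional learning in deep Boltzmann machines. Purely to convey the general idea, and not that model, the following toy sketch substitutes plain competitive (winner-take-all) learning: two category nodes exposed to a continuous, bimodally distributed auditory dimension end up at the two cluster centres, so that discrete categories emerge without any labels. All numbers are invented.

    import random

    random.seed(1)   # fixed seed so the toy run is reproducible

    def draw_token() -> float:
        # One auditory token (say, F1 in Hz) from a two-cluster distribution;
        # the learner never sees which cluster a token came from.
        mean = random.choice([300.0, 700.0])
        return random.gauss(mean, 50.0)

    # Two category nodes start at random positions on the continuum.
    nodes = [random.uniform(200.0, 800.0) for _ in range(2)]

    RATE = 0.05
    for _ in range(5000):
        token = draw_token()
        # Winner-take-all: only the node closest to the token learns.
        winner = min(range(len(nodes)), key=lambda i: abs(nodes[i] - token))
        nodes[winner] += RATE * (token - nodes[winner])

    print(sorted(round(n) for n in nodes))
    # Typically ends up near [300, 700]: two discrete categories have emerged
    # from purely distributional information.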