The following work compares Tesar and Smolensky's Error-Driven Constraint Demotion (EDCD) with the Gradual Learning Algorithm (GLA). Neither algorithm is guaranteed to succeed when its only input is an overt metrical form (a sequence of stressed and unstressed light and heavy syllables), so that it has to construct the phonological surface form (the foot structure) entirely on its own. On Tesar and Smolensky's 124 languages, the GLA turns out to succeed in more cases than EDCD under identical conditions (i.e. when overt forms arrive in random order).
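For readers unfamiliar with the GLA, a minimal sketch of its update rule may help. This is not the simulation code used in the work above; the constraint names, step size, and noise value below are hypothetical toy choices. The GLA assigns each constraint a numeric ranking value, evaluates with Gaussian noise added to those values, and, on an error, nudges the values by a small plasticity: constraints that the learner's incorrect output violates more than the correct form are promoted, and constraints the correct form violates more are demoted.

```python
import random

PLASTICITY = 0.1   # step size for ranking adjustments (toy value)
NOISE_SD = 2.0     # standard deviation of evaluation noise (toy value)

def evaluate(rankings):
    """Stochastic evaluation: add Gaussian noise to each ranking value,
    yielding the ranking used for this single evaluation."""
    return {c: v + random.gauss(0.0, NOISE_SD) for c, v in rankings.items()}

def gla_update(rankings, learner_violations, correct_violations):
    """One error-driven GLA step: promote constraints that the learner's
    (incorrect) output violates more than the correct form, and demote
    constraints that the correct form violates more."""
    for c in rankings:
        if correct_violations[c] > learner_violations[c]:
            rankings[c] -= PLASTICITY   # demote
        elif correct_violations[c] < learner_violations[c]:
            rankings[c] += PLASTICITY   # promote

# Toy example: the learner's output violates a (hypothetical) Trochee
# constraint, while the correct form violates a (hypothetical) Iamb one.
rankings = {"Trochee": 100.0, "Iamb": 100.0}
gla_update(rankings,
           learner_violations={"Trochee": 1, "Iamb": 0},
           correct_violations={"Trochee": 0, "Iamb": 1})
```

After this step, Trochee moves up to about 100.1 and Iamb down to about 99.9; repeated over many error-driven steps, the ranking values drift toward a grammar that reproduces the overt forms.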
|2003||Review of Tesar & Smolensky (2000): Learnability in Optimality Theory.|
Phonology 20: 436-446.
Earlier version: Rutgers Optimality Archive 638, 2004/01/17.
The following sequence of papers shows that GLA learners are more likely than EDCD learners to learn Latin stress.
|2003||Diana Apoussidou & Paul Boersma:|
The learnability of Latin stress.
IFA Proceedings 25: 101-148 (= Rutgers Optimality Archive 643).
|2004/01/30||Diana Apoussidou & Paul Boersma:|
Comparing different Optimality-Theoretic learning algorithms: the case of metrical phonology.
Proceedings of the 2004 Spring Symposium Series of the American Association for Artificial Intelligence, pp. 1-7.
|2004/07/08||Diana Apoussidou & Paul Boersma:|
Comparing two Optimality-Theoretic learning algorithms for Latin stress.
WCCFL 23: 29-42.
Since none of these papers discusses overt forms in production, they are all compatible both with the framework of listener-oriented production and with the framework of Phonology and Phonetics in Parallel.