Variation is controlled by the grammar, though indirectly: it follows automatically from the robustness requirement of learning. If every constraint in an Optimality-Theoretic grammar has a ranking value along a continuous scale, and the disharmony of a constraint at evaluation time is randomly distributed about this value, the phenomenon of optionality in determining the winning candidate follows from the finiteness of the difference between the ranking values of the relevant constraints; the *degree of optionality* is a descending function of this difference. In the *production grammar*, a symmetrized maximal gradual learning algorithm will cause the learner to copy the degrees of optionality from the language environment. In the *perception grammar*, even the slightest degree of noise in constraint evaluation will cause the learner to become a probability-matching listener, whose categorization distributions match the production distributions of the language environment. Evidence suggests that natural learners follow a symmetric demotion-and-promotion strategy, not a demotion-only strategy.
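
The mechanism described here can be illustrated with a minimal sketch: stochastic evaluation adds noise to each constraint's ranking value, and a symmetric gradual learning step promotes and demotes constraints on a mismatch. The constant names (`NOISE_SD`, `PLASTICITY`) and data layout below are illustrative assumptions, not part of the original proposal.

```python
import random

NOISE_SD = 2.0      # assumed standard deviation of evaluation noise
PLASTICITY = 0.1    # assumed step size for ranking-value adjustment

def disharmonies(ranking_values):
    """Stochastic evaluation: each constraint's disharmony is its ranking
    value plus Gaussian noise drawn fresh at evaluation time."""
    return {c: r + random.gauss(0.0, NOISE_SD) for c, r in ranking_values.items()}

def evaluate(candidates, violations, ranking_values):
    """Pick the winner: order constraints by disharmony (highest first) and
    choose the candidate with lexicographically fewest violations."""
    noisy = disharmonies(ranking_values)
    order = sorted(noisy, key=noisy.get, reverse=True)
    return min(candidates, key=lambda cand: tuple(violations[cand][c] for c in order))

def symmetric_gla_update(ranking_values, violations, learner_winner, adult_winner):
    """Symmetric demotion-and-promotion: when the learner's winner differs
    from the adult form, promote constraints that the learner's (erroneous)
    winner violates more, and demote those that the adult form violates more."""
    if learner_winner == adult_winner:
        return
    for c in ranking_values:
        diff = violations[learner_winner][c] - violations[adult_winner][c]
        if diff > 0:
            ranking_values[c] += PLASTICITY   # promote: prefers the adult form
        elif diff < 0:
            ranking_values[c] -= PLASTICITY   # demote: prefers the learner's error
```

Because the noise makes the winner probabilistic whenever two ranking values are finitely close, repeated symmetric updates drive the learner's output frequencies toward those of the language environment, which is the probability-matching behaviour the text describes.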