OT learning 5. Learning a stochastic grammar
Having shown that the algorithm can learn deep obligatory rankings, we will now see that it also performs well in replicating the variation in the language environment.
Create a place assimilation grammar as described in §2.6, and set all its rankings to 100.000:
                       ranking value   disharmony   plasticity
   *GESTURE                  100.000      100.000        1.000
   *REPLACE (t, p)           100.000      100.000        1.000
   *REPLACE (n, m)           100.000      100.000        1.000
Create a place assimilation distribution and generate 1000 string pairs (§3.1). Select the grammar and the two Strings objects, and learn with a plasticity of 0.1:
                       ranking value   disharmony   plasticity
   *REPLACE (t, p)           104.540      103.140        1.000
   *REPLACE (n, m)            96.214       99.321        1.000
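The learning step can be sketched as follows: whenever the learner's own output for a string pair differs from the adult datum, every constraint that prefers the learner's (incorrect) form is demoted by the plasticity, and every constraint that prefers the adult form is promoted by the same amount. The Python sketch below is a hypothetical rendering of this symmetric update, not Praat's actual implementation; the violation profiles for the /an+pa/ pair are illustrative assumptions:

```python
def gla_update(rankings, adult_viols, learner_viols, plasticity=0.1):
    """One symmetric GLA step after a mismatch: demote constraints that
    prefer the learner's (wrong) form, promote constraints that prefer
    the adult's form, each by the plasticity."""
    for c in rankings:
        if learner_viols.get(c, 0) < adult_viols.get(c, 0):
            rankings[c] -= plasticity   # prefers the learner's form: demote
        elif learner_viols.get(c, 0) > adult_viols.get(c, 0):
            rankings[c] += plasticity   # prefers the adult's form: promote

# Hypothetical mismatch on /an+pa/: the adult produced the assimilated
# form (violating *REPLACE (n, m)), the learner the faithful form
# (violating *GESTURE).
rankings = {"*GESTURE": 100.0, "*REPLACE (t, p)": 100.0, "*REPLACE (n, m)": 100.0}
gla_update(rankings,
           adult_viols={"*REPLACE (n, m)": 1},
           learner_viols={"*GESTURE": 1})
# *GESTURE rises to 100.1, *REPLACE (n, m) falls to 99.9;
# *REPLACE (t, p) is not at stake and stays put.
```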
 
The output distributions are now (using OTGrammar: To output Distributions..., see §2.9):
After another 10,000 new string pairs, we have:
                       ranking value   disharmony   plasticity
   *REPLACE (t, p)           106.764      107.154        1.000
   *GESTURE                   97.899       97.161        1.000
   *REPLACE (n, m)            95.337       96.848        1.000
 
With the following output distributions (measured with a million draws):
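Such a measurement can be sketched by drawing repeatedly from the stochastic grammar and counting winners. The Python sketch below is a simplified, hypothetical stand-in: only the two constraints relevant to /an+pa/ are included, each of the two candidate outputs is assumed to violate exactly one constraint, and the ranking values are those from the table above:

```python
import random

def output_distribution(rankings, candidates, n=1_000_000, noise=2.0, seed=1):
    """Estimate output probabilities by n stochastic evaluations.
    `candidates` maps each output form to the single constraint it
    violates (a simplification of the /an+pa/ tableau)."""
    rng = random.Random(seed)
    counts = dict.fromkeys(candidates, 0)
    for _ in range(n):
        d = {c: r + rng.gauss(0.0, noise) for c, r in rankings.items()}
        # The winner is the form whose violated constraint has the
        # lowest disharmony in this evaluation.
        winner = min(candidates, key=lambda f: d[candidates[f]])
        counts[winner] += 1
    return {f: counts[f] / n for f in candidates}

# Ranking values from the table above, after 11,000 string pairs.
rankings = {"*GESTURE": 97.899, "*REPLACE (n, m)": 95.337}
dist = output_distribution(rankings,
                           {"ampa": "*REPLACE (n, m)", "anpa": "*GESTURE"},
                           n=100_000)
# dist["ampa"] should land near 0.8, the environment's assimilation rate.
```

With independent evaluation noise of sd 2.0 on each constraint, the assimilated form wins whenever a Gaussian with mean 97.899 − 95.337 = 2.562 and sd √8 is positive, i.e. in roughly 82% of draws.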
The error rate is acceptably low, but the accuracy in reproducing the 80%-20% distribution could be better. This is because the relatively high plasticity of 0.1 gives only a coarse approximation. So we lower the plasticity to 0.001 and supply 100,000 new string pairs:
                       ranking value   disharmony   plasticity
   *REPLACE (t, p)           106.810      107.184        1.000
   *GESTURE                   97.782       99.682        1.000
   *REPLACE (n, m)            95.407       98.760        1.000
 
With the following output distributions:
So besides learning obligatory rankings like a child does, the algorithm also replicates the probabilities of the language environment very well. This means that a GLA learner can learn stochastic grammars.
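The whole procedure can be condensed into a toy simulation: an environment that assimilates 80% of the time, a two-candidate grammar, and the symmetric GLA update with a small plasticity. The sketch below is a hypothetical Python stand-in for Praat's learner, not its implementation; under these assumptions the learned grammar probability-matches the environment:

```python
import random

def simulate(target=0.8, n_pairs=100_000, plasticity=0.001, noise=2.0, seed=1):
    """Toy GLA learner for the /an+pa/ alternation: the environment
    produces the assimilated form 'ampa' with probability `target`."""
    rng = random.Random(seed)
    rankings = {"*GESTURE": 100.0, "*REPLACE (n, m)": 100.0}
    viol = {"ampa": "*REPLACE (n, m)", "anpa": "*GESTURE"}
    for _ in range(n_pairs):
        adult = "ampa" if rng.random() < target else "anpa"
        d = {c: r + rng.gauss(0.0, noise) for c, r in rankings.items()}
        learner = min(viol, key=lambda f: d[viol[f]])
        if learner != adult:
            rankings[viol[learner]] += plasticity  # penalizes the wrong form
            rankings[viol[adult]] -= plasticity    # penalizes the right form
    return rankings

def measure(rankings, n=100_000, noise=2.0, seed=2):
    """Fraction of evaluations in which the assimilated form wins."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        d = {c: r + rng.gauss(0.0, noise) for c, r in rankings.items()}
        wins += d["*REPLACE (n, m)"] < d["*GESTURE"]
    return wins / n

learned = simulate()
p = measure(learned)
# p should land near the environment's 0.8.
```

The symmetric update drifts until mismatches in the two directions are equally frequent, which for two candidates happens exactly when the learner's output probability equals the environment's; this is why the small-plasticity learner reproduces the 80%-20% distribution so closely.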
	© ppgb 20070725