A method for individual difference scaling analysis in Praat.
An INDSCAL analysis can be performed on objects of type Distance.
If you start with Dissimilarity objects, you first have to transform them to Distance objects.
If you start with a Confusion object, you can first transform it to a Dissimilarity.
The function to be minimized in INDSCAL is the following:
$$f(X, W_1, \ldots, W_{\mathrm{numberOfSources}}) = \sum_{i=1}^{\mathrm{numberOfSources}} \| S_i - X W_i X' \|^2,$$
where $X$ is an unknown numberOfPoints × numberOfDimensions configuration matrix, the $W_i$ are numberOfSources unknown diagonal numberOfDimensions × numberOfDimensions weight matrices (the weights are often called saliences), and the $S_i$ are known symmetric numberOfPoints × numberOfPoints matrices with scalar products.
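To make the objective concrete, here is a minimal numpy sketch of this loss. The shapes, names, and the convention of storing the diagonal of each $W_i$ as a row of one matrix are assumptions for illustration, not Praat's internals:

```python
import numpy as np

def indscal_loss(X, W, S):
    """Sum over sources of the squared Frobenius norm ||S_i - X W_i X'||^2.

    X: (numberOfPoints, numberOfDimensions) configuration matrix.
    W: (numberOfSources, numberOfDimensions); row i holds the diagonal of W_i.
    S: (numberOfSources, numberOfPoints, numberOfPoints) scalar-product matrices.
    """
    return sum(np.linalg.norm(S_i - X @ np.diag(w_i) @ X.T, "fro") ** 2
               for S_i, w_i in zip(S, W))
```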
In the absence of an algorithm that minimizes f directly, Carroll & Chang (1970) resorted to the CANDECOMP algorithm, which minimizes the following function instead:
$$g(X, Y, W_1, \ldots, W_{\mathrm{numberOfSources}}) = \sum_{i=1}^{\mathrm{numberOfSources}} \| S_i - X W_i Y' \|^2.$$
Carroll & Chang claimed that for most practical circumstances X and Y converge to matrices that will be columnwise proportional. However, INDSCAL does not only require symmetry of the solution, but also non-negativity of the weights. Both these aspects cannot be guaranteed with the CANDECOMP algorithm.
Ten Berge, Kiers & Krijnen (1993) describe an algorithm that automatically satisfies symmetry because it solves f directly, and, also, can guarantee non-negativity of the weights. This algorithm proceeds as follows:
Let $x_h$ be the $h$-th column of $X$. We can then write the function $f$ above as:
$$f(x_h, w_{1h}, \ldots, w_{\mathrm{numberOfSources},h}) = \sum_{i=1}^{\mathrm{numberOfSources}} \| S_{ih} - x_h w_{ih} x_h' \|^2,$$
with $S_{ih}$ defined as:
$$S_{ih} = S_i - \sum_{j=1,\, j \neq h}^{\mathrm{numberOfDimensions}} x_j w_{ij} x_j'.$$
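Because $X W_i X' = \sum_j w_{ij} x_j x_j'$, the matrix $S_{ih}$ is simply the part of source $i$'s scalar products not yet explained by the other columns. Continuing the numpy conventions assumed above:

```python
def partial_residual(S_i, X, w_i, h):
    """S_ih: S_i minus the contribution w_ij * x_j x_j' of every column j != h."""
    R = S_i.astype(float)                 # work on a copy
    for j in range(X.shape[1]):
        if j != h:
            R -= w_i[j] * np.outer(X[:, j], X[:, j])
    return R
```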
Without loss of generality we may require that
$$x_h' x_h = 1.$$
Minimizing $f$ over $x_h$ is equivalent to minimizing
$$\sum_{i=1}^{\mathrm{numberOfSources}} \| S_{ih} \|^2 \;-\; 2\,\mathrm{tr} \sum_{i} S_{ih} x_h w_{ih} x_h' \;+\; \sum_{i} w_{ih}^2.$$
This amounts to maximizing
$$g(x_h) = x_h' \Big( \sum_{i} w_{ih} S_{ih} \Big) x_h$$
subject to $x_h' x_h = 1$. The solution for $x_h$ is the dominant eigenvector of $\sum_i w_{ih} S_{ih}$, which can be determined with the power method (see Golub & van Loan (1996)). The optimal value for $w_{ih}$, given that all other parameters are fixed, is:
$$w_{ih} = x_h' S_{ih} x_h.$$
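These two updates combine into one column step. The sketch below uses a fixed number of power iterations and clips negative weights at zero, which is one simple way to enforce the non-negativity discussed above; the exact rule of Ten Berge, Kiers & Krijnen (1993) may differ, and the power method as written assumes the dominant eigenvalue is positive:

```python
def update_column(S_list, W, X, h, power_steps=50):
    """Update x_h and the weights w_ih, keeping all other parameters fixed."""
    residuals = [partial_residual(S_i, X, W[i], h) for i, S_i in enumerate(S_list)]
    A = sum(W[i, h] * R for i, R in enumerate(residuals))
    x = X[:, h].copy()                    # start from the current column
    for _ in range(power_steps):          # power method: converges to the
        x = A @ x                         # dominant eigenvector of A
        x /= np.linalg.norm(x)
    X[:, h] = x
    for i, R in enumerate(residuals):
        W[i, h] = max(0.0, x @ R @ x)     # w_ih = x_h' S_ih x_h, clipped at 0
```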
In an alternating least squares procedure we may update the columns of $X$ and the diagonals of the $W$ matrices in any sensible order.
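A sketch of such a procedure, sweeping the columns in their natural order; the random start and the fixed number of sweeps are arbitrary choices here, not prescribed by the algorithm:

```python
def indscal_als(S_list, numberOfDimensions, sweeps=100, seed=1):
    """Alternating least squares: repeatedly update every column of X."""
    rng = np.random.default_rng(seed)
    n = S_list[0].shape[0]
    X = rng.standard_normal((n, numberOfDimensions))
    X /= np.linalg.norm(X, axis=0)                  # normalize: x_h' x_h = 1
    W = np.ones((len(S_list), numberOfDimensions))  # salience diagonals
    for _ in range(sweeps):
        for h in range(numberOfDimensions):
            update_column(S_list, W, X, h)
    return X, W
```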
© djmw 20120306