
A method for individual difference scaling analysis in PRAAT.
An INDSCAL analysis can be performed on objects of type Distance.
If you start with Dissimilarity objects you first have to transform them to Distance objects.
If you start with a Confusion object, you can first transform it to a Dissimilarity object.
The function to be minimized in INDSCAL is the following:
f(X, W_{1},..., W_{numberOfSources}) = ∑_{i=1..numberOfSources} |S_{i} – XW_{i}X′|^{2},
where X is an unknown numberOfPoints x numberOfDimensions configuration matrix, the W_{i} are numberOfSources unknown diagonal numberOfDimensions x numberOfDimensions matrices with weights, often called saliences, and the S_{i} are known symmetric numberOfPoints x numberOfPoints matrices with scalar products.
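For concreteness, the loss f can be sketched in a few lines of NumPy. The function name `indscal_loss` and the argument layout are illustrative choices, not part of Praat; the diagonal W_{i} are stored as weight vectors:

```python
import numpy as np

def indscal_loss(X, W, S):
    """Sum over sources of the squared Frobenius norm |S_i - X W_i X'|^2.

    X: (numberOfPoints, numberOfDimensions) configuration matrix.
    W: list of (numberOfDimensions,) weight vectors, the diagonals of W_i.
    S: list of (numberOfPoints, numberOfPoints) scalar-product matrices.
    """
    return sum(np.sum((S_i - X @ np.diag(w_i) @ X.T) ** 2)
               for S_i, w_i in zip(S, W))
```

When the S_{i} are built exactly as XW_{i}X′, the loss is zero, which is a convenient sanity check.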
In the absence of an algorithm that minimizes f, Carroll & Chang (1970) resorted to the CANDECOMP algorithm, which instead of the function given above minimizes the following function:
g(X, Y, W_{1},..., W_{numberOfSources}) = ∑_{i=1..numberOfSources} |S_{i} – XW_{i}Y′|^{2}.
Carroll & Chang claimed that under most practical circumstances X and Y converge to matrices that are columnwise proportional. However, INDSCAL requires not only symmetry of the solution but also nonnegativity of the weights. Neither of these aspects can be guaranteed by the CANDECOMP algorithm.
Ten Berge, Kiers & Krijnen (1993) describe an algorithm that automatically satisfies symmetry because it minimizes f directly and that, in addition, can guarantee nonnegativity of the weights. This algorithm proceeds as follows:
Let x_{h} be the hth column of X. We then write the function f above as:
f(x_{h}, w_{1h}, ..., w_{numberOfSources h}) = ∑_{i=1..numberOfSources} |S_{ih} – x_{h}w_{ih}x′_{h}|^{2},
with S_{ih} defined as:
S_{ih} = (S_{i} – ∑_{j≠h, j=1..numberOfDimensions} x_{j}w_{ij}x′_{j}).
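This residual can be sketched as follows (again with illustrative names; the loop mirrors the sum over j ≠ h). It is the part of S_{i} not yet explained by the other dimensions:

```python
import numpy as np

def residual_S(S_i, X, w_i, h):
    """S_ih = S_i - sum over j != h of w_ij * x_j x_j'.

    S_i: (n, n) scalar-product matrix for source i.
    X:   (n, d) configuration matrix with columns x_j.
    w_i: (d,) weight vector, the diagonal of W_i.
    h:   index of the column currently being updated.
    """
    R = S_i.astype(float).copy()
    for j in range(X.shape[1]):
        if j != h:
            R -= w_i[j] * np.outer(X[:, j], X[:, j])
    return R
```

If S_{i} is built exactly from X and w_{i}, the residual for column h reduces to w_{ih}x_{h}x′_{h}.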
Without loss of generality we may require that
x′_{h}x_{h} = 1 
Minimizing f over x_{h} is equivalent to minimizing
∑_{i=1..numberOfSources} |S_{ih}|^{2} – 2tr ∑ S_{ih}x_{h}w_{ih}x′_{h} + ∑ w^{2}_{ih}.
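The step to the maximization below follows by expanding one summand of f in standard matrix notation (a restatement of the terms above, not an addition to the derivation):

```latex
\left\| S_{ih} - x_h w_{ih} x_h' \right\|^2
  = \left\| S_{ih} \right\|^2
  - 2\, w_{ih}\, \operatorname{tr}\!\left( S_{ih}\, x_h x_h' \right)
  + w_{ih}^2 \left\| x_h x_h' \right\|^2
```

Here tr(S_{ih}x_{h}x′_{h}) = x′_{h}S_{ih}x_{h}, and the last term equals w^{2}_{ih} because |x_{h}x′_{h}|^{2} = (x′_{h}x_{h})^{2} = 1; so, for fixed weights, only the middle term varies with x_{h}.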
This amounts to maximizing
g(x_{h}) = x′_{h}(∑ w_{ih}S_{ih})x_{h} 
subject to x′_{h}x_{h} = 1. The solution for x_{h} is the dominant eigenvector of (∑ w_{ih}S_{ih}), which can be determined with the power method (see Golub & van Loan (1996)). The optimal value for w_{ih}, given that all other parameters are fixed, is:
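A minimal power iteration for the dominant eigenvector of a symmetric matrix might look as follows; this is a generic sketch, not Praat's implementation, and the iteration count and tolerance are arbitrary choices:

```python
import numpy as np

def dominant_eigvec(A, iters=200, tol=1e-12):
    """Power iteration: repeatedly apply A and renormalize.

    Converges to the eigenvector of the eigenvalue with largest magnitude
    for a symmetric matrix A (up to sign).
    """
    x = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        y = A @ x
        norm = np.linalg.norm(y)
        if norm == 0.0:          # A x vanished; x is in the null space
            return x
        y /= norm
        if np.linalg.norm(y - x) < tol:
            return y
        x = y
    return x
```

For the matrices arising here, (∑ w_{ih}S_{ih}) with nonnegative weights and scalar-product S_{ih}, the dominant eigenvalue is nonnegative, so the sign ambiguity of the power method is harmless.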
w_{ih} = x′_{h}S_{ih}x_{h} 
In an alternating least squares procedure we may update columns of X and the diagonals of the W matrices in any sensible order.
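The full alternating least-squares cycle can be sketched as below. All names are illustrative, not Praat's API; for brevity `numpy.linalg.eigh` stands in for the power method, and the weight update is clamped at zero to enforce the nonnegativity discussed above:

```python
import numpy as np

def indscal_als(S, d, iters=50, rng=None):
    """Alternating least-squares sketch of the Ten Berge, Kiers & Krijnen
    update: cycle over columns x_h and the weights w_ih.

    S: list of (n, n) symmetric scalar-product matrices, one per source.
    d: numberOfDimensions.
    Returns the configuration X (unit-length columns) and the
    (numberOfSources, d) salience matrix W.
    """
    rng = np.random.default_rng(rng)
    n, m = S[0].shape[0], len(S)
    X = rng.standard_normal((n, d))
    X /= np.linalg.norm(X, axis=0)           # each column gets unit length
    W = np.ones((m, d))
    for _ in range(iters):
        for h in range(d):
            # residual scalar products S_ih for every source
            R = [S[i] - sum(W[i, j] * np.outer(X[:, j], X[:, j])
                            for j in range(d) if j != h) for i in range(m)]
            # x_h: dominant eigenvector of the symmetric matrix sum_i w_ih S_ih
            A = sum(W[i, h] * R[i] for i in range(m))
            vals, vecs = np.linalg.eigh(A)
            X[:, h] = vecs[:, np.argmax(vals)]
            # optimal weights given x_h, clamped to stay nonnegative
            for i in range(m):
                W[i, h] = max(0.0, X[:, h] @ R[i] @ X[:, h])
    return X, W
```

Because each column and each weight update is optimal for its own subproblem, the loss f is non-increasing over the sweep, so any sensible update order converges in that sense.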
© djmw, March 6, 2012