Discrepancy Functions Used in SEM

(Most of this discussion is taken from Jöreskog and Sörbom (1989), pp. 21-24.)
In SEM, the parameters of a proposed model are estimated by minimizing the discrepancy between the empirical covariance matrix, S, and the covariance matrix implied by the model, Σ. How should this discrepancy be measured? This is the role of the discrepancy function.
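
To make the idea concrete, the following is a minimal numerical sketch of estimation by discrepancy minimization, not any SEM package's actual algorithm. The covariance matrix, the hypothetical one-factor model, and the simple sum-of-squares stand-in discrepancy are all illustrative assumptions; the specific discrepancy functions discussed below would take its place in practice.

    import numpy as np
    from scipy.optimize import minimize

    # Made-up empirical covariance matrix S for three manifest variables.
    S = np.array([[1.00, 0.50, 0.40],
                  [0.50, 1.00, 0.35],
                  [0.40, 0.35, 1.00]])
    p = S.shape[0]

    def implied_sigma(theta):
        # Hypothetical one-factor model: Sigma(theta) = lambda*lambda' + diag(psi).
        lam = theta[:p]              # factor loadings
        psi = np.exp(theta[p:])      # unique variances, kept positive
        return np.outer(lam, lam) + np.diag(psi)

    def discrepancy(theta):
        # Stand-in discrepancy: sum of squared differences between S and Sigma(theta).
        return np.sum((S - implied_sigma(theta)) ** 2)

    start = np.concatenate([np.full(p, 0.5), np.log(np.full(p, 0.5))])
    fit = minimize(discrepancy, start, method="BFGS")
    print("estimates:", fit.x)
    print("minimized discrepancy:", fit.fun)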

"Classical" Discrepancy Functions--ML, GLS, and ULS

Much of the early excitement about SEM was due to Jöreskog's (1967) development of a maximum likelihood (ML) discrepancy function:
    F(ML) = log|Σ| + tr(S Σ^-1) - log|S| - p
where "|.|" indicates the determinant of a matrix, "tr" indicates the trace, and p is the total number of manifest variables (x and y) in the model. Utilizing the assumption of multivariate normality in the observed data, or equivalently, the assumption of a joint Wishart distribution among the elements of S, the ML discrepancy function yields asymptotically correct standard errors for the parameter estimates and an overall fit statistic that follows, asymptotically, a chi-square distribution when the model is correct in the population.
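
A minimal sketch of the computation (the S and Σ matrices below are made up, and Σ is an arbitrary candidate rather than a fitted estimate): at the minimizing parameter values, (N - 1) times F(ML) is the overall chi-square statistic, where N is the sample size.

    import numpy as np

    S = np.array([[1.00, 0.50, 0.40],
                  [0.50, 1.00, 0.35],
                  [0.40, 0.35, 1.00]])      # empirical covariance matrix (made up)
    Sigma = np.array([[1.00, 0.45, 0.36],
                      [0.45, 1.00, 0.40],
                      [0.36, 0.40, 1.00]])  # model-implied covariance matrix (made up)
    p = S.shape[0]

    def f_ml(S, Sigma):
        # F(ML) = log|Sigma| + tr(S Sigma^-1) - log|S| - p
        return (np.linalg.slogdet(Sigma)[1]
                + np.trace(S @ np.linalg.inv(Sigma))
                - np.linalg.slogdet(S)[1] - p)

    N = 200                                 # hypothetical sample size
    print("F(ML) =", f_ml(S, Sigma))
    print("chi-square (if Sigma were the fitted estimate):", (N - 1) * f_ml(S, Sigma))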

Alternatively, the Generalized Least Squares (GLS) discrepancy function:
    F(GLS) = (1/2) tr{[(S - Σ) S^-1]^2}
yields results that are asymptotically equivalent to ML results. In practice, however, researchers have observed differences between the two functions, even within the same software package, in their robustness and in how often they produce convergence problems and improper solutions. Jöreskog has indicated that the GLS discrepancy function ought to have a speed advantage over ML, but little difference has been observed in practice.
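
The corresponding sketch for GLS, with the same made-up matrices:

    import numpy as np

    S = np.array([[1.00, 0.50, 0.40],
                  [0.50, 1.00, 0.35],
                  [0.40, 0.35, 1.00]])      # empirical covariance matrix (made up)
    Sigma = np.array([[1.00, 0.45, 0.36],
                      [0.45, 1.00, 0.40],
                      [0.36, 0.40, 1.00]])  # model-implied covariance matrix (made up)

    def f_gls(S, Sigma):
        # F(GLS) = (1/2) tr{[(S - Sigma) S^-1]^2}
        R = (S - Sigma) @ np.linalg.inv(S)
        return 0.5 * np.trace(R @ R)

    print("F(GLS) =", f_gls(S, Sigma))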

As Walt David has pointed out, one significant difference between the two discrepancy functions lies in the "scores" they assign to "null models," that is, models which specify that all manifest variables are uncorrelated. For the same data and the same null model, the ML and GLS discrepancy function values can differ wildly. The result is that null model-based comparative fit indices will not behave consistently across these two estimation methods.
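
A quick numerical illustration, under the assumption that the null model fixes the implied covariance matrix to the diagonal of S (the data are made up, and the functions repeat the sketches above so the fragment stands alone):

    import numpy as np

    S = np.array([[1.00, 0.50, 0.40],
                  [0.50, 1.00, 0.35],
                  [0.40, 0.35, 1.00]])      # empirical covariance matrix (made up)
    p = S.shape[0]
    Sigma0 = np.diag(np.diag(S))            # null model: all manifest variables uncorrelated

    def f_ml(S, Sigma):
        return (np.linalg.slogdet(Sigma)[1] + np.trace(S @ np.linalg.inv(Sigma))
                - np.linalg.slogdet(S)[1] - p)

    def f_gls(S, Sigma):
        R = (S - Sigma) @ np.linalg.inv(S)
        return 0.5 * np.trace(R @ R)

    # The two null-model values differ, so fit indices normed against them
    # need not agree across the two estimation methods.
    print("null-model F(ML) :", f_ml(S, Sigma0))
    print("null-model F(GLS):", f_gls(S, Sigma0))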

A less commonly used alternative in this group is the Unweighted Least Squares (ULS) discrepancy function:
    F(ULS) = (1/2) tr[(S - Σ)^2]
This discrepancy function is analogous to OLS estimation in regression. It differs from the others in that it is not built on an assumption of multivariate normality in the data. As a result, this discrepancy function does not, by itself, lead to estimated standard errors or an overall chi-square fit statistic. However, some programs may provide those results by adopting the multivariate normality assumption after the fact.
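
The ULS computation in the same hypothetical setup:

    import numpy as np

    S = np.array([[1.00, 0.50, 0.40],
                  [0.50, 1.00, 0.35],
                  [0.40, 0.35, 1.00]])      # empirical covariance matrix (made up)
    Sigma = np.array([[1.00, 0.45, 0.36],
                      [0.45, 1.00, 0.40],
                      [0.36, 0.40, 1.00]])  # model-implied covariance matrix (made up)

    def f_uls(S, Sigma):
        # F(ULS) = (1/2) tr[(S - Sigma)^2]
        D = S - Sigma
        return 0.5 * np.trace(D @ D)

    print("F(ULS) =", f_uls(S, Sigma))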

Asymptotically Distribution Free Discrepancy Functions (ADF/WLS)

In seminal work, Browne (1982, 1984) demonstrated that the existing discrepancy functions were all, asymptotically, special cases of a generic discrepancy function:
    F = (s - σ)^T W (s - σ)
where "T" indicates transposition, W is a weight matrix, and:
    s^T = (s_11, s_21, s_22, s_31, ..., s_pp),    σ^T = (σ_11, σ_21, σ_22, σ_31, ..., σ_pp)
In words, the vectors in this function are lists of the unique or nonredundant elements of S and Σ, respectively.

This discrepancy function defines F as a weighted sum of squared residuals (hence the label "Weighted Least Squares," or WLS, in the LISREL package), just as in generalized regression. As one might expect, a good estimate of W would be based on the covariance matrix of the residuals, but there are different ways to estimate that matrix. The distinctions between the different discrepancy functions amount merely to differences in the way W is estimated:

Discrepancy function     W derived as
--------------------     --------------------------
ULS                      Identity matrix
GLS                      Function of elements of S
ML                       Function of elements of Σ
This means, among other things, that the ML weight matrix is effectively updated at each iteration in the estimation process, as the estimate of Σ changes, while the GLS weight matrix is not.
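
The following sketch shows the generic quadratic form with one hypothetical choice of W: the inverse of the normal-theory covariance matrix of the nonredundant elements, built from S (the GLS case). Building the same matrix from Σ instead gives the ML-style weight, and taking W as an identity matrix gives the ULS case. The matrices and helper names (vech, normal_theory_cov) are illustrative assumptions, not any package's actual API.

    import numpy as np

    S = np.array([[1.00, 0.50, 0.40],
                  [0.50, 1.00, 0.35],
                  [0.40, 0.35, 1.00]])      # empirical covariance matrix (made up)
    Sigma = np.array([[1.00, 0.45, 0.36],
                      [0.45, 1.00, 0.40],
                      [0.36, 0.40, 1.00]])  # model-implied covariance matrix (made up)
    p = S.shape[0]
    pairs = [(i, j) for i in range(p) for j in range(i + 1)]  # nonredundant (i >= j) elements

    def vech(M):
        # Stack the nonredundant lower-triangular elements of a symmetric matrix.
        return np.array([M[i, j] for i, j in pairs])

    def normal_theory_cov(C):
        # Covariance of the nonredundant elements under multivariate normality:
        # Cov(c_ij, c_kl) is proportional to C[i,k]*C[j,l] + C[i,l]*C[j,k].
        G = np.empty((len(pairs), len(pairs)))
        for a, (i, j) in enumerate(pairs):
            for b, (k, l) in enumerate(pairs):
                G[a, b] = C[i, k] * C[j, l] + C[i, l] * C[j, k]
        return G

    r = vech(S) - vech(Sigma)                # residual vector s - sigma
    W = np.linalg.inv(normal_theory_cov(S))  # GLS-style weight matrix, a function of S
    print("generic WLS discrepancy:", r @ W @ r)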

GLS and ML estimation capitalize on the tremendous simplification that is possible when data are multivariate normal. When the distributional assumption is false, however, these discrepancy functions are, in effect, operating with incorrect weight matrices.

Browne suggested, for the more general case, an "asymptotically distribution free" (ADF) discrepancy function, in which W is based on direct estimation of the fourth-order moments of the observed variables. This approach can be problematic, however. Studies show that it yields stable results only in very large samples (Muthén and Kaplan, 1985, 1992), so it is not widely used. On the other hand, Yung and Bentler (1994) have suggested overcoming the sample size problem by using bootstrap methods to estimate the weight matrix.
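
A sketch of how an ADF weight matrix might be estimated from raw data using fourth-order sample moments; the simulated data, the sample size, and the helper names are illustrative assumptions, not Browne's notation or any package's implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 3))       # hypothetical raw data: N = 500 cases, p = 3 variables
    N, p = X.shape
    Xc = X - X.mean(axis=0)                 # deviation scores
    S = Xc.T @ Xc / N                       # sample covariance matrix
    pairs = [(i, j) for i in range(p) for j in range(i + 1)]

    # Estimate the covariance matrix of the nonredundant elements of S from
    # fourth-order moments: Gamma[ij, kl] = m_ijkl - s_ij * s_kl, where m_ijkl
    # is the fourth-order sample moment about the means.
    G = np.empty((len(pairs), len(pairs)))
    for a, (i, j) in enumerate(pairs):
        for b, (k, l) in enumerate(pairs):
            m_ijkl = np.mean(Xc[:, i] * Xc[:, j] * Xc[:, k] * Xc[:, l])
            G[a, b] = m_ijkl - S[i, j] * S[k, l]

    W_adf = np.linalg.inv(G)                # ADF weight matrix for F = (s - sigma)' W (s - sigma)
    print("ADF weight matrix shape:", W_adf.shape)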

References

Browne, M. W. (1982). Covariance structures. In D. M. Hawkins (Ed.), Topics in applied multivariate analysis (pp. 72-141). Cambridge, UK: Cambridge University Press.

Browne, M. W. (1984). Asymptotically distribution-free methods for the analysis of covariance structures. British Journal of Mathematical and Statistical Psychology, 37, 1-21.

Jöreskog, K. G. (1967). Some contributions to maximum likelihood factor analysis. Psychometrika, 32(4), 443-482.

Muthén, B., & Kaplan, D. (1985). A comparison of methodologies for the factor analysis of non-normal Likert variables. British Journal of Mathematical and Statistical Psychology, 38, 171-189.

Muthén, B., & Kaplan, D. (1992). A comparison of some methodologies for the factor analysis of non-normal Likert variables: A note on the size of the model. British Journal of Mathematical and Statistical Psychology, 45, 19-30.

Yung, Y.-F., & Bentler, P. M. (1994). Bootstrap-corrected ADF test statistics in covariance structure analysis. British Journal of Mathematical and Statistical Psychology, 47, 63-84.


Last updated: May 3, 1996