MediaSpace Wiki | Statistical Methods

Basic statistics help:

Correspondence Analysis

- A nice overview with annotated SPSS output. http://www2.chass.ncsu.edu/garson/pa765/correspondence.htm
- A nice explanation of why correspondence analysis works well for categorical data: http://www.okstate.edu/artsci/botany/ordinate/CA.htm
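The core computation behind correspondence analysis is short enough to sketch: form the matrix of standardized residuals of a contingency table, decompose it with an SVD, and the singular vectors give the row and column coordinates while the squared singular values give the inertia of each axis. A minimal sketch with an invented contingency table:

```python
import numpy as np

# Toy contingency table: rows and columns are categories of two variables.
N = np.array([[20, 10,  5],
              [ 5, 15, 10],
              [10,  5, 20]], dtype=float)

P = N / N.sum()                      # correspondence matrix
r = P.sum(axis=1)                    # row masses
c = P.sum(axis=0)                    # column masses

# Standardized residuals: (observed - expected) / sqrt(expected)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

# SVD gives the principal axes; squared singular values are the inertias.
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates of the row and column categories.
row_coords = (U * sv) / np.sqrt(r)[:, None]
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]

total_inertia = (sv ** 2).sum()      # equals Pearson chi-square / n
print(total_inertia)
```

Plotting the first two columns of `row_coords` and `col_coords` on the same axes gives the familiar CA biplot.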

Factor Analysis

- Some nice explanations:
- KMO and Bartlett's Test of Sphericity (Factor Analysis)
- The Kaiser-Meyer-Olkin measure of sampling adequacy tests whether the partial correlations among variables are small. Bartlett's test of sphericity tests whether the correlation matrix is an identity matrix, which would indicate that the factor model is inappropriate. -- from the SPSS on-line help.
- Kaiser-Meyer-Olkin (KMO) and Bartlett's Test: The next item from the output is the Kaiser-Meyer-Olkin (KMO) and Bartlett's test. The KMO measures sampling adequacy, which should be greater than 0.5 for a satisfactory factor analysis to proceed. Looking at the table below, the KMO measure is 0.417. From the same table, we can see that Bartlett's test of sphericity is significant; that is, its associated probability is less than 0.05. In fact, it is actually 0.012. This means that the correlation matrix is not an identity matrix. -- from http://www.ncl.ac.uk/ucs/statistics/common/specialisttopics/factor_analysis/factoranalysis.html
- Is the strength of the relationship among variables large enough? Is it a good idea to proceed with a factor analysis for the data? -- from http://www.public.asu.edu/~pythagor/principal_components.htm
- The Kaiser-Meyer-Olkin measure of sampling adequacy is an index for comparing the magnitudes of the observed correlation coefficients to the magnitudes of the partial correlation coefficients (refer to SPSS User's Guide). Large values for the KMO measure indicate that a factor analysis of the variables is a good idea. For the example, notice that the Kaiser-Meyer-Olkin measure of sampling adequacy is greater than .90.
- Another indicator of the strength of the relationship among variables is Bartlett's test of sphericity, which tests the null hypothesis that the variables in the population correlation matrix are uncorrelated. The observed significance level is .0000, small enough to reject the hypothesis. It is concluded that the relationship among the variables is strong, and it is a good idea to proceed with a factor analysis of the data.
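Both diagnostics described above are easy to compute directly from a correlation matrix. Below is a sketch of the standard textbook formulas (Bartlett's chi-square approximation, and KMO from the anti-image partial correlations), applied to simulated data with one common factor; the dataset is invented purely for illustration:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(R, n):
    """Bartlett's test that the correlation matrix R is an identity matrix."""
    p = R.shape[0]
    stat = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return stat, df, chi2.sf(stat, df)

def kmo(R):
    """Kaiser-Meyer-Olkin measure: observed vs. partial correlations."""
    inv_R = np.linalg.inv(R)
    d = np.sqrt(np.diag(inv_R))
    A = -inv_R / np.outer(d, d)      # anti-image (partial) correlations
    np.fill_diagonal(A, 0.0)
    R_off = R.copy()
    np.fill_diagonal(R_off, 0.0)
    return (R_off ** 2).sum() / ((R_off ** 2).sum() + (A ** 2).sum())

# Simulated data: three indicators driven by a single common factor.
rng = np.random.default_rng(0)
f = rng.normal(size=500)
X = np.column_stack([f + rng.normal(scale=0.5, size=500) for _ in range(3)])
R = np.corrcoef(X, rowvar=False)

stat, df, p = bartlett_sphericity(R, n=500)
print(kmo(R), stat, df, p)
```

With strongly correlated indicators like these, KMO comes out well above the 0.5 rule of thumb and Bartlett's test rejects the identity hypothesis decisively.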

Path Analysis

- Structural Equation Modeling Software, including AMOS (which looks good, but kind of expensive): http://www.gsm.uci.edu/~joelwest/SEM/Software.html
- I have been seeing several papers (both as a reviewer and as a reader of published work) that use AMOS for CFA, path analysis, or SEM models. Very often the global fit values that are reported suggest a near perfect fit. I am wondering if anyone else has noticed this and finds these values suspicious. In my experience, complex social science models just don't fit the data perfectly. If nothing else, the theory and measurement just are not that precise, and there is always sampling error. Has anyone tested AMOS results against EQS, LISREL, PACKAGE, etc. to see if AMOS gives comparable results? Has anyone with programming skills checked out the code? It seems to me that papers using AMOS report much higher fit statistics than papers using alternative software packages. Perhaps I am just cynical, but if something seems too good to be true, it just might be. -- from Tim Levine, Professor
- Professor Levine made a general inquiry as to the ability of AMOS to replicate results provided by other SEM programs (e.g., LISREL, EQS) using the same data. I am not sure anyone has conducted the type of study outlined by Professor Levine, but I do know that the various software packages are quite similar in the code used to produce various estimates. For example, all of the software packages use the Sobel equation for testing mediation-based indirect effects. With this stated, some of the fit statistics provided by AMOS do not appear to be computed correctly (e.g., the Bayesian Information Criterion [BIC]). I have made general inquiries similar to Professor Levine's to some true SEM experts, and the consensus has been that any differences in estimates across software packages would be minimal. With this stated, I don't think I am going out on a limb when I state that most SEM experts approach AMOS with some reservations. This reluctance does not stem so much from the actual calculations produced by the program as from what the program allows SEM practitioners to do with their models. One of the first things I do when reviewing an SEM piece that uses AMOS is to calculate the degrees of freedom in the model. Rick Hoyle (1991) has a nice piece in the Journal of Consulting and Clinical Psychology that outlines how to calculate the degrees of freedom in a model. I almost always find with papers using AMOS that the degrees of freedom reported with the chi-square do not match up with the degrees of freedom that come out of my reading of the model itself. In short, there is usually something else going on in the model that is not being reported. There could be a covarying of error terms that is not appropriate, or researchers will pick and choose which exogenous variables will covary with one another. These acts and others like them are employed in order to produce better fit statistics. Far too often researchers will simply look to the modification indices and fix or free paths based on what improves fit. This sequence of events leads to data-driven models that do not replicate.
- There are some other distinctions of note when comparing the various SEM software packages. For example, AMOS varies from LISREL in how it approaches exogenous variables. LISREL (correctly) asks researchers to make a conceptual distinction between exogenous and endogenous variables prior to the testing of a model. AMOS simply treats any variables that do not have paths going to them as exogenous. As a result, LISREL by default allows all exogenous variables to freely covary with one another, while researchers using AMOS have to be proactive in establishing this set of relationships.
- Robert Hauser, a well-known sociologist at UW-Madison who taught me the ins and outs of SEM, stated at the beginning of his SEM class that we would know enough by the end of one semester to be dangerous. By this he meant that we might be able to do things that we don't quite understand and which may not be statistically appropriate. My general feeling is that the lack of constraints within AMOS and the ease with which it can be used allow for many more dangerous models to be introduced into the literature, especially when you view this program relative to more well-established packages (e.g., LISREL, EQS). I am firmly convinced that certain models that are allowed to run in AMOS could not be tested via LISREL or EQS.
- There is one other point of interest in assessing fit. There are three modeling techniques commonly used in the communication sciences: hybrid, latent composite and observable. It is much easier to get a model to fit well using the latter two techniques given that they are not dealing directly with measurement. In short, the answer to your initial questioning of whether most communication models fit well contains a moderator variable, model type.
-- from R. Lance Holbert
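The degrees-of-freedom check described above is simple arithmetic: with p observed variables, the sample covariance matrix supplies p(p+1)/2 unique elements, and the model's df is that number minus the count of freely estimated parameters. A sketch, using a hypothetical two-factor CFA as the worked example (the parameter counts are invented for illustration):

```python
def sem_df(p, free_params):
    """Model degrees of freedom: unique covariance elements minus free parameters."""
    return p * (p + 1) // 2 - free_params

# Hypothetical two-factor CFA with six indicators (three per factor),
# factor variances fixed to 1: free parameters = 6 loadings
# + 6 error variances + 1 factor covariance = 13.
print(sem_df(6, 13))  # 21 unique covariance elements - 13 free parameters = 8
```

If the df reported alongside the chi-square is smaller than this count implies, something unstated (e.g., correlated error terms) is in the model.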

Hi Matthew,

Thanks very much for sending me the messages on the CRTNET listserv related to Amos. Some of the correspondents mentioned differences between fit values obtained from Amos and results from other SEM programs. I might be able to contribute some relevant technical information about this. Please feel free to forward this to the listserv.

Up until version 4.02, when a model included means and intercepts as explicit model parameters, Amos used a different baseline model than most other SEM programs used in computing fit measures like NFI, NNFI, CFI, etc. Amos's baseline model required each observed variable to have a mean of zero. By contrast, most other SEM programs allowed the means to be unconstrained in the baseline model. Because Amos's baseline model typically fit extremely badly, fit measures like NFI, NNFI, CFI, etc. took on larger values in Amos than in most SEM programs. In other words, Amos's baseline model was so bad it made all your models look good by comparison. Amos's old baseline model (used prior to version 4.02) was not wrong, because the choice of baseline model is up to the individual. In fact, Amos 5 still allows that old baseline model, among others, as an option when you perform a specification search.

However, the difference between Amos's baseline model and the one used by most other SEM programs was causing a lot of confusion, and so the decision was finally made in 4.02 to allow means to be unconstrained in Amos's standard baseline model. So in version 4.02, Amos fell into line with the other SEM programs. I might mention that in Amos 5, when doing a specification search, you can use any of four baseline models -- means can be either fixed at zero or unconstrained, and correlations can be either fixed at zero or constrained to be equal.
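The baseline-model point can be made concrete with the standard formulas for the incremental fit indices: holding the fitted model constant, a worse-fitting baseline mechanically pushes NFI and CFI toward 1. The chi-square and df values below are invented for illustration:

```python
def nfi(chi2_m, chi2_b):
    """Normed Fit Index: proportional improvement over the baseline chi-square."""
    return (chi2_b - chi2_m) / chi2_b

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative Fit Index, based on noncentrality (chi-square minus df)."""
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, chi2_m - df_m, 0.0)
    return 1.0 - num / den

# Same fitted model, two different baselines: the badly fitting
# zero-means baseline inflates the index relative to the
# unconstrained-means baseline.
print(cfi(120.0, 48, chi2_b=800.0, df_b=66))   # moderate baseline
print(cfi(120.0, 48, chi2_b=5000.0, df_b=66))  # much worse baseline -> CFI nearer 1
```

This is why, prior to version 4.02, models estimated with means and intercepts in Amos looked better than the same models in other packages.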

As for BIC, the Bayes Information Criterion, I think the reason that one of the correspondents questioned Amos's BIC calculation is that Adrian Raftery provided two different formulas for computing BIC. The Raftery references are given at the end of this email. Up through version 4, Amos used the 1993 formula. Amos 5 uses the 1995 formula.
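For reference, the BIC form most often quoted in the SEM literature is the Raftery-style approximation of the model chi-square penalized by df times ln(N). The 1993 and 1995 formulas differ in their details, so treat the sketch below as an illustration of the general shape and check which formula your software uses; the fit values are invented:

```python
import math

def sem_bic(chi2_m, df_m, n):
    # Raftery-style BIC approximation for an SEM model: the model
    # chi-square minus a complexity credit of df * ln(n). More negative
    # values favor the model over the saturated alternative.
    return chi2_m - df_m * math.log(n)

# Invented fit values: the same model looks better (more negative BIC)
# as the sample size, and thus the df credit, grows.
print(sem_bic(120.0, df_m=48, n=200))
print(sem_bic(120.0, df_m=48, n=2000))
```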

There were also some comments about Amos that are not related to correctness of results, but have more to do with the user interface and ease of use. I read these comments with great interest, but I mainly wanted to address the issues of correctness of results.

If any CRTNET members want to follow up, please ask them to get in touch with me at ggwebm@amosdevelopment.com.

Best regards, Jim

BIC REFERENCES

Raftery, A. (1993). Bayesian model selection in structural equation models. In K. Bollen & J. Long (Eds.), Testing structural equation models (pp. 163-180). Newbury Park, CA: Sage.

Raftery, A. (1995). Bayesian model selection in social research. In P. Marsden (Ed.), Sociological methodology 1995 (pp. 111-163). San Francisco.

Communication Research Measures -- Volume II

A new volume of scales and indexes in communication is now underway and we encourage you to nominate measures that you find particularly useful for your research. In this new volume, we'll be covering group, intercultural, organizational, health, interpersonal, mass, electronic media, instructional, political, and other related areas. We'll also include some measures from outside the discipline that communication researchers have found useful (e.g., sensation seeking, locus of control, personality). Mainly we'll focus on measures created in the last 10 years, but might consider some that weren't included in the first volume.

Send the measure's name (and citation, if you have it) to me and I'll forward them to my editorial colleagues on this project (Dave Seibold, Betsy Perse, Beth Graham, and Alan Rubin).

Thanks for your input!

Rebecca Rubin rrubin@kent.edu

--- Here is a review of statistical resources on the web

and a list of free statistical software

and then, after you do the statistics, here are links to sites on how to present data

hope this is useful!

gene shackman The Global Social Change Research Project

Correction: Bartlett's test in this particular application does not test whether the correlation matrix is an identity matrix (or it would almost always reject the null), but whether the *residual* correlation matrix is the identity.

-- Last edited September 18, 2015

Foulger, D. and other participants. (September 18, 2015).