Packages to install prior to starting the module:
* curl() – get that data!
* dplyr() – manipulate that data
* psych() – get your psychology on… and exploratory factor analysis
* ggplot2() – make those fabulous figures
* mice() – eeek there is missing data!
* GPArotation() – round and round we go (advanced rotations!)
* rlang() – lots of pretty plots!
* yaml() – I wasn’t lying about the prettiness of these plots
* stringi() – can you believe how many packages there are for plots?
* gplots() – did someone say pretty plots?
* gridExtra() – plots plots plots plots
* moments() – Moments? I thought we were talking about plots?
In this module, we will describe both factor analysis (FA) and principal component analysis (PCA), while also differentiating between the two. We will demonstrate the usefulness of factor analysis using a working dataset. Within the module, we will go through the process of cleaning up raw data, which may have missing elements. Finally, we will point to previous studies that have used either method.
Factor Analysis (FA) is a statistical method used to determine the latent structure behind a set of variables. What does latent structure/variable mean? Good question!
In statistical terms, this means that a latent variable is a “variable that cannot be directly measured, but is assumed to be related to several variables that can be measured” (Field, 2013). So basically, FA helps boil down a bunch of measured variables into a smaller set of underlying ones.
Latent variables are like faith. Say your friend tells you she is a very faithful person. What exactly does this mean? Maybe when your friend is in the lab she is acting according to her faith in science? Maybe when she goes home to her family she is living via her faith in religion? OR, maybe she actually has no faith in science, religion, the government, or herself? Faith is unmeasurable and slightly difficult to define. You always have the option to run a factor analysis on your friend’s behavior to determine the underlying causes to all of her personality quirks…
Principal Component Analysis (PCA) takes a large number of correlated variables and makes a smaller set of uncorrelated variables. So really, PCA is trying to simplify your statistical life (just kidding…. nothing in statistics is that simple)…
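To see that “correlated in, uncorrelated out” claim in action, here is a minimal sketch using simulated data (the variable names are made up for illustration, not from any real dataset):

```r
# Minimal sketch: PCA turns correlated variables into uncorrelated components.
set.seed(42)
n <- 200
attraction <- rnorm(n)
# 'dates' and 'texts' are built to correlate with 'attraction'
dates <- attraction + rnorm(n, sd = 0.5)
texts <- attraction + rnorm(n, sd = 0.5)
d <- data.frame(attraction, dates, texts)

round(cor(d), 2)      # the raw variables are strongly correlated
pca <- prcomp(d, scale. = TRUE)
round(cor(pca$x), 2)  # the components are uncorrelated (an identity matrix)
summary(pca)          # most of the variance lands on the first component
```

Because the three inputs share one underlying source, the first component soaks up most of the variance, which is exactly the “data reduction” PCA is selling.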
I will break it down for you further in an example that will tie into a later example. What if we could make meaningful variables related to love (… don’t worry your love can never be quantified… yours is special or something like that). Let’s say we have this giant list of variables:
Isn’t a minion in a thong mesmerizing? Don’t lie to me- you’re totally checking out his buns…. Anyways, let’s say that our variables load into three components as such:
* PCA1: Desire to See One Another, Commitment, Genuine Concern, Artificial Concern (with the expectation of sex), Engagement in Conversations, Laughing
* PCA2: Frequency of Penetrative Sex, Frequency of Oral Sex, Length of time spent looking at your counterpart’s parts, Butt slapping
* PCA3: Sweaty Pits, Dry Throat, Eye Fluttering, Horniness, Eye Contact
You would then have three linear components that you could assess. You could rename the components so that they seem a little more meaningful. For my first component, I am going to title it the “Attachment Component”, since all of these correlated variables seem to relate to attachment to your partner. The second component is a different story…. All of these variables could be summed up as the “Physical Attraction Component”. Lastly, sweaty pits, a dry throat, eye fluttering, etc. are all physiological responses that are out of your control… but also related to physical attraction. For simplicity, we will call the third component the “Physiological Love Component”.
This example should get you to think about things such as online dating (that’s right- nerds control your future prospects in the dating app world). You can also imagine that different individuals would score high for PCA1, PCA2, or PCA3, but not necessarily all of them. Some people are more emotional and some people are more sexual in their relationships. Luckily, there are algorithms to parse out the differences among peeps!
As mentioned above, FA in particular is useful for determining the underlying, yet correlated, variables (factors) that are influencing a dataset. This is often useful in studies where the variables of interest are unmeasurable or unquantifiable (for example, FA is used a lot in the field of psychology, because it is impossible to directly measure things like intelligence or how considerate someone is).
PCA is mainly useful for data reduction, i.e. taking a large amount of correlated variables and compressing them into a smaller number of linear variables (components) without losing the original patterns of correlation. This is particularly useful for dimensional data (such as morphological data) that you want to do cluster analysis on in order to see how your variables cluster together based on correlation.
Finally, both techniques can be used on a dataset if you want to do a sensitivity analysis. Sensitivity analysis involves determining how the different values of independent variables in a dataset impact the dependent variables.
For specific scholarly examples of both techniques, please refer to the example papers at the bottom of the module.
As we said above, factor analysis is a method we use to analyze the covariation among our variables. Let’s start with defining what a factor is. A factor is another way of saying latent variable—something that cannot be measured directly because of its many facets (other measurable variables). We find the facets that go together, put them in a group we call “a factor” and analyze them together (they are called “components” in PCA). So we can say that a factor is a group of variables that highly correlate with each other.
Next, we pick a variable to compare each factor to. We will refer to this as the Original Variable. In order to understand the underlying nature of a factor, we look at how it correlates with our original variable. These correlations between our factors and original variables are called Factor Loadings. If we organize these factor loadings (between each pair of variables) in a table, we can call that table our R-matrix. If we square a factor loading, it gives us the percentage of variation in our original variable explained by our factor.
Now let’s talk about Eigenvalues. You might think to yourself “why the unnecessary jargon”, but hold onto that thought until you hear how Wikipedia defines eigenvalues: “if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v”. Let’s try to make that sound a bit more human. A factor’s eigenvalue is just the sum of its squared factor loadings (one column of the loading matrix, squared and summed). Its job is to tell you the amount of variance explained by that factor.
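You can check that relationship yourself. A quick sketch, using R’s built-in mtcars data as a stand-in correlation matrix: the sum of squared loadings for each unrotated component matches the corresponding eigenvalue of the correlation matrix.

```r
library(psych)

# Any correlation matrix will do; mtcars is just a convenient built-in example.
R <- cor(mtcars[, c("mpg", "disp", "hp", "drat", "wt")])
pc <- principal(R, nfactors = 2, rotate = "none")

colSums(pc$loadings[, 1:2]^2)  # sum of squared loadings per component...
eigen(R)$values[1:2]           # ...matches the first two eigenvalues of R
```

Dividing an eigenvalue by the number of variables gives the proportion of total variance that factor explains, which is why eigenvalues are the usual currency when deciding how many factors to keep.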
Limitations of Factor Analysis:
Practical Issues:
Theoretical Issues:
Assumptions:
Steps involved in both FA and PCA:
1. Prepare the Data: both PCA and FA derive their solutions from the correlations among the observed variables. You can input either the raw data matrix or the correlation matrix to the principal() and fa() functions. If raw data is inputted, the correlation matrix is automatically calculated. Be sure to screen the data for missing values before proceeding.
2. Select a Factor Model: decide whether PCA (data reduction) or FA (uncovering latent structure) is a better fit for your research goals. If you select an FA approach, you’ll also need to choose a specific factoring method (for example, maximum likelihood).
3. Decide how many Components/Factors to Extract
4. Extract the Components/Factors.
5. Rotate the Components/Factors.
6. Interpret the Results.
7. Compute Component or Factor Scores.
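The seven steps above can be sketched as a psych-based workflow. Everything below is illustrative: `mydata` is a placeholder (here filled with R’s built-in mtcars), and the choice of two factors, maximum likelihood, and varimax rotation are assumptions for the sketch, not recommendations.

```r
library(psych)

# Step 1: prepare the data (raw data or a correlation matrix both work);
# screen for missing values first.
mydata <- na.omit(mtcars[, 1:7])  # placeholder data

# Step 3: decide how many components/factors to extract
# (parallel analysis is one common option).
fa.parallel(mydata, fa = "both")

# Steps 2 & 4: pick a model and extract.
# PCA, for data reduction:
pc <- principal(mydata, nfactors = 2, rotate = "none")
# FA, for latent structure (here a maximum-likelihood fit),
# with Step 5's rotation folded in:
efa <- fa(mydata, nfactors = 2, fm = "ml", rotate = "varimax")

# Step 6: interpret via the loadings (suppress the small ones).
print(efa$loadings, cutoff = 0.3)

# Step 7: factor scores for each observation.
head(efa$scores)
```

We will walk through these same steps for real on the mating-motivation data below.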
May the odds be ever in your favor loading these packages:
require(psych)
## Loading required package: psych
require(curl)
## Loading required package: curl
require(mice)
## Loading required package: mice
## Warning: package 'mice' was built under R version 3.4.2
## Loading required package: lattice
require(GPArotation)
## Loading required package: GPArotation
require(dplyr)
## Loading required package: dplyr
## Warning: package 'dplyr' was built under R version 3.4.2
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
require(rlang)
## Loading required package: rlang
## Warning: package 'rlang' was built under R version 3.4.2
require(yaml)
## Loading required package: yaml
## Warning: package 'yaml' was built under R version 3.4.3
require(stringi)
## Loading required package: stringi
## Warning: package 'stringi' was built under R version 3.4.2
require(gplots)
## Loading required package: gplots
##
## Attaching package: 'gplots'
## The following object is masked from 'package:stats':
##
## lowess
require(gridExtra)
## Loading required package: gridExtra
##
## Attaching package: 'gridExtra'
## The following object is masked from 'package:dplyr':
##
## combine
require(moments)
## Loading required package: moments
One of the primary areas of study within evolutionary psychology is individual differences in mating behavior. Thus far, literature on human mating has focused on the strategies adopted when seeking, attracting, and retaining romantic partners. However, very little research investigates the effort put forth in employing these strategies and to what extent it varies between individuals.
Rowe, Vazsonyi, and Figueredo (1997) investigated individual variation in mating effort in a sample of at-risk adolescents. They found that an individual’s mating effort was related to their reproductive success, such that the individuals who reported higher levels of mating effort also had more short-term relationships (Rowe et al., 1997). However, the scale they developed only contained a small number of items, and the internal consistency of the measure suggests that it may not effectively measure the entire construct. Therefore, by developing and testing a new and more-encompassing scale, we hope to contribute to the psychometric tests that can be used by psychologists studying human mating behavior.
Here we define mating motivation as the amount of energy that people allocate toward locating, attracting, and retaining romantic partners. We will measure mating motivation using the “Mating Motivation Scale” we developed, which contains questions about the importance respondents place on being in and maintaining a relationship, as well as questions about behaviors that participants engage in to locate, attract, and retain romantic partners.
We have developed a 79-item scale. This is a Likert-type scale, which means participants respond by indicating the extent to which they agree or disagree with each statement. Responses are made on a 7-point Likert-type scale, where 1 = Strongly Disagree and 7 = Strongly Agree.
Step 1: Prepare the data. Decide whether we are using FA or PCA. Because we are looking for hypothetical constructs that explain the data, we will use FA, which functions to uncover the latent structure of the variables. In scale validation, we use FA because we do not already know how the variables (the test items) are organized, or which item responses will correlate with each other.
In FA, we are assuming that the underlying factors cause our variables (our test items).
Let’s load in the data and get started:
f <- curl("https://raw.githubusercontent.com/GrahamAlbert/AN-597-group-presentation-and-written-R-vignette/master/MatingMotivationScaleEFA_20171201.csv")
MatingMotivationScaleV1<- read.csv(f, header = TRUE, sep = ",", stringsAsFactors = FALSE)
MatingMotivationScaleV1<-MatingMotivationScaleV1[-1]
head(MatingMotivationScaleV1)
## MMS1 MMS2 MMS3 MMS4 MMS5_rc MMS6_rc MMS7 MMS8 MMS9_rc MMS10 MMS11 MMS12
## 1 5 3 4 5 3 6 2 3 5 1 5 2
## 2 2 2 6 5 1 4 4 1 6 4 7 4
## 3 1 1 7 3 1 4 3 1 2 2 7 4
## 4 7 5 4 6 2 4 4 5 6 5 7 5
## 5 4 3 3 4 4 5 4 5 3 5 4 4
## 6 6 5 4 3 2 2 5 4 2 4 6 4
## MMS13 MMS14_rc MMS15_rc MMS16 MMS17 MMS18 MMS19 MMS20 MMS21 MMS22 MMS23
## 1 7 4 4 5 2 2 2 5 5 2 1
## 2 6 3 4 5 4 4 4 4 1 5 4
## 3 7 1 3 5 2 3 5 3 1 5 1
## 4 5 4 3 3 6 4 4 3 6 6 5
## 5 4 4 4 5 3 4 4 3 4 5 5
## 6 4 2 3 4 3 3 4 6 3 3 2
## MMS24 MMS25_rc MMS26 MMS27_rc MMS28 MMS29 MMS30 MMS31 MMS32 MMS33
## 1 1 4 4 3 2 2 1 6 2 2
## 2 2 4 6 3 1 1 1 3 1 7
## 3 1 4 5 2 1 1 1 1 1 7
## 4 4 5 2 2 5 4 3 6 4 4
## 5 5 5 5 5 4 4 4 5 5 3
## 6 6 4 3 5 3 2 6 5 4 4
## MMS34_rc MMS35 MMS36 MMS37 MMS38 MMS39 MMS40_rc MMS41 MMS42 MMS43 MMS44
## 1 4 6 2 1 1 1 1 1 7 7 5
## 2 3 1 1 5 2 1 1 4 6 2 1
## 3 2 1 1 1 2 1 1 1 3 2 2
## 4 4 5 4 6 5 4 5 5 4 3 5
## 5 4 3 4 6 4 4 4 4 4 4 4
## 6 5 6 5 4 3 3 2 5 4 4 3
## MMS45 MMS46 MMS47 MMS48 MMS49 MMS50 MMS51 MMS52 MMS53 MMS54 MMS55
## 1 5 3 3 4 4 2 2 6 6 3 3
## 2 1 6 4 1 1 1 6 2 6 1 1
## 3 1 2 5 1 1 1 5 1 1 1 1
## 4 4 3 6 3 3 6 6 6 5 4 5
## 5 5 5 5 5 4 4 4 5 4 4 5
## 6 3 5 3 3 2 5 6 4 3 2 5
## MMS56_rc MMS57_rc MMS58 MMS59 MMS60 MMS61 MMS62 MMS63 MMS64 MMS65 MMS66
## 1 4 4 5 5 3 5 2 2 4 6 6
## 2 3 1 2 5 5 1 1 1 1 1 1
## 3 2 1 2 1 6 5 1 1 1 NA 1
## 4 4 6 6 4 3 7 6 5 4 3 6
## 5 5 5 4 5 4 4 3 4 4 4 5
## 6 4 5 4 7 5 6 5 3 5 4 4
## MMS67 MMS68_rc MMS69 MMS70_rc MMS71 MMS72 MMS73 MMS74 MMS75_rc MMS76
## 1 6 3 5 5 6 2 2 3 5 7
## 2 1 1 4 1 3 1 2 1 2 1
## 3 1 1 5 2 1 2 4 1 3 1
## 4 3 5 6 4 5 6 4 3 3 5
## 5 4 4 3 3 4 3 4 4 4 5
## 6 6 3 3 4 5 5 5 3 5 4
## MMS77 MMS78 MMS79
## 1 5 3 3
## 2 1 4 1
## 3 1 5 1
## 4 4 6 3
## 5 4 4 3
## 6 5 3 5
You should see that each of the 79 variables is coded as “MMSXX”, where MMS stands for Mating Motivation Scale and XX is simply the question number.
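A note on the “_rc” suffix you see on items like MMS5_rc: this presumably marks reverse-coded items (items worded in the opposite direction, flipped before analysis so that all items point the same way). Assuming that interpretation, on a 7-point scale reverse-coding maps a response x to 8 − x:

```r
# Assumption: the "_rc" suffix marks reverse-coded items. On a 7-point
# Likert scale, reverse-coding maps a response x to (7 + 1) - x,
# so 1 <-> 7, 2 <-> 6, and the midpoint 4 stays put.
reverse_code <- function(x, max_point = 7) (max_point + 1) - x

reverse_code(c(1, 2, 4, 6, 7))  # 7 6 4 2 1
```

In our dataset the _rc items appear to have been flipped already, so no further recoding is needed here.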
Sorry not sorry – our data looks like a wall of numbers.
Here is a sample of what each question looks like:
Let’s scan for missing data:
percentmiss = function(x){sum(is.na(x))/length(x)*100}
missing=apply(MatingMotivationScaleV1, 1,percentmiss)
table(missing)
## missing
## 0 1.26582278481013 2.53164556962025
## 251 17 3
replacepeople=subset(MatingMotivationScaleV1, missing<=5)
apply(replacepeople, 2, percentmiss)
## MMS1 MMS2 MMS3 MMS4 MMS5_rc MMS6_rc MMS7
## 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
## MMS8 MMS9_rc MMS10 MMS11 MMS12 MMS13 MMS14_rc
## 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
## MMS15_rc MMS16 MMS17 MMS18 MMS19 MMS20 MMS21
## 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
## MMS22 MMS23 MMS24 MMS25_rc MMS26 MMS27_rc MMS28
## 0.0000000 0.0000000 0.0000000 0.3690037 0.3690037 0.0000000 0.0000000
## MMS29 MMS30 MMS31 MMS32 MMS33 MMS34_rc MMS35
## 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.3690037
## MMS36 MMS37 MMS38 MMS39 MMS40_rc MMS41 MMS42
## 0.0000000 0.3690037 0.3690037 0.0000000 0.3690037 0.0000000 0.3690037
## MMS43 MMS44 MMS45 MMS46 MMS47 MMS48 MMS49
## 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
## MMS50 MMS51 MMS52 MMS53 MMS54 MMS55 MMS56_rc
## 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.7380074 0.3690037
## MMS57_rc MMS58 MMS59 MMS60 MMS61 MMS62 MMS63
## 0.0000000 0.0000000 0.3690037 0.0000000 0.0000000 0.0000000 0.0000000
## MMS64 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc
## 0.0000000 0.3690037 0.7380074 0.7380074 0.3690037 0.3690037 0.3690037
## MMS71 MMS72 MMS73 MMS74 MMS75_rc MMS76 MMS77
## 0.0000000 0.0000000 0.3690037 0.3690037 0.0000000 0.0000000 0.7380074
## MMS78 MMS79
## 0.0000000 0.0000000
Based on an analysis of the columns (see above), it appears that the data are missing at random, so we can proceed with replacing the missing values via imputation using the package mice (which, by default, fills in each missing value by multiple imputation with predictive mean matching rather than a simple mean).
tempnomiss = mice(replacepeople)
##
## iter imp variable
## 1 1 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 1 2 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 1 3 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 1 4 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 1 5 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 2 1 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 2 2 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 2 3 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 2 4 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 2 5 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 3 1 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 3 2 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 3 3 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 3 4 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 3 5 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 4 1 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 4 2 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 4 3 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 4 4 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 4 5 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 5 1 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 5 2 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 5 3 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 5 4 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
## 5 5 MMS25_rc MMS26 MMS35 MMS37 MMS38 MMS40_rc MMS42 MMS55 MMS56_rc MMS59 MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS73 MMS74 MMS77
nomiss = complete(tempnomiss, 1)
Mahalanobis Distance: one of the methods used in detecting multivariate outliers is the Mahalanobis statistic (usually referred to as the Mahalanobis Distance). The Mahalanobis Distance measures how many standard deviations a point (P) lies from the center of a distribution (D). If P sits at the mean of D, the distance is zero. The measure is unitless, scale-invariant, and takes the correlations in the dataset into account, so it comes in handy while doing factor analysis.
In statistics, we are trying to predict outcomes to the best of our ability and typically there is a range of observations that are the likeliest to occur. However, some data points deviate from this range remarkably and can potentially mess up the outcome of your analysis. These guys are called outliers.
There are different types of outliers: univariate, bivariate, and multivariate. For the sake of relevance, we are only going to go over multivariate outliers (MVOs). These refer to outliers that exhibit an unusual combination of scores on different variables.
We can screen for multivariate outliers on nomiss using the mahalanobis() function:
cutoff = qchisq(0.99, ncol(nomiss))
mahal = mahalanobis(nomiss,
colMeans(nomiss),
cov(nomiss))
cutoff # generates the cutoff score
## [1] 111.144
ncol(nomiss) # determines df
## [1] 79
summary(mahal<cutoff)
## Mode FALSE TRUE
## logical 33 238
From the chunk above, you can see that there are 33 individuals that are considered ‘outliers’, while the bulk of our participants have ‘expected’ responses (n = 238).
Now, we will remove these outlier datapoints (i.e. those individuals who are just a little unusual):
noout = subset(nomiss, mahal<cutoff)
head(noout)
## MMS1 MMS2 MMS3 MMS4 MMS5_rc MMS6_rc MMS7 MMS8 MMS9_rc MMS10 MMS11 MMS12
## 2 2 2 6 5 1 4 4 1 6 4 7 4
## 3 1 1 7 3 1 4 3 1 2 2 7 4
## 4 7 5 4 6 2 4 4 5 6 5 7 5
## 5 4 3 3 4 4 5 4 5 3 5 4 4
## 6 6 5 4 3 2 2 5 4 2 4 6 4
## 8 5 5 3 3 3 4 5 3 5 5 4 5
## MMS13 MMS14_rc MMS15_rc MMS16 MMS17 MMS18 MMS19 MMS20 MMS21 MMS22 MMS23
## 2 6 3 4 5 4 4 4 4 1 5 4
## 3 7 1 3 5 2 3 5 3 1 5 1
## 4 5 4 3 3 6 4 4 3 6 6 5
## 5 4 4 4 5 3 4 4 3 4 5 5
## 6 4 2 3 4 3 3 4 6 3 3 2
## 8 5 4 3 5 4 3 4 3 4 5 3
## MMS24 MMS25_rc MMS26 MMS27_rc MMS28 MMS29 MMS30 MMS31 MMS32 MMS33
## 2 2 4 6 3 1 1 1 3 1 7
## 3 1 4 5 2 1 1 1 1 1 7
## 4 4 5 2 2 5 4 3 6 4 4
## 5 5 5 5 5 4 4 4 5 5 3
## 6 6 4 3 5 3 2 6 5 4 4
## 8 3 4 5 3 4 5 4 5 4 3
## MMS34_rc MMS35 MMS36 MMS37 MMS38 MMS39 MMS40_rc MMS41 MMS42 MMS43 MMS44
## 2 3 1 1 5 2 1 1 4 6 2 1
## 3 2 1 1 1 2 1 1 1 3 2 2
## 4 4 5 4 6 5 4 5 5 4 3 5
## 5 4 3 4 6 4 4 4 4 4 4 4
## 6 5 6 5 4 3 3 2 5 4 4 3
## 8 5 5 4 4 3 3 4 4 3 5 5
## MMS45 MMS46 MMS47 MMS48 MMS49 MMS50 MMS51 MMS52 MMS53 MMS54 MMS55
## 2 1 6 4 1 1 1 6 2 6 1 1
## 3 1 2 5 1 1 1 5 1 1 1 1
## 4 4 3 6 3 3 6 6 6 5 4 5
## 5 5 5 5 5 4 4 4 5 4 4 5
## 6 3 5 3 3 2 5 6 4 3 2 5
## 8 5 4 4 3 5 4 5 5 4 4 3
## MMS56_rc MMS57_rc MMS58 MMS59 MMS60 MMS61 MMS62 MMS63 MMS64 MMS65 MMS66
## 2 3 1 2 5 5 1 1 1 1 1 1
## 3 2 1 2 1 6 5 1 1 1 4 1
## 4 4 6 6 4 3 7 6 5 4 3 6
## 5 5 5 4 5 4 4 3 4 4 4 5
## 6 4 5 4 7 5 6 5 3 5 4 4
## 8 4 5 4 5 5 5 5 3 3 5 4
## MMS67 MMS68_rc MMS69 MMS70_rc MMS71 MMS72 MMS73 MMS74 MMS75_rc MMS76
## 2 1 1 4 1 3 1 2 1 2 1
## 3 1 1 5 2 1 2 4 1 3 1
## 4 3 5 6 4 5 6 4 3 3 5
## 5 4 4 3 3 4 3 4 4 4 5
## 6 6 3 3 4 5 5 5 3 5 4
## 8 5 3 4 3 5 4 4 5 3 5
## MMS77 MMS78 MMS79
## 2 1 4 1
## 3 1 5 1
## 4 4 6 3
## 5 4 4 3
## 6 5 3 5
## 8 4 3 4
In the table above, we removed 33 individuals. For example, you can see that individual #7 was removed (the row numbers jump from 6 to 8).
Additivity: we can screen for additivity, which is one of the assumptions for FA. Additive assumptions are those that assume that the effect of a given predictor variable on a response variable is independent of the other predictor variables.
In this case, we expect our variables to be correlated; we just do not want our variables to be perfectly correlated, otherwise the analysis will not run.
correl=cor(noout, use ="pairwise.complete.obs")
symnum(correl)
## MMS1 MMS2 MMS3 MMS4 MMS5_ MMS6_ MMS7 MMS8 MMS9 MMS10 MMS11 MMS12
## MMS1 1
## MMS2 . 1
## MMS3 1
## MMS4 . 1
## MMS5_rc . . 1
## MMS6_rc . 1
## MMS7 1
## MMS8 . . . . . 1
## MMS9_rc 1
## MMS10 . . . . 1
## MMS11 . . 1
## MMS12 . . 1
## MMS13 . . . .
## MMS14_rc .
## MMS15_rc .
## MMS16
## MMS17 . . . .
## MMS18 .
## MMS19
## MMS20 . .
## MMS21 . . . . , .
## MMS22 .
## MMS23 . .
## MMS24 . . . . . .
## MMS25_rc
## MMS26 . . .
## MMS27_rc
## MMS28 . . . . .
## MMS29 . . .
## MMS30 . . . . . . .
## MMS31 . . . .
## MMS32 . . .
## MMS33 . .
## MMS34_rc
## MMS35 . . . . , .
## MMS36 . . . . . .
## MMS37
## MMS38 . . . .
## MMS39 . . . . . .
## MMS40_rc
## MMS41
## MMS42 .
## MMS43 . . . , .
## MMS44 . . . .
## MMS45 . . .
## MMS46 . . .
## MMS47 .
## MMS48 . . .
## MMS49 . . . .
## MMS50 . . . . .
## MMS51 . .
## MMS52 . . . .
## MMS53
## MMS54 . . . . .
## MMS55 . . . , . .
## MMS56_rc . .
## MMS57_rc . .
## MMS58 . . . . . . .
## MMS59
## MMS60 .
## MMS61 . . . .
## MMS62 . . . .
## MMS63 . . . .
## MMS64 . . . .
## MMS65 . . . .
## MMS66 . . . .
## MMS67 . . .
## MMS68_rc
## MMS69
## MMS70_rc
## MMS71 . .
## MMS72 .
## MMS73 . . . .
## MMS74 . . . .
## MMS75_rc
## MMS76 . . . .
## MMS77 . . . . . .
## MMS78
## MMS79 . . . . . . . .
## MMS13 MMS14 MMS15 MMS16 MMS17 MMS18 MMS19 MMS20 MMS21 MMS22 MMS23
## MMS1
## MMS2
## MMS3
## MMS4
## MMS5_rc
## MMS6_rc
## MMS7
## MMS8
## MMS9_rc
## MMS10
## MMS11
## MMS12
## MMS13 1
## MMS14_rc 1
## MMS15_rc 1
## MMS16 . 1
## MMS17 1
## MMS18 . 1
## MMS19 . . 1
## MMS20 . 1
## MMS21 . . 1
## MMS22 . . 1
## MMS23 . . 1
## MMS24 . . . . .
## MMS25_rc
## MMS26 . .
## MMS27_rc
## MMS28 . . . , .
## MMS29 . . . .
## MMS30 . . . .
## MMS31 . .
## MMS32 . . . . .
## MMS33 . .
## MMS34_rc
## MMS35 . . , .
## MMS36 . . . , .
## MMS37
## MMS38 . . .
## MMS39 . . . . .
## MMS40_rc
## MMS41 . . .
## MMS42 . . .
## MMS43 . . , .
## MMS44 . .
## MMS45 . . . .
## MMS46 . . . .
## MMS47 . .
## MMS48 . . .
## MMS49 . . .
## MMS50 . . . .
## MMS51 . .
## MMS52 .
## MMS53 .
## MMS54 . . . . .
## MMS55 . . . , .
## MMS56_rc
## MMS57_rc
## MMS58 . . . . , .
## MMS59
## MMS60 .
## MMS61 . .
## MMS62 . . . .
## MMS63 . . . . . .
## MMS64 . . .
## MMS65 . . .
## MMS66 . . .
## MMS67 . . .
## MMS68_rc
## MMS69 .
## MMS70_rc
## MMS71 .
## MMS72 .
## MMS73 . . .
## MMS74 . . . .
## MMS75_rc .
## MMS76 . . . .
## MMS77 . . , .
## MMS78
## MMS79 . . . . .
## MMS24 MMS25 MMS26 MMS27 MMS28 MMS29 MMS30 MMS31 MMS32 MMS33 MMS34
## MMS1
## MMS2
## MMS3
## MMS4
## MMS5_rc
## MMS6_rc
## MMS7
## MMS8
## MMS9_rc
## MMS10
## MMS11
## MMS12
## MMS13
## MMS14_rc
## MMS15_rc
## MMS16
## MMS17
## MMS18
## MMS19
## MMS20
## MMS21
## MMS22
## MMS23
## MMS24 1
## MMS25_rc 1
## MMS26 1
## MMS27_rc 1
## MMS28 , 1
## MMS29 . . 1
## MMS30 , , . 1
## MMS31 . . . . 1
## MMS32 . . . . . 1
## MMS33 1
## MMS34_rc 1
## MMS35 . . . . . .
## MMS36 . , . . . .
## MMS37 .
## MMS38 . . . .
## MMS39 , , . , . .
## MMS40_rc
## MMS41 . . . .
## MMS42 .
## MMS43 . . . . , .
## MMS44 . . . . .
## MMS45 . . . . . .
## MMS46 . . . . .
## MMS47 . .
## MMS48 . . . . . .
## MMS49 . . . . . .
## MMS50 . , . , . .
## MMS51
## MMS52 . . . . . .
## MMS53
## MMS54 . . . . . .
## MMS55 . , . . . .
## MMS56_rc
## MMS57_rc
## MMS58 . . . . . .
## MMS59 . .
## MMS60 . .
## MMS61 . . . . .
## MMS62 . , . . . .
## MMS63 . , . . . .
## MMS64 . , . . . .
## MMS65 . . . .
## MMS66 . . . . . .
## MMS67 . . . . .
## MMS68_rc
## MMS69 . . . .
## MMS70_rc .
## MMS71 . . . .
## MMS72
## MMS73 . . . . . .
## MMS74 . . . . . .
## MMS75_rc .
## MMS76 . . . . . .
## MMS77 , . . , . .
## MMS78
## MMS79 , , . , . .
## MMS35 MMS36 MMS37 MMS38 MMS39 MMS40 MMS41 MMS42 MMS43 MMS44 MMS45
## MMS1
## MMS2
## MMS3
## MMS4
## MMS5_rc
## MMS6_rc
## MMS7
## MMS8
## MMS9_rc
## MMS10
## MMS11
## MMS12
## MMS13
## MMS14_rc
## MMS15_rc
## MMS16
## MMS17
## MMS18
## MMS19
## MMS20
## MMS21
## MMS22
## MMS23
## MMS24
## MMS25_rc
## MMS26
## MMS27_rc
## MMS28
## MMS29
## MMS30
## MMS31
## MMS32
## MMS33
## MMS34_rc
## MMS35 1
## MMS36 , 1
## MMS37 1
## MMS38 . . 1
## MMS39 , , . 1
## MMS40_rc 1
## MMS41 . 1
## MMS42 1
## MMS43 , , . . 1
## MMS44 . . , . . 1
## MMS45 . . . . . . 1
## MMS46 . . . . .
## MMS47 . .
## MMS48 . . . . . . .
## MMS49 . . . , . . ,
## MMS50 . , . , . . . .
## MMS51 .
## MMS52 . . . . . .
## MMS53
## MMS54 . . . , . . . .
## MMS55 . , . , . . . .
## MMS56_rc
## MMS57_rc .
## MMS58 . . . , . . .
## MMS59 . .
## MMS60
## MMS61 . . . . . . .
## MMS62 . . . . . . .
## MMS63 . . . , . . .
## MMS64 . . . , . . .
## MMS65 . . . . . , .
## MMS66 . . . , . . .
## MMS67 . . . . . .
## MMS68_rc
## MMS69 . . .
## MMS70_rc
## MMS71 . . . . .
## MMS72 .
## MMS73 . . . . . .
## MMS74 . . . . . . .
## MMS75_rc
## MMS76 . . . , . . .
## MMS77 . . . , . . .
## MMS78 .
## MMS79 , , . , . . . .
## MMS46 MMS47 MMS48 MMS49 MMS50 MMS51 MMS52 MMS53 MMS54 MMS55 MMS56
## MMS1
## MMS2
## MMS3
## MMS4
## MMS5_rc
## MMS6_rc
## MMS7
## MMS8
## MMS9_rc
## MMS10
## MMS11
## MMS12
## MMS13
## MMS14_rc
## MMS15_rc
## MMS16
## MMS17
## MMS18
## MMS19
## MMS20
## MMS21
## MMS22
## MMS23
## MMS24
## MMS25_rc
## MMS26
## MMS27_rc
## MMS28
## MMS29
## MMS30
## MMS31
## MMS32
## MMS33
## MMS34_rc
## MMS35
## MMS36
## MMS37
## MMS38
## MMS39
## MMS40_rc
## MMS41
## MMS42
## MMS43
## MMS44
## MMS45
## MMS46 1
## MMS47 . 1
## MMS48 . 1
## MMS49 . . . 1
## MMS50 . . . , 1
## MMS51 1
## MMS52 . . . 1
## MMS53 . 1
## MMS54 . . . . . 1
## MMS55 . . . , , 1
## MMS56_rc 1
## MMS57_rc .
## MMS58 . . . . , ,
## MMS59 . . . . .
## MMS60 . .
## MMS61 . . . . .
## MMS62 . . . . . .
## MMS63 . . . . . .
## MMS64 . . . . , . . .
## MMS65 . . . . .
## MMS66 . . . , . . ,
## MMS67 . . . . .
## MMS68_rc .
## MMS69 . . . .
## MMS70_rc .
## MMS71 . . . . .
## MMS72 .
## MMS73 . . . .
## MMS74 . . . . . .
## MMS75_rc
## MMS76 . . . . , ,
## MMS77 . . . . , ,
## MMS78
## MMS79 . . . , . . ,
## MMS57 MMS58 MMS59 MMS60 MMS61 MMS62 MMS63 MMS64 MMS65 MMS66 MMS67
## MMS1
## MMS2
## MMS3
## MMS4
## MMS5_rc
## MMS6_rc
## MMS7
## MMS8
## MMS9_rc
## MMS10
## MMS11
## MMS12
## MMS13
## MMS14_rc
## MMS15_rc
## MMS16
## MMS17
## MMS18
## MMS19
## MMS20
## MMS21
## MMS22
## MMS23
## MMS24
## MMS25_rc
## MMS26
## MMS27_rc
## MMS28
## MMS29
## MMS30
## MMS31
## MMS32
## MMS33
## MMS34_rc
## MMS35
## MMS36
## MMS37
## MMS38
## MMS39
## MMS40_rc
## MMS41
## MMS42
## MMS43
## MMS44
## MMS45
## MMS46
## MMS47
## MMS48
## MMS49
## MMS50
## MMS51
## MMS52
## MMS53
## MMS54
## MMS55
## MMS56_rc
## MMS57_rc 1
## MMS58 1
## MMS59 . 1
## MMS60 1
## MMS61 . 1
## MMS62 . . 1
## MMS63 . . . 1
## MMS64 . . . , . 1
## MMS65 . . . . . 1
## MMS66 . . . , . , . 1
## MMS67 . . . . . . . 1
## MMS68_rc .
## MMS69 . . . . . .
## MMS70_rc
## MMS71 . . . . . . .
## MMS72
## MMS73 . , . . . . . . .
## MMS74 . . . . . . . . .
## MMS75_rc
## MMS76 . . . . , . . , .
## MMS77 , . . . , . . .
## MMS78 . .
## MMS79 . . . , , . , .
## MMS68 MMS69 MMS70 MMS71 MMS72 MMS73 MMS74 MMS75 MMS76 MMS77 MMS78
## MMS1
## MMS2
## MMS3
## MMS4
## MMS5_rc
## MMS6_rc
## MMS7
## MMS8
## MMS9_rc
## MMS10
## MMS11
## MMS12
## MMS13
## MMS14_rc
## MMS15_rc
## MMS16
## MMS17
## MMS18
## MMS19
## MMS20
## MMS21
## MMS22
## MMS23
## MMS24
## MMS25_rc
## MMS26
## MMS27_rc
## MMS28
## MMS29
## MMS30
## MMS31
## MMS32
## MMS33
## MMS34_rc
## MMS35
## MMS36
## MMS37
## MMS38
## MMS39
## MMS40_rc
## MMS41
## MMS42
## MMS43
## MMS44
## MMS45
## MMS46
## MMS47
## MMS48
## MMS49
## MMS50
## MMS51
## MMS52
## MMS53
## MMS54
## MMS55
## MMS56_rc
## MMS57_rc
## MMS58
## MMS59
## MMS60
## MMS61
## MMS62
## MMS63
## MMS64
## MMS65
## MMS66
## MMS67
## MMS68_rc 1
## MMS69 1
## MMS70_rc 1
## MMS71 . 1
## MMS72 . 1
## MMS73 . 1
## MMS74 . . . 1
## MMS75_rc 1
## MMS76 . . . , 1
## MMS77 . . . , , 1
## MMS78 . . . 1
## MMS79 . . . . , ,
## MMS79
## MMS1
## MMS2
## MMS3
## MMS4
## MMS5_rc
## MMS6_rc
## MMS7
## MMS8
## MMS9_rc
## MMS10
## MMS11
## MMS12
## MMS13
## MMS14_rc
## MMS15_rc
## MMS16
## MMS17
## MMS18
## MMS19
## MMS20
## MMS21
## MMS22
## MMS23
## MMS24
## MMS25_rc
## MMS26
## MMS27_rc
## MMS28
## MMS29
## MMS30
## MMS31
## MMS32
## MMS33
## MMS34_rc
## MMS35
## MMS36
## MMS37
## MMS38
## MMS39
## MMS40_rc
## MMS41
## MMS42
## MMS43
## MMS44
## MMS45
## MMS46
## MMS47
## MMS48
## MMS49
## MMS50
## MMS51
## MMS52
## MMS53
## MMS54
## MMS55
## MMS56_rc
## MMS57_rc
## MMS58
## MMS59
## MMS60
## MMS61
## MMS62
## MMS63
## MMS64
## MMS65
## MMS66
## MMS67
## MMS68_rc
## MMS69
## MMS70_rc
## MMS71
## MMS72
## MMS73
## MMS74
## MMS75_rc
## MMS76
## MMS77
## MMS78
## MMS79 1
## attr(,"legend")
## [1] 0 ' ' 0.3 '.' 0.6 ',' 0.8 '+' 0.9 '*' 0.95 'B' 1
Normality: we’ve been assuming this for pretty much the entire class. We assume that the variable of interest at least roughly follows a normal distribution, i.e., it fits a bell-curve shape. This is necessary before running almost any parametric test, because these tests all make this assumption. If your data aren’t normally distributed and you use a test that assumes they are, it won’t be pretty.
Being normal is overrated, but let’s test for it anyway.
#Set-up for testing assumptions: regress a random chi-square variable
#on all of our items so we can examine residuals and fitted values
random = rchisq(nrow(noout), 7)
fake = lm(random ~ ., data = noout)
standardized = rstudent(fake)        #studentized residuals
fitted = scale(fake$fitted.values)   #standardized fitted values
hist(standardized)
Linearity: it is in the name, really. Certain tests (such as linear regression) make the initial assumption that the relationship between your independent and dependent variables is linear. The linearity assumption is best tested using scatter plots or Q-Q plots.
qqnorm(standardized)
Not perfect…. but we’ll take it :)
Homogeneity of Variance: this is another standard assumption made by most widespread tests, like ANOVA or the t-test. It assumes that the variance within each population is equal. In other words, we expect that the dispersion of our dependent variable is relatively equal at each level of our independent variable.
And now, homogeneity of variance:
plot(fitted, standardized)
abline(0, 0)   #horizontal reference line at 0
abline(v = 0)  #vertical reference line at 0
Now that we made you jump through all of those statistical hoops, you can finally run your factor analysis.
All of the assumptions have been met. Now we need to check whether there is enough correlation among the variables (i.e., our 79 questions) to form meaningful factors. To do this, we will perform Bartlett’s test, which tests whether our correlation matrix differs significantly from an identity matrix; a significant result indicates that we will actually be able to form meaningful factors.
cortest.bartlett(correl, n = nrow(noout))
## $chisq
## [1] 12061.32
##
## $p.value
## [1] 0
##
## $df
## [1] 3081
The Kaiser-Meyer-Olkin (KMO) test assesses sampling adequacy. To boil it down: do you have enough participants? For this example, our sample size is N = 238. Results from this test range between 0 and 1, with larger values representing more adequate samples. An overall MSA greater than 0.7 is considered acceptable (We want 1! We want 1! We want 1!).
KMO(correl)
## Kaiser-Meyer-Olkin factor adequacy
## Call: KMO(r = correl)
## Overall MSA = 0.9
## MSA for each item =
## MMS1 MMS2 MMS3 MMS4 MMS5_rc MMS6_rc MMS7 MMS8
## 0.89 0.92 0.89 0.71 0.84 0.66 0.80 0.94
## MMS9_rc MMS10 MMS11 MMS12 MMS13 MMS14_rc MMS15_rc MMS16
## 0.61 0.88 0.83 0.87 0.89 0.72 0.62 0.66
## MMS17 MMS18 MMS19 MMS20 MMS21 MMS22 MMS23 MMS24
## 0.93 0.87 0.84 0.79 0.95 0.83 0.92 0.92
## MMS25_rc MMS26 MMS27_rc MMS28 MMS29 MMS30 MMS31 MMS32
## 0.61 0.74 0.62 0.94 0.92 0.97 0.93 0.95
## MMS33 MMS34_rc MMS35 MMS36 MMS37 MMS38 MMS39 MMS40_rc
## 0.79 0.74 0.92 0.95 0.80 0.89 0.95 0.63
## MMS41 MMS42 MMS43 MMS44 MMS45 MMS46 MMS47 MMS48
## 0.86 0.71 0.92 0.87 0.94 0.90 0.89 0.93
## MMS49 MMS50 MMS51 MMS52 MMS53 MMS54 MMS55 MMS56_rc
## 0.95 0.94 0.70 0.88 0.81 0.96 0.96 0.82
## MMS57_rc MMS58 MMS59 MMS60 MMS61 MMS62 MMS63 MMS64
## 0.81 0.96 0.88 0.82 0.91 0.95 0.93 0.95
## MMS65 MMS66 MMS67 MMS68_rc MMS69 MMS70_rc MMS71 MMS72
## 0.90 0.93 0.90 0.77 0.87 0.76 0.89 0.78
## MMS73 MMS74 MMS75_rc MMS76 MMS77 MMS78 MMS79
## 0.92 0.94 0.57 0.95 0.95 0.78 0.95
Look at that beauty- we love it!
The most common approach to deciding how many factors to retain is based on eigenvalues. Each factor is associated with an eigenvalue of the correlation matrix: the first factor with the largest eigenvalue, the second factor with the second-largest, and so on.
The Kaiser-Harris criterion suggests retaining factors with eigenvalues greater than 1. Factors with eigenvalues less than 1 explain less variance than is contained in a single variable… thus they are not really helping reduce anything.
In a Cattell Scree Test, the eigenvalues are plotted against their factor numbers. Such plots typically demonstrate a bend or elbow, and the number of factors above this sharp break are retained.
Finally, you can run simulations (known as parallel analysis), extracting eigenvalues from random data matrices of the same size as the original matrix and retaining only the factors whose observed eigenvalues exceed the simulated ones.
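The eigenvalue counting behind the Kaiser-Harris criterion can be sketched in base R. This is a toy illustration on simulated data (the 6-item matrix and sample size are invented for the example); in the module the same idea is applied to the eigenvalues that fa.parallel() reports.

```r
# Toy illustration of the Kaiser-Harris criterion: compute the eigenvalues
# of a correlation matrix and count how many exceed 1.
set.seed(7)
x  <- matrix(rnorm(100 * 6), ncol = 6)  # simulated data: 100 cases, 6 items
R  <- cor(x)                            # correlation matrix
ev <- eigen(R)$values                   # eigenvalues, largest first
sum(ev > 1)                             # number of factors to retain
```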
You can assess all three eigenvalue criteria at the same time via the fa. parallel(), which determines the number of factors for extraction.
Let’s check out our Scree Plot:
nofactors <- fa.parallel(noout,fm="ml",fa="fa")
## Parallel analysis suggests that the number of factors = 8 and the number of components = NA
sum(nofactors$fa.values > 1.0) # old Kaiser criterion
## [1] 7
sum(nofactors$fa.values > 0.7) # new Kaiser criterion
## [1] 9
On our Scree Plot, the black horizontal line is drawn at an eigenvalue of 1. We are going to count the number of factors (little blue triangles) above the black line and use that number of factors in our analysis.
Hard to see, but there are 7 little blue triangles with an eigenvalue greater than 1. This selection process is based on the Kaiser-Harris criterion.
From here, we want to reduce the residuals in our analysis. To do this, we are going to “rotate” our data.
Rotations for factor analysis literally rotate the axes that the variables are plotted on. This allows for the detection of factors that may not be apparent if the axes are held constant (i.e., not rotated). There are two types of rotation: orthogonal and oblique. Orthogonal rotations do not allow the factors to correlate with one another, whereas oblique rotations do.
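To see the distinction concretely, here is a minimal base-R sketch on simulated data with two correlated latent factors. It uses factanal() with varimax (orthogonal) and promax (base R's built-in oblique option) rather than the psych::fa() oblimin rotation used in this module, but the orthogonal-versus-oblique idea is the same; all names and data here are invented for the example.

```r
# Simulate six observed items driven by two correlated latent factors
set.seed(42)
f1 <- rnorm(300)                          # latent factor 1
f2 <- 0.5 * f1 + rnorm(300)               # latent factor 2, correlated with f1
items <- data.frame(
  a = f1 + rnorm(300, sd = 0.5), b = f1 + rnorm(300, sd = 0.5),
  c = f1 + rnorm(300, sd = 0.5), d = f2 + rnorm(300, sd = 0.5),
  e = f2 + rnorm(300, sd = 0.5), f = f2 + rnorm(300, sd = 0.5))
orth  <- factanal(items, factors = 2, rotation = "varimax") # axes stay perpendicular
obliq <- factanal(items, factors = 2, rotation = "promax")  # axes may correlate
```

With varimax the factor axes are forced to remain uncorrelated; promax (like oblimin) lets them correlate, which better matches data where the latent constructs themselves are related.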
You can rotate the 7-factor solution from above using either an orthogonal or an oblique rotation. Since we are creating a scale designed to assess a latent construct, we expect (and want) our factors to correlate with each other, so we will use an oblique rotation, which allows the factors to correlate. In the psych package, this oblique rotation is called oblimin.
Here we will conduct the first factor analysis with oblimin rotation and fm set to maximum likelihood:
round1 = fa(noout,nfactors=7,rotate="oblimin", fm="ml")
round1
## Factor Analysis using method = ml
## Call: fa(r = noout, nfactors = 7, rotate = "oblimin", fm = "ml")
## Standardized loadings (pattern matrix) based upon correlation matrix
## ML1 ML7 ML2 ML4 ML5 ML3 ML6 h2 u2 com
## MMS1 0.10 0.40 -0.05 -0.03 0.06 0.08 0.36 0.44 0.56 2.3
## MMS2 0.14 0.60 -0.10 -0.13 -0.04 -0.04 0.14 0.46 0.54 1.4
## MMS3 0.03 -0.25 0.52 -0.11 -0.22 -0.15 0.11 0.56 0.44 2.3
## MMS4 -0.07 0.06 0.19 -0.15 0.09 0.12 0.54 0.36 0.64 1.7
## MMS5_rc -0.02 0.43 -0.25 0.02 0.27 0.02 -0.03 0.39 0.61 2.4
## MMS6_rc -0.04 0.12 -0.31 0.06 0.28 -0.21 0.05 0.22 0.78 3.3
## MMS7 0.06 0.31 0.55 0.06 0.02 0.00 -0.24 0.42 0.58 2.1
## MMS8 0.04 0.62 0.10 0.07 0.11 0.20 0.01 0.66 0.34 1.4
## MMS9_rc -0.02 -0.40 -0.11 0.31 0.27 -0.04 0.13 0.28 0.72 3.2
## MMS10 0.04 0.59 0.16 -0.11 -0.03 0.09 -0.02 0.42 0.58 1.3
## MMS11 -0.28 -0.10 0.22 -0.09 -0.33 0.05 0.15 0.36 0.64 3.7
## MMS12 0.03 0.11 0.47 0.28 0.15 0.13 -0.08 0.43 0.57 2.3
## MMS13 -0.25 -0.27 0.47 -0.06 -0.16 0.00 0.18 0.56 0.44 2.9
## MMS14_rc 0.14 0.02 -0.09 0.04 0.29 -0.09 -0.07 0.13 0.87 2.1
## MMS15_rc -0.25 -0.09 -0.16 0.43 0.29 -0.02 -0.14 0.34 0.66 3.2
## MMS16 -0.22 0.02 0.23 0.36 -0.28 -0.03 0.13 0.32 0.68 3.8
## MMS17 0.26 0.03 0.11 0.46 -0.03 0.19 0.00 0.55 0.45 2.1
## MMS18 0.12 0.11 0.36 0.43 0.06 -0.03 0.02 0.47 0.53 2.3
## MMS19 0.05 -0.12 0.13 0.45 0.02 0.00 0.33 0.40 0.60 2.2
## MMS20 0.03 0.13 0.57 0.11 0.15 0.02 0.00 0.39 0.61 1.3
## MMS21 0.14 0.59 0.00 0.10 0.04 0.14 0.04 0.67 0.33 1.3
## MMS22 0.09 -0.06 0.50 0.12 0.11 -0.05 0.14 0.35 0.65 1.5
## MMS23 0.25 0.32 0.26 -0.03 0.17 0.02 0.01 0.36 0.64 3.5
## MMS24 0.25 0.32 -0.02 0.26 0.10 0.14 0.09 0.58 0.42 3.7
## MMS25_rc 0.03 -0.14 0.01 -0.05 0.26 -0.12 -0.01 0.10 0.90 2.2
## MMS26 -0.12 0.06 0.54 0.06 -0.08 -0.08 0.23 0.42 0.58 1.6
## MMS27_rc 0.15 -0.05 0.06 -0.11 0.48 -0.22 0.25 0.30 0.70 2.4
## MMS28 0.41 0.24 -0.02 0.27 -0.03 0.16 0.00 0.67 0.33 2.8
## MMS29 0.32 0.25 0.11 0.08 0.13 -0.04 0.30 0.48 0.52 3.7
## MMS30 0.37 0.32 -0.08 0.22 0.08 0.06 0.05 0.61 0.39 3.0
## MMS31 -0.02 0.53 -0.13 0.18 -0.15 0.16 0.12 0.51 0.49 1.9
## MMS32 0.31 0.25 0.01 0.33 -0.08 -0.09 0.20 0.50 0.50 3.9
## MMS33 -0.21 -0.27 0.24 0.36 -0.12 0.05 0.07 0.32 0.68 3.8
## MMS34_rc -0.10 -0.20 -0.28 0.03 0.36 0.05 0.11 0.24 0.76 3.0
## MMS35 0.13 0.46 -0.14 0.12 0.02 0.29 0.02 0.66 0.34 2.3
## MMS36 0.27 0.32 -0.07 0.17 -0.01 0.27 0.04 0.63 0.37 3.7
## MMS37 0.05 -0.10 0.20 -0.02 -0.08 0.04 0.47 0.33 0.67 1.6
## MMS38 0.04 0.03 -0.01 -0.02 -0.04 0.82 -0.02 0.72 0.28 1.0
## MMS39 0.41 0.24 0.05 0.34 0.06 0.17 -0.06 0.76 0.24 3.2
## MMS40_rc 0.02 -0.09 0.18 -0.13 0.56 -0.01 0.30 0.37 0.63 2.0
## MMS41 0.15 -0.04 0.16 0.52 -0.09 0.00 -0.04 0.38 0.62 1.4
## MMS42 0.04 -0.11 0.59 0.08 -0.01 0.02 -0.07 0.37 0.63 1.1
## MMS43 0.00 0.64 -0.08 0.05 -0.04 0.20 0.11 0.64 0.36 1.3
## MMS44 -0.11 0.05 -0.02 0.02 0.03 0.90 0.11 0.81 0.19 1.1
## MMS45 0.31 0.14 -0.02 0.13 0.12 0.10 0.37 0.52 0.48 3.0
## MMS46 0.14 0.12 0.11 0.67 -0.04 -0.04 -0.02 0.61 0.39 1.2
## MMS47 0.10 -0.03 0.01 0.29 -0.04 0.06 0.44 0.38 0.62 2.0
## MMS48 0.22 0.25 0.06 0.08 0.02 0.26 0.01 0.41 0.59 3.3
## MMS49 0.46 0.11 -0.03 0.15 0.10 0.14 0.13 0.56 0.44 1.9
## MMS50 0.43 0.21 -0.05 0.25 0.03 0.11 0.16 0.68 0.32 2.7
## MMS51 -0.03 -0.12 0.55 0.05 0.00 0.02 0.08 0.35 0.65 1.2
## MMS52 0.00 0.26 -0.11 0.01 -0.10 0.13 0.62 0.55 0.45 1.6
## MMS53 0.02 -0.08 0.12 0.28 -0.05 0.01 0.32 0.25 0.75 2.5
## MMS54 0.43 0.17 0.08 0.07 0.17 0.22 0.01 0.59 0.41 2.4
## MMS55 0.39 0.29 0.11 0.21 0.15 0.16 -0.10 0.70 0.30 3.7
## MMS56_rc 0.08 0.12 -0.10 0.07 0.48 -0.02 -0.20 0.39 0.61 1.7
## MMS57_rc -0.01 0.01 -0.04 0.02 0.63 0.12 -0.07 0.46 0.54 1.1
## MMS58 0.51 0.28 0.19 0.01 0.10 0.10 -0.13 0.64 0.36 2.3
## MMS59 0.53 0.09 0.08 -0.02 -0.20 -0.23 0.23 0.42 0.58 2.2
## MMS60 -0.02 -0.15 0.19 0.31 -0.23 -0.07 0.35 0.39 0.61 3.9
## MMS61 0.43 -0.08 -0.03 -0.05 -0.16 0.44 -0.02 0.44 0.56 2.4
## MMS62 0.57 0.09 -0.01 0.00 0.01 0.22 -0.03 0.56 0.44 1.3
## MMS63 0.46 -0.11 -0.03 0.34 0.10 0.31 -0.05 0.63 0.37 3.0
## MMS64 0.54 0.10 0.00 0.07 -0.02 0.17 0.10 0.57 0.43 1.4
## MMS65 0.24 0.05 0.07 -0.13 0.00 0.64 0.00 0.61 0.39 1.4
## MMS66 0.61 0.08 -0.02 0.09 0.01 0.08 0.06 0.57 0.43 1.1
## MMS67 0.20 0.14 0.01 0.05 -0.14 0.13 0.41 0.41 0.59 2.3
## MMS68_rc -0.04 0.01 0.00 -0.04 0.61 0.09 -0.04 0.39 0.61 1.1
## MMS69 0.70 -0.12 -0.05 0.01 -0.17 -0.13 0.23 0.49 0.51 1.5
## MMS70_rc -0.02 -0.05 0.10 -0.06 0.51 -0.28 -0.08 0.36 0.64 1.8
## MMS71 0.23 0.05 0.02 -0.03 0.18 0.14 0.46 0.42 0.58 2.1
## MMS72 0.29 -0.01 0.06 -0.33 0.01 0.09 0.29 0.26 0.74 3.2
## MMS73 0.55 0.12 0.08 -0.02 -0.06 0.01 0.06 0.43 0.57 1.2
## MMS74 0.60 0.11 0.08 0.01 0.11 0.04 0.09 0.56 0.44 1.2
## MMS75_rc -0.10 -0.16 -0.21 0.24 0.37 -0.02 -0.02 0.25 0.75 3.1
## MMS76 0.68 -0.02 0.04 0.10 0.05 0.15 -0.04 0.63 0.37 1.2
## MMS77 0.71 0.11 -0.05 0.00 0.15 0.08 -0.04 0.71 0.29 1.2
## MMS78 0.23 0.11 -0.16 -0.04 -0.28 0.12 0.10 0.23 0.77 3.8
## MMS79 0.43 0.16 -0.06 0.30 0.10 0.16 0.02 0.69 0.31 2.7
##
## ML1 ML7 ML2 ML4 ML5 ML3 ML6
## SS loadings 9.65 6.83 3.99 4.39 3.67 4.94 3.57
## Proportion Var 0.12 0.09 0.05 0.06 0.05 0.06 0.05
## Cumulative Var 0.12 0.21 0.26 0.31 0.36 0.42 0.47
## Proportion Explained 0.26 0.18 0.11 0.12 0.10 0.13 0.10
## Cumulative Proportion 0.26 0.44 0.55 0.67 0.77 0.90 1.00
##
## With factor correlations of
## ML1 ML7 ML2 ML4 ML5 ML3 ML6
## ML1 1.00 0.57 0.08 0.36 0.15 0.46 0.25
## ML7 0.57 1.00 -0.05 0.24 0.15 0.53 0.09
## ML2 0.08 -0.05 1.00 0.15 -0.14 -0.04 0.19
## ML4 0.36 0.24 0.15 1.00 0.11 0.28 0.10
## ML5 0.15 0.15 -0.14 0.11 1.00 0.09 -0.14
## ML3 0.46 0.53 -0.04 0.28 0.09 1.00 0.11
## ML6 0.25 0.09 0.19 0.10 -0.14 0.11 1.00
##
## Mean item complexity = 2.2
## Test of the hypothesis that 7 factors are sufficient.
##
## The degrees of freedom for the null model are 3081 and the objective function was 57.48 with Chi Square of 12061.32
## The degrees of freedom for the model are 2549 and the objective function was 18.3
##
## The root mean square of the residuals (RMSR) is 0.04
## The df corrected root mean square of the residuals is 0.04
##
## The harmonic number of observations is 238 with the empirical chi square 2445.23 with prob < 0.93
## The total number of observations was 238 with Likelihood Chi Square = 3755.38 with prob < 7.7e-50
##
## Tucker Lewis Index of factoring reliability = 0.833
## RMSEA index = 0.054 and the 90 % confidence intervals are 0.042 NA
## BIC = -10193.44
## Fit based upon off diagonal values = 0.98
## Measures of factor score adequacy
## ML1 ML7 ML2 ML4 ML5
## Correlation of (regression) scores with factors 0.97 0.95 0.93 0.93 0.92
## Multiple R square of scores with factors 0.93 0.91 0.86 0.86 0.85
## Minimum correlation of possible factor scores 0.87 0.81 0.73 0.73 0.70
## ML3 ML6
## Correlation of (regression) scores with factors 0.96 0.92
## Multiple R square of scores with factors 0.91 0.84
## Minimum correlation of possible factor scores 0.82 0.69
We can now identify items that load on multiple factors, or on no factor at all, and eliminate them from the analysis. We repeat this process until every remaining item loads onto a single factor.
In the output above, each row represents one of our 79 variables. For each row, you want the item to load onto only one factor (ML1 to ML7) at a level of plus or minus 0.30 or greater. Items that load onto more than one factor, or do not load onto any factor, are eliminated in the next round. If a row meets this criterion, keep it; otherwise, drop it from the analysis.
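The keep-or-drop rule just described can be automated. The sketch below applies the 0.30 cutoff to a tiny made-up loadings matrix (the items and values are invented for illustration); in the module you would apply the same logic to round1$loadings.

```r
# Toy pattern matrix: rows are items, columns are factors
L <- matrix(c(0.45, 0.05,   # loads cleanly on one factor -> keep
              0.35, 0.40,   # cross-loads on two factors  -> drop
              0.10, 0.12),  # loads on no factor          -> drop
            ncol = 2, byrow = TRUE,
            dimnames = list(c("item1", "item2", "item3"), c("F1", "F2")))
n_loads <- rowSums(abs(L) >= 0.30)  # loadings at |.30| or above, per item
keep <- rownames(L)[n_loads == 1]   # keep items loading on exactly one factor
keep                                # "item1"
```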
Round 2! Conduct the second factor analysis with oblimin rotation and fm set to maximum likelihood. Notice that we are now removing the offending columns (items) from our analysis.
round2 = fa(noout[ ,-c(30,55,7,1,9,35,29,39,45,61,63,79,14,15,18,19,25,40,48,60,78)],nfactors=7,rotate="oblimin", fm="ml")
round2
## Factor Analysis using method = ml
## Call: fa(r = noout[, -c(30, 55, 7, 1, 9, 35, 29, 39, 45, 61, 63, 79,
## 14, 15, 18, 19, 25, 40, 48, 60, 78)], nfactors = 7, rotate = "oblimin",
## fm = "ml")
## Standardized loadings (pattern matrix) based upon correlation matrix
## ML1 ML6 ML2 ML5 ML4 ML3 ML7 h2 u2 com
## MMS2 0.11 0.66 -0.11 -0.12 -0.09 -0.08 0.12 0.49 0.51 1.3
## MMS3 0.11 -0.18 0.56 -0.13 -0.26 -0.13 0.03 0.58 0.42 2.0
## MMS4 -0.05 -0.01 0.20 -0.11 0.10 0.09 0.58 0.38 0.62 1.4
## MMS5_rc -0.05 0.40 -0.26 -0.01 0.29 0.00 0.01 0.40 0.60 2.6
## MMS6_rc -0.08 0.10 -0.27 0.00 0.32 -0.22 0.06 0.22 0.78 3.3
## MMS8 0.00 0.69 0.08 0.10 0.09 0.14 -0.02 0.69 0.31 1.2
## MMS10 0.01 0.64 0.10 -0.07 -0.04 0.05 0.01 0.41 0.59 1.1
## MMS11 -0.19 -0.13 0.26 -0.09 -0.36 0.07 0.09 0.38 0.62 3.2
## MMS12 0.03 0.16 0.47 0.28 0.19 0.11 -0.12 0.44 0.56 2.6
## MMS13 -0.20 -0.19 0.51 -0.07 -0.19 -0.02 0.14 0.55 0.45 2.2
## MMS16 -0.25 0.02 0.23 0.41 -0.26 -0.05 0.09 0.34 0.66 3.3
## MMS17 0.17 -0.02 0.04 0.56 0.05 0.19 0.01 0.56 0.44 1.5
## MMS20 0.03 0.19 0.54 0.10 0.16 0.00 0.02 0.36 0.64 1.5
## MMS21 0.13 0.68 0.01 0.07 0.02 0.10 -0.02 0.72 0.28 1.1
## MMS22 0.12 0.00 0.54 0.06 0.08 -0.04 0.10 0.35 0.65 1.3
## MMS23 0.18 0.35 0.24 0.04 0.19 0.02 0.03 0.36 0.64 3.0
## MMS24 0.19 0.26 -0.03 0.28 0.17 0.15 0.11 0.57 0.43 4.5
## MMS26 -0.09 0.15 0.57 0.03 -0.11 -0.11 0.18 0.43 0.57 1.6
## MMS27_rc 0.09 -0.02 0.11 -0.10 0.48 -0.24 0.25 0.28 0.72 2.5
## MMS28 0.30 0.24 -0.05 0.32 0.03 0.16 0.06 0.64 0.36 3.6
## MMS31 -0.05 0.52 -0.14 0.18 -0.13 0.13 0.10 0.49 0.51 1.8
## MMS32 0.20 0.14 -0.08 0.48 -0.02 -0.08 0.26 0.54 0.46 2.3
## MMS33 -0.17 -0.19 0.31 0.30 -0.14 0.02 -0.05 0.29 0.71 3.8
## MMS34_rc -0.04 -0.16 -0.17 -0.08 0.31 0.03 0.02 0.16 0.84 2.5
## MMS36 0.23 0.32 -0.09 0.17 0.01 0.26 0.04 0.62 0.38 3.7
## MMS37 0.03 -0.06 0.24 0.01 -0.09 0.01 0.46 0.34 0.66 1.6
## MMS38 0.05 0.04 -0.03 -0.01 -0.06 0.82 -0.01 0.73 0.27 1.0
## MMS41 0.11 -0.09 0.12 0.60 -0.07 0.02 -0.08 0.41 0.59 1.3
## MMS42 0.06 -0.03 0.60 0.08 -0.01 0.03 -0.10 0.39 0.61 1.1
## MMS43 0.04 0.58 -0.10 0.05 -0.05 0.20 0.08 0.60 0.40 1.4
## MMS44 -0.08 0.03 -0.01 0.01 0.02 0.89 0.10 0.82 0.18 1.0
## MMS46 0.03 0.06 0.05 0.80 0.02 -0.06 -0.03 0.68 0.32 1.0
## MMS47 0.06 -0.08 0.00 0.37 -0.02 0.03 0.40 0.37 0.63 2.1
## MMS49 0.40 0.08 -0.04 0.21 0.12 0.15 0.13 0.55 0.45 2.5
## MMS50 0.33 0.15 -0.09 0.34 0.09 0.12 0.21 0.68 0.32 3.7
## MMS51 -0.02 -0.10 0.56 0.10 0.02 0.04 0.05 0.37 0.63 1.2
## MMS52 -0.02 0.17 -0.10 0.07 -0.11 0.10 0.64 0.56 0.44 1.3
## MMS53 0.03 -0.10 0.13 0.29 -0.02 -0.01 0.30 0.23 0.77 2.6
## MMS54 0.41 0.13 0.03 0.14 0.19 0.24 0.03 0.60 0.40 2.7
## MMS56_rc 0.06 0.08 -0.11 0.05 0.50 -0.02 -0.16 0.38 0.62 1.4
## MMS57_rc -0.04 0.00 -0.03 0.00 0.66 0.09 -0.01 0.46 0.54 1.1
## MMS58 0.47 0.33 0.14 0.06 0.10 0.11 -0.12 0.64 0.36 2.4
## MMS59 0.59 0.11 0.08 -0.01 -0.23 -0.20 0.15 0.46 0.54 1.8
## MMS62 0.47 0.09 -0.06 0.10 0.07 0.21 0.06 0.54 0.46 1.7
## MMS64 0.43 0.06 -0.05 0.18 0.02 0.18 0.17 0.55 0.45 2.2
## MMS65 0.24 0.13 0.07 -0.11 0.00 0.61 -0.01 0.60 0.40 1.5
## MMS66 0.52 0.09 -0.06 0.15 0.02 0.09 0.11 0.56 0.44 1.4
## MMS67 0.13 0.22 0.02 0.06 -0.14 0.07 0.42 0.43 0.57 2.1
## MMS68_rc -0.05 -0.08 -0.03 0.02 0.63 0.07 0.02 0.40 0.60 1.1
## MMS69 0.68 -0.11 -0.04 0.08 -0.18 -0.09 0.14 0.48 0.52 1.4
## MMS70_rc 0.00 -0.05 0.14 -0.11 0.50 -0.27 -0.11 0.35 0.65 2.0
## MMS71 0.22 0.00 0.05 -0.02 0.19 0.12 0.48 0.43 0.57 1.9
## MMS72 0.28 -0.06 0.04 -0.26 0.03 0.09 0.36 0.26 0.74 3.1
## MMS73 0.59 0.15 0.07 -0.02 -0.09 0.05 -0.01 0.46 0.54 1.2
## MMS74 0.60 0.15 0.09 0.06 0.09 0.05 0.00 0.59 0.41 1.3
## MMS75_rc -0.09 -0.11 -0.15 0.13 0.37 -0.06 -0.04 0.19 0.81 2.0
## MMS76 0.62 0.02 0.02 0.15 0.06 0.17 -0.04 0.63 0.37 1.3
## MMS77 0.66 0.12 -0.06 0.03 0.18 0.10 -0.01 0.72 0.28 1.3
##
## ML1 ML6 ML2 ML5 ML4 ML3 ML7
## SS loadings 6.07 5.06 3.41 3.72 3.03 3.64 2.75
## Proportion Var 0.10 0.09 0.06 0.06 0.05 0.06 0.05
## Cumulative Var 0.10 0.19 0.25 0.31 0.37 0.43 0.48
## Proportion Explained 0.22 0.18 0.12 0.13 0.11 0.13 0.10
## Cumulative Proportion 0.22 0.40 0.53 0.66 0.77 0.90 1.00
##
## With factor correlations of
## ML1 ML6 ML2 ML5 ML4 ML3 ML7
## ML1 1.00 0.55 0.01 0.41 0.17 0.37 0.32
## ML6 0.55 1.00 -0.15 0.36 0.22 0.54 0.22
## ML2 0.01 -0.15 1.00 0.14 -0.19 -0.11 0.13
## ML5 0.41 0.36 0.14 1.00 0.08 0.33 0.17
## ML4 0.17 0.22 -0.19 0.08 1.00 0.15 -0.16
## ML3 0.37 0.54 -0.11 0.33 0.15 1.00 0.20
## ML7 0.32 0.22 0.13 0.17 -0.16 0.20 1.00
##
## Mean item complexity = 2
## Test of the hypothesis that 7 factors are sufficient.
##
## The degrees of freedom for the null model are 1653 and the objective function was 35.72 with Chi Square of 7745.04
## The degrees of freedom for the model are 1268 and the objective function was 8.59
##
## The root mean square of the residuals (RMSR) is 0.04
## The df corrected root mean square of the residuals is 0.04
##
## The harmonic number of observations is 238 with the empirical chi square 1123.17 with prob < 1
## The total number of observations was 238 with Likelihood Chi Square = 1823.43 with prob < 9.3e-23
##
## Tucker Lewis Index of factoring reliability = 0.878
## RMSEA index = 0.051 and the 90 % confidence intervals are 0.039 NA
## BIC = -5115.4
## Fit based upon off diagonal values = 0.98
## Measures of factor score adequacy
## ML1 ML6 ML2 ML5 ML4
## Correlation of (regression) scores with factors 0.95 0.95 0.92 0.93 0.91
## Multiple R square of scores with factors 0.91 0.90 0.85 0.86 0.82
## Minimum correlation of possible factor scores 0.81 0.79 0.70 0.72 0.65
## ML3 ML7
## Correlation of (regression) scores with factors 0.95 0.90
## Multiple R square of scores with factors 0.90 0.81
## Minimum correlation of possible factor scores 0.81 0.61
Round 3!
round3 = fa(noout[ ,-c(30,55,7,1,9,35,29,39,45,61,63,79,14,15,18,19,25,40,48,60,78,50,58,24)],nfactors=7,rotate="oblimin", fm="ml")
round3
## Factor Analysis using method = ml
## Call: fa(r = noout[, -c(30, 55, 7, 1, 9, 35, 29, 39, 45, 61, 63, 79,
## 14, 15, 18, 19, 25, 40, 48, 60, 78, 50, 58, 24)], nfactors = 7,
## rotate = "oblimin", fm = "ml")
## Standardized loadings (pattern matrix) based upon correlation matrix
## ML1 ML6 ML2 ML5 ML4 ML3 ML7 h2 u2 com
## MMS2 0.12 0.65 -0.12 -0.09 -0.12 -0.08 0.12 0.49 0.51 1.4
## MMS3 0.11 -0.19 0.55 -0.26 -0.12 -0.13 0.04 0.58 0.42 2.1
## MMS4 -0.07 -0.03 0.17 0.10 -0.08 0.08 0.64 0.43 0.57 1.3
## MMS5_rc -0.06 0.40 -0.27 0.29 0.00 0.00 0.03 0.40 0.60 2.7
## MMS6_rc -0.05 0.12 -0.25 0.31 0.00 -0.23 0.02 0.21 0.79 3.3
## MMS8 0.00 0.69 0.08 0.10 0.10 0.13 -0.01 0.69 0.31 1.2
## MMS10 0.01 0.63 0.10 -0.04 -0.08 0.06 0.01 0.40 0.60 1.1
## MMS11 -0.18 -0.13 0.26 -0.38 -0.11 0.07 0.07 0.39 0.61 3.0
## MMS12 0.05 0.19 0.50 0.19 0.24 0.12 -0.15 0.46 0.54 2.5
## MMS13 -0.19 -0.19 0.51 -0.20 -0.08 -0.03 0.13 0.55 0.45 2.2
## MMS16 -0.24 0.02 0.23 -0.25 0.41 -0.05 0.09 0.34 0.66 3.2
## MMS17 0.18 -0.01 0.04 0.05 0.55 0.20 0.01 0.56 0.44 1.5
## MMS20 0.01 0.18 0.52 0.17 0.12 0.00 0.06 0.35 0.65 1.6
## MMS21 0.13 0.69 0.02 0.03 0.06 0.09 -0.01 0.72 0.28 1.1
## MMS22 0.12 -0.01 0.52 0.08 0.08 -0.05 0.12 0.35 0.65 1.4
## MMS23 0.17 0.35 0.24 0.19 0.04 0.03 0.03 0.35 0.65 3.0
## MMS26 -0.07 0.16 0.57 -0.12 0.03 -0.12 0.16 0.43 0.57 1.6
## MMS27_rc 0.11 0.00 0.12 0.45 -0.11 -0.25 0.23 0.26 0.74 2.6
## MMS28 0.30 0.26 -0.04 0.04 0.29 0.17 0.05 0.62 0.38 3.7
## MMS31 -0.02 0.53 -0.12 -0.13 0.17 0.13 0.06 0.49 0.51 1.6
## MMS32 0.20 0.15 -0.09 -0.02 0.48 -0.08 0.25 0.54 0.46 2.3
## MMS33 -0.16 -0.20 0.30 -0.14 0.30 0.02 -0.04 0.29 0.71 3.7
## MMS34_rc -0.02 -0.14 -0.15 0.29 -0.10 0.03 -0.02 0.15 0.85 2.4
## MMS36 0.24 0.35 -0.07 0.01 0.15 0.26 0.02 0.62 0.38 3.3
## MMS37 0.03 -0.06 0.23 -0.10 0.02 0.00 0.45 0.33 0.67 1.7
## MMS38 0.05 0.04 -0.03 -0.05 -0.01 0.82 0.00 0.73 0.27 1.0
## MMS41 0.11 -0.10 0.11 -0.06 0.61 0.02 -0.06 0.42 0.58 1.2
## MMS42 0.05 -0.04 0.61 0.00 0.07 0.03 -0.09 0.39 0.61 1.1
## MMS43 0.06 0.60 -0.07 -0.06 0.03 0.19 0.04 0.61 0.39 1.3
## MMS44 -0.07 0.05 -0.01 0.01 0.01 0.87 0.10 0.81 0.19 1.0
## MMS46 0.03 0.06 0.03 0.03 0.82 -0.06 -0.02 0.71 0.29 1.0
## MMS47 0.09 -0.05 0.00 -0.04 0.36 0.02 0.36 0.34 0.66 2.2
## MMS49 0.42 0.12 -0.02 0.12 0.19 0.16 0.09 0.55 0.45 2.2
## MMS51 -0.02 -0.09 0.57 0.01 0.08 0.04 0.04 0.37 0.63 1.1
## MMS52 -0.02 0.17 -0.13 -0.12 0.10 0.08 0.64 0.58 0.42 1.4
## MMS53 0.05 -0.09 0.13 -0.04 0.28 -0.02 0.27 0.21 0.79 2.7
## MMS54 0.38 0.13 0.01 0.20 0.16 0.24 0.06 0.59 0.41 3.1
## MMS56_rc 0.06 0.10 -0.10 0.51 0.05 -0.02 -0.15 0.38 0.62 1.4
## MMS57_rc -0.03 0.00 -0.02 0.66 0.00 0.09 -0.01 0.46 0.54 1.0
## MMS59 0.61 0.12 0.09 -0.23 -0.02 -0.20 0.11 0.47 0.53 1.8
## MMS62 0.46 0.09 -0.06 0.08 0.10 0.22 0.07 0.53 0.47 1.8
## MMS64 0.43 0.09 -0.03 0.03 0.16 0.19 0.15 0.55 0.45 2.1
## MMS65 0.24 0.13 0.07 0.01 -0.10 0.61 0.00 0.60 0.40 1.5
## MMS66 0.52 0.11 -0.05 0.02 0.14 0.10 0.10 0.56 0.44 1.4
## MMS67 0.15 0.23 0.02 -0.15 0.07 0.06 0.39 0.42 0.58 2.4
## MMS68_rc -0.06 -0.08 -0.05 0.64 0.04 0.06 0.05 0.40 0.60 1.1
## MMS69 0.68 -0.11 -0.05 -0.17 0.09 -0.08 0.13 0.49 0.51 1.3
## MMS70_rc 0.01 -0.03 0.17 0.49 -0.14 -0.26 -0.13 0.36 0.64 2.2
## MMS71 0.20 -0.01 0.03 0.20 0.00 0.11 0.53 0.46 0.54 1.7
## MMS72 0.26 -0.06 0.01 0.04 -0.23 0.08 0.40 0.27 0.73 2.6
## MMS73 0.60 0.17 0.09 -0.09 -0.03 0.05 -0.04 0.48 0.52 1.3
## MMS74 0.60 0.15 0.09 0.11 0.07 0.05 0.00 0.60 0.40 1.3
## MMS75_rc -0.09 -0.11 -0.15 0.36 0.12 -0.06 -0.04 0.18 0.82 2.1
## MMS76 0.62 0.03 0.03 0.08 0.15 0.18 -0.04 0.64 0.36 1.3
## MMS77 0.64 0.12 -0.07 0.19 0.04 0.10 0.01 0.70 0.30 1.3
##
## ML1 ML6 ML2 ML5 ML4 ML3 ML7
## SS loadings 5.41 4.74 3.36 2.98 3.30 3.41 2.63
## Proportion Var 0.10 0.09 0.06 0.05 0.06 0.06 0.05
## Cumulative Var 0.10 0.18 0.25 0.30 0.36 0.42 0.47
## Proportion Explained 0.21 0.18 0.13 0.12 0.13 0.13 0.10
## Cumulative Proportion 0.21 0.39 0.52 0.64 0.77 0.90 1.00
##
## With factor correlations of
## ML1 ML6 ML2 ML5 ML4 ML3 ML7
## ML1 1.00 0.54 0.01 0.14 0.39 0.35 0.32
## ML6 0.54 1.00 -0.16 0.21 0.35 0.54 0.22
## ML2 0.01 -0.16 1.00 -0.19 0.15 -0.11 0.14
## ML5 0.14 0.21 -0.19 1.00 0.06 0.15 -0.16
## ML4 0.39 0.35 0.15 0.06 1.00 0.32 0.15
## ML3 0.35 0.54 -0.11 0.15 0.32 1.00 0.20
## ML7 0.32 0.22 0.14 -0.16 0.15 0.20 1.00
##
## Mean item complexity = 1.9
## Test of the hypothesis that 7 factors are sufficient.
##
## The degrees of freedom for the null model are 1485 and the objective function was 31.74 with Chi Square of 6913.95
## The degrees of freedom for the model are 1121 and the objective function was 7.36
##
## The root mean square of the residuals (RMSR) is 0.04
## The df corrected root mean square of the residuals is 0.04
##
## The harmonic number of observations is 238 with the empirical chi square 1020.93 with prob < 0.98
## The total number of observations was 238 with Likelihood Chi Square = 1569.73 with prob < 1.4e-17
##
## Tucker Lewis Index of factoring reliability = 0.887
## RMSEA index = 0.048 and the 90 % confidence intervals are 0.036 NA
## BIC = -4564.68
## Fit based upon off diagonal values = 0.98
## Measures of factor score adequacy
## ML1 ML6 ML2 ML5 ML4
## Correlation of (regression) scores with factors 0.95 0.95 0.92 0.91 0.93
## Multiple R square of scores with factors 0.90 0.90 0.85 0.82 0.86
## Minimum correlation of possible factor scores 0.80 0.79 0.69 0.65 0.71
## ML3 ML7
## Correlation of (regression) scores with factors 0.95 0.90
## Multiple R square of scores with factors 0.90 0.80
## Minimum correlation of possible factor scores 0.80 0.61
Round 4!
round4 = fa(noout[ ,-c(30,55,7,1,9,35,29,39,45,61,63,79,14,15,18,19,25,40,48,60,78,50,58,24,28,33,34,47,53)],nfactors=7,rotate="oblimin", fm="ml")
round4
## Factor Analysis using method = ml
## Call: fa(r = noout[, -c(30, 55, 7, 1, 9, 35, 29, 39, 45, 61, 63, 79,
## 14, 15, 18, 19, 25, 40, 48, 60, 78, 50, 58, 24, 28, 33, 34,
## 47, 53)], nfactors = 7, rotate = "oblimin", fm = "ml")
## Standardized loadings (pattern matrix) based upon correlation matrix
## ML1 ML6 ML2 ML5 ML3 ML4 ML7 h2 u2 com
## MMS2 0.12 0.63 -0.12 -0.08 -0.07 -0.09 0.11 0.47 0.53 1.3
## MMS3 0.10 -0.21 0.53 -0.26 -0.13 -0.10 0.06 0.56 0.44 2.2
## MMS4 -0.11 -0.02 0.15 0.06 0.03 -0.05 0.72 0.51 0.49 1.2
## MMS5_rc -0.07 0.39 -0.27 0.31 -0.01 0.01 0.06 0.40 0.60 2.9
## MMS6_rc -0.03 0.16 -0.23 0.27 -0.23 -0.02 -0.04 0.20 0.80 3.7
## MMS8 0.00 0.69 0.08 0.11 0.13 0.10 0.01 0.69 0.31 1.2
## MMS10 0.02 0.62 0.11 -0.03 0.07 -0.08 0.00 0.40 0.60 1.1
## MMS11 -0.15 -0.12 0.26 -0.41 0.06 -0.11 0.04 0.39 0.61 2.5
## MMS12 0.05 0.18 0.52 0.20 0.13 0.22 -0.18 0.48 0.52 2.5
## MMS13 -0.15 -0.18 0.51 -0.24 -0.04 -0.10 0.09 0.54 0.46 2.1
## MMS16 -0.22 0.02 0.23 -0.29 -0.06 0.40 0.05 0.32 0.68 3.3
## MMS17 0.16 -0.01 0.05 0.05 0.20 0.55 0.00 0.54 0.46 1.5
## MMS20 -0.01 0.14 0.52 0.17 0.00 0.16 0.09 0.36 0.64 1.7
## MMS21 0.12 0.67 0.01 0.06 0.09 0.07 0.02 0.71 0.29 1.1
## MMS22 0.12 -0.01 0.51 0.09 -0.06 0.06 0.15 0.34 0.66 1.4
## MMS23 0.15 0.33 0.24 0.21 0.04 0.04 0.05 0.35 0.65 3.2
## MMS26 -0.03 0.18 0.58 -0.16 -0.12 0.00 0.10 0.43 0.57 1.5
## MMS27_rc 0.13 0.04 0.12 0.40 -0.26 -0.11 0.18 0.22 0.78 2.9
## MMS31 -0.02 0.52 -0.12 -0.14 0.13 0.18 0.05 0.50 0.50 1.7
## MMS32 0.18 0.14 -0.08 -0.05 -0.08 0.51 0.23 0.53 0.47 2.0
## MMS36 0.23 0.35 -0.06 0.01 0.28 0.15 0.01 0.61 0.39 3.3
## MMS37 0.06 -0.03 0.24 -0.17 0.00 0.01 0.37 0.29 0.71 2.3
## MMS38 0.03 0.03 -0.03 -0.04 0.84 0.00 0.01 0.74 0.26 1.0
## MMS41 0.09 -0.12 0.11 -0.06 0.02 0.63 -0.04 0.43 0.57 1.2
## MMS42 0.03 -0.08 0.60 0.01 0.04 0.11 -0.04 0.40 0.60 1.1
## MMS43 0.07 0.61 -0.08 -0.06 0.19 0.02 0.02 0.63 0.37 1.3
## MMS44 -0.08 0.07 0.00 -0.01 0.86 0.00 0.07 0.79 0.21 1.0
## MMS46 0.00 0.04 0.04 0.02 -0.06 0.85 -0.02 0.72 0.28 1.0
## MMS49 0.39 0.13 -0.01 0.12 0.17 0.20 0.08 0.55 0.45 2.5
## MMS51 -0.04 -0.11 0.58 0.00 0.04 0.10 0.06 0.39 0.61 1.2
## MMS52 -0.01 0.19 -0.12 -0.21 0.05 0.12 0.59 0.56 0.44 1.7
## MMS54 0.35 0.12 0.01 0.22 0.25 0.18 0.08 0.60 0.40 3.6
## MMS56_rc 0.03 0.10 -0.10 0.55 -0.03 0.04 -0.09 0.39 0.61 1.2
## MMS57_rc -0.06 0.01 -0.03 0.65 0.08 0.00 0.03 0.45 0.55 1.1
## MMS59 0.65 0.14 0.09 -0.25 -0.18 -0.02 0.04 0.52 0.48 1.6
## MMS62 0.40 0.08 -0.06 0.11 0.23 0.13 0.11 0.53 0.47 2.4
## MMS64 0.39 0.08 -0.04 0.05 0.20 0.18 0.17 0.54 0.46 2.6
## MMS65 0.23 0.13 0.07 0.03 0.62 -0.10 0.01 0.60 0.40 1.5
## MMS66 0.47 0.09 -0.07 0.04 0.11 0.19 0.14 0.55 0.45 1.8
## MMS67 0.17 0.25 0.03 -0.21 0.07 0.06 0.31 0.40 0.60 3.7
## MMS68_rc -0.08 -0.06 -0.05 0.62 0.05 0.03 0.06 0.38 0.62 1.1
## MMS69 0.67 -0.10 -0.05 -0.16 -0.07 0.10 0.12 0.49 0.51 1.3
## MMS70_rc 0.02 -0.01 0.16 0.49 -0.27 -0.17 -0.12 0.36 0.64 2.2
## MMS71 0.15 -0.02 0.00 0.18 0.08 0.03 0.62 0.52 0.48 1.4
## MMS72 0.24 -0.06 0.00 0.03 0.07 -0.21 0.42 0.28 0.72 2.3
## MMS73 0.63 0.18 0.09 -0.09 0.08 -0.03 -0.10 0.52 0.48 1.3
## MMS74 0.58 0.16 0.08 0.13 0.06 0.08 0.01 0.61 0.39 1.4
## MMS75_rc -0.08 -0.07 -0.13 0.32 -0.07 0.09 -0.07 0.15 0.85 2.1
## MMS76 0.57 0.01 0.02 0.12 0.20 0.18 0.00 0.63 0.37 1.6
## MMS77 0.59 0.11 -0.09 0.24 0.11 0.07 0.07 0.71 0.29 1.6
##
## ML1 ML6 ML2 ML5 ML3 ML4 ML7
## SS loadings 4.78 4.46 3.17 2.98 3.43 3.04 2.39
## Proportion Var 0.10 0.09 0.06 0.06 0.07 0.06 0.05
## Cumulative Var 0.10 0.18 0.25 0.31 0.38 0.44 0.49
## Proportion Explained 0.20 0.18 0.13 0.12 0.14 0.13 0.10
## Cumulative Proportion 0.20 0.38 0.51 0.63 0.78 0.90 1.00
##
## With factor correlations of
## ML1 ML6 ML2 ML5 ML3 ML4 ML7
## ML1 1.00 0.50 0.03 0.11 0.33 0.40 0.32
## ML6 0.50 1.00 -0.14 0.20 0.54 0.38 0.24
## ML2 0.03 -0.14 1.00 -0.19 -0.11 0.14 0.12
## ML5 0.11 0.20 -0.19 1.00 0.16 0.10 -0.11
## ML3 0.33 0.54 -0.11 0.16 1.00 0.35 0.24
## ML4 0.40 0.38 0.14 0.10 0.35 1.00 0.16
## ML7 0.32 0.24 0.12 -0.11 0.24 0.16 1.00
##
## Mean item complexity = 1.9
## Test of the hypothesis that 7 factors are sufficient.
##
## The degrees of freedom for the null model are 1225 and the objective function was 28.13 with Chi Square of 6175.56
## The degrees of freedom for the model are 896 and the objective function was 5.59
##
## The root mean square of the residuals (RMSR) is 0.04
## The df corrected root mean square of the residuals is 0.04
##
## The harmonic number of observations is 238 with the empirical chi square 716.28 with prob < 1
## The total number of observations was 238 with Likelihood Chi Square = 1200.77 with prob < 3.3e-11
##
## Tucker Lewis Index of factoring reliability = 0.914
## RMSEA index = 0.045 and the 90 % confidence intervals are 0.032 NA
## BIC = -3702.38
## Fit based upon off diagonal values = 0.98
## Measures of factor score adequacy
## ML1 ML6 ML2 ML5 ML3
## Correlation of (regression) scores with factors 0.94 0.94 0.92 0.91 0.95
## Multiple R square of scores with factors 0.89 0.89 0.84 0.83 0.90
## Minimum correlation of possible factor scores 0.78 0.78 0.69 0.66 0.80
## ML4 ML7
## Correlation of (regression) scores with factors 0.92 0.89
## Multiple R square of scores with factors 0.85 0.80
## Minimum correlation of possible factor scores 0.71 0.60
Round 5! You should notice that the list of removed columns keeps growing - we are eliminating more variables with each round.
round5 = fa(noout[ ,-c(30,55,7,1,9,35,29,39,45,61,63,79,14,15,18,19,25,40,48,60,78,50,58,24,28,33,34,47,53,5,6)],nfactors=7,rotate="oblimin", fm="ml")
round5
## Factor Analysis using method = ml
## Call: fa(r = noout[, -c(30, 55, 7, 1, 9, 35, 29, 39, 45, 61, 63, 79,
## 14, 15, 18, 19, 25, 40, 48, 60, 78, 50, 58, 24, 28, 33, 34,
## 47, 53, 5, 6)], nfactors = 7, rotate = "oblimin", fm = "ml")
## Standardized loadings (pattern matrix) based upon correlation matrix
## ML1 ML6 ML2 ML5 ML3 ML4 ML7 h2 u2 com
## MMS2 0.12 0.64 -0.13 -0.08 -0.08 -0.09 0.10 0.46 0.54 1.3
## MMS3 0.10 -0.20 0.52 -0.25 -0.13 -0.10 0.07 0.54 0.46 2.2
## MMS4 -0.12 0.00 0.14 0.06 0.02 -0.05 0.73 0.52 0.48 1.2
## MMS8 -0.04 0.73 0.06 0.12 0.09 0.09 0.01 0.71 0.29 1.1
## MMS10 0.00 0.64 0.09 -0.02 0.05 -0.08 0.00 0.40 0.60 1.1
## MMS11 -0.15 -0.11 0.26 -0.40 0.05 -0.11 0.05 0.37 0.63 2.5
## MMS12 0.05 0.18 0.52 0.19 0.14 0.21 -0.18 0.48 0.52 2.4
## MMS13 -0.14 -0.19 0.51 -0.24 -0.04 -0.10 0.09 0.53 0.47 2.1
## MMS16 -0.21 0.02 0.24 -0.30 -0.06 0.39 0.05 0.32 0.68 3.3
## MMS17 0.15 -0.01 0.05 0.06 0.20 0.55 0.00 0.54 0.46 1.5
## MMS20 0.00 0.14 0.51 0.16 0.00 0.15 0.08 0.36 0.64 1.6
## MMS21 0.09 0.70 -0.01 0.07 0.07 0.06 0.02 0.72 0.28 1.1
## MMS22 0.12 -0.01 0.51 0.08 -0.06 0.06 0.14 0.34 0.66 1.4
## MMS23 0.15 0.33 0.23 0.20 0.04 0.04 0.05 0.34 0.66 3.1
## MMS26 -0.01 0.17 0.59 -0.17 -0.11 -0.01 0.09 0.44 0.56 1.5
## MMS27_rc 0.13 0.04 0.11 0.38 -0.25 -0.11 0.18 0.21 0.79 2.9
## MMS31 -0.02 0.51 -0.13 -0.15 0.14 0.18 0.04 0.49 0.51 1.8
## MMS32 0.18 0.14 -0.08 -0.05 -0.07 0.51 0.22 0.53 0.47 2.0
## MMS36 0.22 0.34 -0.06 0.02 0.28 0.15 0.00 0.61 0.39 3.2
## MMS37 0.08 -0.05 0.24 -0.18 0.02 0.01 0.36 0.29 0.71 2.5
## MMS38 0.03 0.01 -0.02 -0.05 0.85 -0.01 0.00 0.74 0.26 1.0
## MMS41 0.08 -0.10 0.11 -0.06 0.01 0.63 -0.04 0.42 0.58 1.2
## MMS42 0.05 -0.09 0.61 0.00 0.06 0.10 -0.05 0.40 0.60 1.1
## MMS43 0.06 0.62 -0.09 -0.06 0.19 0.02 0.02 0.62 0.38 1.3
## MMS44 -0.08 0.06 0.01 -0.02 0.87 0.00 0.06 0.79 0.21 1.0
## MMS46 0.00 0.05 0.04 0.02 -0.06 0.85 -0.02 0.72 0.28 1.0
## MMS49 0.38 0.13 -0.01 0.12 0.18 0.20 0.08 0.55 0.45 2.7
## MMS51 -0.03 -0.13 0.59 0.00 0.06 0.10 0.06 0.40 0.60 1.2
## MMS52 -0.01 0.20 -0.12 -0.21 0.05 0.12 0.58 0.56 0.44 1.7
## MMS54 0.33 0.13 0.00 0.23 0.24 0.18 0.08 0.60 0.40 4.0
## MMS56_rc 0.00 0.13 -0.12 0.56 -0.06 0.04 -0.08 0.40 0.60 1.3
## MMS57_rc -0.08 0.02 -0.04 0.65 0.07 0.00 0.04 0.45 0.55 1.1
## MMS59 0.65 0.14 0.09 -0.25 -0.17 -0.02 0.03 0.52 0.48 1.6
## MMS62 0.38 0.09 -0.07 0.13 0.23 0.13 0.12 0.53 0.47 2.7
## MMS64 0.38 0.08 -0.04 0.06 0.21 0.18 0.17 0.54 0.46 2.8
## MMS65 0.21 0.14 0.06 0.03 0.61 -0.10 0.01 0.60 0.40 1.4
## MMS66 0.45 0.10 -0.07 0.06 0.11 0.20 0.14 0.55 0.45 2.0
## MMS67 0.18 0.25 0.03 -0.21 0.08 0.06 0.30 0.40 0.60 3.8
## MMS68_rc -0.09 -0.05 -0.05 0.61 0.04 0.03 0.07 0.38 0.62 1.1
## MMS69 0.68 -0.11 -0.05 -0.15 -0.05 0.11 0.12 0.50 0.50 1.3
## MMS70_rc 0.01 0.01 0.14 0.50 -0.29 -0.17 -0.11 0.36 0.64 2.2
## MMS71 0.16 -0.02 0.00 0.18 0.09 0.03 0.61 0.52 0.48 1.4
## MMS72 0.24 -0.05 0.00 0.03 0.07 -0.21 0.43 0.28 0.72 2.2
## MMS73 0.63 0.17 0.10 -0.08 0.10 -0.03 -0.11 0.53 0.47 1.4
## MMS74 0.56 0.18 0.08 0.14 0.06 0.09 0.02 0.61 0.39 1.5
## MMS75_rc -0.09 -0.06 -0.14 0.31 -0.07 0.09 -0.07 0.15 0.85 2.1
## MMS76 0.55 0.02 0.02 0.13 0.20 0.19 0.01 0.63 0.37 1.7
## MMS77 0.57 0.13 -0.09 0.26 0.11 0.07 0.07 0.71 0.29 1.7
##
## ML1 ML6 ML2 ML5 ML3 ML4 ML7
## SS loadings 4.61 4.43 3.03 2.78 3.42 3.03 2.36
## Proportion Var 0.10 0.09 0.06 0.06 0.07 0.06 0.05
## Cumulative Var 0.10 0.19 0.25 0.31 0.38 0.44 0.49
## Proportion Explained 0.20 0.19 0.13 0.12 0.14 0.13 0.10
## Cumulative Proportion 0.20 0.38 0.51 0.63 0.77 0.90 1.00
##
## With factor correlations of
## ML1 ML6 ML2 ML5 ML3 ML4 ML7
## ML1 1.00 0.51 0.02 0.11 0.32 0.39 0.31
## ML6 0.51 1.00 -0.12 0.19 0.57 0.40 0.25
## ML2 0.02 -0.12 1.00 -0.16 -0.12 0.14 0.12
## ML5 0.11 0.19 -0.16 1.00 0.17 0.11 -0.11
## ML3 0.32 0.57 -0.12 0.17 1.00 0.35 0.24
## ML4 0.39 0.40 0.14 0.11 0.35 1.00 0.15
## ML7 0.31 0.25 0.12 -0.11 0.24 0.15 1.00
##
## Mean item complexity = 1.9
## Test of the hypothesis that 7 factors are sufficient.
##
## The degrees of freedom for the null model are 1128 and the objective function was 26.98 with Chi Square of 5941.16
## The degrees of freedom for the model are 813 and the objective function was 5.05
##
## The root mean square of the residuals (RMSR) is 0.03
## The df corrected root mean square of the residuals is 0.04
##
## The harmonic number of observations is 238 with the empirical chi square 642.75 with prob < 1
## The total number of observations was 238 with Likelihood Chi Square = 1088.98 with prob < 2.6e-10
##
## Tucker Lewis Index of factoring reliability = 0.918
## RMSEA index = 0.045 and the 90 % confidence intervals are 0.032 NA
## BIC = -3359.98
## Fit based upon off diagonal values = 0.98
## Measures of factor score adequacy
## ML1 ML6 ML2 ML5 ML3
## Correlation of (regression) scores with factors 0.94 0.95 0.92 0.91 0.95
## Multiple R square of scores with factors 0.89 0.90 0.84 0.82 0.90
## Minimum correlation of possible factor scores 0.77 0.79 0.68 0.64 0.80
## ML4 ML7
## Correlation of (regression) scores with factors 0.92 0.89
## Multiple R square of scores with factors 0.86 0.80
## Minimum correlation of possible factor scores 0.71 0.60
Round 6! Final round! After also dropping columns 16 and 67, we no longer find any items that load on more than one factor, and no items fail to load on a factor.
finalmodel <- fa(noout[, -c(30,55,7,1,9,35,29,39,45,61,63,79,14,15,18,19,25,40,48,60,78,50,58,24,28,33,34,47,53,5,6,16,67)], nfactors = 7, rotate = "oblimin", fm = "ml")
finalmodel
## Factor Analysis using method = ml
## Call: fa(r = noout[, -c(30, 55, 7, 1, 9, 35, 29, 39, 45, 61, 63, 79,
## 14, 15, 18, 19, 25, 40, 48, 60, 78, 50, 58, 24, 28, 33, 34,
## 47, 53, 5, 6, 16, 67)], nfactors = 7, rotate = "oblimin",
## fm = "ml")
## Standardized loadings (pattern matrix) based upon correlation matrix
## ML1 ML6 ML2 ML3 ML5 ML4 ML7 h2 u2 com
## MMS2 0.11 0.64 -0.13 -0.08 -0.08 -0.08 0.09 0.45 0.55 1.3
## MMS3 0.10 -0.20 0.52 -0.14 -0.25 -0.10 0.08 0.54 0.46 2.2
## MMS4 -0.13 0.02 0.13 0.00 0.02 -0.03 0.76 0.57 0.43 1.1
## MMS8 -0.04 0.73 0.06 0.09 0.12 0.09 0.02 0.71 0.29 1.1
## MMS10 0.00 0.63 0.09 0.05 -0.02 -0.07 0.00 0.39 0.61 1.1
## MMS11 -0.15 -0.09 0.26 0.04 -0.42 -0.11 0.06 0.38 0.62 2.3
## MMS12 0.03 0.18 0.53 0.13 0.19 0.22 -0.17 0.49 0.51 2.4
## MMS13 -0.12 -0.19 0.51 -0.03 -0.23 -0.12 0.08 0.52 0.48 2.1
## MMS17 0.14 -0.01 0.06 0.20 0.06 0.54 0.00 0.53 0.47 1.5
## MMS20 -0.01 0.13 0.52 0.00 0.15 0.16 0.09 0.37 0.63 1.6
## MMS21 0.08 0.72 -0.01 0.06 0.06 0.07 0.03 0.72 0.28 1.1
## MMS22 0.11 0.00 0.51 -0.07 0.06 0.07 0.16 0.35 0.65 1.4
## MMS23 0.15 0.33 0.23 0.05 0.19 0.04 0.04 0.34 0.66 3.2
## MMS26 0.00 0.16 0.59 -0.11 -0.17 -0.01 0.08 0.43 0.57 1.5
## MMS27_rc 0.15 0.02 0.11 -0.23 0.39 -0.12 0.15 0.21 0.79 2.8
## MMS31 -0.01 0.51 -0.12 0.14 -0.16 0.18 0.03 0.49 0.51 1.8
## MMS32 0.17 0.14 -0.07 -0.07 -0.08 0.53 0.20 0.53 0.47 1.8
## MMS36 0.23 0.34 -0.06 0.28 0.02 0.14 -0.02 0.61 0.39 3.2
## MMS37 0.10 -0.06 0.25 0.03 -0.19 0.02 0.32 0.27 0.73 3.0
## MMS38 0.03 0.01 -0.02 0.86 -0.05 -0.01 0.00 0.75 0.25 1.0
## MMS41 0.07 -0.10 0.12 0.01 -0.06 0.61 -0.03 0.40 0.60 1.2
## MMS42 0.03 -0.09 0.62 0.06 -0.01 0.11 -0.04 0.41 0.59 1.1
## MMS43 0.07 0.63 -0.08 0.18 -0.07 0.00 0.02 0.63 0.37 1.3
## MMS44 -0.08 0.07 0.01 0.86 -0.02 0.00 0.06 0.79 0.21 1.0
## MMS46 -0.03 0.05 0.05 -0.07 -0.01 0.87 -0.02 0.73 0.27 1.0
## MMS49 0.39 0.12 -0.01 0.18 0.12 0.20 0.07 0.55 0.45 2.6
## MMS51 -0.03 -0.13 0.59 0.06 -0.01 0.10 0.06 0.40 0.60 1.2
## MMS52 0.01 0.21 -0.12 0.05 -0.23 0.13 0.55 0.53 0.47 1.9
## MMS54 0.33 0.13 0.00 0.25 0.23 0.17 0.08 0.60 0.40 3.8
## MMS56_rc 0.01 0.12 -0.13 -0.06 0.57 0.02 -0.07 0.41 0.59 1.2
## MMS57_rc -0.08 0.01 -0.05 0.08 0.65 0.00 0.04 0.45 0.55 1.1
## MMS59 0.66 0.14 0.09 -0.17 -0.24 -0.02 0.02 0.52 0.48 1.6
## MMS62 0.38 0.09 -0.07 0.23 0.12 0.15 0.11 0.52 0.48 2.8
## MMS64 0.39 0.07 -0.04 0.21 0.06 0.18 0.15 0.54 0.46 2.6
## MMS65 0.20 0.15 0.06 0.61 0.03 -0.10 0.02 0.59 0.41 1.4
## MMS66 0.45 0.10 -0.07 0.11 0.06 0.21 0.13 0.55 0.45 2.0
## MMS68_rc -0.09 -0.05 -0.06 0.05 0.59 0.04 0.08 0.36 0.64 1.1
## MMS69 0.68 -0.11 -0.05 -0.05 -0.15 0.11 0.11 0.50 0.50 1.3
## MMS70_rc 0.03 -0.01 0.13 -0.27 0.52 -0.20 -0.11 0.38 0.62 2.2
## MMS71 0.17 -0.01 -0.01 0.08 0.16 0.03 0.62 0.54 0.46 1.3
## MMS72 0.26 -0.05 -0.01 0.08 0.03 -0.22 0.41 0.28 0.72 2.5
## MMS73 0.64 0.16 0.10 0.10 -0.07 -0.04 -0.12 0.53 0.47 1.3
## MMS74 0.57 0.17 0.07 0.06 0.15 0.07 0.02 0.61 0.39 1.4
## MMS75_rc -0.09 -0.07 -0.13 -0.06 0.32 0.09 -0.08 0.16 0.84 2.1
## MMS76 0.55 0.02 0.02 0.21 0.14 0.18 0.01 0.63 0.37 1.7
## MMS77 0.55 0.13 -0.10 0.10 0.26 0.08 0.08 0.70 0.30 1.8
##
## ML1 ML6 ML2 ML3 ML5 ML4 ML7
## SS loadings 4.54 4.32 2.95 3.38 2.67 2.96 2.14
## Proportion Var 0.10 0.09 0.06 0.07 0.06 0.06 0.05
## Cumulative Var 0.10 0.19 0.26 0.33 0.39 0.45 0.50
## Proportion Explained 0.20 0.19 0.13 0.15 0.12 0.13 0.09
## Cumulative Proportion 0.20 0.39 0.51 0.66 0.78 0.91 1.00
##
## With factor correlations of
## ML1 ML6 ML2 ML3 ML5 ML4 ML7
## ML1 1.00 0.51 0.04 0.32 0.10 0.42 0.30
## ML6 0.51 1.00 -0.12 0.57 0.21 0.42 0.23
## ML2 0.04 -0.12 1.00 -0.11 -0.14 0.11 0.12
## ML3 0.32 0.57 -0.11 1.00 0.17 0.37 0.23
## ML5 0.10 0.21 -0.14 0.17 1.00 0.15 -0.09
## ML4 0.42 0.42 0.11 0.37 0.15 1.00 0.14
## ML7 0.30 0.23 0.12 0.23 -0.09 0.14 1.00
##
## Mean item complexity = 1.8
## Test of the hypothesis that 7 factors are sufficient.
##
## The degrees of freedom for the null model are 1035 and the objective function was 25.64 with Chi Square of 5661.63
## The degrees of freedom for the model are 734 and the objective function was 4.45
##
## The root mean square of the residuals (RMSR) is 0.03
## The df corrected root mean square of the residuals is 0.04
##
## The harmonic number of observations is 238 with the empirical chi square 550.73 with prob < 1
## The total number of observations was 238 with Likelihood Chi Square = 961.83 with prob < 2.7e-08
##
## Tucker Lewis Index of factoring reliability = 0.929
## RMSEA index = 0.043 and the 90 % confidence intervals are 0.03 NA
## BIC = -3054.81
## Fit based upon off diagonal values = 0.99
## Measures of factor score adequacy
## ML1 ML6 ML2 ML3 ML5
## Correlation of (regression) scores with factors 0.94 0.95 0.91 0.95 0.90
## Multiple R square of scores with factors 0.89 0.90 0.84 0.90 0.82
## Minimum correlation of possible factor scores 0.77 0.79 0.67 0.80 0.63
## ML4 ML7
## Correlation of (regression) scores with factors 0.93 0.89
## Multiple R square of scores with factors 0.86 0.79
## Minimum correlation of possible factor scores 0.71 0.59
Goodness of fit statistics compare the reproduced correlation matrix to the actual correlation matrix. We will use the Tucker Lewis Index (TLI) and the Comparative Fit Index (CFI) to judge model fit. Values typically range between 0 and 1, where larger values represent better model fit. In psychology, goodness of fit indices >.95 are considered excellent, those >.90 are considered acceptable, and anything <.90 is considered poor (Boo….we don’t accept your kind here).
Residual fit statistics refer to the differences between the reproduced and actual correlation matrices. For this analysis we will use the root mean square error of approximation (RMSEA) and the root mean square of the residuals (RMSR). Here, smaller values represent better model fit. In psychology, values less than .06 are considered excellent, those between .06 and .08 are considered acceptable, and anything >.10 is considered poor.
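Rather than hunting through the printed summary, you can pull these fit indices straight out of the `fa` object. A quick sketch (the field names below follow the `psych` package’s documented return value for `fa()`):

```r
# Fit indices stored on the psych::fa result object
finalmodel$TLI    # Tucker Lewis Index
finalmodel$RMSEA  # RMSEA plus its 90% confidence bounds
finalmodel$rms    # root mean square of the residuals (RMSR)
finalmodel$crms   # df-corrected RMSR
finalmodel$BIC    # BIC, if you want it too
```

These should match the values printed in the summary output above.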
CFI <- 1 - ((finalmodel$STATISTIC - finalmodel$dof) / (finalmodel$null.chisq - finalmodel$null.dof))
CFI
## [1] 0.950756
Happy dances all around!
Internal consistency is based on the correlations between different items on the same test, or on the same subscale of a larger test (in this case, the same subscale). It measures whether several items that propose to measure the same general construct produce similar scores. In other words, measures of internal consistency seek to determine whether the items are measuring the same thing. We will be using Cronbach’s alpha. A Cronbach’s alpha >0.9 is considered excellent, 0.8-0.9 good, 0.7-0.8 acceptable, 0.6-0.7 questionable, 0.5-0.6 poor, and below 0.5 unacceptable (how low can you go?). In our case, all of the Cronbach’s alphas for the factors are at or above 0.7, and so are acceptable.
factor1=c(49,54,59,62,64,66,69,73,74,76,77)
factor2=c(3,12,13,20,22,26,42,51)
factor3=c(38,44,65)
factor4=c(17,32,41,46)
factor5=c(11,27,56,57,68,70,75)
factor6=c(2,8,10,21,23,31,36,43)
factor7=c(4,37,52,71,72)
alpha(noout[,factor1])
##
## Reliability analysis
## Call: alpha(x = noout[, factor1])
##
## raw_alpha std.alpha G6(smc) average_r S/N ase mean sd
## 0.91 0.92 0.92 0.5 11 0.0083 3.2 1.3
##
## lower alpha upper 95% confidence boundaries
## 0.9 0.91 0.93
##
## Reliability if an item is dropped:
## raw_alpha std.alpha G6(smc) average_r S/N alpha se
## MMS49 0.91 0.91 0.91 0.49 9.7 0.0092
## MMS54 0.91 0.91 0.91 0.49 9.8 0.0091
## MMS59 0.92 0.92 0.92 0.52 10.9 0.0082
## MMS62 0.91 0.91 0.91 0.50 9.8 0.0091
## MMS64 0.91 0.91 0.91 0.49 9.7 0.0092
## MMS66 0.91 0.91 0.91 0.49 9.7 0.0092
## MMS69 0.91 0.91 0.92 0.51 10.6 0.0085
## MMS73 0.91 0.91 0.91 0.50 10.0 0.0089
## MMS74 0.90 0.90 0.91 0.49 9.5 0.0093
## MMS76 0.90 0.90 0.91 0.48 9.4 0.0095
## MMS77 0.90 0.90 0.91 0.48 9.3 0.0095
##
## Item statistics
## n raw.r std.r r.cor r.drop mean sd
## MMS49 238 0.76 0.76 0.72 0.69 3.0 1.9
## MMS54 238 0.75 0.75 0.72 0.68 2.9 1.8
## MMS59 238 0.60 0.59 0.55 0.51 4.1 1.8
## MMS62 238 0.74 0.74 0.71 0.67 2.9 1.8
## MMS64 238 0.75 0.75 0.72 0.69 2.9 1.7
## MMS66 238 0.76 0.76 0.74 0.70 2.8 1.6
## MMS69 238 0.64 0.64 0.58 0.55 3.9 1.7
## MMS73 238 0.72 0.71 0.68 0.65 3.5 1.9
## MMS74 238 0.78 0.78 0.76 0.73 3.0 1.7
## MMS76 238 0.80 0.81 0.79 0.75 2.9 1.8
## MMS77 238 0.81 0.81 0.80 0.76 2.7 1.6
##
## Non missing response frequency for each item
## 1 2 3 4 5 6 7 miss
## MMS49 0.31 0.18 0.09 0.16 0.13 0.11 0.02 0
## MMS54 0.31 0.21 0.10 0.12 0.16 0.09 0.02 0
## MMS59 0.13 0.13 0.08 0.18 0.25 0.16 0.07 0
## MMS62 0.31 0.19 0.14 0.13 0.13 0.06 0.03 0
## MMS64 0.29 0.20 0.15 0.11 0.17 0.06 0.02 0
## MMS66 0.26 0.29 0.09 0.16 0.13 0.06 0.00 0
## MMS69 0.14 0.10 0.11 0.16 0.32 0.13 0.03 0
## MMS73 0.24 0.13 0.12 0.14 0.22 0.11 0.04 0
## MMS74 0.26 0.21 0.16 0.15 0.15 0.06 0.02 0
## MMS76 0.30 0.22 0.13 0.12 0.14 0.08 0.02 0
## MMS77 0.32 0.24 0.13 0.16 0.08 0.06 0.01 0
alpha(noout[,factor2])
##
## Reliability analysis
## Call: alpha(x = noout[, factor2])
##
## raw_alpha std.alpha G6(smc) average_r S/N ase mean sd
## 0.78 0.78 0.79 0.31 3.6 0.021 4.9 0.88
##
## lower alpha upper 95% confidence boundaries
## 0.74 0.78 0.82
##
## Reliability if an item is dropped:
## raw_alpha std.alpha G6(smc) average_r S/N alpha se
## MMS3 0.76 0.76 0.76 0.31 3.2 0.024
## MMS12 0.78 0.78 0.78 0.34 3.5 0.022
## MMS13 0.77 0.77 0.77 0.32 3.4 0.023
## MMS20 0.76 0.77 0.77 0.32 3.3 0.024
## MMS22 0.76 0.76 0.77 0.31 3.2 0.024
## MMS26 0.75 0.75 0.75 0.30 3.0 0.025
## MMS42 0.74 0.74 0.75 0.29 2.9 0.026
## MMS51 0.75 0.75 0.75 0.30 3.0 0.025
##
## Item statistics
## n raw.r std.r r.cor r.drop mean sd
## MMS3 238 0.61 0.62 0.56 0.47 5.4 1.3
## MMS12 238 0.56 0.54 0.44 0.38 3.9 1.5
## MMS13 238 0.56 0.58 0.51 0.42 5.7 1.2
## MMS20 238 0.61 0.60 0.52 0.46 4.5 1.4
## MMS22 238 0.65 0.63 0.54 0.49 4.7 1.5
## MMS26 238 0.68 0.68 0.62 0.55 4.9 1.5
## MMS42 238 0.71 0.71 0.67 0.59 4.9 1.4
## MMS51 238 0.68 0.69 0.63 0.55 5.1 1.4
##
## Non missing response frequency for each item
## 1 2 3 4 5 6 7 miss
## MMS3 0.02 0.02 0.05 0.14 0.24 0.36 0.18 0
## MMS12 0.07 0.16 0.17 0.23 0.21 0.13 0.03 0
## MMS13 0.01 0.01 0.04 0.08 0.22 0.37 0.27 0
## MMS20 0.03 0.05 0.11 0.27 0.29 0.20 0.05 0
## MMS22 0.05 0.06 0.08 0.23 0.26 0.23 0.09 0
## MMS26 0.03 0.05 0.08 0.16 0.32 0.25 0.11 0
## MMS42 0.02 0.04 0.06 0.26 0.23 0.30 0.10 0
## MMS51 0.01 0.04 0.05 0.18 0.26 0.31 0.14 0
alpha(noout[,factor3])
##
## Reliability analysis
## Call: alpha(x = noout[, factor3])
##
## raw_alpha std.alpha G6(smc) average_r S/N ase mean sd
## 0.86 0.86 0.82 0.68 6.3 0.015 3 1.5
##
## lower alpha upper 95% confidence boundaries
## 0.83 0.86 0.89
##
## Reliability if an item is dropped:
## raw_alpha std.alpha G6(smc) average_r S/N alpha se
## MMS38 0.81 0.81 0.68 0.68 4.2 0.025
## MMS44 0.74 0.74 0.59 0.59 2.8 0.034
## MMS65 0.87 0.87 0.77 0.77 6.7 0.017
##
## Item statistics
## n raw.r std.r r.cor r.drop mean sd
## MMS38 238 0.89 0.89 0.81 0.74 3.1 1.6
## MMS44 238 0.93 0.92 0.88 0.81 3.1 1.8
## MMS65 238 0.85 0.85 0.72 0.67 2.9 1.6
##
## Non missing response frequency for each item
## 1 2 3 4 5 6 7 miss
## MMS38 0.21 0.22 0.15 0.22 0.12 0.07 0.02 0
## MMS44 0.27 0.16 0.16 0.18 0.11 0.08 0.03 0
## MMS65 0.24 0.24 0.16 0.15 0.15 0.05 0.01 0
alpha(noout[,factor4])
##
## Reliability analysis
## Call: alpha(x = noout[, factor4])
##
## raw_alpha std.alpha G6(smc) average_r S/N ase mean sd
## 0.79 0.79 0.75 0.48 3.7 0.023 3.7 1.3
##
## lower alpha upper 95% confidence boundaries
## 0.74 0.79 0.83
##
## Reliability if an item is dropped:
## raw_alpha std.alpha G6(smc) average_r S/N alpha se
## MMS17 0.73 0.73 0.67 0.48 2.7 0.030
## MMS32 0.75 0.75 0.68 0.50 3.1 0.028
## MMS41 0.77 0.77 0.70 0.53 3.4 0.026
## MMS46 0.67 0.67 0.58 0.40 2.0 0.037
##
## Item statistics
## n raw.r std.r r.cor r.drop mean sd
## MMS17 238 0.78 0.78 0.67 0.60 3.6 1.7
## MMS32 238 0.76 0.76 0.63 0.56 3.4 1.8
## MMS41 238 0.72 0.73 0.59 0.52 4.3 1.6
## MMS46 238 0.86 0.85 0.81 0.71 3.6 1.7
##
## Non missing response frequency for each item
## 1 2 3 4 5 6 7 miss
## MMS17 0.13 0.20 0.13 0.21 0.19 0.12 0.02 0
## MMS32 0.21 0.14 0.11 0.23 0.19 0.08 0.03 0
## MMS41 0.08 0.13 0.06 0.21 0.29 0.20 0.04 0
## MMS46 0.15 0.17 0.11 0.22 0.20 0.11 0.04 0
alpha(noout[,factor5],check.keys=TRUE)
## Warning in alpha(noout[, factor5], check.keys = TRUE): Some items were negatively correlated with total scale and were automatically reversed.
## This is indicated by a negative sign for the variable name.
##
## Reliability analysis
## Call: alpha(x = noout[, factor5], check.keys = TRUE)
##
## raw_alpha std.alpha G6(smc) average_r S/N ase mean sd
## 0.7 0.71 0.69 0.26 2.4 0.03 3.2 0.95
##
## lower alpha upper 95% confidence boundaries
## 0.64 0.7 0.76
##
## Reliability if an item is dropped:
## raw_alpha std.alpha G6(smc) average_r S/N alpha se
## MMS11- 0.68 0.68 0.66 0.27 2.2 0.032
## MMS27_rc 0.69 0.70 0.68 0.28 2.3 0.031
## MMS56_rc 0.66 0.66 0.64 0.25 2.0 0.034
## MMS57_rc 0.63 0.63 0.61 0.22 1.7 0.037
## MMS68_rc 0.65 0.65 0.63 0.24 1.9 0.036
## MMS70_rc 0.67 0.68 0.65 0.26 2.1 0.033
## MMS75_rc 0.69 0.69 0.68 0.27 2.3 0.031
##
## Item statistics
## n raw.r std.r r.cor r.drop mean sd
## MMS11- 238 0.56 0.56 0.45 0.36 2.7 1.6
## MMS27_rc 238 0.54 0.52 0.39 0.32 3.6 1.7
## MMS56_rc 238 0.60 0.63 0.54 0.44 3.0 1.4
## MMS57_rc 238 0.70 0.71 0.67 0.55 2.8 1.5
## MMS68_rc 238 0.66 0.67 0.60 0.49 3.0 1.6
## MMS70_rc 238 0.60 0.59 0.48 0.40 3.7 1.7
## MMS75_rc 238 0.54 0.53 0.39 0.33 3.5 1.6
##
## Non missing response frequency for each item
## 1 2 3 4 5 6 7 miss
## MMS11 0.04 0.04 0.05 0.13 0.18 0.31 0.26 0
## MMS27_rc 0.11 0.22 0.15 0.19 0.19 0.08 0.05 0
## MMS56_rc 0.11 0.32 0.26 0.18 0.06 0.03 0.03 0
## MMS57_rc 0.21 0.27 0.25 0.12 0.07 0.05 0.02 0
## MMS68_rc 0.19 0.22 0.24 0.17 0.09 0.03 0.05 0
## MMS70_rc 0.08 0.18 0.26 0.18 0.11 0.10 0.08 0
## MMS75_rc 0.08 0.22 0.26 0.18 0.13 0.06 0.06 0
alpha(noout[,factor6])
##
## Reliability analysis
## Call: alpha(x = noout[, factor6])
##
## raw_alpha std.alpha G6(smc) average_r S/N ase mean sd
## 0.88 0.88 0.88 0.49 7.5 0.012 2.9 1.3
##
## lower alpha upper 95% confidence boundaries
## 0.86 0.88 0.9
##
## Reliability if an item is dropped:
## raw_alpha std.alpha G6(smc) average_r S/N alpha se
## MMS2 0.87 0.88 0.87 0.50 7.1 0.013
## MMS8 0.86 0.86 0.85 0.46 6.0 0.014
## MMS10 0.87 0.88 0.87 0.50 7.0 0.013
## MMS21 0.85 0.85 0.85 0.46 5.9 0.015
## MMS23 0.88 0.89 0.88 0.53 7.8 0.012
## MMS31 0.87 0.87 0.87 0.49 6.7 0.013
## MMS36 0.86 0.86 0.86 0.48 6.4 0.014
## MMS43 0.86 0.86 0.86 0.47 6.2 0.014
##
## Item statistics
## n raw.r std.r r.cor r.drop mean sd
## MMS2 238 0.68 0.68 0.61 0.57 2.9 1.7
## MMS8 238 0.82 0.82 0.81 0.75 2.6 1.7
## MMS10 238 0.69 0.69 0.62 0.58 3.1 1.9
## MMS21 238 0.84 0.84 0.84 0.78 2.5 1.7
## MMS23 238 0.60 0.60 0.51 0.47 3.6 1.8
## MMS31 238 0.73 0.73 0.68 0.64 2.9 1.7
## MMS36 238 0.77 0.77 0.73 0.68 2.9 1.8
## MMS43 238 0.80 0.80 0.78 0.72 2.7 1.7
##
## Non missing response frequency for each item
## 1 2 3 4 5 6 7 miss
## MMS2 0.27 0.24 0.12 0.12 0.16 0.07 0.01 0
## MMS8 0.40 0.20 0.11 0.11 0.11 0.07 0.00 0
## MMS10 0.30 0.15 0.11 0.17 0.15 0.08 0.04 0
## MMS21 0.42 0.18 0.13 0.10 0.10 0.06 0.01 0
## MMS23 0.18 0.16 0.11 0.14 0.24 0.13 0.03 0
## MMS31 0.29 0.24 0.09 0.13 0.17 0.07 0.01 0
## MMS36 0.33 0.20 0.11 0.14 0.12 0.10 0.01 0
## MMS43 0.39 0.16 0.10 0.15 0.13 0.07 0.00 0
alpha(noout[,factor7])
##
## Reliability analysis
## Call: alpha(x = noout[, factor7])
##
## raw_alpha std.alpha G6(smc) average_r S/N ase mean sd
## 0.71 0.71 0.67 0.33 2.4 0.029 3.9 1.1
##
## lower alpha upper 95% confidence boundaries
## 0.65 0.71 0.77
##
## Reliability if an item is dropped:
## raw_alpha std.alpha G6(smc) average_r S/N alpha se
## MMS4 0.63 0.62 0.57 0.29 1.7 0.039
## MMS37 0.72 0.72 0.66 0.39 2.5 0.030
## MMS52 0.63 0.63 0.58 0.30 1.7 0.038
## MMS71 0.62 0.62 0.56 0.29 1.6 0.040
## MMS72 0.69 0.69 0.64 0.36 2.2 0.032
##
## Item statistics
## n raw.r std.r r.cor r.drop mean sd
## MMS4 238 0.73 0.74 0.66 0.55 3.9 1.6
## MMS37 238 0.55 0.57 0.37 0.31 4.7 1.5
## MMS52 238 0.73 0.72 0.62 0.53 3.8 1.8
## MMS71 238 0.75 0.74 0.67 0.56 3.2 1.7
## MMS72 238 0.63 0.62 0.46 0.39 4.1 1.7
##
## Non missing response frequency for each item
## 1 2 3 4 5 6 7 miss
## MMS4 0.08 0.16 0.15 0.16 0.30 0.12 0.02 0
## MMS37 0.04 0.06 0.11 0.19 0.24 0.27 0.09 0
## MMS52 0.16 0.13 0.13 0.15 0.26 0.13 0.03 0
## MMS71 0.20 0.25 0.12 0.14 0.18 0.10 0.01 0
## MMS72 0.09 0.15 0.13 0.13 0.29 0.15 0.06 0
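Instead of reading seven separate printouts, you can also collect the raw alphas in one pass. A minimal sketch, assuming the `factor1` through `factor7` index vectors defined above (applying `check.keys = TRUE` to every factor is a simplification; in the walk-through above only factor 5 actually needed its reversed items handled):

```r
# Pull raw_alpha from each factor's alpha() result in one sapply call.
# alpha() returns a list whose $total data frame holds raw_alpha.
factors <- list(factor1, factor2, factor3, factor4, factor5, factor6, factor7)
sapply(factors, function(items) {
  alpha(noout[, items], check.keys = TRUE)$total$raw_alpha
})
```

This gives a single vector of seven alphas that you can eyeball against the 0.7 cutoff at once.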
Let’s create our factor scores by averaging the items on each factor!
noout$f1 <- apply(noout[, factor1], 1, mean) # creates average score for f1
noout$f2 <- apply(noout[, factor2], 1, mean) # creates average score for f2
noout$f3 <- apply(noout[, factor3], 1, mean) # creates average score for f3
noout$f4 <- apply(noout[, factor4], 1, mean) # creates average score for f4
noout$f5 <- apply(noout[, factor5], 1, mean) # creates average score for f5
noout$f6 <- apply(noout[, factor6], 1, mean) # creates average score for f6
noout$f7 <- apply(noout[, factor7], 1, mean) # creates average score for f7
head(noout)
## MMS1 MMS2 MMS3 MMS4 MMS5_rc MMS6_rc MMS7 MMS8 MMS9_rc MMS10 MMS11 MMS12
## 2 2 2 6 5 1 4 4 1 6 4 7 4
## 3 1 1 7 3 1 4 3 1 2 2 7 4
## 4 7 5 4 6 2 4 4 5 6 5 7 5
## 5 4 3 3 4 4 5 4 5 3 5 4 4
## 6 6 5 4 3 2 2 5 4 2 4 6 4
## 8 5 5 3 3 3 4 5 3 5 5 4 5
## MMS13 MMS14_rc MMS15_rc MMS16 MMS17 MMS18 MMS19 MMS20 MMS21 MMS22 MMS23
## 2 6 3 4 5 4 4 4 4 1 5 4
## 3 7 1 3 5 2 3 5 3 1 5 1
## 4 5 4 3 3 6 4 4 3 6 6 5
## 5 4 4 4 5 3 4 4 3 4 5 5
## 6 4 2 3 4 3 3 4 6 3 3 2
## 8 5 4 3 5 4 3 4 3 4 5 3
## MMS24 MMS25_rc MMS26 MMS27_rc MMS28 MMS29 MMS30 MMS31 MMS32 MMS33
## 2 2 4 6 3 1 1 1 3 1 7
## 3 1 4 5 2 1 1 1 1 1 7
## 4 4 5 2 2 5 4 3 6 4 4
## 5 5 5 5 5 4 4 4 5 5 3
## 6 6 4 3 5 3 2 6 5 4 4
## 8 3 4 5 3 4 5 4 5 4 3
## MMS34_rc MMS35 MMS36 MMS37 MMS38 MMS39 MMS40_rc MMS41 MMS42 MMS43 MMS44
## 2 3 1 1 5 2 1 1 4 6 2 1
## 3 2 1 1 1 2 1 1 1 3 2 2
## 4 4 5 4 6 5 4 5 5 4 3 5
## 5 4 3 4 6 4 4 4 4 4 4 4
## 6 5 6 5 4 3 3 2 5 4 4 3
## 8 5 5 4 4 3 3 4 4 3 5 5
## MMS45 MMS46 MMS47 MMS48 MMS49 MMS50 MMS51 MMS52 MMS53 MMS54 MMS55
## 2 1 6 4 1 1 1 6 2 6 1 1
## 3 1 2 5 1 1 1 5 1 1 1 1
## 4 4 3 6 3 3 6 6 6 5 4 5
## 5 5 5 5 5 4 4 4 5 4 4 5
## 6 3 5 3 3 2 5 6 4 3 2 5
## 8 5 4 4 3 5 4 5 5 4 4 3
## MMS56_rc MMS57_rc MMS58 MMS59 MMS60 MMS61 MMS62 MMS63 MMS64 MMS65 MMS66
## 2 3 1 2 5 5 1 1 1 1 1 1
## 3 2 1 2 1 6 5 1 1 1 4 1
## 4 4 6 6 4 3 7 6 5 4 3 6
## 5 5 5 4 5 4 4 3 4 4 4 5
## 6 4 5 4 7 5 6 5 3 5 4 4
## 8 4 5 4 5 5 5 5 3 3 5 4
## MMS67 MMS68_rc MMS69 MMS70_rc MMS71 MMS72 MMS73 MMS74 MMS75_rc MMS76
## 2 1 1 4 1 3 1 2 1 2 1
## 3 1 1 5 2 1 2 4 1 3 1
## 4 3 5 6 4 5 6 4 3 3 5
## 5 4 4 3 3 4 3 4 4 4 5
## 6 6 3 3 4 5 5 5 3 5 4
## 8 5 3 4 3 5 4 4 5 3 5
## MMS77 MMS78 MMS79 f1 f2 f3 f4 f5 f6 f7
## 2 1 4 1 1.727273 5.375 1.333333 3.75 2.571429 2.250 3.2
## 3 1 5 1 1.636364 4.875 2.666667 1.50 2.571429 1.250 1.6
## 4 4 6 3 4.454545 4.375 4.333333 4.50 4.428571 4.875 5.8
## 5 4 4 3 4.090909 4.000 4.000000 4.25 4.285714 4.375 4.4
## 6 5 3 5 4.090909 4.250 3.333333 4.25 4.571429 4.000 4.2
## 8 4 3 4 4.363636 4.250 4.333333 4.00 3.571429 4.250 4.2
If you are interested in the standard deviations of the new factor scores, these are for you:
sd(noout$f1)
## [1] 1.29036
sd(noout$f2)
## [1] 0.875032
sd(noout$f3)
## [1] 1.471186
sd(noout$f4)
## [1] 1.327892
sd(noout$f5)
## [1] 0.7932646
sd(noout$f6)
## [1] 1.295088
sd(noout$f7)
## [1] 1.135332
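The seven `sd()` calls above can be condensed into one line. A small sketch, assuming the factor-score columns are named `f1` through `f7` as created earlier:

```r
# Standard deviation (and mean, while we're at it) of every factor score column
fcols <- paste0("f", 1:7)       # "f1" ... "f7"
sapply(noout[, fcols], sd)      # one named vector of SDs
sapply(noout[, fcols], mean)    # and the corresponding means
```

`sapply()` loops over the columns and returns a named vector, which is handy for pasting into a descriptives table.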
Now, the exciting part (drum roll please): the factor plot
factor.plot(finalmodel, labels = rownames(finalmodel$loadings))
And an image of our factor structure:
fa.diagram(finalmodel, simple=FALSE)
Huzzah!!!
One of the biggest questions in paleoanthropology is how the hominin brain evolved to increase in size over time. However, brain size is obviously not a trait that can be studied directly by looking at the hominin fossil record, as brains, like all soft tissue, decompose (duh). This study aimed to find basicranial (cranial base) measurements that are correlated with relative brain size, AND to do this the authors used factor analysis.
The table below shows all of the correlated groups of basicranial measurements for each group of primates they studied (all catarrhines, non-hominid catarrhines, cercopithecoids, and hominoids), from which they concluded which basicranial measurements best predict relative brain size overall (listed in the footnote of the table).
What does this lovely table mean?
A new species of orangutan has been announced! It is called the Tapanuli orangutan (Pongo tapanuliensis)!
For one of their many analyses, Nater et al. (2017) ran a principal components analysis on traits related to the genetic diversity seen across different populations of orangutans. Take a look at the plot of PCA1 and PCA2:
Notice anything fishy? The answer is yes, yes you do. PCA1 is awesome! We like PCA1: it accounts for 34.2% of the variance in our samples (really, the species/populations). PCA2 accounts for only a tiny 3.6% of the variance, BUT it makes Tapanuli orangutans look like a totally different species from Sumatran orangutans (P. abelii) and Bornean orangutans (P. pygmaeus).
Take-away message: be very careful in how you interpret your principal component analysis!
Brown, T. A. (2014). Confirmatory factor analysis for applied research. Guilford Publications.
Field, A. (2013). Discovering statistics using IBM SPSS statistics. Sage Publications.
Kruger, D. J. (2017). Brief Self-Report Scales Assessing Life History Dimensions of Mating and Parenting Effort. Evolutionary Psychology, 15(1).
Merriam-Webster, Inc. (1983). Webster’s ninth new collegiate dictionary. Merriam-Webster.
Nater, A., Mattle-Greminger, M. P., Nurcahyo, A., Nowak, M. G., de Manuel, M., Desai, T., Groves, C., Pybus, M., Sonay, T. C., Roos, C., … & Lameira, A. R. (2017). Morphometric, behavioral, and genomic evidence for a new orangutan species. Current Biology.
Rowe, D. C., Vazsonyi, A. T., & Figueredo, A. J. (1997). Mating-effort in adolescence: A conditional or alternative strategy. Personality and Individual Differences, 23(1), 105-115.
Strait, D. S. (2001). Integration, phylogeny, and the hominid cranial base. American Journal of Physical Anthropology, 114: 273–297. doi:10.1002/ajpa.1041.
Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Pearson.