Interpreting MANOVA test with more than one dependent variable
IBM SPSS software, through its general linear model analysis, helps in formulating a multivariate model. The previous article explained the procedure for applying the multivariate analysis of variance (MANOVA) test. This article explains how to interpret the results derived from the MANOVA test and offers solutions to the common problems encountered while performing it.
Results of the MANOVA test analysis
The MANOVA results can be classified into five groups:
- Box's test of equality of covariance matrices,
- multivariate tests,
- Levene's test,
- tests of between-subjects effects, and
- multiple comparisons.
Here, the effect of organizational factors on employee performance and satisfaction is examined by analyzing the perceptions of 178 employees.
Box's test of equality of covariance matrices
Box's test of equality of covariance matrices detects the presence of heteroskedasticity in the model. Its null hypothesis is:
Observed covariance matrices of the dependent variable are equal across groups.
The test determines whether the covariances of the dependent variables are the same across groups or not. MANOVA assumes that there is no heteroskedasticity in the model, so the significance (sig.) value should be more than the chosen level. For example, in the above case, the value is 0.385 > 0.05, so the assumption holds.
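SPSS reports Box's M directly, but the statistic itself is straightforward to compute, which can help demystify the output. Below is a minimal sketch of Box's M with its chi-square approximation, run on synthetic two-group data (the group sizes, means, and covariances are invented for illustration; this is not the article's dataset).

```python
import numpy as np
from scipy import stats

def box_m(groups):
    """Box's M test for equality of covariance matrices.
    groups: list of (n_i x p) arrays, one per group."""
    k = len(groups)
    p = groups[0].shape[1]
    ns = np.array([g.shape[0] for g in groups])
    N = ns.sum()
    covs = [np.cov(g, rowvar=False) for g in groups]
    # Pooled covariance matrix, weighted by degrees of freedom
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (N - k)
    # M compares the log-determinant of the pooled covariance
    # with those of the per-group covariances
    M = (N - k) * np.log(np.linalg.det(pooled)) - sum(
        (n - 1) * np.log(np.linalg.det(S)) for n, S in zip(ns, covs))
    # Box's correction factor for the chi-square approximation
    c = ((2 * p**2 + 3 * p - 1) / (6 * (p + 1) * (k - 1))) * (
        np.sum(1.0 / (ns - 1)) - 1.0 / (N - k))
    chi2 = M * (1 - c)
    df = p * (p + 1) * (k - 1) / 2
    pval = stats.chi2.sf(chi2, df)
    return chi2, pval

rng = np.random.default_rng(42)
# Two synthetic groups drawn from the same covariance structure
g1 = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=60)
g2 = rng.multivariate_normal([1, 1], [[1, 0.3], [0.3, 1]], size=60)
chi2, pval = box_m([g1, g2])
# A p-value above 0.05 would mean we fail to reject equal covariances
print(f"chi2 = {chi2:.3f}, p = {pval:.3f}")
```

Because both groups share a covariance matrix here, the test should usually fail to reject the null, mirroring the 0.385 result discussed above.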
A common error we face while running the MANOVA test in SPSS is:
The Box’s Test of Equality of Covariance Matrices is not computed because there are fewer than two nonsingular cell covariance matrices.
This error usually arises because one of the following requirements is not met:
- the presence of two or more independent groups,
- some variance within each group,
- the existence of values for two or more cases per group, and
- linearity among the variables.
Out of these, the first three points stem from issues in the data description. So make sure that you have categorical independent variables and continuous dependent variables. Alternatively, you can still perform the MANOVA test without rectifying this error. However, if you face any of these errors and still want to check for heteroskedasticity, then you must apply Levene's test. The remaining problem is that the model may be non-linear, which would defeat the purpose of the analysis. Therefore, first make sure that the model is linear by eliminating the non-linear variables from it. One way to examine linearity in the variables is by generating Q-Q plots.
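A Q-Q plot compares a variable's ordered values against theoretical normal quantiles; if the points fall on a straight line, the variable is approximately normal. As a sketch, `scipy.stats.probplot` returns both the quantile pairs and the correlation of the straight-line fit, so the check can even be done numerically (the "satisfaction" scores below are simulated, not the article's data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical satisfaction scores for 178 employees
scores = rng.normal(loc=3.5, scale=0.8, size=178)

# probplot returns the (theoretical, ordered) quantile pairs plus a
# least-squares fit; r close to 1 means the Q-Q plot is nearly a
# straight line, i.e. the variable is approximately normal.
(osm, osr), (slope, intercept, r) = stats.probplot(scores, dist="norm")
print(f"Q-Q fit correlation r = {r:.3f}")
```

Passing `plot=plt` (with matplotlib) instead would draw the familiar Q-Q graph that SPSS produces.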
Multivariate MANOVA test
The multivariate test in MANOVA helps examine the relationship between the independent and dependent variables. In this test, the Wilks' lambda value explains the strength of the relationship. The value always lies between 0 and 1, and a strong relationship gives a value close to 0. In the example shown in the table below, for the first variable, WC, the value is 0.959, which is close to 1, and the significance value is high, i.e. 0.639 > 0.05. Thus, there is no significant relationship between the variables in this dataset.
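A table of the same shape as SPSS's multivariate tests output can be reproduced in Python with statsmodels. The sketch below uses a simulated dataset with one categorical factor (named WC here, standing in for the working-conditions item) and two continuous dependent variables; the numbers are invented, so the Wilks' lambda value will differ from the 0.959 in the article's table.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 178
# Hypothetical dataset: one categorical factor and two continuous DVs
df = pd.DataFrame({
    "WC": rng.choice(["disagree", "neutral", "agree"], size=n),
    "performance": rng.normal(3.5, 0.7, size=n),
    "satisfaction": rng.normal(3.2, 0.9, size=n),
})

fit = MANOVA.from_formula("performance + satisfaction ~ WC", data=df)
stat = fit.mv_test().results["WC"]["stat"]

# Pull the Wilks' lambda row from the statistics table
wilks = [v for name, v in stat["Value"].items()
         if "wilks" in name.lower()][0]
print(f"Wilks' lambda = {wilks:.3f}")  # close to 1 => weak relationship
```

The same table also reports Pillai's trace, the Hotelling-Lawley trace, and Roy's greatest root, matching the four rows SPSS prints.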
This test itself rarely produces errors. The only problem arises when unrelated variables are involved, as in the case above: it is difficult to establish a relationship when the sig. value is more than 0.05. The solution is to examine each independent variable's sig. value against the dependent variables and eliminate those with a sig. value of more than 0.05. This helps in building a better linkage between the variables.
Levene's test of equality of error variances
As mentioned above, Levene's test is another way to detect heteroskedasticity. The null hypothesis of the test is that the error variance of each dependent variable is the same across groups. Therefore, the significance (sig.) value of the dependent variables should be more than 0.05. As the table below shows, for employee performance the value is more than 0.05, but for satisfaction the value is low. Therefore, heteroskedasticity is present in the 'satisfaction score' variable.
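Outside SPSS, the same check is available as `scipy.stats.levene`. The sketch below simulates three response groups where one group deliberately has a much larger spread, so the test rejects equal variances (the group labels and scores are hypothetical).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical satisfaction scores for three response groups
disagree = rng.normal(2.8, 0.5, size=60)
neutral = rng.normal(3.2, 0.5, size=60)
agree = rng.normal(3.6, 1.5, size=58)  # deliberately larger spread

# center='median' is the robust Brown-Forsythe variant of the test
stat, p = stats.levene(disagree, neutral, agree, center="median")
print(f"Levene W = {stat:.3f}, p = {p:.4f}")  # p < 0.05: unequal variances
```

A p-value below 0.05 here corresponds to the low sig. value SPSS shows for the satisfaction score, i.e. heteroskedasticity is present.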
If your sig. value is less than 0.05, then your model will be inefficient. To rectify this, you must first understand the distribution of the residuals, i.e. make a graphical plot of the error terms using a boxplot. When the plot confirms the presence of heteroskedasticity, perform a log transformation of the dependent variables to stabilize the variance. If this does not work, then the only solution is to use a completely new dataset.
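The effect of the log transformation can be checked directly by re-running Levene's test before and after. In the simulated example below, the two groups' spread grows with their level (a classic pattern the log transform can stabilize), so the transformation should raise the Levene p-value; the data are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Two hypothetical groups whose spread grows with their level
low = rng.lognormal(mean=0.0, sigma=0.5, size=100)
high = rng.lognormal(mean=1.5, sigma=0.5, size=100)

# Levene's test on the raw scores vs. on the log-transformed scores
_, p_raw = stats.levene(low, high, center="median")
_, p_log = stats.levene(np.log(low), np.log(high), center="median")
print(f"p before log: {p_raw:.4g}, after log: {p_log:.4g}")
```

If the dependent variable contains zeros, `np.log1p` (log of 1 + x) is the usual substitute, since `np.log(0)` is undefined.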
Tests of between-subjects effects
This test is similar to ANOVA: the effect of each independent variable on each dependent variable is tested separately. It also helps in understanding the nature of the dataset. The significance value for WC is more than 0.05 for both dependent variables, i.e. employee performance and satisfaction score. Since neither value is significant, the variables do not differ and hence the model is inefficient.
Another condition in this test is that R squared should be close to 1. When it is less than 0.5, it means that there is possibly no linkage between the variables, or that non-significant variables are present. To verify this, you can perform correlation and regression tests and eliminate the variables which are not significant. Again, if the results are still not significant, then use a fresh dataset.
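The screening step described above can be sketched with `scipy.stats.pearsonr`: correlate each candidate predictor with the dependent variable and keep only those with a sig. value of 0.05 or less. Both predictors below are simulated, one built to be related to performance and one not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 178
performance = rng.normal(3.5, 0.7, size=n)
# One hypothetical predictor related to performance, one unrelated
related = performance * 0.6 + rng.normal(0, 0.4, size=n)
unrelated = rng.normal(3.0, 1.0, size=n)

keep = []
for name, x in [("related", related), ("unrelated", unrelated)]:
    r, p = stats.pearsonr(x, performance)
    print(f"{name}: r = {r:.3f}, p = {p:.4f}")
    if p <= 0.05:  # drop predictors whose sig. value exceeds 0.05
        keep.append(name)
print("retained:", keep)
```

Re-running MANOVA on only the retained variables is what tightens the linkage the paragraph above describes.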
Multiple comparisons
Lastly, an important test in MANOVA is to compare the means of the different groups for each dependent variable. This helps in determining whether perception varies between groups or not. In the table below, for 'neutral' and 'agree' the sig. value is 0.00, which shows that a mean difference is present. However, the sig. value of 'agree' is almost the same as that of many other groups, such as 'disagree', 'strongly agree' and 'strongly disagree'. This shows that there is not much difference in the perception of employees.
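SPSS's multiple-comparisons table is typically a Tukey-style post hoc test. A comparable pairwise comparison of group means can be sketched with `scipy.stats.tukey_hsd` (available in SciPy 1.8 and later); the three groups below are simulated, with 'neutral' shifted so that at least one pairwise difference is detectable.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Hypothetical satisfaction scores for three response groups;
# 'neutral' is shifted so a pairwise mean difference should appear
disagree = rng.normal(3.0, 0.6, size=60)
neutral = rng.normal(3.9, 0.6, size=60)
agree = rng.normal(3.1, 0.6, size=58)

# Tukey's HSD compares every pair of group means with adjusted p-values
res = stats.tukey_hsd(disagree, neutral, agree)
print(res)  # table of pairwise mean differences and p-values
```

Each row of the printed table corresponds to one row of the SPSS multiple-comparisons output: a pair of groups, their mean difference, and an adjusted sig. value.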
Suggestions for an effective MANOVA test
MANOVA is very useful for estimating the relationship between multiple independent and multiple dependent variables. However, the possibility of errors limits its applicability. An effective way to enhance its efficiency is to perform assumption tests beforehand, such as tests for normality, heteroskedasticity, and stationarity. Correlation analysis should also be done beforehand to check whether any relationship between the dependent and independent variables exists.
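As one concrete assumption check, the Shapiro-Wilk test can screen each dependent variable for normality before running MANOVA. The sketch below applies it to two simulated dependent variables standing in for the article's performance and satisfaction scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
performance = rng.normal(3.5, 0.7, size=178)
satisfaction = rng.normal(3.2, 0.9, size=178)

for name, x in [("performance", performance),
                ("satisfaction", satisfaction)]:
    stat, p = stats.shapiro(x)
    # p > 0.05 would mean no evidence against normality for this variable
    print(f"{name}: W = {stat:.3f}, p = {p:.3f}")
```

Pairing this with Levene's test (heteroskedasticity) and a correlation matrix covers the pre-checks suggested above.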