Continuing from my previous article, this article discusses the results of a multivariate analysis (MANOVA) with more than one dependent variable.
Hypothesis testing
Between subject factors
The first result shown in the output file is that of Between Subjects Factors (See Table 1 below).
Between-subjects factors

|     | Value | Label            | N   |
|-----|-------|------------------|-----|
| IV1 | 1.00  | 0-2 hours        | 52  |
|     | 2.00  | 2-3 hours        | 201 |
|     | 3.00  | 3-4 hours        | 48  |
|     | 4.00  | 5 hours and more | 151 |
|     | 5.00  | Strongly Agree   | 48  |
| IV2 | 1.00  | 0-2 hours        | 54  |
|     | 2.00  | 2-3 hours        | 198 |
|     | 3.00  | 3-4 hours        | 49  |
|     | 4.00  | 5 hours and more | 149 |
|     | 5.00  | 0-2 hours        | 50  |
This table gives an overview of the independent variables included in the model.
For example, in IV1 the number of respondents in the 2-3 hours category is 201. This indicates that there are 201 respondents who study Book 1 (IV1) for 2-3 hours a week.
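Such a frequency table can be reproduced outside SPSS as well. The sketch below, using hypothetical respondent-level data with the article's category labels, builds the same N-per-category summary with pandas:

```python
import pandas as pd

# Hypothetical data: one row per respondent, iv1 holding the reading-hours
# category for Book 1 (labels and counts follow the article's Table 1)
df = pd.DataFrame({
    "iv1": ["0-2 hours"] * 52 + ["2-3 hours"] * 201 +
           ["3-4 hours"] * 48 + ["5 hours and more"] * 151
})

# N per category, analogous to SPSS's Between-Subjects Factors table
counts = df["iv1"].value_counts()
print(counts)
```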
Interpreting the descriptive statistics
The second table (Table 2) gives the descriptive statistics of all the variables in the model. The statistics are shown in rows and the variables in columns. The interpretation of the descriptives table has already been discussed in our previous article.
Descriptive statistics

|                | Chapter 2: score in science | Chapter 3: score in math | bookA   | bookB   |
|----------------|-----------------------------|--------------------------|---------|---------|
| N (Valid)      | 500                         | 500                      | 500     | 500     |
| N (Missing)    | 0                           | 0                        | 0       | 0       |
| Mean           | 2.8780                      | 2.8960                   | 2.8840  | 2.8860  |
| Median         | 2.0000                      | 3.0000                   | 2.0000  | 2.0000  |
| Std. Deviation | 1.21905                     | 1.22809                  | 1.22210 | 1.23127 |
| Minimum        | 1.00                        | 1.00                     | 1.00    | 1.00    |
| Maximum        | 5.00                        | 5.00                     | 5.00    | 5.00    |
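The same descriptive summary is easy to compute with pandas. The sketch below uses simulated 1-5 scores as stand-ins for the article's variables:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical 1-5 scores standing in for the two dependent variables
scores = pd.DataFrame({
    "science_score": rng.integers(1, 6, size=500),  # integers 1..5
    "math_score": rng.integers(1, 6, size=500),
})

# Mean, std, min, max etc., analogous to SPSS's descriptives table
print(scores.describe())
# Missing values per column (SPSS's "Missing" row)
print(scores.isna().sum())
```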
Box’s test of equality of covariance matrices
After descriptive statistics the next table is of Box’s Test of Equality of Covariance Matrices. This tests the null hypothesis that the observed covariance matrices of dependent variables are equal across groups. In our example the null hypothesis would be:
The covariance between the score in mathematics and the score in science is the same for all students, irrespective of their reading hours for each book.
In MANOVA we do not want this test to be significant. In other words, MANOVA can be performed only if the covariance matrices of the dependent variables are the same across all the groups (5 groups in this case).
Box's test of equality of covariance matrices^{a}

| Box's M | 26.618       |
|---------|--------------|
| F       | 8.814        |
| df1     | 3            |
| df2     | 11348563.554 |
| Sig.    | .054         |
In this case the significance value is more than 0.05, so we cannot reject the null hypothesis and MANOVA can be performed.
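SPSS reports Box's M directly; Python's common scientific libraries have no built-in equivalent, but the statistic and its chi-square approximation can be sketched with NumPy and SciPy. Everything below (the function name, the simulated two-group data) is illustrative, not part of the article's analysis:

```python
import numpy as np
from scipy.stats import chi2

def box_m(groups):
    """Box's M test for equality of covariance matrices.

    groups: list of (n_i, p) arrays, one per group.
    Returns (M statistic, p-value from the chi-square approximation).
    """
    p = groups[0].shape[1]                      # number of dependent variables
    g = len(groups)                             # number of groups
    ns = np.array([x.shape[0] for x in groups])
    covs = [np.cov(x, rowvar=False) for x in groups]
    # Pooled covariance matrix, weighted by group degrees of freedom
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (ns.sum() - g)
    M = (ns.sum() - g) * np.log(np.linalg.det(pooled)) \
        - sum((n - 1) * np.log(np.linalg.det(S)) for n, S in zip(ns, covs))
    # Box's correction factor and chi-square approximation
    c = (np.sum(1.0 / (ns - 1)) - 1.0 / (ns.sum() - g)) * \
        (2 * p**2 + 3 * p - 1) / (6 * (p + 1) * (g - 1))
    df = p * (p + 1) * (g - 1) / 2
    return M, chi2.sf(M * (1 - c), df)

rng = np.random.default_rng(1)
# Two hypothetical groups drawn from the SAME covariance matrix,
# so the null hypothesis of equal covariances should hold
a = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], size=200)
b = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], size=200)
M, pval = box_m([a, b])
print(f"Box's M = {M:.3f}, p = {pval:.3f}")
```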
MANOVA F value
The next table in the output reports the multivariate test statistics (the MANOVA F values) for the independent variables in the model.
SPSS gives us four different approaches to calculating the F value for MANOVA. All of them test whether the vectors of group means come from the same sampling distribution or not. We can choose any of them for interpretation.
Pillai's trace is the most preferred approach for the F value, as it is the least sensitive to violations of the assumption of equal covariance matrices. In this case, for the first independent variable IV1, the Pillai's trace value is 0.136 with an F value of 8.819. This is significant at the 5% level, as the p value is 0.000. So we reject the null hypothesis that the groups of IV1 are at the same level on the combined dependent variables. This conclusion is based on the MANOVA performed on the combined dependent variable.
For IV2 the Pillai's trace is 0.509 with an F value of 11.250. This is also significant, as the p value is less than 0.05. So for IV2 also we reject the null hypothesis. In other words, the groups of IV2 are not at the same level on the combined dependent variables.
In our case we reject the null hypothesis that scores in mathematics and science are the same for all students irrespective of their reading hours for Book 2.
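The same multivariate tests (Pillai's trace, Wilks' lambda, Hotelling's trace, Roy's root) can be produced in Python with statsmodels. The data below are simulated stand-ins for the article's variables, so the numbers will not match the SPSS output; only the structure of the test is the point:

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(2)
n = 500
# Hypothetical data: two continuous DVs, two 5-level categorical IVs
df = pd.DataFrame({
    "math": rng.normal(3, 1, n),
    "science": rng.normal(3, 1, n),
    "iv1": rng.integers(1, 6, n),
    "iv2": rng.integers(1, 6, n),
})

# Two DVs on the left of ~, categorical IVs on the right;
# mv_test() reports all four multivariate statistics per effect
mv = MANOVA.from_formula("math + science ~ C(iv1) + C(iv2)", data=df)
print(mv.mv_test())
```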
Partial eta squared
This is similar to R squared in a simple ANOVA analysis. The partial eta squared of Pillai's trace for IV1 is 0.068. One can interpret this as: 6.8% of the variability in the dependent variables is accounted for by variability in IV1.
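For Pillai's trace, partial eta squared follows from a simple identity: eta_p^2 = V / s, where s is the smaller of the number of dependent variables and the effect's degrees of freedom. Plugging in the article's values reproduces the reported 0.068:

```python
# Partial eta squared from Pillai's trace: eta_p^2 = V / s
V = 0.136  # Pillai's trace for IV1 (from the multivariate tests table)
s = 2      # min(2 dependent variables, 4 df for a 5-level factor) = 2
partial_eta_sq = V / s
print(partial_eta_sq)  # 0.068
```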
Observed power
It is useful in cases where we are not able to reject the null hypothesis, in other words when the p value is higher than 0.05. For example, if we fail to reject the null hypothesis and the observed power is 0.3, this shows we had only a 30% chance of rejecting the null hypothesis if it were in fact false.
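As a rough cross-check of this idea, a univariate power calculation can be sketched with statsmodels. The conversion from eta squared to Cohen's f, and the sample sizes used, are assumptions for illustration; this approximates a one-way ANOVA, not the full multivariate test:

```python
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

# Hypothetical post-hoc power check for a one-way ANOVA with 5 groups;
# Cohen's f is derived from a partial eta squared of 0.068
eta_sq = 0.068
f = np.sqrt(eta_sq / (1 - eta_sq))
power = FTestAnovaPower().power(effect_size=f, nobs=500,
                                alpha=0.05, k_groups=5)
print(f"approximate power = {power:.3f}")
```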
Test of homogeneity
Tip: The covariance shows the relationship between the two dependent variables. If there is no relationship between the two dependent variables then analyzing the effect of independent variable on both the dependent variable will not make sense.
After hypothesis testing, the next result concerns homogeneity, and one can use Levene's test for that purpose. Since there is more than one dependent variable, it is important to check whether the error variance of each dependent variable is the same across groups or not. If the variances differ, it would not be appropriate to analyze the dependent variables together.
So Levene's test is used here to test homogeneity. It tests the null hypothesis that the error variance of each dependent variable is equal across the groups of the independent variables. Even though MANOVA is relatively robust to violations of homogeneity, it is better to test it.
Levene's test of equality of error variances^{a}

|     | F     | df1 | df2 | Sig. |
|-----|-------|-----|-----|------|
| DV1 | 3.901 | 17  | 482 | .210 |
| DV2 | 2.614 | 17  | 482 | .140 |

Tests the null hypothesis that the error variance of the dependent variable is equal across groups.
a. Design: Intercept + IV1 + IV2 + IV1 * IV2
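Levene's test is available directly in SciPy. The sketch below runs it on simulated scores for one dependent variable across three hypothetical reading-hours groups:

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(3)
# Hypothetical scores for one DV across three groups drawn with the
# same variance, so equal error variances should not be rejected
g1 = rng.normal(3, 1.0, 100)
g2 = rng.normal(3, 1.0, 100)
g3 = rng.normal(3, 1.0, 100)

# Null hypothesis: error variance is equal across the groups
stat, pval = levene(g1, g2, g3)
print(f"F = {stat:.3f}, p = {pval:.3f}")
```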
Testing of between subject Effects
This shows a separate ANOVA for each dependent variable. The results are similar to what a normal ANOVA would give if a separate test were run for each dependent variable instead of combining them.
For example, IV1, the first independent variable, has an F value of 8.874 (process explained in the previous article). This is also significant at the 5% significance level, so the null hypothesis can be rejected. In other words, there is at least one difference among the groups of IV1 with respect to the first dependent variable.
Similarly, for IV2, the second independent variable, the F value is 54.727, which is also significant at 5% (process explained in the previous article). So in this case we reject the null hypothesis. This means there is at least one difference among the groups of the independent variable with respect to the dependent variables. The partial eta squared and observed power can be interpreted in the same way as in the previous case.
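These per-variable follow-ups amount to running a one-way ANOVA on each dependent variable separately, which can be sketched with SciPy. The groups below are simulated, with a deliberate mean difference built into the first DV so that its test has something to detect:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(4)
# Hypothetical follow-up: a separate one-way ANOVA on each dependent
# variable across the 5 levels of one independent variable
groups_math = [rng.normal(3 + 0.3 * k, 1, 100) for k in range(5)]  # shifted means
groups_sci = [rng.normal(3, 1, 100) for k in range(5)]             # equal means

f_math, p_math = f_oneway(*groups_math)
f_sci, p_sci = f_oneway(*groups_sci)
print(f"math:    F = {f_math:.2f}, p = {p_math:.4f}")
print(f"science: F = {f_sci:.2f}, p = {p_sci:.4f}")
```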
Impact of independent variable on the dependent variable
Since Pillai's trace shows significant results, it can be said that the impact of Book A (the first independent variable) on the super dependent variable (the combination of both dependent variables, i.e. the scores in science and mathematics) is significant. For the second independent variable Pillai's trace is also significant, so the impact of Book B (the second independent variable) on the super dependent variable is significant as well.
Further, the tests of between-subjects effects also show significant results, so the impact of the first independent variable on the first dependent variable is significant. Similarly, the impact of Book A on the second dependent variable (score in science) is also significant, because the p value is less than 0.05. The impact of the second independent variable on the first dependent variable is significant, as shown by the between-subjects effects results, and the impact of reading Book B on scores in science is significant as well.
So, on the basis of the analysis, it can be said that the scores in mathematics and science are significantly affected by the reading hours for each book. In other words, if students read for more hours, their scores improve in both subjects.
Indra Giri