Tag: STATA for data analysis

This article explains how to perform point forecasting in STATA, where one can generate forecast values even without estimating an ARIMA model.
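The article covers point forecasting in STATA. As a language-neutral illustration only (not the article's method), a point forecast from a simple OLS trend regression can be sketched in Python with NumPy; the simulated series below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(100, dtype=float)
y = 10.0 + 0.5 * t + rng.normal(scale=2.0, size=100)  # hypothetical trending series

# Fit a linear trend by OLS: y_t = b0 + b1 * t + e_t
X = np.column_stack([np.ones_like(t), t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Point forecasts: extend the fitted line beyond the sample
t_future = np.arange(100, 110, dtype=float)
forecast = beta[0] + beta[1] * t_future
```

No ARIMA estimation is needed here: the fitted coefficients alone generate the out-of-sample point forecasts.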


The previous article (Pooled panel data regression in STATA) showed how to conduct pooled regression analysis with dummies for 30 American companies. The results revealed that the joint test on the dummies rejected the null hypothesis that these companies have no individual or joint effects. Therefore pooled regression is not a suitable technique for […]
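The joint test described above compares pooled OLS (the restricted model) against a regression that adds company dummies (the unrestricted, least-squares-dummy-variable model) with an F-test. A minimal sketch in Python with simulated, hypothetical firm data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_firms, n_periods = 30, 10
firm_effects = rng.normal(scale=2.0, size=n_firms)   # unobserved firm heterogeneity
firm = np.repeat(np.arange(n_firms), n_periods)
x = rng.normal(size=n_firms * n_periods)
y = 1.0 + 0.5 * x + firm_effects[firm] + rng.normal(size=n_firms * n_periods)

def ssr(A, b):
    """Sum of squared residuals from an OLS fit of b on A."""
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)
    r = b - A @ beta
    return r @ r

n = len(y)
const = np.ones(n)
D = (firm[:, None] == np.arange(1, n_firms)).astype(float)  # dummies, firm 1 as base

ssr_pooled = ssr(np.column_stack([const, x]), y)         # restricted: no dummies
ssr_dummies = ssr(np.column_stack([const, x, D]), y)     # unrestricted: firm dummies

q = n_firms - 1                       # number of restrictions (dummy coefficients)
df_denom = n - (2 + q)
F = ((ssr_pooled - ssr_dummies) / q) / (ssr_dummies / df_denom)
p_value = stats.f.sf(F, q, df_denom)
# A small p-value rejects pooled OLS in favour of firm-specific effects
```

A small p-value, as in the article's result, says the dummies matter jointly, so a pooled regression that ignores them is misspecified.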

By Rashmi Sajwan & Priya Chetty on October 31, 2018 8 Comments

Time series data requires some diagnostic tests in order to check the properties of the variables. One such property is ‘normality’. This article explains how to perform a normality test in STATA.
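The article performs the test in STATA; as a language-neutral illustration, a normality check on regression residuals can be sketched in Python with SciPy. The simulated residuals below are a hypothetical stand-in for real estimation output:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
residuals = rng.normal(size=500)  # hypothetical stand-in for regression residuals

# Jarque-Bera test: joint test of skewness and kurtosis against the normal
jb_stat, jb_p = stats.jarque_bera(residuals)

# Shapiro-Wilk test: an alternative with good power in smaller samples
sw_stat, sw_p = stats.shapiro(residuals)

print(f"Jarque-Bera: stat={jb_stat:.3f}, p={jb_p:.3f}")
print(f"Shapiro-Wilk: stat={sw_stat:.3f}, p={sw_p:.3f}")
# A large p-value (> 0.05) means we fail to reject the null of normality
```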


The underlying assumption in pooled regression is that the space and time dimensions do not create any distinction between observations and that there is no set of fixed effects in the data.


The problem of multicollinearity arises when one explanatory variable in a multiple regression model correlates highly with one or more of the other explanatory variables. It is a problem because it underestimates the statistical significance of an explanatory variable (Allen, 1997).
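A standard diagnostic for multicollinearity is the variance inflation factor (VIF). As an illustration only, a minimal VIF computation can be sketched in Python with NumPy; the simulated regressors below are hypothetical, with `x2` built to be nearly collinear with `x1`:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)                     # unrelated regressor
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """VIF of column j: 1 / (1 - R^2), where R^2 comes from regressing
    X[:, j] on the remaining columns plus a constant."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]
# Rule of thumb: VIF > 10 signals problematic multicollinearity
```

Here `x1` and `x2` produce very large VIFs while `x3` stays near 1, which is exactly the pattern a researcher would look for before dropping or combining collinear regressors.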

By Rashmi Sajwan & Priya Chetty on October 22, 2018 7 Comments

This article shows how to test for serial correlation of errors, or time series autocorrelation, in STATA. The autocorrelation problem arises when the error terms in a regression model are correlated over time or dependent on each other.
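One quick check for first-order serial correlation is the Durbin-Watson statistic. As an illustration, a minimal sketch in Python with NumPy, using simulated AR(1) errors as a hypothetical example of autocorrelated residuals:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
u = rng.normal(size=n)          # white-noise innovations
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + u[t]   # AR(1) errors: strong positive autocorrelation

def durbin_watson(resid):
    """Durbin-Watson statistic: ~2 means no autocorrelation; values
    well below 2 indicate positive serial correlation."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

dw = durbin_watson(e)           # well below 2 for the AR(1) errors
dw_white = durbin_watson(u)     # close to 2 for white noise
```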

By Rashmi Sajwan & Priya Chetty on October 16, 2018 17 Comments

Applying the Granger causality test in a Vector Autoregression (VAR) framework, in addition to a cointegration test, helps detect the direction of causality. It also helps identify which variable acts as a determining factor for another variable. This article shows how to apply the Granger causality test in STATA.
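At its core, the Granger test is an F-test of whether lags of one variable improve the prediction of another beyond its own lags. A minimal sketch in Python with NumPy/SciPy, on simulated, hypothetical data where `x` is built to Granger-cause `y`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, p = 400, 2                     # sample size and lag order
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.3 * y[t - 1] + 0.6 * x[t - 1] + rng.normal()  # x drives y

def lagmat(series, p):
    """Columns are the series lagged 1..p, aligned with series[p:]."""
    return np.column_stack([series[p - k: len(series) - k] for k in range(1, p + 1)])

def ssr(A, b):
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)
    r = b - A @ beta
    return r @ r

Y = y[p:]
const = np.ones(len(Y))
ssr_r = ssr(np.column_stack([const, lagmat(y, p)]), Y)               # y lags only
ssr_u = ssr(np.column_stack([const, lagmat(y, p), lagmat(x, p)]), Y)  # + x lags

df_denom = len(Y) - (1 + 2 * p)
F = ((ssr_r - ssr_u) / p) / (ssr_u / df_denom)
p_value = stats.f.sf(F, p, df_denom)
# A small p-value rejects "x does not Granger-cause y"
```

Running the same comparison in the other direction (lags of `y` in the equation for `x`) is what pins down the direction of causality.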

By Rashmi Sajwan & Priya Chetty on October 16, 2018 11 Comments

Heteroskedasticity means “differing variance”, coming from the Greek “hetero” (‘different’) and “skedasis” (‘dispersion’). It refers to error terms in a regression model whose variance changes with the level of an independent variable.
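A common way to detect this is the Breusch-Pagan test: regress the squared OLS residuals on the regressors and compute the LM statistic n·R². As an illustration, a sketch in Python with NumPy/SciPy on hypothetical simulated data whose error variance grows with `x`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 500
x = rng.uniform(1, 5, size=n)
e = rng.normal(scale=x)          # error spread grows with x: heteroskedasticity
y = 2.0 + 1.5 * x + e

# First stage: OLS of y on a constant and x
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Breusch-Pagan: auxiliary regression of squared residuals on the regressors
g = resid ** 2
gamma, *_ = np.linalg.lstsq(X, g, rcond=None)
g_resid = g - X @ gamma
r2 = 1.0 - g_resid.var() / g.var()

lm = n * r2                       # LM statistic, chi-squared with 1 df here
p_value = stats.chi2.sf(lm, df=1)
# A small p-value rejects homoskedasticity (constant error variance)
```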
