The reliability of a questionnaire is a way of assessing the quality of the measurement procedure used to collect data. For a result to be considered valid, the measurement procedure must first be reliable. When choosing a measure, examine the construct of the study: the construct is the hypothetical variable being measured, and questionnaires are one medium for measuring it.
Because these questionnaires form part of the measurement procedure, that procedure should provide an accurate and stable representation of the construct.
Concept of reliability
There are many ways of thinking about a construct such as intelligence (e.g., IQ, emotional intelligence, etc.), which can make it difficult to devise a measurement procedure if we are not sure whether the construct is stable or constant (Isaac & Michael 1970).
The reliability of a construct or variable refers to its constancy or stability. The assumption that the variable to be measured is stable or constant is central to the concept of questionnaire reliability. A stable or constant measurement procedure should produce the same (or nearly the same) results when the same individuals and conditions are used. There are threats to the reliability of a measurement or construct; they fall into systematic and unsystematic categories.
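The idea that a reliable procedure yields nearly the same results on repeated administration can be illustrated with a small test-retest sketch: the correlation between two administrations of the same questionnaire to the same respondents is one common stability index. All scores below are hypothetical.

```python
# Test-retest sketch: Pearson's r between two administrations of the same
# questionnaire to the same respondents. High r suggests a stable measure.
# All scores are hypothetical illustration data.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

time1 = [12, 18, 15, 20, 14, 17]   # total scores, first administration
time2 = [13, 17, 15, 19, 15, 18]   # same respondents, second administration
print(round(pearson_r(time1, time2), 2))  # → 0.96
```

A correlation this close to 1 would suggest the scores are stable across administrations; a low correlation would signal a threat to reliability.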
Calculating reliability of questionnaire using Cronbach Alpha
Cronbach’s alpha determines the internal consistency, or average correlation, of items in a survey instrument to gauge the reliability of the questionnaire. Thus, Cronbach’s alpha is an index of reliability associated with the variation accounted for by the true score of the “underlying construct” (Santos 1999). The alpha coefficient ranges in value from 0 to 1. It can be used to describe the reliability of factors extracted from dichotomous questions (questions with two possible answers) and/or from multi-point formatted questionnaires or scales (e.g., a rating scale: 1 = poor, 5 = excellent). The higher the score, the more reliable the generated scale is (Tavakol & Dennick 2011). The statistical formula for reliability is:
alpha = [k / (k − 1)] × [1 − (sum of item variances / total scale variance)]

where k = number of items, total scale variance = sum of item variances plus all item covariances, and alpha ranges between 0 and 1.
Criteria for assessment:
- ≥ 0.70 = adequate reliability for group comparisons
- ≥ 0.90 = adequate reliability for individual monitoring
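The formula can be sketched in Python using only the standard library. The item scores below are hypothetical: each inner list holds one item's scores across all respondents, and the total scale variance (the variance of respondents' total scores) automatically includes all item covariances.

```python
# Sketch of Cronbach's alpha, computed directly from the formula:
# alpha = [k/(k-1)] * [1 - sum(item variances) / total scale variance].
# Item scores are hypothetical illustration data.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item score lists (one inner list per item)."""
    k = len(items)
    respondents = list(zip(*items))            # one tuple of scores per respondent
    totals = [sum(row) for row in respondents] # each respondent's total score
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance(totals)              # includes all item covariances
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point ratings: three items, four respondents
item_scores = [
    [4, 3, 5, 4],   # item 1
    [5, 3, 4, 4],   # item 2
    [4, 2, 5, 3],   # item 3
]
print(round(cronbach_alpha(item_scores), 3))  # → 0.857
```

By the criteria above, an alpha of about 0.86 would indicate adequate reliability for group comparisons but not for individual monitoring.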
Alpha is an important concept in the evaluation of assessments and questionnaires. Hence, it is important that assessors and researchers estimate it, as doing so adds validity and accuracy to the interpretation of their data. Nevertheless, alpha is frequently reported uncritically, without adequate understanding and interpretation.
This article provided a basic idea of how Cronbach’s alpha is used to statistically test the reliability of quantitative data. To learn the procedure for calculating alpha in SPSS, refer to Performing tests using Cronbach Alpha.
- Golafshani, N., 2003. Understanding Validity in Qualitative Research. The Qualitative Report, 8(4), pp.597–607. Available at: http://www.nova.edu/ssss/QR/QR8-4/golafshani.pdf [Accessed December 14, 2015].
- Isaac, S. & Michael, W.B., 1970. Handbook in Research and Evaluation. Available at: http://eric.ed.gov/?id=ED051311 [Accessed May 7, 2016].
- Santos, J.R.A., 1999. Cronbach’s Alpha: A Tool for Assessing the Reliability of Scales. Journal of Extension, 37(2). Available at: http://www.joe.org/joe/1999april/tt3.php [Accessed September 30, 2015].
- Spitzer, R.L., 1978. Research Diagnostic Criteria. Archives of General Psychiatry, 35(6), p.773. Available at: http://archpsyc.jamanetwork.com/article.aspx?articleid=491943 [Accessed May 7, 2016].
- Tavakol, M. & Dennick, R., 2011. Making sense of Cronbach’s alpha. International Journal of Medical Education, 2, pp.53–55. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4205511/ [Accessed July 10, 2014].