Cronbach’s Alpha, Coefficient Alpha, Internal Consistency, and Reliability Scores

When I first started hearing about Cronbach’s alpha, coefficient alpha, internal consistency, and reliability scores, I grew more confused with each empirical article: each used a different name, and I wondered whether they were all referring to the same thing.

After digging into the literature to get to the bottom of the situation, I found that they were indeed all referring to the same test. Cronbach’s test for internal consistency is the most popular test of scale reliability; the naming conventions, however, are all over the place. In this blog post, I urge that we agree on one name going forward, both to reduce confusion and to demonstrate our understanding of the concept. Coefficient α seems to be the best option.

By the way, alpha should be abbreviated with the symbol “α” rather than written out as a word.

What’s the origin of the Coefficient Alpha?

Cronbach introduced the reliability test in 1951. He never intended his name to be attached to it, as it has been for decades. In a paper with Shavelson, Cronbach explained that it was an embarrassment that the formula had adopted his name, becoming known as Cronbach’s alpha (2004, p. 397).

What is the Coefficient Alpha?

The test itself, whatever we call it, is the most common reliability test for internal consistency; however, there are important factors to know about it. First, it is a unidimensional test. This means that if you are testing the reliability of an instrument that has four subscales (dimensions), the alpha should be computed for each dimension separately, not for the entire survey.
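As a sketch of what “computed for each dimension separately” means in practice, here is a minimal Python implementation of coefficient α applied to a single subscale. The formula is α = (k / (k − 1)) × (1 − Σ item variances / variance of total scores); the item scores below are invented purely for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for ONE unidimensional (sub)scale.

    items: respondents x items matrix of scores for a single dimension.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses (5 respondents, 3 items) for one subscale only;
# an instrument with four dimensions would call this function four times.
subscale_items = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])
print(round(cronbach_alpha(subscale_items), 2))  # prints 0.92
```

Running the function once per dimension, rather than once over the whole survey, is what keeps the statistic unidimensional.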

What are some examples of the Coefficient Alpha?

An example is the Leader-Member Exchange (LMX) instrument by Liden and Maslyn (1998). The overall survey measures LMX, but the questions that make up each dimension are used to compute the coefficient α. The LMX has four dimensions: affect, loyalty, professional respect, and contribution. In their original 1998 study, Liden and Maslyn reported coefficient alphas of .92 for affect, .90 for loyalty, .78 for professional respect, and .60 for contribution.

They retested with another sample of participants and found the reliability of the data to be .90 for affect, .89 for loyalty, .74 for professional respect, and .57 for contribution. They performed one more test with a different group of participants and found coefficient alphas of .83 for affect, .66 for loyalty, .79 for professional respect, and .56 for contribution.

Another important factor for the coefficient α is the generally accepted threshold. This, too, has caused confusion over the years and is often misinterpreted. We all seem to agree that the range is .00 to 1.0, and many textbooks say the minimum threshold is .70, meaning that if the coefficient α is at or above .70, the data is reliable.

In the example above by Liden and Maslyn, many of the scores were below .70. However, the original text by Nunnally actually implies that .70 is “miserably low,” and that, depending on the purpose of the data, .90 may not be high enough (1975, p. 10).

In my quest to find the answers, I wrote a book chapter titled Clearly Communicating Conceptions of Validity and Reliability (Dean, 2021). I found that a satisfactory level of reliability depends on how the measure is being used. Nunnally wrote that for modest reliability, “.70 or higher will suffice,” yet in applied settings a score of “.80 is not nearly high enough”; rather, “a reliability of .90 is the minimum that should be tolerated, and a reliability of .95 should be considered the desirable standard” (Nunnally, 1978, pp. 245-246). In other words, it is time to raise the standards for reliability.

The Data Is Reliable vs. The Instrument Is Reliable

The final misconception about coefficient alpha lies in the common claim that an instrument is valid and reliable. The coefficient alpha does not measure the reliability of the instrument itself; it measures the data gathered with the instrument. It is therefore more accurate to say the data is reliable than to say the instrument is reliable.


References

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.

Cronbach, L. J., & Shavelson, R. J. (2004). My current thoughts on coefficient alpha and successor procedures. Educational and Psychological Measurement, 64(3), 391-418.

Dean, D. (2021). Clearly communicating conceptions of validity and reliability. In M. Bocarnea, B. Winston, & D. Dean (Eds.), Advancements in organizational data collection and measurements: Strategies for addressing attitudes, beliefs, and behaviors. Hershey, PA: Business Science Reference.

Liden, R. C., & Maslyn, J. M. (1998). Multidimensionality of Leader-Member Exchange: An empirical assessment through scale development. Journal of Management, 24(1), 43-72.

Nunnally, J. (1975). Psychometric theory: 25 years ago and now. Educational Researcher, 4(10), 7-21.

Nunnally, J. (1978). An overview of psychological measurement. In B. B. Wolman (Ed.), Clinical diagnosis of mental disorders. Boston, MA: Springer.
