I’m doing a review of basic statistics since I’ll be helping undergrad students, in one-on-one consultations and teaching labs, understand math and stats concepts introduced in their classes. I also find it useful to step outside the realm of mathematics to interpret and understand the material from a more general perspective. As such, I’ll likely post on several topics from the perspective of understanding and applying basic statistics.
In my review I’ve started reading The Little Handbook of Statistical Practice by Dallal. I jumped to Significance Tests to sample the handbook and because, quite frankly, I felt there was something I was conceptually missing about hypothesis testing as an undergrad. I could churn out the answers, as required, but never felt it was well absorbed. Dallal’s discussion turned on a light bulb in my head:
Null hypotheses are never accepted. We either reject them or fail to reject them. The distinction between “acceptance” and “failure to reject” is best understood in terms of confidence intervals. Failing to reject a hypothesis means the corresponding confidence interval contains a value of “no difference”. However, the data may also be consistent with differences of practical importance. Hence, failing to reject H0 does not mean that we have shown that there is no difference (accept H0).
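To make that concrete, here’s a quick simulation in Python (the group sizes, the true effect of 0.5, and the notion that a difference of that size is “practically important” are all made up for illustration). With a small sample, the 95% confidence interval for the difference in means typically contains zero, so we fail to reject H0, yet the same interval also contains differences large enough to matter:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Small samples from two groups whose true means differ by 0.5.
a = rng.normal(loc=0.0, scale=1.0, size=10)
b = rng.normal(loc=0.5, scale=1.0, size=10)

res = stats.ttest_ind(a, b)
print(f"p-value: {res.pvalue:.3f}")  # often > 0.05 at this sample size

# 95% CI for the difference in means (pooled, equal-variance form)
diff = a.mean() - b.mean()
n1, n2 = len(a), len(b)
sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
tcrit = stats.t.ppf(0.975, df=n1 + n2 - 2)
lo, hi = diff - tcrit * se, diff + tcrit * se
print(f"95% CI for the difference: ({lo:.2f}, {hi:.2f})")
# The interval typically straddles 0 (so we fail to reject H0) while
# also covering differences large enough to matter in practice.
```

The data are consistent with “no difference”, but they’re just as consistent with a difference we’d care about, which is exactly why failing to reject isn’t the same as demonstrating the null.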
I like Dallal’s discussion of the topic because of the emphasis on confidence intervals and the distinction between accepting the null and failing to reject it. It seems odd that I would never have heard of this in my previous studies. I turned to my intermediate undergrad-level text (by Miller and Miller) to see if I had simply forgotten, and they state the problem as being “to accept the null hypothesis or to reject it in favor of the alternative hypothesis.” They take the (possibly common) approach of treating a hypothesis test as a problem in which either the null hypothesis or the alternative hypothesis will be asserted. This approach leaves me wholly unsatisfied.
- You can’t prove the null by not rejecting it.
- You can’t increase power to prove the null.
- But you can show equivalence (see the sketch after this list).
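On that last point: equivalence testing, for example the two one-sided tests (TOST) procedure, flips the burden of proof so that “no meaningful difference” becomes something you positively demonstrate. Here’s a minimal one-sample sketch in Python; the ±0.25 equivalence margin and sample size are arbitrary choices for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def tost_one_sample(x, low, high, alpha=0.05):
    """Two one-sided tests (TOST) for a mean lying within (low, high).
    Equivalence is declared only if BOTH one-sided nulls
    (mean <= low, and mean >= high) are rejected at level alpha."""
    n = len(x)
    se = x.std(ddof=1) / np.sqrt(n)
    t_low = (x.mean() - low) / se          # tests H0: mean <= low
    t_high = (x.mean() - high) / se        # tests H0: mean >= high
    p_low = stats.t.sf(t_low, df=n - 1)    # upper-tail p-value
    p_high = stats.t.cdf(t_high, df=n - 1) # lower-tail p-value
    return max(p_low, p_high) < alpha

# A sample whose true mean really is 0: an ordinary t-test merely
# fails to reject, while TOST can (with enough data) positively
# show the mean is within the equivalence margin.
x = rng.normal(loc=0.0, scale=1.0, size=200)
print("t-test p-value:", stats.ttest_1samp(x, 0.0).pvalue)
print("equivalent within +/-0.25:", tost_one_sample(x, -0.25, 0.25))
```

Both one-sided nulls have to be rejected before equivalence is declared, so the conclusion of “no meaningful difference” is an affirmative finding rather than a failure to find one.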
I instead turned to my intermediate grad-level text (by Casella and Berger) for more insight: “On a philosophical level, some people worry about the distinction [...] between “accepting” H0 and “not rejecting” H0.” This sounds promising. The authors continue with some details and finally state that “for the most part, we will not be concerned with these issues.” Ugh. What a disappointing end to what could (or should) have been an interesting discussion.
If we don’t reject the null hypothesis, we don’t conclude that it’s true. We simply recognize that the null hypothesis remains a possibility (it’s consistent with the data we observed). I believe this is what is meant by “accepting” the null hypothesis: we accept that it is a possibility (the term “accept” is far from precise, after all). An older text (by Crow, Davis, and Maxfield) reminded me, as did Dallal, that Fisher did not use an alternative hypothesis, and therefore there was no concept of “accepting” an alternative in his construction of significance tests. Maybe this has something to do with the use of this imprecise term for both H0 and H1, perhaps by way of the “Neyman-Pearson school of frequentist statistics”, which, as Dallal points out, puts its emphasis on the alternative hypothesis.
Many texts, and perhaps analysts, discuss “accepting” the null hypothesis as though they were stating that the null hypothesis were in fact true. Showing that the null hypothesis is true is not the same thing as failing to reject it. There is a relatively low probability (by construction, the significance level) of rejecting the null hypothesis when it is in fact true (a Type I error). But if we fail to reject the null hypothesis, what’s the probability of it being true? That isn’t a question a frequentist test can even answer: the p-value is computed assuming H0 is true, not the other way around. Dallal provides an interesting discussion of how “failing to find an effect is different from showing there is no effect!” Until I find a good counterargument, I’m going to be irked when I hear or read the use of “accepting the null”.
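A quick simulation makes the asymmetry plain (everything here, the sample size of 15 per group and the modest 0.4 SD effect, is made up for illustration). When H0 is true, we reject about 5% of the time, exactly as designed; when a real effect exists, we still fail to reject most of the time, because power is low:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha, trials, n = 0.05, 10_000, 15

reject_when_null_true = 0
reject_when_effect_real = 0
for _ in range(trials):
    # H0 true: both groups share the same mean.
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    reject_when_null_true += stats.ttest_ind(a, b).pvalue < alpha

    # H0 false: a real (but modest) effect of 0.4 SD exists.
    c = rng.normal(0.0, 1.0, n)
    d = rng.normal(0.4, 1.0, n)
    reject_when_effect_real += stats.ttest_ind(c, d).pvalue < alpha

print("Type I error rate:", reject_when_null_true / trials)   # ~0.05
print("Power at n=15:   ", reject_when_effect_real / trials)  # well under 50%
# Failing to reject is the most common outcome here even though the
# effect is real: failing to find an effect is not showing there is none.
```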