Luk Arbuckle

Posts Tagged ‘bayesian’

Absence of evidence is evidence of absence?

In hypothesis testing on 25 January 2009 at 5:15 pm

In the context of logical reasoning, and using Bayesian probability, you can argue that absence of evidence is, in fact, evidence of absence. Namely, not being able to find evidence for something changes your thinking and can lead you to reverse your original hypothesis entirely. For example, if you fail to find evidence that some medical treatment works, you may begin to think that it doesn’t work. Maybe it’s a placebo. You could, therefore, decide to change your hypothesis and look to design an experiment disproving its effectiveness. Of course, there are no “priors”, in the Bayesian sense, in the frequentist interpretation of hypothesis testing. But, just the same, what does this say about the maxim used in statistical hypothesis testing, that absence of evidence is not evidence of absence? (A small worked example of the Bayesian update appears at the end of this post.) Nick Barrowman has an interesting post on the topic, and I wanted to participate in the discussion:

I interpret “absence of evidence is not evidence of absence” (in the context of hypothesis testing) to mean “failing to reject the null is not equivalent to accepting the null.” I’m thinking of the null hypothesis of “no treatment effects”. You don’t have significant evidence to reject the null, and therefore you have an absence of evidence of treatment effects, but this is not the same thing as saying you have evidence of no treatment effects (because of the formulation of hypothesis testing, flawed as it may be).

One point, which I believe you are alluding to, is that an equivalence test would be more appropriate. But I’ve heard some statisticians and researchers try to argue that they could use retrospective power to “prove the null” when faced with non-significant results. See Abuse of Power [PDF] (this paper was the nail in the coffin, if you will, in a previous discussion I was having with a group of statisticians).

I believe the maxim is simply trying to emphasize that the p-value is calculated having assumed the null, and therefore can’t be used as evidence for the null (as it would be a circular argument). Trying to make more out of the maxim than this may be the sticking point. It’s too simple, and therefore flawed when taken out of this limited context.

I agree with your previous post. If I’m not mistaken, one point was that failing to reject the null means the confidence interval contains a value of “no effect”. But there could still be differences of practical importance, and so failing to reject the null is not the same as showing there’s no effect. The “statistical note” from the BMJ, Absence of evidence is not evidence of absence, seems to be saying the same thing: absence of evidence of a difference is not evidence that there is no difference. Or, absence of evidence of an effect is not evidence of no effect. That’s because you can’t prove the null using a hypothesis test (you would instead need an equivalence test).

I entirely agree with Nick that confidence intervals are clearer. We can’t forget that hypothesis testing, although constructed like a proof by contradiction, involves uncertainty (in the form of Type I errors, rejecting the null when it is true, and Type II errors, failing to reject the null when it is false). Its interpretation is, therefore, muddied by uncertainty and inductive reasoning (I had actually forgotten what Nick had written with regards to Popper and Fisher when I was commenting). To be honest, my head is still spinning trying to make sense of all this, but it certainly is an interesting topic.
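To put the Bayesian argument from the opening paragraph into numbers, here is a toy update in Python (the prior and the two likelihoods are invented purely for illustration): give the treatment a fifty-fifty chance of working, then condition on a study that fails to find evidence for it.

    # Toy Bayesian update (numbers invented for illustration): failing to find
    # evidence for a treatment effect lowers the probability that the effect exists.
    prior_works = 0.5        # prior belief that the treatment works (assumed)
    p_find_if_works = 0.8    # chance a study finds evidence, given it works (assumed)
    p_find_if_not = 0.05     # chance of a spurious finding, given it doesn't (assumed)

    # Probability of observing *no* evidence under each hypothesis
    p_none_if_works = 1 - p_find_if_works   # 0.2
    p_none_if_not = 1 - p_find_if_not       # 0.95

    # Bayes' theorem: P(works | no evidence found)
    posterior_works = (p_none_if_works * prior_works) / (
        p_none_if_works * prior_works + p_none_if_not * (1 - prior_works)
    )
    print(f"P(works) drops from {prior_works:.2f} to {posterior_works:.2f}")  # about 0.17

The belief drops from 0.50 to roughly 0.17, so in the Bayesian sense the absence of evidence really is evidence of absence, just weaker evidence than an outright disproof would be.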
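On the frequentist side, a quick simulation shows why a non-significant result is not evidence of no effect. This is my own sketch, with an effect size and sample size chosen (arbitrarily) to make the trials underpowered:

    # Underpowered trials: a real effect exists, yet most runs fail to reject the
    # null. The absence of evidence here is a Type II error, not proof of "no effect".
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect = 0.3      # real standardized treatment effect (assumed)
    n_per_arm = 20         # small arms, so the test has low power
    n_trials = 10_000

    nonsignificant = 0
    for _ in range(n_trials):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(true_effect, 1.0, n_per_arm)
        _, p = stats.ttest_ind(treated, control)
        if p >= 0.05:
            nonsignificant += 1

    print(f"{nonsignificant / n_trials:.0%} of trials fail to reject the null, "
          f"even though the true effect is {true_effect}")

With these numbers the large majority of trials come back non-significant despite a genuine effect, which is exactly why “failing to reject” cannot be read as “no effect”.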

Bayesian and the brain

In news on 4 June 2008 at 3:48 pm

Researchers in computational neuroscience want to come up with a single theory to explain how the brain works—Bayesian statistics may provide the answer.  An article in NewScientist asks: Is this a unified theory of the brain? (although a subscription to NewScientist is required to access the article, the Mind Hacks blog found a link to a copy of the article posted elsewhere).

Neuroscientist Karl Friston and his colleagues have proposed a mathematical law that some are claiming is the nearest thing yet to a grand unified theory of the brain. From this single law, Friston’s group claims to be able to explain almost everything about our grey matter. […]

Friston’s ideas build on an existing theory known as the “Bayesian brain”, which conceptualises the brain as a probability machine that constantly makes predictions about the world and then updates them based on what it senses.

The article goes on to explain the Bayesian brain and how it is a group of related approaches that use Bayesian probability theory to understand different aspects of brain function. What Friston has done is introduce the framework for a “unifying theory”—a theory that ties everything together—using the idea of prediction error (a measure of surprise to be minimized) as “free energy”. Friston describes the theory as follows:

In short, everything that can change in the brain will change to suppress prediction errors, from the firing of neurons to the wiring between them, and from the movements of our eyes to the choices we make in daily life.
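As a loose illustration of the “probability machine” idea (my own toy sketch in Python, not Friston’s free-energy model), here is a simple sequential update in which each new sensory sample shifts the estimate in proportion to the prediction error, weighted by how uncertain the current belief is:

    # Toy "Bayesian brain" update (an illustration only, not Friston's model):
    # each noisy sensory sample nudges the belief by a fraction of the prediction
    # error, and the belief grows more certain as samples accumulate.
    import numpy as np

    rng = np.random.default_rng(1)
    true_signal = 2.0       # the actual state of the world (assumed)
    sensory_noise = 0.5     # standard deviation of the noisy senses (assumed)

    belief_mean, belief_var = 0.0, 4.0   # vague prior belief
    for sample in rng.normal(true_signal, sensory_noise, 25):
        prediction_error = sample - belief_mean
        gain = belief_var / (belief_var + sensory_noise**2)  # how much to trust the sample
        belief_mean += gain * prediction_error               # shift belief toward the data
        belief_var *= 1 - gain                               # become more confident

    print(f"belief after 25 samples: {belief_mean:.2f} (true value {true_signal})")

Minimizing the prediction error this way is just a standard Gaussian (Kalman-style) Bayesian update; Friston’s claim is far broader, extending the same principle from perception to learning and action.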

Many researchers aren’t yet convinced that the theory will be unifying—although they aren’t denying the possibility—and concerns have been raised that the theory may not be testable, or may not be usable to build machines that mimic the brain. But experiments are being proposed to help advance and prove the theory, and many agree that it has tremendous potential.

Econometrics lit review in video

In mixed on 27 May 2008 at 12:45 am

The National Bureau of Economic Research—a private, nonprofit, nonpartisan research organization—has made public an eighteen-hour workshop from its Summer Institute 2007: What’s New in Econometrics? Included are lecture videos, notes, and slides from the series.

The lectures cover recent advances in econometrics and statistics. The topics include (in the order presented):

  • Estimation of Average Treatment Effects Under Unconfoundedness 
  • Linear Panel Data Models
  • Regression Discontinuity Designs
  • Nonlinear Panel Data Models
  • Instrumental Variables with Treatment Effect Heterogeneity: Local Average Treatment Effects
  • Control Function and Related Methods
  • Bayesian Inference
  • Cluster and Stratified Sampling
  • Partial Identification
  • Difference-in-Differences Estimation
  • Discrete Choice Models
  • Missing Data
  • Weak Instruments and Many Instruments
  • Quantile Methods
  • Generalized Method of Moments and Empirical Likelihood

The speakers explain the material well, including some practical pros and cons of the methods presented. The slides are, however, typically academic: packed with content and equations, with little to support the speaker. In a way that’s expected, but it’s surprising given that separate lecture notes are provided.

It takes a bit of time to get into the talks, but once you do there’s lots to learn.  I suggest two open browser windows: one for the videos, one for the slides.  But avoid the temptation to read the slides—the speakers explain the material well and you’ll pick up quite a bit if you can focus on what they’re saying while you stare lovingly at the equations.

Special thanks to John Graves at the Social Science Statistics Blog for posting a notice about the series.