Here I take a stab at it.
A frequentist P-value (say, of a t-test for a difference between means) is the probability that a difference as large as or larger than the observed difference would occur if our two samples were drawn from the same distribution, and the same experiment were conducted repeatedly ad infinitum (or ad nauseam). "A P value is often described as the probability of seeing results as or more extreme as those actually observed if the null hypothesis were true" (2).
A frequentist P-value of such a t-test is apparently not many of the things we wish it were (1):
- it is not the probability that our null hypothesis is true.
- it is not the probability that such a difference could occur by chance alone.
- it is not the probability of falsely rejecting the null hypothesis.
Rather, it is:
- the probability of observing data as extreme or more extreme in ...
- repeated identical experiments and analyses, given ...
- the a priori belief that the null hypothesis is true (this is a Bayesian "prior").
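To make that repeated-experiments reading concrete, here is a minimal simulation sketch in Python (the sample sizes, seed, and shared distribution are arbitrary choices of mine, nothing canonical): the fraction of null-true replications with a t statistic at least as extreme as the observed one should approximate the analytic P-value.

```python
# A minimal sketch: approximate a t-test P-value by brute-force
# replication under the null. Sample sizes, seed, and the shared
# distribution are arbitrary illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# One "observed" experiment: two samples from the SAME distribution.
a = rng.normal(loc=0.0, scale=1.0, size=20)
b = rng.normal(loc=0.0, scale=1.0, size=20)
t_obs, p_obs = stats.ttest_ind(a, b)

# Many hypothetical identical experiments, all with the null true.
n_reps = 10_000
null_a = rng.normal(size=(n_reps, 20))
null_b = rng.normal(size=(n_reps, 20))
t_null = stats.ttest_ind(null_a, null_b, axis=1).statistic

# Fraction of replications at least as extreme as the observed t;
# this approximates the analytic two-sided P-value.
p_sim = np.mean(np.abs(t_null) >= abs(t_obs))
print(f"analytic P = {p_obs:.4f}, simulated P = {p_sim:.4f}")
```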
A frequentist confidence interval is a region calculated from our data and our selected significance level, α, in a manner which would include the "true" population parameter (e.g., the "true" mean) 100(1-α)% of the time if the same experiment and analysis were conducted repeatedly ad infinitum. With α = 0.05, it is a region which, so calculated, would include the true population parameter in 95% of all hypothetically repeated identical experiments. Thus, the population parameter of interest is fixed (i.e., a "true" value exists), and the interval is random (because it is computed from random data).
A frequentist confidence interval is not an interval which we are 100(1-α)% certain contains the true parameter. I don't even know what this statement (i.e., 95% certain) means -- what is "certainty"?
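The coverage interpretation, though, does lend itself to a quick sketch (again, the true mean, sample size, and α are arbitrary illustrative choices): simulate many identical experiments, compute an interval from each, and count how often the intervals trap the fixed true mean.

```python
# A sketch of the coverage interpretation: the true mean is FIXED, the
# interval is RANDOM. Across many repeated experiments, about 100(1-α)%
# of the computed intervals should contain the true mean. All numbers
# here are arbitrary illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean, sigma, n, alpha = 5.0, 2.0, 30, 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)

n_experiments = 10_000
covered = 0
for _ in range(n_experiments):
    sample = rng.normal(loc=true_mean, scale=sigma, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    half_width = t_crit * se
    covered += (sample.mean() - half_width <= true_mean
                <= sample.mean() + half_width)

print(f"coverage: {covered / n_experiments:.3f} (nominal {1 - alpha:.2f})")
```

Note where the randomness lives: in the interval endpoints, never in true_mean, which is the same in every pass through the loop.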
A 95% Bayesian credible interval (a.k.a. Bayesian confidence interval) is a continuous subset (i.e., an interval) of a posterior probability distribution of a parameter of interest (e.g., a mean). A posterior probability distribution is the probability distribution which results from combining our prior beliefs about the parameter with the likelihood of our data, given each possible value of the parameter. It assigns a probability to every possible value of our parameter of interest (given our priors, our data, and our model). The credible interval is merely an interval which contains the most probable of those values. That is, we are not sure what the parameter was while the world turned and our data were collected, but the safest bet is inside the credible interval.
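For concreteness, here is a hedged sketch in the simplest conjugate setting I know of, a binomial proportion with a Beta prior (the uniform prior and the particular data are illustrative assumptions, not recommendations):

```python
# A sketch: 95% credible interval for a binomial proportion with a
# conjugate Beta prior, so the posterior has a closed form. The uniform
# prior and the data are illustrative assumptions.
from scipy import stats

a_prior, b_prior = 1, 1        # Beta(1, 1): a flat prior belief about p
successes, failures = 12, 8    # the observed data

# Posterior = prior beliefs combined with the likelihood of the data:
# Beta(a_prior + successes, b_prior + failures).
posterior = stats.beta(a_prior + successes, b_prior + failures)

# Central 95% credible interval: given the prior, the data, and the
# model, the posterior probability that p lies in here is 0.95.
lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)
print(f"95% credible interval for p: ({lo:.3f}, {hi:.3f})")
```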
Key differences between frequentist and Bayesian statistics:
- Parameter of interest (e.g., a mean): fixed (the Platonic archetype exists) vs. random (i.e., subject to the whims of the gods of stochasticity)
- Prior beliefs: implicit and sometimes hard to discern vs. explicit and plainly stated.
- Statements of probability: Pr(data|null) vs. Pr(hypothesis|data). That is, frequentist P-values are the probability of observing your data (or more extreme data) given that the null hypothesis is true, whereas Bayesian posterior probability distributions describe the probability of your scientific hypothesis (not the null), given your data (see the numerical sketch after this list).
- "P-values": Exist vs. not typically presented (usually misinterpreted; Fisher used it as a "weight of evidence," whereas Neyman used it as a basis to make decisions (yes/no) but not necessarily true/false - I do not understand how or why they can do all that; in the Bayesian context, it is not always clear what such a beast would be because of differences in underlying interpretations).
My problem is that I do not have an intuition about how these things all differ, or under what conditions they are likely to differ substantially. Therefore, I cannot keep them clearly differentiated in my head; all I can do is recite them. It would help to explore the pathological cases where frequentist and Bayesian methods result in very different outcomes.
1. http://en.wikipedia.org/wiki/P-value#Frequent_misunderstandings
2. http://www.jerrydallal.com/LHSP/pval.htm
I think the problem is that a lot of these different metrics (relative likelihood of null vs alternative hypothesis (= relative probabilities if we take a Bayesian approach and set equal prior probabilities on the null and the alternative), p-value, etc.) *in general* tend to be correlated with each other under normal circumstances, even if they are not identical. I think the best way to gain intuition is to look over the various pathological examples (quoted in various papers by Berger, Royall's 1997 book, etc. etc.) where the p-values are really seriously misleading as measures of strength of evidence and try to figure out what the different metrics say in those cases, and whether those cases are atypical in some way. I think the piece of this that I am shakiest about is the use of Fisherian p-values [i.e. as opposed to Neyman-Pearson rejection procedures] as "measures of strength of evidence" -- Royall says this is a really bad idea and gives examples (he would prefer the likelihood ratio of null and alternative hypotheses), but I find them tempting.
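As one toy sketch of the kind of divergence Royall worries about (my own numbers, not an example from his book): for the same data, a borderline "significant" P-value can coexist with a likelihood ratio that, even against the best-supported alternative, sits below his rough "fairly strong" benchmark of 8.

```python
# Sketch: Fisherian P-value vs. likelihood ratio as "strength of
# evidence," in the spirit of Royall's critique. The data (220 successes
# in 400 trials) are an illustrative choice of mine.
from scipy import stats

n, k = 400, 220
p_hat = k / n  # MLE of the success probability

# Fisherian reading: exact two-sided P-value against H0: p = 0.5.
p_value = stats.binomtest(k, n, 0.5).pvalue

# Royall-style reading: likelihood ratio of the best-supported
# alternative (the MLE) against the null.
lr = stats.binom.pmf(k, n, p_hat) / stats.binom.pmf(k, n, 0.5)

print(f"P-value          = {p_value:.3f}")  # ~0.05: borderline "significant"
print(f"likelihood ratio = {lr:.1f}")       # ~7: below Royall's rough
                                            # "fairly strong" benchmark of 8
```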