
The cult of statistical significance
Steve Ziliak, haiku economist and co-author of The Cult of Statistical Significance, points me to this geeky but splendid video about a much-misunderstood subject:
Steve’s a guest on this week’s More or Less. Here’s an earlier piece about his work, although I am worried that I may have got some of the technical details wrong.
6th of April, 2011 • Video
5 Comments
Adrian Liston says:
Well, that is not really accurate. You could lower the p-value by just pumping in extra participants, but equally you could increase the p-value by pumping in extra participants. It all depends on whether the initial batch was representative of the larger group or not. Obviously there is a cult of 0.05, since editors see 0.049 as profoundly different from 0.051, when really there is next to no difference, but the objection that p is just a measure of n is not really sound.
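A minimal simulation sketch of this point, with invented numbers and SciPy's two-sample t-test: a real effect can start out non-significant and firm up as n grows, while an unrepresentative early batch can produce a small p that washes out as better data arrive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def p_value(a, b):
    # Two-sample t-test p-value for a difference in means
    return stats.ttest_ind(a, b).pvalue

# A real effect: the small initial batch is noisy, so p is often above
# 0.05; adding participants sharpens the estimate and p tends to fall.
treated = rng.normal(0.5, 1.0, 200)   # true effect of 0.5
control = rng.normal(0.0, 1.0, 200)
print(p_value(treated[:20], control[:20]))  # small n: typically not significant
print(p_value(treated, control))            # large n: typically p << 0.05

# No real effect, but an unrepresentative first batch (simulated here by
# shifting it): the fluke gives a small p that rises as the sample grows.
fluke_batch = rng.normal(1.0, 1.0, 20)
later_batch = rng.normal(0.0, 1.0, 180)
comparison = rng.normal(0.0, 1.0, 200)
print(p_value(fluke_batch, comparison[:20]))  # typically "significant" by luck
print(p_value(np.concatenate([fluke_batch, later_batch]), comparison))  # p rises
```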
6th of April, 2011
Daniel says:
Of course, p-values being used to imply significance when there is none is a problem, too. See today’s xkcd: http://xkcd.com/882/
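That comic is the multiple-comparisons trap in miniature, and a quick sketch (invented data, SciPy's t-test) shows the same mechanism: run twenty tests of a true null at the 0.05 threshold and roughly one spurious "discovery" appears by chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Twenty independent tests where the null is true: at the 0.05 threshold
# we expect about one spurious "significant" result purely by chance.
p_values = [
    stats.ttest_ind(rng.normal(0, 1, 30), rng.normal(0, 1, 30)).pvalue
    for _ in range(20)
]
print(sum(p < 0.05 for p in p_values))  # typically around 1
```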
6th of April, 2011
Mo says:
Reading the earlier article you linked to, I was struck by this: “Researchers estimated that every dollar spent on the programme saved $4.30 and were 87 per cent confident that the result was real.”
Surely this is completely the wrong kind of test? It’s not that the programme definitely saved either $4.30 or $0.00, but that the programme saved an indeterminate amount and there’s an 87% confidence that the amount in question was $4.30. Therefore they should have calculated a weighted probability distribution and arrived at a figure that might have been e.g. $4.10 +/- $0.20 with 95% confidence?
Sorry, I know this is old news, but it’s nagging at me now 🙂
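One way to see what Mo is driving at: if “87 per cent confident that the result was real” is read as a two-sided p-value of 0.13 against a null of zero savings (one plausible reading, and an assumption, since the article gives no standard error), then the interval implied by those same numbers is strikingly wide:

```python
from scipy import stats

point_estimate = 4.30   # dollars saved per dollar spent, as reported
p_two_sided = 0.13      # assumed reading of "87 per cent confident"

# z-score consistent with that p-value, assuming a normal sampling distribution
z = stats.norm.ppf(1 - p_two_sided / 2)   # about 1.51
implied_se = point_estimate / z           # about $2.84

# The interval Mo is asking for, under the same assumptions
lo = point_estimate - 1.96 * implied_se
hi = point_estimate + 1.96 * implied_se
print(f"95% CI: ${lo:.2f} to ${hi:.2f}")  # roughly -$1.27 to $9.87
```

On that reading the 95% interval spans zero, which is exactly why a single headline figure with an “87 per cent confident” tag can mislead.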
7th of April, 2011
Tim Harford says:
Mo – my statistical knowledge may have wobbled here, but Ziliak and others – for instance Gerd Gigerenzer in this PDF – argue that you are quite right. A bright-line statistical confidence test is exactly the wrong kind of test. Gigerenzer’s paper is good on this and also discusses what Fisher did and did not think in much more detail than I can.
7th of April, 2011
Eduardo says:
I don’t really see the point. The p-value and the confidence interval are essentially the same information. Is your point estimate within the confidence interval for a given p-value? What is the minimum p-value for which that happens? Isn’t that the same information?
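The duality Eduardo describes does hold mechanically – a (1 − α) interval excludes the null exactly when p < α – and a quick sketch shows it (invented data; the confidence_interval method on SciPy's t-test result is assumed available, as in recent SciPy releases):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(0.4, 1.0, 50)   # made-up sample

# p-value for the null hypothesis "true mean = 0"
result = stats.ttest_1samp(x, 0.0)
p = result.pvalue

# The confidence interval at level 1 - p has one endpoint exactly on the
# null value: the smallest p for which the interval still excludes zero.
ci = result.confidence_interval(confidence_level=1 - p)
print(p)
print(ci.low, ci.high)   # one endpoint is ~0.0
```

The arithmetic is indeed the same; the difference, and the point Ziliak and Gigerenzer press, is that the interval puts the magnitude and the uncertainty on display, which a bare p-value hides.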
17th of April, 2011