Comments on: The cult of statistical significance
http://timharford.com/2011/04/the-cult-of-statistical-significance/
The Undercover Economist | Mon, 21 Jul 2014 10:39:45 +0000

By: Eduardo
http://timharford.com/2011/04/the-cult-of-statistical-significance/comment-page-1/#comment-170
Sun, 17 Apr 2011 21:08:38 +0000
I don’t really see the point. P-values and confidence intervals are essentially the same information. Is your point estimate within the confidence interval for a given p-value? What is the minimum p-value for which that happens? Isn’t that the same information?
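[A sketch of the duality Eduardo describes, under a normal approximation; the helper functions and numbers are illustrative, not from the post: a 95% confidence interval excludes the null value exactly when the two-sided p-value is below 0.05.]

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def two_sided_p(estimate, null, se):
    """Two-sided p-value for a z-test of estimate against null."""
    z = abs(estimate - null) / se
    return 2 * (1 - normal_cdf(z))

def ci_95(estimate, se):
    """95% confidence interval for the estimate."""
    return (estimate - 1.96 * se, estimate + 1.96 * se)

# Hypothetical numbers: estimate 2.1, standard error 1.0, null value 0.
est, se, null = 2.1, 1.0, 0.0
lo, hi = ci_95(est, se)
p = two_sided_p(est, null, se)

# The interval excludes the null exactly when p < 0.05.
print(lo > null or hi < null, p < 0.05)
```

[Both printed booleans always agree: excluding the null from the 95% interval and p < 0.05 are the same statement, which is Eduardo’s point.]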
By: Tim Harford
http://timharford.com/2011/04/the-cult-of-statistical-significance/comment-page-1/#comment-169
Thu, 07 Apr 2011 10:46:54 +0000
Mo – my statistical knowledge may have wobbled here, but Ziliak and others – for instance Gerd Gigerenzer in this PDF – argue that you are quite right. A bright-line statistical confidence test is exactly the wrong kind of test. Gigerenzer’s paper is good on this and also discusses what Fisher did and did not think in much more detail than I can.
By: Mo
http://timharford.com/2011/04/the-cult-of-statistical-significance/comment-page-1/#comment-168
Thu, 07 Apr 2011 09:21:57 +0000
Reading the earlier article you linked to, I was struck by this: “Researchers estimated that every dollar spent on the programme saved $4.30 and were 87 per cent confident that the result was real.”
Surely this is completely the wrong kind of test? It’s not that the programme definitely saved either $4.30 or $0.00, but that the programme saved an indeterminate amount, and there’s an 87% confidence that the amount in question was $4.30. Therefore they should have calculated a weighted probability distribution and arrived at a figure that might have been, e.g., $4.10 +/- 0.20 with 95% confidence?
Sorry, I know this is old news, but it’s nagging at me now 🙂
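[Mo’s back-of-envelope point can be made concrete. Under the loud assumption that “87 per cent confident the result was real” means a one-sided probability of 0.87 that the true effect exceeds zero, and using a normal approximation (both assumptions mine, not the researchers’), the implied 95% interval around $4.30 is far wider than +/- 0.20:]

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def normal_quantile(p, lo=-10.0, hi=10.0):
    """Inverse normal CDF by bisection (accurate enough here)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

estimate = 4.30            # dollars saved per dollar spent
confidence_real = 0.87     # "87 per cent confident that the result was real"

# If 0.87 is the one-sided probability the true effect exceeds zero,
# the implied z-score and standard error are:
z = normal_quantile(confidence_real)
se = estimate / z

lo, hi = estimate - 1.96 * se, estimate + 1.96 * se
print(f"implied SE ≈ {se:.2f}, 95% CI ≈ ({lo:.2f}, {hi:.2f})")
```

[The implied 95% interval stretches from below zero to well above $10, which is exactly Mo’s complaint: a single point estimate plus “87 per cent confident it’s real” hides enormous uncertainty that an interval would make visible.]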
By: Daniel
http://timharford.com/2011/04/the-cult-of-statistical-significance/comment-page-1/#comment-167
Wed, 06 Apr 2011 19:48:06 +0000
Of course, p-values being used to imply significance when there is none is a problem, too. See today’s xkcd: http://xkcd.com/882/
By: Adrian Liston
http://timharford.com/2011/04/the-cult-of-statistical-significance/comment-page-1/#comment-166
Wed, 06 Apr 2011 17:59:24 +0000
Well, that is not really accurate. You could lower the p-value by pumping in extra participants, but you could equally increase it by pumping in extra participants. It all depends on whether the initial batch was representative of the larger group or not. Obviously there is a cult of 0.05, since editors see 0.049 as profoundly different from 0.051 when really there is next to no difference, but the objection that p is just a measure of n is not sound.
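[Adrian’s point can be illustrated with a toy one-sample z-test on made-up data (known standard deviation of 1, purely for illustration): extra participants shrink the p-value only if they resemble the initial batch; if they cluster at zero, the p-value rises instead.]

```python
import math

def p_value(sample, sd=1.0):
    """Two-sided z-test p-value against mean 0, known sd (illustrative)."""
    n = len(sample)
    z = abs(sum(sample) / n) / (sd / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

initial = [0.5] * 20                        # hypothetical first batch, mean 0.5
print(round(p_value(initial), 4))

# New participants who look like the first batch: p falls.
print(round(p_value(initial + [0.5] * 20), 4))

# New participants clustered at zero: p rises.
print(round(p_value(initial + [0.0] * 20), 4))
```

[The same doubling of n moves p in opposite directions depending on what the new data look like, so p is not “just a measure of n”.]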