someone asked me "what's the problem with NHST exactly?", thereby activating the neurons of my standard rant against NHST:
Null hypothesis significance testing was never intended to be a way to "prove" a scientific hypothesis against a zero hypothesis (or, as Gelman calls it, a "strawman hypothesis"). Its adoption as such by the social and medical sciences was a disaster. Statisticians, including the guy who invented p-values (Fisher), have complained about it the entire time, but they were never given enough heed, because everyone wants a black-box formula where you input your data and the computer tells you your hypothesis is "true" (there isn't and cannot be such a tool). NHST is arguably one of the single biggest causes of the replication crisis.
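To see one concrete way the "significance filter" goes wrong, here is a small simulation sketch of what Gelman and Carlin call a Type M (magnitude) error: when a study is underpowered, the estimates that happen to clear p < 0.05 systematically exaggerate the true effect. The numbers below (effect size, standard error) are illustrative assumptions, not from any real study.

```python
# Illustrative simulation: significance filtering exaggerates effects.
# Assumed setup: a small true effect measured with large noise (low power).
import random
import statistics

random.seed(0)

true_effect = 0.2    # assumed small real effect
se = 1.0             # assumed standard error (very noisy study)
n_sims = 100_000

significant = []
for _ in range(n_sims):
    estimate = random.gauss(true_effect, se)
    # two-sided z-test at alpha = 0.05: "significant" if |z| > 1.96
    if abs(estimate / se) > 1.96:
        significant.append(estimate)

share = len(significant) / n_sims
exaggeration = statistics.mean(abs(e) for e in significant) / true_effect
print(f"share of simulated studies reaching p < .05: {share:.3f}")
print(f"avg exaggeration factor among 'significant' results: {exaggeration:.1f}x")
```

Under these assumptions only a few percent of studies come out "significant", and those that do overestimate the true effect by roughly an order of magnitude, which is one mechanism behind the replication crisis the paragraph above describes.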
For details, check out the work of statistician Andrew Gelman ("Abandon Statistical Significance" is a good entry point). See also Gigerenzer, "The Null Ritual". Gelman's one-page commentary on The American Statistical Association's statement on p-values is also a valuable read.
If you're in the social/bio/etc. sciences and you're ever in the position of proposing research, the best advice I can give is to lobby the grant agency to include a statistician who designs the statistical analysis. The most reliable statistics, IMO, are done when a specialist with lots of field-specific knowledge (e.g. a biologist) cooperates with a statistician who knows which methods work best for that kind of problem. In the absence of that, abstain from "uncertainty laundering", that is, from using magical algorithms to pretend we know more about the world than we actually do. Even if it makes it harder to publish, because the field has settled on a posture of definitive statements in catchy press releases, it's our duty as scientists to make the uncertainty as clear as possible.