Comments on The 20% Statistician: "So you banned p-values, how's that working out for you?" by Daniel Lakens (19 comments)

www.surjyasaikia.in (2019-05-04):
The p-value should be there, just to validate methodological correctness, to give uniformity to research work, and to strengthen the justification of findings with respect to the individual terms of the work, but not to support the hypothesis as universal fact. Of course, we can encourage reporting power and effect sizes, because there are many studies where power is compromised. What I liked about Trafimow's article is that it highlights the dishonest attempts of researchers to get their papers published in journals on the basis of p-values despite unrealistic elements like exceptionally low n (as small as 3), skewed distributions, non-homogeneity, etc. BASP might have grown fatigued with such papers. That is why they wrote: "we encourage the use of larger sample sizes than is typical in much psychology research, because as the sample size increases, descriptive statistics become increasingly stable and sampling error is less of a problem" (Trafimow & Marks, 2015, doi:10.1080/01973533.2015.1012991). Honest and judicious use of p or CI is always welcome.

Daniel Lakens (2017-10-20):
Hi, the Type 1 error rate has increased because people stop controlling their error rates at 5% when reporting multiple tests. So it must logically be higher.

Gerben Mulder (2017-10-20):
Hi Daniel, could you please tell me how you have come to think that the Type I error rate increased? You seem to believe (but correct me if I am wrong) that a p-value tells you whether a Type I error has been made or not. But that is simply not true. If my decision criterion is, for instance, to reject when p is between .95 and 1.00, the Type I error rate is the same as when I reject when p < .05: in both cases it is .05. So, given the first criterion, rejecting when p = .99 provides perfect control of Type I errors (but of course not of Type II errors). Unless one magically determines which null hypotheses are actually true, there is no way of knowing whether or not a Type I error has been made. A rejection of a true null is a Type I error regardless of the value of p used to make the decision. (The idea that p-values tell you something about the probability of a Type I error is called the local Type I error fallacy.)

MAYO: ERRORSTAT (2016-06-02):
The fact that researchers are struggling isn't the grounds on which to bash the editors; the very real fact that they truly misunderstand the statistics of significance tests is. I came to learn that through Trafimow's papers.

Daniel Lakens (2016-02-12):
Ha, thanks! I'm sure someone will remove it very soon ;)

Daniel Lakens (2016-02-12):
Hi Chris, I think the editors are to blame for not taking responsibility for checking the articles they publish better than they have. Also, the surprisingly large number of citations to articles that are not good and that suggest NHST is crap annoys me: http://daniellakens.blogspot.nl/2015/11/the-relation-between-p-values-and.html Obviously the authors and reviewers can and should improve, but I'm criticizing the editorial strategy here.

R Chris Fraley (2016-02-12):
It seems that there is a lot of editor bashing here. I think that is inappropriate. The important lesson to be learned from this ban is that, without NHST, most researchers (and readers of empirical research) are incapable of evaluating empirical data. The fact that researchers are struggling should not be used to mock Trafimow and Marks. If anything, their ban on NHST has helped make salient just how much of our critical thinking we have outsourced to misunderstood statistical procedures.

Daniel Lakens (2016-02-11):
You are right: I don't think they banned them, but they are missing from many papers (maybe the majority). The editors/reviewers should have asked for them, but I don't really think they are intentionally banned. I was slightly exaggerating there. ;)

b.logg.ear (2016-02-11):
Interesting post! You write: "They also banned reporting sample sizes for between subject conditions", but I don't remember seeing a ban on this anywhere, and I checked with the editor: he says they never banned reporting sample sizes for between-subjects conditions, only p-values and traditional confidence intervals. Did I miss something? Thanks for all your work. Cheers

baruce (2016-02-11):
Enlightening post, thanks! One comment on "They don't, and have never, discussed the only thing p-values are meant to do: control error rates." Fisher didn't think so, did he? I mean, you don't really need p-values; if you just want to control long-term error rates, you could ask people to calculate the relevant test statistic and compare it to a pre-set critical value. Maybe that wouldn't give the feel of a continuous measure of strength of evidence that the p-value has.

Anonymous (2016-02-11):
I agree. (On the second point, I have no opinion on Shakespearean English.)

Anonymous (2016-02-11):
If you read the editorial, you'll see that they are not exactly encouraging authors to use Bayes factors either. I personally really like Bayes factors for hypothesis testing, but I would perhaps not risk submitting them to BASP after reading that editorial.

Anonymous (2016-02-11):
https://en.wikipedia.org/wiki/Basic_and_Applied_Social_Psychology

Sam Schwarzkopf (2016-02-11):
While I think it's not strictly incorrect, personally I feel it ought to be 'thou shalt', not 'thou shall' ;)
I agree that banning p-values without giving any alternative for hypothesis testing was a silly move.

Nick Brown (2016-02-11):
It's almost as if the problem is with the incentives of the publishing system, rather than with the specific ways in which those problems manifest themselves.
I suspect that if p-values were declared illegal worldwide tomorrow, we would quickly see a consensus around d=.02 or r=.10 or pesq2=.02 or even BF=6 as the new shorthand for "Look what a clever scientist I am, can I have some more money now please?".
On the other hand, change takes time. Many of these articles will have been in the pipeline when the journal announced its new policies. Every journey starts with a small step, etc. The problem is to determine when to examine one's progress on that journey and decide whether to carry on, or go home and have a cup of tea on your familiar comfy sofa.

Nick Brown (2016-02-11):
(This comment has been removed by the author.)

Ray (2016-02-11):
It's surprising that, even now with JASP being so easy to use, no one reported a BF. That's even less improvement than my low expectations expected.

Daniel Lakens (2016-02-11):
I did not encounter any paper using Bayesian statistics in 2015. Note that it would have made sense in the eight-study paper I mention in the blog post, where they don't find support for their hypotheses, but even there, nothing.

Ray (2016-02-11):
So how many papers in BASP used Bayesian statistics in the year before the p-value ban versus the year after? Is there a qualitative difference?
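
Gerben Mulder's point above (that any p-value interval of total width .05 controls the Type I error rate at .05 under a true null, because p is then uniformly distributed) can be checked with a quick simulation. This is an illustrative sketch, not part of the original thread; it assumes numpy and scipy are available:

```python
# Under a true null hypothesis the p-value of a valid test is uniform
# on [0, 1], so "reject when p >= .95" yields the same 5% Type I error
# rate as the conventional "reject when p < .05".
import numpy as np
from scipy import stats

rng = np.random.default_rng(20160211)
n_sims, n_per_group = 20_000, 30

# Two groups drawn from the SAME normal distribution: the null is true,
# so every rejection is by definition a Type I error.
a = rng.normal(size=(n_sims, n_per_group))
b = rng.normal(size=(n_sims, n_per_group))
p = stats.ttest_ind(a, b, axis=1).pvalue

print(f"reject when p <  .05: {np.mean(p < 0.05):.3f}")   # close to 0.05
print(f"reject when p >= .95: {np.mean(p >= 0.95):.3f}")  # also close to 0.05
```

Both rejection rules reject a true null about 5% of the time, which is exactly Mulder's point: the long-run error rate is set by the width of the rejection region, not by the location of the p-values inside it (only the conventional region also delivers power against false nulls).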