Questions

Benedek Bartha -

In their paper on Scientific Misconduct in Psychology, Stricker and Günther write:

"Interestingly, the trend that was identified for article retractions in psychology was not found for gross statistical inconsistencies in published psychological articles which are regarded as a potential indicator of scientific misconduct or QRPs (Nuijten et al., 2016). This finding supports Fanelli’s (2013) notion that the increase in article retractions is mostly attributable to improved detection and retraction systems (also see Gross, 2016)." 

Does this mean that gross statistical inconsistencies – big but earnest mistakes – were left undetected and unretracted, while frauds were increasingly admitted and retracted voluntarily? If so (but maybe I misunderstand this part), it raises the following questions:

(1) How much have those detection and retraction systems actually improved if retractions didn't increase for gross statistical inconsistencies? Or maybe they did, but proportionately much less? (I haven't looked at the cited paper by Nuijten...)

(2) If gross statistical inconsistencies are that much more likely to remain, implying the detection and retraction systems are still not top-notch (even if they have improved), then even if the number of self-admitted retractions has increased, it's hard to know how many insincere submissions remain undetected and unretracted, right?

In reply to Benedek Bartha

Re: Questions

Christophe Heintz -
I understand the quote as saying that the increase in the retraction rate is a consequence of "improved detection". It is an argument in favour of having scientific institutions for detecting scientific misconduct. At the moment, this is done by some scientists who are not especially rewarded for their work. It does not mean that earnest mistakes are not retracted. It is just that the rate did not change ...
I don't know the answers to your questions, but they both read as an appeal to do better ... how can we do that?
Myself, when I review papers, I do not peer into the data and the details of the analysis. I already think that it is a lot of work to review a paper for a journal when just looking at the validity of the argument (it usually takes me more than one day of work), and I am not recognised or rewarded for doing this work. Checking the data set and its consistency is really not what I want to spend my time doing. Recently, I have added a disclaimer addressed to the editor when reviewing papers: "Please note that I have not looked at the data set or the specifics of its analysis". It is a way to shift the responsibility from me to the editor. However, I know that the editor does not have the tools, time or incentives for doing a thorough check either. Maybe Elsevier, Springer and co. could spend some of their incredible margins for that purpose? They are not doing it at the moment, but they at least realise that they have an interest in investing in developing automatic systems that make some checks.
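To make concrete the kind of automatic check I have in mind: the inconsistencies Nuijten et al. look for are cases where a reported p-value does not match the reported test statistic and degrees of freedom, and a "gross" inconsistency is one where the discrepancy flips the result across the significance threshold. Here is a minimal sketch in Python (my own illustration, not any publisher's actual tool; the function name and the crude rounding tolerance are assumptions, and real checkers handle rounding of the reported statistic more carefully):

```python
from scipy import stats

def check_t_report(t_value, df, reported_p, alpha=0.05, tol=0.01):
    """Compare a reported two-tailed p-value with the one implied by t and df."""
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)
    inconsistent = abs(recomputed_p - reported_p) > tol
    # "Gross" inconsistency: the mismatch changes which side of alpha the result falls on.
    gross = inconsistent and ((recomputed_p < alpha) != (reported_p < alpha))
    return recomputed_p, inconsistent, gross

# Example report: "t(28) = 2.20, p = .01". The recomputed p is about .036,
# so the report is inconsistent, but not gross (both values stay below .05).
print(check_t_report(t_value=2.20, df=28, reported_p=0.01))
```

Something of this sort could be run automatically over submitted manuscripts without asking reviewers for any extra work.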
In reply to Christophe Heintz

Re: Questions

Shubhamkar Ayare -
My very brief experience with the review systems of conferences and journals has been that the statistics or any other details are seldom checked thoroughly. The editors and reviewers only seem to have the time to evaluate the story, the overall picture, and a few routine details, but rarely anything beyond that. Is that rather common? Because in that case, the responsibility for the correctness of the papers seems to fall largely on the individual, their immediate labmates or supervisor, and more generally the department, to ensure that everything is going well.