Open science provides fraud deterrence and facilitates fraud detection.

When I first learned about open science around 20 years ago (then called "open notebook science"), I never thought about research fraud. Now it is a critically important issue.

Open science practices provide part of the records needed for a data provenance or chain-of-custody trail. Here's one of my posts from earlier this year about fraud deterrence: alexholcombe.wordpress.com/202… .

So in my view, open science policy updates need to consider fraud deterrence. Disappointed to see no consideration of the issue in the TOP guidelines update. journals.sagepub.com/doi/10.11…

in reply to Alex Holcombe

Part of the issue is the incentives. Science has to stop evaluating scientists by the number of publications. That never was a good idea, and now even less so.

#academia

in reply to 〽️ɪɢᴜᴇʟ ➡️ GodotFest

@bitbraindev You are asking "what else should we evaluate (people who spent a decade learning their craft) by?"

There is only a single method that actually works:

Trust other scientists in the same field (only they can actually understand the research) and punish proven fraud by expulsion.

Competition for jobs doesn’t work in science.

Giving publications a secondary objective ("get a job") ruins them for their prime objective (communicate).

draketo.de/english/science/qua…

@albertcardona @alexh

in reply to ArneBab

@ArneBab @bitbraindev
Indeed – publishing's purpose is to communicate progress in scientific research. Nothing else, nothing more.

On the measure and evaluation of science, I continue to think Ross Cagan's vision is best:
mathstodon.xyz/@albertcardona/…

in reply to Albert Cardona

@albertcardona I think the part "it was pretty much a given if you were doing good work" is the most important part of that.

Don’t constantly stress people about their employment future while they are working for you. That stress steals focus from the work they are doing that actually benefits society.

@bitbraindev @alexh

in reply to ArneBab

@ArneBab @albertcardona @bitbraindev the idea of counting papers is especially ridiculous considering that evaluating the long-term impact of scientific work has always been notoriously difficult. The early work on mRNA vaccine technology, CRISPR, restriction enzymes, you name it, comprised only a small number of papers that ended up being very influential much later.
in reply to ArneBab

@ArneBab @bitbraindev @albertcardona there is also an obsession with over-evaluation. We need to learn to be okay with not evaluating/comparing people.
in reply to Frank Aylward

@ArneBab @bitbraindev @albertcardona I will say that it can be difficult to retain one's love of science when one is forced to work in an ecosystem of constant evaluation and comparison, even when it is obvious that so much of it is meaningless. I see this as a primary reason why so many people leave the community.
in reply to Frank Aylward

@foaylward There’s some background from concept discussed in the book Thinking Fast and Slow:

Since you often only know years (or decades) later whether some research was important, building intuition can't work, and creating metrics to quantify it is a fool's errand.

But people try anyway. It looks a lot like superstition caused by a need to have certainty.

Either an emotional need, or a cover-my-ass method in case something doesn't work out.

@bitbraindev @albertcardona @alexh

in reply to ArneBab

@ArneBab @foaylward @bitbraindev @albertcardona Does it really take decades to know though? This feels like one of those things people say, like “protein has to be the genetic material because nucleic acid is too simple” or “bacteria don’t have genes”.

Where is the systematic evidence (not anecdata) that this is true?

in reply to Kristine Willis

@ArneBab @foaylward @bitbraindev @albertcardona I happened to be re-reading "Science, the Endless Frontier" recently and it seems possible that this idea comes from Vannevar Bush, who asserts it pretty much in this exact form; and I am beginning to suspect it's something people have just been repeating for ... decades, because it feels true.
in reply to Kristine Willis

@kristine_willis If you want to check that, the easiest sanity check could be to investigate when the science that later led to Nobel prizes was first recognized as groundbreaking.

The next step could be to check which prior results these breakthroughs required in order to be possible, and how long it took for those prior results to be recognized as important (time after publication).

@foaylward @bitbraindev @albertcardona @alexh
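
(A rough sketch of that sanity check, for anyone who wants to try it: the snippet below pairs an approximate key-publication year with the corresponding Nobel Prize year for a few of the examples mentioned upthread and reports the lag. The years are approximate, and the prize year is only an upper bound on when the field itself recognized the work, so treat this as an illustration of the calculation, not a result.)

```python
# Illustrative sketch only: approximate key-paper years vs. Nobel Prize years
# for a few examples mentioned upthread. A real analysis would need a curated
# dataset and a better proxy for "recognized as groundbreaking" than the prize.

from statistics import mean, median

# (topic, approximate year of key publication, year of Nobel Prize)
EXAMPLES = [
    ("CRISPR-Cas9 genome editing", 2012, 2020),                # Chemistry 2020
    ("Nucleoside-modified mRNA (mRNA vaccines)", 2005, 2023),  # Medicine 2023
    ("Restriction enzymes", 1970, 1978),                       # Medicine 1978
]

def recognition_lags(examples):
    """Return (topic, years from key paper to prize) for each example."""
    return [(topic, prize - paper) for topic, paper, prize in examples]

if __name__ == "__main__":
    lags = recognition_lags(EXAMPLES)
    for topic, lag in lags:
        print(f"{topic}: ~{lag} years from key paper to prize")
    years = [lag for _, lag in lags]
    print(f"median ~{median(years)} years, mean ~{mean(years):.1f} years")
```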

in reply to ArneBab

@ArneBab Could not agree more; this is an obvious test. But we have to define "recognition".

I suspect that the immediate field recognizes breakthroughs much more rapidly than the broader scientific community, and I would hypothesize that the lag between recognition by practitioners of a sub-specialty and recognition by the scientific community in general is what gives the appearance of a long delay.

@foaylward @bitbraindev @albertcardona @alexh

in reply to Kristine Willis

@ArneBab The sleeping beauty phenomenon you're describing in comp sci fits a different pattern, where neighboring practitioners don't recognize the immediate utility (and maybe they never do; it's some other community that finds the utility). But this just means, I think, that there can be delays, not that the delay in recognition is obligatory.

@foaylward @bitbraindev @albertcardona @alexh

in reply to Kristine Willis

@ArneBab We may be violently agreeing here. But I seem to hear the "you can never know" line rolled out as a kind of bromide to provide cover for funding work that, as a subject matter expert, I'm pretty sure is a dead end.

Of course I could be wrong. Sure. Absolutely. The problem is, our resources are finite, and I wonder to what extent this paradigm is undermining progress by contributing to problematic hyper-competition. @foaylward @bitbraindev @albertcardona @alexh

in reply to Kristine Willis

@kristine_willis @ArneBab @foaylward @bitbraindev
A key point here is that in scientific research, competition is counterproductive. No scientist in their right mind would want to compete with anyone. And if a piece of work is so obvious that multiple labs are on it, collaboration beats competition any day. There's no point in being a month faster and scooping someone; even the concept of scooping is absurd: if anything, being scooped would be confirmation, validation – and very valuable. Most, though, would rather work on questions whose answers push the horizon of knowledge.

Resources are indeed finite, hence let's stop competition for papers, for grants, for positions. There is no point in that. Define what size of scientific research sector the country can support and go with that, with properly funded labs.

#academia #ScientificPublishing

in reply to Albert Cardona

@albertcardona I see two different aspects:

One is that funding is provided by people outside the field, and they want proof of value. But they don’t want to trust the people within the field ("they all know each other, so how can they be objective?"). So they request something impossible.

The second is that abuse of funds absolutely does happen, and bad theories often only die with their generation. So there *is* a need for outside involvement in funding decisions.

@kristine_willis @foaylward @bitbraindev @alexh

in reply to ArneBab

@albertcardona The current solution is to make the most accomplished scientists waste at least a third of their time writing grants, just so that the people they know are doing good work have a chance to continue that work.

And the lack of job security kills a lot of good work, because people are on edge instead of focusing on their passion.

So the current situation is just bad.

The only ones who benefit are administrations who can point to metrics.

@kristine_willis @foaylward @bitbraindev @alexh

in reply to ArneBab

@albertcardona "We did not choose wrong: here are the numbers to prove that our selection is the only correct one, so if it does not work out, we are not responsible."

@kristine_willis @foaylward @bitbraindev @alexh

in reply to ArneBab

@ArneBab @albertcardona Completely agree that failing to right-size and instead attempting infinite growth has been a very serious mistake; that sets up an insane and counterproductive competition in place of what should be collaborative discussions. And I also agree that a lack of stability is disastrous and kills good ideas. @foaylward @bitbraindev @alexh
in reply to Kristine Willis

@ArneBab @albertcardona But this just means pushing decision-making back a step; who gets to have the stable career? What is the “right size” research enterprise? How do we decide that? As a scientist, I just don’t see how we make those decisions in a data-free way. @foaylward @bitbraindev @alexh
in reply to Albert Cardona

@albertcardona @ArneBab @foaylward @bitbraindev I’ve spent a decade up-close and personal with the “bring outsiders to evaluate” model and in my experience, as a stand-alone solution, it leaves a lot to be desired. Not least that if you ask different “outsiders” you get different results.
in reply to Kristine Willis

@albertcardona @ArneBab @foaylward @bitbraindev and, at the same time, there are some important topics that all the outside reviewers universally down-rank: health disparities. Women’s health. Bio-engineering.
science.org/doi/10.1126/sciadv…
When you make judgements by asking a limited number of experts for their subjective feelings, you get … uneven results.
in reply to Kristine Willis

@kristine_willis @ArneBab @foaylward @bitbraindev
Indeed, the evaluation problem is a tough one. Two points.

1. Outsider perspectives are always needed. Hence I'd value most an evaluation committee composed of 1/3 internal, 1/3 national, and 1/3 international members. The result should be biased towards not squashing potentially great people or projects, at the cost of letting some less good ones continue. While the cost of an error in the latter case is small, the cost of an error in the former is gigantic.

2. Since it's impossible to be perfect, I'd use, again, the Ross Cagan proposal of funding levels: go up, go down, and so on, depending on past performance, not future prospects. In other words, no grants: the evaluation is done on past work only.

#academia
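
(To make the "funding levels" idea concrete, here is a minimal hypothetical sketch of one way such a scheme could work, as I read the description above: a lab sits at one of a few discrete levels, and a periodic review of past work moves it at most one step up or down. The level names, the number of levels, and the review signal are invented for illustration and are not part of Cagan's actual proposal.)

```python
# Hypothetical sketch of a "funding levels" scheme as described above:
# a lab occupies one of a few discrete funding levels, and a periodic
# review of *past* work moves it at most one step up or down.
# Level names and the review signal are invented for illustration.

from enum import IntEnum


class FundingLevel(IntEnum):
    MINIMAL = 0
    STANDARD = 1
    EXPANDED = 2


def next_level(current: FundingLevel, past_work_positive: bool) -> FundingLevel:
    """Move one step up after a positive review of past work, one step down otherwise."""
    step = 1 if past_work_positive else -1
    bounded = min(max(current + step, FundingLevel.MINIMAL), FundingLevel.EXPANDED)
    return FundingLevel(bounded)


# Example: a lab at STANDARD with a positive review moves to EXPANDED;
# a negative review in the next cycle would move it back to STANDARD.
level = next_level(FundingLevel.STANDARD, past_work_positive=True)
print(level.name)  # EXPANDED
```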

in reply to Didier Ruedin

@druedin @kristine_willis @ArneBab @foaylward @bitbraindev
Same as now: who is interested, who did a sensible internship or rotation, who do you happen to know, who has sensible grades in relevant subjects, who can come up with a project proposal that reads as sensible. Plus the equivalent of a visitor project for juniors: short-term positions of 3 months to a year where they can prove themselves. Actual work in a lab is the best recruitment basis there is.

Frankly, my problem is finding people who want to work in academic scientific research. There aren't enough. And the issue isn't only salaries, although that is a major one. It's also that not everybody is comfortable being wrong all day long, all year long, not knowing exactly how to do something, not knowing what the outcome may be. Perhaps this can be learned, but at the PhD/postdoc level it may be too late.

#academia

in reply to Albert Cardona

@albertcardona @druedin @kristine_willis @ArneBab @bitbraindev if resources are severely limited, it will always lead to difficult decisions based on questionable metrics. But this is a false choice. We live in a world where a psychopath tech bro just got a trillion-dollar pay package. Science thrives when we have money to fund people who don't excel at traditional metrics, and there is plenty of money to make that happen, if we as a society choose to.
