Open science deters fraud and facilitates its detection.
When I first learned about open science around 20 years ago (then called "open notebook science"), I never thought about research fraud. Now it is a critically important issue.
Open science practices provide part of the records needed to establish data provenance, a chain-of-custody trail. Here's one of my posts from earlier this year about fraud deterrence: alexholcombe.wordpress.com/202… .
So in my view, open science policy updates need to consider fraud deterrence. Disappointed to see no consideration of the issue in the TOP guidelines update. journals.sagepub.com/doi/10.11…
Linked post: "Scientists can’t even prove their data are real" (Alex Holcombe's blog)
I gave a workshop on preregistration to honours students last Friday and mentioned that preregistration provides evidence that you created your hypothesis in advance of seeing the data. The student…
Albert Cardona
in reply to Alex Holcombe:
Part of the issue is the incentives. Science has to stop evaluating scientists by the number of publications. It never was a good idea, and now even less so.
#academia
ArneBab
in reply to 〽️ɪɢᴜᴇʟ ➡️ GodotFest:
@bitbraindev You are asking "what else should we evaluate (people who spent a decade learning their craft) by?"
There is only a single method that actually works:
Trust other scientists in the same field (only they can actually understand the research) and punish proven fraud by expulsion.
Competition for jobs doesn’t work in science.
Giving publications a secondary objective ("get a job") ruins them for their primary objective (to communicate).
⇒ draketo.de/english/science/qua…
@albertcardona @alexh
Linked post: "counting scientific publications as metric for scientific quality is dumb" (www.draketo.de)
Albert Cardona
in reply to ArneBab:
@ArneBab @bitbraindev
Indeed: publishing's purpose is to communicate progress in scientific research. Nothing else, nothing more.
On the measure and evaluation of science, I continue to think Ross Cagan's vision is best:
mathstodon.xyz/@albertcardona/…
ArneBab
in reply to Albert Cardona:
@albertcardona I think the part "it was pretty much a given if you were doing good work" is the most important part of that.
Don’t constantly stress people about their employment future while they are working for you. That stress steals focus from the work they are doing that actually benefits society.
@bitbraindev @alexh
ArneBab
in reply to Frank Aylward:
@foaylward There's some background from a concept discussed in the book Thinking, Fast and Slow:
Since you often only know years (or decades) later whether some research was important, building intuition can't work, and creating metrics to quantify it is a fool's errand.
But people try anyway. It looks a lot like superstition caused by a need to have certainty.
Either an emotional need, or a cover-my-ass method in case something doesn't work out.
@bitbraindev @albertcardona @alexh
Kristine Willis
in reply to ArneBab:
@ArneBab @foaylward @bitbraindev @albertcardona Does it really take decades to know, though? This feels like one of those things people say, like "protein has to be the genetic material because nucleic acid is too simple" or "bacteria don't have genes".
Where is the systematic evidence (not anecdata) that this is true?
ArneBab
in reply to Kristine Willis:
@kristine_willis If you want to check that, the easiest sanity check could be to investigate when the science that later led to Nobel prizes was first recognized as groundbreaking.
The next step could be to check what those discoveries required to be possible, and how long after publication those requirements were recognized as important.
@foaylward @bitbraindev @albertcardona @alexh
Kristine Willis
in reply to ArneBab:
@ArneBab Could not agree more; this is an obvious test. But we have to define "recognition".
I suspect that the immediate field recognizes breakthroughs much more rapidly than the broader scientific community, and I would hypothesize that the lag between recognition by practitioners of a sub-specialty and recognition by the scientific community in general is what gives the appearance of a long delay.
@foaylward @bitbraindev @albertcardona @alexh
Kristine Willis
in reply to Kristine Willis:
@ArneBab The "sleeping beauty" phenomenon you're describing in comp sci fits a different pattern, where neighboring practitioners don't recognize the immediate utility (and maybe they never do; it's some other community that finds the utility). But this just means, I think, that there can be delays, not that a delay in recognition is obligatory.
@foaylward @bitbraindev @albertcardona @alexh
Kristine Willis
in reply to Kristine Willis:
@ArneBab We may be violently agreeing here. But I seem to hear the "you can never know" line rolled out as a kind of bromide to provide cover for funding work that, as a subject matter expert, I'm pretty sure is a dead letter.
Of course I could be wrong. Sure. Absolutely. The problem is, our resources are finite, and I wonder to what extent this paradigm is undermining progress by contributing to problematic hypercompetition. @foaylward @bitbraindev @albertcardona @alexh
Albert Cardona
in reply to Kristine Willis:
@kristine_willis @ArneBab @foaylward @bitbraindev
A key point here is that in scientific research, competition is counterproductive. No scientist in their right mind would want to compete with anyone. And if a piece of work is so obvious that multiple labs are on it, collaboration beats competition any day. There's no point in being a month faster and scooping someone; even the concept of scooping is absurd: if anything, that would be confirmation, validation, and very valuable. Most, though, would rather work on questions whose answers push the horizon of knowledge.
Resources are indeed finite, so let's stop the competition for papers, for grants, for positions. There is no point in it. Define what size of scientific research sector the country can support and go with that, with properly funded labs.
#academia #ScientificPublishing
ArneBab
in reply to Albert Cardona:
@albertcardona I see two different aspects:
One is that funding is provided by people outside the field, and they want proof of value. But they don’t want to trust the people within the field ("they all know each other, so how can they be objective?"). So they request something impossible.
The second is that abuse of funds absolutely does happen, and bad theories often only die with their generation. So there *is* a need for funding of outsiders.
@kristine_willis @foaylward @bitbraindev @alexh
ArneBab
in reply to ArneBab:
@albertcardona The current solution is to make the most accomplished scientists waste at least a third of their time writing grant applications, just so people they know are doing good work have a chance to continue that work.
And the lack of job security kills a lot of good work, because people are on edge instead of focusing on their passion.
So the current situation is just bad.
The only ones who benefit are administrations that can point to metrics.
@kristine_willis @foaylward @bitbraindev @alexh
ArneBab
in reply to ArneBab:
@albertcardona "We did not choose wrong: here are the numbers to prove that our selection is the only correct one, so if it does not work out, we are not responsible."
@kristine_willis @foaylward @bitbraindev @alexh
Albert Cardona
in reply to Kristine Willis:
@kristine_willis @ArneBab @foaylward @bitbraindev
If academic science is funded by the taxpayer, then it's an item in the state budget. If the public knows, understands and recognises that long-term research leads to basic discoveries and ultimately improvements in quality of life, then science will be funded at a level that the state can afford, with the understanding of a long-term payoff.
Quantifying ROI is hard, but there are examples, like the Human Genome Project (141x ROI: genome.gov/27544383/calculatin… ), the BRAIN Initiative (less quantified: braininitiative.nih.gov/news-e… ), or UK research overall (committees.parliament.uk/writt… ).
The next question is how to slice the pie: given a total amount per annum over a defined set of years, how much should each research group get, which in turn determines how many groups there can be. That's a whole other discussion; the foundation of the HHMI Janelia Research Campus was an experiment on this: is it more effective to house 50 HHMI investigators together, or separately in different university departments? The answer was obvious, which is why the institute was founded. So perhaps the right question is how many state-funded research centres there should be, of what size, and focused on what topics.
The evaluation question (who enters as a junior and who gets renewed every 5 years) is something else, but it has a clear solution: bring in outsiders to evaluate.
#academia
Linked page: "From the BRAIN Initiative Alliance: A look back on the BRAIN Initiative in 2024 (and what’s coming in 2025!)" (braininitiative.nih.gov)
Kristine Willis
in reply to Albert Cardona:
science.org/doi/10.1126/sciadv…
When you make judgements by asking a limited number of experts for their subjective feelings, you get … uneven results.
Albert Cardona
in reply to Kristine Willis:
@kristine_willis @ArneBab @foaylward @bitbraindev
Indeed, the evaluation problem is a tough one. Two points.
1. Outsider perspectives are always needed. Hence I'd value most an evaluation committee composed of 1/3 internal, 1/3 national, and 1/3 international members. The result should be biased towards not squashing potentially great people or projects, at the cost of letting some less good ones continue. The cost of an error in the latter is small, while the cost of an error in the former is gigantic.
2. Since it's impossible to be perfect, I'd use, again, the Ross Cagan proposal of funding levels: go up, go down, and so on, depending on past performance, not future prospects. In other words, no grants: the evaluation is done on past work only.
#academia
Albert Cardona
in reply to Didier Ruedin:
@druedin @kristine_willis @ArneBab @foaylward @bitbraindev
Same as now: who is interested, who did a sensible internship or rotation, who do you happen to know, who has sensible grades in relevant subjects, who can come up with a project proposal that reads sensibly. Plus the equivalent of a visitor project for juniors: short-term positions of 3 months to a year where they can prove themselves. Actual work in a lab is the best recruitment basis there is.
Frankly, my problem is finding people who want to work in academic scientific research. There aren't enough. And the issue isn't entirely salaries, though that is a major one. It's also that not everybody is comfortable being wrong all day long, all year round, not knowing exactly how to do something, not knowing what the outcome may be. Perhaps this can be learned, but at the PhD/postdoc level it may be too late.
#academia
Frank Aylward
in reply to Albert Cardona: