Counting citations hasn't been a reliable measure of scientific impact for a while, especially on platforms like Google Scholar that compile information from documents indiscriminately. Hyper-authorship, predatory journals, and the like have all contributed to the problem.

This preprint just drives home how important it is to measure scientific impact more carefully, without relying on automated metrics:

Google Scholar is manipulatable

https://arxiv.org/abs/2402.04607

#science #academia #publishing #citations


in reply to Frank Aylward

maybe the question is: why do we need to measure scientific impact?
in reply to El Duvelle

@elduvelle excellent point! In many cases it is not needed or desirable.

But even when self-evaluating, I often ask which discoveries would have the broadest relevance and which lines of inquiry could lead to the most significant developments. That is a much more nuanced discussion of impact than metrics can capture.

in reply to Frank Aylward

@elduvelle And on a more practical note, many scientists are forced to report metrics of impact. For example, I had to report the number of papers I published and the number of citations I received in my faculty annual report.
So if we are going to do that, we should at least be aware of the biases these metrics carry.
in reply to Frank Aylward

sure, but why do you have to report those “scientific impact” measures? What are they even used for? It’s not like researchers get paid more if they have more “impactful” papers (whatever that means; also, that would be completely unethical). So are they used to estimate future “impact”? But how can one predict the other? You might be lucky and get one really surprising and impactful result, but how often is that going to happen in one’s career?

Maybe people should instead measure the things that actually indicate good science: clear protocols, well-designed experiments, reproducibility of the findings, proper sharing of data and code, accurate and unbiased reporting of the existing literature, etc.

in reply to El Duvelle

@elduvelle I believe researchers do get paid more if they have more "impact". That is why there is such an incentive to manipulate these metrics. For example, there was just a story showing that self-citation rates across countries reflect how impact and productivity are measured in each country (https://www.nature.com/articles/d41586-024-00090-z).
I agree with you that there are better ways of doing this, and moving away from over-reliance on these metrics would probably be the first step.
in reply to El Duvelle

@elduvelle yeah, chiming in: in many countries researchers are incentivized or outright pressured to publish in "high impact" or other box-ticking-exercise journals, and they receive bonuses, salary raises, or positions based on what is on their CVs.

That's not to say it's good, but it is the reality that roughly half the world experiences. These conversations from Western perspectives often dismiss that reality as "bad" while lacking appreciation for what drives it.

in reply to MAHanson

@MarkHanson so... what drives it? Why would a university or government reward their researchers for publishing “impactful” science, whatever that means? Why don’t they instead encourage them to publish good science?

@foaylward

in reply to El Duvelle

@elduvelle @MarkHanson there is a general issue of limited resources. If you are a funding agency with 500 applications and only enough funding for 30, you need to establish criteria to decide whom to fund. If a uni has a single open position, it may get 200 applicants, and so on.
in reply to Frank Aylward

@elduvelle @MarkHanson the incentive structure exists everywhere in some form; it just seems a bit more extreme or explicit in some places
in reply to Frank Aylward

@elduvelle @MarkHanson not to rant, but in the US the root problem goes back to declining state/federal funding of science over the last six decades. Unis used to have funding for good programs/people even if they weren't profitable per se. This led to lots of great basic science because there was more freedom to explore interesting avenues of inquiry. But with declining funding there has been a troubling trend toward running unis "like a business" ...
in reply to Frank Aylward

From a young age I have had trouble understanding why humans value gold: it is not edible, and it makes a poor weapon or building material. It is shiny, and humans like shiny things. If we can convince others not to be obsessed with gold, perhaps we will eventually convince ourselves, and the academic system, not to maximise impact; like gold, it is just a monetary agreement.
in reply to Frank Aylward

I wonder if AI is advanced enough in 2024 to exclude self-citations from a citation count on Google Scholar. I'm guessing not yet, since there seems to be no button for excluding self-citations. I guess seriously complex deep learning algorithms will be needed to figure out whether an author is citing themselves 😀 I'm looking forward to this major technological leap in AI.
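(To be fair, the "seriously complex algorithm" is mostly a set intersection over author lists. A rough Python sketch of the idea, with made-up data structures, since Google Scholar exposes no such API and real-world author-name disambiguation is the genuinely hard part:)

def normalize(name: str) -> str:
    # Crude normalization: lowercase, drop periods, collapse whitespace.
    # Real disambiguation (initials, homonyms, ORCID lookup) is much harder.
    return " ".join(name.lower().replace(".", " ").split())

def is_self_citation(cited_authors: list[str], citing_authors: list[str]) -> bool:
    # Flag a citation as a self-citation if any author appears on both papers.
    return bool({normalize(a) for a in cited_authors}
                & {normalize(a) for a in citing_authors})

def non_self_citation_count(paper_authors: list[str],
                            citing_papers: list[list[str]]) -> int:
    # Count citations to a paper after excluding self-citations.
    return sum(not is_self_citation(paper_authors, authors)
               for authors in citing_papers)

# Hypothetical example: two of the three citing papers share an author
# with the cited paper, so only one citation survives the filter.
print(non_self_citation_count(
    ["A. Smith", "B. Jones"],
    [["A. Smith", "C. Lee"], ["D. Kim"], ["B. Jones"]],
))  # -> 1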
in reply to Frank Aylward

Hey I like this! Where do I sign up to buy citations, you ask? Here you go:

https://spubl.com.ua/en

" Intrigued by a citation-
boosting service that we unravelled during our investigation, we contacted the
service while undercover as a fictional author, and managed to purchase 50
citations."
