I am in the process of benchmarking our NCBI Taxonomy package in #JuliaLang (https://github.com/PoisotLab/NCBITaxonomy.jl), and so it's a good time to tell everyone that I dislike benchmarks.

Not because we don't compare well (our worst speedup so far is 5x, our best is of the order of 10⁵x).

But because benchmarks assume that we all want the same thing, and this thing is speed. This is not true.
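For context, the speedups above come from timing comparisons of the kind that BenchmarkTools.jl makes easy. Below is a minimal sketch of such a comparison; `lookup_linear` and `lookup_indexed` are hypothetical placeholders for two ways of resolving a taxon name, not functions from NCBITaxonomy.jl.

```julia
# Minimal speed-comparison sketch with BenchmarkTools.jl.
# `lookup_linear` and `lookup_indexed` are hypothetical placeholders for two
# ways of resolving a taxon name; they are NOT functions from NCBITaxonomy.jl.
using BenchmarkTools

const NAMES = ["Bos taurus", "Canis lupus", "Pan troglodytes"]
const INDEX = Dict(name => i for (i, name) in enumerate(NAMES))

lookup_linear(name) = findfirst(==(name), NAMES)    # O(n) scan over a vector
lookup_indexed(name) = get(INDEX, name, nothing)    # O(1) hash-table lookup

query = "Pan troglodytes"
@btime lookup_linear($query)    # interpolate with $ so timing the global lookup is avoided
@btime lookup_indexed($query)
```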
in reply to Timothée Poisot

Sometimes we want "ease of interaction". Sometimes we want "integration with the rest of our pipeline". Sometimes we want "a very specific feature that no other package implements".

All of these are valid reasons to sacrifice speed. People who think the worth of a piece of code is its speed are confused as to how, and why, people actually use code.
in reply to Timothée Poisot

And I know it's just, like, my opinion, man, but it's an opinion I have refined by handling software papers as a subject editor for various journals over the last 8 years, and training and mentoring and building and and and...

Benchmarks are not that informative, and they're the type of pissing contest we shouldn't encourage.
in reply to Timothée Poisot

Yes, this can be very frustrating when I'm using an approach that benchmarks "poorly" even though my goal is different from that of standard approaches.
