I am in the process of benchmarking our NCBI Taxonomy package in #JuliaLang (github.com/PoisotLab/NCBITaxon…), and so it's a good time to tell everyone that I dislike benchmarks.
Not because we don't compare well (our worst speedup so far is 5×; our best is on the order of 10⁵×).
But because benchmarks assume that we all want the same thing, and that this thing is speed. This is not true.
Timothée Poisot
Sometimes we want "ease of interaction". Sometimes we want "integration with the rest of our pipeline". Sometimes we want "a very specific feature that no other package implements".
All of these are valid reasons to sacrifice speed. People who think the worth of a piece of code is its speed are confused about how, and why, people actually use code.
Timothée Poisot
And I know it's just, like, my opinion, man, but it's an opinion I have refined by handling software papers as a subject editor for various journals over the last 8 years, and by training and mentoring and building and and and...
Benchmarks are not that informative, and they are the kind of pissing contest we shouldn't encourage.