In which we stand on the shoulders of midgets

The other day I was part of a rapt audience, listening to the seminar of Dr Big Shot. As Big Shots go, this man was immensely likeable: coherent, humorous, persuasive but – above all – modest. The way he introduced his story was particularly effective, following a narrative formula that, when I started thinking about it, is actually very common.

It goes something like this: a researcher is in the dark, having just discovered that a particular Gene X is probably involved in his pet biological phenomenon. But he has no idea where to start to work out how. So the researcher turns to the literature, does some searches and uncovers forgotten papers A, B and C, which collectively point towards a potential link. The link is followed up and – lo and behold – the secret of the new biology is stripped away to reveal a shining Truth. Cue fame, glory and a high-profile article in Nature.

But what of obscure little papers A, B and C? Let’s look a little bit more closely at them. Are they top-tier papers as well? Actually, no. One’s a classic cloning/sequencing paper from the early Nineties published in Nucleic Acids Research (impact factor 6.88). One’s a small bit of biochemistry on a very bitty, incremental problem that appeared a few years ago in FEBS Letters (impact factor 3.26). And the third contains comparative microarray data deposited with little fuss in BMC Genomics (impact factor 3.76) last year.

I think it’s safe to say our likeable Big Shot would be the first to admit that without Papers A, B and C, his research would not have proceeded as smoothly – and in fact, without Paper B in particular, he might never have made the connection at all. And this is precisely why I worry about initiatives that advocate pyramid schemes to foster the Elite at the expense of the second-tier researchers that underpin them. Of course one could argue that there are thirty-plus years of obscure papers in PubMed for the Elite to text-mine, so failure to generate a future supply of solid but relatively trivial results will make no difference. But that supply won’t last forever, especially as techniques advance and the need for new edifices of knowledge will start eating into the supply of old bricks. When we drive all but the top-tier out of research, who will provide the necessary foundations? The more we uncover about the natural world, the more complex it seems to become. These days, great papers open up more questions than they answer, and the job seems to expand infinitely.

This is why I put my dignity on the line and suggested in public that the scientific profession needs metrics that reward effort as well as luck. I presume similar passions inspired Stephen Curry and his kids to make a wonderful film explaining that not all scientists are, or have to be, geniuses to contribute. I think we should think carefully before ignoring the efforts of – or worse, doing away with altogether – the host of valuable foot soldiers in the scientific profession.

About Jennifer Rohn

Scientist, novelist, rock chick
This entry was posted in Uncategorized.

48 Responses to In which we stand on the shoulders of midgets

  1. Richard P. Grant says:

    I think we should think carefully before ignoring the efforts of – or worse, doing away with altogether – the host of valuable foot soldiers in the scientific profession.
    Say it loud. We’re all PBI, and we get shot for deserting.

  2. Jennifer Rohn says:

    The problem with saying it loud is that you mark yourself as suboptimal. Hence the sole comment on the Nature metrics opinion piece – I know it was meant to be a joke, but it still stung: “After searching for the citation index of the authors, it appeared to me that many of them do not rank well. Therefore, I did not want to spend time to read this article, and I prefer saving time for true value activities, like reading the Da Vinci Code or other best sellers.”

  3. Richard P. Grant says:

    Perhaps those of us who have already proved ourselves to be sub-optimal should say it, then.
    Oh, and citation indices. GRR

  4. Stephen Curry says:

    Excellent post Jenny and reminds me that I must catch up with your Nature piece (been a bit hectic of late…). Props to you for sticking your head above the parapet.
    And thanks for the plug!
    More cheese anyone?

  5. Jennifer Rohn says:

    Stephen, if you need any non-genius scientists to appear in your follow-up films, it goes without saying that I’m the first in the queue. Plus I adore cheese.

  6. Austin Elliott says:

    Well said, Jenny.
    It is also worth noting that the hard-grafting postdoc in Dr Big Shot’s lab may well have been trained by one of the PBI.
    All the more depressing to see Wellcome “doing a Nurse” and switching their funding system to what is in effect (or at least seems to me to be) an attempt to identify and fund only the elite.

  7. Jennifer Rohn says:

    Yes, and I had high hopes that with my modest Career Reentry fellowship from Wellcome, I might be able to capitalize on that in future funding opportunities from them. My hopes of that have dwindled in light of their new scheme, I must admit, though I will not give up until forced out.

  8. Stephen Curry says:

    @Jenny – You were already on the list. 😉

  9. Jennifer Rohn says:

    Actually, Austin, on reflection your comment has made me realize a flaw in my own argument. The Big Shot’s lab is of course full of foot soldiers too. For every apprentice that produces a Nature paper, there will probably be 5 or 6 that spin off the less-celebrated sideshoots of which we’re speaking. So perhaps only funding the Elite will still give enough low-level support for the Nobel-caliber endeavors. It may be depressing but that doesn’t mean it’s not true.

  10. Ian Brooks says:

    That’s a good post, Jenny, and a philosophy to which I subscribe. Furthermore, there is some great software available which makes life easier for Prof. BigShot & his minions (and the rest of us plebs) to scan the literature for possible functional genetic links.
    I am referring specifically to a program from Computable Genomix, a company founded recently by a colleague of mine (and no, I don’t get commission – this is a free plug for a great piece of software). The program, GeneIndexer…

    utilizes artificial intelligence and computational linguistic techniques to identify conceptual gene relationships from titles and abstracts in MEDLINE citations automatically.

    Using the scientific literature, GeneIndexer represents genes as vectors in space and deduces gene-to-gene and gene-to-keyword relationships. The method extracts features from the scientific literature that are not easily made or even possible by humans. Therefore, GeneIndexer allows researchers to mine the biomedical literature for large gene datasets rapidly and to make mechanistic or functional predictions that were not previously possible.

    I wish I was still active in genetics research because this is a lot of fun to play with, it’s simple to use and it actually bloody works. You can spend months trawling the literature and pull out a few associations. Well, GeneIndexer does it for you much faster and often finds genes you might overlook, then presents them clustered in trees and ranked in tables. Really rather jolly.
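    (For the terminally curious: the vector idea in that blurb can be sketched in a few lines of Python. This is emphatically not GeneIndexer’s actual method – per the description it uses latent-semantic techniques over all of MEDLINE – just a toy bag-of-words version with invented mini-abstracts, to show why literature vectors surface gene-to-gene links.)

```python
from collections import Counter
from math import sqrt

# Toy corpus: each gene mapped to the words of abstracts that mention it.
# Purely invented data for illustration.
ABSTRACTS = {
    "TP53": "tumour suppressor apoptosis dna damage cell cycle arrest",
    "MDM2": "ubiquitin ligase degradation apoptosis cell cycle",
    "ACTB": "cytoskeleton actin filament cell motility structure",
}

def vector(text):
    """Represent a piece of literature as a word-count vector."""
    return Counter(text.split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_neighbours(gene):
    """Rank every other gene by how similar its literature profile is."""
    target = vector(ABSTRACTS[gene])
    scores = {g: cosine(target, vector(t)) for g, t in ABSTRACTS.items() if g != gene}
    return sorted(scores, key=scores.get, reverse=True)
```

    With these toy data, TP53’s nearest neighbour comes out as MDM2, because their literature shares the apoptosis/cell-cycle vocabulary that the housekeeping gene lacks – which is the whole trick, scaled down.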

  11. Jennifer Rohn says:

    Yes, we’re about to start dabbling with some new text-mining algorithms which some collaborators are developing. This technology has been around for a few years now but I don’t think we’ve scratched the surface of what it can do.
    For just general, workaday associations – sort of the Wikipedia of text mining – I really love iHop. It’s very good with synonyms and gets you started a lot better than a basic PubMed search.

  12. Grant Jacobs says:

    Career Reentry fellowship from Wellcome
    UK residents only? 🙂
    I tried to suggest to one senior person at the very top of the grant agencies here that a re-entry fellowship scheme wouldn’t go amiss, but the reply has been silence.
    I could say far too much about the main topic, but one thought: I get grumpy that “the system” too often forgets to look at the people directly and remember that different people make different types of contributions. (Or is that two thoughts?)

  13. Jennifer Rohn says:

    I think it’s for UK labs only, but am not sure. The UK is very good at this sort of thing: there are at least four schemes that I know of to encourage people to come back, though I believe one or two are only for women. I am feeling a bit cynical today, so I am wondering how many of these re-entry fellows actually go on to make it – defined as getting a stable position at the other end. It’s hard enough to compete if you’ve ridden the conveyor belt straight through – but with the gap in publications I can imagine that the CV just won’t look as good.

  14. Nicolas Fanget says:

    Thanks for flagging up iHop (I hope?) Jenny, very interesting tool. A bit light on the bacterial side but not surprising (can I haz actinobacteria? plz?). Through #mfenner I saw this blogpost about semantic searching and protein association prediction, very good.

  15. Jennifer Rohn says:

    My only gripe with iHop – which won’t bother you at all – is that it somehow made the decision to assign human gene symbols using a different vocabulary to that of the HUGO Gene Nomenclature Committee, which is meant to be the gold standard for human gene naming. I’m not sure whose system it is – NCBI’s, perhaps – but it’s annoying that it’s not a one-stop shop.
    And speaking of little people needing grants, I got some delightful spam/advert in my Inbox this morning. I don’t want to flatter them with any product placement by mentioning names, but basically if you pay $197 you can participate in their phone seminar about how to milk private foundations for research grant money.

  16. Richard P. Grant says:

    one-stop shop iHop? Not.

  17. Nicolas Fanget says:

    Actually it does bother me now that you mention it, because it is Nature policy to use official HUGO symbols for genes and proteins whenever possible. Thankfully the HUGO website search works very well!

  18. Jennifer Rohn says:

    Glad to hear you’re not bacteriocentric. But to judge by the literature, nobody ever bothers to use the official names for anything – which is really annoying. Perhaps it’s an insidious plot by all the text-mining people who want us to become reliant on their synonym translation services…

  19. Nicolas Fanget says:

    It is quite annoying, but unfortunately old habits die hard. For bacteria/yeast there is the IJSEM for official names, and the ICTV for viruses, but that doesn’t stop people making up their own stuff!
    Working at SGM first I was introduced to virus/yeast, and now at Nature it gets crazy with things like molar nomenclature for fossils, histone modifications, genes and proteins but with different styles according to organism, all those new little RNAs that get discovered every other week, crystallography, and that’s just the biological sciences!

  20. Björn Brembs says:

    Nice post, Jennifer! I always say: scientific discoveries are like orgasms: there are no bad ones.

  21. Jennifer Rohn says:

    I think all journals should force authors to use the right names – at least when first mentioned in the text. There have been times when I honestly am not sure what gene the paper is referring to – certain synonyms are used more than once and it’s ludicrous when the reader can’t be sure which it is. Text-mining can’t help you there.

  22. Jennifer Rohn says:

    Hi Björn – I like your maxim. Which reminds me, Bora pointed out this related article in the Chronicle of Higher Education, whose title says it all: “We Must Stop the Avalanche of Low-Quality Research”. As if incremental data and repetition/bolstering were bad things for science, not good.

  23. Heather Etchevers says:

    Right on. If you feel like building up a righteous head of steam – I have no time for it – check out this link, and the box below it comparing the “traditional model” and the 23andMe initiative. I saw red.
    The implication is that the latter is better, because “faster”. But really, look at points 4 and 5 on the right. How dare they suggest that unpublished work presented at a conference that builds not on a small impact publication, but a painfully constructed one involving more than 20 minutes work, is superior?
    But I entirely agree with your points above. How many Nature Genetics papers were published with a lucky “candidate gene” approach using candidates that were so precisely because of low-impact expression studies done on obscure genes about which no one cared at the time? (Sorry, but this is a sore point recently all around my research unit, which is accustomed to publishing in that journal and getting well cited. They are slowly turning to doing “functional” studies, but they’re geneticists. It’s taking time, and other labs are more efficient at that, whereas they are good at sequencing intelligently. Most recently, a wonderful gene discovery was rejected – along with studies of its physiopathology – because it was in the expected signal transduction cascade! WTF?)

  24. Jennifer Rohn says:

    Thanks for that, Heather. I don’t even feel there is any comparison – could 23andme have done anything without all of the traditional research underpinning the disease-association of all the genes in their screen? Could it even exist as a company without it? I think people are so used to “the answer being there somewhere in PubMed” that they forget that real, unsung researchers – many of whom will never end up getting tenured positions – generated that boring old data. And they forget that it needs to be continually refreshed to be relevant.

  25. Karen James says:

    Hear hear, Jenny. The tone of that awful comment on the Nature metrics article is, sadly, very familiar. Confession: I’m afraid to click on the Wired link.

  26. Jennifer Rohn says:

    I have to say that the Wired piece has some interesting elements – it’s intriguing to hear from someone who has decided to try to reduce the risk from a genetic legacy that putatively predisposes him to Parkinson’s. It raises a lot of questions, and is a good read despite the annoying bits.
    What you should be afraid to click on is the Chronicle of Higher Education piece I mentioned above. If the authors had their way, all of the minor papers would be obliterated from the face of the earth – because “it’s too hard to find stuff” and “it’s too much work to review”. My God – complaining that we have too much knowledge is a sad thing; these authors should be pushing for the development of better filters and text-mining instead of bemoaning the fact that there is so much data out there. They argue it must be “poor quality” because it is not well-cited – which raises the question, does lack of citations really mean the data aren’t good? It might be a very sharp needle in that huge haystack, one that just hasn’t been found yet to prick someone into action. And I am always relieved when small ‘me too’ papers confirm my results – it just makes the whole picture that much more robust. Finally – don’t even get me started on negative data. How do the big shots know what blind alleys to avoid if people can’t get that crucial data out there? Think of the time and money we could save if such papers were encouraged and valued.

  27. Grant Jacobs says:

    I’ve some thoughts on the main topic, but I promised myself to put four hours programming in before the start of the NZ – Paraguay match. (It’s a tall order, but if NZ wins, we make it through to the top-16 knock-out phase.) Football, of course. (C’mon, what else?)
    I’ll try to put together a post over the weekend. Too many thoughts for a quick comment…
    But a quick note about the gene name thing. On one project I was doing a few years back, comparing data from many genomes, the main problem proved to be sorting out the gene names, not the sequence analysis. I ended up coding my own gene name-mapping system to try to unscramble the mess. It can be a real problem for data pipelines, etc., if things aren’t reliably standardised.
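    A sketch of what such a mapping pass looks like, in Python – the synonym table here is invented for illustration (a real pipeline would load something like the HGNC download files), but the shape of the problem is as described: most aliases map cleanly, and the ambiguous ones have to be flagged for a human.

```python
# Minimal sketch of a gene-name mapping pass. The alias table is
# invented for illustration; a real system would load a curated
# resource rather than hard-code it.
SYNONYMS = {
    "P53": ["TP53"],
    "TRP53": ["TP53"],
    "HER2": ["ERBB2"],
    "NEU": ["ERBB2"],
    "CAP": ["BRD4", "LNPEP", "SORBS1"],  # one alias, several genes: ambiguous
}

def map_symbol(name):
    """Map a literature name to an official symbol, flagging ambiguity.

    Returns (symbol, None) for a clean match, (None, candidates) when
    the alias points at several genes, and (name, None) when the name
    is unknown and is passed through unchanged.
    """
    hits = SYNONYMS.get(name.upper(), [])
    if len(hits) == 1:
        return hits[0], None
    if len(hits) > 1:
        return None, sorted(hits)  # caller must disambiguate by hand
    return name, None              # already official, or simply unknown
```

    The ambiguous case is exactly the one complained about upthread: when a paper only ever uses an alias like the “CAP” entry here, no lookup table can save you, and you end up doing domain searches by hand.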

  28. Jennifer Rohn says:

    It’s a real problem if the paper you are referring to ONLY mentions the synonym, and this maps to multiple genes, so you really and truly have no idea what gene is being discussed in the paper. I’ve stooped to doing domain searches of the different candidates, trying to work out which gene they were probably talking about. (E.g. they think it has a role in transcription – does it have TF domains? Is it predicted to be in the nucleus?) Absolutely ludicrous.

  29. Mike Fowler says:

    Jenny, a nice post, which immediately got me thinking about ecosystem dynamics.
    The traditional (simplified) view of an ecosystem is essentially a ‘food pyramid’, with lots of biomass at the bottom, and less as we move up the trophic levels…
    in fact, I won’t clog up your comments, but will post about it here instead. Thanks for the inspiration.
    And I’ll shamelessly plug the Journal of Negative Results – Ecology & Evolutionary Biology to help soothe your frenzied nerves about such details.

  30. Jennifer Rohn says:

    Thanks, Mike. I plugged the Journal of Negative Results in Biomedicine in my Nature metrics piece – glad to know there are more journals like that out there. I suppose the clinical trials people should have one – I can’t think of a more suitable negative results journal topic than that. But will anyone give you kudos for publishing in journals like that? I seriously doubt it.

  31. Mike Fowler says:

    Well, I don’t think any of us (the Editorial board) run JNR-EEB for the kudos, and I wouldn’t particularly say I do science for the kudos, so it doesn’t bother me if it doesn’t come.
    What’s more important personally, is just knowing that the good research that is done is actually getting out there, and not just sitting in someone’s bottom drawer. If it helps someone understand the natural world better – even the authors, then job done.

  32. Jennifer Rohn says:

    Hmmmm…by ‘kudos’ I meant the sort that could win you a grant or job, thereby ensuring that you could actually stay in science doing all the foot-soldiery type stuff. The kind of praise that actually matters.

  33. Mike Fowler says:

    Heh 🙂 I suppose we’re talking about a world where all that matters is that people ‘praise’ your work, regardless of its quality.
    By ‘praise’, I guess we both mean ‘cite’, as this is what people will probably look at when doing the application review (grant or job), before actually reading and thinking about your contribution themselves. If we’re lucky and they see past the journal IF…

  34. Jennifer Rohn says:

    Ah, but it’s the word ‘quality’ we’re really quibbling over. Even very small, ignored papers can be of high quality – just because they’re not cited and are not in top-tier journals doesn’t make them crap. Nobody should get credit for poorly-executed work; but we shouldn’t use top-tier as a surrogate for its opposite.

  35. Mike Fowler says:

    I completely agree, Jenny. My last point was meant to be that it’s much better to actually read someone’s work, rather than just look at the IF/number of citations, when trying to establish its quality.
    However, if you want to reward/recruit someone that does well in the current system, then (unfortunately) it does pay to select those who publish in high IF journals or receive high citation rates.
    In principle, I think citation rate should correlate reasonably well with quality within a field, but of course, there are cases where this isn’t always reflected, and ways that it can be manipulated.
    Those who advertise themselves well, both within a paper and outside of it, probably receive more citations. But this is probably a skill that is desirable – we don’t want to fund someone who does high quality research but is content for it to fall through the cracks and disappear following publication.

  36. Austin Elliott says:

    Completely agree with Mike (and Jenny) that journal impact factor comparisons between fields are utter tripe, and in no way an adequate surrogate variable for “quality of the research”.
    Mike is talking particularly about eco stuff, I guess, but the place where I have come across this is in comparative (i.e. non-mammalian) physiology. When I took a detour into comparative physiology a couple of years ago (for reasons I won’t bore you with) I was amazed by the quality of the papers I was reading in J Comparative Physiology and the like. In techniques, rigour, aims, structure, writing etc the papers were wholly indistinguishable from the stuff I usually read in the journals of mainstream (i.e. mammalian) physiology. The only difference was that, being on (e.g.) crayfish rather than mouse calcium channels, they were in journals with impact factors between 1.0 and 2.5, rather than between 3.0 and 5.5.
    Anyway, a salutary lesson, at least to a lifelong mammalian physiologist like me.

  37. Jennifer Rohn says:

    Yeah, I had the same thing when I was a virologist – the place to be seen was J Virol (impact about 6) and the only way to get into Nature was via HIV or whatever scary virus happened to be trending at that time – or if your retrovirus had transduced a particularly juicy new oncogene, say.

  38. Brian Derby says:

    It’s not just where you publish but where you submit that is important. A colleague tells a tale of a paper rejected from an American Chemical Society journal. It was resubmitted (practically unchanged) but with the corresponding author now one based in the USA, and hey presto, it was accepted.

  39. Austin Elliott says:

    That recalls an old joke I heard, from someone similarly having troubles with a US journal, about NIH standing for “Not Invented Here”.
    Of course, cynics would say that one would likely be more polite (or at least “considered”) about the paper of someone who was likely to be reviewing your next grant – something deemed to be more likely “in country”.
    The other context where you hear this gripe is American authors only citing papers published in the US journals – see the last comment for a possible reason.

  40. Jennifer Rohn says:

    Ach – is that really true? As an editor, I never thought about what country the various papers were coming from. I couldn’t help noticing whether the corresponding author was a Big Shot, but this wouldn’t influence my decisions.
    Mind you, when I told a few profs in my PhD department in Seattle that I was going to the UK for a postdoc, they basically said “that one won’t count and you’ll have to do another one in the US”. The cheek! Especially as the person I was going to work for was a hundred times more well-known and respected than any of these profs!

  41. Austin Elliott says:

    It wasn’t journal editors so much, rather referees, that one hears the “Not Invented Here” joke about, Jenny.
    It is not always all that sinister, more just human nature plus the pressures of competition.
    For instance: journals routinely tell authors to limit themselves to 30-40 references. So if four papers from three groups have reported basic “observation X”, which I want to mention, then that is 10% of all the references in the entire paper if I give all four references. Hence the tendency to cite reviews, rather than the original science – for that I do think journals carry a lot of the responsibility.
    Alternatively, I could cite some of the original refs. Now, if two of the four are by the bloke I know reviews my NIH grants (Prof A), and another is by someone else who probably does (Dr B), while the fourth is by a European that I am aware doesn’t do much work in this field any more and is thus highly unlikely to ever review my funding proposals (Dr C)…
    …well, the odds are short, I would suggest, that I will cite Prof A and Dr B, but not Dr C.
    I honestly think that some people feel it is intrinsically better to cite a review simply to avoid this kind of jockeying. Unfortunately, the same logic may apply to whose work got cited in the review.

  42. Jennifer Rohn says:

    No beef about citing reviews – there’s no other way to do it. In my experience if you piss off a referee by not citing his paper, he’ll ask you to “mention the excellent paper of Dr X” as a requirement for publication – where of course X has to be the referee.

  43. Brian Derby says:

    Jennifer – Dr. X was probably the student/post-doc of big/not big shot

  44. Jennifer Rohn says:

    Ha! True. Postdoc referees are the only ones meticulous enough to even notice things like that in a manuscript.

  45. Mike Fowler says:

    I recall seeing a presentation in Helsinki a couple of years ago, about citation practices in Europe and the USA/North America (possibly only in Ecology & Evolutionary Biology), carried out by a PhD student in Turku.
    Clear evidence that everyone cites the NA researchers more than the Europeans in the literature, but NA researchers cite themselves even more than Europeans do. Wish I could find the person and the research… If anyone else can (Turun Yliopisto website), please let me know!

  46. Mike Fowler says:

    No, wait, I think it was Roosa Leimu. Lots of interesting research on citation and referee practice (some of which Bob O’H has taken further), but I can’t find anything published that looks exactly like the presentation yet.

  47. Jennifer Rohn says:

    Very interesting, though depressing!

  48. Jean-Luc Lebrun says:

    A sobering thought. Who remembers the giants Sir Isaac Newton mentioned…
    Another thought that could help junior writers: Newton did not write “If I have seen further, it is because they are all as blind as bats.”
    Being French, and not American, I would like to quote Pascal, a contemporary of Newton (I’ll quote the English version though): (Thought 43) “Certain authors, speaking of their works, say: “My book,” “My commentary,” “My story,” etc. They are just like middle-class people who have a house of their own on main street and never miss an opportunity to mention it. It would be better for these authors to say: “Our book,” “Our commentary,” “Our story,” etc., given that frequently there is more of other people’s in them than of their own.”

Comments are closed.