There’s a new kid on the block in our institute: a shiny high-content screening apparatus. I don’t want to engage in any gratuitous product placement, so let’s just refer to it as the Swanky Apparatus. A market leader, this sleek number costs about as much as a house and is the showcase piece of our just-built state-of-the-art screening lab. Not only will its confocal front-end incubate and photograph your billions of cells in stacks of multiwell plates, but its brain will analyze them for you in real time. Understandably charmed, the entire population of our building is now busy cooking up proposals for how best to entice this machine to peek into the darkest corners of our lines of research and dredge up a tantalizing cache of biological signals from a veritable abyss of noise.
I poked my head into the half-unpacked lab recently to admire the Swanky Apparatus, along with the massive liquid handling robot that had been brought in as its live-in valet. The head of the lab – let’s call him the Master – was only too happy to show it off.
Although it’s all very impressive, I must admit to feeling a little depressed at the thought that this machine could probably repeat my entire two-and-a-half-year stint in the lab in about ten seconds, and not even raise a silicon sweat in the process. Knowing how fed up I am with screens at the moment, my boss mercifully suggested I forgo the intensive two-day training course. But my lab mates returned full of wondrous tales about the machine’s prowess – with one odd footnote.
“You can’t actually see what the cells look like,” my benchmate said. “We asked the Master how to access the photos, and he looked at us and said, _why would you want to do that?_”
And therein lies the rub. The Swanky Apparatus’s raison d’être is all about reducing biological complexity to numbers. Its built-in software packages are all tailored to translate pixels into math, comparing the control cells to the experimental samples and working out, numerically, how they differ. And it calculates the so-called zed score – the threshold below which you decide that the numbers don’t rise above background, and that the genes or conditions giving rise to them are therefore not hits. This clashes a little with the raison d’être of our lab, which is interested in why cells and tissues take up their particular shapes in order to perform their functions. Of course some aspects of the appearance of cells can, to a certain extent, be represented numerically: the software can segment individual cells, and count them, and measure their sizes, and tell if they’re dead or alive, and no doubt do a lot of other common tasks very well indeed. But how well could it say that this particular phenotype or texture is a bit funny-looking, subtly but reproducibly? Much less tell you what that means?
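For the record, the arithmetic behind that zed score is no great mystery. Here is a minimal sketch in Python, assuming each well has already been boiled down to a single summary number; the control values, well names and three-standard-deviation cutoff are my own illustrative inventions, not anything the Swanky Apparatus prescribes:

```python
import statistics

def z_scores(sample_values, control_values):
    """Express each sample measurement as standard deviations
    away from the negative-control distribution."""
    mu = statistics.mean(control_values)
    sigma = statistics.stdev(control_values)
    return [(v - mu) / sigma for v in sample_values]

def call_hits(wells, control_values, cutoff=3.0):
    """Keep only the wells whose |z| exceeds the cutoff.
    `wells` maps a well name to its single summary number."""
    names = list(wells)
    scores = z_scores([wells[n] for n in names], control_values)
    return {n: z for n, z in zip(names, scores) if abs(z) > cutoff}

# Hypothetical numbers, purely for illustration.
controls = [1.02, 0.97, 1.05, 0.99, 1.01, 0.96]
plate = {"A1": 1.03, "A2": 1.61, "A3": 0.98, "A4": 0.40}
print(call_hits(plate, controls))  # A2 and A4 come out as hits
```

The unnerving part is that everything downstream of this little function is a judgment call: nudge the cutoff and the hit list quietly reshuffles itself.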
I have to admit that, at this point, I would kiss anything that came along and decided on my zed score for me – even the Swanky Apparatus. I’ve gone through my visual screen exhaustively, and our talented research assistant has gone through a replicate screen even more exhaustively, and we are currently fighting it out trying to agree on a Venn diagram consensus based on what our eyes and brains can see. It’s all quite messy, and a bit subjective, and very, very exhausting, but from this murk of human observations, refined over a long period of time, patterns are slowly starting to crystallize. And to me they feel a lot more believable than a column of numbers.
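Ironically, the consensus step itself is the trivial bit – once the hard, subjective work of deciding by eye what counts as a hit is done, comparing two hit lists is just set arithmetic. A toy illustration, with invented gene names:

```python
# Hypothetical hit lists from the two replicate screens.
my_hits = {"gene_a", "gene_b", "gene_c", "gene_d"}
ra_hits = {"gene_b", "gene_c", "gene_e"}

consensus = my_hits & ra_hits  # the overlap of the Venn diagram
disputed = my_hits ^ ra_hits   # called in one screen but not the other

print(sorted(consensus))  # ['gene_b', 'gene_c']
print(sorted(disputed))   # ['gene_a', 'gene_d', 'gene_e']
```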
The scary thing is, though, that I don’t know if this new upstart robot, given the same task, would give substantially different answers. Even more scarily, if it did, I don’t know how you could tell which one of us was “right”.