In many B-movies, machines try to take over the world. And in real life, we often joke about losing our lab jobs to them. As a case in point, three of my five years in graduate school were largely consumed by sequencing a few megabases of DNA. After performing the radioactive chain-termination reactions, I’d carefully clean and tape up the big glass plates, prepare a fresh polyacrylamide gel and run the samples through. When the dye ran off, I’d dry down the gel and put it on film. Every day, there was also the previous evening’s film to develop and then – the worst part, as far as I was concerned – staring at this film, with its ladders of tiny horizontal black dash marks, and entering the sequences manually into the computer, running one finger upward as I went so I wouldn’t lose my place. After those three years, the G, C, A and T keys on my computer were visibly faded compared to their neighbors.
Today, the latest production-scale sequencers can analyze millions of base pairs of DNA in less than a day.
Sobering, isn’t it? Viewed in this light, machines become the things that free up our time to do things that are a lot more creative. Back in the ’90s, the only way to understand viral evolution was to roll up my sleeves and sequence strains of the same virus over and over, documenting the mutations that occurred in a population during the course of infection. But imagine if I’d been able to see the lay of the land in a few weeks, and could have spent the next three years probing the question in more meaningful ways. How much more would I have achieved?
Of course, one could argue that every generation has its tedious chore – for every once-monumental task that has become routine, the latest state-of-the-art technique consumes our time in much the same way. I guess I’m living through this right now, as I do high-throughput RNAi screens using only minor automation on five-year-old platforms and process hundreds of thousands of images by eye. But I’ve seen the future (and have unsuccessfully dodged its o’er-enthusiastic sales reps): wardrobe-sized machines with robotic arms that do your entire screen with no human intervention. Indeed, they practically make you a cup of tea and nip down to the shops to pick up your dry-cleaning.
As far as automated image analysis goes, it’s still early days for the features that I’m interested in. But I found last week that the future lurks around that corner too. I can’t go into any detail at the moment, but let’s just say I’m involved in a collaboration with some computer scientists who are keen to try out their image algorithms on something completely different. Enter my dataset. We started simply, choosing one morphological characteristic of interest: the tendency of some of my gene knockdowns to turn fried-egg-shaped cells into vaguely geometric objects. Although the human brain can see this difference instantly, it turns out to be surprisingly challenging for a computer to make the same call. But this new algorithm, after a few rounds of training, managed the task with about a 96% success rate. And when I nipped over to the CS department to see where it had gone wrong, I was mortified to find out that my own eye had misclassified some fried eggs as triangles, and the program was actually 100% correct.
OK, this was embarrassing. But it was also fascinating, because the algorithm wasn’t looking at the same things that I look at. All the pixel intensities had been isolated, plotted on polar coordinates and snipped into patches: strange, random-looking patterns of light and dark that would not be out of place hanging in the Tate Modern. I am quite sure that whatever my visual cortex is doing, it is not seeing the data in this warped form. Nevertheless, the correct answer emerged.
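For the curious, here is a minimal Python sketch of the general idea as I understand it – purely illustrative, and emphatically not my collaborators’ actual algorithm. Every function name and parameter below is my own invention: resample a cell image’s pixel intensities onto polar coordinates, snip the result into patches, and hand simple patch statistics to an off-the-shelf classifier.

```python
# Illustrative sketch only -- not the real algorithm described above.
import numpy as np

def to_polar(image, n_radii=64, n_angles=128):
    """Resample a 2-D intensity image onto an (angle, radius) grid
    centred on the image centre, using nearest-neighbour lookup."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(0, min(cy, cx), n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles)                  # shape (n_angles, n_radii)
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return image[ys, xs]

def patch_features(polar, patch=16):
    """Snip the polar image into patches; keep each patch's mean and spread."""
    n_a, n_r = polar.shape
    feats = []
    for i in range(0, n_a - patch + 1, patch):
        for j in range(0, n_r - patch + 1, patch):
            block = polar[i:i + patch, j:j + patch]
            feats.extend([block.mean(), block.std()])
    return np.array(feats)

# Hypothetical usage with a generic off-the-shelf classifier:
# X = np.vstack([patch_features(to_polar(img)) for img in images])
# from sklearn.linear_model import LogisticRegression
# clf = LogisticRegression().fit(X, labels)   # labels: fried egg vs geometric
```

The point of the sketch is simply that the computer never “sees” a fried egg or a triangle at all – only columns of numbers derived from those warped polar patches – yet a boundary between the two classes can still be learned from them.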
I’m not looking for a new job just yet. But I might think about more creative ways to fill my time.