Okay–for my British colleagues, no worries, this will not be an attack on “the pint” and the value of the metric system (which unfortunately we have not adopted in the US). I will also stay away from the “metrics” of journals and impact factors, as Athene Donald has recently discussed this issue in a thoughtful commentary here. No, I would like to discuss the recent obsession in the scientific world that absolutely everything has to be measured and quantified in a precise manner. This obsession has been termed “metrics”.
In support of this obsession, Marc Kirschner on LabLit has posted a piece about “a cult of engineers trying to weigh the smallest thing possible”.
Now, don’t get me wrong. I completely understand the value of measuring things. I took enough physics and chemistry courses in my time, not to mention that biochemistry is a key area of my training. I do agree that we should strive to measure things as scientists–after all, that’s what scientists do. In fact, as a cell biologist, I am certainly in favor of the current trend to measure when at all possible. It’s just that there are times when this becomes a tad unreasonable. And there are times where “quantifying” can miss the point.
I think that there is no scientific field that has undergone such a radical transformation in “quantification” in recent years as that of cell biology–particularly with regard to fluorescence microscopy. This is a very good thing overall. For many years, cell biologists have been given a hard time by other scientists, because much of the microscopic data obtained by researchers in these fields has been considered “qualitative” rather than “quantitative”. Further complicating life for cell biologists these days is the problem that years of training and learning to interpret microscopic data have been ‘overwhelmed’ by easy access to microscopy facilities in many institutions; some of these facilities are staffed by excellent technicians, but frequently ones who have little or no training in cell biology. This often leaves the data wide open to misinterpretation, or in worse cases, allows researchers without the requisite training to “request images showing a desired tendency”.
As one of the dying breed of ‘basic’ cell biologists, who works on fundamental science questions that are not necessarily ‘translational’ or ‘disease-oriented’ (and therefore frequently open to attack by those who believe basic science is not worthwhile), I often find people coming to me for validation of their microscopy studies. I am often horrified at seeing the so-called “co-localizations” that are proudly displayed for me.
As a simple primer for those of you not familiar with this term: co-localization is a microscopic analysis used to determine whether “protein A” is found within a certain distance of “protein B”. This is often done using primary antibodies to detect protein A and protein B, and a pair of fluorescently-labeled ‘secondary antibodies’ to detect those two primary antibodies. An example of partial co-localization from some work in our lab is shown below: compare the insets on the bottom left side of each image from A-C and D-F; the yellow arrows mark the overlap.
Thus, if A and B are indeed within a certain distance from one another (determined in part by the resolving power of the objective), the signals from the fluorochromes will overlap, and when imaged with a microscope, pixels from one channel will overlap with those of the other.
There are also ‘object-based’ methods of measurement, which are particularly useful for unusual or irregular-shaped structures, such as the ones we have studied below:
Here the overlapping tubules should be fairly obvious even to the uninitiated.
Quantifying co-localization is a tricky business, and there is a wide range of statistical techniques. Some are based on the overlap of only one channel with the other, while others take the overlap of pixels from both channels into consideration. Pearson’s and Manders’ correlation coefficients are probably the most commonly used. An excellent source of information is provided by Bolte and Cordelières here.
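For readers who want to see what these coefficients actually compute, here is a minimal sketch in Python with NumPy (the function names and the zero thresholds are my own illustrative choices, not the API of any particular analysis package–in practice you would use a dedicated tool such as the Coloc plugins in ImageJ):

```python
import numpy as np

def pearson_coloc(ch1, ch2):
    """Pearson correlation between two fluorescence channels, pixel-wise.

    Ranges from -1 (anti-correlated) through 0 (no relation) to 1
    (perfectly correlated intensities).
    """
    a = ch1.astype(float).ravel() - ch1.mean()
    b = ch2.astype(float).ravel() - ch2.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

def manders_coloc(ch1, ch2, thr1=0.0, thr2=0.0):
    """Manders' M1/M2: the fraction of each channel's total intensity that
    falls on pixels where the *other* channel is above its threshold."""
    ch1 = ch1.astype(float)
    ch2 = ch2.astype(float)
    m1 = ch1[ch2 > thr2].sum() / ch1.sum()
    m2 = ch2[ch1 > thr1].sum() / ch2.sum()
    return m1, m2
```

Note that Pearson’s coefficient is sensitive to intensity differences between the channels, while Manders’ coefficients depend heavily on the chosen thresholds–one reason different papers can report very different “co-localization” for similar images.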
After this brief diversion, back to my shock at seeing the images presented to me: a basic tenet is that co-localization can only be observed when both proteins are associated with some cellular structures or membranes. If either of the proteins is in the cytoplasm and freely diffusing through the cell, then the entire exercise is meaningless. I am frequently asked to be impressed by the “yellow merge” of a red and green fluorochrome, when a free-floating cytoplasmic protein will obviously overlap with all of the pixels of the second protein. Does this have any biological relevance? None, because the “overlap” is not real.
Indeed, this has been a major concern for cell biologists over the past decade. While the growing field of cell biology and its journals have become progressively stricter in reviewing and publishing microscopy data, the non-cell biology literature has become suffused with an ever-growing volume of papers that show colorful and pretty pictures that, at best, mean nothing, and in many cases wreak havoc on their respective fields.
But I have digressed considerably. Having “spilled my guts”, I would like to go back to the issue of metrics. It’s not that I have an axe to grind. Well maybe I do–back in 2007, when reviewers requested quantifying the number and mean length of focal adhesions in cells for one of our papers, I was 42 years old and proud of my “near perfect” vision. After counting over 1000 of these structures I found myself with recurrent headaches, an MRI (thankfully all was well) and a nice new pair of multi-focal lenses.
But I do have a couple of important issues to raise regarding “quantification” (also known as “quantitation”).
First–(and not necessarily related to microscopy)–what about “small differences”? This is an issue that frequently comes up in discussion– when one sees a difference of, let’s say, 8-10% in a biological or biochemical test. Does this mean anything? I often get into arguments with those who will say that unless one sees a 50% difference, in biological terms this is meaningless. What types of effects am I referring to? Measuring the rates of receptor internalization, recycling, cell death, increased protein expression or phosphorylation–whatever. Does a 9% difference mean anything? Is it significant?
I argue that as long as we can measure it and there is a statistically significant difference (let’s just say p<0.05 for simplicity) that is consistent and repeatable–then it is worthwhile studying and understanding. My argument is that the degree of difference that we measure is often a reflection of the sensitivity and/or linearity of the assay system, and that if sensitivity were altered, a 9% difference could become a 200% difference–and that any statistical difference measured therefore has significance. This argument goes on and on, and I would be grateful for input on this issue.
Second–can all imaging ultimately be quantified? My own answer here (and I would love feedback from cell biologists and non-cell biologists alike) is that there are still things that the eye (or microscope) can see, yet we cannot formulate a way to “measure differences”. For example, while computerized techniques can now measure thousands of pixels (and their degree of intensity) at the click of a button (for example using the macros of ImageJ), many things need to be measured manually.
For instance, if a certain protein is depleted from cells, and then a mutant form is introduced in its place, researchers may want to know how that affects the level (intensity) of a different protein, for example protein X. In such a case, the researcher needs to manually count cells that contain or don’t contain the mutant protein, and in each case do measurements to determine the level of protein X.
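As a rough sketch of that workflow in code form (this assumes you already have a segmented label image assigning each pixel to a cell; the function names are mine, and a real pipeline would typically do the segmentation in ImageJ or with scikit-image rather than by hand):

```python
import numpy as np

def per_cell_mean_intensity(labels, intensity):
    """Mean intensity within each labeled cell (label 0 = background)."""
    return {int(c): float(intensity[labels == c].mean())
            for c in np.unique(labels) if c != 0}

def split_by_marker(labels, marker, protein_x, marker_thr):
    """Group per-cell protein-X intensities by whether each cell expresses
    the (hypothetical) mutant-protein marker above `marker_thr`."""
    x = per_cell_mean_intensity(labels, protein_x)
    m = per_cell_mean_intensity(labels, marker)
    positive = [x[c] for c in x if m[c] > marker_thr]
    negative = [x[c] for c in x if m[c] <= marker_thr]
    return positive, negative
```

The two lists can then be compared statistically–which is precisely the kind of cell-by-cell bookkeeping that still has to be set up, and sanity-checked, by a human.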
But what happens when the ‘phenotype’ of the cell is less–shall we say–measurable? For example, in the same experiment outlined above, the researcher wants to know if the localization of protein X is altered (not if its expression level has changed). Can this be measured? The answer is–perhaps. If it moves to a known cellular compartment, then one could measure increased co-localization with a known member of that known compartment. We have used this method, but it is not always possible to do so.
But what if the change is more subtle? Or if it is hard to find what to actually measure? Is this data rendered meaningless? Take the following real-life example from a manuscript currently in preparation–and this particular image set, which I believe clearly draws a distinction, cannot really be ‘quantified’ (please correct me if I am wrong!):
In this figure, please compare the actin microfilaments seen in Mock-treated cells (A) and the enlarged inset to the right with the treated cells below in B and the inset on the right. Yes, we could subjectively count the number of cells “with the spiky-like punk filaments” compared to those without–but isn’t that rather useless? From my own cellf perspective (sorry, I’m addicted to awful puns), I rest my case–and my microscope.
What about doing co-localizations in tissue sections?
Since I started working with tissue sections, I have realized that co-localization in tissue sections is a different game altogether. The rules that apply to imaging cells on plastic don’t seem to apply to tissues, and I’ve had a hard time deciding which confocal images to believe and which not to.
For most tissue sections, the resolving power of low-power objectives, combined with issues concerning the wavelengths of the dyes used, renders “co-localization” fairly irrelevant. You will end up with a resolving power (D) that is greater than the actual distances between proteins, and thus be unable to conclude anything. So expression levels and patterns are really the relevant information to be derived.