On Thursday, Eric Michael Johnson tweeted a request for help:

Pop Quiz: You have 3 nominal categories of ordinal data collected at 3 separate time intervals. What statistical test is most appropriate?

Once I had got past my usual bee in the bonnet about p-values, I responded with The Correct Answer, but fully explaining what it meant takes more than 140 characters. Josh Rosenau had also responded, suggesting a repeated measures ANOVA, which is either wrong or sort-of right, depending on how you look at it.

Eric’s problem is with the sort of data we are often asked to provide in surveys: Do you think that this blog is (a) crap, (b) poor, (c) OK, (d) good, (e) brilliant. There is an obvious order in the responses, but there’s no natural sense of a distance between them. What to do? Well, first we’ll put a name on this sort of data: it’s called ordinal. And the method we’ll use is called ordinal regression.

It’s perhaps easiest to work our way up to the problem. We can start with the simplest ordinal data, where there are two categories: Yes or No (for example). This sort of data is straightforward to analyse: it’s a logistic regression. We model the probability of a success, P(Yes), by transforming it onto a convenient scale, e.g. the logit scale (log(*p*/(1-*p*))). The convenience comes because we can then use regression models:
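As a minimal sketch of this in R (the covariate `x` and the simulated responses here are invented for illustration):

```r
# The logit transform: maps probabilities onto the whole real line
logit <- function(p) log(p / (1 - p))

set.seed(42)
x <- rnorm(100)
# Simulate Yes/No data whose log-odds depend linearly on x
p <- plogis(0.5 + 1.2 * x)   # plogis() is the inverse logit
y <- rbinom(100, size = 1, prob = p)

# Fit the logistic regression: logit(P(Yes)) = beta0 + beta1 * x
fit <- glm(y ~ x, family = binomial(link = "logit"))
coef(fit)   # estimates of beta0 and beta1
```

The point is just that after the logit transform, the model on the right-hand side is an ordinary linear predictor.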

Now, there is another way of looking at this, the idea of a threshold model. We could imagine drawing a random number from a logistic distribution. If the number is above a threshold, the datum is a Yes, if it’s less than the threshold, it’s a No.

We can set the threshold at zero and calculate the mean (or median: it’s the same thing) with the equation *β*_{0} + *β*_{1} *X _{i}*, and we end up with the same logistic regression. Equivalently, we can keep the mean of the distribution at zero, and move the threshold up and down.
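The threshold view can be simulated directly; this sketch (with made-up parameter values) generates exactly the same Yes/No data as the logistic regression above, since P(latent > 0) = plogis(*β*_{0} + *β*_{1} *X _{i}*):

```r
set.seed(1)
beta0 <- 0.5; beta1 <- 1.2
x <- rnorm(1000)

# Draw a latent value from a logistic distribution whose mean is
# shifted by the linear predictor; the datum is Yes if it exceeds
# the threshold at zero
latent <- rlogis(1000, location = beta0 + beta1 * x)
y <- ifelse(latent > 0, "Yes", "No")

# The probability of Yes is plogis(beta0 + beta1 * x), as in the
# logistic regression
p_yes <- plogis(beta0 + beta1 * x)
```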

So, what if we have more than 2 categories? We can use the same threshold approach:

So, now we have two thresholds for two logistic regressions: one for A, and one for “A or B”. The effect of any covariate is to move the distribution up and down:
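A sketch of the two-threshold version, with three ordered categories A < B < C (the threshold values and covariate effect here are arbitrary choices for illustration):

```r
set.seed(2)
t1 <- -1; t2 <- 1          # two thresholds, t1 < t2
x <- rnorm(1000)

# The covariate moves the latent logistic distribution up and down;
# the thresholds stay fixed
latent <- rlogis(1000, location = 0.8 * x)
y <- cut(latent, breaks = c(-Inf, t1, t2, Inf),
         labels = c("A", "B", "C"), ordered_result = TRUE)

# The two implied logistic regressions:
# P(A)      = plogis(t1 - 0.8 * x)
# P(A or B) = plogis(t2 - 0.8 * x)
table(y)
```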

Mathematically, this reduces to a set of logistic regressions with different intercepts (*β*_{0}), but the same effects of the other covariates (*β*_{1}), which is convenient for fitting the model.

So far I haven’t described in detail how the mean is changed by the model. The simple regression, *β*_{0} + *β*_{1} *X*, can be extended to all sorts of terms, as long as they are linear (i.e. they involve adding things up, something I even wrote a paper about). This can include regression, ANOVA designs (i.e. factors), and random effects. This is why Josh was right with his suggestion of the repeated measures ANOVA. The response is ordinal, so it needs this sort of model, but the design of the data has repeated measures on the same individuals. So, even though the response looks different, we can still use the same framework to link the effects in the model to the mean of the response.

If anyone wants to do these analyses in practice, then there are a few ways in R. In the MASS package, there is the polr() function. But there is also a package called ordinal, which extends the basic idea in all sorts of ways (including adding random effects, and link functions other than the logit I described above). If you want a more complicated model than these packages can manage, there are a couple of examples in OpenBUGS that can be adapted.
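For example, a proportional-odds fit with polr() might look like this; the data frame and its variables (rating, time, subject) are invented here to match Eric’s design of ordinal ratings at three time points:

```r
library(MASS)

set.seed(3)
d <- data.frame(
  rating  = factor(sample(c("poor", "OK", "good"), 90, replace = TRUE),
                   levels = c("poor", "OK", "good"), ordered = TRUE),
  time    = factor(rep(1:3, each = 30)),
  subject = factor(rep(1:30, times = 3))
)

# Cumulative-logit (proportional odds) model: one set of covariate
# effects, with the thresholds reported as the intercepts
fit <- polr(rating ~ time, data = d, Hess = TRUE)
summary(fit)

# With the ordinal package, clmm() adds a random effect per subject,
# giving the repeated-measures structure:
# library(ordinal)
# fit2 <- clmm(rating ~ time + (1 | subject), data = d)
```

The commented-out clmm() call is the piece that turns this into the repeated measures analysis Josh suggested, with subject as a random effect.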