Reading students’ responses to one’s teaching is all too often painful. Even for a course that has gone reasonably well you will probably get as many responses panning you for making it too simple as complaining it was impossibly hard. Indeed, given a bunch of responses like that you can probably assume you got it about right, since it is likely to be impossible to please everyone regardless of their backgrounds. If you have made any slip-ups in the handouts, even ones as trivial as a missing bracket or subscript which were instantly corrected in person in the lectures, you are still liable to be stung by the criticism you receive, possibly even by invective. And then there are the casual insults. Some which have come my way may be more due to my gender than my teaching style. Two stand out as being particularly irrelevant, almost funny but not quite, commenting on my dress style: ‘where does she get her clothes, the Oxfam shop?’ and ‘she looks like a women’s lib officer’. If someone can let me know what such a person wears then I’ll know quite how insulted to be, though I’m sure it wasn’t meant as a compliment.
Therein lies a key problem with using student questionnaires as a measure of the quality of teaching. Studies have shown (see e.g. Basow and Silberg for university lecturers and Potvin et al regarding high school students’ attitudes to male and female science teachers) that students tend to evaluate female lecturers as less competent than equivalent male ones. Likewise, black faculty are scored worse than white faculty. In other words, unconscious bias is rife here whether or not explicit comments about dress sense – or any other attribute – are made. As a minority, individuals will be judged more harshly whatever their actual skill level.
Of course students like lecturers who make them feel comfortable and who make them think a high-class lecture is being given, regardless of whether they are actually learning anything. A classic experiment in 1970 (giving rise to what is known as the Dr Fox effect) had an actor give a lecture full of meaningless jargon but in a very ‘expressive’ way. The students thought he (I assume it was a male actor) was great, despite the fact that the material covered was meaningless junk. (Sound a bit like Alan Sokal’s considerably later spoof during the Science Wars?) Thus a satisfied student audience is no recipe for learning success.
A culturally out-of-step lecturer can receive scathing feedback because the audience don’t feel at ease with an alien style. As an example I’d cite a British lecturer I knew with a well-developed sarcastic wit lightly laced with irony who was loathed by an American audience who did not understand the implicit humour at all; loathed to the extent that he was nearly denied tenure regardless of any underlying excellence. Furthermore, some courses are inherently more interesting to their audience than others, simply because some are there as necessary scaffolds on which to build the more interesting material. Those giving the more interesting courses will almost inevitably fare better than those giving the scaffolding ones, regardless of how charismatic or competent (not the same thing) they are.
And, as the Dr Fox experiment and later equivalent studies demonstrate, whether a lecturer is liked has little to do with what the students actually take in or how they will do in any subsequent assessment. This was neatly summed up in an email I received recently from a relatively early career lecturer who sadly said ‘but I got some horrible horrible student feedback on my teaching, but now I am marking their exams and they have done really well….so although they hate me they actually learned a lot…which makes it more bearable.’ I wouldn’t go so far as to say there is an inverse relationship between student evaluations and exam performance but I’m not sure there is any neat direct correlation either.
Which means that a VC who introduces a policy saying that lecturers whose feedback falls below some specific numerical score will be required to attend an ‘informal capability meeting’ is going to be on shaky ground. This, according to the THE, is what is being contemplated at the University of Surrey. As the sub-header to the article says, lecturers will be ‘at [the] mercy of students‘ if this plan goes ahead. Of course any university leadership should wish to push up the standards of their teaching and it is reasonable to expect lecturers to pull their socks up if they are doing badly. But simply using a crude average of some hostile students’ feedback may not be an optimum strategy.
Now of course feedback can be constructive. I will ever be grateful to the student who (many years ago I hasten to say) mildly pointed out I had written an equation containing the letter ‘c’ on both sides: in one case I was using it to mean some constant, in the other the speed of light. Amazingly I hadn’t noticed what I was doing since both seemed so naturally ‘right’, but the students had every right to complain; it was incompetent. Other feedback may be similarly constructive when something that seems natural to a lecturer is all wrong, particularly if there is a generational change in expectations. Additionally, everyone knows a lecturer who is inaudible beyond the first two rows (who should certainly use a mike), or who is invariably late and disorganised, either of whom may – or may not – get slated in evaluations. But the ones who really need attention are the ones whose lectures are full of undefined jargon, who deliver an incoherent stream-of-consciousness tirade, or who assume far more knowledge than the class has actually been able to acquire. Simply using some raw numerical metric, without digging beneath the surface to analyse what factors are feeding into the score, strikes me as unwise and unlikely actually to lead to better outcomes for the students or the university.
Let’s get rid of the numerical scores. The written comments are far more useful in determining what students like and dislike about a course. The numerical scores are only used by foolish administrators to determine whose teaching is “unacceptable” and by foolish students in choosing what courses to take.
Lovely post, as ever. This one, I have to say, had a rather sad resonance with my own observations over the years. The ‘Student Experience’ is not necessarily enhanced by a naive, utilitarian approach to student feedback. Have you seen David Willetts’ recent comments on ‘upping our game’, by the way? http://www.bbc.co.uk/news/uk-politics-27681424
I taught my first undergraduate course this year and underwent this experience for the first time. Amidst the useful comments (‘hard to understand presentation of X’) and the mutually contradictory ones (‘I liked emphasis on Y’ / ‘Spend less time on Y’) there was the amusing ‘9am is too early please can the lecture times be moved later’.
You say that “students like lecturers who make them feel comfortable, make them think that a high class lecture is being given regardless of whether they are actually learning anything”, suggesting that student satisfaction and student learning are independent. But actually, it’s worse than that.
The only two studies (to my knowledge) which have properly randomised students into classes and then used a sensible follow-up measure of learning outcomes (achievement on follow-up modules) both found *negative* correlations between student satisfaction and student learning.
Given this, the University of Surrey story is astonishing.
See:
http://www.econ.ucdavis.edu/faculty/scarrell/profqual2.pdf
http://www.econstor.eu/bitstream/10419/51579/1/665262329.pdf
I’ve seen too many “marks” (high and low numerical scores) on student questionnaires which seem to get invalidated by the feedback comments: low marks followed by “there’s too many kinetics in this lecture” (yes… that is the content, glad you noticed…) or high scores followed by very generic comments about the subject that have nothing to do with the lecturer (“really interesting subject, at the core of my industrial interests”).
Using questionnaires in the way proposed by Surrey will end up encouraging mediocre lecturing styles, since no one will dare risk any innovative approach.
Also, questionnaires should be completed AFTER the module results are out: students should be able to weigh their experience of the module against how relevant that experience turned out to be for a specific result. Of course, that would need moderating, because students may tend to punish lecturers in modules in which they haven’t done well. Maybe the way forward is to provide questionnaires both during the module and after results have come out (with an anonymous way of correlating the same student’s responses before and after). But as they are, these questionnaires are just popularity contests.
I’m still struggling with the “women’s lib officer” quote…
The only picture I can come up with is the statue of liberty: clearly a woman, representing liberty, and an “officer” if you like of the US – or at least its beliefs.
So if you are dressing like her you might want to stop bringing the flaming torch into lectures, it’s probably against the fire regulations.
More seriously, the student questionnaires are just popularity contests as noted above, and the less challenging the course, the more popular the lecturer is likely to be. Modern mass media has also got us used to people who tell us things being picked for the job at least partially on the grounds of being photogenic, or at least of fitting the popular image of their profession (I’m thinking of TV & film stars, and who gets to be a successful politician). Many students have never met a scientist until they get to university and have no idea what one might look like, so they revert to media stereotypes where scientists are usually male and required to look like Einstein with hair like a superannuated sheep dog*.
Undergraduate students can also equate lecturer with secondary school teacher – a person who in their recent past had a disciplinary role and is usually cast in the mind’s eye in opposition to the students’ freedom. (The mid-80s movie The Breakfast Club captures that meme brilliantly.) This might explain the cattiness, but it does not in any way excuse it.
It is only with hindsight that many of these students will understand just how remarkable the teaching was.
* Sadly not my line, it is a quote from Nigel Calder’s book “Einstein’s Universe”
As with all your posts this has crystallized so many things I had always vaguely assumed must be true into things that papers have been written about showing them to be true! So very many thanks for that!
Someone recently asked me why the students had taken against a teaching method I had used when he thought he could use the same method with little or no issue. When I suggested his male professor status versus my lowly female lecturer status he actually laughed.
Now I can be more confident of that contributing…though, of course, it is unlikely to be the main reason!
There is some point (though maybe limited in first year monster classes) – when you have paid good money and got no return benefits. As in one lecturer giving all lectures completely out of whatever book he deemed relevant (and by completely I mean reading aloud!) and never turning up to his labs or setting the labs up even. The lab technicians (mostly masters students) had to invent their own in the hope of somehow being on target. It was a small class in our final year too. Total waste of time apart from the labs. One person gave him a not quite bottom score but the rest of us figured we needed someone else to repeat the year for free!!
This was the only feedback opportunity we really got and, admittedly, this was one poor lecturer out of many really good ones and a number who were passable but at least sound in their subject. I’m not sure how else this should be done so I suppose that stupid feedback has to be lived with just in case.
viv in nz
The problem is the assumption that using the MEQ scores is meant to drive up the quality of teaching, and that this is the reason for Surrey University’s wish to increase these scores for all their staff. It doesn’t matter if all the evidence points to no increase in teaching quality, if that is not the motive behind the use of MEQs. I suspect the real purpose of Surrey University’s use of MEQs is to lead students to subsequently rate their course equally highly on the similar questions posed in the NSS, and thus increase the university’s standing in the national league tables.