REF again!

There have been some interesting posts recently about the forthcoming Research Excellence Framework (REF2014), including one by Dave Fernig called In Defence of REF. This post makes some valid points. REF, and the previous RAEs, may well have encouraged more sensible hiring practices in which the quality of the applicant is taken more seriously than it perhaps was in the distant past. Two comments I would make are that I still think teaching ability is not taken seriously enough and that, in my field at least, many places have adopted a very risky hiring strategy that – I hope – doesn’t come back to bite us in five years’ time. Dave Fernig also seems to feel that the panel, in his field, can distinguish between excellent, good and mediocre papers. That may well be true in his field, but I don’t think it is in mine (physics).

Peter Coles, who writes the Telescoper blog, has written a new post called Counting for the REF. I won’t say much about it as you can read it for yourself, but I agree with much of what is said. Maybe the most concerning comment in the post was the suggestion that the weighting – when determining the funding distribution – would be 9 for 4* papers and 1 for 3* papers. Essentially, most of the funding would be determined by 4* papers and a very small amount would be associated with 3* papers. Fundamentally, I think this is unfortunate as it gives very little credit to some very good papers and absolutely no credit to what might be quite good papers (there is no funding associated with 2*).

There is a more fundamental concern associated with what is discussed in Peter Coles’s post. In a recent post (Some more REF thoughts) I pointed out that in physics fewer than 10% of all papers get more than 10 citations per year. The claim is that two members of the REF panel will read and assess each paper. However, as pointed out by others, this would require each panel member to read 2 papers per day for a year. Consequently, it is impossible for them to give these papers as much scrutiny as they would receive if they were being properly peer-reviewed. There is an expectation that metrics (citations, for example) will play an important role in deciding how to rate the papers. How could you do this? You could set a threshold and say, for example, that since most papers get fewer than 10 citations a year, 4* papers will be those that receive more than 10 citations a year. The problem I have (ignoring that citations are not necessarily a good indicator of quality) is that this would then be a very small fraction (about 5%) of all published papers. The distribution of REF funding would then be determined by a minority of the work published since 2008. This means that small variations can have a big impact on how the money is distributed. One could imagine that just a few papers being judged 3* instead of 4* could have a massive impact on how much money a department gets (I accept that the money doesn’t actually go to the department, but you probably know what I mean).
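To make the sensitivity concrete, here is a minimal sketch (all numbers invented) of how, under the suggested 9:1 weighting for 4* versus 3* papers, re-grading just two borderline papers noticeably shifts a department’s funding-relevant score:

```python
# Hypothetical illustration: with a 9:1 weighting for 4* vs 3* papers
# (and nothing at all for 2*), moving a couple of papers between grades
# shifts a department's funding-relevant score substantially.

def ref_score(n4, n3, w4=9, w3=1):
    """Weighted score used to distribute funding (2* papers carry zero weight)."""
    return w4 * n4 + w3 * n3

# A made-up department: 5 papers judged 4*, 40 judged 3*.
before = ref_score(n4=5, n3=40)   # 5*9 + 40*1 = 85
# Two borderline papers re-graded from 4* down to 3*:
after = ref_score(n4=3, n3=42)    # 3*9 + 42*1 = 69

print(before, after)
print(f"drop: {100 * (before - after) / before:.1f}%")  # roughly an 18.8% drop
```

Because the 4* weight dominates, a tiny number of borderline judgements moves almost a fifth of the score here, which is the fragility described above.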

Alternatively, if you want to avoid small variations having a big impact, you would need 4* papers to make up a reasonable fraction of the assessed papers (maybe 10 – 20%). The problem here is that you’re now getting down to papers that are only collecting a few (5 – 10) citations per year, so where do you draw the boundary? Is 3 per year too few, but 5 a year okay? You could argue that the metrics are just being used to guide the assessment and that the panels’ reading of the papers will allow a distinction to be drawn between 4* and 3* papers. This doesn’t, however, change the fact that the panel members have to read a massive number of papers. It feels more like combining two completely flawed processes and hoping that what pops out the other side is okay.

I suggested in an earlier post (REF prediction) that, given the diverse nature of a typical academic department or university, this might be an appropriate time to simply consider using some kind of metric. I did a quick analysis of all 42 physics departments’ h-indices and saw a reasonable correlation between their h-index and how they did in RAE2008. I noticed today that Deevy Bishop, who writes a blog called BishopBlog, has made a similar suggestion and carried out the same kind of analysis for psychology. Her analysis seems quite similar to mine and suggested that this would be An alternative to REF2014.

Anyway, it’s quite good to see others writing about REF2014 (whether for or against). I think it is a very important issue and I’m just disappointed that it is probably too late to make any changes that would make the REF2014 process simpler and more likely to produce a reasonable ranking.


4 thoughts on “REF again!”

  1. Thanks for mentioning my blog post on psychology rankings. It gets ever more interesting. I need to check and double-check, but I just did a very quick comparison of your results for physics and mine, having downloaded the results tables for physics from the RAE2008 site.
    I took as an outcome measure the amount of INCOME that results from RAE scores, because that takes into account the number of people entered in the RAE (which you would expect to affect the departmental H index – rather like age does for individuals). This is simply done by a formula that multiplies N people entered by a weighted (weighting in brackets) sum of the numbers of outputs rated as 2* (1), 3* (3) and 4* (7).
    The correlation between your (inverted) H-index ranking and the RAE income in physics came out as .80 – pretty close to what I got for psychology.
    Even more interesting, if you predict RAE income from H-index, and then add into the regression equation the number of panel members for each institution, the correlation goes up to .92, with panel membership accounting for a further 20% of the variance. I found a similar trend in psychology, though not quite as big.
    So not only does the H index do a remarkably good job of predicting RAE financial outcomes; insofar as the outcomes differ from that prediction, the indicators are that this is because there is an advantage to having a member of your institution on the panel.
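    The income formula described above can be sketched in a few lines. One ambiguity glossed over here: whether the outputs enter as raw counts or as a quality profile; the sketch below reads them as fractions of the submission, with invented numbers throughout:

```python
# Sketch of the RAE funding formula described above: N staff entered,
# multiplied by a weighted sum of the output quality profile.
# Weights: 2* = 1, 3* = 3, 4* = 7.  All numbers below are invented.

def rae_income_proxy(n_staff, frac_2star, frac_3star, frac_4star):
    """Funding-proportional score: N x weighted quality profile."""
    return n_staff * (1 * frac_2star + 3 * frac_3star + 7 * frac_4star)

# A hypothetical department: 40 staff entered, profile 30% 2*, 45% 3*, 20% 4*.
score = rae_income_proxy(40, 0.30, 0.45, 0.20)
print(round(score, 2))  # 122.0
```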

    • Thanks for the comment. Very interesting. Clearly your analysis is more suitable than the quick one I did. I did wonder how one would account for the size of the department, and what you suggest seems quite reasonable. It does seem like we are expending an awful lot of time and effort (and money) to do something that could be done much more simply and that would produce a result I suspect most would be reasonably happy with. Slightly concerning that there seems to be an advantage to having a member of your institution on the panel.

    • I’ve tried the weighting that you suggest and indeed the correlation is much stronger. I’ve added a new figure to my REF prediction post to show this. I actually got a correlation of 0.92 without taking panel membership into account, but maybe I did something wrong. It’s clearly quite strong anyway.

  2. Pingback: REF prediction | To the left of centre
