Kafkarna continues: REF gloves off at Lancaster University

I thought I would reblog this. Partly because I haven’t had a chance to write much recently and it gives me an opportunity to keep things ticking over, partly because it seems like something worth highlighting (although given my readership, this may not help much), and partly because I’ve written about the REF (and been quite critical) and this type of activity is one of the things that I expected to happen and illustrates the issues – in my opinion – with this type of assessment process. I particularly like, and agree with, this comment in the post: “Whatever this charade is, it is not a framework for research excellence”. I recommend giving this a good read.

coasts of bohemia

Two days ago an open letter from Professor Paolo Palladino, former Head of the History Department at Lancaster University, appeared on the university’s internal History Staff listserv, with copies sent to senior administrators in the Faculty of Arts and Social Sciences (FASS) and Central Administration.  I responded the next day with a post supporting Professor Palladino’s position, sent to the same recipients. With Paolo’s permission, I am reproducing both of our letters here.  We both believe that the issues raised by Lancaster’s selective culling of research-active staff from submission in the 2014 REF – a practice in which it is not alone among British universities – deserve the widest possible public debate.  We would therefore urge anyone who shares our concerns to share this post through social media.

Professor Palladino’s letter:
Dear all,
Over the next few days, a number of colleagues across the university are to be informed that they will not…

View original post 1,060 more words


REF, QR funding and the science budget

I guess we should all be reasonably pleased that the science budget has remained ring-fenced. The reality, of course, is that this just means that it is ring-fenced in a flat-cash sense, not in an inflation-adjusted sense. As scientists, maybe we should make sure they define the term more specifically in future. It does appear that the current definition would allow the government to claim that something has been ring-fenced even as its spending power tends to zero. Logically, you might expect it to be defined in terms of spending power rather than pounds, but that would require deciding between RPI and CPI, and that is clearly far too difficult.

Anyway, enough cynicism. In truth, we should probably be grateful that the outcome hasn’t been worse. It certainly has been for some, and I do feel that the current government has got its basic economic policies completely wrong. It seems like it’s time that someone explained to George Osborne that it’s not necessarily the size of the debt and deficit that matters; what matters is their size relative to the size of our economy. The way it’s going now, it seems like he’s getting it wrong on both counts.

What is maybe more concerning, from a science funding perspective, is the possibility that the government may choose to axe the QR funding. This is the funding stream that comes from the Higher Education Funding Councils, and how it is distributed is determined by the outcome of the Research Excellence Framework (REF) exercise. Now, it may well be part of the ring fence and it may well be safe, but I wouldn’t know whether to laugh or cry if it were cut. If you’ve read some of my earlier posts, you’ll know that I’ve been very critical of REF. This is both due to the manner in which it is implemented and due to the shenanigans taking place at UK universities: the potentially risky hiring practices, the morally/legally questionable redundancies, and the time and effort spent preparing for what is – in my opinion – a completely flawed exercise.

So, if it were to be cut, part of me would feel like saying “serves us right for taking something so silly so seriously, and for playing the kind of games that have not and will not benefit our fundamental role as teachers and researchers”. On the other hand, it is a lot of money (£1.5 billion I believe) and I certainly have no desire to see this money leave the Higher Education sector. As far as I can tell, some universities may struggle to survive as they are, even if there are no cuts to the QR funding. Well, I certainly hope that it isn’t cut, but I also hope that in future universities will find the backbone to tell the government that playing these kinds of games is silly and that it should find a simpler and more effective mechanism for distributing this money (although I don’t think it should simply be given to the research councils, but that might be a topic for another post).

REF and teaching

There’s a recent article in the Guardian about the influence of the Research Excellence Framework (REF2014) on university teaching. The basic issue is that money will only be allocated on the basis of papers that score highly (3* and 4*) and that league table rankings will also be determined by these high-ranking papers. Therefore, there is an incentive for universities to only submit researchers who have enough (essentially 4) papers that will be judged to be 3* or 4*. The concern is therefore that those who do not qualify will be encouraged (forced) to focus primarily on teaching or (as in the case of Queen Mary, University of London) face redundancy.

Many universities are making “clear pledges that not being entered to the REF in November will not damage an academic’s career”. There are others, however, where this is clearly already having an impact (Queen Mary, University of London, King’s College London and Strathclyde are three that I’ve heard about). I personally think that it is potentially a real problem. There is a big difference between how research and teaching are evaluated at universities, with an individual’s contribution to the research ranking being much more obvious than an individual’s contribution to any teaching ranking. One concern is that it will create a hierarchy within universities, with some able to focus more on research and others “encouraged” to focus primarily on teaching and administration. I don’t have an issue with different people contributing to an academic department in different ways. I just would rather it were dynamic and evolved in some “natural” way, rather than being forced upon us by an external assessment exercise.

University leaders are trying, in general, to make it clear that research and teaching are both valued parts of an academic’s career. The problem is that they don’t get to decide whether staff regard the two as being of similar value. It certainly seems that even students are concerned about the impact that REF might have on the motivation of staff who might be judged to be “unworthy” and hence encouraged into having a larger role in teaching. I certainly think that these concerns are justified, even if there isn’t yet any evidence that REF is, in general, having this kind of impact.

There do seem to be two common views expressed by those who are more supportive of REF than maybe I am. One is that it is not unreasonable to expect academics to publish 4 good papers every 7 years. In general I agree with this, although there may be some exceptions. However, there is a difference between publishing 4 good papers and publishing 4 papers that will be judged (by a panel – many of whom may not be particularly expert in your field) to be good. Maybe about one-quarter of my papers have done quite well (in terms of citations), but I don’t really have a good idea why they did well and why others didn’t. I can’t really look back and claim that I can now tell why some papers would be judged to be good, while others would not. I’m typically quite pleased with most of the papers I publish. Whether or not they do well (in metric terms) all seems a little random to me.

The other claim that is often made is that REF has forced universities to take hiring more seriously and that hiring is now based on excellence. Firstly, this is presumably only “perceived excellence” in research. One of the perennial criticisms of university hiring has been that teaching ability hasn’t been taken seriously enough. I really can’t see that REF has helped here. My feeling is that it may have made the situation worse. The other issue I have with this claim is that it suggests that the typical academic today is somehow better (because of REF) than they were 20 or 30 years ago. Really? I thought universities in the UK had been world-class for decades. I’m sure many academics who were active in the 60s, 70s, 80s and 90s might be slightly insulted by this suggestion. I suspect there were issues with hiring practices in those days, but that probably had more to do with societal issues that have since been remedied via equalities legislation than with anything REF has done.

It strikes me that there has been quite a lot of recent coverage of the negative aspects of REF, so maybe some of it will sink in. I’m not that hopeful, though. Maybe I should be considering holding back some of my current work so as to publish papers that will qualify for REF2021.

I wanted to reblog this partly because it’s good – in my view – to see more people writing about REF. I also think the post makes some interesting points. I think REF is horribly flawed, but maybe we also have to realise that self-regulation has its own problems, although I would suggest that self-regulation might be the wrong term to use. Not all of our research money comes through REF. A big fraction comes through research grants that are competitive and are assessed by a reasonably knowledgeable panel (although I would suggest that the allocation of research council grants in the UK has its own problems). REF essentially allocates QR money that goes – in the first instance – to the university, not to individuals. The university then decides how to divide this money up. It’s not clear to me why a simpler assessment exercise – one that used up less time, was harder to game, and in which the allocation was less non-linear (i.e., a weaker dependence of the money received per person submitted on how well you do) – wouldn’t be just as effective and less damaging. Part of me thinks that maybe the criteria should be secret until the process is finished, but I suspect many would then not trust it at all, and everyone would complain afterwards if they didn’t do as well as they thought they should have done.


When I was a medical student we were encouraged to conduct vaginal examinations on anaesthetized gynaecological patients, so that we could learn how to examine the reproductive system in a relaxed setting. The women did not know this was going to happen to them during their surgery, and did not sign any consent forms. Probably they would not have minded anyway (they were asleep after all, and educating the next generation of doctors is undoubtedly a good cause) but the possibility that they should be asked if they did mind did not occur to anybody, until my ultra-feminist friend kicked up a stink and organised a rebellion. The surgeons were genuinely surprised and mystified. Being well-meaning, it had not occurred to them that some people might see what they were doing as wrong.

Fast-forward a few years to the Alder Hey scandal, in which it emerged that doctors at a children’s hospital had…

View original post 584 more words

Some more REF thoughts

The post about my REF interview seems to have generated a modest amount of interest in the last day or so. There were no comments, so I can’t tell if others identified with my experience and agreed with my general views, or disagreed and thought it was all a load of nonsense. However, seeing that post generate a little interest reminded me that I had seen some interesting data recently about REF outputs. For those that don’t know, REF is the Research Excellence Framework and is an exercise in which the quality of research in UK universities will be judged and the results will determine how to divide up a fairly substantial pot of money. What makes it more “interesting” is that the formula that decides how much money each university gets is highly non-linear. There is a big difference between doing “very well” compared to simply doing “well”.
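To see why the non-linearity matters, here is a small sketch. The weights below are purely illustrative (the actual HEFCE formula and weightings are not specified in this post): suppose 4* outputs attract three times the funding weight of 3* outputs and lower grades attract nothing. A modest shift in the judged quality profile then produces a disproportionate shift in funding.

```python
# Illustrative only: hypothetical grade weights, NOT the actual HEFCE formula.
# 4* weighted 3x a 3*, lower grades weighted zero.
WEIGHTS = {"4*": 3.0, "3*": 1.0, "2*": 0.0, "1*": 0.0}

def funding_share(profile, weights=WEIGHTS):
    """Weighted score for a quality profile given as grade -> fraction of outputs."""
    return sum(weights[grade] * frac for grade, frac in profile.items())

# Two hypothetical departments differing only modestly in judged quality.
dept_a = {"4*": 0.30, "3*": 0.50, "2*": 0.20, "1*": 0.00}  # did "very well"
dept_b = {"4*": 0.20, "3*": 0.60, "2*": 0.20, "1*": 0.00}  # did merely "well"

score_a = funding_share(dept_a)  # 3*0.30 + 0.50 = 1.4
score_b = funding_share(dept_b)  # 3*0.20 + 0.60 = 1.2

# A 10-percentage-point swing in the 4* fraction moves the weighted
# score by roughly 17%, which translates directly into funding share.
print(score_a, score_b, score_a / score_b)
```

The steeper the weighting on the top grade, the more a small difference in how a handful of borderline papers are judged gets amplified into a large funding difference.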

What will be assessed will in general be papers published by academics in each institution. Typically, there will be 4 papers – published since 2008 – for each academic included in the submission. The intention is that each paper will be judged in terms of its originality, significance and rigour and will be given a score of 4*, 3*, 2*, or 1*. The claim is that the panels doing the judging will not be using Journal Impact Factors or citations to make their assessment. It has, however, already been pointed out that this claim is unlikely to be credible. In Physics, there will probably be something like 6500 papers, each of which will supposedly be read by 2 of the 20 panel members in a period of about 12 months. In other words, each panel member must read at least 2 papers every working day. Pretty difficult to do. Virtually impossible to make a credible judgement of each paper. The general view is that, despite what is claimed, Journal Impact Factors and citations will indeed be used to judge these papers.
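The workload arithmetic above can be made explicit (the 250 working days per year is my assumption; the other figures are those quoted in the post):

```python
# Panel workload estimate from the figures quoted above.
papers = 6500          # physics papers expected in the submission
reads_per_paper = 2    # each paper read by 2 panel members
panel_members = 20
working_days = 250     # assumption: ~250 working days in the 12-month period

reads_per_member = papers * reads_per_paper / panel_members  # readings each member must do
per_working_day = reads_per_member / working_days            # papers per member per working day

print(reads_per_member, round(per_working_day, 1))
```

That is 650 readings per panel member, or roughly 2 to 3 papers every single working day for a year, on top of their normal duties.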

Here’s what I found interesting. According to what I saw recently, a paper published since 2008 that is receiving about 8 citations a year will be in the top 10% according to citation numbers. I was a little surprised. I assumed that the top 10% of papers (according to citation numbers) would be receiving more than 8 or so citations a year. I decided to look into this myself using Web of Knowledge. If you search for all refereed articles published in the general area of Physics that also have “UK”, “United Kingdom”, “England”, or “Scotland” in the address, you discover 38176 refereed articles published since January 2008. Web of Knowledge can’t do citation statistics on more than 10000 papers, so I divided these papers into 5 categories (Condensed Matter, Astronomy & Astrophysics, Particle Physics, Nuclear Physics, Mathematical Physics). I also included a randomly chosen sample of areas in Physics that, together, hadn’t published more than 10000 papers since 2008. The table below shows the average number of citations per paper, the citation counts needed to be in the top 1% and in the top 10%, and the median.


Indeed, it seems that the average number of citations per Physics paper published since 2008 is about 10 and to be in the top 10% of all physics papers published since 2008 you need to be collecting fewer than 10 citations per year. Although it is different for different areas of physics, the difference isn’t particularly large. One issue with the above table is that the older papers will have collected more citations than the newer papers. I then repeated the above, but considered only papers published – with a UK author – in 2008, 2009, 2010, 2011, and 2012. In this case there are typically between 7000 and 7500 refereed articles published per year, so I didn’t divide it into different disciplines, but considered all articles in physics. The table below shows the result.


The result seems about the same. To be in the top 10% of papers published in any year since 2008, a paper needs fewer than 10 citations per year. Essentially, being in the top 10% of cited papers requires a fairly small number of citations per year. Put another way, most papers receive very few citations. What to make of this? Partly, I was just a little surprised. If asked, I would have guessed that to be in the top 10% of cited papers would require more than 10 citations per year. Also, what does the fact that a large fraction of refereed articles attract very few citations per year imply? Does it mean that much of what we publish isn’t particularly interesting? Although I think we probably publish too many papers, I don’t think that 90% of what we publish is worthless. Quite a large number of those papers receiving very few citations must be excellent bits of research that are worth publishing. Maybe they just haven’t been noticed. Maybe they’re what is referred to as slow-burners. Maybe one was a necessary step that has been superseded by a newer bit of research and so isn’t getting the citations it might deserve. Maybe it’s something a researcher enjoyed doing, learned a lot from, and that then allowed them to move on to something newer and more interesting.
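The pattern described above – a mean well above the median and a surprisingly low top-10% threshold – is exactly what you get from a heavily right-skewed distribution. A sketch of the percentile calculation, using hypothetical lognormal citation counts rather than the real Web of Knowledge export (the distribution parameters here are my assumption, chosen only to illustrate the skew):

```python
# Sketch of the percentile analysis on hypothetical citation data.
# A lognormal is a common stand-in for skewed citation distributions;
# the parameters below are illustrative, not fitted to real data.
import numpy as np

rng = np.random.default_rng(0)
citations = rng.lognormal(mean=1.5, sigma=1.2, size=10_000).astype(int)

print("mean:   ", citations.mean())            # pulled up by a few highly cited papers
print("median: ", np.median(citations))        # what the typical paper gets
print("top 10%:", np.percentile(citations, 90))  # threshold to enter the top 10%
print("top 1%: ", np.percentile(citations, 99))
```

With a distribution like this, the mean sits well above the median, and the bulk of papers cluster far below the handful of highly cited ones, so the top-10% cutoff ends up much lower than intuition suggests.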

What’s more interesting is how citations can then be used to judge these papers. We will presumably be submitting something like 6500 papers, so potentially 17% or so of all refereed physics papers published since 2008. Only papers judged to be 3* or 4* will attract money. One could assume that 3* and 4* papers will be those with a much higher than average number of citations. This would then imply that a small number of papers will be used to determine how to divide up the large sum of money associated with REF. Small variations could then have a big effect. On the other hand, if 3* papers are not necessarily those with much more than the average number of citations, how do you then distinguish between 3* and 2* papers? Most papers are collecting fewer than 10 citations per year. Where’s the division? Is 3 a year 2* and 6 a year 3*? Alternatively, we shouldn’t really use citations and metrics at all and should judge each paper on its originality, significance and rigour (as suggested in the REF documentation). The problem is that very few people, if any, believe that it is possible for a panel of 20, however distinguished, to do this.

The truth is probably that it will be a combination. The panel members will, I’m sure, try to read the papers and will then use metrics to fine-tune their scores. However, combining two largely flawed processes to try and determine the quality of research activity in UK universities doesn’t really seem like much of an improvement. I suspect that, at the end of the day, a ranking will be produced that isn’t entirely unreasonable. However, as I’ve pointed out before, it should be possible to achieve a reasonable ranking in a manner that doesn’t use up quite as much time and effort as REF is currently doing.

REF Interview

So I recently had an interview with our Head of Department and our Director of Research to discuss my inclusion in the Research Excellence Framework (REF2014). Although I had expected some problems, my interview went fine and it looks as though I’m “good enough” or, more correctly, I have my name on 4 papers that are “good enough”. Some of my colleagues were not particularly happy after their interviews, but I don’t feel all that comfortable discussing their issues in any detail. As you may know, if you’ve read any of my other posts, I’m not a huge fan of REF. Even though my interview went fine, my views haven’t changed much and – if anything – it’s rather confirmed my general views of the issues with the process.

I don’t really want to go into specifics, as maybe I shouldn’t be giving away our strategy. I thought, instead, that I would give some general comments. It’s clear that it’s a game, and everyone knows this. There are two parameters: money and league table ranking. Most institutions will be adjusting their submission to optimise between these two parameters. This is not an objective assessment of research quality. It’s a game to try and submit the optimum set of papers that will get you as high as possible on the league table and get you as much money as possible. Some institutions may determine that it will be better to sacrifice league table position for money and others may choose to prioritise league table position over money.

It was claimed that submissions (papers) would be read and assessed and that metrics would not be used. No one believes this to be true. We’re clearly using Impact Factors and citations at some level. Papers in Nature or Science will be 4* simply because they’re in Nature or Science. Papers in other good journals will be 4*, 3* or 2* depending on the number of citations. There’s also no rigorous assessment of ownership. As long as you can indicate that you contributed sufficiently to a paper, your institution will get full credit. Someone who makes modest contributions to 4 papers that are regarded as excellent is more valuable to an institution than someone who writes 4 good papers, none of which are regarded as excellent.

You could argue that the 4 excellent papers are better than the 4 good papers, but there is a chance that our careers may depend on how our 4 papers are judged for REF. Is it really better to have average people who are good at getting their names on excellent papers, rather than good people who write their own? I may be slightly biased, in that I’m probably more in the latter category than the former, but I would say that I don’t intend for my papers to only be “good”. I always aim to write papers that tackle interesting and challenging problems, and I always aim to be as careful and rigorous as I can reasonably be. I always try to write something that would be regarded by others as excellent. The citations, however, don’t always indicate that they have been received particularly well. Of course, the definition of excellence is very tricky, and using simplistic metrics is not necessarily a good way to determine the quality of a piece of research. In discussing this, someone commented that it is not unreasonable to expect academics to write 4 good papers every 7 years. In some sense I agree, and I would certainly back myself to write 4 good (maybe even some excellent) papers every 7 years. What I have much less confidence in is the ability of those who are judging these (either on the REF panel or in my own Department) to do so properly.

It seems that there is a real chance that we will aim to adapt our research strategy to suit future REF exercises. It’s clear that independence and originality are no longer necessarily optimal. It might be much better to have people who work in collaborations that are likely to publish papers that will be highly cited. I could easily see people being encouraged to make sure that they belong to such collaborations and that they aim to contribute sufficiently to at least 4 papers so that their institution can submit these 4 papers to REF. Maybe this won’t happen, but I find the possibility quite disturbing. What I find slightly more disturbing is that the senior people I encounter acknowledge that it is a game, but seem to feel that it is a game we need to play. No one seems that bothered that we might be adjusting our research strategies to suit what is essentially an assessment exercise. I appreciate that there is a lot of money involved but, as far as I’m concerned, either our research is valuable at some fundamental level, or we really shouldn’t be bothering. Maybe that’s a little extreme, but hopefully you know what I mean.

That’s probably all I was going to say. As I may have mentioned before, a concern of mine is that over time we will adapt so that top UK institutions are very good at scoring well on REF exercises but don’t really do research that has any particular value. Maybe I’ll be proven wrong and once REF is finished we’ll forget about it for a while and everyone will be left alone to get on with whatever research they think is most interesting and valuable. I somewhat doubt that, and I do worry slightly about the careers of those who are deemed not to have enough papers that are good enough for REF. Will they be marked and will their jobs be at risk? I think it would be awful if that did start happening. I have no real issue with people who are not contributing positively to the running of an academic department being sanctioned. I do, however, have an issue with the possibility that some people’s careers could depend largely on an assessment exercise that, in my opinion, is horribly flawed.

I had considered writing about this myself. I also noticed the article in the Times Higher Education a few days ago about HESA publishing contextual staff data. I’ve written a number of times about my concerns about REF2014 (see, for example, REF2014: Good or Bad and The negative impact of REF) and it is amazing – to me at least – that at this late stage they are still finalising the procedures. Rather than writing my own post, I thought I would simply reblog this as it mostly says what I was going to say myself.

In the Dark

The topic of the dreaded 2014 Research Excellence Framework came up quite a few times in quite a few different contexts over the last few days, which reminded me that I should comment on a news item that appeared a week or so ago.

As you may or may not be aware, the REF is meant to assess the excellence of university departments in various disciplines and distribute its “QR” research funding accordingly.  Institutions complete submissions which include details of relevant publications etc and then a panel sits in judgement. I’ve already blogged of all this: the panels clearly won’t have time to read every paper submitted in any detail at all, so the outcome is likely to be highly subjective. Moreover, HEFCE’s insane policy to award the bulk of its research funds to only the very highest grade (4* – “internationally excellent”) means that small variations in judged…

View original post 479 more words