Kafkarna continues: REF gloves off at Lancaster University

I thought I would reblog this. Partly because I haven’t had a chance to write much recently and it gives me an opportunity to keep things ticking over, partly because it seems like something worth highlighting (although given my readership, this may not help much), and partly because I’ve written about the REF (and been quite critical) and this type of activity is one of the things I expected to happen; it illustrates the issues – in my opinion – with this type of assessment process. I particularly like, and agree with, this comment in the post: “Whatever this charade is, it is not a framework for research excellence.” I recommend giving this a good read.

coastsofbohemia

Two days ago an open letter from Professor Paolo Palladino, former Head of the History Department at Lancaster University, appeared on the university’s internal History staff listserv, with copies sent to senior administrators in the Faculty of Arts and Social Sciences (FASS) and Central Administration. I responded the next day with a post supporting Professor Palladino’s position, sent to the same recipients. With Paolo’s permission, I am reproducing both of our letters here. We both believe that the issues raised by Lancaster’s selective culling of research-active staff from submission in the 2014 REF – a practice in which it is not alone among British universities – deserve the widest possible public debate. We would therefore urge anyone who shares our concerns to share this post through social media.

Professor Palladino’s letter:
Dear all,
Over the next few days, a number of colleagues across the university are to be informed that they will not…



The integrity of universities

There’s a very interesting (in my opinion) article by George Monbiot in the Guardian today. The article is called Oxford University won’t take funding from tobacco companies, but Shell’s OK. The basic premise is that universities should be acting for the common good, or, as George Monbiot puts it, that there is


the need for a disinterested class of intellectuals which acts as a counterweight to prevailing mores.


I have to say that I agree completely with this. It has always surprised me how disengaged UK academics can be. I had always assumed that university academics had a role to play in defining what is acceptable in our societies. They are meant to be the intellectual members of our society; the people who think. If academics are reluctant to be involved in this, then who else is going to do it? This isn’t to say that everyone should bow to the views of academics, simply that academics should feel free to question what is accepted in our societies.

There are probably many reasons why UK academics are reluctant to engage in discussions about our society. One may simply be that academics have become very focused. They see themselves as experts in quite specific areas and so don’t see it as appropriate to engage in areas outside their expertise. There is some merit to this, but it is a bit disappointing – in my opinion. Another may be that there is now quite a lot of pressure on academics. Universities have become very bureaucratic and there is quite a strong publish-or-perish attitude. Academics don’t have much spare time to contemplate the merits – or lack thereof – of our societal mores. Universities have also become much more like businesses. The goal is to maximise teaching and research income and, hence, academics are discouraged from doing anything that doesn’t enhance a university’s ability to generate income.

Personally, I think the latter is the main reason why universities (in the UK) are no longer hotbeds of dissent. We are publicly funded and hence need to do what is expected of us. I’m often very critical, for example, of the Research Excellence Framework (REF2014) and, even though most seem to agree, the typical response is “we just have to do this”. Well, yes, but do we have to do it happily? If we think it is damaging what we regard as strengths of the UK Higher Education system, shouldn’t we be making it clear that we’re doing it under duress? There’s also this view that we have to do what is best for UK PLC (i.e., what will best help economic growth in the UK). In a sense, I agree with this. What I disagree with is how we’re influenced to do this. University research has, for many decades, had a very positive impact on economic growth. However, this didn’t happen because politicians told universities to do it. It happened because people recognised the significance of some piece of research and used it to develop something that had economic value. It’s also largely unpredictable. It’s certainly my view that telling us to predict the economic benefit of our research in advance will do more harm than good.

The final thing I was going to say concerns the main thrust of George Monbiot’s article. If there is increasing evidence that global warming is happening (as there is), and if there is evidence that such warming could lead to life-threatening climate change, shouldn’t the universities where this research is taking place act as though it’s important? Maybe universities shouldn’t be accepting funding from oil companies if their own research indicates that we should significantly reduce our use of fossil fuels. I have heard some argue that we shouldn’t worry about the provenance of our funding as we’ll typically do good things with whatever money we can get. I think this is naive. The idea that one can take funding from oil companies without being influenced by the source of that funding seems implausible to me.

A new REF algorithm

In a previous post (REF prediction) I looked up the h-indices and the citations per publication for all Physics and Astronomy departments included in RAE2008. I ranked them by their h-index, by their citations per publication, and by the average of these two rankings. It looked alright, but I did comment that one could produce something more sophisticated. At the time I did worry that using just the h-index would disadvantage smaller departments, but I couldn’t really think of what else to do and it was just a very basic exercise.

Dorothy Bishop has, however, suggested an alternative way of ranking the departments: relate the funding a department received to its h-index. In RAE2008 each department was assessed according to what fraction of its papers were 4*, 3*, 2*, 1* and U (unclassified). The amount of funding it received (although I think it technically went to the university, rather than to the department) was then scaled according to N(0.1×2* + 0.3×3* + 0.7×4*), where N was the number of FTEs submitted. This data can all be downloaded from the RAE2008 website. Dorothy Bishop did this analysis for psychology and discovered that the level of funding from RAE2008 correlated extremely well with a department’s h-index. What was slightly concerning was that the correlation was even stronger if one also included whether or not a department was represented on the RAE2008 panel.
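As a quick illustration of that weighting, here is a minimal sketch in Python; the formula is the one above, but the department numbers are made up rather than real RAE2008 data:

```python
# RAE2008-style funding weighting: N * (0.1*2star + 0.3*3star + 0.7*4star),
# where N is the number of FTEs submitted and the star values are the
# fractions of outputs rated at each quality level.

def rae2008_funding_score(n_fte, frac_2star, frac_3star, frac_4star):
    """Relative funding score under the RAE2008 weighting."""
    return n_fte * (0.1 * frac_2star + 0.3 * frac_3star + 0.7 * frac_4star)

# Hypothetical department: 40 FTEs with 20% 2*, 40% 3* and 25% 4* outputs.
print(rae2008_funding_score(40, 0.20, 0.40, 0.25))  # 12.6
```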

I’ve now done the same analysis for Physics and Astronomy. I’ve added various figures and text to my REF prediction post, but thought it worth making it more prominent by adding it to a new post. The figure showing RAE2008 funding plotted against h-index is below. According to my quick calculation, the correlation is 0.9. I haven’t considered how this changes if you include whether or not a department was represented on the RAE2008 panel. The funding formula for REF2014 might possibly be N(0.1×3* + 0.9×4*). I’ve redone the figure below to see what the impact would have been if this formula had been used instead of the RAE2008 formula. It’s very similar and – if you’re interested – it’s included at the bottom of my REF prediction post. It does seem that if all we want to know is how to distribute the money, relating it to a department’s h-index works quite well (or at least it would have worked well if used for RAE2008). I’m not quite sure how easy it would be to produce an actual league table, though. Given that the REF2014 formula may depend almost entirely on the fraction of 4* papers, one could simply divide the h-index by the number of FTEs to get a league table ranking, but I haven’t had a chance to see if this produces anything reasonable. Of course, no one really trusts league tables anyway, so it may be a good thing if we don’t bother producing one.

A plot of h-index against the RAE2008 funding formula – N(0.1×2* + 0.3×3* + 0.7×4*).
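For anyone who wants to reproduce this kind of check, the correlation itself is a one-liner. Here is a minimal sketch; the numbers below are hypothetical, not the real departmental data:

```python
import numpy as np

# Each department's funding score (from the RAE2008 formula above)
# paired with its h-index; values here are made up for illustration.
funding_scores = np.array([8.4, 12.6, 15.0, 22.7, 30.1])
h_indices = np.array([28, 35, 40, 51, 62])

# Pearson correlation; the real Physics and Astronomy data give roughly 0.9.
r = np.corrcoef(funding_scores, h_indices)[0, 1]
print(f"correlation: {r:.2f}")
```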

REF again!

There have been some interesting posts recently about the forthcoming Research Excellence Framework (REF2014). One, by Dave Fernig, is called In Defence of REF and makes some valid points. REF, and the previous RAEs, may well have encouraged more sensible hiring practices, in which the quality of the applicant is taken more seriously than it maybe was in the distant past. Two comments I would make are that I still think teaching ability is not taken seriously enough and that, in my field at least, many places have adopted a very risky hiring strategy that – I hope – doesn’t come back to bite us in five years’ time. Dave Fernig also seems to feel that the panel, in his field, can distinguish between excellent, good and mediocre papers. This may well be true in his field, but I don’t think it is in mine (physics).

Peter Coles, who writes the Telescoper blog, has written a new post called Counting for the REF. I won’t say much about it as you can read it for yourself, but I agree with much of what is said. Maybe the most concerning suggestion in the post is that the weighting – when determining the funding distribution – would be 9 for 4* papers and 1 for 3* papers. Essentially, most of the funding would be determined by 4* papers and a very small amount would be associated with 3* papers. Fundamentally, I think this is unfortunate, as it gives very little credit to some very good papers and absolutely no credit to what might be quite good papers (there is no funding associated with 2*).
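To see how lopsided that 9:1 weighting is, consider a quick worked example (the department profile here is hypothetical):

```python
# Suggested REF2014-style weights: 9 per 4* paper, 1 per 3* paper, 0 below that.
# A hypothetical department with 10% 4* and 40% 3* outputs:
frac_4star, frac_3star = 0.10, 0.40

score = 9 * frac_4star + 1 * frac_3star  # 0.9 + 0.4 = 1.3
print(score)                   # 1.3
print(9 * frac_4star / score)  # ~0.69: roughly 70% of funding from 10% of papers
```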

There is a more fundamental concern associated with what is discussed in Peter Coles’s post. In a recent post (Some more REF thoughts) I pointed out that in physics fewer than 10% of all papers get more than 10 citations per year. The claim is that two members of the REF panel will read and assess each paper. However, as pointed out by others, this would require each panel member to read 2 papers per day for a year. Consequently, it is impossible for them to give these papers as much scrutiny as they would get if they were being properly peer-reviewed. There is therefore an expectation that metrics (citations, for example) will play an important role in deciding how to rate the papers. How could you do this? You could set a threshold and say, for example, that since most papers get fewer than 10 citations a year, 4* papers will be those that receive more than 10 citations a year. The problem I have (ignoring that citations are not necessarily a good indicator of quality) is that this would then be a very small fraction (about 5%) of all published papers. The distribution of REF funding would then be determined by a minority of the work published since 2008. This means that small variations can have a big impact on how the money is distributed. One could imagine that just a few papers being judged 3* instead of 4* could have a massive impact on how much money a department gets (I accept that the money doesn’t actually go to the department, but you probably know what I mean).
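A minimal sketch of that thresholding idea (with synthetic citation rates; the real distribution puts only about 5% of papers above 10 citations a year):

```python
import numpy as np

def fraction_above(citations_per_year, threshold):
    """Fraction of papers that would be rated 4* under a simple citation cutoff."""
    return np.mean(np.asarray(citations_per_year) > threshold)

# Hypothetical citations-per-year for a department's papers.
rates = [0.5, 1, 2, 2, 3, 3, 4, 5, 6, 8, 12, 15, 25]
print(fraction_above(rates, 10))  # ~0.23 here; ~0.05 for the real distribution
```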

Alternatively, if you want to avoid small variations having a big impact, you would need 4* papers to make up a reasonable fraction of the assessed papers (maybe 10–20%). The problem here is that you’re now getting down to papers that are only collecting a few (5–10) citations per year, so where do you draw the boundary? Is 3 a year too few, but 5 a year okay? You could argue that the metrics are just being used to guide the assessment and that the panels’ reading of the papers will allow a distinction to be drawn between 4* and 3*. This doesn’t, however, change the fact that the panel members have to read a massive number of papers. It feels more like combining two completely flawed processes and hoping that what pops out the other side is okay.

I suggested in an earlier post (REF prediction) that, given the diverse nature of a typical academic department or university, this might be an appropriate time to simply consider using some kind of metric. I did a quick analysis of all 42 physics departments’ h-indices and saw a reasonable correlation between their h-index and how they did in RAE2008. I noticed today that Dorothy Bishop, who writes a blog called BishopBlog, has made a similar suggestion and carried out the same kind of analysis for psychology. Her analysis seems quite similar to mine and suggests that this could be An alternative to REF2014.

Anyway, it’s quite good to see others writing about REF2014 (whether for or against). I think it is a very important issue and I’m just disappointed that it is probably too late to make any changes that would make the REF2014 process simpler and more likely to produce a reasonable ranking.

What are universities for?

What has been happening at Queen Mary, University of London (highlighted here), and at other UK universities, has made me consider what universities are actually for. Stefan Collini has written a book titled What are Universities for? and I should probably read it before writing my own post, but I haven’t, so I’ll write something anyway.

Basically, I think universities are places where people carry out research and other scholarly activities and pass on what they (and others) have learned. The research/scholarship should be original and fundamental and should aim to enhance our understanding of the world/universe. We then pass on this knowledge through publishing papers, talking at conferences, engaging with the public and educating students who can then go out and use this knowledge and the associated skills throughout their careers. The impact that universities have is therefore partly through the graduates and partly through the research/scholarship which may have both societal and economic impact (although one would expect it to be medium to long-term impact).

An academic job is also typically assumed to be permanent. The US still has tenure; the UK doesn’t, but academic jobs are still regarded as permanent. There are two reasons for this (I think). We want to attract world-class researchers into jobs that don’t pay huge salaries, so job security does play a role in making the career attractive. The other reason is that academic researchers have typically had what is called academic freedom: the freedom to study and research, essentially, whatever they would like. I think this is quite important. Without it, we risk the possibility that academics start doing predictable, risk-free research, which won’t have as much impact as the risky research that might result in something completely unexpected (but might also result in nothing). Also, what is the alternative? I don’t have a boss who decides what research I should do. I decide for myself. Sometimes it turns out to be interesting and worth publishing. Sometimes it doesn’t. I think it is important that academics can commit to a project that may not lead to anything without having to worry that they could lose their jobs if the management decide they’re no longer doing valuable research.

It is now very clear, however, that universities are run as businesses, with management teams who need some measure of success. Success is now generally regarded, by the management at least, as how much money the research is able to bring in and where the university sits in various league tables. The next big pot of money is associated with next year’s Research Excellence Framework (REF2014). This is leading to some universities (Queen Mary, University of London, for example) actually getting rid of a large fraction of the academics in some departments so as to hire new academics who can, supposedly, improve their REF2014 ranking. Firstly, I think this is morally indefensible, as these are people who, by all accounts, are doing their jobs. They are being made redundant because the university has introduced measures of research success that they don’t satisfy. If one could show that these are sensible measures of research quality, this might at least make sense, but they almost certainly aren’t. These redundancies may also be, technically, illegal: redundancies can take place if jobs are no longer needed, yet Queen Mary is currently advertising for people to take over from those being made redundant. I also think this is a very dangerous thing to do. People decide on academic careers for a number of reasons, but job security and the freedom to carry out research of your own choosing are certainly important considerations. If these are removed, then it’s quite likely that those with the most potential will simply not choose an academic career.

It’s possible that Queen Mary and the other universities who are replacing staff to improve their REF score will achieve what they want and will indeed move up the REF rankings. In the long term, however, I think this kind of behaviour will lead to a university system that scores well according to the current metrics but doesn’t actually do anything particularly significant. When everyone realises that the scoring system is flawed, these universities may suddenly plummet down the rankings as a better way of measuring research quality is introduced. I think this kind of behaviour is potentially extremely damaging to UK Higher Education, and this kind of management (as exemplified by Simon Gaskell, principal of Queen Mary) could see the UK Higher Education system losing its status very quickly.

I’ll finish with a link to a post about Leaving Academia written by a US academic. I don’t think that the UK system is necessarily as bad as suggested in this post (and I’m certainly not about to leave academia), but it certainly strikes a chord with me and I do worry that we are heading in the kind of direction this post highlights. You’ll have to read it to see what that is, but I think it is worth a read.

REF prediction

I’ve come to feel more strongly that although the Research Excellence Framework (REF) is trying to do something reasonably decent, it is doing it in a ridiculous and counterproductive way. Not only does it take an awful lot of effort and time, it also has a big impact on how universities and university departments behave. As I’ve mentioned before, I think the amount of effort expended assessing the various university departments in order to give them a REF score is excessive, and that using metrics might be more appropriate. I don’t particularly like the use of metrics, but if ever there was an appropriate time, it would be when assessing a large, diverse organisation like a university.

To put my money where my mouth is, I decided to see if I could come up with a ranking for all of the 42 Physics and Astronomy departments that were included in RAE2008 (the precursor to REF). For REF2014, each department will submit 4 papers per submitted academic, and these papers must be published or in press between January 2008 and October 2013. What I did was go to Web of Science and find all the papers published in Physics and Astronomy by each of the 42 departments included in RAE2008. Since it is currently October 2011, I used papers published between January 2006 and October 2011 (a window of the same length as REF2014’s). I also didn’t exclude reviews or conference papers. For each department I then determined the h-index of its publications and the number of citations per publication. I ranked the departments according to these two metrics and decided that the final ranking would be determined by the average of these two rankings. The final table is shown below. It is ordered by the average of the h-index and citations-per-publication rankings, but these individual rankings are also shown, as is the ranking that each department achieved in RAE2008.
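For anyone who wants to repeat this, the h-index step is straightforward. Here is a minimal sketch; the citation counts in the example are made up:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical per-paper citation counts exported from Web of Science.
papers = [50, 18, 12, 7, 6, 6, 2, 1]
print(h_index(papers))            # 6
print(sum(papers) / len(papers))  # citations per publication: 12.75
```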

I don’t know if the above ranking has any merit, but it took a couple of hours and seems – at first glance at least – quite reasonable. The departments that one would expect to be strong are near the top and the ones that one might expect to be weaker are near the bottom. I’m sure a more sophisticated algorithm could be determined and other factors included but I predict (I’ll probably regret doing this) that the final rankings that will be reported sometime in 2015 will be reasonably similar to what I’ve produced in a rather unproductive afternoon. We’ll see.

Addendum – added 21/03/2013
Dorothy Bishop, who writes a blog called BishopBlog, has carried out a similar exercise for Psychology. In her post she compares the h-index rank with the RAE2008 position and also works out the correlation. I thought I would do the same for my analysis. It’s slightly different in that Dorothy Bishop considered the h-index rank for the time period associated with RAE2008, while I’ve considered the h-index rank associated with a time period similar to that for REF2014, but it should still be instructive. If I plot the RAE2008 rank against h-index rank, I get the figure below. The correlation is 0.66, smaller than the 0.84 that Dorothy Bishop got for Psychology, but not insignificant. There are some clear outliers and the scatter is quite large. Also, this was a very quick analysis, and something more sophisticated, but still simpler than what is happening for REF2014, could certainly be developed.

h-index rank from this work plotted against RAE2008 rank for all Physics departments included in RAE2008.
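Since these are rank-against-rank comparisons, a rank correlation is the natural check. A minimal sketch, with hypothetical ranks:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical ranks: each department's h-index rank and its RAE2008 rank.
h_rank = np.array([1, 2, 3, 4, 5, 6, 7, 8])
rae_rank = np.array([2, 1, 4, 3, 7, 5, 8, 6])

# Spearman's rho is just the Pearson correlation applied to ranks.
rho, _ = spearmanr(h_rank, rae_rank)
print(f"rank correlation: {rho:.2f}")
```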

Additional addendum
Dorothy Bishop, through a comment on my most recent post, has described a sensible method for weighting the RAE2008 results to take into account the number of staff submitted. The weighting (which essentially ranks institutions by how much funding each received) is N(0.1×2* + 0.3×3* + 0.7×4*), where N is the number of staff submitted and 2*, 3* and 4* are the percentages of the submitted papers at each rating. If I compare the h-index rank from above with this new weighted rank, I get the figure below, which (as Dorothy Bishop found for psychology) shows a much stronger correlation than my figure above. Dorothy Bishop checked the correlation for physics and found a value of 0.8 using the basic data, and a value of 0.92 if one included whether or not an institution had a staff member on the panel. I did a quick correlation and found a value of 0.92 without taking panel membership into account. Either way, the correlation is remarkably strong and seems to suggest that one could use h-indices to get quite a good estimate of how to distribute the REF2014 funding.

Plot showing the h-index rank (x-axis) and a weighted RAE2008 ranking (y-axis) for all UK Physics institutions included in RAE2008.
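Checking whether panel membership adds anything beyond the h-index amounts to a small regression. Here is a sketch of how one might do it; all the numbers, and the use of R² as the measure of fit, are my own illustration rather than the actual calculation:

```python
import numpy as np

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

# Hypothetical data: h-index, 0/1 panel membership, weighted funding score.
h = np.array([28.0, 35, 40, 51, 62, 44])
on_panel = np.array([0.0, 0, 1, 1, 1, 0])
funding = np.array([8.4, 12.6, 15.0, 22.7, 30.1, 14.2])

print(r_squared(h[:, None], funding))                      # h-index alone
print(r_squared(np.column_stack([h, on_panel]), funding))  # plus panel membership
```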

Another addendum
I realised that in the figure above I had plotted RAE2008 funding-level rank against h-index rank, rather than simply RAE2008 funding level against h-index. I’ve redone the plot and the new one is below. It still correlates well (a correlation of 0.9 according to my calculation). I’ve also done a plot showing h-index (for the RAE2008 period, admittedly) against what might be the REF2014 formula, which is thought to be N(0.1×3* + 0.9×4*). It still correlates well but, compared to the RAE2008 plot, it seems to shift the bottom points to the right a little. This is presumably because the funding formula now depends strongly on the fraction of 4* papers, so the supposedly weaker institutions suffer a little compared to the more highly ranked institutions. Having said that, the plot using the possible REF2014 funding formula does seem very similar to the RAE2008 figure, so I hope I haven’t made some kind of silly mistake. I don’t think so. Presumably it just means that, for RAE2008, (0.1×3* + 0.9×4*) is similar to (0.1×2* + 0.3×3* + 0.7×4*).

A plot of h-index against the RAE2008 funding formula – N(0.1×2* + 0.3×3* + 0.7×4*).


A plot showing h-index (for RAE2008 period) plotted against a possible REF2014 formula – N(0.1×3* + 0.9×4*).

EPSRC again

It seems my concern (in an earlier post about EPSRC Fellowships) regarding EPSRC’s research funding philosophy is not completely unfounded. An article in the Times Higher Education reports that EPSRC starts to impose order on its universe.

The basic idea seems to be that EPSRC will specify, quite precisely, what they are willing to fund. They also appear to be explicitly regarding themselves as “sponsors” of research, rather than “funders” of research. I must admit that I don’t quite know, at this stage, what the distinction is. I do, however, have a concern that funding councils like EPSRC will start to feel that they should decide what research needs to be done and will regard university researchers as “contractors” who carry out the specified research. I think this is a very dangerous policy to follow, as it seems highly unlikely to lead to the breakthroughs that we ideally would like. Furthermore, if senior EPSRC people are ultimately the ones to effectively decide what research should be done, why did they decide to become administrators rather than remaining active researchers? If they were that brilliant, they would never have willingly given up their research careers.

The other issue I have is that I don’t really feel that taxpayers should be funding research that industry could be doing. If something is likely to return a profit in the short to medium term, then industry should be funding the research. The taxpayer should fund the research that we can’t reasonably expect industry to fund, and it is this research that is very difficult to specify in advance. I’m not suggesting that research councils should never fund industrially relevant research, simply that they should tend to fund work that will have long-term benefits, or societal benefits that industry may not regard as immediately valuable. I think EPSRC’s current policies are potentially extremely damaging and I hope they rethink them soon.