REF Interview

So I recently had an interview with our Head of Department and our Director of Research to discuss my inclusion in the Research Excellence Framework (REF2014). Although I had expected some problems, my interview went fine and it looks as though I’m “good enough” or, more correctly, I have my name on 4 papers that are “good enough”. Some of my colleagues were not particularly happy after their interviews, but I don’t feel all that comfortable discussing their issues in any detail. As you may know, if you’ve read any of my other posts, I’m not a huge fan of REF. Even though my interview went fine, my views haven’t changed much; if anything, the process has rather confirmed my general view of its problems.

I don’t really want to go into specifics, as maybe I shouldn’t be giving away our strategy. I thought, instead, that I would give some general comments. It’s clear that it’s a game, and everyone knows this. There are two parameters: money and league table ranking. Most institutions will be adjusting their submission to optimise between these two parameters. This is not an objective assessment of research quality. It’s a game to try and submit the optimum set of papers that will get you as high as possible on the league table and get you as much money as possible. Some institutions may determine that it is better to sacrifice league table position for money, and others may choose to prioritise league table position over money.

It was claimed that submissions (papers) would be read and assessed and that metrics would not be used. No one believes this to be true. We’re clearly using impact factors and citations at some level. Papers in Nature or Science will be 4* simply because they’re in Nature or Science. Papers in other good journals will be 4*, 3* or 2* depending on the number of citations. There’s also no rigorous assessment of ownership. As long as you can indicate that you contributed sufficiently to a paper, your institution will get full credit. Someone who makes modest contributions to 4 papers that are regarded as excellent is more valuable to an institution than someone who writes 4 good papers of their own, none of which are regarded as excellent.

You could argue that the 4 excellent papers are better than the 4 good papers, but there is a chance that our careers may depend on how our 4 papers are judged for REF. Is it really better to have average people who are good at getting their names on excellent papers, rather than good people who write their own? I may be slightly biased in that I’m probably more in the latter category than the former, but I would say that I don’t intend for my papers to only be “good”. I always aim to write papers that tackle interesting and challenging problems, and I always aim to be as careful and rigorous as I can reasonably be. I always try to write something that would be regarded by others as excellent; the citations, however, don’t always indicate that my papers have been received particularly well. Of course, the definition of excellence is very tricky, and using simplistic metrics is not necessarily a good way to determine the quality of a piece of research. In discussing this, someone commented that it is not unreasonable to expect academics to write 4 good papers every 7 years. In some sense I agree, and I would certainly back myself to write 4 good (maybe even some excellent) papers every 7 years. What I have much less confidence in is the ability of those who are judging these papers (either on the REF panel or in my own Department) to do so properly.

It seems that there is a real chance that we will aim to adapt our research strategy to suit future REF exercises. It’s clear that independence and originality are no longer necessarily optimal. It might be much better to have people who work in collaborations that are likely to publish papers that will be highly cited. I could easily see people being encouraged to make sure that they belong to such collaborations and that they aim to contribute sufficiently to at least 4 papers so that their institution can submit these to REF. Maybe this won’t happen, but I find the possibility quite disturbing. What I find slightly more disturbing is that the senior people I encounter acknowledge that it is a game, but seem to feel that it is a game we need to play. No one seems that bothered that we might be adjusting our research strategies to suit what is essentially an assessment exercise. I appreciate that there is a lot of money involved but, as far as I’m concerned, either our research is valuable at some fundamental level, or we really shouldn’t be bothering. Maybe that’s a little extreme, but hopefully you know what I mean.

That’s probably all I was going to say. As I may have mentioned before, a concern of mine is that over time we will adapt so that top UK institutions become very good at scoring well on REF exercises but don’t really do research that has any particular value. Maybe I’ll be proven wrong, and once REF is finished we’ll forget about it for a while and everyone will be left alone to get on with whatever research they think is most interesting and valuable. I somewhat doubt that, and I do worry slightly about the careers of those who are deemed not to have enough papers that are good enough for REF. Will they be marked, and will their jobs be at risk? I think it would be awful if that did start happening. I have no real issue with people who are not contributing positively to the running of an academic department being sanctioned. I do, however, have an issue with the possibility that some people’s careers could depend largely on an assessment exercise that, in my opinion, is horribly flawed.


Climate change – statistical significance

I’ve written before about the claim that there has been no warming for the last 16 years. This claim is often made by those who are skeptical of – or who deny – the role of man in climate change. As I mentioned in my earlier post, this is actually because the natural scatter in the temperature anomaly data is such that it is not possible to make a statistically significant statement about the warming trend if one considers too short a time interval.

The data normally used for this is the temperature anomaly: the difference between the global surface temperature and some long-term average (typically the average from 1950-1980). This data is typically analysed using linear regression. This involves choosing a time interval and then fitting a straight line to the data. The gradient of this line gives the trend (normally in °C per decade) and the analysis also determines the error using the scatter in the anomaly data. What is normally quoted is the 2σ error, which tells us that the data suggests there is a 95% chance that the actual trend lies between (trend − error) and (trend + error). The natural scatter in the temperature anomaly data means that if the time interval considered is too short, the 2σ error can be larger than the gradient of the best-fit straight line. This means that we cannot rule out (at the 95% level) that surface temperatures could have decreased in this time interval.
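To make the procedure concrete, here is a minimal sketch in Python of fitting a trend and its 2σ error by ordinary least squares. The anomaly data below is synthetic (a made-up 0.015 °C/year trend with 0.1 °C of scatter), purely to illustrate the mechanics, not the real record:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic temperature anomaly: a 0.015 °C/yr trend plus natural scatter.
# These numbers are illustrative only, not real data.
years = np.arange(1997, 2013)
anomaly = 0.015 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# Ordinary least-squares fit of a straight line, anomaly = a + b*(year - mean)
n = years.size
x = years - years.mean()
b = np.sum(x * anomaly) / np.sum(x**2)   # trend (°C per year)
a = anomaly.mean()

# Standard error of the slope, estimated from the residual scatter
residuals = anomaly - (a + b * x)
s2 = np.sum(residuals**2) / (n - 2)
se_b = np.sqrt(s2 / np.sum(x**2))

trend_per_decade = 10 * b
err_2sigma = 10 * 2 * se_b
print(f"trend = {trend_per_decade:.3f} ± {err_2sigma:.3f} °C/decade")

# If the 2σ error exceeds the trend itself, a zero (or negative) trend
# cannot be ruled out at the ~95% level.
significant = abs(trend_per_decade) > err_2sigma
```

With only 16 years of data this noisy, the 2σ error comes out comparable to the trend itself, which is exactly why short intervals so often fail to give a statistically significant result.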

We are interested in knowing if the surface temperatures are rising at a rate of between 0.1 and 0.2 °C per decade. Typically we therefore need to consider time intervals of about 20 years or more if we wish to get a statistically significant result. The actual time interval required does vary with time, and the figure below shows, for the last 30 years, the amount of data required for a statistically significant result. This was linked to from a comment on a Guardian article so I don’t know who to credit; I’m happy to give credit if someone can point out who deserves it. The figure shows that, in 1995, the previous 20 years of data were needed if a statistically significant result was to be obtained. In 2003, however, just over 10 years was required. Today, we need about 20 years to get a statistically significant result. The point is that at almost any time in the past 30 years, at least 10 years of data – often more – was needed to determine a statistically significant warming trend, and yet no one would suggest that there has been no warming since 1980. That there has been no statistically significant warming since 1997 is therefore not really important. It’s perfectly normal, and there are plenty of other periods in the last 30 years where 20 years of data was needed before a statistically significant signal was recovered. It just means we need more data. It doesn’t mean that there’s been no warming.

Figure showing the number of years of data necessary for the warming trend to be statistically significant at the 2σ level.

Some of this is very funny. I’m not quite sure what to make of the Chris Rock video. I’ve never really liked his humour and can’t quite decide if I think it’s funny (I did laugh) or somewhat inappropriate. A combination of the two really, which was the point of the post I guess.

The Custody Record

Humour noun

The quality of being amusing or comic, especially as expressed in literature or speech.

Seems fairly straightforward doesn’t it? Yet there are sub-sets within. Humour can be rude or childish, sexist, racist, light hearted and sometimes very dark. Yet it’s still not that simple because humour and how it is categorised depends on the prevailing circumstances, the group it is presented to and finally the ear that hears it.

I love this by Chris Rock – contains some profanity and violence

Having watched this you may be laughing like me. Conversely, you may be angry about the language, the use of police violence and the message being passed.

The police work in an environment where they have to deal with some pretty awful incidents. As a result there is occasionally some humour that people outside of the service would find distasteful. It’s a coping strategy that was echoed…


Pressure on Venus

I’ve written before about the Runaway greenhouse process on Venus. In the case of Venus, this was an entirely natural process and such a process is unlikely to take place on Earth as we have locked much of our CO2 into carbonate rocks. It is, however, illustrative of how the greenhouse effect can significantly influence the climate on a terrestrial planet.

I recently became aware of a post on the Watts up with that website claiming that the reason Venus has such a high surface temperature is because of the extremely high pressure. There are lots of equations and graphs and quite a convincing argument, but it is simply wrong. The reason I thought I would write about this is that when I saw that post I remembered a senior Professor in an Earth Science department at a US university telling me the same thing a few years ago. At the time I thought, “that’s interesting”, but didn’t think any more of it. If a professor of Earth science can get it wrong, no wonder it’s easy enough to make it seem as though the argument makes sense.

So why is it wrong? Basic atmospheric modelling is not all that difficult. To model an atmosphere you need an equation of state relating pressure, density and temperature. Typically you would use something like

P = ρkT / (μmH),

where P is the pressure, ρ is the density, T is the temperature, k is Boltzmann’s constant, mH is the mass of a hydrogen atom, and μ is the mean molecular weight. On Earth, the most common molecule in the atmosphere is N2 so μ ~ 28, while on Venus it is CO2 so μ ~ 44.

Atmospheres will settle into a state of hydrostatic equilibrium. This means that the downward force of gravity will be balanced by an upward pressure gradient force. This allows us to write

dP/dz = −ρg,

where g is the acceleration due to gravity (9.8 m s⁻² on Earth and 8.9 m s⁻² on Venus) and z is the height in the atmosphere. Assuming, for simplicity, an isothermal atmosphere, we can solve the above equation to get

P = Po exp(−μmHgz / kT),

where Po is the pressure at the base of the atmosphere. The term kT/(μmHg) is known as the scale height and tells you the vertical distance over which the pressure drops by a factor of 1/e (about 1/2.72). It increases with temperature, so the hotter the atmosphere, the more slowly the pressure drops with height.
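As a quick sanity check on these formulae, here is a minimal Python sketch. The surface temperatures used (288 K for Earth, 737 K for Venus) are standard reference values, and the isothermal assumption is the same simplification made above:

```python
import math

k = 1.38e-23      # Boltzmann constant (J/K)
m_H = 1.67e-27    # mass of a hydrogen atom (kg)

def scale_height(T, mu, g):
    """Isothermal scale height H = kT / (mu * m_H * g), in metres."""
    return k * T / (mu * m_H * g)

def pressure(z, P0, T, mu, g):
    """Pressure at height z for an isothermal atmosphere in
    hydrostatic equilibrium: P = P0 * exp(-z / H)."""
    return P0 * math.exp(-z / scale_height(T, mu, g))

# Earth-like numbers: T = 288 K, mu = 28, g = 9.8 m/s^2
H_earth = scale_height(288, 28, 9.8)     # roughly 8.7 km
# Venus-like numbers: T = 737 K, mu = 44, g = 8.9 m/s^2
H_venus = scale_height(737, 44, 8.9)     # roughly 15.6 km

# Pressure 10 km up in the Earth-like case: about a third of sea level
p_10km = pressure(10_000, 1.0e5, 288, 28, 9.8)
```

Note how the hotter Venusian atmosphere has the larger scale height, despite the heavier CO2 molecules, which is just the temperature dependence described above.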

Now we have a set of equations that essentially allow us to solve a basic atmospheric structure problem. However, there are two things we don’t know: the temperature T and the base pressure Po. Where does the temperature come from? It comes from the balance between the amount of energy that the planet receives from the Sun and the amount it re-radiates into space; the temperature of the planet must be such that it re-radiates as much energy back into space as it receives. The pressure at the base of the atmosphere is related to the atmospheric density through the equation of state, and the atmospheric density depends on how much material there is in the atmosphere.

I’ve kind of implied that this is simple but, in truth, it is not quite that simple. The planet’s temperature depends on the composition of the atmosphere. To solve the problem properly, you need to include the influence of the atmosphere on the incoming and outgoing radiation. This will determine the equilibrium planetary temperature and the variation of temperature with height. You also need to iterate until the density structure gives the correct total atmospheric mass. However, what determines the pressure profile in the atmosphere is the temperature of the atmosphere and the amount of material in it (i.e., the density). The pressure does not determine the temperature. If there were no incoming energy, the atmosphere would lose energy, the temperature would drop, the scale height would decrease and the atmosphere would collapse onto the surface of the planet. When it was cold enough, the molecules would condense into liquids or solids and the atmosphere would disappear. In other words, to have a steady atmospheric pressure you need incoming energy from the Sun. To re-iterate: the temperature determines the pressure; the pressure does not determine the temperature.
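The energy-balance step can also be sketched numerically. Using standard approximate values for the solar flux at each planet and its albedo, and ignoring the atmosphere’s effect on the radiation entirely (the very thing that makes the real problem hard), the equilibrium temperature follows from equating absorbed and emitted power:

```python
# Energy balance for a rapidly rotating planet:
#   absorbed = S * (1 - A) * pi * R^2
#   emitted  = 4 * pi * R^2 * sigma * T^4
# => T_eq = [S * (1 - A) / (4 * sigma)] ** 0.25
sigma = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)

def t_equilibrium(S, albedo):
    """Equilibrium temperature (K) ignoring any greenhouse effect."""
    return (S * (1 - albedo) / (4 * sigma)) ** 0.25

# Approximate textbook values for solar flux and albedo
T_earth = t_equilibrium(1361, 0.30)   # roughly 255 K
T_venus = t_equilibrium(2601, 0.75)   # roughly 231 K
```

Both answers come out well below the actual surface temperatures (about 288 K for Earth and over 700 K for Venus), and that gap is precisely the greenhouse effect that this bare energy balance leaves out.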

So the reason Venus has a high surface temperature is not because of the high pressure. The reason it has a high pressure is because of the high temperature and density. We can illustrate this by considering the Earth. Atmospheric pressure at sea level is about 10⁵ Pa. Atmospheric density at sea level is about 1.2 kg m⁻³. The atmosphere is primarily nitrogen so μ = 28, mH = 1.67 × 10⁻²⁷ kg, and k = 1.38 × 10⁻²³ J K⁻¹. If I plug these numbers into the equation of state shown at the beginning of the post I get T ≈ 282 K. So, the Earth’s temperature is because of our atmospheric pressure, not because of the energy we receive from the Sun! No, clearly this is wrong. If the temperature and density determine the pressure, I can then use the density and pressure to calculate the temperature. It doesn’t mean that the pressure determines the temperature; it just means there’s a relationship between pressure, density and temperature.
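For what it’s worth, that back-of-envelope calculation is just the equation of state rearranged for T, with the sea-level numbers quoted above:

```python
# Rearranged equation of state: T = P * mu * m_H / (rho * k)
k = 1.38e-23      # Boltzmann constant (J/K)
m_H = 1.67e-27    # mass of a hydrogen atom (kg)

P = 1.0e5         # sea-level pressure (Pa)
rho = 1.2         # sea-level density (kg/m^3)
mu = 28           # nitrogen-dominated atmosphere

T = P * mu * m_H / (rho * k)
print(f"T = {T:.0f} K")   # ~282 K
```

That this lands near the real mean surface temperature (~288 K) just reflects that the equation of state holds; it tells us nothing about which quantity determines which.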

Given that I’ve linked to the post on the Watts up with that website, this would normally appear there as a comment. I’ll be interested to see if it does indeed appear as such. It’s clear that claiming that the high surface temperature on Venus is due to its high pressure is wrong. Anyone who understands physics would accept this, and the author of the post claiming this should recognise it and be willing to acknowledge their mistake. I would have much more time for those who are skeptical of man-made climate change if they were willing to accept when they are wrong. That is essentially the scientific process. Misusing a set of equations to make a spurious claim is not.

Assessing applicants

I was recently part of a panel to rank PhD applicants and I’ve got to say that I found it really difficult. It was made more so by the extremely short time we had and by the fact that for a while I was unable to access the applications. The joys of modern technology.

What I found difficult was actually assessing the applications themselves. We had very little time, so it was done (in my case at least – don’t want to accuse other assessors of not doing their jobs properly) in a rather superficial way. Reject those whose grades seem too low. Then rank those remaining by trying to find something in their application that makes them stand out. Scan reference letters and research statements to find something that differentiates one applicant from another. What struck me was that if I was applying for something like this at the end of my PhD, I’m sure I would be rejected pretty quickly. This might suggest that I don’t deserve to be where I am today, but I think it just means that we could be missing someone who could be good but who – on paper – doesn’t stand out. I’m sure most of the applicants are academically capable, motivated people who believe that they could do well. We’re rejecting many after a relatively superficial look at their application.

Now it would be nice if we had plenty of time to assess the applications: to sit in a room and go through them in detail and to make sure that we’re not missing anything that might make an applicant look stronger or – in some cases – weaker. We just really don’t have the time. Do I think we get it horribly wrong? Not really. The strong candidates stand out and are fairly obvious. If we spent more time, the outcome may differ a little, in that we may judge some who were below the boundary to actually be above (and conversely some above the boundary to be below). Does this mean that we’ve selected some who don’t deserve it? No, they’re probably all quite similar and it’s just a different judgement. What would worry me more would be if there were any sense of some kind of prejudice, but the outcome looked suitably diverse; it didn’t appear as though anyone was being disadvantaged because of their gender or their race.

I guess all I was trying to get across in this post was how difficult assessing these types of things can be: partly because we’re often given very little time and partly because it’s just difficult to rank a group who are clearly all very good. It makes me realise how much “luck” plays a role. Somebody noticing something on your application that appears to make you stand out can be what makes the difference between your application being successful or not. Having said that, it’s quite likely that many who we didn’t rank highly will go off and become successful somewhere else. Maybe they’ll be lucky that we didn’t select them. Luck can work both ways, and I guess the main thing is to keep trying and to learn from all your experiences. I suspect that – more than natural ability – is what got me to where I am today.

Classical music

I’m not particularly familiar with classical music. As you can probably tell if you’ve read some of my earlier posts, I’m more familiar with the music of the 80s than with classical music. Last weekend, however, I went to a performance by a local chamber orchestra and it was absolutely fantastic. Without knowing much about the music I found it incredibly emotional. Sometimes sad, sometimes happy, sometimes a little tense. Maybe I was just in a strange mood, but I don’t think so. I was just responding to the music being played. The orchestra was (I think) a particularly good one and so I was also enjoying the precision with which they were playing. I know that all music can evoke emotions but I did find this particularly beautiful. I don’t know if I’ll suddenly start listening to classical music on a regular basis, but I’ll certainly go to another concert in the not too distant future.

I’ve posted below one of the pieces that I found particularly moving. I have heard it before, but couldn’t have named it until hearing it in the concert last weekend.


I’ve been working on a new research topic for a few months now. It’s not something that I’m particularly familiar with and I’ve quite enjoyed learning something new. What I’ve found interesting is – in a sense – how much I’ve learned and how much I’ve relied on things that I thought I’d forgotten. When I first started working on this problem, I stared at a set of equations without any real sense of how I would work through them to get to the point where I could use them to solve the problem I was trying to solve. I then recognised something and saw how to get started. I didn’t get it right the first time, but this small spark of recognition is what got me going. What struck me was that something I learned a long time ago and had largely forgotten came back to me very quickly, and now I feel completely comfortable using it.

It’s now been quite a few months that I’ve been working on this problem, and I haven’t quite finished but I’ve really enjoyed persevering through it. Even if it doesn’t lead to any publications, that’s fine. I now understand something about this research area that I didn’t really understand before. It’s also been nice, in a sense, going back to basics. My office is now littered with bits of paper with algebraic calculations on them. It’s not something that I’ve done for quite some time.

So why am I writing this? Well, partly to simply write something a little more positive than is the norm for me. There were two thoughts I had while working on this problem. One was that, as someone who teaches, it is really good to work on something basic and fundamental and to use some of the tools I learned as a student. In a sense this is why I think it is good for active researchers to teach at this level: we use the tools we’re teaching to solve real problems and hence understand their significance. The other thing I realised was just how quickly you remember how to use the mathematical and scientific tools that you learn as a student. Students often think they’re learning things that they’ll never use, but you never know what tools you may need in the future, and it seems remarkable how quickly you become familiar, again, with what you learned many years before. Anyway, it’s been fun and enjoyable working through something both basic and complex at the same time. I hope I can use what I’ve learned to do some interesting research, but even if nothing comes of it, it’s still been a very interesting and useful experience.