The Green Agenda

There seems to be quite a lot of rhetoric about the cost of climate policies, with many essentially suggesting that they will cost a lot of money and that they are some kind of government conspiracy. There is even a Fox Business News report, titled The Green Tyranny, which seems to suggest that government regulations have gone too far.

Something that has always confused me about this is that providing energy is always going to cost money. Extracting fossil fuels is not free, so it is obvious that providing alternative energy sources will also cost money. The real questions should be: is it more expensive than using fossil fuels, what is the long-term cost compared to fossil fuels, and how does it compare – environmentally – to fossil fuels (or, what are the additional costs)? Of course, I think that increasing CO2 levels in our atmosphere are leading to climate change and that we should be acting to mitigate this as soon as possible. However, even if you disagree, fossil fuels will eventually run out or become extremely expensive to extract, and so developing alternative technologies seems to make sense. The question is: should we start now, or can we wait? I think we should be starting now, but I’d be happy to hear arguments as to why we should wait.

There is another factor though – in my view at least – and that is whether or not you need to import oil and gas in order to meet your energy needs. I found the two figures below on the Energy Information Administration website. The top shows the UK’s natural gas production and consumption from 2000 to 2011, and the bottom shows the same for oil. What is clear is that until about 2004, the UK was producing more natural gas and oil than it was consuming. Today we produce only about 60 – 70% of what we use, and the fraction appears to be dropping quite dramatically.

Natural gas production and consumption.


Oil consumption and production.


This clearly means that we must be importing a significant fraction of the natural gas and oil that we use. What does this cost? I found the figure below on a website called The Oil Drum. It shows the UK’s trade balance in energy products: until 2004 there was an oil and gas trade surplus, whereas today there is a deficit, and we’re spending about £5 billion per year to import oil and gas. This energy deficit now makes up 15 – 20% of the total trade deficit and – as far as I can tell – will continue to increase as we import more and more of our oil and gas.

What I’m suggesting is that even if you don’t feel that we should be worried about climate change, surely you should be concerned about the UK’s increasing need to import oil and gas. We’re currently spending about £5 billion a year importing oil and gas and it seems (given that the fraction we can produce is decreasing quickly) that this is likely to increase. Wouldn’t it be better if we could spend this money paying people in the UK to develop and maintain alternative energy sources? I think it would, but feel free to let me know if you disagree.


Pebbles on a beach


I’ve been away for a week or so and haven’t had a chance to post anything. Also, there hasn’t really been anything that I wanted to post. I’ve been getting back into photography, so – to keep things ticking over – I thought I would post one of my recent photographs (although this may actually be one taken by my wife – I can’t quite remember – but I like it either way).

NHS privatisation

I wanted to add a link to the video below, in which Lucy Reynolds discusses changes to the NHS that suggest we are heading towards the privatisation of healthcare in England, and why this could be disastrous. What’s interesting about this is that Lucy Reynolds is an academic who studies “medical and healthcare programmes”. It’s clear that she’s very critical of what is happening to healthcare in England, and a lot of what she says certainly makes sense to me. One issue I had was that, as an academic, maybe one should aim to be objective and unbiased when discussing one’s research, and so I wasn’t quite sure what to make of an academic who studies healthcare provision having such strong personal views about healthcare in England. On the other hand, I sometimes feel that academics in the UK aren’t sufficiently political. Aren’t we meant to be intellectuals and think about society and the implications of decisions made by politicians on our society? If not, then who else? So, fundamentally, I was very pleased to see someone who is clearly an expert in this area discussing – very clearly – why the proposed changes to the NHS in England could have a very damaging impact on our ability to provide decent healthcare for the population.

I don’t want to say too much, but one of the basic messages was that in a publicly funded healthcare system you can prioritise the needs of the patient. It still costs money, so there isn’t complete freedom, but if one assumes that most doctors and nurses went into healthcare in order to help people, it makes sense that the optimal system is one that allows them to put the patients first. In a private healthcare system this is no longer necessarily the case. Legally, private companies are obliged to do what’s best for their investors. They therefore need to optimise their profits, and can’t simply put the patient first. They’re not obliged to treat those who haven’t taken out suitable healthcare insurance. They can refuse treatment if someone didn’t disclose their complete medical history when taking out insurance. They can prescribe treatments or tests that may not be strictly necessary. Profit has to come first, ahead of patient care.

Something that has confused me about this supposed desire for privatisation of the NHS in England is that the private sector is already very heavily involved. The NHS doesn’t make anything. It buys all of its equipment, vehicles, pharmaceuticals, etc. from private companies. Also, once the staff have paid their taxes, they spend most (if not all) of their money in the private sector. They buy food, electrical goods, pay for holidays, etc. Most of the money that goes into the NHS ends up passing into the private sector anyway. If we continue privatising the NHS, all that this will mean – apart from the damage it is likely to do to healthcare provision – is that a new set of investors will be making a profit from the NHS. I would have thought that companies like Sainsburys, Tescos, Currys, Thomsons travels, etc. would all be slightly worried about this. Surely there’s no great benefit to them in seeing the NHS privatised. Of course, if their investors are the same as those who will be investing in healthcare companies, maybe they’ll still see the dividends. On the other hand, if the investors are American pension funds, they will effectively be losing money.

I appreciate that you can’t have a successful economy if it is fully public. Similarly, however, I would argue that it’s hard to see how you can have a successful economy that is fully private. You need some kind of balance. Certain things fit well within a market: I can have a choice of foods, types of transport, holidays, entertainment, etc. It is, however, harder to see how you can apply a market philosophy to healthcare, education, policing, the military, or the justice system. If you apply a market to these, it implies a variety of different provisions and hence some people receiving a better level of healthcare (for example) than others. I don’t really care that my car is clearly not as fancy as someone else’s. Similarly, I don’t really care if some people have to catch a bus rather than driving; it’s a perfectly fine way to get around. I do, however, care about my children’s education and about the healthcare that I may receive in the future. I don’t think these types of things should depend on your income/wealth. They should, ideally, be provided at an equal level to all. Getting educated or receiving healthcare isn’t really something you want to have to choose; you would like the best you can get. A private healthcare system may well benefit some, but at the expense of others who will be locked out because of the costs, and I think that would be very unfortunate.

Anyway, I recommend watching the video below. It is a little long but she does make a very strong case for why we should resist what is essentially the privatisation of the NHS in England.

I wanted to reblog this partly because it’s good – in my view – to see more people writing about REF. I also think the post makes some interesting points. I think REF is horribly flawed, but maybe we also have to realise that self-regulation has its problems, although I would suggest that self-regulation might be the wrong term to use. Not all of our research money comes through REF. A big fraction comes through research grants that are competitive and are assessed by a reasonably knowledgeable panel (although I would suggest that the allocation of research council grants in the UK has its own problems). REF is essentially allocating QR money that goes – in the first instance – to the university, not to individuals. The university then decides how to divide this money up. It’s not clear to me that a simpler assessment exercise – one that used up less time, was harder to game, and in which the allocation was less non-linear (i.e., a weaker dependence of the money received per person submitted on how well you do) – wouldn’t be just as effective and less damaging. Part of me thinks that maybe the criteria should be secret until the process is finished, but I suspect many would feel that they then wouldn’t trust it at all, and everyone would complain afterwards if they didn’t do as well as they thought they should have done.


When I was a medical student we were encouraged to conduct vaginal examinations on anaesthetized gynaecological patients, so that we could learn how to examine the reproductive system in a relaxed setting. The women did not know this was going to happen to them during their surgery, and did not sign any consent forms. Probably they would not have minded anyway (they were asleep after all, and educating the next generation of doctors is undoubtedly a good cause) but the possibility that they should be asked if they did mind did not occur to anybody, until my ultra-feminist friend kicked up a stink and organised a rebellion. The surgeons were genuinely surprised and mystified. Being well-meaning, it had not occurred to them that some people might see what they were doing as wrong.

Fast-forward a few years to the Alder Hey scandal, in which it emerged that doctors at a children’s hospital had…


The Green Benches

This blog seems to be getting quite a lot of traffic today. Most of this seems to be because of my post about Eoin Clarke and the Green Benches. I can’t see any real reason why there is a sudden surge in interest in this today, but if anyone would like to comment on why this might be, I would be quite keen to know.

Thanks to the comments below, I now know that it is because Polly Toynbee has suggested people go to one of the protests listed on The Green Benches website. For some reason the link in her article wasn’t working (although that seems to now be fixed) and if you google Green Benches, you seem to end up here. The correct link is The Green Benches and I too encourage you to attend one of these protests.

Second fastest annual rise in carbon dioxide

A recent article in the Guardian reported that 2012 saw the second highest annual rise in CO2. This was 2.67 parts per million (ppm), measured at the Mauna Loa Observatory on Hawaii. The article also included a link to a paper published in Science that presented reconstructions of regional and global temperature for the past 11,300 years (Marcott et al., 2013, Science, 339, 1198-1201).

It seems as though many who are skeptical of (or deny) man-made climate change often say things like “where is the paper that says ….”, so I thought I would highlight some of the results presented in this paper. Below I reproduce the abstract. It is quite a balanced abstract, containing a summary of the results and a conclusion, at the end, suggesting that by 2100 global surface temperatures will be higher than at any time in the past 11,300 years.

Surface temperature reconstructions of the past 1500 years suggest that recent warming is unprecedented in that time. Here we provide a broader perspective by reconstructing regional and global temperature anomalies for the past 11,300 years from 73 globally distributed records. Early Holocene (10,000 to 5000 years ago) warmth is followed by ~0.7°C cooling through the middle to late Holocene (<5000 years ago), culminating in the coolest temperatures of the Holocene during the Little Ice Age, about 200 years ago. This cooling is largely associated with ~2°C change in the North Atlantic. Current global temperatures of the past decade have not yet exceeded peak interglacial values but are warmer than during ~75% of the Holocene temperature history. Intergovernmental Panel on Climate Change model projections for 2100 exceed the full distribution of Holocene temperature under all plausible greenhouse gas emission scenarios.

Below is one of the main figures from the paper. It shows the temperature anomaly and compares various different methods. The left-hand panel goes back 2000 years, while the right-hand panel goes back 11,300 years. There is good agreement between the different methods and it seems clear that the temperature anomaly today is higher than it has been for the past 2000 years.

Comparison of various methods for determining the temperature anomaly for the past 2000 years (left-hand panel) and for the last 11,300 years (right-hand panel).  Figure from Marcott et al. (2013).


It appears, however (from the right-hand panel in the above figure), that there may have been periods during the Holocene when the temperature was higher than it is today. That said, I include – below – some of the concluding text from the paper.

Our results indicate that global mean temperature for the decade 2000–2009 (34) has not yet exceeded the warmest temperatures of the early Holocene (5000 to 10,000 yr B.P.). These temperatures are, however, warmer than 82% of the Holocene distribution as represented by the Standard5×5 stack, or 72% after making plausible corrections for inherent smoothing of the high frequencies in the stack (6) (Fig. 3). In contrast, the decadal mean global temperature of the early 20th century (1900–1909) was cooler than >95% of the Holocene distribution under both the Standard5×5 and high-frequency corrected scenarios. Global temperature, therefore, has risen from near the coldest to the warmest levels of the Holocene within the past century, reversing the long-term cooling trend that began ~5000 yr B.P. Climate models project that temperatures are likely to exceed the full distribution of Holocene warmth by 2100 for all versions of the temperature stack (35) (Fig. 3), regardless of the greenhouse gas emission scenario considered (excluding the year 2000 constant composition scenario, which has already been exceeded). By 2100, global average temperatures will probably be 5 to 12 standard deviations above the Holocene temperature mean for the A1B scenario (35) based on our Standard5×5 plus high-frequency addition stack (Fig. 3).

Probably the strongest statement is that within the past century global average temperatures have gone from some of the coolest of the last 11,300 years to some of the warmest. If this continues, as is expected, by 2100 global average temperatures will be significantly higher than the mean of the Holocene. Here we have a very recent peer-reviewed paper in a major scientific journal saying that within the next 100 years, average global temperatures will be higher than they’ve been for the last 11,300 years. The paper doesn’t actually say that such high temperatures could have a catastrophic effect on man, but I think we can probably conclude that it won’t be ideal.

Some more REF thoughts

The post about my REF interview seems to have generated a modest amount of interest in the last day or so. There were no comments, so I can’t tell if others identified with my experience and agreed with my general views, or disagreed and thought it was all a load of nonsense. However, seeing that post generate a little interest reminded me that I had seen some interesting data recently about REF outputs. For those that don’t know, REF is the Research Excellence Framework and is an exercise in which the quality of research in UK universities will be judged and the results will determine how to divide up a fairly substantial pot of money. What makes it more “interesting” is that the formula that decides how much money each university gets is highly non-linear. There is a big difference between doing “very well” compared to simply doing “well”.
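As a toy illustration of what “highly non-linear” means here (the weights below are entirely made up for illustration, not the real funding formula): suppose only the top two grades attract money and 4* work is worth several times 3* work. Two departments of broadly similar quality, one judged a notch higher than the other, can then end up with very different funding.

```python
# Hypothetical quality weights: only the top two grades attract money,
# and 4* is worth three times 3*.  These are illustrative numbers only,
# not the actual REF/QR funding formula.
WEIGHTS = {4: 3.0, 3: 1.0, 2: 0.0, 1: 0.0}

def funding_score(profile):
    """profile maps star rating -> fraction of a department's outputs."""
    return sum(WEIGHTS[star] * frac for star, frac in profile.items())

# Department A does "very well"; department B merely does "well".
dept_very_well = {4: 0.4, 3: 0.5, 2: 0.1, 1: 0.0}
dept_well      = {4: 0.1, 3: 0.5, 2: 0.4, 1: 0.0}

print(round(funding_score(dept_very_well), 2))  # 1.7
print(round(funding_score(dept_well), 2))       # 0.8
```

With these made-up weights, shifting 30% of outputs up one grade roughly doubles the money per person submitted, which is the kind of gap between “very well” and “well” being described.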

What will be assessed will, in general, be papers published by academics in each institution. Typically, there will be 4 papers – published since 2008 – for each academic included in the submission. The intention is that each paper will be judged in terms of its originality, significance and rigour, and will be given a score of 4*, 3*, 2*, or 1*. The claim is that the panels doing the judging will not be using Journal Impact Factors or citations to make their assessment. It has, however, already been pointed out that this claim is unlikely to be credible. In Physics, there will probably be something like 6500 papers, each of which will supposedly be read by 2 of the 20 panel members in a period of about 12 months. That works out at around 650 papers per panel member, or at least 2 papers per working day each. Pretty difficult to do, and virtually impossible if a credible judgement of each paper is required. The general view is that, despite what is claimed, Journal Impact Factors and citations will indeed be used to judge these papers.
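That reading-load estimate is easy to check. A quick sketch using the numbers above (and assuming something like 250 working days in the year, which is my assumption, not a REF figure):

```python
# Rough sanity check of the REF panel reading load.
# All inputs are the rough estimates from the text, not official figures.
papers = 6500          # estimated physics papers submitted
readers_per_paper = 2  # each paper read by 2 panel members
panel_members = 20
working_days = 250     # assumed working days in ~12 months

readings = papers * readers_per_paper      # total paper-readings required
per_member = readings // panel_members     # readings per panel member
per_day = per_member / working_days        # papers per working day

print(per_member)         # 650
print(round(per_day, 1))  # 2.6
```

So each panel member would need to properly assess roughly two to three papers every working day for a year, on top of their normal job.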

Here’s what I found interesting. According to what I saw recently, a paper published since 2008 that is receiving about 8 citations a year will be in the top 10% according to citation numbers. I was a little surprised; I had assumed that the top 10% of papers (according to citation numbers) would be receiving more than 8 or so citations a year. I decided to look into this myself using Web of Knowledge. If you search for all refereed articles published in the general area of Physics that also have “UK”, “United Kingdom”, “England”, or “Scotland” in the address, you discover 38176 refereed articles published since January 2008. Web of Knowledge can’t do citation statistics on more than 10000 papers, so I divided these papers into 5 categories (Condensed Matter, Astronomy & Astrophysics, Particle Physics, Nuclear Physics, Mathematical Physics). I also included a randomly chosen sample of areas in Physics that, together, hadn’t published more than 10000 papers since 2008. The table below shows the average number of citations per paper, the number needed to be in the top 1%, the number needed to be in the top 10%, and the median.


Indeed, it seems that the average number of citations per Physics paper published since 2008 is about 10 and to be in the top 10% of all physics papers published since 2008 you need to be collecting fewer than 10 citations per year. Although it is different for different areas of physics, the difference isn’t particularly large. One issue with the above table is that the older papers will have collected more citations than the newer papers. I then repeated the above, but considered only papers published – with a UK author – in 2008, 2009, 2010, 2011, and 2012. In this case there are typically between 7000 and 7500 refereed articles published per year, so I didn’t divide it into different disciplines, but considered all articles in physics. The table below shows the result.


The result seems about the same. To be in the top 10% of papers published in any year since 2008, a paper needs fewer than 10 citations per year. Essentially, being in the top 10% of cited papers requires a fairly small number of citations per year; put another way, most papers receive very few citations. What to make of this? Partly, I was just a little surprised. If asked, I would have guessed that being in the top 10% of cited papers would require more than 10 citations per year. Also, what does the fact that a large fraction of refereed articles attract very few citations per year imply? Does it mean that much of what we publish isn’t particularly interesting? Although I think we probably publish too many papers, I don’t think that 90% of what we publish is worthless. Quite a large number of those papers receiving very few citations must be excellent bits of research that are worth publishing. Maybe they just haven’t been noticed. Maybe they’re what is referred to as slow-burners. Maybe a paper was a necessary step that has been superseded by newer research and so isn’t getting the citations it might deserve. Maybe it’s something a researcher enjoyed doing, learned a lot by doing, and that then allowed them to move on to something newer and more interesting.
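For anyone who wants to play with this themselves, here is a minimal sketch of how a top-10% citation threshold can be computed from a list of per-paper citation counts. The counts below are made up purely for illustration (real data would come from something like Web of Knowledge), but they are skewed in the same way: most papers collect very few citations, while a handful collect many.

```python
def top_decile_threshold(citation_counts):
    """Return the smallest citation count that places a paper
    in the top 10% of the given sample."""
    ordered = sorted(citation_counts)
    cutoff = int(0.9 * len(ordered))  # index of the first paper inside the top 10%
    return ordered[cutoff]

# Hypothetical, heavily skewed citation counts for 100 papers:
# most get very few citations, a handful get many.
counts = ([0] * 30 + [1] * 20 + [2] * 15 + [3] * 10 +
          [5] * 10 + [8] * 5 + [15] * 5 + [40] * 3 + [120] * 2)

mean = sum(counts) / len(counts)
print(mean)                          # mean citations per paper
print(top_decile_threshold(counts))  # citations needed for the top 10%
```

In a skewed distribution like this, the median sits well below the mean, and the top-10% threshold can be surprisingly close to the average, which is just the pattern in the Web of Knowledge numbers above.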

What’s more interesting is how citations can then be used to judge these papers. We will presumably be submitting something like 6500 papers, so potentially 17% or so of all refereed physics papers published since 2008. Only papers judged to be 3* or 4* will attract money. One could assume that 3* and 4* papers will be those with a much higher than average number of citations. This would then imply that a small number of papers will be used to determine how to divide up the large sum of money associated with REF, and small variations could then have a big effect. On the other hand, if 3* papers are not necessarily those with much more than the average number of citations, how do you then distinguish between 3* and 2* papers? Most papers are collecting fewer than 10 citations per year. Where’s the division? Is 3 a year 2* and 6 a year 3*? Alternatively, we shouldn’t really use citations and metrics at all, and should judge each paper on its originality, significance and rigour (as suggested in the REF documentation). The problem is that very few people, if any, believe that it is possible for a panel of 20, however distinguished, to do this.

The truth is probably that it will be a combination: the panel members will, I’m sure, try to read the papers and will then use metrics to fine-tune their scores. However, combining two largely flawed processes to try to determine the quality of research activity in UK universities doesn’t really seem like much of an improvement. I suspect that, at the end of the day, a ranking will be produced that isn’t entirely unreasonable. However, as I’ve pointed out before, it should be possible to achieve a reasonable ranking in a manner that doesn’t use up quite as much time and effort as REF is currently doing.