Public sector pensions

I don’t know the details of the public sector pensions, but I do have a lot of sympathy for those who are planning to strike. As far as I’m aware, people accepted (and were offered) jobs in the public sector that came with generous pension schemes. I read somewhere that the member’s contribution was generally around 5% of their salary, while the employer’s contribution was maybe as high as 20%. This is effectively part of their salary package, so when they retire they should have a large pot of money from which to draw a pension. Fundamentally, therefore, changing public sector pensions is very unfair on those who accepted jobs with this pension as part of the deal, and amounts to a large cut in their lifetime salary.

I must admit, however, that I had always assumed that public sector jobs were typically ones in which you might earn less than you could in an equivalent private sector job, but in which the benefits were more generous. Overall, the two might therefore be equivalent: the private sector pays you a higher salary but doesn’t have a good pension scheme (or job security), while the public sector offers a lower basic salary but a better pension and more job security. If this is no longer true, and if basic salaries in the public sector now compare well with those in the private sector, then maybe it is unfair if public sector jobs also offer very generous pension schemes. This, however, doesn’t necessarily mean that the pensions need to be reformed; it means that we should be looking at the total (lifetime) benefit of working in the public sector compared to the private sector, deciding whether there is indeed a problem and, if so, whether it suggests changes to the salary structure, to the pension scheme, or to something else.

I do have a rather cynical view that the reality is a little different to what I describe above. As far as I’m aware, the public sector pension scheme is unfunded. Ideally, workers (and often their employers) contribute to a pension fund; in many cases this might mean that, annually, something like 15% to 20% of every member’s salary is paid into the fund. This money is invested and also used to pay the pensions of current retired members. As long as the fund is well funded, this is fine. In the public sector, however, I believe that there is no actual fund; pensions are simply paid directly by the government. Yet public sector workers took jobs with the promise of a reasonably generous final salary pension scheme, equivalent to one in which 20-25% of their salary was paid into a fund. In theory such a fund should exist; that the government chose not to actually have one seems irrelevant. It certainly seems as though the proposed changes to the public sector pension schemes are being used as a mechanism for reducing the deficit in a way that is neither completely honest nor fair.

I should add, however, that if it is indeed the case that the overall benefit (which I regard as lifetime earnings – total salary plus pension) of someone in the public sector is now greater than that of an equivalent person in the private sector, then this may well need to be addressed. I do feel it is quite reasonable to have one sector that pays lower salaries but offers better benefits (job security plus pensions) and another that pays higher salaries but doesn’t offer particularly good benefits. People in the various sectors probably have different motivations. Simply attacking public sector pensions doesn’t, however, seem like a reasonable and fair thing to do. If you take away the good pensions we may find that higher salaries are required to attract good people into the sector, and we really save nothing at all.


REF strategy

I was at a meeting yesterday where we discussed our REF strategy. I probably shouldn’t say what it is (might be confidential), but it didn’t increase my confidence in the basic system. For those who don’t know, REF is the Research Excellence Framework and the basic idea is that all university departments will be assessed to determine the quality of their research, the wider impact of their research, and the vitality of their research environment.

In fairness, what the REF is attempting to do is not inherently bad. The precursor to the REF was the RAE (Research Assessment Exercise). In RAE2001, if I remember correctly, individuals were assessed and given a score. They were not told (I think) what their scores were, but departments were then given a final score based on the scores of the individuals in that department. In RAE2008, rather than scoring individuals, outputs were scored. Each person included by a department would submit 4 papers (with brief descriptions), some invited talks and other forms of output. These were then ranked on a scale from 1* to 4*. The advantage of this (in my view) is that an individual could have some outputs that score 4* and some that score 1*, so many individuals could contribute to the 4* outputs of a department. A department was then given a score that was essentially the fraction of its outputs rated 4*, 3*, 2* and 1*. If a department had 25% at 4*, it wouldn’t be known whether this was because only 25% of the individuals produced 4* outputs, or because a quarter of everyone’s outputs were 4* (or somewhere in between, as is more likely).

The amount of money given to a university was then based on what fraction of the outputs were 1*, 2*, 3* and 4*. I forget the exact formula, but it was something like amount × [(fraction of 1*) + (fraction of 2*) × 2 + (fraction of 3*) × 5 + (fraction of 4*) × 7], multiplied by the number of people submitted. The 3* and 4* outputs therefore counted much more than the 1* and 2* outputs. The fact that money was given for 1* and 2* outputs was recently heavily criticised by Vince Cable, who (incorrectly, in my view) interpreted this as giving money for mediocre research.
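Just to make that concrete, here is a quick sketch (in Python) of the sort of calculation I mean. The weights (1, 2, 5, 7) are only as I remember them, and the amount per weighted unit and the department profile are numbers I’ve made up purely for illustration.

```python
# Hypothetical RAE-style funding calculation. The weights are only as I
# remember them; the scale factor and department profile are invented.
f1, f2, f3, f4 = 0.10, 0.30, 0.40, 0.20   # fractions of outputs rated 1*-4*
n_submitted = 20                           # number of people submitted
amount = 1000.0                            # made-up amount per weighted unit

funding = amount * (f1 * 1 + f2 * 2 + f3 * 5 + f4 * 7) * n_submitted
print(funding)   # 1000 * 4.1 * 20 = 82000.0
```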

As a consequence of the above view, it appears as though 1* and 2* outputs will not receive any funding from the upcoming REF. This, consequently, has implications for the strategy a department might choose. The two strategies that I’m aware of are, firstly, to submit as many people as possible, which dilutes the fraction of 3* and 4* outputs, but where the reduction in the amount per person may be more than compensated for by the larger number of people submitted. The second strategy is to submit fewer people so as to minimise the fraction of 1* and 2* outputs (and hence increase the fraction of 3* and 4* outputs), and hope that the reduction in the number of people submitted is compensated for by the amount per person increasing sharply with the fraction of 3* and 4* outputs. The advantage of the latter strategy is that it is also likely to lead to a higher place in the rankings table, which is often regarded as extremely important.
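To illustrate the trade-off, here is a toy comparison of the two strategies, assuming (as suggested above for the REF) that 1* and 2* outputs attract no money at all. The staff profiles, weights and scale factor are entirely invented; the point is only that the outcome depends on which strategy you pick.

```python
# Toy comparison of the two submission strategies, assuming 1* and 2*
# outputs attract no funding. All numbers below are invented.
WEIGHTS = {1: 0, 2: 0, 3: 5, 4: 7}

def funding(output_scores, amount_per_unit=1000.0):
    """output_scores: the star ratings of every submitted output (4 per person)."""
    n_people = len(output_scores) // 4
    weighted = sum(WEIGHTS[s] * output_scores.count(s) / len(output_scores)
                   for s in WEIGHTS)
    return amount_per_unit * weighted * n_people

strong = [4, 4, 3, 3] * 10   # 10 people with mostly 3*/4* outputs
weaker = [3, 2, 2, 1] * 10   # 10 people with mostly 1*/2* outputs

print("Submit everyone:   ", funding(strong + weaker))   # 72500.0
print("Submit only strong:", funding(strong))            # 60000.0
```

With these made-up numbers, submitting everyone actually brings in more money, but the selective submission gives a much higher fraction of 3* and 4* outputs and hence a better position in a rankings table; different profiles or weights could easily flip the money comparison as well, which is exactly why departments are left guessing.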

The problem that I have with the above is not that university departments are choosing strategies (which is largely because they don’t actually know how the assessment scores will translate into money), but that strategies are necessary at all. The REF is meant, in my opinion at least, to be an attempt to objectively assess the quality of research in UK universities. That two essentially identical departments could end up with different scores depending on their chosen strategies suggests that the process is flawed. It’s not meant to be about whether they can guess the best strategy or not. It’s meant to produce a measure of their quality (relative to other departments in the same area). The future funding of UK universities should – ideally – be based on objective measures of quality, not on whether a strategy gamble paid off.

A probability question!

After writing yesterday’s post about probabilities and statistics I remembered a particular question that I would often ask people to illustrate how easy it is to get probabilities wrong. Since I’ve never done a poll before, I thought I would try one and post it here. Very few people are reading this, so I may well get no responses. This question also happens to actually be true for me, but that isn’t really relevant.

False positives

I have always been quite interested in situations in which people (myself included) misunderstand probabilities. The classic is the Monty Hall problem. There are 3 doors, behind one of which is a prize. You choose a door and the quiz master then opens one of the other two doors, always one that does not have the prize. You’re then given the option of switching from your first choice to the remaining closed door. Should you switch? The answer (in terms of probability) is yes. The reason is that if you have chosen a door without a prize, the quiz master must open the other door without a prize, so the remaining door has the prize and switching wins. The only time switching loses is if your initial choice had the prize. Since two of the three doors don’t have prizes, 2 times out of 3 your first choice won’t have the prize, and hence if you switch you win 2 times out of 3.
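If you don’t believe the argument, a quick simulation bears it out. This is not part of the original puzzle, just a sanity check sketched in Python:

```python
import random

# Monte Carlo check of the Monty Hall argument: switching should win
# about 2/3 of the time, sticking about 1/3.
def monty_hall(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # The host opens a door that is neither the player's choice nor the prize.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print("Stick: ", monty_hall(switch=False))   # ~0.333
print("Switch:", monty_hall(switch=True))    # ~0.667
```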

What has taken me quite some time to understand is the issue of false positives. I think I first encountered this in Simon Singh’s book, Fermat’s Last Theorem, but I also read about it in an article yesterday. The basic issue is: if you have a test for a disease (for example) that is 90% accurate, what should you tell those who test positive?

Imagine a situation in which you’re testing for a disease and you expect (or know) that 50% of those tested actually have it. Consider a sample of 100 people, and take “the test is 90% accurate” to mean that 90% of those with the disease test positive and 90% of those without it test negative. Of the 50 who actually have the disease, 45 test positive and 5 test negative. Of the 50 who do not, 45 test negative and 5 test positive. Overall, the test has produced 50 positives and 50 negatives. Of those who tested positive, 45 do have the disease and 5 don’t; of the 50 who tested negative, 45 don’t and 5 do. You would therefore tell those who tested positive that there is a 90% chance the result is correct, and tell those who tested negative that there is a 10% chance they have the disease (or a 90% chance that they don’t).
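For what it’s worth, the same counting argument can be written as a short Python function. This just reproduces the arithmetic above, with “accuracy” taken to mean that both 90% figures apply:

```python
# The counting argument above, written out. "accuracy" means that 90% of
# those with the disease test positive and 90% of those without test negative.
def test_outcomes(n_tested, prevalence, accuracy):
    diseased = n_tested * prevalence
    healthy = n_tested - diseased
    true_pos, false_neg = diseased * accuracy, diseased * (1 - accuracy)
    true_neg, false_pos = healthy * accuracy, healthy * (1 - accuracy)
    p_disease_if_positive = true_pos / (true_pos + false_pos)
    p_disease_if_negative = false_neg / (true_neg + false_neg)
    return p_disease_if_positive, p_disease_if_negative

print(test_outcomes(100, 0.5, 0.9))   # (0.9, 0.1): as in the example above
```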

Seems fairly straightforward. It becomes less so if you’re testing for something that is rare. Imagine you’re testing for a disease that only 1% of those tested will have. If 1000 people are tested, then 990 do not have the disease and 10 do. If the test is 90% accurate, then 891 of those who don’t have the disease test negative and 99 test positive. Of the 10 who do have the disease, 9 test positive and 1 tests negative. The test has then produced 892 negatives and 108 positives. Of the 108 positives, only 9 actually have the disease, so those who tested positive actually have a less than 10% chance of having it. Of the 892 who tested negative, only 1 actually has the disease. As I write this, I worry that I have it slightly wrong (I was expecting 900 negatives and 100 positives) but I think it is essentially correct. The bottom line is that a test for something rare is unlikely to be 100% accurate, and so while the negative results may be very reliable (only 1 wrong out of 892 above), there will be a large number of false positives (99 out of 108). Most (90% in this case) of those who have the disease will test positive, but a large fraction of those who test positive don’t actually have the disease – which is why these results are referred to as false positives.
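Writing the rare-disease counts out as a quick check (just the arithmetic from the paragraph above, nothing new):

```python
# 1000 people, 1% prevalence, 90% accuracy: the counts from the text.
true_pos, false_neg = 9, 1      # the 10 who have the disease
true_neg, false_pos = 891, 99   # the 990 who don't

positives = true_pos + false_pos   # 108
negatives = true_neg + false_neg   # 892
print(true_pos / positives)        # ~0.083: under 10% of positives are real
print(false_neg / negatives)       # ~0.001: negatives are almost always right
```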

I partly wrote this because I just find it interesting (and I hope my explanation is correct), but it also illustrates how important it is to have a reasonable understanding of probability and statistics, since many decisions can be made on the basis of an incorrect understanding of what the “numbers” are actually telling you.