Measuring productivity in science

All this talk of productivity and restructuring at Queen's University Belfast has me pondering on the measurement of productivity in science. The commonly used method of measurement is not very useful. It measures activity rather than productivity.

Our modern world is very preoccupied with efficiency. The paraphernalia of efficiency are everywhere. The big enchilada is the computer, but you also have the fax, the mobile phone, the dictaphone, the organiser, the wall-planner, etc. Frederick Taylor was the world's first efficiency expert. He worked as a foreman at the Midvale Steel Company in Philadelphia. By 1880 he had doubled the output of the machinists, but he was unhappy with the methods he had to use - a mixture of exhortation, threats, fines and sackings.

Taylor devised an objective method for calculating how long a job should take. He broke down jobs into component parts, timed and studied each part, and then put the whole lot together again. In this way he calculated the "one right way" to do the job. The problem then was to persuade men to do the job in this "one right way". Doing the job this way didn't involve more work on the part of the man, but it removed individual flexibility. The human spirit rebels against such a straitjacket. Taylor overcame this resistance by paying premium rates for doing the job in his prescribed manner and single-handedly gave birth to the modern efficiency industry.

The function of science is to produce new knowledge about the natural world. This is done by research - carrying out experiments using the scientific method. Results are published in peer-reviewed journals.

It is reasonable for an employer of research scientists to expect that productive work is carried out. But how should the employer gauge productivity? Obviously, productivity is intimately related to how much knowledge is produced, but assessing this is where the difficulty arises. All too often the assessment concentrates principally on the rate at which scientific papers are produced, and on how much research funding the scientist wins. The UK is now awash with this approach, a legacy of Margaret Thatcher (Queen's University is but one example) and, usually, where the UK leads, Ireland quickly follows. In this instance let us resolve to make an exception. Adding up numbers of publications is easy, which is why it is popular. Assessing quality of work is more difficult.

Numbers of papers and size of grant income are not a reliable index of how much knowledge is being generated. It is reasonable to expect a scientist to produce one good paper every two years. Questions should be asked if two years pass and no paper is published. But to go beyond that, drawing up league tables based on publication numbers, is counter-productive. When the system primarily rewards numbers, the first priority for the scientist is numbers and not new knowledge. Research is planned in terms of papers and work is parcelled into chunks, each just large enough to support a paper. "Tricks of the trade" are used to maximise the number of papers. Obviously, the output from this system is long on paper and thin on new knowledge. But it is busy and looks productive to an assessor with a Taylor-like mentality. Incidentally, Paul Erdős, a Hungarian-born mathematician, holds the record for the number of papers published in a lifetime: 1,475.

Another unfortunate consequence of the numbers game is scientific fraud. When job security depends on numbers there is an obvious temptation for people under pressure to inflate and embellish results in order to publish papers.

The nature of scientific research is such that output is not mechanically coupled to effort. You may be working hard on a problem but getting nowhere because your approach is wrong. Often there is no way to know the best approach in advance and you have to work it out by trial and error. In order to produce quality results in the long term you must feel secure in your job and not cowed by tyrannical requirements for short-term publication numbers.

Indiscriminate insistence on continual high rates of publication strangles imaginative research and encourages conservative straitjacket thinking. It is too easily forgotten that we have achieved revolutionary advances this century in both physical and biological sciences without imposing a requirement for several papers a year on every scientist involved. Indeed, it is quite possible that Albert Einstein's publication numbers would look pedestrian in a modern research evaluation exercise.

The system should assist scientists to set long- and medium-term goals and should assess them on progress towards attaining these goals. The most important part of the assessment should be the quality of the publications.

William Reville is a senior lecturer in biochemistry at UCC.