Monitoring of performance is often damaging

Edward Tsang 2013.03.31

It is fashionable to assess performance with measurable output. All criteria for measuring "good performance" are partial, because the quality of performance is hard to measure. Unfortunately, too many people mistake measurable output for a definition of performance. These criteria often do more harm than good.


Measuring performance is fashionable

Management likes to define criteria for "good performance". For example, factory workers are measured by how many pieces they produce (and pass quality control) in an hour. Academics are measured by how much funding they bring in, student satisfaction, papers published, citations, etc.

Unfortunately, quality of performance is hard to measure

For factory work, output can probably be measured fairly. After all, products are sold by volume. However, quality cannot be measured easily for most jobs. For an academic, the above criteria are incomplete. An academic's primary job is teaching and research. (All other activities are peripheral, necessary as some of them may be.) Surprisingly, the quality of teaching and student supervision is not a criterion for "good performance"; it is not necessarily reflected by student satisfaction. Neither is the quality of research, which is not necessarily reflected by the number of publications or citations.

Could approximations be used?

Perhaps the quality of teaching and research is not counted because it cannot be quantified. But even approximations of it have not been considered. For example, the amount of time that an academic spends with students is not counted. Neither is the amount of time that an academic spends on reading, thinking and writing.

Easy option: using measurable output

The reason for measuring performance with the above-mentioned criteria is that they are measurable outcomes. Unfortunately, not everything is measurable. One can only accept that part of performance is measurable and part is not. Yet some appear to believe that everything is measurable, or that only the measurable criteria matter! The consequence of such a belief is often a deterioration of service.

Approximations of approximations

Funding, student satisfaction, papers published, citations, etc. are only approximations, or approximations of approximations. Yet they are treated as complete measures of performance. Many academics adjust their behaviour to these measures. As a result, teaching is compromised. Scholarship is compromised.

Is funding a good measure?

The logic of using funding to measure performance is that an academic can only succeed in obtaining funding if they are respected by the peers who review their proposals. In practice, however, funding is often secured through skilful networking, so funding measures networking skills more than research skills. Funding is not even an appropriate "measurable output", because funding is an input, not an output! Funding supports research. It is really an approximation of an approximation!

Is student satisfaction a good measure?

Many students want to study, but the emphasis on measurable output drives many to seek high marks with little effort. Academics can please them by teaching well. But it is even easier to please them with inflated marks (with the exception of top students, who want marks that reflect their ability), which may say nothing about the quality of teaching.

Are publications a good measure?

Publication is important. It is probably the most measurable output of an academic. However, the number of publications can be distorting. Ludwig Wittgenstein did not have many publications, yet his impact is greater than that of 99.9% of those who published more than he did. Isaac Newton did not have enough publications to survive today's rigorous appraisal systems (had he died before 40, few would have noticed him).

If the number of publications is used to measure performance, academics will tend to spread their research output across multiple publications instead of pooling it into one (a practice known as salami publication: thin slices, with very little meat in each of them).

Citation counts approximate the importance of a publication, but they are not everything. Academics can cite each other in order to build up their citation numbers, in which case citations measure networking skills. Citation counts also take time to build up, so they are a delayed measure.

Confusing "measurable output" with "definition of performance"

I have used academics as an example to illustrate the difficulties in measuring performance. The same analysis applies to other professions. In fact, such confusion is ubiquitous; it is fashionable to believe that performance can be measured concretely. Most people believe that measures should be as objective as possible. The trouble is that too many people confuse wishes with reality. While objective measurement of performance is a reasonable goal, one must not forget that true quality is hard to measure. Defining criteria for measuring performance can therefore do more harm than good.

What are those criteria really?

It is not clear how people treat those criteria, and the issue is rarely even discussed. Are they necessary conditions for good performance? Sufficient conditions? The very definition of "good performance"? All of these treatments are problematic.

Damaging measures

Most criteria for measuring performance are only partial. Unfortunately, because they make the headlines, many consider them definitions of "good performance". This is damaging! Having these criteria is not bad if they are treated as references. But treating them as complete makes them worse than having no criteria at all. When staff adjust their behaviour to meet the criteria against which they are measured, the quality of service is often sacrificed. In other words, such measures are destructive measures.

[End]

Related Articles:
Destructive Testing in Higher Education
Teaching Overhead: a high premium for teaching quality control
Missing measures in university education