World University Rankings blog: the value of impact assessment

University rankings are often criticised, but seldom ignored. This piece argues that an assessment of research impact—beyond scholarly citations—also needs to be included in these rankings. It also notes that criticism of the supposedly huge cost burden of impact assessment does not stack up in the context of the UK's 2014 Research Excellence Framework.

First published on Times Higher Education

Which universities are the most innovative?

University rankings are often criticised, but seldom ignored. They are composite measures of several variables, with research quality—as indicated by citations or the number of highly cited researchers—a significant driving force for many of them.

While one may question the methodological soundness of composite indicators, there is no doubt that rankings influence behaviour and have caused quality to be taken seriously in universities around the world.

They also have some unintended consequences. A case in point is the Academic Ranking of World Universities, which was originally developed to monitor the global standing of Chinese universities as they invested in research capacity. However, the ARWU soon became a league table for the world’s most research-intensive universities.

Certainly, ARWU and other rankings have had a positive influence on nurturing a culture of quality, and do provide a reasonable mechanism to monitor the progress of universities and national systems starting from a low base. However, they may be creating distortions for countries that already have a developed higher education research sector; this is especially the case if these countries also have research assessment exercises that reinforce similar behaviour.

Universities in the UK and Australia have performed well in these rankings. It is no coincidence that the UK has had more than 25 years of experience in formal research assessment, and Australia is in the midst of its third edition of the Excellence in Research Australia exercise.

However, one concerning distortion relates to Australia’s long-standing poor showing in business-university collaboration. This is despite recent, significant improvement in business investment in research and development, and the ongoing expansion of university research.

Two of the Australian Research Council’s grant schemes are the ‘Discovery’ scheme, which funds investigator-driven projects, and the ‘Linkage’ scheme, which requires co-investment from end-user partner organisations. Over the past three grant rounds, applications for Discovery grants have increased by 8 per cent, while applications for Linkage grants have declined by 24 per cent. This is despite the fact that the success rate for Discovery is a low 18 per cent, while Linkage’s is a healthy 36 per cent.


There could be other contributing factors, but the distorting role of rankings and assessment exercises based on the academy’s view of excellence is certainly at play.

It is in this context that the inclusion of impact in the recent UK research excellence framework, or REF, is a welcome development. Despite initial misgivings, it appears to have been well-received by a wide range of disciplines, including the humanities and social sciences.

One pervasive criticism concerns the cost of the impact assessment exercise. A recent report puts that cost at £55 million, presenting it as 3.5 per cent of the research funds to be allocated according to the impact assessment over the specified five-year time frame. But this misses the point. The impact assessment covers the broad research output of UK universities, not just research funded by the funding councils.

It can be argued that a more appropriate denominator would be the total university research expenditure for the five-year period. That would change the cost of impact assessment to something like 0.15 per cent.
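The arithmetic behind the two framings can be made explicit. A back-of-envelope sketch, starting only from the figures given above (£55 million, 3.5 per cent, 0.15 per cent); the implied totals are inferences, not numbers stated in the article:

```python
# Back-of-envelope check of the two denominators implied by the article's
# percentages. The implied totals are inferred, not stated in the source.
REF_IMPACT_COST_M = 55.0  # reported cost of the impact assessment, in £m

# Critics' framing: £55m is 3.5 per cent of the funds allocated
# according to the impact assessment over five years.
implied_allocated_funds_m = REF_IMPACT_COST_M / 0.035

# Author's framing: against total university research expenditure for the
# same period, the cost comes to roughly 0.15 per cent.
implied_total_expenditure_m = REF_IMPACT_COST_M / 0.0015

print(round(implied_allocated_funds_m))    # ~1571 (£m), i.e. about £1.6bn
print(round(implied_total_expenditure_m))  # ~36667 (£m), i.e. about £37bn
```

In other words, the 0.15 per cent figure implies a five-year research expenditure base of roughly £37 billion, more than twenty times the pot the critics used as their denominator.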

A conservative estimate of the global annual spend on university research is at least a quarter of a trillion US dollars, much of it from public funds. Is it too much to expect that an attempt be made to assess the broader societal impact of this massive investment? A full economic costing of impact assessment may exceed the reported figure, but it would still be minuscule compared with the sheer scale of university research. By contrast, the full economic cost of traditional assessment via academic peer review is, rightly, rarely questioned.

There is a more important benefit of the impact assessment. The UK now has 6,679 published case studies, and it should not take much effort to perform a postcode analysis showing where the research described took place; indeed, the case studies could even be mapped to parliamentary constituencies. Such an analysis would either have politicians talking about the great things happening in their local universities, or prompt them to make the case for greater investment. Either way, it strengthens the case for investment in university research.
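The postcode analysis suggested above is straightforward to sketch. A minimal, hypothetical example follows; the file names and column layout (`postcode`, `postcode_district`, `constituency`) are assumptions for illustration, not part of any published REF dataset:

```python
# Hypothetical sketch: tallying impact case studies by parliamentary
# constituency. Assumes a CSV of case studies with a 'postcode' column and a
# lookup table from postcode district to constituency -- both file layouts
# are illustrative assumptions, not a real REF data format.
import csv
from collections import Counter

def count_by_constituency(cases_path, lookup_path):
    # Build the postcode-district -> constituency lookup table.
    with open(lookup_path, newline="") as f:
        lookup = {row["postcode_district"]: row["constituency"]
                  for row in csv.DictReader(f)}
    counts = Counter()
    with open(cases_path, newline="") as f:
        for row in csv.DictReader(f):
            # The outward part of a UK postcode, e.g. "BS8 1TH" -> "BS8".
            district = row["postcode"].split()[0]
            counts[lookup.get(district, "unknown")] += 1
    return counts
```

Joining the resulting counts to election results or funding data would then be a one-line merge in any analysis tool, which is what makes the constituency-level framing politically potent.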

Impact assessment will evolve, and better methodologies will appear over time. But the university sector—and the nations that are investing in their universities—will benefit from a diversity of rankings that take broader societal impact into account. Early iterations of these rankings will be questioned, but they will not be ignored.


Arun Sharma is Deputy Vice-Chancellor, research and commercialisation, at Queensland University of Technology.


Arun Sharma is a Distinguished Professor Emeritus at the Queensland University of Technology (QUT) and Chair of the Council of QIMR Berghofer Medical Research Institute. He is an advisor to the Chairman of Adani Group and leads the Group’s Sustainability and Climate Change function.

https://professorarunsharma.com/