
Wednesday, September 15, 2010

Robust, transparent and sophisticated

Phil Baty explains how in-depth consultation with the global academic community has produced the most exact and relevant world rankings yet devised

It is, of course, rather crude to reduce universities to a single number.

We are aware that higher education institutions are extraordinarily complex organisations. They do many wonderful, life-changing and paradigm-shifting things that simply cannot be measured. Data on some of their most valuable endeavours simply do not exist or cannot be meaningfully compared on a global scale; many of the proxies commonly used are less than satisfactory.

The 2010-11 Times Higher Education World University Rankings have been compiled with these limitations very much in mind.

The tables' methodology was determined only after 10 months of detailed consultation with leading experts in global higher education: more than 50 senior figures across every continent provided extensive feedback on our plans, amounting to more than 250 pages of commentary. The wider university community also had its say via more than 300 postings on our website.

So, despite the inherent limitations, these tables represent the most comprehensive and sophisticated exercise ever undertaken to provide transparent, rigorous and genuinely meaningful global performance comparisons for use by university faculty, strategic leaders, policymakers and prospective students.

The aim over the past 10 months has been to create a genuinely useful tool for the global higher education community and beyond, not just an annual headline-driven curiosity.

So what is the result of perhaps the largest consultation exercise ever undertaken to produce world university rankings?

The tables use 13 separate indicators (up from just six under our old system) designed to capture a broad range of activities, from teaching and research to knowledge transfer.

These elements are brought together into five categories:

* Teaching — the learning environment (worth 30 per cent of the final ranking score)
* Research — volume, income and reputation (worth 30 per cent)
* Citations — research influence (worth 32.5 per cent)
* Industry income — innovation (worth just 2.5 per cent)
* International mix — staff and students (worth 5 per cent)

The weightings for the five categories, and the 13 indicators within them, vary considerably. High weightings are given where consultation has shown unmistakable enthusiasm for the indicator as a valuable proxy and clear confidence in the data we have. Lower weightings are employed where confidence in the data or the usefulness of the indicator is less pronounced.
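To make the arithmetic concrete, the five weighted categories above combine into a single overall score as a weighted sum. The sketch below is purely illustrative — the category names, the 0-100 scale and the sample scores are assumptions for the example, not Times Higher Education's actual calculation (which also normalises and standardises the underlying indicator data).

```python
# Illustrative sketch only: combining the five category scores described
# above into one overall figure via a weighted sum. The scale (0-100) and
# the sample scores are hypothetical, not real rankings data.

WEIGHTS = {
    "teaching": 0.30,           # learning environment
    "research": 0.30,           # volume, income and reputation
    "citations": 0.325,         # research influence
    "industry_income": 0.025,   # innovation
    "international_mix": 0.05,  # staff and students
}

def overall_score(category_scores: dict) -> float:
    """Weighted sum of category scores, each assumed to be on a 0-100 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights total 100 per cent
    return sum(WEIGHTS[cat] * category_scores[cat] for cat in WEIGHTS)

# A hypothetical institution's category scores:
scores = {
    "teaching": 80.0,
    "research": 75.0,
    "citations": 90.0,
    "industry_income": 60.0,
    "international_mix": 70.0,
}

print(round(overall_score(scores), 2))  # → 80.75
```

Note how the weighting shapes the outcome: a strong citations score (worth 32.5 per cent) moves the overall figure far more than industry income (worth just 2.5 per cent), which is exactly the confidence-based weighting trade-off the article describes.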
The future

This is the first year of a highly ambitious new rankings system. In all such systems, compromises must be made, proxies must be applied and data-collection issues will arise.

However, we are confident that by creating our methodology in open and detailed consultation over the past 10 months, we have produced a robust and evidence-based ranking that paints a realistic picture of the global landscape.
