There has been a strong response to the FT’s consultation on changes to the FT50 research rankings, part of a broader reflection on the methodology of its business school rankings, notably how to reflect societal impact.
IFSAM should be congratulated for hosting five webinars with different groups to discuss the issues raised, for its related reflections, and for directly involving the FT in the process.
As the final IFSAM position statement stresses, the FT rankings are primarily designed to provide information for potential students in deciding whether and where to apply to business schools. Research is only one component among many that they take into account.
As the statement argues, it is the responsibility of faculty leadership to consider whether to use the rankings in academic recruitment, promotion and tenure decisions. They should surely use their professional judgement to assess any individual, deploy a broader set of criteria, and consider the balance of varied contributions from different people (not just academic publishing) that support the overall objectives of the faculty.
The FT rankings attempt to provide a snapshot of academic research activity alongside teaching and other factors across the business school, rather than judging individual academics. They also seek to provide an up-to-date perspective by considering only recent publications, and by crediting those in highly regarded journals as a leading indicator of quality.
Citations are also a valuable sign of quality, but they are a lagging indicator that can build over many years. Any absolute citation count requires subtle interpretation and adjustment depending on the nature of the academic field, and risks distortion based on where a paper was published.
It is surely the responsibility of journal editors, when deciding whether to publish a paper, to take into account the valid concerns raised by IFSAM about dominant western bias, and to consider a broader range of interdisciplinary themes and insights relevant to particular regions, notably from non-English-speaking countries.
As IFSAM says, editors should also reflect on the reforms taking place in other academic disciplines to embrace the DORA declaration and open science, avoid duplication of existing research, and consider sharing underlying data to permit replication and benefit the broader academic community.
That said, the FT is keen through both its reporting and its rankings to help showcase and encourage more research that is applied (something that is particularly relevant to business schools) and has societal impact.
It has experimented with a “teaching power index” to credit business school authors of textbooks and other works that are cited in other institutions’ syllabuses. It has also explored ways to capture the wider resonance of articles with societal impact beyond academia, by tracking citations in the media and on social media.
The challenge is to foster a system that is consistent, comparable and feasible. A start would be to encourage academics, business schools and publishers to report all academic output in a consistent and unambiguous manner: the use of a standard such as ORCID to provide a unique author identifier; ROR to do the same for business schools; and DOI to capture articles, chapters, textbooks and also more “popular” non-academic books.
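For illustration only, the kind of unambiguous record this scheme implies might look like the sketch below. The field names are assumptions, not an FT or IFSAM specification; the ORCID and DOI values are the well-known sample identifiers published by those organisations, and the ROR value is a placeholder.

```python
def make_record(orcid: str, ror: str, doi: str, title: str) -> dict:
    """Bundle one piece of academic output with standard identifiers.

    - ORCID uniquely identifies the author
    - ROR uniquely identifies the institution (business school)
    - DOI uniquely identifies the output itself (article, chapter, book)
    """
    return {
        "author_orcid": f"https://orcid.org/{orcid}",
        "institution_ror": f"https://ror.org/{ror}",
        "output_doi": f"https://doi.org/{doi}",
        "title": title,
    }

# Sample identifiers: ORCID's published example ID and the DOI
# Foundation's example DOI; the ROR ID is a made-up placeholder.
record = make_record(
    orcid="0000-0002-1825-0097",
    ror="00example0",
    doi="10.1000/182",
    title="An illustrative article",
)
print(record["output_doi"])
```

Because each field is a single resolvable URL, records from different schools and publishers could be merged and de-duplicated mechanically, which is what makes the reporting consistent and comparable.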
The ideas around a global research evaluation system and measures of impact on teaching and society, while overcoming the risks of gaming, would be welcome even if they are difficult to achieve. More work is needed to consider ways to report other forms of academic output, such as participation in advisory boards, policy consultations and support for community organisations.
But reporting and publicly sharing details of publication output in a consistent way would provide a powerful start. We would welcome ideas and encourage groups to get in touch.
Andrew Jack is global education editor at the Financial Times, and writes in his personal capacity. email@example.com