The Department for Business, Innovation and Skills has launched a Call for Evidence as part of Lord Stern's Review of the Research Excellence Framework. Details of the inquiry can be found on the Government website. The submission produced jointly by University Geoscience UK and the Geological Society can be found below:
Submitted 24 March 2016
1. This response has been prepared jointly by the Geological Society and University Geoscience UK.
a. The Geological Society (GSL) is the UK’s learned and professional body for geoscience, with about 12,000 Fellows (members) worldwide. The Fellowship encompasses those working in industry, academia, regulatory agencies and government with a broad range of perspectives on policy-relevant science, and the Society is a leading communicator of this science to government bodies, those in education, and other non-technical audiences.
b. University Geoscience UK is the subject association of Geoscience (geology, applied geology, Earth science, geophysics, geochemistry and some environmental science) departments/schools based within universities in the British Isles. It promotes discussion and exchange of information between departments and provides a point of contact between these and professional, government and quality control agencies.
What changes to existing processes could more efficiently or more accurately assess the outputs, impacts and contexts of research in order to allocate QR? Should the definition of impact be broadened or refined? Is there scope for more or different use of metrics in any areas?
2. The pattern of usage and the speed of uptake of research (sometimes referred to as the research ‘half-life’) vary from discipline to discipline, and even within the Earth Sciences. Earth Science research tends to have a relatively long half-life, so it takes longer for citations to become a useful measure than in areas such as theoretical physics or biomedicine, where the half-life may be shorter. It has been reported to us that Earth Science citations only become really useful three years after publication, and reliable after five. This balance was broadly well managed in the 2014 REF, but it is worth taking into account in future plans.
3. Some of our respondents noted that completing the Impact Summary document contributed little to the process and instead used up valuable time. By contrast, many agreed that the Impact Case Study documentation was a useful part of the process. We also received comments regarding the time and effort that has to go into some of the data collection. It was noted to us that the esteem measures for individual researchers required for the Environmental Statement are very time-consuming to collate and often have an uneven effect on those they represent. Some respondents in universities recommended that the Environmental Statement should be simplified, and could be an area in which metrics such as PhD completions and income (both RCUK and industry) might play a leading role, supplemented by a statement concerning future research plans.
4. While at the highest level the definition of impact is very broad, across different Units of Assessment (UoA) it is more patchily defined. While there may be consensus within a given UoA on what constitutes impact, there are concerns that the same Impact Case Study submitted to two different UoAs could result in different outcomes. This might be particularly true at the interface of science and engineering with social science. Impact should be defined sufficiently broadly that the value delivered to society by different disciplinary bodies of research is adequately recognised. Clearer guidance on what constitutes impact and what does not, following careful consideration, would be of value – at present (rightly or wrongly, in respect of the current guidelines) many institutions and departments frame quite narrowly the impact that their research has.
5. There are already numerous metric-based systems available that capture a large amount of data against different criteria – ORCID, ResearcherID, ResearchGate and Web of Knowledge, to name a few. It would be sensible to make the best use of data capture systems that already exist rather than investing time in creating additional ones.
6. An additional criterion for assessing researchers could be a “value added” score, which measures outputs relative to inputs. This is common in evaluating economic performance, for example. Some researchers are incredibly resourceful, using low-level funding (actual and in-kind) to generate both quantity and quality of outputs. It would be useful to capture this aspect of performance, which is largely lost in the REF process at present. It is often through such small-scale endeavours that serendipitous findings (addressing ‘unknown unknowns’) emerge and lead to major advances.
7. While metrics play a useful role in assessing outputs, it is difficult to see how meaningful metrics could be constructed to measure impact. To do so would at the very least require the definition of impact to be artificially and narrowly constrained, which would be detrimental to the laudable objective of seeking to stimulate and reward effective and creative uses of research.
If REF is mainly a tool to allocate QR at institutional level, what is the benefit of organising an exercise over as many Units of Assessment as in REF 2014, or in having returns linking outputs to particular investigators? Would there be advantages in reporting on some dimensions of the REF (e.g. impact and/or environment) at a more aggregate or institutional level?
8. It is the view of our respondents that the REF assessment should stay at the department level. Moving to an individual researcher level is likely to result in destructive competition, with ‘stars’ moving between institutions. Individualisation would also be demoralising to the majority of staff. Reporting the REF at a more aggregated or institutional level would also be potentially disadvantageous: it could lead to internal politics dominating this aspect of the REF report, and to ‘gaming’ in respect of HEIs’ teaching and research priorities, to the detriment of students and UK plc. The REF already consumes a considerable amount of resource, and loading more work onto HEIs without good reason would consume resource within institutions that could better be used to conduct high-quality research and teaching. Assessing impact at an aggregated institutional level, in particular, would be meaningless. An advantage of the REF is that it can identify areas of strength and weakness within subject areas across the UK and within institutions, and this is of value to disciplinary communities.
What use is made of the information gathered through REF in decision making and strategic planning in your organisation? What information could be more useful? Does REF information duplicate or take priority over other management information?
9. Departments often use the REF as a benchmarking exercise and the data collected as part of strategic planning. It allows departments to recognise strengths and weaknesses with external calibration against comparators. It is not easy to disaggregate this from wider discussions in academic communities, or to identify their relative value, but given the existence of the REF it does deliver benefit.
What data should REF collect to be of greater support to Government and research funders in driving research excellence and productivity?
10. Our respondents noted that the UK research sector is already one of the leanest machines when it comes to the delivery of research excellence. Our research productivity is outstanding compared with most nations. The REF is far from being the only driver of this success, and there may be limited scope to use reform of REF as a means of increasing this further.
11. Research excellence and productivity are measured through scholarly activity, which is self-regulated through peer review and scientific journals: excellent research will be well cited and published in high-impact journals. Such outputs should be weighed against inputs: an individual may make greater leaps than a large group, and a small grant may yield more than a large one.
How the REF might be further refined or used by Government to incentivise constructive and creative behaviours such as promoting interdisciplinary research, collaboration between universities, and/or collaboration between universities and other public or private sector bodies?
12. We agree that there should be greater encouragement of cross-referral of outputs/impact case studies to encourage interdisciplinary research. Geoscience is, by its very nature, multi-disciplinary and we need to make sure that tweaks to REF do not encourage further silos to emerge. The same could be said of collaborative research between and within individual HEIs.
In your view how does the REF process influence, positively or negatively, the choices of individual researchers and / or higher education institutions? What are the reasons for this and what are the effects? How do such effects of the REF compare with effects of other drivers in the system (e.g. success for individuals in international career markets, or for universities in global rankings)? What suggestions would you have to restrict gaming the system?
13. At an individual level, REF ought not to influence successful academics. Such individuals will usually produce a range of publications with international and more parochial scope. Yet REF does encourage individuals to focus on ‘blue chip’ journals and high-impact publications at the expense of work that may have a greater local and regional impact for the institution.
14. Some of our respondents noted that they would be in favour of institutions having to submit all eligible staff to the REF process, to reduce scope for ‘gaming’.
In your view how does the REF process influence the development of academic disciplines or impact upon other areas of scholarly activity relative to other factors? What changes would create or sustain positive influences in the future?
15. It is still too early to have a clear picture of the full impact of the REF. Viewed together with its predecessor, the RAE, however, research evaluation exercises seem to result in a more fixed interpretation of disciplines. To evaluate or audit, one first has to categorise, and this categorisation then sets barriers and boundaries which can themselves be counterproductive. We need to enable and empower our science base. There should be more focus on ensuring access to outstanding infrastructure for each researcher (not just large centres), linked to excellent support structures, to enable a holistic approach to developing research agendas (including the application of research, where applicable).