
This week (23 July) Research England published the draft guidance for the upcoming Research Excellence Framework (REF) submissions. It's been much awaited, as it gives an idea of the direction that institutions can aim for with their submissions. I've taken a look and picked out the bits that are relevant to public engagement, and where I think it will inform the advice we give to researchers. It's a bit of a long read so I'd get yourself a cuppa... (or skip to the bit about public engagement or where to find support).

First, a frequently asked question: ‘Do impacts arising from public engagement with research count?’

The answer: Yes. It’s explicitly included.

NOTE: This is still only a draft. The final guidance isn't due to be published until “early 2019” and this version is for consultation only. Some of the interpretations are my own, but a lot of this is copied from the guidance. The good news is that if you have something to say about it, you can of course submit your views via the online form (by noon, 15 Oct). However, it may be better to send your comments to be collated as part of the University's response by emailing refinfo@admin.ox.ac.uk (by mid Sept).

A little background…

The Research Excellence Framework (REF) is the process of expert review that ultimately informs the amount of ‘quality related’ research funding that is provided to Higher Education Institutions in the UK… amongst other things.

It involves an assessment of the quality of research carried out at each institution (outputs), the impact that arises from said research (impact), and the environment supporting the research (environment). Everything is split up into ‘Main Panels’ (A – medicine, health and life sciences; B – physical sciences, engineering and mathematics; C – social sciences; D – arts and humanities), which are then further split into a total of 34 Units of Assessment (UoAs).

What comes out is a series of assessments for each UoA, from 1* to 4* (and ‘unclassified’).

It feeds league tables and is used for benchmarking, accountability, and all sorts of other things. It’s sort of a big deal.

The REF had an initial outing in 2014, having itself evolved from a series of ‘Research Assessment Exercises’ (RAEs).

If you’re looking for more detailed information on the REF, where it’s come from and what its purposes are, see the ref.ac.uk website.

This time around there are a number of changes. Some are to do with the number of staff submitted, the formula used to work out the number of impact case studies each HEI needs to submit, and so on, which I won’t detail here. An important change is that impact case study assessments will count for more of each UoA’s score for each institution (25%, compared with 20% in REF2014), and that the definition of ‘underpinning’ research has been extended to include a body of work, rather than needing to be tied to specific research outputs (see more below).

The basic stuff…

First off, what is ‘impact’, according to the REF?

Impact for the REF is defined as “an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia”, which is underpinned by excellent research.

“Impact includes, but is not limited to, an effect on, change or benefit to…

  • the activity, attitude, awareness, behaviour, capacity, opportunity, performance, policy, practice, process or understanding
  • of an audience, beneficiary, community, constituency, organisation or individuals
  • in any geographic location whether locally, regionally, nationally or internationally.”

“Impact includes the reduction or prevention of harm, risk, cost or other negative effects.” I.e., research that leads to a decision NOT to do something is included.

This could be considered vague. I’d say it’s broad. They go on to acknowledge that there are multiple routes to impact – including in partnership with those outside of academia. The relationship between the research and impact can be direct or indirect, linear or non-linear, can happen at a number of different geographical levels (e.g., regional or international), and can arise throughout the research lifecycle (i.e., not just at the end!). It can be foreseen or unforeseen, and can be seen in the effects on individuals, groups and whole communities.

Interestingly, in stating how impacts can arise directly or indirectly from research the draft guidance states, “a submitted unit’s research may have informed research in another submitted unit (whether in the same or another HEI), which in turn led to an impact.” They don’t provide an example so hopefully the final guidance will expand on what this might actually look like. It also mentions how to handle ‘co-produced’ work that leads to impact(s).

Impact will be assessed by looking at both ‘reach’ and ‘significance’.

By underpinning research they mean that it “may be a body of work produced over a number of years or may be the output(s) of a particular project. It may be produced by one or more individuals.” ‘Excellent research’ means “that the quality of the research is at least equivalent to two star (2*): ‘quality that is recognised internationally in terms of originality, significance and rigour’”.

“The submitting unit must show that the engagement activity was, at least in part, based on the submitted unit’s research and drew materially and distinctly upon it.”

Windows of eligibility

The assessment period is 1 August 2013 to 31 July 2020, during which time the impacts stated must have arisen/taken place.

“The underpinning research must have been produced by the submitting institution during the period 1 January 2000 to 31 December 2018.”

As an interesting side note here, they state that the impact need not be ‘finished’ – it could be at any level of maturity BUT it will only be judged on the basis of what has actually taken place and can be evidenced, not based on predictions of future potential impact.

Impact is not the activities or outputs themselves. Nor is it academic publications, conference talks, or other activities that advance knowledge for academic purposes – the impact needs to be outside of academia.

Finally, whilst we know many researchers do a huge amount of extremely valuable advisory work at a number of levels, which may itself have impacts, this isn’t the ‘Researcher Excellence Framework’. The impact of individuals giving general advice won’t count where the impacts arising can’t demonstrably be linked to their research.

By ‘reach’ they mean “the extent and/or diversity of the potential beneficiaries of the impact, as relevant to the nature of the impact” – but not in geographic terms or as absolute numbers. What this means, as I read it, is that it’s not about the absolute numbers, but more that you stated a (group of) beneficiaries and that you can show that a reasonable proportion of them were reached. This will hopefully encourage people to define their audiences better so that they can a) design better-targeted and more appropriate activities, and b) reasonably back up their claims of impact. It’ll be far easier to show a greater extent of reach within a tightly defined group (e.g., 60% of a regional Girl Guides branch took part in at least one of the activities – my example, not theirs) than within ALL the people (i.e., ‘the general public’). Imagine trying to prove the extent to which ALL the people of the British Isles saw, heard or interacted with something, let alone were affected by it…

By ‘significance’ they mean the degree to which the impact has enabled, enriched, influenced, informed or changed the performance, policies, practices, products, services, understanding, awareness or well-being of the beneficiaries. So… was there a little change or a big change? Clearly showing an effect from something like mass media is difficult, so I’m sure people will be working some more to figure out indicators for this. There’s also a question of what proportion of the people you claim to have reached you should be expected to show the change in (e.g., if only 10% of participants feature in the evaluation or impact monitoring, is that sufficient to support your claims, or to extrapolate across everyone who took part in the engagement?).

An overall judgement will be made based on the combination and balance of reach and significance, but neither is given precedence – it’ll all be taken into account within the context of the impact itself. For example, a big change for a small, well-defined group of beneficiaries will be weighed against a smaller impact across a large audience (e.g., the difference between more intensive engagement models and awareness-raising through mass media).

So does Public Engagement count?

Yes... here's the relevant bit in full:

"Engaging the public with the submitting unit’s research (for example, through citizen science, patient and public involvement in health, or through public and community engagement), is an activity that may lead to impact. Sub-panels will welcome, and assess equitably, case studies describing impacts achieved through public engagement, either as the main impact described or as one facet of a wider range of impacts. Panels expect that case studies based on public engagement will demonstrate both reach (e.g. through audience or participant figures) and significance, and will take both into account when assessing the impacts.”

The main points to highlight are that impacts arising from public engagement:

  • won’t by default be judged to be ‘lesser’ than impacts achieved through other means (e.g., economic impacts via commercialisation).
  • may be the main focus of the case study, or feature as a blend of, or in addition to, other types of impact.
  • must be underpinned by research – that means there must be a significant link back to a body of work, or research outputs. This means, for example, that outreach projects that aim to increase enjoyment of a subject generally wouldn’t count, unless it can be demonstrated that there’s a significant inclusion of research in the content/delivery, etc.

Significantly, they also extend what they mean by ‘assess equitably’ by stating, “the main and sub-panels have determined that no one model or relationship will be considered intrinsically preferable, and each impact case study will be assessed on its own merits.” This is great, as previously people have been concerned that engagement would be seen as the weaker sibling to e.g., commercial impacts. Basically, impact is impact, whatever the flavour.

What will impact case studies look like?

Impact case studies will need to be a ‘clear and coherent’ narrative account of what the impacts were, who the beneficiaries were, how the impacts are underpinned by research, and how they were achieved, with evidence to back it all up, amongst other things. They state the case studies “should include sufficiently clear and detailed information to enable panels to make judgements based on the information it contains, without making inferences, gathering additional material, following up references or relying on members’ prior knowledge.”

There’s a general template included in the guidance (Annexe G), and each case study will be limited to 5 pages including references. Below is an excerpt:

[Image: REF2021 Guidance on Submissions, extract from Annexe G: Impact Case Study Template and Guidance]

How do you evidence impact?

Well that’s the big question – obviously the types of evidence you provide need to a) support the claims you make and b) be appropriate to the types of impact.

They reiterate the point that both reach and significance will be taken into account, but that the impacts themselves must be evidenced, and not just evidence of dissemination taking place. The example they provide is: “attendance figures at an event may illustrate the pathway to a change in understanding or awareness and provide an indication of the reach of the impact. However, on their own, they would not serve as evidence of the significance of the impact, which might be demonstrated, for example, through participant feedback or critical reviews”.

Alongside the draft guidance, they also published a study looking to standardise the use of quantitative indicators. You can read the whole thing here, but below is an overview of what they propose: a dual system using a ‘style guide’, to make the indicators more ‘discoverable’, plus ‘specific guidance’ on using quantitative indicators. Again, this is to inform the development of guidance and to help those looking to effectively articulate their impact; it’s not meant to constrain how impact case studies are written.

And here’s the overview for specific guidance on engagement and mention in non-academic documents and media:

[Image: Guidance for standardising quantitative indicators of impact within REF case studies, page 16]

[Image: Guidance for standardising quantitative indicators of impact within REF case studies, page 17]

I note, of course, that this only relates to ‘reach’ rather than significance. This could be explained by the fact that they were looking back at REF2014 case studies and followed a process that only identified indicators meeting certain criteria, excluding other measures. Additionally, we know that the quality of evidence provided for impact from engagement in REF2014 was ‘often weak’.

Another important point is on verifiability: any evidence provided will need to be ‘independently verifiable’, meaning that, if need be, the panels can access the information themselves to make a judgement.

There is a table provided as a guide to the potential types of indicators and evidence that would be most relevant to different types of impact; an excerpt is shown below. Again, they say this isn’t exhaustive, and that “Sub-panels will consider any relevant, verifiable evidence.”

[Image: excerpt from the table of indicator and evidence types, REF2021 Guidance on Submissions]

There are differences between the main panels in their supplementary criteria:

  • Panel A (medicine, health and life sciences) encourages quantitative indicators where possible, and states that they “do not welcome testimonials offering individuals’ opinions as evidence of impact; however, factual statements from external, non-academic organisations would be acceptable as sources to corroborate claims made in a case study.”
  • Panel B (physical sciences, engineering and mathematics) welcomes both quantitative and qualitative indicators as appropriate: “Where testimony is cited, it should be made clear whether the source is a participant in the process of impact delivery (and the degree to which this is the case), or is a reporter on the process.”

The key, of course, is to understand what you want to achieve first, and build assessment and evaluation in at the very beginning based on your desired outcomes. The Generic Learning Outcomes framework cited in the table is what we recommend using.

They also note that whilst links to items online can be included, the case study itself should provide all the information required, and sub-panels will not follow links.

How will impact case studies be chosen?

Departments are currently drawing up long lists of potential case studies through their own processes. What I can point to in the draft guidance is that they say the strongest case studies should be chosen, and there is no expectation that they be representative of anything else (e.g., the breadth of types of impact).

What about case studies that build on impact case studies from REF2014?

As long as the case studies meet the eligibility criteria (such as the underpinning research having taken place after 2000 and being ‘excellent’), case studies continued from REF2014 are eligible, as long as impact has occurred within the assessment period – but they’ll need to be identified as continuing case studies rather than new (to REF). Note that the panels won’t take REF2014 case studies into account; the case studies for 2021 will be judged on their own merit. They state, “the sub-panels don't want to receive information about how continued case studies relate to those submitted in REF2014.”

Expectations for new and continued case studies may be different. For example, in panel A they encourage new case studies whilst also appreciating the long lead times required for certain biomedical and health impacts; panels B, C and D encourage the strongest case studies to be submitted, regardless of whether they are new or not.

What if my evidence needs to be kept confidential?

The draft guidance on working methods covers this in some detail. Panel members are bound by confidentiality arrangements and, in justifiable cases, more robust mechanisms can be used – including where a certain level of security clearance is required – and submitting institutions can request that specific case studies are redacted or not published at all. There are also processes to ensure conflicts of interest are raised and taken into account.

What does this mean for the public engagement with research I’m planning?

Nothing. The general advice we provide about planning high quality, effective public engagement still applies. Public engagement with research is a highly valuable activity to undertake for myriad reasons; it can improve research and have positive impacts on you, the researcher, as well as wider society – even if it’s not ‘REFable’.

If anything, the guidance now explicitly acknowledges a large breadth of processes and activities that can lead to impact, and that impact isn’t necessarily something that just pops out beautifully from a sequential process, but can be complicated, and something that happens at all sorts of different levels, in all sorts of different ways.

If you have a well-thought-out public engagement project, with defined objectives, defined audiences/participants/beneficiaries, appropriate methodologies, and a plan to evaluate and monitor the desired outcomes and impacts, then (as long as it takes place within the eligible window and is underpinned by excellent research) you may well be impactful, and therefore have an impact case study on your hands.

Departments are collating potential impact case studies, so if you think you do, or will, have a potential case study, it’s best to get in touch with the REF contact in your department.

Where can I find more support?

In the first instance contact the Division’s Public Engagement Facilitator… me!

There’s lots of guidance out there on planning high quality and effective (and therefore potentially impactful) public engagement with research, and there are tools to help you record outputs, outcomes and impacts. I can point you in the right direction.

Training for public engagement with research takes place termly – with additional workshops on evaluating public engagement being added, too.

I’d recommend at least taking a quick look through this analysis of public engagement in the last REF. The take-away: case studies featuring public engagement (over half of all case studies submitted included some form of it) were no less likely to be scored as outstanding than those featuring other sorts of impact.

If you feel like going down the rabbit hole, all available impact case studies are published online and searchable. Though I would add a note of caution: the analysis above did conclude that the evidence for impact from engagement was ‘often weak’, so basing future plans on what has been done before might not be wise.

For the full draft guidance see the generic guidance here and the panel guidance here. The key thing to keep in mind is that the guidance is meant to assist institutions with their submissions – they specifically say they do not wish to constrain submissions, and make a point that the examples provided are not exhaustive.