What makes a ‘successful’ collaborative research project between public health practitioners and academics? A mixed-methods review of funding applications submitted to a local intervention evaluation scheme


Associated Data

Additional file 1: File 1. Data extraction template for document analysis. File 2. Interview schedule for SPHR practitioners. File 3. Interview schedule for SPHR researchers. File 4. Online survey. File 5. Programme national workshop.


The data sets, which include anonymised interview transcripts and a description of our coding trees, are available from the corresponding author on reasonable request.

Abstract

Background

The national Public Health Practice Evaluation Scheme (PHPES) is a response-mode funded evaluation programme operated by the National Institute for Health Research School for Public Health Research (NIHR SPHR). The scheme enables public health professionals to work in partnership with SPHR researchers to conduct rigorous evaluations of their interventions. Our evaluation reviewed the learning from the first five years of PHPES (2013–2017) and how this was used to implement a revised scheme within the School.

Methods

We conducted a rapid review of applications and reports from 81 PHPES projects and sampled eight projects (including unfunded) to interview one researcher and one practitioner involved in each sampled project (n = 16) in order to identify factors that influence success of applications and effective delivery and dissemination of evaluations. Findings from the review and interviews were tested in an online survey with practitioners (applicants), researchers (principal investigators [PIs]) and PHPES panel members (n = 19) to explore the relative importance of these factors. Findings from the survey were synthesised and discussed for implications at a national workshop with wider stakeholders, including public members (n = 20).

Results

Strengths: PHPES provides much needed resources for evaluation which often are not available locally, and produces useful evidence to understand where a programme is not delivering, which can be used to formatively develop interventions. Weaknesses: Objectives of PHPES were too narrowly focused on (cost-)effectiveness of interventions, while practitioners also valued implementation studies and process evaluations. Opportunities: PHPES provided opportunities for novel/promising but less developed ideas. More funded time to develop a protocol and ensure feasibility of the intervention prior to application could increase intervention delivery success rates. Threats: There can be tensions between researchers and practitioners, for example, on the need to show the 'success’ of the intervention, on the use of existing research evidence, and the importance of generalisability of findings and of generating peer-reviewed publications.

Conclusions

The success of collaborative research projects between public health practitioners (PHP) and researchers can be improved by funders being mindful of tensions related to (1) the scope of collaborations, (2) local versus national impact, and (3) increasing inequalities in access to funding. Our study and comparisons with related funding schemes demonstrate how these tensions can be successfully resolved.

Keywords: Decision-making, Public health, Qualitative research, Research personnel, Translational medical research

Background

Collaborative research projects between public health practitioners (PHP) and researchers are encouraged to increase the use of evidence in practice and decision-making. However, little is known about how to make collaborative research projects successful.

Previous research consistently suggests that research evidence is more likely to be used if users are engaged with researchers in defining the purpose and design of new research [1–8]. In particular, ‘sustained interactivity’ between researchers, policymakers and practitioners to support ongoing exchange, opportunities for personal two-way communication and partnership approaches is seen as important for making these partnerships work [5, 9].

Interpersonal trust and ongoing communication channels have been identified as essential to the process of developing close collaboration between research producers and users [10]. Long-term commitments from, and sustainable funding for, research is required to build these relationships over time [2]. In contrast, short-term initiatives are unlikely to work given the likely pace of organisational change and scale of the challenges facing academia and local government.

We have previously identified the need for an increased mutual awareness of the structures and challenges under which PHPs and researchers work [11]. Opportunities for frequent and meaningful engagement between PHPs and researchers can help to overcome barriers to co-production of evidence. Collaborative models, such as the use of researchers embedded in practice, might facilitate this; however, flexible research funding schemes are needed to support these models.

The difficulties for collaborative research have been well documented in studies of knowledge transfer and knowledge exchange in health services [1–3, 5, 12]. These studies consistently demonstrate that there is no single consistent definition of what constitutes ‘evidence’. This ambiguity results in inconsistencies in what PHPs and academics use and value as research. Particularly in local authorities, the use of research and evidence is highly political, with prevailing ideologies shaping the way evidence is identified, interpreted and considered at a local level [13, 14]. Linked to alignment with political ideologies, the timing of research is a key challenge for academics [15]. Research must be timely to fit with the notion of being able to influence and impact upon a specific ‘policy window’ and for evidence to be available when policymakers are likely to be receptive [16].

Experiences from various Collaborations for Leadership in Applied Health Research and Care (England) to develop collaborative research in health demonstrate that these issues are persistent and require constant alignment of relationships, values, structures and processes for collaboration, with a need for developing a shared 'collaborative' identity and new communities within existing networks that provide bridges across organisational boundaries [17, 18].

These experiences also suggest an ongoing need for dedicated funding programmes and spaces for researchers and PHPs to work together to generate research findings of greater utility to public health practice. Several research organisations in England have started to implement new services and programmes to create such opportunities.

One example is the Public Health Practice Evaluation Scheme (PHPES) run by the National Institute for Health Research School for Public Health Research (NIHR SPHR). PHPES [19] is a national, competitive scheme that offers PHPs support to evaluate local interventions in collaboration with SPHR researchers. The scheme was introduced in 2013 by SPHR to give access to researchers in its member organisations, which comprise eight leading public health research centres in England. PHPES aims to produce high-quality evidence needed by PHPs to improve population health and reduce health inequalities. PHPs can apply to the scheme for SPHR members to evaluate their local public health interventions. The scheme particularly focuses on local, rather than national, public health initiatives that have not been the subject of previous robust evaluations but which have potential applicability elsewhere and have secured operational funding for the research period.

No research exists to date on the evaluation of this scheme, and any assessment of the ‘success’ of the scheme as a whole, or of individual projects, is complicated by the different priorities of stakeholders. To researchers, a successful proposal may be one that is funded; a successful project may be one that generates peer-reviewed journal papers or an impact case study, while a successful evaluation for many stakeholders is one that ‘proves’ a programme or intervention works. Stakeholders may further disagree over whether evidence that an intervention does not achieve its intended outcomes, or that suggests an intervention should be discontinued, equally constitutes a successful evaluation. Clarifying shared expectations about ‘success’ at the start of a collaborative application could therefore be crucial to achieving success from these different perspectives.

This paper reports on the evaluation of the NIHR SPHR PHPES and considers what makes collaborative research applications successful, or not, in the eyes of different stakeholders. We identify three tensions between practitioners and researchers that need to be resolved to maximise the potential for generating an impact on public health through these partnerships.

Methods

The study consisted of four work packages with the overall purpose of making recommendations to the SPHR executive on the scope and implementation of a future responsive research fund:

1. A rapid review of applications and reports from PHPES projects (2013–2017).

2. Detailed review of applications and reports of a sample of eight projects (including funded and unfunded projects), and semi-structured telephone interviews with at least one researcher and one practitioner involved in each sampled application/project.

3. Online survey of practitioners (applicants), researchers (principal investigators [PIs]) and PHPES panel members (academic, practitioner and lay reviewers).

4. National workshops with a wide range of PHPES stakeholders (including lay representatives/community members and Public Health England [PHE]).

The research was conducted over a 9-month period between April and December 2018. The four work packages are discussed in more detail below. The findings from each work package informed the design of the data collection tools in the next work package to maximise data integration and facilitate an iterative research design.

Work package 1: rapid review of all 81 applications and the project reports from 14 funded projects (April–June 2018)

The purpose of the review was to understand the scope of applications and more specifically to identify their original objectives in relation to generating generalisable findings and their dissemination/implementation. The review aimed to identify any common factors that are associated with (i) a successful application, (ii) an effectively delivered project and (iii) evidence of early impact (see Additional file 1: File 1 for the data extraction template used in the document analysis).

Work package 2: individual in-depth interviews (mainly via telephone/Skype) (July–August 2018)

We sampled eight varied projects from all applications, including funded and unfunded projects. The sampling frame/selection criteria were developed based on the review of documentation, including applications and project reports. We selected six successful and two unsuccessful applications with representation from each SPHR member, aiming for a spread across topics and SPHR programmes. We interviewed one researcher and one practitioner involved in each sampled application/project. The lead researcher for each application was contacted first and, when they agreed to be interviewed, was asked for the contact details of their key practice partner, who was then approached for interview. In total, 34 researchers and practitioners were approached for interview; 14 people did not respond to the invitation and reminders (7 practitioners and 7 researchers), three people (2 academics and 1 practitioner) were not available for interviews during the fieldwork period, and two people had left the organisation they were working for at the time of their PHPES project and could not be reached.

Topic guides for practitioners and researchers (see Additional file 1: Files 2 and 3) included factors that make for a successful application, potential tensions between responsiveness and generalisability, and mechanisms for impact on policy and practice. Interviews were audiotaped and transcribed and a thematic analysis undertaken using NVivo software. To ensure we captured the breadth of relevant views on the programme, we also shared these initial findings and sought views from academic colleagues at Imperial College London, who were not part of the SPHR at that time, and from the chair and members of the PHPES panel which reviewed applications.

Telephone interviews lasted between 30 and 75 min, with an average of about 45 min. All interviews were audio-recorded and transcribed verbatim. Transcripts were coded and analysed thematically using a coding framework [20] informed by the interview schedule and themes drawn from published literature. Verbatim quotes from participants are included to illustrate the main themes identified.

Work package 3: online survey of all applicants and their Local Authority (LA) colleagues not involved in applications, PHPES researchers, PHPES reviewers and SPHR executive members (September–October 2018)

The online survey was developed based on work package (WP) 1 and 2 findings; it aimed to elicit views on the factors that influence applications/effective delivery/dissemination of evaluations and impact on policy and practice. Additional questions collected information on future research needs and priorities. The survey was modelled on a template developed for another SPHR evaluation project which reviewed public involvement in the first five years of the School [21]. The project’s public advisers contributed to the development of the survey, which also drew on interviews with SPHR researchers and members of the public involved in research. The template was adapted for use in this study to reflect the themes emerging from the interviews (see Additional file 1: File 4).

The online survey was circulated widely among SPHR members, including the executive group, advisory board and administrators in each member organisation, with a request to cascade to their policy and practice partners and patient and public involvement (PPI) panels to ensure that a range of additional perspectives on the issues identified in the interviews was included and to identify any additional issues arising from projects not sampled. In spite of wide circulation of the email invitation and an email reminder two weeks before the closing date of the survey, only 19 completed questionnaires were returned. Given that the circulation included a large number of stakeholders (estimated at around 100–200), the survey response rate was 10–20% at most. Because of the low response rate and likely response bias, with responses more likely from those with the most experience with the programme, responses cannot be interpreted as reflecting overall views of stakeholders. The responses do however provide useful insight into the views of a wider range of stakeholders, in addition to the evidence from those directly involved in developing proposals and delivering projects.

Work package 4: exploring the implications of the study findings (September 2018)

Informed by findings from the first two work packages, a national workshop was organised in Sheffield to which a range of PHPES stakeholders (including lay representatives/community members/public health practitioners and other LA colleagues/PHE) were invited to discuss the implications of the findings and how they might inform a future responsive funding scheme. Through a series of interactive discussions, closely facilitated by the research team, participants were invited to reflect on future selection criteria, such as contributions to major public health problems, potential for impact and scalability, and assessment of evaluability, and on suggestions for co-producing knowledge within PHPES projects and measuring their impact on policy and practice (see Additional file 1: File 5 for the programme of the national workshop).

Results

The Public Health Practice Evaluation Scheme (PHPES) was conceived as a response-mode funded evaluation programme. Fourteen PHPES projects were supported during the first five years of the SPHR’s work, covering a wide range of topics, types of public health programmes, and evaluation designs and methods. During the first SPHR programme, between 2013 and 2017, there were 81 applications made to the PHPES, from which the 14 funded evaluations were selected by a national panel. All funded projects were delivered, although a number were delayed or needed to adapt their methods for practical reasons, including changes in the delivery of the evaluated interventions. Evidence of early impact on local practices and policy was identified in some project reports. The success rate of applications and the sampling of projects across work packages is summarised in Table 1 below. Although it is inevitably difficult to ascribe causality, and there is often a lack of specific evidence in Local Authority (LA) policy or commissioning documents, there was evidence of policy changes or roll-out of programmes both during and after evaluations. For example, the Sheffield Housing+ programme addressed operational issues identified by the evaluation, and the Better Care Fund’s St Ives Falls Prevention Pilot in Cambridgeshire was rolled out while the pilot evaluation was still under way.

Table 1

Success rates of applications/projects sampled in each work package

| Work package | Submitted applications | Successful applications | Projects delivered | Evidence of early impact |
|---|---|---|---|---|
| WP1 | 81 | 14 | 14 | 14 a |
| WP2 (n = 15; 10 academics, 5 practitioners) | 8 (11 participants from funded projects) | 6 (4 participants from unfunded projects) | Not applicable b | Not applicable b |
| WP3 | 19 respondents | 8 with previous PHPES experience | Not applicable b | Not applicable b |
| WP4 | n = 15 | Not applicable | Not applicable b | Not applicable b |
| Total | 81 | 14 | 14 | 14 |