ABSTRACT
There are calls for policymakers to make greater use of research when formulating policies, so it is important that policy organisations have a range of tools and systems to support their staff in using research in their work. The aim of the present study was to measure the extent to which such tools and systems were available within six Australian agencies with a role in health policy, and to examine whether this was related to the extent to which their staff engaged with, and used, research in policymaking. The presence of relevant systems and tools was assessed via a structured interview, ORACLe, conducted with a senior executive from each agency. To measure research use, four policymakers from each agency undertook a structured interview, SAGE, which assesses and scores the extent to which policymakers engaged with (i.e. searched for, appraised, and generated) research, and used research in the development of a specific policy document. The results showed that all agencies had at least a moderate range of tools and systems in place, in particular policy development processes; resources to access and use research (such as journals, databases, libraries, and access to research experts); processes to generate new research; and mechanisms to establish relationships with researchers. Agencies were less likely, however, to provide research training for staff and leaders, or to have evidence-based processes for evaluating existing policies. For the majority of agencies, the availability of tools and systems was related to the extent to which policymakers engaged with and used research when developing policy documents. However, some agencies did not show this relationship, suggesting that other factors, notably the organisation's culture around research use, must also be considered.
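The agency-level relationship described above can be illustrated with a short sketch: correlate each agency's ORACLe score with the mean SAGE score of its four interviewed policymakers. All agency names and scores below are invented for illustration; they are not the study's data.

```python
# Hypothetical sketch: relating agency tool availability (ORACLe) to staff
# research use (SAGE). All scores below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One ORACLe score per agency, and four SAGE scores (one per policymaker).
oracle_scores = {"A": 7.2, "B": 5.1, "C": 8.0, "D": 4.3, "E": 6.5, "F": 5.8}
sage_scores = {
    "A": [6.1, 5.8, 6.4, 5.9], "B": [4.0, 4.5, 3.8, 4.2],
    "C": [6.8, 7.1, 6.5, 7.0], "D": [3.9, 3.5, 4.1, 3.6],
    "E": [5.2, 5.6, 5.0, 5.4], "F": [4.8, 4.4, 4.9, 4.6],
}

agencies = sorted(oracle_scores)
mean_sage = [sum(sage_scores[a]) / len(sage_scores[a]) for a in agencies]
r = pearson([oracle_scores[a] for a in agencies], mean_sage)
print(f"Pearson r across {len(agencies)} agencies: {r:.2f}")
```

With six agencies a correlation like this is descriptive only, which is consistent with the abstract's caveat that organisational culture may explain agencies that depart from the pattern.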
Subjects
Biomedical Research/statistics & numerical data, Capacity Building/statistics & numerical data, Health Policy, Health Services Research/statistics & numerical data, Organizations/statistics & numerical data, Administrative Personnel/standards, Administrative Personnel/statistics & numerical data, Australia, Biomedical Research/standards, Capacity Building/standards, Health Services Research/standards, Humans, Interviews as Topic, Organizations/standards, Policy Making, Reproducibility of Results, Surveys and Questionnaires

ABSTRACT
BACKGROUND: An intervention's success depends on how participants interact with it in local settings. Process evaluation examines these interactions, indicating why an intervention was or was not effective, and how it (and similar interventions) can be improved for better contextual fit. This is particularly important for innovative trials like Supporting Policy In health with Research: an Intervention Trial (SPIRIT), where causal mechanisms are poorly understood. SPIRIT tested a multi-component intervention designed to increase the capacity of health policymakers to use research. METHODS: Our mixed-methods process evaluation sought to explain variation in observed process effects across the six agencies that participated in SPIRIT. Data collection included observations of intervention workshops (n = 59), purposively sampled interviews (n = 76) and participant feedback forms (n = 553). Using a realist approach, data were coded for context-mechanism-process effect configurations (retroductive analysis) by two authors. RESULTS: Intervention workshops were very well received. There was greater variation in views regarding other aspects of SPIRIT, such as data collection, communication and the intervention's overall value. We identified nine inter-related mechanisms that were crucial for engaging participants in these policy settings: (1) Accepting the premise (agreeing with the study's assumptions); (2) Self-determination (participative choice); (3) The Value Proposition (seeing potential gain); (4) 'Getting good stuff' (identifying useful ideas, resources or connections); (5) Self-efficacy (believing 'we can do this!'); (6) Respect (feeling that SPIRIT understands and values one's work); (7) Confidence (believing in the study's integrity and validity); (8) Persuasive leadership (authentic and compelling advocacy from leaders); and (9) Strategic insider facilitation (local translation and mediation).
These findings were used to develop tentative explanatory propositions and to revise the programme theory. CONCLUSION: This paper describes how SPIRIT functioned in six policy agencies, including why strategies that worked well in one site were less effective in others. Findings indicate a complex interaction among participants' perceptions of the intervention, shifting contextual factors, and the form that the intervention took in each site. Our propositions provide transferable lessons about contextualised areas of strength and weakness that may be useful in the development and implementation of similar studies.
Subjects
Administrative Personnel, Attitude, Capacity Building, Health Policy, Policy Making, Research, Feedback, Humans, Program Evaluation, Surveys and Questionnaires

ABSTRACT
BACKGROUND: Capacity building strategies are widely used to increase the use of research in policy development. However, a lack of well-validated measures for policy contexts has hampered efforts to identify priorities for capacity building and to evaluate the impact of strategies. We aimed to address this gap by developing SEER (Seeking, Engaging with and Evaluating Research), a self-report measure of individual policymakers' capacity to engage with and use research. METHODS: We used the SPIRIT Action Framework to identify pertinent domains and guide development of items for measuring each domain. Scales covered (1) individual capacity to use research (confidence in using research, value placed on research, individual perceptions of the value their organisation places on research, supporting tools and systems), (2) actions taken to engage with research and researchers, and (3) use of research to inform policy (extent and type of research use). A sample of policymakers engaged in health policy development provided data to examine scale reliability (internal consistency, test-retest) and validity (relation to measures of similar concepts, relation to a measure of intention to use research, internal structure of the individual capacity scales). RESULTS: Response rates were 55% (150/272 people, 12 agencies) for the validity and internal consistency analyses, and 54% (57/105 people, 9 agencies) for test-retest reliability. The individual capacity scales demonstrated adequate internal consistency reliability (alpha coefficients > 0.7 for all four scales) and test-retest reliability (intra-class correlation coefficients > 0.7 for three scales and 0.59 for the fourth). Scores on the individual capacity scales converged as predicted with measures of similar concepts (moderate correlations of > 0.4), and confirmatory factor analysis provided evidence that the scales measured related but distinct concepts.
Items in each of these four scales related as predicted to concepts in the measurement model derived from the SPIRIT Action Framework. Evidence about the reliability and validity of the research engagement actions and research use scales was equivocal. CONCLUSIONS: Initial testing of SEER suggests that the four individual capacity scales may be used in policy settings to examine current capacity and identify areas for capacity building. The relation between capacity, research engagement actions and research use requires further investigation.
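For readers unfamiliar with the internal consistency statistic reported above (alpha coefficients > 0.7), Cronbach's alpha can be computed directly from item-level responses. A minimal sketch follows, using invented Likert-scale data rather than the SEER dataset:

```python
# Minimal sketch of Cronbach's alpha (internal consistency) on invented
# Likert-scale responses; this is NOT the SEER data, just an illustration.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

def cronbach_alpha(responses):
    """responses: one list of item scores per respondent."""
    k = len(responses[0])                      # number of items in the scale
    items = list(zip(*responses))              # transpose: scores per item
    item_vars = sum(variance(list(col)) for col in items)
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Five respondents answering a four-item scale (scores 1-5).
data = [
    [4, 4, 5, 4],
    [3, 3, 3, 4],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
]
print(f"alpha = {cronbach_alpha(data):.2f}")
```

Values above 0.7 are conventionally read as adequate internal consistency, which is the threshold the abstract applies to the four individual capacity scales.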
Subjects
Administrative Personnel, Health Policy, Research/statistics & numerical data, Evidence-Based Practice, Feasibility Studies, Humans, Pilot Projects, Policy Making, Professional Practice, Self Report, Surveys and Questionnaires, Translational Research, Biomedical

ABSTRACT
BACKGROUND: Rapid reviews are increasingly being used to help policy makers access research in short time frames. A clear articulation of the review's purpose, questions, scope, methods and reporting format is thought to improve the quality and generalisability of review findings. The aim of the study was to explore the effectiveness of knowledge brokering in improving the perceived clarity of rapid review proposals from the perspective of potential reviewers. To conduct the study, we drew on the Evidence Check program, in which policy makers draft a review proposal (a pre-knowledge-brokering proposal) and have a 1-hour session with a knowledge broker, who re-drafts the proposal based on the discussion (a post-knowledge-brokering proposal). METHODS: We asked 30 reviewers who had previously undertaken Evidence Check reviews to examine the quality of 60 pre- and 60 post-knowledge-brokering proposals. Reviewers were blind to whether the review proposals they received were pre- or post-knowledge-brokering. Using a six-point Likert scale, reviewers scored six questions examining clarity of information about the review's purpose, questions, scope, methods and format, and reviewers' confidence that they could meet policy makers' needs. Each reviewer was allocated two pre- and two post-knowledge-brokering proposals, randomly ordered, from the 60 reviews, ensuring no reviewer received the pre- and post-knowledge-brokering proposals from the same review. RESULTS: The results showed that knowledge brokering significantly improved the scores for all six questions addressing the perceived clarity of the review proposal and confidence in meeting policy makers' needs, with average changes of 0.68 to 1.23 from pre to post across the six domains. CONCLUSIONS: This study found that knowledge brokering increased the perceived clarity of information provided in Evidence Check rapid review proposals and the confidence of reviewers that they could meet policy makers' needs.
Further research is needed to identify how the knowledge brokering process achieves these improvements and to test the applicability of the findings in other rapid review programs.
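The effect reported above (average changes of 0.68 to 1.23 on a six-point scale) amounts, per question, to a difference between the mean rating of post- and pre-knowledge-brokering proposals. A toy sketch with invented ratings, not the study data:

```python
# Toy sketch of the pre/post comparison: each proposal version is rated on a
# six-point scale, and the effect for one question is the difference in mean
# rating between post- and pre-knowledge-brokering proposals.
# All ratings below are invented, not the study data.

def mean(xs):
    return sum(xs) / len(xs)

# Ratings for one question, e.g. clarity of the review's purpose.
pre_ratings = [3, 4, 3, 2, 4, 3, 3, 4]    # pre-knowledge-brokering proposals
post_ratings = [4, 5, 4, 4, 5, 4, 4, 5]   # post-knowledge-brokering proposals

change = mean(post_ratings) - mean(pre_ratings)
print(f"average change: {change:+.2f}")
```

Because reviewers were blinded and no reviewer rated both versions of the same review, the comparison is between groups of ratings per question rather than within-reviewer pairs.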
Subjects
Evidence-Based Medicine/standards, Policy Making, Review Literature as Topic, Controlled Before-After Studies, Evidence-Based Medicine/methods, Health Knowledge, Attitudes, Practice, Humans

ABSTRACT
BACKGROUND: Evidence-informed policymaking is more likely if organisations have cultures that promote research use and invest in resources that facilitate staff engagement with research. Measures of organisations' research use culture and capacity are needed to assess current capacity, identify opportunities for improvement, and examine the impact of capacity-building interventions. The aim of the current study was to develop a comprehensive system to measure and score organisations' capacity to engage with and use research in policymaking, which we named ORACLe (Organisational Research Access, Culture, and Leadership). METHOD: We used a multifaceted approach to develop ORACLe. Firstly, we reviewed the available literature to identify key domains of organisational tools and systems that may facilitate research use by staff. We interviewed senior health policymakers to verify the relevance and applicability of these domains. This information was used to generate an interview schedule focused on seven key domains of organisational capacity. The interview was pilot-tested within four Australian policy agencies. A discrete choice experiment (DCE) was then undertaken with an expert sample to establish the relative importance of these domains. These data were used to produce a scoring system for ORACLe. RESULTS: The resulting ORACLe interview comprises 23 questions addressing seven domains of organisational capacity and tools that support research use, including (1) documented processes for policymaking; (2) leadership training; (3) staff training; (4) research resources (e.g. database access); and systems to (5) generate new research, (6) undertake evaluations, and (7) strengthen relationships with researchers. From the DCE data, a conditional logit model was estimated to calculate total scores that took into account the relative importance of the seven domains.
The model indicated that our expert sample placed the greatest importance on domains (2), (3) and (4). CONCLUSION: We utilised qualitative and quantitative methods to develop a system to assess and score organisations' capacity to engage with and apply research to policy. Our measure assesses a broad range of capacity domains and identifies the relative importance of these capacities. ORACLe data can be used by organisations keen to increase their use of evidence to identify areas for further development.
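A hedged sketch of how DCE-derived importance weights can combine seven domain scores into one total. The weights below are invented, merely echoing the finding that domains (2), (3) and (4) mattered most; the actual ORACLe weights come from the fitted conditional logit model:

```python
# Hypothetical sketch of ORACLe-style scoring: per-domain scores are combined
# using importance weights derived from a discrete choice experiment.
# Weights are invented for illustration (loosely reflecting that domains 2-4
# were rated most important); the real weights come from the fitted model.

raw_weights = {
    1: 0.8,  # documented processes for policymaking
    2: 1.6,  # leadership training
    3: 1.5,  # staff training
    4: 1.4,  # research resources (e.g. database access)
    5: 1.0,  # systems to generate new research
    6: 0.9,  # systems to undertake evaluations
    7: 1.0,  # relationships with researchers
}

# Normalise so the weights sum to 1, then take a weighted mean of the
# per-domain scores (assume each domain is scored 1-3 in the interview).
total_w = sum(raw_weights.values())
weights = {d: w / total_w for d, w in raw_weights.items()}

domain_scores = {1: 3, 2: 2, 3: 2, 4: 3, 5: 1, 6: 2, 7: 3}
total = sum(weights[d] * domain_scores[d] for d in domain_scores)
print(f"weighted total score: {total:.2f}")
```

Because the weights are normalised, the total stays on the same 1-3 range as the individual domain scores, so agencies can be compared on a single number while domains of greater importance contribute more to it.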