Results 1 - 6 of 6
1.
JAMA; 331(11): 959-971, 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38502070

ABSTRACT

Importance: Child maltreatment is associated with serious negative physical, psychological, and behavioral consequences.

Objective: To review the evidence on primary care-feasible or referable interventions to prevent child maltreatment to inform the US Preventive Services Task Force.

Data Sources: PubMed, Cochrane Library, and trial registries through February 2, 2023; references, experts, and surveillance through December 6, 2023.

Study Selection: English-language, randomized clinical trials of youth through age 18 years (or their caregivers) with no known exposure or signs or symptoms of current or past maltreatment.

Data Extraction and Synthesis: Two reviewers assessed titles/abstracts, full-text articles, and study quality and extracted data; when at least 3 similar studies were available, meta-analyses were conducted.

Main Outcomes and Measures: Directly measured reports of child abuse or neglect (reports to Child Protective Services or removal of the child from the home); proxy measures of abuse or neglect (injury, visits to the emergency department, hospitalization); behavioral, developmental, emotional, mental, or physical health and well-being; mortality; harms.

Results: Twenty-five trials (N = 14 355 participants) were included; 23 included home visits. Evidence from 11 studies (5311 participants) indicated no differences in likelihood of reports to Child Protective Services within 1 year of intervention completion (pooled odds ratio, 1.03 [95% CI, 0.84-1.27]). Five studies (3336 participants) found no differences in removal of the child from the home within 1 to 3 years of follow-up (pooled risk ratio, 1.06 [95% CI, 0.37-2.99]). The evidence suggested no benefit for emergency department visits in the short term (<2 years) or for hospitalizations. The evidence was inconclusive for all other outcomes because of the limited number of trials on each outcome and imprecise results. Among 2 trials reporting harms, neither reported statistically significant differences. Contextual evidence indicated (1) widely varying practices when screening, identifying, and reporting child maltreatment to Child Protective Services, including variations by race or ethnicity; (2) widely varying accuracy of screening instruments; and (3) evidence that child maltreatment interventions may be associated with improvements in some social determinants of health.

Conclusion and Relevance: The evidence base on interventions feasible in or referable from primary care settings to prevent child maltreatment suggested no benefit or insufficient evidence for direct or proxy measures of child maltreatment. Little information was available about possible harms. Contextual evidence pointed to the potential for bias or inaccuracy in screening, identification, and reporting of child maltreatment but also highlighted the importance of addressing social determinants when intervening to prevent child maltreatment.
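The pooled odds ratios reported above are typically produced by inverse-variance weighting of per-study log odds ratios. A minimal sketch of that calculation, using made-up 2x2 counts rather than data from the review:

```python
import math

# Illustrative per-study 2x2 counts (events, non-events in intervention vs control);
# these numbers are invented for demonstration, not data from the review.
studies = [
    (12, 188, 15, 185),
    (30, 470, 28, 472),
    (8, 92, 7, 93),
]

weights, weighted_logs = [], []
for a, b, c, d in studies:
    log_or = math.log((a * d) / (b * c))   # per-study log odds ratio
    var = 1 / a + 1 / b + 1 / c + 1 / d    # Woolf variance of the log OR
    w = 1 / var                            # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_or)

pooled_log = sum(weighted_logs) / sum(weights)
se = math.sqrt(1 / sum(weights))
pooled_or = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * se), math.exp(pooled_log + 1.96 * se))
```

This is the fixed-effect version; a random-effects pooling (as often used in such reviews) would add a between-study variance term to each weight.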


Subject(s)
Child Abuse , Primary Health Care , Social Determinants of Health , Adolescent , Child , Humans , Advance Directives , Advisory Committees , Child Abuse/prevention & control , Child Abuse/statistics & numerical data , Emergency Service, Hospital/statistics & numerical data , Primary Health Care/methods , Primary Health Care/statistics & numerical data , United States/epidemiology , Child Protective Services/statistics & numerical data
2.
JAMA Netw Open; 7(6): e2417994, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38904959

ABSTRACT

Importance: Interventions that address needs such as low income, housing instability, and safety are increasingly appearing in the health care sector as part of multifaceted efforts to improve health and health equity, but evidence relevant to scaling these social needs interventions is limited.

Objective: To summarize the intensity and complexity of social needs interventions included in randomized clinical trials (RCTs) and assess whether these RCTs were designed to measure the causal effects of intervention components on behavioral, health, or health care utilization outcomes.

Evidence Review: This review of a scoping review was based on a Patient-Centered Outcomes Research Institute-funded evidence map of English-language, US-based RCTs of social needs interventions published between January 1, 1995, and April 6, 2023. Studies were assessed for features related to intensity (defined using modal values as providing as-needed interaction, 8 or more participant contacts, contacts occurring every 2 weeks or more often, encounters of 30 minutes or longer, contacts over 6 months or longer, or home visits), complexity (defined as addressing multiple social needs, having dedicated staff, involving multiple intervention components or practitioners, aiming to change multiple participant behaviors [knowledge, action, or practice], requiring or providing resources or active assistance with resources, and permitting tailoring), and the ability to assess causal inferences of components (assessing interventions, comparators, and context).

Findings: This review identified 77 RCTs in 93 publications with a total of 135 690 participants. Most articles (68 RCTs [88%]) reported 1 or more features of high intensity. All studies reported 1 or more features indicative of high complexity. Because most studies compared usual care with multicomponent interventions that were moderately or highly dependent on context and individual factors, their designs permitted causal inferences about overall effectiveness but not about individual components.

Conclusions and Relevance: Social needs interventions are complex, intense, and include multiple components. Our findings suggest that RCTs of these interventions address overall intervention effectiveness but are rarely designed to distinguish the causal effects of specific components despite being resource intensive. Future studies with hybrid effectiveness-implementation and sequential designs, along with more standardized reporting of intervention intensity and complexity, could help stakeholders assess the return on investment of these interventions.
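The intensity criteria listed above amount to a set of per-trial flags keyed to modal values. A simplified sketch of such feature coding; the field names and record shown are illustrative, not the authors' actual codebook:

```python
# Hypothetical record of one RCT's intervention characteristics; field names
# are invented for illustration, not taken from the review's codebook.
trial = {
    "as_needed_interaction": False,
    "num_contacts": 10,           # total participant contacts
    "contact_interval_days": 14,  # typical spacing between contacts
    "encounter_minutes": 45,      # typical encounter length
    "duration_months": 9,         # total intervention duration
    "home_visits": True,
}

def intensity_features(t):
    """Return which high-intensity features (per the modal-value criteria) a trial meets."""
    flags = {
        "as-needed interaction": t["as_needed_interaction"],
        ">=8 contacts": t["num_contacts"] >= 8,
        "contacts every <=2 weeks": t["contact_interval_days"] <= 14,
        "encounters >=30 min": t["encounter_minutes"] >= 30,
        "duration >=6 months": t["duration_months"] >= 6,
        "home visits": t["home_visits"],
    }
    return [name for name, met in flags.items() if met]

# A trial reporting 1 or more features counts as high intensity in the review.
high_intensity = len(intensity_features(trial)) >= 1
```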


Subject(s)
Randomized Controlled Trials as Topic , Humans
3.
Res Synth Methods; 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38895747

ABSTRACT

Accurate data extraction is a key component of evidence synthesis and critical to valid results. The advent of publicly available large language models (LLMs) has generated interest in these tools for evidence synthesis and created uncertainty about the choice of LLM. We compare the performance of two widely available LLMs (Claude 2 and GPT-4) for extracting prespecified data elements from 10 published articles included in a previously completed systematic review. We use prompts and full study PDFs to compare the outputs from the browser versions of Claude 2 and GPT-4. GPT-4 required use of a third-party plug-in to upload and parse PDFs. Accuracy was high for Claude 2 (96.3%). The accuracy of GPT-4 with the plug-in was lower (68.8%); however, most of the errors were due to the plug-in. Both LLMs correctly recognized when prespecified data elements were missing from the source PDF and generated correct information for data elements that were not reported explicitly in the articles. A secondary analysis demonstrated that, when provided selected text from the PDFs, Claude 2 and GPT-4 accurately extracted 98.7% and 100% of the data elements, respectively. Limitations include the narrow scope of the study PDFs used, that prompt development was completed using only Claude 2, and that we cannot guarantee the open-access articles were not used to train the LLMs. This study highlights the potential for LLMs to revolutionize data extraction but underscores the importance of accurate PDF parsing. For now, it remains essential for a human investigator to validate LLM extractions.
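The accuracy figures above reduce to element-wise agreement between each model's output and the reviewers' gold standard. A minimal sketch of that comparison, using made-up values rather than the study's data:

```python
# Illustrative gold-standard and LLM extractions keyed by (study, data element);
# the values are invented for demonstration, not taken from the study.
gold = {("study1", "sample_size"): "120", ("study1", "mean_age"): "54.2",
        ("study2", "sample_size"): "98",  ("study2", "mean_age"): "not reported"}
llm  = {("study1", "sample_size"): "120", ("study1", "mean_age"): "54.2",
        ("study2", "sample_size"): "89",  ("study2", "mean_age"): "not reported"}

def accuracy(gold, extracted):
    """Fraction of gold-standard data elements the extraction reproduces exactly."""
    correct = sum(extracted.get(k, "").strip().lower() == v.strip().lower()
                  for k, v in gold.items())
    return correct / len(gold)

acc = accuracy(gold, llm)  # 3 of 4 elements match -> 0.75
```

Note the gold standard includes "not reported" entries, mirroring the finding that both models correctly recognized missing data elements.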

4.
Res Synth Methods; 15(4): 576-589, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38432227

ABSTRACT

Data extraction is a crucial, yet labor-intensive and error-prone part of evidence synthesis. To date, efforts to harness machine learning for enhancing efficiency of the data extraction process have fallen short of achieving sufficient accuracy and usability. With the release of large language models (LLMs), new possibilities have emerged to increase efficiency and accuracy of data extraction for evidence synthesis. The objective of this proof-of-concept study was to assess the performance of an LLM (Claude 2) in extracting data elements from published studies, compared with human data extraction as employed in systematic reviews. Our analysis utilized a convenience sample of 10 English-language, open-access publications of randomized controlled trials included in a single systematic review. We selected 16 distinct types of data, posing varying degrees of difficulty (160 data elements across 10 studies). We used the browser version of Claude 2 to upload the portable document format of each publication and then prompted the model for each data element. Across 160 data elements, Claude 2 demonstrated an overall accuracy of 96.3% with a high test-retest reliability (replication 1: 96.9%; replication 2: 95.0% accuracy). Overall, Claude 2 made 6 errors on 160 data items. The most common errors (n = 4) were missed data items. Importantly, Claude 2's ease of use was high; it required no technical expertise or labeled training data for effective operation (i.e., zero-shot learning). Based on findings of our proof-of-concept study, leveraging LLMs has the potential to substantially enhance the efficiency and accuracy of data extraction for evidence syntheses.
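The workflow described above (upload one publication, then prompt the model once per prespecified data element, zero-shot) can be sketched as a simple loop. The `ask_llm` helper and the element names below are hypothetical stand-ins, since the study used the browser version of Claude 2 rather than an API:

```python
# Hypothetical stand-in for an LLM call; the study used the browser version of
# Claude 2, so this stub just returns a placeholder answer.
def ask_llm(document_text: str, question: str) -> str:
    return "not reported"  # a real implementation would query an LLM here

# Illustrative element names; the study prespecified 16 distinct types of data.
DATA_ELEMENTS = [
    "total sample size",
    "mean participant age",
    "primary outcome measure",
]

def extract(document_text: str) -> dict:
    """Prompt once per prespecified element, zero-shot (no labeled training data)."""
    template = ("From the study text below, report the {element}. "
                "If it is not reported, answer 'not reported'.\n\n{doc}")
    return {el: ask_llm(document_text, template.format(element=el, doc=document_text))
            for el in DATA_ELEMENTS}
```

Running the same loop repeatedly over the same publications, as the study did, yields the test-retest reliability figures reported above.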


Subject(s)
Machine Learning , Proof of Concept Study , Humans , Reproducibility of Results , Systematic Reviews as Topic , Randomized Controlled Trials as Topic , Algorithms , Information Storage and Retrieval/methods , Language , Software , Natural Language Processing , Research Design
5.
JAMA Netw Open; 7(7): e2420591, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38976263

ABSTRACT

Importance: The United States Preventive Services Task Force (USPSTF) has considered the topic of prevention of child maltreatment multiple times over its nearly 40-year history, each time concluding that the evidence is insufficient to recommend for or against interventions aimed at preventing this important health problem, with its significant negative sequelae, before it occurs. In the most recent evidence review, conducted from August 2021 to November 2023 and published in March 2024, the USPSTF considered contextual questions on the evidence for bias in reporting and diagnosis of maltreatment in addition to key questions regarding the effectiveness of interventions to prevent child maltreatment.

Observations: A comprehensive literature review found evidence of inaccuracies in risk assessment and of racial and ethnic bias in the reporting of child maltreatment and in the evaluation of injuries concerning for maltreatment, such as skull fractures. When children are incorrectly identified as being maltreated, harms such as unnecessary family separation may occur. Conversely, when children who are being maltreated are missed, harms such as ongoing injury to the child continue. Interventions focusing primarily on preventing child maltreatment did not demonstrate consistent benefit, or the information was insufficient. Additionally, the interventions may expose children to the risk of harm as a result of these inaccuracies and biases in reporting and evaluation. These inaccuracies and biases also complicate assessment of the evidence for making clinical prevention guidelines.

Conclusions and Relevance: There are several potential strategies for consideration in future efforts to evaluate interventions aimed at the prevention of child maltreatment while minimizing the risk of exposing children to known biases in reporting and diagnosis. Promising strategies to explore might include a broader array of outcome measures for addressing child well-being, using population-level metrics for child maltreatment, and assessments of policy-level interventions aimed at improving child and family well-being. These future considerations for research in addressing child maltreatment complement the USPSTF's research considerations on this topic. Both can serve as guides to researchers seeking to study the ways in which we can help all children thrive.


Subject(s)
Child Abuse , Humans , Child Abuse/prevention & control , Child Abuse/diagnosis , Child , United States , Advisory Committees , Child, Preschool , Risk Assessment/methods
6.
Environ Int; 186: 108602, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38555664

ABSTRACT

BACKGROUND: Observational epidemiologic studies provide critical data for the evaluation of the potential effects of environmental, occupational and behavioural exposures on human health. Systematic reviews of these studies play a key role in informing policy and practice. Systematic reviews should incorporate assessments of the risk of bias in results of the included studies.

OBJECTIVE: To develop a new tool, Risk Of Bias In Non-randomized Studies - of Exposures (ROBINS-E), to assess risk of bias in estimates from cohort studies of the causal effect of an exposure on an outcome.

METHODS AND RESULTS: ROBINS-E was developed by a large group of researchers from diverse research and public health disciplines through a series of working groups, in-person meetings and pilot testing phases. The tool aims to assess the risk of bias in a specific result (exposure effect estimate) from an individual observational study that examines the effect of an exposure on an outcome. A series of preliminary considerations informs the core ROBINS-E assessment, including details of the result being assessed and the causal effect being estimated. The assessment addresses bias within seven domains, through a series of 'signalling questions'. Domain-level judgements about risk of bias are derived from the answers to these questions, then combined to produce an overall risk of bias judgement for the result, together with judgements about the direction of bias.

CONCLUSION: ROBINS-E provides a standardized framework for examining potential biases in results from cohort studies. Future work will produce variants of the tool for other epidemiologic study designs (e.g. case-control studies). We believe that ROBINS-E represents an important development in the integration of exposure assessment, evidence synthesis and causal inference.
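The combination of domain-level judgements into an overall judgement can be illustrated with a simplified worst-domain rule. This is a sketch only: the actual ROBINS-E combination rules are more nuanced, and while the domain names below follow the tool, the severity ordering shown is an assumption for illustration:

```python
# Severity order for risk-of-bias judgements, least to most severe
# (simplified; ROBINS-E's actual combination rules are more nuanced).
LEVELS = ["Low", "Some concerns", "High", "Very high"]

# The seven bias domains assessed by the tool.
DOMAINS = [
    "confounding", "measurement of the exposure",
    "selection of participants", "post-exposure interventions",
    "missing data", "measurement of the outcome",
    "selection of the reported result",
]

def overall_risk(judgements: dict) -> str:
    """Combine per-domain judgements by taking the most severe (worst-domain rule)."""
    return max((judgements[d] for d in DOMAINS), key=LEVELS.index)

example = {d: "Low" for d in DOMAINS}
example["confounding"] = "Some concerns"
# overall_risk(example) -> "Some concerns"
```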


Subject(s)
Bias , Environmental Exposure , Humans , Environmental Exposure/statistics & numerical data , Follow-Up Studies , Observational Studies as Topic , Cohort Studies , Epidemiologic Studies , Risk Assessment/methods