1.
J Clin Transl Sci; 8(1): e17, 2024.
Article in English | MEDLINE | ID: mdl-38384919

ABSTRACT

Introduction: The focus on social determinants of health (SDOH) and their impact on health outcomes is evident in U.S. federal actions by the Centers for Medicare & Medicaid Services and the Office of the National Coordinator for Health Information Technology. The disproportionate impact of COVID-19 on minorities and communities of color heightened awareness of health inequities and the need for more robust SDOH data collection. Four Clinical and Translational Science Award (CTSA) hubs comprising the Texas Regional CTSA Consortium (TRCC) undertook an inventory to understand which contextual-level SDOH datasets are offered centrally and which individual-level SDOH elements are collected in structured fields, potentially for all patients, in each electronic health record (EHR) system.

Methods: Hub teams identified American Community Survey (ACS) datasets available for research via their enterprise data warehouses. Each hub's EHR analyst team identified the structured SDOH fields available in their EHR using a collection instrument based on a 2021 PCORnet survey and conducted an SDOH field completion-rate analysis.

Results: One hub offered ACS datasets centrally. All hubs collected eleven SDOH elements in structured EHR fields; two also collected Homeless and Veteran statuses. At four hubs, completeness was 80%-98% for Ethnicity and Race but below 10% for Education, Financial Strain, Food Insecurity, Housing Security/Stability, Interpersonal Violence, Social Isolation, Stress, and Transportation.

Conclusion: Completeness of SDOH data in the EHR varied across TRCC hubs and was low for most measures. Multiple system-level discussions may be necessary to increase standardized, EHR-based SDOH data collection and harmonization to drive effective value-based care, health disparities research, translational interventions, and evidence-based policy.
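A field completion-rate analysis like the one described above reduces to counting non-null structured fields per patient. A minimal pandas sketch, assuming a flat per-patient extract; the column names (ethnicity, food_insecurity, transportation) are hypothetical stand-ins, not the TRCC collection instrument itself:

    import pandas as pd

    # Hypothetical structured-field extract: one row per patient, one column
    # per SDOH element; None/NaN marks a field never populated for that patient.
    ehr = pd.DataFrame({
        "patient_id":      [1, 2, 3, 4, 5],
        "ethnicity":       ["H", "NH", "NH", "H", None],
        "food_insecurity": [None, None, "no", None, None],
        "transportation":  [None, None, None, None, None],
    })

    sdoh_fields = ["ethnicity", "food_insecurity", "transportation"]

    # Completion rate = share of patients with a non-null value in the field.
    completion = ehr[sdoh_fields].notna().mean().mul(100).round(1)
    print(completion.to_string())
    # ethnicity          80.0
    # food_insecurity    20.0
    # transportation      0.0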

2.
Contemp Clin Trials; 122: 106953, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36202199

ABSTRACT

BACKGROUND: Single Institutional Review Boards (sIRBs) are not achieving the benefits envisioned by the National Institutes of Health. The recently published Health Level Seven (HL7®) Fast Healthcare Interoperability Resources (FHIR®) data exchange standard seeks to improve sIRB operational efficiency.

METHODS AND RESULTS: We conducted a study to determine whether use of this standard would be economically attractive for sIRB workflows collectively and for Reviewing and Relying institutions. We examined four sIRB-associated workflows at a single institution: (1) Initial Study Protocol Application, (2) Site Addition for an Approved sIRB Study, (3) Continuing Review, and (4) Medical and Non-Medical Event Reporting. Task-level information identified personnel roles and the hours each role required for completion; tasks that would be eliminated by the data exchange standard were identified, and personnel costs were estimated using annual salaries by role. No tasks would be eliminated in the Initial Study Protocol Application or Medical and Non-Medical Event Reporting workflows. Site Addition workflow hours would be reduced by 2.50 h per site (from 15.50 to 13.00 h) and Continuing Review hours by 9.00 h per site per study year (from 36.50 to 27.50 h). Associated cost savings were $251 for the Site Addition workflow (from $1609 to $1358) and $1033 for the Continuing Review workflow (from $4110 to $3076).

CONCLUSION: Use of the proposed HL7 FHIR® data exchange standard would be economically attractive for sIRB workflows collectively and for each entity participating in the new workflows.


Subjects
Electronic Health Records; Ethics Committees, Research; Humans; Health Level Seven
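The economic comparison behind entry 2's findings is straightforward arithmetic over per-workflow hour and cost estimates. A minimal sketch using the figures reported in the abstract (only the numbers come from the source; the dictionary layout is ours):

    # Baseline vs. post-FHIR hours and costs per workflow, from the abstract.
    # Hours are per site (Continuing Review: per site per study year).
    workflows = {
        "Site Addition":     {"hours": (15.50, 13.00), "cost": (1609, 1358)},
        "Continuing Review": {"hours": (36.50, 27.50), "cost": (4110, 3076)},
    }

    for name, w in workflows.items():
        h_before, h_after = w["hours"]
        c_before, c_after = w["cost"]
        print(f"{name}: saves {h_before - h_after:.2f} h and "
              f"${c_before - c_after}")
    # Site Addition: saves 2.50 h and $251
    # Continuing Review: saves 9.00 h and $1034
    #   (the abstract reports $1033, presumably rounded from unrounded salaries)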
3.
Ther Innov Regul Sci; 55(6): 1250-1257, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34228318

ABSTRACT

BACKGROUND: The 21st Century Cures Act allows the US Food and Drug Administration (FDA) to utilize real-world data (RWD) to create real-world evidence (RWE) for new indications or post-approval study requirements. We compared central adjudication with two insurance claims data sources to understand how differences in endpoint accuracy affect RWE results.

METHODS: We developed a decision-analytic model to compare differences in efficacy (all-cause death, stroke, and myocardial infarction) and safety (bleeding requiring transfusion) results for a simulated acute coronary syndrome antiplatelet therapy clinical trial. Endpoint accuracy metrics were derived from previous studies that compared centrally adjudicated and insurance claims-based clinical trial endpoints.

RESULTS: Efficacy endpoint results per 100 patients were similar for the central adjudication model (intervention event rate, 11.3; control, 13.7; difference, 2.4) and the prospective claims data collection model (intervention event rate, 11.2; control, 13.6; difference, 2.3). However, the retrospective claims linking model's efficacy results were larger (intervention event rate, 14.6; control, 18.0; difference, 3.4). True-positive event rates (intervention, control, and difference) for both insurance claims-based models were lower than for the central adjudication model because of false-negative events. Differences in false-positive event rates accounted for the differences in efficacy results between the two insurance claims-based models.

CONCLUSION: Efficacy endpoint results differed by data source. Investigators need guidance to determine which data sources produce regulatory-grade RWE.


Subjects
Insurance; Myocardial Infarction; Stroke; Humans; Prospective Studies; Retrospective Studies
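The pattern in entry 3's results (claims-based rates deflated by false negatives and inflated by false positives) can be reproduced with a toy misclassification model. A sketch under assumed accuracy values; the sensitivity, PPV, and true event rates below are invented, not the metrics used in the paper:

    def observed_rate(true_rate, sensitivity, ppv):
        """Event rate seen by an imperfect data source.

        True positives are the true events the source captures; false
        positives are inferred from its positive predictive value (PPV).
        """
        tp = true_rate * sensitivity      # events correctly captured
        fp = tp * (1.0 - ppv) / ppv       # events wrongly added
        return tp + fp

    # Hypothetical true event rates per 100 patients.
    true_intervention, true_control = 11.5, 14.0

    # Hypothetical accuracy profiles for the two claims-based sources.
    for label, sens, ppv in [("prospective claims", 0.90, 0.92),
                             ("retrospective claims", 0.95, 0.75)]:
        i = observed_rate(true_intervention, sens, ppv)
        c = observed_rate(true_control, sens, ppv)
        print(f"{label}: intervention {i:.1f}, control {c:.1f}, "
              f"difference {c - i:.1f}")

Because the observed rate scales with sensitivity/PPV, a low-PPV source inflates both arms and the between-arm difference, which matches the retrospective claims pattern reported above.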
4.
EGEMS (Wash DC); 5(1): 8, 2017 Jun 12.
Article in English | MEDLINE | ID: mdl-29881733

ABSTRACT

OBJECTIVE: To compare rule-based data quality (DQ) assessment approaches across multiple national clinical data sharing organizations.

METHODS: Six organizations with established data quality assessment (DQA) programs provided documentation or source code describing their current DQ checks. DQ checks were mapped to the categories within the data verification context of the harmonized DQA terminology. To ensure all DQ checks were consistently mapped, conventions were developed and four iterations of mapping were performed. Difficult-to-map DQ checks were discussed with research team members until consensus was achieved.

RESULTS: Participating organizations provided 11,026 DQ checks, of which 99.97 percent were successfully mapped to a DQA category. Of the mapped DQ checks (N = 11,023), 214 (1.94 percent) mapped to multiple DQA categories. The majority of DQ checks mapped to the Atemporal Plausibility (49.60 percent), Value Conformance (17.84 percent), and Atemporal Completeness (12.98 percent) categories.

DISCUSSION: Using the common DQA terminology, near-complete (99.97 percent) coverage was reached across a wide range of DQA programs and specifications. Comparing the distributions of mapped DQ checks revealed important differences between participating organizations. This variation may be related to an organization's stakeholder requirements, primary analytical focus, or the maturity of its DQA program. Mapping checks within the data validation context of the terminology, which was outside the scope of this study, may provide additional insight into differences in DQA practice.

CONCLUSION: A common DQA terminology provides a means to help organizations and researchers understand the coverage of their current DQA efforts and to highlight potential areas for additional DQA development. Sharing DQ checks between organizations could help expand the scope of DQA across clinical data networks.
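The mapping exercise reported in entry 4 amounts to tagging each concrete DQ check with one or more harmonized categories and summarizing the distribution. A minimal sketch; the checks and their category assignments are invented examples, not the 11,026 checks from the study:

    from collections import Counter

    # Each DQ check is tagged with one or more harmonized DQA categories
    # (hypothetical examples for illustration only).
    mapped_checks = [
        ("birth_date is a valid ISO date",          ["Value Conformance"]),
        ("height between 30 and 250 cm",            ["Atemporal Plausibility"]),
        ("encounter has a non-null patient_id",     ["Atemporal Completeness"]),
        ("death_date not before birth_date",        ["Temporal Plausibility"]),
        ("sex code in allowed value set, non-null", ["Value Conformance",
                                                     "Atemporal Completeness"]),
    ]

    # Checks mapped to multiple categories count once per category.
    counts = Counter(cat for _, cats in mapped_checks for cat in cats)
    total = sum(counts.values())
    for cat, n in counts.most_common():
        print(f"{cat}: {n} ({100 * n / total:.1f} percent)")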

5.
EGEMS (Wash DC); 4(1): 1244, 2016.
Article in English | MEDLINE | ID: mdl-27713905

ABSTRACT

OBJECTIVE: Harmonized data quality (DQ) assessment terms, methods, and reporting practices can establish a common understanding of the strengths and limitations of electronic health record (EHR) data for operational analytics, quality improvement, and research. Existing published DQ terms were harmonized into a comprehensive unified terminology with definitions and examples, and organized into a conceptual framework to support a common approach to defining whether EHR data are 'fit' for specific uses.

MATERIALS AND METHODS: DQ publications, informatics and analytics experts, managers of established DQ programs, and operational manuals from several mature EHR-based research networks were reviewed to identify potential DQ terms and categories. Two face-to-face stakeholder meetings were used to vet an initial set of DQ terms and definitions, which were grouped into an overall conceptual framework. Feedback from data producers and users was used to construct a draft set of harmonized DQ terms and categories. Multiple rounds of iterative refinement produced a set of terms and an organizing framework consisting of DQ categories, subcategories, terms, definitions, and examples. The inclusiveness of the harmonized terminology and logical framework was evaluated against ten published DQ terminologies.

RESULTS: Existing DQ terms were harmonized and organized into a framework defined by three DQ categories: (1) Conformance, (2) Completeness, and (3) Plausibility; and two DQ assessment contexts: (1) Verification and (2) Validation. The Conformance and Plausibility categories were further divided into subcategories. Each category and subcategory was defined with respect to whether the data may be verified against organizational data or validated against an accepted gold standard, depending on the proposed context and uses. The coverage of the harmonized DQ terminology was validated by successfully aligning it to multiple published DQ terminologies.

DISCUSSION: Existing DQ concepts, community input, and expert review informed the development of a distinct set of terms organized into categories and subcategories. The resulting DQ terms successfully encompassed a wide range of disparate DQ terminologies. Operational definitions were developed to guide the implementation of DQ assessment procedures. The resulting structure is an inclusive DQ framework for standardizing DQ assessment and reporting. While our analysis focused on the DQ issues often found in EHR data, the new terminology may be applicable to a wide range of electronic health data, such as administrative, research, and patient-reported data.

CONCLUSION: A consistent, common DQ terminology, organized into a logical framework, is an initial step in enabling data owners and users, patients, and policy makers to evaluate and communicate data quality findings in a well-defined manner with a shared vocabulary. Future work will leverage the framework and terminology to develop reusable data quality assessment and reporting methods.
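The category/subcategory/context structure described in entry 5 lends itself to a small lookup table. A sketch of the framework as reported in the abstract; the subcategory names are recalled from the terminology (entry 4 above names Value Conformance and Atemporal Plausibility), so treat the full lists as assumptions to verify against the paper:

    # The harmonized framework as a lookup structure: three categories (with
    # assumed subcategory lists; verify against the source) crossed with two
    # assessment contexts.
    FRAMEWORK = {
        "Conformance":  ["Value", "Relational", "Computational"],
        "Completeness": [],
        "Plausibility": ["Uniqueness", "Atemporal", "Temporal"],
    }
    CONTEXTS = ["Verification", "Validation"]

    def classify(category, subcategory=None, context="Verification"):
        """Return the harmonized label for a DQ check, validating the inputs."""
        if category not in FRAMEWORK:
            raise ValueError(f"unknown category: {category}")
        if context not in CONTEXTS:
            raise ValueError(f"unknown context: {context}")
        subs = FRAMEWORK[category]
        if subs and subcategory not in subs:
            raise ValueError(f"{category} requires a subcategory from {subs}")
        name = f"{subcategory} {category}" if subcategory else category
        return f"{name} ({context})"

    print(classify("Plausibility", "Atemporal"))
    # Atemporal Plausibility (Verification)
    print(classify("Completeness", context="Validation"))
    # Completeness (Validation)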
