Results 1 - 17 of 17
1.
BMC Med Res Methodol ; 22(1): 227, 2022 08 15.
Article in English | MEDLINE | ID: mdl-35971057

ABSTRACT

BACKGROUND: Studies have shown that data collection by medical record abstraction (MRA) is a significant source of error in clinical research studies relying on secondary use data. Yet, the quality of data collected using MRA is seldom assessed. We employed a novel, theory-based framework for data quality assurance and quality control of MRA. The objective of this work is to determine the potential impact of formalized MRA training and continuous quality control (QC) processes on data quality over time. METHODS: We conducted a retrospective analysis of QC data collected during a cross-sectional medical record review of mother-infant dyads with Neonatal Opioid Withdrawal Syndrome. A confidence interval approach was used to calculate crude (Wald's method) and adjusted (generalized estimating equation) error rates over time. We calculated error rates using the number of errors divided by total fields ("all-field" error rate) and populated fields ("populated-field" error rate) as the denominators, to provide both an optimistic and a conservative measurement, respectively. RESULTS: On average, the ACT NOW CE Study maintained an error rate between 1% (optimistic) and 3% (conservative). Additionally, we observed a decrease of 0.51 percentage points with each additional QC Event conducted. CONCLUSIONS: Formalized MRA training and continuous QC resulted in lower error rates than have been found in previous literature and a decrease in error rates over time. This study newly demonstrates the importance of continuous process controls for MRA within the context of a multi-site clinical research study.


Subjects
Data Accuracy, Medical Records, Data Collection, Humans, Infant, Newborn, Research Design, Retrospective Studies
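The crude (Wald) interval used for the error rates above can be sketched in a few lines; the counts below are hypothetical, not the study's data:

```python
import math

def wald_ci(errors, fields, z=1.96):
    """Crude (Wald) 95% confidence interval for an MRA error rate.

    errors: erroneous fields found during a QC event
    fields: denominator, either all fields (optimistic) or
            populated fields only (conservative)
    """
    p = errors / fields
    half = z * math.sqrt(p * (1 - p) / fields)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical QC event: 150 errors across 10,000 reviewed fields
rate, lo, hi = wald_ci(150, 10_000)
print(f"error rate {rate:.2%} (95% CI {lo:.2%}-{hi:.2%})")
```

The study's adjusted rates came from a generalized estimating equation, which this sketch does not attempt.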
2.
J Clin Transl Sci ; 8(1): e17, 2024.
Article in English | MEDLINE | ID: mdl-38384919

ABSTRACT

Introduction: The focus on social determinants of health (SDOH) and their impact on health outcomes is evident in U.S. federal actions by the Centers for Medicare & Medicaid Services and the Office of the National Coordinator for Health Information Technology. The disproportionate impact of COVID-19 on minorities and communities of color heightened awareness of health inequities and the need for more robust SDOH data collection. Four Clinical and Translational Science Award (CTSA) hubs comprising the Texas Regional CTSA Consortium (TRCC) undertook an inventory to understand which contextual-level SDOH datasets are offered centrally and which individual-level SDOH are collected in structured fields in each electronic health record (EHR) system, potentially for all patients. Methods: Hub teams identified American Community Survey (ACS) datasets available via their enterprise data warehouses for research. Each hub's EHR analyst team identified structured fields available in their EHR for SDOH using a collection instrument based on a 2021 PCORnet survey and conducted an SDOH field completion rate analysis. Results: One hub offered ACS datasets centrally. All hubs collected eleven SDOH elements in structured EHR fields; two also collected Homeless and Veteran statuses. Field completeness at the four hubs was 80%-98% for Ethnicity and Race, but below 10% for Education, Financial Strain, Food Insecurity, Housing Security/Stability, Interpersonal Violence, Social Isolation, Stress, and Transportation. Conclusion: Completeness levels for SDOH data in the EHR at TRCC hubs varied and were low for most measures. Multiple system-level discussions may be necessary to increase standardized SDOH EHR-based data collection and harmonization to drive effective value-based care, health disparities research, translational interventions, and evidence-based policy.

4.
AMIA Jt Summits Transl Sci Proc ; 2023: 632-641, 2023.
Article in English | MEDLINE | ID: mdl-37350921

ABSTRACT

The 21st Century Cures Act allows the US Food and Drug Administration to consider real world data (RWD) for new indications or post approval study requirements. However, there is limited guidance as to the relative quality of different RWD types. The ACE-RWD program will compare the quality of EHR clinical data, EHR billing data, and linked healthcare claims data to traditional clinical trial data collection methods. ACE-RWD is being conducted alongside 5-10 ancillary studies, with five sponsors, across multiple therapeutic areas. Each ancillary study will be conducted after or in parallel with its parent clinical study at a minimum of two clinical sites. Although not required, it is anticipated that EHR clinical and EHR billing data will be obtained via EHR-to-eCRF mechanisms that are based on the Health Level Seven (HL7) Fast Healthcare Interoperability Resources (FHIR®) standard.
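The EHR-to-eCRF mechanisms mentioned here exchange data as HL7 FHIR resources. A minimal sketch of one such resource, a FHIR R4 Observation, with hypothetical patient and value data (LOINC 8867-4 is the standard code for heart rate):

```python
import json

# A heart-rate Observation as an EHR might expose it via a FHIR API.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org",
                         "code": "8867-4",
                         "display": "Heart rate"}]},
    "subject": {"reference": "Patient/example"},
    "valueQuantity": {"value": 72, "unit": "beats/minute",
                      "system": "http://unitsofmeasure.org",
                      "code": "/min"},
}
print(json.dumps(observation, indent=2))
```

An eSource pipeline would map fields like `valueQuantity` into the corresponding eCRF item rather than re-keying them by hand.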

5.
Res Sq ; 2023 Mar 27.
Article in English | MEDLINE | ID: mdl-37034600

ABSTRACT

Background: Medical record abstraction (MRA) is a commonly used method for data collection in clinical research, but is prone to error, and the influence of quality control (QC) measures is seldom and inconsistently assessed during the course of a study. We employed a novel, standardized MRA-QC framework as part of an ongoing observational study in an effort to control MRA error rates. In order to assess the effectiveness of our framework, we compared our error rates against traditional MRA studies that had not reported using formalized MRA-QC methods. Thus, the objective of this study was to compare the MRA error rates derived from the literature with the error rates found in a study using MRA as the sole method of data collection that employed an MRA-QC framework. Methods: Using a moderator meta-analysis employed with Q-test, the MRA error rates from the meta-analysis of the literature were compared with the error rate from a recent study that implemented formalized MRA training and continuous QC processes. Results: The MRA process for data acquisition in clinical research was associated with both high and highly variable error rates (70 - 2,784 errors per 10,000 fields). Error rates for the study using our MRA-QC framework were between 1.04% (optimistic, all-field rate) and 2.57% (conservative, populated-field rate) (or 104 - 257 errors per 10,000 fields), 4.00 - 5.53 percentage points less than the observed rate from the literature (p<0.0001). Conclusions: Review of the literature indicated that the accuracy associated with MRA varied widely across studies. However, our results demonstrate that, with appropriate training and continuous QC, MRA error rates can be significantly controlled during the course of a clinical research study.
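The percentage-point comparison in the Results is plain arithmetic over rates expressed per 10,000 fields; the 6.57% pooled literature rate is the one reported in the meta-analysis (entry 7):

```python
# Rates from the abstract, as errors per 10,000 fields
optimistic = 104 / 100.0      # -> 1.04% (all-field denominator)
conservative = 257 / 100.0    # -> 2.57% (populated-field denominator)
pooled_literature = 6.57      # % pooled MRA error rate from the literature

reduction = (round(pooled_literature - conservative, 2),
             round(pooled_literature - optimistic, 2))
print(reduction)  # matches the reported 4.00-5.53 percentage-point range
```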

6.
Contemp Clin Trials ; 128: 107144, 2023 05.
Article in English | MEDLINE | ID: mdl-36898625

ABSTRACT

BACKGROUND: eSource software is used to automatically copy a patient's electronic health record data into a clinical study's electronic case report form. However, there is little evidence to assist sponsors in identifying the best sites for multi-center eSource studies. METHODS: We developed an eSource site readiness survey. The survey was administered to principal investigators, clinical research coordinators, and chief research information officers at Pediatric Trial Network sites. RESULTS: A total of 61 respondents were included in this study (clinical research coordinator, 22; principal investigator, 20; and chief research information officer, 19). Clinical research coordinators and principal investigators ranked medication administration, medication orders, laboratory, medical history, and vital signs data as having the highest priority for automation. While most organizations used some electronic health record research functions (clinical research coordinator, 77%; principal investigator, 75%; and chief research information officer, 89%), only 21% of sites were using Fast Healthcare Interoperability Resources standards to exchange patient data with other institutions. Respondents generally gave lower readiness for change ratings to organizations that did not have a separate research information technology group and where researchers practiced in hospitals not operated by their medical schools. CONCLUSIONS: Site readiness to participate in eSource studies is not merely a technical problem. While technical capabilities are important, organizational priorities, structure, and the site's support of clinical research functions are equally important considerations.


Subjects
Electronic Health Records, Software, Humans, Child, Surveys and Questionnaires, Electronics, Data Collection
7.
Res Sq ; 2023 Dec 21.
Article in English | MEDLINE | ID: mdl-38196643

ABSTRACT

Background: In clinical research, prevention of systematic and random errors of data collected is paramount to ensuring reproducibility of trial results and the safety and efficacy of the resulting interventions. Over the last 40 years, empirical assessments of data accuracy in clinical research have been reported in the literature. Although there have been reports of data error and discrepancy rates in clinical studies, there has been little systematic synthesis of these results. Further, although notable exceptions exist, little evidence exists regarding the relative accuracy of different data processing methods. We aim to address this gap by evaluating error rates for 4 data processing methods. Methods: A systematic review of the literature identified through PubMed was performed to identify studies that evaluated the quality of data obtained through data processing methods typically used in clinical trials: medical record abstraction (MRA), optical scanning, single-data entry, and double-data entry. Quantitative information on data accuracy was abstracted from the manuscripts and pooled. Meta-analysis of single proportions based on the Freeman-Tukey transformation method and the generalized linear mixed model approach were used to derive an overall estimate of error rates across data processing methods used in each study for comparison. Results: A total of 93 papers (published from 1978 to 2008) meeting our inclusion criteria were categorized according to their data processing methods. The accuracy associated with data processing methods varied widely, with error rates ranging from 2 errors per 10,000 fields to 2,784 errors per 10,000 fields. MRA was associated with both high and highly variable error rates, having a pooled error rate of 6.57% (95% CI: 5.51, 7.72). In comparison, the pooled error rates for optical scanning, single-data entry, and double-data entry methods were 0.74% (0.21, 1.60), 0.29% (0.24, 0.35) and 0.14% (0.08, 0.20), respectively. 
Conclusions: Data processing and cleaning methods may explain a significant amount of the variability in data accuracy. MRA error rates, for example, were high enough to impact decisions made using the data and could necessitate increases in sample sizes to preserve statistical power. Thus, the choice of data processing methods can likely impact process capability and, ultimately, the validity of trial results.
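The Freeman-Tukey double-arcsine transform named in the Methods stabilizes the variance of proportions near zero, which error rates typically are. A sketch, with hypothetical per-study counts and a simple mean-then-back-transform standing in for the paper's mixed-model pooling:

```python
import math

def freeman_tukey(x, n):
    """Freeman-Tukey double-arcsine transform of the proportion x/n."""
    return (math.asin(math.sqrt(x / (n + 1)))
            + math.asin(math.sqrt((x + 1) / (n + 1))))

# Hypothetical studies: (errors, fields inspected)
studies = [(70, 10_000), (650, 10_000), (2_784, 10_000)]
ts = [freeman_tukey(x, n) for x, n in studies]

# Simplified back-transform via t ~ 2*arcsin(sqrt(p))
pooled = math.sin(sum(ts) / len(ts) / 2) ** 2
print(f"pooled error rate ~ {pooled:.2%}")
```

Real meta-analyses would also weight studies by size and use an exact inverse transform; this only illustrates the variance-stabilizing step.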

8.
Contemp Clin Trials ; 122: 106953, 2022 11.
Article in English | MEDLINE | ID: mdl-36202199

ABSTRACT

BACKGROUND: Single Institutional Review Boards (sIRB) are not achieving the benefits envisioned by the National Institutes of Health. The recently published Health Level Seven (HL7®) Fast Healthcare Interoperability Resources (FHIR®) data exchange standard seeks to improve sIRB operational efficiency. METHODS AND RESULTS: We conducted a study to determine whether the use of this standard would be economically attractive for sIRB workflows collectively and for Reviewing and Relying institutions. We examined four sIRB-associated workflows at a single institution: (1) Initial Study Protocol Application, (2) Site Addition for an Approved sIRB study, (3) Continuing Review, and (4) Medical and Non-Medical Event Reporting. Task-level information identified personnel roles and their associated hour requirements for completion. Tasks that would be eliminated by the data exchange standard were identified. Personnel costs were estimated using annual salaries by role. No tasks would be eliminated in the Initial Study Protocol Application or Medical and Non-Medical Event Reporting workflows through use of the proposed data exchange standard. Site Addition workflow hours would be reduced by 2.50 h per site (from 15.50 to 13.00 h) and Continuing Review hours would be reduced by 9.00 h per site per study year (from 36.50 to 27.50 h). Associated costs savings were $251 for the Site Addition workflow (from $1609 to $1358) and $1033 for the Continuing Review workflow (from $4110 to $3076). CONCLUSION: Use of the proposed HL7 FHIR® data exchange standard would be economically attractive for sIRB workflows collectively and for each entity participating in the new workflows.


Subjects
Electronic Health Records, Research Ethics Committees, Humans, Health Level Seven
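The reported reductions can be recomputed from the before/after totals given in the abstract:

```python
# (before, after) hours and personnel costs per workflow, from the abstract
workflows = {
    "Site Addition":     {"hours": (15.50, 13.00), "cost": (1609, 1358)},
    "Continuing Review": {"hours": (36.50, 27.50), "cost": (4110, 3076)},
}

savings = {name: (w["hours"][0] - w["hours"][1],
                  w["cost"][0] - w["cost"][1])
           for name, w in workflows.items()}
# The abstract reports $251 and $1033; recomputing from the rounded
# dollar totals gives $251 and $1034, a $1 rounding artifact.
print(savings)
```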
9.
Ther Innov Regul Sci ; 55(6): 1250-1257, 2021 11.
Article in English | MEDLINE | ID: mdl-34228318

ABSTRACT

BACKGROUND: The 21st Century Cures Act allows the US Food and Drug Administration (FDA) to utilize real-world data (RWD) to create real-world evidence (RWE) for new indications or post approval study requirements. We compared central adjudication with two insurance claims data sources to understand how endpoint accuracy differences impact RWE results. METHODS: We developed a decision analytic model to compare differences in efficacy (all-cause death, stroke and myocardial infarction) and safety (bleeding requiring transfusion) results for a simulated acute coronary syndrome antiplatelet therapy clinical trial. Endpoint accuracy metrics were derived from previous studies that compared centrally-adjudicated and insurance claims-based clinical trial endpoints. RESULTS: Efficacy endpoint results per 100 patients were similar for the central adjudication model (intervention event rate, 11.3; control, 13.7; difference, 2.4) and the prospective claims data collection model (intervention event rate, 11.2; control 13.6; difference, 2.3). However, the retrospective claims linking model's efficacy results were larger (intervention event rate, 14.6; control, 18.0; difference, 3.4). True positive event rate results (intervention, control and difference) for both insurance claims-based models were less than the central adjudication model due to false negative events. Differences in false positive event rates were responsible for differences in efficacy results for the two insurance claims-based models. CONCLUSION: Efficacy endpoint results differed by data source. Investigators need guidance to determine which data sources produce regulatory-grade RWE.


Subjects
Insurance, Myocardial Infarction, Stroke, Humans, Prospective Studies, Retrospective Studies
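The direction of the biases described here, false negatives shrinking claims-based event rates and false positives inflating them, can be illustrated with a one-line model; the accuracy parameters below are hypothetical, not the study's estimates:

```python
def observed_rate(true_rate, sensitivity, false_positive_rate):
    """Event rate seen in a claims data source: true events captured
    with `sensitivity`, plus spurious events recorded among the
    event-free at `false_positive_rate`."""
    return true_rate * sensitivity + (1 - true_rate) * false_positive_rate

# Hypothetical: claims miss 15% of true events and add 5% spurious ones
true_control_rate = 0.137                    # events per patient
seen = observed_rate(true_control_rate, 0.85, 0.05)
print(f"observed {seen:.3f} vs true {true_control_rate:.3f}")
```

With imperfect sensitivity alone the observed rate undershoots; adding false positives can push it above the true rate, which is the pattern the abstract describes for the retrospective claims-linking model.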
10.
JAMIA Open ; 2(1): 107-114, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30976757

ABSTRACT

OBJECTIVES: To identify factors impacting physician use of information charted by others. MATERIALS AND METHODS: A 4-round Delphi process was conducted with physicians and non-physicians publishing in the healthcare data quality literature to identify and characterize factors impacting physician use of information charted by others (other people or devices), either within or external to their organization. Factors with high average importance and reliability were categorized according to similarity of topic. RESULTS: Thirty-nine factors were ultimately identified as impacting physician use of information charted by others. Five categories of factors included aspects of: the information source, the information itself, the information user, the information system, and aspects of healthcare as an institution. In addition, 4 themes were identified: (1) value of narrative text in providing context, (2) importance of mental models and personal heuristics in deciding whether, and how to use information, (3) loss of confidence in, and decreased use of information due to errors encountered, and (4) existence of a trust hierarchy potentially influencing information use. DISCUSSION: Five similarly focused studies have recently probed clinician willingness to use information in decision-making. Our results mostly confirmed factors identified by prior studies, and uniquely identified aspects of the information user as important. CONCLUSION: According to the participants in this study, information quality is prominent among factors impacting physician use of information charted by others. Based on this and similar studies, it appears that despite concerns about information quality, physicians use information charted by others.

11.
Yearb Med Inform ; 28(1): 140-151, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31419826

ABSTRACT

OBJECTIVES: There exists a communication gap between the biomedical informatics community on one side and the computer science/artificial intelligence community on the other side regarding the meaning of the terms "semantic integration" and "knowledge representation". This gap leads to approaches that attempt to provide one-to-one mappings between data elements and biomedical ontologies. Our aim is to clarify the representational differences between traditional data management and semantic-web-based data management by providing use cases of clinical data and clinical research data re-representation. We discuss how and why one-to-one mappings limit the advantages of using Semantic Web Technologies (SWTs). METHODS: We employ commonly used SWTs, such as Resource Description Framework (RDF) and Ontology Web Language (OWL). We reuse pre-existing ontologies and ensure shared ontological commitment by selecting ontologies from a framework that fosters community-driven collaborative ontology development for biomedicine following the same set of principles. RESULTS: We demonstrate the results of providing SWT-compliant re-representation of data elements from two independent projects managing clinical data and clinical research data. Our results show how one-to-one mappings would hinder the exploitation of the advantages provided by using SWT. CONCLUSIONS: We conclude that SWT-compliant re-representation is an indispensable step, if using the full potential of SWT is the goal. Rather than providing one-to-one mappings, developers should provide documentation that links data elements to graph structures to specify the re-representation.


Subjects
Artificial Intelligence, Biological Ontologies, Data Management, Medical Informatics, Semantic Web, Biomedical Research, Common Data Elements, Humans, Interdisciplinary Communication, Knowledge Management, Neoplasms
12.
AMIA Jt Summits Transl Sci Proc ; 2019: 488-494, 2019.
Article in English | MEDLINE | ID: mdl-31259003

ABSTRACT

EHR-based phenotype development and validation are extremely time-consuming and have considerable monetary cost. The creation of a phenotype currently requires clinical experts and experts in the data to be queried. The new approach presented here demonstrates a computational alternative to the classification of patient cohorts based on automatic weighting of ICD codes. This approach was applied to data from six different clinics within the University of Arkansas for Medical Science (UAMS) health system. The results were compared with phenotype algorithms designed by clinicians and informaticians for asthma and melanoma. Relative to traditional phenotype development, this method shows potential to considerably reduce time requirements and monetary costs with comparable results.
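As a sketch of the idea of automatic ICD-code weighting (the abstract does not specify the paper's scheme; an idf-style weight and the codes below are assumed purely for illustration):

```python
import math
from collections import Counter

patients = {                      # hypothetical coded encounters
    "p1": ["J45.909", "J30.1"],   # J45.909: unspecified asthma
    "p2": ["J45.909"],
    "p3": ["C43.9"],              # C43.9: melanoma, unspecified site
}

# Codes concentrated in few patients get higher weight (idf-style)
n = len(patients)
doc_freq = Counter(code for codes in patients.values() for code in set(codes))
weights = {code: math.log(n / count) for code, count in doc_freq.items()}
print(weights)
```

A cohort classifier could then score each patient by summing the weights of their codes, replacing hand-built phenotype rules.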

13.
EGEMS (Wash DC) ; 5(1): 8, 2017 Jun 12.
Article in English | MEDLINE | ID: mdl-29881733

ABSTRACT

OBJECTIVE: To compare rule-based data quality (DQ) assessment approaches across multiple national clinical data sharing organizations. METHODS: Six organizations with established data quality assessment (DQA) programs provided documentation or source code describing current DQ checks. DQ checks were mapped to the categories within the data verification context of the harmonized DQA terminology. To ensure all DQ checks were consistently mapped, conventions were developed and four iterations of mapping performed. Difficult-to-map DQ checks were discussed with research team members until consensus was achieved. RESULTS: Participating organizations provided 11,026 DQ checks, of which 99.97 percent were successfully mapped to a DQA category. Of the mapped DQ checks (N=11,023), 214 (1.94 percent) mapped to multiple DQA categories. The majority of DQ checks mapped to the Atemporal Plausibility (49.60 percent), Value Conformance (17.84 percent), and Atemporal Completeness (12.98 percent) categories. DISCUSSION: Using the common DQA terminology, near-complete (99.97 percent) coverage across a wide range of DQA programs and specifications was reached. Comparing the distributions of mapped DQ checks revealed important differences between participating organizations. This variation may be related to each organization's stakeholder requirements, primary analytical focus, or the maturity of its DQA program. Although not within the scope of this study, mapping checks within the data validation context of the terminology may provide additional insights into DQA practice differences. CONCLUSION: A common DQA terminology provides a means to help organizations and researchers understand the coverage of their current DQA efforts as well as highlight potential areas for additional DQA development. Sharing DQ checks between organizations could help expand the scope of DQA across clinical data networks.
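The coverage figures in the Results follow directly from the reported counts:

```python
total_checks = 11_026   # DQ checks provided by participating organizations
mapped = 11_023         # successfully mapped to a DQA category
multi_mapped = 214      # mapped to more than one category

coverage_pct = round(mapped / total_checks * 100, 2)
multi_pct = round(multi_mapped / mapped * 100, 2)
print(coverage_pct, multi_pct)  # 99.97 1.94
```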

14.
J Am Med Inform Assoc ; 24(4): 737-745, 2017 Jul 01.
Article in English | MEDLINE | ID: mdl-28339721

ABSTRACT

OBJECTIVE: To assess and refine competencies for the clinical research data management profession. MATERIALS AND METHODS: Based on prior work developing and maintaining a practice standard and professional certification exam, a survey was administered to a captive group of clinical research data managers to assess professional competencies, types of data managed, types of studies supported, and necessary foundational knowledge. RESULTS: Respondents confirmed a set of 91 professional competencies. As expected, differences were seen in job tasks between early- to mid-career and mid- to late-career practitioners. Respondents indicated growing variability in types of studies for which they managed data and types of data managed. DISCUSSION: Respondents adapted favorably to the separate articulation of professional competencies vs foundational knowledge. The increases in the types of data managed and variety of research settings in which data are managed indicate a need for formal education in principles and methods that can be applied to different research contexts (ie, formal degree programs supporting the profession), and stronger links with the informatics scientific discipline, clinical research informatics in particular. CONCLUSION: The results document the scope of the profession and will serve as a foundation for the next revision of the Certified Clinical Data Manager TM exam. A clear articulation of professional competencies and necessary foundational knowledge could inform the content of graduate degree programs or tracks in areas such as clinical research informatics that will develop the current and future clinical research data management workforce.


Subjects
Certification, Medical Informatics/education, Medical Informatics/standards, Professional Competence/standards, Biomedical Research, Clinical Trials as Topic, Data Collection, History, 20th Century, Medical Informatics/history, United States
15.
AMIA Jt Summits Transl Sci Proc ; 2016: 279-85, 2016.
Article in English | MEDLINE | ID: mdl-27570682

ABSTRACT

A fundamental premise of scientific research is that it should be reproducible. However, the specific requirements for reproducibility of research using electronic health record (EHR) data have not been sufficiently articulated. There is no guidance for researchers about how to assess a given project and identify provisions for reproducibility. We analyze three different clinical research initiatives that use EHR data in order to define a set of requirements to reproduce the research using the original or other datasets. We identify specific project features that drive these requirements. The resulting framework will support the much-needed discussion of strategies to ensure the reproducibility of research that uses data from EHRs.

16.
EGEMS (Wash DC) ; 4(1): 1244, 2016.
Article in English | MEDLINE | ID: mdl-27713905

ABSTRACT

OBJECTIVE: Harmonized data quality (DQ) assessment terms, methods, and reporting practices can establish a common understanding of the strengths and limitations of electronic health record (EHR) data for operational analytics, quality improvement, and research. Existing published DQ terms were harmonized to a comprehensive unified terminology with definitions and examples and organized into a conceptual framework to support a common approach to defining whether EHR data is 'fit' for specific uses. MATERIALS AND METHODS: DQ publications, informatics and analytics experts, managers of established DQ programs, and operational manuals from several mature EHR-based research networks were reviewed to identify potential DQ terms and categories. Two face-to-face stakeholder meetings were used to vet an initial set of DQ terms and definitions that were grouped into an overall conceptual framework. Feedback received from data producers and users was used to construct a draft set of harmonized DQ terms and categories. Multiple rounds of iterative refinement resulted in a set of terms and organizing framework consisting of DQ categories, subcategories, terms, definitions, and examples. The harmonized terminology and logical framework's inclusiveness was evaluated against ten published DQ terminologies. RESULTS: Existing DQ terms were harmonized and organized into a framework by defining three DQ categories: (1) Conformance (2) Completeness and (3) Plausibility and two DQ assessment contexts: (1) Verification and (2) Validation. Conformance and Plausibility categories were further divided into subcategories. Each category and subcategory was defined with respect to whether the data may be verified with organizational data, or validated against an accepted gold standard, depending on proposed context and uses. The coverage of the harmonized DQ terminology was validated by successfully aligning to multiple published DQ terminologies. 
DISCUSSION: Existing DQ concepts, community input, and expert review informed the development of a distinct set of terms, organized into categories and subcategories. The resulting DQ terms successfully encompassed a wide range of disparate DQ terminologies. Operational definitions were developed to provide guidance for implementing DQ assessment procedures. The resulting structure is an inclusive DQ framework for standardizing DQ assessment and reporting. While our analysis focused on the DQ issues often found in EHR data, the new terminology may be applicable to a wide range of electronic health data such as administrative, research, and patient-reported data. CONCLUSION: A consistent, common DQ terminology, organized into a logical framework, is an initial step in enabling data owners and users, patients, and policy makers to evaluate and communicate data quality findings in a well-defined manner with a shared vocabulary. Future work will leverage the framework and terminology to develop reusable data quality assessment and reporting methods.
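The harmonized structure described above can be written down as a small data structure; the subcategory names follow the published framework (the abstract itself lists only the top-level categories and contexts):

```python
# Harmonized DQ terminology: three categories (two with subcategories)
# assessed in two contexts (verification vs validation).
DQ_FRAMEWORK = {
    "categories": {
        "Conformance": ["Value", "Relational", "Computational"],
        "Completeness": [],
        "Plausibility": ["Uniqueness", "Atemporal", "Temporal"],
    },
    "contexts": ["Verification", "Validation"],
}

def subcategories(category):
    """Look up subcategories for a DQ category (empty if undivided)."""
    return DQ_FRAMEWORK["categories"][category]
```

A DQA program could key its checks to (category, subcategory, context) tuples from this structure, which is how the mapping study in entry 13 aligned 11,026 checks.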

17.
PLoS One ; 10(10): e0138649, 2015.
Article in English | MEDLINE | ID: mdl-26484762

ABSTRACT

OBJECTIVE: Medical record abstraction (MRA) is often cited as a significant source of error in research data, yet MRA methodology has rarely been the subject of investigation. Lack of a common framework has hindered application of the extant literature in practice, and, until now, there were no evidence-based guidelines for ensuring data quality in MRA. We aimed to identify the factors affecting the accuracy of data abstracted from medical records and to generate a framework for data quality assurance and control in MRA. METHODS: Candidate factors were identified from published reports of MRA. Content validity of the top candidate factors was assessed via a four-round two-group Delphi process with expert abstractors with experience in clinical research, registries, and quality improvement. The resulting coded factors were categorized into a control theory-based framework of MRA. Coverage of the framework was evaluated using the recent published literature. RESULTS: Analysis of the identified articles yielded 292 unique factors that affect the accuracy of abstracted data. Delphi processes overall refuted three of the top factors identified from the literature based on importance and five based on reliability (six total factors refuted). Four new factors were identified by the Delphi. The generated framework demonstrated comprehensive coverage. Significant underreporting of MRA methodology in recent studies was discovered. CONCLUSION: The framework generated from this research provides a guide for planning data quality assurance and control for studies using MRA. The large number and variability of factors indicate that while prospective quality assurance likely increases the accuracy of abstracted data, monitoring the accuracy during the abstraction process is also required. Recent studies reporting research results based on MRA rarely reported data quality assurance or control measures, and even less frequently reported data quality metrics with research results. 
Given the demonstrated variability, these methods and measures should be reported with research results.


Subjects
Data Accuracy, Medical Records, Humans, Prospective Studies, Registries, Reproducibility of Results