ABSTRACT
The Human Phenotype Ontology (HPO) is a widely used resource that comprehensively organizes and defines the phenotypic features of human disease, enabling computational inference and supporting genomic and phenotypic analyses through semantic similarity and machine learning algorithms. The HPO has widespread applications in clinical diagnostics and translational research, including genomic diagnostics, gene-disease discovery, and cohort analytics. In recent years, groups around the world have developed translations of the HPO from English to other languages, and the HPO browser has been internationalized, allowing users to view HPO term labels and in many cases synonyms and definitions in ten languages in addition to English. Since our last report, a total of 2239 new HPO terms and 49235 new HPO annotations were developed, many in collaboration with external groups in the fields of psychiatry, arthrogryposis, immunology and cardiology. The Medical Action Ontology (MAxO) is a new effort to model treatments and other measures taken for clinical management. Finally, the HPO consortium is contributing to efforts to integrate the HPO and the GA4GH Phenopacket Schema into electronic health records (EHRs) with the goal of more standardized and computable integration of rare disease data in EHRs.
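As a rough illustration of the semantic-similarity analyses mentioned above (not the HPO consortium's own tooling), the sketch below computes a Resnik-style similarity, the information content of the most informative common ancestor, over a toy HPO-like hierarchy. The term IDs, edges, and annotation counts are invented for the example; in practice the same calculation runs over the full ontology and real disease-annotation frequencies.

```python
# Minimal sketch of Resnik-style semantic similarity over a toy HPO-like DAG.
# Term IDs, edges, and counts are illustrative only -- not real HPO content.
import math

# child -> parents (toy subgraph; "HP:A" plays the role of the root)
parents = {
    "HP:A": set(),
    "HP:B": {"HP:A"},
    "HP:C": {"HP:A"},
    "HP:D": {"HP:B"},
    "HP:E": {"HP:B", "HP:C"},
}

# how many annotated diseases mention each term (toy counts)
annotations = {"HP:A": 100, "HP:B": 40, "HP:C": 30, "HP:D": 10, "HP:E": 5}
total = annotations["HP:A"]

def ancestors(term):
    """Return the term plus all of its ancestors."""
    seen, stack = {term}, [term]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def information_content(term):
    """IC(t) = -log p(t), where p(t) is the annotation frequency of t."""
    return -math.log(annotations[term] / total)

def resnik_similarity(t1, t2):
    """Resnik similarity: IC of the most informative common ancestor."""
    common = ancestors(t1) & ancestors(t2)
    return max(information_content(t) for t in common)

print(resnik_similarity("HP:D", "HP:E"))  # shared ancestor HP:B -> IC(HP:B)
```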
Subjects
Biological Ontologies, Humans, Phenotype, Genomics, Algorithms, Rare Diseases
ABSTRACT
Practitioners of digital health are familiar with disjointed data environments that often inhibit effective communication among different elements of the ecosystem. This fragmentation leads in turn to problems such as inconsistencies between services and payments, wastage, and, notably, care that falls short of best practice. Although interoperable data have long been recognized as a potential solution, efforts to achieve interoperability have themselves been disjointed and inconsistent, producing numerous incompatible standards despite widespread agreement that fewer standards would enhance interoperability. This paper introduces a framework for understanding health care data needs and discusses the challenges and opportunities of open data standards in the field. It emphasizes the need to acknowledge diverse data standards, each catering to specific viewpoints and needs; proposes a categorization of health care data into three domains, each with distinct characteristics and challenges; and outlines overarching design requirements applicable to all domains as well as specific requirements unique to each domain.
Subjects
Delivery of Health Care, Humans
ABSTRACT
OBJECTIVE: The large-scale collection of observational data and digital technologies could help curb the COVID-19 pandemic. However, the coexistence of multiple Common Data Models (CDMs) and the lack of extract, transform, and load (ETL) tools between different CDMs create potential interoperability issues between data systems. The objective of this study is to design, develop, and evaluate an ETL tool that transforms data in the PCORnet CDM format into the OMOP CDM. METHODS: We developed an open-source ETL tool to facilitate data conversion from the PCORnet CDM to the OMOP CDM. The ETL tool was evaluated using a dataset of 1000 patients randomly selected from the PCORnet CDM at Mayo Clinic. Information loss, data mapping accuracy, and gap analyses were conducted to assess the performance of the ETL tool. We designed an experiment around a real-world COVID-19 surveillance task to assess the feasibility of the ETL tool, and we also assessed the tool's capacity for COVID-19 data surveillance using the data collection criteria of the MN EHR Consortium COVID-19 project. RESULTS: After the ETL process, all records of the 1000 patients from 18 PCORnet CDM tables were successfully transformed into 12 OMOP CDM tables. Information loss for all concept mappings was less than 0.61%. The string mapping process for unit concepts lost 2.84% of records. Almost all fields in the manual mapping process achieved 0% information loss, except the specialty concept mapping. Moreover, mapping accuracy for all fields was 100%. The COVID-19 surveillance task collected almost the same set of cases (99.3% overlap) from the original PCORnet CDM and the target OMOP CDM separately. Finally, all data elements for the MN EHR Consortium COVID-19 project could be captured from both the PCORnet CDM and the OMOP CDM. CONCLUSION: We demonstrated that our ETL tool satisfies the data conversion requirements between the PCORnet CDM and the OMOP CDM. This work should facilitate data retrieval, communication, sharing, and analysis across institutions, not only for COVID-19-related projects but also for other real-world, evidence-based observational studies.
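To make the kind of transformation described above concrete, here is a minimal, hypothetical sketch of a field-level mapping from a PCORnet-style DEMOGRAPHIC table to an OMOP-style person table. The column names follow the public CDM specifications as commonly documented, but the concept-ID values and helper names are illustrative assumptions and should be checked against the OMOP vocabulary; this is not the study's actual ETL code.

```python
# Illustrative field-level mapping: PCORnet DEMOGRAPHIC -> OMOP person.
# Concept IDs below are example values, to be verified against the OMOP vocabulary.
import pandas as pd

pcornet_demographic = pd.DataFrame({
    "PATID": ["p1", "p2"],
    "BIRTH_DATE": ["1980-05-01", "1975-11-20"],
    "SEX": ["F", "M"],  # PCORnet coded values
})

# PCORnet SEX -> OMOP gender_concept_id (example mapping table)
sex_to_gender_concept = {"F": 8532, "M": 8507}

def to_omop_person(df: pd.DataFrame) -> pd.DataFrame:
    """Transform a PCORnet-style demographic table into an OMOP-style person table."""
    birth = pd.to_datetime(df["BIRTH_DATE"])
    return pd.DataFrame({
        "person_id": range(1, len(df) + 1),
        "person_source_value": df["PATID"],
        "gender_concept_id": df["SEX"].map(sex_to_gender_concept).fillna(0).astype(int),
        "year_of_birth": birth.dt.year,
        "month_of_birth": birth.dt.month,
        "day_of_birth": birth.dt.day,
    })

print(to_omop_person(pcornet_demographic))
```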
Subjects
COVID-19, COVID-19/epidemiology, Databases, Factual, Electronic Health Records, Humans, Information Storage and Retrieval, Pandemics, SARS-CoV-2
ABSTRACT
Precision prevention embraces personalized prevention but includes broader factors such as social determinants of health to improve cardiovascular health. The quality, quantity, precision, and diversity of data relatable to individuals and communities continue to expand. New analytical methods can be applied to these data to create tools to attribute risk, which may allow a better understanding of cardiovascular health disparities. Interventions using these analytic tools should be evaluated to establish feasibility and efficacy for addressing cardiovascular disease disparities in diverse individuals and communities. Training in these approaches is important to create the next generation of scientists and practitioners in precision prevention. This state-of-the-art review is based on a workshop convened to identify current gaps in knowledge and methods used in precision prevention intervention research, discuss opportunities to expand trials of implementation science to close the health equity gaps, and expand the education and training of a diverse precision prevention workforce.
ABSTRACT
Objective: To determine the incidence of vestibular disorders in patients with SARS-CoV-2 compared to a control population. Study Design: Retrospective. Setting: Clinical data in the National COVID Cohort Collaborative (N3C) database. Methods: Deidentified patient data from the National COVID Cohort Collaborative (N3C) database were queried based on variant peak prevalence (untyped, alpha, delta, omicron 21K, and omicron 23A) from covariants.org to retrospectively analyze the incidence of vestibular disorders in patients with SARS-CoV-2 compared to a control population consisting of patients without documented evidence of COVID-19 infection during the same period. Results: Patients testing positive for COVID-19 were significantly more likely to have a vestibular disorder than the control population. Compared to control patients, the odds ratio of vestibular disorders was significantly elevated in patients with the untyped (odds ratio [OR], 2.39; confidence interval [CI], 2.29-2.50; P < 0.001), alpha (OR, 3.63; CI, 3.48-3.78; P < 0.001), delta (OR, 3.03; CI, 2.94-3.12; P < 0.001), omicron 21K (OR, 2.97; CI, 2.90-3.04; P < 0.001), and omicron 23A (OR, 8.80; CI, 8.35-9.27; P < 0.001) variants. Conclusions: The incidence of vestibular disorders differed between COVID-19 variants and was significantly elevated in COVID-19-positive patients compared to the control population. These findings have implications for patient counseling, and further research is needed to discern the long-term effects of these findings.
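For readers unfamiliar with the statistics reported above, the short sketch below shows how an odds ratio and its 95% confidence interval are typically derived from a 2x2 table (the Woolf/log method). The counts are fabricated for illustration and are not N3C data.

```python
# Worked example: odds ratio and 95% CI from a 2x2 table (Woolf/log method).
# Counts are made up for illustration only.
import math

a, b = 1200, 50000   # COVID-positive: with / without vestibular disorder
c, d = 800, 90000    # controls: with / without vestibular disorder

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```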
ABSTRACT
Importance: Understanding of SARS-CoV-2 infection in US children has been limited by the lack of large, multicenter studies with granular data. Objective: To examine the characteristics, changes over time, outcomes, and severity risk factors of children with SARS-CoV-2 within the National COVID Cohort Collaborative (N3C). Design, Setting, and Participants: A prospective cohort study of encounters with end dates before September 24, 2021, was conducted at 56 N3C facilities throughout the US. Participants included children younger than 19 years at initial SARS-CoV-2 testing. Main Outcomes and Measures: Case incidence and severity over time, demographic and comorbidity severity risk factors, vital sign and laboratory trajectories, clinical outcomes, acute COVID-19 vs multisystem inflammatory syndrome in children (MIS-C), and Delta vs pre-Delta variant differences for children with SARS-CoV-2. Results: A total of 1,068,410 children were tested for SARS-CoV-2 and 167,262 test results (15.6%) were positive (82,882 [49.6%] girls; median age, 11.9 [IQR, 6.0-16.1] years). Among the 10,245 children (6.1%) who were hospitalized, 1423 (13.9%) met the criteria for severe disease: mechanical ventilation (796 [7.8%]), vasopressor-inotropic support (868 [8.5%]), extracorporeal membrane oxygenation (42 [0.4%]), or death (131 [1.3%]). Male sex (odds ratio [OR], 1.37; 95% CI, 1.21-1.56), Black/African American race (OR, 1.25; 95% CI, 1.06-1.47), obesity (OR, 1.19; 95% CI, 1.01-1.41), and several pediatric complex chronic condition (PCCC) subcategories were associated with more severe disease. Vital signs and many laboratory test values from the day of admission were predictive of peak disease severity. Variables associated with increased odds for MIS-C vs acute COVID-19 included male sex (OR, 1.59; 95% CI, 1.33-1.90), Black/African American race (OR, 1.44; 95% CI, 1.17-1.77), age younger than 12 years (OR, 1.81; 95% CI, 1.51-2.18), obesity (OR, 1.76; 95% CI, 1.40-2.22), and not having a pediatric complex chronic condition (OR, 0.72; 95% CI, 0.65-0.80). The children with MIS-C had a more inflammatory laboratory profile and a more severe clinical phenotype, with higher rates of invasive ventilation (117 of 707 [16.5%] vs 514 of 8241 [6.2%]; P < .001) and need for vasoactive-inotropic support (191 of 707 [27.0%] vs 426 of 8241 [5.2%]; P < .001) compared with those who had acute COVID-19. Comparing children during the Delta vs pre-Delta eras, there was no significant change in hospitalization rate (1738 [6.0%] vs 8507 [6.2%]; P = .18) but lower odds of severe disease (179 [10.3%] vs 1242 [14.6%]; decreased by a factor of 0.67; 95% CI, 0.57-0.79; P < .001). Conclusions and Relevance: In this cohort study of US children with SARS-CoV-2, there were observed differences in demographic characteristics, preexisting comorbidities, and initial vital sign and laboratory values between severity subgroups. Taken together, these results suggest that early identification of children likely to progress to severe disease could be achieved using readily available data elements from the day of admission. Further work is needed to translate this knowledge into improved outcomes.
Subjects
COVID-19/epidemiology, Adolescent, Age Distribution, COVID-19/complications, COVID-19/diagnosis, COVID-19/therapy, COVID-19/virology, Child, Child, Preschool, Comorbidity, Disease Progression, Early Diagnosis, Female, Humans, Infant, Male, Risk Factors, SARS-CoV-2, Severity of Illness Index, Sociodemographic Factors, Systemic Inflammatory Response Syndrome/diagnosis, Systemic Inflammatory Response Syndrome/epidemiology, Systemic Inflammatory Response Syndrome/therapy, Systemic Inflammatory Response Syndrome/virology, United States/epidemiology, Vital Signs
ABSTRACT
Despite progress in the development of standards for describing and exchanging scientific information, the lack of easy-to-use standards for mapping between different representations of the same or similar objects in different databases poses a major impediment to data integration and interoperability. Mappings often lack the metadata needed to be correctly interpreted and applied. For example, are two terms equivalent or merely related? Are they narrow or broad matches? Or are they associated in some other way? Such relationships between the mapped terms are often not documented, which leads to incorrect assumptions and makes them hard to use in scenarios that require a high degree of precision (such as diagnostics or risk prediction). Furthermore, the lack of descriptions of how mappings were done makes it hard to combine and reconcile mappings, particularly curated and automated ones. We have developed the Simple Standard for Sharing Ontological Mappings (SSSOM) which addresses these problems by: (i) Introducing a machine-readable and extensible vocabulary to describe metadata that makes imprecision, inaccuracy and incompleteness in mappings explicit. (ii) Defining an easy-to-use simple table-based format that can be integrated into existing data science pipelines without the need to parse or query ontologies, and that integrates seamlessly with Linked Data principles. (iii) Implementing open and community-driven collaborative workflows that are designed to evolve the standard continuously to address changing requirements and mapping practices. (iv) Providing reference tools and software libraries for working with the standard. In this paper, we present the SSSOM standard, describe several use cases in detail and survey some of the existing work on standardizing the exchange of mappings, with the goal of making mappings Findable, Accessible, Interoperable and Reusable (FAIR). The SSSOM specification can be found at http://w3id.org/sssom/spec. Database URL: http://w3id.org/sssom/spec.
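As a sketch of what an SSSOM mapping set can look like in practice, the snippet below writes a one-row, table-based mapping to TSV. The column names follow the SSSOM specification as published at the URL above; the specific terms, predicate, justification, and confidence value are only illustrative and should not be read as authoritative mappings.

```python
# Minimal sketch of an SSSOM-style mapping set serialized as a simple TSV.
# Column names follow the SSSOM spec; the row content is an invented example.
import csv

rows = [
    {
        "subject_id": "HP:0012378",
        "subject_label": "Fatigue",
        "predicate_id": "skos:exactMatch",
        "object_id": "MP:0001402",          # example target term, illustrative
        "object_label": "hypoactivity",
        "mapping_justification": "semapv:LexicalMatching",
        "confidence": "0.85",
    },
]

with open("example.sssom.tsv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]), delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)
```

Because the format is a plain table, a mapping set like this can be filtered, joined, or versioned with ordinary data-science tooling, without parsing or querying the source ontologies.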
Subjects
Metadata, Semantic Web, Data Management, Databases, Factual, Workflow
ABSTRACT
OBJECTIVE: In response to COVID-19, the informatics community united to aggregate as much clinical data as possible to characterize this new disease and reduce its impact through collaborative analytics. The National COVID Cohort Collaborative (N3C) is now the largest publicly available HIPAA limited data set in US history, with over 6.4 million patients, and is a testament to a partnership of over 100 organizations. MATERIALS AND METHODS: We developed a pipeline for ingesting, harmonizing, and centralizing data from 56 contributing data partners using 4 federated Common Data Models. N3C data quality (DQ) review involves both automated and manual procedures. In the process, several DQ heuristics were discovered in our centralized context, both within the pipeline and during downstream project-based analysis. Feedback to the sites led to many local and centralized DQ improvements. RESULTS: Beyond well-recognized DQ findings, we discovered 15 heuristics relating to source Common Data Model conformance, demographics, COVID tests, conditions, encounters, measurements, observations, coding completeness, and fitness for use. Of 56 sites, 37 (66%) demonstrated issues through these heuristics, and these 37 sites improved after receiving feedback. DISCUSSION: We encountered site-to-site differences in DQ that would have been challenging to discover using federated checks alone. We have demonstrated that centralized DQ benchmarking reveals unique opportunities for DQ improvement that will support improved research analytics locally and in aggregate. CONCLUSION: By combining rapid, continual assessment of DQ with a large volume of multisite data, it is possible to support more nuanced scientific questions with the scale and rigor that they require.
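The following is a toy example of the flavor of automated data-quality heuristic described above, not the actual N3C pipeline: a missingness and plausible-range check over an OMOP-style measurement table. The concept ID and thresholds are placeholders chosen only to make the sketch runnable.

```python
# Toy data-quality heuristic: flag measurement rows that are missing a value
# or fall outside a plausible range. Concept ID and limits are placeholders.
import pandas as pd

measurements = pd.DataFrame({
    "person_id": [1, 2, 3],
    "measurement_concept_id": [3004249, 3004249, 3004249],  # example concept
    "value_as_number": [120.0, -5.0, None],                 # e.g., systolic BP
})

def flag_implausible(df, lo=40, hi=300):
    """Return rows that are missing a value or outside the plausible range [lo, hi]."""
    bad = df["value_as_number"].isna() | ~df["value_as_number"].between(lo, hi)
    return df[bad]

print(flag_implausible(measurements))
```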
Subjects
COVID-19, Cohort Studies, Data Accuracy, Health Insurance Portability and Accountability Act, Humans, United States
ABSTRACT
The biomedical research community relies on a diverse set of resources, both within their own institutions and at other research centers. In addition, an increasing number of shared electronic resources have been developed. Without effective means to locate and query these resources, it is challenging, if not impossible, for investigators to be aware of the myriad resources available, or to effectively perform resource discovery when the need arises. In this paper, we describe the development and use of the Biomedical Resource Ontology (BRO) to enable semantic annotation and discovery of biomedical resources. We also describe the Resource Discovery System (RDS) which is a federated, inter-institutional pilot project that uses the BRO to facilitate resource discovery on the Internet. Through the RDS framework and its associated Biositemaps infrastructure, the BRO facilitates semantic search and discovery of biomedical resources, breaking down barriers and streamlining scientific research that will improve human health.
Subjects
Biomedical Research, Database Management Systems, Documentation, Medical Informatics, Translational Research, Biomedical, Animals, Computational Biology, Humans, Internet, Semantics, User-Computer Interface
ABSTRACT
IMPORTANCE: SARS-CoV-2. OBJECTIVE: To determine the characteristics, changes over time, outcomes, and severity risk factors of SARS-CoV-2-affected children within the National COVID Cohort Collaborative (N3C). DESIGN: Prospective cohort study of patient encounters with end dates before May 27, 2021. SETTING: 45 N3C institutions. PARTICIPANTS: Children <19 years old at initial SARS-CoV-2 testing. MAIN OUTCOMES AND MEASURES: Case incidence and severity over time, demographic and comorbidity severity risk factors, vital sign and laboratory trajectories, clinical outcomes, and acute COVID-19 vs MIS-C contrasts for children infected with SARS-CoV-2. RESULTS: 728,047 children in the N3C were tested for SARS-CoV-2; of these, 91,865 (12.6%) were positive. Among the 5,213 (6%) hospitalized children, 685 (13%) met criteria for severe disease: mechanical ventilation (7%), vasopressor/inotropic support (7%), ECMO (0.6%), or death/discharge to hospice (1.1%). Male gender, African American race, older age, and several pediatric complex chronic condition (PCCC) subcategories were associated with higher clinical severity (p ≤ 0.05). Vital signs (all p ≤ 0.002) and many laboratory tests from the first day of hospitalization were predictive of peak disease severity. Children with severe (vs moderate) disease were more likely to receive antimicrobials (71% vs 32%, p < 0.001) and immunomodulatory medications (53% vs 16%, p < 0.001). Compared to those with acute COVID-19, children with MIS-C were more likely to be male, Black/African American, and 1 to 12 years old, and less likely to have asthma, diabetes, or a PCCC (p < 0.04). MIS-C cases demonstrated a more inflammatory laboratory profile and a more severe clinical phenotype, with higher rates of invasive ventilation (12% vs 6%) and need for vasoactive-inotropic support (31% vs 6%) compared to acute COVID-19 cases, respectively (p < 0.03). CONCLUSIONS: In the largest U.S. SARS-CoV-2-positive pediatric cohort to date, we observed differences in demographics, pre-existing comorbidities, and initial vital sign and laboratory test values between severity subgroups. Taken together, these results suggest that early identification of children likely to progress to severe disease could be achieved using readily available data elements from the day of admission. Further work is needed to translate this knowledge into improved outcomes.
ABSTRACT
Importance: The National COVID Cohort Collaborative (N3C) is a centralized, harmonized, high-granularity electronic health record repository that is the largest, most representative COVID-19 cohort to date. This multicenter data set can support robust evidence-based development of predictive and diagnostic tools and inform clinical care and policy. Objectives: To evaluate COVID-19 severity and risk factors over time and assess the use of machine learning to predict clinical severity. Design, Setting, and Participants: In a retrospective cohort study of 1,926,526 US adults with SARS-CoV-2 infection (polymerase chain reaction >99% or antigen <1%) and adult patients without SARS-CoV-2 infection who served as controls from 34 medical centers nationwide between January 1, 2020, and December 7, 2020, patients were stratified using a World Health Organization COVID-19 severity scale and demographic characteristics. Differences between groups over time were evaluated using multivariable logistic regression. Random forest and XGBoost models were used to predict severe clinical course (death, discharge to hospice, invasive ventilatory support, or extracorporeal membrane oxygenation). Main Outcomes and Measures: Patient demographic characteristics and COVID-19 severity using the World Health Organization COVID-19 severity scale and differences between groups over time using multivariable logistic regression. Results: The cohort included 174,568 adults who tested positive for SARS-CoV-2 (mean [SD] age, 44.4 [18.6] years; 53.2% female) and 1,133,848 adult controls who tested negative for SARS-CoV-2 (mean [SD] age, 49.5 [19.2] years; 57.1% female). Of the 174,568 adults with SARS-CoV-2, 32,472 (18.6%) were hospitalized, and 6565 (20.2%) of those had a severe clinical course (invasive ventilatory support, extracorporeal membrane oxygenation, death, or discharge to hospice). Of the hospitalized patients, mortality was 11.6% overall and decreased from 16.4% in March to April 2020 to 8.6% in September to October 2020 (P = .002 for monthly trend). Using 64 inputs available on the first hospital day, this study predicted a severe clinical course using random forest and XGBoost models (area under the receiver operating curve = 0.87 for both) that were stable over time. The factor most strongly associated with clinical severity was pH; this result was consistent across machine learning methods. In a separate multivariable logistic regression model built for inference, age (odds ratio [OR], 1.03 per year; 95% CI, 1.03-1.04), male sex (OR, 1.60; 95% CI, 1.51-1.69), liver disease (OR, 1.20; 95% CI, 1.08-1.34), dementia (OR, 1.26; 95% CI, 1.13-1.41), African American (OR, 1.12; 95% CI, 1.05-1.20) and Asian (OR, 1.33; 95% CI, 1.12-1.57) race, and obesity (OR, 1.36; 95% CI, 1.27-1.46) were independently associated with higher clinical severity. Conclusions and Relevance: This cohort study found that COVID-19 mortality decreased over time during 2020 and that patient demographic characteristics and comorbidities were associated with higher clinical severity. The machine learning models accurately predicted ultimate clinical severity using commonly collected clinical data from the first 24 hours of a hospital admission.
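As an illustration of the modeling approach summarized above (tree-ensemble prediction of a severe clinical course from day-of-admission variables, evaluated by AUROC), the sketch below trains a random forest on synthetic data. The features, coefficients, and data are invented placeholders and stand in for, rather than reproduce, the study's actual inputs.

```python
# Hedged sketch: random-forest severity prediction evaluated by AUROC.
# All data are synthetic; feature names only echo the kinds of inputs described.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(60, 18, n),     # age (years)
    rng.normal(7.4, 0.08, n),  # pH
    rng.normal(37.2, 1.0, n),  # temperature (C)
])
# synthetic outcome loosely tied to age and pH so the example is learnable
logit = 0.04 * (X[:, 0] - 60) - 8.0 * (X[:, 1] - 7.4)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```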
Subjects
COVID-19, Databases, Factual, Forecasting, Hospitalization, Models, Biological, Severity of Illness Index, Adult, Aged, Aged, 80 and over, COVID-19/ethnology, COVID-19/mortality, Comorbidity, Ethnicity, Extracorporeal Membrane Oxygenation, Female, Humans, Hydrogen-Ion Concentration, Male, Middle Aged, Pandemics, Respiration, Artificial, Retrospective Studies, Risk Factors, SARS-CoV-2, United States, Young Adult
ABSTRACT
Background: The majority of U.S. reports of COVID-19 clinical characteristics, disease course, and treatments are from single health systems or focused on one domain. Here we report the creation of the National COVID Cohort Collaborative (N3C), a centralized, harmonized, high-granularity electronic health record repository that is the largest, most representative U.S. cohort of COVID-19 cases and controls to date. This multi-center dataset supports robust evidence-based development of predictive and diagnostic tools and informs critical care and policy. Methods and Findings: In a retrospective cohort study of 1,926,526 patients from 34 medical centers nationwide, we stratified patients using a World Health Organization COVID-19 severity scale and demographics; we then evaluated differences between groups over time using multivariable logistic regression. We established vital signs and laboratory values among COVID-19 patients with different severities, providing the foundation for predictive analytics. The cohort included 174,568 adults with severe acute respiratory syndrome associated with SARS-CoV-2 (PCR >99% or antigen <1%) as well as 1,133,848 adult patients who served as lab-negative controls. Among 32,472 hospitalized patients, mortality was 11.6% overall and decreased from 16.4% in March/April 2020 to 8.6% in September/October 2020 (p = 0.002 for monthly trend). In a multivariable logistic regression model, age, male sex, liver disease, dementia, African-American and Asian race, and obesity were independently associated with higher clinical severity. To demonstrate the utility of the N3C cohort for analytics, we used machine learning (ML) to predict clinical severity and risk factors over time. Using 64 inputs available on the first hospital day, we predicted a severe clinical course (death, discharge to hospice, invasive ventilation, or extracorporeal membrane oxygenation) using random forest and XGBoost models (AUROC 0.86 and 0.87, respectively) that were stable over time. The most powerful predictors in these models are patient age and widely available vital sign and laboratory values. The expected trajectories established for many vital signs and laboratory values among patients with different clinical severities validate observations from smaller studies and provide comprehensive insight into COVID-19 characterization in U.S. patients. Conclusions: This is the first description of an ongoing longitudinal observational study of patients seen in diverse clinical settings and geographical regions and is the largest COVID-19 cohort in the United States. Such data are the foundation for ML models that can be the basis for generalizable clinical decision support tools. The N3C Data Enclave is unique in providing transparent, reproducible, easily shared, versioned, and fully auditable data and analytic provenance for national-scale patient-level EHR data. The N3C is built for intensive ML analyses by academic, industry, and citizen scientists internationally. Many observational correlations can inform trial designs and care guidelines for this new disease.
ABSTRACT
OBJECTIVE: Coronavirus disease 2019 (COVID-19) poses societal challenges that require expeditious data and knowledge sharing. Though organizational clinical data are abundant, these are largely inaccessible to outside researchers. Statistical, machine learning, and causal analyses are most successful with large-scale data beyond what is available in any given organization. Here, we introduce the National COVID Cohort Collaborative (N3C), an open science community focused on analyzing patient-level data from many centers. MATERIALS AND METHODS: The Clinical and Translational Science Award Program and scientific community created N3C to overcome technical, regulatory, policy, and governance barriers to sharing and harmonizing individual-level clinical data. We developed solutions to extract, aggregate, and harmonize data across organizations and data models, and created a secure data enclave to enable efficient, transparent, and reproducible collaborative analytics. RESULTS: Organized in inclusive workstreams, we created legal agreements and governance for organizations and researchers; data extraction scripts to identify and ingest positive, negative, and possible COVID-19 cases; a data quality assurance and harmonization pipeline to create a single harmonized dataset; population of the secure data enclave with data, machine learning, and statistical analytics tools; dissemination mechanisms; and a synthetic data pilot to democratize data access. CONCLUSIONS: The N3C has demonstrated that a multisite collaborative learning health network can overcome barriers to rapidly build a scalable infrastructure incorporating multiorganizational clinical data for COVID-19 analytics. We expect this effort to save lives by enabling rapid collaboration among clinicians, researchers, and data scientists to identify treatments and specialized care and thereby reduce the immediate and long-term impacts of COVID-19.
Subjects
COVID-19, Data Science/organization & administration, Information Dissemination, Intersectoral Collaboration, Computer Security, Data Analysis, Ethics Committees, Research, Government Regulation, Humans, National Institutes of Health (U.S.), United States
ABSTRACT
Death, as a biological phenomenon, is well understood and a commonly employed endpoint for clinical trials. However, death identification and adjudication may be difficult for pragmatic clinical trials (PCTs) that rely upon electronic health record and patient-reported data. We propose a novel death identification and verification approach that is being used in the ToRsemide compArisoN with furoSemide FOR Management of Heart Failure (TRANSFORM-HF) PCT. We describe our hybrid approach, which gathers information from clinical trial sites, a centralized call center, and National Death Index searches. Our methods detail how a possible death is triggered from each of these components and the types of information we require to verify a triggered death. Together, these trigger and verification elements constitute the TRANSFORM-HF PCT's definition of a death event.
Subjects
Endpoint Determination, Heart Failure, Mortality, Clinical Trials as Topic, Data Collection, Diuretics/therapeutic use, Furosemide/therapeutic use, Heart Failure/drug therapy, Humans, Torsemide/therapeutic use
ABSTRACT
BACKGROUND: To assess the current state of clinical data interoperability, we evaluated the use of data standards across 38 large professional society registries. METHODS: The analysis included 4 primary components: 1) an environmental scan; 2) abstraction and cross-tabulation of clinical concepts and corresponding data elements from registry case report forms, dictionaries, and/or data models; 3) cross-tabulation of the same concepts across national common data models; and 4) specification of data element metadata needed to achieve native data interoperability. RESULTS: The registry analysis identified approximately 50 core clinical concepts. None were captured using the same data representation across all registries, and there was little implementation of data standards. To improve technical implementation, we specified 13 key metadata items for each concept to be used to achieve data consistency. CONCLUSION: The registry community has not benefited from and does not contribute to interoperability efforts. A common, authoritative process to specify and implement common data elements is greatly needed.
Subjects
Common Data Elements, Health Information Interoperability, Metadata, Registries/standards, Female, Humans, Male, Societies, United States
ABSTRACT
In children, levels of play, physical activity, and fitness are key indicators of health and disease and closely tied to optimal growth and development. Cardiopulmonary exercise testing (CPET) provides clinicians with biomarkers of disease and effectiveness of therapy, and researchers with novel insights into fundamental biological mechanisms reflecting an integrated physiological response that is hidden when the child is at rest. Yet the growth of clinical trials utilizing CPET in pediatrics remains stunted despite the current emphasis on preventative medicine and the growing recognition that therapies used in children should be clinically tested in children. There exists a translational gap between basic discovery and clinical application in this essential component of child health. To address this gap, the NIH provided funding through the Clinical and Translational Science Award (CTSA) program to convene a panel of experts. This report summarizes our major findings and outlines next steps necessary to enhance child health exercise medicine translational research. We present specific plans to bolster data interoperability, improve child health CPET reference values, stimulate formal training in exercise medicine for child health care professionals, and outline innovative approaches through which exercise medicine can become more accessible and advance therapeutics across the broad spectrum of child health.
Subjects
Child Welfare, Exercise, Organizational Innovation, Research, Translational Research, Biomedical, Biomarkers/metabolism, Calibration, Child, Health Planning Guidelines, Humans, Oxygen Consumption, Research Personnel, Semantics
ABSTRACT
This article describes the patient-centered Scalable National Network for Effectiveness Research (pSCANNER), which is part of the recently formed PCORnet, a national network composed of learning healthcare systems and patient-powered research networks funded by the Patient Centered Outcomes Research Institute (PCORI). It is designed to be a stakeholder-governed federated network that uses a distributed architecture to integrate data from three existing networks covering over 21 million patients in all 50 states: (1) VA Informatics and Computing Infrastructure (VINCI), with data from the Veterans Health Administration's 151 inpatient and 909 ambulatory care and community-based outpatient clinics; (2) the University of California Research exchange (UC-ReX) network, with data from UC Davis, Irvine, Los Angeles, San Francisco, and San Diego; and (3) SCANNER, a consortium of UCSD, Tennessee VA, and three federally qualified health systems in the Los Angeles area supplemented with claims and health information exchange data, led by the University of Southern California. Initial use cases will focus on three conditions: (1) congestive heart failure; (2) Kawasaki disease; (3) obesity. Stakeholders, such as patients, clinicians, and health service researchers, will be engaged to prioritize research questions to be answered through the network. We will use a privacy-preserving distributed computation model with synchronous and asynchronous modes. The distributed system will be based on a common data model that allows the construction and evaluation of distributed multivariate models for a variety of statistical analyses.
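A rough sketch of the privacy-preserving distributed computation idea described above: each site computes only local summaries (here, gradients of a logistic-regression loss) and shares aggregates rather than patient-level data. This is a generic illustration under those assumptions, not the pSCANNER implementation, and all names and data are synthetic.

```python
# Generic illustration of federated model fitting: only aggregate gradients
# leave each site; patient-level rows stay local. Synthetic data throughout.
import numpy as np

def local_gradient(X, y, beta):
    """Logistic-regression gradient computed entirely within one site."""
    p = 1 / (1 + np.exp(-X @ beta))
    return X.T @ (p - y), len(y)

def federated_fit(sites, n_features, lr=0.1, iters=200):
    beta = np.zeros(n_features)
    for _ in range(iters):
        grads = [local_gradient(X, y, beta) for X, y in sites]   # run at each site
        total_n = sum(n for _, n in grads)
        beta -= lr * sum(g for g, _ in grads) / total_n          # aggregate only
    return beta

rng = np.random.default_rng(1)
true_beta = np.array([0.5, -1.0])
sites = []
for _ in range(3):  # three participating networks, synthetic local data
    X = rng.normal(size=(500, 2))
    y = (rng.random(500) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
    sites.append((X, y))
print(federated_fit(sites, n_features=2))  # estimate approaches true_beta
```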
Subjects
Computer Communication Networks, Electronic Health Records/organization & administration, Information Dissemination, Outcome Assessment, Health Care/organization & administration, Patient-Centered Care, Confidentiality, Humans, United States, United States Department of Veterans Affairs
ABSTRACT
The Clinical and Translational Science Awards (CTSA) program represents a significant public investment. To realize its major goal of improving the public's health and reducing health disparities, the CTSA Consortium's Community Engagement Key Function Committee has undertaken the challenge of developing a taxonomy of community health indicators. The objective is to initiate a unified approach for monitoring progress in improving population health outcomes. Such outcomes include, importantly, the interests and priorities of community stakeholders, plus the multiple, overlapping interests of universities and of the public health and health care professions involved in the development and use of local health care indicators. The emerging taxonomy of community health indicators that the authors propose supports alignment of CTSA activities and facilitates comparative effectiveness research across CTSAs, thereby improving the health of communities and reducing health disparities. The proposed taxonomy starts at the broadest level, determinants of health; subsequently moves to more finite categories of community health indicators; and, finally, addresses specific quantifiable measures. To illustrate the taxonomy's application, the authors have synthesized 21 health indicator projects from the literature and categorized them into international, national, or local/special jurisdictions. They further categorized the projects within the taxonomy by ranking indicators with the greatest representation among projects and by ranking the frequency of specific measures. They intend for the taxonomy to provide common metrics for measuring changes to population health and, thus, extend the utility of the CTSA Community Engagement Logic Model. The input of community partners will ultimately improve population health.
Subjects
Academic Medical Centers/classification, Community Health Centers/classification, Health Status Indicators, Public Health/classification, Female, Health Status, Humans, Interdisciplinary Communication, Male, Quality of Health Care/classification, United States
ABSTRACT
OBJECTIVE: The Cross-Institutional Clinical Translational Research project explored a federated query tool and looked at how this tool can facilitate clinical trial cohort discovery by managing access to aggregate patient data located within unaffiliated academic medical centers. METHODS: The project adapted software from the Informatics for Integrating Biology and the Bedside (i2b2) program to connect three Clinical Translational Research Award sites: University of Washington, Seattle, University of California, Davis, and University of California, San Francisco. The project developed an iterative spiral software development model to support the implementation and coordination of this multisite data resource. RESULTS: By standardizing technical infrastructures, policies, and semantics, the project enabled federated querying of deidentified clinical datasets stored in separate institutional environments and identified barriers to engaging users for measuring utility. DISCUSSION: The authors discuss the iterative development and evaluation phases of the project and highlight the challenges identified and the lessons learned. CONCLUSION: The common system architecture and translational processes provide high-level (aggregate) deidentified access to a large patient population (>5 million patients), and represent a novel and extensible resource. Enhancing the network for more focused disease areas will require research-driven partnerships represented across all partner sites.
Subjects
Computer Communication Networks/standards, Databases as Topic/standards, Translational Research, Biomedical/organization & administration, Confidentiality, Humans, Information Storage and Retrieval, Logical Observation Identifiers Names and Codes, Software
ABSTRACT
In designing a comprehensive mechanism for managing informed consents and permissions for biomedical research involving human participants, a significant effort is dedicated to developing a standardized classification of these consents and permissions. In this paper, we describe the considerations and implications of this effort that should be addressed during the development of a Biomedical Research Permissions Ontology (RPO). It is hoped that this standardization will allow disparate research institutions to pool research data and associated consents and permissions in order to facilitate collaborative translational research projects across multiple institutions, and subsequent new breakthroughs in medicine, while providing: 1) essential built-in protections for the privacy and confidentiality of research participants, and 2) a mechanism for ensuring that researchers adhere to patients' intent regarding whether or not to participate in research.