Results 1-20 of 1,287
1.
J Alzheimers Dis; 2024 May 15.
Article in English | MEDLINE | ID: mdl-38759012

ABSTRACT

Background: Despite numerous past endeavors for the semantic harmonization of Alzheimer's disease (AD) cohort studies, an automatic tool has yet to be developed. Objective: As cohort studies form the basis of data-driven analysis, harmonizing them is crucial for cross-cohort analysis. We aimed to accelerate this task by constructing an automatic harmonization tool. Methods: We created a common data model (CDM) through cross-mapping data from 20 cohorts, three CDMs, and ontology terms, which was then used to fine-tune a BioBERT model. Finally, we evaluated the model using three previously unseen cohorts and compared its performance to a string-matching baseline model. Results: Here, we present our AD-Mapper interface for automatic harmonization of AD cohort studies, which outperformed a string-matching baseline on previously unseen cohort studies. We showcase our CDM comprising 1218 unique variables. Conclusion: AD-Mapper leverages semantic similarities in naming conventions across cohorts to improve mapping performance.
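The abstract contrasts AD-Mapper with a string-matching baseline. That baseline idea can be sketched in a few lines of standard-library Python; the variable and CDM term names below are hypothetical, chosen only for illustration:

```python
from difflib import SequenceMatcher

def best_match(variable: str, cdm_terms: list[str]) -> tuple[str, float]:
    """Map a cohort variable name to the most lexically similar CDM term,
    scored by normalized edit-based similarity in [0, 1]."""
    scored = [(term, SequenceMatcher(None, variable.lower(), term.lower()).ratio())
              for term in cdm_terms]
    return max(scored, key=lambda pair: pair[1])

# Hypothetical common-data-model terms, for illustration only
cdm = ["age_at_baseline", "mmse_total_score", "apoe_genotype"]
term, score = best_match("MMSE_Score", cdm)  # purely lexical, no semantics
```

A fine-tuned semantic model such as BioBERT can outperform this kind of lexical matcher precisely where cohorts name the same concept with dissimilar strings.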

2.
Sensors (Basel); 24(9), 2024 May 03.
Article in English | MEDLINE | ID: mdl-38733028

ABSTRACT

Interoperability is a central problem in digitization and System of Systems (SoS) engineering, which concerns the capacity of systems to exchange information and cooperate. The task to dynamically establish interoperability between heterogeneous cyber-physical systems (CPSs) at run-time is a challenging problem. Different aspects of the interoperability problem have been studied in fields such as SoS, neural translation, and agent-based systems, but there are no unifying solutions beyond domain-specific standardization efforts. The problem is complicated by the uncertain and variable relations between physical processes and human-centric symbols, which result from, e.g., latent physical degrees of freedom, maintenance, re-configurations, and software updates. Therefore, we surveyed the literature for concepts and methods needed to automatically establish SoSs with purposeful CPS communication, focusing on machine learning and connecting approaches that are not integrated in the present literature. Here, we summarize recent developments relevant to the dynamic interoperability problem, such as representation learning for ontology alignment and inference on heterogeneous linked data; neural networks for transcoding of text and code; concept learning-based reasoning; and emergent communication. We find that there has been a recent interest in deep learning approaches to establishing communication under different assumptions about the environment, language, and nature of the communicating entities. Furthermore, we present examples of architectures and discuss open problems associated with artificial intelligence (AI)-enabled solutions in relation to SoS interoperability requirements. Although these developments open new avenues for research, there are still no examples that bridge the concepts necessary to establish dynamic interoperability in complex SoSs, and realistic testbeds are needed.

3.
Cureus; 16(4): e57672, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38707055

ABSTRACT

Background and aim: In 2005, the Moroccan Ministry of Health established Magredial, a registry to track and monitor patients with end-stage renal disease (ESRD), with the aim of improving healthcare outcomes. After initial success, Magredial's activity declined, and the registry became inactive by 2015. Efforts are now underway to revive it. The main goal of this study is to investigate the feasibility of data transfer between the electronic medical records (EMRs) of Hassan II Hospital of Fes, Morocco, and the registry by achieving semantic interoperability between the two systems. Materials and methods: The initial phase of this study involved a detailed review of the existing literature, highlighting the importance of registries, especially in nephrology, and emphasizing the role of semantic interoperability in facilitating data sharing between EMRs and registries. The second phase, centered on the case study, analyzed the data architectures of both Magredial and the EMR of the nephrology department in detail to pinpoint areas of alignment and discrepancy. This step required cooperation between the nephrology and IT departments of Hassan II Hospital. Results: Our findings indicate a significant interoperability gap between the two systems, stemming from differences in their data architectures and semantic frameworks. These discrepancies severely impede the effective exchange of information between the systems. To address this challenge, a comprehensive restructuring of the EMR is proposed, designed to align the disparate systems and ensure compliance with the interoperability standards set forth by the Health Level 7 Clinical Document Architecture (HL7-CDA).
Implementing the proposed medical record approach is complex and time-consuming, requiring the commitment of healthcare professionals and adherence to ethical standards for patient consent and data privacy. Conclusions: Implementing this strategy is expected to facilitate the seamless automation of data transfer between the EMR and Magredial. It introduces a framework that could serve as a foundational model for a robust interoperability framework within nephrology information systems, in line with international standards. Ultimately, this initiative could lead to a nephrologist-shared health record across the country, enhancing patient care and data management within the specialty.
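HL7-CDA documents are XML in the `urn:hl7-org:v3` namespace. As a rough, deliberately non-conformant sketch of the kind of structure the proposed restructuring must target (the identifiers and OID below are hypothetical, and a valid CDA requires many more mandatory header elements):

```python
import xml.etree.ElementTree as ET

# Minimal sketch of an HL7 CDA document header; urn:hl7-org:v3 is the
# CDA namespace. Not a conformant document.
NS = "urn:hl7-org:v3"
ET.register_namespace("", NS)
doc = ET.Element(f"{{{NS}}}ClinicalDocument")
ET.SubElement(doc, f"{{{NS}}}title").text = "Nephrology Encounter Summary"
patient_role = ET.SubElement(ET.SubElement(doc, f"{{{NS}}}recordTarget"),
                             f"{{{NS}}}patientRole")
ET.SubElement(patient_role, f"{{{NS}}}id",
              extension="MRN-0001", root="2.999.1")  # hypothetical OID
xml_bytes = ET.tostring(doc, encoding="utf-8")
```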

4.
Article in English | MEDLINE | ID: mdl-38699518

ABSTRACT

The personalised oncology paradigm remains challenging to deliver despite technological advances in genomics-based identification of actionable variants combined with the increasing focus of drug development on these specific targets. To ensure we continue to build concerted momentum to improve outcomes across all cancer types, financial, technological and operational barriers need to be addressed. For example, complete integration and certification of the 'molecular tumour board' into 'standard of care' ensures a unified clinical decision pathway that both counteracts fragmentation and is the cornerstone of evidence-based delivery inside and outside of a research setting. Generally, integrated delivery has been restricted to specific (common) cancer types either within major cancer centres or small regional networks. Here, we focus on solutions in real-world integration of genomics, pathology, surgery, oncological treatments, data from clinical source systems and analysis of whole-body imaging as digital data that can facilitate cost-effectiveness analysis, clinical trial recruitment, and outcome assessment. This urgent imperative for cancer also extends across the early diagnosis and adjuvant treatment interventions, individualised cancer vaccines, immune cell therapies, personalised synthetic lethal therapeutics and cancer screening and prevention. Oncology care systems worldwide require proactive step-changes in solutions that include inter-operative digital working that can solve patient centred challenges to ensure inclusive, quality, sustainable, fair and cost-effective adoption and efficient delivery. Here we highlight workforce, technical, clinical, regulatory and economic challenges that prevent the implementation of precision oncology at scale, and offer a systematic roadmap of integrated solutions for standard of care based on minimal essential digital tools. 
These include unified decision support tools, quality control, data flows within an ethical and legal data framework, training and certification, monitoring and feedback. Bridging the technical, operational, regulatory and economic gaps demands the joint actions from public and industry stakeholders across national and global boundaries.

5.
JAMIA Open; 7(2): ooae023, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38751411

ABSTRACT

Objective: Integrating clinical research into routine clinical care workflows within electronic health record systems (EHRs) can be challenging, expensive, and labor-intensive. This case study presents a large-scale clinical research project conducted entirely within a commercial EHR during the COVID-19 pandemic. Case Report: The UCSD and UCSDH COVID-19 NeutraliZing Antibody Project (ZAP) aimed to evaluate antibody levels to SARS-CoV-2 virus in a large population at an academic medical center and examine the association between antibody levels and subsequent infection diagnosis. Results: The project rapidly and successfully enrolled and consented over 2000 participants, integrating the research trial with standing COVID-19 testing operations, staff, lab, and mobile applications. EHR integration increased enrollment, eased scheduling, supported survey distribution, and enabled the return of research results at low cost by utilizing existing resources. Conclusion: The case study highlights the potential benefits of EHR-integrated clinical research, which can expand the reach of research across multiple health systems and facilitate rapid learning during a global health crisis.

6.
Healthc Inform Res; 30(2): 93-102, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38755100

ABSTRACT

OBJECTIVES: The need for interoperability at the national level was highlighted in Korea, leading to a consensus on the importance of establishing national standards that align with international technological standards and reflect contemporary needs. This article aims to share insights into the background of the recent national health data standardization policy, the activities of the Health Data Standardization Taskforce, and the future direction of health data standardization in Korea. METHODS: To ensure health data interoperability, the Health Data Standardization Taskforce was jointly organized by the public and private sectors in December 2022. The taskforce operated three working groups. It reviewed international trends in interoperability standardization, assessed the current status of health data standardization, discussed its vision, mission, and strategies, engaged in short-term standardization activities, and established a governance system for standardization. RESULTS: On September 15, 2023, the notice of "Health Data Terminology and Transmission Standards" in Korea was thoroughly revised to improve the exchange of health information between information systems and ensure interoperability. This notice includes the Korea Core Data for Interoperability (KR CDI) and the Korea Core Data Transmission Standard (HL7 FHIR KR Core), which are outcomes of the taskforce's efforts. Additionally, to reinforce the standardized governance system, the Health-Data Standardization Promotion Committee was established. CONCLUSIONS: Active interest and support from medical informatics experts are needed for the development and widespread adoption of health data standards in Korea.
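HL7 FHIR resources, such as those constrained by the HL7 FHIR KR Core standard named above, are exchanged as JSON (or XML). A minimal base-FHIR Patient resource can be assembled as plain JSON; the sketch below does not reproduce any KR Core-specific constraints or extensions, and the identifier system is hypothetical:

```python
import json

# Minimal base FHIR R4 Patient resource as a plain dict. The KR Core
# profile adds Korea-specific constraints/extensions not reproduced here.
patient = {
    "resourceType": "Patient",
    "id": "example",
    "identifier": [{"system": "urn:example:mrn", "value": "12345"}],  # hypothetical system
    "name": [{"family": "Hong", "given": ["Gildong"]}],
    "gender": "male",
    "birthDate": "1980-01-01",
}
payload = json.dumps(patient, ensure_ascii=False)
```

Interoperability then comes from both sides agreeing on the resource structure and the profile's terminology bindings, rather than on any particular EHR's internal schema.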

7.
Article in German | MEDLINE | ID: mdl-38753021

ABSTRACT

The digital health progress hubs pilot the extensibility of the concepts and solutions of the Medical Informatics Initiative to improve regional healthcare and research. The six funded projects address different diseases, areas of regional healthcare, and methods of cross-institutional data linking and use. Despite the diversity of the scenarios and regional conditions, the technical, regulatory, and organizational challenges and barriers that the progress hubs encounter in the actual implementation of the solutions are often similar. This results in some common approaches to solutions, but also in political demands that go beyond the Health Data Utilization Act, which the progress hubs consider a welcome improvement. In this article, we present the digital progress hubs and discuss achievements, challenges, and approaches to solutions that enable the shared use of data from university hospitals and non-academic institutions in the healthcare system and can make a sustainable contribution to improving medical care and research.

8.
Article in German | MEDLINE | ID: mdl-38750239

ABSTRACT

Health data are extremely important in today's data-driven world. Through automation, healthcare processes can be optimized and clinical decisions can be supported. For any reuse of data, the quality, validity, and trustworthiness of the data are essential; only then can data be reused sensibly. Specific requirements for the description and coding of reusable data are defined in the FAIR guiding principles for data stewardship. Various national research associations and infrastructure projects in the German healthcare sector have already clearly positioned themselves on the FAIR principles: both the infrastructures of the Medical Informatics Initiative and the University Medicine Network operate explicitly on the basis of the FAIR principles, as do the National Research Data Infrastructure for Personal Health Data and the German Center for Diabetes Research. To ensure that a resource complies with the FAIR principles, the degree of FAIRness should first be determined (the FAIR assessment), followed by prioritization of improvement steps (FAIRification). Since 2016, a set of tools and guidelines has been developed for both steps, based on the different, domain-specific interpretations of the FAIR principles. Neighboring European countries have also invested in the development of national frameworks for semantic interoperability in the context of the FAIR (Findable, Accessible, Interoperable, Reusable) principles. Concepts for comprehensive data enrichment were developed to simplify data analysis, for example, in the European Health Data Space or via the Observational Health Data Sciences and Informatics network. With the support of the European Open Science Cloud, among others, structured FAIRification measures have already been taken for German health datasets.

9.
Heliyon; 10(7): e28861, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38601595

ABSTRACT

In the context of increasingly diversified blockchain technology, interoperability among heterogeneous blockchains has become key to further advancing the field. Existing cross-chain technologies, while facilitating data and asset exchange between different blockchains to some extent, have exposed issues such as insufficient security, low efficiency, and inconsistent standards. These issues create significant obstacles to scalability and seamless communication among blockchains within a multi-chain framework. To address them, this paper proposes an efficient method for cross-chain interaction in a multi-chain environment. Building upon the traditional sidechain model, the method employs smart contracts and hash time-locked contracts (HTLCs) to design a cross-chain interaction scheme. This approach decentralizes the execution of the locking, verifying, and unlocking stages of cross-chain transactions, effectively avoiding the centralization risks associated with third-party entities. It also greatly enhances the efficiency of fund transfers between the main chain and sidechains, while ensuring the security of cross-chain transactions to some extent. Additionally, this paper proposes a novel cross-chain data interaction strategy: through smart contracts on the main chain, data from sidechains can be uploaded, verified, and stored on the main chain, achieving convenient and efficient cross-chain data sharing. The contribution of this paper is the development of a decentralized protocol that coordinates the execution of cross-chain interactions without the need to trust external parties, thereby reducing the risk of centralization and enhancing security. Experimental results validate the effectiveness of our solution in increasing transaction security and efficiency, with significant improvements over existing models.
Our experiments emphasize the system's ability to handle a variety of transaction scenarios with improved throughput and reduced latency, highlighting the practical applicability and scalability of our approach.
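The hash time-locked contract (HTLC) at the core of such schemes can be sketched in Python: funds are claimable with the hash preimage before a deadline, and refundable to the sender afterwards. This is a toy model of the mechanism, not the paper's contract code:

```python
import hashlib

class HTLC:
    """Toy hash time-locked contract: funds unlock with the hash preimage
    before the deadline, or become refundable to the sender afterwards."""
    def __init__(self, hashlock: bytes, timelock: float, amount: int):
        self.hashlock, self.timelock, self.amount = hashlock, timelock, amount
        self.claimed = False

    def claim(self, preimage: bytes, now: float) -> bool:
        """Claim succeeds only with the correct preimage before the deadline."""
        if (not self.claimed and now < self.timelock
                and hashlib.sha256(preimage).digest() == self.hashlock):
            self.claimed = True
            return True
        return False

    def refund(self, now: float) -> bool:
        """Refund is possible only after the deadline, if never claimed."""
        return not self.claimed and now >= self.timelock

secret = b"cross-chain-secret"                      # revealed on the other chain
contract = HTLC(hashlib.sha256(secret).digest(), timelock=100.0, amount=10)
ok = contract.claim(secret, now=50.0)
```

Revealing the preimage on one chain lets the counterparty claim on the other before its (shorter) timelock expires, which is what makes the swap atomic without a trusted third party.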

10.
medRxiv; 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38585743

ABSTRACT

Background: Electronic health records (EHR) are increasingly used for studying multimorbidities. However, concerns about accuracy, completeness, and EHRs being primarily designed for billing and administration raise questions about the consistency and reproducibility of EHR-based multimorbidity research. Methods: Utilizing phecodes to represent the disease phenome, we analyzed pairwise comorbidity strengths using a dual logistic regression approach and constructed multimorbidity as an undirected weighted graph. We assessed the consistency of the multimorbidity networks within and between two major EHR systems at local (nodes and edges), meso (neighboring patterns), and global (network statistics) scales. We present case studies to identify disease clusters and uncover clinically interpretable disease relationships. We provide an interactive web tool and a knowledge base combining data from multiple sources for online multimorbidity analysis. Findings: Analyzing data from 500,000 patients across the Vanderbilt University Medical Center and Mass General Brigham health systems, we observed a strong correlation in disease frequencies (Kendall's τ = 0.643) and comorbidity strengths (Pearson ρ = 0.79). Consistent network statistics across EHRs suggest a similar structure of multimorbidity networks at various scales. Comorbidity strengths and similarities of multimorbidity connection patterns align with disease genetic correlations. Graph-theoretic analyses revealed a consistent core-periphery structure, implying efficient network clustering through threshold graph construction. Using hydronephrosis as a case study, we demonstrated the network's ability to uncover clinically relevant disease relationships and provide novel insights. Interpretation: Our findings demonstrate the robustness of large-scale EHR data for studying complex disease interactions.
The alignment of multimorbidity patterns with genetic data suggests the potential utility for uncovering shared etiology of diseases. The consistent core-periphery network structure offers a strategic approach to analyze disease clusters. This work also sets the stage for advanced disease modeling, with implications for precision medicine. Funding: VUMC Biostatistics Development Award, UL1 TR002243, R21DK127075, R01HL140074, P50GM115305, R01CA227481.
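The study estimates pairwise comorbidity strength with dual logistic regressions; the underlying idea can be illustrated more simply as an observed-to-expected co-occurrence ratio over a toy cohort (the phenotype codes below are stand-ins for phecodes, not study data):

```python
def comorbidity_strength(patients: list[set[str]], a: str, b: str) -> float:
    """Observed-to-expected ratio of co-occurrence of phenotypes a and b:
    values > 1 mean the pair co-occurs more often than chance, and the
    ratio can serve as an edge weight in an undirected multimorbidity graph."""
    n = len(patients)
    p_a = sum(a in p for p in patients) / n
    p_b = sum(b in p for p in patients) / n
    p_ab = sum(a in p and b in p for p in patients) / n
    return p_ab / (p_a * p_b)

# Toy cohort: each patient is a set of phenotype codes (illustrative only)
cohort = [{"htn", "ckd"}, {"htn", "ckd"}, {"htn"}, {"copd"}]
strength = comorbidity_strength(cohort, "htn", "ckd")
```

Thresholding such edge weights yields the threshold graph construction mentioned in the abstract, from which core-periphery structure can be analyzed.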

11.
Front Digit Health; 6: 1249454, 2024.
Article in English | MEDLINE | ID: mdl-38645757

ABSTRACT

The AUD2IT algorithm is a tool for structuring the data collected during emergency treatment. Its goal is twofold: to structure the documentation of the data, and to provide a standardised data structure for the report during handover of an emergency patient. The AUD2IT algorithm was developed to give residents a documentation aid that helps structure medical reports without getting lost in unimportant details or forgetting important information. The sequence of anamnesis, clinical examination, differential diagnosis, technical diagnostics, interpretation, and therapy is an academic classification rather than a description of the real workflow; in a real setting, most of these steps take place simultaneously. The application of the AUD2IT algorithm should therefore also follow the real processes. A major advantage of the AUD2IT algorithm is that it can serve as a structure for the entire treatment process and can also be used as a handover protocol within that process, ensuring that the current state of knowledge is available at each team time-out. The PR-E-(AUD2IT) algorithm makes it possible to document a treatment process that, in principle, need not be limited to emergency medicine; it could also be used and further developed in outpatient care, for example to prepare and allocate needed resources at the general practitioner. The algorithm is a standardised tool that can be used by healthcare professionals at any level of training and gives the user a sense of security in their daily work.

12.
J Am Med Inform Assoc; 31(5): 1199-1205, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38563821

ABSTRACT

OBJECTIVE: This article presents the National Healthcare Safety Network (NHSN)'s approach to automation for public health surveillance using digital quality measures (dQMs) via an open-source tool (NHSNLink) and piloting of this approach using real-world data in a newly established collaborative program (NHSNCoLab). The approach leverages Health Level Seven Fast Healthcare Interoperability Resources (FHIR) application programming interfaces to improve data collection and reporting for public health and patient safety beginning with common, clinically significant, and preventable patient harms, such as medication-related hypoglycemia, healthcare facility-onset Clostridioides difficile infection, and healthcare-associated venous thromboembolism. CONCLUSIONS: The NHSN's FHIR dQMs hold the promise of minimizing the burden of reporting, improving accuracy, quality, and validity of data collected by NHSN, and increasing speed and efficiency of public health surveillance.


Subjects
Clostridium Infections; Patient Safety; Humans; United States; Quality of Health Care; Data Collection; Centers for Disease Control and Prevention, U.S.
13.
Front Med (Lausanne); 11: 1301660, 2024.
Article in English | MEDLINE | ID: mdl-38660421

ABSTRACT

Introduction: The potential for secondary use of health data to improve healthcare is currently not fully exploited. Health data are largely kept in isolated data silos, and the key infrastructure to aggregate these silos into standardized bodies of knowledge is underdeveloped. We describe the development, implementation, and evaluation of a federated infrastructure to facilitate versatile secondary use of health data based on Health Data Space nodes. Materials and methods: Our proposed nodes are self-contained units that ingest data through an extract-transform-load framework, pseudonymize and link the data with privacy-preserving record linkage, and harmonize it into a common data model (OMOP CDM). To support collaborative analyses, a multi-level feature store is also implemented. A feasibility experiment was conducted to test the infrastructure's potential for machine learning operations and the deployment of other apps (e.g., visualization). Nodes can be operated in a network at different levels of sharing according to the level of trust within the network. Results: In a proof-of-concept study, a privacy-preserving registry for heart failure patients was implemented as a real-world showcase for Health Data Space nodes at the highest trust level, linking multiple data sources including (a) electronic medical records from hospitals, (b) patient data from a telemonitoring system, and (c) data from Austria's national register of deaths. The registry is deployed at the tirol kliniken, a hospital carrier in the Austrian state of Tyrol, and currently includes 5,004 patients, with over 2.9 million measurements, over 574,000 observations, more than 63,000 clinical free-text notes, and in total over 5.2 million data points. Data curation and harmonization processes are executed semi-automatically at each individual node according to data sharing policies to ensure data sovereignty, scalability, and privacy.
As a feasibility test, a natural language processing model for classification of clinical notes was deployed and tested. Discussion: The presented Health Data Space node infrastructure has proven to be practicable in a real-world implementation in a live and productive registry for heart failure. The present work was inspired by the European Health Data Space initiative and its spirit to interconnect health data silos for versatile secondary use of health data.
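Pseudonymization with privacy-preserving record linkage typically rests on keyed hashing of normalized identifiers, so that the same patient links across sources without the identifier itself being exposed. A minimal sketch (the key handling and identifier format are hypothetical; real deployments add salting policies, Bloom-filter encodings for fuzzy matching, or similar):

```python
import hmac, hashlib

def pseudonym(identifier: str, secret_key: bytes) -> str:
    """Keyed hash of a patient identifier: stable across data sources that
    share the key, but not reversible without it."""
    normalized = identifier.strip().lower()  # normalize before hashing
    return hmac.new(secret_key, normalized.encode(), hashlib.sha256).hexdigest()

key = b"site-linkage-key"           # in practice managed by a trusted party
p1 = pseudonym("AT-123456", key)
p2 = pseudonym(" at-123456 ", key)  # same patient, messy formatting
```

Records from the hospital EMR, the telemonitoring system, and the death register can then be joined on the pseudonym column inside the node, keeping the raw identifiers out of the harmonized data model.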

14.
JAMIA Open; 7(2): ooae032, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38660616

ABSTRACT

Objective: The objective was to identify information loss that could affect clinical care in laboratory data transmission between 2 health care institutions via a Health Information Exchange platform. Materials and Methods: Data transmission results for 9 laboratory tests, including LOINC codes, were compared between the sending and receiving electronic health record (EHR) systems, and across the individual Health Level Seven International (HL7) Version 2 messages spanning the instrument, the laboratory information system, and the sending EHR. Results: Loss of information for similar tests indicated the following potential patient safety issues: (1) consistently missing specimen source; (2) lack of reporting of analytical technique or instrument platform; (3) inconsistent units and reference ranges; (4) discordant LOINC code use; and (5) increased complexity with multiple HL7 versions. Discussion and Conclusions: Using an HIE with standard messaging, SHIELD (Systemic Harmonization and Interoperability Enhancement for Laboratory Data) recommendations, and enhanced EHR functionality to support necessary data elements would yield consistent test identification and result value transmission.
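Several of the reported loss points (units, reference ranges, LOINC codes) live in the OBX segment of an HL7 v2 message, which is pipe-delimited with `^`-delimited components. A minimal parser sketch (the segment below is a constructed example, not study data; 2345-7 is the LOINC code for serum/plasma glucose):

```python
def parse_obx(segment: str) -> dict:
    """Pull the fields most prone to loss out of a pipe-delimited HL7 v2
    OBX segment: observation code, value, units, and reference range."""
    f = segment.split("|")
    # Observation identifier is a coded element: identifier^text^coding system
    code, text, system = (f[3].split("^") + ["", "", ""])[:3]
    return {"loinc": code, "test_name": text, "coding_system": system,
            "value": f[5], "units": f[6], "reference_range": f[7]}

# Constructed example segment; 2345-7 = serum/plasma glucose in LOINC
obx = "OBX|1|NM|2345-7^Glucose^LN||95|mg/dL|70-99|N|||F"
result = parse_obx(obx)
```

Information loss of the kind the study describes occurs when an intermediate system drops or rewrites one of these fields (e.g., stripping units or substituting a different LOINC code) before the receiving EHR stores the result.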

15.
Article in German | MEDLINE | ID: mdl-38668882

ABSTRACT

Intensive care units provide a data-rich environment with the potential to generate datasets in the realm of big data, which could be utilized to train powerful machine learning (ML) models. However, the currently available datasets are too small and exhibit too little diversity due to their limitation to individual hospitals. This lack of extensive and varied datasets is a primary reason for the limited generalizability and resulting low clinical utility of current ML models. Often, these models are based on data from single centers and suffer from poor external validity. There is an urgent need for the development of large-scale, multicentric, and multinational datasets. Ensuring data protection and minimizing re-identification risks pose central challenges in this process. The "Amsterdam University Medical Center database (AmsterdamUMCdb)" and the "Salzburg Intensive Care database (SICdb)" demonstrate that open access datasets are possible in Europe while complying with the data protection regulations of the General Data Protection Regulation (GDPR). Another challenge in building intensive care datasets is the absence of semantic definitions in the source data and the heterogeneity of data formats. Establishing binding industry standards for the semantic definition is crucial to ensure seamless semantic interoperability between datasets.

16.
BMC Med Inform Decis Mak; 24(Suppl 3): 103, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38641585

ABSTRACT

BACKGROUND: Alzheimer's Disease (AD) is a devastating disease that destroys memory and other cognitive functions. There has been an increasing research effort to prevent and treat AD. In the US, two major data sharing resources for AD research are the National Alzheimer's Coordinating Center (NACC) and the Alzheimer's Disease Neuroimaging Initiative (ADNI). Additionally, the National Institutes of Health (NIH) Common Data Elements (CDE) Repository has been developed to facilitate data sharing and improve the interoperability among data sets in various disease research areas. METHOD: To better understand how AD-related data elements in these resources are interoperable with each other, we leverage different representation models to map data elements from different resources: NACC to ADNI, NACC to NIH CDE, and ADNI to NIH CDE. We explore bag-of-words based and word embeddings based models (Word2Vec and BioWordVec) to perform the data element mappings in these resources. RESULTS: The data dictionaries downloaded on November 23, 2021 contain 1,195 data elements in NACC, 13,918 in ADNI, and 27,213 in the NIH CDE Repository. Data element preprocessing reduced the numbers of NACC and ADNI data elements for mapping to 1,099 and 7,584 respectively. Manual evaluation of the mapping results showed that the bag-of-words based approach achieved the best precision, while the BioWordVec based approach attained the best recall. In total, the three approaches mapped 175 out of 1,099 (15.92%) NACC data elements to ADNI; 107 out of 1,099 (9.74%) NACC data elements to NIH CDE; and 171 out of 7,584 (2.25%) ADNI data elements to NIH CDE. CONCLUSIONS: The bag-of-words based and word embeddings based approaches showed promise in mapping AD-related data elements between different resources.
Although the mapping approaches need further improvement, our result indicates that there is a critical need to standardize CDEs across these valuable AD research resources in order to maximize the discoveries regarding AD pathophysiology, diagnosis, and treatment that can be gleaned from them.
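The bag-of-words mapping approach reduces to comparing token-count vectors of data element descriptions by cosine similarity. A self-contained sketch with hypothetical element descriptions (not the NACC/ADNI dictionaries themselves):

```python
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Bag-of-words: lowercase token counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical data element descriptions from two resources
source = "age at diagnosis of alzheimer disease"
candidates = ["subject age at ad diagnosis",
              "years of education",
              "apoe allele count"]
best = max(candidates, key=lambda c: cosine(bow(source), bow(c)))
```

The embeddings-based variants (Word2Vec, BioWordVec) replace the count vectors with dense word vectors, which is what lets them recall matches whose descriptions share meaning but few literal tokens.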


Subjects
Alzheimer Disease; United States/epidemiology; Humans; Alzheimer Disease/diagnostic imaging; Alzheimer Disease/epidemiology; Common Data Elements; Neuroimaging; National Institutes of Health (U.S.)
17.
Front Digit Health; 6: 1260855, 2024.
Article in English | MEDLINE | ID: mdl-38665619

ABSTRACT

Accessible and affordable health services and products, including medicines, vaccines, and public health, are an important health agenda of all countries. It is well understood that without digital health technologies, countries will face difficulties in tackling the needs and demands of their populations. Global agencies, including the World Health Organization (WHO), the United Nations (UN), and the International Telecommunication Union (ITU), have been instrumental in providing tools and guidance through digital health strategies to improve the health and digital health maturity of countries. The Digital Health Platform Handbook (DHPH) is a toolkit published by WHO and ITU to help countries create and implement a digital health platform (DHP) to serve as the underlying infrastructure for an interoperable and integrated national digital health system. We apply the foundational principles of the DHPH and provide a perspective on DHP components in a layered, enterprise architecture of a digital health infrastructure. India has rolled out the blueprint of its National Digital Health Mission (NDHM) to address the emerging needs for digitization of healthcare in the country. In this paper, we also illustrate the design and implementation of WHO-ITU DHP components at the national level by exploring India's digital health mission implementation, which utilizes various digital public goods to build a digital health ecosystem in the country.

18.
JMIR Form Res; 8: e54109, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38587885

ABSTRACT

BACKGROUND: The escalating prevalence of cesarean delivery globally poses significant health impacts on mothers and newborns. Despite this trend, the underlying reasons for increased cesarean delivery rates, which have risen to 36.3% in Portugal as of 2020, remain unclear. This study delves into these issues within the Portuguese health care context, where national efforts are underway to reduce cesarean delivery occurrences. OBJECTIVE: This paper aims to introduce a machine learning algorithm-based support system designed to assist clinical teams in identifying potentially unnecessary cesarean deliveries. Key objectives include developing clinical decision support systems for cesarean deliveries using interoperability standards, identifying predictive factors influencing delivery type, assessing the economic impact of implementing this tool, and comparing system outputs with clinicians' decisions. METHODS: This study used retrospective data collected from 9 public Portuguese hospitals, encompassing maternal and fetal data and delivery methods from 2019 to 2020. We used various machine learning algorithms for model development, with light gradient-boosting machine (LightGBM) selected for deployment due to its efficiency. The model's performance was compared with clinician assessments through questionnaires. Additionally, an economic simulation was conducted to evaluate the financial impact on Portuguese public hospitals. RESULTS: The deployed model, based on LightGBM, achieved an area under the receiver operating characteristic curve of 88%. In the trial deployment phase at a single hospital, 3.8% (123/3231) of cases triggered alarms for potentially unnecessary cesarean deliveries. Financial simulation results indicated potential benefits for 30% (15/48) of Portuguese public hospitals with the implementation of our tool. However, this study acknowledges biases in the model, such as combining different vaginal delivery types and focusing on potentially unwarranted cesarean deliveries. CONCLUSIONS: This study presents a promising system capable of identifying potentially incorrect cesarean delivery decisions, with potentially positive implications for medical practice and health care economics. However, it also highlights the challenges and considerations necessary for real-world application, including further evaluation of clinical decision-making impacts and understanding the diverse reasons behind delivery type choices. This study underscores the need for careful implementation and further robust analysis to realize the full potential and real-world applicability of such clinical support systems.
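The discrimination metric reported above (area under the receiver operating characteristic curve) can be computed directly from predicted risk scores and outcome labels via the Mann-Whitney equivalence: AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, with ties counted as one half. A minimal, self-contained sketch; the scores and labels below are hypothetical illustrations, not study data:

```python
# ROC AUC via the Mann-Whitney U equivalence:
# AUC = P(score of a random positive > score of a random negative),
# ties counted as 1/2. Labels: 1 = cesarean, 0 = vaginal delivery.
def roc_auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need both classes to compute AUC")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical model outputs for eight deliveries (not study data):
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.10, 0.35, 0.62, 0.80, 0.20, 0.45, 0.55, 0.90]
print(roc_auc(y_true, y_score))
```

The quadratic pairwise loop is fine for illustration; production code would use a rank-based O(n log n) formulation or a library routine.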

19.
J Med Internet Res ; 26: e55779, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38593431

ABSTRACT

Practitioners of digital health are familiar with disjointed data environments that often inhibit effective communication among different elements of the ecosystem. This fragmentation leads in turn to issues such as mismatches between services and payments, wastage, and, notably, care that falls short of best practice. Despite the long-standing recognition of interoperable data as a potential solution, efforts to achieve interoperability have themselves been disjointed and inconsistent, producing numerous incompatible standards despite widespread agreement that fewer standards would enhance interoperability. This paper introduces a framework for understanding health care data needs and discusses the challenges and opportunities of open data standards in the field. It emphasizes the necessity of acknowledging diverse data standards, each catering to specific viewpoints and needs, and proposes a categorization of health care data into three domains, each with distinct characteristics and challenges. It also outlines overarching design requirements applicable to all domains as well as requirements unique to each domain.


Subjects
Delivery of Health Care, Humans
20.
Stud Health Technol Inform ; 313: 9-14, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38682497

ABSTRACT

BACKGROUND: Dementia is becoming a significant public health concern, affecting approximately 130,000 individuals in Austria, with nearly 40% of cases attributed to modifiable risk factors. Multidomain lifestyle interventions have demonstrated significant effects in reducing the risk of dementia. OBJECTIVES: The goal was to define an interoperability framework for standardized monitoring in clinical trials aimed at dementia risk mitigation, and to integrate the identified standards into the components of the project. METHODS: A step-by-step approach was used: initially, data collection, aggregation, and harmonization were carried out with retrospective data from various clinical centers; afterwards, the interoperability framework was defined, including the prospective data gathered during a clinical trial. RESULTS: A guideline for integrating healthcare standards was developed and incorporated into the technical components for the clinical trial. CONCLUSION: The interoperability framework was designed to be scalable and will be updated regularly to meet future needs.
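The harmonization step described in the methods, in which variables from heterogeneous clinical centers are aggregated under shared definitions, can be sketched as an alias lookup against a common codebook. The variable names and codebook below are hypothetical illustrations, not taken from the project:

```python
# Sketch of cross-center variable harmonization: rename site-specific
# variable names onto a shared codebook. All names here are invented
# for illustration; a real common data model would be far larger.
CODEBOOK = {
    "age": {"age", "age_years", "patient_age"},
    "mmse_total": {"mmse", "mmse_score", "mmse_total"},
    "systolic_bp": {"sbp", "sys_bp", "systolic_blood_pressure"},
}

# Invert the codebook once: alias -> standard name.
LOOKUP = {alias: std for std, aliases in CODEBOOK.items() for alias in aliases}

def harmonize(record):
    """Map one site's record onto codebook variables; collect anything
    unrecognized under 'unmapped' so no data is silently dropped."""
    out, unmapped = {}, {}
    for name, value in record.items():
        key = name.strip().lower().replace(" ", "_")
        if key in LOOKUP:
            out[LOOKUP[key]] = value
        else:
            unmapped[name] = value
    if unmapped:
        out["unmapped"] = unmapped
    return out

site_record = {"Patient Age": 71, "MMSE score": 27, "SBP": 138, "gait_speed": 1.1}
print(harmonize(site_record))
# → {'age': 71, 'mmse_total': 27, 'systolic_bp': 138,
#    'unmapped': {'gait_speed': 1.1}}
```

Keeping unmapped variables explicit, rather than discarding them, makes it easy to audit each center's coverage against the common data model as new cohorts are added.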


Assuntos
Ensaios Clínicos como Assunto , Demência , Humanos , Demência/prevenção & controle , Idoso , Áustria , Fatores de Risco