Results 1 - 20 of 20
1.
Article in German | MEDLINE | ID: mdl-38750239

ABSTRACT

Health data are extremely important in today's data-driven world. Through automation, healthcare processes can be optimized and clinical decisions can be supported. For any reuse of data, quality, validity, and trustworthiness are essential; only then can data be reused sensibly. Specific requirements for the description and coding of reusable data are defined in the FAIR (Findable, Accessible, Interoperable, Reusable) guiding principles for data stewardship. Various national research associations and infrastructure projects in the German healthcare sector have already clearly positioned themselves on the FAIR principles: both the infrastructures of the Medical Informatics Initiative and the University Medicine Network operate explicitly on the basis of the FAIR principles, as do the National Research Data Infrastructure for Personal Health Data and the German Center for Diabetes Research. To ensure that a resource complies with the FAIR principles, the degree of FAIRness should first be determined (the so-called FAIR assessment), followed by prioritization of improvement steps (so-called FAIRification). Since 2016, a set of tools and guidelines has been developed for both steps, based on different, domain-specific interpretations of the FAIR principles. Neighboring European countries have also invested in the development of national frameworks for semantic interoperability in the context of the FAIR principles. Concepts for comprehensive data enrichment were developed to simplify data analysis, for example, in the European Health Data Space or via the Observational Health Data Sciences and Informatics network. With the support of the European Open Science Cloud, among others, structured FAIRification measures have already been taken for German health datasets.


Subject(s)
Electronic Health Records , Humans , Germany , Internationality , National Health Programs
2.
Comput Methods Programs Biomed ; 242: 107814, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37722311

ABSTRACT

BACKGROUND AND OBJECTIVE: The Oxford Classification for IgA nephropathy is the most successful example of an evidence-based nephropathology classification system. The aim of our study was to replicate the glomerular components of Oxford scoring with an end-to-end deep learning pipeline that involves automatic glomerular segmentation followed by classification for mesangial hypercellularity (M), endocapillary hypercellularity (E), segmental sclerosis (S) and active crescents (C). METHODS: A total of 1056 periodic acid-Schiff (PAS) whole slide images (WSIs), coming from 386 kidney biopsies, were annotated. Several detection models for glomeruli, based on the Mask R-CNN architecture, were trained on 587 WSIs, validated on 161 WSIs, and tested on 127 WSIs. For the development of segmentation models, 20,529 glomeruli were annotated, of which 16,571 formed the training and 3958 the validation set. The test set of the segmentation module comprised 2948 glomeruli. For the Oxford classification, 6206 expert-annotated glomeruli from 308 PAS WSIs were labelled for M, E, S, C and split into a training set of 4298 glomeruli from 207 WSIs and a test set of 1908 glomeruli. We chose the best-performing models to construct an end-to-end pipeline, which we named MESCnn (MESC classification by neural network), for the glomerular Oxford classification of WSIs. RESULTS: Instance segmentation yielded excellent results, with an AP50 ranging between 78.2 and 80.1 % (79.4 ± 0.7 %) on the validation and between 75.1 and 77.7 % (76.5 ± 0.9 %) on the test set. The aggregated Jaccard Index was between 73.4 and 75.9 % (75.0 ± 0.8 %) on the validation and between 69.1 and 73.4 % (72.2 ± 1.4 %) on the test set.
At the granular glomerular level, the Oxford Classification was best replicated for M by EfficientNetV2-L with a mean ROC-AUC of 90.2 % and a mean precision/recall area under the curve (PR-AUC) of 81.8 %, for E by MobileNetV2 (ROC-AUC 94.7 %) and ResNet50 (PR-AUC 75.8 %), for S by EfficientNetV2-M (mean ROC-AUC 92.7 %, mean PR-AUC 87.7 %), and for C by EfficientNetV2-L (ROC-AUC 92.3 %) and EfficientNetV2-S (PR-AUC 54.7 %). At the biopsy level, the correlation between expert and deep learning labels fulfilled the demands of the Oxford Classification. CONCLUSION: We designed an end-to-end pipeline for glomerular Oxford Classification at both the granular glomerular and the entire biopsy level. Both the glomerular segmentation and the classification modules are freely available to the renal medicine community for further development.
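The biopsy-level aggregation that such per-glomerulus scoring implies can be sketched as follows. This is a hypothetical illustration, not code from the paper: the cut-offs follow the commonly published Oxford MEST-C conventions, which the reader should verify against the original classification.

```python
# Hypothetical sketch: aggregate per-glomerulus MESC labels into a biopsy-level
# Oxford-style score. Thresholds are assumptions based on published MEST-C
# conventions, not taken from the MESCnn paper.
def aggregate_mesc(glomeruli):
    """glomeruli: list of dicts with boolean keys 'M', 'E', 'S', 'C'."""
    n = len(glomeruli)
    frac = lambda k: sum(g[k] for g in glomeruli) / n
    m = 1 if frac('M') > 0.5 else 0                       # M1: hypercellularity in >50%
    e = 1 if any(g['E'] for g in glomeruli) else 0        # E1: any endocapillary hypercellularity
    s = 1 if any(g['S'] for g in glomeruli) else 0        # S1: any segmental sclerosis
    c_frac = frac('C')
    c = 0 if c_frac == 0 else (1 if c_frac < 0.25 else 2) # C0/C1/C2 by crescent fraction
    return {'M': m, 'E': e, 'S': s, 'C': c}

# four toy glomeruli with per-lesion labels
example = [
    {'M': True,  'E': False, 'S': False, 'C': False},
    {'M': True,  'E': True,  'S': False, 'C': False},
    {'M': False, 'E': False, 'S': True,  'C': False},
    {'M': True,  'E': False, 'S': False, 'C': True},
]
score = aggregate_mesc(example)
```

In this toy biopsy, 3/4 glomeruli are mesangial hypercellular (M1) and 1/4 carry a crescent, which lands exactly on the assumed 25 % boundary (C2).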


Subject(s)
Deep Learning , Glomerulonephritis, IGA , Humans , Glomerulonephritis, IGA/diagnosis , Glomerulonephritis, IGA/pathology , Glomerular Filtration Rate , Kidney Glomerulus/pathology , Kidney/diagnostic imaging
3.
Brief Bioinform ; 24(5)2023 09 20.
Article in English | MEDLINE | ID: mdl-37478371

ABSTRACT

Artificial intelligence (AI) systems utilizing deep neural networks and machine learning (ML) algorithms are widely used for solving critical problems in bioinformatics, biomedical informatics and precision medicine. However, complex ML models that are often perceived as opaque and black-box methods make it difficult to understand the reasoning behind their decisions. This lack of transparency can be a challenge for both end-users and decision-makers, as well as AI developers. In sensitive areas such as healthcare, explainability and accountability are not only desirable properties but also legally required for AI systems that can have a significant impact on human lives. Fairness is another growing concern, as algorithmic decisions should not show bias or discrimination towards certain groups or individuals based on sensitive attributes. Explainable AI (XAI) aims to overcome the opaqueness of black-box models and to provide transparency in how AI systems make decisions. Interpretable ML models can explain how they make predictions and identify factors that influence their outcomes. However, the majority of the state-of-the-art interpretable ML methods are domain-agnostic and have evolved from fields such as computer vision, automated reasoning or statistics, making direct application to bioinformatics problems challenging without customization and domain adaptation. In this paper, we discuss the importance of explainability and algorithmic transparency in the context of bioinformatics. We provide an overview of model-specific and model-agnostic interpretable ML methods and tools and outline their potential limitations. We discuss how existing interpretable ML methods can be customized and fit to bioinformatics research problems. Further, through case studies in bioimaging, cancer genomics and text mining, we demonstrate how XAI methods can improve transparency and decision fairness. 
Our review aims to provide valuable insights and serve as a starting point for researchers wanting to enhance explainability and decision transparency while solving bioinformatics problems. GitHub: https://github.com/rezacsedu/XAI-for-bioinformatics.


Subject(s)
Artificial Intelligence , Computational Biology , Humans , Machine Learning , Algorithms , Genomics
4.
5.
Stud Health Technol Inform ; 302: 1027-1028, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203572

ABSTRACT

Supervised methods, such as those utilized in classification, prediction, and segmentation tasks for medical images, experience a decline in performance when the training and testing datasets violate the i.i.d. (independent and identically distributed) assumption. Hence, we adopted the CycleGAN (cycle-consistent generative adversarial network) method for cycle training of CT (computed tomography) data from different terminals/manufacturers, aiming to eliminate the distribution shift between diverse data terminals. However, due to the mode collapse problem of GAN-based models, the images we generated suffered from serious radiology artifacts. To eliminate the boundary marks and artifacts, we adopted a score-based generative model to refine the images voxel-wise. This novel combination of two generative models raises the transformation between diverse data providers to a higher fidelity level without sacrificing any significant features. In future work, we will evaluate the original and generated datasets by experimenting with a broader range of supervised methods.
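The cycle-consistency constraint at the heart of CycleGAN-style harmonisation can be illustrated with a toy example. The two "generators" below are stand-in linear maps (pure assumptions for illustration), not trained networks; in CycleGAN the same L1 cycle loss is minimised over learned generators.

```python
# Toy illustration of cycle consistency: mapping domain A -> B -> A (and
# B -> A -> B) should reconstruct the input. G and F are stand-in linear
# "generators" chosen as exact inverses, so the cycle loss is zero.
G = lambda x: 2.0 * x + 1.0      # A -> B (toy generator)
F = lambda y: (y - 1.0) / 2.0    # B -> A (toy inverse generator)

def l1(xs, ys):
    # mean absolute error between two equally sized batches
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def cycle_consistency_loss(batch_a, batch_b):
    # L1 reconstruction error after a full cycle in each direction
    return l1([F(G(x)) for x in batch_a], batch_a) + \
           l1([G(F(y)) for y in batch_b], batch_b)

loss = cycle_consistency_loss([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
```

With imperfect (i.e., real, learned) generators the loss is positive and is added to the adversarial objectives as a regulariser.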


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Image Processing, Computer-Assisted/methods , Radiography , Artifacts
6.
Stud Health Technol Inform ; 302: 43-47, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203606

ABSTRACT

FHIR is a widely accepted interoperability standard for exchanging medical data, but transforming data from primary health information systems into FHIR is usually challenging and requires advanced technical skills and infrastructure. There is a critical need for low-cost solutions, and using Mirth Connect as an open-source tool provides this opportunity. We developed a reference implementation to transform data from CSV (the most common data format) into FHIR resources using Mirth Connect, without any advanced technical resources or programming skills. This reference implementation was tested successfully for both quality and performance, and it enables healthcare providers to reproduce and improve the implemented approach for transforming raw data into FHIR resources. To ensure replicability, the channel, mappings, and templates used are publicly available on GitHub (https://github.com/alkarkoukly/CSV-FHIR-Transformer).
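The CSV-to-FHIR mapping idea can be sketched in a few lines. Note that the paper's reference implementation uses Mirth Connect channels rather than Python, and the CSV column names below are invented for illustration; only `resourceType`, `name`, and `birthDate` are standard FHIR Patient elements.

```python
import csv
import io

# Hypothetical sketch of the CSV -> FHIR mapping idea. Column names
# (patient_id, family, given, birth_date) are assumptions for illustration.
raw = "patient_id,family,given,birth_date\np001,Doe,Jane,1980-04-02\n"

def row_to_patient(row):
    # build a minimal FHIR Patient resource as a plain dict
    return {
        "resourceType": "Patient",
        "id": row["patient_id"],
        "name": [{"family": row["family"], "given": [row["given"]]}],
        "birthDate": row["birth_date"],
    }

patients = [row_to_patient(r) for r in csv.DictReader(io.StringIO(raw))]
```

A real channel would additionally validate the resources and POST them to a FHIR server endpoint.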


Subject(s)
Health Information Systems , Software , Electronic Health Records , Health Level Seven
7.
Stud Health Technol Inform ; 302: 125-126, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203623

ABSTRACT

Developing smart clinical decision support systems requires integrating data from several medical departments. This short paper outlines the challenges we faced in cross-departmental data integration for an oncological use case. Most severely, these challenges led to a significant reduction in case numbers: only 2.77% of the cases meeting the initial inclusion criteria of the use case were present in all accessed data sources.


Subject(s)
Medical Informatics , Systems Integration , Medical Oncology
8.
Front Med (Lausanne) ; 10: 1305415, 2023.
Article in English | MEDLINE | ID: mdl-38259836

ABSTRACT

The growing interest in data-driven medicine, in conjunction with the formation of initiatives such as the European Health Data Space (EHDS), has demonstrated the need for methodologies that are capable of facilitating privacy-preserving data analysis. Distributed Analytics (DA), as an enabler for privacy-preserving analysis across multiple data sources, has shown its potential to support data-intensive research. However, the application of DA creates new challenges stemming from its distributed nature, such as identifying single points of failure (SPOFs) in DA tasks before their actual execution. Failing to detect such SPOFs can, for example, result in improper termination of the DA code, necessitating additional efforts from multiple stakeholders to resolve the malfunctions. Moreover, these malfunctions disrupt the seamless conduct of DA and entail several crucial consequences, including technical obstacles to resolving the issues, potential delays in research outcomes, and increased costs. In this study, we address this challenge by introducing a concept based on a method called Smoke Testing, an initial and foundational test run to ensure the operability of the analysis code. We review existing DA platforms and systematically extract six specific Smoke Testing criteria for DA applications. With these criteria in mind, we create an interactive environment called Development Environment for AuTomated and Holistic Smoke Testing of Analysis-Runs (DEATHSTAR), which allows researchers to perform Smoke Tests on their DA experiments. We conduct a user study with 29 participants to assess our environment and additionally apply it to three real use cases. The results of our evaluation validate its effectiveness, revealing that 96.6% of the analyses created and (Smoke) tested by participants using our approach successfully terminated without any errors.
Thus, by incorporating Smoke Testing as a fundamental method, our approach helps identify potential malfunctions early in the development process, ensuring smoother data-driven research within the scope of DA. Through its flexibility and adaptability to diverse real use cases, our solution enables more robust and efficient development of DA experiments, which contributes to their reliability.
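The core of a Smoke Test, running the analysis entry point once against a tiny synthetic sample and failing fast before the task is distributed, can be sketched as follows. This is a toy illustration of the general idea, not the DEATHSTAR environment; the analysis function and sample data are invented.

```python
# Toy sketch of Smoke Testing for a distributed-analytics task: execute the
# analysis once on synthetic records and surface failures before deployment.
def analysis(records):
    # stand-in analysis code that a researcher would ship to data providers
    return sum(r["value"] for r in records) / len(records)

def smoke_test(task, sample):
    # run the entry point once; report pass/fail instead of crashing later
    try:
        task(sample)
        return "pass"
    except Exception as exc:
        return f"fail: {exc!r}"

status = smoke_test(analysis, [{"value": 1.0}, {"value": 3.0}])
```

An empty sample would expose a division-by-zero SPOF in `analysis` at test time rather than mid-execution at a provider site.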

9.
Stud Health Technol Inform ; 290: 22-26, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35672963

ABSTRACT

Medical data science aims to facilitate knowledge discovery by assisting in the analysis of data, algorithms, and results. The FAIR principles aim to guide scientific data management and stewardship, and are relevant to all digital health ecosystem stakeholders. The FAIR4Health project aims to facilitate and encourage the health research community to reuse datasets derived from publicly funded research initiatives by applying the FAIR principles. The 'FAIRness for FHIR' project aims to provide guidance on how HL7 FHIR could be utilized as a common data model to support the FAIRification process of health datasets. The first expected result is an HL7 FHIR Implementation Guide (IG) called FHIR4FAIR, covering how FHIR can be used for FAIRification in different scenarios. This IG aims to provide practical underpinnings for the FAIR4Health FAIRification workflow as a domain-specific extension of the GoFAIR process, while simplifying curation, advancing interoperability, and providing insights into a roadmap for the FAIR certification of health datasets.


Subject(s)
Electronic Health Records , Health Level Seven , Data Management , Ecosystem , Workflow
10.
Methods Inf Med ; 61(S 01): e1-e11, 2022 06.
Article in English | MEDLINE | ID: mdl-35038764

ABSTRACT

BACKGROUND: In recent years, data-driven medicine has gained increasing importance in terms of diagnosis, treatment, and research due to the exponential growth of health care data. However, data protection regulations prohibit data centralisation for analysis purposes because of potential privacy risks like the accidental disclosure of data to third parties. Therefore, alternative data usage policies, which comply with present privacy guidelines, are of particular interest. OBJECTIVE: We aim to enable analyses on sensitive patient data by simultaneously complying with local data protection regulations using an approach called the Personal Health Train (PHT), which is a paradigm that utilises distributed analytics (DA) methods. The main principle of the PHT is that the analytical task is brought to the data provider and the data instances remain in their original location. METHODS: In this work, we present our implementation of the PHT paradigm, which preserves the sovereignty and autonomy of the data providers and operates with a limited number of communication channels. We further conduct a DA use case on data stored in three different and distributed data providers. RESULTS: We show that our infrastructure enables the training of data models based on distributed data sources. CONCLUSION: Our work presents the capabilities of DA infrastructures in the health care sector, which lower the regulatory obstacles of sharing patient data. We further demonstrate its ability to fuel medical science by making distributed data sets available for scientists or health care practitioners.
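The PHT principle that the analytical task travels to the data, and only aggregate results ever leave a provider, can be illustrated with a toy distributed mean. The site names, records, and the statistic are invented for illustration; a real PHT "train" would carry signed analysis code between stations.

```python
# Toy sketch of the Personal Health Train idea: each provider runs the task
# locally and returns only aggregate statistics; raw records never move.
providers = {
    "site_a": [1.0, 2.0, 3.0],   # invented local records at provider A
    "site_b": [2.0, 4.0],
    "site_c": [10.0],
}

def local_task(records):
    # executes inside the provider's infrastructure; only summaries leave
    return {"sum": sum(records), "n": len(records)}

results = [local_task(data) for data in providers.values()]
global_mean = sum(r["sum"] for r in results) / sum(r["n"] for r in results)
```

The same pattern generalises to iterative model training, where locally computed parameter updates take the place of the sums.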


Subject(s)
Computer Security , Privacy , Delivery of Health Care , Humans , Information Storage and Retrieval
11.
Article in English | MEDLINE | ID: mdl-32750845

ABSTRACT

The study of genetic variants (GVs) can help to find correlating population groups, to identify cohorts that are predisposed to common diseases, and to explain differences in disease susceptibility and in how patients react to drugs. Machine learning techniques are increasingly being applied to identify interacting GVs in order to understand their complex phenotypic traits. Since the performance of a learning algorithm depends not only on the size and nature of the data but also on the quality of the underlying representation, deep neural networks (DNNs) can learn non-linear mappings that transform GV data into representations friendlier to clustering and classification than manual feature selection. In this paper, we propose convolutional embedded networks (CEN), in which we combine two DNN architectures, convolutional embedded clustering (CEC) and a convolutional autoencoder (CAE) classifier, for clustering individuals and predicting geographic ethnicity based on GVs, respectively. We applied CAE-based representation learning to 95 million GVs from the '1000 Genomes' (covering 2,504 individuals from 26 ethnic origins) and 'Simons Genome Diversity' (covering 279 individuals from 130 ethnic origins) projects. Quantitative and qualitative analyses with a focus on accuracy and scalability show that our approach outperforms state-of-the-art approaches such as VariantSpark and ADMIXTURE. In particular, CEC can cluster targeted population groups in 22 hours with an adjusted Rand index (ARI) of 0.915, a normalized mutual information (NMI) of 0.92, and a clustering accuracy (ACC) of 89 percent. In contrast, the CAE classifier can predict the geographic ethnicity of unknown samples with an F1 score and a Matthews correlation coefficient (MCC) of 0.9004 and 0.8245, respectively. Further, to provide interpretations of the predictions, we identify significant biomarkers using gradient boosted trees (GBT) and SHapley Additive exPlanations (SHAP).
Overall, our approach is transparent and faster than the baseline methods, and scalable to 5 to 100 percent of the full human genome.


Subject(s)
Machine Learning , Neural Networks, Computer , Algorithms , Cluster Analysis , Humans
12.
Stud Health Technol Inform ; 281: 352-356, 2021 May 27.
Article in English | MEDLINE | ID: mdl-34042764

ABSTRACT

Skin cancer has become the most common cancer type. Research has applied image processing and analysis tools to support and improve the diagnostic process. Conventional procedures usually centralise data from various data sources in a single location and execute the analysis tasks on central servers. However, centralisation of medical data often does not comply with local data protection regulations due to its sensitive nature and the loss of sovereignty if data providers allow unlimited access to the data. The Personal Health Train (PHT) is a Distributed Analytics (DA) infrastructure that brings the algorithms to the data instead of vice versa. By following this paradigm shift, it offers a solution to persistent privacy-related challenges. In this work, we present a feasibility study which demonstrates the capability of the PHT to perform statistical analyses and machine learning on skin lesion data distributed among three Germany-wide data providers.


Subject(s)
Information Storage and Retrieval , Machine Learning , Algorithms , Germany , Privacy
13.
Brief Bioinform ; 22(1): 393-415, 2021 01 18.
Article in English | MEDLINE | ID: mdl-32008043

ABSTRACT

Clustering is central to much data-driven bioinformatics research and serves as a powerful computational method. In particular, clustering helps in analyzing unstructured and high-dimensional data in the form of sequences, expressions, texts and images. Further, clustering is used to gain insights into biological processes at the genomic level; e.g. clustering of gene expressions provides insights into the natural structure inherent in the data, understanding gene functions, cellular processes, subtypes of cells and gene regulation. Subsequently, clustering approaches, including hierarchical, centroid-based, distribution-based, density-based and self-organizing maps, have long been studied and used in classical machine learning settings. In contrast, deep learning (DL)-based representation and feature learning for clustering have not been reviewed and employed extensively. Since the quality of clustering depends not only on the distribution of data points but also on the learned representation, deep neural networks can be effective means of transforming mappings from a high-dimensional data space into a lower-dimensional feature space, leading to improved clustering results. In this paper, we review state-of-the-art DL-based approaches for cluster analysis that are based on representation learning, which we hope will be useful, particularly for bioinformatics research. Further, we explore in detail the training procedures of DL-based clustering algorithms, point out different clustering quality metrics and evaluate several DL-based approaches on three bioinformatics use cases, including bioimaging, cancer genomics and biomedical text mining. We believe this review and the evaluation results will provide valuable insights and serve as a starting point for researchers wanting to apply DL-based unsupervised methods to solve emerging bioinformatics research problems.
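The "cluster in a learned representation" idea can be reduced to a toy example: a fixed linear projection stands in for a trained encoder, and a tiny 2-means loop clusters the resulting embeddings. This is purely illustrative (no deep model, invented data), but it shows the two-stage structure the review discusses.

```python
# Toy sketch: embed high-dimensional samples with a stand-in "encoder",
# then cluster the embeddings with a minimal 2-means loop.
samples = [[0.1 * i] * 5 for i in range(5)] + \
          [[3.0 + 0.1 * i] * 5 for i in range(5)]   # two invented blobs in 5-D

encode = lambda x: sum(x) / len(x)   # stand-in for a trained encoder (5-D -> 1-D)
Z = [encode(x) for x in samples]

def two_means(z, iters=10):
    c = [z[0], z[-1]]                # initialise one center at each extreme
    labels = [0] * len(z)
    for _ in range(iters):
        labels = [0 if abs(v - c[0]) <= abs(v - c[1]) else 1 for v in z]
        for j in (0, 1):             # recompute cluster centers
            members = [v for v, l in zip(z, labels) if l == j]
            if members:
                c[j] = sum(members) / len(members)
    return labels

labels = two_means(Z)
```

In DL-based clustering the projection is learned jointly with (or prior to) the clustering objective instead of being fixed.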


Subject(s)
Computational Biology/methods , Deep Learning , Cluster Analysis
14.
J Biomed Semantics ; 11(1): 6, 2020 07 08.
Article in English | MEDLINE | ID: mdl-32641124

ABSTRACT

BACKGROUND: Sharing sensitive data across organizational boundaries is often significantly limited by legal and ethical restrictions. Regulations such as the EU General Data Protection Regulation (GDPR) impose strict requirements concerning the protection of personal and privacy-sensitive data. Therefore, new approaches, such as the Personal Health Train initiative, are emerging to utilize data right in their original repositories, circumventing the need to transfer data. RESULTS: Circumventing limitations of previous systems, this paper proposes a configurable and automated schema extraction and publishing approach, which enables ad-hoc SPARQL query formulation against RDF triple stores without requiring direct access to the private data. The approach is compatible with existing Semantic Web-based technologies and allows for the subsequent execution of such queries in a safe setting under the data provider's control. Evaluation with four distinct datasets shows that a configurable amount of concise and task-relevant schema, closely describing the structure of the underlying data, was derived, enabling the schema introspection-assisted authoring of SPARQL queries. CONCLUSIONS: Automatically extracting and publishing data schemas can enable the introspection-assisted creation of data selection and integration queries. In conjunction with the presented system architecture, this approach can enable reuse of data from private repositories and in settings where agreeing upon a shared schema and encoding a priori is infeasible. As such, it could provide an important step towards reuse of data from previously inaccessible sources and thus towards the proliferation of data-driven methods in the biomedical domain.
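The schema-extraction idea, publishing which predicates occur per class without exposing instance-level values, can be sketched over an in-memory list of triples. The triples and property names below are invented, and a plain list stands in for an RDF triple store; the paper's actual system operates on SPARQL endpoints.

```python
from collections import defaultdict

# Toy sketch: summarise, for each rdf:type, which predicates its instances
# use. Only this structural summary would be published, not the values.
triples = [
    ("p1", "rdf:type", "Patient"), ("p1", "hasAge", "54"),
    ("p1", "hasDiagnosis", "d1"),
    ("d1", "rdf:type", "Diagnosis"), ("d1", "code", "E11"),
]

types = {s: o for s, p, o in triples if p == "rdf:type"}   # subject -> class
schema = defaultdict(set)
for s, p, o in triples:
    if p != "rdf:type":
        schema[types[s]].add(p)                             # class -> predicates

schema = {t: sorted(ps) for t, ps in schema.items()}
```

A query author can then write `SELECT` patterns against `Patient`/`hasDiagnosis` without ever having seen the private instance data.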


Subject(s)
Information Storage and Retrieval , Privacy , Computer Security/legislation & jurisprudence , Feasibility Studies , Internet
15.
Eur Radiol ; 30(10): 5510-5524, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32377810

ABSTRACT

Digitization of medicine requires systematic handling of the increasing amount of health data to improve medical diagnosis. In this context, the integration of versatile diagnostic information, e.g., from anamnesis, imaging, histopathology, and clinical chemistry, and its comprehensive analysis by artificial intelligence (AI)-based tools is expected to improve diagnostic precision and therapeutic conduct. However, the complex medical environment poses a major obstacle to the translation of integrated diagnostics into clinical research and routine. There is a high need to address aspects like data privacy, data integration, interoperability standards, appropriate IT infrastructure, and education of staff. Besides this, a plethora of technical, political, and ethical challenges exists. This is complicated by the high diversity of approaches across Europe. Thus, we here provide insights into current international activities on the way to digital comprehensive diagnostics. This includes a technical view on challenges and solutions for comprehensive diagnostics in terms of data integration and analysis. Current data communication standards and common IT solutions that are in place in hospitals are reported. Furthermore, international hospital digitalization scoring and the European funding situation were analyzed. In addition, regional activities in radiomics and the related publication trends are discussed. Our findings show that the prerequisites for comprehensive diagnostics have not yet been sufficiently established throughout Europe. The manifold activities are characterized by heterogeneous digitization progress and are driven by national efforts. This emphasizes the importance of clear governance, concerted investments, and cooperation at various levels in the health systems.
Key Points
• Europe is characterized by heterogeneity in its digitization progress, with predominantly national efforts.
• Infrastructural prerequisites for comprehensive diagnostics are not in place and not sufficiently funded throughout Europe, which is particularly true for data integration.
• The clinical establishment of comprehensive diagnostics demands clear governance, significant investments, and cooperation at various levels in the healthcare systems.
• While comprehensive diagnostics is on its way, concerted efforts should be made in Europe to reach consensus concerning interoperability and standards, security and privacy, as well as ethical and legal concerns.


Subject(s)
Artificial Intelligence/trends , Medical Informatics/trends , Radiology/trends , Telemedicine/trends , Computer Systems , Data Mining , Europe , Humans , Interdisciplinary Research , Internationality , Privacy , Publishing/trends , Software
16.
Front Pharmacol ; 11: 608068, 2020.
Article in English | MEDLINE | ID: mdl-33762928

ABSTRACT

Despite the significant health impacts of adverse events associated with drug-drug interactions, no standard models exist for managing and sharing evidence describing potential interactions between medications. Minimal information models have been used in other communities to establish community consensus around simple models capable of communicating useful information. This paper reports on a new minimal information model for describing potential drug-drug interactions. A task force of the Semantic Web in Health Care and Life Sciences Community Group of the World Wide Web Consortium engaged informaticians and drug-drug interaction experts in an in-depth examination of recent literature and specific potential interactions. A consensus set of information items was identified, along with example descriptions of selected potential drug-drug interactions (PDDIs). User profiles and use cases were developed to demonstrate the applicability of the model. Ten core information items were identified: drugs involved, clinical consequences, seriousness, operational classification statement, recommended action, mechanism of interaction, contextual information/modifying factors, evidence about a suspected drug-drug interaction, frequency of exposure, and frequency of harm to exposed persons. Eight best practice recommendations suggest how PDDI knowledge artifact creators can best use the 10 information items when synthesizing drug interaction evidence into artifacts intended to aid clinicians. This model has been included in a proposed implementation guide developed by the HL7 Clinical Decision Support Workgroup and in PDDIs published in the CDS Connect repository. The complete description of the model can be found at https://w3id.org/hclscg/pddi.
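The ten core information items listed above can be captured as a simple record type. The field names below are paraphrases of those items, not the model's official labels, and the example values are invented; the authoritative definitions are at the URL above.

```python
from dataclasses import dataclass

# Sketch of the ten core PDDI information items as a record type.
# Field names paraphrase the items named in the abstract; values are invented.
@dataclass
class PDDI:
    drugs_involved: list
    clinical_consequences: str
    seriousness: str
    operational_classification: str
    recommended_action: str
    mechanism: str
    contextual_factors: str
    evidence: str
    frequency_of_exposure: str
    frequency_of_harm: str

example = PDDI(
    drugs_involved=["warfarin", "aspirin"],
    clinical_consequences="increased bleeding risk",
    seriousness="serious",
    operational_classification="use only if benefit outweighs risk",
    recommended_action="monitor INR closely",
    mechanism="additive anticoagulant/antiplatelet effect",
    contextual_factors="age, renal function",
    evidence="clinical trials and case reports",
    frequency_of_exposure="common co-prescription",
    frequency_of_harm="unknown",
)
```

A knowledge-artifact pipeline could serialise such records to FHIR or CQL-consumable form downstream.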

17.
Stud Health Technol Inform ; 264: 724-728, 2019 Aug 21.
Article in English | MEDLINE | ID: mdl-31438019

ABSTRACT

Potential drug-drug interaction (PDDI) rules are currently represented without any common standard, making them difficult to update, maintain, and exchange. The PDDI minimum information model developed by the Semantic Web in Health Care and Life Sciences Community Group describes PDDI knowledge in an actionable format. In this paper, we report the implementation and evaluation of CDS services that represent PDDI knowledge with Clinical Quality Language (CQL). The suggested solution is based on emerging standards including CDS Hooks, FHIR, and CQL. Two use cases were selected, implemented as CQL rules, and tested at the Connectathon held at the 32nd Annual Plenary & Working Group Meeting of HL7.
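A CDS service built on CDS Hooks responds with "cards". The sketch below shows the general shape of such a response for a PDDI warning; the field values are invented, not taken from the paper's Connectathon implementation, while `summary`, `indicator`, `detail`, and `source` are standard CDS Hooks card attributes and `indicator` uses the spec's info/warning/critical scale.

```python
import json

# Hypothetical CDS Hooks card a PDDI service might return for a
# medication-prescribe hook. Values are illustrative only.
card = {
    "summary": "Potential drug-drug interaction: warfarin + aspirin",
    "indicator": "warning",                      # info | warning | critical
    "detail": "Increased bleeding risk; consider monitoring INR.",
    "source": {"label": "PDDI CDS Service"},
}
response = json.dumps({"cards": [card]})         # JSON body sent to the EHR
```

In a CQL-backed service, the rule logic would decide whether this card is emitted for the current medication order.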


Subject(s)
Decision Support Systems, Clinical , Drug Interactions , Language , Semantics
18.
Stud Health Technol Inform ; 264: 1528-1529, 2019 Aug 21.
Article in English | MEDLINE | ID: mdl-31438215

ABSTRACT

Secondary use of electronic health record (EHR) data requires a detailed description of metadata, especially when data collection and data re-use are organizationally and technically far apart. This paper describes the concept of the SMITH consortium that includes conventions, processes, and tools for describing and managing metadata using common standards for semantic interoperability. It deals in particular with the chain of processing steps of data from existing information systems and provides an overview of the planned use of metadata, medical terminologies, and semantic services in the consortium.


Subject(s)
Electronic Health Records , Metadata , Data Collection , Germany , Information Systems , Semantics
19.
Methods Inf Med ; 57(S 01): e92-e105, 2018 07.
Article in English | MEDLINE | ID: mdl-30016815

ABSTRACT

INTRODUCTION: This article is part of the Focus Theme of Methods of Information in Medicine on the German Medical Informatics Initiative. "Smart Medical Information Technology for Healthcare (SMITH)" is one of four consortia funded by the German Medical Informatics Initiative (MI-I) to create an alliance of universities, university hospitals, research institutions and IT companies. SMITH's goals are to establish Data Integration Centers (DICs) at each SMITH partner hospital and to implement use cases which demonstrate the usefulness of the approach. OBJECTIVES: To give insight into the architectural design issues underlying SMITH data integration and to introduce the use cases to be implemented. GOVERNANCE AND POLICIES: SMITH implements a federated approach both for its governance structure and for its information system architecture. SMITH has designed a generic concept for its data integration centers. They share identical services and functionalities to take best advantage of the interoperability architectures and of the planned data use and access process. The DICs provide access to the local hospitals' Electronic Medical Records (EMR). This is based on data trustee and privacy management services. DIC staff will curate and amend EMR data in the Health Data Storage. METHODOLOGY AND ARCHITECTURAL FRAMEWORK: To share medical and research data, SMITH's information system is based on communication and storage standards. We use the Reference Model of the Open Archival Information System and will consistently implement profiles of Integrating the Healthcare Enterprise (IHE) and Health Level Seven (HL7) standards. Standard terminologies will be applied. The SMITH Market Place will be used for devising agreements on data access and distribution.
3LGM2 for enterprise architecture modeling supports a consistent development process. The DIC reference architecture determines the services, applications and standards-based communication links needed to efficiently support the ingesting, data nourishing, trustee, privacy management and data transfer tasks of the SMITH DICs. The reference architecture is adopted at the local sites. Data sharing services and the market place enable interoperability. USE CASES: The methodological use case "Phenotype Pipeline" (PheP) constructs algorithms for annotations and analyses of patient-related phenotypes according to classification rules or statistical models based on structured data. Unstructured textual data will be subject to natural language processing to permit integration into the phenotyping algorithms. The clinical use case "Algorithmic Surveillance of ICU Patients" (ASIC) focuses on patients in Intensive Care Units (ICU) with acute respiratory distress syndrome (ARDS). A model-based decision support system will give advice on mechanical ventilation. The clinical use case HELP develops a "hospital-wide electronic medical record-based computerized decision support system to improve outcomes of patients with blood-stream infections" (HELP). ASIC and HELP use the PheP. The clinical benefit of the use cases ASIC and HELP will be demonstrated in a change-of-care clinical trial based on a stepped wedge design. DISCUSSION: SMITH's strength is its modular, reusable IT architecture based on interoperability standards, the integration of the hospitals' information management departments and the public-private partnership. The project aims at sustainability beyond the first 4-year funding period.


Subject(s)
Delivery of Health Care , Information Technology , Algorithms , Clinical Governance , Communication , Decision Support Systems, Clinical , Electronic Health Records , Information Storage and Retrieval , Intensive Care Units , Models, Theoretical , Phenotype , Policies
20.
J Med Syst ; 36(1): 201-21, 2012 Feb.
Article in English | MEDLINE | ID: mdl-20703735

ABSTRACT

Performance measurement is vital for improving health care systems. However, we are still far from having accepted performance measurement models. Researchers and developers are seeking comparable performance indicators. We developed an intelligent search tool to identify the measures appropriate for specific requirements by matching diverse care settings. We reviewed the literature and analyzed 229 performance measurement studies published after 2000. These studies were evaluated with an original theoretical framework and stored in a database. A semantic network was designed to represent domain knowledge and support reasoning. We applied knowledge-based decision support techniques to cope with uncertainty problems. As a result, we designed a tool which simplifies the performance indicator search process and provides the most relevant indicators by employing knowledge-based systems.
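The indicator-matching idea can be reduced to a toy keyword-overlap search. This is a deliberate simplification of the paper's semantic network and knowledge-based reasoning, and the indicators and tags below are invented for illustration.

```python
# Toy sketch: score stored performance indicators against a requirement
# description by counting shared keywords, then return the best match.
indicators = {
    "30-day readmission rate": {"readmission", "hospital", "outcome"},
    "average length of stay": {"length", "stay", "efficiency"},
    "hand hygiene compliance": {"hygiene", "safety", "process"},
}

def search(requirement_terms):
    # overlap between each indicator's tags and the requirement terms
    scored = {name: len(tags & requirement_terms)
              for name, tags in indicators.items()}
    return max(scored, key=scored.get)

best = search({"outcome", "readmission"})
```

The real tool replaces the flat tag sets with a semantic network, so related but non-identical concepts can still match.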


Subject(s)
Decision Support Techniques , Delivery of Health Care/organization & administration , Knowledge Bases , Quality Indicators, Health Care/organization & administration , Search Engine/methods , Delivery of Health Care/standards , Humans , Information Systems/organization & administration , Models, Theoretical , Neural Networks, Computer , Quality Indicators, Health Care/standards , User-Computer Interface