Results 1 - 8 of 8
1.
Stud Health Technol Inform ; 302: 977-981, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203548

ABSTRACT

Electrocardiography analysis is widely used in various clinical applications, and Deep Learning models for classification tasks are currently a focus of research. Due to their data-driven character, they bear the potential to handle signal noise efficiently, but its influence on the accuracy of these methods is still unclear. We therefore benchmark the influence of four types of noise on the accuracy of a Deep Learning-based method for atrial fibrillation detection in 12-lead electrocardiograms. We use a subset of a publicly available dataset (PTB-XL) and rely on the noise metadata provided by human experts to assign a signal quality to each electrocardiogram. Furthermore, we compute a quantitative signal-to-noise ratio for each electrocardiogram. We analyze the accuracy of the Deep Learning model with respect to both metrics and observe that the method can robustly identify atrial fibrillation even in cases where human experts labelled signals as noisy on multiple leads. False positive and false negative rates are slightly worse for data labelled as noisy. Interestingly, data annotated as showing baseline drift yields an accuracy very similar to data without such annotation. We conclude that Deep Learning methods can successfully process noisy electrocardiography data and may not require the preprocessing that many conventional methods do.
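The abstract does not spell out how the quantitative signal-to-noise ratio is computed; as a rough illustration only, a per-recording SNR could be estimated along the following lines. This is a minimal sketch, assuming a 500 Hz sampling rate and treating the 0.5-40 Hz band as signal; the band limits and function names are our own assumptions, not the paper's method.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def snr_db(ecg_lead: np.ndarray, fs: float = 500.0) -> float:
    """Crude SNR estimate in dB for one ECG lead.

    Treats the 0.5-40 Hz band as 'signal' and the residual
    (baseline drift plus high-frequency content) as 'noise'.
    """
    b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
    signal = filtfilt(b, a, ecg_lead)
    noise = ecg_lead - signal
    return 10.0 * np.log10(np.sum(signal**2) / np.sum(noise**2))

# Average over the 12 leads of one recording (leads x samples);
# random placeholder data stands in for a real ECG here.
recording = np.random.randn(12, 5000)
print(np.mean([snr_db(lead) for lead in recording]))
```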


Subject(s)
Atrial Fibrillation, Deep Learning, Humans, Atrial Fibrillation/diagnosis, Benchmarking, Electrocardiography/methods, Signal-To-Noise Ratio, Algorithms
2.
Article in English | MEDLINE | ID: mdl-37126621

ABSTRACT

Despite their remarkable performance, deep neural networks (DNNs) remain largely unadopted in clinical practice, which is thought to be due in part to their lack of explainability. In this work, we apply explainable attribution methods to a pre-trained deep neural network for abnormality classification in 12-lead electrocardiography to open this "black box" and understand the relationship between model predictions and learned features. We classify data from two public databases (CPSC 2018, PTB-XL), and the attribution methods assign a "relevance score" to each sample of the classified signals. This allows us to analyze what the network learned during training, for which we propose quantitative methods: average relevance scores over a) classes, b) leads, and c) average beats. The analyses of relevance scores for atrial fibrillation and left bundle branch block, compared to healthy controls, show that their mean values a) increase with higher classification probability and correspond to misclassifications when close to zero, and b) agree with clinical recommendations regarding which leads to consider. Furthermore, c) visible P-waves and concordant T-waves result in clearly negative relevance scores in atrial fibrillation and left bundle branch block classification, respectively. Results are similar across both databases despite differences in study population and hardware. In summary, our analysis suggests that the DNN learned features that resemble cardiology textbook knowledge.
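As a sketch of what such quantitative summaries might look like, the following computes mean relevance scores per class, per lead, and over fixed-length beats. The tensor shapes, the naive beat segmentation, and all variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical relevance tensor from an attribution method,
# shaped (records, leads, samples); labels are binary here.
relevance = np.random.randn(100, 12, 5000)
labels = np.random.randint(0, 2, size=100)  # e.g. 1 = atrial fibrillation

# a) mean relevance per class
for cls in np.unique(labels):
    print("class", cls, "mean relevance:", relevance[labels == cls].mean())

# b) mean relevance per lead (hints at which leads the model relies on)
print("per-lead means:", relevance.mean(axis=(0, 2)))

# c) mean relevance over an 'average beat': a real analysis would
# segment around detected R-peaks; we fake fixed 500-sample beats.
beats = relevance[..., :4500].reshape(100, 12, 9, 500)
average_beat = beats.mean(axis=(0, 1, 2))
print("average-beat profile shape:", average_beat.shape)  # (500,)
```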

3.
Stud Health Technol Inform ; 290: 61-65, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35672971

ABSTRACT

Research data management requires stable, trustworthy repositories to safeguard scientific research results. In this context, rich metadata markup is crucial for the discoverability and interpretability of the relevant resources. SEEK is a web-based software platform for managing the important artifacts of a research project, including project structures, involved actors, documents, and datasets. SEEK is organized along the ISA model (Investigation - Study - Assay) and offers several machine-readable serializations, including JSON and RDF. In this paper, we extend the power of the RDF serialization by leveraging the W3C Data Catalog Vocabulary (DCAT). DCAT was specifically designed to improve interoperability between digital assets on the Web and enables cross-domain markup. By using community-endorsed gold-standard vocabularies and a formal knowledge description language, findability and interoperability according to the FAIR principles are significantly improved.
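For illustration, DCAT markup of a SEEK-style dataset could be produced with rdflib roughly as follows; the IRIs and the choice of properties are hypothetical examples, not SEEK's actual export.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

g = Graph()
ds = URIRef("https://example.org/seek/datasets/42")  # hypothetical IRI

# Describe the dataset itself with DCAT and Dublin Core terms.
g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("ECG study raw data")))
g.add((ds, DCTERMS.description, Literal("Assay-level dataset exported from SEEK")))
g.add((ds, DCAT.keyword, Literal("electrocardiography")))

# Attach one concrete distribution (a downloadable CSV file).
dist = URIRef("https://example.org/seek/datasets/42/content")
g.add((dist, RDF.type, DCAT.Distribution))
g.add((dist, DCAT.mediaType,
       URIRef("https://www.iana.org/assignments/media-types/text/csv")))
g.add((ds, DCAT.distribution, dist))

print(g.serialize(format="turtle"))
```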


Subject(s)
Metadata, Vocabulary, Data Management, Research Design, Software
4.
Stud Health Technol Inform ; 283: 39-45, 2021 Sep 21.
Article in English | MEDLINE | ID: mdl-34545818

ABSTRACT

Automatic electrocardiogram (ECG) analysis was one of the earliest use cases for computer-assisted diagnosis (CAD), and most ECG devices provide some level of automatic analysis. In recent years, Deep Learning (DL) has increasingly been used for this task, with the first models claiming to perform better than human physicians. In this manuscript, a pilot study is conducted to evaluate the added value of such a DL model over existing built-in analysis with respect to clinical relevance. Twenty-nine 12-lead ECGs were analyzed with a published DL model, and the results were compared to the built-in analysis and the clinical diagnosis. We could not reproduce the results on the test data exactly, presumably due to a different runtime environment; however, the deviations were on the order of rounding errors and did not affect the final classification. The excellent performance in detecting left bundle branch block and atrial fibrillation reported in the publication could be reproduced. The DL method and the built-in method performed similarly well for the chosen cases regarding clinical relevance. While the benefit of the DL method for research can be attested and usage in training can be envisioned, evaluating its added value in clinical practice would require a more comprehensive study with further and more complex cases.
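A reproducibility check of this kind, where outputs differ only at the level of rounding error but the argmax class is unchanged, can be sketched as follows; the probability values are invented for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical per-class probabilities: published reference values
# vs. values reproduced in a different runtime environment.
reference  = np.array([0.91, 0.05, 0.02, 0.02])
reproduced = np.array([0.9099998, 0.0500001, 0.02, 0.0200001])

# Deviations on the order of rounding errors...
assert np.allclose(reference, reproduced, atol=1e-5)

# ...do not change the final classification.
assert reference.argmax() == reproduced.argmax()
print("classification unchanged:", reference.argmax())
```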


Subject(s)
Atrial Fibrillation, Deep Learning, Diagnosis, Computer-Assisted, Electrocardiography, Humans, Pilot Projects
5.
Stud Health Technol Inform ; 281: 794-798, 2021 May 27.
Article in English | MEDLINE | ID: mdl-34042687

ABSTRACT

COVID-19 poses a major challenge to individuals and societies around the world, yet it is difficult to obtain a good overview of studies across different fields of medical research such as clinical trials, epidemiology, and public health. Here, we describe a consensus metadata model that facilitates structured searches of COVID-19 studies and resources, along with its implementation in three linked, complementary web-based platforms. A relational database serves as the central study metadata hub and secures compatibility with common trial registries (e.g., ICTRP) and standards such as HL7 FHIR, CDISC ODM, and DataCite. The Central Search Hub was developed as a single-page application; the other two components, which provide additional frontends, are based on the SEEK platform and MICA, respectively. These platforms offer different features for cohort browsing, item browsing, and access to documents and other study resources, to meet divergent user needs. In this way, we want to promote transparent and harmonized COVID-19 research.
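As a hedged illustration of the standards compatibility mentioned above, one study record from the metadata hub might map to a minimal HL7 FHIR (R4) ResearchStudy resource as sketched below; the identifier system and all values are placeholders, not the platform's actual export.

```python
import json

# Hypothetical mapping of one study record to a minimal FHIR R4
# ResearchStudy resource; identifier and title are invented examples.
study = {
    "resourceType": "ResearchStudy",
    "identifier": [
        {"system": "https://example.org/registry", "value": "STUDY-0001"}
    ],
    "title": "COVID-19 observational cohort",
    "status": "active",
}
print(json.dumps(study, indent=2))
```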


Asunto(s)
COVID-19 , Estudios Epidemiológicos , Humanos , Metadatos , Sistema de Registros , SARS-CoV-2
6.
Methods Inf Med ; 58(6): 229-234, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32349157

ABSTRACT

BACKGROUND: Managing research data in biomedical informatics research requires solid data governance rules to guarantee sustainable operation, as it generally involves several professions and multiple sites. As every discipline involved in biomedical research applies its own set of tools and methods, research data and the applied methods tend to branch out into numerous intermediate and output data objects, making it very difficult to reproduce research results. OBJECTIVES: This article gives an overview of the status of our implementation of the Findability, Accessibility, Interoperability and Reusability (FAIR) Guiding Principles for scientific data management and stewardship in our research data management pipeline, focusing on the software tools in use. METHODS: We analyzed our progress in FAIRifying the whole data management pipeline, from the processing of non-FAIR data up to data usage. We examined software tools for data integration, data storage, and data usage, as well as how the FAIR Guiding Principles helped to choose appropriate tools for each task. RESULTS: We were able to advance the degree of FAIRness of our data integration and data storage solutions, but our data usage does not yet satisfy several of the FAIR Guiding Principles. Existing evaluation methods for the FAIR Guiding Principles (FAIRmetrics) were not applicable to our analysis of software tools. CONCLUSION: Using the FAIR Guiding Principles, we FAIRified relevant parts of our research data management pipeline, improving the findability, accessibility, interoperability, and reuse of datasets and research results. We aim to apply the FAIRmetrics to our data management infrastructure and, where required, to contribute to FAIRmetrics for research data in the biomedical informatics domain as well as for software tools, to achieve a higher degree of FAIRness of our research data management pipeline.
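The FAIRmetrics themselves were not applicable here; as a loose illustration only, a simple FAIR-inspired self-check over dataset metadata might look like the following sketch, whose criteria and field names are our own assumptions rather than the published metrics.

```python
# Toy check of a few FAIR-inspired criteria on dataset metadata.
# Criteria and field names are illustrative; they are not the
# FAIRmetrics referenced in the article.
def fair_report(meta: dict) -> dict:
    return {
        "findable: persistent identifier": bool(meta.get("doi")),
        "accessible: retrieval URL": bool(meta.get("access_url")),
        "interoperable: standard vocabulary": bool(meta.get("vocabulary")),
        "reusable: explicit licence": bool(meta.get("license")),
    }

meta = {
    "doi": "10.1234/example",          # hypothetical DOI
    "access_url": "https://example.org/data",
    "license": "CC-BY-4.0",
}
print(fair_report(meta))  # vocabulary check fails: not yet FAIR there
```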


Asunto(s)
Investigación Biomédica , Manejo de Datos , Interoperabilidad de la Información en Salud , Accesibilidad a los Servicios de Salud , Informática , Programas Informáticos , Humanos
7.
Stud Health Technol Inform ; 264: 298-302, 2019 Aug 21.
Article in English | MEDLINE | ID: mdl-31437933

ABSTRACT

Research data generated in large projects raise challenges not only for data analytics but also for data quality assessment and data governance. The provenance of a data set, that is, its history, holds information relevant to technicians and non-technicians alike and can answer questions regarding data quality, transparency, and more. We propose an implementation roadmap to extract, store, and utilize provenance records in order to make provenance available to data analysts, research subjects, privacy officers, and machines (machine readability). Each aspect is tackled separately, resulting in the implementation of a provenance toolbox. We aim to do so within the context of HiGHmed, a research consortium established within the Medical Informatics Initiative in Germany. In this testbed of federated IT infrastructures, the toolbox shall assist each stakeholder in answering domain-specific and domain-agnostic questions regarding the provenance of data sets. In this way, we will improve data reuse, transparency, and reproducibility.
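A machine-readable provenance record of the kind described could, for example, be expressed in the W3C PROV data model; the sketch below uses the Python 'prov' package with hypothetical entity, activity, and agent names that are not taken from the HiGHmed toolbox.

```python
from prov.model import ProvDocument

# Minimal W3C PROV record: a pseudonymized dataset derived from a
# raw dataset by a pseudonymization step run by a data steward.
doc = ProvDocument()
doc.add_namespace("ex", "https://example.org/provenance/")

raw = doc.entity("ex:raw_dataset")
clean = doc.entity("ex:pseudonymized_dataset")
step = doc.activity("ex:pseudonymization")
steward = doc.agent("ex:data_steward")

doc.used(step, raw)                    # the step consumed the raw data
doc.wasGeneratedBy(clean, step)        # ...and produced the clean data
doc.wasAssociatedWith(step, steward)   # ...under the steward's control
doc.wasDerivedFrom(clean, raw)         # explicit derivation link

print(doc.get_provn())  # human-readable PROV-N serialization
```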


Subject(s)
Biomedical Research, Medical Informatics, Germany, Reproducibility of Results
8.
Stud Health Technol Inform ; 253: 75-79, 2018.
Article in English | MEDLINE | ID: mdl-30147044

ABSTRACT

This paper examines the relevance of genetic pedigree data in the context of medical research platforms. By surveying currently available tools for visualizing and analyzing this data type, and by considering use cases that combine individual patient data with pedigree data, we show the advantages of integrating this data type into a medical research platform. As a practical step, we created a procedure for integrating pedigree data into tranSMART. Furthermore, we implemented a tool for analyzing and visualizing pedigree data in combination with other patient data in SmartR, a dynamic analysis tool within tranSMART. Finally, we address limitations and future development strategies of the tool.
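Although the paper targets tranSMART and SmartR, a minimal sketch of reading pedigree records in the de-facto standard PED format (family, individual, father, mother, sex, phenotype) may clarify the data type; the parser and the sample rows below are illustrative, not the paper's integration code.

```python
from dataclasses import dataclass

@dataclass
class Individual:
    family: str
    iid: str
    father: str   # "0" if unknown (founder)
    mother: str   # "0" if unknown (founder)
    sex: int      # 1 = male, 2 = female
    phenotype: str

def parse_ped(lines):
    """Yield one Individual per whitespace-separated PED record."""
    for line in lines:
        fam, iid, fa, mo, sex, pheno = line.split()[:6]
        yield Individual(fam, iid, fa, mo, int(sex), pheno)

# Two founders and one affected child, as a tiny example pedigree.
ped = ["FAM1 P1 0 0 1 1", "FAM1 P2 0 0 2 1", "FAM1 C1 P1 P2 1 2"]
for ind in parse_ped(ped):
    print(ind)
```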


Subject(s)
Pedigree, Software, Biomedical Research, Humans, Statistics as Topic