1.
Interact J Med Res ; 13: e51563, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39353185

ABSTRACT

BACKGROUND: Clinical routine data derived from university hospitals hold immense value for health-related research on large cohorts. However, using secondary data for hypothesis testing necessitates adherence to scientific, legal (such as the General Data Protection Regulation and federal and state data protection legislation), technical, and administrative requirements. This process is intricate, time-consuming, and susceptible to errors. OBJECTIVE: This study aims to develop a platform that enables clinicians to use current real-world data for hypothesis-testing research and to evaluate its advantages and limitations at a large university medical center (542,944 patients in 2022). METHODS: We identified requirements from clinical practitioners, conceptualized and implemented a platform based on existing components, and assessed its applicability in clinical reality quantitatively and qualitatively. RESULTS: The proposed platform was established at the University Medical Center Hamburg-Eppendorf and made 639 forms encompassing 10,629 data elements accessible to all resident scientists and clinicians. The number of covered patients grows daily, and parts of their electronic health records are made accessible through the platform. Qualitatively, we conducted a retrospective analysis of Parkinson disease in 777 patients, providing additional evidence for a significantly higher proportion of action tremor in patients with rest tremor (340/777, 43.8%) compared with those without rest tremor (255/777, 32.8%), as determined by a chi-square test (P<.001). Quantitatively, our findings demonstrate increased user engagement within the last 90 days, underscoring clinicians' growing adoption of the platform in their regular research activities. Notably, the platform facilitated the retrieval of clinical data from 600,000 patients, emphasizing its substantial added value. CONCLUSIONS: This study demonstrates the feasibility of simplifying the use of clinical data to enhance exploration and sustainability in scientific research. The proposed platform emerges as a potential technological and legal framework for other medical centers, providing them with the means to unlock untapped potential within their routine data.
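As a concrete illustration of the kind of chi-square comparison reported in the Parkinson disease analysis, the following Python sketch tests independence on a 2x2 contingency table; the cell counts are placeholders, since the abstract reports proportions rather than the full table:

# Illustrative chi-square test of independence for the action-tremor vs. rest-tremor
# comparison. The 2x2 cell counts below are placeholders, not the study's actual table.
from scipy.stats import chi2_contingency

# rows: rest tremor present / absent; columns: action tremor present / absent
table = [
    [340, 437],   # hypothetical split for patients with rest tremor
    [255, 522],   # hypothetical split for patients without rest tremor
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4g}")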

2.
Stud Health Technol Inform ; 316: 1617-1621, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176520

ABSTRACT

This work introduces a novel approach to facilitate clinical research on secondary clinical data by integrating an LLM-based chatbot within a specialized platform called the data hotel. The platform is designed to empower clinical researchers within our institution by enabling the generation of research hypotheses from secondary-use patient data sources. Our focus in this work is on the deployment and functionality of the LLM-based chatbot within the data hotel ecosystem. The aim is not only to aid medical experts in visualizing and analyzing data sourced from the platform but also to enable seamless storage of the generated code, enhancing the efficiency and reproducibility of the research process. This integration represents a significant advancement in leveraging LLM capabilities to enhance the utility and accessibility of clinical research platforms.
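A minimal sketch of the chatbot workflow described above, assuming an OpenAI-style client as a stand-in (the abstract does not specify which LLM or API the data hotel uses): the model drafts analysis code for a cohort question, and the generated snippet is stored for later reproducibility.

# Hedged sketch only: client, model name, and file names are assumptions, not the
# platform's actual integration.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "Plot the age distribution of patients with atrial fibrillation."
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You write pandas/matplotlib code for clinical cohort data."},
        {"role": "user", "content": question},
    ],
)
generated_code = response.choices[0].message.content

# Store the generated code next to the query so the analysis can be re-run later.
Path("generated_analyses").mkdir(exist_ok=True)
Path("generated_analyses/age_distribution.py").write_text(generated_code)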


Subject(s)
Software; Humans; Electronic Health Records; Biomedical Research; Information Storage and Retrieval/methods; User-Computer Interface
3.
JMIR Med Inform ; 12: e49865, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39046780

ABSTRACT

BACKGROUND: Interpretability and intuitive visualization facilitate medical knowledge generation through big data. In addition, robustness to high-dimensional and missing data is a requirement for statistical approaches in the medical domain. A method tailored to the needs of physicians must meet all the abovementioned criteria. OBJECTIVE: This study aims to develop an accessible tool for visual data exploration without the need for programming knowledge, adjusting complex parameterizations, or handling missing data. We sought to apply statistical analysis in the setting of disease and control cohorts familiar to clinical researchers. We aimed to guide the user by identifying and highlighting data patterns associated with disease and to reveal relations between attributes within the data set. METHODS: We introduce the attribute association graph, a novel graph structure designed for visual data exploration using robust statistical metrics. The nodes capture frequencies of participant attributes in disease and control cohorts as well as deviations between groups. The edges represent conditional relations between attributes. The graph is visualized using the Neo4j (Neo4j, Inc) data platform and can be interactively explored without the need for technical knowledge. Nodes with high deviations between cohorts and edges with noticeable conditional relationships are highlighted to guide the user during the exploration. The graph is accompanied by a dashboard visualizing variable distributions. For evaluation, we applied the graph and dashboard to the Hamburg City Health Study data set, a large cohort study conducted in the city of Hamburg, Germany. All data structures can be accessed freely by researchers, physicians, and patients. In addition, we developed a user test conducted with physicians incorporating the System Usability Scale, individual questions, and user tasks. RESULTS: We evaluated the attribute association graph and dashboard through an exemplary data analysis of participants with a general cardiovascular disease in the Hamburg City Health Study data set. All results extracted from the graph structure and dashboard are in accordance with findings from the literature, except for unusually low cholesterol levels in participants with cardiovascular disease, which could be induced by medication. In addition, 95% CIs of Pearson correlation coefficients were calculated for all associations identified during the data analysis, confirming the results. Furthermore, a user test with 10 physicians assessing the usability of the proposed methods was conducted. A System Usability Scale score of 70.5% and average successful task completion of 81.4% were reported. CONCLUSIONS: The proposed attribute association graph and dashboard enable intuitive visual data exploration. They are robust to high-dimensional as well as missing data and require no parameterization. The usability for clinicians was confirmed via a user test, and the validity of the statistical results was confirmed by associations known from the literature and standard statistical inference.
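As a rough, self-contained sketch of the underlying idea (not the paper's implementation or its exact statistical metrics), the following Python snippet computes attribute frequencies per cohort for the nodes and simple conditional proportions for the edges, with invented thresholds for highlighting:

# Toy attribute association graph: nodes carry per-cohort frequencies and deviations,
# edges carry conditional proportions. Thresholds (0.2 deviation, 0.5 conditional
# proportion) and the data are illustrative only.
import itertools
import pandas as pd

df = pd.DataFrame({                       # toy binary attribute data
    "disease":      [1, 1, 1, 0, 0, 0, 0, 1],
    "hypertension": [1, 1, 0, 0, 1, 0, 0, 1],
    "smoking":      [1, 0, 1, 0, 0, 0, 1, 1],
})
attrs = [c for c in df.columns if c != "disease"]

nodes = {}
for a in attrs:
    p_dis = df.loc[df.disease == 1, a].mean()
    p_ctl = df.loc[df.disease == 0, a].mean()
    nodes[a] = {"freq_disease": p_dis, "freq_control": p_ctl,
                "deviation": p_dis - p_ctl,
                "highlight": abs(p_dis - p_ctl) >= 0.2}

edges = []
for a, b in itertools.permutations(attrs, 2):
    cond = df.loc[df[a] == 1, b].mean()   # P(b present | a present)
    if cond >= 0.5:
        edges.append((a, b, round(cond, 2)))

print(nodes)
print(edges)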

4.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732794

ABSTRACT

High-quality eye-tracking data are crucial in behavioral sciences and medicine. Even with a solid understanding of the literature, selecting the most suitable algorithm for a specific research project poses a challenge. Empowering applied researchers to choose the best-fitting detector for their research needs is the primary contribution of this paper. We developed a framework to systematically assess and compare the effectiveness of 13 state-of-the-art algorithms through a unified application interface. Hence, we more than double the number of algorithms that are currently usable within a single software package and allow researchers to identify the best-suited algorithm for a given scientific setup. Our framework validation on retrospective data underscores its suitability for algorithm selection. Through a detailed and reproducible step-by-step workflow, we hope to contribute towards significantly improved data quality in scientific experiments.
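To make the notion of a unified application interface concrete, here is a hedged Python sketch of what such a plug-in contract could look like, paired with a toy dispersion-threshold fixation detector; the names and thresholds are illustrative and not the framework's actual API:

# Toy unified detector interface: every algorithm exposes the same detect() method,
# so they can be swapped and compared. This is a simplified stand-in, not the paper's code.
from typing import Protocol
import numpy as np

class EventDetector(Protocol):
    def detect(self, x: np.ndarray, y: np.ndarray, t: np.ndarray) -> list:
        """Return (start_index, end_index) pairs of detected fixations."""

class DispersionDetector:
    def __init__(self, max_dispersion: float = 1.0, min_samples: int = 10):
        self.max_dispersion = max_dispersion
        self.min_samples = min_samples

    def detect(self, x, y, t):
        events, start = [], 0
        for end in range(len(x)):
            wx, wy = x[start:end + 1], y[start:end + 1]
            dispersion = (wx.max() - wx.min()) + (wy.max() - wy.min())
            if dispersion > self.max_dispersion:
                if end - start >= self.min_samples:
                    events.append((start, end - 1))
                start = end
        if len(x) - start >= self.min_samples:
            events.append((start, len(x) - 1))
        return events

rng = np.random.default_rng(0)
x = rng.normal(10, 0.1, 200); y = rng.normal(5, 0.1, 200); t = np.arange(200) / 500
print(DispersionDetector().detect(x, y, t))   # one long fixation on this toy signal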


Subject(s)
Algorithms; Eye-Tracking Technology; Humans; Software; Data Accuracy; Eye Movements/physiology; Reproducibility of Results
5.
Stud Health Technol Inform ; 307: 22-30, 2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37697834

ABSTRACT

INTRODUCTION: The diagnosis and treatment of Parkinson's disease depend on the assessment of motor symptoms. Wearables and machine learning algorithms have emerged to collect large amounts of data and potentially support clinicians in clinical and ambulatory settings. STATE OF THE ART: However, a systematic and reusable data architecture for the storage, processing, and analysis of inertial sensor data is not available. Consequently, datasets vary significantly between studies and prevent comparability. CONCEPT: To simplify research on the neurodegenerative disorder, we propose an efficient, real-time-optimized architecture compatible with HL7 FHIR and backed by a relational database schema. LESSONS LEARNED: We verified the adequate performance of the system on an experimental benchmark and in a clinical experiment. However, existing standards need to be further optimized to be fully sufficient for data with high temporal resolution.
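As an illustration of how a single inertial-sensor burst could be expressed in an HL7 FHIR-compatible way, the following Python sketch assembles an Observation resource with a sampled-data value; the code system, device reference, and sampling parameters are placeholders rather than the profile used in the paper:

# Hedged, R4-style sketch of packaging an accelerometer burst as a FHIR Observation.
import json

samples = [0.01, 0.02, -0.01, 0.00, 0.03]          # acceleration in g, single axis
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://example.org/sensor-codes",   # placeholder
                          "code": "accel-x",
                          "display": "Acceleration, x-axis"}]},
    "subject": {"reference": "Patient/example"},
    "device": {"reference": "Device/wrist-imu-01"},
    "effectiveDateTime": "2023-01-01T12:00:00+01:00",
    "valueSampledData": {
        "origin": {"value": 0, "unit": "g"},
        "period": 10,                                # ms between samples (100 Hz)
        "dimensions": 1,
        "data": " ".join(str(s) for s in samples),
    },
}
print(json.dumps(observation, indent=2))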


Subject(s)
Parkinson Disease; Humans; Parkinson Disease/diagnosis; Algorithms; Benchmarking; Databases, Factual; Machine Learning
6.
Stud Health Technol Inform ; 307: 51-59, 2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37697837

ABSTRACT

INTRODUCTION: The collection of examination data for large clinical studies is often done with proprietary systems, which come with several disadvantages such as high cost and low flexibility. With the use of open-source tools, these disadvantages can be overcome, thereby improving data collection as well as data quality. Here, we use the data collection process of the Hamburg City Health Study (HCHS), carried out at the University Medical Center Hamburg-Eppendorf (UKE), as an example. We evaluated how the recording of examination data can be converted from an established, proprietary electronic health record (EHR) system to the free-to-use Research Electronic Data Capture (REDCap) software. METHODS: For this purpose, a technical conversion of the EHR system is described first. Metafiles derived from the EHR system were used to build REDCap electronic case report forms (eCRFs). The REDCap system was tested by HCHS study assistants via completion of self-developed tasks mimicking their everyday study routine. Usability was quantitatively evaluated via the IBM Computer System Usability Questionnaire (CSUQ) and qualitatively assessed with a semi-structured interview. RESULTS: With the IBM CSUQ, the study assistants rated the use of the basic REDCap system for HCHS examination data collection with an overall score of 4.39, which represents medium acceptance. The interview feedback was used to formulate user stories to subsequently increase administrative sovereignty and to conceptualize a REDCap HCHS information technology (IT) infrastructure. CONCLUSION: Our work aims to serve as a template for evaluating the feasibility of a conversion from a proprietary to a free-to-use data collection tool for large clinical studies such as the HCHS. REDCap has great potential, but extensions and integration into the current IT infrastructure are required.
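For readers unfamiliar with REDCap's data import, the following hedged Python sketch shows how an examination record could be pushed through REDCap's standard API; the URL, token, and field names are placeholders, not the HCHS configuration:

# Minimal record import via the REDCap API; all identifiers below are illustrative.
import json
import requests

records = [{
    "record_id": "hchs_0001",
    "exam_date": "2023-05-04",
    "systolic_bp": 128,          # hypothetical eCRF fields
}]

response = requests.post(
    "https://redcap.example.org/api/",
    data={
        "token": "YOUR_API_TOKEN",
        "content": "record",
        "format": "json",
        "type": "flat",
        "data": json.dumps(records),
        "returnContent": "count",
    },
    timeout=30,
)
print(response.status_code, response.text)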


Subject(s)
Academic Medical Centers; Data Accuracy; Humans; Data Collection; Computer Systems; Electronics
7.
Cancers (Basel) ; 15(16)2023 Aug 11.
Article in English | MEDLINE | ID: mdl-37627087

ABSTRACT

In their joint effort against cancer, all involved parties within the German healthcare system are obligated to report diagnostics, treatments, progression, and follow-up information for tumor patients to the respective cancer registries. Given the federal structure of Germany, the oncological basis dataset (oBDS) serves as the legally required national standard for oncological reporting. Unfortunately, the use of various documentation software solutions leads to semantic and technical heterogeneity of the data, complicating the establishment of research networks and collective data analysis. Within this feasibility study, we evaluated the transferability of all oBDS characteristics to the standardized vocabularies, a metadata repository of the Observational Medical Outcomes Partnership (OMOP) common data model (CDM). A total of 17,844 oBDS expressions were mapped automatically or manually to standardized concepts of the OMOP CDM. In a second step, we converted real patient data retrieved from the Hamburg Cancer Registry to the new terminologies. Using our pipeline, we transformed 1,773,373 cancer-related data elements to the OMOP CDM. The mapping of the oBDS to the standardized vocabularies of the OMOP CDM promotes the semantic interoperability of oncological data in Germany. Moreover, it allows participation in network studies of the Observational Health Data Sciences and Informatics (OHDSI) community using federated analysis beyond the level of individual countries.
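The following Python sketch illustrates the general mapping step with a toy lookup table joining source oBDS expressions to OMOP concept_ids; codes, concept_ids, and table layout are invented for illustration and do not reproduce the study's curated mappings:

# Toy source-to-concept mapping step with pandas.
import pandas as pd

source_rows = pd.DataFrame({
    "patient_id": [1, 2],
    "obds_code": ["tumor_status_complete_remission", "grading_g2"],
})

concept_map = pd.DataFrame({
    "source_code": ["tumor_status_complete_remission", "grading_g2"],
    "target_concept_id": [1111111, 2222222],     # placeholder OMOP concept_ids
    "target_domain": ["Measurement", "Measurement"],
})

mapped = source_rows.merge(concept_map, left_on="obds_code",
                           right_on="source_code", how="left")
unmapped = mapped[mapped.target_concept_id.isna()]
print(mapped[["patient_id", "obds_code", "target_concept_id", "target_domain"]])
print(f"{len(unmapped)} rows still need manual mapping")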

8.
ESC Heart Fail ; 10(2): 975-984, 2023 04.
Article in English | MEDLINE | ID: mdl-36482800

ABSTRACT

AIMS: We aim to develop a pragmatic screening tool for heart failure at the general population level. METHODS AND RESULTS: This study was conducted within the Hamburg City Health Study, an ongoing, prospective, observational study enrolling randomly selected inhabitants of the city of Hamburg aged 45-75 years. Heart failure was diagnosed per current guidelines. Using only digital electrocardiograms (ECGs), a convolutional neural network (CNN) was built to discriminate participants with and without heart failure. As comparators, known risk variables for heart failure were fitted into a logistic regression model and a random forest classifier. Of the 5299 individuals included in this study, 318 (6.0%) had heart failure. Using only the digital ECGs instead of several risk variables as input, the CNN provided predictive accuracy for heart failure comparable to the logistic regression model and the random forest classifier [area under the curve (AUC) of 0.75, sensitivity of 0.67, and specificity of 0.69 for the CNN; AUC of 0.77, sensitivity of 0.63, and specificity of 0.76 for the logistic regression; AUC of 0.79, sensitivity of 0.67, and specificity of 0.72 for the random forest classifier]. CONCLUSIONS: Using a CNN built on digital ECGs only and requiring no additional input, we derived a screening tool for heart failure in the general population. It could be embedded seamlessly into the clinical routine of general practitioners, as it builds on an already established diagnostic tool and does not require additional, time-consuming input. This could help to alleviate the underdiagnosis of heart failure.
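As a minimal sketch of the modeling idea (not the architecture reported in the study), the following PyTorch snippet defines a small 1D CNN that maps a 12-lead ECG trace to a single heart-failure logit; layer sizes, signal length, and sampling rate are assumptions:

# Illustrative 1D CNN for binary classification of 12-lead ECG traces.
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    def __init__(self, n_leads: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, 1)   # logit for "heart failure"

    def forward(self, x):                     # x: (batch, leads, samples)
        return self.classifier(self.features(x).squeeze(-1))

model = ECGNet()
dummy_ecg = torch.randn(2, 12, 5000)          # e.g. 10 s at 500 Hz
print(torch.sigmoid(model(dummy_ecg)).shape)  # -> torch.Size([2, 1])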


Subject(s)
Heart Failure; Neural Networks, Computer; Humans; Prospective Studies; Heart Failure/diagnosis; Heart Failure/epidemiology; Random Forest; Electrocardiography
9.
Med Image Anal ; 76: 102306, 2022 02.
Article in English | MEDLINE | ID: mdl-34879287

ABSTRACT

Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives as well as available standards and tools for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.


Subject(s)
Data Science; Machine Learning; Humans
10.
Int J Mol Sci ; 22(6)2021 Mar 10.
Article in English | MEDLINE | ID: mdl-33802234

ABSTRACT

Recent advances in sequencing and biotechnological methodologies have led to the generation of large volumes of molecular data from different omics layers, such as genomics, transcriptomics, proteomics, and metabolomics. Integration of these data with clinical information provides new opportunities to discover how perturbations in biological processes lead to disease. Data-driven approaches for the integration and interpretation of multi-omics data can robustly identify links between structural and functional information and propose causal molecular networks with potential impact on cancer pathophysiology. This knowledge can then be used to improve disease diagnosis, prognosis, prevention, and therapy. This review summarizes and categorizes the most current computational methodologies and tools for the integration of distinct molecular layers in the context of translational cancer research and personalized therapy. Additionally, the bioinformatics tools Multi-Omics Factor Analysis (MOFA) and netDx are tested using omics data from public cancer resources to assess their overall robustness, provide reproducible workflows for gaining biological knowledge from multi-omics data, and comprehensively characterize the significantly perturbed biological entities in distinct cancer types. We show that the performed supervised and unsupervised analyses result in meaningful and novel findings.
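As a generic, hedged illustration of unsupervised multi-omics integration (not the MOFA or netDx APIs), the following Python snippet standardizes and concatenates two simulated omics layers and extracts shared latent factors with scikit-learn:

# Generic latent-factor sketch on simulated data; layer names and sizes are invented.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_samples = 50
expression = rng.normal(size=(n_samples, 200))     # simulated transcriptomics
methylation = rng.normal(size=(n_samples, 300))    # simulated epigenomics

layers = [StandardScaler().fit_transform(m) for m in (expression, methylation)]
joint = np.hstack(layers)

factors = PCA(n_components=5).fit_transform(joint)  # shared latent factors per sample
print(factors.shape)                                 # (50, 5)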


Subject(s)
Biomarkers, Tumor; Computational Biology; Genomics; Metabolomics; Neoplasms; Proteomics; Translational Research, Biomedical; Biomarkers, Tumor/genetics; Biomarkers, Tumor/metabolism; Humans; Neoplasms/genetics; Neoplasms/metabolism; Neoplasms/therapy