Results 1 - 20 of 47

1.
Endoscopy; 56(1): 63-69, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37532115

ABSTRACT

BACKGROUND AND STUDY AIMS: Artificial intelligence (AI)-based systems for computer-aided detection (CADe) of polyps receive regular updates and occasionally offer customizable detection thresholds, both of which impact their performance, but little is known about these effects. This study aimed to compare the performance of different CADe systems on the same benchmark dataset. METHODS: 101 colonoscopy videos were used as benchmark. Each video frame with a visible polyp was manually annotated with bounding boxes, resulting in 129 705 polyp images. The videos were then analyzed by three different CADe systems, representing five conditions: two versions of GI Genius, Endo-AID with detection Types A and B, and EndoMind, a freely available system. Evaluation included an analysis of sensitivity and false-positive rate, among other metrics. RESULTS: Endo-AID detection Type A, the earlier version of GI Genius, and EndoMind detected all 93 polyps. Both the later version of GI Genius and Endo-AID Type B missed 1 polyp. The mean per-frame sensitivities were 50.63 % and 67.85 %, respectively, for the earlier and later versions of GI Genius, 65.60 % and 52.95 %, respectively, for Endo-AID Types A and B, and 60.22 % for EndoMind. CONCLUSIONS: This study compares the performance of different CADe systems, different updates, and different configuration modes. This might help clinicians to select the most appropriate system for their specific needs.
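To make the reported per-frame figures concrete, here is a minimal sketch (not the authors' evaluation code) of how per-frame sensitivity and false-positive rate can be computed once each frame has been reduced to a ground-truth flag (annotated polyp visible) and a prediction flag (CADe detection raised). The function name and the toy data are illustrative only.

```python
# Minimal sketch of frame-level benchmark metrics (not the authors' evaluation code).
# Assumes each video frame is reduced to two booleans: ground truth (annotated polyp
# visible) and prediction (the CADe system raised a detection on that frame).

def per_frame_metrics(ground_truth: list[bool], predictions: list[bool]) -> dict:
    tp = sum(g and p for g, p in zip(ground_truth, predictions))
    fp = sum((not g) and p for g, p in zip(ground_truth, predictions))
    fn = sum(g and (not p) for g, p in zip(ground_truth, predictions))
    tn = sum((not g) and (not p) for g, p in zip(ground_truth, predictions))
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0   # per-frame sensitivity
    fp_rate = fp / (fp + tn) if (fp + tn) else 0.0       # per-frame false-positive rate
    return {"sensitivity": sensitivity, "false_positive_rate": fp_rate}

# Example: 6 frames, polyp visible in the last 4, detected in 3 of them.
print(per_frame_metrics(
    [False, False, True, True, True, True],
    [False, True, False, True, True, True],
))
```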


Subject(s)
Colonic Polyps , Colorectal Neoplasms , Humans , Colonic Polyps/diagnostic imaging , Artificial Intelligence , Colonoscopy/methods , Colorectal Neoplasms/diagnosis
2.
Sensors (Basel); 24(3), 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38339603

ABSTRACT

At a time when sustainability and CO2 efficiency are of ever-increasing importance, heating systems deserve special consideration. Despite well-functioning hardware, inefficiencies may arise when controller parameters are not well chosen. While monitoring systems could help to identify such issues, they lack improvement suggestions. One possible solution would be the use of digital twins; however, critical values such as the water consumption of the residents can often not be acquired for accurate models. To address this issue, coarse models can be employed to generate quantitative predictions, which can then be interpreted qualitatively to assess "better or worse" system behavior. In this paper, we present a simulation and calibration framework as well as a preprocessing module. These components can be run locally or deployed as containerized microservices and are easy to interface with existing data acquisition infrastructure. We evaluate the two main operating modes, namely automatic model calibration using measured data, and the optimization of controller parameters. Our results show that, using a coarse model of a real heating system and data augmentation through preprocessing, an acceptable fit to partially incomplete measured data can be achieved, and that the calibrated model can subsequently be used to optimize the controller parameters with regard to the simulated boiler gas consumption.
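As an illustration of the two operating modes (calibration against measured data, then controller optimization against simulated gas consumption), here is a toy sketch built on a first-order heating model. The model, the measured values, and all parameter names are hypothetical and not taken from the published framework.

```python
# Toy sketch of the two operating modes described above (model calibration, then controller
# optimization) on a hypothetical first-order heating model -- not the published framework.
import numpy as np
from scipy.optimize import minimize

measured_temp = np.array([20.0, 21.5, 22.6, 23.4, 23.9, 24.3])   # hypothetical sensor readings

def simulate(gain, setpoint, steps=6):
    """Coarse model: room temperature approaches the setpoint; gas use grows with the gap."""
    temp, temps, gas = 20.0, [], 0.0
    for _ in range(steps):
        temps.append(temp)
        gas += gain * max(setpoint - temp, 0.0)   # crude proxy for boiler gas consumption
        temp += gain * (setpoint - temp)          # simple first-order response
    return np.array(temps), gas

def calibration_error(params):
    temps, _ = simulate(gain=params[0], setpoint=params[1])
    return float(np.mean((temps - measured_temp) ** 2))

# Mode 1: calibrate the coarse model against (possibly incomplete) measured data.
fit = minimize(calibration_error, x0=[0.3, 25.0], method="Nelder-Mead")
gain_cal, setpoint_cal = fit.x

# Mode 2: pick the controller setpoint that trades off simulated gas use against comfort.
def controller_cost(setpoint):
    temps, gas = simulate(gain_cal, setpoint)
    discomfort = float(np.mean(np.clip(22.5 - temps, 0.0, None)))   # penalty for staying cold
    return gas + 10.0 * discomfort

best = min(np.linspace(22.0, 26.0, 9), key=controller_cost)
print(f"calibrated gain = {gain_cal:.2f}, recommended setpoint = {best:.1f} °C")
```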

3.
BMC Med Imaging; 23(1): 59, 2023 04 20.
Article in English | MEDLINE | ID: mdl-37081495

ABSTRACT

BACKGROUND: Colorectal cancer is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. However, not all colon polyps have the risk of becoming cancerous. Therefore, polyps are classified using different classification systems. After the classification, further treatment and procedures are based on the classification of the polyp. Nevertheless, classification is not easy. Therefore, we suggest two novel automated classification systems assisting gastroenterologists in classifying polyps based on the NICE and Paris classifications. METHODS: We built two classification systems. One classifies polyps based on their shape (Paris); the other classifies polyps based on their texture and surface patterns (NICE). A two-step process for the Paris classification is introduced: first, detecting and cropping the polyp in the image, and second, classifying the polyp based on the cropped area with a transformer network. For the NICE classification, we design a few-shot learning algorithm based on the deep metric learning approach. The algorithm creates an embedding space for polyps, which allows classification from a few examples to account for the scarcity of NICE-annotated images in our database. RESULTS: For the Paris classification, we achieve an accuracy of 89.35 %, surpassing all previously published results and establishing a new state-of-the-art baseline accuracy on a public dataset. For the NICE classification, we achieve a competitive accuracy of 81.13 %, thereby demonstrating the viability of the few-shot learning paradigm for polyp classification in data-scarce environments. Additionally, we show different ablations of the algorithms. Finally, we further elaborate on the explainability of the system by showing heat maps of the neural network's activations. CONCLUSION: Overall, we introduce two polyp classification systems to assist gastroenterologists. We achieve state-of-the-art performance in the Paris classification and demonstrate the viability of the few-shot learning paradigm in the NICE classification, addressing the prevalent data scarcity issues faced in medical machine learning.
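The few-shot, metric-learning idea behind the NICE classifier can be pictured as nearest-centroid classification in an embedding space. The sketch below is illustrative only: the embedding values are made up, and the real system learns the embedding with a deep network rather than using 2-D toy vectors.

```python
# Minimal sketch of the few-shot idea described for the NICE classification (not the
# authors' model): embed each polyp image, then assign the class whose support-set
# centroid is nearest in the embedding space. The embedding function is assumed given.
import numpy as np

def nearest_centroid_label(query_emb, support_embs, support_labels):
    """support_embs: (n, d) embeddings of a few labeled examples; query_emb: (d,)."""
    labels = sorted(set(support_labels))
    centroids = {c: np.mean([e for e, l in zip(support_embs, support_labels) if l == c], axis=0)
                 for c in labels}
    return min(labels, key=lambda c: np.linalg.norm(query_emb - centroids[c]))

# Toy 2-D "embeddings" for NICE types 1 and 2 with three support examples each.
support = np.array([[0.1, 0.2], [0.0, 0.3], [0.2, 0.1], [0.9, 0.8], [1.0, 0.7], [0.8, 0.9]])
labels = ["NICE 1"] * 3 + ["NICE 2"] * 3
print(nearest_centroid_label(np.array([0.85, 0.75]), support, labels))  # -> "NICE 2"
```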


Subject(s)
Colonic Polyps , Deep Learning , Humans , Colonic Polyps/diagnostic imaging , Colonoscopy , Neural Networks, Computer , Algorithms
4.
J Digit Imaging; 36(2): 715-724, 2023 04.
Article in English | MEDLINE | ID: mdl-36417023

ABSTRACT

This study aims to show the feasibility and benefit of single queries in a research data warehouse combining data from a hospital's clinical and imaging systems. We used a comprehensive integration of a production picture archiving and communication system (PACS) with a clinical data warehouse (CDW) for research to create a system that allows data from both domains to be queried jointly with a single query. To achieve this, we mapped the DICOM information model to the extended entity-attribute-value (EAV) data model of a CDW, which allows data linkage and query constraints on multiple levels: the patient, the encounter, a document, and a group level. Accordingly, we integrated DICOM metadata directly into the CDW and linked it to existing clinical data. For this analysis, we included data collected in 2016 and 2017 from the Department of Internal Medicine, covering two query inquiries from researchers: one targeting research on a specific disease and one in radiology. We obtained quantitative information about the current availability of combinations of clinical and imaging data using a single multilevel query compiled for each query inquiry. We compared these multilevel query results to results that linked data at a single level, yielding result counts that were up to 112% and 573% higher. An EAV data model can be extended to store data from clinical systems and PACS on multiple levels, enabling combined querying with a single query to quickly display actual frequency data.
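A toy illustration of the extended EAV idea follows: clinical facts and DICOM metadata share one fact table, and a single query constrains both at a chosen linkage level (here the patient level). The attribute names and the in-memory "table" are hypothetical; the actual CDW uses its own schema and query language.

```python
# Illustrative sketch of the extended entity-attribute-value (EAV) idea (not the actual
# CDW schema): clinical facts and DICOM metadata live in one fact table and are combined
# at a chosen level (here: patient) by a single query-like filter.
facts = [
    {"patient": "P1", "encounter": "E1", "attribute": "diagnosis", "value": "CRC"},
    {"patient": "P1", "encounter": "E2", "attribute": "Modality",  "value": "CT"},   # from DICOM
    {"patient": "P2", "encounter": "E3", "attribute": "diagnosis", "value": "CRC"},
    {"patient": "P2", "encounter": "E4", "attribute": "Modality",  "value": "MR"},   # from DICOM
]

def patients_with(facts, **constraints):
    """Return patients having, at any encounter, a fact matching each attribute=value pair."""
    result = None
    for attr, val in constraints.items():
        hits = {f["patient"] for f in facts if f["attribute"] == attr and f["value"] == val}
        result = hits if result is None else result & hits
    return result or set()

# Single query combining a clinical constraint and an imaging constraint on the patient level.
print(patients_with(facts, diagnosis="CRC", Modality="CT"))  # -> {'P1'}
```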


Subject(s)
Radiology Information Systems , Radiology , Humans , Data Warehousing , Information Storage and Retrieval , Diagnostic Imaging
5.
Gastrointest Endosc; 95(4): 794-798, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34929183

ABSTRACT

BACKGROUND AND AIMS: Adenoma detection rate is the crucial parameter for colorectal cancer screening. Increasing the field of view with additional side optics has been reported to detect flat adenomas hidden behind folds. Furthermore, artificial intelligence (AI) has also recently been introduced to detect more adenomas. We therefore aimed to combine both technologies in a new prototypic colonoscopy concept. METHODS: A 3-dimensional-printed cap including 2 microcameras was attached to a conventional endoscope. The prototype was applied in 8 gene-targeted pigs with mutations in the adenomatous polyposis coli gene. The first 4 animals were used to train an AI system based on the images generated by microcameras. Thereafter, the conceptual prototype for detecting adenomas was tested in a further series of 4 pigs. RESULTS: Using our prototype, we detected, with side optics, adenomas that might have been missed conventionally. Furthermore, the newly developed AI could detect, mark, and present adenomas visualized with side optics outside of the conventional field of view. CONCLUSIONS: Combining AI with side optics might help detect adenomas that otherwise might have been missed.


Subject(s)
Adenoma , Colonic Polyps , Colorectal Neoplasms , Adenoma/diagnosis , Animals , Artificial Intelligence , Colonic Polyps/diagnostic imaging , Colonoscopy/methods , Colorectal Neoplasms/diagnosis , Humans , Swine
6.
Scand J Gastroenterol; 57(11): 1397-1403, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35701020

ABSTRACT

BACKGROUND AND AIMS: Computer-aided polyp detection (CADe) may become a standard for polyp detection during colonoscopy. Several systems are already commercially available. We report on a video-based benchmark technique for the first preclinical assessment of such systems before comparative randomized trials are undertaken. Additionally, we compare a commercially available CADe system with our newly developed one. METHODS: ENDOTEST consisted of a combination of two datasets. The validation dataset contained 48 video snippets with 22,856 manually annotated images, of which 53.2% contained polyps. The performance dataset contained 10 full-length screening colonoscopies with 230,898 manually annotated images, of which 15.8% contained a polyp. Assessment parameters were accuracy for polyp detection and the time delay to first polyp detection after polyp appearance (FDT). Two CADe systems were assessed: a commercial CADe system (GI-Genius, Medtronic) and a self-developed new system (ENDOMIND), the latter a convolutional neural network trained on 194,983 manually labeled images extracted from colonoscopy videos recorded mainly in six different gastroenterology practices. RESULTS: On the ENDOTEST, both CADe systems detected all polyps in at least one image. The per-frame sensitivity and specificity in full colonoscopies were 48.1% and 93.7%, respectively, for GI-Genius, and 54% and 92.7%, respectively, for ENDOMIND. The median FDT of ENDOMIND, 217 ms (interquartile range [IQR] 8-1533), was significantly shorter than that of GI-Genius, 1050 ms (IQR 358-2767; p = 0.003). CONCLUSIONS: Our benchmark ENDOTEST may be helpful for preclinical testing of new CADe devices. A shorter FDT appears to correlate with higher sensitivity and lower specificity for polyp detection.
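The FDT metric can be illustrated with a small sketch (not the ENDOTEST code): the delay between the first frame in which a polyp is visible and the first frame in which the system flags it, converted to milliseconds via the frame rate. The frame flags and frame rate below are invented for illustration.

```python
# Sketch of the first-detection-time (FDT) metric described above (not the ENDOTEST code).
def first_detection_time_ms(visible, detected, fps=25.0):
    """visible/detected: per-frame flags (1/0 or bool); returns FDT in milliseconds or None."""
    first_visible = next((i for i, v in enumerate(visible) if v), None)
    if first_visible is None:
        return None                      # polyp never appears in this snippet
    for frame in range(first_visible, len(detected)):
        if detected[frame]:
            return (frame - first_visible) * 1000.0 / fps
    return None                          # polyp missed entirely

# Example: polyp appears at frame 2, first detection at frame 7 -> 5 frames = 200 ms at 25 fps.
print(first_detection_time_ms([0, 0, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 1], fps=25.0))
```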


Subject(s)
Colonic Polyps , Humans , Colonic Polyps/diagnostic imaging , Benchmarking , Colonoscopy/methods , Mass Screening
7.
Int J Colorectal Dis; 37(6): 1349-1354, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35543874

ABSTRACT

PURPOSE: Computer-aided polyp detection (CADe) systems for colonoscopy have already been shown to increase the adenoma detection rate (ADR) in randomized clinical trials. Those commercially available closed systems often do not allow for data collection and algorithm optimization, for example regarding the use of different endoscopy processors. Here, we present the first clinical experience with a CADe system that is publicly available for research purposes. METHODS: We developed an end-to-end data acquisition and polyp detection system named EndoMind. Examiners at four centers utilizing four different endoscopy processors used EndoMind during their clinical routine. Detected polyps, ADR, time to first detection of a polyp (TFD), and system usability were evaluated (NCT05006092). RESULTS: During 41 colonoscopies, EndoMind detected all 29 of 29 adenomas and all 66 of 66 polyps, resulting in an ADR of 41.5%. Median TFD was 130 ms (95% CI, 80-200 ms), while maintaining a median false-positive rate of 2.2% (95% CI, 1.7-2.8%). The four participating centers rated the system on the System Usability Scale with a median of 96.3 (95% CI, 70-100). CONCLUSION: EndoMind's data acquisition capability, real-time polyp detection, and high usability score indicate substantial practical value for research and clinical practice. Still, its clinical benefit, measured by ADR, has to be determined in a prospective randomized controlled trial.
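A short sketch may clarify how an ADR of 41.5% arises from 41 procedures even though every adenoma was detected: the ADR counts colonoscopies with at least one adenoma, not adenomas or polyps themselves. The per-procedure distribution below is hypothetical, chosen only so that the totals match the abstract.

```python
# Sketch clarifying the ADR definition (not study code): the adenoma detection rate is the
# share of colonoscopies with at least one adenoma, not the share of polyps that are adenomas.
def adenoma_detection_rate(adenomas_per_procedure):
    return sum(1 for n in adenomas_per_procedure if n > 0) / len(adenomas_per_procedure)

# Hypothetical distribution: 41 colonoscopies, 29 adenomas in total, 17 procedures with >=1 adenoma.
counts = [0] * 24 + [1] * 9 + [2] * 4 + [3] * 4
print(f"ADR = {adenoma_detection_rate(counts):.1%}")   # -> ADR = 41.5%
```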


Subject(s)
Adenoma , Colonic Polyps , Colorectal Neoplasms , Adenoma/diagnosis , Colonic Polyps/diagnosis , Colonoscopy/methods , Colorectal Neoplasms/diagnosis , Computers , Humans , Pilot Projects , Prospective Studies , Randomized Controlled Trials as Topic
8.
Biomed Eng Online; 21(1): 33, 2022 May 25.
Article in English | MEDLINE | ID: mdl-35614504

ABSTRACT

BACKGROUND: Machine learning, especially deep learning, is becoming more and more relevant in research and development in the medical domain. For all supervised deep learning applications, data is the most critical factor in securing successful implementation and sustaining the progress of the machine learning model. Gastroenterological data in particular, which often involve endoscopic videos, are cumbersome to annotate. Domain experts are needed to interpret and annotate the videos. To support those domain experts, we developed a framework. With this framework, instead of annotating every frame in the video sequence, experts only perform key annotations at the beginning and the end of sequences with pathologies, e.g., visible polyps. Subsequently, non-expert annotators supported by machine learning add the missing annotations for the frames in between. METHODS: In our framework, an expert reviews the video and annotates a few video frames to verify the object's annotations for the non-expert. In a second step, the non-expert has visual confirmation of the given object and can annotate all following and preceding frames with AI assistance. After the expert has finished, relevant frames are selected and passed on to an AI model. This information allows the AI model to detect and mark the desired object on all following and preceding frames with an annotation. The non-expert can then adjust and modify the AI predictions and export the results, which can in turn be used to train the AI model. RESULTS: Using this framework, we were able to reduce the workload of domain experts on average by a factor of 20 on our data. This is primarily due to the structure of the framework, which is designed to minimize the workload of the domain expert. Pairing this framework with a state-of-the-art semi-automated AI model enhances the annotation speed further. Through a prospective study with 10 participants, we show that semi-automated annotation using our tool doubles the annotation speed of non-expert annotators compared to a well-known state-of-the-art annotation tool. CONCLUSION: In summary, we introduce a framework for fast expert annotation for gastroenterologists, which considerably reduces the workload of the domain expert while maintaining a very high annotation quality. The framework incorporates a semi-automated annotation system utilizing trained object detection models. The software and framework are open source.
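The in-between annotation step can be pictured with a deliberately simple stand-in: interpolating bounding boxes between two expert-annotated keyframes so that a non-expert only reviews and adjusts proposals. The published framework uses a trained object-detection model for this step; the interpolation below is illustrative only.

```python
# Toy stand-in for the in-between annotation step (the published framework uses a trained
# object-detection model; here we simply interpolate bounding boxes between the two expert
# keyframes so a non-expert only needs to review and adjust the proposals).
def interpolate_boxes(key_start, key_end, n_frames):
    """key_start/key_end: (x1, y1, x2, y2) boxes on the first and last frame of the gap."""
    boxes = []
    for i in range(1, n_frames + 1):
        t = i / (n_frames + 1)
        boxes.append(tuple(round(a + t * (b - a)) for a, b in zip(key_start, key_end)))
    return boxes

# Expert annotates frame 0 and frame 5; proposals for frames 1-4 are generated automatically.
print(interpolate_boxes((100, 80, 160, 140), (140, 100, 200, 160), n_frames=4))
```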


Subject(s)
Gastroenterologists , Endoscopy , Humans , Machine Learning , Prospective Studies
9.
Graefes Arch Clin Exp Ophthalmol; 260(10): 3349-3356, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35501491

ABSTRACT

PURPOSE: To determine whether 24-h IOP monitoring can be a predictor for glaucoma progression and to analyze the inter-eye relationship of IOP, perfusion, and progression parameters. METHODS: We extracted data from manually drawn IOP curves with HIOP-Reader, a software suite we developed. The relationship of measured IOPs and mean ocular perfusion pressures (MOPP) to retinal nerve fiber layer (RNFL) thickness was analyzed. We determined the ROC curves for peak IOP (Tmax), average IOP (Tavg), IOP variation (IOPvar), and historical IOP cut-off levels to detect glaucoma progression (rate of RNFL loss). Bivariate analysis was also conducted to check for various inter-eye relationships. RESULTS: Two hundred seventeen eyes were included. The average IOP was 14.8 ± 3.5 mmHg, with a 24-h variation of 5.2 ± 2.9 mmHg. A total of 52% of eyes with RNFL progression data showed disease progression. There was no significant difference in Tmax, Tavg, and IOPvar between progressors and non-progressors (all p > 0.05). Except for Tavg and the temporal RNFL, there was no correlation between disease progression in any quadrant and Tmax, Tavg, and IOPvar. Twenty-four-hour and outpatient IOP variables had poor sensitivities and specificities in detecting disease progression. The correlation of inter-eye parameters was moderate; correlation with disease progression was weak. CONCLUSION: In line with our previous study, IOP data obtained during a single visit (outpatient or inpatient monitoring) make for a poor diagnostic tool, no matter the method deployed. Glaucoma progression and perfusion pressure in left and right eyes correlated weakly to moderately with each other.
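The ROC analysis of candidate predictors such as Tmax can be sketched with scikit-learn; the labels and pressures below are invented and serve only to show the shape of the computation, not the study's data.

```python
# Minimal sketch of the ROC analysis described above (not the study's code), assuming a
# binary progression label per eye and one candidate predictor (e.g., peak IOP, Tmax).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

progressed = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0])           # hypothetical labels
tmax_mmhg = np.array([22, 15, 18, 16, 14, 25, 17, 19, 13, 15])  # hypothetical peak IOPs

auc = roc_auc_score(progressed, tmax_mmhg)
fpr, tpr, thresholds = roc_curve(progressed, tmax_mmhg)
print(f"AUC for Tmax as a progression predictor: {auc:.2f}")
for f, t, thr in zip(fpr, tpr, thresholds):
    print(f"threshold >= {thr:5.1f} mmHg: sensitivity {t:.2f}, specificity {1 - f:.2f}")
```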


Subject(s)
Glaucoma , Intraocular Pressure , Disease Progression , Glaucoma/diagnosis , Humans , Retina
10.
Eur Heart J; 41(11): 1203-1211, 2020 03 14.
Article in English | MEDLINE | ID: mdl-30957867

ABSTRACT

AIMS: Anxiety, depression, and reduced quality of life (QoL) are common in patients with implantable cardioverter-defibrillators (ICDs). Treatment options are limited and insufficiently defined. We evaluated the efficacy of a web-based intervention (WBI) vs. usual care (UC) for improving psychosocial well-being in ICD patients with elevated psychosocial distress. METHODS AND RESULTS: This multicentre, randomized controlled trial (RCT) enrolled 118 ICD patients with increased anxiety or depression [≥6 points on either subscale of the Hospital Anxiety and Depression Scale (HADS)] or reduced QoL [≤16 points on the Satisfaction with Life Scale (SWLS)] from seven German sites (mean age 58.8 ± 11.3 years, 22% women). The primary outcome was a composite assessing change in heart-focused fear, depression, and mental QoL 6 weeks after randomization to WBI or UC, stratified for age, gender, and indication for ICD placement. The web-based intervention consisted of 6 weeks' access to a structured interactive web-based programme (group format) including self-help interventions based on cognitive behaviour therapy, a virtual self-help group, and on-demand support from a trained psychologist. Linear mixed-effects model analyses showed that the primary outcome was similar between groups (ηp² = 0.001). The web-based intervention was superior to UC in change from pre-intervention to 6 weeks (overprotective support: P = 0.004, ηp² = 0.036), from pre-intervention to 1 year (depression: P = 0.004, ηp² = 0.032; self-management: P = 0.03, ηp² = 0.015; overprotective support: P = 0.02, ηp² = 0.031), and from 6 weeks to 1 year (depression: P = 0.02, ηp² = 0.026; anxiety: P = 0.03, ηp² = 0.022; mobilization of social support: P = 0.047, ηp² = 0.018). CONCLUSION: Although the primary outcome was neutral, this is the first RCT showing that WBI can improve psychosocial well-being in ICD patients.


Subject(s)
Cognitive Behavioral Therapy , Defibrillators, Implantable , Internet-Based Intervention , Aged , Anxiety/prevention & control , Depression/therapy , Female , Humans , Male , Middle Aged , Quality of Life
11.
J Digit Imaging; 33(4): 1016-1025, 2020 08.
Article in English | MEDLINE | ID: mdl-32314069

ABSTRACT

Clinical Data Warehouses (DWHs) are used to provide researchers with simplified access to pseudonymized and homogenized clinical routine data from multiple primary systems. Experience with the integration of imaging data and metadata from picture archiving and communication systems (PACS), however, is rare. Our goal was therefore to analyze the viability of integrating a production PACS with a research DWH to enable DWH queries combining clinical and medical imaging metadata, and to enable the DWH to display and download images ad hoc. We developed an application interface that enables querying the production PACS of a large hospital from a clinical research DWH containing pseudonymized data. We evaluated the performance of bulk extracting metadata from the PACS to the DWH and the performance of retrieving images ad hoc from the PACS for display and download within the DWH. We integrated the system into the query interface of our DWH and used it successfully in four use cases. The bulk extraction of imaging metadata required a median (quartiles) time of 0.09 (0.03-2.25) to 12.52 (4.11-37.30) seconds for a median (quartiles) number of 10 (3-29) to 103 (8-693) images per patient, depending on the extraction approach. The ad hoc image retrieval from the PACS required a median (quartiles) of 2.57 (2.57-2.79) seconds per image for the download, but 5.55 (4.91-6.06) seconds to display the first image and 40.77 (38.60-41.63) seconds to display all images using the pure web-based viewer. A full integration of a production PACS with a research DWH is viable and enables various use cases in research. While the extraction of basic metadata from all images can be done with reasonable effort, the extraction of all metadata seems more appropriate for subgroups.
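How imaging metadata might be pulled from a PACS into a DWH can be sketched against the standards-based DICOMweb QIDO-RS interface. The paper describes its own site-specific application interface, so the endpoint URL, the pseudonym, and the selected tags below are assumptions made purely for illustration.

```python
# Hedged sketch of pulling study-level imaging metadata from a PACS for DWH import. The paper
# uses a site-specific interface; here we assume a standards-based DICOMweb QIDO-RS endpoint
# (hypothetical URL) returning DICOM JSON instead.
import requests

QIDO_BASE = "https://pacs.example.org/dicom-web"   # hypothetical endpoint

def fetch_study_metadata(patient_pseudonym: str):
    resp = requests.get(
        f"{QIDO_BASE}/studies",
        params={"PatientID": patient_pseudonym, "includefield": "StudyDescription"},
        headers={"Accept": "application/dicom+json"},
        timeout=30,
    )
    resp.raise_for_status()
    # Each result is a dict of DICOM tags; 0020000D = Study Instance UID, 00081030 = description.
    return [
        {
            "study_uid": s["0020000D"]["Value"][0],
            "description": s.get("00081030", {}).get("Value", [""])[0],
        }
        for s in resp.json()
    ]

print(fetch_study_metadata("PSEUDONYM-0001"))
```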


Subject(s)
Data Warehousing , Radiology Information Systems , Diagnostic Imaging , Humans
12.
BMC Med Inform Decis Mak; 19(1): 15, 2019 01 18.
Article in English | MEDLINE | ID: mdl-30658633

ABSTRACT

BACKGROUND: Medication trend studies show the changes in medication over the years and may be replicated using a clinical Data Warehouse (CDW). Even today, much of the patient information in the EHR, such as medication data, is stored as free text. As the conventional approach to information extraction (IE) demands a high development effort, we used ad hoc IE instead. This technique queries information and extracts it on the fly from texts contained in the CDW. METHODS: We present a generalizable approach to ad hoc IE for pharmacotherapy (medications and their daily dosage) documented in hospital discharge letters. We added import and query features to the CDW system, such as error-tolerant queries to deal with misspellings and proximity search for the extraction of the daily dosage. During the data integration process in the CDW, negated, historical, and non-patient context data are filtered out. For the replication studies, we used a drug list grouped by ATC (Anatomical Therapeutic Chemical Classification System) codes as input for queries to the CDW. RESULTS: We achieve an F1 score of 0.983 (precision 0.997, recall 0.970) for extracting medication from discharge letters and an F1 score of 0.974 (precision 0.977, recall 0.972) for extracting the dosage. We replicated three published medication trend studies for hypertension, atrial fibrillation, and chronic kidney disease. Overall, 93% of the main findings could be replicated, 68% of the sub-findings, and 75% of all findings. One study could be completely replicated with all main and sub-findings. CONCLUSION: A novel approach to ad hoc IE is presented. It is very suitable for basic medical texts such as discharge letters and finding reports. Ad hoc IE is by definition more limited than conventional IE and does not claim to replace it, but it substantially exceeds the search capabilities of many CDWs, and it makes it convenient to conduct replication studies quickly and with high quality.
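A toy sketch of the two query features mentioned (error-tolerant matching of a drug name and proximity search for the dosage) follows; it uses plain Python string matching rather than the CDW's query engine, and the example sentence, similarity threshold, and window size are invented.

```python
# Toy sketch of the ad hoc extraction idea (not the CDW implementation): tolerate small
# misspellings of a drug name and pick up a daily dosage pattern within a short window of
# text following the match.
import difflib
import re

def extract_medication(text: str, drug: str, window: int = 40):
    hits = []
    for token_match in re.finditer(r"[A-Za-zÄÖÜäöüß]+", text):
        token = token_match.group(0)
        # error-tolerant match: allow e.g. one-letter misspellings of the drug name
        if difflib.SequenceMatcher(None, token.lower(), drug.lower()).ratio() >= 0.85:
            window_text = text[token_match.end():token_match.end() + window]
            dose = re.search(r"\d+(?:[.,]\d+)?\s*(?:mg|g|IE)\b", window_text)
            hits.append((token, dose.group(0) if dose else None))
    return hits

letter = "Wir empfehlen Ramipril 5 mg 1-0-0 sowie Metoprolol-Succinat 47,5 mg weiter."
print(extract_medication(letter, "Ramiprill"))   # misspelled query still matches "Ramipril"
```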


Subject(s)
Data Warehousing , Drug Therapy/trends , Electronic Health Records , Information Storage and Retrieval/methods , Patient Discharge , Atrial Fibrillation/drug therapy , Humans , Hypertension/drug therapy , Renal Insufficiency, Chronic/drug therapy
13.
BMC Med Inform Decis Mak; 15: 91, 2015 Nov 12.
Article in English | MEDLINE | ID: mdl-26563260

ABSTRACT

BACKGROUND: Information extraction techniques that derive structured representations from unstructured data make a large amount of clinically relevant information about patients accessible for semantic applications. These methods typically rely on standardized terminologies that guide this process. Many languages and clinical domains, however, lack appropriate resources and tools, as well as evaluations of their applications, especially if detailed conceptualizations of the domain are required. For instance, German transthoracic echocardiography reports have not been targeted sufficiently before, despite their importance for clinical trials. This work therefore aimed at the development and evaluation of an information extraction component with a fine-grained terminology that enables recognition of almost all relevant information stated in German transthoracic echocardiography reports at the University Hospital of Würzburg. METHODS: A domain expert validated and iteratively refined an automatically inferred base terminology. The terminology was used by an ontology-driven information extraction system that outputs attribute-value pairs. The final component was mapped to the central elements of a standardized terminology, and it was evaluated on documents with different layouts. RESULTS: The final system achieved state-of-the-art precision (micro average .996) and recall (micro average .961) on 100 test documents that represent more than 90% of all reports. In particular, principal aspects as defined in a standardized external terminology were recognized with F1 = .989 (micro average) and F1 = .963 (macro average). As a result of keyword matching and restrained concept extraction, the system also obtained high precision on unstructured or exceptionally short documents and on documents with uncommon layouts. CONCLUSIONS: The developed terminology and the proposed information extraction system allow the extraction of fine-grained information from German semi-structured transthoracic echocardiography reports with very high precision and high recall for the majority of documents at the University Hospital of Würzburg. The extracted results populate a clinical data warehouse which supports clinical research.
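The difference between the micro- and macro-averaged F1 scores quoted above comes down to whether per-concept counts are pooled before or after computing F1. The counts below are invented to illustrate the computation and are not the study's evaluation data.

```python
# Short sketch distinguishing the micro- and macro-averaged F1 scores quoted above
# (illustrative counts, not the study's evaluation data).
def f1(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Hypothetical (tp, fp, fn) counts for three echocardiography attributes.
counts = {"LVEF": (90, 2, 1), "aortic_valve": (40, 1, 3), "rare_finding": (3, 1, 2)}

macro_f1 = sum(f1(*c) for c in counts.values()) / len(counts)   # average of per-class F1
micro_f1 = f1(*map(sum, zip(*counts.values())))                 # F1 over pooled counts
print(f"macro F1 = {macro_f1:.3f}, micro F1 = {micro_f1:.3f}")  # rare classes pull macro down
```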


Subject(s)
Echocardiography/statistics & numerical data , Information Storage and Retrieval/statistics & numerical data , Information Systems/statistics & numerical data , Germany , Humans , Thorax/diagnostic imaging
14.
JMIR Med Inform; 11: e41808, 2023 May 22.
Article in English | MEDLINE | ID: mdl-37213191

ABSTRACT

BACKGROUND: Due to the importance of radiologic examinations, such as X-rays or computed tomography scans, for many clinical diagnoses, the optimal use of the radiology department is one of the primary goals of many hospitals. OBJECTIVE: This study aims to calculate the key metrics of this use by creating a radiology data warehouse solution into which data from radiology information systems (RISs) can be imported and then queried using a query language as well as a graphical user interface (GUI). METHODS: Using a simple configuration file, the developed system allowed for the processing of radiology data exported from any kind of RIS into a Microsoft Excel, comma-separated values (CSV), or JavaScript Object Notation (JSON) file. These data were then imported into a clinical data warehouse. Additional values based on the radiology data were calculated during this import process by implementing one of several provided interfaces. Afterward, the query language and GUI of the data warehouse were used to configure and calculate reports on these data. For the most common types of requested reports, a web interface was created to view their numbers as graphics. RESULTS: The tool was successfully tested with the data of 4 different German hospitals from 2018 to 2021, comprising a total of 1,436,111 examinations. The user feedback was good, since all queries could be answered whenever the available data were sufficient. The initial processing of the radiology data for use with the clinical data warehouse took between 7 minutes and 1 hour 11 minutes, depending on the amount of data provided by each hospital. Calculating 3 reports of different complexities on the data of each hospital was possible in 1-3 seconds for reports with up to 200 individual calculations and in up to 1.5 minutes for reports with up to 8200 individual calculations. CONCLUSIONS: A system was developed whose main advantage is being generic with respect to the export formats of different RISs as well as the configuration of queries for various reports. The queries could be configured easily using the GUI of the data warehouse, and their results could be exported into the standard formats Excel and CSV for further processing.
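A hedged sketch of the configuration-driven import step follows: a small config maps RIS export columns onto warehouse attributes, and a plug-in style function adds a calculated value during import. The column names, config layout, and derived metric are assumptions, not the published tool's interface.

```python
# Hedged sketch of the configurable import step described above (hypothetical column names,
# not the published tool): a small config maps RIS export columns to warehouse attributes,
# and a plug-in style function adds a calculated value during import.
import pandas as pd

CONFIG = {
    "source_format": "csv",
    "columns": {"EXAM_ID": "examination_id", "REQ_TS": "requested_at", "REPORT_TS": "reported_at"},
}

def turnaround_hours(row) -> float:
    """Example calculated value: time from radiologic request to finalized report."""
    delta = pd.to_datetime(row["reported_at"]) - pd.to_datetime(row["requested_at"])
    return delta.total_seconds() / 3600

def import_ris_export(path: str) -> pd.DataFrame:
    df = pd.read_csv(path).rename(columns=CONFIG["columns"])
    df["turnaround_hours"] = df.apply(turnaround_hours, axis=1)
    return df  # in the real system this frame would be loaded into the data warehouse

# df = import_ris_export("ris_export_2021.csv")  # hypothetical export file
```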

15.
J Imaging; 9(2), 2023 Jan 24.
Article in English | MEDLINE | ID: mdl-36826945

ABSTRACT

Colorectal cancer (CRC) is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. During this procedure, the gastroenterologist searches for polyps. However, there is a potential risk of polyps being missed by the gastroenterologist. Automated detection of polyps helps to assist the gastroenterologist during a colonoscopy. There are already publications in the literature examining the problem of polyp detection. Nevertheless, most of these systems are only used in a research context and are not implemented for clinical application. Therefore, we introduce the first fully open-source automated polyp-detection system that scores best on current benchmark data and is implemented ready for clinical application. To create the polyp-detection system (ENDOMIND-Advanced), we combined our own data collected from different hospitals and practices in Germany with open-source datasets to create a dataset of over 500,000 annotated images. ENDOMIND-Advanced leverages a post-processing technique based on video detection to work in real time with a stream of images. It is integrated into a prototype ready for application in clinical interventions. We achieve better performance than the best system in the literature and score an F1-score of 90.24% on the open-source CVC-VideoClinicDB benchmark.
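The abstract mentions a video-based post-processing technique but does not spell it out here; a common approach of this kind is a sliding-window majority vote over per-frame detections, sketched below purely for illustration and not as the system's actual method.

```python
# Illustrative sliding-window majority vote over per-frame detections (one common kind of
# video-based post-processing; the abstract does not detail the actual technique used).
from collections import deque

def smooth_detections(per_frame, window=5, min_hits=3):
    recent, smoothed = deque(maxlen=window), []
    for flag in per_frame:
        recent.append(bool(flag))
        smoothed.append(sum(recent) >= min_hits)   # report only temporally consistent detections
    return smoothed

raw = [False, True, False, True, True, True, False, True, True, False]
print(smooth_detections(raw))   # single-frame flickers are suppressed
```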

16.
Health Informatics J; 28(1): 14604582211058081, 2022.
Article in English | MEDLINE | ID: mdl-34986681

ABSTRACT

A deep integration of routine care and research remains challenging in many respects. We aimed to show the feasibility of an automated transformation and transfer process that feeds deeply structured data with a high level of granularity, collected for a clinical prospective cohort study, from our hospital information system to the study's electronic data capture system, while accounting for study-specific data and visits. We developed a system integrating all necessary software and organizational processes, which was then used in the study. The process and key system components are described, together with descriptive statistics, to show its feasibility in general and to identify individual challenges in particular. Data from 2051 patients enrolled between 2014 and 2020 were transferred. We were able to automate the transfer of approximately 11 million individual data values, representing 95% of all entered study data. These were recorded in n = 314 variables (28% of all variables), with some variables being used multiple times for follow-up visits. Our validation approach allowed for consistently good data quality over the course of the study. In conclusion, the automated transfer of multi-dimensional routine medical data from the HIS to study databases using specific study data and visit structures is complex, yet viable.


Subject(s)
Data Warehousing , Electronic Health Records , Databases, Factual , Follow-Up Studies , Humans , Prospective Studies
17.
Transl Vis Sci Technol; 11(6): 22, 2022 06 01.
Article in English | MEDLINE | ID: mdl-35737376

ABSTRACT

Purpose: Nycthemeral (24-hour) intraocular pressure (IOP) monitoring in glaucoma has been used in Europe for more than 100 years to detect peaks missed during regular office hours. Data supporting this practice are lacking, because it is difficult to correlate manually drawn IOP curves to objective glaucoma progression. To address this, we developed an automated IOP data extraction tool, HIOP-Reader. Methods: Machine learning image analysis software extracted IOP data from hand-drawn, nycthemeral IOP curves of 225 retrospectively identified patients with glaucoma. The relationship of demographic parameters, IOP, and mean ocular perfusion pressure (MOPP) data to spectral-domain optical coherence tomography (SDOCT) data was analyzed. Sensitivities and specificities for the historical cutoff values of 15 mm Hg and 22 mm Hg in detecting glaucoma progression were calculated. Results: Machine data extraction was 119 times faster than manual data extraction. The IOP average was 15.2 ± 4.0 mm Hg, nycthemeral IOP variation was 6.9 ± 4.2 mm Hg, and MOPP was 59.1 ± 8.9 mm Hg. Peak IOP occurred at 10 am and trough at 9 pm. Progression occurred mainly in the temporal-superior and temporal-inferior SDOCT sectors. No correlation could be established between demographic, IOP, or MOPP variables and disease progression on OCT. The sensitivity and specificity of both cutoff points (15 and 22 mm Hg) were insufficient to be clinically useful. Outpatient IOPs were noninferior to nycthemeral IOPs. Conclusions: IOP data obtained during a single visit make for a poor diagnostic tool, no matter whether obtained using nycthemeral measurements or during outpatient hours. Translational Relevance: HIOP-Reader rapidly extracts manually recorded IOP data to allow critical analysis of existing databases.
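The curve-extraction idea behind HIOP-Reader can be illustrated in a heavily simplified form: given a binarized chart image, read the pen-trace row in each time column and map pixel rows to mmHg via two axis calibration points. The image, calibration rows, and function below are hypothetical and not the actual software.

```python
# Heavily simplified sketch of the curve-extraction idea behind HIOP-Reader (not the actual
# software): given a binarized chart image (True where the pen trace is), read the trace
# row per time column and convert pixel rows to mmHg using two known axis calibration points.
import numpy as np

def extract_iop_series(binary_img: np.ndarray, row_at_10mmhg: int, row_at_30mmhg: int):
    """binary_img: 2-D bool array (rows x columns); returns one IOP value per image column."""
    mmhg_per_row = (30 - 10) / (row_at_10mmhg - row_at_30mmhg)   # image y axis grows downwards
    values = []
    for col in binary_img.T:
        rows = np.flatnonzero(col)
        if rows.size == 0:
            values.append(np.nan)                                # no trace in this column
        else:
            values.append(10 + (row_at_10mmhg - rows.mean()) * mmhg_per_row)
    return np.array(values)

# Tiny synthetic example: a 100x3 image with trace pixels at rows 80, 60, and 40.
img = np.zeros((100, 3), dtype=bool)
img[80, 0] = img[60, 1] = img[40, 2] = True
print(extract_iop_series(img, row_at_10mmhg=90, row_at_30mmhg=10))   # -> [12.5, 17.5, 22.5]
```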


Subject(s)
Glaucoma, Open-Angle , Glaucoma , Circadian Rhythm , Glaucoma/diagnosis , Glaucoma, Open-Angle/diagnosis , Glaucoma, Open-Angle/etiology , Humans , Intraocular Pressure , Retrospective Studies , Tonometry, Ocular/adverse effects
18.
Stud Health Technol Inform; 281: 484-485, 2021 May 27.
Article in English | MEDLINE | ID: mdl-34042612

ABSTRACT

A semi-automatic tool for fast and accurate annotation of endoscopic videos utilizing trained object detection models is presented. A novel workflow is implemented and the preliminary results suggest that the annotation process is nearly twice as fast with our novel tool compared to the current state of the art.


Subject(s)
Algorithms , Gastroenterologists , Endoscopy , Humans , Machine Learning , Workflow
19.
Stud Health Technol Inform; 283: 69-77, 2021 Sep 21.
Article in English | MEDLINE | ID: mdl-34545821

ABSTRACT

Optimizing the utilization of radiology departments is one of the primary objectives for many hospitals. To support this, a solution has been developed which first transforms the exports of different Radiological Information Systems (RIS) into the data format of a clinical data warehouse (CDW). Additional features, such as the time between the creation of a radiologic request and the finalization of the diagnosis for the created images, can then be defined using a simple interface and are calculated and saved in the CDW as well. Finally, the query language of the CDW can be used to create custom reports with all the RIS data, including the calculated features, and to export them into the standard formats Excel and CSV. The solution has been successfully tested with data from two German hospitals.


Subject(s)
Radiology Information Systems , Radiology , Data Warehousing , Humans