Results 1 - 20 of 91
1.
Eur Radiol ; 34(1): 330-337, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37505252

ABSTRACT

OBJECTIVES: To provide physicians and researchers with an efficient way to extract information from weakly structured radiology reports using natural language processing (NLP) machine learning models. METHODS: We evaluate seven different German bidirectional encoder representations from transformers (BERT) models on a dataset of 857,783 unlabeled radiology reports and an annotated reading comprehension dataset in the format of SQuAD 2.0 based on 1223 additional reports. RESULTS: Continued pre-training of a BERT model on the radiology dataset and a medical online encyclopedia resulted in the most accurate model, with an F1-score of 83.97% and an exact match score of 71.63% for answerable questions and 96.01% accuracy in detecting unanswerable questions. Fine-tuning a non-medical model without further pre-training led to the lowest-performing model. The final model proved stable against variation in the formulations of questions and in dealing with questions on topics excluded from the training set. CONCLUSIONS: General-domain BERT models further pre-trained on radiological data achieve high accuracy in answering questions on radiology reports. We propose to integrate our approach into the workflow of medical practitioners and researchers to extract information from radiology reports. CLINICAL RELEVANCE STATEMENT: By reducing the need for manual searches of radiology reports, radiologists' resources are freed up, which indirectly benefits patients. KEY POINTS: • BERT models pre-trained on general-domain datasets and radiology reports achieve high accuracy (83.97% F1-score) on question answering for radiology reports. • The best-performing model achieves an F1-score of 83.97% for answerable questions and 96.01% accuracy for questions without an answer. • Additional radiology-specific pre-training of all investigated BERT models improves their performance.
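The described setup corresponds to standard extractive question answering. As a minimal sketch (not the authors' released code), assuming some German BERT checkpoint fine-tuned on SQuAD 2.0-style data, the Hugging Face pipeline API can answer questions against a report and abstain when no answer exists:

```python
# Hedged sketch of SQuAD 2.0-style QA over a radiology report.
# The model name is a placeholder: substitute a German BERT checkpoint
# fine-tuned for question answering (the paper's model is not assumed here).
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/gbert-base")

report = (
    "CT Thorax: Kein Nachweis pulmonaler Rundherde. "  # "no pulmonary nodules"
    "Unauffaellige Darstellung des Mediastinums."
)

result = qa(
    question="Gibt es pulmonale Rundherde?",
    context=report,
    handle_impossible_answer=True,  # allows abstaining, as for unanswerable SQuAD 2.0 questions
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```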


Subject(s)
Information Storage and Retrieval, Radiology, Humans, Language, Machine Learning, Natural Language Processing
2.
Clin Oral Investig ; 28(7): 381, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38886242

ABSTRACT

OBJECTIVES: Tooth extraction is one of the most frequently performed medical procedures. The indication is based on the combination of clinical and radiological examination and individual patient parameters and should be made with great care. However, determining whether a tooth should be extracted is not always a straightforward decision. Moreover, visual and cognitive pitfalls in the analysis of radiographs may lead to incorrect decisions. Artificial intelligence (AI) could be used as a decision support tool to provide a score of tooth extractability. MATERIAL AND METHODS: Using 26,956 single-tooth images from 1,184 panoramic radiographs (PANs), we trained a ResNet50 network to classify teeth as either extraction-worthy or preservable. For this purpose, teeth were cropped with different margins from PANs and annotated. The usefulness of the AI-based classification, as well as that of dentists, was evaluated on a test dataset. In addition, the explainability of the best AI model was visualized via class activation mapping using CAMERAS. RESULTS: The ROC-AUC of the best AI model for discriminating teeth worthy of preservation was 0.901 on tooth images cropped with a 2% margin. In contrast, the average ROC-AUC for dentists was only 0.797. With a tooth extraction prevalence of 19.1%, the AI model's PR-AUC was 0.749, while the dentist evaluation reached only 0.589. CONCLUSION: AI models outperform dentists/specialists in predicting tooth extraction based solely on X-ray images, and the AI performance improves with increasing contextual information. CLINICAL RELEVANCE: AI could help monitor at-risk teeth and reduce errors in indications for extractions.
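For orientation, a binary tooth classifier of the kind described can be sketched in PyTorch as follows; the batch is a random stand-in for annotated PAN crops, and all hyperparameters are assumptions rather than the paper's settings:

```python
# Minimal sketch: adapt an ImageNet-pretrained ResNet50 to the binary
# "extraction-worthy vs. preservable" task on cropped tooth images.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # assumed learning rate

# one illustrative training step on a dummy batch
images = torch.randn(8, 3, 224, 224)   # stand-in for tooth crops with margin
labels = torch.randint(0, 2, (8,))     # 0 = preservable, 1 = extraction-worthy
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```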


Subject(s)
Artificial Intelligence, Panoramic Radiography, Tooth Extraction, Humans, Dentists, Female, Male, Adult
3.
BMC Med Educ ; 24(1): 250, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38500112

ABSTRACT

OBJECTIVE: The gold standard of oral cancer (OC) treatment is diagnostic confirmation by biopsy followed by surgical treatment. However, studies have shown that dentists have difficulty performing biopsies, dental students lack knowledge about OC, and surgeons do not always maintain a safe margin during tumor resection. To address this, biopsies and resections could be trained under realistic conditions outside the patient. The aim of this study was to develop and validate a porcine pseudotumor model of the tongue. METHODS: An interdisciplinary team reflecting the various specialties involved in head and neck oncology developed a porcine pseudotumor model of the tongue on which biopsies and resections can be practiced. The refined model was validated in a final trial of 10 participants who each resected four pseudotumors on a tongue, resulting in a total of 40 resected pseudotumors. The participants (7 residents and 3 specialists) had experience in OC treatment ranging from 0.5 to 27 years. Resection margins (minimum and maximum) were assessed macroscopically and compared, together with self-assessed margins and resection time, between residents and specialists. Furthermore, the model was evaluated using Likert-type questions on haptic and radiological fidelity, its usefulness as a training model, and its imageability using CT and ultrasound. RESULTS: The model haptically resembles OC (3.0 ± 0.5; 4-point Likert scale), can be visualized with medical imaging, and can be macroscopically evaluated immediately after resection, providing feedback. Although participants (3.2 ± 0.4) tended to agree that they had resected the pseudotumor with an ideal safety margin (10 mm), the mean minimum resection margin was insufficient at 4.2 ± 1.2 mm (mean ± SD), comparable to margins reported in the literature. Simultaneously, a maximum resection margin of 18.4 ± 6.1 mm was measured, indicating partial over-resection. Although specialists were faster at resection (p < 0.001), this had no effect on margins (p = 0.114). Overall, the model was well received by the participants, and they could see it being implemented in training (3.7 ± 0.5). CONCLUSION: The model, which is cost-effective, cryopreservable, and provides a risk-free training environment, is well suited for training in OC biopsy and resection and could be incorporated into dental, medical, or oncologic surgery curricula. Future studies should evaluate the long-term training effects of this model and its potential impact on improving patient outcomes.


Subject(s)
Margins of Excision, Mouth Neoplasms, Animals, Humans, Biopsy, Cadaver, Head, Mouth Neoplasms/surgery, Mouth Neoplasms/pathology, Swine
4.
J Med Syst ; 48(1): 55, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38780820

ABSTRACT

Designing implants for large and complex cranial defects is a challenging task, even for professional designers. Current efforts to automate the design process have focused mainly on convolutional neural networks (CNNs), which have produced state-of-the-art results on reconstructing synthetic defects. However, existing CNN-based methods have been difficult to translate to clinical practice in cranioplasty, as their performance on large and complex cranial defects remains unsatisfactory. In this paper, we present a statistical shape model (SSM) built directly on the segmentation masks of skulls represented as binary voxel occupancy grids and evaluate it on several cranial implant design datasets. Results show that, while CNN-based approaches outperform the SSM on synthetic defects, they are inferior to the SSM when it comes to large, complex, and real-world defects. Experienced neurosurgeons judged the implants generated by the SSM to be feasible for clinical use after minor manual corrections. Datasets and the SSM model are publicly available at https://github.com/Jianningli/ssm .
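The fit-project-reconstruct loop at the core of such a voxel-based SSM can be illustrated with plain PCA. The following toy version uses random volumes, a tiny grid, and an invented component count, so it shows the mechanism only, not the published model:

```python
# Toy statistical shape model over flattened binary skull masks.
import numpy as np
from sklearn.decomposition import PCA

n_skulls, grid = 50, (32, 32, 32)      # toy resolution; real occupancy grids are far larger
skulls = (np.random.rand(n_skulls, np.prod(grid)) > 0.5).astype(np.float32)

pca = PCA(n_components=20)             # assumed number of shape modes
pca.fit(skulls)

defective = skulls[0].copy()
defective[:2000] = 0                   # crudely simulate a cranial defect
coeffs = pca.transform(defective[None])
completed = pca.inverse_transform(coeffs)[0].reshape(grid) > 0.5
# an implant proposal would be the completed skull minus the defective one
```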


Subject(s)
Neural Networks (Computer), Skull, Humans, Skull/surgery, Skull/anatomy & histology, Skull/diagnostic imaging, Statistical Models, Computer-Assisted Image Processing/methods, Plastic Surgery Procedures/methods, Prostheses and Implants
5.
Eur J Nucl Med Mol Imaging ; 50(7): 2196-2209, 2023 06.
Article in English | MEDLINE | ID: mdl-36859618

ABSTRACT

PURPOSE: The aim of this study was to systematically evaluate the effect of thresholding algorithms used in computer vision on the quantification of prostate-specific membrane antigen positron emission tomography (PET)-derived tumor volume (PSMA-TV) in patients with advanced prostate cancer. The results were validated with respect to the prognostication of overall survival in patients with advanced-stage prostate cancer. MATERIALS AND METHODS: A total of 78 patients who underwent [177Lu]Lu-PSMA-617 radionuclide therapy from January 2018 to December 2020 were retrospectively included in this study. [68Ga]Ga-PSMA-11 PET images, acquired prior to radionuclide therapy, were used for the analysis of thresholding algorithms. All PET images were first analyzed semi-automatically using a pre-evaluated, proprietary software solution as the baseline method. Subsequently, five histogram-based thresholding methods and two local adaptive thresholding methods that are well established in computer vision were applied to quantify molecular tumor volume. The resulting whole-body molecular tumor volumes were validated with respect to the prognostication of overall patient survival as well as their statistical correlation with the baseline method and their performance on standardized phantom scans. RESULTS: The whole-body PSMA-TVs quantified using the different thresholding methods demonstrated a high positive correlation with the baseline method. We observed the highest correlation with generalized histogram thresholding (GHT) (Pearson r (r), p value (p): r = 0.977, p < 0.001) and Sauvola thresholding (r = 0.974, p < 0.001) and the lowest correlation with Multiotsu (r = 0.877, p < 0.001) and Yen thresholding (r = 0.878, p < 0.001). The median survival time of all patients was 9.87 months (95% CI [9.3 to 10.13]). Stratification by median whole-body PSMA-TV resulted in median survival times of 11.8 to 13.5 months, depending on the method, for the patient group with lower tumor burden and 6.5 to 6.6 months for the group with higher tumor burden. The patient group with lower tumor burden had a significantly higher probability of survival (p < 0.00625) for eight of the nine thresholding methods (Fig. 2): SUVmax50 (p = 0.0038), SUV ≥3 (p = 0.0034), Multiotsu (p = 0.0015), Yen (p = 0.0015), Niblack (p = 0.001), Sauvola (p = 0.0001), Otsu (p = 0.0053), and Li thresholding (p = 0.0053). CONCLUSION: Thresholding methods commonly used in computer vision are promising tools for the semiautomatic quantification of whole-body PSMA-TV in [68Ga]Ga-PSMA-11 PET. The proposed algorithm-driven thresholding strategy is less arbitrary and less prone to bias than thresholding with predefined values, potentially improving the application of whole-body PSMA-TV as an imaging biomarker.
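Most of the named thresholds have readily available implementations in scikit-image (GHT does not; it comes from its authors' reference code), so the per-method PSMA-TV computation can be sketched on a synthetic SUV volume with an assumed voxel spacing:

```python
# Hedged sketch: histogram-based and local adaptive thresholds applied to a
# synthetic PET volume, with the foreground mask converted to a volume in mL.
import numpy as np
from skimage.filters import (threshold_otsu, threshold_multiotsu, threshold_yen,
                             threshold_li, threshold_niblack, threshold_sauvola)

suv = np.abs(np.random.randn(64, 64, 64)) * 4   # stand-in for [68Ga]Ga-PSMA-11 SUVs
voxel_ml = (4.0 ** 3) / 1000.0                  # assumed 4 mm isotropic voxels -> mL

masks = {
    "otsu":      suv > threshold_otsu(suv),
    "multiotsu": suv > threshold_multiotsu(suv, classes=3)[-1],
    "yen":       suv > threshold_yen(suv),
    "li":        suv > threshold_li(suv),
    "niblack":   suv > threshold_niblack(suv, window_size=15),
    "sauvola":   suv > threshold_sauvola(suv, window_size=15),
    "suv3":      suv >= 3.0,                    # fixed-threshold baseline (SUV >= 3)
}
for name, mask in masks.items():
    print(f"{name}: PSMA-TV = {mask.sum() * voxel_ml:.1f} mL")
```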


Subject(s)
Castration-Resistant Prostatic Neoplasms, Prostatic Neoplasms, Humans, Male, Gallium Radioisotopes, Positron Emission Tomography Computed Tomography/methods, Positron-Emission Tomography, Prostate-Specific Antigen, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/radiotherapy, Prostatic Neoplasms/pathology, Castration-Resistant Prostatic Neoplasms/pathology, Retrospective Studies, Tumor Burden
6.
BMC Med Imaging ; 23(1): 174, 2023 10 31.
Article in English | MEDLINE | ID: mdl-37907876

ABSTRACT

BACKGROUND: With the rise in importance of personalized medicine and deep learning, we combine the two to create personalized neural networks. The aim of the study is to show a proof of concept that data from just one patient can be used to train deep neural networks to detect tumor progression in longitudinal datasets. METHODS: Two datasets with 64 scans from 32 patients with glioblastoma multiforme (GBM) were evaluated in this study. The contrast-enhanced T1w sequences of brain magnetic resonance imaging (MRI) were used. We trained a neural network for each patient using just two scans from different timepoints to map the difference between the images. The change in tumor volume can be calculated from this map. The neural networks were a form of Wasserstein GAN (generative adversarial network), an unsupervised learning architecture. The combination of data augmentation and the network architecture allowed us to skip the co-registration of the images. Furthermore, no additional training data, pre-training of the networks, or any (manual) annotations are necessary. RESULTS: The model achieved an AUC score of 0.87 for detecting tumor change. We also introduced modified RANO criteria, with which an accuracy of 66% can be achieved. CONCLUSIONS: We show a novel approach to deep learning in using data from just one patient to train deep neural networks to monitor tumor change. Using two different datasets to evaluate the results shows the method's potential to generalize.
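For readers unfamiliar with the objective, a toy Wasserstein-GAN update step looks as follows. The tiny networks and shapes are placeholders, and the weight clipping or gradient penalty a practical WGAN needs is omitted:

```python
# Toy WGAN critic/generator losses on stand-ins for T1w difference maps.
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1),
                       nn.Flatten(), nn.LazyLinear(1))
generator = nn.Sequential(nn.ConvTranspose2d(16, 1, kernel_size=4, stride=4))

real = torch.randn(4, 1, 32, 32)   # stand-in for real difference maps
z = torch.randn(4, 16, 8, 8)       # latent input
fake = generator(z)

# critic maximizes the score gap between real and generated samples
critic_loss = critic(fake.detach()).mean() - critic(real).mean()
# generator tries to raise the critic's score on its own samples
generator_loss = -critic(fake).mean()
```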


Subject(s)
Glioblastoma, Neural Networks (Computer), Humans, Magnetic Resonance Imaging, Brain, Glioblastoma/diagnostic imaging, Computer-Assisted Image Processing/methods
7.
Eur Radiol ; 32(12): 8769-8776, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35788757

ABSTRACT

OBJECTIVES: Over the course of their treatment, patients often switch hospitals, requiring staff at the new hospital to import external imaging studies to their local database. In this study, the authors present MOdality Mapping and Orchestration (MOMO), a Deep Learning-based approach to automate this mapping process by combining metadata analysis and a neural network ensemble. METHODS: A set of 11,934 imaging series with existing anatomical labels was retrieved from the PACS database of the local hospital to train an ensemble of neural networks (DenseNet-161 and ResNet-152), which process radiological images and predict the type of study they belong to. We developed an algorithm that automatically extracts relevant metadata from imaging studies, regardless of their structure, and combines it with the neural network ensemble, forming a powerful classifier. A set of 843 anonymized external studies from 321 hospitals was hand-labeled to assess performance. We tested several variations of this algorithm. RESULTS: MOMO achieves 92.71% accuracy and 2.63% minor errors (at 99.29% predictive power) on the external study classification task, outperforming both a commercial product (82.86% accuracy, 1.36% minor errors, 96.20% predictive power) and a pure neural network ensemble (72.69% accuracy, 10.3% minor errors, 99.05% predictive power) performing the same task. We find that the highest performance is achieved by an algorithm that combines all information into one vote-based classifier. CONCLUSION: Deep Learning combined with metadata matching is a promising and flexible approach for the automated classification of external DICOM studies for PACS archiving. KEY POINTS: • The algorithm can successfully identify 76 medical study types across seven modalities (CT, X-ray angiography, radiographs, MRI, PET (+CT/MRI), ultrasound, and mammograms). • The algorithm outperforms a commercial product performing the same task by a significant margin (> 9% accuracy gain). • The performance of the algorithm increases through the application of Deep Learning techniques.
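The metadata half of such a classifier can be illustrated with pydicom. The study types and keyword lists below are invented stand-ins for the paper's label set; in a MOMO-style system, these metadata votes would be merged with the CNN ensemble's predictions in the final vote-based classifier:

```python
# Hedged sketch: derive study-type votes from free-text DICOM tags.
from collections import Counter
import pydicom

KEYWORDS = {                      # hypothetical study types and keywords
    "CT_THORAX": ["thorax", "chest", "lunge"],
    "MR_HEAD":   ["schaedel", "head", "brain"],
}

def metadata_votes(path: str) -> Counter:
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    text = " ".join(
        str(ds.get(tag, "")).lower()
        for tag in ("StudyDescription", "SeriesDescription", "BodyPartExamined")
    )
    return Counter(
        study for study, words in KEYWORDS.items()
        if any(word in text for word in words)
    )
```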


Subject(s)
Deep Learning, Humans, Neural Networks (Computer), Algorithms, Factual Databases, Magnetic Resonance Imaging/methods
8.
J Digit Imaging ; 35(2): 340-355, 2022 04.
Article in English | MEDLINE | ID: mdl-35064372

ABSTRACT

Imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are widely used in diagnostics, clinical studies, and treatment planning. Automatic algorithms for image analysis have thus become an invaluable tool in medicine. Examples of this are two- and three-dimensional visualizations, image segmentation, and the registration of all types of anatomical structures and pathologies. In this context, we introduce Studierfenster ( www.studierfenster.at ): a free, non-commercial open science client-server framework for (bio-)medical image analysis. Studierfenster offers a wide range of capabilities, including the visualization of medical data (CT, MRI, etc.) in two-dimensional (2D) and three-dimensional (3D) space in common web browsers, such as Google Chrome, Mozilla Firefox, Safari, or Microsoft Edge. Other functionalities are the calculation of medical metrics (Dice score and Hausdorff distance), manual slice-by-slice outlining of structures in medical images, manual placing of (anatomical) landmarks in medical imaging data, visualization of medical data in virtual reality (VR), and facial reconstruction and registration of medical data for augmented reality (AR). More sophisticated features include automatic cranial implant design with a convolutional neural network (CNN), inpainting of aortic dissections with a generative adversarial network, and a CNN for automatic aortic landmark detection in CT angiography images. A user study with medical and non-medical experts in medical image analysis was performed to evaluate the usability and the manual functionalities of Studierfenster. When participants were asked about their overall impression of Studierfenster in an ISO-standard questionnaire, a mean of 6.3 out of 7.0 possible points was achieved. The evaluation also provided insights into the results achievable with Studierfenster in practice, by comparing them with two ground truth segmentations performed by a physician of the Medical University of Graz in Austria. In this contribution, we presented an online environment for (bio-)medical image analysis. In doing so, we established a client-server architecture that is able to process medical data, especially 3D volumes. Our online environment is not limited to medical applications for humans. Rather, its underlying concept could be interesting for researchers from other fields, in applying the already existing functionalities or future implementations of further image processing applications. An example could be the processing of medical acquisitions such as CT or MRI from animals [Clinical Pharmacology & Therapeutics, 84(4):448-456, 68], which are becoming more common as veterinary clinics and centers are increasingly equipped with such imaging devices. Furthermore, applications in entirely non-medical research in which images or volumes need to be processed are also conceivable, such as those in optical measuring techniques, astronomy, or archaeology.
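The two segmentation metrics named above are easy to state concretely; here is a self-contained NumPy/SciPy version operating on random stand-in masks:

```python
# Dice score and (symmetric) Hausdorff distance between two binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

a = np.random.rand(64, 64) > 0.5
b = np.random.rand(64, 64) > 0.5

dice = 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

pts_a, pts_b = np.argwhere(a), np.argwhere(b)
hausdorff = max(directed_hausdorff(pts_a, pts_b)[0],
                directed_hausdorff(pts_b, pts_a)[0])
print(f"Dice = {dice:.3f}, Hausdorff = {hausdorff:.2f} pixels")
```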


Subject(s)
Cloud Computing, Computer-Assisted Image Processing, Humans, Magnetic Resonance Imaging, Neural Networks (Computer), X-Ray Computed Tomography
9.
J Digit Imaging ; 32(6): 1008-1018, 2019 12.
Article in English | MEDLINE | ID: mdl-31485953

ABSTRACT

As is common routine in tumor resections, surgeons rely on local examinations of the removed tissues and on the swiftly made microscopy findings of the pathologist, which are based on intraoperatively taken tissue samples. This approach may imply an extended duration of the operation, increased effort for the medical staff, and longer occupancy of the operating room (OR). Mixed reality technologies, and particularly augmented reality, have already been applied in surgical scenarios with positive initial outcomes. Nonetheless, these methods have used manual or marker-based registration. In this work, we design an application for marker-less registration of PET-CT information for a patient. The algorithm combines facial landmarks extracted from an RGB video stream with the so-called Spatial Mapping API provided by the HMD Microsoft HoloLens. The accuracy of the system is compared with a marker-based approach, and the opinions of field specialists were collected during a demonstration. A survey based on the standard ISO-9241/110 was designed for this purpose. The measurements show an average positioning error along the three axes of (x, y, z) = (3.3 ± 2.3, -4.5 ± 2.9, -9.3 ± 6.1) mm. Compared with the marker-based approach, this represents an increase in positioning error of approx. 3 mm along two dimensions (x, y), which might be due to the absence of explicit markers. The application was positively evaluated by the specialists; they showed interest in continued further work and contributed to the development process with constructive criticism.
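At the heart of such landmark-based registration is a rigid point-set fit between corresponding landmarks. A compact Kabsch-style sketch on synthetic points (not the HoloLens pipeline itself) is:

```python
# Rigid transform (R, t) aligning detected landmarks to their CT counterparts.
import numpy as np

def kabsch(P, Q):
    """Return rotation R and translation t minimizing ||(P @ R.T + t) - Q||."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

src = np.random.rand(6, 3)                        # facial landmarks, camera space
t_true = np.array([3.3, -4.5, -9.3])              # offsets echoing the reported errors
dst = src + t_true                                # corresponding CT-space landmarks
R, t = kabsch(src, dst)
print(np.allclose(src @ R.T + t, dst))            # True: alignment recovered
```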


Subject(s)
Augmented Reality, Three-Dimensional Imaging/methods, Positron Emission Tomography Computed Tomography/methods, Computer-Assisted Surgery/methods, Oral Surgery/methods, Algorithms, Humans, Pilot Projects, Reproducibility of Results
10.
Clin Nephrol ; 90(2): 125-141, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29578402

ABSTRACT

AIMS: New chemotherapeutic agents prolong the survival of patients with pancreatic ductal adenocarcinoma (PDAC). Although their incidence is rising, patients with end-stage renal disease (ESRD) requiring hemodialysis (HD) are not included in the phase III trials evaluating the effects of these chemotherapies. Many experts recommend applying chemotherapy after HD using a reduced dose. Alternatively, the concept of prior dosing allows the application of dialyzable chemotherapeutic drugs at a normal dose, with an HD session following shortly after to mimic normal renal function. In this work, we provide guidance for clinicians on how to use chemotherapy in patients with PDAC on HD and how to identify substances suitable for prior dosing. MATERIALS AND METHODS: We systematically searched PubMed, from inception to September 2016, for published studies describing patients with ESRD on HD who received chemotherapies commonly applied in PDAC, including gemcitabine, fluorouracil (5-FU), capecitabine, oxaliplatin, irinotecan, docetaxel, erlotinib, sunitinib, S-1, and afatinib. Applied dosages, described toxicities, application time relative to HD, and pharmacokinetic measurements of the drugs and their metabolites were assessed. Quantitative analyses of the drug plasma concentrations, including the half-life during and between HD sessions and the fraction of the drug eliminated during HD, were performed. RESULTS: We identified 56 studies describing 128 patients with ESRD undergoing HD during chemotherapeutic treatment. Quantitative pharmacokinetic analysis revealed that the following substances are dialyzable and thus suitable for application using the prior-dosing method: gemcitabine, 5-FU, oxaliplatin, irinotecan, and S-1. CONCLUSION: This work supports the application of dialyzable chemotherapeutic agents in patients with PDAC at the standard dose when HD is performed shortly after the infusion.
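As a worked numeric illustration of the quantities extracted (all numbers invented): under first-order elimination, the fraction of drug removed over a dialysis session follows directly from the on-dialysis half-life:

```python
# Toy pharmacokinetic calculation; half-life and session length are assumptions.
import math

t_half_on_hd = 1.5   # assumed plasma half-life during HD (hours)
session = 4.0        # assumed dialysis session length (hours)

k = math.log(2) / t_half_on_hd            # first-order elimination constant
fraction_eliminated = 1 - math.exp(-k * session)
print(f"{fraction_eliminated:.0%} of the drug removed during the session")
```

A drug with a short on-dialysis half-life like this would be a candidate for prior dosing, since most of a standard dose is cleared by the session.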


Subject(s)
Antineoplastic Agents/therapeutic use, Antineoplastic Combined Chemotherapy Protocols/therapeutic use, Pancreatic Ductal Carcinoma/drug therapy, Chronic Kidney Failure/therapy, Pancreatic Neoplasms/drug therapy, Renal Dialysis, Afatinib, Camptothecin/analogs & derivatives, Camptothecin/therapeutic use, Pancreatic Ductal Carcinoma/complications, Deoxycytidine/analogs & derivatives, Deoxycytidine/therapeutic use, Docetaxel, Fluorouracil/therapeutic use, Humans, Irinotecan, Chronic Kidney Failure/complications, Organoplatinum Compounds/therapeutic use, Oxaliplatin, Pancreatic Neoplasms/complications, Quinazolines/therapeutic use, Taxoids/therapeutic use, Gemcitabine
11.
J Biomed Inform ; 55: 124-31, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25882923

ABSTRACT

Surgical navigation systems have experienced tremendous development over the past decades, minimizing the risks and improving the precision of surgery. Nowadays, Augmented Reality (AR)-based surgical navigation is a promising technology for clinical applications. In an AR system, virtual and actual reality are mixed, offering real-time, high-quality visualization of an extensive variety of information to the users (Moussa et al., 2012) [1]. For example, virtual anatomical structures such as soft tissues, blood vessels, and nerves can be integrated with the real-world scenario in real time. In this study, an AR-based surgical navigation system (AR-SNS) is developed using an optical see-through HMD (head-mounted display), aiming at improving the safety and reliability of surgery. With this system, after the calibration of instruments, registration, and the calibration of the HMD, the 3D virtual critical anatomical structures shown in the head-mounted display are aligned with the actual structures of the patient in the real-world scenario during the intra-operative motion tracking process. The accuracy verification experiment demonstrated that the mean distance and angular errors were 0.809 ± 0.05 mm and 1.038° ± 0.05°, respectively, which is sufficient to meet clinical requirements.


Subject(s)
Computer Graphics/instrumentation, Image Enhancement/instrumentation, Three-Dimensional Imaging/instrumentation, Computer-Assisted Surgery/instrumentation, User-Computer Interface, Equipment Design, Equipment Failure Analysis, Head, Head Protective Devices, Humans, Reproducibility of Results, Sensitivity and Specificity
12.
Comput Methods Programs Biomed ; 245: 108013, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38262126

ABSTRACT

The recent release of ChatGPT, a chatbot research project/product of natural language processing (NLP) by OpenAI, has stirred up a sensation among both the general public and medical professionals, amassing a phenomenally large user base in a short time. This is a typical example of the 'productization' of cutting-edge technologies, which allows the general public without a technical background to gain firsthand experience in artificial intelligence (AI), similar to the AI hype created by AlphaGo (DeepMind Technologies, UK) and self-driving cars (Google, Tesla, etc.). However, it is crucial, especially for healthcare researchers, to remain prudent amidst the hype. This work provides a systematic review of existing publications on the use of ChatGPT in healthcare, elucidating the 'status quo' of ChatGPT in medical applications for general readers, healthcare professionals, and NLP scientists. The large biomedical literature database PubMed is used to retrieve published works on this topic using the keyword 'ChatGPT'. An inclusion criterion and a taxonomy are further proposed to filter the search results and categorize the selected publications, respectively. It is found through the review that the current release of ChatGPT has achieved only moderate or 'passing' performance in a variety of tests and is unreliable for actual clinical deployment, since it is not intended for clinical applications by design. We conclude that specialized NLP models trained on (bio)medical datasets still represent the right direction to pursue for critical clinical applications.
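The retrieval step can be reproduced in a few lines with Biopython's E-utilities wrapper; the email address is a placeholder, and the count will reflect the current PubMed snapshot rather than the paper's:

```python
# Hedged sketch: search PubMed for the keyword 'ChatGPT' via NCBI E-utilities.
from Bio import Entrez

Entrez.email = "you@example.org"   # placeholder; NCBI asks for a contact address
handle = Entrez.esearch(db="pubmed", term="ChatGPT", retmax=100)
record = Entrez.read(handle)
handle.close()
print(record["Count"], record["IdList"][:5])   # total hits and first PMIDs
```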


Subject(s)
Artificial Intelligence, Physicians, Humans, Factual Databases, Natural Language Processing, PubMed
13.
J Imaging Inform Med ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38862851

ABSTRACT

3D data from high-resolution volumetric imaging is a central resource for diagnosis and treatment in modern medicine. While the fast development of AI enhances imaging and analysis, commonly used visualization methods lag far behind. Recent research has used extended reality (XR) for perceiving 3D images with visual depth perception and touch, but relied on restrictive haptic devices. While unrestricted touch benefits volumetric data examination, implementing natural haptic interaction with XR is challenging. The research question is whether a multisensory XR application with intuitive haptic interaction adds value and should be pursued. In a study, 24 experts on biomedical images in research and medicine explored 3D medical shapes with three applications: a multisensory virtual reality (VR) prototype using haptic gloves, a simple VR prototype using controllers, and a standard PC application. Results of standardized questionnaires showed no significant differences among the application types regarding usability and no significant difference between the two VR applications regarding presence. Participants agreed with statements that VR visualizations provide better depth information, that using the hands instead of controllers simplifies data exploration, that the multisensory VR prototype allows intuitive data exploration, and that it is beneficial over traditional data examination methods. While most participants mentioned manual interaction as the best aspect, they also found it the most improvable. We conclude that a multisensory XR application with improved manual interaction adds value for volumetric biomedical data examination. We will proceed with our open-source research project ISH3DE (Intuitive Stereoptic Haptic 3D Data Exploration) to serve medical education, therapeutic decisions, surgery preparation, and research data analysis.

14.
Med Image Anal ; 93: 103100, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38340545

ABSTRACT

With the massive proliferation of data-driven algorithms, such as deep learning-based approaches, the availability of high-quality data is of great interest. Volumetric data is very important in medicine, as it supports tasks ranging from disease diagnosis to therapy monitoring. When the dataset is sufficient, models can be trained to help doctors with these tasks. Unfortunately, there are scenarios where large amounts of data are unavailable. For example, rare diseases and privacy issues can lead to restricted data availability. In non-medical fields, the high cost of obtaining enough high-quality data can also be a concern. A solution to these problems can be the generation of realistic synthetic data using Generative Adversarial Networks (GANs). The existence of these mechanisms is a good asset, especially in healthcare, as the data must be of good quality, realistic, and free of privacy issues. Therefore, most of the publications on volumetric GANs are within the medical domain. In this review, we provide a summary of works that generate realistic volumetric synthetic data using GANs. We therefore outline GAN-based methods in these areas with their common architectures, loss functions, and evaluation metrics, including their advantages and disadvantages. We present a novel taxonomy, evaluations, challenges, and research opportunities to provide a holistic overview of the current state of volumetric GANs.


Subject(s)
Algorithms, Data Analysis, Humans, Rare Diseases
15.
Med Image Anal ; 95: 103199, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38759258

ABSTRACT

Accurate diagnosis of pathological subtypes of lung cancer is of significant importance for follow-up treatment and prognosis management. In this paper, we propose the self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on computed tomography (CT) images. Inspired by studies stating that cross-scale associations exist in the image patterns between a case's CT images and its pathological images, we developed a pathological feature synthetic module (PFSM), which quantitatively maps cross-modality associations through deep neural networks, to derive from CT images the "gold standard" information contained in the corresponding pathological images. Additionally, we designed a radiological feature extraction module (RFEM) to directly acquire CT image information and integrated it with the pathological priors in an effective feature fusion framework, enabling the entire classification model to generate more indicative and specific pathology-related features and eventually output more accurate predictions. The superiority of the proposed model lies in its ability to self-generate hybrid features that contain multi-modality image information based on a single-modality input. To evaluate the effectiveness, adaptability, and generalization ability of our model, we performed extensive experiments on a large-scale multi-center dataset (829 cases from three hospitals) comparing our model with a series of state-of-the-art (SOTA) classification models. The experimental results demonstrated the superiority of our model for lung cancer subtype classification, with significant improvements in accuracy (ACC), area under the curve (AUC), positive predictive value (PPV), and F1-score.


Subject(s)
Lung Neoplasms, X-Ray Computed Tomography, Humans, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/classification, X-Ray Computed Tomography/methods, Neural Networks (Computer), Computer-Assisted Radiographic Image Interpretation/methods, Algorithms
16.
Syst Rev ; 13(1): 74, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38409059

ABSTRACT

BACKGROUND: The radial forearm free flap (RFFF) serves as a workhorse for a variety of reconstructions. Although there is a variety of surgical techniques for donor site closure after RFFF raising, the most common techniques are closure using a split-thickness skin graft (STSG) or a full-thickness skin graft (FTSG). The closure can result in wound complications and functional and aesthetic compromise of the forearm and hand. The aim of the planned systematic review and meta-analysis is to compare the wound-related, function-related, and aesthetics-related outcomes associated with full-thickness skin grafts (FTSG) and split-thickness skin grafts (STSG) in radial forearm free flap (RFFF) donor site closure. METHODS: A systematic review and meta-analysis will be conducted. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines will be followed. Electronic databases and platforms (PubMed, Embase, Scopus, Web of Science, Cochrane Central Register of Controlled Trials (CENTRAL), China National Knowledge Infrastructure (CNKI)) and clinical trial registries (ClinicalTrials.gov, the German Clinical Trials Register, the ISRCTN registry, the International Clinical Trials Registry Platform) will be searched using predefined search terms until 15 January 2024. A rerun of the search will be carried out within 12 months before publication of the review. Eligible studies should report on the occurrence of donor site complications after raising an RFFF and closure of the defect. Included closure techniques are those that use full-thickness or split-thickness skin grafts. Excluded techniques are primary wound closure without the use of a skin graft. Outcomes are considered wound-, function-, and aesthetics-related. Studies that will be included are randomized controlled trials (RCTs) and prospective and retrospective comparative cohort studies. Case-control studies, studies without a control group, animal studies, and cadaveric studies will be excluded. Screening will be performed in a blinded fashion by two reviewers per study; a third reviewer will resolve discrepancies. The risk of bias in the original studies will be assessed using the ROBINS-I and RoB 2 tools. Data synthesis will be done using Review Manager (RevMan) 5.4.1. If appropriate, a meta-analysis will be conducted. Between-study variability will be assessed using the I² index. If necessary, R will be used. The quality of evidence for outcomes will be assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. DISCUSSION: This study's findings may help us understand the complication rates of both closure techniques and may have important implications for developing future guidelines for RFFF donor site management. If the available data are limited and several questions remain unanswered, additional comparative studies will be needed. SYSTEMATIC REVIEW REGISTRATION: The protocol was developed in line with the PRISMA-P extension for protocols and was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on 17 September 2023 (registration number CRD42023351903).
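For context, the I² index mentioned for between-study variability is a simple function of Cochran's Q; a toy computation with invented effect sizes and variances:

```python
# I² = max(0, (Q - df) / Q) * 100, with Q = Cochran's Q over k studies.
import numpy as np

effects = np.array([0.30, 0.45, 0.10, 0.52])     # hypothetical study effect sizes
variances = np.array([0.02, 0.03, 0.015, 0.05])  # hypothetical within-study variances
w = 1 / variances                                # inverse-variance weights
pooled = (w * effects).sum() / w.sum()
Q = (w * (effects - pooled) ** 2).sum()
I2 = max(0.0, (Q - (len(effects) - 1)) / Q) * 100
print(f"Q = {Q:.2f}, I² = {I2:.1f}%")
```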


Subject(s)
Free Tissue Flaps, Skin Transplantation, Humans, Skin Transplantation/methods, Forearm/surgery, Systematic Reviews as Topic, Meta-Analysis as Topic
17.
Comput Methods Programs Biomed ; 252: 108215, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38781811

ABSTRACT

BACKGROUND AND OBJECTIVE: Cell segmentation in bright-field histological slides is a crucial topic in medical image analysis. Having access to accurate segmentation allows researchers to examine the relationship between cellular morphology and clinical observations. Unfortunately, most segmentation methods known today are limited to nuclei and cannot segment the cytoplasm. METHODS: We present a new network architecture, Cyto R-CNN, that is able to accurately segment whole cells (both nucleus and cytoplasm) in bright-field images. We also present a new dataset, CytoNuke, consisting of several thousand manual annotations of head and neck squamous cell carcinoma cells. Using this dataset, we compared the performance of Cyto R-CNN to other popular cell segmentation algorithms, including QuPath's built-in algorithm, StarDist, Cellpose, and a multi-scale Attention Deeplabv3+. To evaluate segmentation performance, we calculated AP50 and AP75 and measured 17 morphological and staining-related features for all detected cells. We compared these measurements to the gold standard of manual segmentation using the Kolmogorov-Smirnov test. RESULTS: Cyto R-CNN achieved an AP50 of 58.65% and an AP75 of 11.56% in whole-cell segmentation, outperforming all other methods (QuPath 19.46/0.91%; StarDist 45.33/2.32%; Cellpose 31.85/5.61%; Deeplabv3+ 3.97/1.01%). Cell features derived from Cyto R-CNN showed the best agreement with the gold standard (D̄ = 0.15), outperforming QuPath (D̄ = 0.22), StarDist (D̄ = 0.25), Cellpose (D̄ = 0.23), and Deeplabv3+ (D̄ = 0.33). CONCLUSION: Our newly proposed Cyto R-CNN architecture outperforms current algorithms in whole-cell segmentation while providing more reliable cell measurements than any other model. This could improve digital pathology workflows, potentially leading to improved diagnosis. Moreover, our published dataset can be used to develop further models in the future.
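The agreement check against the manual gold standard can be sketched with SciPy's two-sample Kolmogorov-Smirnov test; the feature values below are synthetic stand-ins for the measured cell features:

```python
# Compare a model-derived feature distribution with manual gold-standard values.
import numpy as np
from scipy.stats import ks_2samp

manual_areas = np.random.normal(200, 30, 500)   # stand-in: cell areas, manual
model_areas = np.random.normal(210, 35, 500)    # stand-in: cell areas, model
stat, p = ks_2samp(manual_areas, model_areas)
print(f"KS D = {stat:.3f}, p = {p:.3g}")        # smaller D = closer agreement
```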


Subject(s)
Algorithms, Computer-Assisted Image Processing, Neural Networks (Computer), Humans, Computer-Assisted Image Processing/methods, Cell Nucleus, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/pathology, Squamous Cell Carcinoma of Head and Neck/diagnostic imaging, Squamous Cell Carcinoma of Head and Neck/pathology, Cytoplasm, Reproducibility of Results, Squamous Cell Carcinoma/diagnostic imaging, Squamous Cell Carcinoma/pathology
18.
Diagnostics (Basel) ; 14(3)2024 Jan 27.
Article in English | MEDLINE | ID: mdl-38337796

ABSTRACT

PURPOSE: To assess the diagnostic accuracy of BMI-adapted, low-radiation and low-iodine dose, dual-source aortic CT for endoleak detection in non-obese and obese patients following endovascular aortic repair. METHODS: In this prospective single-center study, patients referred for follow-up CT after endovascular repair with a history of at least one standard triphasic (native, arterial and delayed phase) routine CT protocol were enrolled. Patients were divided into two groups and allocated to a BMI-adapted (group A, BMI < 30 kg/m2; group B, BMI ≥ 30 kg/m2) double low-dose CT (DLCT) protocol comprising single-energy arterial and dual-energy delayed phase series with virtual non-contrast (VNC) reconstructions. An in-patient comparison of the DLCT and routine CT protocol as reference standard was performed regarding differences in diagnostic accuracy, radiation dose, and image quality. RESULTS: Seventy-five patients were included in the study (mean age 73 ± 8 years, 63 (84%) male). Endoleaks were diagnosed in 20 (26.7%) patients, 11 of 53 (20.8%) in group A and 9 of 22 (40.9%) in group B. Two radiologists achieved an overall diagnostic accuracy of 98.7% and 97.3% for endoleak detection, with 100% in group A and 95.5% and 90.9% in group B. All examinations were diagnostic. The DLCT protocol reduced the effective dose from 10.0 ± 3.6 mSv to 6.1 ± 1.5 mSv (p < 0.001) and the total iodine dose from 31.5 g to 14.5 g in group A and to 17.4 g in group B. CONCLUSION: Optimized double low-dose dual-source aortic CT with VNC, arterial and delayed phase images demonstrated high diagnostic accuracy for endoleak detection and significant radiation and iodine dose reductions in both obese and non-obese patients compared to the reference standard of triple phase, standard radiation and iodine dose aortic CT.

19.
Med Image Anal ; 94: 103143, 2024 May.
Article in English | MEDLINE | ID: mdl-38507894

ABSTRACT

Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks and crucial for a wide range of applications. However, they are challenging tasks due to nuclei variances in staining and size, overlapping boundaries, and nuclei clustering. While convolutional neural networks have been extensively used for this task, we explore the potential of Transformer-based networks in combination with large-scale pre-training in this domain. Therefore, we introduce a new method for automated instance segmentation of cell nuclei in digitized tissue samples using a deep learning architecture based on the Vision Transformer, called CellViT. CellViT is trained and evaluated on the PanNuke dataset, one of the most challenging nuclei instance segmentation datasets, consisting of nearly 200,000 nuclei annotated into 5 clinically important classes across 19 tissue types. We demonstrate the superiority of large-scale in-domain and out-of-domain pre-trained Vision Transformers by leveraging the recently published Segment Anything Model and a ViT-encoder pre-trained on 104 million histological image patches, achieving state-of-the-art nuclei detection and instance segmentation performance on the PanNuke dataset with a mean panoptic quality of 0.50 and an F1-detection score of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT.
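For reference, the reported panoptic quality combines segmentation and detection quality in one number; a toy computation with invented matches:

```python
# PQ = (sum of IoUs over matched instances) / (TP + FP/2 + FN/2); numbers invented.
ious = [0.80, 0.65, 0.72]   # IoUs of matched (IoU > 0.5) nucleus instances
tp, fp, fn = len(ious), 2, 1
pq = sum(ious) / (tp + 0.5 * fp + 0.5 * fn)
print(f"PQ = {pq:.2f}")     # 0.48 for these toy values
```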


Subject(s)
Cell Nucleus, Neural Networks (Computer), Humans, Eosine Yellowish-(YS), Hematoxylin, Staining and Labeling, Computer-Assisted Image Processing
20.
Sci Data ; 11(1): 596, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844767

ABSTRACT

Aortic dissections (ADs) are serious conditions of the main artery of the human body, in which a tear in the inner layer of the aortic wall leads to the formation of a new blood flow channel, named the false lumen. ADs affecting the aorta distal to the left subclavian artery are classified as Stanford type B aortic dissections (type B AD). These are linked to substantial morbidity and mortality; however, the course of the disease is often unpredictable for the individual case. Computed tomography angiography (CTA) is the gold standard for the diagnosis of type B AD. To advance the tools available for the analysis of CTA scans, we provide a CTA collection of 40 type B AD cases from clinical routine with corresponding expert segmentations of the true and false lumina. Segmented CTA scans might aid clinicians in decision making, especially if the process can be fully automated. Therefore, the data collection is meant to be used to develop, train, and test algorithms.
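How the collection might be consumed can be sketched as follows, assuming NIfTI volumes and integer label codes for the two lumina; the filenames, format, and label values are assumptions, so check the dataset's documentation for the actual conventions:

```python
# Hedged sketch: load a CTA scan and its lumen segmentation, count labeled voxels.
import nibabel as nib

cta = nib.load("case_001_cta.nii.gz").get_fdata()   # hypothetical filename
seg = nib.load("case_001_seg.nii.gz").get_fdata()
print("CTA shape:", cta.shape)
for label, name in [(1, "true lumen"), (2, "false lumen")]:  # assumed label codes
    print(name, int((seg == label).sum()), "voxels")
```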


Subject(s)
Algorithms, Aortic Dissection, Computed Tomography Angiography, Humans, Aortic Dissection/diagnostic imaging, Artificial Intelligence