Results 1 - 20 of 74
1.
JHEP Rep ; 6(8): 101125, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39139458

ABSTRACT

Background & Aims: Body composition assessment (BCA) parameters have recently been identified as relevant prognostic factors for patients with hepatocellular carcinoma (HCC). Herein, we aimed to investigate the role of BCA parameters for prognosis prediction in patients with HCC undergoing transarterial chemoembolization (TACE). Methods: This retrospective multicenter study included a total of 754 treatment-naïve patients with HCC who underwent TACE at six tertiary care centers between 2010 and 2020. Fully automated artificial intelligence-based quantitative 3D volumetry of abdominal cavity tissue composition was performed on pre-intervention computed tomography scans to assess skeletal muscle volume (SM), total adipose tissue (TAT), intra- and intermuscular adipose tissue, visceral adipose tissue, and subcutaneous adipose tissue (SAT). BCA parameters were normalized to the number of slices covering the abdominal cavity. We assessed the influence of BCA parameters on median overall survival and performed multivariate analysis including established estimates of survival. Results: Univariate survival analysis revealed that impaired median overall survival was predicted by low SM (p < 0.001), high TAT volume (p = 0.013), and high SAT volume (p = 0.006). In multivariate survival analysis, SM remained an independent prognostic factor (p = 0.039), while TAT and SAT volumes no longer showed predictive ability. This predictive role of SM was confirmed in a subgroup analysis of patients with BCLC stage B disease. Conclusions: SM is an independent prognostic factor for survival prediction. Thus, integrating SM into novel scoring systems could improve survival prediction and clinical decision-making. Fully automated approaches are needed to foster the implementation of this imaging biomarker into daily routine. Impact and implications: Body composition assessment parameters, especially skeletal muscle volume, have been identified as relevant prognostic factors for many diseases and treatments. In this study, skeletal muscle volume was identified as an independent prognostic factor for patients with hepatocellular carcinoma undergoing transarterial chemoembolization. Therefore, skeletal muscle volume could serve as an opportunistic biomarker in holistic patient assessment and be integrated into decision support systems. Workflow integration with artificial intelligence is essential for automated, quantitative body composition assessment, enabling broad availability in multidisciplinary case discussions.
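
To make the survival modeling concrete, here is a minimal sketch of a multivariate Cox regression over slice-normalized BCA parameters using the lifelines library; the column names and synthetic data are illustrative assumptions, not the study's actual pipeline.

```python
# Hedged sketch: Cox regression on slice-normalized BCA parameters.
# All names and the synthetic data frame are assumptions for illustration.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 754
df = pd.DataFrame({
    "sm_per_slice": rng.normal(7.5, 1.5, n),    # skeletal muscle volume per slice
    "tat_per_slice": rng.normal(12.0, 3.0, n),  # total adipose tissue per slice
    "sat_per_slice": rng.normal(6.0, 2.0, n),   # subcutaneous adipose tissue per slice
    "os_months": rng.exponential(20.0, n),      # overall survival time
    "event": rng.integers(0, 2, n),             # 1 = death observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
cph.print_summary()  # hazard ratio and p-value per BCA parameter
```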

2.
Lancet Oncol ; 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39089299

ABSTRACT

BACKGROUND: Prostate-specific membrane antigen (PSMA)-PET was introduced into clinical practice in 2012 and has since transformed the staging of prostate cancer. Prostate Cancer Molecular Imaging Standardized Evaluation (PROMISE) criteria were proposed to standardise PSMA-PET reporting. We aimed to compare the prognostic value of PSMA-PET by PROMISE (PPP) stage with established clinical nomograms in a large prostate cancer dataset with follow-up data for overall survival. METHODS: In this multicentre retrospective study, we used data from patients of any age with histologically proven prostate cancer who underwent PSMA-PET at the University Hospitals in Essen, Münster, Freiburg, and Dresden, Germany, between Oct 30, 2014, and Dec 27, 2021. We linked a subset of patient hospital records with patient data, including mortality data, from the Cancer Registry North-Rhine Westphalia, Germany. Patients from Essen University Hospital were randomly assigned to the development or internal validation cohort (2:1). Patients from Münster, Freiburg, and Dresden University Hospitals were included in an external validation cohort. Using the development cohort, we created quantitative and visual PPP nomograms based on Cox regression models, assessing potential PPP predictors with a least absolute shrinkage and selection operator (LASSO) penalty; the primary endpoint was overall survival. Performance was measured using Harrell's C-index in the internal and external validation cohorts and compared with established clinical risk scores (International Staging Collaboration for Cancer of the Prostate [STARCAP], European Association of Urology [EAU], and National Comprehensive Cancer Network [NCCN] risk scores) and a previous nomogram defined by Gafita et al (hereafter referred to as GAFITA) using receiver operating characteristic (ROC) curves and area under the ROC curve (AUC) estimates. FINDINGS: We analysed 2414 male patients (1110 in the development cohort, 502 in the internal validation cohort, and 802 in the external validation cohort), among whom 901 (37%) had died as of data cutoff (June 30, 2023; median follow-up of 52·9 months [IQR 33·9-79·0]). Predictors in the quantitative PPP nomogram were locoregional lymph node metastases (molecular imaging N2), distant metastases (extrapelvic nodal metastases, bone metastases [disseminated or diffuse marrow involvement], and organ metastases), tumour volume (in L), and tumour mean standardised uptake value. Predictors in the visual PPP nomogram were distant metastases (extrapelvic nodal metastases, bone metastases [disseminated or diffuse marrow involvement], and organ metastases) and total tumour lesion count. In the internal and external validation cohorts, C-indices were 0·80 (95% CI 0·77-0·84) and 0·77 (0·75-0·78) for the quantitative nomogram, respectively, and 0·78 (0·75-0·82) and 0·77 (0·75-0·78) for the visual nomogram, respectively. In the combined development and internal validation cohort, the quantitative PPP nomogram was superior to the STARCAP risk score for patients at initial staging (n=139 with available staging data; AUC 0·73 vs 0·54; p=0·018), the EAU risk score at biochemical recurrence (n=412; 0·69 vs 0·52; p<0·0001), and the NCCN pan-stage risk score (n=1534; 0·81 vs 0·74; p<0·0001) for the prediction of overall survival, but was similar to the GAFITA nomogram for metastatic hormone-sensitive prostate cancer (mHSPC; n=122; 0·76 vs 0·72; p=0·49) and metastatic castration-resistant prostate cancer (mCRPC; n=270; 0·67 vs 0·75; p=0·20). The visual PPP nomogram was superior to the EAU risk score at biochemical recurrence (n=414; 0·64 vs 0·52; p=0·0004) and the NCCN risk score across all stages (n=1544; 0·79 vs 0·73; p<0·0001), but similar to STARCAP for initial staging (n=140; 0·56 vs 0·53; p=0·74) and GAFITA for mHSPC (n=122; 0·74 vs 0·72; p=0·66) and mCRPC (n=270; 0·71 vs 0·75; p=0·23). INTERPRETATION: Our PPP nomograms accurately stratify high-risk and low-risk groups for overall survival in early and late stages of prostate cancer and yield prediction accuracy equal or superior to established clinical risk tools. Validation and improvement of the nomograms with long-term follow-up is ongoing (NCT06320223). FUNDING: Cancer Registry North-Rhine Westphalia.
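As a rough illustration of the modeling step, the sketch below fits a LASSO-penalized Cox model and reports Harrell's C-index with lifelines; the predictor names and synthetic data are assumptions in the spirit of the quantitative PPP nomogram, not the published model.

```python
# Hedged sketch: LASSO-penalized Cox model with C-index, on synthetic data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "miN2": rng.integers(0, 2, n),         # locoregional nodal disease (assumed coding)
    "miM1": rng.integers(0, 2, n),         # distant metastases (assumed coding)
    "tumor_volume_l": rng.gamma(2.0, 0.1, n),
    "suv_mean": rng.normal(8.0, 3.0, n),
    "os_months": rng.exponential(40.0, n),
    "death": rng.integers(0, 2, n),
})

# l1_ratio=1.0 makes the penalty pure LASSO, shrinking weak predictors to zero
cph = CoxPHFitter(penalizer=0.05, l1_ratio=1.0)
cph.fit(df, duration_col="os_months", event_col="death")
print(cph.params_)             # selected predictors keep nonzero coefficients
print(cph.concordance_index_)  # Harrell's C-index on the training data
```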

3.
J Imaging Inform Med ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38862851

ABSTRACT

3D data from high-resolution volumetric imaging is a central resource for diagnosis and treatment in modern medicine. While the fast development of AI enhances imaging and analysis, commonly used visualization methods lag far behind. Recent research has used extended reality (XR) for perceiving 3D images with visual depth perception and touch, but relied on restrictive haptic devices. While unrestricted touch benefits volumetric data examination, implementing natural haptic interaction with XR is challenging. The research question is whether a multisensory XR application with intuitive haptic interaction adds value and should be pursued. In a study, 24 experts in biomedical imaging from research and medicine explored 3D medical shapes with three applications: a multisensory virtual reality (VR) prototype using haptic gloves, a simple VR prototype using controllers, and a standard PC application. Results of standardized questionnaires showed no significant differences among the application types regarding usability, and no significant difference between the two VR applications regarding presence. Participants agreed with statements that VR visualizations provide better depth information, that using the hands instead of controllers simplifies data exploration, that the multisensory VR prototype allows intuitive data exploration, and that it is beneficial compared with traditional data examination methods. While most participants named manual interaction as the best aspect, they also identified it as the aspect most in need of improvement. We conclude that a multisensory XR application with improved manual interaction adds value for volumetric biomedical data examination. We will proceed with our open-source research project ISH3DE (Intuitive Stereoptic Haptic 3D Data Exploration) to serve medical education, therapeutic decisions, surgery preparation, and research data analysis.

4.
Sci Data ; 11(1): 596, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844767

ABSTRACT

Aortic dissections (ADs) are serious conditions of the main artery of the human body, in which a tear in the inner layer of the aortic wall leads to the formation of a new blood flow channel called the false lumen. ADs affecting the aorta distal to the left subclavian artery are classified as Stanford type B aortic dissections (type B AD). Type B AD is associated with substantial morbidity and mortality; however, the course of the disease for the individual case is often unpredictable. Computed tomography angiography (CTA) is the gold standard for the diagnosis of type B AD. To advance the tools available for the analysis of CTA scans, we provide a CTA collection of 40 type B AD cases from clinical routine with corresponding expert segmentations of the true and false lumina. Segmented CTA scans might aid clinicians in decision making, especially if the process can be fully automated. The data collection is therefore meant to be used to develop, train, and test algorithms.
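
A minimal sketch of how such a collection might be consumed, assuming NIfTI files and a label convention of 1 = true lumen, 2 = false lumen (both are assumptions; check the dataset documentation):

```python
# Hedged sketch: load a CTA segmentation and compute lumen volumes.
# File names and label values are assumptions for illustration.
import nibabel as nib
import numpy as np

seg = nib.load("case_001_seg.nii.gz")
voxels = np.asarray(seg.dataobj)

voxel_ml = np.prod(seg.header.get_zooms()[:3]) / 1000.0  # mm^3 -> ml
true_lumen_ml = np.count_nonzero(voxels == 1) * voxel_ml
false_lumen_ml = np.count_nonzero(voxels == 2) * voxel_ml
print(f"true lumen: {true_lumen_ml:.1f} ml, false lumen: {false_lumen_ml:.1f} ml")
```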


Subject(s)
Algorithms; Aortic Dissection; Computed Tomography Angiography; Humans; Aortic Dissection/diagnostic imaging; Artificial Intelligence
5.
J Imaging Inform Med ; 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926263

ABSTRACT

Standardized reporting of multiparametric prostate MRI (mpMRI) is widespread and follows international standards (PI-RADS). However, quantitative measurements from mpMRI are not widely comparable. Although T2 mapping sequences can provide repeatable quantitative image measurements and extract reliable imaging biomarkers from mpMRI, they are often time-consuming. We therefore investigated the value of quantitative measurements on a highly accelerated T2 mapping sequence in order to establish a threshold that differentiates benign from malignant lesions. For this purpose, we evaluated a novel, highly accelerated T2 mapping research sequence that enables high-resolution image acquisition with short acquisition times in everyday clinical practice. In this retrospective single-center study, we included 54 patients with clinically indicated MRI of the prostate and biopsy-confirmed carcinoma (n = 37) or exclusion of carcinoma (n = 17). All patients had received a standard-of-care biopsy of the prostate, the results of which were used to confirm or exclude the presence of malignant lesions. We used a linear mixed-effects model fitted by restricted maximum likelihood (REML) to determine the difference between the mean values of cancerous and healthy tissue. We found good differentiation between malignant lesions and normal-appearing tissue in the peripheral zone based on the mean T2 value: the mean T2 value for tissue without malignant lesions was 151.7 ms (95% CI: 146.9-156.5 ms), compared with 80.9 ms for malignant lesions (95% CI: 67.9-79.1 ms); p < 0.001. Based on this assessment, a cut-off of 109.2 ms is suggested. Additionally, a significant correlation was observed between T2 values of the peripheral zone and PI-RADS scores (p = 0.0194). However, no correlation was found between the Gleason score and the T2 relaxation time. Using REML, we found a difference of -82.7 ms in mean values between cancerous and healthy tissue. We established a cut-off value of 109.2 ms to accurately differentiate between malignant and non-malignant prostate regions. The addition of T2 mapping sequences to routine imaging could benefit automated lesion detection and facilitate contrast-free multiparametric MRI of the prostate.
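
The sketch below mimics the analysis on synthetic data: a linear mixed-effects model fitted by REML with a random intercept per patient, followed by the suggested 109.2 ms cut-off. Column names and values are fabricated for illustration.

```python
# Illustrative only: synthetic per-region T2 values; names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for pid in range(54):
    malignant = pid < 37                        # 37 with carcinoma, 17 without
    for _ in range(rng.integers(1, 4)):         # a few measured regions per patient
        t2 = rng.normal(80.9 if malignant else 151.7, 10.0)
        rows.append({"patient_id": pid, "malignant": int(malignant), "t2_ms": t2})
df = pd.DataFrame(rows)

# Linear mixed-effects model fitted by REML, random intercept per patient
model = smf.mixedlm("t2_ms ~ malignant", df, groups=df["patient_id"]).fit(reml=True)
print(model.params["malignant"])                # estimated mean difference in ms

# Applying the suggested threshold region-wise
df["suspicious"] = df["t2_ms"] < 109.2
```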

6.
Sci Data ; 11(1): 483, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38729970

ABSTRACT

The Sparsely Annotated Region and Organ Segmentation (SAROS) dataset was created using data from The Cancer Imaging Archive (TCIA) to provide a large open-access CT dataset with high-quality annotations of body landmarks. In-house segmentation models were employed to generate annotation proposals on randomly selected cases from TCIA. The dataset includes 13 semantic body region labels (abdominal/thoracic cavity, bones, brain, breast implant, mediastinum, muscle, parotid/submandibular/thyroid glands, pericardium, spinal cord, subcutaneous tissue) and six body part labels (left/right arm/leg, head, torso). Case selection was based on the DICOM series description, gender, and imaging protocol, resulting in 882 patients (438 female) and a total of 900 CTs. Manual review and correction of the proposals were conducted in a continuous quality control cycle. Only every fifth axial slice was annotated, yielding 20,150 annotated slices from 28 data collections. To ensure reproducibility on downstream tasks, five cross-validation folds and a test set were pre-defined. The SAROS dataset serves as an open-access resource for training and evaluating novel segmentation models, covering various scanner vendors and diseases.
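
For downstream use, the pre-defined splits might be honored like this; the metadata file name and column layout are assumptions, not the published format.

```python
# Hedged sketch: respect pre-defined CV folds and test set.
# CSV layout assumed: columns case_id and split ('fold1'..'fold5' or 'test').
import pandas as pd

meta = pd.read_csv("saros_metadata.csv")
test_cases = meta.loc[meta["split"] == "test", "case_id"]
for k in range(1, 6):
    val = meta.loc[meta["split"] == f"fold{k}", "case_id"]
    train = meta.loc[~meta["split"].isin([f"fold{k}", "test"]), "case_id"]
    print(f"fold {k}: {len(train)} train / {len(val)} val cases")
```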


Subject(s)
Tomography, X-Ray Computed; Whole Body Imaging; Female; Humans; Male; Image Processing, Computer-Assisted
7.
J Med Syst ; 48(1): 55, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38780820

ABSTRACT

Designing implants for large and complex cranial defects is a challenging task, even for professional designers. Current efforts to automate the design process have focused mainly on convolutional neural networks (CNNs), which have produced state-of-the-art results in reconstructing synthetic defects. However, existing CNN-based methods have been difficult to translate to clinical practice in cranioplasty, as their performance on large and complex cranial defects remains unsatisfactory. In this paper, we present a statistical shape model (SSM) built directly on the segmentation masks of skulls represented as binary voxel occupancy grids, and evaluate it on several cranial implant design datasets. Results show that, while CNN-based approaches outperform the SSM on synthetic defects, they are inferior to the SSM on large, complex, and real-world defects. Experienced neurosurgeons judged the implants generated by the SSM to be feasible for clinical use after minor manual corrections. Datasets and the SSM model are publicly available at https://github.com/Jianningli/ssm.
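
In spirit, such an SSM is a PCA over aligned binary voxel grids. The sketch below shows the idea on toy data; the grid size, mode count, and completion heuristic are assumptions, not the published implementation (see the linked repository for that).

```python
# Hedged sketch: PCA-based statistical shape model on binary voxel grids.
import numpy as np

# toy training set: N aligned skull masks flattened to vectors
N, D = 20, 32 * 32 * 32
rng = np.random.default_rng(3)
X = (rng.random((N, D)) > 0.5).astype(np.float32)

mean = X.mean(axis=0)
Xc = X - mean
# PCA via SVD; rows of Vt are the principal modes of shape variation
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
modes, stds = Vt[:k], S[:k] / np.sqrt(N - 1)

def reconstruct(defective_mask: np.ndarray) -> np.ndarray:
    """Project a defective skull onto the model and return the completed shape."""
    b = modes @ (defective_mask.ravel() - mean)
    b = np.clip(b, -3 * stds, 3 * stds)      # stay within plausible shape space
    completed = mean + modes.T @ b
    return (completed > 0.5).reshape(32, 32, 32)
```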


Subject(s)
Neural Networks, Computer; Skull; Humans; Skull/surgery; Skull/anatomy & histology; Skull/diagnostic imaging; Models, Statistical; Image Processing, Computer-Assisted/methods; Plastic Surgery Procedures/methods; Prostheses and Implants
8.
Comput Methods Programs Biomed ; 252: 108215, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38781811

ABSTRACT

BACKGROUND AND OBJECTIVE: Cell segmentation in bright-field histological slides is a crucial topic in medical image analysis. Access to accurate segmentation allows researchers to examine the relationship between cellular morphology and clinical observations. Unfortunately, most segmentation methods known today are limited to nuclei and cannot segment the cytoplasm. METHODS: We present a new network architecture, Cyto R-CNN, that is able to accurately segment whole cells (both nucleus and cytoplasm) in bright-field images. We also present a new dataset, CytoNuke, consisting of several thousand manual annotations of head and neck squamous cell carcinoma cells. Using this dataset, we compared the performance of Cyto R-CNN to other popular cell segmentation algorithms, including QuPath's built-in algorithm, StarDist, Cellpose, and a multi-scale Attention Deeplabv3+. To evaluate segmentation performance, we calculated AP50 and AP75 and measured 17 morphological and staining-related features for all detected cells. We compared these measurements to the gold standard of manual segmentation using the Kolmogorov-Smirnov test. RESULTS: Cyto R-CNN achieved an AP50 of 58.65% and an AP75 of 11.56% in whole-cell segmentation, outperforming all other methods (QuPath 19.46%/0.91%; StarDist 45.33%/2.32%; Cellpose 31.85%/5.61%; Deeplabv3+ 3.97%/1.01%). Cell features derived from Cyto R-CNN showed the best agreement with the gold standard (D̄ = 0.15), outperforming QuPath (D̄ = 0.22), StarDist (D̄ = 0.25), Cellpose (D̄ = 0.23), and Deeplabv3+ (D̄ = 0.33). CONCLUSION: Our newly proposed Cyto R-CNN architecture outperforms current algorithms in whole-cell segmentation while providing more reliable cell measurements than any other model. This could improve digital pathology workflows, potentially leading to improved diagnosis. Moreover, our published dataset can be used to develop further models in the future.
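
The feature-agreement evaluation boils down to two-sample Kolmogorov-Smirnov distances between per-cell feature distributions; a minimal sketch with scipy on toy data:

```python
# Hedged sketch: KS distance between a model's per-cell feature distribution
# and the manual gold standard. The feature values are toy data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
manual_area = rng.normal(300.0, 60.0, 1000)  # gold-standard cell areas
model_area = rng.normal(310.0, 70.0, 950)    # areas from an automatic model

stat, p = ks_2samp(manual_area, model_area)
print(f"KS distance D = {stat:.3f} (p = {p:.3g})")
# Averaging D over all 17 features gives a mean distance like the reported D-bar.
```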


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Cell Nucleus; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/pathology; Squamous Cell Carcinoma of Head and Neck/diagnostic imaging; Squamous Cell Carcinoma of Head and Neck/pathology; Cytoplasm; Reproducibility of Results; Carcinoma, Squamous Cell/diagnostic imaging; Carcinoma, Squamous Cell/pathology
10.
Med Image Anal ; 94: 103143, 2024 May.
Article in English | MEDLINE | ID: mdl-38507894

ABSTRACT

Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks and crucial for a wide range of applications. However, they are challenging due to variance in nuclei staining and size, overlapping boundaries, and nuclei clustering. While convolutional neural networks have been extensively used for these tasks, we explore the potential of Transformer-based networks in combination with large-scale pre-training in this domain. To this end, we introduce a new method for automated instance segmentation of cell nuclei in digitized tissue samples using a deep learning architecture based on the Vision Transformer, called CellViT. CellViT is trained and evaluated on the PanNuke dataset, one of the most challenging nuclei instance segmentation datasets, consisting of nearly 200,000 nuclei annotated into five clinically important classes across 19 tissue types. We demonstrate the superiority of large-scale in-domain and out-of-domain pre-trained Vision Transformers by leveraging the recently published Segment Anything Model and a ViT encoder pre-trained on 104 million histological image patches, achieving state-of-the-art nuclei detection and instance segmentation performance on the PanNuke dataset with a mean panoptic quality of 0.50 and an F1 detection score of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT.
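
For reference, panoptic quality combines segmentation and detection quality. Here is a minimal sketch of the metric; the matching inputs are toy values, not PanNuke results.

```python
# Hedged sketch of the panoptic quality (PQ) metric used to score nuclei models.
def panoptic_quality(iou_of_matches, n_fp, n_fn):
    """PQ = (sum of IoUs over matched pairs) / (TP + 0.5*FP + 0.5*FN).

    A prediction and a ground-truth nucleus count as matched (a true positive)
    when their IoU exceeds 0.5, which makes the matching unique.
    """
    tp = len(iou_of_matches)
    if tp + n_fp + n_fn == 0:
        return 0.0
    return sum(iou_of_matches) / (tp + 0.5 * n_fp + 0.5 * n_fn)

print(panoptic_quality([0.9, 0.8, 0.75], n_fp=1, n_fn=2))  # ~0.54
```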


Subject(s)
Cell Nucleus; Neural Networks, Computer; Humans; Eosine Yellowish-(YS); Hematoxylin; Staining and Labeling; Image Processing, Computer-Assisted
11.
Nat Methods ; 21(2): 195-212, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347141

ABSTRACT

Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. In biomedical image analysis, chosen performance metrics often do not reflect the domain interest, and thus fail to adequately measure scientific progress and hinder translation of ML techniques into practice. To overcome this, we created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Developed by a large international consortium in a multistage Delphi process, it is based on the novel concept of a problem fingerprint-a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), dataset and algorithm output. On the basis of the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as classification tasks at image, object or pixel level, namely image-level classification, object detection, semantic segmentation and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. Its applicability is demonstrated for various biomedical use cases.


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Machine Learning; Semantics
12.
Nat Methods ; 21(2): 182-194, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347140

ABSTRACT

Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.


Subject(s)
Artificial Intelligence
13.
Med Image Anal ; 93: 103100, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38340545

ABSTRACT

With the massive proliferation of data-driven algorithms, such as deep learning-based approaches, the availability of high-quality data is of great interest. Volumetric data are very important in medicine, with applications ranging from disease diagnosis to therapy monitoring. When sufficient data are available, models can be trained to help doctors with these tasks. Unfortunately, there are scenarios where large amounts of data are unavailable; for example, rare diseases and privacy issues can lead to restricted data availability. In non-medical fields, the high cost of obtaining enough high-quality data can also be a concern. A solution to these problems can be the generation of realistic synthetic data using Generative Adversarial Networks (GANs). Such generative mechanisms are particularly valuable in healthcare, where data must be of high quality, realistic, and free of privacy issues; accordingly, most publications on volumetric GANs are within the medical domain. In this review, we provide a summary of works that generate realistic volumetric synthetic data using GANs. We outline the common architectures, loss functions, and evaluation metrics of GAN-based methods in these areas, including their advantages and disadvantages. We present a novel taxonomy, evaluations, challenges, and research opportunities to provide a holistic overview of the current state of volumetric GANs.
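
As an illustration of the volumetric building block most of the reviewed methods share, here is a minimal 3D DCGAN-style generator in PyTorch; the layer sizes are arbitrary choices, not a specific published architecture.

```python
# Hedged sketch: a toy 3D generator mapping a latent vector to a 32^3 volume.
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    def __init__(self, z_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            # project the latent vector to a 4x4x4 feature volume
            nn.ConvTranspose3d(z_dim, 256, kernel_size=4, stride=1, padding=0),
            nn.BatchNorm3d(256), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(256, 128, 4, 2, 1),   # 8^3
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, 4, 2, 1),    # 16^3
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(64, 1, 4, 2, 1),      # 32^3 synthetic volume
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

fake = Generator3D()(torch.randn(2, 128))
print(fake.shape)  # torch.Size([2, 1, 32, 32, 32])
```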


Subject(s)
Algorithms; Data Analysis; Humans; Rare Diseases
14.
Comput Methods Programs Biomed ; 245: 108013, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38262126

ABSTRACT

The recent release of ChatGPT, a chatbot research project/product based on natural language processing (NLP) by OpenAI, stirred up a sensation among both the general public and medical professionals, amassing a phenomenally large user base in a short time. This is a typical example of the 'productization' of cutting-edge technologies, which allows the general public without a technical background to gain firsthand experience with artificial intelligence (AI), similar to the AI hype created by AlphaGo (DeepMind Technologies, UK) and self-driving cars (Google, Tesla, etc.). However, it is crucial, especially for healthcare researchers, to remain prudent amidst the hype. This work provides a systematic review of existing publications on the use of ChatGPT in healthcare, elucidating the 'status quo' of ChatGPT in medical applications for general readers, healthcare professionals, and NLP scientists. The large biomedical literature database PubMed is used to retrieve published works on this topic using the keyword 'ChatGPT'. An inclusion criterion and a taxonomy are further proposed to filter the search results and categorize the selected publications, respectively. The review finds that the current release of ChatGPT has achieved only moderate or 'passing' performance in a variety of tests and is unreliable for actual clinical deployment, since it is not intended for clinical applications by design. We conclude that specialized NLP models trained on (bio)medical datasets still represent the right direction to pursue for critical clinical applications.
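
The retrieval step can be reproduced in a few lines with NCBI E-utilities via Biopython; the email address is a placeholder required by NCBI's usage policy.

```python
# Hedged sketch of the PubMed keyword search described above.
from Bio import Entrez

Entrez.email = "you@example.org"  # placeholder; required by NCBI usage policy
handle = Entrez.esearch(db="pubmed", term="ChatGPT", retmax=500)
record = Entrez.read(handle)
handle.close()
print(f"{record['Count']} PubMed records matched; first IDs: {record['IdList'][:5]}")
```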


Subject(s)
Artificial Intelligence; Delivery of Health Care; Natural Language Processing; Humans
15.
ArXiv ; 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-36945687

ABSTRACT

Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: while taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.

16.
IEEE J Biomed Health Inform ; 28(1): 100-109, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37624724

ABSTRACT

Recently, deep learning has been demonstrated to be feasible in eliminating the use of gadolinium-based contrast agents (GBCAs) by synthesizing gadolinium-free contrast-enhanced MRI (GFCE-MRI) from contrast-free MRI sequences, providing the community with an alternative that avoids GBCA-associated safety issues in patients. Nevertheless, generalizability assessment of GFCE-MRI models has been largely challenged by the high inter-institutional heterogeneity of MRI data, on top of the scarcity of multi-institutional data itself. Although various data normalization methods have been adopted to address the heterogeneity issue, these investigations have been limited to single institutions, and there is presently no standard normalization approach. In this study, we investigated the generalizability of GFCE-MRI models using data from seven institutions by manipulating the heterogeneity of MRI data under five popular normalization approaches. Three state-of-the-art neural networks were applied to map from T1-weighted and T2-weighted MRI to contrast-enhanced MRI (CE-MRI) for GFCE-MRI synthesis in patients with nasopharyngeal carcinoma. MRI data from three institutions were used separately to generate three uni-institution models and jointly for a tri-institution model. The five normalization methods were applied to normalize the data of each model. MRI data from the remaining four institutions served as external cohorts for assessing model generalizability. The quality of GFCE-MRI was quantitatively evaluated against ground-truth CE-MRI using mean absolute error (MAE) and peak signal-to-noise ratio (PSNR). Results showed that the performance of all uni-institution models dropped markedly on the external cohorts. By contrast, the model trained on multi-institutional data with Z-Score normalization yielded the best generalizability.
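
A sketch of the best-performing preprocessing (Z-Score normalization) and the two reported quality metrics; the arrays are synthetic stand-ins for MRI volumes.

```python
# Hedged sketch: z-score normalization plus MAE and PSNR on toy volumes.
import numpy as np

def zscore(volume: np.ndarray) -> np.ndarray:
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def mae(pred: np.ndarray, target: np.ndarray) -> float:
    return float(np.abs(pred - target).mean())

def psnr(pred: np.ndarray, target: np.ndarray) -> float:
    mse = float(((pred - target) ** 2).mean())
    peak = float(target.max() - target.min())  # dynamic range of the reference
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(5)
ce_mri = rng.random((64, 64, 64))                     # ground-truth CE-MRI stand-in
synthetic = ce_mri + rng.normal(0, 0.05, ce_mri.shape)  # synthesized GFCE-MRI stand-in
print(mae(synthetic, ce_mri), psnr(synthetic, ce_mri))
```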


Subject(s)
Gadolinium; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Signal-To-Noise Ratio
17.
Eur Radiol ; 34(1): 330-337, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37505252

ABSTRACT

OBJECTIVES: To provide physicians and researchers with an efficient way to extract information from weakly structured radiology reports using natural language processing (NLP) machine learning models. METHODS: We evaluated seven different German bidirectional encoder representations from transformers (BERT) models on a dataset of 857,783 unlabeled radiology reports and an annotated reading comprehension dataset in the format of SQuAD 2.0 based on 1223 additional reports. RESULTS: Continued pre-training of a BERT model on the radiology dataset and a medical online encyclopedia resulted in the most accurate model, with an F1-score of 83.97% and an exact match score of 71.63% for answerable questions, and 96.01% accuracy in detecting unanswerable questions. Fine-tuning a non-medical model without further pre-training led to the lowest-performing model. The final model proved stable against variation in the formulation of questions and in dealing with questions on topics excluded from the training set. CONCLUSIONS: General-domain BERT models further pre-trained on radiological data achieve high accuracy in answering questions on radiology reports. We propose integrating our approach into the workflow of medical practitioners and researchers to extract information from radiology reports. CLINICAL RELEVANCE STATEMENT: By reducing the need for manual searches of radiology reports, radiologists' resources are freed up, which indirectly benefits patients. KEY POINTS: • BERT models pre-trained on general-domain datasets and radiology reports achieve high accuracy (83.97% F1-score) on question answering for radiology reports. • The best-performing model achieves an F1-score of 83.97% for answerable questions and 96.01% accuracy for questions without an answer. • Additional radiology-specific pre-training of all investigated BERT models improves their performance.
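
Conceptually, the task is extractive question answering over report text. A minimal sketch with Hugging Face transformers follows, using a public German QA checkpoint as a stand-in for the study's own model.

```python
# Hedged sketch: extractive QA over a (toy) German radiology report.
# The checkpoint is a public stand-in, not the model trained in the study.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/gelectra-base-germanquad")
report = ("Befund: Kein Nachweis einer Raumforderung. "
          "Die Lunge ist beidseits belüftet, kein Pleuraerguss.")
answer = qa(question="Gibt es einen Pleuraerguss?", context=report)
print(answer["answer"], answer["score"])  # extracted span and confidence
```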


Subject(s)
Information Storage and Retrieval; Radiology; Humans; Language; Machine Learning; Natural Language Processing
18.
Comput Methods Programs Biomed ; 243: 107912, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37981454

ABSTRACT

BACKGROUND AND OBJECTIVE: We present a novel deep learning-based skull-stripping algorithm for magnetic resonance imaging (MRI) that works directly in the information-rich, complex-valued k-space. METHODS: Using four datasets from different institutions with a total of around 200,000 MRI slices, we show that our network can perform skull stripping on the raw data of MRIs while preserving the phase information, which no other skull-stripping algorithm can do. For two of the datasets, skull stripping performed by HD-BET (Brain Extraction Tool) in the image domain is used as the ground truth, whereas the third and fourth datasets come with manually annotated brain segmentations. RESULTS: Results on all four datasets were very similar to the ground truth (Dice scores of 92%-99% and Hausdorff distances under 5.5 pixels). Slices above the eye region reach Dice scores of up to 99%, whereas accuracy drops in the regions around and below the eyes, with partially blurred output. The output of k-Strip often has smoothed edges at the demarcation to the skull. Binary masks are created with an appropriate threshold. CONCLUSION: With this proof-of-concept study, we were able to show the feasibility of working in the k-space frequency domain, preserving phase information, with consistent results. Besides preserving valuable information for further diagnostics, this approach enables immediate anonymization of patient data, even before transformation into the image domain. Future research should be dedicated to discovering additional ways the k-space can be used for innovative image analysis and further workflows.
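
For intuition, the sketch below shows the image-to-k-space round trip that makes phase-preserving processing possible; the network itself operates on the complex k-space data directly.

```python
# Hedged sketch: image <-> k-space round trip on a toy complex-valued "head".
import numpy as np

image = np.zeros((256, 256), dtype=np.complex64)
image[96:160, 96:160] = 1.0 + 0.5j            # toy object with phase content

kspace = np.fft.fftshift(np.fft.fft2(image))  # the raw-data domain the model sees
restored = np.fft.ifft2(np.fft.ifftshift(kspace))

print(np.allclose(restored, image, atol=1e-5))  # phase survives the round trip
```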


Subject(s)
Algorithms; Skull; Humans; Skull/diagnostic imaging; Brain/diagnostic imaging; Brain/pathology; Image Processing, Computer-Assisted/methods; Head; Magnetic Resonance Imaging/methods
19.
Sci Rep ; 13(1): 21231, 2023 12 01.
Article in English | MEDLINE | ID: mdl-38040865

ABSTRACT

Cerebral organoids recapitulate the structure and function of the developing human brain in vitro, offering great potential for personalized therapeutic strategies. The enormous growth of this research area over the past decade, together with its potential for clinical translation, makes a non-invasive, automated analysis pipeline for organoids highly desirable. This work presents a novel non-invasive approach to monitor and analyze cerebral organoids over time using high-field magnetic resonance imaging and state-of-the-art tools for automated image analysis. Three specific objectives are addressed: (I) organoid segmentation to investigate organoid development over time, (II) global cysticity classification, and (III) local cyst segmentation for organoid quality assessment. We show that organoid growth can be monitored reliably over time, and that cystic and non-cystic organoids can be separated with high accuracy, with performance on par with or better than state-of-the-art tools applied to brightfield imaging. Local cyst segmentation is feasible but could be further improved in the future. Overall, these results highlight the potential of the pipeline for clinical application to larger-scale comparative organoid analysis.


Subject(s)
Cysts; Organoids; Humans; Organoids/pathology; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Cysts/pathology; Artificial Intelligence
20.
BMC Med Imaging ; 23(1): 174, 2023 10 31.
Article in English | MEDLINE | ID: mdl-37907876

ABSTRACT

BACKGROUND: With the rise in importance of personalized medicine and deep learning, we combine the two to create personalized neural networks. The aim of the study is to provide a proof of concept that data from just one patient can be used to train deep neural networks to detect tumor progression in longitudinal datasets. METHODS: Two datasets with 64 scans from 32 patients with glioblastoma multiforme (GBM) were evaluated in this study. Contrast-enhanced T1-weighted sequences of brain magnetic resonance imaging (MRI) were used. We trained a neural network for each patient using just two scans from different timepoints to map the difference between the images; the change in tumor volume can be calculated from this map. The neural networks were a form of Wasserstein GAN (generative adversarial network), an unsupervised learning architecture. The combination of data augmentation and the network architecture allowed us to skip co-registration of the images. Furthermore, no additional training data, pre-training of the networks, or any (manual) annotations are necessary. RESULTS: The model achieved an AUC of 0.87 for detecting tumor change. We also introduced a modified version of the RANO criteria, for which an accuracy of 66% was achieved. CONCLUSIONS: We show a novel deep learning approach that uses data from just one patient to train deep neural networks to monitor tumor change. Evaluating the results on two different datasets shows the potential of the method to generalize.
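
As a rough sketch of the underlying objective, the Wasserstein critic and generator losses look like this in PyTorch; the toy critic stands in for the paper's actual architecture.

```python
# Hedged sketch: Wasserstein-GAN losses with a toy critic on toy slices.
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128),
                       nn.LeakyReLU(0.2), nn.Linear(128, 1))

real = torch.randn(8, 1, 64, 64)   # stand-ins for MRI slices at two timepoints
fake = torch.randn(8, 1, 64, 64)   # stand-ins for generated difference images

# The critic maximizes E[f(real)] - E[f(fake)]; we minimize the negative.
critic_loss = critic(fake).mean() - critic(real).mean()
generator_loss = -critic(fake).mean()
print(critic_loss.item(), generator_loss.item())
```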


Subject(s)
Glioblastoma; Neural Networks, Computer; Humans; Magnetic Resonance Imaging; Brain; Glioblastoma/diagnostic imaging; Image Processing, Computer-Assisted/methods