Results 1 - 5 of 5
1.
Sci Rep; 14(1): 20988, 2024 Sep 9.
Article in English | MEDLINE | ID: mdl-39251664

ABSTRACT

Image segmentation of the liver is an important step in treatment planning for liver cancer. However, manual segmentation at a large scale is not practical, leading to increasing reliance on deep learning models to automatically segment the liver. This manuscript develops a generalizable deep learning model to segment the liver on T1-weighted MR images. In particular, three distinct deep learning architectures (nnUNet, PocketNet, Swin UNETR) were considered using data gathered from six geographically different institutions. A total of 819 T1-weighted MR images were gathered from both public and internal sources. Our experiments compared each architecture's testing performance when trained both intra-institutionally and inter-institutionally. Models trained using nnUNet and its PocketNet variant achieved mean Dice-Sorensen similarity coefficients > 0.9 on both intra- and inter-institutional test set data. The performance of these models suggests that nnUNet and PocketNet liver segmentation models trained on a large and diverse collection of T1-weighted MR images would on average achieve good intra-institutional segmentation performance.
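The Dice-Sorensen similarity coefficient reported above is an overlap measure computed directly from binary masks. Below is a minimal Python sketch of that metric, assuming NumPy arrays for the predicted and reference liver masks; the function name and toy masks are illustrative and are not the authors' code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice-Sorensen similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * overlap / denom

# Toy 3D masks standing in for a predicted and a reference liver segmentation.
pred = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True
truth = np.zeros((4, 4, 4), dtype=bool)
truth[1:3, 1:3, 0:3] = True
print(f"Dice = {dice_coefficient(pred, truth):.3f}")
```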


Subject(s)
Deep Learning, Liver Diseases, Liver, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Liver/diagnostic imaging, Liver/pathology, Liver Diseases/diagnostic imaging, Liver Diseases/pathology, Contrast Media, Image Processing, Computer-Assisted/methods, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/pathology
2.
Res Sq; 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38746406

ABSTRACT

Image segmentation of the liver is an important step in several treatments for liver cancer. However, manual segmentation at a large scale is not practical, leading to increasing reliance on deep learning models to automatically segment the liver. This manuscript develops a deep learning model to segment the liver on T1w MR images. We sought to determine the best architecture by training, validating, and testing three different deep learning architectures using a total of 819 T1w MR images gathered from six different datasets, both publicly and internally available. Our experiments compared each architecture's testing performance when trained on data from the same dataset via 5-fold cross validation to its testing performance when trained on all other datasets. Models trained using nnUNet achieved mean Dice-Sorensen similarity coefficients > 90% when tested on each of the six datasets individually. The performance of these models suggests that an nnUNet liver segmentation model trained on a large and diverse collection of T1w MR images would be robust to potential changes in contrast protocol and disease etiology.
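The two evaluation regimes described here, within-dataset 5-fold cross validation versus training on all other datasets, amount to a simple split over scans grouped by source dataset. The sketch below illustrates that split logic under assumed scan identifiers and dataset labels; it is not the authors' training pipeline.

```python
from collections import defaultdict
from sklearn.model_selection import KFold

# Hypothetical scan listing: each scan tagged with its source dataset (A-F).
scans = [{"id": f"scan_{i:03d}", "dataset": ds}
         for i, ds in enumerate(["A", "B", "C", "D", "E", "F"] * 10)]

by_dataset = defaultdict(list)
for s in scans:
    by_dataset[s["dataset"]].append(s["id"])

# Intra-dataset regime: 5-fold cross validation within a single dataset.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
ids_a = by_dataset["A"]
for fold, (train_idx, test_idx) in enumerate(kf.split(ids_a)):
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test scans from A")

# Inter-dataset regime: train on all other datasets, test on the held-out one.
held_out = "A"
train_ids = [sid for ds, ids in by_dataset.items() if ds != held_out for sid in ids]
test_ids = by_dataset[held_out]
print(f"leave out {held_out}: {len(train_ids)} train / {len(test_ids)} test scans")
```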

3.
Med Phys; 51(7): 4898-4906, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38640464

ABSTRACT

BACKGROUND: Magnetic resonance imaging (MRI) scans are known to suffer from a variety of acquisition artifacts as well as equipment-based variations that impact image appearance and segmentation performance. It is still unclear whether a direct relationship exists between magnetic resonance (MR) image quality metrics (IQMs) (e.g., signal-to-noise, contrast-to-noise) and segmentation accuracy. PURPOSE: Deep learning (DL) approaches have shown significant promise for automated segmentation of brain tumors on MRI but depend on the quality of input training images. We sought to evaluate the relationship between IQMs of input training images and DL-based brain tumor segmentation accuracy toward developing more generalizable models for multi-institutional data. METHODS: We trained a 3D DenseNet model on the BraTS 2020 cohorts for segmentation of the tumor subregions (enhancing tumor (ET), peritumoral edematous tissue, and necrotic and non-enhancing tumor) on MRI, with performance quantified via a 5-fold cross-validated Dice coefficient. MRI scans were evaluated through the open-source quality control tool MRQy to yield 13 IQMs per scan. The Pearson correlation coefficient was computed between whole tumor (WT) Dice values and IQM measures in the training cohorts to identify the quality measures most correlated with segmentation performance. Each selected IQM was used to group MRI scans as "better" quality (BQ) or "worse" quality (WQ) via relative thresholding. Segmentation performance was re-evaluated for the DenseNet model when (i) training on BQ MRI images with validation on WQ images and (ii) training on WQ images with validation on BQ images. Trends were further validated on independent test sets derived from the BraTS 2021 training cohorts. RESULTS: For this study, multimodal MRI scans from the BraTS 2020 training cohorts were used to train the segmentation model, which was validated on independent test sets derived from the BraTS 2021 cohort. Among the selected IQMs, models trained on BQ images selected by inhomogeneity measurements (coefficient of variance, coefficient of joint variation, coefficient of variation of the foreground patch) and models trained on WQ images selected by the noise measurement peak signal-to-noise ratio (PSNR) yielded significantly improved tumor segmentation accuracy compared to their inverse models. CONCLUSIONS: Our results suggest that a significant correlation may exist between specific MR IQMs and DenseNet-based brain tumor segmentation performance. Selecting MRI scans for model training based on IQMs may yield more accurate and generalizable models in unseen validation.
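The quality-aware selection step described in METHODS pairs a Pearson correlation (between per-scan IQMs and whole tumor Dice) with a relative threshold that splits scans into "better" and "worse" quality groups. A small sketch of that analysis on synthetic values follows; the IQM choice, the simulated data, and the median threshold are assumptions for illustration, not the study's actual measurements.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic per-scan values standing in for one inhomogeneity IQM
# (e.g., coefficient of variation) and the whole tumor (WT) Dice score.
iqm = rng.uniform(0.1, 0.6, size=200)
wt_dice = np.clip(0.95 - 0.4 * iqm + rng.normal(0.0, 0.05, size=200), 0.0, 1.0)

# Correlation between the IQM and segmentation performance.
r, p = pearsonr(iqm, wt_dice)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")

# Relative thresholding: split scans into "better" / "worse" quality at the median.
threshold = np.median(iqm)
better_quality = iqm <= threshold   # lower inhomogeneity treated as better quality
worse_quality = ~better_quality
print(f"BQ scans: {better_quality.sum()}, WQ scans: {worse_quality.sum()}")
```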


Subject(s)
Brain Neoplasms, Deep Learning, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Brain Neoplasms/diagnostic imaging, Image Processing, Computer-Assisted/methods, Humans, Quality Control
4.
IEEE Trans Med Imaging; 42(4): 1172-1184, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36427285

ABSTRACT

Medical imaging deep learning models are often large and complex, requiring specialized hardware for training and evaluation. To address such issues, we propose the PocketNet paradigm to reduce the size of deep learning models by throttling the growth of the number of channels in convolutional neural networks. We demonstrate that, for a range of segmentation and classification tasks, PocketNet architectures produce results comparable to those of conventional neural networks while reducing the number of parameters by multiple orders of magnitude, using up to 90% less GPU memory, and speeding up training times by up to 40%, thereby allowing such models to be trained and deployed in resource-constrained settings.
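The central idea, throttling channel growth, can be illustrated by comparing a conventional encoder whose channel count doubles at every level with one that keeps the count constant. The PyTorch sketch below counts only the 3x3x3 convolution parameters of such an encoder; it is an illustrative toy under assumed base width, depth, and single-channel input, not the published PocketNet implementation.

```python
import torch.nn as nn

def encoder_channels(base: int, levels: int, doubling: bool) -> list[int]:
    """Channels per encoder level: conventional U-Nets double the count at each
    downsampling level; the pocket-style variant keeps it constant."""
    return [base * (2 ** i) if doubling else base for i in range(levels)]

def conv_param_count(channels: list[int]) -> int:
    """Parameters in a chain of 3x3x3 convolutions following the channel plan."""
    params, prev = 0, 1  # single-channel input, e.g., one T1-weighted volume
    for ch in channels:
        conv = nn.Conv3d(prev, ch, kernel_size=3, padding=1)
        params += sum(p.numel() for p in conv.parameters())
        prev = ch
    return params

conventional = conv_param_count(encoder_channels(base=32, levels=5, doubling=True))
pocket = conv_param_count(encoder_channels(base=32, levels=5, doubling=False))
print(f"doubling channels : {conventional:,} conv parameters")
print(f"constant channels : {pocket:,} conv parameters")
```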


Subject(s)
Diagnostic Imaging, Neural Networks, Computer
5.
J Appl Clin Med Phys; 23(4): e13557, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35148034

ABSTRACT

PURPOSE: Complex data processing and curation for artificial intelligence applications rely on high-quality data sets for training and analysis. Manually reviewing images and their associated annotations is a very laborious task, and existing quality control tools for data review are generally limited to raw images only. The purpose of this work was to develop an imaging informatics dashboard for the easy and fast review of processed magnetic resonance (MR) imaging data sets; we demonstrated its ability in a large-scale data review. METHODS: We developed a custom R Shiny dashboard that displays key static snapshots of each imaging study and its annotations. A graphical interface allows the structured entry of review data and download of tabulated review results. We evaluated the dashboard using two large data sets: 1380 processed MR imaging studies from our institution and 285 studies from the 2018 MICCAI Brain Tumor Segmentation Challenge (BraTS). RESULTS: Studies were reviewed at an average rate of 100/h using the dashboard, 10 times faster than with existing data viewers. For data from our institution, 1181 of the 1380 (86%) studies were of acceptable quality. The most commonly identified failure modes were tumor segmentation (9.6% of cases) and image registration (4.6% of cases). Tumor segmentations without visible errors on the dashboard had much better agreement with reference tumor volume measurements (root-mean-square error 12.2 cm³) than did segmentations with minor errors (20.5 cm³) or failed segmentations (27.4 cm³). In the BraTS data, 242 of 285 (85%) studies were of acceptable quality after processing. Among the 43 cases that failed review, 14 had unacceptable raw image quality. CONCLUSION: Our dashboard provides a fast, effective tool for reviewing complex processed MR imaging data sets. It is freely available for download at https://github.com/EGates1/MRDQED.
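The per-verdict volume agreement reported in RESULTS amounts to grouping reviewed studies by their dashboard verdict and computing a root-mean-square error against reference tumor volumes. The pandas sketch below mirrors that computation on a hypothetical review table; the column names and values are invented for illustration and are not the MRDQED export format.

```python
import numpy as np
import pandas as pd

# Hypothetical tabulated review results of the kind the dashboard lets reviewers
# download: one row per study, with the review verdict and tumor volumes (cm^3)
# from the processed segmentation and a reference measurement. Invented values.
reviews = pd.DataFrame({
    "study": [f"case_{i:03d}" for i in range(6)],
    "verdict": ["ok", "ok", "minor_error", "ok", "failed", "minor_error"],
    "auto_volume_cm3": [15.2, 8.1, 22.4, 30.0, 5.5, 12.9],
    "ref_volume_cm3": [14.8, 8.4, 18.9, 29.1, 1.2, 16.0],
})

def rmse(a: pd.Series, b: pd.Series) -> float:
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Volume agreement per review verdict, mirroring the per-group comparison above.
for verdict, group in reviews.groupby("verdict"):
    err = rmse(group["auto_volume_cm3"], group["ref_volume_cm3"])
    print(f"{verdict}: RMSE = {err:.1f} cm^3")
```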


Subject(s)
Artificial Intelligence, Brain Neoplasms, Data Accuracy, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods