Results 1 - 10 of 10
1.
IEEE Trans Med Imaging; PP: 2024 May 08.
Article in English | MEDLINE | ID: mdl-38717881

ABSTRACT

Deep learning models have achieved remarkable success in medical image classification. These models are typically trained once on the available annotated images and thus lack the ability to continually learn new tasks (i.e., new classes or data distributions) due to the problem of catastrophic forgetting. Recently, there has been growing interest in designing continual learning methods that learn different tasks presented sequentially over time while preserving previously acquired knowledge. However, these methods focus mainly on preventing catastrophic forgetting and are tested under a closed-world assumption, i.e., assuming the test data are drawn from the same distribution as the training data. In this work, we advance the state of the art in continual learning by proposing GC2 for medical image classification, which learns a sequence of tasks while simultaneously enhancing its out-of-distribution robustness. To alleviate forgetting, GC2 employs gradual culpability-based network pruning to identify an optimal subnetwork for each task. To improve generalization, GC2 incorporates adversarial image augmentation and knowledge distillation to learn generalized and robust representations for each subnetwork. Our extensive experiments on multiple benchmarks under task-agnostic inference demonstrate that GC2 significantly outperforms baselines and other continual learning methods in reducing forgetting and enhancing generalization. Our code is publicly available at https://github.com/nourhanb/TMI2024-GC2.
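A minimal sketch of the per-task subnetwork idea described in this abstract, assuming PyTorch. The |weight x gradient| saliency used as the "culpability" score and the pruning fraction are illustrative stand-ins; the abstract does not specify the exact criterion or schedule.

import torch

def prune_step(model, loss, prune_fraction=0.1):
    # Zero out the lowest-scoring fraction of currently active weights.
    # Score = |weight * gradient| (a stand-in for the culpability criterion).
    loss.backward()
    for p in model.parameters():
        if p.grad is None:
            continue
        score = (p.detach() * p.grad.detach()).abs()
        active = p.detach() != 0
        k = int(prune_fraction * active.sum().item())
        if k == 0:
            continue
        thresh = score[active].kthvalue(k).values
        mask = (score > thresh) & active
        p.data.mul_(mask)                      # drop low-culpability weights
    model.zero_grad()

def task_mask(model):
    # Binary mask of weights kept for the current task; freezing these masks
    # after each task is what prevents forgetting in this sketch.
    return [(p.detach() != 0).clone() for p in model.parameters()]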

2.
Comput Med Imaging Graph; 102: 102127, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36257092

ABSTRACT

Supervised deep learning has become a standard approach to solving medical image segmentation tasks. However, the serious difficulty of obtaining pixel-level annotations for sufficiently large volumetric datasets in real-life applications has highlighted the critical need for alternative approaches, such as semi-supervised learning, in which model training leverages small expert-annotated datasets to enable learning from much larger datasets without laborious annotation. Most semi-supervised approaches weight expert annotations and machine-generated annotations equally during deep model training, even though the latter are relatively unreliable and likely to affect model optimization negatively. To overcome this, we propose an active learning approach that uses an example re-weighting strategy, in which machine-annotated samples are weighted (i) by the similarity of their gradient descent directions to those of expert-annotated data and (ii) by the gradient magnitude of the deep model's last layer. Specifically, we present an active learning strategy with a query function that enables the selection of reliable and more informative samples from machine-annotated batch data generated by a noisy teacher. When validated on clinical COVID-19 CT benchmark data, our method improved pneumonia infection segmentation performance compared with the state of the art.
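A rough sketch of the gradient-based re-weighting idea, assuming PyTorch. The cosine-similarity-times-magnitude weighting and the query rule are assumptions; only the use of last-layer gradients and gradient-direction similarity comes from the abstract.

import torch
import torch.nn.functional as F

def last_layer_grad(model, loss, last_layer):
    grads = torch.autograd.grad(loss, last_layer.parameters(), retain_graph=True)
    return torch.cat([g.flatten() for g in grads])

def sample_weights(model, last_layer, expert_loss, noisy_losses):
    # Weight each machine-annotated sample by how well its last-layer gradient
    # agrees in direction with the expert-annotated batch, scaled by magnitude.
    g_expert = last_layer_grad(model, expert_loss, last_layer)
    weights = []
    for loss_i in noisy_losses:                # one loss per machine-annotated sample
        g_i = last_layer_grad(model, loss_i, last_layer)
        sim = F.cosine_similarity(g_i, g_expert, dim=0)
        mag = g_i.norm()
        weights.append((sim.clamp(min=0) * mag).item())
    return torch.tensor(weights)

# Query (assumed form): keep the highest-weighted samples for training, e.g.
# selected = torch.topk(sample_weights(...), k=budget).indices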


Subjects
COVID-19; Deep Learning; Humans; Imaging, Three-Dimensional/methods; Supervised Machine Learning; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods
3.
Comput Med Imaging Graph; 90: 101924, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33895621

ABSTRACT

Fuhrman cancer grading and tumor-node-metastasis (TNM) cancer staging systems are typically used by clinicians in the treatment planning of renal cell carcinoma (RCC), a common cancer in men and women worldwide. Pathologists typically use percutaneous renal biopsy for RCC grading, while staging is performed by volumetric medical image analysis before renal surgery. Recent studies suggest that clinicians can effectively perform these classification tasks non-invasively by analyzing image texture features of RCC from computed tomography (CT) data. However, image feature identification for RCC grading and staging often relies on laborious manual processes, which are error-prone and time-consuming. To address this challenge, this paper proposes a learnable image histogram within a deep neural network framework that can learn task-specific image histograms with variable bin centers and widths. The proposed approach enables learning statistical context features from raw medical data, which a conventional convolutional neural network (CNN) cannot do. The linear basis function of our learnable image histogram is piecewise differentiable, enabling errors to be back-propagated to update the variable bin centers and widths during training. This novel approach can segregate the CT textures of an RCC into different intensity spectra, which enables efficient Fuhrman low (I/II) versus high (III/IV) grading as well as RCC low (I/II) versus high (III/IV) staging. The proposed method is validated on a clinical CT dataset of 159 patients from The Cancer Imaging Archive (TCIA) database and demonstrates 80% and 83% accuracy in RCC grading and staging, respectively.
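An illustrative PyTorch sketch of a learnable histogram layer with trainable bin centres and widths and a piecewise-linear (triangular) basis, as the abstract describes; the bin count, initialization, and input shape are assumptions.

import torch
import torch.nn as nn

class LearnableHistogram(nn.Module):
    def __init__(self, n_bins=16):
        super().__init__()
        self.centers = nn.Parameter(torch.linspace(-1.0, 1.0, n_bins))
        self.widths = nn.Parameter(torch.full((n_bins,), 2.0 / n_bins))

    def forward(self, x):                      # x: (batch, n_voxels) intensities
        d = (x.unsqueeze(-1) - self.centers).abs()          # distance to each centre
        votes = torch.relu(1.0 - d / self.widths.abs().clamp(min=1e-4))
        return votes.mean(dim=1)               # (batch, n_bins) soft histogram

# Usage sketch: hist = LearnableHistogram(64)(ct_patch.flatten(1)), then feed
# the histogram features to a small classifier head for grading/staging.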


Subjects
Carcinoma, Renal Cell; Kidney Neoplasms; Carcinoma, Renal Cell/diagnostic imaging; Female; Humans; Kidney; Kidney Neoplasms/diagnostic imaging; Male; Neoplasm Grading; Tomography, X-Ray Computed
4.
Int J Comput Assist Radiol Surg; 16(7): 1121-1129, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33966168

ABSTRACT

PURPOSE: Estimating uncertainty in predictions made by neural networks is critically important for increasing the trust medical experts place in automatic data analysis results. In segmentation tasks, quantifying levels of confidence can provide meaningful additional information to aid clinical decision making. In recent work, we proposed an interpretable uncertainty measure to aid clinicians in assessing the reliability of developmental dysplasia of the hip metrics measured from 3D ultrasound (US) screening scans, as well as of the US scans themselves. In this work, we propose a technique to quantify confidence in the associated segmentation process by incorporating voxel-wise uncertainty into the binary loss function used during training, which encourages the network to concentrate its training effort on its least certain predictions. METHODS: We propose a Bayesian technique to quantify 3D segmentation uncertainty by modifying the loss function within an encoder-decoder voxel-labeling deep network. By appending a voxel-wise uncertainty measure, our modified loss helps the network improve prediction uncertainty for voxels that are harder to learn. We validate our approach by training a Bayesian 3D U-Net with the proposed modified loss function on a dataset of 92 clinical 3D US neonate scans and testing on a separate held-out set of 24 patients. RESULTS: Quantitatively, we show that the Dice score of ilium and acetabulum segmentation improves by 5% when training with our proposed voxel-wise uncertainty loss compared with training with standard cross-entropy loss. Qualitatively, we further demonstrate how our modified loss function yields a meaningful reduction of voxel-wise segmentation uncertainty estimates, with the network making more confident, accurate predictions. CONCLUSION: We proposed a Bayesian technique to encode voxel-wise segmentation uncertainty information into deep neural network optimization, and demonstrated how it can be leveraged to produce meaningful confidence measures that improve the model's predictive performance.
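A hedged sketch of the voxel-wise uncertainty-weighted loss idea, assuming PyTorch. Uncertainty here comes from Monte Carlo dropout, and the additive weighting form is an assumption; the paper's exact formulation is not given in the abstract.

import torch
import torch.nn.functional as F

def mc_dropout_uncertainty(model, x, n_samples=8):
    # Keep dropout active and average several stochastic forward passes;
    # the per-voxel variance serves as the uncertainty estimate.
    model.train()
    probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return probs.mean(0), probs.var(0)

def uncertainty_weighted_bce(logits, target, uncertainty, alpha=1.0):
    # Standard binary cross-entropy, up-weighted on uncertain voxels so the
    # network focuses training effort on its least certain predictions.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    weights = 1.0 + alpha * uncertainty.detach()
    return (weights * bce).mean()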


Subjects
Bayes Theorem; Diagnostic Imaging/methods; Hip Dislocation, Congenital/diagnosis; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Child; Humans; Reproducibility of Results; Uncertainty
5.
IEEE Trans Med Imaging; 40(6): 1555-1567, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33606626

ABSTRACT

Kidney volume is an essential biomarker for a number of kidney disease diagnoses, for example, chronic kidney disease. Existing total kidney volume estimation methods often rely on an intermediate kidney segmentation step. In addition, automatic kidney localization in volumetric medical images is a critical step that often precedes subsequent data processing and analysis. Most current approaches perform kidney localization via an intermediate classification or regression step. This paper proposes an integrated deep learning approach for (i) kidney localization in computed tomography scans and (ii) segmentation-free renal volume estimation. Our localization method uses a selection-convolutional neural network that approximates the kidney inferior-superior span along the axial direction. Cross-sectional (2D) slices from the estimated span are subsequently used in a combined sagittal-axial Mask-RCNN that detects organ bounding boxes on the axial and sagittal slices, the combination of which produces a final 3D organ bounding box. Furthermore, we use a fully convolutional network that estimates the kidney volume directly, skipping the segmentation procedure. We also present a mathematical expression to approximate the 'volume error' metric from the 'Sørensen-Dice coefficient.' To validate our method, we accessed 100 patients' CT scans from the Vancouver General Hospital records and obtained 210 patients' CT scans from the 2019 Kidney Tumor Segmentation Challenge database. Our method produces a kidney boundary wall localization error of ~2.4 mm and a mean volume estimation error of ~5%.
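For reference, the two evaluation quantities mentioned can be computed from binary kidney masks as below (NumPy). The paper's closed-form approximation relating volume error to the Sørensen-Dice coefficient is not reproduced here.

import numpy as np

def dice(pred, gt):
    # Sørensen-Dice coefficient between two binary masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def volume_error(pred, gt, voxel_volume_mm3=1.0):
    # Fractional volume error from voxel counts (the ~5% figure reported above).
    v_pred = pred.sum() * voxel_volume_mm3
    v_gt = gt.sum() * voxel_volume_mm3
    return abs(v_pred - v_gt) / v_gt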


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Cross-Sectional Studies; Humans; Kidney/diagnostic imaging; Tomography, X-Ray Computed
6.
Ultrasound Med Biol; 47(9): 2713-2722, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34238616

ABSTRACT

Developmental dysplasia of the hip (DDH) metrics based on 3-D ultrasound have proven more reliable than those based on 2-D images, but to date have been based mainly on hand-engineered features. Here, we test the performance of 3-D convolutional neural networks for automatically segmenting and delineating the key anatomical structures used to define DDH metrics: the pelvis bone surface and the femoral head. Our models are trained and tested on a data set of 136 volumes from 34 participants. For the pelvis, a 3D-U-Net achieves a Dice score of 85%, outperforming the confidence-weighted structured phase symmetry algorithm (Dice score = 19%). For the femoral head, the 3D-U-Net yields centre and radius errors of 1.42 mm and 0.46 mm, respectively, outperforming the random forest classifier (3.90 and 2.01 mm). The improved segmentation may improve DDH measurement accuracy and reliability, which could reduce misdiagnosis.


Subjects
Hip Dislocation; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer; Reproducibility of Results; Ultrasonography
7.
Ultrasound Med Biol; 47(1): 139-153, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33239155

ABSTRACT

Developmental dysplasia of the hip is a hip abnormality that ranges from mild acetabular dysplasia to irreducible femoral head dislocations. While 2-D B-mode ultrasound (US)-based dysplasia metrics are currently used clinically to diagnose developmental dysplasia of the hip, such estimates suffer from high inter-exam variability. In this work, we propose and evaluate automatically computed 3-D US-derived dysplasia metrics and demonstrate that they are considerably more reproducible. The key features of our automatic method are (i) a random forest-based learning technique to remove regions along the coronal axis that do not contain the bone structures necessary for dysplasia-metric extraction, thereby reducing outliers; (ii) a bone segmentation method that uses rotation-invariant and intensity-invariant filters, thus remaining robust to signal dropout and varying bone morphology; (iii) a novel slice-based learning and 3-D reconstruction strategy to estimate a probability map of the hypoechoic femoral head in the US volume; and (iv) formulae for calculating the 3-D US-derived dysplasia metrics. We validate our proposed method on real clinical data acquired from 40 infant hip examinations. Results show a considerable (around 70%) reduction in variability for two key 3-D US-derived dysplasia metrics compared with their 2-D counterparts.
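An illustrative sketch of step (i), screening slices with a random forest, using scikit-learn. The hand-crafted slice features and decision rule are placeholders, not the features used in the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def slice_features(us_slice):
    # Simple intensity statistics as placeholder features for one US slice.
    return np.array([us_slice.mean(), us_slice.std(),
                     np.percentile(us_slice, 95), (us_slice > 0.5).mean()])

def fit_slice_filter(slices, labels):          # labels: 1 = keep, 0 = reject
    X = np.stack([slice_features(s) for s in slices])
    return RandomForestClassifier(n_estimators=100).fit(X, labels)

def keep_slices(clf, volume):                  # volume: (n_slices, H, W)
    # Retain only slices predicted to contain the required bone structures.
    keep = clf.predict(np.stack([slice_features(s) for s in volume]))
    return volume[keep.astype(bool)]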


Subjects
Benchmarking; Hip Dislocation, Congenital/diagnostic imaging; Imaging, Three-Dimensional; Humans; Infant; Reproducibility of Results; Ultrasonography/methods
8.
Ultrasound Med Biol; 46(4): 921-935, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31982208

ABSTRACT

Ultrasound bone segmentation is an important yet challenging task for many clinical applications. Numerous works have attempted to improve and automate bone segmentation, leading to a variety of computational techniques, validation practices and applied clinical scenarios. We characterize this exciting and growing body of research by reviewing published ultrasound bone segmentation techniques. We review 56 articles in detail, categorizing and discussing the image analysis techniques that have been used for bone segmentation. We highlight the general trends of this field in terms of clinical motivation, image analysis techniques, ultrasound modalities and the types of validation practices used to quantify segmentation performance. Finally, we present an outlook on promising areas of research based on the needs that remain unaddressed in ultrasound bone segmentation.


Subjects
Bone and Bones/diagnostic imaging; Ultrasonography/methods; Deep Learning; Humans; Image Processing, Computer-Assisted/methods
9.
Brain Connect; 2018 Nov 30.
Article in English | MEDLINE | ID: mdl-30499336

ABSTRACT

Brain parcellation is often a prerequisite for network analysis due to the statistical challenges, computational burdens, and interpretation difficulties arising from the high dimensionality of neuroimaging data. Predominant approaches are largely unimodal, with functional magnetic resonance imaging (fMRI) being the primary modality used; these approaches thus neglect other brain attributes that relate to brain organization. In this paper, we propose an approach for integrating fMRI and diffusion MRI (dMRI) data. Our approach introduces a nonlinear mapping between the connectivity values of the two modalities and adaptively balances their weighting based on their voxel-wise test-retest reliability. An efficient region-level extension that additionally incorporates structural information on gyri and sulci is also presented. For validation, we compare multimodal parcellations with unimodal parcellations and existing atlases on Human Connectome Project data. We show that multimodal parcellations achieve higher reproducibility, comparable or higher functional homogeneity, and comparable or higher left-out data likelihood. The boundaries of multimodal parcels align with those based on cytoarchitecture, and subnetworks extracted from multimodal parcels match well with established brain systems. Our results thus show that multimodal information improves brain parcellation.
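A rough sketch of reliability-weighted fusion of the two connectivity modalities followed by clustering into parcels, assuming NumPy, SciPy and scikit-learn. The rank transform stands in for the paper's nonlinear mapping, and the weighting and clustering choices are assumptions.

import numpy as np
from scipy.stats import rankdata
from sklearn.cluster import SpectralClustering

def fuse_connectivity(fc, sc, rel_fc, rel_sc):
    # fc, sc: (n_voxels, n_voxels) functional / structural connectivity
    # rel_fc, rel_sc: (n_voxels,) voxel-wise test-retest reliability in [0, 1]
    fc_r = rankdata(fc, axis=1) / fc.shape[1]      # map both modalities onto a
    sc_r = rankdata(sc, axis=1) / sc.shape[1]      # comparable (rank) scale
    w = rel_fc / (rel_fc + rel_sc + 1e-8)          # per-voxel modality weight
    return w[:, None] * fc_r + (1 - w)[:, None] * sc_r

def parcellate(fused, n_parcels=100):
    sim = 0.5 * (fused + fused.T)                  # symmetrise for clustering
    return SpectralClustering(n_clusters=n_parcels,
                              affinity="precomputed").fit_predict(sim)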
