Results 1 - 7 of 7
2.
Ultrasound Med Biol ; 47(9): 2713-2722, 2021 09.
Article in English | MEDLINE | ID: mdl-34238616

ABSTRACT

Developmental dysplasia of the hip (DDH) metrics based on 3-D ultrasound have proven more reliable than those based on 2-D images, but to date have been based mainly on hand-engineered features. Here, we test the performance of 3-D convolutional neural networks for automatically segmenting and delineating the key anatomical structures used to define DDH metrics: the pelvis bone surface and the femoral head. Our models are trained and tested on a data set of 136 volumes from 34 participants. For the pelvis, a 3D-U-Net achieves a Dice score of 85%, outperforming the confidence-weighted structured phase symmetry algorithm (Dice score = 19%). For the femoral head, the 3D-U-Net achieves centre and radius errors of 1.42 and 0.46 mm, respectively, outperforming the random forest classifier (3.90 and 2.01 mm). The improved segmentation may improve DDH measurement accuracy and reliability, which could reduce misdiagnosis.
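The Dice scores reported above compare a predicted binary segmentation against ground truth. A minimal NumPy sketch of the metric (toy volumes only; shapes and values are illustrative, not from the study):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sørensen-Dice coefficient between two binary volumes."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both volumes empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy 3-D volumes: two partially overlapping cubes
a = np.zeros((10, 10, 10), dtype=bool)
b = np.zeros((10, 10, 10), dtype=bool)
a[2:8, 2:8, 2:8] = True     # 6x6x6 = 216 voxels
b[4:10, 4:10, 4:10] = True  # 6x6x6 = 216 voxels, shifted
print(round(dice_score(a, b), 3))  # overlap is 4x4x4 = 64 voxels
```

Dice rewards overlap relative to the combined foreground size, which is why a method that misses thin bone surfaces (like the 19%-scoring baseline above) is penalized so heavily.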


Subjects
Hip Dislocation; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer; Reproducibility of Results; Ultrasonography
3.
Int J Comput Assist Radiol Surg ; 16(7): 1121-1129, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33966168

ABSTRACT

PURPOSE: Estimating uncertainty in predictions made by neural networks is critically important for increasing the trust medical experts have in automatic data analysis results. In segmentation tasks, quantifying levels of confidence can provide meaningful additional information to aid clinical decision making. In recent work, we proposed an interpretable uncertainty measure to aid clinicians in assessing the reliability of developmental dysplasia of the hip metrics measured from 3D ultrasound screening scans, as well as that of the US scan itself. In this work, we propose a technique to quantify confidence in the associated segmentation process that incorporates voxel-wise uncertainty into the binary loss function used in the training regime, which encourages the network to concentrate its training effort on its least certain predictions. METHODS: We propose using a Bayesian-based technique to quantify 3D segmentation uncertainty by modifying the loss function within an encoder-decoder type voxel labeling deep network. By appending a voxel-wise uncertainty measure, our modified loss helps the network improve prediction uncertainty for voxels that are harder to train. We validate our approach by training a Bayesian 3D U-Net with the proposed modified loss function on a dataset comprising 92 clinical 3D US neonate scans and test on a separate hold-out dataset of 24 patients. RESULTS: Quantitatively, we show that the Dice score of ilium and acetabulum segmentation improves by 5% when trained with our proposed voxel-wise uncertainty loss compared to training with standard cross-entropy loss. Qualitatively, we further demonstrate how our modified loss function results in meaningful reduction of voxel-wise segmentation uncertainty estimates, with the network making more confident accurate predictions. 
CONCLUSION: We proposed a Bayesian technique to encode voxel-wise segmentation uncertainty information into deep neural network optimization, and demonstrated how it can be leveraged into meaningful confidence measures to improve the model's predictive performance.
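The abstract does not give the exact loss formulation; a generic sketch of the idea it describes, weighting per-voxel binary cross-entropy by a per-voxel uncertainty estimate (here predictive entropy), looks like this. The weighting scheme and constants are assumptions for illustration, not the paper's formula:

```python
import numpy as np

def voxelwise_uncertainty_loss(probs, targets, eps=1e-7):
    """Binary cross-entropy weighted by per-voxel predictive entropy.

    probs:   predicted foreground probabilities in (0, 1)
    targets: binary ground-truth labels
    Voxels the model is uncertain about (entropy near 1 bit) receive a
    larger weight, concentrating training effort on them. Illustrative
    sketch only, not the paper's exact formulation.
    """
    probs = np.clip(probs, eps, 1 - eps)
    bce = -(targets * np.log(probs) + (1 - targets) * np.log(1 - probs))
    entropy = -(probs * np.log2(probs) + (1 - probs) * np.log2(1 - probs))
    weights = 1.0 + entropy  # uncertain voxels count up to twice as much
    return float(np.mean(weights * bce))

targets = np.array([1.0, 0.0, 1.0])
confident = np.array([0.95, 0.05, 0.95])
uncertain = np.array([0.55, 0.45, 0.55])
print(voxelwise_uncertainty_loss(confident, targets) <
      voxelwise_uncertainty_loss(uncertain, targets))  # True
```

In a Bayesian network such as the MC-dropout 3D U-Net described above, the per-voxel probabilities would come from averaging multiple stochastic forward passes rather than a single prediction.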


Subjects
Bayes Theorem; Diagnostic Imaging/methods; Hip Dislocation, Congenital/diagnosis; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Child; Humans; Reproducibility of Results; Uncertainty
4.
Comput Med Imaging Graph ; 90: 101924, 2021 06.
Article in English | MEDLINE | ID: mdl-33895621

ABSTRACT

Fuhrman cancer grading and tumor-node-metastasis (TNM) cancer staging systems are typically used by clinicians in the treatment planning of renal cell carcinoma (RCC), a common cancer in men and women worldwide. Pathologists typically use percutaneous renal biopsy for RCC grading, while staging is performed by volumetric medical image analysis before renal surgery. Recent studies suggest that clinicians can effectively perform these classification tasks non-invasively by analyzing image texture features of RCC from computed tomography (CT) data. However, image feature identification for RCC grading and staging often relies on laborious manual processes, which are error-prone and time-intensive. To address this challenge, this paper proposes a learnable image histogram in the deep neural network framework that can learn task-specific image histograms with variable bin centers and widths. The proposed approach enables learning statistical context features from raw medical data, which cannot be performed by a conventional convolutional neural network (CNN). The linear basis function of our learnable image histogram is piece-wise differentiable, enabling back-propagating errors to update the variable bin centers and widths during training. This novel approach can segregate the CT textures of an RCC in different intensity spectra, which enables efficient Fuhrman low (I/II) and high (III/IV) grading as well as RCC low (I/II) and high (III/IV) staging. The proposed method is validated on a clinical CT dataset of 159 patients from The Cancer Imaging Archive (TCIA) database, and it demonstrates 80% and 83% accuracy in RCC grading and staging, respectively.
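A histogram becomes learnable when each hard bin is replaced by a piecewise-linear basis function, so counts vary smoothly with the bin parameters and gradients reach the bin centers and widths. A NumPy sketch of one plausible forward pass (a triangular basis is assumed here for illustration; the bin centers, widths, and normalization are not taken from the paper):

```python
import numpy as np

def soft_histogram(x, centers, widths):
    """Differentiable histogram with triangular (piecewise-linear) bins.

    Each intensity x contributes max(0, 1 - |x - c| / w) to the bin with
    center c and width w, so the mapping is piecewise differentiable in
    both c and w and the bins can be updated by back-propagation.
    """
    x = np.asarray(x, dtype=float).reshape(-1, 1)  # (N, 1)
    c = np.asarray(centers, dtype=float).reshape(1, -1)  # (1, B)
    w = np.asarray(widths, dtype=float).reshape(1, -1)   # (1, B)
    weights = np.maximum(0.0, 1.0 - np.abs(x - c) / w)   # (N, B)
    return weights.sum(axis=0) / x.shape[0]  # normalized soft counts

intensities = np.array([0.1, 0.2, 0.5, 0.9])
hist = soft_histogram(intensities, centers=[0.0, 0.5, 1.0], widths=[0.5, 0.5, 0.5])
print(hist.shape)  # (3,)
```

In a deep framework the same forward pass would be written with tensor ops so that autograd updates `centers` and `widths` during training, which is what lets the network discover task-specific intensity spectra.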


Subjects
Carcinoma, Renal Cell; Kidney Neoplasms; Carcinoma, Renal Cell/diagnostic imaging; Female; Humans; Kidney; Kidney Neoplasms/diagnostic imaging; Male; Neoplasm Grading; Tomography, X-Ray Computed
5.
IEEE Trans Med Imaging ; 40(6): 1555-1567, 2021 06.
Article in English | MEDLINE | ID: mdl-33606626

ABSTRACT

Kidney volume is an essential biomarker for a number of kidney disease diagnoses, for example, chronic kidney disease. Existing total kidney volume estimation methods often rely on an intermediate kidney segmentation step. On the other hand, automatic kidney localization in volumetric medical images is a critical step that often precedes subsequent data processing and analysis. Most current approaches perform kidney localization via an intermediate classification or regression step. This paper proposes an integrated deep learning approach for (i) kidney localization in computed tomography scans and (ii) segmentation-free renal volume estimation. Our localization method uses a selection-convolutional neural network that approximates the kidney inferior-superior span along the axial direction. Cross-sectional (2D) slices from the estimated span are subsequently used in a combined sagittal-axial Mask-RCNN that detects the organ bounding boxes on the axial and sagittal slices, the combination of which produces a final 3D organ bounding box. Furthermore, we use a fully convolutional network to estimate the kidney volume that skips the segmentation procedure. We also present a mathematical expression to approximate the 'volume error' metric from the 'Sørensen-Dice coefficient.' We accessed 100 patients' CT scans from the Vancouver General Hospital records and obtained 210 patients' CT scans from the 2019 Kidney Tumor Segmentation Challenge database to validate our method. Our method produces a kidney boundary wall localization error of ~2.4 mm and a mean volume estimation error of ~5%.
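The abstract does not state the authors' expression linking volume error to the Dice coefficient, but any such expression must respect one elementary relation: since the intersection of two volumes is at most the smaller of the two, the symmetric relative volume difference is bounded by 1 minus the Dice score. A small NumPy check of that bound on random toy volumes (the data and the bound are illustrative, not the paper's derivation):

```python
import numpy as np

def dice(a, b):
    """Sørensen-Dice coefficient of two binary volumes."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def relative_volume_difference(a, b):
    """Symmetric relative volume difference: | |A| - |B| | / (|A| + |B|)."""
    return abs(int(a.sum()) - int(b.sum())) / (a.sum() + b.sum())

# Since |A ∩ B| <= min(|A|, |B|), it follows that
# relative_volume_difference(a, b) <= 1 - dice(a, b).
rng = np.random.default_rng(0)
a = rng.random((16, 16, 16)) > 0.5
b = rng.random((16, 16, 16)) > 0.4
print(relative_volume_difference(a, b) <= 1 - dice(a, b))  # True
```

The bound explains why a high Dice score guarantees a small volume error, while the converse does not hold: two volumes of identical size can still be badly misaligned.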


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Cross-Sectional Studies; Humans; Kidney/diagnostic imaging; Tomography, X-Ray Computed
6.
Ultrasound Med Biol ; 47(1): 139-153, 2021 01.
Article in English | MEDLINE | ID: mdl-33239155

ABSTRACT

Developmental dysplasia of the hip is a hip abnormality that ranges from mild acetabular dysplasia to irreducible femoral head dislocations. While 2-D B-mode ultrasound (US)-based dysplasia metrics are currently used clinically to diagnose developmental dysplasia of the hip, such estimates suffer from high inter-exam variability. In this work, we propose and evaluate 3-D US-derived dysplasia metrics that are automatically computed and demonstrate that these automatically derived dysplasia metrics are considerably more reproducible. The key features of our automatic method are (i) a random forest-based learning technique to remove regions across the coronal axis that do not contain bone structures necessary for dysplasia-metric extraction, thereby reducing outliers; (ii) a bone segmentation method that uses rotation-invariant and intensity-invariant filters, thus remaining robust to signal dropout and varying bone morphology; (iii) a novel slice-based learning and 3-D reconstruction strategy to estimate a probability map of the hypoechoic femoral head in the US volume; and (iv) formulae for calculating the 3-D US-derived dysplasia metrics. We validate our proposed method on real clinical data acquired from 40 infant hip examinations. Results show a considerable (around 70%) reduction in variability in two key 3-D US-derived dysplasia metrics compared with their 2-D counterparts.
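Step (i) of the pipeline above, the random-forest filter that discards slices lacking the required bone structures, can be sketched as a per-slice binary classifier. The features (per-slice mean and standard deviation) and the synthetic labels here are stand-ins for illustration, not the features used in the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def slice_features(volume):
    """Per-slice mean and standard deviation along the first (coronal) axis."""
    return np.stack([volume.mean(axis=(1, 2)), volume.std(axis=(1, 2))], axis=1)

rng = np.random.default_rng(42)

# Toy training data: brighter slices stand in for slices containing
# the bone structures needed for metric extraction (label 1).
train = rng.random((50, 64, 64))
train[:25] += 1.0  # first half: brighter "bone-bearing" slices
labels = np.array([1] * 25 + [0] * 25)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(slice_features(train), labels)

# New volume: keep the slices predicted usable, discard the rest.
test = rng.random((10, 64, 64))
test[:5] += 1.0
keep = clf.predict(slice_features(test))  # 1 = keep slice, 0 = discard
print(keep)
```

Filtering uninformative slices before segmentation is what reduces the outliers mentioned above: downstream metric extraction never sees regions where the bone landmarks are absent.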


Subjects
Benchmarking; Hip Dislocation, Congenital/diagnostic imaging; Imaging, Three-Dimensional; Humans; Infant; Reproducibility of Results; Ultrasonography/methods
7.
Ultrasound Med Biol ; 46(4): 921-935, 2020 04.
Article in English | MEDLINE | ID: mdl-31982208

ABSTRACT

Ultrasound bone segmentation is an important yet challenging task for many clinical applications. Several works have emerged attempting to improve and automate bone segmentation, which has led to a variety of computational techniques, validation practices and applied clinical scenarios. We characterize this exciting and growing body of research by reviewing published ultrasound bone segmentation techniques. We review 56 articles in detail and categorize and discuss the image analysis techniques that have been used for bone segmentation. We highlight the general trends of this field in terms of clinical motivation, image analysis techniques, ultrasound modalities and the types of validation practices used to quantify segmentation performance. Finally, we present an outlook on promising areas of research based on needs that remain unaddressed in ultrasound bone segmentation.


Subjects
Bone and Bones/diagnostic imaging; Ultrasonography/methods; Deep Learning; Humans; Image Processing, Computer-Assisted/methods