Results 1 - 20 of 29

1.
Sensors (Basel) ; 21(20)2021 Oct 09.
Article in English | MEDLINE | ID: mdl-34695931

ABSTRACT

Quantification of renal perfusion based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) requires determination of signal intensity time courses in the region of renal parenchyma. Selection of the voxels representing the kidney must therefore be accomplished with special care and constitutes one of the major technical limitations that hampers wider adoption of this technique in standard clinical routine. Manual segmentation of renal compartments, even when performed by experts, is a common source of decreased repeatability and reproducibility. In this paper, we present a processing framework for automatic kidney segmentation in DCE-MR images. The framework consists of two stages. First, kidney masks are generated using a convolutional neural network. Then, mask voxels are classified into one of three regions (cortex, medulla, and pelvis) based on the DCE-MRI signal intensity time courses. The proposed approach was evaluated on a cohort of 10 healthy volunteers who underwent DCE-MRI examination. MRI scanning was repeated at two time points within a 10-day interval. For the semantic segmentation task we employed a classic U-Net architecture, whereas experiments on voxel classification were performed using three alternative algorithms (support vector machines, logistic regression, and extreme gradient boosting trees), among which the SVM produced the most accurate results. Both the segmentation and classification steps were accomplished by a series of models, each trained separately for a given subject using data from the other participants only. The mean accuracy of whole-kidney segmentation was 94% in terms of the IoU coefficient. Cortex, medulla, and pelvis were segmented with IoU ranging from 90% to 93%, depending on the tissue and body side. The results were also validated by comparing image-derived perfusion parameters with ground-truth measurements of glomerular filtration rate (GFR). The repeatability of GFR calculation, as assessed by the coefficient of variation, was 14.5% and 17.5% for the left and right kidney, respectively, and improved relative to manual segmentation. Reproducibility, in turn, was evaluated by measuring agreement between image-derived and iohexol-based GFR values. The estimated absolute mean differences were 9.4 and 12.9 mL/min/1.73 m2 for scanning sessions 1 and 2 with the proposed automated segmentation method. The result for session 2 was comparable with manual segmentation, whereas for session 1 reproducibility in the automatic pipeline was weaker.
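
The voxel classification step described above lends itself to a compact illustration. The sketch below (not the authors' code) shows how voxels inside a kidney mask could be assigned to cortex, medulla, or pelvis from their DCE-MRI signal-intensity time courses using a scikit-learn SVM; the array names, shapes, and label encoding are assumptions made for the example.

```python
# Sketch only: classify kidney-mask voxels by their DCE-MRI time courses.
# Assumed inputs (hypothetical names/shapes, not from the paper):
#   dce    -- 4D array (T, Z, Y, X) of signal intensities over T time points
#   mask   -- 3D boolean kidney mask from the CNN stage (Z, Y, X)
#   labels -- per-voxel training labels: 0 = cortex, 1 = medulla, 2 = pelvis
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def time_courses(dce: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Return one feature row per masked voxel: its intensity time course."""
    return dce[:, mask].T                      # shape: (n_voxels, T)

def fit_region_classifier(train_dce, train_mask, train_labels):
    X = time_courses(train_dce, train_mask)
    y = train_labels[train_mask]
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return clf.fit(X, y)

def classify_regions(clf, dce, mask):
    out = np.full(mask.shape, -1, dtype=int)   # -1 = outside the kidney mask
    out[mask] = clf.predict(time_courses(dce, mask))
    return out
```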


Subject(s)
Contrast Media; Magnetic Resonance Imaging; Humans; Image Processing, Computer-Assisted; Kidney/diagnostic imaging; Neural Networks, Computer; Reproducibility of Results
2.
Technol Health Care ; 32(5): 3279-3292, 2024.
Article in English | MEDLINE | ID: mdl-38875055

ABSTRACT

BACKGROUND: The incidence of kidney tumors is increasing progressively each year. Precise segmentation of kidney tumors is crucial for diagnosis and treatment. OBJECTIVE: To enhance accuracy and reduce manual involvement, we propose a deep learning-based method for the automatic segmentation of kidneys and kidney tumors in CT images. METHODS: The proposed method comprises two parts: object detection and segmentation. We first use a model to detect the position of the kidney, then narrow the segmentation range, and finally use an attentional recurrent residual convolutional network for segmentation. RESULTS: Our model achieved a kidney Dice score of 0.951 and a tumor Dice score of 0.895 on the KiTS19 dataset. Experimental results show that our model significantly improves the accuracy of kidney and kidney tumor segmentation and outperforms other advanced methods. CONCLUSION: The proposed method provides an efficient and automatic solution for accurately segmenting kidneys and renal tumors in CT images. Additionally, this study can assist radiologists in assessing patients' conditions and making informed treatment decisions.


Subject(s)
Deep Learning; Kidney Neoplasms; Tomography, X-Ray Computed; Humans; Kidney Neoplasms/diagnostic imaging; Kidney Neoplasms/pathology; Tomography, X-Ray Computed/methods; Kidney/diagnostic imaging; Algorithms; Neural Networks, Computer; Image Processing, Computer-Assisted/methods
3.
Comput Med Imaging Graph ; 113: 102349, 2024 04.
Article in English | MEDLINE | ID: mdl-38330635

ABSTRACT

Autosomal-dominant polycystic kidney disease is a prevalent genetic disorder characterized by the development of renal cysts, leading to kidney enlargement and renal failure. Accurate measurement of total kidney volume through polycystic kidney segmentation is crucial to assess disease severity, predict progression, and evaluate treatment effects. Traditional manual segmentation suffers from intra- and inter-expert variability, prompting the exploration of automated approaches. In recent years, convolutional neural networks have been employed for polycystic kidney segmentation from magnetic resonance images. However, the use of Transformer-based models, which have shown remarkable performance in a wide range of computer vision and medical image analysis tasks, remains unexplored in this area. With their self-attention mechanism, Transformers excel at capturing global context information, which is crucial for accurate organ delineation. In this paper, we evaluate and compare various convolutional, Transformer-based, and hybrid convolutional/Transformer networks for polycystic kidney segmentation. Additionally, we propose a dual-task learning scheme, where a common feature extractor is followed by per-kidney decoders, towards better generalizability and efficiency. We extensively evaluate various architectures and learning schemes on a heterogeneous magnetic resonance imaging dataset collected from 112 patients with polycystic kidney disease. Our results highlight the effectiveness of Transformer-based models for polycystic kidney segmentation and the relevance of exploiting dual-task learning to improve segmentation accuracy and mitigate data scarcity issues. A promising ability to accurately delineate polycystic kidneys is shown especially in the presence of heterogeneous cyst distributions and adjacent cyst-containing organs. This work contributes to the advancement of reliable delineation methods in nephrology, paving the way for a broad spectrum of clinical applications.
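
As a rough illustration of the dual-task layout mentioned above, the following PyTorch sketch wires one shared feature extractor to two per-kidney decoder heads; the toy encoder/decoder modules and names are assumptions, not the paper's architecture.

```python
# Illustrative PyTorch sketch of a dual-task layout: one shared feature
# extractor followed by separate decoders for the left and right kidney.
import torch
import torch.nn as nn

class DualKidneyNet(nn.Module):
    def __init__(self, encoder: nn.Module, make_decoder):
        super().__init__()
        self.encoder = encoder            # shared feature extractor
        self.dec_left = make_decoder()    # per-kidney decoder heads
        self.dec_right = make_decoder()

    def forward(self, x):
        feats = self.encoder(x)
        return self.dec_left(feats), self.dec_right(feats)

# Toy components so the sketch runs end to end.
encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
make_decoder = lambda: nn.Conv2d(16, 1, 1)     # 1-channel kidney mask logits
model = DualKidneyNet(encoder, make_decoder)

left_logits, right_logits = model(torch.randn(2, 1, 128, 128))
# A joint loss would sum the per-kidney segmentation losses against their own masks.
```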


Subject(s)
Cysts; Polycystic Kidney Diseases; Polycystic Kidney, Autosomal Dominant; Humans; Kidney/diagnostic imaging; Polycystic Kidney, Autosomal Dominant/diagnostic imaging; Polycystic Kidney, Autosomal Dominant/pathology; Polycystic Kidney Diseases/pathology; Magnetic Resonance Imaging/methods; Cysts/pathology
4.
Ir J Med Sci ; 192(3): 1401-1409, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35930139

ABSTRACT

BACKGROUND AND PURPOSE: Precise segmentation of the kidneys in computed tomography (CT) images is vital in urology for diagnosis, treatment, and surgical planning. Segmentation assists medical experts by providing information about kidney malformations in terms of shape and size. Manual segmentation is slow, tedious, and not reproducible. An automatic computer-aided system is a solution to this problem. This paper presents an automated kidney segmentation technique based on active contours and deep learning. MATERIALS AND METHODS: In this work, 210 CTs from the KiTS19 repository were used. The dataset was divided into a training set (168 CTs), a test set (21 CTs), and a validation set (21 CTs). The proposed technique broadly comprises four phases: (1) extraction of kidney regions using active contours, (2) preprocessing, (3) kidney segmentation using a 3D U-Net, and (4) reconstruction of the segmented CT images. RESULTS: The proposed segmentation method achieved a Dice score of 97.62%, a Jaccard index of 95.74%, an average sensitivity of 98.28%, a specificity of 99.95%, and an accuracy of 99.93% on the validation dataset. CONCLUSION: The proposed method can efficiently solve the problem of tumorous kidney segmentation in CT images by using active contours and deep learning. The active contour was used to select kidney regions, and a 3D U-Net was used to precisely segment the tumorous kidney.
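
The first phase, extracting kidney regions with an active contour before the 3D U-Net, can be approximated as follows with scikit-image's morphological Chan-Vese implementation; this is a stand-in sketch under assumed preprocessing, not the authors' implementation.

```python
# Sketch of the ROI-extraction idea using a morphological active contour
# (scikit-image's Chan-Vese variant).
import numpy as np
from skimage.segmentation import morphological_chan_vese

def kidney_roi_bbox(ct_slice: np.ndarray, iterations: int = 100):
    """Evolve an active contour on a 2D CT slice and return the bounding
    box of the resulting foreground as a coarse kidney ROI."""
    img = (ct_slice - ct_slice.min()) / (np.ptp(ct_slice) + 1e-8)
    level_set = morphological_chan_vese(img, iterations, smoothing=2)
    ys, xs = np.nonzero(level_set)
    if ys.size == 0:
        return None
    return ys.min(), ys.max(), xs.min(), xs.max()   # (y0, y1, x0, x1)

# The cropped ROI would then be preprocessed and passed to the 3D U-Net.
```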


Subject(s)
Neoplasms; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Abdomen; Kidney/diagnostic imaging; Imaging, Three-Dimensional/methods
5.
Med Biol Eng Comput ; 61(1): 285-295, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36414816

ABSTRACT

Renal scintigraphy is one of the techniques that provides unique and reliable information in medicine, and segmentation of the kidneys is a key step in quantitative renal scintigraphy. Here, an automatic segmentation framework is proposed for computer-aided renal scintigraphy procedures. To extract the kidney boundary in dynamic renal scintigraphic images, a multi-step approach was proposed, comprising two key steps: localization and segmentation. First, the ROI of each kidney was estimated automatically using Otsu's thresholding, anatomical constraints, and integral projection. The obtained ROIs were then used as initial contours from which the final kidney contours were created using geometric active contours; at this step, an improved variational level set based on the Mumford-Shah formulation was utilized for segmentation. Thirty data sets acquired with an e.cam gamma camera system (Siemens) were used to assess the proposed method, and its performance was evaluated against manually outlined borders using several measures. The proposed segmentation method was able to extract the kidney boundary in renal scintigraphic images, achieving a sensitivity of 95.15% and a specificity of 95.33%; the area under the curve in the ROC analysis was 0.974. The proposed technique successfully segmented the renal contour in dynamic renal scintigraphy, with correct kidney segmentation across all data sets. In addition, the technique was successful on noisy and low-resolution images and in challenging cases with close interfering activities, such as liver and spleen activities.
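
A minimal sketch of the localization step, assuming a simple 2D frame and a heuristic for separating the left and right kidneys, might look like the following; the thresholds, cut-offs, and split rule are illustrative only, not the paper's exact procedure.

```python
# Minimal sketch of localization: Otsu thresholding followed by integral
# (row/column) projections to estimate each kidney's ROI on a 2D frame.
import numpy as np
from skimage.filters import threshold_otsu

def localize_rois(frame: np.ndarray):
    binary = frame > threshold_otsu(frame)          # Otsu foreground
    col_proj = binary.sum(axis=0)                   # integral projection on x
    row_proj = binary.sum(axis=1)                   # integral projection on y
    xs = np.nonzero(col_proj > 0.2 * col_proj.max())[0]
    ys = np.nonzero(row_proj > 0.2 * row_proj.max())[0]
    if xs.size < 2 or ys.size < 2:
        return None
    # Split the x-range at its widest gap to separate left and right kidneys.
    gaps = np.diff(xs)
    split = xs[int(np.argmax(gaps))]
    left = (ys[0], ys[-1], xs[0], split)
    right = (ys[0], ys[-1], split + 1, xs[-1])
    return left, right   # boxes used to initialize the geometric active contours
```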


Subject(s)
Algorithms; Kidney; Kidney/diagnostic imaging; Abdomen; Liver; Computers; Image Processing, Computer-Assisted/methods
6.
Diagnostics (Basel) ; 13(7)2023 Apr 06.
Article in English | MEDLINE | ID: mdl-37046576

ABSTRACT

When deciding on a kidney tumor's diagnosis and treatment, it is critical to take its morphometry into account. It is challenging to undertake a quantitative analysis of the association between kidney tumor morphology and clinical outcomes due to a paucity of data and the need for time-consuming manual measurement of imaging variables. To address this issue, an autonomous kidney segmentation technique, SegTGAN, is proposed in this paper, based on a conventional generative adversarial network model. Its core framework includes a discriminator network with multi-scale feature extraction and a fully convolutional generator network made up of densely linked blocks. For qualitative and quantitative comparison with SegTGAN, the widely used and related medical image segmentation networks U-Net, FCN, and SegAN are used. The experimental results show that the Dice similarity coefficient (DSC), volumetric overlap error (VOE), accuracy (ACC), and average surface distance (ASD) of SegTGAN on the KiTS19 dataset reach 92.28%, 16.17%, 97.28%, and 0.61 mm, respectively. SegTGAN outperforms all the other networks, which indicates that our proposed model has the potential to improve the accuracy of CT-based kidney segmentation.
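
For reference, the reported metrics have standard definitions that can be computed from binary masks as in the hedged sketch below (generic formulas, not SegTGAN's evaluation code).

```python
# Standard definitions of Dice similarity coefficient (DSC), volumetric
# overlap error (VOE), and average surface distance (ASD) for binary masks.
import numpy as np
from scipy import ndimage

def dsc(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def voe(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 - inter / union                       # 1 - Jaccard index

def asd(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Symmetric average distance (in mm) between the two mask surfaces."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    surf_p = pred & ~ndimage.binary_erosion(pred)    # surface voxels
    surf_g = gt & ~ndimage.binary_erosion(gt)
    d_to_g = ndimage.distance_transform_edt(~surf_g, sampling=spacing)
    d_to_p = ndimage.distance_transform_edt(~surf_p, sampling=spacing)
    return np.concatenate([d_to_g[surf_p], d_to_p[surf_g]]).mean()
```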

7.
Bioengineering (Basel) ; 10(7)2023 Jun 24.
Article in English | MEDLINE | ID: mdl-37508782

ABSTRACT

The dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) technique has taken on a significant and increasing role in diagnostic procedures and treatments for patients who suffer from chronic kidney disease. Careful segmentation of the kidneys from DCE-MRI scans is an essential early step towards the evaluation of kidney function. Recently, deep convolutional neural networks have grown in popularity for medical image segmentation. To this end, in this paper we propose a new, fully automated two-phase approach that integrates convolutional neural networks and level set methods to delineate kidneys in DCE-MRI scans. We first develop two convolutional neural networks that rely on the U-Net structure (UNT) to predict a kidney probability map for DCE-MRI scans. Then, to improve the segmentation performance, the pixel-wise kidney probability map predicted by the deep model is combined with shape prior information in a level set method to guide the contour evolution towards the target kidney. Real DCE-MRI datasets of 45 subjects are used for training, validating, and testing the proposed approach. The evaluation results demonstrate the high performance of the two-phase approach, which achieves a Dice similarity coefficient of 0.95 ± 0.02, an intersection over union of 0.91 ± 0.03, and a 95th-percentile Hausdorff distance of 1.54 ± 1.6. Our intensive experiments confirm the potential and effectiveness of this approach over both the UNT models and numerous recent level set-based methods.

8.
Radiol Artif Intell ; 5(6): e230043, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38074795

ABSTRACT

Purpose: To develop and validate a semisupervised style transfer-assisted deep learning method for automated segmentation of the kidneys using multiphase contrast-enhanced (MCE) MRI acquisitions. Materials and Methods: This retrospective, Health Insurance Portability and Accountability Act-compliant, institutional review board-approved study included 125 patients (mean age, 57.3 years; 67 male, 58 female) with renal masses. Cohort 1 consisted of 102 coronal T2-weighted MRI acquisitions and 27 MCE MRI acquisitions during the corticomedullary phase. Cohort 2 comprised 92 MCE MRI acquisitions (23 acquisitions during each of four phases: precontrast, corticomedullary, early nephrographic, and nephrographic). The kidneys were manually segmented on T2-weighted images. A cycle-consistent generative adversarial network (CycleGAN) was trained to generate anatomically coregistered synthetic corticomedullary-style images using T2-weighted images as input. Synthetic images for the precontrast, early nephrographic, and nephrographic phases were then generated using the synthetic corticomedullary images as input. Mask region-based convolutional neural networks were trained on the four synthetic phase series for kidney segmentation using the T2-weighted masks. Segmentation performance was evaluated in a separate cohort of 20 originally acquired MCE MRI examinations using Dice and Jaccard scores. Results: The CycleGAN network successfully generated anatomically coregistered synthetic MCE MRI-like datasets from T2-weighted acquisitions. The proposed deep learning approach for kidney segmentation achieved high mean Dice scores in all four phases of the original MCE MRI acquisitions (0.91 for precontrast, 0.92 for corticomedullary, 0.91 for early nephrographic, and 0.93 for nephrographic). Conclusion: The proposed deep learning approach achieved high performance in kidney segmentation on different MCE MRI acquisitions. Keywords: Kidney Segmentation, Generative Adversarial Network, CycleGAN, Convolutional Neural Network, Transfer Learning. Supplemental material is available for this article. Published under a CC BY 4.0 license.
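
As a hedged illustration of the region-based segmentation stage, the following sketch adapts torchvision's off-the-shelf Mask R-CNN to a single kidney class; the class count and weight choice are assumptions, and this is not the study's training code.

```python
# Sketch: adapt a pretrained torchvision Mask R-CNN for kidney segmentation.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def kidney_maskrcnn(num_classes: int = 2):          # background + kidney
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box head for the new class count.
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    # Replace the mask head likewise.
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
    return model

# The returned model would then be trained on the synthetic phase series
# with the T2-weighted masks as targets.
```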

9.
Z Med Phys ; 2023 Sep 02.
Article in English | MEDLINE | ID: mdl-37666698

ABSTRACT

For dosimetry of radiopharmaceutical therapies, it is essential to determine the volume of relevant structures exposed to therapeutic radiation. For many radiopharmaceuticals, the kidneys represent an important organ-at-risk. To reduce the time required for kidney segmentation, which is often still performed manually, numerous approaches have been presented in recent years that apply deep learning-based methods to CT-based automated segmentation. While the automatic segmentation methods presented so far have been based solely on CT information, the aim of this work is to examine the added value of incorporating PSMA-PET data in automatic kidney segmentation. METHODS: A total of 108 PET/CT examinations (53 [68Ga]Ga-PSMA-I&T and 55 [18F]F-PSMA-1007 examinations) were grouped to create a reference data set of manual kidney segmentations, performed by a human examiner. For each subject, two segmentations were carried out: one CT-based (detailed) segmentation and one PET-based (coarser) segmentation. Five different U-Net-based approaches were applied to the data set to perform automated segmentation of the kidney: CT images only, PET images only (coarse segmentation), a combination of CT and PET images, a combination of CT images and a PET-based coarse mask, and a CT image that had been pre-segmented using a PET-based coarse mask. A quantitative assessment of these approaches was performed on a test data set of 20 patients, including the Dice score, volume deviation, and average Hausdorff distance between automated and manual segmentations. Additionally, a visual evaluation of the automated segmentations of 100 additional (i.e., exclusively automatically segmented) patients was performed by a nuclear physician. RESULTS: Of all the approaches, the best results were achieved by using CT images that had been pre-segmented with a PET-based coarse mask as input. In addition, this method performed significantly better than segmentation based solely on CT, which was supported by the visual examination of the additional segmentations: in 80% of the cases, the segmentations created by exploiting the PET-based pre-segmentation were preferred by the nuclear physician. CONCLUSION: This study shows that deep learning-based kidney segmentation can be significantly improved through the addition of a PET-based pre-segmentation. The presented method was shown to be especially beneficial for kidneys with cysts or kidneys that are closely adjacent to other organs such as the spleen, liver, or pancreas. In the future, this could lead to a considerable reduction in the time required for dosimetry calculations as well as an improvement in the results.
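
The best-performing input strategy, pre-segmenting the CT with a PET-based coarse mask, can be sketched as below; the dilation margin, background value, and array names are assumptions for illustration.

```python
# Illustrative sketch: restrict the CT volume to a (dilated) PET-derived
# coarse kidney mask before feeding it to the segmentation network.
import numpy as np
from scipy import ndimage

def presegment_ct(ct: np.ndarray, pet_coarse_mask: np.ndarray,
                  dilation_iter: int = 5, background_hu: float = -1000.0):
    """Keep CT voxels inside a dilated PET coarse mask; blank the rest."""
    roi = ndimage.binary_dilation(pet_coarse_mask, iterations=dilation_iter)
    ct_roi = np.where(roi, ct, background_hu).astype(np.float32)
    return ct_roi        # used as network input in place of the raw CT
```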

10.
Biomolecules ; 13(10)2023 10 19.
Article in English | MEDLINE | ID: mdl-37892229

ABSTRACT

Background and Objective: Kidney ultrasound (US) imaging is a significant imaging modality for evaluating kidney health and is essential for diagnosis, treatment, surgical intervention planning, and follow-up assessments. Kidney US image segmentation consists of extracting useful objects or regions from the total image, which helps determine tissue organization and improve diagnosis. Thus, obtaining accurate kidney segmentation data is an important first step for precisely diagnosing kidney diseases. However, manual delineation of the kidney in US images is complex and tedious in clinical practice. To overcome these challenges, we developed a novel automatic method for US kidney segmentation. Methods: Our method comprises two cascaded steps for US kidney segmentation. The first step utilizes a coarse segmentation procedure based on a deep fusion learning network to roughly segment each input US kidney image. The second step utilizes a refinement procedure to fine-tune the result of the first step by combining an automatic searching polygon tracking method with a machine learning network. In the machine learning network, a suitable and explainable mathematical formula for kidney contours is denoted by basic parameters. Results: Our method is assessed using 1380 trans-abdominal US kidney images obtained from 115 patients. Based on comprehensive comparisons of different noise levels, our method achieves accurate and robust results for kidney segmentation. We use ablation experiments to assess the significance of each component of the method. Compared with state-of-the-art methods, the evaluation metrics of our method are significantly higher. The Dice similarity coefficient (DSC) of our method is 94.6 ± 3.4%, which is higher than those of recent deep learning and hybrid algorithms (89.4 ± 7.1% and 93.7 ± 3.8%, respectively). Conclusions: We develop a coarse-to-refined architecture for the accurate segmentation of US kidney images. It is important to precisely extract kidney contour features because segmentation errors can cause under-dosing of the target or over-dosing of neighboring normal tissues during US-guided brachytherapy. Hence, our method can be used to increase the rigor of kidney US segmentation.


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Ultrasonography; Algorithms; Kidney/diagnostic imaging
11.
J Med Imaging (Bellingham) ; 9(3): 036001, 2022 May.
Article in English | MEDLINE | ID: mdl-35721309

ABSTRACT

Purpose: Multiparametric magnetic resonance imaging (mp-MRI) is being investigated for kidney cancer because of its better soft tissue contrast. The need for manual labels makes the development of supervised kidney segmentation algorithms challenging for each mp-MRI protocol. Here, we developed a transfer learning-based approach to improve kidney segmentation on a small dataset of five other mp-MRI sequences. Approach: We proposed a fully automated two-dimensional (2D) attention U-Net model for kidney segmentation on a T1-weighted nephrographic phase contrast-enhanced (CE) MRI (T1W-NG) dataset (N = 108). The pretrained weights of the T1W-NG kidney segmentation model were transferred to models for five other distinct mp-MRI sequences (T2W, T1W in-phase (T1W-IP), T1W out-of-phase (T1W-OP), T1W precontrast (T1W-PRE), and T1W corticomedullary CE (T1W-CM); N = 50) and fine-tuned by unfreezing the layers. The individual model performances were evaluated with and without transfer learning using fivefold cross-validation, based on the average Dice similarity coefficient (DSC), absolute volume difference, Hausdorff distance (HD), and center-of-mass distance (CD) between algorithm-generated and manually segmented kidneys. Results: The developed 2D attention U-Net model for T1W-NG produced a kidney segmentation DSC of 89.34 ± 5.31%. Compared with randomly initialized weight models, the transfer learning-based models of the five mp-MRI sequences showed an average increase of 2.96% in kidney segmentation DSC (p = 0.001 to 0.006). Specifically, the transfer learning approach increased the average DSC on T2W from 87.19% to 89.90%, T1W-IP from 83.64% to 85.42%, T1W-OP from 79.35% to 83.66%, T1W-PRE from 82.05% to 85.94%, and T1W-CM from 85.65% to 87.64%. Conclusions: We demonstrate that a model pretrained for automated kidney segmentation on one mp-MRI sequence improved automated kidney segmentation on five additional sequences.
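
A conceptual PyTorch sketch of the transfer step, initializing a sequence-specific model from the pretrained T1W-NG weights and unfreezing all layers for fine-tuning, is given below; the checkpoint name, model factory, and optimizer settings are assumptions.

```python
# Conceptual sketch of weight transfer followed by full fine-tuning.
import torch

def build_finetune_model(make_attention_unet, ckpt_path="t1w_ng_pretrained.pt",
                         lr=1e-4):
    model = make_attention_unet()                 # same 2D attention U-Net topology
    state = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(state)                  # transfer pretrained weights
    for p in model.parameters():                  # unfreeze every layer
        p.requires_grad = True
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    return model, optimizer
```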

12.
Comput Methods Programs Biomed ; 221: 106854, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35567864

ABSTRACT

This paper proposes an encoder-decoder architecture for kidney segmentation. A hyperparameter optimization process is implemented, covering the model architecture, the choice of windowing method and loss function, and data augmentation. The model consists of an EfficientNet-B5 encoder and a feature pyramid network decoder, and it yields the best performance, with a Dice score of 0.969, on the 2019 Kidney and Kidney Tumor Segmentation Challenge dataset. The proposed model is tested with different voxel spacings, anatomical planes, and kidney and tumor volumes. Moreover, case studies are conducted to analyze segmentation outliers. Finally, five-fold cross-validation and the 3D-IRCAD-01 dataset are used to evaluate the developed model in terms of the following evaluation metrics: Dice score, recall, precision, and Intersection over Union. This paper demonstrates a new development and application of artificial intelligence algorithms for image analysis and interpretation. Overall, our experimental results show that the proposed kidney segmentation solution for CT images can be applied to clinical needs, assisting surgeons in surgical planning. It enables calculation of total kidney volume for kidney function estimation in ADPKD and supports radiologists and doctors in diagnosing disease and assessing its progression.
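
One plausible way to assemble the described EfficientNet-B5 encoder with a feature pyramid network decoder is via the segmentation_models_pytorch library, as in the sketch below; the channel and class counts are assumptions rather than the paper's exact configuration.

```python
# Sketch: EfficientNet-B5 encoder + FPN decoder via segmentation_models_pytorch.
import segmentation_models_pytorch as smp

model = smp.FPN(
    encoder_name="efficientnet-b5",   # EfficientNet-B5 encoder
    encoder_weights="imagenet",       # ImageNet-pretrained initialization
    in_channels=1,                    # single-channel (windowed) CT slices
    classes=3,                        # e.g., background, kidney, tumor
)
```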


Subject(s)
Deep Learning; Artificial Intelligence; Image Processing, Computer-Assisted/methods; Kidney/diagnostic imaging; Tomography, X-Ray Computed/methods
13.
Abdom Radiol (NY) ; 47(7): 2408-2419, 2022 07.
Article in English | MEDLINE | ID: mdl-35476147

ABSTRACT

PURPOSE: Total kidney volume (TKV) is the most important imaging biomarker for quantifying the severity of autosomal-dominant polycystic kidney disease (ADPKD). 3D ultrasound (US) can measure kidney volume more accurately than 2D US; however, manual segmentation is tedious and requires expert annotators. We investigated a deep learning-based approach for automated segmentation of TKV from 3D US in ADPKD patients. METHOD: We used axially acquired 3D US kidney images in 22 ADPKD patients, where each patient and each kidney were scanned three times, resulting in 132 scans that were manually segmented. We trained a convolutional neural network to segment the whole kidney and measure TKV. All patients were subsequently imaged with MRI for measurement comparison. RESULTS: Our method automatically segmented polycystic kidneys in 3D US images, obtaining an average Dice coefficient of 0.80 on the test dataset. For kidney volume measurement, the linear regression coefficient and bias relative to human tracing were R2 = 0.81 and -4.42%, and between the AI and the reference standard they were R2 = 0.93 and -4.12%, respectively. MRI- and US-measured kidney volumes had R2 = 0.84 and a bias of 7.47%. CONCLUSION: This is the first study applying deep learning to 3D US in ADPKD. Our method shows promising performance for auto-segmentation of kidneys using 3D US to measure TKV, close to human tracing and MRI measurement. This imaging and analysis method may be useful in a number of settings, including pediatric imaging, clinical studies, and longitudinal tracking of patient disease progression.
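
The agreement statistics quoted above (regression R2 and percentage bias between two sets of volume measurements) can be computed as in this illustrative sketch; function and variable names are assumptions.

```python
# Sketch: R^2 from a linear regression and mean percentage bias between two
# paired sets of kidney-volume measurements.
import numpy as np
from scipy import stats

def volume_agreement(vol_a: np.ndarray, vol_b: np.ndarray):
    slope, intercept, r, p, stderr = stats.linregress(vol_a, vol_b)
    r_squared = r ** 2
    bias_percent = np.mean((vol_a - vol_b) / vol_b) * 100.0
    return r_squared, bias_percent
```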


Subject(s)
Polycystic Kidney Diseases; Polycystic Kidney, Autosomal Dominant; Child; Humans; Imaging, Three-Dimensional; Kidney/diagnostic imaging; Magnetic Resonance Imaging/methods; Polycystic Kidney, Autosomal Dominant/diagnostic imaging
14.
Ultrasonics ; 122: 106706, 2022 May.
Article in English | MEDLINE | ID: mdl-35149255

ABSTRACT

Accurate segmentation of kidney in ultrasound images is a vital procedure in clinical diagnosis and interventional operation. In recent years, deep learning technology has demonstrated promising prospects in medical image analysis. However, due to the inherent problems of ultrasound images, data with annotations are scarce and arduous to acquire, hampering the application of data-hungry deep learning methods. In this paper, we propose cross-modal transfer learning from computerized tomography (CT) to ultrasound (US) by leveraging annotated data in the CT modality. In particular, we adopt cycle generative adversarial network (CycleGAN) to synthesize US images from CT data and construct a transition dataset to mitigate the immense domain discrepancy between US and CT. Mainstream convolutional neural networks such as U-Net, U-Res, PSPNet, and DeepLab v3+ are pretrained on the transition dataset and then transferred to real US images. We first trained CNN models on a data set composed of 50 ultrasound images and validated them on a validation set composed of 30 ultrasound images. In addition, we selected 82 ultrasound images from another hospital to construct a cross-site data set to verify the generalization performance of the models. The experimental results show that with our proposed transfer learning strategy, the segmentation accuracy in dice similarity coefficient (DSC) reaches 0.853 for U-Net, 0.850 for U-Res, 0.826 for PSPNet and 0.827 for DeepLab v3+ on the cross-site test set. Compared with training from scratch, the accuracy improvement was 0.127, 0.097, 0.105 and 0.036 respectively. Our transfer learning strategy effectively improves the accuracy and generalization ability of ultrasound image segmentation model with limited training data.


Subject(s)
Deep Learning; Image Processing, Computer-Assisted/methods; Kidney/diagnostic imaging; Tomography, X-Ray Computed; Ultrasonography; Datasets as Topic; Humans
15.
Eur J Radiol Open ; 9: 100458, 2022.
Article in English | MEDLINE | ID: mdl-36467572

ABSTRACT

Purpose: Quantitative evaluation of renal obstruction is crucial for preventing renal atrophy. This study presents a novel method for diagnosing renal obstruction by automatically extracting objective indicators from routine multi-phase CT urography (CTU). Materials and methods: The study included multi-phase CTU examinations of 6 hydronephrotic kidneys and 24 non-hydronephrotic kidneys (23,164 slices). The developed algorithm segmented the renal parenchyma and the renal pelvis of each kidney in each CTU slice. Following 3D reconstruction of the parenchyma and renal pelvis, the algorithm evaluated the amount of contrast media in both components in each phase. Finally, the algorithm derived two indicators for assessing renal obstruction: the change in the total amount of contrast media in both components across the CTU phases, and the drainage time, "T1/2", from the renal parenchyma. Results: The algorithm segmented the parenchyma and renal pelvis with average Dice coefficients of 0.97 and 0.92, respectively. In all the hydronephrotic kidneys, the total amount of contrast media did not decrease during the CTU examination and the T1/2 value was longer than 20 min. Both indicators yielded a statistically significant difference (p < 0.001) between hydronephrotic and normal kidneys, and combining both indicators yielded 100% accuracy. Conclusions: The novel algorithm enables accurate 3D segmentation of the renal parenchyma and pelvis and estimates the amount of contrast media in multi-phase CTU examinations. This serves as a proof of concept for the ability to extract, from routine CTU, indicators that alert to the presence of renal obstruction and estimate its severity.
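
A hedged sketch of estimating the drainage half-time T1/2 from a contrast-amount curve is shown below, using a mono-exponential washout fit; the fitting model is an assumption, not necessarily the algorithm used in the study.

```python
# Sketch: estimate the parenchymal drainage half-time T1/2 by fitting a
# mono-exponential washout to a contrast-amount-versus-time curve.
import numpy as np
from scipy.optimize import curve_fit

def washout(t, a, k, c):
    return a * np.exp(-k * t) + c

def half_time(times_min: np.ndarray, contrast_amount: np.ndarray) -> float:
    p0 = (contrast_amount.max(), 0.1, contrast_amount.min())
    (a, k, c), _ = curve_fit(washout, times_min, contrast_amount, p0=p0,
                             maxfev=10000)
    return np.log(2) / k   # minutes; values above ~20 min flagged per the study's criterion
```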

16.
Article in English | MEDLINE | ID: mdl-35340560

ABSTRACT

The Bosniak renal cyst classification has been widely used to determine the complexity of a renal cyst. However, about half of the patients undergoing surgery for Bosniak category III cysts take on surgical risks that reward them with no clinical benefit at all, because their pathological results reveal that the cysts are actually benign, not malignant. This problem inspires us to use recently popular deep learning techniques and study alternative analytics methods for precise binary classification (benign or malignant tumor) on computerized tomography (CT) images. Achieving this goal requires two consecutive steps: segmenting the kidneys or lesions from CT images and then classifying the segmented kidneys. In this paper, we propose a study of kidney segmentation using a 2.5D ResUNet and a 2.5D DenseUNet for efficiently extracting intra-slice and inter-slice features. Our models are trained and validated on the public dataset from the Kidney Tumor Segmentation (KiTS19) challenge in two different training environments. As a result, all experimental models achieve high mean kidney Dice scores of at least 95% on the KiTS19 validation set consisting of 60 patients. Apart from the KiTS19 dataset, we also conduct separate experiments on abdominal CT images of four Thai patients. On these four Thai patients, our experimental models show a drop in performance, with the best mean kidney Dice score being 87.60%.
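
The 2.5D idea, feeding each slice together with its neighbors as extra channels so the network sees inter-slice context, can be sketched as follows; the context size and array layout are assumptions.

```python
# Sketch: build a 2.5D input by stacking neighboring slices as channels.
import numpy as np

def to_25d(volume: np.ndarray, index: int, context: int = 1) -> np.ndarray:
    """Return slices [index-context, ..., index+context] stacked as channels."""
    z = volume.shape[0]
    picks = [min(max(index + d, 0), z - 1) for d in range(-context, context + 1)]
    return np.stack([volume[i] for i in picks], axis=0)   # (2*context+1, H, W)
```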

17.
Diagnostics (Basel) ; 12(8)2022 Jul 23.
Article in English | MEDLINE | ID: mdl-35892498

ABSTRACT

Despite recent breakthroughs of deep learning algorithms in medical imaging, automated detection and segmentation techniques for the kidney in abdominal computed tomography (CT) images have remained limited. Radiomics and machine learning analyses of renal diseases rely on the automatic segmentation of kidneys in CT images. Inspired by this, our primary aim is to utilize deep semantic segmentation learning models with a proposed training scheme to achieve precise and accurate segmentation outcomes. Moreover, this work aims to provide the community with an open-source, unenhanced abdominal CT dataset for training and testing deep learning segmentation networks to segment kidneys and detect kidney stones. Five variations of deep segmentation networks are trained and tested both dependently (based on the proposed training scheme) and independently. Upon comparison, the models trained with the proposed training scheme enable highly accurate 2D and 3D segmentation of kidneys and kidney stones. We believe this work is a fundamental step toward AI-driven diagnostic strategies, which can be an essential component of personalized patient care and improved decision-making in treating kidney diseases.

18.
Bioengineering (Basel) ; 9(11)2022 Nov 05.
Article in English | MEDLINE | ID: mdl-36354565

ABSTRACT

The segmentation of dynamic contrast-enhanced magnetic resonance images (DCE-MRI) of the kidney is a fundamental step in the early and noninvasive detection of acute renal allograft rejection. In this paper, a new and accurate DCE-MRI kidney segmentation method is proposed. In this method, fuzzy c-means (FCM) clustering is embedded into a level set method, with the fuzzy memberships being iteratively updated during the level set contour evolution. Moreover, population-based shape (PB-shape) and subject-specific shape (SS-shape) statistics are both exploited. The PB-shape model is trained offline from ground-truth kidney segmentations of various subjects, whereas the SS-shape model is trained on the fly using the segmentation results obtained for the specific subject. The proposed method was evaluated on real medical datasets of 45 subjects, reporting a Dice similarity coefficient (DSC) of 0.953 ± 0.018, an intersection-over-union (IoU) of 0.91 ± 0.033, and a 95th-percentile Hausdorff distance (HD95) of 1.10 ± 1.4. Extensive experiments confirm the superiority of the proposed method over several state-of-the-art level set methods, with an average improvement of 0.7 in terms of HD95. It also offers HD95 improvements of 9.5 and 3.8 over two deep neural networks based on the U-Net architecture. The accuracy improvements were found experimentally to be more prominent on low-contrast and noisy images.
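
For context, the fuzzy c-means membership update that such a method iterates during contour evolution follows the standard FCM formulas, sketched below with assumed variable names; this is not the paper's code.

```python
# Standard fuzzy c-means updates for pixel intensities (1D features).
import numpy as np

def fcm_memberships(intensities: np.ndarray, centers: np.ndarray,
                    m: float = 2.0) -> np.ndarray:
    """u[i, k]: membership of pixel i in cluster k (rows sum to 1)."""
    d = np.abs(intensities[:, None] - centers[None, :]) + 1e-8    # (N, C)
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))  # (N, C, C)
    return 1.0 / ratio.sum(axis=2)

def update_centers(intensities: np.ndarray, u: np.ndarray, m: float = 2.0):
    w = u ** m
    return (w * intensities[:, None]).sum(axis=0) / w.sum(axis=0)
```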

19.
Phys Med Biol ; 67(22)2022 11 18.
Article in English | MEDLINE | ID: mdl-36401576

ABSTRACT

Objective. Effective learning and modelling of spatial and semantic relations between image regions at various ranges are critical yet challenging in image segmentation tasks. Approach. We propose a novel deep graph reasoning model that learns from multi-order neighborhood topologies for volumetric image segmentation. A graph is first constructed with nodes representing image regions and a graph topology that captures spatial dependencies and semantic connections across image regions. We propose a new node attribute embedding mechanism that formulates topological attributes for each image region node by performing multi-order random walks (RW) on the graph and updating neighboring topologies at different neighborhood ranges. Afterwards, multi-scale graph convolutional autoencoders are developed to extract deep multi-scale topological representations of nodes and to propagate learnt knowledge along graph edges during the convolution and optimization process. We also propose a scale-level attention module that learns adaptive weights for the topological representations at multiple scales for enhanced fusion. Finally, the enhanced topological representation and the knowledge from graph reasoning are integrated with content features before being fed into the segmentation decoder. Main results. The evaluation results on a public kidney and tumor CT segmentation dataset show that our model outperforms other state-of-the-art segmentation methods. Ablation studies and experiments using different convolutional neural network backbones show the contributions of the major technical innovations and the generalization ability. Significance. We propose, for the first time, an RW-driven MCG with scale-level attention to extract semantic connections and spatial dependencies between a diverse range of regions for accurate kidney and tumor segmentation in CT volumes.
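
The multi-order random-walk idea can be illustrated with successive powers of the row-normalized adjacency (transition) matrix, which expose neighborhood topology at increasing ranges; the toy sketch below is a schematic, not the paper's model.

```python
# Schematic: k-step random-walk matrices P^k over a region graph.
import numpy as np

def multi_order_neighborhoods(adj: np.ndarray, orders=(1, 2, 3)):
    """Return {k: P^k} where P is the random-walk transition matrix."""
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(deg, 1e-8)                 # row-normalized adjacency
    out, Pk = {}, np.eye(adj.shape[0])
    for k in range(1, max(orders) + 1):
        Pk = Pk @ P
        if k in orders:
            out[k] = Pk.copy()                      # k-step visiting probabilities
    return out

# Rows of each P^k could serve as per-node topological attributes at that
# order, feeding the multi-scale graph convolutional autoencoders described above.
```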


Subject(s)
Deep Learning; Neoplasms; Humans; Algorithms; Neural Networks, Computer; Kidney
20.
Biomedicines ; 11(1)2022 Dec 21.
Article in English | MEDLINE | ID: mdl-36672514

ABSTRACT

The dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) technique has great potential in the diagnosis, therapy, and follow-up of patients with chronic kidney disease (CKD). Towards that end, precise kidney segmentation from DCE-MRI data becomes a prerequisite processing step. Exploiting useful information about the kidney's shape in this step mandates a prior registration operation to relate the shape model coordinates to those of the image to be segmented, and imprecise alignment of the shape model induces errors in the segmentation results. In this paper, we propose a new variational formulation to jointly segment and register DCE-MRI kidney images based on fuzzy c-means clustering embedded within a level set (LSet) method. The image pixels' fuzzy memberships and the spatial registration parameters are simultaneously updated in each evolution step to direct the LSet contour toward the target kidney. Results on real medical datasets of 45 subjects demonstrate the superior performance of the proposed approach, which reports a Dice similarity coefficient of 0.94 ± 0.03, an intersection-over-union of 0.89 ± 0.05, and a 95th-percentile Hausdorff distance of 2.2 ± 2.3. Extensive experiments show that our approach outperforms several state-of-the-art LSet-based methods, as well as two U-Net-based deep neural models trained for the same task, in terms of accuracy and consistency.
