1.
Phys Med Biol ; 68(17)2023 09 01.
Article in English | MEDLINE | ID: mdl-37582392

ABSTRACT

Objective. Unsupervised learning-based methods have proven to be an effective way to improve the image quality of positron emission tomography (PET) images when a large dataset is not available. However, when the gap between the input image and the target PET image is large, direct unsupervised learning can be challenging and can easily reduce lesion detectability. We aim to develop a new unsupervised learning method that improves lesion detectability in patient studies. Approach. We applied a deep progressive learning strategy to bridge the gap between the input image and the target image. The one-step unsupervised learning is decomposed into two unsupervised learning steps. The input of the first network is an anatomical image, and the input of the second network is a PET image with a low noise level. The output of the first network also serves as the prior image used to generate the target image of the second network via an iterative reconstruction method. Results. The performance of the proposed method was evaluated in phantom and patient studies and compared with non-deep learning, supervised learning, and unsupervised learning methods. The results showed that the proposed method was superior to the non-deep learning and unsupervised methods, and comparable to the supervised method. Significance. A progressive unsupervised learning method was proposed that improves image noise performance and lesion detectability.
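The two-step decomposition described above can be sketched structurally. This is a minimal illustration, not the authors' implementation: `smooth` is a hypothetical stand-in for a trained denoising network, and `reconstruct_target` is a placeholder for the iterative reconstruction that combines the step-one prior with PET data.

```python
import numpy as np

def smooth(img, weight=0.25):
    """Toy stand-in for a trained network: blend each pixel with its
    4-neighbour mean (hypothetical; the paper uses learned networks)."""
    padded = np.pad(img, 1, mode="edge")
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return (1 - weight) * img + weight * neigh

def step_one(anatomical_img):
    """First network: maps an anatomical image toward a prior image."""
    return smooth(anatomical_img)

def reconstruct_target(prior_img, low_noise_pet):
    """Placeholder for the iterative reconstruction that uses the
    step-one output as a prior to form step two's training target."""
    return 0.5 * prior_img + 0.5 * low_noise_pet

def step_two(low_noise_pet, target):
    """Second network: refines the low-noise PET toward the target."""
    return 0.5 * smooth(low_noise_pet) + 0.5 * target

def progressive_denoise(anatomical_img, low_noise_pet):
    """Chain the two unsupervised steps, as the abstract describes."""
    prior = step_one(anatomical_img)
    target = reconstruct_target(prior, low_noise_pet)
    return step_two(low_noise_pet, target)
```

The point of the sketch is the data flow: the anatomical image never feeds the second network directly; it only shapes the target through the prior.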


Subject(s)
Image Processing, Computer-Assisted; Positron-Emission Tomography; Humans; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Tomography, X-Ray Computed; Phantoms, Imaging; Signal-To-Noise Ratio
2.
Quant Imaging Med Surg ; 11(5): 1836-1853, 2021 May.
Article in English | MEDLINE | ID: mdl-33936969

ABSTRACT

BACKGROUND: Microvascular invasion (MVI) has a significant effect on the prognosis of hepatocellular carcinoma (HCC), but its preoperative identification is challenging. Radiomics features extracted from medical images, such as magnetic resonance (MR) images, can be used to predict MVI. In this study, we explored the effects of different imaging sequences, feature extraction and selection methods, and classifiers on the performance of HCC MVI predictive models. METHODS: After screening against the inclusion criteria, 69 patients with HCC and preoperative gadoxetic acid-enhanced MR images were enrolled. In total, 167 features were extracted from the MR images of each sequence for each patient. Experiments were designed to investigate the effects of imaging sequence, number of gray levels (Ng), quantization algorithm, feature selection method, and classifiers on the performance of radiomics biomarkers in the prediction of HCC MVI. We trained and tested these models using leave-one-out cross-validation (LOOCV). RESULTS: The radiomics model based on the images of the hepatobiliary phase (HBP) had better predictive performance than those based on the arterial phase (AP), portal venous phase (PVP), and pre-enhanced T1-weighted images [area under the receiver operating characteristic (ROC) curve (AUC) =0.792 vs. 0.641/0.634/0.620, P=0.041/0.021/0.010, respectively]. Compared with the equal-probability and Lloyd-Max algorithms, the radiomics features obtained using the Uniform quantization algorithm had a better performance (AUC =0.643/0.666 vs. 0.792, P=0.002/0.003, respectively). Among the values of 8, 16, 32, 64, and 128, the best predictive performance was achieved when the Ng was 64 (AUC =0.792 vs. 0.584/0.697/0.677/0.734, P<0.001/P=0.039/0.001/0.137, respectively). 
We used a two-stage feature selection method combining the least absolute shrinkage and selection operator (LASSO) with recursive feature elimination based on a gradient boosting decision tree (RFE-GBDT), which achieved better stability and predictive performance than LASSO, minimum redundancy maximum relevance (mRMR), and support vector machine (SVM)-RFE (stability =0.967 vs. 0.837/0.623/0.390, respectively; AUC =0.850 vs. 0.792/0.713/0.699, P=0.142/0.007/0.003, respectively). The model based on the radiomics features of HBP images with a GBDT classifier showed better performance for the preoperative prediction of MVI than logistic regression (LR), SVM, and random forest (RF) classifiers (AUC =0.895 vs. 0.850/0.834/0.884, P=0.558/0.229/0.058, respectively). With the optimal combination of these factors, we established the best model, which had an AUC of 0.895, accuracy of 87.0%, specificity of 82.5%, and sensitivity of 93.1%. CONCLUSIONS: Imaging sequences, feature extraction and selection methods, and classifiers can have a considerable effect on the predictive performance of radiomics models for HCC MVI.
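The leave-one-out cross-validation (LOOCV) protocol used above can be sketched as follows. This is a minimal illustration of the protocol only: a toy nearest-centroid classifier stands in for the study's radiomics pipeline (feature extraction, LASSO/RFE-GBDT selection, and the GBDT classifier are not reproduced here).

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, x):
    """Toy classifier: predict the class whose training centroid is
    closest to x (a stand-in for the study's GBDT classifier)."""
    dists = {}
    for label in np.unique(y_train):
        centroid = X_train[y_train == label].mean(axis=0)
        dists[label] = np.linalg.norm(x - centroid)
    return min(dists, key=dists.get)

def loocv_accuracy(X, y):
    """Leave-one-out cross-validation: each sample is held out once,
    the model is fit on the remaining n-1 samples, and the held-out
    sample is scored. With n=69 patients this yields 69 folds."""
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        pred = nearest_centroid_predict(X[mask], y[mask], X[i])
        correct += (pred == y[i])
    return correct / len(X)
```

LOOCV is a natural choice for small cohorts like this one, since every sample contributes to both training and evaluation without a fixed held-out split.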

3.
Vis Comput Ind Biomed Art ; 2(1): 21, 2019 Dec 09.
Article in English | MEDLINE | ID: mdl-32240395

ABSTRACT

An accurate segmentation and quantification of the superficial foveal avascular zone (sFAZ) is important to facilitate the diagnosis and treatment of many retinal diseases, such as diabetic retinopathy and retinal vein occlusion. We proposed a deep learning-based method for the automatic segmentation and quantification of the sFAZ in optical coherence tomography angiography (OCTA) images with robustness to brightness and contrast (B/C) variations. A dataset of 405 OCTA images from 45 participants was acquired with a Zeiss Cirrus HD-OCT 5000, and the ground truth (GT) was subsequently segmented manually. A deep learning network with an encoder-decoder architecture was created to classify each pixel into an sFAZ or non-sFAZ class. Subsequently, we applied largest-connected-region extraction and hole-filling to refine the automatic segmentation results. A maximum mean Dice similarity coefficient (DSC) of 0.976 ± 0.011 was obtained when the automatic segmentation results were compared against the GT. The correlation coefficient between the area calculated from the automatic segmentation results and that calculated from the GT was 0.997. In all nine parameter groups with varied brightness/contrast, all DSCs of the proposed method were higher than 0.96. The proposed method achieved better performance in sFAZ segmentation and quantification than two previously reported methods. In conclusion, we proposed and verified an automatic sFAZ segmentation and quantification method based on deep learning that is robust to B/C variations. For clinical applications, this represents important progress toward automated segmentation and quantification suitable for clinical analysis.
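Two of the quantitative ingredients above, the Dice similarity coefficient (DSC) and largest-connected-region extraction, can be sketched directly. This is an illustrative implementation under common definitions, not the authors' code; the encoder-decoder network itself is not reproduced.

```python
import numpy as np
from collections import deque

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def largest_connected_region(mask):
    """Keep only the largest 4-connected foreground component, a common
    post-processing step after pixel-wise segmentation."""
    mask = mask.astype(bool)
    seen = np.zeros_like(mask)
    best = np.zeros_like(mask)
    h, w = mask.shape
    for sr in range(h):
        for sc in range(w):
            if mask[sr, sc] and not seen[sr, sc]:
                # Flood-fill the component starting at (sr, sc).
                comp = []
                queue = deque([(sr, sc)])
                seen[sr, sc] = True
                while queue:
                    r, c = queue.popleft()
                    comp.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < h and 0 <= nc < w
                                and mask[nr, nc] and not seen[nr, nc]):
                            seen[nr, nc] = True
                            queue.append((nr, nc))
                if len(comp) > best.sum():
                    best = np.zeros_like(mask)
                    for r, c in comp:
                        best[r, c] = True
    return best
```

Removing spurious small components before measuring the sFAZ area is what makes the downstream area and DSC statistics stable against isolated misclassified pixels.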
