Results 1 - 20 of 31
1.
Comput Biol Med ; 160: 107002, 2023 06.
Article in English | MEDLINE | ID: mdl-37187136

ABSTRACT

BACKGROUND: Non-contrast chest CT is widely used for lung cancer screening, and its images carry potentially useful information about the thoracic aorta. The morphological assessment of the thoracic aorta may have value in the presymptomatic detection of thoracic aortic-related diseases and the risk prediction of future adverse events. However, due to low vasculature contrast in such images, visual assessment of aortic morphology is challenging and highly depends on physicians' experience. PURPOSE: The main objective of this study is to propose a novel multi-task framework based on deep learning for simultaneous aortic segmentation and localization of key landmarks on unenhanced chest CT. The secondary objective is to use the algorithm to measure quantitative features of thoracic aorta morphology. METHODS: The proposed network is composed of two subnets that carry out segmentation and landmark detection, respectively. The segmentation subnet aims to demarcate the aortic sinuses of Valsalva, the aortic trunk, and the aortic branches, whereas the detection subnet is devised to locate five landmarks on the aorta to facilitate morphology measurements. The two subnets share a common encoder and run their decoders in parallel, taking full advantage of the synergy between the segmentation and landmark detection tasks. Furthermore, the volume of interest (VOI) module and the squeeze-and-excitation (SE) block with attention mechanisms are incorporated to further boost the capability of feature learning. RESULTS: Benefiting from the multitask framework, we achieved a mean Dice score of 0.95, an average symmetric surface distance of 0.53 mm, and a Hausdorff distance of 2.13 mm for aortic segmentation, and a mean square error (MSE) of 3.23 mm for landmark localization in 40 testing cases. CONCLUSION: We proposed a multitask learning framework that performs segmentation of the thoracic aorta and localization of landmarks simultaneously and achieves good results. It can support quantitative measurement of aortic morphology for further analysis of aortic diseases, such as hypertension.
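A minimal PyTorch sketch of the shared-encoder, parallel-decoder idea described above (an illustration, not the authors' implementation; the layer sizes, the 3D patch shape and the landmark-heatmap output head are assumptions):

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        """Squeeze-and-excitation: re-weight channels by a learned gate."""
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool3d(1)
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())

        def forward(self, x):
            b, c = x.shape[:2]
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
            return x * w

    class MultiTaskAortaNet(nn.Module):
        """Shared encoder with parallel decoders for segmentation and landmark heatmaps."""
        def __init__(self, in_ch=1, n_classes=4, n_landmarks=5, width=16):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv3d(in_ch, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv3d(width, 2 * width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                SEBlock(2 * width))
            def decoder(out_ch):
                return nn.Sequential(
                    nn.ConvTranspose3d(2 * width, width, 2, stride=2), nn.ReLU(inplace=True),
                    nn.ConvTranspose3d(width, out_ch, 2, stride=2))
            self.seg_head = decoder(n_classes)         # per-voxel class logits
            self.landmark_head = decoder(n_landmarks)  # one heatmap per landmark

        def forward(self, x):
            feats = self.encoder(x)
            return self.seg_head(feats), self.landmark_head(feats)

    if __name__ == "__main__":
        net = MultiTaskAortaNet()
        ct = torch.randn(1, 1, 32, 64, 64)           # toy CT volume
        seg_logits, heatmaps = net(ct)
        print(seg_logits.shape, heatmaps.shape)      # (1, 4, 32, 64, 64) and (1, 5, 32, 64, 64)

Because both decoders read the same encoder features, the segmentation and landmark losses regularize one another, which is the synergy the abstract refers to.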


Subject(s)
Aortic Diseases; Lung Neoplasms; Humans; Early Detection of Cancer; Aorta/diagnostic imaging; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods
2.
Med Image Anal ; 84: 102708, 2023 02.
Article in English | MEDLINE | ID: mdl-36516554

ABSTRACT

Lung nodule detection in chest X-ray (CXR) images is common in the early screening of lung cancer. Deep-learning-based Computer-Assisted Diagnosis (CAD) systems can support radiologists in nodule screening on CXR images. However, training such robust and accurate CAD systems requires large-scale, diverse medical data with high-quality annotations. To alleviate the limited availability of such datasets, lung nodule synthesis methods have been proposed for data augmentation. Nevertheless, previous methods lack the ability to generate realistic nodules with the shape and size attributes desired by the detector. To address this issue, we introduce a novel lung nodule synthesis framework in this paper, which decomposes nodule attributes into three main aspects: shape, size, and texture. A GAN-based Shape Generator first models nodule shapes by generating diverse shape masks. A Size Modulation step then enables quantitative control of the diameters of the generated nodule shapes at pixel-level granularity. A coarse-to-fine gated convolutional Texture Generator finally synthesizes visually plausible nodule textures conditioned on the modulated shape masks. Moreover, we propose to synthesize nodule CXR images by controlling the disentangled nodule attributes for data augmentation, in order to better compensate for the nodules that are easily missed in the detection task. Our experiments demonstrate the enhanced image quality, diversity, and controllability of the proposed lung nodule synthesis framework. We also validate the effectiveness of our data augmentation strategy in greatly improving nodule detection performance.


Subject(s)
Lung Neoplasms; Solitary Pulmonary Nodule; Humans; Tomography, X-Ray Computed/methods; X-Rays; Radiographic Image Interpretation, Computer-Assisted/methods; Lung Neoplasms/diagnostic imaging; Radiography; Solitary Pulmonary Nodule/diagnostic imaging; Lung
3.
BMC Musculoskelet Disord ; 23(1): 869, 2022 Sep 17.
Article in English | MEDLINE | ID: mdl-36115981

ABSTRACT

BACKGROUND: A deep convolutional neural network (DCNN) system is proposed to measure the lower limb parameters of the mechanical lateral distal femur angle (mLDFA), medial proximal tibial angle (MPTA), lateral distal tibial angle (LDTA), joint line convergence angle (JLCA), and mechanical axis of the lower limbs. METHODS: Standing X-rays of 1000 patients' lower limbs were collected for the DCNN and assigned to training, validation, and test sets. A coarse-to-fine network was employed to locate 20 key landmarks on both limbs: it first recognized the hip, knee, and ankle regions in a full-length X-ray and then output the key points in each sub-region. Finally, information from these key landmark locations was used to calculate the above five parameters. RESULTS: The DCNN system showed high consistency (intraclass correlation coefficient > 0.91) for all five lower limb parameters. Additionally, the mean absolute error (MAE) and root mean squared error (RMSE) of all angle predictions were lower than 3° for both the left and right limbs. The MAE of the mechanical axis of the lower limbs was 1.124 mm and 1.416 mm, and the RMSE was 1.032 mm and 1.321 mm, for the right and left limbs, respectively. The measurement time of the DCNN system was 1.8 ± 1.3 s, which was significantly shorter than that of experienced radiologists (616.8 ± 48.2 s, t = -180.4, P < 0.001). CONCLUSIONS: The proposed DCNN system can automatically measure mLDFA, MPTA, LDTA, JLCA, and the mechanical axis of the lower limbs, thus helping physicians manage lower limb alignment accurately and efficiently.
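Once the 20 landmarks are located, each angular parameter reduces to the angle between two lines defined by landmark pairs. A small NumPy sketch with a generic angle helper and made-up coordinates (not the study's exact landmark definitions for mLDFA, MPTA, LDTA or JLCA):

    import numpy as np

    def angle_between(p1, p2, q1, q2):
        """Angle in degrees between line p1-p2 and line q1-q2."""
        u = np.asarray(p2, float) - np.asarray(p1, float)
        v = np.asarray(q2, float) - np.asarray(q1, float)
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    # Toy example: mechanical femoral axis vs. knee joint line (illustrative points only).
    hip_center   = (120.0,  40.0)
    knee_center  = (128.0, 420.0)
    knee_lateral = (180.0, 422.0)
    knee_medial  = ( 76.0, 418.0)

    print(f"angle: {angle_between(hip_center, knee_center, knee_lateral, knee_medial):.1f} deg")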


Subject(s)
Lower Extremity; Tibia; Humans; Lower Extremity/diagnostic imaging; Neural Networks, Computer; Retrospective Studies; Tibia/diagnostic imaging
4.
Comput Intell Neurosci ; 2022: 9469234, 2022.
Article in English | MEDLINE | ID: mdl-35733559

ABSTRACT

Lung cancer accounts for the greatest number of cancer-related deaths, while the accurate evaluation of pulmonary nodules in computed tomography (CT) images can significantly increase the 5-year relative survival rate. Although deep learning methods have recently been introduced for the identification of malignant nodules, a substantial challenge remains due to the limited size of available datasets. In this study, we propose a cascaded-recalibrated multiple instance learning (MIL) model based on multiattribute feature transfer for pathologic-level lung cancer prediction in CT images. This cascaded-recalibrated MIL deep model incorporates a cascaded recalibration mechanism at the nodule level and attribute level, which fuses informative attribute features into nodule embeddings; the key nodule features are then aggregated into the patient-level embedding to improve the performance of lung cancer prediction. We evaluated the proposed cascaded-recalibrated MIL model on the public Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) benchmark dataset and compared it to the latest approaches. The experimental results showed a significant performance boost by the cascaded-recalibrated MIL model over the higher-order transfer learning, instance-space MIL, and embedding-space MIL models, as well as the radiologists. In addition, the recalibration coefficients of the nodule and attribute features for the final decision were also analyzed to reveal the underlying relationship between the confirmed diagnosis and its highly correlated attributes. The cascaded recalibration mechanism enables the MIL model to pay more attention to the important nodules and attributes while suppressing less useful feature embeddings, and the cascaded-recalibrated MIL model provides substantial improvements for pathologic-level lung cancer prediction from CT images. The identification of the important nodules and attributes also provides better interpretability for model decision-making, which is very important for medical applications.
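The abstract does not give the implementation details of the cascaded recalibration, but the general pattern of weighting nodule embeddings before aggregating them into a patient-level embedding can be illustrated with a standard gated-attention MIL pooling layer; the dimensions and names below are illustrative assumptions, not the paper's:

    import torch
    import torch.nn as nn

    class AttentionMILPooling(nn.Module):
        """Weight nodule embeddings by learned attention, then sum to a patient embedding."""
        def __init__(self, dim=128, hidden=64, n_classes=2):
            super().__init__()
            self.attn_v = nn.Linear(dim, hidden)   # tanh branch
            self.attn_u = nn.Linear(dim, hidden)   # sigmoid gate branch
            self.attn_w = nn.Linear(hidden, 1)
            self.classifier = nn.Linear(dim, n_classes)

        def forward(self, instances):              # instances: (n_nodules, dim)
            scores = self.attn_w(torch.tanh(self.attn_v(instances))
                                 * torch.sigmoid(self.attn_u(instances)))   # (n, 1)
            alpha = torch.softmax(scores, dim=0)                            # attention per nodule
            bag = (alpha * instances).sum(dim=0)                            # patient-level embedding
            return self.classifier(bag), alpha.squeeze(-1)

    if __name__ == "__main__":
        nodule_embeddings = torch.randn(7, 128)    # 7 nodules from one patient (toy data)
        logits, weights = AttentionMILPooling()(nodule_embeddings)
        print(logits.shape, weights)               # class logits and 7 attention weights

The attention weights play a role analogous to the recalibration coefficients mentioned above: they can be inspected to see which nodules drove the patient-level decision.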


Subject(s)
Lung Neoplasms; Radiographic Image Interpretation, Computer-Assisted; Databases, Factual; Humans; Lung; Lung Neoplasms/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods
5.
J Transl Med ; 19(1): 443, 2021 10 24.
Article in English | MEDLINE | ID: mdl-34689804

ABSTRACT

BACKGROUND: This study aimed to evaluate the utility of radiomics-based machine learning analysis with multiparametric DWI and to compare the diagnostic performance of radiomics features and mean diffusion metrics in the characterization of breast lesions. METHODS: This retrospective study included 542 lesions from February 2018 to November 2018. One hundred radiomics features were computed from mono-exponential (ME), biexponential (BE), stretched exponential (SE), and diffusion-kurtosis imaging (DKI) models. Radiomics-based analysis was performed by comparing four classifiers, including random forest (RF), principal component analysis (PCA), L1 regularization (L1R), and support vector machine (SVM). These four classifiers were trained on a training set with 271 patients via ten-fold cross-validation and tested on an independent testing set with 271 patients. The diagnostic performance of the mean diffusion metrics of ME (mADCall b, mADC0-1000), BE (mD, mD*, mf), SE (mDDC, mα), and DKI (mK, mD) was also calculated for comparison. The area under the receiver operating characteristic curve (AUC) was used to compare the diagnostic performance. RESULTS: RF attained higher AUCs than L1R, PCA and SVM. The AUCs of the radiomics features for the differential diagnosis of breast lesions ranged from 0.80 (BE_D*) to 0.85 (BE_D). The AUCs of the mean diffusion metrics ranged from 0.54 (BE_mf) to 0.79 (ME_mADC0-1000). The AUCs of the radiomics features differed significantly from those of all mean diffusion metrics (all P < 0.001) for the differentiation of benign and malignant breast lesions. Of the radiomics features computed, the most important sequence was BE_D (AUC: 0.85), and the most important feature was the FO-10 percentile (Feature Importance: 0.04). CONCLUSIONS: The radiomics-based analysis of multiparametric DWI by RF enables better differentiation of benign and malignant breast lesions than the mean diffusion metrics.
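A generic scikit-learn sketch of this kind of pipeline (random forest on radiomics features, ten-fold cross-validation on the training split, AUC on the held-out split); the synthetic feature matrix and hyperparameters are placeholders, not the study's data:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_score, train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(542, 100))          # 542 lesions x 100 radiomics features (synthetic)
    y = rng.integers(0, 2, size=542)         # benign/malignant labels (synthetic)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=0)

    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    cv_auc = cross_val_score(rf, X_train, y_train, cv=10, scoring="roc_auc")
    print(f"10-fold CV AUC: {cv_auc.mean():.3f} +/- {cv_auc.std():.3f}")

    rf.fit(X_train, y_train)
    test_auc = roc_auc_score(y_test, rf.predict_proba(X_test)[:, 1])
    print(f"held-out test AUC: {test_auc:.3f}")

    # Feature importances can be ranked to report the most informative radiomics feature.
    top = np.argsort(rf.feature_importances_)[::-1][:5]
    print("top-5 feature indices:", top)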


Subject(s)
Breast; Machine Learning; Area Under Curve; Breast/diagnostic imaging; Diffusion Magnetic Resonance Imaging; Humans; ROC Curve; Retrospective Studies
6.
IEEE Trans Med Imaging ; 40(10): 2698-2710, 2021 10.
Article in English | MEDLINE | ID: mdl-33284748

ABSTRACT

We consider the problem of abnormality localization for clinical applications. While deep learning has driven much recent progress in medical imaging, many clinical challenges are not fully addressed, limiting its broader usage. While recent methods report high diagnostic accuracies, physicians have concerns trusting these algorithm results for diagnostic decision-making purposes because of a general lack of algorithm decision reasoning and interpretability. One potential way to address this problem is to further train these models to localize abnormalities in addition to just classifying them. However, doing this accurately will require a large amount of disease localization annotations by clinical experts, a task that is prohibitively expensive to accomplish for most applications. In this work, we take a step towards addressing these issues by means of a new attention-driven weakly supervised algorithm comprising a hierarchical attention mining framework that unifies activation- and gradient-based visual attention in a holistic manner. Our key algorithmic innovations include the design of explicit ordinal attention constraints, enabling principled model training in a weakly-supervised fashion, while also facilitating the generation of visual-attention-driven model explanations by means of localization cues. On two large-scale chest X-ray datasets (NIH ChestX-ray14 and CheXpert), we demonstrate significant localization performance improvements over the current state of the art while also achieving competitive classification performance.
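The hierarchical attention mining itself is not detailed in the abstract; as a point of reference, the kind of gradient-based visual attention it builds on can be obtained with a plain Grad-CAM pass, sketched below for an arbitrary torchvision classifier (the backbone and layer choice are illustrative assumptions, not the paper's model):

    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18(weights=None).eval()        # stand-in classifier; CXR models differ
    acts, grads = {}, {}
    layer = model.layer4                         # last convolutional block

    layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

    x = torch.randn(1, 3, 224, 224, requires_grad=True)    # toy input image
    score = model(x)[0].max()                               # top-class logit
    score.backward()

    w = grads["v"].mean(dim=(2, 3), keepdim=True)           # channel weights from gradients
    cam = F.relu((w * acts["v"]).sum(dim=1, keepdim=True))  # weighted sum of activations
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    print(cam.shape)                                         # (1, 1, 224, 224) localization map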


Subject(s)
Algorithms; Radiography; X-Rays
7.
BMC Med Inform Decis Mak ; 20(Suppl 14): 317, 2020 12 15.
Article in English | MEDLINE | ID: mdl-33323117

ABSTRACT

BACKGROUND: Pneumothorax (PTX) may cause a life-threatening medical emergency with cardio-respiratory collapse that requires immediate intervention and rapid treatment. The screening and diagnosis of pneumothorax usually rely on chest radiographs. However, pneumothoraces in chest X-rays may be very subtle, highly variable in shape, and overlapped by the ribs or clavicles, making them difficult to identify. Our objective was to create a large chest X-ray dataset for pneumothorax with pixel-level annotation and to train an automatic segmentation and diagnosis framework to assist radiologists in identifying pneumothorax accurately and promptly. METHODS: In this study, an end-to-end deep learning framework is proposed for the segmentation and diagnosis of pneumothorax on chest X-rays, which incorporates a fully convolutional DenseNet (FC-DenseNet) with a multi-scale module and spatial and channel squeeze-and-excitation (scSE) modules. To further improve the precision of boundary segmentation, we propose a spatial weighted cross-entropy loss function to penalize the target, background and contour pixels with different weights. RESULTS: This retrospective study was conducted on a total of 11,051 eligible front-view chest X-ray images (5566 PTX cases and 5485 non-PTX cases). The experimental results show that the proposed algorithm outperforms five state-of-the-art segmentation algorithms in terms of mean pixel-wise accuracy (MPA) with [Formula: see text] and dice similarity coefficient (DSC) with [Formula: see text], and achieves competitive performance on diagnostic accuracy with 93.45% and [Formula: see text]-score with 92.97%. CONCLUSION: This framework provides substantial improvements for the automatic segmentation and diagnosis of pneumothorax and is expected to become a clinical tool that helps radiologists identify pneumothorax on chest X-rays.
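The spatially weighted cross-entropy idea, penalizing target, background and contour pixels with different weights, can be sketched as follows in PyTorch; the weight values and the morphological derivation of the contour band are assumptions for illustration, not the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def spatial_weighted_bce(logits, mask, w_target=2.0, w_contour=4.0, w_background=1.0):
        """Pixel-wise BCE where target, contour, and background pixels get different weights."""
        # Approximate the contour band as the difference between a dilated and an eroded mask.
        dilated = F.max_pool2d(mask, kernel_size=3, stride=1, padding=1)
        eroded = -F.max_pool2d(-mask, kernel_size=3, stride=1, padding=1)
        contour = dilated - eroded

        weights = torch.full_like(mask, w_background)
        weights[mask > 0.5] = w_target
        weights[contour > 0.5] = w_contour

        loss = F.binary_cross_entropy_with_logits(logits, mask, reduction="none")
        return (weights * loss).mean()

    if __name__ == "__main__":
        logits = torch.randn(2, 1, 64, 64)                # raw segmentation scores
        mask = (torch.rand(2, 1, 64, 64) > 0.7).float()   # toy pneumothorax masks
        print(spatial_weighted_bce(logits, mask).item())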


Subject(s)
Pneumothorax; Algorithms; Humans; Image Processing, Computer-Assisted; Pneumothorax/diagnostic imaging; Retrospective Studies; X-Rays
8.
Med Image Anal ; 64: 101753, 2020 08.
Article in English | MEDLINE | ID: mdl-32574986

ABSTRACT

Automated whole breast ultrasound (AWBUS) is a new breast imaging technique that can depict the whole breast anatomy. To facilitate the reading of AWBUS images and support breast density estimation, an automatic breast anatomy segmentation method for AWBUS images is proposed in this study. The problem at hand is quite challenging as it needs to address issues of low image quality, ill-defined boundaries, large anatomical variation, etc. To address these issues, a new deep learning encoder-decoder segmentation method based on a self-co-attention mechanism is developed. The self-attention mechanism comprises a spatial and channel attention (SC) module and is embedded in the ResNeXt block (i.e., Res-SC) in the encoder path. A non-local context block (NCB) is further incorporated to augment the learning of high-level contextual cues. The decoder path of the proposed method is equipped with a weighted up-sampling block (WUB) to attain a better class-specific up-sampling effect. Meanwhile, a co-attention mechanism is also developed to improve the segmentation coherence between two consecutive slices. Extensive experiments are conducted with comparison to several state-of-the-art deep learning segmentation methods. The experimental results corroborate the effectiveness of the proposed method on the difficult breast anatomy segmentation problem in AWBUS images.


Subject(s)
Neural Networks, Computer; Ultrasonography, Mammary; Breast/diagnostic imaging; Female; Humans; Ultrasonography
9.
IEEE Trans Med Imaging ; 38(6): 1543, 2019 06.
Article in English | MEDLINE | ID: mdl-31199246

ABSTRACT

In [1], Baiying Lei was indicated as the corresponding author. Tianfu Wang and Baiying Lei should have been indicated as the corresponding authors.

10.
Magn Reson Imaging ; 64: 90-100, 2019 12.
Article in English | MEDLINE | ID: mdl-31175927

ABSTRACT

We propose a novel dual-domain convolutional neural network framework to improve structural information of routine 3 T images. We introduce a parameter-efficient butterfly network that involves two complementary domains: a spatial domain and a frequency domain. The butterfly network allows the interaction of these two domains in learning the complex mapping from 3 T to 7 T images. We verified the efficacy of the dual-domain strategy and butterfly network using 3 T and 7 T image pairs. Experimental results demonstrate that the proposed framework generates synthetic 7 T-like images and achieves performance superior to state-of-the-art methods.
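A minimal sketch of a dual-domain block in PyTorch: one branch convolves the image in the spatial domain, the other operates on its log-magnitude Fourier spectrum, and the two are fused as a residual refinement. The layer sizes and fusion rule are illustrative assumptions; this is not the butterfly network itself:

    import torch
    import torch.nn as nn

    class DualDomainBlock(nn.Module):
        """Fuse a spatial-domain branch with a frequency-domain (FFT magnitude) branch."""
        def __init__(self, channels=1, width=16):
            super().__init__()
            self.spatial = nn.Sequential(
                nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True))
            self.frequency = nn.Sequential(
                nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True))
            self.fuse = nn.Conv2d(2 * width, channels, 3, padding=1)

        def forward(self, x):
            spec = torch.fft.fftshift(torch.fft.fft2(x, norm="ortho"))
            mag = torch.log1p(spec.abs())             # log-magnitude spectrum as a real image
            feats = torch.cat([self.spatial(x), self.frequency(mag)], dim=1)
            return x + self.fuse(feats)               # residual refinement of the 3T input

    if __name__ == "__main__":
        img_3t = torch.randn(1, 1, 128, 128)          # toy 3T slice
        print(DualDomainBlock()(img_3t).shape)        # (1, 1, 128, 128) "7T-like" output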


Subject(s)
Brain/diagnostic imaging; Epilepsy/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Algorithms; Humans
11.
Med Image Comput Comput Assist Interv ; 11070: 410-417, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30957107

ABSTRACT

Due to the high cost and low accessibility of 7T magnetic resonance imaging (MRI) scanners, we propose a novel dual-domain cascaded regression framework to synthesize 7T images from routine 3T images. Our framework is composed of two parallel and interactive multi-stage regression streams, where one stream regresses in the spatial domain and the other regresses in the frequency domain. These two streams complement each other and enable the learning of complex mappings between 3T and 7T images. We evaluated the proposed framework on a set of 3T and 7T images by leave-one-out cross-validation. Experimental results demonstrate that the proposed framework generates realistic 7T images and achieves better results than state-of-the-art methods.


Subject(s)
Imaging, Three-Dimensional; Magnetic Resonance Imaging; Algorithms; Humans; Multivariate Analysis; Reproducibility of Results; Sensitivity and Specificity
12.
IEEE J Biomed Health Inform ; 22(1): 215-223, 2018 01.
Article in English | MEDLINE | ID: mdl-28504954

ABSTRACT

Head circumference (HC) is one of the most important biometrics in assessing fetal growth during prenatal ultrasound examinations. However, the manual measurement of this biometric by doctors often requires substantial experience. We developed a learning-based framework that used prior knowledge and employed a fast ellipse fitting method (ElliFit) to measure HC automatically. We first integrated the prior knowledge about the gestational age and ultrasound scanning depth into a random forest classifier to localize the fetal head. We further used phase symmetry to detect the center line of the fetal skull and employed ElliFit to fit the HC ellipse for measurement. The experimental results from 145 HC images showed that our method had an average measurement error of 1.7 mm and outperformed traditional methods. The experimental results demonstrated that our method shows great promise for applications in clinical practice.
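Given detected skull-boundary points, HC follows from an ellipse fit plus a perimeter estimate. The sketch below uses OpenCV's cv2.fitEllipse and Ramanujan's perimeter approximation on synthetic points (ElliFit itself is a different fitting algorithm, and the pixel spacing is a made-up value):

    import numpy as np
    import cv2

    # Synthetic skull-boundary points on a noisy ellipse (stand-in for detected edge points).
    t = np.linspace(0, 2 * np.pi, 200)
    pts = np.stack([300 + 180 * np.cos(t), 250 + 140 * np.sin(t)], axis=1)
    pts += np.random.default_rng(0).normal(scale=2.0, size=pts.shape)

    (cx, cy), (d1, d2), angle = cv2.fitEllipse(pts.reshape(-1, 1, 2).astype(np.float32))
    a, b = d1 / 2.0, d2 / 2.0                      # semi-axes in pixels

    # Ramanujan's approximation of the ellipse perimeter.
    hc_pixels = np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b)))

    pixel_spacing_mm = 0.2                          # assumed mm-per-pixel calibration
    print(f"HC ~ {hc_pixels * pixel_spacing_mm:.1f} mm")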


Subject(s)
Cephalometry/methods; Fetus/diagnostic imaging; Head/diagnostic imaging; Ultrasonography, Prenatal/methods; Decision Trees; Female; Humans; Machine Learning; Pregnancy
13.
Sci Rep ; 7(1): 8533, 2017 09 01.
Article in English | MEDLINE | ID: mdl-28864824

ABSTRACT

We present a computer-aided diagnosis system (CADx) for the automatic categorization of solid, part-solid and non-solid nodules in pulmonary computerized tomography images using a Convolutional Neural Network (CNN). Provided with only a two-dimensional region of interest (ROI) surrounding each nodule, our CNN automatically reasons from image context to discover informative computational features. As a result, no image segmentation processing is needed for further analysis of nodule attenuation, allowing our system to avoid potential errors caused by inaccurate image processing. We implemented two computerized texture analysis schemes, classification and regression, to automatically categorize solid, part-solid and non-solid nodules in CT scans, with hierarchical features in each case learned directly by the CNN model. To show the effectiveness of our CNN-based CADx, an established method based on histogram analysis (HIST) was implemented for comparison. The experimental results show significant performance improvement by the CNN model over HIST in both classification and regression tasks, yielding nodule classification and rating performance concordant with those of practicing radiologists. Adoption of CNN-based CADx systems may reduce the inter-observer variation among screening radiologists and provide a quantitative reference for further nodule analysis.


Subject(s)
Diagnosis, Computer-Assisted/methods; Lung/diagnostic imaging; Neural Networks, Computer; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed/methods; Databases, Factual; Diagnosis, Differential; Humans; Lung/pathology; Lung Neoplasms/diagnostic imaging
14.
IEEE Trans Cybern ; 47(6): 1576-1586, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28371793

ABSTRACT

Ultrasound (US) imaging is a widely used screening tool for obstetric examination and diagnosis. Accurate acquisition of fetal standard planes with key anatomical structures is crucial for subsequent biometric measurement and diagnosis. However, standard plane acquisition is a labor-intensive task and requires operators equipped with thorough knowledge of fetal anatomy. Therefore, automatic approaches are highly demanded in clinical practice to alleviate the workload and boost examination efficiency. The automatic detection of standard planes from US videos remains a challenging problem due to the high intraclass and low interclass variation of standard planes and the relatively low image quality. Unlike previous studies, which were each designed for an individual anatomical standard plane, we present a general framework for the automatic identification of different standard planes from US videos. Distinct from the conventional approach of devising hand-crafted visual features for detection, our framework explores in-plane and between-plane feature learning with a novel composite framework of convolutional and recurrent neural networks. To further address the issue of limited training data, a multitask learning framework is implemented to exploit common knowledge across the detection tasks of distinct standard planes to augment feature learning. Extensive experiments have been conducted on hundreds of fetal US videos to corroborate the efficacy of the proposed framework on the difficult standard plane detection problem.


Subject(s)
Fetus/diagnostic imaging; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Ultrasonography, Prenatal/methods; Video Recording/methods; Algorithms; Female; Humans; Pregnancy
15.
IEEE Trans Cybern ; 47(5): 1336-1349, 2017 May.
Article in English | MEDLINE | ID: mdl-28362600

ABSTRACT

The quality of ultrasound (US) images for the obstetric examination is crucial for accurate biometric measurement. However, manual quality control is a labor-intensive process and often impractical in a clinical setting. To improve the efficiency of examination and alleviate the measurement error caused by improper US scanning operation and slice selection, a computerized fetal US image quality assessment (FUIQA) scheme is proposed to assist the implementation of US image quality control in the clinical obstetric examination. The proposed FUIQA is realized with two deep convolutional neural network models, denoted as L-CNN and C-CNN, respectively. The L-CNN aims to find the region of interest (ROI) of the fetal abdominal region in the US image. Based on the ROI found by the L-CNN, the C-CNN evaluates the image quality by assessing the goodness of depiction of the key structures of the stomach bubble and umbilical vein. To further boost the performance of the L-CNN, we augment the input sources of the neural network with local phase features along with the original US data. It will be shown that the heterogeneous input sources help to improve the performance of the L-CNN. The performance of the proposed FUIQA is compared with the subjective image quality evaluation results from three medical doctors. With comprehensive experiments, it will be illustrated that the computerized assessment with our FUIQA scheme can be comparable to the subjective ratings from medical doctors.


Subject(s)
Fetus/diagnostic imaging; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Ultrasonography, Prenatal/methods; Ultrasonography, Prenatal/standards; Female; Humans; Neural Networks, Computer; Pregnancy; Quality Control
16.
IEEE Trans Med Imaging ; 36(1): 288-300, 2017 01.
Article in English | MEDLINE | ID: mdl-27623573

ABSTRACT

Accurate segmentation of cervical cells in Pap smear images is an important step in automatic pre-cancer identification in the uterine cervix. One of the major segmentation challenges is overlapping cytoplasm, which has not been well addressed in previous studies. To tackle the overlapping issue, this paper proposes a learning-based method with robust shape priors to segment individual cells in Pap smear images and support automatic monitoring of changes in cells, which is a vital prerequisite for the early detection of cervical cancer. We define this splitting problem as a discrete labeling task for multiple cells with a suitable cost function. The labeling results are then fed into our dynamic multi-template deformation model for further boundary refinement. Multi-scale deep convolutional networks are adopted to learn the diverse cell appearance features. We also incorporate high-level shape information to guide segmentation where cell boundaries might be weak or lost due to cell overlapping. An evaluation carried out using two different datasets demonstrates the superiority of our proposed method over the state-of-the-art methods in terms of segmentation accuracy.


Subject(s)
Papanicolaou Test; Algorithms; Female; Humans; Uterine Cervical Neoplasms
17.
Med Biol Eng Comput ; 54(12): 1807-1818, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27376641

ABSTRACT

Though numerous segmentation algorithms have been proposed to segment brain tissue from magnetic resonance (MR) images, few of them consider combining tissue segmentation and bias field correction into a unified framework while simultaneously removing noise. In this paper, we present a new unified MR image segmentation algorithm whereby tissue segmentation, bias correction and noise reduction are integrated within the same energy model. Our method introduces a total variation term into the coherent local intensity clustering criterion function. To solve the nonconvex problem with respect to the membership functions, we add auxiliary variables to the energy function so that Chambolle's fast dual projection method can be used, and the optimal segmentation and bias field estimation can be achieved simultaneously through reciprocal iteration. Experimental results show that the proposed method has a salient advantage over the other three baseline methods in both tissue segmentation and bias correction, and that noise is significantly reduced when the method is applied to highly noise-corrupted images. Moreover, benefiting from the fast convergence of the proposed solution, our method is less time-consuming and robust to parameter settings.


Subject(s)
Algorithms; Image Interpretation, Computer-Assisted; Magnetic Resonance Imaging/methods; Analysis of Variance; Cluster Analysis; Gray Matter/pathology; Humans; White Matter/pathology
18.
Sci Rep ; 6: 24454, 2016 Apr 15.
Article in English | MEDLINE | ID: mdl-27079888

ABSTRACT

This paper performs a comprehensive study on deep-learning-based computer-aided diagnosis (CADx) for the differential diagnosis of benign and malignant nodules/lesions, avoiding the potential errors caused by inaccurate image processing results (e.g., boundary segmentation) as well as the classification bias resulting from a less robust feature set, both of which are involved in most conventional CADx algorithms. Specifically, the stacked denoising auto-encoder (SDAE) is exploited in two CADx applications: the differentiation of breast ultrasound lesions and of lung CT nodules. The SDAE architecture is well equipped with an automatic feature exploration mechanism and noise tolerance, and hence may be suitable for dealing with the intrinsically noisy nature of medical image data from various imaging modalities. To show that SDAE-based CADx outperforms the conventional scheme, two recent conventional CADx algorithms are implemented for comparison. Ten runs of 10-fold cross-validation are conducted to illustrate the efficacy of the SDAE-based CADx algorithm. The experimental results show a significant performance boost by the SDAE-based CADx algorithm over the two conventional methods, suggesting that deep learning techniques can potentially change the design paradigm of CADx systems without the need for explicit design and selection of problem-oriented features.
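The basic SDAE building block, an auto-encoder trained to reconstruct clean inputs from corrupted ones, can be sketched as follows in PyTorch (dimensions, noise level and the toy training loop are illustrative assumptions; layer-wise stacking and the downstream classifier are omitted):

    import torch
    import torch.nn as nn

    class DenoisingAutoencoder(nn.Module):
        """One SDAE layer: corrupt the input, encode, then reconstruct the clean input."""
        def __init__(self, in_dim=28 * 28, hidden=256, noise_std=0.3):
            super().__init__()
            self.noise_std = noise_std
            self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(inplace=True))
            self.decoder = nn.Linear(hidden, in_dim)

        def forward(self, x):
            corrupted = x + self.noise_std * torch.randn_like(x) if self.training else x
            code = self.encoder(corrupted)
            return self.decoder(code), code

    if __name__ == "__main__":
        dae = DenoisingAutoencoder()
        opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
        patches = torch.rand(64, 28 * 28)             # toy image patches
        for _ in range(5):                            # brief training loop
            recon, _ = dae(patches)
            loss = nn.functional.mse_loss(recon, patches)
            opt.zero_grad(); loss.backward(); opt.step()
        print(f"reconstruction MSE: {loss.item():.4f}")

After pre-training, the learned codes (rather than hand-crafted features) are what a downstream classifier would consume, which is the design shift the abstract argues for.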


Subject(s)
Breast/pathology; Diagnosis, Computer-Assisted; Machine Learning; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed; Ultrasonography; Algorithms; Datasets as Topic; Female; Humans; Reproducibility of Results
19.
Med Image Comput Comput Assist Interv ; 9901: 247-255, 2016 Oct.
Article in English | MEDLINE | ID: mdl-28386607

ABSTRACT

Cystocele is a common disease in women. Accurate assessment of cystocele severity is very important for choosing treatment options. Transperineal ultrasound (US) has recently emerged as an alternative tool for cystocele grading. Cystocele severity is usually evaluated through manual measurement of the maximal descent of the bladder (MDB) relative to the symphysis pubis (SP) during the Valsalva maneuver. However, this process is time-consuming and operator-dependent. In this study, we propose an automatic scheme for cystocele grading from transperineal US video. A two-layer spatio-temporal regression model is proposed to identify the middle axis and lower tip of the SP and to segment the bladder, which are essential tasks for the measurement of the MDB. Both appearance and context features are extracted in the spatio-temporal domain to aid anatomy detection. Experimental results on 85 transperineal US videos show that our method significantly outperforms the state-of-the-art regression method.


Subject(s)
Cystocele/diagnostic imaging; Cystocele/pathology; Ultrasonography/methods; Adult; Algorithms; Cystocele/therapy; Female; Humans; Regression Analysis; Reproducibility of Results; Sensitivity and Specificity; Severity of Illness Index; Young Adult
20.
IEEE Trans Med Imaging ; 35(2): 589-604, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26441446

ABSTRACT

Registration and fusion of magnetic resonance (MR) and 3D transrectal ultrasound (TRUS) images of the prostate gland can provide high-quality guidance for prostate interventions. However, accurate MR-TRUS registration remains a challenging task due to the great intensity variation between the two modalities, the lack of intrinsic fiducials within the prostate, the large gland deformation caused by TRUS probe insertion, and the distinct biomechanical properties across patients and prostate zones. To address these challenges, a personalized model-to-surface registration approach is proposed in this study. The main contributions of this paper are threefold. First, a new personalized statistical deformable model (PSDM) is proposed with finite element analysis and patient-specific tissue parameters measured from ultrasound elastography. Second, a hybrid point matching method is developed by introducing the modality independent neighborhood descriptor (MIND) to weight the Euclidean distance between points and establish reliable surface point correspondences. Third, the hybrid point matching is further guided by the PSDM for more physically plausible deformation estimation. Eighteen sets of patient data are included to test the efficacy of the proposed method. The experimental results demonstrate that our approach provides more accurate and robust MR-TRUS registration than state-of-the-art methods. The average target registration error is 1.44 mm, which meets the clinical requirement of 1.9 mm for accurate tumor volume detection. It can be concluded that the presented method can effectively fuse the heterogeneous image information from elastography, MR, and TRUS to attain satisfactory image alignment performance.
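Registration accuracy of the kind reported above, the target registration error (TRE), is the mean Euclidean distance between fixed landmarks and the transformed moving landmarks. A small NumPy sketch with made-up fiducials and a toy rigid transform:

    import numpy as np

    def target_registration_error(fixed_pts, moving_pts, transform):
        """Mean Euclidean distance between fixed landmarks and transformed moving landmarks."""
        warped = moving_pts @ transform[:3, :3].T + transform[:3, 3]
        return float(np.linalg.norm(warped - fixed_pts, axis=1).mean())

    # Toy example: MR landmarks, TRUS landmarks, and a rigid transform estimate (all made up).
    mr_pts = np.array([[10.0, 22.0, 5.0], [14.0, 30.0, 7.5], [8.0, 27.0, 9.0]])
    trus_pts = mr_pts + np.array([1.5, -0.8, 0.4])        # translated copies as "moving" points
    T = np.eye(4)
    T[:3, 3] = [-1.4, 0.7, -0.5]                          # slightly imperfect correction

    print(f"TRE = {target_registration_error(mr_pts, trus_pts, T):.2f} mm")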


Subject(s)
Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Models, Statistical; Prostate/diagnostic imaging; Ultrasonography/methods; Humans; Male; Precision Medicine; Prostatic Neoplasms/diagnostic imaging