Results 1 - 10 of 10
1.
Bioengineering (Basel) ; 10(7)2023 Jul 12.
Article in English | MEDLINE | ID: mdl-37508857

ABSTRACT

Accurate segmentation of interstitial lung disease (ILD) patterns from computed tomography (CT) images is an essential prerequisite to treatment and follow-up. However, it is highly time-consuming for radiologists to segment ILD patterns pixel by pixel from CT scans with hundreds of slices. Consequently, it is hard to obtain large amounts of well-annotated data, which poses a huge challenge for data-driven deep learning-based methods. To alleviate this problem, we propose an end-to-end semi-supervised learning framework for the segmentation of ILD patterns (ESSegILD) from CT images via self-training with selective re-training. The proposed ESSegILD model is trained using a large CT dataset with slice-wise sparse annotations, i.e., only a few slices in each CT volume are labeled with ILD patterns. Specifically, we adopt a popular semi-supervised framework, Mean-Teacher, which consists of a teacher model and a student model and uses consistency regularization to encourage consistent outputs from the two models under different perturbations. Furthermore, we introduce a recent self-training technique with a selective re-training strategy that selects reliable pseudo-labels generated by the teacher model; these are used to expand the training samples and thereby promote the student model during iterative training. By leveraging consistency regularization and self-training with selective re-training, ESSegILD can effectively utilize unlabeled data from a partially annotated dataset to progressively improve segmentation performance. Experiments are conducted on a dataset of 67 pneumonia patients with incomplete annotations, containing over 11,000 CT images with eight different ILD patterns; the results indicate that our proposed method is superior to state-of-the-art methods.
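The two mechanisms named in this abstract can be summarized in a few lines. This is an illustrative sketch, not the authors' implementation; the function names, the EMA decay, and the 0.8 confidence threshold are assumptions.

```python
# Illustrative sketch of two ESSegILD ingredients (not the authors' code):
# (1) the Mean-Teacher update, where teacher weights track an exponential
#     moving average (EMA) of the student weights, and
# (2) selective re-training, where only pseudo-labels the teacher predicts
#     with high confidence are kept to expand the training set.
# The decay and threshold values here are illustrative assumptions.

def ema_update(teacher_w, student_w, decay=0.99):
    """One Mean-Teacher step: teacher <- decay*teacher + (1-decay)*student."""
    return [decay * t + (1.0 - decay) * s for t, s in zip(teacher_w, student_w)]

def select_reliable(pseudo_probs, threshold=0.8):
    """Keep (pixel index, argmax class) pairs whose confidence passes the bar."""
    selected = []
    for i, probs in enumerate(pseudo_probs):
        conf = max(probs)
        if conf >= threshold:
            selected.append((i, probs.index(conf)))
    return selected

# A confident pixel is kept for re-training; an ambiguous one is dropped.
picked = select_reliable([[0.95, 0.05], [0.55, 0.45]])
```

In the full framework the selected pseudo-labels would join the sparse slice-wise annotations for the student's next training round; here they are just returned as index/class pairs.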

2.
Comput Med Imaging Graph ; 105: 102186, 2023 04.
Article in English | MEDLINE | ID: mdl-36731328

ABSTRACT

Bone suppression aims to remove the bone components superimposed over the soft tissues within the lung area of a chest X-ray (CXR), which is potentially useful for subsequent lung disease diagnosis by radiologists as well as by computer-aided systems. Although bone suppression for frontal CXRs is well studied, it remains challenging for lateral CXRs, because the available dual-energy subtraction (DES) datasets containing paired lateral CXR and soft-tissue/bone images are limited and imperfect, and the anatomical structures in the lateral view are more complex. In this work, we propose a bone suppression method for lateral CXRs that leverages a two-stage distillation learning strategy and a dedicated data correction method. Specifically, a primary model is first trained on a real DES dataset with limited samples. The bone-suppressed results that the primary model produces on a relatively large lateral CXR dataset are then improved by a designed gradient correction method. Second, the corrected results serve as training samples for the distilled model. By automatically learning knowledge from both the primary model and the extra correction procedure, the distilled model is expected to improve on the primary model while omitting the tedious correction procedure. For both the primary and distilled models we adopt an ensemble model named MsDd-MAP, which learns the complementary information of multiple scales and dual domains (i.e., intensity and gradient) and fuses them in a maximum-a-posteriori (MAP) framework. Our method is evaluated on a two-exposure lateral DES dataset of 46 subjects and a lateral CXR dataset of 240 subjects. The experimental results suggest that our method is superior to competing methods on the quantitative evaluation metrics.
Furthermore, a subjective evaluation by three experienced radiologists indicates that the distilled model produces more visually appealing soft-tissue images than the primary model, even comparable to real DES imaging for lateral CXRs.
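A toy illustration of why the second-stage model can skip the correction step at inference: if the corrected primary outputs serve as regression targets, the distilled model absorbs the correction into its own parameters. Everything below (the scalar linear stand-ins for the models and the correction) is a hypothetical simplification, not the paper's MsDd-MAP pipeline.

```python
# Toy illustration of two-stage distillation (hypothetical stand-ins):
# a "primary model" and a "gradient correction" step are each reduced to a
# scalar linear map, and the distilled model is fit to the corrected outputs.

def primary(x):
    return 0.8 * x  # stand-in for the trained primary bone-suppression model

def correct(y):
    return 1.2 * y  # stand-in for the post-hoc gradient correction

def fit_slope(xs, ys):
    """Least-squares slope through the origin: the 'distilled model'."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1.0, 2.0, 3.0]
targets = [correct(primary(x)) for x in xs]  # corrected pseudo-targets
distilled_slope = fit_slope(xs, targets)     # absorbs primary + correction
```

Applying `distilled_slope` alone now reproduces primary-plus-correction in one step, which is the sense in which the distilled model "omits the tedious correction procedure".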


Subject(s)
Radiography, Thoracic; Thorax; Humans; Radiography, Thoracic/methods; X-Rays; Radiography; Bones
3.
Med Image Comput Comput Assist Interv ; 14394: 265-275, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38435413

ABSTRACT

Magnetic resonance imaging (MRI) and positron emission tomography (PET) are increasingly used to forecast progression trajectories of cognitive decline caused by preclinical and prodromal Alzheimer's disease (AD). Many existing studies have explored the potential of these two distinct modalities with diverse machine and deep learning approaches, but successfully fusing MRI and PET is complex due to their unique characteristics and to missing modalities. To this end, we develop a hybrid multimodality fusion (HMF) framework with cross-domain knowledge transfer for joint MRI and PET representation learning, feature fusion, and forecasting of cognitive decline progression. Our HMF consists of three modules: 1) a module to impute missing PET images, 2) a module to extract multimodality features from MRI and PET images, and 3) a module to fuse the extracted multimodality features. To address the issue of small sample sizes, we employ a cross-domain knowledge transfer strategy from the ADNI dataset, which includes 795 subjects, to independent small-scale AD-related cohorts, in order to leverage the rich knowledge present within ADNI. The proposed HMF is extensively evaluated in three AD-related studies with 272 subjects across multiple disease stages, such as subjective cognitive decline and mild cognitive impairment. Experimental results demonstrate the superiority of our method over several state-of-the-art approaches in forecasting progression trajectories of AD-related cognitive decline.
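The impute-then-fuse data flow of the three modules can be sketched as follows. The linear imputation map, the feature dimensions, and the concatenation fusion are all hypothetical simplifications, not the paper's networks.

```python
# Hedged sketch of the HMF data flow (hypothetical shapes and functions):
# when a subject's PET is missing, a learned mapping imputes PET features
# from MRI features, and the two modalities are then fused.

def impute_pet(mri_feat, W):
    """Stand-in 'imputation module': a linear map from MRI to PET features."""
    return [sum(wij * m for wij, m in zip(row, mri_feat)) for row in W]

def hmf_forward(mri_feat, pet_feat, W):
    """Impute PET when absent, then fuse by concatenation."""
    if pet_feat is None:
        pet_feat = impute_pet(mri_feat, W)
    return mri_feat + pet_feat  # simple concatenation fusion

W = [[0.5, 0.0], [0.0, 0.5]]              # hypothetical imputation weights
fused = hmf_forward([2.0, 4.0], None, W)  # PET missing, so it is imputed
```

The point of the sketch is only the control flow: downstream modules always see a complete multimodality feature vector, whether the PET half was observed or imputed.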

4.
Med Image Anal ; 75: 102266, 2022 01.
Article in English | MEDLINE | ID: mdl-34700245

ABSTRACT

Accurately assessing clinical progression from subjective cognitive decline (SCD) to mild cognitive impairment (MCI) is crucial for early intervention in pathological cognitive decline. Multi-modal neuroimaging data, such as T1-weighted magnetic resonance imaging (MRI) and positron emission tomography (PET), provide objective and complementary disease biomarkers for computer-aided diagnosis of MCI. However, few studies are dedicated to SCD progression prediction, since subjects usually lack one or more imaging modalities. Moreover, usually only a limited number (e.g., tens) of SCD subjects are available, which negatively affects model robustness. To this end, we propose a Joint neuroimage Synthesis and Representation Learning (JSRL) framework for SCD conversion prediction using incomplete multi-modal neuroimages. The JSRL contains two components: 1) a generative adversarial network to synthesize missing images and generate multi-modal features, and 2) a classification network to fuse the multi-modal features for SCD conversion prediction. The two components are incorporated into a joint learning framework by sharing the same features, encouraging effective fusion of multi-modal features for accurate prediction. A transfer learning strategy is employed in the proposed framework by transferring a model trained on the Alzheimer's Disease Neuroimaging Initiative (ADNI), with MRI and fluorodeoxyglucose PET from 863 subjects, to both the Chinese Longitudinal Aging Study (CLAS), with only MRI from 76 SCD subjects, and the Australian Imaging, Biomarkers and Lifestyle (AIBL) study, with MRI from 235 subjects. Experimental results suggest that the proposed JSRL yields superior performance in SCD and MCI conversion prediction and cross-database neuroimage synthesis, compared with several state-of-the-art methods.
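The feature-sharing idea, in which one representation feeds both the synthesis and the classification branch, can be sketched minimally. All functions, weights, and the linear "heads" below are hypothetical stand-ins, not the JSRL networks.

```python
# Minimal sketch of JSRL-style feature sharing (hypothetical stand-ins):
# one shared representation feeds both a synthesis branch (standing in for
# the GAN generator) and a classification branch, so the two tasks are
# trained on the same features rather than on separate pipelines.

def shared_encoder(mri):
    return [v * 0.5 for v in mri]        # stand-in shared features

def synthesize_pet(shared):
    return [v + 1.0 for v in shared]     # stand-in synthesis head

def classify(shared, pet, w=(1.0, 1.0, 1.0, 1.0), bias=-3.0):
    fused = shared + pet                 # fuse multi-modal features
    score = sum(wi * f for wi, f in zip(w, fused)) + bias
    return 1 if score > 0 else 0         # 1 = predicted converter

shared = shared_encoder([2.0, 2.0])
pet = synthesize_pet(shared)             # missing PET replaced by synthesis
label = classify(shared, pet)
```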


Subject(s)
Alzheimer Disease; Cognitive Dysfunction; Australia; Cognitive Dysfunction/diagnostic imaging; Humans; Magnetic Resonance Imaging; Neuroimaging
5.
Med Image Anal ; 71: 102076, 2021 07.
Article in English | MEDLINE | ID: mdl-33930828

ABSTRACT

Structural magnetic resonance imaging (MRI) has shown great clinical and practical value in computer-aided brain disorder identification. Multi-site MRI data increase sample size and statistical power but are susceptible to inter-site heterogeneity caused by different scanners, scanning protocols, and subject cohorts. Multi-site MRI harmonization (MMH) helps alleviate inter-site differences for subsequent analysis. MMH methods that operate at the image or feature-extraction level are concise but, to some extent, lack robustness and flexibility. Although several machine/deep learning-based methods have been proposed for MMH, some of them require a portion of labeled data in the to-be-analyzed target domain or ignore the potential contributions of different brain regions to the identification of brain disorders. In this work, we propose an attention-guided deep domain adaptation (AD2A) framework for MMH and apply it to automated brain disorder identification with multi-site MRIs. The proposed framework does not need any category labels for the target data and can automatically identify discriminative regions in whole-brain MR images. Specifically, AD2A is composed of three key modules: (1) an MRI feature encoding module to extract representations of input MRIs, (2) an attention discovery module to automatically locate discriminative dementia-related regions in each whole-brain MRI scan, and (3) a domain transfer module trained with adversarial learning for knowledge transfer between the source and target domains. Experiments have been performed on 2572 subjects from four benchmark datasets with T1-weighted structural MRIs, with results demonstrating the effectiveness of the proposed method in both brain disorder identification and disease progression prediction.
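The attention-discovery idea, pooling region-level features with learned attention so that discriminative regions dominate, can be sketched with softmax attention. The region features and scores below are illustrative numbers, not the AD2A module.

```python
import math

# Hedged sketch of attention-guided pooling (illustrative numbers, not the
# AD2A module): region-level features are combined with softmax attention
# weights, so highly scored (discriminative) regions dominate the pooled
# whole-brain representation.

def softmax(scores):
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attention_pool(region_feats, scores):
    w = softmax(scores)
    dim = len(region_feats[0])
    return [sum(wi * f[d] for wi, f in zip(w, region_feats)) for d in range(dim)]

# One region scored far above another: the pooled feature is dominated by
# the high-attention region's features.
pooled = attention_pool([[1.0, 0.0], [0.0, 1.0]], [8.0, 0.0])
```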


Subject(s)
Brain Diseases; Magnetic Resonance Imaging; Attention; Brain/diagnostic imaging; Humans; Machine Learning
6.
IEEE J Biomed Health Inform ; 24(4): 1114-1124, 2020 04.
Article in English | MEDLINE | ID: mdl-31295129

ABSTRACT

Given the complicated relationship between magnetic resonance imaging (MRI) signals and attenuation values, attenuation correction in hybrid positron emission tomography (PET)/MRI systems remains a challenging task. Existing methods are either time-consuming or require many samples to train their models. In this paper, we propose an efficient approach for predicting pseudo computed tomography (CT) images from T1- and T2-weighted MRI data with limited training data. The approach uses improved neighborhood anchored regression (INAR) as a baseline method, pre-calculating projection matrices so that pseudo-CT patches can be predicted flexibly. Several techniques, including augmentation of the MR/CT dataset, learning of nonlinear descriptors of MR images, a hierarchical search for nearest neighbors, data-driven optimization, and a multi-regressor ensemble, are adopted to improve the effectiveness of the approach. In total, 22 healthy subjects were enrolled in the study. The pseudo-CT images obtained using INAR with the multi-regressor ensemble yielded a mean absolute error (MAE) of 92.73 ± 14.86 HU, a peak signal-to-noise ratio of 29.77 ± 1.63 dB, a Pearson linear correlation coefficient of 0.82 ± 0.05, a Dice similarity coefficient of 0.81 ± 0.03, and a relative mean absolute error (rMAE) in PET attenuation correction of 1.30 ± 0.20% compared with true CT images. Moreover, our INAR method, without any refinement strategies, achieves considerable results with only seven subjects (MAE 106.89 ± 14.43 HU, rMAE 1.51 ± 0.21%). The experiments demonstrate the superior performance of the proposed method over six competing methods, and the proposed method can rapidly generate pseudo-CT images that are suitable for PET attenuation correction.
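The anchored-regression mechanism that makes prediction fast can be sketched as follows: projection matrices are pre-computed offline, one per anchor, and at test time an MR patch is routed to its nearest anchor and multiplied by that anchor's matrix. All anchors, matrices, and dimensions below are illustrative assumptions.

```python
# Hedged sketch of the anchored-regression idea behind INAR (all values are
# illustrative): one pre-computed projection matrix per anchor; prediction
# is a nearest-anchor lookup followed by a single matrix-vector product.

def nearest_anchor(x, anchors):
    def sqdist(a):
        return sum((ai - xi) ** 2 for ai, xi in zip(a, x))
    return min(range(len(anchors)), key=lambda i: sqdist(anchors[i]))

def matvec(M, x):
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in M]

def predict_patch(x, anchors, matrices):
    """Route the MR patch to its nearest anchor and apply that matrix."""
    return matvec(matrices[nearest_anchor(x, anchors)], x)

anchors = [[0.0, 0.0], [10.0, 10.0]]
matrices = [[[1.0, 0.0], [0.0, 1.0]],   # identity near the first anchor
            [[2.0, 0.0], [0.0, 2.0]]]   # doubling near the second anchor
pseudo_ct = predict_patch([9.0, 9.0], anchors, matrices)  # routed to anchor 1
```

Because the expensive regression is solved offline per anchor, the online cost per patch is only a nearest-neighbor search plus one matrix product, which is consistent with the abstract's emphasis on rapid generation.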


Subject(s)
Deep Learning; Diagnostic Imaging/methods; Image Processing, Computer-Assisted/methods; Adult; Aged; Brain/diagnostic imaging; Female; Humans; Male; Middle Aged; Regression Analysis
7.
Comput Methods Programs Biomed ; 180: 105014, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31430596

ABSTRACT

BACKGROUND AND OBJECTIVE: In chest radiographs (CXRs), bones and soft tissues overlap with each other, which makes CXRs difficult for radiologists to read and interpret. Delineating the ribs and clavicles is helpful for suppressing them in chest radiographs, reducing their effect on chest radiography analysis. However, automatically delineating ribs and clavicles is difficult without deep learning models; in particular, few non-deep-learning methods can delineate the anterior ribs effectively, because their edges are faint in posterior-anterior (PA) CXRs. METHODS: In this work, we present an effective deep learning method for automatically delineating posterior ribs, anterior ribs, and clavicles, using a fully convolutional DenseNet (FC-DenseNet) as the pixel classifier. We use a pixel-weighted loss function to mitigate the uncertainty that arises during manual delineation and obtain robust predictions. RESULTS: We conduct a comparative analysis with two other fully convolutional networks for edge detection and with the state-of-the-art non-deep-learning method. The proposed method significantly outperforms these methods in both quantitative evaluation metrics and visual perception. On the test dataset, the average recall, precision, and F-measure of the proposed method are 0.773 ± 0.030, 0.861 ± 0.043, and 0.814 ± 0.023, respectively, and the mean boundary distance (MBD) is 0.855 ± 0.642 pixels. The proposed method also performs well on the JSRT and NIH Chest X-ray datasets, indicating its generalizability across multiple databases. In addition, a preliminary result on suppressing the bone components of CXRs has been produced using our delineation system. CONCLUSIONS: The proposed method can automatically delineate ribs and clavicles in CXRs and produce accurate edge maps.
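A pixel-weighted loss of the kind described can be sketched as weighted binary cross-entropy: each pixel's term is scaled by a weight, so pixels whose manual labels are uncertain can be down-weighted. The weighting scheme here is an assumption, not the paper's exact formulation.

```python
import math

# Hedged sketch of a pixel-weighted loss (the weighting scheme is an
# assumption, not the paper's exact formulation): each pixel's binary
# cross-entropy term is scaled by a per-pixel weight, so uncertain manual
# labels contribute less to training.

def pixel_weighted_bce(probs, labels, weights, eps=1e-7):
    total = 0.0
    for p, y, w in zip(probs, labels, weights):
        p = min(max(p, eps), 1.0 - eps)   # clamp to avoid log(0)
        total += -w * (y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return total / sum(weights)

probs, labels = [0.9, 0.1], [1, 1]        # second pixel is badly predicted
uniform = pixel_weighted_bce(probs, labels, [1.0, 1.0])
downweighted = pixel_weighted_bce(probs, labels, [1.0, 0.1])
```

Down-weighting the uncertain second pixel reduces the loss it contributes, which is how ambiguous boundary pixels are prevented from dominating training.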


Subject(s)
Automation; Clavicle/diagnostic imaging; Image Processing, Computer-Assisted; Ribs/diagnostic imaging; Deep Learning; Humans; Radiography, Thoracic/methods
8.
Appl Bionics Biomech ; 2019: 9806464, 2019.
Article in English | MEDLINE | ID: mdl-31341514

ABSTRACT

BACKGROUND AND OBJECTIVE: When radiologists diagnose lung diseases in chest radiography, they can miss lung nodules that overlap with ribs or clavicles. Dual-energy subtraction (DES) imaging performs well because it produces soft-tissue images in which the bone components of the radiograph are largely suppressed while the visibility of nodules and lung vessels is maintained. However, most routinely available X-ray machines do not have a DES function. We therefore present a data-driven decomposition model that provides a virtual DES function, decomposing a single conventional chest radiograph into soft-tissue and bone images. METHODS: For a given chest radiograph, similar chest radiographs with corresponding DES soft-tissue and bone images are selected from the training database as exemplars for decomposition. The correspondence fields between the observed chest radiograph and the exemplars are solved by a hierarchically dense matching algorithm. Nonparametric priors of the soft-tissue and bone components are then constructed by sampling image patches from the selected soft-tissue and bone images according to the correspondence fields. Finally, these nonparametric priors are integrated into our decomposition model, whose energy function is efficiently optimized by an iteratively reweighted least-squares (IRLS) scheme. RESULTS: The decomposition method is evaluated on a dataset of posterior-anterior DES radiography (503 cases), as well as on the JSRT dataset. The proposed method produces soft-tissue and bone images similar to those produced by an actual DES system. CONCLUSIONS: The proposed method can markedly reduce the visibility of bony structures in chest radiographs and shows potential to enhance diagnosis.
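IRLS itself is worth a one-parameter illustration: a robust (L1-style) fit is obtained by repeatedly solving weighted least-squares problems whose weights are the inverse residual magnitudes from the previous iterate. The paper applies IRLS to its full decomposition energy; the scalar toy problem below only demonstrates the mechanism.

```python
# Hedged sketch of iteratively reweighted least squares (IRLS) on a scalar
# toy problem (the paper optimizes a full decomposition energy; this
# one-parameter version only illustrates the mechanism): each iteration
# solves a weighted least-squares problem with weights 1/|residual|.

def irls_slope(xs, ys, iters=30, eps=1e-6):
    a = 0.0
    for _ in range(iters):
        w = [1.0 / max(abs(y - a * x), eps) for x, y in zip(xs, ys)]
        a = (sum(wi * x * y for wi, x, y in zip(w, xs, ys)) /
             sum(wi * x * x for wi, x in zip(w, xs)))
    return a

# Three inliers on y = 2x plus one gross outlier: IRLS recovers slope ~2,
# where a plain least-squares fit would be dragged far toward the outlier.
slope = irls_slope([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 100.0])
```

The re-weighting is what gives IRLS its robustness: large residuals (like the outlier's) receive small weights, so they barely influence the next solve.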

9.
IEEE J Biomed Health Inform ; 22(3): 842-851, 2018 05.
Article in English | MEDLINE | ID: mdl-28368835

ABSTRACT

Lung field segmentation in chest radiographs (CXRs) is an essential preprocessing step for automatically analyzing such images. We present a lung field segmentation method built on a high-quality boundary map detected by an efficient modern boundary detector, namely a structured edge detector (SED). The SED is trained beforehand to detect lung boundaries in CXRs with manually outlined lung fields. An ultrametric contour map (UCM) is then derived from the masked and marked boundary map. Finally, the contours with the highest confidence level in the UCM are extracted as lung contours. Our method is evaluated on the public Japanese Society of Radiological Technology database of scanned films. The average Jaccard index of our method is 95.2%, which is comparable with those of other state-of-the-art methods (95.4%). The computation time of our method is less than 0.1 s per CXR when executed on an ordinary laptop. Our method is also validated on CXRs acquired with different digital radiography units. The results demonstrate the generalization of the trained SED model and the usefulness of our method.
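The UCM property the final step relies on can be shown on a toy map: thresholding an ultrametric contour map at level t keeps only boundary pixels whose contour confidence is at least t, so raising t leaves fewer, higher-confidence contours. The tiny map below is a hypothetical illustration, not a real UCM.

```python
# Minimal sketch of UCM thresholding (hypothetical toy map, not a real UCM):
# raising the threshold removes weak contours and keeps the confident ones,
# which is how the highest-confidence lung contours are selected.

def threshold_ucm(ucm, t):
    """Keep boundary pixels whose ultrametric (confidence) value is >= t."""
    return [[1 if v >= t else 0 for v in row] for row in ucm]

def count_boundary(binary):
    return sum(sum(row) for row in binary)

ucm = [[0.0, 0.9, 0.0],
       [0.3, 0.9, 0.3],
       [0.0, 0.9, 0.0]]   # one strong vertical contour, two weak side edges

strong_only = count_boundary(threshold_ucm(ucm, 0.5))  # strong contour only
all_edges = count_boundary(threshold_ucm(ucm, 0.1))    # weak edges included
```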


Subject(s)
Image Processing, Computer-Assisted/methods; Lung/diagnostic imaging; Radiography, Thoracic/methods; Algorithms; Databases, Factual; Humans
10.
Med Image Anal ; 35: 421-433, 2017 01.
Article in English | MEDLINE | ID: mdl-27589577

ABSTRACT

Suppression of bony structures in chest radiographs (CXRs) is potentially useful for radiologists and computer-aided diagnostic schemes. In this paper, we present an effective deep learning method for bone suppression in a single conventional CXR using deep convolutional neural networks (ConvNets) as basic prediction units. The deep ConvNets are adapted to learn the mapping between the gradients of the CXRs and those of the corresponding bone images. We propose a cascade architecture of ConvNets (called CamsNet) to progressively refine the predicted bone gradients, in which the ConvNets work at successively increasing resolutions. The predicted bone gradients at different scales from the CamsNet are fused in a maximum-a-posteriori (MAP) framework to produce the final estimate of the bone image. This bone image is subtracted from the original CXR to produce a soft-tissue image in which the bone components are eliminated. Our method was evaluated on a dataset of 504 real two-exposure dual-energy subtraction chest radiographs (404 cases for training and 100 for testing). The results demonstrate that our method can produce high-quality, high-resolution bone and soft-tissue images. The average relative mean absolute error of the produced bone images and the peak signal-to-noise ratio of the produced soft-tissue images were 3.83% and 38.7 dB, respectively. The average bone suppression ratio of our method was 83.8% for CXRs with pixel sizes of nearly 0.194 mm. Furthermore, we applied the trained CamsNet model to CXRs acquired by various types of X-ray machines, including scanned films, and our method also produced visually appealing bone and soft-tissue images.
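The fusion-then-subtraction step can be sketched with the simplest MAP estimator: under independent Gaussian likelihoods with a flat prior, the MAP fusion of several estimates is their precision-weighted average, and the fused bone image is subtracted from the CXR. The pixel values and precisions below are illustrative numbers, not the CamsNet MAP model, which operates on gradients.

```python
# Hedged sketch of fuse-then-subtract (illustrative numbers, not the CamsNet
# MAP model): multi-scale bone estimates are fused as a precision-weighted
# average (the MAP estimate under independent Gaussian likelihoods with a
# flat prior), and the fused bone image is subtracted from the CXR.

def map_fuse(estimates, precisions):
    """Precision-weighted average of per-scale estimates, pixel by pixel."""
    z = sum(precisions)
    return [sum(p * e[i] for p, e in zip(precisions, estimates)) / z
            for i in range(len(estimates[0]))]

cxr = [100.0, 120.0]
bone_scales = [[40.0, 60.0], [44.0, 64.0]]  # bone estimates from two scales
bone = map_fuse(bone_scales, [1.0, 3.0])    # finer scale trusted more
soft_tissue = [c - b for c, b in zip(cxr, bone)]
```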


Subject(s)
Bones; Diagnosis, Computer-Assisted/methods; Neural Networks, Computer; Radiography, Thoracic/methods; Algorithms