Results 1 - 20 of 24
1.
Brief Bioinform ; 24(1)2023 01 19.
Article in English | MEDLINE | ID: mdl-36433784

ABSTRACT

Biomedical multi-modality data (also called multi-omics data) refer to data that span different types and derive from multiple sources in clinical practice (e.g. gene sequences, proteomics and histopathological images), which can provide comprehensive perspectives for cancers and generally improve the performance of survival models. However, the performance improvement of multi-modality survival models may be hindered by two key issues: (1) how to learn and fuse modality-sharable and modality-individual representations from multi-modality data; (2) how to explore the potential risk-aware characteristics in each risk subgroup, which is beneficial to risk stratification and prognosis evaluation. Additionally, learning-based survival models generally involve numerous hyper-parameters, which require time-consuming tuning and might result in a suboptimal solution. In this paper, we propose an adaptive risk-aware sharable and individual subspace learning method for cancer survival analysis. The proposed method jointly learns sharable and individual subspaces from multi-modality data, while two auxiliary terms (i.e. intra-modality complementarity and inter-modality incoherence) are developed to preserve the complementary and distinctive properties of each modality. Moreover, it is equipped with a grouping co-expression constraint for obtaining risk-aware representations and preserving local consistency. Furthermore, an adaptive-weighted strategy is employed to efficiently estimate crucial parameters during the training stage. Experimental results on three public datasets demonstrate the superiority of our proposed model.
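As an illustration of the sharable/individual subspace idea described above, the sketch below sets up shared and modality-individual projections for two modalities with an agreement term and an incoherence (decorrelation) penalty. The projection sizes, weighting, and loss terms are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def subspace_losses(X1, X2, P_share, P1_ind, P2_ind, lam=1e-2):
    """Illustrative sharable/individual subspace terms for two modalities.

    X1, X2:        (n, d1), (n, d2) modality feature matrices
    P_share:       dict of (d1, k) / (d2, k) projections into a shared subspace
    P1_ind, P2_ind: (d1, k) / (d2, k) projections into individual subspaces
    """
    S1, S2 = X1 @ P_share["m1"], X2 @ P_share["m2"]   # shared representations
    I1, I2 = X1 @ P1_ind, X2 @ P2_ind                 # individual representations

    # Encourage the two shared views to agree (modality-sharable part).
    agree = ((S1 - S2) ** 2).mean()

    # Incoherence: keep shared and individual parts decorrelated per modality.
    incoh = (S1.T @ I1).pow(2).mean() + (S2.T @ I2).pow(2).mean()

    return agree + lam * incoh

# usage sketch with random data
n, d1, d2, k = 32, 100, 80, 10
X1, X2 = torch.randn(n, d1), torch.randn(n, d2)
P = {"m1": torch.randn(d1, k, requires_grad=True),
     "m2": torch.randn(d2, k, requires_grad=True)}
P1 = torch.randn(d1, k, requires_grad=True)
P2 = torch.randn(d2, k, requires_grad=True)
subspace_losses(X1, X2, P, P1, P2).backward()
```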


Subject(s)
Machine Learning; Neoplasms; Humans; Neoplasms/genetics; Survival Analysis
2.
Neuroimage ; 295: 120652, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38797384

ABSTRACT

Accurate processing and analysis of non-human primate (NHP) brain magnetic resonance imaging (MRI) plays an indispensable role in understanding brain evolution, development, aging, and diseases. Despite the accumulation of diverse NHP brain MRI datasets at various developmental stages and from various imaging sites/scanners, existing computational tools designed for human MRI typically perform poorly on NHP data, due to huge differences in brain sizes, morphologies, and imaging appearances across species, sites, and ages, highlighting the need for NHP-specialized MRI processing tools. To address this issue, in this paper, we present a robust, generic, and fully automated computational pipeline, called the non-human primates Brain Extraction and Segmentation Toolbox (nBEST), whose main functionality includes brain extraction, non-cerebrum removal, and tissue segmentation. Building on cutting-edge deep learning techniques, employing lifelong learning to flexibly integrate data from diverse NHP populations, and innovatively constructing a 3D U-NeXt architecture, nBEST can robustly handle structural NHP brain MR images across species, sites, and developmental stages (from neonates to the elderly). We extensively validated nBEST on, to our knowledge, the largest assembled dataset in NHP brain studies, encompassing 1,469 scans from 11 species (e.g., rhesus macaques, cynomolgus macaques, chimpanzees, marmosets, squirrel monkeys, etc.) across 23 independent datasets. nBEST outperforms alternative tools in precision, applicability, robustness, comprehensiveness, and generalizability, greatly benefiting downstream longitudinal, cross-sectional, and cross-species quantitative analyses. We have made nBEST an open-source toolbox (https://github.com/TaoZhong11/nBEST) and are committed to its continual refinement through lifelong learning on incoming data to contribute to the research field.


Subject(s)
Brain; Deep Learning; Magnetic Resonance Imaging; Animals; Brain/diagnostic imaging; Brain/anatomy & histology; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Macaca mulatta; Neuroimaging/methods; Pan troglodytes/anatomy & histology; Aging/physiology
3.
Radiol Med ; 128(5): 509-519, 2023 May.
Article in English | MEDLINE | ID: mdl-37115392

ABSTRACT

BACKGROUND: Accurate preoperative clinical staging of gastric cancer helps determine therapeutic strategies. However, no multi-category grading models for gastric cancer have been established. This study aimed to develop multi-modal (CT/EHRs) artificial intelligence (AI) models for predicting tumor stages and optimal treatment indication based on preoperative CT images and electronic health records (EHRs) in patients with gastric cancer. METHODS: This retrospective study enrolled 602 patients with a pathological diagnosis of gastric cancer from Nanfang Hospital and divided them into training (n = 452) and validation (n = 150) sets. A total of 1326 features were extracted: 1316 radiomic features from the 3D CT images and 10 clinical parameters from the EHRs. Four multi-layer perceptrons (MLPs), whose input was the combination of radiomic features and clinical parameters, were automatically learned with a neural architecture search (NAS) strategy. RESULTS: Two two-layer MLPs identified by the NAS approach were employed to predict tumor stage and showed greater discrimination, with average accuracies of 0.646 for the five T stages and 0.838 for the four N stages, compared with 0.543 (P = 0.034) and 0.468 (P = 0.021) for traditional methods, respectively. Furthermore, our models achieved high prediction accuracy for the indication of endoscopic resection and preoperative neoadjuvant chemotherapy, with AUC values of 0.771 and 0.661, respectively. CONCLUSIONS: Our multi-modal (CT/EHRs) artificial intelligence models generated with the NAS approach have high accuracy for predicting tumor stage and the optimal treatment regimen and timing, which could help radiologists and gastroenterologists improve diagnostic and treatment efficiency.
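A minimal sketch of the kind of two-layer MLP the NAS procedure ultimately selects, operating on the concatenation of 1316 radiomic features and 10 clinical parameters; the hidden width and output size are placeholders, since the paper learns the architecture automatically.

```python
import torch
import torch.nn as nn

class StageMLP(nn.Module):
    """Two-layer MLP over concatenated radiomic + clinical features.
    The hidden width is a placeholder; the paper searches it with NAS."""
    def __init__(self, n_radiomics=1316, n_clinical=10, hidden=128, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_radiomics + n_clinical, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),   # e.g. five T stages
        )

    def forward(self, radiomics, clinical):
        return self.net(torch.cat([radiomics, clinical], dim=1))

# usage sketch on a random mini-batch
model = StageMLP()
logits = model(torch.randn(4, 1316), torch.randn(4, 10))
```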


Subject(s)
Stomach Neoplasms; Humans; Stomach Neoplasms/diagnostic imaging; Stomach Neoplasms/surgery; Stomach Neoplasms/drug therapy; Retrospective Studies; Artificial Intelligence; Neoadjuvant Therapy
4.
Neuroimage ; 227: 117649, 2021 02 15.
Article in English | MEDLINE | ID: mdl-33338616

ABSTRACT

As non-human primates, macaques have a close phylogenetic relationship to human beings and have been proven to be a valuable and widely used animal model in human neuroscience research. Accurate skull stripping (aka. brain extraction) of brain magnetic resonance imaging (MRI) is a crucial prerequisite in neuroimaging analysis of macaques. Most of the current skull stripping methods can achieve satisfactory results for human brains, but when applied to macaque brains, especially during early brain development, the results are often unsatisfactory. In fact, the early dynamic, regionally-heterogeneous development of macaque brains, accompanied by poor and age-related contrast between different anatomical structures, poses significant challenges for accurate skull stripping. To overcome these challenges, we propose a fully-automated framework to effectively fuse the age-specific intensity information and domain-invariant prior knowledge as important guiding information for robust skull stripping of developing macaques from 0 to 36 months of age. Specifically, we generate Signed Distance Map (SDM) and Center of Gravity Distance Map (CGDM) based on the intermediate segmentation results as guidance. Instead of using local convolution, we fuse all information using the Dual Self-Attention Module (DSAM), which can capture global spatial and channel-dependent information of feature maps. To extensively evaluate the performance, we adopt two relatively-large challenging MRI datasets from rhesus macaques and cynomolgus macaques, respectively, with a total of 361 scans from two different scanners with different imaging protocols. We perform cross-validation by using one dataset for training and the other one for testing. Our method outperforms five popular brain extraction tools and three deep-learning-based methods on cross-source MRI datasets without any transfer learning.
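The Signed Distance Map used as guidance can be computed from a binary segmentation with standard distance transforms; a minimal sketch, assuming the common sign convention (positive outside, negative inside):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Signed distance map of a binary 3D mask: positive outside the object,
    negative inside, zero-crossing at the boundary. A generic construction,
    not necessarily the paper's exact definition."""
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask)  # distance from background voxels to the object
    inside = distance_transform_edt(mask)    # distance from object voxels to the background
    return outside - inside

# usage sketch on a toy volume
vol = np.zeros((32, 32, 32), dtype=bool)
vol[8:24, 8:24, 8:24] = True
sdm = signed_distance_map(vol)
```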


Subject(s)
Brain Mapping/methods; Brain/anatomy & histology; Deep Learning; Image Processing, Computer-Assisted/methods; Animals; Macaca; Magnetic Resonance Imaging
5.
Bioinformatics ; 36(9): 2888-2895, 2020 05 01.
Article in English | MEDLINE | ID: mdl-31985775

ABSTRACT

MOTIVATION: As a highly heterogeneous disease, clear cell renal cell carcinoma (ccRCC) has quite variable clinical behaviors. Prognostic biomarkers play a crucial role in stratifying patients suffering from ccRCC to avoid over- and under-treatment. Studies based on hand-crafted features and single-modal data have been widely conducted to predict the prognosis of ccRCC. However, these experience-dependent methods, neglecting the synergy among multimodal data, have limited capacity to perform accurate prediction. Inspired by complementary information among multimodal data and the successful application of convolutional neural networks (CNNs) in medical image analysis, a novel framework was proposed to improve prediction performance. RESULTS: We proposed a cross-modal feature-based integrative framework, in which deep features extracted from computed tomography/histopathological images by using CNNs were combined with eigengenes generated from functional genomic data, to construct a prognostic model for ccRCC. Results showed that our proposed model can stratify patients into high- and low-risk subgroups with a significant difference (P-value < 0.05) and outperforms models based on single-modality features in the independent testing cohort [C-index, 0.808 (0.728-0.888)]. In addition, we explored the relationship between deep image features and eigengenes, and attempted to explain deep image features from the perspective of genomic data. Notably, the integrative framework is applicable to prognosis prediction for other cancers with matched multimodal data. AVAILABILITY AND IMPLEMENTATION: https://github.com/zhang-de-lab/zhang-lab?from=singlemessage. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
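A minimal sketch of the integration step described above: concatenating CNN-derived image features with genomic eigengenes and fitting a Cox proportional hazards model, here with the lifelines package; the feature values, column names, and penalizer are synthetic placeholders, not the paper's pipeline.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 100
deep_feats = rng.normal(size=(n, 8))    # stand-ins for CNN image features
eigengenes = rng.normal(size=(n, 4))    # stand-ins for genomic eigengenes

df = pd.DataFrame(np.hstack([deep_feats, eigengenes]),
                  columns=[f"f{i}" for i in range(12)])
df["time"] = rng.exponential(scale=365, size=n)   # survival/follow-up time (days)
df["event"] = rng.integers(0, 2, size=n)          # 1 = death observed, 0 = censored

cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="time", event_col="event")
risk = cph.predict_partial_hazard(df)             # higher value = higher estimated risk
```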


Subject(s)
Carcinoma, Renal Cell; Kidney Neoplasms; Carcinoma, Renal Cell/diagnostic imaging; Carcinoma, Renal Cell/genetics; Genome; Humans; Kidney Neoplasms/diagnostic imaging; Kidney Neoplasms/genetics; Neural Networks, Computer; Tomography, X-Ray Computed
6.
Clin Gastroenterol Hepatol ; 18(13): 2998-3007.e5, 2020 12.
Article in English | MEDLINE | ID: mdl-32205218

ABSTRACT

BACKGROUND & AIMS: Noninvasive and accurate methods are needed to identify patients with clinically significant portal hypertension (CSPH). We investigated the ability of deep convolutional neural network (CNN) analysis of computed tomography (CT) or magnetic resonance (MR) images to identify patients with CSPH. METHODS: We collected liver and spleen images from patients who underwent contrast-enhanced CT or MR analysis within 14 days of transjugular catheterization for hepatic venous pressure gradient measurement. The CT cohort comprised participants with cirrhosis in the CHESS1701 study, performed at 4 university hospitals in China from August 2016 through September 2017. The MR cohort comprised participants with cirrhosis in the CHESS1802 study, performed at 8 university hospitals in China and 1 in Turkey from December 2018 through April 2019. Patients with CSPH were identified as those with a hepatic venous pressure gradient of 10 mm Hg or higher. In total, we analyzed 10,014 liver images and 899 spleen images collected from 679 participants who underwent CT analysis, and 45,554 liver and spleen images from 271 participants who underwent MR analysis. For each cohort, participants were shuffled and then sampled randomly and equiprobably six times into training, validation, and test data sets (ratio, 3:1:1). Therefore, a total of 6 deep CNN models were developed for each cohort for identification of CSPH. RESULTS: The CT-based CNN analysis identified patients with CSPH with an area under the receiver operating characteristic curve (AUC) value of 0.998 in the training set (95% CI, 0.996-1.000), an AUC of 0.912 in the validation set (95% CI, 0.854-0.971), and an AUC of 0.933 (95% CI, 0.883-0.984) in the test data sets. The MR-based CNN analysis identified patients with CSPH with an AUC of 1.000 in the training set (95% CI, 0.999-1.000), an AUC of 0.924 in the validation set (95% CI, 0.833-1.000), and an AUC of 0.940 in the test data set (95% CI, 0.880-0.999). When the model development procedures were repeated 6 times, AUC values for all CNN analyses were 0.888 or greater, with no significant differences between rounds (P > .05). CONCLUSIONS: We developed a deep CNN to analyze CT or MR images of the liver and spleen from patients with cirrhosis that identifies patients with CSPH with an AUC value of 0.9. This provides a noninvasive and rapid method for detection of CSPH (ClinicalTrials.gov numbers: NCT03138915 and NCT03766880).
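The repeated 3:1:1 splitting and AUC evaluation protocol can be sketched as below; a logistic regression stands in for the deep CNN, and all data are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))            # stand-in image-derived features
y = rng.integers(0, 2, size=300)          # 1 = CSPH (HVPG >= 10 mm Hg), toy labels

aucs = []
for seed in range(6):                     # six repeated random splits
    # 3:1:1 -> hold out 20% as test, then 25% of the remainder as validation
    X_tmp, X_te, y_tmp, y_te = train_test_split(
        X, y, test_size=0.2, random_state=seed, stratify=y)
    X_tr, X_va, y_tr, y_va = train_test_split(
        X_tmp, y_tmp, test_size=0.25, random_state=seed, stratify=y_tmp)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # CNN stand-in
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

print(np.mean(aucs))
```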


Subject(s)
Hypertension, Portal; Humans; Hypertension, Portal/complications; Hypertension, Portal/diagnosis; Liver Cirrhosis/complications; Liver Cirrhosis/diagnosis; Neural Networks, Computer; Portal Pressure
7.
Eur Radiol ; 29(3): 1074-1082, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30116959

ABSTRACT

OBJECTIVE: To develop and evaluate a radiomics nomogram for differentiating the malignant risk of gastrointestinal stromal tumours (GISTs). METHODS: A total of 222 patients (primary cohort: n = 130, our centre; external validation cohort: n = 92, two other centres) with pathologically diagnosed GISTs were enrolled. A Relief algorithm was used to select the feature subset with the best distinguishing characteristics and to establish a radiomics model with a support vector machine (SVM) classifier for malignant risk differentiation. Determinant clinical characteristics and subjective CT features were assessed to separately construct a corresponding model. The models showing statistical significance in a multivariable logistic regression analysis were used to develop a nomogram. The diagnostic performance of these models was evaluated using ROC curves. Further calibration of the nomogram was evaluated by calibration curves. RESULTS: The generated radiomics model had an AUC value of 0.867 (95% CI 0.803-0.932) in the primary cohort and 0.847 (95% CI 0.765-0.930) in the external cohort. In the entire cohort, the AUCs for the radiomics model, subjective CT findings model, clinical index model and radiomics nomogram were 0.858 (95% CI 0.807-0.908), 0.774 (95% CI 0.713-0.835), 0.759 (95% CI 0.697-0.821) and 0.867 (95% CI 0.818-0.915), respectively. The nomogram showed good calibration. CONCLUSIONS: This radiomics nomogram predicted the malignant potential of GISTs with excellent accuracy and may be used as an effective tool to guide preoperative clinical decision-making. KEY POINTS: • CT-based radiomics model can differentiate low- and high-malignant-potential GISTs with satisfactory accuracy compared with subjective CT findings and clinical indexes. • Radiomics nomogram integrated with the radiomics signature, subjective CT findings and clinical indexes can achieve individualised risk prediction with improved diagnostic performance. • This study might provide significant and valuable background information for further studies such as response evaluation of neoadjuvant imatinib and recurrence risk prediction.
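A minimal sketch of the Relief-plus-SVM modeling step, assuming the scikit-rebate implementation of ReliefF; the number of selected features, neighbor count, and SVM settings are placeholders rather than the paper's tuned values.

```python
import numpy as np
from skrebate import ReliefF                  # scikit-rebate package (assumed dependency)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(130, 200))               # stand-in radiomic feature matrix
y = rng.integers(0, 2, size=130)              # 1 = high malignant potential (toy labels)

# Relief-based feature selection followed by an SVM classifier.
model = make_pipeline(
    StandardScaler(),
    ReliefF(n_features_to_select=20, n_neighbors=10),
    SVC(kernel="rbf", probability=True),
)
model.fit(X, y)
scores = model.predict_proba(X)[:, 1]         # malignancy risk scores
```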


Subject(s)
Algorithms; Gastrointestinal Stromal Tumors/diagnosis; Imaging, Three-Dimensional/methods; Neoplasm Grading/methods; Nomograms; Tomography, X-Ray Computed/methods; Diagnosis, Differential; Female; Gastrointestinal Stromal Tumors/classification; Gastrointestinal Stromal Tumors/surgery; Humans; Male; Middle Aged; Preoperative Period; ROC Curve; Support Vector Machine
8.
Comput Med Imaging Graph ; 116: 102404, 2024 May 25.
Article in English | MEDLINE | ID: mdl-38870599

ABSTRACT

Magnetic Resonance Imaging (MRI) plays a pivotal role in the accurate measurement of brain subcortical structures in macaques, which is crucial for unraveling the complexities of brain structure and function, thereby enhancing our understanding of neurodegenerative diseases and brain development. However, due to significant differences in brain size, structure, and imaging characteristics between humans and macaques, computational tools developed for human neuroimaging studies often encounter obstacles when applied to macaques. In this context, we propose an Anatomy Attentional Fusion Network (AAF-Net), which integrates multimodal MRI data with anatomical constraints in a multi-scale framework to address the challenges posed by the dynamic development, regional heterogeneity, and age-related size variations of the juvenile macaque brain, thus achieving precise subcortical segmentation. Specifically, we generate a Signed Distance Map (SDM) based on the initial rough segmentation of the subcortical region by a network as an anatomical constraint, providing comprehensive information on positions, structures, and morphology. Then we construct AAF-Net to fully fuse the SDM anatomical constraints and multimodal images for refined segmentation. To thoroughly evaluate the performance of our proposed tool, over 700 macaque MRIs from 19 datasets were used in this study. Specifically, we employed two manually labeled longitudinal macaque datasets to develop the tool and complete four-fold cross-validations. Furthermore, we incorporated various external datasets to demonstrate the proposed tool's generalization capabilities and promise in brain development research. We have made this tool available as an open-source resource at https://github.com/TaoZhong11/Macaque_subcortical_segmentation for direct application.

9.
IEEE Trans Pattern Anal Mach Intell ; 45(6): 7577-7594, 2023 06.
Article in English | MEDLINE | ID: mdl-36383577

ABSTRACT

Current survival analysis of cancers confronts two key issues. While comprehensive perspectives provided by data from multiple modalities often promote the performance of survival models, data with missing modalities at the testing phase are ubiquitous in clinical scenarios, which makes multi-modality approaches inapplicable. Additionally, incomplete observations (i.e., censored instances) pose a unique challenge for survival analysis; to tackle this, some models have been proposed based on strict assumptions or attribute distributions, which may limit their applicability. In this paper, we present a mutual-assistance learning paradigm for standalone mono-modality survival analysis of cancers. The mutual assistance implies the cooperation of multiple components and embodies three aspects: 1) it leverages the knowledge of multi-modality data to guide the representation learning of an individual modality via mutual-assistance similarity and geometry constraints; 2) it formulates mutual-assistance regression and ranking functions independent of strong hypotheses to estimate the relative risk, in which a bias vector is introduced to efficiently cope with the censoring problem; 3) it integrates representation learning and survival modeling into a unified mutual-assistance framework for alleviating the requirement of attribute distributions. Extensive experiments on several datasets demonstrate that our method can significantly improve the performance of mono-modality survival models.
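The censoring-aware ranking idea can be illustrated with a generic pairwise hinge loss over comparable pairs (a subject with an observed event and a shorter time should receive a higher risk score); this is a sketch of the general technique, not the paper's mutual-assistance formulation with its bias vector.

```python
import torch

def censored_ranking_loss(risk, time, event, margin=0.1):
    """Pairwise hinge ranking loss for survival data.

    A pair (i, j) is comparable when subject i has an observed event and
    t_i < t_j; the model should then assign risk_i > risk_j.
    """
    t_i, t_j = time.unsqueeze(1), time.unsqueeze(0)
    comparable = (event.unsqueeze(1) == 1) & (t_i < t_j)
    diff = risk.unsqueeze(1) - risk.unsqueeze(0)          # risk_i - risk_j
    losses = torch.clamp(margin - diff, min=0.0)[comparable]
    return losses.mean() if losses.numel() else risk.sum() * 0.0

# usage sketch
risk = torch.randn(8, requires_grad=True)
time = torch.rand(8) * 100
event = torch.randint(0, 2, (8,))
censored_ranking_loss(risk, time, event).backward()
```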


Subject(s)
Algorithms; Neoplasms; Humans; Survival Analysis; Neoplasms/diagnostic imaging; Neoplasms/therapy; Machine Learning
10.
IEEE Trans Neural Netw Learn Syst ; 34(7): 3737-3750, 2023 Jul.
Article in English | MEDLINE | ID: mdl-34596560

ABSTRACT

The Cox proportional hazards model has been widely applied to cancer prognosis prediction. Nowadays, multi-modal data, such as histopathological images and gene data, have advanced this field by providing histologic phenotype and genotype information. However, how to efficiently fuse and select the complementary information of high-dimensional multi-modal data remains challenging for the Cox model, as it is generally not equipped with a feature fusion/selection mechanism. Many previous studies typically perform feature fusion/selection in the original feature space before Cox modeling. Alternatively, learning a latent shared feature space that is tailored for the Cox model and simultaneously maintains sparsity is desirable. In addition, existing Cox-based models commonly pay little attention to the actual length of the observed time, which may help boost the model's performance. In this article, we propose a novel Cox-driven multi-constraint latent representation learning framework for prognosis analysis with multi-modal data. Specifically, for efficient feature fusion, a multi-modal latent space is learned via a bi-mapping approach under ranking and regression constraints. The ranking constraint utilizes the log-partial likelihood of the Cox model to induce learning discriminative representations in a task-oriented manner. Meanwhile, the representations also benefit from the regression constraint, which imposes the supervision of specific survival time on representation learning. To improve generalization and alleviate overfitting, we further introduce similarity and sparsity constraints to encourage extra consistency and sparseness. Extensive experiments on three datasets acquired from The Cancer Genome Atlas (TCGA) demonstrate that the proposed method is superior to state-of-the-art Cox-based models.
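The ranking constraint built on the Cox log-partial likelihood can be written compactly in PyTorch; the sketch below is the standard negative partial log-likelihood with naive tie handling, shown only to illustrate the constraint, not the paper's full multi-constraint framework.

```python
import torch

def cox_partial_log_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood over a batch.
    `risk` are log-risk scores; ties are handled naively."""
    order = torch.argsort(time, descending=True)        # risk sets become prefixes
    risk, event = risk[order], event[order]
    log_cum_hazard = torch.logcumsumexp(risk, dim=0)    # log sum_{t_j >= t_i} exp(risk_j)
    ll = (risk - log_cum_hazard)[event == 1]
    return -ll.mean()

# usage sketch
risk = torch.randn(16, requires_grad=True)
time = torch.rand(16)
event = torch.randint(0, 2, (16,))
cox_partial_log_likelihood(risk, time, event).backward()
```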


Subject(s)
Learning; Neural Networks, Computer; Generalization, Psychological; Prognosis; Probability
12.
IEEE Trans Med Imaging ; 41(1): 186-198, 2022 01.
Article in English | MEDLINE | ID: mdl-34460368

ABSTRACT

The integrative analysis of complementary phenotype information contained in multi-modality data (e.g., histopathological images and genomic data) has advanced the prognostic evaluation of cancers. However, multi-modality based prognosis analysis confronts two challenges: (1) how to explore the underlying relations inherent in different modalities for learning compact and discriminative multi-modality representations; (2) how to make full use of incomplete multi-modality data for constructing an accurate and robust prognostic model, since complete multi-modality data are not always available. Additionally, many existing multi-modality based prognostic methods commonly ignore relevant clinical variables (e.g., grade and stage), which may provide supplemental information to improve the performance of the model. In this paper, we propose a relation-aware shared representation learning method for prognosis analysis of cancers, which makes full use of clinical information and incomplete multi-modality data. The proposed method learns a multi-modal shared space tailored for the prognostic model via a dual mapping. Within the shared space, it is equipped with relational regularizers to explore the potential relations (i.e., feature-label and feature-feature relations) among multi-modality data for inducing discriminative representations and simultaneously obtaining extra sparsity for alleviating overfitting. Moreover, it regresses and incorporates multiple auxiliary clinical attributes with dynamic coefficients to improve performance. Furthermore, in the training stage, a partial mapping strategy is employed to extend and train a more reliable model with incomplete multi-modality data. We have evaluated our method on three public datasets derived from The Cancer Genome Atlas (TCGA) project, and the experimental results demonstrate the superior performance of the proposed method.


Subject(s)
Genomics; Neoplasms; Humans; Neoplasms/diagnostic imaging; Prognosis
13.
IEEE Trans Med Imaging ; 41(2): 476-490, 2022 02.
Article in English | MEDLINE | ID: mdl-34582349

ABSTRACT

Deep learning methods, especially convolutional neural networks, have been successfully applied to lesion segmentation in breast ultrasound (BUS) images. However, pattern complexity and intensity similarity between the surrounding tissues (i.e., background) and lesion regions (i.e., foreground) bring challenges for lesion segmentation. Although rich texture information is contained in the background, very few methods have tried to explore and exploit background-salient representations to assist foreground segmentation. Additionally, other characteristics of BUS images, i.e., (1) low-contrast appearance and blurry boundaries, and (2) significant shape and position variation of lesions, further increase the difficulty of accurate lesion segmentation. In this paper, we present a saliency-guided morphology-aware U-Net (SMU-Net) for lesion segmentation in BUS images. The SMU-Net is composed of a main network with an additional middle stream and an auxiliary network. Specifically, we first generate saliency maps, which incorporate both low-level and high-level image structures, for the foreground and background. These saliency maps are then employed to guide the main network and auxiliary network in respectively learning foreground-salient and background-salient representations. Furthermore, we devise an additional middle stream which consists of background-assisted fusion, shape-aware, edge-aware and position-aware units. This stream receives the coarse-to-fine representations from the main network and auxiliary network, efficiently fuses the foreground-salient and background-salient features, and enhances the network's ability to learn morphological information. Extensive experiments on five datasets demonstrate higher performance and greater robustness to dataset scale than several state-of-the-art deep learning approaches for breast lesion segmentation in ultrasound images.


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Female; Humans; Image Processing, Computer-Assisted/methods; Ultrasonography; Ultrasonography, Mammary
14.
Phys Med Biol ; 66(8)2021 04 16.
Article in English | MEDLINE | ID: mdl-33765665

ABSTRACT

Magnetic resonance imaging (MRI) has been widely used in assessing the development of Alzheimer's disease (AD) by providing structural information of disease-associated regions (e.g. atrophic regions). In this paper, we propose a light-weight cross-view hierarchical fusion network (CvHF-net), consisting of local patch and global subject subnets, for joint localization and identification of the discriminative local patches and regions in whole-brain MRI, upon which feature representations are then jointly learned and fused to construct hierarchical classification models for AD diagnosis. Firstly, based on the extracted class-discriminative 3D patches, we employ the local patch subnets to utilize multiple 2D views to represent 3D patches by using an attention-aware hierarchical fusion structure in a divide-and-conquer manner. Since different local patches have varying abilities for AD identification, the global subject subnet is developed to bias the allocation of available resources towards the most informative parts among these local patches to obtain global information for AD identification. In addition, an instance-declined pruning algorithm is embedded in the CvHF-net to adaptively select the most discriminative patches in a task-driven manner. The proposed method was evaluated on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and the experimental results show that it achieves good performance on AD diagnosis.


Subject(s)
Alzheimer Disease; Cognitive Dysfunction; Algorithms; Alzheimer Disease/diagnostic imaging; Brain/diagnostic imaging; Cognitive Dysfunction/diagnostic imaging; Humans; Magnetic Resonance Imaging; Neuroimaging
15.
Med Image Anal ; 73: 102160, 2021 10.
Article in English | MEDLINE | ID: mdl-34303890

ABSTRACT

The new subtypes of diffuse gliomas are recognized by the World Health Organization (WHO) on the basis of genotypes, e.g., isocitrate dehydrogenase and chromosome arms 1p/19q, in addition to the histologic phenotype. Glioma subtype identification can provide valid guidance for both risk-benefit assessment and clinical decision-making. Feature representations of gliomas in magnetic resonance imaging (MRI) have been widely used to reveal underlying subtype status. However, since gliomas are highly heterogeneous tumors with quite variable imaging phenotypes, learning discriminative feature representations in MRI for gliomas remains challenging. In this paper, we propose a deep cross-view co-regularized representation learning framework for glioma subtype identification, in which view representation learning and multiple constraints are integrated into a unified paradigm. Specifically, we first learn latent view-specific representations based on cross-view images generated from MRI via a bi-directional mapping connecting the original imaging space and the latent space, and a view-correlated regularizer and an output-consistent regularizer in the latent space are employed to explore view correlation and enforce view consistency, respectively. We further learn view-sharable representations, which can explore complementary information of multiple views, by projecting the view-specific representations into a holistically shared space, enhanced via an adversarial learning strategy. Finally, the view-specific and view-sharable representations are incorporated for identifying glioma subtype. Experimental results on multi-site datasets demonstrate that the proposed method outperforms several state-of-the-art methods in detecting glioma subtype status.


Subject(s)
Brain Neoplasms; Glioma; Brain Neoplasms/diagnostic imaging; Glioma/diagnostic imaging; Humans; Isocitrate Dehydrogenase; Magnetic Resonance Imaging
16.
IEEE Trans Med Imaging ; 40(6): 1632-1645, 2021 06.
Article in English | MEDLINE | ID: mdl-33651685

ABSTRACT

The fusion of multi-modal data (e.g., magnetic resonance imaging (MRI) and positron emission tomography (PET)) has been prevalent for accurate identification of Alzheimer's disease (AD) by providing complementary structural and functional information. However, most of the existing methods simply concatenate multi-modal features in the original space and ignore their underlying associations, which may provide more discriminative characteristics for AD identification. Meanwhile, how to overcome the overfitting caused by high-dimensional multi-modal data remains an open challenge. To this end, we propose a relation-induced multi-modal shared representation learning method for AD diagnosis. The proposed method integrates representation learning, dimension reduction, and classifier modeling into a unified framework. Specifically, the framework first obtains multi-modal shared representations by learning a bi-directional mapping between the original space and a shared space. Within this shared space, we utilize several relational regularizers (including feature-feature, feature-label, and sample-sample regularizers) and auxiliary regularizers to encourage learning the underlying associations inherent in multi-modal data and alleviate overfitting, respectively. Next, we project the shared representations into the target space for AD diagnosis. To validate the effectiveness of our proposed approach, we conduct extensive experiments on two independent datasets (i.e., ADNI-1 and ADNI-2), and the experimental results demonstrate that our proposed method outperforms several state-of-the-art methods.


Subject(s)
Alzheimer Disease; Alzheimer Disease/diagnostic imaging; Humans; Magnetic Resonance Imaging; Neuroimaging; Positron-Emission Tomography; Tomography, X-Ray Computed
17.
Med Phys ; 48(8): 4262-4278, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34053092

ABSTRACT

PURPOSE: Breast ultrasound (BUS) image segmentation plays a crucial role in computer-aided diagnosis systems for BUS examination, which help improve the accuracy of breast cancer diagnosis. However, accurate segmentation remains challenging owing to poor image quality and large variations in the sizes, shapes, and locations of breast lesions. In this paper, we propose a new convolutional neural network with coarse-to-fine feature fusion to address the aforementioned challenges. METHODS: The proposed fusion network consists of an encoder path, a decoder path, and a core fusion stream path (FSP). The encoder path is used to capture the context information, and the decoder path is used for localization prediction. The FSP is designed to generate beneficial aggregate feature representations (i.e., various-sized lesion features, aggregated coarse-to-fine information, and high-resolution edge characteristics) from the encoder and decoder paths, which are eventually used for accurate breast lesion segmentation. To better retain the boundary information and alleviate the effect of image noise, we input the superpixel image along with the original image to the fusion network. Furthermore, a weighted-balanced loss function was designed to address the problem of lesion regions having different sizes. We then conducted exhaustive experiments on three public BUS datasets to evaluate the proposed network. RESULTS: The proposed method outperformed state-of-the-art (SOTA) segmentation methods on the three public BUS datasets, with average dice similarity coefficients of 84.71(±1.07), 83.76(±0.83), and 86.52(±1.52), average intersection-over-union values of 76.34(±1.50), 75.70(±0.98), and 77.86(±2.07), average sensitivities of 86.66(±1.82), 85.21(±1.98), and 87.21(±2.51), average specificities of 97.92(±0.46), 98.57(±0.19), and 99.42(±0.21), and average accuracies of 95.89(±0.57), 97.17(±0.3), and 98.51(±0.3). CONCLUSIONS: The proposed fusion network could effectively segment lesions from BUS images, thereby presenting a new feature fusion strategy for the challenging task of segmentation, while outperforming the SOTA segmentation methods. The code is publicly available at https://github.com/mniwk/CF2-NET.
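The weighted-balanced loss is not specified in the abstract; as a hedged illustration of handling lesion-size imbalance, the sketch below combines a positively weighted BCE with a soft Dice term (the weight value is a placeholder, not the paper's design).

```python
import torch
import torch.nn.functional as F

def weighted_bce_dice(logits, target, pos_weight=5.0, eps=1e-6):
    """Generic imbalance-aware segmentation loss: pixel-wise BCE with a
    positive-class weight plus a soft Dice term."""
    bce = F.binary_cross_entropy_with_logits(
        logits, target, pos_weight=torch.tensor(pos_weight))
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2 * inter + eps) / (union + eps)
    return bce + dice.mean()

# usage sketch: a batch of 2 single-channel 64x64 predictions
logits = torch.randn(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.8).float()
weighted_bce_dice(logits, target).backward()
```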


Subject(s)
Breast Neoplasms; Image Processing, Computer-Assisted; Breast Neoplasms/diagnostic imaging; Diagnosis, Computer-Assisted; Female; Humans; Neural Networks, Computer; Ultrasonography, Mammary
18.
Ann Transl Med ; 9(4): 298, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33708925

ABSTRACT

BACKGROUND: To investigate the feasibility of integrating global radiomics and local deep features based on multi-modal magnetic resonance imaging (MRI) for developing a noninvasive glioma grading model. METHODS: In this study, 567 patients [211 with glioblastomas (GBMs) and 356 with low-grade gliomas (LGGs)] enrolled between May 2006 and September 2018 were divided into training (n=186), validation (n=47), and testing (n=334) cohorts. All patients underwent postcontrast-enhanced T1-weighted and T2 fluid-attenuated inversion recovery MRI scanning. Radiomics and deep features (trained on 8,510 3D patches) were extracted to quantify the global and local information of gliomas, respectively. A kernel fusion-based support vector machine (SVM) classifier was used to integrate these multi-modal features for grading gliomas. The performance of the grading model was assessed using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, the DeLong test, and the t-test. RESULTS: The AUC, sensitivity, and specificity of the model based on the combination of radiomics and deep features were 0.94 [95% confidence interval (CI): 0.85, 0.99], 86% (95% CI: 64%, 97%), and 92% (95% CI: 75%, 99%), respectively, for the validation cohort; and 0.88 (95% CI: 0.84, 0.91), 88% (95% CI: 80%, 93%), and 81% (95% CI: 76%, 86%), respectively, for the independent testing cohort from a local hospital. The developed model outperformed the models based only on either radiomics or deep features (DeLong test, both P<0.001), and was also comparable to the clinical radiologists. CONCLUSIONS: This study demonstrated the feasibility of integrating multi-modal MRI radiomics and deep features to develop a promising noninvasive grading model for gliomas.
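A minimal sketch of kernel fusion for an SVM: separate RBF kernels are computed for the radiomics and deep-feature blocks and mixed with a weight before training a precomputed-kernel SVM. The mixing weight, kernel choice, and data are illustrative assumptions, not the paper's tuned configuration.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_rad_tr, X_deep_tr = rng.normal(size=(80, 50)), rng.normal(size=(80, 32))
X_rad_te, X_deep_te = rng.normal(size=(20, 50)), rng.normal(size=(20, 32))
y_tr = rng.integers(0, 2, size=80)            # 0 = LGG, 1 = GBM (toy labels)

alpha = 0.5                                   # kernel mixing weight (placeholder)
K_tr = alpha * rbf_kernel(X_rad_tr) + (1 - alpha) * rbf_kernel(X_deep_tr)
K_te = (alpha * rbf_kernel(X_rad_te, X_rad_tr)
        + (1 - alpha) * rbf_kernel(X_deep_te, X_deep_tr))

svm = SVC(kernel="precomputed", probability=True).fit(K_tr, y_tr)
proba = svm.predict_proba(K_te)[:, 1]         # probability of high-grade glioma
```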

19.
Bone ; 140: 115561, 2020 11.
Article in English | MEDLINE | ID: mdl-32730939

ABSTRACT

Osteoporosis is a prevalent but underdiagnosed condition. We aimed to develop a deep convolutional neural network (DCNN) model to classify osteopenia and osteoporosis from lumbar spine X-ray images, benchmarked against dual-energy X-ray absorptiometry (DXA) measures. Herein, we developed the DCNN models based on a training dataset comprising 1616 lumbar spine X-ray images from 808 postmenopausal women (aged 50 to 92 years). DXA-derived bone mineral density (BMD) measures were used as the reference standard. We categorized patients into three groups according to the DXA BMD T-score: normal (T ≥ -1.0), osteopenia (-2.5 < T < -1.0), and osteoporosis (T ≤ -2.5). T-scores were calculated using the BMD dataset of young Chinese females aged 20-40 years as a reference. A 3-class DCNN model was trained to classify normal BMD, osteoporosis, and osteopenia. Model performance was tested in a validation dataset (204 images from 102 patients) and two test datasets (396 images from 198 patients and 348 images from 147 patients, respectively) and assessed by receiver operating characteristic (ROC) curve analysis. The results showed that in test dataset 1, the model diagnosing osteoporosis achieved an AUC of 0.767 (95% confidence interval [CI]: 0.701-0.824) with a sensitivity of 73.7% (95% CI: 62.3-83.1), and the model diagnosing osteopenia achieved an AUC of 0.787 (95% CI: 0.723-0.842) with a sensitivity of 81.8% (95% CI: 67.3-91.8). In test dataset 2, the model diagnosing osteoporosis yielded an AUC of 0.726 (95% CI: 0.646-0.796) with a sensitivity of 68.4% (95% CI: 54.8-80.1), and the model diagnosing osteopenia yielded an AUC of 0.810 (95% CI: 0.737-0.870) with a sensitivity of 85.3% (95% CI: 68.9-95.0). Accordingly, a deep learning diagnostic network may have potential for screening osteoporosis and osteopenia based on lumbar spine radiographs. However, further studies are necessary to verify and improve the diagnostic performance of DCNN models.
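The T-score thresholds stated above translate directly into a categorization rule; a trivial sketch:

```python
def bmd_category(t_score: float) -> str:
    """T-score thresholds as used in the study:
    normal (T >= -1.0), osteopenia (-2.5 < T < -1.0), osteoporosis (T <= -2.5)."""
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:
        return "osteopenia"
    return "osteoporosis"

print(bmd_category(-1.8))   # -> "osteopenia"
```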


Subject(s)
Bone Diseases, Metabolic; Deep Learning; Osteoporosis; Absorptiometry, Photon; Bone Density; Bone Diseases, Metabolic/diagnostic imaging; Bone Diseases, Metabolic/epidemiology; Female; Humans; Lumbar Vertebrae/diagnostic imaging; Osteoporosis/diagnostic imaging; Osteoporosis/epidemiology; Retrospective Studies; X-Rays
20.
IEEE J Biomed Health Inform ; 23(3): 1181-1191, 2019 05.
Article in English | MEDLINE | ID: mdl-29993591

ABSTRACT

Predicting malignant potential is one of the most critical components of a computer-aided diagnosis system for gastrointestinal stromal tumors (GISTs). These tumors have been studied only on the basis of subjective computed tomography findings. Among various methodologies, radiomics and deep learning algorithms, specifically convolutional neural networks (CNNs), have recently achieved significant success by delivering state-of-the-art performance in medical image pattern classification and have rapidly become leading methodologies in this field. However, the existing methods generally use radiomics or deep convolutional features independently for pattern classification, which tend to take into account only global or local features, respectively. In this paper, we introduce and evaluate a hybrid structure that includes different features selected with a radiomics model and CNNs and integrates these features for GIST classification. The radiomics model and CNNs are constructed for global radiomics and local convolutional feature selection, respectively. Subsequently, we utilize distinct radiomics and deep convolutional features to perform pattern classification for GISTs. Specifically, we propose a new pooling strategy to assemble the deep convolutional features of 54 three-dimensional patches from the same case and integrate these features with the radiomics features for each case, followed by a random forest classifier. Our method was extensively evaluated using multiple clinical datasets. The classification performance (area under the curve (AUC): 0.882; 95% confidence interval (CI): 0.816-0.947) consistently outperforms that of the independent radiomics (AUC: 0.807; 95% CI: 0.724-0.892) and CNN (AUC: 0.826; 95% CI: 0.795-0.856) approaches.
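A minimal sketch of assembling per-patch deep features into a case-level vector and combining it with radiomics for a random forest; mean pooling stands in for the paper's proposed pooling strategy, and all shapes and data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_cases, n_patches, deep_dim, rad_dim = 60, 54, 128, 40

deep_patch_feats = rng.normal(size=(n_cases, n_patches, deep_dim))  # CNN features per 3D patch
radiomics = rng.normal(size=(n_cases, rad_dim))                     # global radiomic features
y = rng.integers(0, 2, size=n_cases)                                # malignant-potential labels

# Pool the 54 per-patch deep feature vectors into one case-level vector
# (mean pooling here), then concatenate with the radiomics features.
pooled = deep_patch_feats.mean(axis=1)
X = np.hstack([pooled, radiomics])

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
risk = clf.predict_proba(X)[:, 1]
```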


Subject(s)
Gastrointestinal Neoplasms/diagnostic imaging; Gastrointestinal Stromal Tumors/diagnostic imaging; Neural Networks, Computer; Radiographic Image Interpretation, Computer-Assisted/methods; Databases, Factual; Humans; Tomography, X-Ray Computed/methods