Results 1 - 6 of 6
1.
Bioinformatics ; 36(9): 2888-2895, 2020 05 01.
Article in English | MEDLINE | ID: mdl-31985775

ABSTRACT

MOTIVATION: As a highly heterogeneous disease, clear cell renal cell carcinoma (ccRCC) shows quite variable clinical behavior. Prognostic biomarkers play a crucial role in stratifying patients with ccRCC to avoid over- and under-treatment. Studies based on hand-crafted features and single-modality data have been widely conducted to predict the prognosis of ccRCC. However, these experience-dependent methods, which neglect the synergy among multimodal data, have limited capacity to perform accurate prediction. Inspired by the complementary information among multimodal data and the successful application of convolutional neural networks (CNNs) in medical image analysis, a novel framework was proposed to improve prediction performance. RESULTS: We proposed a cross-modal feature-based integrative framework, in which deep features extracted from computed tomography/histopathological images using CNNs were combined with eigengenes generated from functional genomic data, to construct a prognostic model for ccRCC. Results showed that our proposed model can stratify high- and low-risk subgroups with a significant difference (P-value < 0.05) and outperform models based on single-modality features in the independent testing cohort [C-index, 0.808 (0.728-0.888)]. In addition, we explored the relationship between deep image features and eigengenes, and made an attempt to explain deep image features from the perspective of genomic data. Notably, the integrative framework is applicable to prognosis prediction for other cancers with matched multimodal data. AVAILABILITY AND IMPLEMENTATION: https://github.com/zhang-de-lab/zhang-lab? from=singlemessage. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
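A minimal sketch of the cross-modal fusion step described above, assuming per-patient CNN feature vectors and eigengenes are already available; the array shapes, the z-score standardization, and simple concatenation are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def zscore(x):
    """Standardize each feature column; epsilon guards constant columns."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def fuse_modalities(ct_feats, histo_feats, eigengenes):
    """Late fusion: standardize each modality per feature, then
    concatenate per-patient vectors into one cross-modal vector."""
    return np.concatenate(
        [zscore(ct_feats), zscore(histo_feats), zscore(eigengenes)], axis=1
    )

# Toy example: 10 patients with hypothetical feature dimensions
rng = np.random.default_rng(0)
fused = fuse_modalities(rng.normal(size=(10, 32)),   # CT deep features
                        rng.normal(size=(10, 32)),   # histopathology deep features
                        rng.normal(size=(10, 8)))    # eigengenes
```

A survival model (e.g. Cox regression) would then be fit on `fused` to produce the risk scores used for stratification; that step is omitted here.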


Subject(s)
Carcinoma, Renal Cell; Kidney Neoplasms; Carcinoma, Renal Cell/diagnostic imaging; Carcinoma, Renal Cell/genetics; Genome; Humans; Kidney Neoplasms/diagnostic imaging; Kidney Neoplasms/genetics; Neural Networks, Computer; Tomography, X-Ray Computed
2.
Phys Med Biol ; 66(8)2021 04 16.
Article in English | MEDLINE | ID: mdl-33765665

ABSTRACT

Magnetic resonance imaging (MRI) has been widely used to assess the development of Alzheimer's disease (AD) by providing structural information about disease-associated regions (e.g. atrophic regions). In this paper, we propose a light-weight cross-view hierarchical fusion network (CvHF-net), consisting of local patch and global subject subnets, for joint localization and identification of discriminative local patches and regions in whole-brain MRI; feature representations are then jointly learned and fused to construct hierarchical classification models for AD diagnosis. First, based on the extracted class-discriminative 3D patches, we employ the local patch subnets to represent each 3D patch by multiple 2D views, using an attention-aware hierarchical fusion structure in a divide-and-conquer manner. Since different local patches vary in their discriminative ability for AD identification, the global subject subnet is developed to bias the allocation of available resources towards the most informative local patches and thereby obtain global information for AD identification. In addition, an instance-declined pruning algorithm is embedded in the CvHF-net to adaptively select the most discriminative patches in a task-driven manner. The proposed method was evaluated on the AD Neuroimaging Initiative dataset, and the experimental results show that it achieves good performance on AD diagnosis.
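The multi-view representation of 3D patches and the attention-style fusion can be illustrated roughly as follows. The central-slice view extraction and the norm-based attention score are simplifying assumptions standing in for the learned 2D-view encoders and attention modules of CvHF-net:

```python
import numpy as np

def orthogonal_views(patch):
    """Represent a 3D patch (D, H, W) by its three central orthogonal
    2D slices (axial, coronal, sagittal), a cheap multi-view proxy."""
    d, h, w = patch.shape
    return [patch[d // 2, :, :], patch[:, h // 2, :], patch[:, :, w // 2]]

def attention_fuse(view_feats):
    """Softmax-weighted fusion of per-view feature vectors. The feature
    norm serves as a stand-in score for a learned attention scorer."""
    feats = np.stack(view_feats)                 # (n_views, dim)
    scores = np.linalg.norm(feats, axis=1)
    weights = np.exp(scores - scores.max())      # numerically stable softmax
    weights /= weights.sum()
    return weights @ feats                       # fused (dim,) vector
```

In the actual network, each 2D view would first pass through a CNN backbone before fusion; here the raw slices are flattened only to keep the sketch self-contained.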


Subject(s)
Alzheimer Disease; Cognitive Dysfunction; Algorithms; Alzheimer Disease/diagnostic imaging; Brain/diagnostic imaging; Cognitive Dysfunction/diagnostic imaging; Humans; Magnetic Resonance Imaging; Neuroimaging
3.
Ann Transl Med ; 9(4): 298, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33708925

ABSTRACT

BACKGROUND: To investigate the feasibility of integrating global radiomics and local deep features based on multi-modal magnetic resonance imaging (MRI) for developing a noninvasive glioma grading model. METHODS: In this study, 567 patients [211 with glioblastomas (GBMs) and 356 with low-grade gliomas (LGGs)] treated between May 2006 and September 2018 were enrolled and divided into training (n=186), validation (n=47), and testing (n=334) cohorts. All patients underwent postcontrast-enhanced T1-weighted and T2 fluid-attenuated inversion recovery MRI scanning. Radiomics and deep features (trained on 8,510 3D patches) were extracted to quantify the global and local information of gliomas, respectively. A kernel fusion-based support vector machine (SVM) classifier was used to integrate these multi-modal features for grading gliomas. The performance of the grading model was assessed using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, the DeLong test, and the t-test. RESULTS: The AUC, sensitivity, and specificity of the model combining radiomics and deep features were 0.94 [95% confidence interval (CI): 0.85, 0.99], 86% (95% CI: 64%, 97%), and 92% (95% CI: 75%, 99%), respectively, for the validation cohort; and 0.88 (95% CI: 0.84, 0.91), 88% (95% CI: 80%, 93%), and 81% (95% CI: 76%, 86%), respectively, for the independent testing cohort from a local hospital. The developed model outperformed the models based only on radiomics or deep features (DeLong test, both P<0.001), and was also comparable to clinical radiologists. CONCLUSIONS: This study demonstrated the feasibility of integrating multi-modal MRI radiomics and deep features to develop a promising noninvasive grading model for gliomas.
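The kernel fusion idea, computing one kernel per feature family and combining them before SVM training, can be sketched as below. The RBF kernel form, the convex-combination weighting, and all parameter values are assumptions for illustration, not the paper's reported configuration:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) kernel matrix between row-wise sample sets."""
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))  # clip float round-off

def fused_kernel(X_rad, X_deep, Y_rad, Y_deep,
                 gamma_rad=0.1, gamma_deep=0.1, alpha=0.5):
    """Convex combination of one kernel per feature family (radiomics
    and deep); the result stays a valid kernel for SVM training."""
    return (alpha * rbf_kernel(X_rad, Y_rad, gamma_rad)
            + (1.0 - alpha) * rbf_kernel(X_deep, Y_deep, gamma_deep))
```

The fused matrix can then be passed to an SVM configured for a precomputed kernel (e.g. scikit-learn's `SVC(kernel="precomputed")`), with `alpha` and the bandwidths tuned on the validation cohort.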

4.
IEEE J Biomed Health Inform ; 23(3): 1181-1191, 2019 05.
Article in English | MEDLINE | ID: mdl-29993591

ABSTRACT

Predicting malignant potential is one of the most critical components of a computer-aided diagnosis system for gastrointestinal stromal tumors (GISTs). These tumors have so far been studied mainly on the basis of subjective computed tomography findings. Among various methodologies, radiomics and deep learning algorithms, in particular convolutional neural networks (CNNs), have recently been shown to outperform state-of-the-art approaches in medical image pattern classification and have rapidly become leading methodologies in this field. However, existing methods generally use radiomics or deep convolutional features independently for pattern classification, and therefore tend to account only for global or local features, respectively. In this paper, we introduce and evaluate a hybrid structure that selects different features with a radiomics model and CNNs and integrates these features for GIST classification. The radiomics model and the CNNs are constructed for global radiomics and local convolutional feature selection, respectively. Subsequently, we utilize distinct radiomics and deep convolutional features to perform pattern classification for GISTs. Specifically, we propose a new pooling strategy that assembles the deep convolutional features of 54 three-dimensional patches from the same case and integrates these features with the radiomics features for each case, followed by a random forest classifier. Our method was extensively evaluated on multiple clinical datasets. The classification performance [area under the curve (AUC): 0.882; 95% confidence interval (CI): 0.816-0.947] consistently outperformed that of the independent radiomics (AUC: 0.807; 95% CI: 0.724-0.892) and CNN (AUC: 0.826; 95% CI: 0.795-0.856) approaches.
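A hedged sketch of the case-level aggregation step: the 54 per-patch deep feature vectors are pooled into one vector and concatenated with the case's radiomics features before classification. Mean-plus-max pooling and the feature dimensions are illustrative assumptions; the paper's exact pooling strategy is not reproduced here:

```python
import numpy as np

def pool_patch_features(patch_feats):
    """Aggregate per-patch deep features (n_patches, dim) into one
    case-level vector via concatenated mean and max pooling."""
    return np.concatenate([patch_feats.mean(axis=0), patch_feats.max(axis=0)])

def case_vector(patch_feats, radiomics_feats):
    """Concatenate pooled deep features with the case's radiomics
    features, ready for a random forest classifier."""
    return np.concatenate([pool_patch_features(patch_feats), radiomics_feats])
```

A random forest would then be trained on one such vector per case; pooling first keeps the classifier input size fixed regardless of the patch count.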


Subject(s)
Gastrointestinal Neoplasms/diagnostic imaging; Gastrointestinal Stromal Tumors/diagnostic imaging; Neural Networks, Computer; Radiographic Image Interpretation, Computer-Assisted/methods; Databases, Factual; Humans; Tomography, X-Ray Computed/methods
5.
Phys Med Biol ; 63(24): 245014, 2018 Dec 14.
Article in English | MEDLINE | ID: mdl-30523819

ABSTRACT

Breast cancer is the most common malignancy among women. Sentinel lymph node (SLN) status is a crucial prognostic factor for breast cancer. In this paper, we propose an integrated scheme combining deep learning and a bag-of-features (BOF) model for preoperative prediction of SLN metastasis. Specifically, convolutional neural networks (CNNs) are used to extract deep features from three representative 2D orthogonal views of a segmented 3D volume of interest. A BOF model is then used to further encode all the deep features, making them more compact while producing a high-dimensional sparse representation. In particular, a kernel fusion method that assembles all features is proposed to build a discriminative support vector machine (SVM) classifier. The bag-of-deep-features model is evaluated on a diffusion-weighted magnetic resonance imaging (DWI) database of 172 patients, including 74 SLN-positive and 98 SLN-negative cases. The results show that the proposed method achieves an area under the curve (AUC) as high as 0.852 [95% confidence interval (CI): 0.716-0.988] on the test set. These results demonstrate that the proposed model can potentially provide a noninvasive approach for automatically predicting SLN metastasis in patients with breast cancer.
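The BOF encoding step can be illustrated as follows, assuming a codebook (e.g. from k-means over training descriptors) has already been learned. Hard assignment and L1 normalization are common BOF choices, not necessarily the paper's exact scheme:

```python
import numpy as np

def bof_encode(descriptors, codebook):
    """Hard-assign each deep descriptor (n, d) to its nearest codeword
    in the codebook (k, d) and return the L1-normalized occurrence
    histogram, i.e. the case's bag-of-features vector."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    assign = d2.argmin(axis=1)                     # nearest codeword index
    hist = np.bincount(assign, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                       # L1 normalization
```

Each case's histogram is sparse when the codebook is large relative to the number of descriptors, which matches the compact sparse representation described above.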


Subject(s)
Breast Neoplasms/diagnostic imaging; Diffusion Magnetic Resonance Imaging; Lymphatic Metastasis/diagnostic imaging; Neoplasms, Second Primary/diagnostic imaging; Sentinel Lymph Node Biopsy; Adult; Area Under Curve; Breast Neoplasms/pathology; False Positive Reactions; Female; Humans; Image Processing, Computer-Assisted/methods; Middle Aged; Preoperative Period; ROC Curve; Reproducibility of Results; Sentinel Lymph Node/pathology; Support Vector Machine
6.
Adv Mater ; 23(8): 926-52, 2011 Feb 22.
Article in English | MEDLINE | ID: mdl-21031450

ABSTRACT

Although organic light-emitting devices have been commercialized as flat-panel displays since 1997, early devices emitted from singlet excitons only. Electrophosphorescence, which makes full use of both singlet and triplet excitons, has attracted increasing attention since the pioneering work of Forrest, Thompson, and co-workers. In fact, red electrophosphorescent dyes have been used in the sub-displays of commercial mobile phones since 2003. Highly efficient green phosphorescent dyes are now undergoing commercialization. Very recently, blue phosphorescence approaching the theoretical efficiency limit has also been achieved, which may remove the final obstacle to the commercialization of full-color displays and white light sources based on phosphorescent materials. By combining light out-coupling structures with highly efficient phosphors (shown in the table-of-contents image), white emission with an efficiency matching that of fluorescent tubes (90 lm/W) has now been realized. It is possible to tune the color to the true white region by changing to a deep blue emitter and a corresponding wide-gap host and transport material for the blue phosphor. In this article, recent progress in red, green, blue, and white electrophosphorescent materials for OLEDs is reviewed, with special emphasis on blue electrophosphorescent materials.


Subject(s)
Electrical Equipment and Supplies; Electricity; Light; Luminescent Agents/chemistry; Organic Chemicals/chemistry; Color