Results 1 - 3 of 3
1.
Multimed Tools Appl ; : 1-21, 2023 Mar 17.
Article in English | MEDLINE | ID: mdl-37362735

ABSTRACT

Epidermal growth factor receptor (EGFR) status is key to targeted therapy with tyrosine kinase inhibitors in lung cancer. Traditional identification of EGFR mutation status requires biopsy and sequencing, which may not be suitable for patients who cannot undergo biopsy. In this paper, using easily accessible and non-invasive CT images, a residual neural network (ResNet) with a mixed loss based on batch training is proposed for identifying EGFR mutation status in lung cancer. In this model, the ResNet serves as the backbone for feature extraction and avoids vanishing gradients. In addition, a new mixed loss combining batch similarity and cross entropy is proposed to guide the network toward better model parameters. The proposed mixed loss uses the similarity among samples within a batch to assess the distribution of the training data, reducing the similarity between different classes and the variation within the same class. In the experiments, VGG16Net, DenseNet, ResNet18, ResNet34 and ResNet50 models with the mixed loss are trained on a public CT dataset from TCIA of 155 patients with known EGFR mutation status. The trained networks are then applied to a preoperative CT dataset of 56 patients collected from the cooperating hospital to validate the proposed models. Experimental results show that the proposed models are effective for identifying EGFR mutation status on the lung cancer dataset. Among these models, ResNet34 with the mixed loss performs best (accuracy = 81.58%, AUC = 0.8861, sensitivity = 80.02%, specificity = 82.90%).
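The abstract does not give the exact form of the mixed loss. As a rough illustration only, the following PyTorch sketch combines cross entropy with one plausible batch-similarity term; the pairwise cosine formulation, the function name mixed_loss, and the weight lam are assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a "mixed loss": cross entropy plus a batch-similarity
# term that rewards intra-class similarity and penalizes inter-class similarity.
import torch
import torch.nn.functional as F

def mixed_loss(features, logits, labels, lam=0.5):
    """features: (B, D) backbone embeddings; logits: (B, C); labels: (B,) ints."""
    ce = F.cross_entropy(logits, labels)

    # Pairwise cosine similarity between all samples in the batch.
    normed = F.normalize(features, dim=1)
    sim = normed @ normed.t()                                   # (B, B)

    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float() # 1 if same class
    eye = torch.eye(labels.numel(), device=labels.device)
    same_off = same - eye                                       # drop self-pairs
    diff = 1.0 - same                                           # different-class pairs

    pos = (1.0 - sim) * same_off          # low similarity within a class is penalized
    neg = sim.clamp(min=0) * diff         # high similarity across classes is penalized
    n_pairs = max(labels.numel() * (labels.numel() - 1), 1)
    batch_term = (pos.sum() + neg.sum()) / n_pairs

    return ce + lam * batch_term
```

In a training loop this would be called per batch, e.g. loss = mixed_loss(feats, logits, y), with lam tuned as a hyperparameter.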

2.
Comput Biol Med ; 158: 106812, 2023 05.
Article in English | MEDLINE | ID: mdl-37004434

ABSTRACT

BACKGROUND AND PURPOSE: Accurate identification of lung cancer subtypes in medical images is of great significance for the diagnosis and treatment of lung cancer. Despite substantial progress, existing methods remain limited by small annotated datasets, large intra-class differences, and high inter-class similarities. METHODS: To address these challenges, we propose a Frequency Domain Transformer Model (FDTrans) to identify patients' lung cancer subtypes using the TCGA lung cancer dataset. A pre-processing step transfers histopathological images to the frequency domain using a block-based discrete cosine transform, and a Coordinate-Spatial Attention Module (CSAM) obtains critical detail information by reassigning weights to the location and channel information of different frequency vectors. A Cross-Domain Transformer Block (CDTB) is then designed for the Y, Cb, and Cr channel features, capturing the long-term dependencies and global contextual connections between different component features. In parallel, feature extraction is performed on the genomic data to obtain gene-specific features. Finally, the image branch and the gene branch are fused, and the classification result is output through a fully connected layer. RESULTS: In 10-fold cross-validation, the method achieves an AUC of 93.16% and an overall accuracy of 92.33%, outperforming comparable lung cancer subtype classification methods. CONCLUSION: This method can help physicians diagnose lung cancer subtypes and benefits from both spatial and frequency domain information.
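As context for the pre-processing step described above, the sketch below applies a block-based 2D DCT to the Y, Cb, and Cr channels of an image. The 8x8 block size, the Pillow-based YCbCr conversion, and the helper names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: convert an image to YCbCr and transform each channel to the
# frequency domain with a block-based 2D discrete cosine transform.
import numpy as np
from PIL import Image
from scipy.fft import dctn

def block_dct(channel, block=8):
    """Apply a 2D DCT to non-overlapping block x block tiles of one channel."""
    h, w = channel.shape
    h, w = h - h % block, w - w % block        # crop to a multiple of the block size
    out = np.empty((h, w), dtype=np.float32)
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = dctn(
                channel[y:y + block, x:x + block], norm="ortho")
    return out

def to_frequency_domain(path):
    """Return per-channel DCT coefficients (Y, Cb, Cr) for an image file."""
    ycbcr = np.asarray(Image.open(path).convert("YCbCr"), dtype=np.float32)
    return [block_dct(ycbcr[..., c]) for c in range(3)]
```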


Subject(s)
Lung Neoplasms, Humans, Lung Neoplasms/genetics, Genomics, Thorax
3.
Phys Med Biol ; 68(7)2023 03 23.
Article in English | MEDLINE | ID: mdl-36867882

ABSTRACT

Objective. Imaging genomics has recently shown great potential for predicting postoperative recurrence in lung cancer patients. However, prediction methods based on imaging genomics have disadvantages such as small sample sizes, high-dimensional information redundancy, and poor multimodal fusion efficiency. This study aims to develop a new fusion model to overcome these challenges. Approach. A dynamic adaptive deep fusion network (DADFN) model based on imaging genomics is proposed for predicting recurrence of lung cancer. In this model, a 3D spiral transformation is used to augment the dataset, which better retains the 3D spatial information of the tumor for deep feature extraction. The intersection of the genes screened by the LASSO, F-test, and chi-square selection methods is used to eliminate redundant data and retain the most relevant gene features. A dynamic adaptive fusion mechanism based on the cascade idea is proposed, integrating multiple base classifiers of different types in each layer, which fully exploits the correlation and diversity among multimodal information to better fuse deep features, handcrafted features, and gene features. Main results. The experimental results show that the DADFN model achieves good performance, with an accuracy of 0.884 and an AUC of 0.863. This indicates that the model is effective in predicting lung cancer recurrence. Significance. The proposed model has the potential to help physicians stratify the risk of lung cancer patients and to identify patients who may benefit from a personalized treatment option.
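The gene-screening step described above can be illustrated with scikit-learn: the sketch below keeps only the genes selected by all three of LASSO, the F-test, and the chi-square test. The function name, the thresholds alpha and k, and the use of binary recurrence labels as the LASSO target are assumptions; chi-square additionally requires non-negative expression values.

```python
# Hypothetical sketch of three-way gene screening with an intersection of
# LASSO, F-test, and chi-square selections.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectKBest, f_classif, chi2

def select_genes(X, y, k=100, alpha=0.01):
    """X: (samples, genes) non-negative expression matrix; y: recurrence labels."""
    # Genes with a non-zero LASSO coefficient.
    lasso = Lasso(alpha=alpha).fit(X, y)
    lasso_idx = set(np.flatnonzero(lasso.coef_))

    # Top-k genes by F-test and by chi-square score.
    f_idx = set(SelectKBest(f_classif, k=k).fit(X, y).get_support(indices=True))
    chi_idx = set(SelectKBest(chi2, k=k).fit(X, y).get_support(indices=True))

    # Keep only genes retained by all three screening methods.
    return sorted(lasso_idx & f_idx & chi_idx)
```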


Subject(s)
Imaging Genomics, Lung Neoplasms, Humans, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/genetics, Genomics/methods, Lung