Results 1 - 5 of 5
1.
Front Oncol; 14: 1389396, 2024.
Article in English | MEDLINE | ID: mdl-39267847

ABSTRACT

Introduction: Pathologists rely on whole slide images (WSIs) to diagnose cancer by identifying tumor cells and subtypes. Deep learning models, particularly weakly supervised ones, classify WSIs from image tiles but may produce false positives and false negatives because tumors are heterogeneous: both cancerous and healthy cells can proliferate in patterns that extend beyond individual tiles, so tile-level errors propagate into inaccurate tumor-level classifications. Methods: To address this limitation, we introduce NATMIL (Neighborhood Attention Transformer Multiple Instance Learning), which uses the Neighborhood Attention Transformer to capture contextual dependencies among WSI tiles. NATMIL enhances multiple instance learning by integrating broader tissue context into the model, reducing the errors associated with isolated tile analysis. Results: We conducted a quantitative analysis to evaluate NATMIL against other weakly supervised algorithms. When applied to subtyping non-small cell lung cancer (NSCLC) and lymph node (LN) tumors, NATMIL demonstrated superior accuracy, achieving 89.6% on the Camelyon dataset and 88.1% on the TCGA-LUSC dataset and outperforming existing methods. Discussion: Our findings demonstrate that NATMIL significantly improves tumor classification accuracy by reducing errors associated with isolated tile analysis. Integrating contextual dependencies enhances the precision of cancer diagnosis using WSIs, highlighting NATMIL's potential as a robust tool in pathology.
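As a rough illustration of the idea behind neighborhood-restricted attention over tiles, the following numpy sketch (not the authors' implementation; the function name, shapes, and single-head formulation are assumptions) pools tile embeddings where each tile attends only to its spatial neighbors before slide-level averaging:

```python
import numpy as np

def neighborhood_attention_pool(tiles, coords, radius=1):
    """MIL-style pooling where each tile attends only to spatially
    neighboring tiles (a simplified, single-head sketch).

    tiles  : (n, d) array of tile embeddings
    coords : (n, 2) integer grid positions of the tiles on the slide
    radius : Chebyshev radius defining the neighborhood window
    """
    n, d = tiles.shape
    scale = np.sqrt(d)
    context = np.zeros_like(tiles)
    for i in range(n):
        # tiles within the spatial window (includes tile i itself)
        mask = np.max(np.abs(coords - coords[i]), axis=1) <= radius
        neigh = tiles[mask]
        # scaled dot-product attention restricted to the neighborhood
        scores = neigh @ tiles[i] / scale
        w = np.exp(scores - scores.max())
        w /= w.sum()
        context[i] = w @ neigh
    # slide-level representation: mean over context-enriched tiles
    return context.mean(axis=0)

# toy usage: a 3x3 grid of tiles with 4-dimensional embeddings
rng = np.random.default_rng(0)
grid = np.array([(r, c) for r in range(3) for c in range(3)])
slide_vec = neighborhood_attention_pool(rng.normal(size=(9, 4)), grid)
```

Restricting the softmax to a spatial window is what lets tile-level context flow into the slide-level vector without the quadratic cost of full self-attention.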

2.
Sci Rep; 14(1): 3934, 2024 02 16.
Article in English | MEDLINE | ID: mdl-38365831

ABSTRACT

Novel methods are required to enhance lung cancer detection, as lung cancer has overtaken other cancers to become the leading cause of cancer-related mortality. Radiologists have long-standing methods for locating lung nodules, such as computed tomography (CT) scans, but they must manually review a large number of CT images, which makes the process time-consuming and prone to human error. To overcome these difficulties, computer-aided diagnosis (CAD) systems built on modern deep learning architectures have been developed to assist radiologists and to improve the efficiency and accuracy of lung nodule diagnosis. In this study, a custom convolutional neural network (CNN) with a dual attention mechanism was created, specifically designed to concentrate on the most important elements in lung nodule images. The CNN extracts informative features from the images, while the attention module combines channel attention and spatial attention mechanisms to selectively highlight significant features. After the attention module, global average pooling is applied to summarize the spatial information. To evaluate the performance of the proposed model, extensive experiments were conducted on a benchmark lung nodule dataset. The results demonstrate that our model surpasses recent models and achieves state-of-the-art accuracy in lung nodule detection and classification.
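The dual attention pipeline described above (channel attention, then spatial attention, then global average pooling) can be sketched in a few lines of numpy; the gating functions here are hand-rolled stand-ins for the learned layers of the actual CNN, and the function name and shapes are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention_gap(fmap, w_ch):
    """Channel attention, then spatial attention, then global average
    pooling over a CNN feature map (minimal sketch; real models learn
    these attention weights end to end).

    fmap : (C, H, W) feature map from a CNN backbone
    w_ch : (C, C) weight matrix of a one-layer channel-attention MLP
    """
    # channel attention from globally pooled per-channel statistics
    ch_desc = fmap.mean(axis=(1, 2))            # (C,)
    ch_gate = sigmoid(w_ch @ ch_desc)           # (C,)
    fmap = fmap * ch_gate[:, None, None]
    # spatial attention from the channel-averaged map
    sp_gate = sigmoid(fmap.mean(axis=0))        # (H, W)
    fmap = fmap * sp_gate[None, :, :]
    # global average pooling summarizes the spatial information
    return fmap.mean(axis=(1, 2))               # (C,)

# toy usage: 8 channels over a 5x5 spatial grid
rng = np.random.default_rng(1)
feat = dual_attention_gap(rng.normal(size=(8, 5, 5)),
                          rng.normal(size=(8, 8)))
```

Channel gating decides *which* feature maps matter; spatial gating decides *where* in the nodule image to look; the pooled vector then feeds the classifier head.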


Subject(s)
Lung Neoplasms , Solitary Pulmonary Nodule , Humans , Solitary Pulmonary Nodule/diagnostic imaging , Neural Networks, Computer , Lung/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods , Radiographic Image Interpretation, Computer-Assisted/methods
3.
BMC Cancer; 23(1): 1037, 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37884929

ABSTRACT

The emergence of image-based systems to improve precision in diagnostic pathology, where the intent is to label sets or bags of instances, hinges largely on multiple instance learning (MIL) for whole slide images (WSIs). Contemporary works have shown excellent performance for neural networks in MIL settings. Here, we examine a graph-based model that enables end-to-end learning and samples suitable patches using a tile-based approach. We propose MIL-GNN, which employs a graph-based variational auto-encoder with a Gaussian mixture model to discover relations between sample patches and aggregate patch details into a single vector representation. We demonstrate the efficacy of our technique on the classical MIL dataset MUSK and on distinguishing two lung cancer sub-types, lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC), achieving 97.42% accuracy on the MUSK dataset and a 94.3% AUC on the classification of lung cancer sub-types using the learned features.
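The graph-based aggregation idea can be illustrated with a toy numpy sketch. Note that MIL-GNN itself uses a graph variational auto-encoder with a Gaussian mixture prior; this stand-in only builds a similarity graph over patches and performs one round of neighborhood averaging before bag-level pooling:

```python
import numpy as np

def graph_aggregate(patches, threshold=0.5):
    """Aggregate patch embeddings into one bag vector via a similarity
    graph (a toy stand-in for graph-based MIL aggregation; MIL-GNN
    proper uses a graph VAE with a Gaussian-mixture prior).

    patches : (n, d) patch feature matrix
    """
    # cosine-similarity adjacency, thresholded; diagonal is 1, so
    # every node keeps a self-loop and row sums are never zero
    norm = patches / np.linalg.norm(patches, axis=1, keepdims=True)
    adj = (norm @ norm.T >= threshold).astype(float)
    # one round of degree-normalized neighborhood averaging (GCN-like)
    smoothed = adj @ patches / adj.sum(axis=1, keepdims=True)
    # bag-level representation for a downstream classifier
    return smoothed.mean(axis=0)

# toy usage: a bag of 6 patches with 4-dimensional features
rng = np.random.default_rng(2)
bag_vec = graph_aggregate(rng.normal(size=(6, 4)))
```

The smoothing step is what lets related patches reinforce each other before the bag is collapsed to a single vector, which is the core intuition behind graph-based MIL.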


Subject(s)
Adenocarcinoma , Carcinoma, Non-Small-Cell Lung , Carcinoma, Squamous Cell , Lung Neoplasms , Humans , Lung Neoplasms/diagnosis , Neural Networks, Computer
4.
Front Bioeng Biotechnol ; 10: 791424, 2022.
Article in English | MEDLINE | ID: mdl-35309999

ABSTRACT

To more accurately and comprehensively characterize how lesion characteristics in pulmonary medical images change and develop across different periods, this study predicts the evolution of pulmonary nodules along the longitudinal (time) dimension and constructs a benign-versus-malignant prediction model for pulmonary lesions under multiscale three-dimensional (3D) feature fusion. From sequences of computed tomography (CT) images of patients at different stages, 3D interpolation was applied to generate 3D lung CT volumes, and 3D convolutional neural networks were used to extract and fuse the 3D features of lung lesions of different sizes. A time-modulated long short-term memory (LSTM) network was constructed to learn the feature vectors of lung lesions, with their temporal and spatial characteristics, across different periods and to predict whether a lesion is benign or malignant. Experiments show that the area under the curve of the proposed method is 92.71%, higher than that of the traditional method.
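The time-modulated recurrence can be sketched as a single LSTM step whose cell state is decayed by the elapsed time between scans. This is a simplified illustration in the spirit of time-aware LSTMs, not the paper's exact formulation; the decay function, weight layout, and dimensions are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def time_modulated_step(c, h, x, dt, p):
    """One recurrent step in which the cell state is decayed by the
    elapsed time dt between two CT studies (simplified sketch; the
    gate weights live in dict p with keys Wf/Wi/Wo/Wg and bf/bi/bo/bg).
    """
    z = np.concatenate([h, x])               # (hidden + input,)
    f = sigmoid(p["Wf"] @ z + p["bf"])       # forget gate
    i = sigmoid(p["Wi"] @ z + p["bi"])       # input gate
    o = sigmoid(p["Wo"] @ z + p["bo"])       # output gate
    g = np.tanh(p["Wg"] @ z + p["bg"])       # candidate state
    decay = 1.0 / (1.0 + dt)                 # longer gaps -> more decay
    c = f * (c * decay) + i * g              # time-modulated memory
    h = o * np.tanh(c)
    return c, h

# toy usage: hidden size 3, input size 2, two follow-up visits
rng = np.random.default_rng(3)
p = {k: rng.normal(scale=0.1, size=(3, 5)) for k in ("Wf", "Wi", "Wo", "Wg")}
p.update({k: np.zeros(3) for k in ("bf", "bi", "bo", "bg")})
c = h = np.zeros(3)
for x, dt in [(np.ones(2), 0.0), (np.ones(2), 2.0)]:
    c, h = time_modulated_step(c, h, x, dt, p)
```

Scaling the remembered cell state by a function of the inter-scan interval is what lets the model treat a nodule re-imaged after two years differently from one re-imaged after two months.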

5.
Quant Imaging Med Surg ; 12(3): 1929-1957, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35284282

ABSTRACT

Background: Computed tomography (CT) is widely used in medical diagnosis due to its ability to non-invasively detect the internal structures of the human body. However, CT scans at normal radiation doses can cause irreversible damage to patients. Low-dose CT (LDCT) reduces the radiation exposure, but it can introduce considerable speckle noise, streak artifacts, and even structural deformation into the images, significantly undermining their diagnostic value. Methods: This paper proposes a multistage network framework that divides the reconstruction process into two staged sub-networks. Specifically, a dilated residual convolutional neural network (DRCNN) first denoises the LDCT image. The learned context information is then combined with a channel-attention subnet, which retains local information, to preserve the structural details, features, and textural information of the image. To obtain recognizable characteristic details, we introduce a novel self-calibration module (SCM) between the two stages to reweight the local features, allowing information at different stages to complement each other while refining feature information. In addition, we designed an autoencoder neural network, trained with a self-supervised learning scheme, to provide a perceptual loss network tailored to CT images. Results: We evaluated the diagnostic quality of the results and performed ablation experiments on the loss function and network structure modules to verify each module's effectiveness. Our proposed network architecture obtained high peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and visual information fidelity (VIF) values in quantitative evaluation, and in the qualitative analysis it maintained a better balance between eliminating image noise and preserving image details. Conclusions: This study proposed a new LDCT image reconstruction method that combines an autoencoder perceptual-loss network with multistage convolutional neural networks (MSCNN). Experimental results show that the proposed method outperforms other methods in both metrics and visual evaluation.
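The inter-stage reweighting can be illustrated with a minimal gating sketch. The published SCM is a learned convolutional block, so this numpy stand-in (function name and gating form are assumptions) only shows the reweight-and-mix idea between the two stages:

```python
import numpy as np

def self_calibrate(stage1_feats, stage2_feats):
    """Reweight stage-1 (denoising) features with gates computed from
    stage-2 (refinement) context, then mix the two, so each stage's
    information complements the other (minimal sketch of the SCM idea).

    stage1_feats, stage2_feats : arrays of identical shape
    """
    gate = 1.0 / (1.0 + np.exp(-stage2_feats))   # sigmoid gating in [0, 1]
    # where the gate is high, trust stage 1; elsewhere, lean on stage 2
    return stage1_feats * gate + stage2_feats * (1.0 - gate)

# toy usage: zero stage-2 context gives a uniform gate of 0.5,
# so the output is an even blend of the two feature maps
refined = self_calibrate(np.ones((2, 2)), np.zeros((2, 2)))
```

A convex combination driven by a data-dependent gate is the simplest way to let one stage recalibrate another without discarding either stage's features outright.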
