Results 1 - 20 of 90
1.
Brief Bioinform ; 25(1)2023 11 22.
Article in English | MEDLINE | ID: mdl-38145948

ABSTRACT

Spatial transcriptomics unveils the complex dynamics of cell regulation and transcriptomes, but it is typically cost-prohibitive. Predicting spatial gene expression from histological images via artificial intelligence offers a more affordable option, yet existing methods fall short in extracting deep-level information from pathological images. In this paper, we present THItoGene, a hybrid neural network that utilizes dynamic convolutional and capsule networks to adaptively sense potential molecular signals in histological images for exploring the relationship between high-resolution pathology image phenotypes and regulation of gene expression. A comprehensive benchmark evaluation using datasets from human breast cancer and cutaneous squamous cell carcinoma has demonstrated the superior performance of THItoGene in spatial gene expression prediction. Moreover, THItoGene has demonstrated its capacity to decipher both the spatial context and enrichment signals within specific tissue regions. THItoGene can be freely accessed at https://github.com/yrjia1015/THItoGene.


Subject(s)
Squamous Cell Carcinoma, Deep Learning, Skin Neoplasms, Humans, Artificial Intelligence, Gene Expression Profiling
2.
J Transl Med ; 22(1): 438, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38720336

ABSTRACT

BACKGROUND: Advanced unresectable gastric cancer (GC) patients were previously treated with chemotherapy alone as the first-line therapy. However, with the Food and Drug Administration's (FDA) 2022 approval of programmed cell death protein 1 (PD-1) inhibitors combined with chemotherapy as the first-line treatment for advanced unresectable GC, patients have benefited significantly. Nevertheless, the substantial costs and potential adverse effects necessitate precise patient selection. In recent years, the advent of deep learning (DL) has transformed the medical field, particularly the prediction of tumor treatment responses. Our study uses DL to analyze pathological images, aiming to predict the response to first-line PD-1 combined chemotherapy in advanced-stage GC. METHODS: In this multicenter retrospective analysis, Hematoxylin and Eosin (H&E)-stained slides were collected from advanced GC patients across four medical centers. Treatment response was evaluated according to iRECIST 1.1 criteria after first-line PD-1 immunotherapy combined with chemotherapy. Three DL models were combined in an ensemble approach to create the immune checkpoint inhibitors Response Score (ICIsRS), a novel histopathological biomarker derived from Whole Slide Images (WSIs). RESULTS: Analyzing 148,181 patches from 313 WSIs of 264 advanced GC patients, the ensemble model exhibited superior predictive accuracy, leading to the creation of ICIsNet. The model demonstrated robust performance across four testing datasets, achieving AUC values of 0.92, 0.95, 0.96, and 1, respectively. The boxplot constructed from the ICIsRS reveals statistically significant differences between the good-response and poor-response groups (all p-values ≤ 0.001).
CONCLUSION: ICIsRS, a DL-derived biomarker from WSIs, effectively predicts advanced GC patients' responses to PD-1 combined chemotherapy, offering a novel approach for personalized treatment planning and allowing more individualized and potentially more effective treatment strategies based on each patient's response.
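The ensemble idea above, several DL models whose patch-level outputs are fused into a single slide-level response score, can be sketched as follows. This is a minimal illustration with made-up probabilities and simple mean aggregation; the paper's exact aggregation into ICIsRS is not specified here.

```python
def ensemble_response_score(model_probs):
    """Combine several models' patch-level probabilities into one
    slide-level score: mean over patches per model, then mean over models."""
    per_model = [sum(p) / len(p) for p in model_probs]
    return sum(per_model) / len(per_model)

# Three hypothetical models scoring the same four patches of one WSI:
probs = [
    [0.9, 0.8, 0.7, 0.9],  # model A
    [0.6, 0.7, 0.8, 0.7],  # model B
    [0.8, 0.9, 0.9, 0.8],  # model C
]
score = ensemble_response_score(probs)  # slide-level response score in [0, 1]
```

A score near 1 would correspond to a predicted good response; the decision threshold would have to be calibrated on a validation cohort.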


Subject(s)
Deep Learning, Immune Checkpoint Inhibitors, Programmed Cell Death 1 Receptor, Stomach Neoplasms, Humans, Stomach Neoplasms/drug therapy, Stomach Neoplasms/pathology, Male, Female, Treatment Outcome, Middle Aged, Immune Checkpoint Inhibitors/therapeutic use, Programmed Cell Death 1 Receptor/antagonists & inhibitors, Aged, Retrospective Studies, ROC Curve, Adult
3.
J Neurooncol ; 168(2): 283-298, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38557926

ABSTRACT

PURPOSE: To develop and validate a pathomics signature for predicting the outcomes of primary central nervous system lymphoma (PCNSL). METHODS: In this study, 132 whole-slide images (WSIs) of 114 patients with PCNSL were enrolled. Quantitative features of hematoxylin and eosin (H&E)-stained slides were extracted using CellProfiler. A pathomics signature was established and validated. Cox regression analysis, receiver operating characteristic (ROC) curves, calibration, decision curve analysis (DCA), and net reclassification improvement (NRI) were performed to assess its significance and performance. RESULTS: In total, 802 features were extracted using a fully automated pipeline. Six machine-learning classifiers demonstrated high accuracy in distinguishing malignant neoplasms. The pathomics signature remained a significant factor for overall survival (OS) and progression-free survival (PFS) in the training cohort (OS: HR 7.423, p < 0.001; PFS: HR 2.143, p = 0.022) and the independent validation cohort (OS: HR 4.204, p = 0.017; PFS: HR 3.243, p = 0.005). A significant difference in the response rate to initial treatment was found between the high Path-score group (19/35, 54.29%) and the low Path-score group (16/70, 22.86%; p < 0.001). The DCA and NRI analyses confirmed that the nomogram showed incremental performance compared with existing models. The ROC curve demonstrated a relatively sensitive and specific profile for the nomogram (1-, 2-, and 3-year AUC = 0.862, 0.932, and 0.927, respectively). CONCLUSION: As a novel, non-invasive, and convenient approach, the newly developed pathomics signature is a powerful predictor of OS and PFS in PCNSL and might be a potential predictive indicator of therapeutic response.
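The AUC values used throughout this evaluation can be computed directly from risk scores without fitting a curve; a minimal sketch of the empirical (Mann-Whitney) AUC, with made-up scores rather than the study's data:

```python
def empirical_auc(pos_scores, neg_scores):
    """AUC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative one (ties count 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical Path-scores for patients with / without the event:
auc = empirical_auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2])
```

An AUC of 0.5 means the score is no better than chance; 1.0 means perfect ranking of events above non-events.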


Subject(s)
Central Nervous System Neoplasms, Lymphoma, Machine Learning, Humans, Female, Male, Central Nervous System Neoplasms/pathology, Central Nervous System Neoplasms/diagnosis, Central Nervous System Neoplasms/mortality, Middle Aged, Prognosis, Lymphoma/pathology, Lymphoma/diagnosis, Lymphoma/mortality, Aged, Adult, ROC Curve, Aged (80 and over), Survival Rate, Young Adult, Retrospective Studies, Tumor Biomarkers/metabolism
4.
Sensors (Basel) ; 24(12)2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38931561

ABSTRACT

Breast cancer is the second most common cancer worldwide, primarily affecting women, and histopathological image analysis is one of the possible methods used to determine tumor malignancy. In image analysis, the application of deep learning has become increasingly prevalent in recent years. However, a significant issue is the unbalanced nature of the available datasets, with some classes having more images than others, which may impact the performance of the models through poorer generalizability. A possible strategy to avoid this problem is downsampling the class with the most images to create a balanced dataset. Nevertheless, this approach is not recommended for small datasets, as it can lead to poor model performance. Instead, techniques such as data augmentation are traditionally used to address this issue. These techniques apply simple transformations such as translation or rotation to the images to increase the variability of the dataset. Another possibility is using generative adversarial networks (GANs), which can generate images from a relatively small training set. This work aims to enhance model performance in classifying histopathological images by applying data augmentation using GANs instead of traditional techniques.
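The traditional augmentation transforms the abstract contrasts with GANs (translation, rotation, flips) are simple label-preserving geometric operations; a minimal sketch on a toy 2×2 patch with illustrative values:

```python
def rotate90(img):
    """Rotate a row-major 2D image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def hflip(img):
    """Mirror each row left-to-right (horizontal flip)."""
    return [row[::-1] for row in img]

patch = [[1, 2],
         [3, 4]]
# Original plus two label-preserving variants of the same patch:
augmented = [patch, rotate90(patch), hflip(patch)]
```

Each variant keeps the original class label, so a minority class can be expanded without collecting new slides; a GAN replaces these fixed transforms with learned, sampled variability.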


Subject(s)
Breast Neoplasms, Computer-Assisted Image Processing, Neural Networks (Computer), Humans, Breast Neoplasms/pathology, Breast Neoplasms/diagnostic imaging, Computer-Assisted Image Processing/methods, Deep Learning, Female, Algorithms, Computer-Assisted Image Interpretation/methods
5.
BMC Oral Health ; 24(1): 601, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38783295

ABSTRACT

PROBLEM: Oral squamous cell carcinoma (OSCC) is the eighth most prevalent cancer globally, leading to the loss of structural integrity within the oral cavity layers and membranes. Given its high prevalence, early diagnosis is crucial for effective treatment. AIM: This study aimed to utilize recent advancements in deep learning for medical image classification to automate the early diagnosis of oral histopathology images, thereby facilitating prompt and accurate detection of oral cancer. METHODS: A deep learning convolutional neural network (CNN) model was used to categorize benign and malignant oral biopsy histopathological images. By leveraging 17 pretrained DL-CNN models, a two-step statistical analysis identified the pretrained EfficientNetB0 model as the best performer. EfficientNetB0 was further enhanced by incorporating a dual attention network (DAN) into the model architecture. RESULTS: The improved EfficientNetB0 model demonstrated strong performance metrics, including an accuracy of 91.1%, sensitivity of 92.2%, specificity of 91.0%, precision of 91.3%, false-positive rate (FPR) of 1.12%, F1 score of 92.3%, Matthews correlation coefficient (MCC) of 90.1%, kappa of 88.8%, and computational time of 66.41%. Notably, this model surpasses the performance of state-of-the-art approaches in the field. CONCLUSION: Integrating deep learning techniques, specifically the enhanced EfficientNetB0 model with DAN, shows promising results for the automated early diagnosis of oral cancer through oral histopathology image analysis. This advancement has significant potential for improving the efficacy of oral cancer treatment strategies.
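The metric suite reported above (accuracy, sensitivity, specificity, precision, F1, MCC) all derives from one confusion matrix; a minimal sketch with hypothetical counts, not the study's data:

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)            # sensitivity / recall
    spec = tn / (tn + fp)            # specificity
    prec = tp / (tp + fp)            # precision
    f1 = 2 * prec * sens / (prec + sens)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return acc, sens, spec, prec, f1, mcc

# Hypothetical counts for a benign-vs-malignant test set:
acc, sens, spec, prec, f1, mcc = classification_metrics(tp=90, fp=10, tn=85, fn=15)
```

Note that FPR is simply 1 − specificity, which is a useful cross-check when an abstract reports both.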


Subject(s)
Squamous Cell Carcinoma, Deep Learning, Mouth Neoplasms, Neural Networks (Computer), Humans, Mouth Neoplasms/pathology, Mouth Neoplasms/diagnostic imaging, Mouth Neoplasms/diagnosis, Squamous Cell Carcinoma/pathology, Squamous Cell Carcinoma/diagnostic imaging, Squamous Cell Carcinoma/diagnosis, Early Detection of Cancer/methods, Sensitivity and Specificity
6.
Entropy (Basel) ; 26(2)2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38392420

ABSTRACT

Immunohistochemistry is a powerful technique that is widely used in biomedical research and in the clinic; it allows the expression levels of proteins of interest in tissue samples to be determined from the color intensity produced by biomarker-specific antibodies. As such, immunohistochemical images are complex and their features are difficult to quantify. Recently, we proposed a novel method, including a first separation stage based on non-negative matrix factorization (NMF), that achieved good results. However, this method was highly dependent on the parameters that control sparseness and non-negativity, as well as on algorithm initialization. Furthermore, the previously proposed method required a reference image as a starting point for the NMF algorithm. In the present work, we propose a new, simpler, and more robust method for the automated, unsupervised scoring of brightfield immunohistochemical images. Our work focuses on images from tumor tissues marked with blue (nuclei) and brown (protein of interest) stains. The new method represents a simpler approach that, on the one hand, avoids the use of NMF in the separation stage and, on the other, circumvents the need for a control image. It determines the subspace spanned by the two colors of interest using principal component analysis (PCA) with dimension reduction. This subspace is two-dimensional, allowing the color vectors to be determined from the point-density peaks. A new scoring stage is also developed that again avoids reference images, making the procedure more robust and less dependent on parameters. Semi-quantitative image-scoring experiments using five categories exhibit promising and consistent results when compared to manual scoring carried out by experts.
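The PCA step described, finding the low-dimensional subspace spanned by the stain colors, can be illustrated with a minimal power-iteration sketch for the first principal direction of 3-D color vectors. The sample vectors are synthetic, and the paper retains two components where this sketch computes only the leading one:

```python
import math

def principal_direction(vectors, iters=200):
    """First principal component of 3-D color vectors via power iteration
    on the sample covariance matrix (a minimal stand-in for full PCA)."""
    n = len(vectors)
    mean = [sum(v[i] for v in vectors) / n for i in range(3)]
    centered = [[v[i] - mean[i] for i in range(3)] for v in vectors]
    cov = [[sum(c[i] * c[j] for c in centered) / n for j in range(3)]
           for i in range(3)]  # 3x3 covariance matrix
    w = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(cov[i][j] * w[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(x * x for x in w))
        w = [x / norm for x in w]
    return w

# Synthetic color vectors that vary only along the first channel:
w = principal_direction([[float(t), 0.0, 0.0] for t in range(5)])
```

The second component would be found the same way after deflating the covariance matrix, giving the two-dimensional color subspace the method works in.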

7.
Biol Proced Online ; 25(1): 15, 2023 Jun 02.
Article in English | MEDLINE | ID: mdl-37268878

ABSTRACT

BACKGROUND: Deep learning (DL) has been extensively used in digital histopathology. The purpose of this study was to test DL algorithms for predicting vital status from whole-slide images (WSIs) of uveal melanoma (UM). METHODS: We developed a deep learning model (GoogLeNet) to predict the vital status of UM patients from histopathological images in the TCGA-UVM cohort and validated it in an internal cohort. The histopathological DL features extracted from the model were then applied to classify UM patients into two subtypes. The differences between the two subtypes in clinical outcomes, tumor mutations, microenvironment, and probability of drug therapeutic response were investigated further. RESULTS: The developed DL model achieved a high accuracy of ≥ 90% for patch- and WSI-level prediction. Using 14 histopathological DL features, we successfully classified UM patients into Cluster1 and Cluster2 subtypes. Compared to Cluster2, patients in the Cluster1 subtype had a poorer survival outcome, increased expression levels of immune-checkpoint genes, higher immune infiltration of CD8+ and CD4+ T cells, and greater sensitivity to anti-PD-1 therapy. In addition, we established and verified a prognostic histopathological DL-signature and gene-signature that outperformed traditional clinical features. Finally, a well-performing nomogram combining the DL-signature and gene-signature was constructed to predict the mortality of UM patients. CONCLUSIONS: Our findings suggest that a DL model can accurately predict vital status in UM patients using histopathological images alone. We identified two subgroups based on histopathological DL features, which may inform immunotherapy and chemotherapy decisions. The nomogram combining the DL-signature and gene-signature gives a more straightforward and reliable prognosis for UM patients in treatment and management.
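The subtype-discovery step, grouping patients by their DL features, can be illustrated with a minimal two-cluster k-means sketch. The values here are synthetic scalars, whereas the study used 14 features per patient, and its exact clustering algorithm is not specified in the abstract:

```python
def two_means(values, iters=20):
    """Minimal 1-D k-means with k=2: assign each value to the nearer
    centroid, recompute the centroids, and repeat."""
    c1, c2 = min(values), max(values)  # simple extreme-value initialization
    for _ in range(iters):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return g1, g2

# Hypothetical scalar DL features for six patients:
cluster1, cluster2 = two_means([0.1, 0.2, 0.15, 0.9, 1.0, 0.95])
```

With real data the same loop runs on 14-dimensional vectors with Euclidean distances, and the resulting clusters are then compared on survival and immune markers.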

8.
Gastric Cancer ; 26(5): 734-742, 2023 09.
Article in English | MEDLINE | ID: mdl-37322381

ABSTRACT

BACKGROUND: Neoadjuvant chemotherapy (NAC) has been recognized as an effective therapeutic option for locally advanced gastric cancer, as it is expected to reduce tumor size, increase the resection rate, and improve overall survival. However, for patients who do not respond to NAC, the optimal timing of surgery may be missed while they suffer side effects. It is therefore paramount to differentiate potential responders from non-responders. Histopathological images contain rich and complex data that can be exploited to study cancers. We assessed the ability of a novel deep learning (DL)-based biomarker to predict pathological responses from images of hematoxylin and eosin (H&E)-stained tissue. METHODS: In this multicentre observational study, H&E-stained biopsy sections of patients with gastric cancer were collected from four hospitals. All patients underwent NAC followed by gastrectomy. The Becker tumor regression grading (TRG) system was used to evaluate the pathologic chemotherapy response. Based on H&E-stained slides of biopsies, DL methods (Inception-V3, Xception, EfficientNet-B5, and ensemble CRSNet models) were employed to predict the pathological response by scoring the tumor tissue to obtain a histopathological biomarker, the chemotherapy response score (CRS). The predictive performance of CRSNet was evaluated. RESULTS: In total, 69,564 patches from 230 whole-slide images of 213 patients with gastric cancer were obtained. Based on the F1 score and area under the curve (AUC), an optimal model, named CRSNet, was chosen. Using the ensemble CRSNet model, the response score derived from H&E-stained images reached an AUC of 0.936 in the internal test cohort and 0.923 in the external validation cohort for predicting pathological response. The CRS of major responders was significantly higher than that of minor responders in both the internal and external test cohorts (both p < 0.001).
CONCLUSION: In this study, the proposed DL-based biomarker (CRSNet model) derived from histopathological images of the biopsy showed potential as a clinical aid for predicting the response to NAC in patients with locally advanced GC. Therefore, the CRSNet model provides a novel tool for the individualized management of locally advanced gastric cancer.


Subject(s)
Stomach Neoplasms, Humans, Stomach Neoplasms/drug therapy, Stomach Neoplasms/surgery, Neoadjuvant Therapy, Gastrectomy, Biopsy
9.
J Digit Imaging ; 36(2): 441-449, 2023 04.
Article in English | MEDLINE | ID: mdl-36474087

ABSTRACT

Cervical cancer is the most common cancer among women worldwide. The diagnosis and classification of the cancer are extremely important, as they influence the optimal treatment and length of survival. The objective was to develop and validate a diagnosis system based on convolutional neural networks (CNNs) that identifies cervical malignancies and provides diagnostic interpretability. A total of 8496 labeled histology images were extracted from 229 cervical specimens (cervical squamous cell carcinoma, SCC, n = 37; cervical adenocarcinoma, AC, n = 8; nonmalignant cervical tissues, n = 184). AlexNet, VGG-19, Xception, and ResNet-50 with five-fold cross-validation were constructed to distinguish cervical cancer images from nonmalignant images. The performance of the CNNs was quantified in terms of accuracy, precision, recall, and the area under the receiver operating characteristic curve (AUC). Six pathologists were recruited for comparison with the performance of the CNNs. Guided Backpropagation and Gradient-weighted Class Activation Mapping (Grad-CAM) were deployed to highlight the areas of high malignant probability. The Xception model had excellent performance in identifying cervical SCC and AC in the test sets. For cervical SCC, the AUC was 0.98 (internal validation) and 0.974 (external validation). For cervical AC, the AUC was 0.966 (internal validation) and 0.958 (external validation). The performance of the CNNs fell between that of experienced and inexperienced pathologists. Grad-CAM and Guided Grad-CAM ensured diagnostic interpretability by highlighting the morphological features of malignant changes. CNNs are efficient for the histological image classification task of distinguishing cervical malignancies from benign tissues and can highlight specific areas of concern. All these findings suggest that CNNs could serve as a diagnostic tool to aid pathologic diagnosis.


Subject(s)
Uterine Cervical Neoplasms, Humans, Female, Uterine Cervical Neoplasms/diagnostic imaging, Neural Networks (Computer), Cervix Uteri
10.
J Xray Sci Technol ; 31(1): 211-221, 2023.
Article in English | MEDLINE | ID: mdl-36463485

ABSTRACT

Among malignant tumors, lung cancer has the highest morbidity and fatality rates worldwide. Screening for lung cancer has been investigated for decades in order to reduce the mortality rates of lung cancer patients, and treatment options have improved dramatically in recent years. Pathologists utilize various techniques to determine the stage, type, and subtype of lung cancers, but one of the most common is visual assessment of histopathology slides. The most common subtypes of lung cancer are adenocarcinoma and squamous cell carcinoma, and distinguishing them from each other and from benign tissue requires visual inspection by a skilled pathologist. The purpose of this article was to develop a hybrid network for the categorization of lung histopathology images by combining AlexNet, wavelets, and support vector machines. In this study, we feed the integrated discrete wavelet transform (DWT) coefficients and AlexNet deep features into linear support vector machines (SVMs) for lung nodule sample classification. The lung subset of the LC25000 lung and colon histopathology image dataset, which contains 5,000 digital histopathology images in each of three categories (benign normal cells, adenocarcinoma, and squamous cell carcinoma, the latter two being cancerous), is used to train and test the SVM classifiers. Using a 10-fold cross-validation method, the study achieves an accuracy of 99.3% and an area under the curve (AUC) of 0.99 in classifying these digital histopathology images of lung nodule samples.
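The DWT coefficients fed to the SVM come from repeated application of a single wavelet step; a minimal sketch of one level of the Haar transform on a toy intensity row (the actual wavelet family and decomposition depth used in the paper are not specified in the abstract):

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: pairwise scaled sums (approximation)
    and differences (detail); the 1/sqrt(2) factor preserves energy."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

row = [4.0, 6.0, 10.0, 12.0]  # one row of pixel intensities (illustrative)
approx, detail = haar_dwt(row)
```

The approximation coefficients summarize coarse texture and the detail coefficients capture edges; concatenated with CNN features, they form the fused feature vector for the linear SVM.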


Subject(s)
Adenocarcinoma, Squamous Cell Carcinoma, Lung Neoplasms, Humans, X-Ray Computed Tomography/methods, Lung Neoplasms/diagnostic imaging, Computer-Assisted Diagnosis/methods, Adenocarcinoma/diagnostic imaging, Squamous Cell Carcinoma/diagnostic imaging, Lung/diagnostic imaging, Support Vector Machine
11.
BMC Med Inform Decis Mak ; 22(1): 122, 2022 05 04.
Article in English | MEDLINE | ID: mdl-35509058

ABSTRACT

Liver cancer is a malignant tumor with high morbidity and mortality, which has a tremendous negative impact on human survival. However, recognizing tens of thousands of histopathological images of liver cancer by the naked eye is a challenging task that poses numerous difficulties for inexperienced clinicians. In addition, the time-consuming, tedious work and the huge number of images impose a great burden on clinical diagnosis. Therefore, our study combines convolutional neural networks with histopathology images and adopts a feature-fusion approach to help clinicians efficiently discriminate the differentiation types of primary hepatocellular carcinoma in histopathology images, thus improving their diagnostic efficiency and relieving their workload. In this study, for the first time, 73 patients with different differentiation types of primary liver cancer tumors were classified. We performed a thorough classification evaluation of liver cancer differentiation types using four pre-trained deep convolutional neural networks and nine different machine learning (ML) classifiers on a dataset of liver cancer histopathology images with multiple differentiation types. Test-set accuracy, validation-set accuracy, running time under different strategies, precision, recall, and F1 score were used for comparative evaluation. The experimental results show that the fusion network (FuNet) structure is a good choice: it covers both channel attention and spatial attention and suppresses channel interference from less informative channels. Meanwhile, it can clarify the importance of each spatial location by learning the weights of different locations in space, and can then be applied to the classification of multi-differentiated types of liver cancer. In addition, in most cases the stacking-based ensemble learning classifier outperforms the other ML classifiers on this task under the FuNet fusion strategy after dimensionality reduction of the fused features by principal component analysis (PCA), achieving a satisfactory test-set result of 72.46%, which has practical value.


Subject(s)
Hepatocellular Carcinoma/pathology, Liver Neoplasms/pathology, Neural Networks (Computer), Hepatocellular Carcinoma/diagnostic imaging, Humans, Liver Neoplasms/diagnostic imaging, Machine Learning
12.
BMC Med Inform Decis Mak ; 22(1): 176, 2022 07 04.
Article in English | MEDLINE | ID: mdl-35787805

ABSTRACT

PURPOSE: Liver cancer is one of the most common malignant tumors in the world, ranking fifth among malignant tumors. The degree of differentiation reflects the degree of malignancy and can be divided into three types: poorly differentiated, moderately differentiated, and well differentiated. Diagnosis and treatment at the correct level of differentiation are crucial to patients' survival rate and survival time. As the gold standard for liver cancer diagnosis, histopathological images can accurately distinguish liver cancers at different levels of differentiation. Therefore, the intelligent classification of histopathological images is of great significance to patients with liver cancer. At present, classifying histopathological images of liver cancer by degree of differentiation is time-consuming, labor-intensive, and requires a large manual investment, which makes intelligent classification all the more important. METHODS: Based on a complete data-acquisition scheme, this paper applies the SENet deep learning model to the intelligent classification of differentiated liver cancer histopathological images for the first time and compares it with four deep learning models: VGG16, ResNet50, ResNet_CBAM, and SKNet. The evaluation metrics adopted in this paper include the confusion matrix, precision, recall, and F1 score, which allow the models to be evaluated comprehensively and accurately. RESULTS: Five different deep learning classification models were applied to the collected dataset and evaluated. The experimental results show that the SENet model achieved the best classification performance, with an accuracy of 95.27%, along with good reliability and generalization ability. The experiments demonstrate that the SENet deep learning model has good application prospects for the intelligent classification of histopathological images. CONCLUSIONS: This study also shows that deep learning has great application value in addressing the time-consuming and laborious problems of traditional manual slide reading, and it has practical significance for the intelligent classification of other cancer histopathological images.


Subject(s)
Deep Learning, Liver Neoplasms, Humans, Liver Neoplasms/diagnostic imaging, Reproducibility of Results
13.
Entropy (Basel) ; 24(4)2022 Apr 13.
Article in English | MEDLINE | ID: mdl-35455209

ABSTRACT

In many research laboratories, it is essential to determine the relative expression levels of proteins of interest in tissue samples. The semi-quantitative scoring of a set of images consists of establishing a scale of scores, ranging from zero or one to a maximum set by the researcher, and assigning each image a score that represents some predefined characteristic of the immunohistochemical (IHC) staining, such as its intensity. However, manual scoring depends on the judgment of an observer and therefore exposes the assessment to a certain level of bias. In this work, we present a fully automatic and unsupervised method for comparative biomarker quantification in histopathological brightfield images. The method relies on a color-separation step that discriminates robustly between two chromogens expressed as brown and blue colors, independent of color variation or biomarker expression level. For this purpose, we have adopted a two-stage stain-separation approach in the optical density space. First, a preliminary separation is performed using a deconvolution method in which the color vectors of the stains are determined after an eigendecomposition of the data. Then, we refine the separation using the non-negative matrix factorization method with beta divergences, initializing the algorithm with the matrices resulting from the previous step. After that, a feature vector for each image is determined based on the intensity of the two chromogens. Finally, the images are annotated using a systematically initialized k-means clustering algorithm with beta divergences. The method clearly defines the initial boundaries of the categories, although some flexibility is added. Experiments on the semi-quantitative scoring of images into five categories have been carried out by comparing the results with the scores of four expert researchers, yielding accuracies that range between 76.60% and 94.58%. These results show that the proposed automatic scoring system, which is definable and reproducible, produces consistent results.
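The optical density space in which both separation stages operate comes from the Beer-Lambert law; a minimal sketch of the RGB-to-OD conversion (the background intensity of 255 and the sample pixel are illustrative assumptions):

```python
import math

def to_optical_density(rgb, background=255.0):
    """Convert an RGB pixel to optical density via the Beer-Lambert law:
    OD_c = -log10(I_c / I_0). Stain contributions add linearly in OD space,
    which is what makes color deconvolution possible."""
    return [-math.log10(max(c, 1.0) / background) for c in rgb]

od = to_optical_density([128, 64, 200])        # an illustrative stained pixel
white = to_optical_density([255, 255, 255])    # background maps to ~zero OD
```

Because stain absorbances add in OD space, the observed OD vector of a pixel is approximately a non-negative combination of the two chromogen color vectors, which the eigendecomposition and NMF stages then recover.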

14.
Methods ; 173: 52-60, 2020 02 15.
Article in English | MEDLINE | ID: mdl-31212016

ABSTRACT

Even with the rapid advances in the medical sciences, histopathological diagnosis is still considered the gold standard in diagnosing cancer. However, the complexity of histopathological images and the dramatic increase in workload make this task time-consuming, and the results may be subject to pathologist subjectivity. Therefore, the development of automatic and precise histopathological image analysis methods is essential for the field. In this paper, we propose a new hybrid convolutional and recurrent deep neural network for breast cancer histopathological image classification. Based on the richer multilevel feature representation of the histopathological image patches, our method integrates the advantages of convolutional and recurrent neural networks, and the short-term and long-term spatial correlations between patches are preserved. The experimental results show that our method outperforms the state-of-the-art method, with an average accuracy of 91.3% on the 4-class classification task. We also release a dataset of 3771 breast cancer histopathological images to the scientific community; it is publicly available at http://ear.ict.ac.cn/?page_id=1616. Our dataset is not only the largest publicly released dataset for breast cancer histopathological image classification, but it also covers as many different subclasses, spanning different age groups, as possible, thus providing enough data diversity to alleviate the problem of relatively low classification accuracy on benign images.


Subject(s)
Breast Neoplasms/genetics, Computer-Assisted Image Processing/methods, Breast/metabolism, Breast/pathology, Breast Neoplasms/pathology, Genetic Databases, Female, Humans, Neural Networks (Computer)
15.
J Med Syst ; 46(1): 7, 2021 Dec 03.
Article in English | MEDLINE | ID: mdl-34860316

ABSTRACT

Breast cancer is the second most common cancer in women worldwide. Early detection of breast cancer can reduce the risk to life. Non-invasive techniques such as mammography and ultrasound imaging are popularly used to detect the tumour. However, histopathological analysis is necessary to determine the malignancy of the tumour, as it analyses the image at the cellular level. Manual analysis of these slides is time-consuming, tedious, subjective, and susceptible to human error. Moreover, the interpretation of these images is at times inconsistent between laboratories. Hence, a Computer-Aided Diagnostic system that can act as a decision-support system is the need of the hour. Recent developments in computational power and memory capacity have also led to the application of computer tools and medical image processing techniques to the processing and analysis of breast cancer histopathological images (BCHI). This review paper summarizes various traditional and deep-learning-based methods developed to analyze BCHI. Initially, the characteristics of BCHI are discussed. A detailed discussion of the various potential regions of interest is then presented, which is crucial for the development of Computer-Aided Diagnostic systems. We summarize the recent trends and choices made during the selection of medical image processing techniques. Finally, a detailed discussion of the various challenges involved in the analysis of BCHI is presented, along with the future scope.


Subject(s)
Breast Neoplasms, Breast, Breast/diagnostic imaging, Breast Neoplasms/diagnostic imaging, Female, Humans, Image Processing, Computer-Assisted, Mammography
16.
Entropy (Basel) ; 23(5)2021 May 16.
Article in English | MEDLINE | ID: mdl-34065765

ABSTRACT

Automated grading systems using deep convolutional neural networks (DCNNs) have proven their capability and potential to distinguish between different breast cancer grades using digitized histopathological images. In digital breast pathology, it is vital to measure how confident a DCNN is in its grading using a machine-confidence metric, especially in the presence of major computer vision challenges such as the high visual variability of the images. Such a quantitative metric can be employed not only to improve the robustness of automated systems, but also to assist medical professionals in identifying complex cases. In this paper, we propose an Entropy-based Elastic Ensemble of DCNN models (3E-Net) for grading invasive breast carcinoma microscopy images, which provides an initial stage of explainability through an uncertainty-aware mechanism based on entropy. Our proposed model is designed to (1) exclude images for which our ensemble model is highly uncertain and (2) dynamically grade the non-excluded images using the certain models in the ensemble architecture. We evaluated two variations of 3E-Net on an invasive breast carcinoma dataset and achieved grading accuracies of 96.15% and 99.50%.
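The abstract does not spell out 3E-Net's exact exclusion rule; as a rough illustration of an entropy-based uncertainty gate, the sketch below averages an ensemble's softmax outputs per image and keeps only images whose predictive entropy falls below a threshold. The function names, the averaging scheme, and the `threshold` parameter are assumptions for illustration, not the paper's mechanism.

```python
import numpy as np

def shannon_entropy(probs):
    """Shannon entropy (nats) of a softmax probability vector."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def filter_uncertain(ensemble_probs, threshold):
    """Average per-model softmax outputs for each image and keep only
    images whose predictive entropy is below the threshold.

    ensemble_probs: array of shape (n_models, n_images, n_classes).
    Returns (kept_indices, mean_probs) for the retained images."""
    mean_probs = ensemble_probs.mean(axis=0)  # (n_images, n_classes)
    entropies = np.array([shannon_entropy(p) for p in mean_probs])
    kept = np.where(entropies < threshold)[0]
    return kept, mean_probs[kept]
```

A uniform distribution over K classes has the maximum entropy log(K), while a near-one-hot prediction has entropy close to zero, so low-entropy images are the "certain" ones that would be graded downstream.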

17.
J Digit Imaging ; 33(3): 632-654, 2020 06.
Article in English | MEDLINE | ID: mdl-31900812

ABSTRACT

Automatic multi-classification of breast cancer histopathological images has remained one of the top-priority research areas in biomedical informatics, due to the great clinical significance of multi-classification for the diagnosis and prognosis of breast cancer. In this work, two machine learning approaches are thoroughly explored and compared for the task of automatic magnification-dependent multi-classification on a balanced BreakHis dataset for the detection of breast cancer. The first approach is based on handcrafted features extracted using Hu moments, color histograms, and Haralick textures; the extracted features are then used to train conventional classifiers. The second approach is based on transfer learning, where pre-existing networks (VGG16, VGG19, and ResNet50) are used both as feature extractors and as baseline models. The results reveal that using pre-trained networks as feature extractors exhibited superior performance compared to the baseline and handcrafted approaches at all magnifications. Moreover, augmentation plays a pivotal role in further enhancing classification accuracy. In this context, the VGG16 network with a linear SVM provides the highest accuracy, computed in two forms: (a) patch-based accuracies (93.97% for 40×, 92.92% for 100×, 91.23% for 200×, and 91.79% for 400×); and (b) patient-based accuracies (93.25% for 40×, 91.87% for 100×, 91.5% for 200×, and 92.31% for 400×) for the classification of magnification-dependent histopathological images. Additionally, "Fibro-adenoma" (benign) and "Mucous Carcinoma" (malignant) were found to be the most complex classes across all magnification factors.
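The patch-based versus patient-based distinction is a common BreakHis reporting convention: patch accuracy treats every patch independently, while the patient score is the fraction of a patient's patches classified correctly, averaged over patients. A minimal sketch of the two metrics follows; the helper names are hypothetical and the authors' exact aggregation may differ.

```python
from collections import defaultdict

def patch_accuracy(patch_preds, patch_labels):
    """Fraction of individual patches classified correctly."""
    correct = sum(p == t for p, t in zip(patch_preds, patch_labels))
    return correct / len(patch_labels)

def patient_accuracy(patient_ids, patch_preds, patch_labels):
    """Per-patient fraction of correctly classified patches,
    averaged over patients (the BreakHis-style 'patient score')."""
    per_patient = defaultdict(lambda: [0, 0])  # patient -> [correct, total]
    for pid, p, t in zip(patient_ids, patch_preds, patch_labels):
        per_patient[pid][0] += int(p == t)
        per_patient[pid][1] += 1
    scores = [c / n for c, n in per_patient.values()]
    return sum(scores) / len(scores)
```

The two numbers diverge when patients contribute unequal numbers of patches, which is why the abstract reports both forms for every magnification.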


Subject(s)
Breast Neoplasms, Deep Learning, Breast Neoplasms/diagnostic imaging, Female, Humans, Machine Learning, Neural Networks, Computer
18.
Med Image Anal ; 95: 103162, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38593644

ABSTRACT

Active Learning (AL) has the potential to solve a major problem of digital pathology: the efficient acquisition of labeled data for machine learning algorithms. However, existing AL methods often struggle in realistic settings with artifacts, ambiguities, and class imbalances, as are common in the medical field. The lack of precise uncertainty estimation leads to the acquisition of images with low informative value. To address these challenges, we propose Focused Active Learning (FocAL), which combines a Bayesian Neural Network with Out-of-Distribution (OoD) detection to estimate different uncertainties for the acquisition function. Specifically, the weighted epistemic uncertainty accounts for class imbalance, the aleatoric uncertainty for ambiguous images, and an OoD score for artifacts. We perform extensive experiments to validate our method on MNIST and the real-world PANDA dataset for the classification of prostate cancer. The results confirm that other AL methods are 'distracted' by ambiguities and artifacts, which harms performance. FocAL effectively focuses on the most informative images, avoiding ambiguities and artifacts during acquisition. In both experiments, FocAL outperforms existing AL approaches, reaching a Cohen's kappa of 0.764 with only 0.69% of the labeled PANDA data.
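As a loose illustration of combining the three signals into a single acquisition score, the toy function below rewards (weighted) epistemic uncertainty while penalizing aleatoric uncertainty and the OoD score, so ambiguous images and artifacts sink in the acquisition ranking. This is only the spirit of the idea; the actual FocAL acquisition function and its weighting are more elaborate than this sketch.

```python
import numpy as np

def focal_acquisition(epistemic, aleatoric, ood_score, class_weight=1.0):
    """Toy acquisition score: higher = acquire first.

    epistemic, aleatoric, ood_score: per-image arrays of uncertainty
    estimates. Informative images (high epistemic, low aleatoric, low
    OoD) score highest; ambiguous images and artifacts are demoted.
    The linear weighting here is illustrative, not the paper's rule."""
    return (class_weight * np.asarray(epistemic)
            - np.asarray(aleatoric)
            - np.asarray(ood_score))
```

Ranking by this score and labeling the top-k images per round would implement the "focus on informative images" behavior the abstract describes.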


Subject(s)
Prostatic Neoplasms, Humans, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology, Male, Machine Learning, Bayes Theorem, Algorithms, Image Interpretation, Computer-Assisted/methods, Artifacts, Neural Networks, Computer
19.
Med Biol Eng Comput ; 62(6): 1899-1909, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38409645

ABSTRACT

Early detection is critical for successfully diagnosing cancer, and timely analysis of diagnostic tests is increasingly important. In the context of neuroendocrine tumors, the Ki-67 proliferation index serves as a fundamental biomarker, aiding pathologists in grading and diagnosing these tumors from histopathological images. The appropriate treatment plan for the patient is determined based on the tumor grade. An artificial intelligence-based method is proposed to aid pathologists in the automated calculation of the Ki-67 proliferation index and in tumor grading. The proposed system first performs preprocessing to enhance image quality. Then, segmentation is performed using the U-Net architecture, a deep learning algorithm, to separate the nuclei from the background. The identified nuclei are then evaluated as Ki-67 positive or negative based on basic color space information and other features. The Ki-67 proliferation index is calculated, and the neuroendocrine tumor is graded accordingly. The proposed system's performance was evaluated on a dataset obtained from the Department of Pathology at Meram Faculty of Medicine Hospital, Necmettin Erbakan University. The results of the pathologist and the proposed system were compared, and the proposed system was found to have an accuracy of 95% in tumor grading relative to the pathologist's report.
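The proliferation index itself is a simple ratio over the counted nuclei, and the commonly cited WHO thresholds for neuroendocrine tumors are G1 below 3%, G2 from 3% to 20%, and G3 above 20%. The sketch below shows that final counting-and-grading step; the paper's exact grading rule is not given in the abstract, so the thresholds here are the general WHO convention rather than the authors' implementation.

```python
def ki67_index(positive_nuclei, negative_nuclei):
    """Ki-67 proliferation index: percentage of counted tumor nuclei
    that stain Ki-67 positive."""
    total = positive_nuclei + negative_nuclei
    if total == 0:
        raise ValueError("no nuclei counted")
    return 100.0 * positive_nuclei / total

def net_grade(index_percent):
    """Grade a neuroendocrine tumor from its Ki-67 index using the
    commonly cited WHO thresholds (G1 < 3%, G2 3-20%, G3 > 20%)."""
    if index_percent < 3:
        return "G1"
    if index_percent <= 20:
        return "G2"
    return "G3"
```

In the described pipeline, the positive/negative counts would come from the U-Net segmentation and color-based classification of nuclei upstream of this step.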


Subject(s)
Artificial Intelligence, Cell Proliferation, Ki-67 Antigen, Neoplasm Grading, Neuroendocrine Tumors, Humans, Ki-67 Antigen/metabolism, Ki-67 Antigen/analysis, Neuroendocrine Tumors/pathology, Neuroendocrine Tumors/diagnosis, Neuroendocrine Tumors/metabolism, Algorithms, Deep Learning, Image Processing, Computer-Assisted/methods, Image Interpretation, Computer-Assisted/methods
20.
Comput Methods Programs Biomed ; 251: 108207, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38723437

ABSTRACT

BACKGROUND AND OBJECTIVE: Lung cancer (LC) has a high fatality rate that continuously affects human lives all over the world. Early detection of LC prolongs life and helps to prevent the disease. Histopathological inspection is a common method of diagnosing LC, but visual inspection requires considerable time, and the decision depends on the subjective perception of clinicians. Machine learning techniques usually depend on traditional feature extraction, which is labor-intensive and may not be appropriate for enormous amounts of data. In this work, a convolutional neural network (CNN)-based architecture is proposed for more effective classification of lung tissue subtypes using histopathological images. METHODS: The authors have, for the first time, utilized a nonlocal means (NLM) filter to suppress noise in histopathological images. The NLM filter efficiently eliminates noise while preserving image edges. The denoised images are then given as input to the proposed multi-headed lung cancer classification convolutional neural network (ML3CNet). Furthermore, model quantization is used to reduce the size of the proposed model for data storage; the reduced model size requires less memory and speeds up data processing. RESULTS: The effectiveness of the proposed model is compared with other existing state-of-the-art methods. The proposed ML3CNet achieved an average classification accuracy of 99.72%, sensitivity of 99.66%, precision of 99.64%, specificity of 99.84%, F1 score of 0.9965, and area under the curve of 0.9978. A quantized accuracy of 98.92% is attained by the proposed model. To validate its applicability, ML3CNet has also been tested on a colon cancer dataset. CONCLUSION: The findings reveal that the proposed approach can automatically classify LC subtypes, which might assist healthcare workers in making decisions more precisely. The proposed model can be implemented in hardware using a Raspberry Pi for practical realization.
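A nonlocal means filter replaces each pixel with a similarity-weighted average over a search window, where similarity is measured between small patches rather than single pixels, which is why edges survive the smoothing. A minimal 2-D grayscale sketch is shown below; the `patch`, `search`, and `h` parameters are illustrative, since the paper's NLM configuration is not specified in the abstract.

```python
import numpy as np

def nlm_denoise(img, patch=1, search=2, h=0.1):
    """Minimal nonlocal-means filter for a 2-D grayscale image.
    For every pixel, candidate pixels in the search window are
    weighted by exp(-d2/h^2), where d2 is the mean squared
    difference between the (2*patch+1)^2 patches around the
    reference pixel and the candidate pixel."""
    img = np.asarray(img, dtype=float)
    pad = patch + search
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            weights, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - patch:ni + patch + 1,
                                  nj - patch:nj + patch + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / (h * h))
                    weights += w
                    acc += w * padded[ni, nj]
            out[i, j] = acc / weights
    return out
```

In the described pipeline, such denoised images would then be fed to the classification network; production implementations (e.g. the NLM variants in image-processing libraries) use larger windows and vectorized or integral-image computations rather than these explicit loops.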


Subject(s)
Lung Neoplasms, Neural Networks, Computer, Humans, Lung Neoplasms/classification, Lung Neoplasms/pathology, Lung Neoplasms/diagnostic imaging, Algorithms, Machine Learning, Image Processing, Computer-Assisted/methods, Diagnosis, Computer-Assisted/methods