ABSTRACT
The histopathologic distinction of lung adenocarcinoma (LADC) subtypes is subject to high interobserver variability, which can compromise the optimal assessment of patient prognosis. Therefore, this study developed convolutional neural networks capable of distinguishing LADC subtypes and predicting disease-specific survival according to the recently established LADC tumor grades. Consensus LADC histopathologic images were obtained from 17 expert pulmonary pathologists and one pathologist in training. Two deep learning models (AI-1 and AI-2) were trained to predict eight different LADC classes, and the trained models were tested on an independent cohort of 133 patients. The models achieved high precision, recall, and F1 scores, exceeding 0.90 for most of the LADC classes. Both models achieved clear stratification of the three LADC grades when predicting disease-specific survival, with the Kaplan-Meier curves of both showing significance (P = 0.0017 and P = 0.0003). Moreover, both trained models showed high stability in separating each pair of predicted grades, with low variation in the hazard ratio across 200 bootstrapped samples. These findings indicate that the trained convolutional neural networks improve the diagnostic accuracy of the pathologist and refine LADC grade assessment. Thus, the trained models are promising tools that may assist in the routine evaluation of LADC subtypes and grades in clinical practice.
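For reference, per-class precision, recall, and F1 scores like those reported above are derived from one-vs-rest confusion counts. A minimal sketch in Python (the class labels used in testing are illustrative, not the study's eight LADC classes):

```python
def per_class_metrics(y_true, y_pred):
    """Compute (precision, recall, F1) for each class label,
    treating each class one-vs-rest against all others."""
    labels = sorted(set(y_true) | set(y_pred))
    metrics = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics[c] = (precision, recall, f1)
    return metrics
```

For example, `per_class_metrics(["lepidic", "acinar"], ["lepidic", "solid"])` scores the "lepidic" class as (1.0, 1.0, 1.0) and the "acinar" class as (0.0, 0.0, 0.0).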
Subjects
Adenocarcinoma of Lung , Adenocarcinoma , Deep Learning , Lung Neoplasms , Humans , GRADE Approach , Lung Neoplasms/pathology , Adenocarcinoma/pathology
ABSTRACT
Background: Accurate cystoscopic recognition of Hunner lesions (HLs) is indispensable for a better treatment prognosis in managing patients with Hunner-type interstitial cystitis (HIC), but it is frequently challenging due to their varying appearance. Objective: To develop a deep learning (DL) system for the cystoscopic recognition of an HL using artificial intelligence (AI). Design, setting, and participants: A total of 626 cystoscopic images collected from January 8, 2019 to December 24, 2020, consisting of 360 images of HLs from 41 patients with HIC and 266 images of flat reddish mucosal lesions resembling HLs from 41 control patients (including those with bladder cancer and other types of chronic cystitis), were used to create a dataset with an 8:2 ratio of training images to test images for transfer learning and external validation, respectively. Five AI-based DL models were constructed using a pretrained convolutional neural network model that was retrained to output 1 for an HL and 0 for a control. A five-fold cross-validation method was applied for internal validation. Outcome measurements and statistical analysis: True- and false-positive rates were plotted as a receiver operating characteristic (ROC) curve as the threshold was varied from 0 to 1. Accuracy, sensitivity, and specificity were evaluated at a threshold of 0.5. The diagnostic performance of the models was compared with that of urologists in a reader study. Results and limitations: The mean area under the curve of the models reached 0.919, with a mean sensitivity of 81.9% and specificity of 85.2% on the test dataset. In the reader study, the mean accuracy, sensitivity, and specificity were, respectively, 83.0%, 80.4%, and 85.6% for the models, and 62.4%, 79.6%, and 45.2% for expert urologists. Limitations include the diagnostic nature of an HL as warranted assertibility. Conclusions: We constructed the first DL system that recognizes HLs with accuracy exceeding that of humans.
This AI-driven system assists physicians with the proper cystoscopic recognition of an HL. Patient summary: In this diagnostic study, we developed a deep learning system for the cystoscopic recognition of Hunner lesions in patients with interstitial cystitis. The mean area under the curve of the constructed system reached 0.919, with a mean sensitivity of 81.9% and specificity of 85.2%, demonstrating diagnostic accuracy exceeding that of expert human urologists in detecting Hunner lesions. This deep learning system assists physicians with the proper diagnosis of a Hunner lesion.
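The AUC and threshold-based metrics reported above follow standard definitions: the AUC equals the probability that a random positive scores above a random negative (the Mann-Whitney U interpretation), and sensitivity/specificity are read off at a fixed threshold. A minimal pure-Python sketch (the scores are hypothetical, not the study's outputs):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def metrics_at_threshold(scores_pos, scores_neg, thr=0.5):
    """Sensitivity and specificity when scores >= thr are called positive."""
    tp = sum(s >= thr for s in scores_pos)
    tn = sum(s < thr for s in scores_neg)
    return tp / len(scores_pos), tn / len(scores_neg)
```

Sweeping `thr` from 0 to 1 and plotting the resulting true- and false-positive rates traces out the ROC curve described in the abstract.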
ABSTRACT
Deep learning techniques have achieved remarkable success in lesion segmentation and in classifying benign versus malignant tumors in breast ultrasound images. However, existing studies predominantly focus on devising efficient neural network-based learning structures to tackle specific tasks individually. By contrast, in clinical practice, sonographers perform segmentation and classification as a whole: they investigate the border contours of the tissue while detecting abnormal masses and performing diagnostic analysis. Performing multiple cognitive tasks simultaneously in this manner facilitates exploitation of the commonalities and differences between tasks. Inspired by this unified recognition process, this study proposes a novel learning scheme, called the cross-task guided network (CTG-Net), for efficient ultrasound breast image understanding. CTG-Net integrates the two most significant tasks in computerized breast lesion pattern investigation: lesion segmentation and tumor classification. Further, it enables the learning of efficient feature representations across tasks from ultrasound images and of the task-specific discriminative features that can greatly facilitate lesion detection. This is achieved using task-specific attention models to share the prediction results between tasks. Then, following the guidance of task-specific attention soft masks, the joint feature responses are efficiently calibrated through iterative model training. Finally, a simple feature fusion scheme is used to aggregate the attention-guided features for efficient ultrasound pattern analysis. We performed extensive experimental comparisons on multiple ultrasound datasets. Compared to state-of-the-art multi-task learning approaches, the proposed approach improves the Dice coefficient, true-positive rate of segmentation, AUC, and sensitivity of classification by 11%, 17%, 2%, and 6%, respectively.
The results demonstrate that the proposed cross-task guided feature learning framework can effectively fuse the complementary information of the ultrasound image segmentation and classification tasks to achieve accurate tumor localization. Thus, it can aid sonographers in detecting and diagnosing breast cancer.
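The Dice coefficient used above to evaluate segmentation overlap is defined as twice the intersection of two masks divided by the sum of their sizes. A minimal sketch on flattened binary masks (illustrative only, not the study's evaluation code):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as
    flat sequences of 0/1 values; two empty masks score 1.0."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0
```

A perfect prediction gives 1.0, no overlap gives 0.0, so an 11% improvement in this metric reflects substantially better boundary agreement with the reference mask.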
Subjects
Breast Neoplasms , Image Processing, Computer-Assisted , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Female , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Ultrasonography , Ultrasonography, Mammary
ABSTRACT
Colorectal cancer (CRC) is a heterogeneous disease, and patients differ in their therapeutic response. However, the mechanisms underlying interpatient heterogeneity in the response to chemotherapeutic agents remain to be elucidated, and molecular tumor characteristics are required to select patients for specific therapies. Patient-derived organoids (PDOs) established from CRCs recapitulate various biological characteristics of tumor tissues, including cellular heterogeneity and the response to chemotherapy. PDOs established from CRCs show various morphologies, but there are no criteria for defining them, which hampers the analysis of their biological significance. Here, we developed an artificial intelligence (AI)-based classifier to categorize PDOs based on microscopic images according to their similarity in appearance, and classified tubular adenocarcinoma-derived PDOs into six types. Transcriptome analysis identified differential expression of genes related to cell adhesion in some of the morphological types. Genes involved in ribosome biogenesis were also differentially expressed and were most highly expressed in morphological types showing CRC stem cell properties. We identified an RNA polymerase I inhibitor, CX-5461, as an upstream regulator of these type-specific gene sets. Notably, PDO types with increased expression of genes involved in ribosome biogenesis were resistant to CX-5461 treatment. Taken together, these results uncover the biological significance of PDO morphology and provide novel indicators by which to categorize CRCs. Therefore, the AI-based classifier is a useful tool to support PDO-based cancer research.
Subjects
Adenocarcinoma , Antineoplastic Agents , Colorectal Neoplasms , Adenocarcinoma/drug therapy , Adenocarcinoma/genetics , Adenocarcinoma/metabolism , Antineoplastic Agents/pharmacology , Artificial Intelligence , Colorectal Neoplasms/drug therapy , Colorectal Neoplasms/genetics , Colorectal Neoplasms/pathology , Humans , Organoids/metabolism
ABSTRACT
Interstitial pneumonia is a heterogeneous disease with a progressive course and a poor prognosis, at times worse than that of the major cancer types. Histopathological examination is crucial for its diagnosis and for estimating prognosis. However, the evaluation depends strongly on the experience of pathologists, and the reproducibility of diagnosis is low. Herein, we propose MIXTURE (huMan-In-the-loop eXplainable artificial intelligence Through the Use of REcurrent training), an original method to develop deep learning models for extracting pathologically significant findings based on an expert pathologist's perspective with a small annotation effort. The MIXTURE procedure consists of three steps. First, we created feature extractors for tiles from whole slide images using self-supervised learning. Similar-looking tiles were then clustered based on the output features, and pathologists integrated the pathologically synonymous clusters. Using the integrated clusters as labeled data, deep learning models to classify the tiles into pathological findings were created by transfer learning from the feature extractors. We developed three models for different magnifications. Using these extracted findings, our model was able to predict a diagnosis of usual interstitial pneumonia (UIP), a finding suggestive of progressive disease, with high accuracy (AUC 0.90 in the validation set and AUC 0.86 in the test set). This high accuracy could not be achieved without the integration of findings by pathologists. Patients predicted as UIP had a poorer prognosis (5-year overall survival [OS]: 55.4%) than those predicted as non-UIP (OS: 95.2%). A Cox proportional hazards model for each microscopic finding and prognosis identified dense fibrosis, fibroblastic foci, elastosis, and lymphocyte aggregation as independent risk factors.
We suggest that MIXTURE may serve as a model approach for other diseases evaluated by medical imaging, including pathology and radiology, and as a prototype for explainable artificial intelligence that can collaborate with humans.
Subjects
Deep Learning , Idiopathic Pulmonary Fibrosis , Lung Diseases, Interstitial , Artificial Intelligence , Humans , Idiopathic Pulmonary Fibrosis/diagnosis , Idiopathic Pulmonary Fibrosis/pathology , Lung Diseases, Interstitial/diagnosis , Lung Diseases, Interstitial/pathology , Reproducibility of Results
ABSTRACT
Background: Nonmuscle-invasive bladder cancer is diagnosed, treated, and monitored using cystoscopy. Artificial intelligence (AI) is increasingly used to augment tumor detection, but its performance is hindered by the limited availability of cystoscopic images required to form a large training data set. This study aimed to determine whether stepwise transfer learning with general images followed by gastroscopic images can improve the accuracy of bladder tumor detection on cystoscopic imaging. Materials and Methods: We trained a convolutional neural network with 1.2 million general images, followed by 8728 gastroscopic images. In the final step of the transfer learning process, the model was additionally trained with 2102 cystoscopic images of normal bladder tissue and bladder tumors collected at the University of Tsukuba Hospital. The diagnostic accuracy was evaluated using a receiver operating characteristic curve. The diagnostic performance of the models trained with cystoscopic images with or without stepwise organic transfer learning was compared with that of medical students and urologists with varying levels of experience. Results: The model developed by stepwise organic transfer learning had 95.4% sensitivity and 97.6% specificity. This performance was better than that of the other models and comparable with that of expert urologists. Notably, it showed superior diagnostic accuracy when tumors occupied >10% of the image. Conclusions: Our findings demonstrate the value of stepwise organic transfer learning in applications with limited data sets for training and further confirm the value of AI in medical diagnostics. Here, we applied deep learning to develop a tool to detect bladder tumors with an accuracy comparable with that of a urologist. To address the limitation that few bladder tumor images are available to train the model, we demonstrate that pretraining with general and gastroscopic images yields superior results.
Subjects
Urinary Bladder Neoplasms , Artificial Intelligence , Humans , Machine Learning , Neural Networks, Computer , Urinary Bladder Neoplasms/diagnostic imaging
ABSTRACT
The emergence of whole slide imaging technology allows pathology diagnosis to be performed on a computer screen. The applications of digital pathology are expanding, from supporting remote institutes suffering from a shortage of pathologists to routine use in daily diagnosis, including that of lung cancer. Through practice and research, large archival databases of digital pathology images have been developed that will facilitate the development of artificial intelligence (AI) methods for image analysis. Several AI applications have already been reported in the field of lung cancer, including segmentation of carcinoma foci, detection of lymph node metastasis, tumor cell counting, and prediction of gene mutations. Although the integration of AI algorithms into clinical practice remains a significant challenge, we have implemented tumor cell counting for genetic analysis, a helpful application for routine use. Our experience suggests that pathologists often overestimate tumor cell content, and that AI-based analysis increases accuracy and makes the task less tedious. However, several difficulties are encountered in the practical use of AI in clinical diagnosis: the lack of sufficient annotated data for the development and validation of AI systems; the limited explainability of black-box AI models, such as those based on deep learning, which offer the most promising performance; and the difficulty of defining ground truth data for training and validation owing to the inherent ambiguity of most applications. Together, these present significant challenges for the development and clinical translation of AI methods in the practice of pathology. Additional research on these problems will help remove the barriers to the clinical use of AI. Helping pathologists develop knowledge of the workings and limitations of AI will benefit the use of AI in both diagnostics and research.
ABSTRACT
Introduction: Nonmuscle-invasive bladder cancer has a relatively high postoperative recurrence rate despite conventional treatment. Cystoscopy is essential for diagnosing and monitoring bladder cancer, but lesions can be overlooked when using white-light imaging. With cystoscopy, small-diameter tumors, flat tumors such as carcinoma in situ, and the extent of flat lesions associated with elevated lesions are difficult to identify. In addition, the accuracy of diagnosis and treatment using cystoscopy varies with the skill and experience of the physician. Therefore, to improve the quality of bladder cancer diagnosis, we aimed to support the cystoscopic diagnosis of bladder cancer using artificial intelligence (AI). Materials and Methods: A total of 2102 cystoscopic images, consisting of 1671 images of normal tissue and 431 images of tumor lesions, were used to create a dataset with an 8:2 ratio of training to test images. We constructed a tumor classifier based on a convolutional neural network (CNN). The performance of the trained classifier was evaluated on the test data. True-positive and false-positive rates were plotted as a receiver operating characteristic (ROC) curve as the threshold was varied. Results: In the test data (tumor images: 87; normal images: 335), 78 images were true positive, 315 true negative, 20 false positive, and 9 false negative. The area under the ROC curve was 0.98, with a maximum Youden index of 0.837, sensitivity of 89.7%, and specificity of 94.0%. Conclusion: By objectively evaluating cystoscopic images with a CNN, it was possible to classify images as containing tumor lesions or normal tissue. The objective evaluation of cystoscopic images using AI is expected to contribute to improved accuracy in the diagnosis and treatment of bladder cancer.
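The reported operating-point metrics follow directly from the stated confusion-matrix counts; reproducing the arithmetic in Python confirms the sensitivity, specificity, and Youden index quoted above:

```python
# Confusion-matrix counts reported for the test data in the abstract.
tp, tn, fp, fn = 78, 315, 20, 9

sensitivity = tp / (tp + fn)            # 78/87  ≈ 0.897 -> 89.7%
specificity = tn / (tn + fp)            # 315/335 ≈ 0.940 -> 94.0%
youden = sensitivity + specificity - 1  # ≈ 0.837 at this operating point

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
      f"Youden={youden:.3f}")
```

Note that the abstract reports 0.837 as the maximum Youden index over the ROC curve; the counts above happen to reproduce the same value, suggesting they were taken at that optimal threshold.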
Subjects
Artificial Intelligence , Urinary Bladder Neoplasms , Cystoscopy , Humans , Neoplasm Recurrence, Local , Neural Networks, Computer , Urinary Bladder Neoplasms/diagnostic imaging
ABSTRACT
Automated cell counters that utilize still images of sample cells are widely used. However, they are not well suited to counting slender, aggregation-prone microorganisms such as Trypanosoma cruzi. Here, we developed a motion-based cell-counting system using an image-recognition method based on a cubic higher-order local auto-correlation (CHLAC) feature. The software successfully estimated the cell density of dispersed, aggregated, and fluorescent parasites by motion pattern recognition. Loss of parasite activity due to drug treatment could also be detected as a reduction in the apparent cell count, which potentially increases the sensitivity of drug screening assays. Moreover, the motion-based approach enabled estimation of the number of parasites in co-culture with host mammalian cells by disregarding the host cells as a static background.
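The published system uses CHLAC features for motion pattern recognition; as a loose, simplified illustration of the underlying motion-based idea only (plain frame differencing, not the paper's method), moving objects can be separated from a static background like this:

```python
def motion_pixels(frames, threshold=1):
    """Count pixels whose intensity changes between consecutive frames.
    A toy stand-in for motion-based detection: active (moving) objects
    produce frame-to-frame differences, while a static background,
    such as adherent host cells, contributes nothing.
    `frames` is a list of equally sized 2D grids of intensities."""
    total = 0
    for prev, cur in zip(frames, frames[1:]):
        for row_p, row_c in zip(prev, cur):
            total += sum(abs(a - b) >= threshold
                         for a, b in zip(row_p, row_c))
    return total
```

In this toy model, drug-immobilized parasites would stop producing frame differences and so would drop out of the count, mirroring how the real system detects loss of activity as a reduced apparent cell count.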
Subjects
Cell Count/methods , Image Processing, Computer-Assisted/methods , Optical Imaging/methods , Pattern Recognition, Automated/methods , Trypanosoma cruzi/isolation & purification , Chagas Disease/parasitology , Humans , Machine Learning , Microscopy, Fluorescence/methods , Motion , Parasitic Sensitivity Tests/methods , Software , Trypanosoma cruzi/cytology
ABSTRACT
Deep learning using convolutional neural networks (CNNs) is a well-established tool for many image classification tasks. Owing to its outstanding robustness and generalization, it is also expected to play a key role in facilitating advanced computer-aided diagnosis (CAD) for pathology images. However, the shortage of well-annotated pathology image data for training deep neural networks has become a major issue, because annotation is costly and requires a pathologist's professional observation. Faced with this problem, transfer learning techniques are generally used to reinforce the capacity of deep neural networks. To further boost the performance of state-of-the-art deep neural networks and alleviate the insufficiency of well-annotated data, this paper presents a novel stepwise fine-tuning-based deep learning scheme for gastric pathology image classification and establishes a new type of target-correlative intermediate dataset. The proposed scheme enables a deep neural network to imitate the pathologist's perception process and to acquire pathology-related knowledge in advance, at very limited extra annotation cost. Experiments were conducted with both well-annotated gastric pathology data and the proposed target-correlative intermediate data on several state-of-the-art deep neural networks. The results consistently demonstrate the feasibility and superiority of the proposed scheme in boosting classification performance.
Subjects
Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Stomach , Algorithms , Histocytochemistry , Humans , Stomach/diagnostic imaging , Stomach/pathology , Stomach Neoplasms/diagnostic imaging , Stomach Neoplasms/pathology
ABSTRACT
Optical colonoscopy is the most common approach to diagnosing bowel diseases through direct inspection of the colon and rectum. Periodic optical colonoscopy examinations are particularly important for detecting cancers at early, still-treatable stages. However, diagnostic accuracy depends heavily on both the experience and knowledge of the medical doctor. Moreover, it is extremely difficult, even for specialists, to detect early-stage cancer when it is obscured by inflammation of the colonic mucosa due to intractable inflammatory bowel diseases such as ulcerative colitis (UC). Thus, to assist UC diagnosis, a new technology is needed that can retrieve, for a diagnostic target image, similar cases from a store of previously diagnosed images showing various colonic mucosal findings. To assist diagnoses with optical colonoscopy, this paper proposes a retrieval method for colonoscopy images that can cope with multiscale objects. The proposed method can retrieve similar colonoscopy images despite varying visible sizes of the target objects. Through three experiments conducted with real clinical colonoscopy images, we demonstrate that the method can retrieve objects of any visible size and location with high accuracy.
ABSTRACT
Capsule endoscopy is a patient-friendly endoscopic technique widely used in gastrointestinal examination. However, diagnostic efficiency is limited by the large quantity of images produced. This paper presents a modified anomaly detection method by which both known and unknown anomalies in capsule endoscopy images of the small intestine are expected to be detected. To achieve this goal, the paper introduces feature extraction using a non-linear color conversion and higher-order local auto-correlation (HLAC) features, and makes use of image partitioning and a subspace method for anomaly detection. Experiments were conducted on several major anomalies with combinations of the proposed techniques. As a result, the proposed method achieved 91.7% and 100% detection accuracy for swelling and bleeding, respectively, demonstrating its effectiveness.
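A subspace method, as used here, scores an image by its distance from a low-dimensional subspace fitted to feature vectors of normal images: anomalies, known or unknown, stand out as large residuals. A minimal one-dimensional sketch in pure Python (an illustration of the general technique, not the paper's HLAC-based implementation):

```python
def fit_subspace(vectors):
    """Fit a one-dimensional subspace (mean + leading direction) to
    feature vectors from normal images, via power iteration on the
    centered data's covariance."""
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[i] for v in vectors) / n for i in range(d)]
    centered = [[v[i] - mean[i] for i in range(d)] for v in vectors]
    direction = [1.0] * d
    for _ in range(100):  # power iteration converges to the top eigenvector
        new = [0.0] * d
        for c in centered:
            dot = sum(ci * di for ci, di in zip(c, direction))
            for i in range(d):
                new[i] += dot * c[i]
        norm = sum(x * x for x in new) ** 0.5 or 1.0
        direction = [x / norm for x in new]
    return mean, direction

def anomaly_score(vector, mean, direction):
    """Distance from the normal subspace: the residual norm after
    projecting the centered vector onto the leading direction."""
    c = [v - m for v, m in zip(vector, mean)]
    proj = sum(ci * di for ci, di in zip(c, direction))
    residual = [ci - proj * di for ci, di in zip(c, direction)]
    return sum(r * r for r in residual) ** 0.5
```

A real system would retain several leading directions and apply the score per image partition; vectors lying near the normal subspace score close to zero, while vectors off it score high and are flagged as anomalies.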