1.
J Endourol ; 38(8): 865-870, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38526374

ABSTRACT

Background: The diagnostic accuracy of cystoscopy varies according to the knowledge and experience of the performing physician. In this study, we evaluated the difference in cystoscopic gaze location patterns between medical students and urologists and assessed the differences in their eye movements when simultaneously observing conventional cystoscopic images and images with lesions detected by artificial intelligence (AI). Methodology: Eye-tracking measurements were performed, and the observation patterns of participants (24 medical students and 10 urologists) viewing images from routine cystoscopic videos were analyzed. The cystoscopic video was captured preoperatively in a case of initial-onset noninvasive bladder cancer with three low-lying papillary tumors in the posterior, anterior, and neck areas (urothelial carcinoma, high grade, pTa). The viewpoint coordinates and stop times during observation were obtained using a noncontact, screen-based gaze-tracking and measurement system. In addition, the observation patterns of medical students and urologists during parallel observation of conventional cystoscopic videos and AI-assisted lesion detection videos were compared. Results: Compared with medical students, urologists exhibited significantly higher stationary gaze entropy when viewing cystoscopic images (p < 0.05), suggesting that urologists with expertise in identifying lesions efficiently observed a broader range of bladder mucosal surfaces on the screen, presumably with the conscious intent of identifying pathologic changes. When the participants observed conventional and AI-assisted lesion detection images side by side, medical students, in contrast to urologists, directed a higher proportion of their attention toward AI-detected lesion images.
Conclusion: Eye-tracking measurements during cystoscopic image assessment revealed that experienced specialists efficiently observed a wide range of the video screen during cystoscopy. In addition, this study revealed how lesion images detected by AI are viewed. Observers' gaze patterns may have implications for assessing and improving proficiency and for educational purposes. To the best of our knowledge, this is the first study to utilize eye tracking in cystoscopy. University of Tsukuba Hospital, clinical research reference number R02-122.
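The stationary gaze entropy the study reports can be illustrated as Shannon entropy over a coarse spatial grid of fixation points. The sketch below is a minimal pure-Python illustration of that general idea, not the authors' measurement pipeline; the grid size, screen resolution, and sample fixations are all hypothetical.

```python
import math

def stationary_gaze_entropy(fixations, grid=(8, 8), screen=(1920, 1080)):
    """Shannon entropy (bits) of the distribution of fixation points over a
    coarse spatial grid; higher values mean gaze was spread more evenly
    across the screen."""
    counts = {}
    for x, y in fixations:
        cell = (min(int(x * grid[0] / screen[0]), grid[0] - 1),
                min(int(y * grid[1] / screen[1]), grid[1] - 1))
        counts[cell] = counts.get(cell, 0) + 1
    n = len(fixations)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical data: gaze stuck in one spot vs. gaze visiting every cell once.
focused = [(100, 100)] * 50
spread = [(x * 240 + 10, y * 135 + 10) for x in range(8) for y in range(8)]
```

Gaze concentrated in a single cell yields zero entropy, while gaze spread evenly over all 64 cells yields the maximum of log2(64) = 6 bits; the urologists' higher entropy corresponds to the "spread" end of this scale.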


Subject(s)
Artificial Intelligence , Cystoscopy , Eye-Tracking Technology , Humans , Cystoscopy/methods , Male , Urinary Bladder Neoplasms/diagnostic imaging , Urinary Bladder Neoplasms/pathology , Urinary Bladder Neoplasms/diagnosis , Students, Medical , Female , Adult , Urologists , Young Adult , Eye Movements/physiology , Middle Aged
2.
Gan To Kagaku Ryoho ; 50(6): 681-685, 2023 Jun.
Article in Japanese | MEDLINE | ID: mdl-37317599

ABSTRACT

Artificial intelligence (AI) and information and communication technology (ICT) are beginning to be used in the digital transformation of endoscopic images. In Japan, several AI systems for digestive organ endoscopy have been approved as programmed medical devices and are being introduced into clinical practice. Although AI is expected to improve diagnostic accuracy and efficiency in endoscopic examinations of organs other than the digestive tract, research and development toward practical application are still in their infancy. This article introduces AI for gastrointestinal endoscopy and the author's research on cystoscopy.


Subject(s)
Artificial Intelligence , Endoscopy , Humans , Japan
3.
Eur Urol Open Sci ; 49: 44-50, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36874607

ABSTRACT

Background: Accurate cystoscopic recognition of Hunner lesions (HLs) is indispensable for a better treatment prognosis in managing patients with Hunner-type interstitial cystitis (HIC) but is frequently challenging due to their varying appearance. Objective: To develop a deep learning (DL) system for cystoscopic recognition of HLs using artificial intelligence (AI). Design, setting, and participants: A total of 626 cystoscopic images collected from January 8, 2019 to December 24, 2020, consisting of 360 images of HLs from 41 patients with HIC and 266 images of flat reddish mucosal lesions resembling HLs from 41 control patients, including those with bladder cancer and other chronic cystitis, were used to create a dataset with an 8:2 ratio of training images and test images for transfer learning and external validation, respectively. Five AI-based DL models were constructed using a pretrained convolutional neural network that was retrained to output 1 for an HL and 0 for control. A five-fold cross-validation method was applied for internal validation. Outcome measurements and statistical analysis: True- and false-positive rates were plotted as a receiver operating characteristic curve as the threshold changed from 0 to 1. Accuracy, sensitivity, and specificity were evaluated at a threshold of 0.5. The diagnostic performance of the models was compared with that of urologists in a reader study. Results and limitations: The mean area under the curve of the models reached 0.919, with mean sensitivity of 81.9% and specificity of 85.2% in the test dataset. In the reader study, the mean accuracy, sensitivity, and specificity were, respectively, 83.0%, 80.4%, and 85.6% for the models, and 62.4%, 79.6%, and 45.2% for expert urologists. Limitations include the diagnostic nature of an HL as warranted assertibility. Conclusions: We constructed the first DL system that recognizes HLs with accuracy exceeding that of humans.
This AI-driven system assists physicians with proper cystoscopic recognition of a HL. Patient summary: In this diagnostic study, we developed a deep learning system for cystoscopic recognition of Hunner lesions in patients with interstitial cystitis. The mean area under the curve of the constructed system reached 0.919 with mean sensitivity of 81.9% and specificity of 85.2%, demonstrating diagnostic accuracy exceeding that of human expert urologists in detecting Hunner lesions. This deep learning system assists physicians with proper diagnosis of a Hunner lesion.
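The outcome measures described above, an ROC curve built by sweeping a threshold over the model's 0-to-1 outputs, plus sensitivity and specificity read off at 0.5, can be sketched in a few lines of pure Python. The score lists in the usage example are invented for illustration; this is not the authors' evaluation code.

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive outscores a randomly
    chosen negative (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def sens_spec(pos_scores, neg_scores, threshold=0.5):
    """Sensitivity and specificity when scores >= threshold are called positive."""
    sens = sum(s >= threshold for s in pos_scores) / len(pos_scores)
    spec = sum(s < threshold for s in neg_scores) / len(neg_scores)
    return sens, spec

# Made-up model outputs for lesion (positive) and control (negative) images.
pos = [0.9, 0.8, 0.4]
neg = [0.3, 0.6, 0.1]
```

Sweeping the threshold traces the full ROC curve; the Mann-Whitney formulation gives its area directly without the sweep.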

4.
PLoS One ; 17(8): e0271106, 2022.
Article in English | MEDLINE | ID: mdl-35951606

ABSTRACT

Deep learning techniques have achieved remarkable success in lesion segmentation and classification between benign and malignant tumors in breast ultrasound images. However, existing studies are predominantly focused on devising efficient neural network-based learning structures to tackle specific tasks individually. By contrast, in clinical practice, sonographers perform segmentation and classification as a whole; they investigate the border contours of the tissue while detecting abnormal masses and performing diagnostic analysis. Performing multiple cognitive tasks simultaneously in this manner facilitates exploitation of the commonalities and differences between tasks. Inspired by this unified recognition process, this study proposes a novel learning scheme, called the cross-task guided network (CTG-Net), for efficient ultrasound breast image understanding. CTG-Net integrates the two most significant tasks in computerized breast lesion pattern investigation: lesion segmentation and tumor classification. Further, it enables the learning of efficient feature representations across tasks from ultrasound images and the task-specific discriminative features that can greatly facilitate lesion detection. This is achieved using task-specific attention models to share the prediction results between tasks. Then, following the guidance of task-specific attention soft masks, the joint feature responses are efficiently calibrated through iterative model training. Finally, a simple feature fusion scheme is used to aggregate the attention-guided features for efficient ultrasound pattern analysis. We performed extensive experimental comparisons on multiple ultrasound datasets. Compared to state-of-the-art multi-task learning approaches, the proposed approach can improve the Dice's coefficient, true-positive rate of segmentation, AUC, and sensitivity of classification by 11%, 17%, 2%, and 6%, respectively. 
The results demonstrate that the proposed cross-task guided feature learning framework can effectively fuse the complementary information of ultrasound image segmentation and classification tasks to achieve accurate tumor localization. Thus, it can aid sonographers to detect and diagnose breast cancer.
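One of the reported gains is in the Dice coefficient of the segmentation branch. As a point of reference, the metric itself can be computed in pure Python over flattened binary masks; this is a generic illustration of the measure, not CTG-Net code.

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between two binary
    masks given as flat 0/1 sequences of equal length. Two empty masks are
    treated as a perfect match."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0
```

For example, a prediction overlapping the ground truth in one of two foreground pixels each scores 0.5, and identical masks score 1.0.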


Subject(s)
Breast Neoplasms , Image Processing, Computer-Assisted , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Female , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Ultrasonography , Ultrasonography, Mammary
5.
Cancer Sci ; 113(8): 2693-2703, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35585758

ABSTRACT

Colorectal cancer (CRC) is a heterogeneous disease, and patients differ in therapeutic response. However, the mechanisms underlying interpatient heterogeneity in the response to chemotherapeutic agents remain to be elucidated, and molecular tumor characteristics are required to select patients for specific therapies. Patient-derived organoids (PDOs) established from CRCs recapitulate various biological characteristics of tumor tissues, including cellular heterogeneity and the response to chemotherapy. Such PDOs show various morphologies, but there are no criteria for defining them, which hampers analysis of their biological significance. Here, we developed an artificial intelligence (AI)-based classifier to categorize PDOs based on microscopic images according to their similarity in appearance and classified tubular adenocarcinoma-derived PDOs into six types. Transcriptome analysis identified differential expression of genes related to cell adhesion in some of the morphological types. Genes involved in ribosome biogenesis were also differentially expressed and were most highly expressed in morphological types showing CRC stem cell properties. We identified the RNA polymerase I inhibitor CX-5461 as an upstream regulator of these type-specific gene sets. Notably, PDO types with increased expression of genes involved in ribosome biogenesis were resistant to CX-5461 treatment. Taken together, these results uncover the biological significance of PDO morphology and provide novel indicators by which to categorize CRCs. Therefore, the AI-based classifier is a useful tool to support PDO-based cancer research.


Subject(s)
Adenocarcinoma , Antineoplastic Agents , Colorectal Neoplasms , Adenocarcinoma/drug therapy , Adenocarcinoma/genetics , Adenocarcinoma/metabolism , Antineoplastic Agents/pharmacology , Artificial Intelligence , Colorectal Neoplasms/drug therapy , Colorectal Neoplasms/genetics , Colorectal Neoplasms/pathology , Humans , Organoids/metabolism
6.
J Endourol ; 35(7): 1030-1035, 2021 07.
Article in English | MEDLINE | ID: mdl-33148020

ABSTRACT

Background: Nonmuscle-invasive bladder cancer is diagnosed, treated, and monitored using cystoscopy. Artificial intelligence (AI) is increasingly used to augment tumor detection, but its performance is hindered by the limited availability of cystoscopic images required to form a large training data set. This study aimed to determine whether stepwise transfer learning with general images followed by gastroscopic images can improve the accuracy of bladder tumor detection on cystoscopic imaging. Materials and Methods: We trained a convolutional neural network with 1.2 million general images, followed by 8728 gastroscopic images. In the final step of the transfer learning process, the model was additionally trained with 2102 cystoscopic images of normal bladder tissue and bladder tumors collected at the University of Tsukuba Hospital. The diagnostic accuracy was evaluated using a receiver operating characteristic curve. The diagnostic performance of the models trained with cystoscopic images with or without stepwise organic transfer learning was compared with that of medical students and urologists with varying levels of experience. Results: The model developed by stepwise organic transfer learning had 95.4% sensitivity and 97.6% specificity. This performance was better than that of the other models and comparable with that of expert urologists. Notably, it showed superior diagnostic accuracy when tumors occupied >10% of the image. Conclusions: Our findings demonstrate the value of stepwise organic transfer learning in applications with limited data sets for training and further confirm the value of AI in medical diagnostics. Here, we applied deep learning to develop a tool to detect bladder tumors with an accuracy comparable with that of a urologist. To address the limitation that few bladder tumor images are available to train the model, we demonstrate that pretraining with general and gastroscopic images yields superior results.


Subject(s)
Urinary Bladder Neoplasms , Artificial Intelligence , Humans , Machine Learning , Neural Networks, Computer , Urinary Bladder Neoplasms/diagnostic imaging
7.
J Endourol ; 34(3): 352-358, 2020 03.
Article in English | MEDLINE | ID: mdl-31808367

ABSTRACT

Introduction: Nonmuscle-invasive bladder cancer has a relatively high postoperative recurrence rate despite conventional treatment. Cystoscopy is essential for diagnosing and monitoring bladder cancer, but lesions may be overlooked when using white-light imaging. With cystoscopy, small-diameter tumors, flat tumors such as carcinoma in situ, and the extent of flat lesions associated with elevated lesions are difficult to identify. In addition, the accuracy of diagnosis and treatment using cystoscopy varies with the skill and experience of the physician. Therefore, to improve the quality of bladder cancer diagnosis, we aimed to support the cystoscopic diagnosis of bladder cancer using artificial intelligence (AI). Materials and Methods: A total of 2102 cystoscopic images, consisting of 1671 images of normal tissue and 431 images of tumor lesions, were used to create a dataset with an 8:2 ratio of training and test images. We constructed a tumor classifier based on a convolutional neural network (CNN). The performance of the trained classifier was evaluated using the test data. True-positive and false-positive rates were plotted as the threshold was varied, yielding a receiver operating characteristic (ROC) curve. Results: In the test data (tumor images: 87, normal images: 335), 78 images were true positive, 315 true negative, 20 false positive, and 9 false negative. The area under the ROC curve was 0.98, with a maximum Youden index of 0.837, sensitivity of 89.7%, and specificity of 94.0%. Conclusion: By objectively evaluating cystoscopic images with the CNN, it was possible to classify images as containing tumor lesions or normal tissue. The objective evaluation of cystoscopic images using AI is expected to contribute to improving the accuracy of bladder cancer diagnosis and treatment.
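The reported results can be recomputed from the confusion-matrix counts alone. This small pure-Python helper derives sensitivity, specificity, and the Youden index from the abstract's own numbers (78 TP, 315 TN, 20 FP, 9 FN).

```python
def classifier_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, and Youden index (J = sens + spec - 1)
    from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, sensitivity + specificity - 1

# Counts reported in the abstract above.
sens, spec, youden = classifier_metrics(tp=78, tn=315, fp=20, fn=9)
# sens ≈ 0.897, spec ≈ 0.940, youden ≈ 0.837 — matching the reported values.
```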


Subject(s)
Artificial Intelligence , Urinary Bladder Neoplasms , Cystoscopy , Humans , Neoplasm Recurrence, Local , Neural Networks, Computer , Urinary Bladder Neoplasms/diagnostic imaging
8.
Biotechniques ; 66(4): 179-185, 2019 04.
Article in English | MEDLINE | ID: mdl-30543114

ABSTRACT

Automated cell counters that utilize still images of sample cells are widely used. However, they are not well suited to counting slender, aggregate-prone microorganisms such as Trypanosoma cruzi. Here, we developed a motion-based cell-counting system using an image-recognition method based on a cubic higher-order local auto-correlation (CHLAC) feature. The software successfully estimated the cell density of dispersed, aggregated, and fluorescent parasites by motion pattern recognition. Loss of parasite activity due to drug treatment could also be detected as a reduction in apparent cell count, which potentially increases the sensitivity of drug screening assays. Moreover, the motion-based approach enabled estimation of the number of parasites in a co-culture with host mammalian cells by disregarding the presence of the host cells as a static background.
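The paper's actual feature is a cubic higher-order local auto-correlation (CHLAC) descriptor computed over video. As a much simpler stand-in for the underlying intuition, that motile parasites produce frame-to-frame change while static host cells do not, here is a toy frame-differencing sketch; the data, threshold, and function names are hypothetical and this is not the authors' method.

```python
def motion_pixels(prev, curr, thresh=10):
    """Count pixels whose grayscale intensity changed by more than
    `thresh` between two frames (flat lists of equal length)."""
    return sum(abs(a - b) > thresh for a, b in zip(prev, curr))

def estimate_activity(frames, thresh=10):
    """Mean changed-pixel count over consecutive frame pairs: a crude
    proxy for how much motile material is in the field of view."""
    diffs = [motion_pixels(a, b, thresh) for a, b in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs)

# Toy clip: motion between frames 1 and 2, then a static pair.
frames = [[0, 0, 0, 0], [0, 20, 0, 20], [0, 20, 0, 20]]
```

A static background contributes nothing to the score, which mirrors how the motion-based approach ignores host cells.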


Subject(s)
Cell Count/methods , Image Processing, Computer-Assisted/methods , Optical Imaging/methods , Pattern Recognition, Automated/methods , Trypanosoma cruzi/isolation & purification , Chagas Disease/parasitology , Humans , Machine Learning , Microscopy, Fluorescence/methods , Motion , Parasitic Sensitivity Tests/methods , Software , Trypanosoma cruzi/cytology
9.
J Healthc Eng ; 2018: 8961781, 2018.
Article in English | MEDLINE | ID: mdl-30034677

ABSTRACT

Deep learning using convolutional neural networks (CNNs) is a distinguished tool for many image classification tasks. Owing to its outstanding robustness and generalization, it is also expected to play a key role in facilitating advanced computer-aided diagnosis (CAD) for pathology images. However, the shortage of well-annotated pathology image data for training deep neural networks has become a major issue, because annotation depends on a pathologist's costly professional observation. Faced with this problem, transfer learning techniques are generally used to reinforce the capacity of deep neural networks. To further boost the performance of state-of-the-art deep neural networks and alleviate the insufficiency of well-annotated data, this paper presents a novel stepwise fine-tuning-based deep learning scheme for gastric pathology image classification and establishes a new type of target-correlative intermediate dataset. The proposed scheme enables the deep neural network to imitate the pathologist's perception manner and to acquire pathology-related knowledge in advance, at very limited extra annotation cost. Experiments are conducted with both well-annotated gastric pathology data and the proposed target-correlative intermediate data on several state-of-the-art deep neural networks. The results consistently demonstrate the feasibility and superiority of the proposed scheme for boosting classification performance.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Stomach , Algorithms , Histocytochemistry , Humans , Stomach/diagnostic imaging , Stomach/pathology , Stomach Neoplasms/diagnostic imaging , Stomach Neoplasms/pathology
10.
Int J Biomed Imaging ; 2017: 7089213, 2017.
Article in English | MEDLINE | ID: mdl-28255295

ABSTRACT

Optical colonoscopy is the most common approach to diagnosing bowel diseases through direct inspection of the colon and rectum. Periodic optical colonoscopy examinations are particularly important for detecting cancers at early, still-treatable stages. However, diagnostic accuracy is highly dependent on the experience and knowledge of the physician. Moreover, it is extremely difficult, even for specialists, to detect early-stage cancer when it is obscured by inflammation of the colonic mucosa due to intractable inflammatory bowel diseases such as ulcerative colitis (UC). Thus, to assist UC diagnosis, technology is needed that can retrieve cases similar to a target diagnostic image from past cases whose stored images show various symptoms of the colonic mucosa. To assist diagnoses with optical colonoscopy, this paper proposes a retrieval method for colonoscopy images that can cope with multiscale objects: the method retrieves similar colonoscopy images despite varying visible sizes of the target objects. Through three experiments conducted with real clinical colonoscopy images, we demonstrate that the method can retrieve objects of any visible size and location with high accuracy.
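A retrieval system of this kind ranks stored images by similarity to a query. The sketch below shows only a baseline content-based retrieval loop using grayscale-histogram intersection; all names and data are illustrative, and unlike the paper's method it is not robust to object scale.

```python
def color_histogram(pixels, bins=4):
    """Normalized histogram of 8-bit grayscale values (flat pixel list)."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    n = len(pixels)
    return [h / n for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def retrieve(query, database, bins=4):
    """Return database indices ranked by similarity to the query image."""
    qh = color_histogram(query, bins)
    scored = [(histogram_intersection(qh, color_histogram(img, bins)), i)
              for i, img in enumerate(database)]
    return [i for _, i in sorted(scored, reverse=True)]
```

A query dominated by dark and bright pixels ranks a similarly distributed stored image above a uniformly mid-gray one.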

11.
Article in English | MEDLINE | ID: mdl-24110976

ABSTRACT

Capsule endoscopy is a patient-friendly endoscopic technique broadly utilized in gastrointestinal examination. However, diagnostic efficiency is restricted by the large quantity of images produced. This paper presents a modified anomaly detection method intended to detect both known and unknown anomalies in capsule endoscopy images of the small intestine. To achieve this goal, the method extracts features using a nonlinear color conversion and higher-order local auto-correlation (HLAC) features, and applies image partitioning and a subspace method for anomaly detection. Experiments were conducted on several major anomalies with combinations of the proposed techniques. As a result, the proposed method achieved 91.7% and 100% detection accuracy for swelling and bleeding, respectively, demonstrating its effectiveness.
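HLAC features are sums of products of pixel values at fixed local displacements, which makes them translation-invariant image descriptors. A minimal 2D extractor for the 0th- and 1st-order features of a binary image might look like this; the paper additionally applies a nonlinear color conversion, image partitioning, and a subspace classifier, none of which are shown here.

```python
def hlac_features(img):
    """0th- and 1st-order higher-order local auto-correlation (HLAC)
    features of a binary image (list of equal-length rows of 0/1).
    Feature 0 counts foreground pixels; features 1-4 count co-occurring
    foreground pairs at four displacements. Summing over all positions
    makes the features translation-invariant."""
    h, w = len(img), len(img[0])
    disps = [(0, 1), (1, 0), (1, 1), (1, -1)]  # symmetry-reduced set
    feats = [sum(sum(row) for row in img)]
    for dy, dx in disps:
        feats.append(sum(img[y][x] * img[y + dy][x + dx]
                         for y in range(h) for x in range(w)
                         if 0 <= y + dy < h and 0 <= x + dx < w))
    return feats
```

Higher-order variants (and the cubic space-time CHLAC used for motion) extend the same products to more displacement points.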


Subject(s)
Capsule Endoscopy/methods , Diagnosis, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Intestine, Small/pathology , Color , Gastrointestinal Hemorrhage/diagnosis , Humans , Nonlinear Dynamics