Results 1 - 8 of 8
1.
Laryngoscope ; 2024 May 27.
Article in English | MEDLINE | ID: mdl-38801129

ABSTRACT

OBJECTIVES: Vocal fold leukoplakia (VFL) is a precancerous lesion of laryngeal cancer, and its endoscopic diagnosis poses challenges. We aimed to develop an artificial intelligence (AI) model using white light imaging (WLI) and narrow-band imaging (NBI) to distinguish benign from malignant VFL. METHODS: A total of 7057 images from 426 patients were used for model development and internal validation. Additionally, 1617 images from two other hospitals were used for external validation. Models based on the WLI and NBI modalities were trained using deep learning combined with a multi-instance learning (MIL) approach. Furthermore, 50 prospectively collected videos were used to evaluate real-time model performance. A human-machine comparison involving 100 patients and 12 laryngologists assessed the real-world effectiveness of the model. RESULTS: The model achieved the highest area under the receiver operating characteristic curve (AUC) values of 0.868 and 0.884 in the internal and external validation sets, respectively. The AUC in the video validation set was 0.825 (95% CI: 0.704-0.946). In the human-machine comparison, AI assistance significantly improved the AUC and accuracy of all laryngologists (p < 0.05), and their diagnostic consistency also improved. CONCLUSIONS: Our multicenter study developed an effective AI model that fuses WLI and NBI images with MIL for VFL diagnosis, particularly aiding junior laryngologists. However, further optimization and validation are necessary to fully assess its potential impact in clinical settings. LEVEL OF EVIDENCE: 3 Laryngoscope, 2024.
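The MIL step in this abstract aggregates many per-image predictions into one patient-level call. A minimal sketch of score-level MIL fusion, assuming max pooling and a 0.5 cutoff (both illustrative choices, not details given in the abstract):

```python
# Illustrative multi-instance learning (MIL) fusion: a patient is a "bag"
# of laryngoscopic images, each with a model-predicted malignancy score.
# Max pooling encodes the MIL assumption that one malignant-looking image
# suffices to flag the patient. The 0.5 threshold is an assumed cutoff.

def patient_level_score(image_scores: list[float]) -> float:
    """Fuse image-level malignancy probabilities into one patient score."""
    return max(image_scores)

def classify_patient(image_scores: list[float], threshold: float = 0.5) -> str:
    label = "malignant" if patient_level_score(image_scores) >= threshold else "benign"
    return label

# A patient with mostly benign-looking frames but one suspicious frame:
scores = [0.12, 0.08, 0.31, 0.77, 0.22]
print(classify_patient(scores))  # max score 0.77 >= 0.5 -> "malignant"
```

Mean pooling or an attention-weighted pool are common alternatives to the max rule sketched here.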

2.
Am J Otolaryngol ; 45(4): 104342, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38703609

ABSTRACT

OBJECTIVE: To develop a multi-instance learning (MIL)-based artificial intelligence (AI)-assisted diagnosis model that uses laryngoscopic images to differentiate benign from malignant vocal fold leukoplakia (VFL). METHODS: The AI system was developed, trained, and validated on 5362 images of 551 patients from three hospitals. An automated region-of-interest (ROI) segmentation algorithm was used to construct image-level features. MIL was used to fuse image-level results into patient-level features, and the extracted features were then modeled with seven machine learning algorithms. Finally, we evaluated the results at both the image and patient levels. Additionally, 50 videos of VFL were prospectively gathered to assess the system's real-time diagnostic capabilities. A human-machine comparison database was also constructed to compare the diagnostic performance of otolaryngologists with and without AI assistance. RESULTS: In the internal and external validation sets, the maximum area under the curve (AUC) for the image-level segmentation models was 0.775 (95% CI 0.740-0.811) and 0.720 (95% CI 0.684-0.756), respectively. Using the MIL-based fusion strategy, the patient-level AUC increased to 0.869 (95% CI 0.798-0.940) and 0.851 (95% CI 0.756-0.945). For real-time video diagnosis, the maximum patient-level AUC reached 0.850 (95% CI 0.743-0.957). With AI assistance, the AUC improved from 0.720 (95% CI 0.682-0.755) to 0.808 (95% CI 0.775-0.839) for senior otolaryngologists and from 0.647 (95% CI 0.608-0.686) to 0.807 (95% CI 0.773-0.837) for junior otolaryngologists. CONCLUSIONS: The MIL-based AI-assisted diagnosis system can significantly improve otolaryngologists' diagnostic performance for VFL and support appropriate clinical decision making.
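The fusion of image-level results into patient-level features described here can be illustrated with simple mean pooling; the pooling rule and toy feature vectors are assumptions for illustration, not the authors' exact method:

```python
# Illustrative patient-level feature construction: mean-pool the per-image
# ROI feature vectors into one patient vector, which a downstream
# classifier (the abstract tried seven ML algorithms) can then consume.

def mean_pool(image_features: list[list[float]]) -> list[float]:
    """Average image-level feature vectors into one patient-level vector."""
    n = len(image_features)
    dim = len(image_features[0])
    return [sum(vec[i] for vec in image_features) / n for i in range(dim)]

# Two images from one patient, each with a toy 2-dim feature vector:
images = [[1.0, 2.0], [3.0, 4.0]]
print(mean_pool(images))  # [2.0, 3.0]
```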

3.
Front Immunol ; 15: 1310376, 2024.
Article in English | MEDLINE | ID: mdl-38720887

ABSTRACT

Introduction: Hypopharyngeal squamous cell carcinoma (HSCC) has one of the worst prognoses among head and neck cancers. The transformation from normal tissue through low-grade and high-grade intraepithelial neoplasia to cancerous tissue in HSCC is typically viewed as the progressive pathological sequence of tumorigenesis. Nonetheless, the alterations in diverse cell clusters within the tumor microenvironment (TME) throughout tumorigenesis, and their impact on the development of HSCC, are yet to be fully understood. Methods: We employed single-cell RNA sequencing and TCR/BCR sequencing on 60,854 cells from nine tissue samples representing different stages of HSCC progression. This allowed us to construct dynamic transcriptomic maps of cells in the diverse TME across disease stages and to experimentally validate key molecules within it. Results: We delineated the heterogeneity among tumor cells, immune cells (including T cells, B cells, and myeloid cells), and stromal cells (such as fibroblasts and endothelial cells) during the tumorigenesis of HSCC. We uncovered alterations in the function and state of distinct cell clusters at different stages of tumor development and identified specific clusters closely associated with HSCC tumorigenesis. Consequently, we discovered molecules such as MAGEA3 and MMP3 that are pivotal for the diagnosis and treatment of HSCC. Discussion: Our research sheds light on the dynamic alterations within the TME during HSCC tumorigenesis, which will help to elucidate its mechanism of malignant transformation, identify early diagnostic markers, and discover new therapeutic targets.


Subject(s)
Hypopharyngeal Neoplasms; Single-Cell Analysis; Tumor Microenvironment; Humans; Hypopharyngeal Neoplasms/genetics; Hypopharyngeal Neoplasms/pathology; Hypopharyngeal Neoplasms/immunology; Tumor Microenvironment/immunology; Tumor Microenvironment/genetics; Receptors, Antigen, T-Cell/genetics; Receptors, Antigen, T-Cell/metabolism; Receptors, Antigen, B-Cell/genetics; Receptors, Antigen, B-Cell/metabolism; Carcinogenesis/genetics; Sequence Analysis, RNA; Transcriptome; Biomarkers, Tumor/genetics; Squamous Cell Carcinoma of Head and Neck/genetics; Squamous Cell Carcinoma of Head and Neck/immunology; Squamous Cell Carcinoma of Head and Neck/pathology; Gene Expression Regulation, Neoplastic; Male
4.
Eur J Radiol Open ; 12: 100563, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38681663

ABSTRACT

Objectives: This study aimed to assess the efficacy of radiomics based on narrow-band imaging (NBI) endoscopy for predicting radiosensitivity in nasopharyngeal carcinoma (NPC), and to explore the associated molecular mechanisms. Materials: The study included 57 NPC patients who were pathologically diagnosed and underwent RNA sequencing. They were categorized into complete response (CR) and partial response (PR) groups after radical concurrent chemoradiotherapy. We analyzed 267 NBI images using ResNet50 for feature extraction, obtaining 2048 radiomic features per image. Using Python for deep learning and the least absolute shrinkage and selection operator (LASSO) for feature selection, we identified differentially expressed genes associated with the radiomic features. Subsequently, we conducted enrichment analysis on these genes and validated their roles in the tumor immune microenvironment through single-cell RNA sequencing. Results: After feature selection, 54 radiomic features remained. Among the machine learning algorithms built from these features, random forest achieved the highest average accuracy (0.909) and an area under the curve of 0.961. Correlation analysis identified the 30 differential genes most closely associated with the radiomic features. Enrichment and immune infiltration analyses indicated that tumor-associated macrophages are closely related to treatment response. Three key NBI differentially expressed immune genes (NBI-DEIGs), namely CCL8, SLC11A1, and PTGS2, were identified as regulators influencing treatment response through macrophages. Conclusion: NBI-based radiomics models offer a novel and effective method for predicting radiosensitivity in NPC. The molecular mechanisms may involve the functional states of macrophages, as reflected by key regulatory genes.
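The LASSO feature-selection step described above can be sketched with a minimal coordinate-descent implementation on synthetic data; the data shapes, penalty strength, and selection cutoff below are illustrative assumptions, not the study's settings:

```python
import numpy as np

def lasso_cd(X, y, lam=0.1, n_iter=200):
    """Coordinate descent for min_w 0.5/n * ||y - X w||^2 + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        for j in range(d):
            # Residual with feature j's current contribution removed.
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            # Soft-thresholding sets weak coordinates exactly to zero.
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return w

# Synthetic stand-in for the 2048-dim ResNet50 features (shapes assumed):
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
true_w = np.zeros(20)
true_w[[2, 7]] = [3.0, -2.0]          # only two informative features
y = X @ true_w + 0.01 * rng.standard_normal(100)

w = lasso_cd(X, y)
selected = np.flatnonzero(np.abs(w) > 0.05)
print("selected feature indices:", selected)
```

The L1 penalty drives uninformative coefficients exactly to zero, which is why the abstract's 2048 features can collapse to a 54-feature subset.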

5.
Laryngoscope ; 134(1): 127-135, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37254946

ABSTRACT

OBJECTIVE: To construct and validate a deep convolutional neural network (DCNN)-based artificial intelligence (AI) system for the detection of nasopharyngeal carcinoma (NPC) using archived nasopharyngoscopic images. METHODS: We retrospectively collected 14,107 nasopharyngoscopic images (7108 NPCs and 6999 noncancers) to construct a DCNN model and prepared a validation dataset containing 3501 images (1744 NPCs and 1757 noncancers) from a single center between January 2009 and December 2020. The DCNN model was established using the You Only Look Once (YOLOv5) architecture. Four otolaryngologists were asked to review the images of the validation set to benchmark the DCNN model's performance. RESULTS: The DCNN model analyzed the 3501 images in 69.35 s. On the validation dataset, the precision, recall, accuracy, and F1 score of the DCNN model in the detection of NPCs were 0.845 ± 0.038, 0.942 ± 0.021, 0.920 ± 0.024, and 0.890 ± 0.045 on white light imaging (WLI), and 0.895 ± 0.045, 0.941 ± 0.018, 0.975 ± 0.013, and 0.918 ± 0.036 on narrow band imaging (NBI), respectively. The diagnostic performance of the DCNN model on WLI and NBI images was significantly better than that of two junior otolaryngologists (p < 0.05). CONCLUSION: The DCNN model showed better diagnostic outcomes for NPCs than junior otolaryngologists and could therefore help them improve their diagnostic skills and reduce missed diagnoses. LEVEL OF EVIDENCE: 3 Laryngoscope, 134:127-135, 2024.


Subject(s)
Artificial Intelligence; Nasopharyngeal Neoplasms; Humans; Endoscopy; Nasopharyngeal Carcinoma/diagnosis; Nasopharyngeal Neoplasms/diagnostic imaging; Nasopharyngeal Neoplasms/pathology; Neural Networks, Computer; Retrospective Studies
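The precision, recall, accuracy, and F1 metrics reported in this record follow directly from confusion-matrix counts; a minimal sketch with made-up counts:

```python
# Illustrative computation of the detection metrics reported in the
# abstract (precision, recall, accuracy, F1) from confusion-matrix
# counts. The counts below are made-up numbers, not the study's data.

def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # a.k.a. sensitivity
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f1": f1}

m = detection_metrics(tp=90, fp=10, tn=85, fn=15)
print(round(m["precision"], 3), round(m["recall"], 3))  # 0.9 0.857
```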
6.
J Laryngol Otol ; 138(3): 331-337, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37994484

ABSTRACT

OBJECTIVE: To propose a scoring system based on laryngoscopic characteristics for the differential diagnosis of benign and malignant vocal fold leukoplakia. METHODS: Laryngoscopic images from 200 vocal fold leukoplakia cases were retrospectively analysed. The laryngoscopic signs of benign and malignant vocal fold leukoplakia were compared, and statistically significant features were assigned point values, which were summed to establish the leukoplakia finding score. RESULTS: Five indicators associated with malignant vocal fold leukoplakia were included in the leukoplakia finding score, giving a possible range of 0-10 points. A score of 6 points or more indicated a diagnosis of malignant vocal fold leukoplakia. The sensitivity, specificity and accuracy of the leukoplakia finding score were 93.8 per cent, 83.6 per cent and 86.0 per cent, respectively. Consistency in the leukoplakia finding score between different laryngologists was strong (kappa = 0.809). CONCLUSION: This scoring system based on laryngoscopic characteristics has high diagnostic value for distinguishing benign from malignant vocal fold leukoplakia.


Subject(s)
Laryngeal Diseases; Laryngoscopy; Humans; Vocal Cords/pathology; Retrospective Studies; Laryngeal Diseases/diagnosis; Laryngeal Diseases/pathology; Leukoplakia/diagnosis; Leukoplakia/pathology
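The additive logic of the leukoplakia finding score (assign points to significant signs, sum them, compare against the cutoff of 6 of 10) can be sketched as follows. The sign names and per-sign weights here are hypothetical, since the abstract does not list the five indicators:

```python
# Illustrative sketch of an additive endoscopic scoring system like the
# leukoplakia finding score: significant signs get point values, points
# are summed, and a cutoff (>= 6 of 10 in the abstract) flags malignancy.
# The sign names and weights below are HYPOTHETICAL placeholders.

HYPOTHETICAL_WEIGHTS = {          # sign -> assigned points (made up)
    "rough_surface": 2,
    "uneven_thickness": 2,
    "ulceration": 2,
    "abnormal_vessels": 2,
    "large_extent": 2,
}

def leukoplakia_finding_score(signs: set[str]) -> int:
    return sum(w for sign, w in HYPOTHETICAL_WEIGHTS.items() if sign in signs)

def is_malignant(signs: set[str], cutoff: int = 6) -> bool:
    return leukoplakia_finding_score(signs) >= cutoff

print(is_malignant({"rough_surface", "ulceration", "abnormal_vessels"}))  # score 6 -> True
```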
7.
Am J Otolaryngol ; 44(2): 103695, 2023.
Article in English | MEDLINE | ID: mdl-36473265

ABSTRACT

OBJECTIVES: Video laryngoscopy is an important diagnostic tool for head and neck cancers. Artificial intelligence (AI) systems have previously been shown to monitor blind spots during esophagogastroduodenoscopy. This study aimed to test the performance of an AI-driven intelligent laryngoscopy monitoring assistant (ILMA) for landmark anatomical site identification on laryngoscopic images and videos, based on a convolutional neural network (CNN). MATERIALS AND METHODS: Laryngoscopic images taken from January to December 2018 were retrospectively collected, and ILMA was developed using the Inception-ResNet-v2 + Squeeze-and-Excitation Networks (SENet) CNN model. A total of 16,000 laryngoscopic images were used for training, assigned to 20 landmark anatomical sites covering six major head and neck regions. In addition, the performance of ILMA in identifying anatomical sites was validated using 4000 laryngoscopic images and 25 videos provided by five other tertiary hospitals. RESULTS: ILMA identified the 20 anatomical sites on the laryngoscopic images with an overall accuracy of 97.60%, and the average sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 100%, 99.87%, 97.65%, and 99.87%, respectively. In addition, multicenter clinical verification showed that the accuracy of ILMA in identifying the 20 targeted anatomical sites in 25 laryngoscopic videos from five hospitals was ≥95%. CONCLUSION: The proposed CNN-based ILMA model can rapidly and accurately identify anatomical sites on laryngoscopic images. The model can reflect laryngoscopic coverage of head and neck anatomical regions, showing potential for improving the quality of laryngoscopy.


Subject(s)
Artificial Intelligence; Head and Neck Neoplasms; Humans; Laryngoscopy/methods; Retrospective Studies; Neural Networks, Computer
8.
Laryngoscope ; 132(5): 999-1007, 2022 05.
Article in English | MEDLINE | ID: mdl-34622964

ABSTRACT

OBJECTIVES/HYPOTHESIS: To develop a deep-learning-based automatic diagnosis system for distinguishing nasopharyngeal carcinoma (NPC) from noncancer (inflammation and hyperplasia), using both white light imaging (WLI) and narrow-band imaging (NBI) nasopharyngoscopy images. STUDY DESIGN: Retrospective study. METHODS: A total of 4,783 nasopharyngoscopy images (2,898 WLI and 1,885 NBI) of 671 patients were collected, and a novel deep convolutional neural network (DCNN) framework, named the Siamese deep convolutional neural network (S-DCNN), was developed to utilize WLI and NBI images simultaneously and thereby improve classification performance. To verify the effectiveness of combining the two modalities, we compared the proposed S-DCNN with two baseline models: DCNN-1 (WLI images only) and DCNN-2 (NBI images only). RESULTS: In threefold cross-validation, the overall accuracies and areas under the curve (AUCs) of the three DCNNs were 94.9% (95% confidence interval [CI] 93.3%-96.5%) and 0.986 (95% CI 0.982-0.992), 87.0% (95% CI 84.2%-89.7%) and 0.930 (95% CI 0.906-0.961), and 92.8% (95% CI 90.4%-95.3%) and 0.971 (95% CI 0.953-0.992), respectively. The accuracy of the S-DCNN was significantly higher than that of DCNN-1 (p < .001) and DCNN-2 (p = .008). CONCLUSION: Using deep learning to automatically diagnose NPC on nasopharyngoscopy can provide a valuable reference for NPC screening. Superior performance can be obtained by simultaneously utilizing the multimodal features of a patient's NBI and WLI images. LEVEL OF EVIDENCE: 3 Laryngoscope, 132:999-1007, 2022.


Subject(s)
Deep Learning; Nasopharyngeal Neoplasms; Endoscopy, Gastrointestinal; Humans; Narrow Band Imaging/methods; Nasopharyngeal Carcinoma/diagnostic imaging; Nasopharyngeal Neoplasms/diagnostic imaging; Retrospective Studies
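The core idea of the S-DCNN, applying one shared-weight encoder to a patient's WLI and NBI images and fusing the resulting embeddings before classification, can be caricatured in a few lines; the toy "encoder" and fusion-by-concatenation are illustrative assumptions, not the paper's architecture:

```python
# Illustrative sketch of the dual-modality idea behind a Siamese network:
# the SAME (shared-weight) encoder embeds both the WLI and the NBI image,
# and the two embeddings are fused into one feature vector for the
# classifier. The toy encoder below just computes mean and spread.

def shared_encoder(pixels: list[float]) -> list[float]:
    """Toy stand-in for a CNN branch with weights shared across modalities."""
    mean = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)
    return [mean, spread]

def fuse(wli_pixels: list[float], nbi_pixels: list[float]) -> list[float]:
    # Same encoder (same "weights") applied to both modalities, then concat.
    return shared_encoder(wli_pixels) + shared_encoder(nbi_pixels)

features = fuse([1.0, 5.0, 9.0], [2.0, 4.0, 6.0])
print(features)  # [5.0, 8.0, 4.0, 4.0]
```

Weight sharing is what distinguishes this Siamese design from simply training two independent per-modality networks, as the DCNN-1/DCNN-2 baselines do.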