1.
Laryngoscope; 2024 Oct 03.
Article in English | MEDLINE | ID: mdl-39363661

ABSTRACT

OBJECTIVES: Here we describe the development and pilot testing of the first artificial intelligence (AI) software "copilot" to help train novices to competently perform flexible fiberoptic laryngoscopy (FFL) on a manikin and improve their uptake of FFL skills. METHODS: Supervised machine learning was used to develop an image classifier model, dubbed the "anatomical region classifier," responsible for predicting the location of the camera in the upper aerodigestive tract, and an object detection model, dubbed the "anatomical structure detector," responsible for locating and identifying key anatomical structures in images. Training data were collected by performing FFL on an AirSim Combo Bronchi X manikin (TruCorp Ltd, United Kingdom) using an Ambu aScope 4 RhinoLaryngo Slim connected to an Ambu aView 2 Advance Displaying Unit (Ambu A/S, Ballerup). Medical students were prospectively recruited to try the FFL copilot, rate its ease of use, and self-rate their skills with and without the copilot. RESULTS: The anatomical region classifier achieved an overall accuracy of 91.9% on the validation set and 80.1% on the test set. The anatomical structure detector achieved an overall mean average precision of 0.642. Through various optimizations, we were able to run the AI copilot at approximately 28 frames per second (FPS), perceptually indistinguishable from real time and nearly matching the 30-FPS video frame rate. Sixty-four novice medical students were recruited for feedback on the copilot. Although 90.9% strongly agreed or agreed that the AI copilot was easy to use, their self-ratings of FFL skill after using the copilot were overall equivalent to their self-ratings without it. CONCLUSIONS: The AI copilot tracked successful capture of diagnosable views of key anatomical structures, effectively guiding users through FFL to ensure all anatomical structures are sufficiently captured.
This tool has the potential to assist novices in efficiently gaining competence in FFL. LEVEL OF EVIDENCE: NA Laryngoscope, 2024.
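The throughput figure above (roughly 28 FPS against a 30-FPS video feed) can be checked with a simple timing harness. This sketch is illustrative only: the study's models are not public, so `process_frame` is a hypothetical stand-in for the classifier-plus-detector forward pass.

```python
import time

def effective_fps(process_frame, frames, video_fps=30.0):
    """Run per-frame inference over a batch of frames and report the
    effective throughput versus the source video frame rate."""
    start = time.perf_counter()
    for frame in frames:
        process_frame(frame)  # stand-in for classifier + detector inference
    elapsed = time.perf_counter() - start
    fps = len(frames) / elapsed
    # Return throughput and whether the pipeline keeps up with the video.
    return fps, fps >= video_fps
```

At the reported ~28 FPS the pipeline falls just short of the 30-FPS source, a gap the authors describe as imperceptible in practice.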

2.
Laryngoscope; 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39258420

ABSTRACT

OBJECTIVE: This study aimed to assess reporting quality of machine learning (ML) algorithms in the head and neck oncology literature using the TRIPOD-AI criteria. DATA SOURCES: A comprehensive search was conducted using PubMed, Scopus, Embase, and Cochrane Database of Systematic Reviews, incorporating search terms related to "artificial intelligence," "machine learning," "deep learning," "neural network," and various head and neck neoplasms. REVIEW METHODS: Two independent reviewers analyzed each published study for adherence to the 65-point TRIPOD-AI criteria. Items were classified as "Yes," "No," or "NA" for each publication. The proportion of studies satisfying each TRIPOD-AI criterion was calculated. Additionally, the evidence level for each study was evaluated independently by two reviewers using the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence. Discrepancies were reconciled through discussion until consensus was reached. RESULTS: The study highlights the need for improvements in ML algorithm reporting in head and neck oncology. This includes more comprehensive descriptions of datasets, standardization of model performance reporting, and increased sharing of ML models, data, and code with the research community. Adoption of TRIPOD-AI is necessary for achieving standardized ML research reporting in head and neck oncology. CONCLUSION: Current reporting of ML algorithms hinders clinical application, reproducibility, and understanding of the data used for model training. To overcome these limitations and improve patient and clinician trust, ML developers should provide open access to models, code, and source data, fostering iterative progress through community critique, thus enhancing model accuracy and mitigating biases. LEVEL OF EVIDENCE: NA Laryngoscope, 2024.
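The per-item adherence proportions described above reduce to a simple tally over the "Yes"/"No"/"NA" verdicts. A minimal sketch, assuming a ratings structure mapping studies to item verdicts (not the authors' actual tooling):

```python
from collections import defaultdict

def criterion_adherence(ratings):
    """ratings: {study_id: {item_id: 'Yes' | 'No' | 'NA'}}.
    Returns, per TRIPOD-AI item, the proportion of applicable
    (non-NA) studies rated 'Yes'."""
    yes = defaultdict(int)
    applicable = defaultdict(int)
    for study in ratings.values():
        for item, verdict in study.items():
            if verdict == "NA":
                continue  # item not applicable to this study
            applicable[item] += 1
            if verdict == "Yes":
                yes[item] += 1
    return {item: yes[item] / applicable[item] for item in applicable}
```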

3.
Laryngoscope; 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39177166

ABSTRACT

OBJECTIVE(S): The objective of this study was to characterize the level of agreement between three manometers: (1) the Iowa Oral Performance Instrument (IOPI), the reference standard for tongue, lip, and cheek strength assessments; (2) the MicroRPM Respiratory Pressure Meter (MicroRPM), the reference standard for respiratory strength assessments; and (3) the Digital Pressure Manometer (DPM), an alternative, low-cost pressure testing manometer. METHODS: Manual pressures were simultaneously applied to the IOPI and DPM, and to the MicroRPM and DPM, within a controlled laboratory setting. Agreement in pressure readings was analyzed using descriptive statistics, Lin's concordance correlation, and Bland-Altman plots. Agreement was interpreted as "poor" if ρc < 0.90, "moderate" if 0.90 ≤ ρc < 0.95, "substantial" if 0.95 ≤ ρc < 0.99, and "excellent" if ρc ≥ 0.99. RESULTS: Differences in pressure readings between the DPM and the clinical reference standards were consistently present yet highly predictable. There was a median absolute difference of 2.0-3.9 kPa between the IOPI and DPM, and 4.5-9.8 cm H2O between the MicroRPM and DPM. Lin's concordance revealed "substantial" agreement between the IOPI and DPM (ρc = 0.98) and between the MicroRPM and DPM (ρc = 0.99). CONCLUSION: The DPM yielded higher pressure readings than the IOPI and MicroRPM. However, the differences were relatively small, highly predictable, and yielded substantial overall agreement. These findings suggest the DPM may be a valid, lower-cost alternative for objective assessments of tongue, lip, cheek, and respiratory muscle strength. Future research should expand on the present findings in clinical patient populations. LEVEL OF EVIDENCE: NA Laryngoscope, 2024.
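Lin's concordance correlation coefficient and the interpretation bands quoted above can be computed in a few lines of plain Python. This is an illustrative reference implementation, not the study's own analysis code:

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient:
    rho_c = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2),
    penalizing both poor correlation and systematic offset."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx2 = sum((v - mx) ** 2 for v in x) / n
    sy2 = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

def interpret_agreement(ccc):
    """Interpretation bands used in the abstract."""
    if ccc >= 0.99:
        return "excellent"
    if ccc >= 0.95:
        return "substantial"
    if ccc >= 0.90:
        return "moderate"
    return "poor"
```

Unlike Pearson's r, the `(mx - my) ** 2` term in the denominator means a device that reads consistently high, as the DPM did, is penalized unless the offset is small relative to the measurement spread.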

4.
Laryngoscope; 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39157956

ABSTRACT

OBJECTIVE: To evaluate the performance of commercial automatic speech recognition (ASR) systems on d/Deaf and hard-of-hearing (d/Dhh) speech. METHODS: A corpus containing 850 audio files of d/Dhh and normal hearing (NH) speech from the University of Memphis Speech Perception Assessment Laboratory was tested on four speech-to-text application program interfaces (APIs): Amazon Web Services, Microsoft Azure, Google Chirp, and OpenAI Whisper. We quantified the Word Error Rate (WER) of API transcriptions for 24 d/Dhh and nine NH participants and performed subgroup analysis by speech intelligibility classification (SIC), hearing loss (HL) onset, and primary communication mode. RESULTS: Mean WER averaged across APIs was 10 times higher for the d/Dhh group (52.6%) than the NH group (5.0%). APIs performed significantly worse for "low" and "medium" SIC (85.9% and 46.6% WER, respectively) as compared to "high" SIC group (9.5% WER, comparable to NH group). APIs performed significantly worse for speakers with prelingual HL relative to postlingual HL (80.5% and 37.1% WER, respectively). APIs performed significantly worse for speakers primarily communicating with sign language (70.2% WER) relative to speakers with both oral and sign language communication (51.5%) or oral communication only (19.7%). CONCLUSION: Commercial ASR systems underperform for d/Dhh individuals, especially those with "low" and "medium" SIC, prelingual onset of HL, and sign language as primary communication mode. This contrasts with Big Tech companies' promises of accessibility, indicating the need for ASR systems ethically trained on heterogeneous d/Dhh speech data. LEVEL OF EVIDENCE: 3 Laryngoscope, 2024.
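The Word Error Rate reported above is word-level edit distance (substitutions + insertions + deletions) divided by reference length. A minimal sketch, independent of the commercial APIs tested:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER via dynamic-programming Levenshtein distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)
```

Note that WER can exceed 100% when the hypothesis contains many insertions, which is why severely unintelligible speech can push group means toward the 85.9% figure reported for the "low" SIC group.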

5.
Article in English | MEDLINE | ID: mdl-39146248

ABSTRACT

PURPOSE OF REVIEW: The purpose of this review is to summarize the existing literature on artificial intelligence technology in laryngology, highlighting recent advances and current barriers to implementation. RECENT FINDINGS: The volume of publications studying applications of artificial intelligence in laryngology has increased rapidly, demonstrating strong interest in the technology. Vocal biomarkers for disease screening, deep learning analysis of videolaryngoscopy for lesion identification, and auto-segmentation of videofluoroscopy for detection of aspiration are a few of the ways in which artificial intelligence is poised to transform clinical care in laryngology. Collaborative efforts are ongoing to establish guidelines and standards for the field that ensure generalizability. SUMMARY: Artificial intelligence tools have the potential to greatly advance laryngology care by creating novel screening methods, improving how the field's data-heavy diagnostics are analyzed, and standardizing outcome measures. However, physician and patient trust in artificial intelligence must improve for the technology to be successfully implemented. Additionally, most existing studies lack the large and diverse datasets, external validation, and consistent ground-truth references necessary to produce generalizable results. Collaborative, large-scale studies will fuel technological innovation and bring artificial intelligence to the forefront of patient care in laryngology.

9.
Otolaryngol Clin North Am; 57(5): 863-870, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38839555

ABSTRACT

To realize the potential of artificial intelligence (AI) in clinical otolaryngology practice, researchers must understand its epistemic limitations, which are tightly linked to ethical dilemmas requiring careful consideration. AI tools are fundamentally opaque systems, though there are methods to increase their explainability and transparency. Reproducibility and replicability limitations can be overcome by sharing computing code, raw data, and data processing methodology. The risk of bias can be mitigated via algorithmic auditing, careful consideration of the training data, and advocacy for a diverse AI workforce to promote algorithmic pluralism, reflecting our population's diverse values and preferences.


Subjects
Artificial Intelligence, Otolaryngology, Humans, Artificial Intelligence/ethics, Otolaryngology/ethics, Knowledge, Reproducibility of Results
10.
Article in English | MEDLINE | ID: mdl-38704768

ABSTRACT

OBJECTIVE: To assess reporting practices of sociodemographic data in Upper Aerodigestive Tract (UAT) videomics research in Otolaryngology-Head and Neck Surgery (OHNS). STUDY DESIGN: Narrative review. METHODS: Four online research databases were searched for peer-reviewed articles on videomics and UAT endoscopy in OHNS published since January 1, 2017. Title and abstract screening, followed by full-text screening, was performed. Dataset audit criteria were determined by the MINIMAR reporting standards for patient demographic characteristics, in addition to gender and author affiliations. RESULTS: Of the 57 included studies, 37% reported any sociodemographic information on their dataset. Among these, all reported age, most reported sex (86%), two (10%) reported race, and one (5%) reported ethnicity and socioeconomic status. No studies reported gender. Most studies (84%) included at least one female author, and more than half (53%) had female first or senior authors, with no significant difference in the rate of sociodemographic reporting between studies with and without female authors (any female author: p = 0.2664; first/senior female author: p > 0.9999). Most US-based studies reported at least one sociodemographic variable (79%), compared with 24% of studies based in Europe and 20% in Asia (p = 0.0012). Rates of sociodemographic reporting by journal category were: clinical OHNS, 44%; clinical non-OHNS, 40%; technical, 42%; interdisciplinary, 10%. CONCLUSIONS: Sociodemographic information is widely underreported in OHNS videomics research utilizing UAT endoscopy. Routine reporting of sociodemographic information should be implemented for AI-based research to help minimize the algorithmic biases that have been previously demonstrated.
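The regional and journal-category comparisons above boil down to per-group reporting rates over the audited studies. A minimal sketch with hypothetical field names (the audit's actual data structure is not published):

```python
from collections import defaultdict

def reporting_rates(studies, group_key):
    """Proportion of studies in each group that report at least one
    sociodemographic variable. `studies` is a list of dicts containing
    `group_key` and a boolean 'reports_sociodemographics' field
    (both field names are illustrative)."""
    tally = defaultdict(lambda: [0, 0])  # group -> [reporting, total]
    for s in studies:
        counts = tally[s[group_key]]
        counts[1] += 1
        if s["reports_sociodemographics"]:
            counts[0] += 1
    return {group: r / n for group, (r, n) in tally.items()}
```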
