Results 1 - 3 of 3
1.
Head Neck ; 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39080964

ABSTRACT

OBJECTIVE: This study aims to evaluate the efficacy of the profunda artery perforator (PAP) flap in head and neck reconstruction. METHODS: A single-arm meta-analysis was performed for flap survival rate (primary outcome) and for reoperation for major complications and overall complication rates (secondary outcomes). RESULTS: The search strategy yielded 295 potentially relevant publications, of which 13 were included. A total of 305 patients (males: 80.8%, n = 232/281), with a median age of 56.1 years (n = 305/305; 95% CI 53.9-63), who underwent a total of 307 PAP flap reconstructions for head and neck defects were included. The flap survival rate was 100% (n = 306/307; 95% CI 99.6%-100%), with a reoperation rate for major complications of 3.7% (n = 15/307; 95% CI 1.85%-6.1%) and an overall complication rate of 26.5% (n = 92/307; 95% CI 15.7%-38.9%). Notable postoperative complications included wound dehiscence (n = 15/307, 4.9%), delayed healing (n = 14/307, 4.6%), and wound infection (n = 12/307, 3.9%). Partial flap necrosis and hematoma occurred in 2.6% of cases (n = 8/307), while arterial and venous thrombosis were documented in 0.7% (n = 2/307) and 1.3% (n = 4/307) of cases, respectively. CONCLUSION: The application of the PAP flap in head and neck reconstruction showed several favorable aspects, such as an exceptionally low flap failure rate, versatility in achieving variable dimensions, and a relatively low incidence of complications. The PAP flap may be considered a compelling alternative to traditionally employed soft-tissue free flaps in head and neck reconstruction.
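As a side note on how single-proportion confidence intervals of this kind can be computed: the sketch below applies a plain Wilson score interval to the raw pooled counts. This is an illustration only, not the paper's method; the abstract's intervals come from a random-effects single-arm meta-analysis, so they will not match a naive interval on the raw counts exactly.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a single proportion k/n (illustrative only)."""
    p = k / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (center - margin) / denom, (center + margin) / denom

# Raw flap survival counts from the pooled cohort: 306 of 307 flaps survived.
lo, hi = wilson_ci(306, 307)
print(f"survival 306/307 = {306/307:.1%} (naive Wilson 95% CI {lo:.1%}-{hi:.1%})")
```

The naive interval is wider than the published meta-analytic one, which is expected: the pooled random-effects estimate weights individual studies rather than treating all 307 flaps as one sample.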

2.
Eur Arch Otorhinolaryngol ; 281(8): 4255-4264, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38698163

ABSTRACT

PURPOSE: Informative image selection in laryngoscopy has potential for improving automatic data extraction alone, for selective data storage and a faster review process, or in combination with other artificial intelligence (AI) detection or diagnosis models. This paper aims to demonstrate the feasibility of AI-based automatic selection of informative laryngoscopy frames, capable of working in real time to provide visual feedback that guides the otolaryngologist during the examination. METHODS: Several deep learning models were trained and tested on an internal dataset (n = 5147 images) and then tested on an external test set (n = 646 images) composed of both white-light and narrow-band images. Four videos were used to assess the real-time performance of the best-performing model. RESULTS: ResNet-50, pre-trained with the pretext strategy, reached a precision of 95% vs. 97%, a recall of 97% vs. 89%, and an F1-score of 96% vs. 93% on the internal and external test sets, respectively (p = 0.062). The four testing videos are provided in the supplemental materials. CONCLUSION: The deep learning model demonstrated excellent performance in identifying diagnostically relevant frames within laryngoscopic videos. With its solid accuracy and real-time capabilities, the system is promising for deployment in a clinical setting, either autonomously for objective quality control or in conjunction with other algorithms within a comprehensive AI toolset aimed at enhancing tumor detection and diagnosis.
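For readers checking the reported metrics: the F1-score is the harmonic mean of precision and recall, and the abstract's values can be reproduced directly from the reported operating points. A minimal sketch (the percentages are the abstract's, everything else is illustration):

```python
def f1(precision, recall):
    """F1-score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported operating points of the pretext-pretrained ResNet-50:
internal = f1(0.95, 0.97)   # internal test set -> ~0.96
external = f1(0.97, 0.89)   # external test set -> ~0.93
print(f"F1 internal = {internal:.0%}, external = {external:.0%}")
```

Both values round to the 96% and 93% quoted in the abstract, confirming the figures are internally consistent.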


Subject(s)
Deep Learning, Laryngoscopy, Humans, Laryngoscopy/methods, Video Recording, Feasibility Studies, Laryngeal Diseases/diagnosis, Laryngeal Diseases/diagnostic imaging
3.
Laryngoscope ; 134(6): 2826-2834, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38174772

ABSTRACT

OBJECTIVE: To investigate the potential of deep learning for automatically delineating (segmenting) the superficial extent of laryngeal cancer on endoscopic images and videos. METHODS: A retrospective study was conducted extracting and annotating white light (WL) and Narrow-Band Imaging (NBI) frames to train a segmentation model (SegMENT-Plus). Two external datasets were used for validation. The model's performance was compared with that of two otolaryngology residents. In addition, the model was tested on real intraoperative laryngoscopy videos. RESULTS: A total of 3933 images of laryngeal cancer from 557 patients were used. The model achieved the following median values (interquartile range): Dice Similarity Coefficient (DSC) = 0.83 (0.70-0.90), Intersection over Union (IoU) = 0.83 (0.73-0.90), Accuracy = 0.97 (0.95-0.99), and Inference Speed = 25.6 (25.1-26.1) frames per second. The external testing cohorts comprised 156 and 200 images. SegMENT-Plus performed similarly on all three datasets for DSC (p = 0.05) and IoU (p = 0.07). No significant differences were noted when separately analyzing WL and NBI test images for DSC (p = 0.06) and IoU (p = 0.78), or when comparing the model with the two residents on DSC (p = 0.06) and IoU (Senior vs. SegMENT-Plus, p = 0.13; Junior vs. SegMENT-Plus, p = 1.00). CONCLUSION: SegMENT-Plus can accurately delineate laryngeal cancer boundaries in endoscopic images, with performance equal to that of two otolaryngology residents. The results on the two external datasets demonstrate excellent generalization capabilities. The computation speed of the model allowed its application to videolaryngoscopies simulating real-time use. Clinical trials are needed to evaluate the role of this technology in surgical practice and in improving resection margins. LEVEL OF EVIDENCE: III Laryngoscope, 134:2826-2834, 2024.
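For readers unfamiliar with the two segmentation metrics above: DSC and IoU both measure overlap between a predicted mask and the ground-truth mask. A minimal illustrative sketch (the toy masks are invented for the example, not taken from the study):

```python
def dice_iou(pred, truth):
    """Dice similarity coefficient and intersection-over-union for two
    binary masks represented as sets of (row, col) pixel coordinates."""
    inter = len(pred & truth)
    dsc = 2 * inter / (len(pred) + len(truth))
    iou = inter / len(pred | truth)
    return dsc, iou

# Toy example: predicted and ground-truth masks overlap on two pixels.
pred  = {(0, 0), (0, 1), (1, 0)}
truth = {(0, 1), (1, 0), (1, 1)}
dsc, iou = dice_iou(pred, truth)
print(f"DSC = {dsc:.3f}, IoU = {iou:.3f}")
```

Note that for any single image the two metrics are monotonically related by IoU = DSC / (2 - DSC), so they rank predictions identically; they differ only in how harshly they penalize partial overlap.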


Subject(s)
Deep Learning, Laryngeal Neoplasms, Laryngoscopy, Narrow Band Imaging, Humans, Laryngoscopy/methods, Narrow Band Imaging/methods, Laryngeal Neoplasms/diagnostic imaging, Laryngeal Neoplasms/surgery, Laryngeal Neoplasms/pathology, Retrospective Studies, Video Recording, Male, Female, Middle Aged, Light, Aged