Results 1 - 4 of 4
1.
J Imaging Inform Med; 2024 May 29.
Article in English | MEDLINE | ID: mdl-38809338

ABSTRACT

The diagnosis and treatment of vocal fold disorders rely heavily on laryngoscopy. A comprehensive vocal fold diagnosis requires accurate identification of crucial anatomical structures and potential lesions during laryngoscopic observation. However, existing approaches have yet to explore joint optimization of the decision-making process, i.e., performing object detection and image classification simultaneously. In this study, we provide a new dataset, VoFoCD, with 1724 laryngology images designed specifically for object detection and image classification in laryngoscopy. Images in the VoFoCD dataset are categorized into four classes and comprise six glottic object types. Moreover, we propose a novel Multitask Efficient trAnsformer network for Laryngoscopy (MEAL) to classify vocal fold images and detect glottic landmarks and lesions. To facilitate interpretability for clinicians, MEAL provides attention maps that visualize the learned regions most important to its predictions, yielding explainable artificial intelligence results that support clinical decision-making. We also analyze the model's effectiveness in simulated clinical scenarios in which camera shake occurs during laryngoscopy. The proposed model demonstrates outstanding performance on our VoFoCD dataset: the image classification accuracy is 0.951 and the mean average precision at an intersection-over-union threshold of 0.5 (mAP50) for object detection is 0.874. Our MEAL method integrates global knowledge, encompassing general laryngoscopy image classification, into local features corresponding to distinct anatomical regions of the vocal fold, particularly abnormal regions, including benign and malignant lesions. Our contribution can effectively aid laryngologists in visually identifying benign or malignant vocal fold lesions and classifying images during laryngeal endoscopy.
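
To make the joint-optimization idea concrete, the following is a minimal, hypothetical PyTorch sketch of a shared backbone feeding both an image-classification head and a detection-style head. It is not the authors' MEAL implementation; the module names, layer sizes, number of queries, and loss weighting are illustrative assumptions only.

# Hypothetical multitask sketch: one backbone, two task heads.
# NOT the MEAL architecture; only an illustration of jointly optimizing
# image classification and object detection on laryngoscopy images.
import torch
import torch.nn as nn

class MultitaskLaryngoscopyNet(nn.Module):
    def __init__(self, num_classes=4, num_object_types=6, num_queries=20):
        super().__init__()
        # Shared feature extractor (a tiny CNN stand-in; MEAL uses an efficient transformer).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        feat_dim = 64 * 8 * 8
        # Image-level classification head (four image classes in VoFoCD).
        self.cls_head = nn.Linear(feat_dim, num_classes)
        # Detection-style head: per-query box (cx, cy, w, h) plus object-type logits
        # (six glottic object types plus a "no object" slot).
        self.box_head = nn.Linear(feat_dim, num_queries * 4)
        self.obj_head = nn.Linear(feat_dim, num_queries * (num_object_types + 1))
        self.num_queries = num_queries
        self.num_object_types = num_object_types

    def forward(self, x):
        f = self.backbone(x).flatten(1)
        cls_logits = self.cls_head(f)
        boxes = self.box_head(f).view(-1, self.num_queries, 4).sigmoid()
        obj_logits = self.obj_head(f).view(-1, self.num_queries, self.num_object_types + 1)
        return cls_logits, boxes, obj_logits

# Toy forward pass; a real training loop would combine classification and
# detection losses with task weights.
model = MultitaskLaryngoscopyNet()
images = torch.randn(2, 3, 256, 256)
cls_logits, boxes, obj_logits = model(images)
cls_loss = nn.functional.cross_entropy(cls_logits, torch.tensor([0, 2]))
print(cls_logits.shape, boxes.shape, obj_logits.shape, cls_loss.item())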

2.
Comput Methods Programs Biomed; 241: 107748, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37598474

ABSTRACT

BACKGROUND AND OBJECTIVE: Pulmonary nodule detection and segmentation are currently two primary tasks in analyzing chest computed tomography (chest CT) to detect signs of lung cancer, thereby enabling early treatment to reduce mortality. Although many methods have been proposed to reduce false positives and obtain effective detection results, distinguishing pulmonary nodules from the background remains challenging because their biological characteristics are similar and their sizes vary. The purpose of our work is to propose a method for automatic nodule detection and segmentation in chest CT that enhances the feature information of pulmonary nodules. METHODS: We propose a new UNet-based backbone with a multi-branch attention auxiliary learning mechanism, which contains three novel modules, namely a Projection module, a Fast Cascading Context module, and a Boundary Enhancement module, to further enhance the nodule feature representation. Based on this backbone, we build MANet, a lung nodule localization network that simultaneously detects and segments precise nodule positions. Furthermore, MANet contains a Proposal Refinement step that refines initially generated proposals to effectively reduce false positives and thereby improve segmentation quality. RESULTS: Comprehensive experiments on the combination of the two benchmarks LUNA16 and LIDC-IDRI show that our proposed model outperforms state-of-the-art methods in nodule detection and segmentation in terms of FROC, IoU, and DSC metrics. Our method reports an average FROC score of 88.11% in lung nodule detection. For lung nodule segmentation, the results reach an average IoU score of 71.29% and a DSC score of 82.74%. The ablation study also shows the effectiveness of the new modules, which can be integrated into other UNet-based models. CONCLUSIONS: The experiments demonstrate that our method, with its multi-branch attention auxiliary learning, is a promising approach for detecting and segmenting pulmonary nodule instances compared to the original UNet design.
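
For reference on the reported IoU and DSC numbers, below is a minimal sketch, independent of the authors' code, of how these overlap metrics are computed for binary segmentation masks; the array names and toy masks are illustrative only.

# Illustrative IoU and Dice (DSC) computation for binary masks;
# not taken from the paper's codebase.
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Intersection over Union between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / (union + eps))

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float(2 * inter / (pred.sum() + gt.sum() + eps))

# Toy example: two overlapping square "nodule" masks.
pred = np.zeros((64, 64), dtype=bool); pred[10:30, 10:30] = True
gt = np.zeros((64, 64), dtype=bool); gt[15:35, 15:35] = True
print(f"IoU={iou(pred, gt):.3f}  DSC={dice(pred, gt):.3f}")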


Subject(s)
Learning, Lung Neoplasms, Humans, Benchmarking, Lung, Lung Neoplasms/diagnostic imaging
3.
Am J Otolaryngol; 44(3): 103800, 2023.
Article in English | MEDLINE | ID: mdl-36905912

ABSTRACT

PURPOSE: To collect a dataset with an adequate number of laryngoscopy images and to identify the appearance of vocal folds and their lesions in flexible laryngoscopy images using objective deep learning models. METHODS: We trained several novel deep learning models to classify 4549 flexible laryngoscopy images as no vocal fold, normal vocal folds, or abnormal vocal folds, enabling the models to recognize vocal folds and their lesions within these images. Finally, we compared the results of the state-of-the-art deep learning models with one another, and compared the computer-aided classification system with ENT doctors. RESULTS: This study evaluated the performance of the deep learning models on laryngoscopy images collected from 876 patients. The Xception model performed more accurately and consistently than almost all of the other models. Its accuracy for no vocal fold, normal vocal folds, and vocal fold abnormalities was 98.90%, 97.36%, and 96.26%, respectively. Compared to our ENT doctors, the Xception model produced better results than a junior doctor and approached the performance of an expert. CONCLUSION: Our results show that current deep learning models can classify vocal fold images well and can effectively assist physicians in identifying vocal folds and classifying them as normal or abnormal.
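
As context for the per-class figures above, here is a small illustrative sketch, not the study's evaluation code, of deriving per-class rates for the three labels from predicted and true labels. The toy arrays are placeholders, and the paper's exact definition of per-class accuracy may differ (it is read here as per-class recall and one-vs-rest accuracy).

# Illustrative evaluation sketch: confusion matrix and per-class rates
# for the three labels used in the study. Toy data only.
import numpy as np

LABELS = ["no vocal fold", "normal vocal folds", "abnormal vocal folds"]

def confusion_matrix(y_true, y_pred, n_classes=3):
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical predictions; a real evaluation would use the study's test split.
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])
y_pred = np.array([0, 0, 1, 2, 1, 2, 2, 1, 2, 0])
cm = confusion_matrix(y_true, y_pred)

for i, name in enumerate(LABELS):
    recall = cm[i, i] / cm[i].sum()  # sensitivity for class i
    ovr_acc = (cm.sum() - cm[i].sum() - cm[:, i].sum() + 2 * cm[i, i]) / cm.sum()
    print(f"{name}: recall={recall:.2f}, one-vs-rest accuracy={ovr_acc:.2f}")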


Subject(s)
Deep Learning, Laryngoscopy, Humans, Laryngoscopy/methods, Vocal Cords/diagnostic imaging, Vocal Cords/pathology
4.
IEEE Trans Image Process; 31: 287-300, 2022.
Article in English | MEDLINE | ID: mdl-34855592

ABSTRACT

This paper pushes the envelope on decomposing camouflaged regions in an image into meaningful components, namely, camouflaged instances. To promote the new task of camouflaged instance segmentation of in-the-wild images, we introduce a dataset, dubbed CAMO++, that extends our preliminary CAMO dataset (camouflaged object segmentation) in terms of quantity and diversity. The new dataset substantially increases the number of images with hierarchical pixel-wise ground truths. We also provide a benchmark suite for the task of camouflaged instance segmentation. In particular, we present an extensive evaluation of state-of-the-art instance segmentation methods on our newly constructed CAMO++ dataset in various scenarios. We also present a camouflage fusion learning (CFL) framework for camouflaged instance segmentation to further improve the performance of state-of-the-art methods. The dataset, model, evaluation suite, and benchmark will be made publicly available on our project page.
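
Since the paper provides a benchmark suite for camouflaged instance segmentation, the sketch below shows a generic COCO-style mask evaluation with pycocotools as one plausible way such results are scored. The annotation and result file names are hypothetical placeholders, and this is not the authors' released evaluation suite; the CAMO++ data format may differ.

# Generic COCO-style instance segmentation evaluation with pycocotools.
# File names are placeholders; the CAMO++ suite's actual format may differ.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

ann_file = "camopp_val_annotations.json"   # hypothetical ground-truth masks
res_file = "model_mask_results.json"       # hypothetical predicted masks/scores

coco_gt = COCO(ann_file)                   # load ground-truth instances
coco_dt = coco_gt.loadRes(res_file)        # load predictions in COCO result format

evaluator = COCOeval(coco_gt, coco_dt, iouType="segm")
evaluator.evaluate()     # match predictions to ground truth at multiple IoU thresholds
evaluator.accumulate()   # aggregate precision/recall curves
evaluator.summarize()    # prints AP/AR, including AP at IoU=0.50:0.95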
