Results 1 - 3 of 3
1.
Front Oncol; 11: 626602, 2021.
Article in English | MEDLINE | ID: mdl-33842330

ABSTRACT

INTRODUCTION: Fully convolutional neural networks (FCNNs) applied to video analysis are of particular interest in head and neck oncology, given that endoscopic examination is a crucial step in the diagnosis, staging, and follow-up of patients affected by upper aero-digestive tract cancers. The aim of this study was to test FCNN-based methods for semantic segmentation of squamous cell carcinoma (SCC) of the oral cavity (OC) and oropharynx (OP). MATERIALS AND METHODS: Two datasets were retrieved from the institutional registry of a tertiary academic hospital, analyzing 34 and 45 NBI endoscopic videos of OC and OP lesions, respectively. The OC dataset comprised 110 frames, and the OP dataset 116 frames. Three FCNN architectures (U-Net, U-Net 3, and ResNet) were investigated for segmenting the neoplastic images. The performance of each tested network was evaluated and compared against the gold standard: manual annotation by expert clinicians. RESULTS: For FCNN-based segmentation of the OC dataset, the best results in terms of Dice Similarity Coefficient (Dsc) were achieved by ResNet with 5(×2) blocks and 16 filters, with a median value of 0.6559. For the OP dataset, the best Dsc was achieved by ResNet with 4(×2) blocks and 16 filters, with a median value of 0.7603. All tested FCNNs showed very high variance, leading to very low minimum values for all evaluated metrics. CONCLUSIONS: FCNNs show promising potential for the analysis and segmentation of OC and OP video-endoscopic images. All tested FCNN architectures achieved satisfactory diagnostic accuracy. Inference times were particularly short, ranging between 14 and 115 ms, demonstrating the feasibility of real-time application.
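The Dice Similarity Coefficient (Dsc) used above measures the overlap between a predicted segmentation mask and the expert manual annotation. Below is a minimal NumPy sketch of how a per-frame Dsc is typically computed; the function name and the epsilon guard against empty masks are illustrative choices, not taken from the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between two binary masks of equal shape,
    e.g. one frame's predicted and manually annotated lesion masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # Dsc = 2|A ∩ B| / (|A| + |B|); eps avoids division by zero on empty masks.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Quick check: identical masks give Dsc = 1.0, disjoint masks give Dsc ≈ 0.0.
mask = np.zeros((4, 4), dtype=int)
mask[1:3, 1:3] = 1
print(dice_coefficient(mask, mask))  # 1.0
```

Per-frame values are then aggregated over a test set, which is how a median Dsc such as 0.6559 or 0.7603 arises.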

2.
Med Biol Eng Comput; 58(6): 1225-1238, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32212052

ABSTRACT

Narrow-band imaging (NBI) laryngoscopy is an optical-biopsy technique used for screening and diagnosing cancer of the laryngeal tract. It reduces the risks of biopsy, but at the cost of some drawbacks, such as the large amount of data that must be reviewed to make a diagnosis. The purpose of this paper is to develop a deep-learning-based strategy for the automatic selection of informative laryngoscopic-video frames, reducing the amount of data to process for diagnosis. The strategy relies on transfer learning: learned features are extracted using six different convolutional neural networks (CNNs) pre-trained on natural images. To test the proposed strategy, the learned features were extracted from the NBI-InfFrames dataset. Support vector machines (SVMs) and a CNN-based approach were then used to classify frames as informative (I) or as one of three uninformative classes: blurred (B), containing saliva or specular reflections (S), or underexposed (U). The best-performing learned-feature set was obtained with VGG16, yielding a recall for class I of 0.97 when classifying frames with SVMs and 0.98 with CNN-based classification. This work presents a valuable novel approach to the selection of informative frames in laryngoscopic videos and demonstrates the potential of transfer learning in medical image analysis.

Graphical abstract: flowchart of the proposed approach to automatic informative-frame selection in laryngoscopic videos, which relies on transfer learning for learned-feature extraction with CNNs pre-trained on natural images; frame classification is performed with two classifiers, support vector machines and fine-tuned CNNs.
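As an illustration of the transfer-learning step described above, the sketch below extracts learned features from a VGG16 pre-trained on ImageNet and feeds them to an SVM. It assumes PyTorch/torchvision and scikit-learn; the paper's exact preprocessing, feature layer, and SVM hyperparameters are not specified here, so those details are placeholder assumptions.

```python
import torch
from torchvision.models import vgg16, VGG16_Weights
from sklearn.svm import SVC

# Load VGG16 pre-trained on natural images and drop the final
# classification layer so the network outputs 4096-d learned features.
weights = VGG16_Weights.IMAGENET1K_V1
model = vgg16(weights=weights)
model.classifier = torch.nn.Sequential(*list(model.classifier.children())[:-1])
model.eval()
preprocess = weights.transforms()  # ImageNet resize/crop/normalization

@torch.no_grad()
def extract_features(frames):
    """frames: list of PIL images (laryngoscopic video frames)."""
    batch = torch.stack([preprocess(f) for f in frames])
    return model(batch).numpy()

# Hypothetical usage with frame labels in {I, B, S, U}:
# clf = SVC(kernel="rbf").fit(extract_features(train_frames), train_labels)
# predictions = clf.predict(extract_features(test_frames))
```

Truncating the classifier head keeps the pre-trained weights fixed; the paper's second approach, fine-tuned CNNs, would instead retrain some layers on the frame labels.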


Subject(s)
Capsule Endoscopy/methods; Image Processing, Computer-Assisted/methods; Laryngoscopy/methods; Algorithms; Area Under Curve; Databases, Factual; Humans; Laryngeal Neoplasms/diagnostic imaging; Machine Learning; Neural Networks, Computer; Support Vector Machine; Workflow
3.
Int J Comput Assist Radiol Surg; 13(9): 1357-1367, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29796834

ABSTRACT

PURPOSE: Fast and accurate graft hepatic steatosis (HS) assessment is of primary importance for lowering liver dysfunction risks after transplantation. Histopathological analysis of biopsied liver is the gold standard for assessing HS, despite being invasive and time consuming. Because of the short time available between liver procurement and transplantation, surgeons assess HS through clinical evaluation (medical history, blood tests) and visual analysis of liver texture. Although visual analysis is recognized as challenging in the clinical literature, little effort has been invested in developing computer-assisted solutions for HS assessment. The objective of this paper is to investigate the automatic analysis of liver texture with machine learning algorithms to automate the HS assessment process and support the surgeon's decision-making. METHODS: Forty RGB images of forty different donors were analyzed. The images were captured with an RGB smartphone camera in the operating room (OR). Twenty images refer to livers that were accepted and twenty to livers that were discarded. Fifteen randomly selected liver patches were extracted from each image. Patch size was [Formula: see text]. This way, a balanced dataset of 600 patches was obtained. Intensity-based features (INT), histogram of local binary pattern ([Formula: see text]), and gray-level co-occurrence matrix ([Formula: see text]) features were investigated. Blood-sample features (Blo) were included in the analysis, too. Supervised and semisupervised learning approaches were investigated for feature classification. Leave-one-patient-out cross-validation was performed to estimate classification performance. RESULTS: With the best-performing feature set ([Formula: see text]) and semisupervised learning, the achieved classification sensitivity, specificity, and accuracy were 95%, 81%, and 88%, respectively. CONCLUSIONS: This research represents the first attempt to use machine learning and automatic texture analysis of RGB images from ubiquitous smartphone cameras for graft HS assessment. The results suggest that this is a promising strategy for developing a fully automatic solution to assist surgeons in HS assessment inside the OR.
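Leave-one-patient-out cross-validation means that all 15 patches from a given donor are held out together, so a classifier is never tested on patches from a patient seen during training. The sketch below shows this fold structure with scikit-learn; random stand-in features replace the paper's INT/LBP/GLCM descriptors, and a plain SVM stands in for the semisupervised classifier, so the numbers it prints are not comparable to the reported results.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_donors, patches_per_donor, n_features = 40, 15, 32
X = rng.normal(size=(n_donors * patches_per_donor, n_features))      # stand-in texture features
y = np.repeat(rng.integers(0, 2, size=n_donors), patches_per_donor)  # accepted/discarded per donor
groups = np.repeat(np.arange(n_donors), patches_per_donor)           # patient ID for every patch

# LeaveOneGroupOut builds 40 folds: in each fold, every patch of one
# donor forms the test set, so no patient leaks across the train/test split.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"Mean accuracy over {len(scores)} folds: {scores.mean():.2f}")
```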


Subject(s)
Algorithms; Fatty Liver/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Liver Transplantation/adverse effects; Liver/diagnostic imaging; Machine Learning; Pattern Recognition, Automated/methods; Color; Fatty Liver/etiology; Humans; Liver/surgery