1.
Histopathology ; 85(1): 155-170, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38606989

ABSTRACT

The histopathological classification of melanocytic tumours with spitzoid features remains a challenging task. We confront the complexities involved in the histological classification of these tumours by proposing machine learning (ML) algorithms that objectively rank the most relevant features in order of importance. The data set comprises 122 tumours (39 benign, 44 atypical and 39 malignant) from four different countries. BRAF and NRAS mutation status was evaluated in 51 of them. An analysis-of-variance score was used to rank 22 clinicopathological variables. The Gaussian naive Bayes algorithm distinguished Spitz naevus from malignant spitzoid tumours with an accuracy of 0.95 and a kappa score of 0.87, using the 12 most important variables. For benign versus non-benign Spitz tumours, the test reached a kappa score of 0.88 using the 13 highest-scored features. Furthermore, for the atypical Spitz tumour (AST) versus Spitz melanoma comparison, the logistic regression algorithm achieved a kappa value of 0.66 and an accuracy of 0.85. When the three categories were compared, most ASTs were classified as melanoma because of the similarities in histological features between the two groups. Our results show promise for supporting the histological classification of these tumours in clinical practice, and provide valuable insight into the use of ML to improve the accuracy and objectivity of this process while minimising interobserver variability. The proposed algorithms represent a potential solution to the lack of a clear threshold for Spitz/spitzoid tumour classification, and their high accuracy supports their usefulness as a tool to improve diagnostic decision-making.
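The ranking-plus-classification pipeline described in this abstract can be sketched in a few lines of plain Python. This is an illustrative toy implementation, not the study's code; the feature values and labels below are invented:

```python
import math

def anova_f(feature_by_class):
    """One-way ANOVA F-statistic for one feature across groups (lists of values)."""
    groups = feature_by_class
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class mean/variance for each feature."""
    def fit(self, X, y):
        self.stats, self.priors = {}, {}
        for c in set(y):
            rows = [x for x, lbl in zip(X, y) if lbl == c]
            self.stats[c] = [
                (sum(col) / len(col),
                 sum((v - sum(col) / len(col)) ** 2 for v in col) / len(col) + 1e-9)
                for col in zip(*rows)]
            self.priors[c] = y.count(c) / len(y)
        return self

    def predict(self, x):
        def log_post(c):
            lp = math.log(self.priors[c])
            for v, (m, var) in zip(x, self.stats[c]):
                lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
            return lp
        return max(self.priors, key=log_post)
```

In the study, the ANOVA score would be computed per feature to rank the 22 clinicopathological variables before feeding the top-ranked ones to the classifier.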


Subject(s)
Machine Learning; Melanoma; Nevus, Epithelioid and Spindle Cell; Skin Neoplasms; Humans; Nevus, Epithelioid and Spindle Cell/pathology; Nevus, Epithelioid and Spindle Cell/diagnosis; Nevus, Epithelioid and Spindle Cell/genetics; Skin Neoplasms/pathology; Skin Neoplasms/diagnosis; Skin Neoplasms/genetics; Male; Female; Melanoma/pathology; Melanoma/diagnosis; Melanoma/genetics; Adult; Adolescent; Young Adult; Child; Middle Aged; Child, Preschool; Proto-Oncogene Proteins B-raf/genetics; Membrane Proteins/genetics; GTP Phosphohydrolases/genetics; Infant; Mutation; Aged
2.
Sensors (Basel) ; 21(3)2021 Jan 30.
Article in English | MEDLINE | ID: mdl-33573170

ABSTRACT

Velocity-based training is a contemporary method used by sports coaches to prescribe the optimal loading based on the velocity at which a load is lifted. The most widely used and accurate instruments for monitoring velocity are linear position transducers. Alternatively, smartphone apps compute mean velocity after each execution through manual on-screen digitizing, which introduces human error. In this paper, a video-based instrument delivering unattended, real-time measures of barbell velocity with a smartphone high-speed camera is developed. A custom image-processing algorithm detects reference points on a multipower machine to autocalibrate, and automatically tracks barbell markers to deliver real-time kinematics-derived parameters. Validity and reliability were studied by comparing the simultaneous measurement of 160 repetitions of back-squat lifts executed by 20 athletes with the proposed instrument and a validated linear position transducer, used as the criterion. The video system produced practically identical range, velocity, force, and power outcomes to the criterion, with low, proportional systematic bias and random errors. Our results suggest that the developed video system is a valid, reliable, and trustworthy instrument for accurately measuring velocity and derived variables, with practical implications for coaches and practitioners.
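The core calibration-and-velocity computation of such a video instrument can be sketched as follows. This is a simplified illustration (straight-line vertical displacement of a single tracked marker), not the published algorithm, and all numbers are hypothetical:

```python
def mean_velocity(pixel_positions, fps, ref_len_m, ref_len_px):
    """Mean barbell velocity (m/s) from per-frame marker pixel positions.

    Calibration uses a reference of known physical length visible in the frame
    (e.g. a segment of the multipower machine rail) to convert pixels to metres.
    """
    m_per_px = ref_len_m / ref_len_px
    displacement_m = abs(pixel_positions[-1] - pixel_positions[0]) * m_per_px
    duration_s = (len(pixel_positions) - 1) / fps
    return displacement_m / duration_s
```

For example, a marker displaced 100 px over 2 frames at 50 fps, with a 1 m reference spanning 500 px, gives a mean velocity of 0.2 m / 0.04 s = 5 m/s.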


Subject(s)
Resistance Training; Smartphone; Weight Lifting; Biomechanical Phenomena; Humans; Reproducibility of Results; Video Recording
3.
Entropy (Basel) ; 23(7)2021 Jul 14.
Article in English | MEDLINE | ID: mdl-34356439

ABSTRACT

Atrial fibrillation (AF) is the most common cardiac arrhythmia. At present, cardiac ablation is the main treatment procedure for AF. To guide and plan this procedure, it is essential for clinicians to obtain patient-specific 3D geometrical models of the atria. For this, there is interest in automatic image-segmentation algorithms, such as deep learning (DL) methods, as opposed to manual segmentation, which is error-prone and time-consuming. However, optimizing DL algorithms requires many annotated examples, which increases acquisition costs. The aim of this work is to develop automatic, high-performance computational models for left and right atrium (LA and RA) segmentation from a few labelled MRI volumetric images with a 3D Dual U-Net algorithm. To this end, a supervised domain adaptation (SDA) method is introduced to transfer knowledge from late gadolinium enhanced (LGE) MRI volumetric training samples (80 LA-annotated samples) to a network trained with balanced steady-state free precession (bSSFP) MR images with a limited number of annotations (19 RA- and LA-annotated samples). The resulting knowledge-transferred SDA model outperformed the same network trained from scratch in both RA (Dice = 0.9160) and LA (Dice = 0.8813) segmentation tasks.
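The Dice coefficient used to report these segmentation results can be computed as below; a minimal sketch on flat binary masks, not the paper's evaluation code:

```python
def dice(pred, target):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists):
    2 * |intersection| / (|pred| + |target|)."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0
```

A Dice of 0.9160, as reported for the RA, means the predicted and reference atrial masks overlap almost completely.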

4.
Sensors (Basel) ; 20(4)2020 Feb 13.
Article in English | MEDLINE | ID: mdl-32069912

ABSTRACT

The number of blind people worldwide is estimated to exceed 40 million by 2025. It is therefore necessary to develop novel algorithms based on fundus-image descriptors that allow the automatic classification of retinal tissue into healthy and pathological at early stages. In this paper, we focus on one of the most common pathologies today: diabetic retinopathy. The proposed method avoids the need for lesion segmentation or candidate-map generation before the classification stage. Local binary patterns and granulometric profiles are computed locally to extract texture and morphological information from retinal images. Different combinations of this information feed classification algorithms to optimally discriminate bright and dark lesions from healthy tissue. Through several experiments, the ability of the proposed system to identify diabetic retinopathy signs is validated using different public databases with a large degree of variability and without image exclusion.
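A local binary pattern, one of the two descriptors named above, assigns each pixel a code summarising which of its 8 neighbours are at least as bright as the centre. A minimal single-pixel sketch (the exact LBP variant, radius and sampling used in the paper are not specified here):

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern code for pixel (r, c) of a 2D list.

    Each neighbour >= centre sets one bit, clockwise from the top-left,
    yielding a code in [0, 255] that describes local texture.
    """
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code
```

Histograms of these codes over image patches give the texture features that feed the classifiers.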


Subject(s)
Diabetic Retinopathy/diagnosis; Fundus Oculi; Image Interpretation, Computer-Assisted; Algorithms; Aneurysm/diagnosis; Aneurysm/diagnostic imaging; Area Under Curve; Exudates and Transudates/diagnostic imaging; Hemorrhage/diagnosis; Hemorrhage/diagnostic imaging; Humans; Machine Learning; ROC Curve
5.
Entropy (Basel) ; 21(4)2019 Apr 02.
Article in English | MEDLINE | ID: mdl-33267070

ABSTRACT

Analysis of histopathological images is the most reliable procedure for identifying prostate cancer. Most studies aim to develop computer-aided systems to address the Gleason grading problem. In contrast, we delve into the discrimination between healthy and cancerous tissue at its earliest stage, focusing only on the information contained in the automatically segmented gland candidates. We propose a hand-driven learning approach, in which we perform an exhaustive hand-crafted feature extraction stage that combines, in a novel way, descriptors of morphology, texture, fractals and contextual information of the candidates under study. We then carry out an in-depth statistical analysis to select the most relevant features, which constitute the inputs to the optimised machine-learning classifiers. Additionally, we apply, for the first time on segmented prostate glands, deep-learning algorithms based on a modified version of the popular VGG19 neural network. We fine-tuned the last convolutional block of the architecture to provide the model with specific knowledge about the gland images. The hand-driven learning approach, using a nonlinear support vector machine, slightly outperforms the other experiments, with a final multi-class accuracy of 0.876 ± 0.026 in the discrimination between false glands (artefacts), benign glands and Gleason grade 3 glands.
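Morphological descriptors of a segmented gland candidate, such as area, perimeter and compactness, can be sketched as follows. This is an illustrative approximation, not the study's actual feature set:

```python
import math

def shape_descriptors(mask):
    """Area, perimeter and compactness (4*pi*A / P^2) of a binary mask (2D 0/1 list).

    Perimeter is approximated as the number of foreground pixels touching the
    background (4-connectivity) or the image border.
    """
    h, w = len(mask), len(mask[0])
    area, perim = 0, 0
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue
            area += 1
            on_border = r in (0, h - 1) or c in (0, w - 1)
            touches_bg = any(
                not mask[r + dr][c + dc]
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= r + dr < h and 0 <= c + dc < w)
            if on_border or touches_bg:
                perim += 1
    return area, perim, 4 * math.pi * area / (perim * perim) if perim else 0.0
```

Descriptors of this kind, concatenated with texture, fractal and contextual features, would form the input vectors screened by the statistical analysis.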

6.
Sci Rep ; 13(1): 17764, 2023 Oct 18.
Article in English | MEDLINE | ID: mdl-37853065

ABSTRACT

The creation of artistic images through the use of artificial intelligence is an area that has gained interest in recent years. In particular, the ability of neural networks to separate and subsequently recombine the style of different images, generating a new artistic image with the desired style, has attracted both the academic and industrial communities. This work addresses the challenge of generating artistic images in the style of pictorial Impressionism and, specifically, images that imitate the style of one of its greatest exponents, the painter Claude Monet. After analysing several theoretical approaches, Cycle Generative Adversarial Networks were chosen as the base model. From this point, a new training methodology that had not previously been applied to cyclical systems, the top-k approach, is implemented. The proposed system is characterised by using, in each training iteration, the k images that, in the previous iteration, best imitated the artist's style. To evaluate the performance of the proposed methods, the results obtained with both methodologies, basic and top-k, were analysed from both quantitative and qualitative perspectives. Both evaluations show that the proposed top-k approach recreates the author's style more successfully and, at the same time, demonstrate the ability of artificial intelligence to generate something as creative as Impressionist paintings.
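The top-k selection step described above, keeping only the k generated images the discriminator rates as most realistic for the generator update, can be sketched as below. This is a schematic fragment; the real system operates on image tensors inside the CycleGAN training loop:

```python
def topk_batch(fake_images, disc_scores, k):
    """Keep the k generated samples the discriminator scored most 'real'.

    In top-k GAN training, only these samples contribute to the generator's
    loss for the current iteration; the rest of the batch is discarded.
    """
    ranked = sorted(zip(disc_scores, range(len(fake_images))), reverse=True)
    return [fake_images[i] for _, i in ranked[:k]]
```

The intuition is that gradients from the generator's current best imitations are more informative than gradients from its failures.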

7.
F S Sci ; 4(3): 211-218, 2023 08.
Article in English | MEDLINE | ID: mdl-37394179

ABSTRACT

OBJECTIVE: To develop a spatiotemporal model for the prediction of euploid and aneuploid embryos using time-lapse videos from 10-115 hours after insemination (hpi). DESIGN: Retrospective study. MAIN OUTCOME MEASURES: The research used an end-to-end approach to develop an automated artificial intelligence system capable of extracting features from images and classifying them while considering spatiotemporal dependencies. A convolutional neural network extracted the most relevant features from each video frame. A bidirectional long short-term memory layer received this information and analyzed the temporal dependencies, obtaining a low-dimensional feature vector characterizing each video. A multilayer perceptron classified the videos into 2 groups, euploid and non-euploid. RESULTS: Model accuracy fell between 0.6170 and 0.7308. A multi-input model with a gated recurrent unit module performed best, with a precision (positive predictive value) of 0.8205 for predicting euploidy. Sensitivity, specificity, F1 score, and accuracy were 0.6957, 0.7813, 0.7042, and 0.7308, respectively. CONCLUSIONS: This article proposes an artificial intelligence solution for prioritizing euploid embryos for transfer. In particular, it identifies a noninvasive method for chromosomal-status diagnosis using a deep learning approach that analyzes the raw data provided by time-lapse incubators. The method demonstrates the potential to automate the evaluation process, allowing spatial and temporal information to be encoded.
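The reported metrics follow from the standard confusion-matrix definitions, which can be written out explicitly; a generic helper, not the study's evaluation code:

```python
def binary_metrics(tp, fp, tn, fn):
    """Precision, sensitivity (recall), specificity, F1 score and accuracy
    from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, sensitivity, specificity, f1, accuracy
```

Note that precision (positive predictive value) can be high, as in the 0.8205 reported here, while sensitivity remains lower, since the two rates penalise different error types.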


Subject(s)
Deep Learning; Retrospective Studies; Time-Lapse Imaging; Artificial Intelligence; Ploidies
8.
Sci Data ; 10(1): 704, 2023 10 16.
Article in English | MEDLINE | ID: mdl-37845235

ABSTRACT

Spitzoid tumors (ST) are a group of melanocytic tumors of high diagnostic complexity. Since Sophie Spitz first described them in 1948, diagnostic uncertainty has persisted, especially for the intermediate category known as Spitz tumor of unknown malignant potential (STUMP) or atypical Spitz tumor. Studies developing deep learning (DL) models to diagnose melanocytic tumors using whole-slide imaging (WSI) are scarce, and few have used STs for analysis, excluding STUMP. To address this gap, we introduce SOPHIE: the first ST dataset with WSIs, labeled as benign, malignant, and atypical tumors, together with the clinical information of each patient. Additionally, we describe two DL models implemented as validation examples using this database.


Subject(s)
Deep Learning; Melanoma; Nevus, Epithelioid and Spindle Cell; Skin Neoplasms; Humans; Melanoma/diagnostic imaging; Melanoma/pathology; Metadata; Nevus, Epithelioid and Spindle Cell/diagnostic imaging; Skin Neoplasms/pathology
9.
Comput Methods Programs Biomed ; 240: 107695, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37393742

ABSTRACT

BACKGROUND AND OBJECTIVE: Prostate cancer is one of the most common diseases affecting men. The main diagnostic and prognostic reference tool is the Gleason scoring system, in which an expert pathologist assigns a Gleason grade to a sample of prostate tissue. As this process is very time-consuming, artificial intelligence applications have been developed to automate it. However, the training process is often confronted with insufficient and unbalanced databases, which affects the generalisability of the models. Therefore, the aim of this work is to develop a generative deep learning model capable of synthesising patches of any selected Gleason grade, in order to perform data augmentation on unbalanced data and test the resulting improvement of classification models. METHODOLOGY: The proposed methodology consists of a conditional Progressive Growing GAN (ProGleason-GAN) capable of synthesising prostate histopathological tissue patches for a selected Gleason grade cancer pattern. The conditional Gleason grade information is introduced into the model through embedding layers, so there is no need to add a term to the Wasserstein loss function. Minibatch standard deviation and pixel normalisation were used to improve the performance and stability of the training process. RESULTS: The realism of the synthetic samples was assessed with the Fréchet Inception Distance (FID). After post-processing stain normalisation, we obtained an FID of 88.85 for non-cancerous patterns, 81.86 for GG3, 49.32 for GG4 and 108.69 for GG5. In addition, a group of expert pathologists performed an external validation of the proposed framework. Finally, applying the proposed framework improved the classification results on the SICAPv2 dataset, proving its effectiveness as a data augmentation method.
CONCLUSIONS: The ProGleason-GAN approach, combined with stain-normalisation post-processing, provides state-of-the-art results with respect to the Fréchet Inception Distance. The model can synthesise samples of non-cancerous patterns, GG3, GG4 or GG5. The inclusion of conditional information about the Gleason grade during training allows the model to select the cancerous pattern of a synthetic sample. The proposed framework can be used as a data augmentation method.
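The Fréchet distance underlying the FID has a closed form between two Gaussians. For the special case of diagonal covariances it reduces to the expression below; the real FID is computed on Inception-v3 activations with full covariance matrices (requiring a matrix square root), so this is only a simplified sketch:

```python
import math

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum(var1 + var2 - 2 * sqrt(var1 * var2))."""
    d = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    d += sum(v1 + v2 - 2 * math.sqrt(v1 * v2) for v1, v2 in zip(var1, var2))
    return d
```

Lower values mean the synthetic-feature distribution sits closer to the real one, which is why the GG4 score of 49.32 indicates the most realistic synthetic class here.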


Subject(s)
Artificial Intelligence; Prostatic Neoplasms; Male; Humans; Prostatic Neoplasms/surgery; Neoplasm Grading; Prognosis; Prostatectomy
10.
Bioengineering (Basel) ; 10(10)2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37892874

ABSTRACT

This paper proposes a federated content-based medical image retrieval (FedCBMIR) tool that uses federated learning (FL) to address the challenge of acquiring a diverse medical data set for training CBMIR models. CBMIR is a tool for finding the most similar cases in a data set to assist pathologists. Training such a tool requires a pool of whole-slide images (WSIs) to train the feature extractor (FE) to extract an optimal embedding vector. The strict regulations surrounding data sharing in hospitals make it difficult to collect a rich data set. FedCBMIR distributes an unsupervised FE to collaborating centers for training without sharing the data set, resulting in shorter training times and higher performance. FedCBMIR was evaluated in two experiments: one with two clients holding two different breast cancer data sets, BreaKHis and Camelyon17 (CAM17), and one with four clients holding the BreaKHis data set at four different magnifications. FedCBMIR increases the F1 score (F1S) of each client from 96% to 98.1% on CAM17 and from 95% to 98.4% on BreaKHis, with 11.44 fewer hours of training time. In the BreaKHis experiment, FedCBMIR provides F1S of 98%, 96%, 94%, and 97% with a generalized model and accomplishes this in 25.53 fewer hours of training.
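The aggregation step of such a federated setup is typically FedAvg-style weight averaging, sketched below on plain parameter lists. This is a generic illustration of federated averaging; the paper's exact aggregation scheme may differ:

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: size-weighted average of per-client parameter vectors.

    Each client trains locally and sends only its parameters; the server
    averages them, weighted by each client's number of training samples,
    so no raw data ever leaves the hospital.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]
```

The averaged parameters are then redistributed to the clients for the next round of local training.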

11.
Comput Methods Programs Biomed ; 221: 106895, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35609359

ABSTRACT

BACKGROUND: Embryo morphology is a predictive marker for implantation success and, ultimately, live birth. Viability evaluation and quality grading are commonly used to select the embryo with the highest implantation potential. However, the traditional method of manual embryo assessment is time-consuming and highly susceptible to inter- and intra-observer variability; automating this process yields more objective and accurate predictions. METHOD: In this paper, we propose a novel methodology based on deep learning to automatically evaluate the morphological appearance of human embryos from time-lapse imaging. A supervised contrastive learning framework is implemented to predict embryo viability at day 4 and day 5, and an inductive transfer approach is applied to classify embryo quality at both times. RESULTS: Both methods outperformed conventional approaches and improved state-of-the-art embryology results on an independent test set. The viability models achieved accuracies of 0.8103 and 0.9330, and the quality models reached 0.7500 and 0.8001, for day 4 and day 5, respectively. Furthermore, the qualitative results were consistent with clinical interpretation. CONCLUSIONS: The proposed methods are up to date with the artificial intelligence literature and have proven promising. Furthermore, our findings represent a breakthrough in the field of embryology in that they study the possibility of embryo selection at day 4. Moreover, the Grad-CAM findings are directly in line with embryologists' decisions. Finally, our results demonstrated excellent potential for the inclusion of the models in clinical practice.


Subject(s)
Artificial Intelligence; Deep Learning; Embryo Implantation; Humans; Insemination; Time-Lapse Imaging/methods
12.
Cancers (Basel) ; 15(1)2022 Dec 21.
Article in English | MEDLINE | ID: mdl-36612037

ABSTRACT

The rise of artificial intelligence (AI) has shown promising performance as a support tool in clinical pathology workflows. In addition to the well-known interobserver variability between dermatopathologists, melanomas present a significant challenge in their histological interpretation. This study aims to analyze all previously published studies on whole-slide images of melanocytic tumors that rely on deep learning techniques for automatic image analysis. Embase, PubMed, Web of Science, and the Virtual Health Library were searched for relevant studies for the systematic review, in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist. Articles from 2015 to July 2022 were included, with an emphasis placed on the artificial intelligence methods used. Twenty-eight studies that fulfilled the inclusion criteria were grouped into four categories based on their clinical objectives: pathologists versus deep learning models (n = 10), diagnostic prediction (n = 7), prognosis (n = 5), and histological features (n = 6). These were then analyzed to draw conclusions on the general parameters and conditions of AI in pathology, as well as the factors necessary for better performance in real scenarios.

13.
Comput Methods Programs Biomed ; 224: 107012, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35843078

ABSTRACT

BACKGROUND AND OBJECTIVE: Ulcerative colitis (UC) is an inflammatory bowel disease (IBD) affecting the colon and the rectum, characterized by a remitting-relapsing course. Histology is considered the most stringent criterion for detecting the mucosal inflammation associated with UC. In turn, histologic remission (HR) correlates with improved clinical outcomes and has recently been recognized as a desirable treatment target. The leading biomarker for assessing histologic remission is the presence or absence of neutrophils; the finding of this cell in specific colon structures therefore indicates that the patient has UC activity. However, no previous deep-learning studies have been developed to identify UC based on neutrophil detection using whole-slide images (WSI). METHODS: The methodological core of this work is a novel multiple instance learning (MIL) framework with location constraints able to determine the presence of UC activity using WSI. In particular, we put forward an effective way to introduce constraints on positive instances, exploiting additional weakly supervised information that is easy to obtain and provides a significant boost to the learning process. In addition, we propose a new weighted embedding to enlarge the relevance of the positive instances. RESULTS: Extensive experiments on a multi-center dataset of colon and rectum WSIs, PICASSO-MIL, demonstrate that using the location information we can considerably improve the results at WSI level. In comparison with prior MIL settings, our method achieves 10% improvements in bag-level accuracy. CONCLUSION: Our model, which introduces a new form of constraints, surpasses the results achieved by current state-of-the-art methods focused on the MIL paradigm. Our method can be applied to other histological concerns where the morphological features determining a positive WSI are tiny and similar to others in the image.
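The weighted-embedding idea, giving likely-positive instances more influence on the bag-level representation, can be sketched with a softmax weighting over instance scores. This is an illustrative reconstruction, not the published model:

```python
import math

def weighted_bag_embedding(instance_embeddings, instance_scores):
    """Softmax-weight instance embeddings by their positive-class scores,
    so that likely-positive instances (e.g. patches that appear to contain
    neutrophils) dominate the bag-level representation."""
    exps = [math.exp(s) for s in instance_scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(instance_embeddings[0])
    return [sum(w * emb[i] for w, emb in zip(weights, instance_embeddings))
            for i in range(dim)]
```

With a strongly positive instance score, the bag embedding collapses onto that instance; with uniform scores it reduces to a plain mean.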


Subject(s)
Colitis, Ulcerative; Biomarkers; Colitis, Ulcerative/complications; Colitis, Ulcerative/diagnostic imaging; Colitis, Ulcerative/drug therapy; Humans
14.
Comput Med Imaging Graph ; 88: 101846, 2021 03.
Article in English | MEDLINE | ID: mdl-33485056

ABSTRACT

BACKGROUND AND OBJECTIVE: Prostate cancer is one of the main diseases affecting men worldwide. The Gleason scoring system is the primary diagnostic tool for prostate cancer. It is obtained via the visual analysis of cancerous patterns in prostate biopsies performed by expert pathologists, and the aggregation of the main Gleason grades into a combined score. Computer-aided diagnosis systems can reduce the workload of pathologists and increase objectivity. Nevertheless, they require a large number of labeled samples, with pixel-level annotations performed by expert pathologists, to be developed. Recently, efforts have been made in the literature to develop algorithms aiming at the direct estimation of the global Gleason score at biopsy/core level from global labels. However, these algorithms do not cover the accurate localization of the Gleason patterns in the tissue. Such location maps are the basis for providing experts with a reliable computer-aided diagnosis system for use in clinical practice. In this work, we propose a deep-learning-based system able to detect local cancerous patterns in prostate tissue using, during training, only the global Gleason score obtained from clinical records. METHODS: The methodological core of this work is the proposed weakly supervised convolutional neural network, WeGleNet, based on a multi-class segmentation layer after the feature-extraction module, a global aggregation, and the slicing of the background class for the model-loss estimation during training. RESULTS: Using a public dataset of prostate tissue microarrays, we obtained a Cohen's quadratic kappa (κ) of 0.67 for the pixel-level prediction of cancerous patterns in the validation cohort. We compared the model's performance for semantic segmentation of Gleason grades with supervised state-of-the-art architectures in the test cohort, obtaining a pixel-level κ of 0.61 and a macro-averaged F1 score of 0.58, on a par with fully supervised methods. Regarding the estimation of the core-level Gleason score, we obtained κ values of 0.76 and 0.67 between the model and two different pathologists. CONCLUSIONS: WeGleNet is capable of performing semantic segmentation of Gleason grades comparably to fully supervised methods without requiring pixel-level annotations. Moreover, the model reached a performance on a par with inter-pathologist agreement for the global Gleason scoring of the cores.
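The Cohen's quadratic kappa (κ) reported throughout these studies can be computed directly from two label sequences; a self-contained helper, not the papers' evaluation code:

```python
def quadratic_kappa(a, b, n_classes):
    """Cohen's quadratic-weighted kappa between two integer label lists.

    Disagreements are penalised by the squared distance between classes,
    normalised by (n_classes - 1)^2, and compared against chance agreement
    derived from the two raters' marginal label histograms.
    """
    n = len(a)
    obs = [[0] * n_classes for _ in range(n_classes)]
    for x, y in zip(a, b):
        obs[x][y] += 1
    hist_a = [a.count(c) for c in range(n_classes)]
    hist_b = [b.count(c) for c in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2
            num += w * obs[i][j]
            den += w * hist_a[i] * hist_b[j] / n
    return 1.0 - num / den
```

Perfect agreement gives κ = 1, chance-level agreement gives κ = 0, and near misses (adjacent Gleason grades) are penalised less than distant ones.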


Subject(s)
Prostate; Prostatic Neoplasms; Histological Techniques; Humans; Male; Neoplasm Grading; Neural Networks, Computer; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnostic imaging; Semantics
15.
Comput Methods Programs Biomed ; 200: 105855, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33303289

ABSTRACT

BACKGROUND AND OBJECTIVE: Glaucoma is one of the leading causes of blindness worldwide. Many studies based on fundus and optical coherence tomography (OCT) imaging have used artificial-intelligence techniques to help ophthalmologists. Currently, 3D spectral-domain optical coherence tomography (SD-OCT) samples have become more important, since they may enclose promising information for glaucoma detection. To analyse the hidden knowledge of the 3D scans for glaucoma detection, we propose, for the first time, a deep-learning methodology based on leveraging the spatial dependencies of the features extracted from the B-scans. METHODS: The experiments were performed on a database composed of 176 healthy and 144 glaucomatous SD-OCT volumes centred on the optic nerve head (ONH). The proposed methodology consists of two well-differentiated training stages: a slide-level feature extractor and a volume-based predictive model. The slide-level discriminator is characterised by two new convolutional modules, residual and attention, which are combined via skip connections with other fine-tuned architectures. In the second stage, we first performed data-volume conditioning before extracting the features from the slides of the SD-OCT volumes. Then, long short-term memory (LSTM) networks were used to combine the recurrent dependencies embedded in the latent space into a holistic feature vector, generated by the proposed sequential-weighting module (SWM). RESULTS: The feature extractor reports AUC values higher than 0.93 on both the primary and external test sets. In addition, the proposed end-to-end system based on a combination of CNN and LSTM networks achieves an AUC of 0.8847 in the prediction stage, outperforming other state-of-the-art approaches intended for glaucoma detection.
Additionally, class activation maps (CAMs) were computed to highlight the most relevant regions per B-scan when discerning between healthy and glaucomatous eyes from raw SD-OCT volumes. CONCLUSIONS: The proposed model is able to extract the features from the B-scans of the volumes and combine the information of the latent space to perform a volume-level glaucoma prediction. Our model, which combines residual and attention blocks with a sequential-weighting module to refine the LSTM outputs, surpasses the results achieved by current state-of-the-art methods focused on 3D deep-learning architectures.


Subject(s)
Glaucoma; Optic Disk; Fundus Oculi; Glaucoma/diagnostic imaging; Humans; Spatial Analysis; Tomography, Optical Coherence
16.
Comput Biol Med ; 138: 104932, 2021 11.
Article in English | MEDLINE | ID: mdl-34673472

ABSTRACT

In recent times, bladder cancer has increased significantly in both incidence and mortality. Currently, two subtypes are known based on tumour growth: non-muscle-invasive (NMIBC) and muscle-invasive bladder cancer (MIBC). In this work, we focus on the MIBC subtype because it has the worst prognosis and can spread to adjacent organs. We present a self-learning framework to grade bladder cancer from histological images stained using immunohistochemical techniques. Specifically, we propose a novel Deep Convolutional Embedded Attention Clustering (DCEAC) which allows the classification of histological patches into different levels of disease severity, according to established patterns in the literature. The proposed DCEAC model follows a fully unsupervised two-step learning methodology to discern between non-tumour, mild and infiltrative patterns from high-resolution 512 × 512 pixel samples. Our system outperforms previous clustering-based methods by including a convolutional attention module, which enables refinement of the features of the latent space prior to the classification stage. The proposed network surpasses state-of-the-art approaches by 2-3% across different metrics, reaching a final average accuracy of 0.9034 in a multi-class scenario. Furthermore, the reported class activation maps show that our model is able to learn by itself the same patterns that clinicians consider relevant, without requiring previous annotation steps. This represents a breakthrough in MIBC grading, bridging the gap with respect to training the model on labelled data.
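Deep embedded clustering methods of this family typically assign samples to clusters softly with a Student's t kernel. The sketch below shows that assignment step under the assumption that DCEAC follows the standard DEC formulation, which the abstract does not confirm:

```python
def soft_assign(embeddings, centroids, alpha=1.0):
    """DEC-style soft cluster assignment with a Student's t kernel:
    q_ij is proportional to (1 + ||z_i - mu_j||^2 / alpha) ** (-(alpha + 1) / 2),
    normalised so each row sums to 1."""
    q = []
    for z in embeddings:
        row = []
        for mu in centroids:
            d2 = sum((a - b) ** 2 for a, b in zip(z, mu))
            row.append((1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0))
        s = sum(row)
        q.append([v / s for v in row])
    return q
```

In a full pipeline, these soft assignments would be sharpened into a target distribution that the encoder is trained to match, pulling each patch embedding toward its severity cluster.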


Subject(s)
Urinary Bladder Neoplasms; Cluster Analysis; Humans
17.
IEEE J Biomed Health Inform ; 25(8): 3094-3104, 2021 08.
Article in English | MEDLINE | ID: mdl-33621184

ABSTRACT

Prostate cancer is one of the main diseases affecting men worldwide. The gold standard for diagnosis and prognosis is the Gleason grading system. In this process, pathologists manually analyze prostate histology slides under the microscope, a highly time-consuming and subjective task. In recent years, computer-aided diagnosis (CAD) systems have emerged as a promising tool that could support pathologists in daily clinical practice. Nevertheless, these systems are usually trained using tedious, error-prone pixel-level annotations of Gleason grades in the tissue. To alleviate the need for manual pixel-wise labeling, only a handful of works have been presented in the literature. Furthermore, despite the promising results achieved on global scoring, the location of cancerous patterns in the tissue has only been addressed qualitatively. These heatmaps of tumor regions, however, are crucial to the reliability of CAD systems, as they provide explainability for the system's output and give pathologists confidence that the model is focusing on medically relevant features. Motivated by this, we propose a novel weakly supervised deep-learning model, based on self-learning CNNs, that leverages only the global Gleason score of gigapixel whole-slide images during training to accurately perform both grading of patch-level patterns and biopsy-level scoring. To evaluate the performance of the proposed method, we perform extensive experiments on three different external datasets for patch-level Gleason grading, and on two different test sets for global Grade Group prediction. We empirically demonstrate that our approach outperforms its supervised counterpart on patch-level Gleason grading by a large margin, as well as state-of-the-art methods on global biopsy-level scoring. In particular, the proposed model brings an average improvement in Cohen's quadratic kappa (κ) of nearly 18% over full supervision for the patch-level Gleason grading task.
This suggests that the absence of annotator bias in our approach and the capability of using large weakly labeled datasets during training lead to higher-performing and more robust models. Furthermore, raw features obtained from the patch-level classifier were shown to generalize better than previous approaches in the literature to the subjective global biopsy-level scoring task.
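The Cohen's quadratic kappa (κ) reported above weights disagreements between ordinal grades by the squared distance between them, so confusing Grade Group 1 with 2 costs far less than confusing 1 with 5. A minimal NumPy sketch of the metric itself (an illustration, not the paper's code):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's quadratic-weighted kappa for ordinal labels 0..n_classes-1."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    # Observed confusion matrix (rows: true grade, cols: predicted grade).
    obs = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        obs[t, p] += 1
    # Expected matrix under chance agreement (outer product of marginals).
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / len(y_true)
    # Quadratic disagreement weights: (i - j)^2, normalized.
    idx = np.arange(n_classes)
    w = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()
```

Perfect agreement yields κ = 1, chance-level agreement κ ≈ 0, and near-miss errors between adjacent grades are penalized only lightly, which is why κ is the standard metric for Gleason grading.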


Subject(s)
Image Interpretation, Computer-Assisted , Prostatic Neoplasms , Humans , Male , Neoplasm Grading , Reproducibility of Results
18.
Artif Intell Med ; 118: 102132, 2021 08.
Article in English | MEDLINE | ID: mdl-34412848

ABSTRACT

Glaucoma is one of the leading causes of blindness worldwide, and Optical Coherence Tomography (OCT) is the quintessential imaging technique for its detection. Unlike most state-of-the-art studies, which focus on glaucoma detection, in this paper we propose, for the first time, a novel framework for glaucoma grading using raw circumpapillary B-scans. In particular, we set out a new OCT-based hybrid network which combines hand-driven and deep learning algorithms. An OCT-specific descriptor is proposed to extract hand-crafted features related to the retinal nerve fibre layer (RNFL). In parallel, an innovative CNN is developed using skip connections to include tailored residual and attention modules that refine the automatic features of the latent space. The proposed architecture is used as a backbone to conduct novel few-shot learning based on static and dynamic prototypical networks. The k-shot paradigm is redefined, giving rise to a supervised end-to-end system which provides substantial improvements in discriminating between healthy, early and advanced glaucoma samples. The training and evaluation of the dynamic prototypical network are addressed using two fused databases acquired with the Heidelberg Spectralis system. Validation and testing reach categorical accuracies of 0.9459 and 0.8788, respectively, for glaucoma grading. In addition, the high performance reported by the proposed model for glaucoma detection deserves special mention. The findings from the class activation maps are directly in line with clinicians' opinion, since the heatmaps point to the RNFL as the most relevant structure for glaucoma diagnosis.
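At its core, classification with a prototypical network reduces to nearest-mean matching in the embedding space: each class prototype is the mean of its support embeddings, and a query is assigned to the closest prototype. A minimal NumPy sketch of that reading (the embeddings here are placeholders; the paper's static/dynamic prototype refinement is not reproduced):

```python
import numpy as np

def prototypes(support_emb, support_lbl, n_classes):
    """Class prototype = mean of the support embeddings of that class."""
    return np.stack([support_emb[support_lbl == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    """Assign each query to the nearest prototype (Euclidean distance)."""
    dists = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)
```

In the k-shot setting described above, `support_emb` would hold the k labelled B-scan embeddings per grade (healthy, early, advanced), and the end-to-end training moves the embedding network so that this nearest-prototype rule separates the grades.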


Subject(s)
Glaucoma , Tomography, Optical Coherence , Algorithms , Databases, Factual , Glaucoma/diagnosis , Humans , Neural Networks, Computer
19.
Artif Intell Med ; 121: 102197, 2021 11.
Article in English | MEDLINE | ID: mdl-34763799

ABSTRACT

Melanoma is an aggressive neoplasm responsible for the majority of deaths from skin cancer. In particular, spitzoid melanocytic tumors are among the most challenging melanocytic lesions due to their ambiguous morphological features. The gold standard for their diagnosis and prognosis is the analysis of skin biopsies. In this process, dermatopathologists visualize skin histology slides under the microscope, a highly time-consuming and subjective task. In recent years, computer-aided diagnosis (CAD) systems have emerged as a promising tool that could support pathologists in daily clinical practice. Nevertheless, no automatic CAD systems have yet been proposed for the analysis of spitzoid lesions. Even for conventional melanoma, no existing system supports both selecting the tumor region and predicting the benign or malignant form in the diagnosis. Motivated by this, we propose a novel end-to-end weakly supervised deep learning model, based on inductive transfer learning, with an improved convolutional neural network (CNN) that refines the embedding features of the latent space. The framework is composed of a source model in charge of finding tumor patch-level patterns and a target model that focuses on the specific diagnosis of a biopsy. The latter retrains the backbone of the source model through a multiple instance learning workflow to obtain the biopsy-level score. To evaluate the performance of the proposed methods, we performed extensive experiments on a private skin database with spitzoid lesions. Test results achieved an accuracy of 0.9231 and 0.80 for the source and target models, respectively. In addition, the heatmap findings are directly in line with clinicians' medical decisions and, in some cases, even highlight patterns of interest that were overlooked by the pathologist.
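In a multiple instance learning workflow like the one described above, only the biopsy carries a label, and patch-level predictions must be aggregated into one biopsy-level decision. A common aggregation rule is max-pooling (the biopsy is malignant if its most suspicious patch is), sketched below as an illustration; the abstract does not specify the exact aggregation used, so treat this rule as an assumption:

```python
import numpy as np

def biopsy_score(patch_probs):
    """Max-pooling MIL aggregation: the biopsy score is the probability of
    its most suspicious patch."""
    return float(np.max(patch_probs))

def biopsy_label(patch_probs, threshold=0.5):
    """Biopsy-level decision: malignant (1) if any patch exceeds threshold."""
    return int(biopsy_score(patch_probs) >= threshold)
```

During training, the gradient from the biopsy-level loss flows back through the selected patch, which is what lets the source backbone be retrained with only weak biopsy labels.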


Subject(s)
Melanoma , Skin Neoplasms , Biopsy , Diagnosis, Computer-Assisted , Humans , Melanoma/diagnosis , Microscopy , Skin Neoplasms/diagnosis
20.
Comput Methods Programs Biomed ; 198: 105788, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33130492

ABSTRACT

BACKGROUND AND OBJECTIVE: Optical coherence tomography (OCT) is a useful technique to monitor retinal layer state both in humans and in animal models. Automated OCT analysis in rats is of great relevance for studying the possible toxic effects of drugs and other treatments before human trials. In this paper, two different approaches to detect the most significant retinal layers in a rat OCT image are presented. METHODS: One approach is based on a combination of local horizontal intensity profiles with a newly proposed variant of the watershed transformation; the other is built upon an encoder-decoder convolutional network architecture. RESULTS: After a wide validation, average absolute distance errors of 3.77 ± 2.59 µm and 1.90 ± 0.91 µm are achieved by the two approaches, respectively, on a batch of the rat OCT database. In a second test of the deep-learning-based method on an unseen batch of the database, an average absolute distance error of 2.67 ± 1.25 µm is obtained. The rat OCT database used in this paper is made publicly available to facilitate further comparisons. CONCLUSIONS: The obtained results demonstrate the competitiveness of the first approach, which outperforms the commercial Insight image segmentation software (Phoenix Research Labs), as well as its utility for generating labelled images for validation purposes, significantly speeding up the ground-truth generation process. The deep-learning-based method improves on the results achieved by the more conventional method and by other state-of-the-art techniques. In addition, the results of the proposed network were verified to generalize to new rat OCT images.
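The intensity-profile approach described above locates layer boundaries from brightness transitions along each vertical column (A-scan) of the B-scan. A minimal sketch of that idea, finding the strongest dark-to-bright transition per column (a simplification for illustration; the paper's watershed refinement is not reproduced):

```python
import numpy as np

def layer_boundary(column):
    """Locate a layer boundary along one A-scan column as the depth of the
    strongest dark-to-bright intensity transition (maximum forward gradient).
    Returns the index of the pixel just above the transition."""
    grad = np.diff(np.asarray(column, dtype=float))
    return int(np.argmax(grad))

def boundary_profile(bscan):
    """Apply the per-column detector across a B-scan (rows: depth, cols: A-scans)."""
    return [layer_boundary(bscan[:, j]) for j in range(bscan.shape[1])]
```

A real pipeline would smooth the profiles and enforce continuity between neighboring columns (which is roughly the role the watershed variant plays in the first approach).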


Subject(s)
Rodentia , Tomography, Optical Coherence , Animals , Neural Networks, Computer , Rats , Retina/diagnostic imaging , Software