Results 1 - 20 of 26
1.
Histopathology ; 85(1): 155-170, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38606989

ABSTRACT

The histopathological classification of melanocytic tumours with spitzoid features remains a challenging task. We address the complexities involved in the histological classification of these tumours by proposing machine learning (ML) algorithms that objectively rank the most relevant features in order of importance. The data set comprises 122 tumours (39 benign, 44 atypical and 39 malignant) from four different countries. BRAF and NRAS mutation status was evaluated in 51 of them. An analysis-of-variance score was computed to rank 22 clinicopathological variables. The Gaussian naive Bayes algorithm distinguished Spitz naevi from malignant spitzoid tumours with an accuracy of 0.95 and a kappa score of 0.87, utilising the 12 most important variables. For benign versus non-benign Spitz tumours, the test reached a kappa score of 0.88 using the 13 highest-scored features. Furthermore, for the atypical Spitz tumour (AST) versus Spitz melanoma comparison, the logistic regression algorithm achieved a kappa value of 0.66 and an accuracy of 0.85. When the three categories were compared, most ASTs were classified as melanoma because of the similarity of the histological features of the two groups. Our results show promise in supporting the histological classification of these tumours in clinical practice, and provide valuable insight into the use of ML to improve the accuracy and objectivity of this process while minimising interobserver variability. The proposed algorithms represent a potential solution to the lack of a clear threshold for Spitz/spitzoid tumour classification, and their high accuracy supports their usefulness as a tool to improve diagnostic decision-making.


Subjects
Machine Learning; Melanoma; Nevus, Epithelioid and Spindle Cell; Skin Neoplasms; Humans; Nevus, Epithelioid and Spindle Cell/pathology; Nevus, Epithelioid and Spindle Cell/diagnosis; Nevus, Epithelioid and Spindle Cell/genetics; Skin Neoplasms/pathology; Skin Neoplasms/diagnosis; Skin Neoplasms/genetics; Male; Female; Melanoma/pathology; Melanoma/diagnosis; Melanoma/genetics; Adult; Adolescent; Young Adult; Child; Middle Aged; Child, Preschool; Proto-Oncogene Proteins B-raf/genetics; Membrane Proteins/genetics; GTP Phosphohydrolases/genetics; Infant; Mutation; Aged
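The ANOVA-based feature ranking described in the abstract above can be sketched as follows. This is an illustrative numpy implementation of the one-way ANOVA F-score, not the authors' code, and the variable names are hypothetical:

```python
import numpy as np

def anova_f_score(x, y):
    """One-way ANOVA F-statistic for a single feature x across class labels y:
    between-group variance over within-group variance."""
    classes = np.unique(y)
    n, k = len(x), len(classes)
    grand_mean = x.mean()
    groups = [x[y == c] for c in classes]
    between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups) / (k - 1)
    within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n - k)
    return between / within

def rank_features(X, y):
    """Return feature indices sorted by decreasing F-score (most relevant first)."""
    scores = np.array([anova_f_score(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1]
```

The top-ranked indices would then select the 12 or 13 variables fed to the classifier.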
2.
Sci Rep ; 13(1): 17764, 2023 Oct 18.
Article in English | MEDLINE | ID: mdl-37853065

ABSTRACT

The creation of artistic images through the use of Artificial Intelligence is an area that has been gaining interest in recent years. In particular, the ability of Neural Networks to separate and subsequently recombine the style of different images, generating a new artistic image with the desired style, has been a source of study and attraction for the academic and industrial community. This work addresses the challenge of generating artistic images framed in the style of pictorial Impressionism and, specifically, imitating the style of one of its greatest exponents, the painter Claude Monet. After analysing several theoretical approaches, the Cycle Generative Adversarial Network (CycleGAN) was chosen as the base model. On this basis, a training methodology not previously applied to cyclical systems, the top-k approach, is implemented. The proposed system uses, in each training iteration, the k images that best imitated the artist's style in the previous iteration. To evaluate the performance of the proposed methods, the results obtained with both methodologies, basic and top-k, have been analysed from both a quantitative and a qualitative perspective. Both evaluation methods demonstrate that the proposed top-k approach recreates the author's style more successfully and, at the same time, demonstrate the ability of Artificial Intelligence to generate something as creative as impressionist paintings.
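The top-k selection at the heart of the training methodology above can be sketched as follows. This is an illustrative snippet, not the authors' implementation; in practice the scores would come from the discriminator of the CycleGAN:

```python
import numpy as np

def top_k_indices(discriminator_scores, k):
    """Indices of the k generated images rated most 'real' by the discriminator.

    In top-k GAN training, only these samples propagate gradients to the
    generator; the rest of the minibatch is discarded for that update.
    """
    scores = np.asarray(discriminator_scores)
    return np.argsort(scores)[::-1][:k]
```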

3.
Sci Data ; 10(1): 704, 2023 Oct 16.
Article in English | MEDLINE | ID: mdl-37845235

ABSTRACT

Spitzoid tumors (ST) are a group of melanocytic tumors of high diagnostic complexity. Since Sophie Spitz first described them in 1948, diagnostic uncertainty has persisted, especially in the intermediate category known as Spitz tumor of unknown malignant potential (STUMP) or atypical Spitz tumor. Studies developing deep learning (DL) models to diagnose melanocytic tumors using whole slide imaging (WSI) are scarce, and few used STs for analysis, excluding STUMPs. To address this gap, we introduce SOPHIE: the first ST dataset with WSIs, with labels for benign, malignant, and atypical tumors, along with the clinical information of each patient. Additionally, we describe two DL models implemented as validation examples using this database.


Subjects
Deep Learning; Melanoma; Nevus, Epithelioid and Spindle Cell; Skin Neoplasms; Humans; Melanoma/diagnostic imaging; Melanoma/pathology; Metadata; Nevus, Epithelioid and Spindle Cell/diagnostic imaging; Skin Neoplasms/pathology
4.
Bioengineering (Basel) ; 10(10), 2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37892874

ABSTRACT

This paper proposes a federated content-based medical image retrieval (FedCBMIR) tool that uses federated learning (FL) to address the challenge of acquiring a diverse medical data set for training CBMIR models. CBMIR is a tool that finds the cases in the data set most similar to a query, to assist pathologists. Training such a tool requires a pool of whole-slide images (WSIs) to train the feature extractor (FE) to extract an optimal embedding vector. The strict regulations surrounding data sharing in hospitals make it difficult to collect a rich data set. FedCBMIR distributes an unsupervised FE to collaborating centers for training without sharing the data set, resulting in shorter training times and higher performance. FedCBMIR was evaluated in two experiments: one with two clients holding two different breast cancer data sets, BreaKHis and Camelyon17 (CAM17), and one with four clients sharing the BreaKHis data set at four different magnifications. FedCBMIR increases the F1 score (F1S) of each client from 96% to 98.1% on CAM17 and from 95% to 98.4% on BreaKHis, while requiring 11.44 fewer hours of training. FedCBMIR provides 98%, 96%, 94%, and 97% F1S in the BreaKHis experiment with a generalized model and accomplishes this in 25.53 fewer hours of training.

5.
Comput Methods Programs Biomed ; 240: 107695, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37393742

ABSTRACT

BACKGROUND AND OBJECTIVE: Prostate cancer is one of the most common diseases affecting men. The main diagnostic and prognostic reference tool is the Gleason scoring system, in which an expert pathologist assigns a Gleason grade to a sample of prostate tissue. As this process is very time-consuming, artificial intelligence applications have been developed to automate it. The training process is often hampered by insufficient and unbalanced databases, which affects the generalisability of the models. Therefore, the aim of this work is to develop a generative deep learning model capable of synthesising patches of any selected Gleason grade, in order to perform data augmentation on unbalanced data and test the resulting improvement of classification models. METHODOLOGY: The methodology proposed in this work consists of a conditional Progressive Growing GAN (ProGleason-GAN) capable of synthesising prostate histopathological tissue patches with the desired Gleason grade cancer pattern in the synthetic sample. The conditional Gleason grade information is introduced into the model through embedding layers, so there is no need to add a term to the Wasserstein loss function. We used minibatch standard deviation and pixel normalisation to improve the performance and stability of the training process. RESULTS: The realism of the synthetic samples was assessed with the Fréchet Inception Distance (FID). We obtained an FID of 88.85 for non-cancerous patterns, 81.86 for GG3, 49.32 for GG4 and 108.69 for GG5 after post-processing stain normalisation. In addition, a group of expert pathologists performed an external validation of the proposed framework. Finally, the application of our framework improved the classification results on the SICAPv2 dataset, proving its effectiveness as a data augmentation method.
CONCLUSIONS: The ProGleason-GAN approach combined with stain normalisation post-processing provides state-of-the-art results with regard to the Fréchet Inception Distance. The model can synthesise samples of non-cancerous patterns, GG3, GG4 or GG5. The inclusion of conditional information about the Gleason grade during the training process allows the model to select the cancerous pattern of a synthetic sample. The proposed framework can be used as a data augmentation method.


Subjects
Artificial Intelligence; Prostatic Neoplasms; Male; Humans; Prostatic Neoplasms/surgery; Neoplasm Grading; Prognosis; Prostatectomy
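The Fréchet Inception Distance reported in the entry above compares the Gaussian statistics of real and synthetic feature distributions. Below is a minimal sketch of the underlying Fréchet distance, restricted to diagonal covariances so the matrix square root becomes elementwise; the full FID uses Inception-v3 features and full covariance matrices:

```python
import numpy as np

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Fréchet distance between two Gaussians with diagonal covariances.

    General form: d^2 = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2});
    for diagonal covariances the matrix square root is elementwise.
    """
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2))
    return mean_term + cov_term
```

Identical distributions give a distance of zero; larger values indicate less realistic samples.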
6.
F S Sci ; 4(3): 211-218, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37394179

ABSTRACT

OBJECTIVE: To develop a spatiotemporal model for the prediction of euploid and aneuploid embryos using time-lapse videos from 10-115 hours after insemination (hpi). DESIGN: Retrospective study. MAIN OUTCOME MEASURES: The research used an end-to-end approach to develop an automated artificial intelligence system capable of extracting features from images and classifying them while considering spatiotemporal dependencies. A convolutional neural network extracted the most relevant features from each video frame. A bidirectional long short-term memory layer received this information and analyzed the temporal dependencies, obtaining a low-dimensional feature vector characterizing each video. A multilayer perceptron classified the videos into 2 groups, euploid and noneuploid. RESULTS: Model accuracy ranged between 0.6170 and 0.7308. A multi-input model with a gated recurrent unit (GRU) module performed better than the others; its precision (positive predictive value) for predicting euploidy was 0.8205. Sensitivity, specificity, F1-score, and accuracy were 0.6957, 0.7813, 0.7042, and 0.7308, respectively. CONCLUSIONS: This article proposes an artificial intelligence solution for prioritizing euploid embryo transfer. We highlight the identification of a noninvasive method for chromosomal status diagnosis using a deep learning approach that analyzes raw data provided by time-lapse incubators. This method demonstrated the potential automation of the evaluation process, allowing spatial and temporal information to be encoded.


Subjects
Deep Learning; Retrospective Studies; Time-Lapse Imaging; Artificial Intelligence; Ploidies
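The sensitivity, specificity, precision, F1 and accuracy figures quoted in the entry above all derive from the four confusion-matrix counts; a minimal, illustrative sketch of those derivations:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, precision (PPV), F1 and accuracy
    computed from the confusion-matrix counts of a binary classifier."""
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    precision = tp / (tp + fp)            # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, precision, f1, accuracy
```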
7.
Comput Methods Programs Biomed ; 224: 107012, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35843078

ABSTRACT

BACKGROUND AND OBJECTIVE: Ulcerative colitis (UC) is an inflammatory bowel disease (IBD) affecting the colon and the rectum, characterized by a remitting-relapsing course. Histology is considered the most stringent criterion for detecting mucosal inflammation associated with UC. In turn, histologic remission (HR) correlates with improved clinical outcomes and has recently been recognized as a desirable treatment target. The leading biomarker for assessing histologic remission is the presence or absence of neutrophils; the finding of these cells in specific colon structures therefore indicates that the patient has UC activity. However, no previous deep-learning studies have been developed to identify UC through neutrophil detection on whole-slide images (WSI). METHODS: The methodological core of this work is a novel multiple instance learning (MIL) framework with location constraints able to determine the presence of UC activity using WSI. In particular, we put forward an effective way to introduce constraints on positive instances, exploiting additional weakly supervised information that is easy to obtain and provides a significant boost to the learning process. In addition, we propose a new weighted embedding to enlarge the relevance of the positive instances. RESULTS: Extensive experiments on a multi-center dataset of colon and rectum WSIs, PICASSO-MIL, demonstrate that using the location information we can considerably improve the results at WSI level. In comparison with prior MIL settings, our method yields a 10% improvement in bag-level accuracy. CONCLUSION: Our model, which introduces a new form of constraints, surpasses the results achieved by current state-of-the-art methods based on the MIL paradigm. Our method can be applied to other histological problems where the morphological features determining a positive WSI are tiny and similar to others in the image.


Subjects
Colitis, Ulcerative; Biomarkers; Colitis, Ulcerative/complications; Colitis, Ulcerative/diagnostic imaging; Colitis, Ulcerative/drug therapy; Humans
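The weighted embedding proposed in the entry above can be illustrated with a simple attention-style aggregation. This sketch assumes softmax weighting over per-instance scores, a detail the abstract does not specify:

```python
import numpy as np

def weighted_bag_embedding(instance_embeddings, instance_scores):
    """Aggregate instance (patch) embeddings into one bag (WSI) vector,
    weighting each instance by a softmax over its score so that likely
    positive instances dominate the bag representation."""
    scores = np.asarray(instance_scores, dtype=float)
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    H = np.asarray(instance_embeddings, dtype=float)
    return weights @ H                         # convex combination of rows
```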
8.
Comput Methods Programs Biomed ; 221: 106895, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35609359

ABSTRACT

BACKGROUND: Embryo morphology is a predictive marker for implantation success and, ultimately, live birth. Viability evaluation and quality grading are commonly used to select the embryo with the highest implantation potential. However, the traditional method of manual embryo assessment is time-consuming and highly susceptible to inter- and intra-observer variability. Automating this process yields more objective and accurate predictions. METHOD: In this paper, we propose a novel methodology based on deep learning to automatically evaluate the morphological appearance of human embryos from time-lapse imaging. A supervised contrastive learning framework is implemented to predict embryo viability at day 4 and day 5, and an inductive transfer approach is applied to classify embryo quality at both times. RESULTS: Both methods outperformed conventional approaches and improved state-of-the-art embryology results on an independent test set. Viability prediction achieved accuracies of 0.8103 and 0.9330, and quality classification reached 0.7500 and 0.8001, for day 4 and day 5, respectively. Furthermore, the qualitative results were consistent with the clinical interpretation. CONCLUSIONS: The proposed methods are in line with the current artificial intelligence literature and have proven promising. Furthermore, our findings represent a breakthrough in the field of embryology in that they study the possibilities of embryo selection at day 4. Moreover, the Grad-CAM findings are directly in line with embryologists' decisions. Finally, our results demonstrate excellent potential for the inclusion of the models in clinical practice.


Subjects
Artificial Intelligence; Deep Learning; Embryo Implantation; Humans; Insemination; Time-Lapse Imaging/methods
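The supervised contrastive framework mentioned above optimises a loss that pulls same-class embeddings together and pushes other classes apart. An illustrative numpy version of the supervised contrastive loss (after Khosla et al.; not necessarily the authors' exact formulation):

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings z.

    Each anchor is contrasted against all other samples; same-label samples
    act as positives. Embeddings are L2-normalised before comparison.
    """
    z = np.asarray(z, dtype=float)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    labels = np.asarray(labels)
    sim = z @ z.T / tau                      # temperature-scaled cosine similarities
    n = len(z)
    total = 0.0
    for i in range(n):
        others = np.arange(n) != i
        positives = (labels == labels[i]) & others
        if not positives.any():
            continue
        log_denom = np.log(np.exp(sim[i][others]).sum())
        total += (log_denom - sim[i][positives]).mean()
    return total / n
```

Tightly clustered same-class embeddings give a lower loss than mixed ones.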
9.
Cancers (Basel) ; 15(1), 2022 Dec 21.
Article in English | MEDLINE | ID: mdl-36612037

ABSTRACT

Artificial Intelligence (AI) has shown promising performance as a support tool in clinical pathology workflows. In addition to the well-known interobserver variability between dermatopathologists, melanomas present a significant challenge in their histological interpretation. This study aims to analyze all previously published studies on whole-slide images of melanocytic tumors that rely on deep learning techniques for automatic image analysis. Embase, Pubmed, Web of Science, and Virtual Health Library were used to search for relevant studies for the systematic review, in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist. Articles from 2015 to July 2022 were included, with an emphasis on the artificial intelligence methods used. Twenty-eight studies fulfilling the inclusion criteria were grouped into four categories based on their clinical objectives: pathologists versus deep learning models (n = 10), diagnostic prediction (n = 7), prognosis (n = 5), and histological features (n = 6). These were then analyzed to draw conclusions on the general parameters and conditions of AI in pathology, as well as the factors necessary for better performance in real scenarios.

10.
Artif Intell Med ; 121: 102197, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34763799

ABSTRACT

Melanoma is an aggressive neoplasm responsible for the majority of deaths from skin cancer. Spitzoid melanocytic tumors, in particular, are among the most challenging melanocytic lesions due to their ambiguous morphological features. The gold standard for their diagnosis and prognosis is the analysis of skin biopsies, in which dermatopathologists visualize skin histology slides under a microscope, a highly time-consuming and subjective task. In recent years, computer-aided diagnosis (CAD) systems have emerged as a promising tool that could support pathologists in daily clinical practice. Nevertheless, no automatic CAD systems have yet been proposed for the analysis of spitzoid lesions, and for common melanoma no system allows both the selection of the tumor region and the prediction of its benign or malignant form at diagnosis. Motivated by this, we propose a novel end-to-end weakly supervised deep learning model, based on inductive transfer learning with an improved convolutional neural network (CNN), to refine the embedding features of the latent space. The framework is composed of a source model in charge of finding tumor patch-level patterns and a target model that focuses on the specific diagnosis of a biopsy. The latter retrains the backbone of the source model through a multiple instance learning workflow to obtain the biopsy-level score. To evaluate the performance of the proposed methods, we performed extensive experiments on a private skin database with spitzoid lesions. Test results achieved an accuracy of 0.9231 and 0.80 for the source and target models, respectively. In addition, the heat-map findings are directly in line with clinicians' medical decisions and, in some cases, even highlight patterns of interest that were overlooked by the pathologist.


Subjects
Melanoma; Skin Neoplasms; Biopsy; Diagnosis, Computer-Assisted; Humans; Melanoma/diagnosis; Microscopy; Skin Neoplasms/diagnosis
11.
Comput Biol Med ; 138: 104932, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34673472

ABSTRACT

In recent years, bladder cancer has increased significantly in both incidence and mortality. Currently, two subtypes are known based on tumour growth: non-muscle-invasive (NMIBC) and muscle-invasive bladder cancer (MIBC). In this work, we focus on the MIBC subtype because it has the worst prognosis and can spread to adjacent organs. We present a self-learning framework to grade bladder cancer from histological images stained using immunohistochemical techniques. Specifically, we propose a novel Deep Convolutional Embedded Attention Clustering (DCEAC) which allows the classification of histological patches into different levels of disease severity, according to patterns established in the literature. The proposed DCEAC model follows a fully unsupervised two-step learning methodology to discern between non-tumour, mild and infiltrative patterns from high-resolution 512 × 512 pixel samples. Our system outperforms previous clustering-based methods by including a convolutional attention module, which enables the refinement of the features of the latent space prior to the classification stage. The proposed network surpasses state-of-the-art approaches by 2-3% across different metrics, reaching a final average accuracy of 0.9034 in a multi-class scenario. Furthermore, the reported class activation maps show that our model is able to learn by itself the same patterns that clinicians consider relevant, without requiring previous annotation steps. This represents a breakthrough in MIBC grading that bridges the gap with respect to training the model on labelled data.


Subjects
Urinary Bladder Neoplasms; Cluster Analysis; Humans
12.
Artif Intell Med ; 118: 102132, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34412848

ABSTRACT

Glaucoma is one of the leading causes of blindness worldwide, and Optical Coherence Tomography (OCT) is the quintessential imaging technique for its detection. Unlike most state-of-the-art studies, which focus on glaucoma detection, in this paper we propose, for the first time, a framework for glaucoma grading using raw circumpapillary B-scans. In particular, we set out a new OCT-based hybrid network which combines hand-driven and deep learning algorithms. An OCT-specific descriptor is proposed to extract hand-crafted features related to the retinal nerve fibre layer (RNFL). In parallel, an innovative CNN is developed using skip connections to include tailored residual and attention modules to refine the automatic features of the latent space. The proposed architecture is used as a backbone to conduct a novel few-shot learning approach based on static and dynamic prototypical networks. The k-shot paradigm is redefined, giving rise to a supervised end-to-end system which provides substantial improvements in discriminating between healthy, early and advanced glaucoma samples. The training and evaluation processes of the dynamic prototypical network were conducted on two fused databases acquired with the Heidelberg Spectralis system. Validation and testing results reach categorical accuracies of 0.9459 and 0.8788, respectively, for glaucoma grading. Notably, the proposed model also reports high performance for glaucoma detection. The findings from the class activation maps are directly in line with clinicians' opinion, since the heatmaps point to the RNFL as the most relevant structure for glaucoma diagnosis.


Subjects
Glaucoma; Tomography, Optical Coherence; Algorithms; Databases, Factual; Glaucoma/diagnosis; Humans; Neural Networks, Computer
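The prototypical-network classification underlying the few-shot approach above can be sketched in a few lines: class prototypes are the mean support embeddings, and a query is assigned to its nearest prototype. This is an illustrative sketch only; the paper's static and dynamic variants add further machinery:

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Compute one prototype per class as the mean of its support embeddings."""
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def nearest_prototype(query, classes, protos):
    """Assign the query embedding to the class of its closest prototype."""
    d = np.linalg.norm(protos - np.asarray(query, dtype=float), axis=1)
    return classes[int(np.argmin(d))]
```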
13.
Entropy (Basel) ; 23(7), 2021 Jul 14.
Article in English | MEDLINE | ID: mdl-34356439

ABSTRACT

Atrial fibrillation (AF) is the most common cardiac arrhythmia. At present, cardiac ablation is the main treatment procedure for AF. To guide and plan this procedure, it is essential for clinicians to obtain patient-specific 3D geometrical models of the atria. Hence the interest in automatic image segmentation algorithms, such as deep learning (DL) methods, as opposed to manual segmentation, an error-prone and time-consuming approach. However, optimizing DL algorithms requires many annotated examples, which increases acquisition costs. The aim of this work is to develop automatic, high-performance computational models for left and right atrium (LA and RA) segmentation from a few labelled MRI volumetric images with a 3D Dual U-Net algorithm. For this, a supervised domain adaptation (SDA) method is introduced to transfer knowledge from late gadolinium enhanced (LGE) MRI volumetric training samples (80 LA-annotated samples) to a network trained with balanced steady-state free precession (bSSFP) MR images with a limited number of annotations (19 RA- and LA-annotated samples). The resulting knowledge-transferred SDA model outperformed the same network trained from scratch in both the RA (Dice = 0.9160) and LA (Dice = 0.8813) segmentation tasks.
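The Dice scores reported above measure the overlap between predicted and reference segmentation masks; a minimal sketch of the coefficient:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2 |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0
```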

14.
IEEE J Biomed Health Inform ; 25(8): 3094-3104, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33621184

ABSTRACT

Prostate cancer is one of the main diseases affecting men worldwide. The gold standard for diagnosis and prognosis is the Gleason grading system, in which pathologists manually analyze prostate histology slides under the microscope, a highly time-consuming and subjective task. In recent years, computer-aided diagnosis (CAD) systems have emerged as a promising tool that could support pathologists in daily clinical practice. Nevertheless, these systems are usually trained using tedious and error-prone pixel-level annotations of Gleason grades in the tissue. To alleviate the need for manual pixel-wise labeling, just a handful of works have been presented in the literature. Furthermore, despite the promising results achieved on global scoring, the location of cancerous patterns in the tissue is only qualitatively addressed. These heatmaps of tumor regions, however, are crucial to the reliability of CAD systems, as they provide explainability for the system's output and give pathologists confidence that the model is focusing on medically relevant features. Motivated by this, we propose a novel weakly supervised deep-learning model, based on self-learning CNNs, that leverages only the global Gleason score of gigapixel whole-slide images during training to accurately perform both grading of patch-level patterns and biopsy-level scoring. To evaluate the performance of the proposed method, we perform extensive experiments on three different external datasets for patch-level Gleason grading, and on two different test sets for global Grade Group prediction. We empirically demonstrate that our approach outperforms its supervised counterpart on patch-level Gleason grading by a large margin, as well as state-of-the-art methods on global biopsy-level scoring. In particular, the proposed model brings an average improvement in the Cohen's quadratic kappa (κ) score of nearly 18% compared to full supervision for the patch-level Gleason grading task.
This suggests that the absence of the annotator's bias in our approach, and the capability of using large weakly labeled datasets during training, lead to higher-performing and more robust models. Furthermore, raw features obtained from the patch-level classifier were shown to generalize better to the subjective global biopsy-level scoring than previous approaches in the literature.


Subjects
Image Interpretation, Computer-Assisted; Prostatic Neoplasms; Humans; Male; Neoplasm Grading; Reproducibility of Results
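Cohen's quadratic kappa, the headline metric in the entry above, penalises disagreements by the squared distance between grades; an illustrative implementation:

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_classes):
    """Cohen's kappa with quadratic weights between two raters' ordinal labels.

    1.0 means perfect agreement; 0.0 means chance-level agreement.
    """
    a, b = np.asarray(a), np.asarray(b)
    observed = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):                       # joint rating histogram
        observed[i, j] += 1
    observed /= observed.sum()
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    idx = np.arange(n_classes)
    w = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (w * observed).sum() / (w * expected).sum()
```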
15.
Sensors (Basel) ; 21(3), 2021 Jan 30.
Article in English | MEDLINE | ID: mdl-33573170

ABSTRACT

Velocity-based training is a contemporary method used by sports coaches to prescribe the optimal load based on the velocity at which a load is lifted. The most widely used and accurate instruments for monitoring velocity are linear position transducers. Alternatively, smartphone apps compute mean velocity after each execution through manual on-screen digitizing, introducing human error. In this paper, a video-based instrument delivering unattended, real-time measures of barbell velocity with a smartphone high-speed camera has been developed. A custom image-processing algorithm detects reference points of a multipower machine to autocalibrate, and automatically tracks barbell markers to give real-time kinematic-derived parameters. Validity and reliability were studied by comparing the simultaneous measurement of 160 repetitions of back squat lifts executed by 20 athletes with the proposed instrument and a validated linear position transducer, used as the criterion. The video system produced practically identical range, velocity, force, and power outcomes to the criterion, with low and proportional systematic bias and random errors. Our results suggest that the developed video system is a valid, reliable, and trustworthy instrument for accurately measuring velocity and derived variables, with practical implications for use by coaches and practitioners.


Subjects
Resistance Training; Smartphone; Weight Lifting; Biomechanical Phenomena; Humans; Reproducibility of Results; Video Recording
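The mean-velocity quantity that both the transducer and the video system estimate is simply displacement over elapsed time. A minimal sketch, assuming per-frame vertical barbell positions have already been extracted from the video (the tracking and calibration steps are the paper's contribution and are not reproduced here):

```python
import numpy as np

def mean_velocity(positions_m, fps):
    """Mean barbell velocity (m/s) over a lift.

    positions_m: vertical barbell positions in metres, one per video frame;
    fps: camera frame rate. Mean velocity = total displacement / elapsed time.
    """
    positions_m = np.asarray(positions_m, dtype=float)
    elapsed = (len(positions_m) - 1) / fps
    return (positions_m[-1] - positions_m[0]) / elapsed
```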
16.
Comput Med Imaging Graph ; 88: 101846, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33485056

ABSTRACT

BACKGROUND AND OBJECTIVE: Prostate cancer is one of the main diseases affecting men worldwide. The Gleason scoring system is the primary diagnostic tool for prostate cancer. It is obtained via the visual analysis of cancerous patterns in prostate biopsies performed by expert pathologists, and the aggregation of the main Gleason grades into a combined score. Computer-aided diagnosis systems help reduce the workload of pathologists and increase objectivity. Nevertheless, developing them requires a large number of labeled samples, with pixel-level annotations performed by expert pathologists. Recently, efforts have been made in the literature to develop algorithms aiming at the direct estimation of the global Gleason score at biopsy/core level from global labels. However, these algorithms do not provide an accurate localization of the Gleason patterns in the tissue. Such location maps are the basis of a reliable computer-aided diagnosis system that pathologists can use in clinical practice. In this work, we propose a deep-learning-based system able to detect local cancerous patterns in prostate tissue using only the global Gleason score obtained from clinical records during training. METHODS: The methodological core of this work is the proposed weakly-supervised-trained convolutional neural network, WeGleNet, based on a multi-class segmentation layer after the feature-extraction module, a global aggregation, and the slicing of the background class for the model loss estimation during training. RESULTS: Using a public dataset of prostate tissue microarrays, we obtained a Cohen's quadratic kappa (κ) of 0.67 for the pixel-level prediction of cancerous patterns in the validation cohort. We compared the model's performance for the semantic segmentation of Gleason grades with supervised state-of-the-art architectures in the test cohort.
We obtained a pixel-level κ of 0.61 and a macro-averaged F1-score of 0.58, at the same level as fully supervised methods. Regarding the estimation of the core-level Gleason score, we obtained κ values of 0.76 and 0.67 between the model and two different pathologists. CONCLUSIONS: WeGleNet is capable of performing the semantic segmentation of Gleason grades similarly to fully supervised methods without requiring pixel-level annotations. Moreover, the model's global Gleason scoring of the cores reached the same level as inter-pathologist agreement.


Subjects
Prostate; Prostatic Neoplasms; Histological Techniques; Humans; Male; Neoplasm Grading; Neural Networks, Computer; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnostic imaging; Semantics
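The global-aggregation step mentioned in the METHODS above maps the pixel-level segmentation output to a biopsy-level score. This sketch assumes global max-pooling as the aggregation operator, which the abstract does not specify:

```python
import numpy as np

def global_class_scores(seg_probs):
    """Aggregate a pixel-level class-probability map of shape (H, W, C) into
    global per-class scores by global max-pooling, so a core-level label can
    be derived from a semantic-segmentation output."""
    seg_probs = np.asarray(seg_probs, dtype=float)
    return seg_probs.reshape(-1, seg_probs.shape[-1]).max(axis=0)
```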
17.
Comput Methods Programs Biomed ; 200: 105855, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33303289

ABSTRACT

BACKGROUND AND OBJECTIVE: Glaucoma is the leading cause of blindness worldwide. Many studies based on fundus image and optical coherence tomography (OCT) imaging have been developed in the literature to help ophthalmologists through artificial-intelligence techniques. Currently, 3D spectral-domain optical coherence tomography (SD-OCT) samples have become more important since they could enclose promising information for glaucoma detection. To analyse the hidden knowledge of the 3D scans for glaucoma detection, we have proposed, for the first time, a deep-learning methodology based on leveraging the spatial dependencies of the features extracted from the B-scans. METHODS: The experiments were performed on a database composed of 176 healthy and 144 glaucomatous SD-OCT volumes centred on the optic nerve head (ONH). The proposed methodology consists of two well-differentiated training stages: a slide-level feature extractor and a volume-based predictive model. The slide-level discriminator is characterised by two new, residual and attention, convolutional modules which are combined via skip-connections with other fine-tuned architectures. Regarding the second stage, we first carried out a data-volume conditioning before extracting the features from the slides of the SD-OCT volumes. Then, Long Short-Term Memory (LSTM) networks were used to combine the recurrent dependencies embedded in the latent space to provide a holistic feature vector, which was generated by the proposed sequential-weighting module (SWM). RESULTS: The feature extractor reports AUC values higher than 0.93 both in the primary and external test sets. Otherwise, the proposed end-to-end system based on a combination of CNN and LSTM networks achieves an AUC of 0.8847 in the prediction stage, which outperforms other state-of-the-art approaches intended for glaucoma detection. 
Additionally, Class Activation Maps (CAMs) were computed to highlight the most relevant regions per B-scan when discerning between healthy and glaucomatous eyes from raw SD-OCT volumes. CONCLUSIONS: The proposed model is able to extract features from the B-scans of the volumes and combine the information in the latent space to perform a volume-level glaucoma prediction. Our model, which combines residual and attention blocks with a sequential-weighting module to refine the LSTM outputs, surpasses the results achieved by current state-of-the-art methods focused on 3D deep-learning architectures.


Subjects
Glaucoma , Optic Disk , Fundus Oculi , Glaucoma/diagnostic imaging , Humans , Spatial Analysis , Optical Coherence Tomography
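The sequential-weighting idea in entry 17, fusing per-B-scan features into a single volume-level descriptor, can be illustrated with a minimal sketch. The function name `sequential_weighting` and the norm-based relevance score are assumptions made for illustration; the actual SWM learns its weights jointly with the LSTM.

```python
import math

def sequential_weighting(features):
    """Fuse per-B-scan feature vectors into one volume-level vector.

    Toy stand-in for the paper's sequential-weighting module (SWM):
    each slice receives a scalar relevance score (here its L2 norm,
    an illustrative proxy for a learnt weight), the scores are
    softmax-normalised, and the vectors are averaged accordingly.
    """
    scores = [math.sqrt(sum(x * x for x in f)) for f in features]
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]  # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(features[0])
    fused = [sum(w * f[i] for w, f in zip(weights, features))
             for i in range(dim)]
    return fused, weights
```

In the real system, the per-slice features would come from the CNN feature extractor and the fused vector would feed the volume-level classifier.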
18.
Comput Methods Programs Biomed ; 198: 105788, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33130492

ABSTRACT

BACKGROUND AND OBJECTIVE: Optical coherence tomography (OCT) is a useful technique to monitor the state of the retinal layers both in humans and in animal models. Automated OCT analysis in rats is of great relevance for studying the possible toxic effects of drugs and other treatments before human trials. In this paper, two different approaches to detect the most significant retinal layers in a rat OCT image are presented. METHODS: One approach is based on a combination of local horizontal intensity profiles with a newly proposed variant of the watershed transformation; the other is built upon an encoder-decoder convolutional network architecture. RESULTS: After extensive validation, averaged absolute distance errors of 3.77 ± 2.59 µm and 1.90 ± 0.91 µm were achieved by the two approaches, respectively, on a batch of the rat OCT database. In a second test of the deep-learning-based method on an unseen batch of the database, an averaged absolute distance error of 2.67 ± 1.25 µm was obtained. The rat OCT database used in this paper is made publicly available to facilitate further comparisons. CONCLUSIONS: The results demonstrate the competitiveness of the first approach, since it outperforms the commercial Insight image segmentation software (Phoenix Research Labs) and is useful for generating labelled images for validation purposes, significantly speeding up the ground-truth generation process. The deep-learning-based method improves on the results achieved both by the more conventional method and by other state-of-the-art techniques. In addition, it was verified that the results of the proposed network generalise to new rat OCT images.


Subjects
Rodents , Optical Coherence Tomography , Animals , Neural Networks (Computer) , Rats , Retina/diagnostic imaging , Software
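The profile-based approach in entry 18 locates layer boundaries along intensity profiles. As a simplified illustration (the actual method combines such profiles with a watershed-transformation variant), a single boundary can be taken at the largest intensity jump along one A-scan; the name `layer_boundary` is hypothetical.

```python
def layer_boundary(profile):
    """Return the index of the largest positive intensity jump along
    one A-scan intensity profile, a crude proxy for a retinal layer
    boundary (e.g. dark vitreous to bright nerve-fibre layer).
    """
    gradients = [profile[i + 1] - profile[i]
                 for i in range(len(profile) - 1)]
    return max(range(len(gradients)), key=gradients.__getitem__)
```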
19.
Sci Rep ; 10(1): 17706, 2020 10 19.
Article in English | MEDLINE | ID: mdl-33077755

ABSTRACT

Capsule endoscopy (CE) is a widely used, minimally invasive alternative to traditional endoscopy that allows visualisation of the entire small intestine. Patient preparation can help to obtain a cleaner intestine and thus better visibility in the resulting videos. However, studies on the most effective preparation method are conflicting, due to the absence of objective, automatic cleanliness evaluation methods. In this work, we aim to provide such a method, capable of presenting results on an intuitive scale, with a relatively lightweight novel convolutional neural network architecture at its core. We trained our model using 5-fold cross-validation on an extensive data set of over 50,000 image patches, collected from 35 different CE procedures, and compared it with state-of-the-art classification methods. From the patch classification results, we developed a method to automatically estimate pixel-level probabilities and deduce cleanliness evaluation scores through automatically learnt thresholds. We then validated our method in a clinical setting on 30 newly collected CE videos, comparing the resulting scores to those independently assigned by human specialists. We obtained the highest classification accuracy for the proposed method (95.23%), with significantly lower average prediction times than the second-best method. In the validation of our method, agreement with two human specialists was acceptable compared with inter-human agreement, showing its validity as an objective evaluation method.
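The final step described in entry 19, mapping patch-level probabilities to an intuitive cleanliness score, can be sketched as follows. The 0-10 scale and the fixed threshold are illustrative assumptions; the paper learns its thresholds automatically.

```python
def cleanliness_score(patch_probs, threshold=0.5, scale=10.0):
    """Map per-patch 'clean' probabilities to a single score in
    [0, scale]: the fraction of patches judged clean, rescaled.

    threshold -- illustrative fixed cut-off (the paper's is learnt).
    """
    clean = sum(1 for p in patch_probs if p >= threshold)
    return scale * clean / len(patch_probs)
```

For example, a video frame set where three of four patches exceed the threshold scores 7.5 on the 0-10 scale.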

20.
Comput Methods Programs Biomed ; 195: 105637, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32653747

ABSTRACT

BACKGROUND AND OBJECTIVE: Prostate cancer is one of the most common diseases affecting men worldwide. The Gleason scoring system is the primary diagnostic and prognostic tool for prostate cancer. Furthermore, recent reports indicate that the presence of certain patterns of the Gleason scale, such as the cribriform pattern, may correlate with a worse prognosis than other patterns belonging to Gleason grade 4. Current clinical guidelines have indicated the convenience of highlighting its presence during the analysis of biopsies. All these requirements impose a considerable workload on the pathologist, whose analysis of each sample is based on visual assessment of the morphology and organisation of the glands in the tissue, a time-consuming and subjective task. In recent years, with the development of digitisation devices, the use of computer-vision techniques for the analysis of biopsies has increased. However, to the best of the authors' knowledge, the development of algorithms to automatically detect individual cribriform patterns belonging to Gleason grade 4 has not yet been studied in the literature. The objective of the work presented in this paper is to develop a deep-learning-based system able to support pathologists in the daily analysis of prostate biopsies. This analysis must include the Gleason grading of local structures, the detection of cribriform patterns, and the Gleason scoring of the whole biopsy. METHODS: The methodological core of this work is a patch-wise predictive model, based on convolutional neural networks, able to determine the presence of cancerous patterns according to the Gleason grading system. In particular, we train from scratch a simple self-designed architecture with three filters and a top model with global-max pooling. The cribriform pattern is detected by retraining the set of filters of the last convolutional layer in the network. 
Subsequently, a biopsy-level prediction map is reconstructed by bilinear interpolation of the patch-level predictions of the Gleason grades. In addition, from the reconstructed prediction map, we compute the percentage of each Gleason grade in the tissue to feed a multi-layer perceptron that provides a biopsy-level score. RESULTS: On our SICAPv2 database, composed of 182 annotated whole-slide images, we obtained a quadratic Cohen's kappa of 0.77 on the test set for patch-level Gleason grading with the proposed architecture trained from scratch. Our results outperform those previously reported in the literature. Furthermore, this model reaches the level of fine-tuned state-of-the-art architectures in a patient-based four-group cross-validation. In the cribriform-pattern detection task, we obtained an area under the ROC curve of 0.82. Regarding biopsy-level Gleason scoring, we achieved a quadratic Cohen's kappa of 0.81 on the test subset. CONCLUSIONS: Shallow CNN architectures trained from scratch outperform current state-of-the-art methods for Gleason-grade classification. Our proposed model is capable of characterising the different Gleason grades in prostate tissue by extracting low-level features through three basic blocks (i.e. convolutional layer + max pooling). The use of global-max pooling to reduce each activation map has been shown to be a key factor in reducing model complexity and avoiding overfitting. Regarding the Gleason scoring of biopsies, a multi-layer perceptron has been shown to model the decision-making of pathologists better than the simpler models previously used in the literature.


Subjects
Prostatic Neoplasms , Biopsy , Histological Techniques , Humans , Male , Neoplasm Grading
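The quadratic Cohen's kappa reported in entry 20 (0.77 patch-level, 0.81 biopsy-level) penalises disagreements by the squared distance between ordinal grades, which suits Gleason grading. A self-contained sketch of the metric:

```python
def quadratic_kappa(y_true, y_pred, n_classes):
    """Quadratic-weighted Cohen's kappa for ordinal labels 0..n_classes-1.

    1.0 means perfect agreement; 0.0 means chance-level agreement.
    """
    # observed confusion matrix
    obs = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1
    n = len(y_true)
    row = [sum(obs[i]) for i in range(n_classes)]                    # true-label marginals
    col = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]  # prediction marginals
    # quadratic disagreement weights
    w = [[(i - j) ** 2 / (n_classes - 1) ** 2 for j in range(n_classes)]
         for i in range(n_classes)]
    observed = sum(w[i][j] * obs[i][j]
                   for i in range(n_classes) for j in range(n_classes))
    expected = sum(w[i][j] * row[i] * col[j] / n
                   for i in range(n_classes) for j in range(n_classes))
    return 1.0 - observed / expected
```

Equivalent to scikit-learn's `cohen_kappa_score(..., weights="quadratic")`; the pure-Python form makes the observed-versus-expected weighting explicit.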