Results 1 - 20 of 24
1.
Heliyon ; 10(3): e25367, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38327447

ABSTRACT

Water quality can be negatively affected by the presence of some toxic phytoplankton species, whose toxins are difficult to remove by conventional purification systems. This creates the need for periodic analyses, which are nowadays performed manually by experts. These labor-intensive processes are affected by subjectivity and expertise, causing unreliability. Some automatic systems have been proposed to address these limitations. However, most of them are based on classical image processing pipelines with designs that are not easily scalable. In this context, deep learning techniques are more adequate for the detection and recognition of phytoplankton specimens in multi-specimen microscopy images, as they integrate both tasks in a single end-to-end trainable module that is able to automate the adaptation to such a complex domain. In this work, we explore the use of two different object detectors: Faster R-CNN and RetinaNet, from the two-stage and one-stage paradigms, respectively. We use a dataset composed of multi-specimen microscopy images captured using a systematic protocol. This allows the use of widely available optical microscopes and avoids manual adjustments on a per-specimen basis, which would require expert knowledge. We have made our dataset publicly available to improve reproducibility and to foster the development of new alternatives in the field. The selected Faster R-CNN methodology reaches maximum recall levels of 95.35%, 84.69%, and 79.81%, and precisions of 94.68%, 89.30%, and 82.61%, for W. naegeliana, A. spiroides, and D. sociale, respectively. The system is able to adapt to the dataset problems and improves the overall results with respect to the reference state-of-the-art work. In addition, the proposed system improves the automation and abstraction from the domain and simplifies the workflow and adjustment.
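As a rough illustration of the kind of detector discussed above, the sketch below runs an off-the-shelf torchvision Faster R-CNN on a microscopy image. The generic COCO weights, the file name, and the 0.5 confidence threshold are placeholders, not the model or settings trained in the paper.

```python
# Minimal sketch: off-the-shelf Faster R-CNN inference with torchvision.
# The pretrained COCO weights, the file name and the 0.5 threshold are
# placeholders, not the model or settings used in the paper.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("multi_specimen_sample.png").convert("RGB"))
with torch.no_grad():
    prediction = model([image])[0]  # dict with boxes, labels, scores

keep = prediction["scores"] > 0.5  # hypothetical confidence threshold
for box, label, score in zip(prediction["boxes"][keep],
                             prediction["labels"][keep],
                             prediction["scores"][keep]):
    print(f"label={label.item()} score={score.item():.2f} box={box.tolist()}")
```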

2.
Med Biol Eng Comput ; 62(3): 865-881, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38060101

ABSTRACT

Retinal vascular tortuosity is an excessive bending and twisting of the blood vessels in the retina that is associated with numerous health conditions. We propose a novel methodology for the automated assessment of retinal vascular tortuosity from color fundus images. Our methodology takes several anatomical factors into consideration to weigh the importance of each individual blood vessel. First, we use deep neural networks to produce a robust extraction of the different anatomical structures. Then, the weighting coefficients that are required for the integration of the different anatomical factors are adjusted using evolutionary computation. Finally, the proposed methodology also provides visual representations that explain the contribution of each individual blood vessel to the predicted tortuosity, hence allowing us to understand the decisions of the model. We validate our proposal on a dataset of color fundus images that provides a consensus ground truth as well as the annotations of five clinical experts. Our proposal outperforms previous automated methods and offers a performance that is comparable to that of the clinical experts. Therefore, our methodology proves to be a viable alternative for the assessment of retinal vascular tortuosity. This could facilitate the use of this biomarker in clinical practice and medical research.
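For orientation, the sketch below computes a common baseline tortuosity measure (arc length over chord length) per vessel and combines vessels with a weighted average. The weights stand in for the anatomical coefficients that the paper tunes with evolutionary computation; the values shown are arbitrary placeholders.

```python
# Sketch of a baseline tortuosity measure (arc length over chord length) and a
# weighted combination across vessels. The weights stand in for the anatomical
# coefficients adjusted by evolutionary computation in the paper; the values
# here are arbitrary placeholders.
import numpy as np

def arc_chord_tortuosity(centerline: np.ndarray) -> float:
    """centerline: (N, 2) array of ordered (x, y) points along one vessel."""
    steps = np.diff(centerline, axis=0)
    arc_length = np.sum(np.linalg.norm(steps, axis=1))
    chord_length = np.linalg.norm(centerline[-1] - centerline[0])
    return arc_length / max(chord_length, 1e-8)

def image_tortuosity(vessels, weights):
    """Weighted average of per-vessel tortuosities."""
    values = np.array([arc_chord_tortuosity(v) for v in vessels])
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * values) / np.sum(weights))

# toy example: a straight vessel and a wavy one
straight = np.stack([np.linspace(0, 100, 50), np.zeros(50)], axis=1)
wavy = np.stack([np.linspace(0, 100, 50),
                 5 * np.sin(np.linspace(0, 6 * np.pi, 50))], axis=1)
print(image_tortuosity([straight, wavy], weights=[1.0, 1.5]))
```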


Subjects
Artificial Intelligence; Retinal Diseases; Humans; Retinal Vessels/diagnostic imaging; Retina; Fundus Oculi; Algorithms
3.
Neural Netw ; 170: 254-265, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37995547

ABSTRACT

Multi-task learning is a promising paradigm to leverage task interrelations during the training of deep neural networks. A key challenge in the training of multi-task networks is to adequately balance the complementary supervisory signals of multiple tasks. In that regard, although several task-balancing approaches have been proposed, they are usually limited by the use of per-task weighting schemes and do not completely address the uneven contribution of the different tasks to the network training. In contrast to classical approaches, we propose a novel Multi-Adaptive Optimization (MAO) strategy that dynamically adjusts the contribution of each task to the training of each individual parameter in the network. This automatically produces a balanced learning across tasks and across parameters, throughout the whole training and for any number of tasks. To validate our proposal, we perform comparative experiments on real-world datasets for computer vision, considering different experimental settings. These experiments allow us to analyze the performance obtained in several multi-task scenarios along with the learning balance across tasks, network layers and training steps. The results demonstrate that MAO outperforms previous task-balancing alternatives. Additionally, the performed analyses provide insights that allow us to comprehend the advantages of this novel approach for multi-task learning.
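The following toy sketch only illustrates the general idea of balancing task contributions at the level of individual parameters, by normalizing each task's gradient per parameter before combining them; it is not the MAO update rule proposed in the paper.

```python
# Toy sketch of per-parameter task balancing: each task's gradient for each
# parameter is rescaled by its own norm before being combined. This only
# illustrates the general idea; it is NOT the MAO algorithm of the paper.
import torch

def balanced_step(model, task_losses, optimizer, eps=1e-12):
    per_task_grads = []
    for loss in task_losses:
        optimizer.zero_grad()
        loss.backward(retain_graph=True)
        per_task_grads.append([p.grad.detach().clone() if p.grad is not None
                               else torch.zeros_like(p) for p in model.parameters()])
    optimizer.zero_grad()
    for i, p in enumerate(model.parameters()):
        combined = torch.zeros_like(p)
        for grads in per_task_grads:
            g = grads[i]
            combined += g / (g.norm() + eps)  # equalize each task's contribution
        p.grad = combined
    optimizer.step()

# toy usage: a shared model with two task losses
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
out = model(torch.randn(8, 10))
losses = [out[:, 0].pow(2).mean(), (out[:, 1] - 1).pow(2).mean()]
balanced_step(model, losses, optimizer)
```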


Subjects
Machine Learning; Neural Networks, Computer; Monoamine Oxidase
4.
Quant Imaging Med Surg ; 13(7): 4540-4562, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37456305

ABSTRACT

Background: Retinal imaging is widely used to diagnose many diseases, both systemic and eye-specific. In these cases, image registration, which is the process of aligning images taken from different viewpoints or moments in time, is fundamental to compare different images and to assess changes in their appearance, commonly caused by disease progression. Currently, the field of color fundus registration is dominated by classical methods, as deep learning alternatives have not shown sufficient improvement over classical methods to justify the added computational cost. However, deep learning registration methods are still considered beneficial, as they can be easily adapted to different modalities and devices following a data-driven learning approach. Methods: In this work, we propose a novel methodology to register color fundus images using deep learning for the joint detection and description of keypoints. In particular, we use an unsupervised neural network trained to obtain repeatable keypoints and reliable descriptors. These keypoints and descriptors make it possible to produce an accurate registration using RANdom SAmple Consensus (RANSAC). We train the method using the Messidor dataset and test it with the Fundus Image Registration Dataset (FIRE), both of which are publicly accessible. Results: Our work demonstrates a color fundus registration method that is robust to changes in imaging devices and capture conditions. Moreover, we conduct multiple experiments exploring several of the method's parameters to assess their impact on the registration performance. The method obtained an overall Registration Score of 0.695 for the whole FIRE dataset (0.925 for category S, 0.352 for P, and 0.726 for A). Conclusions: Our proposal improves on previous deep learning methods in every category and surpasses the performance of classical approaches in category A, which includes disease progression and thus represents the most relevant scenario for clinical practice, since registration is commonly used to monitor disease progression in affected patients.
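The sketch below illustrates the matching and RANSAC stages with OpenCV. ORB stands in for the learned keypoint detector and descriptor of the paper, and the file names and reprojection threshold are placeholders.

```python
# Sketch of the matching + RANSAC stage. ORB stands in for the learned
# keypoint detector/descriptor of the paper; file names and the reprojection
# threshold are placeholders.
import cv2
import numpy as np

fixed = cv2.imread("fundus_fixed.png", cv2.IMREAD_GRAYSCALE)
moving = cv2.imread("fundus_moving.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(fixed, None)
kp2, des2 = orb.detectAndCompute(moving, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robust transform estimation with RANSAC; threshold of 5 px is a guess.
H, inlier_mask = cv2.findHomography(dst, src, cv2.RANSAC, 5.0)
registered = cv2.warpPerspective(moving, H, fixed.shape[::-1])
```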

5.
Eur J Ophthalmol ; 33(5): 1874-1882, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36775924

ABSTRACT

PURPOSE: Since very preterm children often have increased retinal tortuosity that may indicate decisive architectural changes in the systemic microvascular network, we used new semi-automatic software to measure retinal vessel tortuosity on digital fundus images of moderate-to-late preterm (MLP) children. METHODS: In this observational case-control study, the global and local tortuosity parameters of retinal vessels were evaluated on fundus photographs of 36 MLP children and 36 age- and sex-matched controls. The associations between birth parameters and parameters reflecting retinal vessel tortuosity were evaluated using correlation analysis. RESULTS: Even after incorporation of anatomical factors, the global and local tortuosity parameters were not significantly different between groups. The MLP group showed a smaller arteriolar caliber (0.53 ± 0.2) than the controls (0.56 ± 0.2; p = 0.013). Other local tortuosity parameters, such as vessel length, distance to the fovea, and distance to the optic disc, were not significantly different between arteries and veins. Tortuosity in both groups was higher among vessels closer to the fovea (r = -0.077, p < 0.001) and the optic disc (r = -0.0544, p = 0.009). Global tortuosity showed a weakly positive correlation with gestational age and a weakly negative correlation with birth weight in both groups. CONCLUSION: MLP patients did not display increased vessel tortuosity in comparison with the controls; however, the arteriolar caliber in the MLP group was smaller than that in children born full-term. Larger studies should confirm this finding and explore associations between cardiovascular and metabolic status and retinal vessel geometry in MLP children.
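As a minimal illustration of the correlation analysis mentioned in the methods, the snippet below computes a Pearson correlation with SciPy on made-up values; the arrays do not correspond to the study data.

```python
# Minimal sketch of the kind of correlation analysis reported in the study,
# with made-up arrays standing in for the actual measurements.
import numpy as np
from scipy import stats

gestational_age = np.array([33.0, 34.5, 35.0, 36.0, 34.0, 35.5])    # weeks (toy values)
global_tortuosity = np.array([1.12, 1.10, 1.09, 1.07, 1.11, 1.08])  # toy values

r, p = stats.pearsonr(gestational_age, global_tortuosity)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```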


Subjects
Optic Disk; Retinal Vessels; Infant, Newborn; Humans; Child; Case-Control Studies; Optic Disk/blood supply; Retina; Computers
6.
Comput Biol Med ; 152: 106451, 2023 01.
Article in English | MEDLINE | ID: mdl-36571941

ABSTRACT

In recent years, deep learning techniques have emerged as powerful alternatives to solve biomedical image analysis problems. However, the training of deep neural networks usually requires large amounts of labeled data to be effective. This is even more critical in the case of biomedical imaging due to the added difficulty of obtaining data labeled by experienced clinicians. To mitigate the impact of data scarcity, one of the most commonly used strategies is transfer learning. Nevertheless, the success of this approach depends on the effectiveness of the available pre-training techniques for learning from little or no labeled data. In this work, we explore the application of the Context Encoder paradigm for transfer learning in the domain of retinal image analysis. To this end, we propose several approaches that make it possible to work with full-resolution images and improve the recognition of the retinal structures. In order to validate the proposals, the Context Encoder pre-trained models are fine-tuned to perform two relevant tasks in the domain: vessel segmentation and fovea localization. The experiments performed on different public datasets demonstrate that the proposed Context Encoder approaches help mitigate the impact of data scarcity and are superior to previous alternatives in this domain.
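The snippet below sketches the Context Encoder pretext task in its basic form: hide an image region and train a small encoder-decoder to reconstruct it. The toy architecture and masking choices are illustrative, not the full-resolution variants proposed in the work.

```python
# Sketch of the Context Encoder pretext task: hide a region of the image and
# train the network to reconstruct it. The tiny encoder/decoder is a toy
# stand-in for the full-resolution architectures explored in the paper.
import torch
import torch.nn as nn

class TinyContextEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
                                    nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU())
        self.decode = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                                    nn.ConvTranspose2d(32, 3, 4, 2, 1))

    def forward(self, x):
        return self.decode(self.encode(x))

images = torch.rand(4, 3, 128, 128)   # toy batch of retinal patches
masked = images.clone()
masked[:, :, 32:96, 32:96] = 0.0      # hide a central region

model = TinyContextEncoder()
recon = model(masked)
# Reconstruction loss only over the hidden region (a common choice).
loss = nn.functional.mse_loss(recon[:, :, 32:96, 32:96], images[:, :, 32:96, 32:96])
loss.backward()
```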


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Diagnostic Imaging; Retina/diagnostic imaging; Machine Learning
7.
Comput Methods Programs Biomed ; 229: 107296, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36481530

ABSTRACT

BACKGROUND AND OBJECTIVES: Age-related macular degeneration (AMD) is a degenerative disorder affecting the macula, a key area of the retina for visual acuity. Nowadays, AMD is the most frequent cause of blindness in developed countries. Although some promising treatments have been proposed that effectively slow down its development, their effectiveness diminishes significantly in the advanced stages. This emphasizes the importance of large-scale screening programs for early detection. Nevertheless, implementing such programs for a disease like AMD is usually unfeasible, since the population at risk is large and the diagnosis is challenging. For the characterization of the disease, clinicians have to identify and localize certain retinal lesions. All this motivates the development of automatic diagnostic methods. In this sense, several works have achieved highly positive results for AMD detection using convolutional neural networks (CNNs). However, none of them incorporates explainability mechanisms linking the diagnosis to its related lesions to help clinicians better understand the decisions of the models. This is especially relevant, since the absence of such mechanisms limits the application of automatic methods in clinical practice. In that regard, we propose an explainable deep learning approach for the diagnosis of AMD via the joint identification of its associated retinal lesions. METHODS: In our proposal, a CNN with a custom architectural setting is trained end-to-end for the joint identification of AMD and its associated retinal lesions. With the proposed setting, the lesion identification is directly derived from independent lesion activation maps; then, the diagnosis is obtained from the identified lesions. The training is performed end-to-end using image-level labels. Thus, lesion-specific activation maps are learned in a weakly-supervised manner. The provided lesion information is of high clinical interest, as it allows clinicians to assess the developmental stage of the disease. Additionally, the proposed approach makes it possible to explain the diagnosis obtained by the models directly from the identified lesions and their corresponding activation maps. The training data necessary for the approach can be obtained without much extra work on the part of clinicians, since the lesion information is habitually present in medical records. This is an important advantage over other methods, including fully-supervised lesion segmentation methods, which require pixel-level labels whose acquisition is arduous. RESULTS: The experiments conducted on 4 different datasets demonstrate that the proposed approach is able to identify AMD and its associated lesions with satisfactory performance. Moreover, the evaluation of the lesion activation maps shows that the models trained using the proposed approach are able to identify the pathological areas within the image and, in most cases, to correctly determine to which lesion they correspond. CONCLUSIONS: The proposed approach provides meaningful information (lesion identification and lesion activation maps) that conveniently explains and complements the diagnosis, and is of particular interest to clinicians for the diagnostic process. Moreover, the data needed to train the networks using the proposed approach are commonly easy to obtain, which represents an important advantage in fields with particularly scarce data, such as medical imaging.
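A minimal sketch of the weakly supervised idea follows: one activation map per lesion, global pooling into image-level lesion scores, and a diagnosis derived from those scores, trained with image-level labels only. The backbone, the lesion list, and the way the diagnosis is derived are simplified assumptions, not the paper's exact architecture.

```python
# Sketch of the weakly supervised idea: one activation map per lesion, global
# max pooling into image-level lesion scores, and a diagnosis derived from the
# lesion scores, all trained with image-level labels only. Backbone, lesion
# list and diagnosis rule are simplified placeholders.
import torch
import torch.nn as nn

N_LESIONS = 4  # e.g. drusen, exudates, hemorrhages, other (placeholder list)

class LesionMapNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
                                      nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.lesion_maps = nn.Conv2d(64, N_LESIONS, 1)  # one map per lesion

    def forward(self, x):
        maps = self.lesion_maps(self.backbone(x))        # (B, L, H, W)
        lesion_logits = maps.amax(dim=(2, 3))            # global max pooling
        diagnosis_logit = lesion_logits.max(dim=1).values  # AMD if any lesion
        return maps, lesion_logits, diagnosis_logit

model = LesionMapNet()
images = torch.rand(2, 3, 256, 256)
lesion_labels = torch.tensor([[1., 0., 1., 0.], [0., 0., 0., 0.]])  # image-level labels

maps, lesion_logits, diag_logit = model(images)
loss = nn.functional.binary_cross_entropy_with_logits(lesion_logits, lesion_labels)
loss.backward()
print(torch.sigmoid(diag_logit))  # per-image diagnosis score
```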


Subjects
Deep Learning; Macular Degeneration; Humans; Fundus Oculi; Macular Degeneration/diagnostic imaging; Neural Networks, Computer; Retina/diagnostic imaging
8.
Comput Biol Med ; 144: 105333, 2022 05.
Article in English | MEDLINE | ID: mdl-35279425

ABSTRACT

After publishing an in-depth study that analyzed the ability of computerized methods to assist or replace human experts in obtaining carotid intima-media thickness (CIMT) measurements leading to correct therapeutic decisions, here the same consortium joined to present technical outlooks on computerized CIMT measurement systems and provide considerations for the community regarding the development and comparison of these methods, including considerations to encourage the standardization of computerized CIMT measurements and results presentation. A multi-center database of 500 images was collected, upon which three manual segmentations and seven computerized methods were employed to measure the CIMT, including traditional methods based on dynamic programming, deformable models, the first order absolute moment, anisotropic Gaussian derivative filters and deep learning-based image processing approaches based on U-Net convolutional neural networks. An inter- and intra-analyst variability analysis was conducted and segmentation results were analyzed by dividing the database based on carotid morphology, image signal-to-noise ratio, and research center. The computerized methods obtained CIMT absolute bias results that were comparable with studies in literature and they generally were similar and often better than the observed inter- and intra-analyst variability. Several computerized methods showed promising segmentation results, including one deep learning method (CIMT absolute bias = 106 ± 89 µm vs. 160 ± 140 µm intra-analyst variability) and three other traditional image processing methods (CIMT absolute bias = 139 ± 119 µm, 143 ± 118 µm and 139 ± 136 µm). The entire database used has been made publicly available for the community to facilitate future studies and to encourage an open comparison and technical analysis (https://doi.org/10.17632/m7ndn58sv6.1).
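As a small illustration of how a CIMT value can be obtained once the two interfaces are segmented, the sketch below converts a column-wise distance between the lumen-intima and media-adventitia boundaries into micrometres. The pixel spacing and the toy interfaces are placeholders.

```python
# Sketch of deriving a CIMT value once the lumen-intima (LI) and
# media-adventitia (MA) interfaces have been segmented: a column-wise vertical
# distance converted to micrometres. The pixel spacing is a placeholder.
import numpy as np

def cimt_from_interfaces(li_rows: np.ndarray, ma_rows: np.ndarray,
                         pixel_spacing_mm: float = 0.06) -> tuple:
    """li_rows/ma_rows: per-column row coordinates of the two interfaces."""
    thickness_px = ma_rows - li_rows                 # MA lies below LI in the image
    thickness_um = thickness_px * pixel_spacing_mm * 1000.0
    return float(thickness_um.mean()), float(thickness_um.max())

# toy interfaces over 200 image columns
cols = np.arange(200)
li = 100 + 2 * np.sin(cols / 30.0)
ma = li + 11 + np.random.default_rng(0).normal(0, 0.5, size=cols.size)
mean_cimt, max_cimt = cimt_from_interfaces(li, ma)
print(f"mean CIMT = {mean_cimt:.0f} um, max CIMT = {max_cimt:.0f} um")
```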


Subjects
Carotid Arteries; Carotid Intima-Media Thickness; Carotid Arteries/diagnostic imaging; Carotid Artery, Common/diagnostic imaging; Humans; Ultrasonography/methods; Ultrasonography, Doppler
9.
Comput Biol Med ; 143: 105302, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35219187

ABSTRACT

Diabetic retinopathy is an increasingly prevalent eye disorder that can lead to severe vision impairment. The severity grading of the disease using retinal images is key to provide an adequate treatment. However, in order to learn the diverse patterns and complex relations that are required for the grading, deep neural networks require very large annotated datasets that are not always available. This has been typically addressed by reusing networks that were pre-trained for natural image classification, hence relying on additional annotated data from a different domain. In contrast, we propose a novel pre-training approach that takes advantage of unlabeled multimodal visual data commonly available in ophthalmology. The use of multimodal visual data for pre-training purposes has been previously explored by training a network in the prediction of one image modality from another. However, that approach does not ensure a broad understanding of the retinal images, given that the network may exclusively focus on the similarities between modalities while ignoring the differences. Thus, we propose a novel self-supervised pre-training that explicitly teaches the networks to learn the common characteristics between modalities as well as the characteristics that are exclusive to the input modality. This provides a complete comprehension of the input domain and facilitates the training of downstream tasks that require a broad understanding of the retinal images, such as the grading of diabetic retinopathy. To validate and analyze the proposed approach, we performed an exhaustive experimentation on different public datasets. The transfer learning performance for the grading of diabetic retinopathy is evaluated under different settings while also comparing against previous state-of-the-art pre-training approaches. Additionally, a comparison against relevant state-of-the-art works for the detection and grading of diabetic retinopathy is also provided. The results show a satisfactory performance of the proposed approach, which outperforms previous pre-training alternatives in the grading of diabetic retinopathy.
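The sketch below is one plausible reading of the multimodal pretext idea: an encoder whose features must both reconstruct the input modality and predict the paired modality. It is a schematic stand-in, not the authors' exact architecture or losses.

```python
# Very schematic sketch of a multimodal pretext task: from a retinography the
# network both reconstructs its own modality and predicts the paired modality
# (e.g. angiography). This is a simplified reading of the idea of learning
# shared and modality-specific characteristics, not the authors' exact design.
import torch
import torch.nn as nn

class DualHeadPretext(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
                                     nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.self_head = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                                       nn.ConvTranspose2d(32, 3, 4, 2, 1))
        self.cross_head = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                                        nn.ConvTranspose2d(32, 1, 4, 2, 1))

    def forward(self, x):
        z = self.encoder(x)
        return self.self_head(z), self.cross_head(z)

retino = torch.rand(2, 3, 128, 128)   # input modality (toy data)
angio = torch.rand(2, 1, 128, 128)    # paired modality (toy data)
model = DualHeadPretext()
self_recon, cross_pred = model(retino)
loss = (nn.functional.mse_loss(self_recon, retino) +
        nn.functional.mse_loss(cross_pred, angio))
loss.backward()
```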

10.
Comput Biol Med ; 140: 105101, 2021 Dec 03.
Article in English | MEDLINE | ID: mdl-34875412

ABSTRACT

Medical imaging, and particularly retinal imaging, makes it possible to accurately diagnose many eye pathologies as well as some systemic diseases such as hypertension or diabetes. Registering these images is crucial to correctly compare key structures, not only within patients, but also to contrast data with a model or among a population. Currently, this field is dominated by complex classical methods, because the novel deep learning methods cannot yet compete in terms of results and commonly used methods are difficult to adapt to the retinal domain. In this work, we propose a novel method to register color fundus images based on previous works which employed classical approaches to detect domain-specific landmarks. Instead, we propose to use deep learning methods for the detection of these highly specific domain-related landmarks. Our method uses a neural network to detect the bifurcations and crossovers of the retinal blood vessels, whose arrangement and location are unique to each eye and person. This proposal is the first deep learning feature-based registration method in fundus imaging. These keypoints are matched using a method based on RANSAC (Random Sample Consensus) without the requirement to calculate complex descriptors. Our method was tested using the public FIRE dataset, although the landmark detection network was trained using the DRIVE dataset. Our method provides accurate results, with a registration score of 0.657 for the whole FIRE dataset (0.908 for category S, 0.293 for category P and 0.660 for category A). Therefore, our proposal can compete with complex classical methods and outperforms the state-of-the-art deep learning methods.
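The sketch below illustrates the robust fitting stage only: given candidate correspondences between vascular landmarks of two images, a similarity-type transform is estimated with RANSAC via OpenCV. The correspondences are synthetic placeholders rather than detected bifurcations.

```python
# Sketch of the robust fitting stage: given candidate correspondences between
# bifurcation/crossover points of two fundus images, estimate a partial affine
# transform with RANSAC. The correspondences here are synthetic placeholders.
import cv2
import numpy as np

rng = np.random.default_rng(0)
pts_fixed = rng.uniform(0, 500, size=(40, 2)).astype(np.float32)
angle, scale, shift = np.deg2rad(5.0), 1.02, np.array([12.0, -7.0])
R = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
pts_moving = (scale * pts_fixed @ R.T + shift).astype(np.float32)
pts_moving[:5] += rng.uniform(30, 60, size=(5, 2)).astype(np.float32)  # outliers

M, inliers = cv2.estimateAffinePartial2D(pts_moving, pts_fixed,
                                         method=cv2.RANSAC,
                                         ransacReprojThreshold=3.0)
print("estimated transform:\n", M)
print("inlier ratio:", inliers.mean())
```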

11.
Artif Intell Med ; 118: 102116, 2021 08.
Article in English | MEDLINE | ID: mdl-34412839

ABSTRACT

BACKGROUND AND OBJECTIVES: The study of the retinal vasculature represents a fundamental stage in the screening and diagnosis of many high-incidence diseases, both systemic and ophthalmic. A complete retinal vascular analysis requires the segmentation of the vascular tree along with the classification of the blood vessels into arteries and veins. Early automatic methods approach these complementary segmentation and classification tasks in two sequential stages. Currently, however, these two tasks are approached as a joint semantic segmentation, because the classification results highly depend on the effectiveness of the vessel segmentation. In that regard, we propose a novel approach for the simultaneous segmentation and classification of the retinal arteries and veins from eye fundus images. METHODS: We propose a novel method that, unlike previous approaches, and thanks to a novel loss, decomposes the joint task into three segmentation problems targeting arteries, veins and the whole vascular tree. This configuration makes it possible to handle vessel crossings intuitively and directly provides accurate segmentation masks of the different target vascular trees. RESULTS: The ablation study provided on the public Retinal Images vessel Tree Extraction (RITE) dataset demonstrates that the proposed method offers a satisfactory performance, particularly in the segmentation of the different structures. Furthermore, the comparison with the state of the art shows that our method achieves highly competitive results in the artery/vein classification, while significantly improving the vascular segmentation. CONCLUSIONS: The proposed multi-segmentation method detects more vessels and better segments the different structures, while achieving a competitive classification performance. In these respects, our approach outperforms several reference works. Moreover, in contrast with previous approaches, the proposed method is able to directly detect the vessel crossings and to preserve the continuity of both arteries and veins at these complex locations.
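The following sketch shows the general shape of the decomposition into three binary segmentation targets (arteries, veins, whole vascular tree), each with its own binary cross-entropy term. The tiny backbone and the unweighted sum are placeholders, not the network or the exact loss of the paper.

```python
# Sketch of the decomposition into three binary segmentation targets (arteries,
# veins, whole vascular tree), each trained with its own binary cross-entropy.
# The backbone and the plain sum of terms are placeholders.
import torch
import torch.nn as nn

segmentation_net = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 3, 1))  # artery, vein, vessel tree maps

images = torch.rand(2, 3, 256, 256)
artery_gt = torch.randint(0, 2, (2, 1, 256, 256)).float()
vein_gt = torch.randint(0, 2, (2, 1, 256, 256)).float()
vessel_gt = torch.clamp(artery_gt + vein_gt, max=1.0)  # union of both trees

logits = segmentation_net(images)
bce = nn.functional.binary_cross_entropy_with_logits
loss = (bce(logits[:, 0:1], artery_gt) +
        bce(logits[:, 1:2], vein_gt) +
        bce(logits[:, 2:3], vessel_gt))
loss.backward()
```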


Subjects
Retinal Artery; Algorithms; Fundus Oculi; Retinal Artery/diagnostic imaging; Retinal Vessels/diagnostic imaging
12.
Ultrasound Med Biol ; 47(8): 2442-2455, 2021 08.
Article in English | MEDLINE | ID: mdl-33941415

ABSTRACT

Common carotid intima-media thickness (CIMT) is a commonly used marker for atherosclerosis and is often computed in carotid ultrasound images. An analysis of different computerized techniques for CIMT measurement and their clinical impacts on the same patient data set is lacking. Here we compared and assessed five computerized CIMT algorithms against three expert analysts' manual measurements on a data set of 1088 patients from two centers. Inter- and intra-observer variability was assessed, and the computerized CIMT values were compared with those manually obtained. The CIMT measurements were used to assess the correlation with clinical parameters, cardiovascular event prediction through a generalized linear model and the Kaplan-Meier hazard ratio. CIMT measurements obtained with a skilled analyst's segmentation and the computerized segmentation were comparable in statistical analyses, suggesting they can be used interchangeably for CIMT quantification and clinical outcome investigation. To facilitate future studies, the entire data set used is made publicly available for the community at http://dx.doi.org/10.17632/fpv535fss7.1.
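As a minimal illustration of the survival analysis mentioned (Kaplan-Meier curves and group comparison), the snippet below uses the lifelines library on synthetic data; the CIMT cutoff and follow-up values are placeholders.

```python
# Minimal sketch of the survival analysis used for clinical outcome comparison:
# Kaplan-Meier curves and a log-rank test between patients stratified by CIMT.
# All data below are synthetic placeholders.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
follow_up_years = rng.exponential(8.0, size=200)
event_observed = rng.integers(0, 2, size=200)
high_cimt = rng.integers(0, 2, size=200).astype(bool)   # CIMT above a cutoff

kmf_high = KaplanMeierFitter()
kmf_high.fit(follow_up_years[high_cimt], event_observed[high_cimt], label="high CIMT")
kmf_low = KaplanMeierFitter()
kmf_low.fit(follow_up_years[~high_cimt], event_observed[~high_cimt], label="low CIMT")

result = logrank_test(follow_up_years[high_cimt], follow_up_years[~high_cimt],
                      event_observed[high_cimt], event_observed[~high_cimt])
print("log-rank p-value:", result.p_value)
```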


Subjects
Algorithms; Carotid Arteries/diagnostic imaging; Carotid Intima-Media Thickness; Aged; Computer Systems; Female; Humans; Male; Middle Aged; Ultrasonography
13.
Comput Methods Programs Biomed ; 200: 105923, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33486341

ABSTRACT

BACKGROUND AND OBJECTIVE: The proliferation of toxin-producing phytoplankton species can compromise the quality of water sources. This contamination is difficult to detect, and consequently to neutralise, since normal water purification techniques are ineffective. Currently, phytoplankton analyses of water are commonly performed by specialists through manual routine analyses, which represents a major limitation. The adequate identification and classification of phytoplankton specimens requires intensive training and expertise. Additionally, the performed analysis involves a lengthy process that exhibits serious problems of reliability and repeatability, as inter-expert agreement is not always reached. Considering all those factors, the automation of these analyses is, therefore, highly desirable to reduce the workload of the specialists and facilitate the process. METHODS: This manuscript proposes a novel fully automatic methodology to perform phytoplankton analyses in digital microscopy images of water samples taken with a regular light microscope. In particular, we propose a method capable of analysing multi-specimen images acquired using a simplified systematic protocol. In contrast with prior approaches, this enables its use without the need for an expert taxonomist operating the microscope. The system is able to detect and segment the different existing phytoplankton specimens, with high variability in terms of visual appearance, and to merge them into colonies and sparse specimens when necessary. Moreover, the system is capable of differentiating them from other similar objects like zooplankton, detritus or mineral particles, among others, and then classifying the specimens into defined target species of interest using a machine learning-based approach. RESULTS: The proposed system provided satisfactory and accurate results in every step. The detection step provided an FNR of 0.4%. Phytoplankton detection, that is, differentiating true phytoplankton from similar objects (zooplankton, minerals, etc.), provided a precision of 84.07% at 90% recall. The target species classification reported an overall accuracy of 87.50%. The recall levels for each species are 81.82% for W. naegeliana, 57.15% for A. spiroides, 85.71% for D. sociale, and 95% for the "Other" group, a set of relevant toxic and interesting species widely spread over the samples. CONCLUSIONS: The proposed methodology provided accurate results in all the designed steps given the complexity of the problem, particularly in terms of specimen identification, phytoplankton differentiation and the classification of the defined target species. Therefore, this fully automatic system represents a robust and consistent tool to aid the specialists in the analysis of the quality and potability of water sources.
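A minimal sketch of the final species-classification stage follows: per-specimen feature vectors fed to a classical classifier, with per-class recall reported. The feature contents, classifier choice, and data are illustrative placeholders, not the paper's trained pipeline.

```python
# Sketch of the final species-classification stage: hand-crafted feature
# vectors for each detected specimen fed to a classical classifier. Feature
# contents, classifier choice and data are illustrative placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

SPECIES = ["W. naegeliana", "A. spiroides", "D. sociale", "Other"]

rng = np.random.default_rng(0)
features = rng.normal(size=(400, 64))             # texture/colour features per specimen
labels = rng.integers(0, len(SPECIES), size=400)  # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.25,
                                          stratify=labels, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=SPECIES))
```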


Subjects
Microscopy; Phytoplankton; Machine Learning; Reproducibility of Results; Water
14.
Sensors (Basel) ; 20(22), 2020 Nov 23.
Article in English | MEDLINE | ID: mdl-33238566

ABSTRACT

Water safety and quality can be compromised by the proliferation of toxin-producing phytoplankton species, requiring continuous monitoring of water sources. This analysis involves the identification and counting of these species, which requires broad experience and knowledge. The automation of these tasks is highly desirable, as it would release the experts from tedious work, eliminate subjective factors, and improve repeatability. Thus, in this preliminary work, we propose to advance towards an automatic methodology for phytoplankton analysis in digital images of water samples acquired using regular microscopes. In particular, we propose a novel and fully automatic method to detect and segment the existing phytoplankton specimens in these images using classical computer vision algorithms. The proposed method is able to correctly detect sparse colonies as single phytoplankton candidates, thanks to a novel fusion algorithm, and is able to differentiate phytoplankton specimens from other image objects in the microscope samples (like minerals, bubbles or detritus) using a machine learning-based approach that exploits texture and colour features. Our preliminary experiments demonstrate that the proposed method provides satisfactory and accurate results.
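For orientation, the sketch below shows a classical candidate-extraction step of the kind described: global thresholding followed by connected-component analysis. The actual method is more elaborate (colony fusion, texture and colour classification); the file name and parameters are placeholders.

```python
# Sketch of a classical candidate-extraction step: global thresholding followed
# by connected-component analysis to obtain specimen candidates. The paper's
# pipeline is more elaborate; the file name and parameters are placeholders.
import numpy as np
from skimage import io, color, filters, measure, morphology

image = io.imread("water_sample.png")        # placeholder file name
gray = color.rgb2gray(image)
mask = gray < filters.threshold_otsu(gray)   # specimens darker than background
mask = morphology.remove_small_objects(mask, min_size=100)

labels = measure.label(mask)
for region in measure.regionprops(labels):
    minr, minc, maxr, maxc = region.bbox
    print(f"candidate: area={region.area}, bbox=({minr}, {minc}, {maxr}, {maxc})")
```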


Subjects
Environmental Monitoring/methods; Image Processing, Computer-Assisted; Microscopy; Phytoplankton; Algorithms; Fresh Water; Machine Learning
15.
Sensors (Basel) ; 20(7), 2020 Apr 03.
Article in English | MEDLINE | ID: mdl-32260062

ABSTRACT

Optical Coherence Tomography (OCT) has become a relevant image modality in ophthalmological clinical practice, as it offers a detailed representation of the eye fundus. This medical imaging modality is currently one of the main means of identification and characterization of intraretinal cystoid regions, a crucial task in the diagnosis of exudative macular disease or macular edema, which are among the main causes of blindness in developed countries. This work presents an exhaustive analysis of intensity- and texture-based descriptors for their identification and classification, using a complete set of 510 texture features, three state-of-the-art feature selection strategies, and seven representative classifier strategies. The methodology validation and the analysis were performed using an image dataset of 83 OCT scans. From these images, 1609 samples were extracted from both cystoid and non-cystoid regions. The different tested configurations provided satisfactory results, reaching a mean cross-validation test accuracy of 92.69%. The most promising feature categories identified for the issue were the Gabor filters, the Histogram of Oriented Gradients (HOG), the Gray-Level Run-Length matrix (GLRL), and the Laws' texture filters (LAWS), which were consistently selected in the top positions of the relevance rankings produced by all feature selection algorithms.
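The sketch below strings together two of the mentioned feature families (HOG and Gabor), univariate feature selection, and a classifier on toy patches. Parameter values and data are placeholders, not the 510-feature setup of the study.

```python
# Sketch of the feature-extraction / feature-selection / classification chain
# on OCT patches, using two of the mentioned feature families (HOG and Gabor).
# Patch data and parameter values are placeholders.
import numpy as np
from skimage.feature import hog
from skimage.filters import gabor
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def describe(patch: np.ndarray) -> np.ndarray:
    hog_vec = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    gabor_real, _ = gabor(patch, frequency=0.2)
    gabor_stats = [gabor_real.mean(), gabor_real.std()]
    return np.concatenate([hog_vec, gabor_stats])

rng = np.random.default_rng(0)
patches = rng.random((60, 64, 64))            # toy cystoid / non-cystoid samples
labels = rng.integers(0, 2, size=60)
X = np.stack([describe(p) for p in patches])

model = make_pipeline(SelectKBest(f_classif, k=50), SVC())
model.fit(X, labels)
print("training accuracy:", model.score(X, labels))
```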

16.
Comput Methods Programs Biomed ; 186: 105201, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31783244

ABSTRACT

BACKGROUND AND OBJECTIVES: The analysis of the retinal vasculature plays an important role in the diagnosis of many ocular and systemic diseases. In this context, the accurate detection of the vessel crossings and bifurcations is an important requirement for the automated extraction of relevant biomarkers. In that regard, we propose a novel approach that addresses the simultaneous detection of vessel crossings and bifurcations in eye fundus images. METHOD: We propose to formulate the detection of vessel crossings and bifurcations in eye fundus images as a multi-instance heatmap regression. In particular, a deep neural network is trained in the prediction of multi-instance heatmaps that model the likelihood of a pixel being a landmark location. This novel approach allows predictions to be made on full images and integrates the detection and distinction of the vascular landmarks into a single step. RESULTS: The proposed method is validated on two public reference datasets that include detailed annotations for vessel crossings and bifurcations in eye fundus images. The conducted experiments evidence that the proposed method offers a satisfactory performance. In particular, the proposed method achieves 74.23% and 70.90% F-score for the detection of crossings and bifurcations, respectively, in color fundus images. Furthermore, the proposed method outperforms previous works by a significant margin. CONCLUSIONS: The proposed multi-instance heatmap regression makes it possible to successfully exploit the potential of modern deep learning algorithms for the simultaneous detection of retinal vessel crossings and bifurcations. Consequently, this results in a significant improvement over previous methods, which will further facilitate the automated analysis of the retinal vasculature in many pathological conditions.
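As an illustration of how multi-instance heatmap targets can be built, the snippet below places a Gaussian bump at every crossing and bifurcation in separate channels; a network would then regress these maps (e.g. with an MSE loss). The sigma and coordinates are placeholders.

```python
# Sketch of building multi-instance heatmap targets: every crossing or
# bifurcation contributes a Gaussian bump to the corresponding channel, and a
# network is trained to regress these maps. Sigma and coordinates are placeholders.
import numpy as np

def landmark_heatmap(shape, points, sigma=5.0):
    """shape: (H, W); points: iterable of (row, col) landmark locations."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    heatmap = np.zeros(shape, dtype=np.float32)
    for r, c in points:
        bump = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
        heatmap = np.maximum(heatmap, bump)  # keep instances from summing up
    return heatmap

crossings = [(120, 80), (200, 310)]
bifurcations = [(60, 150), (220, 90), (300, 280)]
target = np.stack([landmark_heatmap((512, 512), crossings),
                   landmark_heatmap((512, 512), bifurcations)])  # (2, H, W) target
print(target.shape, target.max())
```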


Subjects
Fundus Oculi; Hot Temperature; Retinal Vessels/diagnostic imaging; Algorithms; Humans; Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer
17.
J Digit Imaging ; 32(6): 947-962, 2019 12.
Article in English | MEDLINE | ID: mdl-31144147

ABSTRACT

An accurate identification of the retinal arteries and veins is a relevant issue in the development of automatic computer-aided diagnosis systems that facilitate the analysis of diseases affecting the vascular system, such as diabetes or hypertension, among others. The proposed method offers a complete analysis of the retinal vascular tree structure through its identification and posterior classification into arteries and veins using optical coherence tomography (OCT) scans. These scans include near-infrared reflectance retinography images, the ones used in this work, in combination with the corresponding histological sections. The method, firstly, segments the vessel tree and identifies its characteristic points. Then, Global Intensity-Based Features (GIBS) are used to measure the differences in the intensity profiles between arteries and veins. A k-means clustering classifier employs these features to evaluate the potential of the proposed method for artery/vein identification. Finally, a post-processing stage is applied to correct misclassifications using context information and maximize the performance of the classification process. The methodology was validated using an OCT image dataset retrieved from 46 different patients, where 2,392 vessel segments and 97,294 vessel points were manually labeled by an expert clinician. The method achieved satisfactory results, reaching a best accuracy of 93.35% in the identification of arteries and veins, being the first proposal to address this issue in this image modality.
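A minimal sketch of the unsupervised artery/vein separation step follows: per-segment intensity features clustered into two groups with k-means. The synthetic three-dimensional features stand in for the GIBS features of the paper.

```python
# Sketch of the unsupervised artery/vein separation step: intensity-based
# feature vectors for each vessel segment clustered into two groups with
# k-means. The features below are synthetic placeholders for the GIBS features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# toy features: e.g. mean, std and contrast of the central intensity profile
arteries = rng.normal(loc=[0.65, 0.08, 0.30], scale=0.03, size=(50, 3))
veins = rng.normal(loc=[0.45, 0.10, 0.45], scale=0.03, size=(50, 3))
segments = np.vstack([arteries, veins])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(segments)
print("cluster sizes:", np.bincount(kmeans.labels_))
```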


Subjects
Retinal Diseases/diagnostic imaging; Retinal Vessels/diagnostic imaging; Tomography, Optical Coherence/methods; Vascular Diseases/diagnostic imaging; Humans
18.
BMC Med Res Methodol ; 18(1): 144, 2018 11 20.
Article in English | MEDLINE | ID: mdl-30458717

ABSTRACT

BACKGROUND: Retinal vascular tortuosity can be a potential indicator of relevant vascular and non-vascular diseases. However, the lack of a precise and standard guide for tortuosity evaluation hinders its use for diagnostic and treatment purposes. This work aims to advance the standardization of retinal vascular tortuosity as a clinical biomarker with diagnostic potential, thereby allowing the validation of objective computational measurements on the basis of the entire spectrum of the expert knowledge. METHODS: This paper describes a multi-expert validation process of the reference computational vascular tortuosity measurements. A group of five experts, covering the different clinical profiles of an ophthalmological service, and a four-grade scale from non-tortuous to severe tortuosity, as well as non-tortuous / tortuous and asymptomatic / symptomatic binary classifications, are considered for the analysis of the multi-expert validation procedure. The specialists' rating process comprises two rounds involving all the experts and a joint round to establish consensual rates. The expert agreement is analyzed throughout the rating procedure and, then, the consensual rates are set as the reference to validate the prognostic performance of four reference computational tortuosity metrics. RESULTS: The Kappa indexes for the intra-rater agreement analysis ranged between 0.35 and 0.83, whereas those for the inter-rater agreement in the asymptomatic / symptomatic classification ranged between 0.22 and 0.76. The Area Under the Curve (AUC) values for each expert against the consensual rates were between 0.61 and 0.83, whereas the prognostic performance of the best objective tortuosity metric was 0.80. CONCLUSIONS: There is a high inter- and intra-rater variability, especially for the four-grade scale. The prognostic performance of the tortuosity measurements is close to the experts' performance, especially for the Grisan measurement. However, there is a gap between the automatic effectiveness and the expert perception given the lack of clinical criteria in the computational measurements.
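For reference, the snippet below computes the two kinds of figures reported: Cohen's kappa between two raters and the AUC of a tortuosity metric against a consensual binary rating, using synthetic placeholder arrays.

```python
# Sketch of the agreement and performance metrics used in the validation:
# Cohen's kappa between two raters and AUC of a tortuosity metric against the
# consensual binary rating. The arrays are synthetic placeholders.
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score

rater_a = np.array([0, 1, 2, 3, 1, 0, 2, 1, 3, 0])   # four-grade scale
rater_b = np.array([0, 1, 1, 3, 2, 0, 2, 1, 3, 1])
print("inter-rater kappa:", cohen_kappa_score(rater_a, rater_b))

consensus_symptomatic = np.array([0, 0, 1, 1, 0, 0, 1, 0, 1, 0])
tortuosity_metric = np.array([0.8, 1.1, 2.3, 2.9, 1.0, 0.7, 2.0, 1.2, 3.1, 0.9])
print("AUC of metric:", roc_auc_score(consensus_symptomatic, tortuosity_metric))
```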


Subjects
Diagnosis, Computer-Assisted/methods; Ophthalmologists/statistics & numerical data; Retinal Diseases/diagnosis; Retinal Vessels/pathology; Humans; Observer Variation; Ophthalmologists/standards; Ophthalmology/methods; Ophthalmology/standards; Ophthalmology/statistics & numerical data; Practice Patterns, Physicians'/standards; Reproducibility of Results
19.
J Vis Exp ; (139), 2018 09 26.
Article in English | MEDLINE | ID: mdl-30320756

ABSTRACT

Cardiovascular diseases (CVDs) are the leading cause of death throughout the world. The total risk of developing CVD is determined by the combined effect of different cardiovascular risk factors (e.g., diabetes, raised blood pressure, unhealthy diet, tobacco use, stress, etc.) that commonly coexist and act multiplicatively. Most CVDs can be prevented by an early identification of the highest risk factors and an appropriate treatment. The stratification of cardiovascular risk factors involves a wide range of parameters and tests that specialists use in their clinical practice. In addition to cardiovascular (CV) risk stratification, ambulatory blood pressure monitoring (ABPM) also provides relevant information for diagnostic and treatment purposes. This work presents a list of protocols based on the Hydra platform, a web-based system for clinical decision support which incorporates a set of functionalities and services that are required for complete cardiovascular analysis, risk assessment, early diagnosis, treatment and monitoring of patients over time. The program includes tools for inputting and managing comprehensive patient data, organized into different checkups to track the evolution over time. It also has a risk stratification tool to compute a CV risk factor based upon several risk stratification tables of reference. Additionally, the program includes a tool that incorporates ABPM analysis and allows the extraction of valuable information by monitoring blood pressure over a specific period of time. Finally, the reporting service summarizes the most relevant information in a set of reports that aid clinicians in their clinical decision-making process.


Subjects
Cardiovascular Diseases/diagnosis; Cardiovascular Diseases/therapy; Decision Support Systems, Clinical; Internet; Humans; Risk Factors
20.
PLoS One ; 12(6): e0177544, 2017.
Article in English | MEDLINE | ID: mdl-28570557

ABSTRACT

Breast cancer is one of the main causes of cancer death worldwide. The diagnosis of biopsy tissue with hematoxylin and eosin stained images is non-trivial, and specialists often disagree on the final diagnosis. Computer-aided diagnosis systems contribute to reducing the cost and increasing the efficiency of this process. Conventional classification approaches rely on feature extraction methods designed for a specific problem based on field knowledge. To overcome the many difficulties of the feature-based approaches, deep learning methods are becoming important alternatives. A method for the classification of hematoxylin and eosin stained breast biopsy images using Convolutional Neural Networks (CNNs) is proposed. Images are classified into four classes (normal tissue, benign lesion, in situ carcinoma and invasive carcinoma) and into two classes (carcinoma and non-carcinoma). The architecture of the network is designed to retrieve information at different scales, including both the nuclei and the overall tissue organization. This design allows the extension of the proposed system to whole-slide histology images. The features extracted by the CNN are also used to train a Support Vector Machine classifier. Accuracies of 77.8% for the four-class problem and 83.3% for carcinoma/non-carcinoma classification are achieved. The sensitivity of our method for cancer cases is 95.6%.
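The sketch below illustrates the "CNN features + SVM" idea in its generic form: a convolutional network used as a fixed feature extractor and an SVM trained on the extracted vectors. A stock ImageNet ResNet and random tensors stand in for the paper's custom multi-scale CNN and the biopsy images.

```python
# Sketch of the "CNN features + SVM" idea: use a convolutional network as a
# fixed feature extractor and train an SVM on the extracted vectors. A generic
# ImageNet ResNet stands in for the paper's custom multi-scale CNN, and the
# data are random placeholders.
import torch
import torchvision
from sklearn.svm import SVC

backbone = torchvision.models.resnet18(weights="DEFAULT")
backbone.fc = torch.nn.Identity()      # drop the classification head
backbone.eval()

images = torch.rand(20, 3, 224, 224)   # toy stand-in for stained biopsy patches
labels = torch.randint(0, 4, (20,))    # normal / benign / in situ / invasive

with torch.no_grad():
    features = backbone(images).numpy()

svm = SVC(kernel="rbf").fit(features, labels.numpy())
print("training accuracy:", svm.score(features, labels.numpy()))
```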


Subjects
Breast Neoplasms/pathology; Neural Networks, Computer; Breast Neoplasms/classification; Female; Humans; Image Processing, Computer-Assisted; Support Vector Machine