Results 1 - 20 of 31
1.
Diagnostics (Basel) ; 14(10)2024 May 08.
Article in English | MEDLINE | ID: mdl-38786273

ABSTRACT

Artificial intelligence (AI) models have received considerable attention in recent years for their ability to identify optical coherence tomography (OCT) biomarkers with clinical diagnostic potential and predict disease progression. This study aims to externally validate a deep learning (DL) algorithm by comparing its segmentation of retinal layers and fluid with a gold-standard method: manual adjustment of the automatic segmentation of the Heidelberg Spectralis HRA + OCT software, Version 6.16.8.0. A total of sixty OCT images of healthy subjects and patients with intermediate and exudative age-related macular degeneration (AMD) were included. A quantitative analysis of the retinal thickness and fluid area was performed, and the discrepancy between these methods was investigated. The results showed a moderate-to-strong correlation between the metrics extracted by both tools in all groups, and an overall near-perfect area overlap was observed, except in the inner segment ellipsoid (ISE) layer. The DL system detected a significant difference in outer retinal thickness across disease stages and accurately identified fluid in exudative cases. In more diseased eyes, there was significantly more disagreement between the methods. This DL system appears to be a reliable method for assessing important OCT biomarkers in AMD. However, further accuracy testing should be conducted to confirm its validity in real-world settings, to ultimately aid ophthalmologists in OCT imaging management and guide timely treatment approaches.
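The reported method agreement can be reproduced with a standard correlation measure. Below is a minimal sketch of the thickness comparison using made-up illustrative measurements, not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D measurement arrays."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Hypothetical per-eye outer retinal thickness (µm) from the two methods
manual = [250.0, 261.5, 247.2, 280.9, 301.3, 255.0]
dl     = [252.1, 259.8, 249.0, 283.5, 298.7, 257.4]
r = pearson_r(manual, dl)  # close to 1 when the methods agree
```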

2.
Artif Intell Med ; 149: 102814, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38462277

ABSTRACT

Machine Learning models need large amounts of annotated data for training. In the field of medical imaging, labeled data is especially difficult to obtain because the annotations have to be performed by qualified physicians. Natural Language Processing (NLP) tools can be applied to radiology reports to extract labels for medical images automatically. Compared to manual labeling, this approach requires less annotation effort and can therefore facilitate the creation of labeled medical image data sets. In this article, we summarize the literature on this topic spanning from 2013 to 2023, starting with a meta-analysis of the included articles, followed by a qualitative and quantitative systematization of the results. Overall, we found four types of studies on the extraction of labels from radiology reports: those describing systems based on symbolic NLP, statistical NLP, or neural NLP, and those describing systems combining or comparing two or more of these approaches. Despite the large variety of existing approaches, there is still room for further improvement. This work can contribute to the development of new techniques or the improvement of existing ones.
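As a concrete illustration of the symbolic (rule-based) NLP family surveyed above, the sketch below extracts labels from a toy report using a NegEx-style negation check; the finding lexicon and patterns are invented for illustration and are far simpler than any surveyed system:

```python
import re

# Hypothetical finding lexicon (regexes) and negation cues
FINDINGS = {
    "pneumothorax": r"pneumothorax",
    "effusion": r"(pleural\s+)?effusions?",
    "cardiomegaly": r"cardiomegaly|enlarged (cardiac silhouette|heart)",
}
NEGATIONS = r"\b(no|without|negative for|free of)\b"

def extract_labels(report):
    """Map each finding to True (affirmed), False (negated) or None (absent)."""
    labels = {}
    for name, pattern in FINDINGS.items():
        labels[name] = None
        for sentence in re.split(r"[.\n]", report.lower()):
            if re.search(pattern, sentence):
                # affirmed unless a negation cue occurs in the same sentence
                labels[name] = not re.search(NEGATIONS, sentence)
    return labels

report = "Heart size is normal. No pleural effusion. Small right pneumothorax."
labels = extract_labels(report)
```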


Subjects
Natural Language Processing; Radiology; Machine Learning
3.
Artif Intell Med ; 147: 102737, 2024 01.
Article in English | MEDLINE | ID: mdl-38184361

ABSTRACT

Chest X-ray scans are frequently requested to detect the presence of abnormalities, due to their low cost and non-invasive nature. The interpretation of these images can be automated to prioritize more urgent exams through deep learning models, but the presence of image artifacts, e.g. lettering, often generates a harmful bias in the classifiers and an increase in false-positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise, in which an image is either normal or abnormal, using an attention-driven and spatially unsupervised Spatial Transformer Network (STERN) that takes advantage of a novel domain-specific loss to better frame the region of interest. Unlike the state of the art, in which this type of network is usually employed for image alignment, this work proposes a spatial transformer module used specifically for attention, as an alternative to the standard object detection models that typically precede the classifier to crop out the region of interest. In sum, the proposed end-to-end architecture dynamically scales and aligns the input images to maximize the classifier's performance, by selecting the thorax with translation and non-isotropic scaling transformations, and thus eliminating artifacts. Additionally, this paper provides an extensive and objective analysis of the selected regions of interest, by proposing a set of mathematical evaluation metrics. The results indicate that STERN achieves results similar to those obtained with YOLO-cropped images, with reduced computational cost and without the need for localization labels. More specifically, the system is able to distinguish abnormal frontal images from the CheXpert dataset with a mean AUC of 85.67% - a 2.55% improvement over a standard baseline classifier, vs. the 0.98% improvement achieved by the YOLO-based counterpart. At the same time, the STERN approach requires less than 2/3 of the training parameters, while increasing the inference time per batch by less than 2 ms. Code available via GitHub.
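The restricted spatial transform described above (translation plus non-isotropic scaling, with no rotation or shear) can be sketched as a 2 × 3 affine matrix applied over a normalized sampling grid. This is a minimal numpy illustration with nearest-neighbour sampling, not the paper's implementation:

```python
import numpy as np

def make_theta(sx, sy, tx, ty):
    """2x3 affine matrix over normalized coords in [-1, 1]:
    anisotropic scale (sx, sy) and translation (tx, ty) only."""
    return np.array([[sx, 0.0, tx],
                     [0.0, sy, ty]])

def warp(image, theta, out_shape):
    """Nearest-neighbour sampling of `image` on the transformed grid."""
    H, W = image.shape
    h, w = out_shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    grid = np.stack([xs, ys, np.ones_like(xs)])       # (3, h, w)
    src = np.einsum("ij,jhw->ihw", theta, grid)       # source x', y'
    col = np.clip(((src[0] + 1) / 2 * (W - 1)).round().astype(int), 0, W - 1)
    row = np.clip(((src[1] + 1) / 2 * (H - 1)).round().astype(int), 0, H - 1)
    return image[row, col]
```

Scaling with sx, sy < 1 samples a central crop (e.g. the thorax) and resizes it to the output resolution, which is the cropping behaviour the attention module learns.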


Subjects
Artifacts; Thorax; X-Rays; Exercise
4.
Comput Methods Programs Biomed ; 236: 107558, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37087944

ABSTRACT

BACKGROUND AND OBJECTIVE: Convolutional neural networks are widely used to detect radiological findings in chest radiographs. Standard architectures are optimized for images of relatively small size (for example, 224 × 224 pixels), which suffices for most application domains. However, in medical imaging, larger inputs are often necessary to analyze disease patterns. A single scan can display multiple types of radiological findings varying greatly in size, and most models do not explicitly account for this. For a given network, whose layers have fixed-size receptive fields, smaller input images result in coarser features, which better characterize larger objects in an image. In contrast, larger inputs result in finer-grained features, beneficial for the analysis of smaller objects. By compromising on a single resolution, existing frameworks fail to acknowledge that the ideal input size will not necessarily be the same for classifying every pathology of a scan. The goal of our work is to address this shortcoming by proposing a lightweight framework for multi-scale classification of chest radiographs, where finer and coarser features are combined in a parameter-efficient fashion. METHODS: We experiment on CheXpert, a large chest X-ray database. A lightweight multi-resolution (224 × 224, 448 × 448 and 896 × 896 pixels) network is developed based on a DenseNet-121 model in which batch normalization layers are replaced with the proposed size-specific batch normalization. Each input size undergoes batch normalization with dedicated scale and shift parameters, while the remaining parameters are shared across sizes. Additional external validation of the proposed approach is performed on the VinDr-CXR data set. RESULTS: The proposed approach (AUC 83.27±0.17, 7.1M parameters) outperforms standard single-scale models (AUC 81.76±0.18, 82.62±0.11 and 82.39±0.13 for input sizes 224 × 224, 448 × 448 and 896 × 896, respectively, 6.9M parameters).
It also achieves a performance similar to an ensemble of one individual model per scale (AUC 83.27±0.11, 20.9M parameters), while relying on significantly fewer parameters. The model leverages features of different granularities, resulting in a more accurate classification of all findings, regardless of their size, highlighting the advantages of this approach. CONCLUSIONS: Different chest X-ray findings are better classified at different scales. Our study shows that multi-scale features can be obtained with nearly no additional parameters, boosting performance.
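The size-specific batch normalization idea can be sketched as follows: one (scale, shift) pair per input resolution, with everything else shared. This is an illustrative, inference-style simplification using batch statistics; the class and parameter names are not from the paper:

```python
import numpy as np

class SizeSpecificBN:
    """Batch normalization with per-input-size scale/shift parameters."""

    def __init__(self, channels, sizes):
        self.gamma = {s: np.ones(channels) for s in sizes}   # per-size scale
        self.beta = {s: np.zeros(channels) for s in sizes}   # per-size shift

    def __call__(self, x, size, eps=1e-5):
        # x: (batch, channels, H, W); normalize over batch and spatial dims
        mu = x.mean(axis=(0, 2, 3), keepdims=True)
        var = x.var(axis=(0, 2, 3), keepdims=True)
        xhat = (x - mu) / np.sqrt(var + eps)
        g = self.gamma[size][None, :, None, None]
        b = self.beta[size][None, :, None, None]
        return g * xhat + b
```

Only the gamma/beta dictionaries grow with the number of resolutions, which is why the multi-scale model stays close to the single-scale parameter count.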


Subjects
Deep Learning; Neural Networks, Computer; Radiography; Radiography, Thoracic/methods
5.
J Med Imaging (Bellingham) ; 10(1): 014006, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36825083

ABSTRACT

Purpose: The development of accurate methods for retinal layer and fluid segmentation in optical coherence tomography images can help ophthalmologists in the diagnosis and follow-up of retinal diseases. Recent works based on joint segmentation have presented good results for the segmentation of most retinal layers, but the fluid segmentation results are still not satisfactory. We report a hierarchical framework that starts by distinguishing the retinal zone from the background, then separates the fluid-filled regions from the rest, and finally discriminates the individual retinal layers. Approach: Three fully convolutional networks were trained sequentially. The weighting scheme used for computing the loss function during training is derived from the outputs of the previously trained networks. To reinforce the relative position between retinal layers, the mutex Dice loss (included for optimizing the last network) was further modified so that errors between more "distant" layers are more penalized. The method's performance was evaluated using a public dataset. Results: The proposed hierarchical approach outperforms previous works in the segmentation of the inner segment ellipsoid layer and fluid (Dice coefficient = 0.95 and 0.82, respectively). The results achieved for the remaining layers are at a state-of-the-art level. Conclusions: The proposed framework led to significant improvements in fluid segmentation without compromising the results in the retinal layers. Thus, its output can be used by ophthalmologists as a second opinion or as input for the automatic extraction of relevant quantitative biomarkers.
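The idea of penalizing errors between more "distant" layers can be illustrated with a simplified pairwise overlap penalty weighted by layer distance in the anatomical ordering. This is a sketch of the mutual-exclusion principle, not the authors' exact modified mutex Dice loss:

```python
import numpy as np

def distance_weighted_mutex(probs):
    """probs: (n_layers, n_pixels) soft predictions in [0, 1].
    Overlap between two mutually exclusive layers is penalized in
    proportion to their distance in the layer ordering."""
    n = probs.shape[0]
    penalty = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            weight = j - i  # more distant pairs cost more
            penalty += weight * float((probs[i] * probs[j]).sum())
    return penalty
```

A one-hot (perfectly exclusive) prediction incurs zero penalty, and confusing layer 0 with layer 2 costs twice as much as confusing it with the adjacent layer 1.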

6.
Environ Sci Pollut Res Int ; 30(17): 50174-50197, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36790704

ABSTRACT

Recycling agents enable better incorporation of reclaimed asphalt pavement (RAP) in the production of new asphalt mixtures. Alternative and residual materials with potential as asphalt binder viscosity reducers have gained visibility in the field of paving due to the perspective of a circular economy in recycled mixtures. Soybean oil sludge fatty acid is produced from soybean oil sludge, a waste generated in the soybean oil refining step. Thus, this paper investigated the physical, chemical, and rheological effects of modifying the asphalt binder PG 64-XX with soybean oil sludge fatty acid at contents of 6% and 7% by weight of the binder. The modified binder samples were subjected to penetration, softening point, rotational viscosity, and performance grade (PG) tests, before and after short-term aging (RTFO), as well as multiple stress creep and recovery (MSCR). A control asphalt mixture and recycled asphalt mixtures produced with 40% RAP and the fatty acid-modified binders were subjected to tensile strength, induced moisture damage, resilient modulus, and fatigue life tests. A Student's t-test verified the significance of the data, and the production costs of these asphalt mixtures were estimated. The use of the fatty acid significantly reduced the stiffness and viscosity of the control asphalt binder, decreasing the mixing temperature by 14 °C and 17 °C for the 6% and 7% contents, respectively. Higher fatty acid contents from soybean oil sludge significantly improved the performance of the recycled mixtures in tensile strength, moisture damage, and fatigue life. The production cost of the recycled asphalt mixtures was lower than that of the control mixture.


Subjects
Sewage; Soybean Oil; Fatty Acids; Construction Materials
7.
Graefes Arch Clin Exp Ophthalmol ; 260(12): 3825-3836, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35838808

ABSTRACT

PURPOSE: This study aims to investigate retinal and choroidal vascular reactivity to carbogen in central serous chorioretinopathy (CSC) patients. METHODS: An experimental pilot study including 68 eyes from 20 CSC patients and 14 age- and sex-matched controls was performed. The participants inhaled carbogen (5% CO2 + 95% O2) for 2 min through a high-concentration disposable mask. Thirty-degree disc-centered fundus imaging using infrared (IR) and macular spectral-domain optical coherence tomography (SD-OCT) with the enhanced depth imaging (EDI) technique were performed, both at baseline and after the 2-min gas exposure. A parametric model-fitting-based approach for automatic retinal blood vessel caliber estimation was used to assess the mean variation in both the arterial and venous vasculature. Choroidal thickness was measured in two different ways: the subfoveal choroidal thickness (SFCT) was calculated using a manual caliper, and the mean central choroidal thickness (MCCT) was assessed using automatic software. RESULTS: No significant differences were detected in baseline hemodynamic parameters between the two groups. A significant positive correlation was found between the participants' age and arterial diameter variation (p < 0.001, r = 0.447), meaning that younger participants presented a more vasoconstrictive response (negative variation) than older ones. No significant differences were detected in the vasoreactive response between CSC patients and controls for both arterial and venous vessels (p = 0.63 and p = 0.85, respectively). Although the vascular reactivity was not related to the activity of CSC, it was related to disease duration, for both the arterial (p = 0.02, r = 0.381) and venous (p = 0.001, r = 0.530) beds. SFCT and MCCT were highly correlated (r = 0.830, p < 0.001). Both SFCT and MCCT significantly increased in CSC patients (p < 0.001 for both) but not in controls (p = 0.059 and p = 0.247). A significant negative correlation between CSC patients' age and MCCT variation (r = -0.340, p = 0.049) was detected. In CSC patients, the choroidal thickness variation was not related to activity state, disease duration, or previous photodynamic treatment. CONCLUSION: Vasoreactivity to carbogen was similar in the retinal vessels but significantly higher in the choroidal vessels of CSC patients when compared to controls, strengthening the hypothesis of a choroidal regulation dysfunction in this pathology.


Subjects
Central Serous Chorioretinopathy; Humans; Central Serous Chorioretinopathy/diagnosis; Fluorescein Angiography/methods; Pilot Projects; Visual Acuity; Choroid/pathology; Tomography, Optical Coherence/methods; Retrospective Studies
8.
Sci Rep ; 12(1): 6596, 2022 04 21.
Article in English | MEDLINE | ID: mdl-35449199

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic has impacted healthcare systems across the world. Chest radiography (CXR) can be used as a complementary method for diagnosing/following COVID-19 patients. However, the experience level and workload of technicians and radiologists may affect the decision process. Recent studies suggest that deep learning can be used to assess CXRs, providing an important second opinion for radiologists and technicians in the decision process, and super-human performance in the detection of COVID-19 has been reported in multiple studies. In this study, the clinical applicability of deep learning systems for COVID-19 screening was assessed by testing their performance for the detection of COVID-19. Specifically, four datasets were used: (1) a collection of multiple public datasets (284,793 CXRs); (2) the BIMCV dataset (16,631 CXRs); (3) COVIDGR (852 CXRs); and (4) a private dataset (6,361 CXRs). All datasets were collected retrospectively and consist of only frontal CXR views. A ResNet-18 was trained on each of the datasets for the detection of COVID-19. It is shown that a high dataset bias was present, leading to high performance in intradataset train-test scenarios (area under the curve > 0.98). Significantly lower performances were obtained in interdataset train-test scenarios, however (area under the curve 0.55-0.84 on the collection of public datasets). A subset of the data was then assessed by radiologists for comparison to the automatic systems. Finetuning with radiologist annotations significantly increased performance across datasets (area under the curve 0.61-0.88) and improved the attention on clinical findings in positive COVID-19 CXRs. Nevertheless, tests on CXRs from different hospital services indicate that the screening performance of CXR and automatic systems is limited (area under the curve < 0.6 on emergency service CXRs). However, COVID-19 manifestations can be accurately detected when present, motivating the use of these tools for evaluating disease progression in mild to severe COVID-19 patients.
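The area-under-the-curve figures reported above can be computed directly from classifier scores via the rank-statistic formulation of the AUC; a minimal sketch:

```python
import numpy as np

def auc(scores_neg, scores_pos):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive scores higher than a
    random negative (ties count half)."""
    neg = np.asarray(scores_neg, float)[:, None]
    pos = np.asarray(scores_pos, float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return float(wins / (neg.size * pos.size))
```

Comparing this value for intradataset vs. interdataset test scores is exactly the kind of gap the study uses to expose dataset bias.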


Subjects
COVID-19; Deep Learning; COVID-19/diagnostic imaging; Humans; Radiography; Radiography, Thoracic/methods; Retrospective Studies
9.
Comput Biol Med ; 126: 103995, 2020 11.
Article in English | MEDLINE | ID: mdl-33007620

ABSTRACT

Diabetic retinopathy (DR) is a diabetes complication, which in extreme situations may lead to blindness. Since the first stages are often asymptomatic, regular eye examinations are required for an early diagnosis. As microaneurysms (MAs) are one of the first signs of DR, several automated methods have been proposed for their detection in order to reduce the ophthalmologists' workload. Although local convergence filters (LCFs) have already been applied for feature extraction, their potential as MA enhancement operators had not yet been explored. In this work, we propose a sliding band filter for MA enhancement, aiming at obtaining a set of initial MA candidates. Then, a combination of the filter responses with color, contrast and shape information is used by an ensemble of classifiers for final candidate classification. Finally, for each eye fundus image, a score is computed from the confidence values assigned to the MAs detected in the image. The performance of the proposed methodology was evaluated on four datasets. At the lesion level, sensitivities of 64% and 81% were achieved for an average of 8 false positives per image (FPIs) in e-ophtha MA and SCREEN-DR, respectively. In the latter dataset, an AUC of 0.83 was also obtained for DR detection.
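A heavily simplified sliding band filter can illustrate the enhancement step: a band of radii is slid outward from a candidate pixel and the position where the image gradient converges most strongly is kept (a dark microaneurysm has gradients pointing radially outward from its centre). All parameters here are illustrative, not those of the cited work:

```python
import numpy as np

def sbf_response(img, r0, c0, n_dirs=16, radii=range(1, 8), band=3):
    """Max over band positions of the mean radial gradient on circles
    around (r0, c0): a simplified sliding-band convergence measure."""
    gy, gx = np.gradient(img.astype(float))
    angles = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    conv = []
    for rad in radii:
        rows = np.clip((r0 + rad * np.sin(angles)).round().astype(int),
                       0, img.shape[0] - 1)
        cols = np.clip((c0 + rad * np.cos(angles)).round().astype(int),
                       0, img.shape[1] - 1)
        # project the gradient onto the outward radial direction
        radial = gx[rows, cols] * np.cos(angles) + gy[rows, cols] * np.sin(angles)
        conv.append(radial.mean())
    conv = np.array(conv)
    # slide a fixed-width band over the radii; keep the best position
    return max(conv[i:i + band].mean() for i in range(len(conv) - band + 1))
```

On a synthetic dark blob, the response peaks at the blob centre and stays near zero on flat background, which is what makes thresholded responses usable as initial MA candidates.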


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Microaneurysm; Algorithms; Diabetic Retinopathy/diagnostic imaging; Early Diagnosis; Fundus Oculi; Humans; Microaneurysm/diagnostic imaging
10.
Med Image Anal ; 63: 101715, 2020 07.
Article in English | MEDLINE | ID: mdl-32434128

ABSTRACT

Diabetic retinopathy (DR) grading is crucial in determining the adequate treatment and follow-up of patients, but the screening process can be tiresome and prone to errors. Deep learning approaches have shown promising performance as computer-aided diagnosis (CAD) systems, but their black-box behaviour hinders clinical application. We propose DR|GRADUATE, a novel deep learning-based DR grading CAD system that supports its decision by providing a medically interpretable explanation and an estimation of how uncertain that prediction is, allowing the ophthalmologist to measure how much that decision should be trusted. We designed DR|GRADUATE taking into account the ordinal nature of the DR grading problem. A novel Gaussian-sampling approach built upon a Multiple Instance Learning framework allows DR|GRADUATE to infer an image grade associated with an explanation map and a prediction uncertainty, while being trained only with image-wise labels. DR|GRADUATE was trained on the Kaggle DR detection training set and evaluated across multiple datasets. In DR grading, a quadratic-weighted Cohen's kappa (κ) between 0.71 and 0.84 was achieved on five different datasets. We show that high κ values occur for images with low prediction uncertainty, thus indicating that this uncertainty is a valid measure of the predictions' quality. Further, bad-quality images are generally associated with higher uncertainties, showing that images not suitable for diagnosis indeed lead to less trustworthy predictions. Additionally, tests on unfamiliar medical image data types suggest that DR|GRADUATE allows outlier detection. The attention maps generally highlight regions of interest for diagnosis. These results show the great potential of DR|GRADUATE as a second-opinion system in DR severity grading.
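The quadratic-weighted Cohen's kappa used for evaluation can be computed as follows; this is the standard definition for ordinal labels, independent of DR|GRADUATE:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Quadratic-weighted Cohen's kappa for ordinal labels 0..n_classes-1."""
    O = np.zeros((n_classes, n_classes))          # observed confusion matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # quadratic disagreement weights: (i - j)^2, normalized
    W = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)]) / (n_classes - 1) ** 2
    # expected matrix under independence of the two raters
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()
```

Because the weights grow quadratically with the grade distance, confusing grade 0 with grade 4 is penalized far more than an off-by-one error, matching the ordinal nature of DR grading.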


Subjects
Deep Learning; Diabetes Mellitus; Diabetic Retinopathy; Diabetic Retinopathy/diagnostic imaging; Diagnosis, Computer-Assisted; Fundus Oculi; Humans; Uncertainty
11.
J Med Syst ; 44(4): 81, 2020 Mar 06.
Article in English | MEDLINE | ID: mdl-32140870

ABSTRACT

Lung cancer is considered one of the deadliest diseases in the world. An early and accurate diagnosis aims to promote the detection and characterization of pulmonary nodules, which is of vital importance to increase the patients' survival rates. This characterization is done through a segmentation process, which faces several challenges due to the diversity in nodular shape, size, and texture, as well as the presence of adjacent structures. This paper tackles pulmonary nodule segmentation in computed tomography scans, proposing three distinct methodologies. The first is a conventional approach that applies the Sliding Band Filter (SBF) to estimate the filter's support points, matching the border coordinates. The remaining approaches are deep learning based, using the U-Net and a novel network called SegU-Net to achieve the same goal. Their performance is compared, as this work aims to identify the most promising tool to improve nodule characterization. All methodologies used 2653 nodules from the LIDC database, achieving Dice scores of 0.663, 0.830, and 0.823 for the SBF, U-Net, and SegU-Net, respectively. The U-Net-based models thus yield results closer to the ground truth reference annotated by specialists, making them the more reliable approach for this task. The novel network achieved scores similar to the U-Net, while reducing computational cost and improving memory efficiency. Consequently, this study may contribute to the possible implementation of this model in a decision support system, assisting physicians in establishing a reliable diagnosis of lung pathologies based on this segmentation task.
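The Dice scores reported above follow the standard overlap definition, sketched here for binary masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2*|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float(2 * inter / (pred.sum() + target.sum() + eps))
```

A score of 1 means the predicted nodule mask coincides with the specialist annotation; disjoint masks score 0.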


Subjects
Multiple Pulmonary Nodules/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed; Deep Learning; Diagnosis, Computer-Assisted; Humans; Lung Neoplasms/diagnostic imaging
12.
Med Image Anal ; 59: 101561, 2020 01.
Article in English | MEDLINE | ID: mdl-31671320

ABSTRACT

Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy to avoid vision loss. However, implementation of DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach for such a large-scale screening effort. The recent scientific advances in computing capacity and machine learning approaches provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state of the art in automatic DR diagnosis, a grand challenge on "Diabetic Retinopathy - Segmentation and Grading" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2018). In this paper, we report the set-up and results of this challenge, which is primarily based on the Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal sub-challenges: lesion segmentation, disease severity grading, and localization and segmentation of retinal landmarks. The multiple tasks in this challenge made it possible to test the generalizability of algorithms, which distinguishes it from existing challenges. It received a positive response from the scientific community, with 148 submissions effectively entered from 495 registrations. This paper outlines the challenge, its organization, the dataset used, the evaluation methods, and the results of the top-performing participating solutions. The top-performing approaches utilized a blend of clinical information, data augmentation, and an ensemble of models. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular.


Subjects
Deep Learning; Diabetic Retinopathy/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/methods; Photography; Datasets as Topic; Humans; Pattern Recognition, Automated
13.
PLoS One ; 13(4): e0194702, 2018.
Article in English | MEDLINE | ID: mdl-29668759

ABSTRACT

BACKGROUND: Changes in retinal vessel caliber are associated with a variety of major diseases, namely diabetes, hypertension and atherosclerosis. The clinical assessment of these changes in fundus images is tiresome and prone to errors, and thus automatic methods are desirable for objective and precise caliber measurement. However, the variability of blood vessel appearance, image quality and resolution makes the development of these tools a non-trivial task. METHODOLOGY: A method for the estimation of vessel caliber in eye fundus images via vessel cross-sectional intensity profile model fitting is herein proposed. First, the vessel centerlines are determined and individual segments are extracted and smoothed by spline approximation. Then, the corresponding cross-sectional intensity profiles are determined, post-processed and ultimately fitted by newly proposed parametric models. These models are based on Difference-of-Gaussians (DoG) curves modified through a multiplying line with varying inclination. With this, the proposed models can describe profile asymmetry, allowing a good adjustment to the most difficult profiles, namely those showing central light reflex. Finally, the parameters of the best-fit model are used to determine the vessel width using ensembles of bagged regression trees with random feature selection. RESULTS AND CONCLUSIONS: The performance of our approach is evaluated on the REVIEW public dataset by comparing the vessel cross-sectional profile fitting of the proposed modified DoG models with 7 and 8 parameters against a Hermite model with 6 parameters. Results on different goodness-of-fit metrics indicate that our models are consistently better at fitting the vessel profiles. Furthermore, our width measurement algorithm achieves a precision close to that of the observers, outperforming state-of-the-art methods, and retrieving the highest precision when evaluated using cross-validation. This high performance supports the robustness of the algorithm and validates its use in retinal vessel width measurement and its possible integration in a system for retinal vasculature assessment.
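The modified DoG profile model can be sketched directly from its description: a Difference-of-Gaussians multiplied by a line of varying inclination, which breaks the profile symmetry. Parameter names here are illustrative, not the paper's notation:

```python
import numpy as np

def modified_dog(x, a1, s1, a2, s2, m, c):
    """DoG cross-sectional profile multiplied by a line (1 + m*x).
    m = 0 gives a symmetric profile; m != 0 introduces the asymmetry
    needed to fit profiles with a one-sided central light reflex."""
    dog = a1 * np.exp(-x**2 / (2 * s1**2)) - a2 * np.exp(-x**2 / (2 * s2**2))
    return (1.0 + m * x) * dog + c

x = np.linspace(-10, 10, 201)
symmetric = modified_dog(x, 1.0, 4.0, 0.4, 1.5, 0.0, 0.1)
asymmetric = modified_dog(x, 1.0, 4.0, 0.4, 1.5, 0.05, 0.1)
```

In the paper's pipeline, the best-fit parameters of such a model (rather than the raw profile) feed the regression-tree ensemble that outputs the final vessel width.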


Subjects
Fundus Oculi; Image Processing, Computer-Assisted; Models, Theoretical; Retinal Vessels/diagnostic imaging; Algorithms; Databases, Factual; Humans; Image Processing, Computer-Assisted/methods; Reproducibility of Results
14.
IEEE Trans Med Imaging ; 37(3): 781-791, 2018 03.
Article in English | MEDLINE | ID: mdl-28981409

ABSTRACT

In medical image analysis applications, the availability of large amounts of annotated data is becoming increasingly critical. However, annotated medical data is often scarce and costly to obtain. In this paper, we address the problem of synthesizing retinal color images by applying recent techniques based on adversarial learning. In this setting, a generative model is trained to maximize a loss function provided by a second model attempting to classify its output as real or synthetic. In particular, we propose to implement an adversarial autoencoder for the task of retinal vessel network synthesis. We use the generated vessel trees as an intermediate stage for the generation of color retinal images, which is accomplished with a generative adversarial network. Both models require the optimization of almost-everywhere differentiable loss functions, which allows us to train them jointly. The resulting model offers an end-to-end retinal image synthesis system capable of generating as many retinal images as the user requires, with their corresponding vessel networks, by sampling from a simple probability distribution that we impose on the associated latent space. We show that the learned latent space contains a well-defined semantic structure, implying that we can perform calculations in the space of retinal images, e.g., smoothly interpolating new data points between two retinal images. Visual and quantitative results demonstrate that the synthesized images are substantially different from those in the training set, while also being anatomically consistent and displaying a reasonable visual quality.
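The latent-space interpolation mentioned above can be sketched as straight-line interpolation between two latent codes; feeding each intermediate point to the trained generator (not shown, and hypothetical here) would yield the intermediate retinal images:

```python
import numpy as np

def interpolate_latents(z1, z2, steps=5):
    """Linear interpolation between two latent codes z1 and z2."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1 - t) * z1 + t * z2 for t in ts]

z1, z2 = np.zeros(4), np.ones(4)  # toy latent codes
path = interpolate_latents(z1, z2, steps=5)
```

This only produces meaningful intermediate images because the training imposes a simple, smooth prior on the latent space, which is the property the paper highlights.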


Subjects
Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Retinal Vessels/diagnostic imaging; Algorithms; Diagnostic Techniques, Ophthalmological; Humans; Retina/diagnostic imaging
15.
Comput Biol Med ; 56: 1-12, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25464343

ABSTRACT

BACKGROUND: The optic disc (OD) centre and boundary are important landmarks in retinal images and are essential for automating the calculation of health biomarkers related to some prevalent systemic disorders, such as diabetes, hypertension, and cerebrovascular and cardiovascular diseases. METHODS: This paper presents an automatic approach for OD segmentation using a multiresolution sliding band filter (SBF). After the preprocessing phase, a low-resolution SBF is applied on a downsampled retinal image and the locations of maximal filter response are used to focus the analysis on a reduced region of interest (ROI). A high-resolution SBF is then applied to obtain a set of pixels associated with its maximum response, giving a coarse estimation of the OD boundary, which is regularized using a smoothing algorithm. RESULTS: Our results are compared with manually extracted boundaries from public databases (ONHSD, MESSIDOR and INSPIRE-AVR), outperforming recent approaches for OD segmentation. For ONHSD, 44% of the results are classified as Excellent, while the remaining images are distributed between the Good (47%) and Fair (9%) categories. An average overlapping area of 83%, 89% and 85% is achieved for the images in the ONHSD, MESSIDOR and INSPIRE-AVR datasets, respectively, when compared with the manually delineated OD regions. DISCUSSION: The evaluation results on the images of the three datasets demonstrate the better performance of the proposed method compared to recently published OD segmentation approaches, and demonstrate its robustness to changes in image characteristics such as size, quality and camera field of view.


Subjects
Algorithms; Image Processing, Computer-Assisted/methods; Optical Imaging/instrumentation; Optical Imaging/methods; Retina/pathology; Databases, Factual; Humans
16.
IEEE Trans Image Process ; 23(3): 1073-83, 2014 Mar.
Article in English | MEDLINE | ID: mdl-23693131

ABSTRACT

The classification of retinal vessels into artery/vein (A/V) is an important phase for automating the detection of vascular changes, and for the calculation of characteristic signs associated with several systemic diseases such as diabetes, hypertension, and other cardiovascular conditions. This paper presents an automatic approach for A/V classification based on the analysis of a graph extracted from the retinal vasculature. The proposed method classifies the entire vascular tree deciding on the type of each intersection point (graph nodes) and assigning one of two labels to each vessel segment (graph links). Final classification of a vessel segment as A/V is performed through the combination of the graph-based labeling results with a set of intensity features. The results of this proposed method are compared with manual labeling for three public databases. Accuracy values of 88.3%, 87.4%, and 89.8% are obtained for the images of the INSPIRE-AVR, DRIVE, and VICAVR databases, respectively. These results demonstrate that our method outperforms recent approaches for A/V classification.
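The graph-based labeling step can be caricatured with a toy propagation rule: segments meeting at a bifurcation share a label, while labels flip across a crossing (an artery passes over a vein). The data structures below are hypothetical, not the paper's implementation, and the real method additionally combines this with intensity features:

```python
from collections import deque

def propagate(adjacent, node_type, seed, seed_label):
    """Propagate binary A/V labels (0/1) over vessel segments.
    adjacent: segment -> list of (neighbour segment, shared node).
    node_type: node -> "bifurcation" (same label) or "crossing" (flip)."""
    labels = {seed: seed_label}
    queue = deque([seed])
    while queue:
        seg = queue.popleft()
        for nb, node in adjacent[seg]:
            lab = labels[seg] if node_type[node] == "bifurcation" else 1 - labels[seg]
            if nb not in labels:
                labels[nb] = lab
                queue.append(nb)
    return labels

# Toy vasculature: A-B meet at a bifurcation, B-C at a crossing
adjacent = {"A": [("B", "n1")], "B": [("A", "n1"), ("C", "n2")], "C": [("B", "n2")]}
node_type = {"n1": "bifurcation", "n2": "crossing"}
labels = propagate(adjacent, node_type, "A", 0)  # 0 = artery, 1 = vein (arbitrary)
```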


Subjects
Artificial Intelligence, Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Retinal Artery/pathology, Retinal Diseases/pathology, Retinal Vein/pathology, Retinoscopy/methods, Algorithms, Humans, Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity
17.
Article in English | MEDLINE | ID: mdl-25571444

ABSTRACT

This paper introduces RetinaCAD, a system for the fast, reliable and automatic measurement of the Central Retinal Arteriolar Equivalent (CRAE), the Central Retinal Venular Equivalent (CRVE) and the Arteriolar-to-Venular Ratio (AVR), as well as several geometrical features of the retinal vasculature. RetinaCAD identifies important landmarks in the retina, such as the blood vessels and the optic disc, and performs artery/vein classification and vessel width measurement. Estimation of the CRAE, CRVE and AVR values on 480 images from 120 subjects showed a significant correlation between right and left eyes, and also between images of the same eye acquired with different camera fields of view. AVR estimation in retinal images of 54 subjects showed the lowest values in people with diabetes or high blood pressure, demonstrating the potential of the system as a CAD tool for the early detection and follow-up of diabetes, hypertension or cardiovascular pathologies.
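The abstract does not state which vessel-summary formulas RetinaCAD implements; the revised Knudtson "big six" formulas are the standard way CRAE and CRVE are computed from the six widest arterioles and venules, so a minimal sketch under that assumption is:

```python
import math

def knudtson_equivalent(widths, k):
    """Revised Knudtson combination: take the six widest vessels, then
    repeatedly pair the largest with the smallest remaining width using
    w = k * sqrt(w1^2 + w2^2), carrying any odd width to the next round.
    k = 0.88 for arterioles (CRAE), k = 0.95 for venules (CRVE)."""
    w = sorted(widths, reverse=True)[:6]
    while len(w) > 1:
        w.sort(reverse=True)
        paired = []
        while len(w) > 1:
            a, b = w.pop(0), w.pop(-1)   # pair largest with smallest
            paired.append(k * math.sqrt(a * a + b * b))
        paired.extend(w)                  # odd width carried over
        w = paired
    return w[0]

def avr(artery_widths, vein_widths):
    """Arteriolar-to-Venular Ratio from the two equivalents."""
    crae = knudtson_equivalent(artery_widths, 0.88)
    crve = knudtson_equivalent(vein_widths, 0.95)
    return crae / crve
```

With equal widths on both sides the AVR falls below 1, reflecting the smaller arteriolar branching coefficient (0.88 vs 0.95).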


Subjects
Retinal Artery/pathology, Retinal Vein/pathology, Blood Pressure, Diabetes Mellitus/physiopathology, Diagnosis, Computer-Assisted, Humans, Image Processing, Computer-Assisted, Retinal Artery/physiopathology, Retinal Vein/physiopathology, User-Computer Interface
18.
Comput Math Methods Med ; 2013: 218415, 2013.
Article in English | MEDLINE | ID: mdl-24171044

ABSTRACT

This paper describes a new methodology for lane detection in Thin-Layer Chromatography images. An approach based on the continuous wavelet transform is used to enhance the relevant lane information contained in the intensity profile obtained from image data projection. Lane detection proceeds in three phases: the first obtains a set of candidate lanes, which are validated or removed in the second phase; in the third phase, lane limits are calculated, and subtle lanes are recovered. The superior performance of the new solution was confirmed by a comparison with three other methodologies previously described in the literature.
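The enhancement step can be illustrated by projecting the image onto the horizontal axis and convolving the resulting intensity profile with Mexican-hat (Ricker) wavelets at a few scales, taking local maxima of the summed response as candidate lane centres. This is a minimal sketch: the scales, kernel lengths and peak test are hypothetical, and the paper's validation and lane-limit phases are not reproduced.

```python
import numpy as np

def ricker(points, a):
    """Mexican-hat (Ricker) wavelet, the usual CWT kernel."""
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return amp * (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

def lane_candidates(image, scales=(2, 4, 8)):
    """Candidate lane centres from the CWT-enhanced projection profile."""
    profile = image.sum(axis=0).astype(float)
    response = np.zeros_like(profile)
    for a in scales:
        # kernels kept shorter than the profile so 'same' keeps its length
        response += np.convolve(profile, ricker(10 * a, a), mode="same")
    # simple local-maximum test on the positive part of the response
    return [i for i in range(1, len(response) - 1)
            if response[i] > response[i - 1]
            and response[i] >= response[i + 1]
            and response[i] > 0]
```

On a synthetic image with two bright vertical lanes, the candidates cluster at the lane centres.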


Subjects
Chromatography, Thin Layer/statistics & numerical data, Algorithms, Databases, Factual/statistics & numerical data, Fabry Disease/diagnosis, Fabry Disease/metabolism, Humans, Image Processing, Computer-Assisted, Mass Screening/statistics & numerical data, Trihexosylceramides/analysis, Wavelet Analysis
19.
Comput Med Imaging Graph ; 37(5-6): 409-17, 2013.
Article in English | MEDLINE | ID: mdl-23726437

ABSTRACT

This paper describes a new methodology for the automatic location of the optic disc (OD) in retinal images, based on combining information taken from the blood vessel network with intensity data. The distribution of vessel orientations around an image point is quantified using the new concept of entropy of vascular directions. The robustness of the method for OD localization is improved by constraining the search for maximal entropy values to image areas with high intensities. The method obtained a valid OD location in 1357 of the 1361 images in the four datasets.
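The core quantity can be sketched directly: near the optic disc, vessels approach from many directions, so a histogram of local vessel orientations is close to uniform and its Shannon entropy is high, while along a single vessel the histogram is concentrated and the entropy is low. The function name and bin count below are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def direction_entropy(orientations, n_bins=8):
    """Shannon entropy (bits) of a set of local vessel orientations.

    Orientations are undirected, so they are folded into [0, pi)
    before histogramming; a uniform spread of directions gives the
    maximal entropy log2(n_bins), a single direction gives 0.
    """
    hist, _ = np.histogram(np.mod(orientations, np.pi),
                           bins=n_bins, range=(0, np.pi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

Scanning this score over candidate points, restricted to bright image areas as in the paper, yields the OD location as the entropy maximum.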


Subjects
Image Interpretation, Computer-Assisted/methods, Image Processing, Computer-Assisted/methods, Optic Disk/blood supply, Pattern Recognition, Automated/methods, Retinal Vessels, Algorithms, Databases, Factual, Entropy, Humans
20.
IEEE Trans Med Imaging ; 29(8): 1463-73, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20525532

ABSTRACT

Microscopy cell image analysis is a fundamental tool for biological research. In particular, multivariate fluorescence microscopy is used to observe different aspects of cells in cultures. It is still common practice to perform analysis tasks by visual inspection of individual cells, which is time-consuming, exhausting and prone to subjective bias. This makes automatic cell image analysis essential for large-scale, objective studies of cell cultures. Traditionally, automatic cell analysis is approached through image segmentation methods that extract cell locations and shapes. Image segmentation, although fundamental, is neither an easy task in computer vision nor robust to changes in image quality, which leaves segmentation-based cell detection only semi-automated, requiring frequent parameter tuning. We introduce a new approach for cell detection and shape estimation in multivariate images based on the sliding band filter (SBF). This filter's design makes it well suited to detecting overall convex shapes, so it performs well for cell detection, and its parameters are intuitive, as they are directly related to the expected cell size. Using the SBF we detect the locations and shapes of cell nuclei and cytoplasm. Based on the assumption that each cell has approximately the same shape centre in both the nuclear and cytoplasmic fluorescence channels, we guide cytoplasm shape estimation by the nuclear detections, improving performance and reducing errors. We then validate cell detections by gathering evidence from the nuclei and cytoplasm channels. Additionally, overlap correction and shape regularization steps further improve the estimated cell shapes.
The approach is evaluated using two datasets with different types of data: a 20-image benchmark set of simulated cell culture images containing 1000 simulated cells, and a 16-image Drosophila melanogaster Kc167 dataset containing 1255 cells, stained for DNA and actin. Both datasets pose a difficult problem due to the high variability of cell shapes and frequent overlap between clustered cells. On the Drosophila dataset our approach achieved precision/recall of 95%/69% for nuclei and 82%/90% for cytoplasm detection, and an overall accuracy of 76%.
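The reported precision/recall figures presuppose a matching rule between detected and ground-truth centres; the abstract does not give it, so the sketch below uses a simple greedy nearest-centre matching within a hypothetical distance threshold. The paper's actual matching criterion may differ.

```python
import math

def detection_metrics(detected, ground_truth, max_dist=10.0):
    """Precision/recall for point detections via greedy matching.

    Each detection is matched to the closest unmatched ground-truth
    centre within max_dist pixels; matched pairs count as true
    positives (an illustrative evaluation protocol, not the paper's).
    """
    gt = list(ground_truth)
    tp = 0
    for dx, dy in detected:
        best, best_d = None, max_dist
        for i, (gx, gy) in enumerate(gt):
            d = math.hypot(dx - gx, dy - gy)
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            gt.pop(best)   # each ground-truth centre matched at most once
            tp += 1
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```

For example, three detections against two nearby ground-truth centres yield precision 2/3 and recall 1.0.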


Subjects
Cell Nucleus/ultrastructure, Cytoplasm/ultrastructure, Image Processing, Computer-Assisted/methods, Microscopy, Fluorescence/methods, Actins/chemistry, Animals, Cell Aggregation, Cell Nucleus/chemistry, Cell Shape, Cytoplasm/chemistry, DNA/chemistry, Databases, Factual, Drosophila melanogaster, Multivariate Analysis, Reproducibility of Results