Results 1 - 10 of 10
1.
Sensors (Basel) ; 22(19)2022 Sep 26.
Article in English | MEDLINE | ID: mdl-36236402

ABSTRACT

Since the beginning of the COVID-19 pandemic, many works have been published proposing solutions to the problems that arose in this scenario. One of the topics that attracted the most attention is the development of computer-based strategies to detect COVID-19 from thoracic medical imaging, such as chest X-ray (CXR) and computed tomography (CT). A search for works already published on this theme easily returns thousands of them. This is partly explained by the fact that the most severe worldwide pandemic emerged amid recently achieved technological advances, including the computational resources needed to handle the large amount of data produced in this context. Although several of these works describe important advances, others only apply well-known methods and techniques without a more substantial or critical contribution. Hence, identifying the works with the most relevant contributions is not a trivial task. The number of citations a paper receives is probably the most straightforward and intuitive way to gauge its impact on the research community. To help researchers in this scenario, we present a review of the 100 most cited papers in this field of investigation according to the Google Scholar search engine. We analyze the distribution of these top-100 papers with respect to several important aspects, such as the type of medical imaging explored, learning settings, segmentation strategy, explainable artificial intelligence (XAI), and dataset and code availability.


Subject(s)
COVID-19 , Artificial Intelligence , COVID-19/diagnostic imaging , Humans , Pandemics , SARS-CoV-2 , Tomography, X-Ray Computed/methods , X-Rays
2.
Sensors (Basel) ; 21(21)2021 Oct 27.
Article in English | MEDLINE | ID: mdl-34770423

ABSTRACT

COVID-19 frequently provokes pneumonia, which can be diagnosed using imaging exams. Chest X-ray (CXR) is often useful because it is cheap, fast, widespread, and uses less radiation. Here, we demonstrate the impact of lung segmentation on COVID-19 identification using CXR images and evaluate which regions of the image most influenced the decisions. Semantic segmentation was performed using a U-Net CNN architecture, and classification was performed using three CNN architectures (VGG, ResNet, and Inception). Explainable Artificial Intelligence techniques were employed to estimate the impact of segmentation. A three-class database was composed: lung opacity (pneumonia), COVID-19, and normal. We assessed the impact of building a CXR image database from different sources, and the generalization of COVID-19 identification from one source to another. The segmentation achieved a Jaccard distance of 0.034 and a Dice coefficient of 0.982. The classification using segmented images achieved an F1-score of 0.88 in the multi-class setup and 0.83 for COVID-19 identification. In the cross-dataset scenario, we obtained an F1-score of 0.74 and an area under the ROC curve of 0.9 for COVID-19 identification using segmented images. The experiments support the conclusion that even after segmentation, a strong bias is introduced by underlying factors of the different sources.
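For illustration, a minimal sketch (not the authors' code) of the two segmentation metrics reported above and of masking a CXR image with a predicted lung mask before classification; the function names and the NumPy-based implementation are assumptions:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum())

def jaccard_distance(pred: np.ndarray, target: np.ndarray) -> float:
    """Jaccard distance = 1 - intersection over union (0.0 = perfect overlap)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return 1.0 - intersection / union

def apply_lung_mask(cxr: np.ndarray, lung_mask: np.ndarray) -> np.ndarray:
    """Zero out everything outside the predicted lung region so the classifier
    only sees lung content, isolating the effect of segmentation."""
    return cxr * lung_mask.astype(cxr.dtype)
```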


Subject(s)
COVID-19 , Deep Learning , Artificial Intelligence , Humans , Lung/diagnostic imaging , SARS-CoV-2 , X-Rays
3.
Article in English | MEDLINE | ID: mdl-38408009

ABSTRACT

Dataset scaling, a.k.a. normalization, is an essential preprocessing step in a machine learning (ML) pipeline. It aims to adjust the scale of attributes so that they all vary within the same range. This transformation is known to improve the performance of classification models. Still, there are several scaling techniques (STs) to choose from, and no ST is guaranteed to be the best for a dataset regardless of the classifier chosen. It is thus a problem- and classifier-dependent decision. Furthermore, selecting the wrong technique can cause a huge difference in performance; hence, the choice should not be neglected. That said, the trial-and-error process of finding the most suitable technique for a particular dataset can be infeasible. As an alternative, we propose the Meta-scaler, which uses meta-learning (MtL) to build meta-models that automatically select the best ST for a given dataset and classification algorithm. The meta-models learn to represent the relationship between meta-features extracted from the datasets and the performance of specific classification algorithms on these datasets when scaled with different techniques. Our experiments using 12 base classifiers, 300 datasets, and five STs demonstrate the feasibility and effectiveness of the approach. When using the ST selected by the Meta-scaler for each dataset, 10 of the 12 base models tested achieved statistically significantly better classification performance than with any fixed choice of a single ST. The Meta-scaler also outperforms state-of-the-art MtL approaches for ST selection. The source code, data, and results from the experiments in this article are available at a GitHub repository (http://github.com/amorimlb/meta_scaler).
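As an illustration of the idea (not the published implementation), a sketch of selecting a scaling technique with a meta-model; the meta-features and the scikit-learn scaler set below are placeholders, not the paper's actual choices:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import (MaxAbsScaler, MinMaxScaler, QuantileTransformer,
                                   RobustScaler, StandardScaler)

# Candidate scaling techniques (STs); the specific set here is illustrative.
SCALERS = {
    "standard": StandardScaler,
    "minmax": MinMaxScaler,
    "maxabs": MaxAbsScaler,
    "robust": RobustScaler,
    "quantile": QuantileTransformer,
}

def meta_features(X: np.ndarray) -> np.ndarray:
    """Toy meta-feature vector describing a dataset (placeholder features)."""
    return np.array([
        X.shape[0],                                   # number of instances
        X.shape[1],                                   # number of attributes
        float(np.mean(np.std(X, axis=0))),            # average attribute spread
        float(np.mean(np.abs(np.mean(X, axis=0)))),   # average attribute offset
    ])

# The meta-model maps meta-features to the name of the best ST for a fixed base
# classifier. It is trained offline on many datasets, each labelled with the ST
# that maximized the base classifier's performance on that dataset.
meta_model = RandomForestClassifier(n_estimators=100, random_state=0)

def select_scaler(X_new: np.ndarray):
    """Pick a scaler for an unseen dataset with the (already trained) meta-model."""
    st_name = meta_model.predict(meta_features(X_new).reshape(1, -1))[0]
    return SCALERS[st_name]()
```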

4.
Sci Rep ; 12(1): 487, 2022 01 11.
Article in English | MEDLINE | ID: mdl-35017537

ABSTRACT

The sea surface temperature (SST) is an environmental indicator closely related to climate, weather, and atmospheric events worldwide. Its forecasting is essential for supporting the decisions of governments and environmental organizations. The literature has shown that single machine learning (ML) models are generally more accurate than traditional statistical models for SST time series modeling. However, tuning the parameters of these ML models is a challenging task, especially when complex phenomena such as SST forecasting are addressed. Misspecification, overfitting, or underfitting of the ML models can lead to underperforming forecasts. This work proposes hybrid systems (HS) that combine ML models through residual forecasting as an alternative to enhance the performance of SST forecasting. In this context, two types of combinations are evaluated using two ML models: support vector regression (SVR) and long short-term memory (LSTM). The experimental evaluation was performed on three datasets from different regions of the Atlantic Ocean using three well-known measures: mean squared error (MSE), mean absolute percentage error (MAPE), and mean absolute error (MAE). The best SVR-based HS improved the MSE value for each analyzed series by [Formula: see text], [Formula: see text], and [Formula: see text] compared to its respective single model. The HS employing the LSTM improved by [Formula: see text], [Formula: see text], and [Formula: see text] with respect to the single LSTM model. Compared with approaches from the literature, at least one version of the HS attained higher accuracy than the statistical and ML models in all study cases. In particular, the nonlinear combination of the ML models obtained the best performance among the proposed HS versions.
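The residual-based combination can be sketched roughly as follows; this is an illustration only, using SVR for both stages (in the paper the models are SVR and LSTM, and a nonlinear combination is also evaluated), and the window length and function names are assumptions:

```python
import numpy as np
from sklearn.svm import SVR

def make_windows(series: np.ndarray, lags: int):
    """Sliding-window design matrix: `lags` past values predict the next one."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]
    return X, y

def fit_hybrid(series: np.ndarray, lags: int = 12):
    """Stage 1 models the SST series; stage 2 models the stage-1 residuals."""
    X, y = make_windows(series, lags)
    base = SVR().fit(X, y)
    residuals = y - base.predict(X)
    Xr, yr = make_windows(residuals, lags)
    resid_model = SVR().fit(Xr, yr)
    return base, resid_model

def forecast(base, resid_model, series_window, resid_window):
    """Linear combination: series forecast plus residual forecast. A nonlinear
    combination would instead feed both outputs to a third model."""
    return base.predict([series_window])[0] + resid_model.predict([resid_window])[0]
```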

5.
Neural Netw ; 88: 114-124, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28236678

ABSTRACT

This paper proposes a method to perform time series prediction based on perturbation theory. The approach continuously adjusts an initial forecasting model to asymptotically approximate the desired time series model. First, a predictive model generates an initial forecast for a time series. Second, a residual time series is calculated as the difference between the original time series and the initial forecast. If that residual series is not white noise, it can be used to improve the accuracy of the initial model, and a new predictive model is fitted to the residual series. The whole process is repeated until convergence or until the residual series becomes white noise. The output of the method is then given by summing the outputs of all trained predictive models, in a perturbative sense. To test the method, an experimental investigation was conducted on six real-world time series. A comparison was made with six other methods evaluated in our experiments and with ten results reported in the literature. The results show that the proposed method not only significantly improves the performance of the initial model but also outperforms the previously published results.
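A minimal sketch of the iterative scheme described above (not the paper's code); the base learner, window length, and the crude white-noise check are assumptions made for illustration:

```python
import numpy as np
from sklearn.svm import SVR

def looks_like_white_noise(resid: np.ndarray, max_lag: int = 10, z: float = 1.96) -> bool:
    """Crude check: all sample autocorrelations lie within +/- z / sqrt(n)."""
    n = len(resid)
    r = resid - resid.mean()
    denom = np.sum(r * r)
    acf = np.array([np.sum(r[:-k] * r[k:]) / denom for k in range(1, max_lag + 1)])
    return bool(np.all(np.abs(acf) < z / np.sqrt(n)))

def perturbative_fit(series: np.ndarray, lags: int = 12, max_models: int = 5):
    """Fit a model, compute its residual series, and keep fitting new models to
    the residuals until they look like white noise (or a model budget is hit).
    The final forecast is the sum of the outputs of all trained models."""
    models, target = [], series.copy()
    for _ in range(max_models):
        X = np.array([target[i:i + lags] for i in range(len(target) - lags)])
        y = target[lags:]
        model = SVR().fit(X, y)
        models.append(model)
        residuals = y - model.predict(X)
        if looks_like_white_noise(residuals):
            break
        target = residuals
    return models
```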


Subject(s)
Forecasting/methods , Interrupted Time Series Analysis/methods , Models, Theoretical , Humans , Neural Networks, Computer
6.
PLoS One ; 11(2): e0149943, 2016.
Article in English | MEDLINE | ID: mdl-26919587

ABSTRACT

Image segmentation of retinal blood vessels is a process that can help to predict and diagnose cardiovascular-related diseases, such as hypertension and diabetes, which are known to affect the appearance of the retinal blood vessels. This work proposes an unsupervised method for the segmentation of retinal vessel images that combines a matched filter, Frangi's filter, and a Gabor wavelet filter to enhance the images. Combining these three filters to improve the segmentation is the main motivation of this work. We investigate two approaches to perform the filter combination: weighted mean and median ranking. Segmentation methods are tested after the vessel enhancement. Images enhanced with the median ranking are segmented using a simple threshold criterion. Two segmentation procedures are applied to retinal images enhanced with the weighted mean approach: the first is based on deformable models and the second uses fuzzy C-means. The procedure is evaluated using two public image databases, DRIVE and STARE. The experimental results demonstrate that the proposed methods perform well for vessel segmentation in comparison with state-of-the-art methods.
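A rough sketch of the two combination rules (illustrative only; the filter responses, e.g. the matched-filter, Frangi, and Gabor-wavelet outputs, are assumed to have been computed beforehand, and the function names are hypothetical):

```python
import numpy as np

def weighted_mean_combination(responses, weights):
    """Pixel-wise weighted mean of min-max normalized filter responses."""
    responses = [(r - r.min()) / (np.ptp(r) + 1e-12) for r in responses]
    weights = np.asarray(weights, dtype=float)
    return sum(w * r for w, r in zip(weights, responses)) / weights.sum()

def median_ranking_combination(responses):
    """Rank pixels within each response map, then take the per-pixel median rank."""
    ranks = []
    for r in responses:
        order = r.ravel().argsort().argsort()   # rank of each pixel in this map
        ranks.append(order.reshape(r.shape))
    return np.median(np.stack(ranks), axis=0)

# enhanced_mr = median_ranking_combination([matched, frangi_resp, gabor_resp])
# vessels = enhanced_mr > threshold   # simple threshold, as used with median ranking
# enhanced_wm = weighted_mean_combination([matched, frangi_resp, gabor_resp], [1, 1, 1])
# (deformable models or fuzzy C-means would then segment enhanced_wm)
```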


Subject(s)
Cardiovascular Diseases/diagnosis , Image Enhancement/methods , Retinal Vessels/pathology , Algorithms , Cardiovascular Diseases/pathology , Humans
7.
PLoS One ; 10(9): e0138507, 2015.
Article in English | MEDLINE | ID: mdl-26414182

ABSTRACT

The particulate matter (PM) concentration has been one of the most relevant environmental concerns in recent decades due to its harmful effects on living beings and the earth's atmosphere. High PM concentrations affect human health in several ways, leading to short- and long-term diseases. Thus, forecasting systems have been developed to support the decisions of organizations and governments and to alert the population. Forecasting systems based on Artificial Neural Networks (ANNs) have been highlighted in the literature due to their performance. In general, three ANN-based approaches have been found for this task: ANNs trained via learning algorithms, hybrid systems that combine search algorithms with ANNs, and hybrid systems that combine ANNs with other forecasters. Regardless of the approach, it is common to assume that the residuals (error series), obtained as the difference between the actual series and its forecast, behave as white noise. However, this assumption may be violated due to misspecification of the forecasting model, the complexity of the time series, or temporal patterns of the phenomenon not captured by the forecaster. This paper proposes an approach to improve the performance of PM forecasters through residual modeling. The approach analyzes the remaining residuals recursively in search of temporal patterns. At each iteration, if there are temporal patterns in the residuals, the approach forecasts the residuals in order to improve the forecast of the PM time series. The proposed approach can be used with a single forecaster or by combining two or more forecasting models. In this study, the approach is used to improve the performance of a hybrid system (HS) composed of a genetic algorithm (GA) and an ANN, with residual modeling performed by two methods: an ANN and the hybrid system itself. Experiments were performed on PM2.5 and PM10 concentration series from the Kallio and Vallila stations in Helsinki and evaluated using six metrics. The experimental results show that the proposed approach improves the accuracy of the forecasting method in terms of the fitness function in all cases, when compared with the method without correction. The correction via HS obtained superior performance, reaching the best results in terms of the fitness function and in five out of six metrics. These results also held when a sensitivity analysis was performed varying the proportions of the training, validation, and test sets. The proposed approach achieved consistent results when compared with the forecasting method without correction, showing that it can be a useful tool for correcting PM forecasters.


Subject(s)
Environmental Monitoring/methods , Particulate Matter/analysis , Air Pollution/analysis , Computer Simulation , Finland , Humans , Models, Theoretical , Neural Networks, Computer , Particle Size , Time Factors
8.
PLoS One ; 9(12): e115967, 2014.
Article in English | MEDLINE | ID: mdl-25542018

ABSTRACT

Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on prior knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons to the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer have lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that CANet outperforms other methods presented in the literature.
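To illustrate only the constructive aspect (CANet's receptive fields, output-layer pruning, and lateral inhibition are not reproduced here), a sketch of growing the hidden layer of an autoassociative network while reconstruction error keeps improving; all parameters and names are illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def grow_autoassociative_net(X, start=4, step=4, max_hidden=64, tol=1e-3):
    """Constructively grow the hidden layer of an autoassociative (auto-encoding)
    network: add hidden neurons while the reconstruction error still improves."""
    best_err, best_model = np.inf, None
    hidden = start
    while hidden <= max_hidden:
        model = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000,
                             random_state=0).fit(X, X)   # autoassociative: target == input
        err = float(np.mean((model.predict(X) - X) ** 2))
        if best_err - err < tol:        # no meaningful improvement: stop growing
            break
        best_err, best_model = err, model
        hidden += step
    return best_model, best_err
```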


Subject(s)
Algorithms , Face/anatomy & histology , Neural Networks, Computer , Emotions , Humans
9.
IEEE Trans Cybern ; 43(6): 2082-91, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23757517

ABSTRACT

The human visual system is one of the most fascinating and complex mechanisms of the central nervous system, enabling our capacity to see. It is through the visual system that we accomplish tasks ranging from the simplest, such as object recognition, to the most complex visual interpretation, understanding, and perception. Inspired by this sophisticated system, two models based on the properties of the human visual system are proposed. These models are designed around the concepts of receptive and inhibitory fields. The first model is a pyramidal neural network with lateral inhibition, called the lateral inhibition pyramidal neural network. The second proposed model is a supervised image segmentation system, called segmentation and classification based on receptive fields. This work shows that the combination of these two models is beneficial, and the results obtained are better than those of other state-of-the-art methods.
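Lateral inhibition is often modelled as a center-surround (difference-of-Gaussians) operation over a feature map; the sketch below uses that generic formulation, which is an assumption and not the specific mechanism of the proposed networks:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lateral_inhibition(feature_map, sigma_center=1.0, sigma_surround=3.0, strength=1.0):
    """Each unit is excited by its local neighbourhood (narrow Gaussian) and
    inhibited by a wider surrounding neighbourhood (broad Gaussian)."""
    center = gaussian_filter(feature_map, sigma_center)
    surround = gaussian_filter(feature_map, sigma_surround)
    return np.maximum(center - strength * surround, 0.0)   # rectified response
```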


Subject(s)
Biomimetics/methods , Nerve Net/physiology , Neural Inhibition/physiology , Neural Networks, Computer , Pattern Recognition, Visual/physiology , Pyramidal Cells/physiology , Visual Perception/physiology , Algorithms , Artificial Intelligence , Humans
10.
Article in English | MEDLINE | ID: mdl-19964312

ABSTRACT

Tissue classification in mammography can help the diagnosis of breast cancer by separating healthy tissue from lesions. We present herein the use of three texture descriptors for breast tissue segmentation: the Sum Histogram, the Gray Level Co-Occurrence Matrix (GLCM), and the Local Binary Pattern (LBP). A modification of the LBP is also proposed for a better distinction between the tissues. In order to segment the image into its tissues, these descriptors are compared using a fidelity index and two clustering algorithms: k-Means and Self-Organizing Maps (SOM).
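A brief sketch of how such a pipeline could look with off-the-shelf tools (scikit-image and scikit-learn); the patch extraction, descriptor parameters, and function names are assumptions, and the proposed LBP modification and the SOM variant are not shown:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.cluster import KMeans

def patch_features(patch: np.ndarray) -> np.ndarray:
    """GLCM and LBP texture features for one grayscale (uint8) mammogram patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
    lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([[contrast, homogeneity], lbp_hist])

def segment_by_texture(patches, n_tissues=3):
    """Cluster patch descriptors into tissue classes with k-Means
    (a SOM could be used in place of k-Means)."""
    features = np.array([patch_features(p) for p in patches])
    return KMeans(n_clusters=n_tissues, n_init=10, random_state=0).fit_predict(features)
```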


Subject(s)
Breast Neoplasms/pathology , Breast/pathology , Mammography/methods , Algorithms , Breast Neoplasms/diagnostic imaging , Cluster Analysis , Computers , Databases, Factual , Diagnostic Imaging/methods , Female , Humans , Image Processing, Computer-Assisted/methods , Mammography/instrumentation , Medical Oncology/instrumentation , Medical Oncology/methods , Pattern Recognition, Automated/methods , Software