ABSTRACT
In this paper, we present a comparative study of different aggregation functions for the combination of RGB color channels in the stereo matching problem. We introduce color information from the images into the stereo matching algorithm by aggregating the similarities of the RGB channels, which are calculated independently. We compare the accuracy of different stereo matching algorithms and aggregation functions, and we show experimentally that the best function depends on the stereo matching algorithm considered, although the dual of the geometric mean stands out as the most robust aggregation.
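For reference, the dual of the geometric mean mentioned above can be sketched as follows (the function and variable names are ours, and per-channel similarities in [0, 1] are assumed):

```python
import math

def dual_geometric_mean(similarities):
    """Dual of the geometric mean: 1 - GM(1 - x_1, ..., 1 - x_n).
    Returns 1 as soon as any input similarity equals 1."""
    n = len(similarities)
    prod = math.prod(1.0 - s for s in similarities)
    return 1.0 - prod ** (1.0 / n)

# Aggregating the independently computed R, G and B similarities of a pixel pair
score = dual_geometric_mean([0.9, 0.8, 0.95])  # ≈ 0.9
```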
ABSTRACT
Social network analysis is a popular tool to understand the relationships between interacting agents by studying the structural properties of their connections. However, this kind of analysis can miss some of the domain-specific knowledge available in the original information domain and its propagation through the associated network. In this work, we develop an extension of classical social network analysis to incorporate external information from the original source of the network. With this extension we propose a new centrality measure, the semantic value, and a new affinity function, the semantic affinity, that establishes fuzzy-like relationships between the different actors in the network. We also propose a new heuristic algorithm based on the shortest capacity problem to compute this new function. As an illustrative case study, we use the novel proposals to analyze and compare the gods and heroes from three different classical mythologies: 1) Greek; 2) Celtic; and 3) Nordic. We study the relationships of each individual mythology and those of the common structure that is formed when we fuse the three of them. We also compare our results with those obtained using other existing centrality measures and embedding approaches. In addition, we test the proposed measures on a classical social network, the Reuters terror news network, as well as in a Twitter network related to the COVID-19 pandemic. We found that the novel method obtains more meaningful comparisons and results than previous existing approaches in every case.
ABSTRACT
Traditionally, Convolutional Neural Networks make use of the maximum or the arithmetic mean in order to reduce the features extracted by convolutional layers in a downsampling process known as pooling. However, there is no strong argument to settle upon either of the two functions and, in practice, this selection turns out to be problem dependent. Furthermore, both options ignore possible dependencies among the data. We believe that a combination of these two functions, as well as of additional ones that may retain different information, can benefit the feature extraction process. In this work, we replace traditional pooling with several alternative functions. In particular, we consider linear combinations of order statistics and generalizations of the Sugeno integral, extending the latter's domain to the whole real line and setting the theoretical basis for their application. We present an alternative pooling layer based on this strategy, which we name the "CombPool" layer. We replace the pooling layers of three architectures of increasing complexity with CombPool layers and show empirically, over multiple datasets, that linear combinations outperform traditional pooling functions in most cases. Furthermore, combinations involving either the Sugeno integral or one of its generalizations usually yield the best results, making them strong candidates for most architectures.
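As a minimal illustration of the idea (not the paper's exact layer), a pooling window can be reduced by a linear combination of two order statistics, the maximum and the arithmetic mean; the weights here are fixed, whereas a CombPool-style layer would learn them:

```python
import numpy as np

def comb_pool(window, weights=(0.5, 0.5)):
    """Linear combination of two order-statistic reductions of one window."""
    w_max, w_mean = weights
    return w_max * window.max() + w_mean * window.mean()

def pool2d(x, k=2):
    """Apply the combined reduction over non-overlapping k x k windows."""
    h, w = x.shape
    out = np.empty((h // k, w // k))
    for i in range(h // k):
        for j in range(w // k):
            out[i, j] = comb_pool(x[i * k:(i + 1) * k, j * k:(j + 1) * k])
    return out
```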
Subject(s)
Neural Networks, Computer
ABSTRACT
Brain-computer interface (BCI) technologies are popular methods of communication between the human brain and external devices. One of the most popular approaches to BCI is motor imagery (MI). In BCI applications, electroencephalography (EEG) is a very popular measurement of brain dynamics because of its noninvasive nature. Although there is high interest in the BCI topic, the performance of existing systems is still far from ideal, due to the difficulty of performing pattern recognition tasks on EEG signals. This difficulty lies in the selection of the correct EEG channels, the signal-to-noise ratio of these signals, and how to discern the redundant information among them. BCI systems are composed of a wide range of components that perform signal preprocessing, feature extraction, and decision making. In this article, we define a new BCI framework, called the enhanced fusion framework, in which we propose three different ideas to improve existing MI-based BCI frameworks. First, we include an additional preprocessing step: a differentiation of the EEG signal that makes it time invariant. Second, we add an additional frequency band as a feature for the system, the sensorimotor rhythm band, and we show its effect on the performance of the system. Finally, we carry out an in-depth study of how to make the final decision in the system. We propose the use of up to six different types of classifier and a wide range of aggregation functions (including classical aggregations, Choquet and Sugeno integrals and their extensions, and overlap functions) to fuse the information given by the considered classifiers. We tested this new system on a dataset of 20 volunteers performing MI-based BCI experiments. On this dataset, the new system achieved 88.80% accuracy. We also propose an optimized version of our system that is able to reach up to 90.76% accuracy.
Furthermore, we find that the Choquet and Sugeno integrals, together with overlap functions, provide the best results.
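As an illustration of one of the fusion strategies named above, the discrete Choquet integral can aggregate the confidences of several classifiers; the symmetric fuzzy measure used here is only an example, not necessarily the one used in the article:

```python
def choquet_symmetric(scores, q=1.0):
    """Discrete Choquet integral with the symmetric fuzzy measure
    mu(A) = (|A| / n) ** q; q = 1 recovers the arithmetic mean."""
    n = len(scores)
    x = sorted(scores)                      # ascending order statistics
    mu = lambda k: (k / n) ** q
    return sum(x[i] * (mu(n - i) - mu(n - i - 1)) for i in range(n))

# Fusing the per-class confidences of three classifiers
fused = choquet_symmetric([0.7, 0.4, 0.9], q=2.0)  # ≈ 0.556, favors low scores
```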
Subject(s)
Brain-Computer Interfaces , Algorithms , Brain , Electroencephalography/methods , Humans , Imagination , Signal Processing, Computer-Assisted
ABSTRACT
The stereo matching problem attempts to find corresponding locations between pairs of displaced images of the same scene. Correspondence estimation between pixels suffers from occlusions, noise, and bias. This paper introduces a novel approach for representing images by means of interval-valued fuzzy sets, which allow one to overcome the uncertainty due to the aforementioned problems. The aim is to take advantage of the new representation to develop a stereo matching algorithm. The interval-valued fuzzification process for images proposed here is based on image segmentation. Interval-valued fuzzy similarities are introduced to compare windows whose pixels are represented by intervals. To make use of color information, the similarities of the RGB channels are aggregated using the luminance formula. The experimental analysis compares the proposal with other methods. The proposed representation, together with the new similarity measure, shows better overall behavior, providing more accurate correspondences, mainly near depth discontinuities and in images with a large amount of color.
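The channel aggregation step mentioned above can be written down directly; we assume the standard Rec. 601 luminance weights, since the abstract only says "the luminance formula":

```python
def luminance_aggregate(sim_r, sim_g, sim_b):
    """Weight the per-channel window similarities with luminance coefficients
    (Rec. 601: Y = 0.299 R + 0.587 G + 0.114 B; the weights sum to 1)."""
    return 0.299 * sim_r + 0.587 * sim_g + 0.114 * sim_b
```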
ABSTRACT
In this paper, a simple and effective image-magnification algorithm based on intervals is proposed. A low-resolution image is magnified into a high-resolution image using a block-expanding method. Our method associates each pixel with an interval obtained by a weighted aggregation of the pixels in its neighborhood. From this interval, and using a linear K(α) operator, we obtain the magnified image. Experimental results show that our algorithm produces magnified images of better quality (in terms of peak signal-to-noise ratio) than several existing methods.
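A minimal sketch of the linear K(α) operator mentioned above, which selects one point inside each pixel's interval (the neighborhood interval shown is hypothetical):

```python
def k_alpha(interval, alpha):
    """Linear K(alpha) operator: K_alpha([a, b]) = a + alpha * (b - a)."""
    a, b = interval
    return a + alpha * (b - a)

# A magnified-block pixel takes a value inside its neighborhood interval
iv = (120.0, 140.0)        # hypothetical interval from the weighted aggregation
mid = k_alpha(iv, 0.5)     # alpha = 0.5 gives the midpoint, 130.0
```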
ABSTRACT
In this paper, an automatic histogram-thresholding approach based on a fuzziness measure is presented. This work improves an existing method. Using fuzzy logic concepts, the problems involved in finding the minimum of a criterion function are avoided. Similarity between gray levels is the key to finding an optimal threshold. Two initial regions of gray levels, located at the boundaries of the histogram, are defined. Then, using an index of fuzziness, a similarity process is started to find the threshold point. A significant contrast between objects and background is assumed; prior histogram equalization is applied to low-contrast images. No prior knowledge of the image is required.
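The index of fuzziness driving the similarity process can be sketched, for example, with Kaufmann's linear index (a common choice; the abstract does not fix which index is used):

```python
def linear_index_of_fuzziness(memberships):
    """Kaufmann's linear index: normalized distance of a fuzzy set to its
    nearest crisp set; 0 for a crisp set, 1 when every membership is 0.5."""
    n = len(memberships)
    return (2.0 / n) * sum(min(m, 1.0 - m) for m in memberships)

# Memberships of some gray levels to a tentative "background" region
ambiguity = linear_index_of_fuzziness([0.1, 0.4, 0.9, 0.5])
```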