Results 1 - 6 of 6
1.
Sensors (Basel); 23(12), 2023 Jun 17.
Article in English | MEDLINE | ID: mdl-37420835

ABSTRACT

Indoor location-based services constitute an important part of our daily lives, providing position and direction information about people or objects in indoor spaces. These systems can be useful in security and monitoring applications that target specific areas such as rooms. Vision-based scene recognition is the task of accurately identifying a room category from a given image. Despite years of research in this field, scene recognition remains an open problem because of the diversity and complexity of real-world places. Indoor environments are particularly challenging because of layout variability, object and decoration complexity, and multiscale and viewpoint changes. In this paper, we propose a room-level indoor localization system based on deep learning and built-in smartphone sensors, combining visual information with the smartphone's magnetic heading. The user can be localized at room level simply by capturing an image with a smartphone. The presented indoor scene recognition system is based on direction-driven convolutional neural networks (CNNs) and therefore contains multiple CNNs, each tailored to a particular range of indoor orientations. We present weighted fusion strategies that improve system performance by properly combining the outputs of the different CNN models. To meet users' needs and overcome smartphone limitations, we propose a hybrid computing strategy based on mobile computation offloading that is compatible with the proposed system architecture. The implementation of the scene recognition system is split between the user's smartphone and a server, which helps meet the computational requirements of CNNs. Several experimental analyses were conducted, including performance assessments and a stability analysis. The results obtained on a real dataset show the relevance of the proposed approach for localization, as well as the benefit of model partitioning in hybrid mobile computation offloading. Our extensive evaluation demonstrates an increase in accuracy compared to traditional CNN scene recognition, indicating the effectiveness and robustness of our approach.
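As a rough illustration of the fusion idea described in this abstract, the sketch below combines the softmax outputs of several orientation-specific CNNs, weighting each model by how close the smartphone's magnetic heading is to that model's orientation range. The von Mises-style heading weighting and the names `heading_weights` and `fuse_room_probabilities` are assumptions made for illustration, not the paper's exact fusion strategy.

```python
import numpy as np

def heading_weights(heading_deg, model_centers_deg, kappa=2.0):
    """Weight each orientation-specific CNN by the angular proximity of the
    smartphone's magnetic heading to that model's orientation center."""
    diff = np.deg2rad(heading_deg - np.asarray(model_centers_deg, dtype=float))
    scores = np.exp(kappa * np.cos(diff))        # von Mises-style angular kernel
    return scores / scores.sum()

def fuse_room_probabilities(per_model_probs, heading_deg, model_centers_deg):
    """per_model_probs: (n_models, n_rooms) softmax outputs, one row per CNN."""
    w = heading_weights(heading_deg, model_centers_deg)
    fused = w @ np.asarray(per_model_probs)      # weighted average of the model outputs
    return fused / fused.sum()

# Toy example: four CNNs centered on 0, 90, 180 and 270 degrees, three rooms.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.4, 0.5, 0.1],
                  [0.3, 0.3, 0.4],
                  [0.6, 0.3, 0.1]])
fused = fuse_room_probabilities(probs, heading_deg=75.0,
                                model_centers_deg=[0, 90, 180, 270])
print("predicted room:", int(np.argmax(fused)))
```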


Subject(s)
Algorithms; Neural Networks, Computer; Humans; Smartphone
2.
Sensors (Basel); 20(1), 2020 Jan 06.
Article in English | MEDLINE | ID: mdl-31935945

ABSTRACT

Indoor localization has several applications, ranging from people tracking and indoor navigation to autonomous robot navigation and asset tracking. We tackle the problem as zoning localization, where the objective is to determine the zone in which the mobile sensor resides at any instant. The decision-making process in localization systems relies on data coming from multiple sensors. The data retrieved from these sensors require robust fusion approaches to be processed. One of these approaches is the belief functions theory (BFT), also called the Dempster-Shafer theory. This theory handles uncertainty and imprecision within a theoretically attractive evidential reasoning framework. This paper investigates the use of the BFT to define an evidential framework for estimating the most probable zone of the mobile sensor. Real experiments demonstrate the effectiveness of this approach and its competitiveness compared with state-of-the-art methods.
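To make the evidential fusion step concrete, here is a minimal sketch of Dempster's rule of combination over a small frame of zones. The mass assignments (`m_wifi`, `m_magnetic`) and the zone names are purely illustrative; the paper's actual sources of evidence and mass construction are not reproduced here.

```python
from collections import defaultdict

def dempster_combine(m1, m2):
    """Combine two mass functions given as dicts {frozenset of zones: mass}."""
    combined = defaultdict(float)
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] += ma * mb   # mass goes to the intersection
            else:
                conflict += ma * mb          # empty intersection: conflicting evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

zones = frozenset({"Z1", "Z2", "Z3"})        # frame of discernment
# Source 1 mostly supports zone Z1, with part of the mass left on the full frame (ignorance).
m_wifi = {frozenset({"Z1"}): 0.6, frozenset({"Z2"}): 0.1, zones: 0.3}
# Source 2 hesitates between Z1 and Z2.
m_magnetic = {frozenset({"Z1", "Z2"}): 0.7, frozenset({"Z3"}): 0.1, zones: 0.2}

m = dempster_combine(m_wifi, m_magnetic)
best = max(m, key=m.get)                     # most supported subset of zones
print(set(best), round(m[best], 3))
```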

3.
Sci Rep; 12(1): 4968, 2022 Mar 23.
Article in English | MEDLINE | ID: mdl-35322055

ABSTRACT

The semantic segmentation of omnidirectional urban driving images is a research topic that has increasingly attracted researchers' attention, because such images are highly relevant to driving scenes. However, the case of motorized two-wheelers has not yet been addressed. Since the dynamics of these vehicles are very different from those of cars, we focus our study on images acquired from a motorcycle. This paper provides a thorough comparative study showing how different deep learning approaches handle omnidirectional images under different representations, including perspective, equirectangular, spherical, and fisheye, and presents the best solution for segmenting road-scene omnidirectional images. The study uses real perspective images; synthetic perspective, fisheye, and equirectangular images; simulated fisheye images; and a test set of real fisheye images. By analyzing both qualitative and quantitative results, the study draws several conclusions and helps explain how the networks learn to deal with omnidirectional distortions. Our main findings are that models with planar convolutions give better results than those with spherical convolutions, and that models trained on omnidirectional representations transfer better to standard perspective images than vice versa.
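For the quantitative side of such a comparison, segmentation models are typically scored with per-class intersection-over-union. The sketch below computes mean IoU between two label maps; the class count, the random label maps, and the `ignore_index` convention are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def mean_iou(pred, target, n_classes, ignore_index=255):
    """Mean intersection-over-union between two integer label maps of equal shape."""
    valid = target != ignore_index
    ious = []
    for c in range(n_classes):
        p = (pred == c) & valid
        t = (target == c) & valid
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue                          # class absent from both maps: skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious)) if ious else float("nan")

# Toy example with 5 classes on a 64x128 label map.
rng = np.random.default_rng(0)
pred = rng.integers(0, 5, size=(64, 128))
gt = rng.integers(0, 5, size=(64, 128))
print("mIoU:", round(mean_iou(pred, gt, n_classes=5), 3))
```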


Subject(s)
Image Processing, Computer-Assisted; Semantics; Image Processing, Computer-Assisted/methods; Motorcycles
4.
IEEE Trans Image Process; 25(10): 4565-79, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27416597

ABSTRACT

This paper presents three hyperspectral mixture models together with Bayesian algorithms for supervised hyperspectral unmixing. Based on the residual component analysis model, the proposed general formulation assumes the linear model to be corrupted by an additive term whose expression can be adapted to account for nonlinearities (NLs), endmember variability (EV), or mismodeling effects (MEs). The NL effect is introduced by considering a polynomial expression related to bilinear models. The proposed new formulation of EV accounts for shape and scale endmember changes while enforcing a smooth spectral/spatial variation. The ME formulation considers the effect of outliers and copes with some types of EV and NL. The known constraints on the parameters of each observation model are modeled via suitable priors. The posterior distribution associated with each Bayesian model is optimized using a coordinate descent algorithm, which allows the computation of the maximum a posteriori estimator of the unknown model parameters. The proposed mixture and Bayesian models and their estimation algorithms are validated on both synthetic and real images, showing competitive results regarding the quality of the inferences and the computational complexity when compared with state-of-the-art algorithms.
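The shared observation model can be read as a linear mixture plus an additive residual term, y = Ma + φ + n. As a hedged illustration, the snippet below simulates one pixel under a bilinear choice of φ (pairwise endmember products), which is only one possible instantiation of the residual term; the endmember spectra, abundances, and noise level are synthetic and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, R = 50, 3                                   # number of spectral bands and endmembers
M = np.abs(rng.normal(size=(L, R)))            # endmember spectra (one column each)
a = rng.dirichlet(np.ones(R))                  # abundances: non-negative, sum to one

linear = M @ a                                 # linear mixing part, M a
# Additive residual term phi: here a bilinear interaction between endmember pairs.
phi = sum(a[i] * a[j] * (M[:, i] * M[:, j])
          for i in range(R) for j in range(i + 1, R))
noise = rng.normal(scale=0.01, size=L)         # additive Gaussian noise n

y = linear + phi + noise                       # observed pixel spectrum y = M a + phi + n
print(y.shape, float(a.sum()))
```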

5.
IEEE Trans Pattern Anal Mach Intell; 34(9): 1814-26, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22201059

ABSTRACT

Kernel principal component analysis (kernel-PCA) is an elegant nonlinear extension of one of the most widely used data analysis and dimensionality reduction techniques, principal component analysis. In this paper, we propose an online algorithm for kernel-PCA. To this end, we examine a kernel-based version of Oja's rule, initially put forward to extract a linear principal axis. As with most kernel-based machines, the model order equals the number of available observations. To provide an online scheme, we propose to control the model order. We discuss theoretical results, such as an upper bound on the error of approximating the principal functions with the reduced-order model. We derive a recursive algorithm to discover the first principal axis, and extend it to multiple axes. Experimental results demonstrate the effectiveness of the proposed approach, on both a synthetic dataset and images of handwritten digits, with comparison to classical kernel-PCA and iterative kernel-PCA.
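For context, the classical (linear) Oja rule that this paper kernelizes updates a weight vector toward the leading eigenvector of the data covariance with one small step per sample. The sketch below shows that linear rule on synthetic 2-D data; the data, learning rate, and the eigendecomposition check are illustrative assumptions and do not reproduce the paper's kernelized, reduced-order algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2-D data with a dominant direction of variance.
X = rng.multivariate_normal([0.0, 0.0], [[3.0, 1.2], [1.2, 1.0]], size=2000)

w = rng.normal(size=2)
w /= np.linalg.norm(w)                         # start from a random unit vector
eta = 0.01                                     # learning rate
for x in X:                                    # one online update per sample
    y = w @ x                                  # current projection
    w += eta * y * (x - y * w)                 # Oja's rule: Hebbian term + self-normalization

# Check alignment with the leading eigenvector of the sample covariance.
_, eigvecs = np.linalg.eigh(np.cov(X.T))
principal = eigvecs[:, -1]
print("alignment:", abs(w @ principal) / np.linalg.norm(w))
```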

6.
Article in English | MEDLINE | ID: mdl-21096851

ABSTRACT

The inherent physical characteristics of many real-life phenomena, including biological and physiological processes, require adapted nonlinear tools. Moreover, the additive nature of some situations involves solutions expressed as positive combinations of the data. In this paper, we propose a nonlinear feature extraction method with a non-negativity constraint. To this end, kernel principal component analysis is used to define the most relevant features in the reproducing kernel Hilbert space. These features are the nonlinear principal components, which capture high-order correlations between input variables. A pre-image technique is required to map back to the input space. We show that, under the non-negativity constraint, the pre-image problem can be solved efficiently using a simple iterative scheme. Furthermore, the constrained solution contributes to the stability of the algorithm. Experimental results on event-related potentials (ERP) illustrate the efficiency of the proposed method.
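As a rough sketch of the constrained pre-image idea, the snippet below runs a classical fixed-point pre-image iteration for a Gaussian kernel and simply projects the iterate onto the non-negative orthant at each step. The update rule, kernel width, the `nonneg_preimage` helper, and the toy data are all assumptions made for illustration; they do not reproduce the paper's exact iterative scheme.

```python
import numpy as np

def gaussian_kernel(X, z, sigma):
    """Gaussian kernel values k(x_i, z) for every row x_i of X."""
    d2 = np.sum((X - z) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def nonneg_preimage(X, gamma, sigma, n_iter=100):
    """X: (n, d) training data; gamma: (n,) expansion coefficients of the
    feature-space target Psi = sum_i gamma_i phi(x_i). Returns a non-negative
    approximate pre-image of Psi in the input space."""
    z = np.clip(X.T @ gamma, 0.0, None)        # non-negative initialization
    for _ in range(n_iter):
        w = gamma * gaussian_kernel(X, z, sigma)
        if w.sum() <= 1e-12:
            break                              # degenerate weights: stop iterating
        z = (w @ X) / w.sum()                  # fixed-point update (weighted data mean)
        z = np.clip(z, 0.0, None)              # enforce the non-negativity constraint
    return z

rng = np.random.default_rng(2)
X = np.abs(rng.normal(size=(200, 8)))          # toy non-negative data
gamma = rng.dirichlet(np.ones(200))            # toy expansion coefficients
z = nonneg_preimage(X, gamma, sigma=1.0)
print(z.shape, bool((z >= 0).all()))
```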


Subject(s)
Brain Mapping/methods; Brain/physiology; Diagnosis, Computer-Assisted/methods; Electroencephalography/methods; Evoked Potentials/physiology; Pattern Recognition, Automated/methods; Humans; Nonlinear Dynamics; Principal Component Analysis; Reproducibility of Results; Sensitivity and Specificity