Results 1 - 5 of 5
1.
Sensors (Basel) ; 19(7)2019 Apr 03.
Article in English | MEDLINE | ID: mdl-30987240

ABSTRACT

This paper presents an approach for measuring, from video, the height of a driver's eyes and of the rear position lamps, two important metrics used to set sight-distance standards. These data play an important role in the geometric design of highways and streets. Our method automatically estimates the camera pose with respect to the road; obtaining a height then only requires selecting two points. Because new vehicles tend to be higher and larger, this information should be updated. The approach has been applied to a large panel of vehicles and evaluated on vehicle height measurements. Our results suggest that the method achieves a mean absolute error of less than 1.8 cm (0.7 in). Our experiments show an increase in the height of drivers' eyes and taillights.
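The accuracy figure quoted above is a mean absolute error over paired measurements. A minimal sketch of that metric, with invented numbers (not the paper's data):

```python
# Mean absolute error between video-estimated and reference heights (cm).
def mean_absolute_error(estimates, references):
    """Average of |estimate - reference| over paired measurements."""
    return sum(abs(e - r) for e, r in zip(estimates, references)) / len(estimates)

# Illustrative values only:
estimated = [112.4, 108.9, 115.1]   # heights measured from video, cm
surveyed  = [113.0, 109.5, 114.2]   # ground-truth heights, cm
mae = mean_absolute_error(estimated, surveyed)  # here: 0.7 cm
```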


Subjects
Accidents, Traffic; Automobile Driving; Eye; Monitoring, Physiologic/methods; Humans; Sleep/physiology
2.
Sensors (Basel) ; 17(7)2017 Jul 07.
Article in English | MEDLINE | ID: mdl-28686213

ABSTRACT

Motion capture setups are used in numerous fields. Studies based on motion capture data can be found in biomechanics, sport, and animal science. Clinical studies include gait analysis as well as balance, posture, and motor control; robotic applications encompass object tracking; everyday applications include entertainment and augmented reality. Still, few studies investigate the positioning performance of motion capture setups. In this paper, we study the positioning performance of a leading marker-based optoelectronic motion capture system: the Vicon system. Our protocol includes evaluations of static and dynamic performance. Mean error as well as positioning variability are studied with calibrated ground-truth setups that are not based on other motion capture modalities. We introduce a new setup that directly estimates absolute positioning accuracy in dynamic experiments, contrary to state-of-the-art works that rely on inter-marker distances. The system performs well in static experiments, with a mean absolute error of 0.15 mm and a variability lower than 0.025 mm. Our dynamic experiments were carried out at speeds found in real applications, and our work suggests that the system error is then less than 2 mm. We also found that marker size and Vicon sampling rate must be carefully chosen with respect to the speeds encountered in the application in order to reach optimal positioning performance, which reached 0.3 mm in our dynamic study.
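A hedged sketch of the static-trial statistics in the spirit of the protocol above: mean absolute positioning error against a calibrated ground-truth position, and variability as the standard deviation of repeated samples. All numbers are invented for illustration:

```python
import statistics

def static_stats(samples_mm, truth_mm):
    """Return (mean absolute error, sample standard deviation) in mm."""
    errors = [abs(s - truth_mm) for s in samples_mm]
    return statistics.mean(errors), statistics.stdev(samples_mm)

# Repeated position samples of a fixed marker along one axis, mm:
samples = [10.12, 10.15, 10.17, 10.14, 10.16]
mean_err, variability = static_stats(samples, truth_mm=10.00)
```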

3.
Sensors (Basel) ; 17(5)2017 May 20.
Article in English | MEDLINE | ID: mdl-28531101

ABSTRACT

Long-term place recognition in outdoor environments remains a challenge due to strong appearance changes in the environment. The problem becomes even more difficult when two scenes must be matched using information coming from different visual sources, particularly from different spectral ranges. For instance, an infrared camera is helpful for night vision in combination with a visible camera. In this paper, we focus on testing the usual feature point extractors under both constraints: repeatability across spectral ranges and long-term appearance change. We develop a new feature extraction method dedicated to improving repeatability across spectral ranges. We evaluate feature robustness on long-term datasets coming from different imaging sources (optics, sensor sizes, and spectral ranges) with a Bag-of-Words approach. Our tests demonstrate that the method brings a significant improvement to image retrieval in a visual place recognition context, particularly when images from various spectral ranges, such as infrared and visible, must be associated: we evaluated our approach on visible, Near InfraRed (NIR), Short Wavelength InfraRed (SWIR), and Long Wavelength InfraRed (LWIR) images.
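The Bag-of-Words retrieval step mentioned above summarizes each image as a histogram of visual-word counts and ranks database images by histogram similarity. A toy sketch with made-up histograms (the paper's actual extractor and vocabulary are its own contribution, not shown here):

```python
import math

def cosine_similarity(h1, h2):
    """Cosine similarity between two visual-word count histograms."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2)

# Visual-word histograms (toy vocabulary of 4 words) for a query image
# and two database images acquired in different spectral ranges:
query = [5, 0, 3, 1]
db = {"visible": [4, 1, 3, 0], "lwir": [0, 6, 0, 2]}
ranked = sorted(db, key=lambda name: cosine_similarity(query, db[name]),
                reverse=True)  # most similar image first
```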

4.
Sci Rep ; 12(1): 4968, 2022 03 23.
Article in English | MEDLINE | ID: mdl-35322055

ABSTRACT

The semantic segmentation of omnidirectional urban driving images is a research topic that has increasingly attracted the attention of researchers, because such images are highly relevant in driving scenes. However, the case of motorized two-wheelers has not yet been addressed. Since the dynamics of these vehicles differ greatly from those of cars, we focus our study on images acquired from a motorcycle. This paper provides a thorough comparative study of how different deep learning approaches handle omnidirectional images under different representations, including perspective, equirectangular, spherical, and fisheye, and presents the best solution for segmenting road-scene omnidirectional images. The study uses real perspective images; synthetic perspective, fisheye, and equirectangular images; simulated fisheye images; and a test set of real fisheye images. By analyzing both qualitative and quantitative results, this study reaches several conclusions, as it helps understand how the networks learn to deal with omnidirectional distortions. Our main findings are that models with planar convolutions give better results than those with spherical convolutions, and that models trained on omnidirectional representations transfer better to standard perspective images than vice versa.
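Quantitative comparisons like the one above are conventionally scored with mean Intersection-over-Union (mIoU) between predicted and ground-truth label maps. A minimal sketch on toy flattened label arrays (the paper's own metric choice is not stated in this abstract, so mIoU is assumed here as the standard one):

```python
def mean_iou(pred, truth, num_classes):
    """Mean per-class IoU over flattened prediction/ground-truth labels."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 6-pixel label maps with 3 classes:
pred  = [0, 0, 1, 1, 2, 2]
truth = [0, 1, 1, 1, 2, 0]
miou = mean_iou(pred, truth, num_classes=3)  # (1/3 + 2/3 + 1/2) / 3 = 0.5
```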


Subjects
Image Processing, Computer-Assisted; Semantics; Image Processing, Computer-Assisted/methods; Motorcycles
5.
IEEE Trans Image Process ; 22(5): 1808-21, 2013 May.
Article in English | MEDLINE | ID: mdl-23288336

ABSTRACT

Bio-inspired and non-conventional vision systems are highly researched topics. Among them, omnidirectional vision systems have demonstrated their ability to significantly improve the geometrical interpretation of scenes. However, few researchers have investigated how to perform object detection with such systems. Existing approaches require a geometrical transformation prior to interpreting the picture. In this paper, we investigate what must be taken into account and how to process the omnidirectional images provided by the sensor. We focus our research on face detection and highlight that particular attention should be paid to the descriptors in order to successfully perform face detection on omnidirectional images. We demonstrate that this choice is critical to obtaining high detection rates. Our results imply that adapting existing object-detection frameworks, designed for perspective images, should focus on the choice of appropriate image descriptors in the design of the object-detection pipeline.
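The abstract's argument is that in a detection pipeline the descriptor is the component to adapt. A toy sliding-window detector that takes the descriptor function as a parameter illustrates this separation of concerns; the descriptor, classifier, and signal here are all hypothetical stand-ins, not the paper's method:

```python
def detect(signal, window, descriptor, classifier):
    """Slide a window over a 1-D signal; keep positions the classifier accepts."""
    hits = []
    for i in range(len(signal) - window + 1):
        patch = signal[i:i + window]
        if classifier(descriptor(patch)):
            hits.append(i)
    return hits

# Stand-ins for a real descriptor (e.g. one tolerant to omnidirectional
# distortion) and a trained classifier:
def mean_intensity(patch):
    return sum(patch) / len(patch)

def is_face(feature):
    return feature > 0.5

signal = [0.1, 0.2, 0.9, 0.8, 0.7, 0.1]
found = detect(signal, window=2, descriptor=mean_intensity, classifier=is_face)
```

Swapping `mean_intensity` for a distortion-aware descriptor changes nothing else in the pipeline, which is the adaptation strategy the abstract advocates.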


Subjects
Biometric Identification/methods; Face/anatomy & histology; Image Processing, Computer-Assisted/methods; Algorithms; Biometric Identification/instrumentation; Humans