Results 1 - 10 of 10
1.
Sensors (Basel) ; 23(24)2023 Dec 14.
Article in English | MEDLINE | ID: mdl-38139658

ABSTRACT

SLAM (simultaneous localization and mapping) plays a crucial role in autonomous robot navigation. A challenging aspect of visual SLAM systems is determining the 3D camera orientation of the motion trajectory. In this paper, we introduce an end-to-end network structure, InertialNet, which establishes the correlation between the image sequence and the IMU signals. Our network model is built upon inertial measurement learning and is employed to predict the camera's general motion pose. By incorporating an optical flow substructure, InertialNet is independent of the appearance of training sets and can be adapted to new environments. It maintains stable predictions even in the presence of image blur, changes in illumination, and low-texture scenes. In our experiments, we evaluated InertialNet on the public EuRoC dataset and our dataset, demonstrating its feasibility with faster training convergence and fewer model parameters for inertial measurement prediction.
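
As an illustration of the image-plus-IMU learning described above, the following Python sketch (my own, not the authors' released code) pairs a small convolutional branch over consecutive frames, standing in for the optical-flow substructure, with an LSTM over IMU samples, and regresses a relative pose. Every layer size and the 6-vector pose parameterization are illustrative assumptions.

# A minimal sketch (not the authors' code) of an image + IMU fusion network
# in the spirit of InertialNet: a convolutional branch over two consecutive
# frames and a recurrent branch over IMU samples, fused to regress a 6-DoF
# relative pose. All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class InertialNetSketch(nn.Module):
    def __init__(self, imu_dim=6, hidden=128):
        super().__init__()
        # Visual branch: two stacked grayscale frames -> motion features.
        self.visual = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Inertial branch: a sequence of IMU samples (gyro + accelerometer).
        self.inertial = nn.LSTM(imu_dim, hidden, batch_first=True)
        # Fusion head: regress translation (3) + rotation as Euler angles (3).
        self.head = nn.Sequential(
            nn.Linear(32 + hidden, hidden), nn.ReLU(), nn.Linear(hidden, 6),
        )

    def forward(self, frame_pair, imu_seq):
        v = self.visual(frame_pair)             # (B, 32)
        _, (h, _) = self.inertial(imu_seq)      # h: (1, B, hidden)
        fused = torch.cat([v, h[-1]], dim=1)
        return self.head(fused)                 # (B, 6) relative pose

# Example forward pass with random tensors.
model = InertialNetSketch()
pose = model(torch.randn(4, 2, 128, 256), torch.randn(4, 50, 6))
print(pose.shape)  # torch.Size([4, 6])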

2.
Entropy (Basel) ; 25(4)2023 Apr 14.
Article in English | MEDLINE | ID: mdl-37190448

ABSTRACT

The detection of regions of interest is commonly considered an early stage of information extraction from images. It provides content meaningful to human perception for machine vision applications. In this work, a new technique for structured region detection based on the distillation of local image features with clustering analysis is proposed. Unlike existing methods, our approach takes application-specific reference images for feature learning and extraction. It is able to identify text clusters despite the sparsity of feature points derived from the characters. For the localization of structured regions, the cluster with the highest feature density is identified and serves as a candidate for region expansion. An iterative adjustment is then performed to enlarge the ROI for complete text coverage. Experiments carried out on text region detection for invoices and banknotes demonstrate the effectiveness of the proposed technique.
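
The sketch below (assumed parameters, not the paper's implementation) illustrates the general pipeline: detect local features, cluster them, take the densest cluster as the seed region, and grow its bounding box while additional keypoints keep falling inside.

# A rough sketch of structured-region localization by feature clustering.
# ORB features and DBSCAN parameters are assumptions for illustration.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def detect_text_region(gray, eps=25, min_samples=5, margin=10, max_iters=10):
    kp = cv2.ORB_create(nfeatures=2000).detect(gray, None)
    pts = np.array([k.pt for k in kp], dtype=np.float32)
    if len(pts) == 0:
        return None
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    valid = labels[labels >= 0]
    if valid.size == 0:
        return None
    # Densest cluster = candidate structured (text) region.
    seed = np.bincount(valid).argmax()
    cluster = pts[labels == seed]
    x1, y1 = cluster.min(axis=0)
    x2, y2 = cluster.max(axis=0)
    # Iterative expansion: enlarge while extra keypoints are captured.
    for _ in range(max_iters):
        nx1, ny1, nx2, ny2 = x1 - margin, y1 - margin, x2 + margin, y2 + margin
        grown = ((pts[:, 0] >= nx1) & (pts[:, 0] <= nx2) &
                 (pts[:, 1] >= ny1) & (pts[:, 1] <= ny2)).sum()
        current = ((pts[:, 0] >= x1) & (pts[:, 0] <= x2) &
                   (pts[:, 1] >= y1) & (pts[:, 1] <= y2)).sum()
        if grown <= current:
            break
        x1, y1, x2, y2 = nx1, ny1, nx2, ny2
    return x1, y1, x2, y2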

3.
Sensors (Basel) ; 21(11)2021 May 26.
Article in English | MEDLINE | ID: mdl-34073565

ABSTRACT

In mobile robotics research, the exploration of unknown environments has always been an important topic due to its practical uses in consumer and military applications. One specific interest of recent investigation is the field of complete coverage and path planning (CCPP) techniques for mobile robot navigation. In this paper, we present collaborative CCPP algorithms for single-robot and multi-robot systems. The incremental coverage from the robot movement is maximized by evaluating a new cost function. A goal selection function is then designed to facilitate collaborative exploration in a multi-robot system. By considering the local gains from the individual robots as well as the global gain from the goal selection, the proposed method is able to optimize the overall coverage efficiency. In the experiments, our CCPP algorithms are tested on various unknown and complex environment maps. The simulation results and performance evaluation demonstrate the effectiveness of the proposed collaborative CCPP technique.
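
The following simplified sketch (an assumed scoring scheme, not the authors' exact cost function) shows how such goal scoring can look on a grid map: reward the incremental coverage a robot would gain at a candidate cell, penalize the travel cost, and add a global term that keeps teammates from choosing nearby goals.

# Grid-based coverage goal scoring and greedy collaborative goal selection.
import numpy as np

def incremental_coverage(cov_map, cell, radius=3):
    """Number of still-uncovered cells within the sensor radius of `cell`."""
    r0, c0 = cell
    window = cov_map[max(r0 - radius, 0):r0 + radius + 1,
                     max(c0 - radius, 0):c0 + radius + 1]
    return int((window == 0).sum())

def goal_score(cov_map, robot_pos, cell, w_gain=1.0, w_dist=0.2):
    gain = incremental_coverage(cov_map, cell)
    dist = abs(cell[0] - robot_pos[0]) + abs(cell[1] - robot_pos[1])
    return w_gain * gain - w_dist * dist

def select_goals(cov_map, robots, candidates, sep_penalty=0.5):
    """Greedy collaborative selection: later robots are penalized for picking
    goals close to goals already assigned to teammates."""
    assigned = []
    for pos in robots:
        best, best_score = None, -np.inf
        for cell in candidates:
            s = goal_score(cov_map, pos, cell)
            for g in assigned:  # global term: keep robots spread out
                sep = abs(cell[0] - g[0]) + abs(cell[1] - g[1])
                s -= sep_penalty * max(0, 10 - sep)
            if s > best_score:
                best, best_score = cell, s
        assigned.append(best)
    return assigned

# Toy example: 20x20 map, 0 = uncovered, 1 = covered.
cov = np.zeros((20, 20), dtype=int)
cov[:10, :10] = 1
goals = select_goals(cov, robots=[(0, 0), (19, 19)],
                     candidates=[(5, 15), (15, 5), (15, 15)])
print(goals)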

4.
Sensors (Basel) ; 21(14)2021 Jul 09.
Article in English | MEDLINE | ID: mdl-34300459

ABSTRACT

This paper presents a novel self-localization technique for mobile robots using a central catadioptric camera. A unified sphere model for the image projection is derived by catadioptric camera calibration. The geometric property of the camera projection model is utilized to obtain the intersections of the vertical lines and the ground plane in the scene. Unlike conventional stereo vision techniques, the feature points are projected onto a known planar surface, and the plane equation is used for depth computation. The 3D coordinates of the base points on the ground are calculated using consecutive image frames. The motion trajectory is then derived from the rotation and translation computed between the robot positions. We develop an algorithm for feature correspondence matching based on the invariability of the structure in 3D space. The experimental results obtained using real scene images demonstrate the feasibility of the proposed method for mobile robot localization applications.
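
The key depth-from-plane idea can be condensed into a few lines. The sketch below uses a generic pinhole back-projection rather than the paper's full unified sphere model, and all calibration numbers are illustrative: when a feature point is known to lie on the ground, its 3D position follows from intersecting the pixel's viewing ray with the plane equation n·X + d = 0.

import numpy as np

def backproject_to_plane(u, v, K, R, t, n, d):
    """Intersect the viewing ray of pixel (u, v) with the plane n.X + d = 0.
    K: 3x3 intrinsics; R, t: camera-to-world rotation and camera center."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera frame
    ray_world = R @ ray_cam                              # rotate into world frame
    origin = t                                           # camera center in world
    # Solve n.(origin + s*ray) + d = 0 for the ray parameter s.
    s = -(n @ origin + d) / (n @ ray_world)
    return origin + s * ray_world

# Example: camera 1 m above the ground (plane y = 1 with y pointing down),
# axes aligned with the world frame. Intrinsics are made-up values.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P = backproject_to_plane(400, 400, K, np.eye(3), np.zeros(3),
                         n=np.array([0.0, 1.0, 0.0]), d=-1.0)
print(P)  # 3D base point on the ground: [0.5, 1.0, 3.125]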

5.
Sensors (Basel) ; 21(14)2021 Jul 12.
Article in English | MEDLINE | ID: mdl-34300491

ABSTRACT

In this paper, we present a real-time object detection and depth estimation approach based on deep convolutional neural networks (CNNs). We improve object detection through the incorporation of transfer connection blocks (TCBs), in particular, to detect small objects in real time. For depth estimation, we introduce binocular vision to the monocular-based disparity estimation network, and the epipolar constraint is used to improve prediction accuracy. Finally, we integrate the two-dimensional (2D) location of the detected object with the depth information to achieve real-time detection and depth estimation. The results demonstrate that the proposed approach achieves better results compared to conventional methods.


Subjects
Neural Networks, Computer
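
The final fusion step described in the abstract, combining a detected 2D box with disparity-based depth, reduces to standard stereo geometry. The sketch below uses assumed calibration values and reads the median disparity inside the box, converting it to metric depth with depth = focal_length * baseline / disparity.

import numpy as np

def box_depth(disparity_map, box, focal_px, baseline_m):
    """box = (x1, y1, x2, y2) in pixels; returns depth in meters or None."""
    x1, y1, x2, y2 = box
    patch = disparity_map[y1:y2, x1:x2]
    valid = patch[patch > 0]                 # ignore unmatched pixels
    if valid.size == 0:
        return None
    d = np.median(valid)                     # robust to outliers at box edges
    return focal_px * baseline_m / d

# Toy example: a 480x640 disparity map with an object at ~20 px disparity.
disp = np.zeros((480, 640), dtype=np.float32)
disp[200:300, 300:400] = 20.0
depth = box_depth(disp, (300, 200, 400, 300), focal_px=700.0, baseline_m=0.12)
print(f"estimated depth: {depth:.2f} m")     # 700 * 0.12 / 20 = 4.20 m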
6.
Sensors (Basel) ; 20(14)2020 Jul 16.
Article in English | MEDLINE | ID: mdl-32708587

ABSTRACT

Extending the dynamic range reveals much richer contrast and physical information than traditional low dynamic range (LDR) images can convey. To this end, we propose a method to generate a high dynamic range (HDR) image from a single LDR image. In addition, a technique for matching the histogram of the HDR image to that of the original image is introduced. To evaluate the results, we use the dynamic range for independent image quality assessment; it captures subtle brightness differences, which play a significant role in the assessment of novel lighting, rendering, and imaging algorithms. The results show that the picture quality is improved and the contrast is adjusted. The performance comparison with other methods is carried out using the predicted visibility metric (HDR-VDP-2). Compared to the results obtained with other techniques, our extended HDR images present a wider dynamic range with a large difference between light and dark areas.
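
A compact sketch of the two ingredients mentioned above follows; the expansion curve is an assumed inverse-gamma-style function, not the paper's operator. It expands a single LDR image into a wider range and then matches the histogram of a tone-mapped preview back to the original LDR image.

import numpy as np

def expand_ldr(ldr, gamma=2.2, max_luminance=1000.0):
    """ldr: float image in [0, 1]; returns linear values in [0, max_luminance]."""
    linear = np.power(np.clip(ldr, 0.0, 1.0), gamma)   # undo display gamma
    return linear * max_luminance

def match_histograms(source, reference):
    """Map `source` values so their CDF matches the CDF of `reference`."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)          # quantile mapping
    return matched[s_idx].reshape(source.shape)

# Toy usage with a random "LDR" image.
ldr = np.random.rand(64, 64)
hdr = expand_ldr(ldr)
preview = hdr / (1.0 + hdr)                            # simple tone mapping
matched = match_histograms(preview, ldr)
print(hdr.max(), matched.min(), matched.max())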

7.
Sensors (Basel) ; 20(18)2020 Sep 09.
Article in English | MEDLINE | ID: mdl-32916970

ABSTRACT

One major concern in the development of intelligent vehicles is improving driving safety. It is also an essential issue for future autonomous driving and intelligent transportation. In this paper, we present a vision-based system for driving assistance. A front and a rear on-board camera are adopted for visual sensing and environment perception. The purpose is to avoid potential traffic accidents due to forward collision and vehicle overtaking, and to assist drivers or self-driving cars in performing safe lane change operations. The proposed techniques consist of lane change detection, forward collision warning, and overtaking vehicle identification. A new cumulative density function (CDF)-based symmetry verification method is proposed for the detection of front vehicles. The motion cue obtained from optical flow is used for overtaking detection. It is further combined with a convolutional neural network to remove repetitive patterns for more accurate overtaking vehicle identification. Our approach is able to adapt to a variety of highway and urban scenarios under different illumination conditions. The experiments and performance evaluation carried out on real scene images demonstrate the effectiveness of the proposed techniques.
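
A simplified sketch of CDF-based symmetry verification follows (assumed thresholds and bin counts, not the paper's exact formulation): build the cumulative distribution of pixel intensities for the left half and for the horizontally mirrored right half of a candidate ROI, and accept the candidate as a vehicle rear when the two CDFs are close.

import numpy as np

def intensity_cdf(region, bins=64):
    hist, _ = np.histogram(region, bins=bins, range=(0, 255))
    cdf = np.cumsum(hist).astype(np.float64)
    return cdf / cdf[-1] if cdf[-1] > 0 else cdf

def is_symmetric_candidate(roi, threshold=0.05):
    """roi: 2D grayscale array of a detection candidate (vehicle rear)."""
    h, w = roi.shape
    left = roi[:, : w // 2]
    right = np.fliplr(roi[:, w - w // 2:])       # mirror the right half
    # Maximum CDF discrepancy (a Kolmogorov-Smirnov-style distance).
    diff = np.max(np.abs(intensity_cdf(left) - intensity_cdf(right)))
    return diff < threshold

# Toy example: a symmetric dark rectangle on a bright background passes.
roi = np.full((40, 80), 200, dtype=np.uint8)
roi[10:30, 20:60] = 40
print(is_symmetric_candidate(roi))   # True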

8.
Sensors (Basel) ; 19(10)2019 May 15.
Article in English | MEDLINE | ID: mdl-31096676

ABSTRACT

People with color vision deficiency (CVD) cannot fully perceive the colorful world due to damage to their color-receptive nerves. In this work, we present an image enhancement approach to help colorblind people identify the colors they are unable to distinguish naturally. An image re-coloring algorithm based on eigenvector processing is proposed for robust color separation under color deficiency transformation. It is shown that the eigenvector of color vision deficiency is distorted by an angle in the λ, Y-B, R-G color space. The experimental results show that our approach is useful for the recognition and separation of CVD-confusing colors in natural scene images. Compared to existing techniques, our results on natural images with CVD simulation perform very well in terms of RMS, HDR-VDP-2, and an IRB-approved human test. Both the objective comparison with previous works and the subjective evaluation on human tests validate the effectiveness of the proposed method.


Subjects
Color Perception/physiology; Color Vision Defects/therapy; Image Enhancement/methods; Algorithms; Color Vision Defects/physiopathology; Female; Humans; Male
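
An illustrative sketch of re-coloring by rotating chromatic components is given below. It uses a simplified opponent space and an assumed compensation angle rather than the paper's calibrated eigenvector transform: move RGB into a luminance / R-G / Y-B style space, rotate the two chromatic axes to shift colors that CVD viewers confuse, and map back to RGB.

import numpy as np

# Simple opponent-space basis: rows ~ luminance, red-green, yellow-blue.
M = np.array([[1/3,  1/3,  1/3],
              [1.0, -1.0,  0.0],
              [0.5,  0.5, -1.0]])
M_inv = np.linalg.inv(M)

def recolor(rgb, angle_deg=20.0):
    """rgb: HxWx3 float image in [0, 1]; rotates the chromatic plane."""
    h, w, _ = rgb.shape
    opp = rgb.reshape(-1, 3) @ M.T
    a = np.deg2rad(angle_deg)
    rot = np.array([[1, 0, 0],
                    [0, np.cos(a), -np.sin(a)],
                    [0, np.sin(a),  np.cos(a)]])
    out = (opp @ rot.T) @ M_inv.T
    return np.clip(out.reshape(h, w, 3), 0.0, 1.0)

# Toy usage on a random image.
img = np.random.rand(32, 32, 3)
print(recolor(img).shape)   # (32, 32, 3)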
9.
Sensors (Basel) ; 14(9): 16508-31, 2014 Sep 04.
Article in English | MEDLINE | ID: mdl-25192317

ABSTRACT

In this paper, we present a framework for a hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates stereo matching between the heterogeneous image formations. Experimental results on both synthetic data and real scene images demonstrate the feasibility of our approach.


Subjects
Artificial Intelligence; Image Interpretation, Computer-Assisted/instrumentation; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/instrumentation; Imaging, Three-Dimensional/methods; Robotics/instrumentation; Robotics/methods; Algorithms; Equipment Design; Equipment Failure Analysis; Motion (Physics); Systems Integration
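
The core rectification step, mapping an omnidirectional pixel into a virtual perspective camera so the hybrid pair can be matched like an ordinary stereo pair, can be sketched with the unified sphere model. The calibration values (K_omni, xi, K_virt) below are illustrative assumptions, not the paper's.

import numpy as np

def omni_pixel_to_sphere(u, v, K, xi):
    """Lift a catadioptric pixel to its viewing direction on the unit sphere."""
    x, y, _ = np.linalg.inv(K) @ np.array([u, v, 1.0])
    rho2 = x * x + y * y
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * rho2)) / (rho2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])   # unit-norm direction

def sphere_to_virtual_pixel(Xs, K_virt, R_virt=np.eye(3)):
    """Re-project the sphere point with a (possibly rotated) virtual pinhole camera."""
    ray = R_virt @ Xs
    p = K_virt @ (ray / ray[2])
    return p[:2]

# Illustrative calibration values (not from the paper).
K_omni = np.array([[400.0, 0, 512], [0, 400.0, 384], [0, 0, 1]])
K_virt = np.array([[300.0, 0, 320], [0, 300.0, 240], [0, 0, 1]])
xi = 0.9

Xs = omni_pixel_to_sphere(600.0, 420.0, K_omni, xi)
print(np.linalg.norm(Xs))                      # ~1.0, point lies on the sphere
print(sphere_to_virtual_pixel(Xs, K_virt))     # pixel in the rectified view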
10.
J Opt Soc Am A Opt Image Sci Vis ; 29(8): 1694-706, 2012 Aug 01.
Article in English | MEDLINE | ID: mdl-23201887

ABSTRACT

A defocus blur identification technique based on histogram analysis of a real edge image is presented. The image defocus process of a camera is formulated by incorporating the nonlinear camera response and an intensity-dependent noise model. Histogram matching between the synthesized and real defocused regions is then carried out with intensity-dependent filtering. By iteratively changing the point-spread function parameters, the best blur extent is identified from the histogram comparison. We performed experiments on both synthetic and real edge images. The results demonstrate the robustness and feasibility of the proposed technique.
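
A bare-bones sketch of the iterative search follows. It assumes a Gaussian PSF, an L1 histogram distance, and a noiseless camera response, all simplifications relative to the paper: synthesize a blurred step edge for each candidate PSF width and keep the width whose histogram best matches the observed edge region.

import numpy as np
from scipy.ndimage import gaussian_filter

def edge_histogram(region, bins=32):
    hist, _ = np.histogram(region, bins=bins, range=(0.0, 1.0), density=True)
    return hist

def identify_blur(observed_edge, sigmas=np.linspace(0.5, 5.0, 10)):
    """observed_edge: 2D float array in [0, 1] containing a blurred step edge."""
    h, w = observed_edge.shape
    step = np.zeros((h, w))
    step[:, w // 2:] = 1.0                      # ideal sharp edge
    target = edge_histogram(observed_edge)
    best_sigma, best_dist = None, np.inf
    for s in sigmas:
        synthetic = gaussian_filter(step, sigma=s)
        dist = np.abs(edge_histogram(synthetic) - target).sum()
        if dist < best_dist:
            best_sigma, best_dist = s, dist
    return best_sigma

# Toy check: recover the blur of an edge defocused with sigma = 2.0.
truth = gaussian_filter(np.pad(np.zeros((64, 32)), ((0, 0), (0, 32)),
                               constant_values=1.0), sigma=2.0)
print(identify_blur(truth))                     # 2.0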
