Results 1 - 13 of 13
1.
Sensors (Basel) ; 24(4)2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38400227

ABSTRACT

Among the numerous gaze-estimation methods currently available, appearance-based methods predominantly use RGB images as input, employing convolutional neural networks (CNNs) on detected facial images to regress gaze angles or gaze points, while model-based methods require high-resolution images to obtain a clear geometric model of the eyeball. Both approaches face significant challenges in outdoor environments and practical application scenarios. This paper proposes a model-based gaze-estimation algorithm using a low-resolution 3D TOF camera. This study uses infrared images instead of RGB images as input to overcome the impact of varying illumination intensity in the environment on gaze estimation. We utilized a trained YOLOv8 neural network model to detect eye landmarks in captured facial images. Combined with the depth map from a time-of-flight (TOF) camera, we calculated the 3D coordinates of the canthus points of a single eye of the subject. Based on this, we fitted a 3D geometric model of the eyeball to determine the subject's gaze angle. Experimental validation showed that our method achieved root mean square errors of 6.03° and 4.83° in the horizontal and vertical directions, respectively, for the detection of the subject's gaze angle. We also tested the proposed method in a real car-driving environment, achieving stable driver gaze detection at various locations inside the car, such as the dashboard, driver mirror, and in-vehicle screen.
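As a rough illustration of the final step — turning the fitted eyeball model into a gaze angle — the following sketch computes horizontal and vertical angles from two assumed 3D points (a fitted eyeball center and a pupil center). It is not the paper's implementation; the frame conventions and the prior eyeball-fitting step are assumptions.

```python
import math

def gaze_angles(eyeball_center, pupil_center):
    """Horizontal (yaw) and vertical (pitch) gaze angles in degrees.

    Points are (x, y, z) in a camera frame with x right, y down, z away
    from the camera; a subject facing the camera has a gaze vector with a
    negative z component, so angles are measured from the -z axis.
    """
    gx = pupil_center[0] - eyeball_center[0]
    gy = pupil_center[1] - eyeball_center[1]
    gz = pupil_center[2] - eyeball_center[2]
    yaw = math.degrees(math.atan2(gx, -gz))    # angle toward +x (camera right)
    pitch = math.degrees(math.atan2(-gy, -gz)) # positive when looking up (y is down)
    return yaw, pitch
```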

2.
Sensors (Basel) ; 22(21)2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36366067

ABSTRACT

Depth sensing is an important issue in many applications, such as Augmented Reality (AR), eXtended Reality (XR), and the Metaverse. For 3D reconstruction, a depth map can be acquired by a stereo camera and a Time-of-Flight (ToF) sensor. We used both sensors complementarily to improve the accuracy of the 3D information in the data. First, we applied a generalized multi-camera calibration method that uses both color and depth information. Next, the depth maps of the two sensors were fused by a 3D registration and reprojection approach. Then, hole-filling was applied to refine the new depth map from the ToF-stereo fused data. Finally, a surface reconstruction technique was used to generate mesh data from the ToF-stereo fused point cloud data. The proposed procedure was implemented and tested with real-world data and compared with various algorithms to validate its efficiency.
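The registration/reprojection step can be sketched as back-projecting a ToF depth pixel to 3D, transforming it into the stereo camera's frame, and projecting it onto that image plane. This is a minimal pinhole-model sketch; the intrinsics and the rigid transform (which the paper obtains from multi-camera calibration) are placeholders.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pixel (u, v) with metric depth -> 3D point in the source camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def transform(point, R, t):
    """Apply a rigid transform p' = R @ p + t (R as a 3x3 nested list)."""
    return tuple(sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3))

def project(point, fx, fy, cx, cy):
    """3D point in the target camera frame -> pixel coordinates plus depth."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy, z)
```

Running a depth pixel through all three functions reprojects it into the second camera's depth map, where the fusion and hole-filling operate.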

3.
Sensors (Basel) ; 21(24)2021 Dec 20.
Article in English | MEDLINE | ID: mdl-34960586

ABSTRACT

This paper investigates the problem of spacecraft relative navigation with respect to an unknown target during close-proximity operations in an on-orbit service system. The serving spacecraft is equipped with a Time-of-Flight (ToF) camera for object recognition and feature detection. A fast and robust relative navigation strategy is presented that requires no extra information about the target, using its natural circle features. The architecture of the proposed strategy consists of three components. First, a point cloud segmentation method based on an auxiliary gray image is developed for fast extraction of the circle-feature point cloud of the target. Second, a new parameter-fitting method for circle features is proposed, in which the circle feature is calculated with two different geometric models and the results are fused. Finally, a specific definition of the coordinate frame system is introduced to solve the relative pose with respect to the uncooperative target. To validate the efficiency of the segmentation, an experimental test was conducted on real-time image data acquired by the ToF camera; the total time consumption is reduced by 94%. In addition, numerical simulations were carried out to evaluate the proposed navigation algorithm, which shows good robustness under different noise levels.
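A minimal in-plane illustration of circle-feature fitting is the classic three-point circumcircle. The paper fits the full extracted point cloud with two geometric models and fuses the results, but the underlying computation resembles this sketch (points are assumed already projected onto the fitted circle plane):

```python
def circle_from_3_points(p1, p2, p3):
    """Circumcenter and radius of the circle through three 2D points."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    # Twice the signed area of the triangle; zero means collinear points.
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = ((ax - ux)**2 + (ay - uy)**2) ** 0.5
    return (ux, uy), r
```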

4.
Sensors (Basel) ; 20(5)2020 Feb 26.
Article in English | MEDLINE | ID: mdl-32110910

ABSTRACT

Visual inertial odometry (VIO) is the front end of visual simultaneous localization and mapping (vSLAM) methods and has been actively studied in recent years. In this context, a time-of-flight (ToF) camera, with its high accuracy of depth measurement and strong resilience to ambient light of variable intensity, draws our interest. Thus, in this paper, we present a real-time visual inertial system based on a low-cost ToF camera. The iterative closest point (ICP) methodology is adopted, incorporating salient-point selection criteria and a robustness-weighting function. In addition, an error-state Kalman filter is used to fuse the ICP estimates with inertial measurement unit (IMU) data. To test its capability, the ToF-VIO system was mounted on an unmanned aerial vehicle (UAV) platform and operated in a variable-light environment. The estimated flight trajectory was compared with ground truth data captured by a motion capture system. Real flight experiments were also conducted in a dark indoor environment, demonstrating good agreement with the estimated performance. The current system is thus shown to be accurate and efficient for UAV applications in dark and Global Navigation Satellite System (GNSS)-denied environments.
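The robustness-weighting idea in the ICP step can be sketched as one translation-only update in which a Huber-style weight down-weights outlier correspondences. This is illustrative only, not the paper's implementation (which also estimates rotation and fuses IMU data through an error-state Kalman filter); the threshold value is made up.

```python
def icp_translation_step(src, dst, delta=0.05):
    """One robust translation-only ICP update.

    Pair each source point with its nearest destination point, weight the
    residual with a Huber-style function (full weight within `delta` meters,
    decaying weight beyond), and return the weighted mean offset.
    """
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    total_w = 0.0
    offset = [0.0, 0.0, 0.0]
    for p in src:
        q = min(dst, key=lambda d: dist(p, d))  # nearest-neighbor correspondence
        r = dist(p, q)
        w = 1.0 if r <= delta else delta / r    # Huber-style robustness weight
        total_w += w
        for i in range(3):
            offset[i] += w * (q[i] - p[i])
    return tuple(o / total_w for o in offset)
```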

5.
Sensors (Basel) ; 20(2)2020 Jan 09.
Article in English | MEDLINE | ID: mdl-31936546

ABSTRACT

A trajectory-based writing system refers to writing a linguistic character or word in free space by moving a finger, marker, or handheld device. It is widely applicable where traditional pen-up and pen-down writing systems are troublesome. Due to its simple writing style, it has a great advantage over gesture-based systems. However, recognition is a challenging task because of non-uniform characters and different writing styles. In this research, we developed an air-writing recognition system using three-dimensional (3D) trajectories collected by a depth camera that tracks the fingertip. For better feature selection, nearest-neighbor and root-point translation were used to normalize the trajectory. We employed a long short-term memory (LSTM) network and a convolutional neural network (CNN) as recognizers. The model was tested and verified on a self-collected dataset. To evaluate the robustness of our model, we also employed the 6D motion gesture (6DMG) alphanumeric character dataset and achieved 99.32% accuracy, the highest reported to date. This verifies that the proposed model performs consistently across digits and characters. Moreover, we publish a dataset containing 21,000 digits, which addresses the lack of datasets in current research.
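The root-point translation used to normalize trajectories can be sketched as shifting the trajectory so its first sample becomes the origin and scaling by the overall extent, so writing position and size do not affect recognition. The paper's exact scheme (including the nearest-neighbor step) may differ in detail.

```python
def normalize_trajectory(points):
    """Root-point translation plus extent scaling for a 3D fingertip trajectory.

    `points` is a list of (x, y, z) samples; the first sample is mapped to
    the origin and all coordinates are divided by the largest axis range.
    """
    x0, y0, z0 = points[0]
    shifted = [(x - x0, y - y0, z - z0) for x, y, z in points]
    span = max(
        max(c[i] for c in shifted) - min(c[i] for c in shifted)
        for i in range(3)
    ) or 1.0  # guard against a degenerate single-point trajectory
    return [(x / span, y / span, z / span) for x, y, z in shifted]
```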

6.
J Biomed Inform ; 89: 81-100, 2019 01.
Article in English | MEDLINE | ID: mdl-30521854

ABSTRACT

Strokes, surgeries, or degenerative diseases can impair motor abilities and balance. Long-term rehabilitation is often the only way to recover, as completely as possible, these lost skills. To be effective, this type of rehabilitation should follow three main rules. First, rehabilitation exercises should be able to keep the patient's motivation high. Second, each exercise should be customizable to the patient's needs. Third, the patient's performance should be evaluated objectively, i.e., by measuring the patient's movements with respect to an optimal reference model. To meet these requirements, this paper proposes an interactive and low-cost full-body rehabilitation framework for the generation of 3D immersive serious games. The framework combines two Natural User Interfaces (NUIs), for hand and body modeling, respectively, and a Head Mounted Display (HMD) to provide the patient with an interactive and highly defined Virtual Environment (VE) for playing stimulating rehabilitation exercises. The paper presents the overall architecture of the framework, including the environment for the generation of the pilot serious games and the main features of the hand and body models used. The effectiveness of the proposed system is shown on a group of ninety-two patients. In a first stage, a pool of seven rehabilitation therapists evaluated the results of the patients on the basis of three reference rehabilitation exercises, confirming a significant gradual recovery of the patients' skills. Moreover, the feedback received from the therapists and patients who used the system pointed out remarkable results in terms of motivation, usability, and customization. In a second stage, by comparing the current state of the art in the rehabilitation area with the proposed system, we observed that the latter can be considered a concrete contribution in terms of versatility, immersiveness, and novelty. In a final stage, by training a Gated Recurrent Unit Recurrent Neural Network (GRU-RNN) on healthy subjects (i.e., a baseline), we also provide a reference model to objectively evaluate the degree of the patients' performance. To estimate the effectiveness of this last aspect of the proposed approach, we used the NTU RGB+D Action Recognition dataset, obtaining results comparable with the current literature in action recognition.


Subjects
Exercise Therapy/methods; Rehabilitation/methods; Video Games; Virtual Reality Exposure Therapy/methods; Humans
7.
Sensors (Basel) ; 19(6)2019 Mar 14.
Article in English | MEDLINE | ID: mdl-30875816

ABSTRACT

This paper focuses on designing a cost function for selecting a foothold for a physical quadruped robot walking on rough terrain. The quadruped robot is modeled with Denavit-Hartenberg (DH) parameters, and a default foothold is defined based on the model. A Time of Flight (TOF) camera is used to perceive terrain information and construct a 2.5D elevation map, on which terrain features are detected. The cost function is defined as the weighted sum of several elements, including terrain features and features of the relative pose between the default foothold and other candidates. It is nearly impossible to hand-code the weight vector of the function, so the weights are learned using Support Vector Machine (SVM) techniques, with a training data set generated from the 2.5D elevation map of a real terrain under the guidance of experts: four candidate footholds around the default foothold are randomly sampled, and an expert ranks the four candidates, rotating and scaling the view to see the terrain clearly. Lastly, the learned cost function is used to select a suitable foothold and drive the quadruped robot to walk autonomously across rough terrain with wooden steps. Compared to the approach with the original standard static gait, the proposed cost function shows better performance.
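The weighted-sum cost and foothold selection can be sketched as follows; the feature names and weight values are illustrative stand-ins for the paper's feature set and SVM-learned weight vector.

```python
def foothold_cost(features, weights):
    """Cost of a candidate foothold as a weighted sum of its features.

    `features` maps feature name -> value (e.g. terrain slope, distance from
    the default foothold); `weights` maps the same names to learned weights.
    """
    return sum(weights[name] * value for name, value in features.items())

def best_foothold(candidates, weights):
    """Select the candidate foothold with the lowest cost."""
    return min(candidates, key=lambda f: foothold_cost(f, weights))
```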

8.
Sensors (Basel) ; 17(1)2017 Jan 05.
Article in English | MEDLINE | ID: mdl-28067767

ABSTRACT

Time-of-Flight (ToF) cameras, a technology which has developed rapidly in recent years, are 3D imaging sensors providing a depth image as well as an amplitude image at a high frame rate. As a ToF camera is limited by the imaging conditions and external environment, its captured data are always subject to certain errors. This paper analyzes the influence of typical external distractions, including material, color, distance, and lighting, on the depth error of ToF cameras. Our experiments indicated that factors such as lighting, color, material, and distance have different influences on the depth error; however, since the forms of the errors are uncertain, it is difficult to summarize them in a unified law. To further improve the measurement accuracy, this paper proposes an error correction method based on a Particle Filter-Support Vector Machine (PF-SVM). The experimental results showed that this method can effectively reduce the depth error of ToF cameras to 4.6 mm within the full measurement range (0.5-5 m).
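The way such a learned error model is applied — corrected depth = raw depth minus predicted error — can be sketched with a simple piecewise-linear stand-in for the PF-SVM regressor. The calibration samples below are made up; only the usage pattern is illustrated.

```python
import bisect

def make_error_model(calib_depths, calib_errors):
    """Build a piecewise-linear depth-error predictor from calibration samples.

    `calib_depths` (meters, sorted ascending) and `calib_errors` (meters) are
    measured (depth, error) pairs; the returned callable interpolates between
    them and clamps outside the calibrated range. Applying it looks like:
    corrected = raw - predict(raw).
    """
    def predict(d):
        i = bisect.bisect_left(calib_depths, d)
        if i == 0:
            return calib_errors[0]
        if i == len(calib_depths):
            return calib_errors[-1]
        d0, d1 = calib_depths[i - 1], calib_depths[i]
        e0, e1 = calib_errors[i - 1], calib_errors[i]
        return e0 + (e1 - e0) * (d - d0) / (d1 - d0)
    return predict
```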

9.
Sensors (Basel) ; 17(9)2017 Sep 12.
Article in English | MEDLINE | ID: mdl-28895892

ABSTRACT

Crop breeding plays an important role in modern agriculture, improving plant performance and increasing yield. Identifying the genes that are responsible for beneficial traits greatly facilitates plant breeding efforts for increasing crop production. However, associating genes and their functions with agronomic traits requires researchers to observe, measure, record, and analyze phenotypes of large numbers of plants, a repetitive and error-prone job if performed manually. An automated seedling phenotyping system aimed at replacing manual measurement, reducing sampling time, and increasing the allowable work time is thus highly valuable. Toward this goal, we developed an automated corn seedling phenotyping platform based on a time-of-flight (ToF) camera and an industrial robot arm. The ToF camera is mounted on the end effector of the robot arm, which positions the camera at different viewpoints for acquiring 3D point cloud data. A camera-to-arm transformation matrix was calculated using a hand-eye calibration procedure and applied to transfer the different viewpoints into an arm-based coordinate frame. Point cloud data filters were developed to remove the noise in the background and in the merged seedling point clouds. A 3D-to-2D projection and an x-axis pixel density distribution method were used to segment the stem and leaves. Finally, the separated leaves were fitted with 3D curves for morphological trait characterization. The platform was tested on a sample of 60 corn plants at early growth stages with two to five leaves. The error ratios of the stem height and leaf length measurements are 13.7% and 13.1%, respectively, demonstrating the feasibility of this robotic system for automated corn seedling phenotyping.
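The x-axis pixel density idea for stem segmentation can be sketched as follows: after projecting the point cloud to a 2D binary image, the stem shows up as the column with the highest pixel count. A minimal sketch of that one step, not the paper's full segmentation pipeline.

```python
def stem_column(projection):
    """Locate the stem in a 2D binary projection of a seedling point cloud.

    `projection` is a list of rows of 0/1 pixels; the stem, being roughly
    vertical, produces the x column with the highest pixel density.
    """
    width = len(projection[0])
    density = [sum(row[x] for row in projection) for x in range(width)]
    return max(range(width), key=lambda x: density[x])
```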

10.
Sensors (Basel) ; 17(4)2017 Apr 19.
Article in English | MEDLINE | ID: mdl-28422075

ABSTRACT

This paper proposes a multisensory system for the detection and localization of peripheral subcutaneous veins, as a first step toward achieving automatic robotic insertion of catheters in the near future. The multisensory system is based on the combination of a SWIR (Short-Wave Infrared) camera, a TOF (Time-Of-Flight) camera, and a NIR (Near Infrared) lighting source. The associated algorithm consists of two main parts: one devoted to feature extraction from the SWIR image, and another to the registration of the range data provided by the TOF camera with the SWIR image and the results of the peripheral vein detection. In this way, the detected subcutaneous veins are mapped onto the 3D reconstructed surface, providing a full representation of the region of interest for automatic catheter insertion. Several experimental tests were carried out in order to evaluate the capabilities of the presented approach. Preliminary results demonstrate the feasibility of the proposed design and highlight the potential benefits of the solution.


Subjects
Veins; Algorithms
11.
Front Plant Sci ; 13: 1099033, 2022.
Article in English | MEDLINE | ID: mdl-36733593

ABSTRACT

Aiming at the stability of hand-eye calibration in fruit-picking scenes, a simple hand-eye calibration method for a picking robot, based on optimization combined with a TOF (Time of Flight) camera, is proposed. The method fixes the TOF depth camera at the end of the robot, operates the robot to photograph a calibration board from different poses, and records the current photographing poses, ensuring that each group of pictures is clear and complete. Imaging the calibration board with the TOF depth camera yields multiple sets of calibration-board depth maps and corresponding point cloud data, i.e., the "eye" data. A circle-center extraction and positioning algorithm extracts the circle-center points on each calibration board, and a circle-center sorting method based on vector angles and center-of-mass coordinates is designed to resolve the circle-center ambiguities caused by factors such as lens distortion, uneven illumination, and different photographing poses. Using the tool center point of the end effector, the coordinates of the circle-center points at the four corners of each calibration board are located in turn in the robot end coordinate system, yielding the "hand" data. The hand-eye transformation is then solved with the SVD method; according to the obtained point residuals, the weight coefficients of the marker points are redistributed and the hand-eye parameters are iteratively optimized, which improves the accuracy and stability of the calibration. The proposed method also has a better ability to locate gross errors in environments with large gross errors. To verify the feasibility of the hand-eye calibration method, an indoor picking experiment was simulated, in which peaches were identified and positioned by combining deep learning and 3D vision.
A JAKA six-axis robot and a TuYang depth camera were used to build the experimental platform. The experimental results show that the method is simple to operate, has good stability, and uses a calibration board that is easy to manufacture and low in cost, meeting the work accuracy requirements.
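The SVD step for solving the rigid transform from matched "hand"/"eye" point pairs can be sketched with the standard Kabsch procedure; the paper additionally reweights points by their residuals and iterates, which is omitted here.

```python
import numpy as np

def rigid_transform_svd(hand_pts, eye_pts):
    """Least-squares rotation R and translation t mapping camera ("eye")
    points to robot ("hand") points via the SVD/Kabsch method."""
    A = np.asarray(eye_pts, dtype=float)
    B = np.asarray(hand_pts, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection solution
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```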

12.
Front Neuroinform ; 13: 21, 2019.
Article in English | MEDLINE | ID: mdl-31024285

ABSTRACT

A crucial link in electroencephalograph (EEG) technology is the accurate estimation of EEG electrode positions on a specific human head, which is very useful for precise analysis of brain functions. Photogrammetry has become an effective method in this field. This study proposes a more reliable and efficient method which can acquire 3D information conveniently and locate the source signal accurately in real time. The main objective is the identification and 3D localization of EEG electrode positions using a system consisting of CCD cameras and Time-of-Flight (TOF) cameras. To calibrate the camera group accurately, and unlike previous camera calibration approaches, the method introduced here uses the point cloud directly rather than the depth image. Experimental results indicate that the typical reconstruction distance error in this study is 3.26 mm for real-time applications, which is much better than the widely used electromagnetic method in clinical medicine. The accuracy can be further improved to a great extent by using a high-resolution camera.

13.
Comput Biol Med ; 101: 174-183, 2018 10 01.
Article in English | MEDLINE | ID: mdl-30145437

ABSTRACT

This paper proposes a reliable approach for human gait symmetry assessment using a Time-of-Flight (ToF) depth camera and two mirrors. The setup formed by these devices provides a sequence of 3D point clouds that is the input to our system. A cylindrical histogram is estimated to describe the posture in each point cloud. The sequence of such histograms is then separated into two sequences of sub-histograms representing the two half-bodies. A cross-correlation technique is finally applied to produce values describing gait symmetry indices. The evaluation was performed on 9 different gait types to demonstrate the ability of our approach to assess gait symmetry. A comparison between our system and related methods that employ different input data types is also provided.
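The cross-correlation step can be sketched on two scalar half-body feature sequences: the symmetry index is the maximum normalized correlation over all circular lags, so a phase-shifted copy scores 1. A simplified sketch — the paper correlates sequences of cylindrical-histogram descriptors rather than scalars.

```python
def symmetry_index(left_seq, right_seq):
    """Gait-symmetry score in [-1, 1] via normalized circular cross-correlation.

    Both sequences must have the same length; the score is the maximum
    correlation of the mean-removed sequences over all circular lags.
    """
    n = len(left_seq)
    ml = sum(left_seq) / n
    mr = sum(right_seq) / n
    dl = [v - ml for v in left_seq]
    dr = [v - mr for v in right_seq]
    norm = (sum(v * v for v in dl) * sum(v * v for v in dr)) ** 0.5
    if norm == 0:
        return 1.0  # both sequences constant: trivially symmetric
    best = max(
        sum(dl[i] * dr[(i + lag) % n] for i in range(n))
        for lag in range(n)
    )
    return best / norm
```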


Subjects
Gait/physiology; Image Processing, Computer-Assisted/methods; Posture/physiology; Walking/physiology; Biomechanical Phenomena; Humans