Results 1 - 20 of 24
1.
IEEE Sens J ; 24(5): 6888-6897, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38476583

ABSTRACT

We developed an ankle-worn gait monitoring system for tracking gait parameters, including length, width, and height. The system uses ankle bracelets equipped with wide-angle infrared (IR) stereo cameras that track a marker worn on the opposing ankle. A computer vision algorithm, also developed in this work, processes the imaged marker positions to estimate the length, width, and height of the wearer's gait. In tests on multiple participants, the prototype achieved average accuracies of 96.52%, 94.46%, and 95.29% for gait length, width, and height, respectively, despite the distortion of the wide-angle images. The OptiGait system offers a cost-effective and user-friendly alternative to existing gait parameter sensing systems, delivering comparable accuracy for gait length and width. Notably, it also measures gait height, a capability not previously reported in the literature.
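
The abstract does not include code; as an illustration of the underlying geometry, the sketch below triangulates the opposite-ankle marker from a rectified stereo pair and derives step length, width, and height from two successive foot placements. All camera parameters, pixel coordinates, and function names are illustrative assumptions, not values from the paper.

```python
# Minimal sketch, assuming a rectified pinhole stereo pair with square pixels.
import numpy as np

def triangulate(uv_left, uv_right, fx, cx, cy, baseline_m):
    """Pinhole triangulation for a rectified stereo pair."""
    disparity = uv_left[0] - uv_right[0]
    z = fx * baseline_m / disparity          # depth along the optical axis
    x = (uv_left[0] - cx) * z / fx           # lateral offset
    y = (uv_left[1] - cy) * z / fx           # vertical offset (square pixels)
    return np.array([x, y, z])

# Two successive marker positions (pixels) seen by the ankle-worn camera pair.
p0 = triangulate((710, 420), (652, 420), fx=600, cx=640, cy=360, baseline_m=0.04)
p1 = triangulate((655, 380), (610, 380), fx=600, cx=640, cy=360, baseline_m=0.04)

step = p1 - p0
print(f"length={abs(step[2]):.3f} m, width={abs(step[0]):.3f} m, height={abs(step[1]):.3f} m")
```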

2.
Sensors (Basel) ; 23(3)2023 Jan 26.
Article in English | MEDLINE | ID: mdl-36772441

ABSTRACT

Nowadays, state-of-the-art direct visual odometry (VO) methods essentially rely on points to estimate the camera pose and reconstruct the environment. Direct Sparse Odometry (DSO) has become the standard technique, and many approaches have been derived from it. Only recently, however, have two monocular plane-based DSOs been presented. The first uses a learning-based plane estimator to generate coarse planes as input for the optimization; when these coarse estimates are too far from the minimum, the optimization may fail, so the overall result depends on the quality of the plane predictions and is restricted to the training data domain. The second detects only vertically and horizontally oriented planes, which suits structured environments. To the best of our knowledge, we propose the first stereo plane-based VO inspired by the DSO framework. Unlike the methods above, our approach uses planes exclusively as features in the sliding-window optimization and parameterizes the pose as a dual quaternion. Our experiments show that the method performs comparably to Stereo DSO, a point-based approach.
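
For readers unfamiliar with the pose parameterization named above, the following is an assumption-level sketch of a dual quaternion representing a rigid transform, composed and decoded with plain NumPy; it illustrates the algebra, not the paper's implementation.

```python
import numpy as np

def qmul(a, b):  # Hamilton product, quaternions as [w, x, y, z]
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def dq_from_rt(q, t):
    """Dual quaternion (real, dual) for x -> R x + t."""
    qd = 0.5 * qmul(np.array([0.0, *t]), q)
    return q, qd

def dq_compose(dq1, dq2):
    r1, d1 = dq1; r2, d2 = dq2
    return qmul(r1, r2), qmul(r1, d2) + qmul(d1, r2)

def dq_translation(dq):
    r, d = dq
    t = 2.0 * qmul(d, r * np.array([1, -1, -1, -1]))  # 2 * qd * conj(qr)
    return t[1:]

# 90-degree yaw plus 1 m forward, composed with itself: translation [1, 1, 0].
q_yaw = np.array([np.cos(np.pi/4), 0, 0, np.sin(np.pi/4)])
pose = dq_from_rt(q_yaw, [1.0, 0.0, 0.0])
print(dq_translation(dq_compose(pose, pose)))
```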

3.
Sensors (Basel) ; 23(6)2023 Mar 17.
Article in English | MEDLINE | ID: mdl-36991934

ABSTRACT

Methods based on 64-beam LiDAR can provide very precise 3D object detection, but such highly accurate LiDAR sensors are extremely costly: a 64-beam model can cost approximately USD 75,000. We previously proposed SLS-Fusion (sparse LiDAR and stereo fusion), which fuses a low-cost four-beam LiDAR with stereo cameras and outperforms most advanced stereo-LiDAR fusion methods. In this paper, we analyze how the stereo and LiDAR sensors contribute to the performance of the SLS-Fusion model for 3D object detection as a function of the number of LiDAR beams used. Data from the stereo camera play a significant role in the fusion model, but this contribution needs to be quantified, along with how it varies with the number of LiDAR beams used inside the model. To evaluate the roles of the parts of the SLS-Fusion network that represent the LiDAR and stereo camera architectures, we therefore propose dividing the model into two independent decoder networks. The results show that, starting from four beams, increasing the number of LiDAR beams has no significant impact on SLS-Fusion performance. These findings can guide practitioners' design decisions.

4.
Sensors (Basel) ; 23(6)2023 Mar 19.
Article in English | MEDLINE | ID: mdl-36991954

ABSTRACT

We propose an omnidirectional measurement method without blind spots that uses a convex mirror, which in principle introduces no chromatic aberration, and that exploits vertical disparity by installing cameras at the top and bottom of the device. In recent years, research on autonomous cars and robots has grown significantly, and three-dimensional measurement of the surrounding environment has become indispensable in these fields. Camera-based depth sensing is one of the most important means of recognizing the surroundings. Previous studies have attempted wide-area measurement using fisheye and full spherical panoramic cameras, but these approaches suffer from blind spots or require multiple cameras to cover all directions. This paper therefore describes a stereo camera system built around a device that captures an omnidirectional image in a single shot, enabling omnidirectional measurement with only two cameras, something that was difficult to achieve with conventional stereo cameras. Experiments confirmed an accuracy improvement of up to 37.4% compared to previous studies. The system also generated depth images that capture distances in all directions within a single frame, demonstrating the feasibility of omnidirectional measurement with two cameras.
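
As a simplified stand-in for the catadioptric geometry (the mirror model is omitted), the sketch below shows how vertical disparity yields range: a point seen at two elevation angles from viewpoints separated by a vertical baseline. Angles and baseline are made-up values, not the paper's.

```python
import math

def range_from_elevations(phi_bottom, phi_top, baseline_m):
    # z - z_bottom = rho*tan(phi_bottom); z - z_top = rho*tan(phi_top);
    # z_top - z_bottom = baseline  =>  rho = B / (tan(phi_bottom) - tan(phi_top))
    return baseline_m / (math.tan(phi_bottom) - math.tan(phi_top))

rho = range_from_elevations(math.radians(12.0), math.radians(5.0), baseline_m=0.30)
height = rho * math.tan(math.radians(12.0))  # height above the lower viewpoint
print(f"range = {rho:.2f} m, height = {height:.2f} m")
```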

5.
Sensors (Basel) ; 23(24)2023 Dec 07.
Article in English | MEDLINE | ID: mdl-38139510

ABSTRACT

To effectively balance enforced guidance and regulation during a pandemic, and to limit infection transmission while keeping public transportation services safe and operational, it is imperative to understand and monitor environmental conditions and typical behavioural patterns within such spaces. This paper explores the feasibility of social distancing on public transport and the use of advanced computer vision techniques to measure it accurately. A low-cost depth-sensing system was deployed on a public bus to approximate social-distancing measures and study passenger habits in relation to them. The results indicate that social distancing on this form of public transport is unlikely beyond a 28% occupancy threshold: at any one point in time, an individual has an 89% chance of being within 1-2 m of at least one other passenger and a 57% chance of being within one metre of another passenger. Passenger seating preferences were also analysed, clearly showing that typical passengers prioritize ease of access, comfort, and seats with a view over maximising social distance. With a highly detailed and comprehensive set of acquired data and accurate measurement capability, the employed equipment and processing methodology also prove to be a robust approach for this application.


Subject(s)
Physical Distancing; Transportation; Transportation/methods; Pandemics/prevention & control
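
A hedged sketch of the distance statistics described above: given estimated floor-plane positions of detected passengers (as a depth sensor might provide), it counts how many are within 1 m and within 1-2 m of another passenger. The positions are illustrative.

```python
import numpy as np

positions = np.array([[0.4, 1.2], [0.4, 2.1], [1.1, 1.2], [2.3, 5.0]])  # metres

d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)                 # ignore self-distances
nearest = d.min(axis=1)                     # nearest neighbour per passenger

print("share < 1 m of someone:", np.mean(nearest < 1.0))
print("share 1-2 m of someone:", np.mean((nearest >= 1.0) & (nearest < 2.0)))
```
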
6.
Sensors (Basel) ; 23(11)2023 Jun 02.
Article in English | MEDLINE | ID: mdl-37300020

ABSTRACT

This paper proposes a near-central camera model and a solution approach for it. 'Near-central' refers to cases in which the rays neither converge to a single point (central) nor take severely arbitrary directions (non-central). Conventional calibration methods are difficult to apply in such cases. Although the generalized camera model can be applied, it requires dense observation points for accurate calibration and is computationally expensive in an iterative projection framework. We developed a noniterative ray-correction method based on sparse observation points to address this problem. First, we established a smoothed three-dimensional (3D) residual framework using a backbone to avoid the iterative framework. Second, we interpolated the residual by applying local inverse distance weighting on the nearest neighbours of a given point. The 3D smoothed residual vectors prevent the excessive computation and accuracy degradation that can occur in inverse projection, and they represent ray directions more accurately than 2D entities. Synthetic experiments show that the proposed method achieves prompt and accurate calibration: the depth error is reduced by approximately 63% on the bumpy shield dataset, and the proposed approach is roughly two orders of magnitude faster than the iterative methods.


Subject(s)
Calibration; Cluster Analysis
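
The local inverse-distance-weighted interpolation the abstract mentions is easy to illustrate: 3D correction vectors known at sparse points are interpolated at a query point from its k nearest neighbours. This is a toy sketch; the paper's backbone and smoothing details are omitted.

```python
import numpy as np

def idw_residual(query, points, residuals, k=4, eps=1e-9):
    d = np.linalg.norm(points - query, axis=1)
    idx = np.argsort(d)[:k]                      # k nearest observation points
    w = 1.0 / (d[idx] + eps)                     # inverse-distance weights
    return (w[:, None] * residuals[idx]).sum(0) / w.sum()

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(50, 3))           # sparse observation points
res = 0.01 * pts                                 # toy smooth residual field
print(idw_residual(np.array([0.2, -0.1, 0.5]), pts, res))
```
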
7.
Sensors (Basel) ; 22(12)2022 Jun 18.
Article in English | MEDLINE | ID: mdl-35746387

ABSTRACT

This paper proposes a method to extend the sensing range of a short-baseline stereo camera (SBSC). The method combines stereo depth with monocular depth estimated by a convolutional neural network-based monocular depth estimation (MDE) model. To combine the two, it estimates a scale factor for the monocular depth from stereo-mono depth pairs and then merges the two depth maps. A further advantage is that the trained MDE model can be used in different environments without retraining. The performance of the method is verified qualitatively and quantitatively on both self-collected and open datasets.


Subject(s)
Neural Networks, Computer
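
A minimal sketch of the scale-alignment step, under the assumption of a single global scale: a least-squares factor maps relative monocular depths onto metric stereo depths wherever both are valid, and scaled mono depth is used beyond the stereo range. Data values are illustrative.

```python
import numpy as np

def mono_scale(stereo_depth, mono_depth, valid):
    s = stereo_depth[valid]; m = mono_depth[valid]
    return float((s * m).sum() / (m * m).sum())   # closed-form least squares

stereo = np.array([2.0, 3.1, 4.0, np.nan, np.nan])   # metric, short range only
mono   = np.array([1.0, 1.5, 2.0,  5.0,  8.0])       # relative scale
valid  = ~np.isnan(stereo)

scale = mono_scale(stereo, mono, valid)
fused = np.where(valid, stereo, scale * mono)         # extends the sensing range
print(scale, fused)
```
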
8.
Sensors (Basel) ; 22(11)2022 May 31.
Article in English | MEDLINE | ID: mdl-35684807

ABSTRACT

In orchard fruit-picking systems for pears, the challenge is to identify the full shape of the soft fruit so that robotic or automatic picking systems can avoid injuring it. Advances in computer vision make it possible to train for different shapes and sizes of fruit using deep learning algorithms. In this research, a fruit recognition method for robotic systems was developed to identify pears in a complex orchard environment, using a 3D stereo camera combined with Mask Region-based Convolutional Neural Network (Mask R-CNN) deep learning to obtain targets. The experiment used 9054 RGBA images (3018 original images and 6036 augmented images), collected under high-light (9-10 am) and low-light (6-7 pm) conditions in August 2021 (summertime, JST), and split into training, validation, and test sets at a ratio of 6:3:1. All images were taken with a 3D stereo camera offering PERFORMANCE, QUALITY, and ULTRA depth modes; the PERFORMANCE mode was used to capture the datasets, with the left camera producing depth images and the right camera producing the original images. We also compared two R-CNN variants, Mask R-CNN and Faster R-CNN, by their mean Average Precision (mAP) on the same datasets with the same split ratio. Mask R-CNN was trained for 80 epochs of 500 steps each, and Faster R-CNN for 40,000 steps. For pear recognition, Mask R-CNN achieved mAPs of 95.22% on the validation set and 99.45% on the test set, whereas Faster R-CNN achieved 87.9% and 87.52%, respectively. The two models also differed in performance between clustered and individual pears: Mask R-CNN outperformed Faster R-CNN when pears were densely clustered in the complex orchard. The 3D stereo camera-based dataset combined with the Mask R-CNN vision algorithm therefore detected individual pears within clusters in a complex orchard environment with high accuracy.


Subject(s)
Pyrus; Algorithms; Fruit; Neural Networks, Computer
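
The paper trains Mask R-CNN on its own pear dataset; as a generic, assumption-level illustration of instance segmentation inference (not the paper's trained model), this runs torchvision's off-the-shelf Mask R-CNN on one image and keeps high-confidence masks. The image path is hypothetical.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.io import read_image

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
img = read_image("orchard.jpg").float() / 255.0    # hypothetical image path

with torch.no_grad():
    out = model([img])[0]                          # boxes, labels, scores, masks

keep = out["scores"] > 0.8
masks = out["masks"][keep]                         # (N, 1, H, W) soft instance masks
print(f"{keep.sum().item()} instances above 0.8 confidence")
```
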
9.
Sensors (Basel) ; 22(7)2022 Mar 23.
Article in English | MEDLINE | ID: mdl-35408068

ABSTRACT

The perception module plays an important role in vehicles equipped with advanced driver-assistance systems (ADAS). This paper presents a multi-sensor data fusion system based on a polarization color stereo camera and a forward-looking light detection and ranging (LiDAR) sensor, which achieves multiple-target detection, recognition, and data fusion. The You Only Look Once v4 (YOLOv4) network performs object detection and recognition on the color images. Depth images are obtained from the rectified left and right images using the epipolar constraint, and obstacles are then detected in the depth images with the MeanShift algorithm. Pixel-level polarization images are extracted from the raw polarization-grey images, enabling successful detection of water hazards. The PointPillars network detects objects in the point cloud. Calibration and synchronization between the sensors are accomplished. Experimental results show that the data fusion enriches the detection results, provides high-dimensional perceptual information, and extends the effective detection range, while the detection results remain stable under diverse range and illumination conditions.


Subject(s)
Algorithms; Refraction, Ocular; Calibration
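
The geometric core of such camera-LiDAR fusion is projecting points from the LiDAR frame into the image. The sketch below uses placeholder extrinsics and intrinsics (not the paper's calibration) and the common LiDAR (x forward, y left, z up) to camera (x right, y down, z forward) axis convention.

```python
import numpy as np

K = np.array([[720.0, 0, 640], [0, 720.0, 360], [0, 0, 1]])   # intrinsics (placeholder)
R = np.array([[0, -1, 0], [0, 0, -1], [1, 0, 0]], float)      # LiDAR -> camera rotation
t = np.array([0.0, -0.1, 0.2])                                # LiDAR -> camera translation

pts = np.array([[5.0, 0.5, 0.1], [12.0, -2.0, 0.3]])          # LiDAR points (m)
cam = pts @ R.T + t                                           # into the camera frame
uv = cam @ K.T
uv = uv[:, :2] / uv[:, 2:3]                                   # pixel coordinates
print(uv)
```
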
10.
Sensors (Basel) ; 22(4)2022 Feb 18.
Article in English | MEDLINE | ID: mdl-35214518

ABSTRACT

This work addresses the problem of non-contact measurement of vegetables in agricultural automation. With the rapid development of information technology and artificial intelligence, the application of computer vision to assisted agricultural production significantly improves work efficiency. Based on object detection and stereo cameras, this paper proposes an intelligent method for vegetable recognition and size estimation. The method acquires color images and depth maps with a binocular stereo camera; detection networks then classify four kinds of common vegetables (cucumber, eggplant, tomato, and pepper) and locate six keypoints on each object. Finally, vegetable size is calculated from the pixel positions and depths of the keypoints. Experimental results show that the proposed method can classify the four vegetables within a range of 60 cm and accurately estimate their diameter and length. The work provides an innovative approach to non-contact measurement of vegetables and can promote the application of computer vision in agricultural automation.


Subject(s)
Artificial Intelligence; Vegetables; Algorithms
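
A minimal sketch of the size computation from keypoints and depth, assuming a pinhole model: back-project two keypoints (say, the two ends of a cucumber) using their depths, then take the Euclidean distance as the length. Intrinsics, pixels, and depths are illustrative placeholders.

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

fx = fy = 700.0; cx, cy = 640.0, 360.0
tip  = backproject(610, 200, 0.52, fx, fy, cx, cy)   # keypoint 1: pixel + depth (m)
base = backproject(640, 470, 0.55, fx, fy, cx, cy)   # keypoint 2

print(f"estimated length: {np.linalg.norm(tip - base)*100:.1f} cm")
```
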
11.
Sensors (Basel) ; 22(7)2022 Mar 29.
Article in English | MEDLINE | ID: mdl-35408240

ABSTRACT

The automatic positioning of machines is an important aspect of automation in a large number of application areas. Today, this is often done with classic geodetic sensors such as Global Navigation Satellite Systems (GNSS) and robotic total stations. In this work, a stereo camera system was developed that localizes a machine at high frequency and serves as an alternative to those sensors. Algorithms were developed that detect active markers on the machine in a stereo image pair, find stereo point correspondences, and estimate the machine's pose from them. Theoretical influences and accuracies for different system configurations were estimated with a Monte Carlo simulation, on the basis of which the stereo camera system was designed. Field measurements were used to evaluate the actually achievable accuracies and the robustness of the prototype, with reference measurements from a laser tracker for comparison. The estimated object pose reached accuracies better than 16 mm in the translation components and better than 3 mrad in the rotation components, yielding 3D point accuracies better than 16 mm for the machine. For the first time, a prototype demonstrates a powerful image-based localization method for machines as an alternative to the classical geodetic sensors.
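
One standard way to recover a rigid pose from triangulated marker positions and their known layout in the machine frame is the SVD-based (Kabsch/Umeyama) solution; the sketch below shows it on synthetic data. The paper's actual estimator may differ, so treat this as an assumption-level illustration.

```python
import numpy as np

def rigid_pose(model_pts, measured_pts):
    """Least-squares R, t with measured ~= R @ model + t."""
    mc, sc = model_pts.mean(0), measured_pts.mean(0)
    H = (model_pts - mc).T @ (measured_pts - sc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, sc - R @ mc

model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)  # marker layout
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
measured = model @ R_true.T + np.array([5.0, 2.0, 0.5])

R, t = rigid_pose(model, measured)
print(np.round(R, 3), np.round(t, 3))
```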

12.
Sensors (Basel) ; 22(17)2022 Aug 25.
Article in English | MEDLINE | ID: mdl-36080853

ABSTRACT

Ego-motion estimation is a foundational capability for autonomous combine harvesters, supporting high-level functions such as navigation and harvesting. This paper presents a novel approach for estimating the motion of a combine harvester from a sequence of stereo images. The proposed method starts by tracking a set of 3D landmarks triangulated from stereo-matched features. Six-degree-of-freedom (DoF) ego-motion is obtained by minimizing the reprojection error of those landmarks in the current frame. Local bundle adjustment then refines structure (landmark positions) and motion (keyframe poses) jointly in a sliding window. Both processes are encapsulated in a two-threaded architecture to achieve real-time performance. The method uses a stereo camera, which enables estimation at true scale and easy startup of the system. Quantitative tests on real agricultural scene data comprising several different working paths evaluated estimation accuracy and real-time performance. The experimental results demonstrate that the proposed perception system achieves favorable accuracy, outputting poses at 10 Hz, which is sufficient for online ego-motion estimation for combine harvesters.


Subject(s)
Ego; Motion (Physics)
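
A hedged sketch of the pose step described above: find the 6-DoF pose minimizing the reprojection error of known 3D landmarks onto the current frame, here with SciPy as the solver. The intrinsics and data are synthetic, and the paper's own solver and windowed bundle adjustment are not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[700.0, 0, 640], [0, 700.0, 360], [0, 0, 1]])

def project(pose6, pts):                     # pose6 = [rotation vector, translation]
    R = Rotation.from_rotvec(pose6[:3]).as_matrix()
    cam = pts @ R.T + pose6[3:]
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

rng = np.random.default_rng(0)
landmarks = rng.uniform([-2, -1, 4], [2, 1, 8], (30, 3))   # triangulated points
true_pose = np.array([0.02, -0.01, 0.03, 0.1, 0.0, 0.2])
obs = project(true_pose, landmarks) + rng.normal(0, 0.3, (30, 2))  # noisy pixels

res = least_squares(lambda p: (project(p, landmarks) - obs).ravel(), x0=np.zeros(6))
print(np.round(res.x, 3))                    # close to true_pose
```
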
13.
Sensors (Basel) ; 22(21)2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36366067

ABSTRACT

Depth sensing is an important issue in many applications, such as Augmented Reality (AR), eXtended Reality (XR), and the Metaverse. For 3D reconstruction, a depth map can be acquired by a stereo camera and by a Time-of-Flight (ToF) sensor; we used the two sensors complementarily to improve the accuracy of the resulting 3D information. First, we applied a generalized multi-camera calibration method that uses both color and depth information. Next, the depth maps of the two sensors were fused by a 3D registration and reprojection approach. Hole-filling was then applied to refine the fused ToF-stereo depth map. Finally, a surface reconstruction technique generated mesh data from the ToF-stereo fused point-cloud data. The proposed procedure was implemented, tested on real-world data, and compared with various algorithms to validate its efficiency.
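
In the spirit of the fuse-then-refine pipeline above, here is a deliberately simple stand-in (not the paper's registration-based method): average the two depth maps where both are valid, take whichever exists where one is missing, and fill remaining holes from the nearest valid pixel.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fuse_and_fill(tof, stereo):
    valid_t, valid_s = ~np.isnan(tof), ~np.isnan(stereo)
    fused = np.where(valid_t & valid_s, 0.5 * (tof + stereo),
                     np.where(valid_t, tof, stereo))
    holes = np.isnan(fused)
    if holes.any():
        # index of the nearest valid pixel for every pixel
        idx = distance_transform_edt(holes, return_distances=False,
                                     return_indices=True)
        fused[holes] = fused[idx[0], idx[1]][holes]
    return fused

tof    = np.array([[1.00, np.nan], [1.20, np.nan]])
stereo = np.array([[1.10, 1.40],   [np.nan, np.nan]])
print(fuse_and_fill(tof, stereo))
```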

14.
Sensors (Basel) ; 22(15)2022 Jul 26.
Article in English | MEDLINE | ID: mdl-35898100

ABSTRACT

This paper presents a new synthetic dataset obtained from Gazebo simulations of an Unmanned Ground Vehicle (UGV) moving through different natural environments. To this end, a Husky mobile robot equipped with a three-dimensional (3D) Light Detection and Ranging (LiDAR) sensor, a stereo camera, a Global Navigation Satellite System (GNSS) receiver, an Inertial Measurement Unit (IMU), and wheel tachometers followed several paths using the Robot Operating System (ROS). Both points from LiDAR scans and pixels from camera images have been automatically labeled with their corresponding object class; for this purpose, unique reflectivity values and flat colors were assigned to each object present in the modeled environments. The resulting public dataset, which also includes 3D pose ground truth, is provided as ROS bag files and as human-readable data. Potential applications include supervised learning and benchmarking for UGV navigation in natural environments. To allow researchers to easily modify the dataset or directly use the simulations, the required code has also been released.


Subject(s)
Robotics; Benchmarking; Environment; Humans; Reactive Oxygen Species; Software
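
A hedged sketch of consuming such ROS bag files with the ROS1 rosbag Python API; the bag file name and topic names here are assumptions, since the actual names are defined by the dataset itself.

```python
import rosbag

with rosbag.Bag("natural_env_run1.bag") as bag:   # hypothetical file name
    for topic, msg, stamp in bag.read_messages(
            topics=["/velodyne_points", "/stereo/left/image_raw"]):  # assumed topics
        print(stamp.to_sec(), topic, type(msg).__name__)
```
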
15.
Sensors (Basel) ; 22(15)2022 Jul 23.
Article in English | MEDLINE | ID: mdl-35898004

ABSTRACT

Growth indices can quantify crop productivity and help establish optimal environmental, nutritional, and irrigation control strategies. A convolutional neural network (CNN)-based model is presented for estimating various growth indices (fresh weight, dry weight, height, leaf area, and diameter) of four varieties of greenhouse lettuce, using red, green, blue, and depth (RGB-D) data obtained with a stereo camera. Data from an online autonomous greenhouse challenge (Wageningen University, June 2021), collected with an Intel RealSense D415 camera, were employed in this study. The developed model has a two-stage CNN architecture based on ResNet50V2 layers. On unseen lettuce images it achieved coefficients of determination from 0.88 to 0.95, with normalized root mean square errors of 6.09%, 6.30%, 7.65%, 7.92%, and 5.62% for fresh weight, dry weight, height, diameter, and leaf area, respectively. Using depth data alongside RGB in the CNN improved the estimation accuracy for all five growth indices, owing to the stereo camera's ability to extract height information from the lettuce. The average time to process each lettuce image with the developed CNN model, run on a Jetson SUB mini-PC with a Jetson Xavier NX, was 0.83 s, indicating the model's potential for fast, real-time sensing of lettuce growth indices.


Subject(s)
Lactuca/growth & development; Neural Networks, Computer; Humans; Lactuca/classification; Plant Leaves/growth & development; Plant Roots/growth & development
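
A hedged Keras sketch of a ResNet50V2-based regressor for the five growth indices from 4-channel RGB-D input. ImageNet weights assume 3 channels, so this variant trains from scratch, and the paper's exact two-stage architecture is not reproduced; it only illustrates the general shape of such a model.

```python
import tensorflow as tf

base = tf.keras.applications.ResNet50V2(include_top=False, weights=None,
                                        input_shape=(224, 224, 4), pooling="avg")
head = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(5),   # fresh wt, dry wt, height, diameter, leaf area
])
model = tf.keras.Model(base.input, head(base.output))
model.compile(optimizer="adam", loss="mse")
model.summary()
```
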
16.
Sensors (Basel) ; 21(21)2021 Oct 24.
Article in English | MEDLINE | ID: mdl-34770355

ABSTRACT

Direct visual odometry algorithms assume that every frame from the camera has the same photometric characteristics. However, cameras with auto exposure are widely used outdoors, where the environment changes frequently, and vignetting alters a pixel's brightness across frames even when the exposure time is fixed. We propose an online vignetting correction and exposure-time estimation method for stereo direct visual odometry algorithms. Our method works with cameras that have a gamma-like response function. It estimates the inverse vignetting function and the exposure-time ratio between neighboring frames. At initialization, stereo matching selects correspondences between the left and right images of the same frame; feature points provide correspondences between different frames. The method produced stable correction results in experiments on datasets and on a stereo camera.


Subject(s)
Algorithms; Photometry; Calibration
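
The exposure-ratio idea is easy to sketch: with a gamma-like response I = (t·E)^γ, corresponding pixels in neighbouring frames satisfy (I2/I1)^(1/γ) = t2/t1, so the ratio can be estimated robustly from feature matches. Gamma and intensities below are illustrative, and vignetting is ignored.

```python
import numpy as np

def exposure_ratio(i1, i2, gamma=2.2):
    r = (i2 / i1) ** (1.0 / gamma)
    return float(np.median(r))          # median for robustness to outliers

rng = np.random.default_rng(1)
radiance = rng.uniform(0.1, 0.6, 200)           # scene term t*E for frame 1
i1 = radiance ** 2.2
i2 = (1.25 * radiance) ** 2.2                   # frame 2: 25% longer exposure
print(exposure_ratio(i1, i2))                   # ~1.25
```
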
17.
Glob Chang Biol ; 24(6): 2366-2376, 2018 06.
Article in English | MEDLINE | ID: mdl-29316074

ABSTRACT

Rising atmospheric [CO2] and associated climate change are expected to modify primary productivity across a range of ecosystems globally. Increasing aridity is predicted to reduce grassland productivity, although rising [CO2] and associated increases in plant water use efficiency may partially offset the effect of drying on growth. Difficulties arise in predicting the direction and magnitude of future changes in ecosystem productivity, due to limited field experimentation investigating climate and CO2 interactions. We use repeat near-surface digital photography to quantify the effects of water availability and experimentally manipulated elevated [CO2] (eCO2) on understorey live foliage cover and biomass over three growing seasons in a temperate grassy woodland in south-eastern Australia. We hypothesised that (i) understorey herbaceous productivity is dependent upon soil water availability, and (ii) that eCO2 will increase productivity, with greatest stimulation occurring under conditions of low water availability. Soil volumetric water content (VWC) determined foliage cover and growth rates over the length of the growing season (August to March), with low VWC (<0.1 m3 m-3) reducing productivity. However, eCO2 did not increase herbaceous cover and biomass over the duration of the experiment, or mitigate the effects of low water availability on understorey growth rates and cover. Our findings suggest that projected increases in aridity in temperate woodlands are likely to lead to reduced understorey productivity, with little scope for eCO2 to offset these changes.


Subject(s)
Carbon Dioxide/chemistry; Carbon Dioxide/pharmacology; Climate Change; Forests; Plants/drug effects; Soil/chemistry; Biomass; Seasons; Water/chemistry; Water/physiology
18.
Sensors (Basel) ; 18(10)2018 Oct 20.
Article in English | MEDLINE | ID: mdl-30347836

ABSTRACT

In the study of indoor simultaneous localization and mapping (SLAM) with a stereo camera, two primary feature types, points and line segments, have been widely used to calculate the camera pose. However, many feature-based SLAM systems are not robust when the camera moves sharply or turns too quickly. This paper proposes an improved indoor visual SLAM method that better exploits the advantages of point and line-segment features to achieve robust results in difficult environments. First, point and line-segment features are automatically extracted and matched to build two kinds of projection models. Subsequently, for the optimization of line-segment features, we minimize an angle observation in addition to the traditional reprojection error of the endpoints. Finally, a motion-estimation model that adapts to the camera's motion state is applied to build a new combined Hessian matrix and gradient vector for iterated pose estimation. The proposal was tested on the EuRoC MAV datasets and on sequence images captured with our stereo camera. The experimental results demonstrate that the improved point-line-feature-based visual SLAM method improves localization accuracy when the camera moves with rapid rotation or violent fluctuation.
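
A small sketch of the added angle-observation term for line segments: alongside endpoint reprojection error, it penalizes the angle between the observed 2D segment and the projected one. The coordinates are purely illustrative, and the paper's weighting of the two terms is not reproduced.

```python
import numpy as np

def angle_error(proj_a, proj_b, obs_a, obs_b):
    """Angle (radians) between projected and observed segment directions."""
    d1 = (proj_b - proj_a) / np.linalg.norm(proj_b - proj_a)
    d2 = (obs_b - obs_a) / np.linalg.norm(obs_b - obs_a)
    return np.arccos(np.clip(abs(d1 @ d2), -1.0, 1.0))  # orientation-invariant

proj_a, proj_b = np.array([100.0, 200.0]), np.array([300.0, 260.0])
obs_a,  obs_b  = np.array([102.0, 198.0]), np.array([298.0, 270.0])
print(np.degrees(angle_error(proj_a, proj_b, obs_a, obs_b)))
```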

19.
Sensors (Basel) ; 18(11)2018 Oct 31.
Article in English | MEDLINE | ID: mdl-30384481

ABSTRACT

This paper presents a stereo camera-based head-eye calibration method that aims to find the globally optimal transformation between a robot's head and its eye. This method is highly intuitive and simple, so it can be used in a vision system for humanoid robots without any complex procedures. To achieve this, we introduce an extended minimum variance approach for head-eye calibration using surface normal vectors instead of 3D point sets. The presented method considers both positional and orientational error variances between visual measurements and kinematic data in head-eye calibration. Experiments using both synthetic and real data show the accuracy and efficiency of the proposed method.

20.
Sensors (Basel) ; 16(1)2015 Dec 23.
Article in English | MEDLINE | ID: mdl-26703624

ABSTRACT

In this work we show the principle of optical 3D surface measurement based on the fringe projection technique for underwater applications. The challenges of underwater use of this technique are shown and discussed in comparison with the classical application. We describe an extended camera model that takes refraction effects into account, and we propose an effective, low-effort calibration procedure for underwater optical stereo scanners. This calibration technique combines a classical in-air calibration based on the pinhole model with ray-based modeling, and it requires only a few underwater recordings of an object of known length and of a planar surface. We demonstrate a new underwater 3D scanning device based on the fringe projection technique. It weighs about 10 kg and can be used at water depths of up to 40 m. It covers an underwater measurement volume of 250 mm × 200 mm × 120 mm, and the surface of the measurement objects is captured with a lateral resolution of 150 µm in a third of a second. Calibration evaluation results are presented, along with examples of first underwater measurements.


Subject(s)
Environmental Monitoring/methods; Imaging, Three-Dimensional/methods; Algorithms; Environmental Monitoring/instrumentation; Hydrobiology; Imaging, Three-Dimensional/instrumentation; Water/chemistry
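
The refraction effect that the extended camera model must account for follows Snell's law at the housing interface. The sketch below bends a ray at a flat air-to-water interface in vector form; the angles and the flat-port assumption are illustrative, not the paper's full ray model.

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at surface normal n (unit), eta = n1/n2."""
    cos_i = -np.dot(d, n)
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None                     # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

d = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])  # 30 deg in air
n = np.array([0.0, 0.0, -1.0])            # interface normal facing the camera
d_water = refract(d, n, eta=1.0 / 1.33)   # air -> water
print(np.degrees(np.arcsin(d_water[0])))  # ~22.1 deg, as Snell's law predicts
```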