Results 1 - 20 of 23
1.
Sensors (Basel) ; 24(15)2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39124084

ABSTRACT

The sturgeon is an important commercial aquaculture species in China. Measuring sturgeon mass plays a significant role in aquaculture management and also serves as a key phenotype, offering crucial information for enhancing growth traits through genetic improvement. To date, sturgeon mass has usually been measured by manual sampling, which is labor-intensive and time-consuming for farmers, and invasive and stressful for the fish. Therefore, this paper proposes a noninvasive volume reconstruction model for estimating the mass of swimming sturgeon based on an RGB-D sensor. The volume of an individual sturgeon is reconstructed by integrating the thickness of its upper surface, where the difference in depth between the surface and the bottom is used as the thickness measurement. To verify feasibility, three experimental groups were conducted, achieving prediction accuracies of 0.897, 0.861, and 0.883, indicating that the method can obtain reliable, accurate sturgeon mass. The strategy requires no special hardware or intensive computation, and it opens the way to noncontact, high-throughput, and highly sensitive mass evaluation of sturgeon while holding potential for evaluating the mass of other cultured fishes.
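The thickness-integration idea in this abstract can be sketched in a few lines. The function below is an illustrative reconstruction, not the authors' code; the pixel area, tank-bottom depth, and flesh density are made-up values.

```python
import numpy as np

def estimate_mass(depth_map, fish_mask, bottom_depth, pixel_area_cm2,
                  density_g_cm3=1.05):
    """Integrate per-pixel thickness (bottom depth minus fish surface depth)
    over the fish silhouette to approximate volume, then convert to mass."""
    thickness_cm = np.clip(bottom_depth - depth_map, 0.0, None) * fish_mask
    volume_cm3 = thickness_cm.sum() * pixel_area_cm2
    return volume_cm3 * density_g_cm3

# Toy frame: a 2x2 "fish" whose surface sits 5 cm above a bottom at 100 cm.
depth = np.full((4, 4), 100.0)
mask = np.zeros((4, 4))
depth[1:3, 1:3] = 95.0
mask[1:3, 1:3] = 1.0
mass = estimate_mass(depth, mask, bottom_depth=100.0, pixel_area_cm2=0.25)
```

With four masked pixels of 5 cm thickness and 0.25 cm² pixel area, this yields 5 cm³ of volume and about 5.25 g of mass.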


Subjects
Aquaculture, Fishes, Swimming, Animals, Fishes/physiology, Swimming/physiology, Aquaculture/methods
2.
Sensors (Basel) ; 24(6)2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38544276

ABSTRACT

The increase in life expectancy, and the consequent growth of the elderly population, represents a major challenge to guaranteeing adequate health and social care. The proposed system aims to provide a tool that automates the evaluation of gait and balance, which is essential to preventing falls in older people. Through an RGB-D camera, it is possible to capture and digitally represent parameters that describe how users carry out certain motions and poses. These individual motions and poses correspond to items included in many well-known gait and balance evaluation tests. Using that information, therapists, who need not be present during the execution of the exercises, can evaluate the results of such tests and issue a diagnosis by storing and analyzing the sequences provided by the developed system. The system was validated in a laboratory scenario, and subsequently a trial was carried out in a nursing home with six residents. Results demonstrate the usefulness of the proposed system and the ease of objectively evaluating the main items of clinical tests using the parameters calculated from information acquired with the RGB-D sensor. In addition, it lays the foundations for a future Cloud-based platform for remote fall-risk assessment, its integration with a mobile assistant robot, and the design of Artificial Intelligence models that detect patterns and identify pathologies, enabling therapists to prevent falls in at-risk users.


Subjects
Artificial Intelligence, Exercise Therapy, Humans, Aged, Risk Assessment/methods, Computers
3.
Sensors (Basel) ; 22(11)2022 May 30.
Article in English | MEDLINE | ID: mdl-35684765

ABSTRACT

It is possible to construct cost-efficient three-dimensional (3D) or four-dimensional (4D) scanning systems from multiple affordable off-the-shelf RGB-D sensors to produce high-quality reconstructions of 3D objects. However, the quality of such reconstructions is sensitive to a number of factors in the reconstruction pipeline, such as multi-view calibration, depth estimation, 3D reconstruction, and color mapping accuracy, because the successive stages that reconstruct 3D meshes from multiple active stereo sensors are strongly correlated with each other. This paper categorizes the pipeline into sub-procedures and analyzes the various factors that can significantly affect reconstruction quality, thereby providing analytical and practical guidelines for high-quality 3D reconstruction with off-the-shelf sensors. For each sub-procedure, it compares and evaluates several methods using data captured by 18 RGB-D sensors and provides analyses and discussion toward robust 3D reconstruction. Various experiments demonstrate that significantly more accurate 3D scans can be obtained when these considerations are applied along the pipeline. We believe our analyses, benchmarks, and guidelines will help anyone build their own scanning studio and support further research on 3D reconstruction.


Subjects
Algorithms, Three-Dimensional Imaging, Calibration, Three-Dimensional Imaging/methods
4.
Sensors (Basel) ; 21(9)2021 Apr 26.
Article in English | MEDLINE | ID: mdl-33925847

ABSTRACT

High-quality and complete 4D reconstruction of human motion is of great significance for immersive VR and even human operation. However, it faces inevitable self-scanning constraints, and tracking under monocular settings is also strictly limited. In this paper, we propose a human motion capture system that combines human priors with performance capture using only a single RGB-D sensor. To break the self-scanning constraint, we generate a complete mesh from the front-view input alone to initialize the geometric capture. Most previous methods initialize their systems in a strict way in order to construct a correct warping field; to maintain high fidelity while making the system easier to use, we instead update the model while capturing motion. Additionally, we blend in human priors to improve the reliability of model warping. Extensive experiments demonstrate that our method is more convenient to use while maintaining credible geometric warping and remaining free of self-scanning constraints.


Subjects
Posture, Humans, Motion (Physics), Reproducibility of Results
5.
Sensors (Basel) ; 21(3)2021 Feb 02.
Article in English | MEDLINE | ID: mdl-33540791

ABSTRACT

RGB-D cameras have been commercialized, and many applications using them have been proposed. In this paper, we propose a robust registration method for multiple RGB-D cameras. We use the human body tracking system provided by the Azure Kinect SDK to estimate a coarse global registration between cameras. As this coarse global registration has some error, we refine it using feature matching. However, the matched feature pairs include mismatches, which hinder good performance. Therefore, we propose a registration refinement procedure that removes these mismatches with the help of the global registration. In an experiment, the ratio of inliers among the matched features exceeded 95% for all tested feature matchers. We thus experimentally confirm that mismatches can be eliminated via the proposed method even in difficult situations, yielding a more precise global registration of the RGB-D cameras.
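The mismatch-removal idea, using the coarse global registration to gate feature matches, can be sketched as follows; the distance threshold and the toy points are assumptions, not values from the paper.

```python
import numpy as np

def filter_matches(src, dst, coarse_R, coarse_t, thresh):
    """Keep only feature-match pairs consistent with the coarse global
    registration: a pair is an inlier if the transformed source point
    lands within `thresh` of its matched destination point."""
    err = np.linalg.norm(src @ coarse_R.T + coarse_t - dst, axis=1)
    return err < thresh

# Toy case: identity coarse registration with one deliberate mismatch.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
dst = src.copy()
dst[2] += [0.5, 0.5, 0.0]          # simulated wrong correspondence
inliers = filter_matches(src, dst, np.eye(3), np.zeros(3), thresh=0.1)
```

The surviving inlier pairs would then feed a standard rigid refinement step (e.g., a least-squares alignment).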


Subjects
Physiologic Monitoring, Calibration, Humans, Movement
6.
Sensors (Basel) ; 20(2)2020 Jan 12.
Article in English | MEDLINE | ID: mdl-31940895

ABSTRACT

This paper studies the control performance of a visual servoing system using planar and RGB-D cameras. Its contribution is to improve the performance indicators of the visual servoing system, such as real-time response and accuracy, through rapid identification of target RGB-D images and precise measurement along the depth direction. First, color images acquired by the RGB-D camera are segmented based on optimized normalized cuts. Next, the gray scale is restored according to the histogram features of the target image. The obtained 2D depth information and the enhanced gray-image information are then merged to estimate the target pose based on the Hausdorff distance, and the current image pose is matched with the target image pose. The end angle and speed of the robot are calculated to complete a control cycle, and the process iterates until the servoing task is completed. Finally, the accuracy and real-time performance of the control system based on the proposed algorithm are tested under a position-based visual servoing setup. The results demonstrate and validate the performance of the proposed RGB-D image processing algorithm in these respects.

7.
Sensors (Basel) ; 20(22)2020 Nov 14.
Article in English | MEDLINE | ID: mdl-33202569

ABSTRACT

Aerial robots are widely used in search and rescue applications because of their small size and high maneuverability. However, designing an autonomous exploration algorithm remains a challenging and open task because of the limited payload and computing resources on board UAVs. This paper presents an autonomous exploration algorithm for aerial robots with several improvements for search and rescue tasks. First, an RGB-D sensor receives information from the environment, and an OctoMap divides the environment into obstacle, free, and unknown space. Then, a clustering algorithm filters the frontiers extracted from the OctoMap, and an information-gain-based cost function selects the optimal frontier. Finally, a feasible path is given by an A* path planner and a safe-corridor generation algorithm. The proposed algorithm has been tested and compared with baseline algorithms in three different environments at map resolutions of 0.2 m and 0.3 m. The experimental results show that the proposed algorithm yields a shorter exploration path and saves more exploration time compared with the state of the art. The algorithm has also been validated in real flight experiments.
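A minimal sketch of information-gain-based frontier selection as described above; the gain values, the linear gain-minus-distance cost, and the weight `lam` are assumptions, not the paper's actual cost function.

```python
import math

def choose_frontier(robot_pos, frontiers, gains, lam=1.0):
    """Pick the frontier maximising information gain minus a distance
    penalty; `lam` trades off gain against travel cost."""
    return max(zip(frontiers, gains),
               key=lambda fg: fg[1] - lam * math.dist(robot_pos, fg[0]))[0]

# Two candidate frontier centroids with hypothetical information gains
# (e.g., counts of unknown voxels visible from each frontier).
frontiers = [(2.0, 0.0, 1.0), (10.0, 0.0, 1.0)]
gains = [5.0, 6.0]
best = choose_frontier((0.0, 0.0, 1.0), frontiers, gains)
```

Here the nearer frontier wins despite its slightly lower gain, because the distance penalty dominates; the chosen frontier would then be handed to the A* planner.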

8.
Sensors (Basel) ; 20(16)2020 Aug 09.
Article in English | MEDLINE | ID: mdl-32784913

ABSTRACT

This paper proposes a novel online object-packing system that measures the dimensions of every incoming object and calculates its desired position in a given container. Existing object-packing systems are limited by requiring exact information about objects in advance or assuming they are boxes. Thus, this paper focuses on two points: (1) real-time calculation of the dimensions and orientation of an object; and (2) online optimization of the object's position in a container. The dimensions and orientation of the object are obtained using an RGB-D sensor while the object is picked by a manipulator and moved over a certain position. The optimal position of the object is calculated by recognizing the container's available space using another RGB-D sensor and minimizing a cost function formulated from the available-space information and optimization criteria inspired by the way people place things. The experimental results show that the proposed system successfully places incoming objects of various shapes in proper positions.
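The online placement step can be illustrated with a simple occupancy-grid heuristic. This bottom-left-style scan is a stand-in for the paper's human-inspired cost minimization, not its actual rule.

```python
import numpy as np

def place_object(occupancy, obj_h, obj_w):
    """Bottom-left-style scan: return the first top-left cell of the
    container's occupancy grid where an obj_h x obj_w footprint fits."""
    H, W = occupancy.shape
    for r in range(H - obj_h + 1):
        for c in range(W - obj_w + 1):
            if not occupancy[r:r + obj_h, c:c + obj_w].any():
                return r, c
    return None                      # no feasible position left

grid = np.zeros((4, 4), dtype=bool)
grid[0, :2] = True                   # footprint of an already-placed object
spot = place_object(grid, 1, 2)      # place a new 1x2 object
```

In a real system the footprint would come from the RGB-D dimension measurement, and the scan order would be replaced by the learned cost function.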

9.
Sensors (Basel) ; 20(21)2020 Nov 07.
Article in English | MEDLINE | ID: mdl-33171831

ABSTRACT

Three-dimensional hand detection from a single RGB-D image is an important technology that supports many useful applications. In practice, it is challenging to robustly detect human hands in unconstrained environments because the RGB-D channels can be affected by many uncontrollable factors, such as light changes. To tackle this problem, we propose a 3D hand detection approach that improves robustness and accuracy by adaptively fusing the complementary features extracted from the RGB-D channels. Using the fused RGB-D feature, the 2D bounding boxes of hands are detected first, and then the 3D locations along the z-axis are estimated through a cascaded network. Furthermore, we present a challenging RGB-D hand detection dataset collected in unconstrained environments. Different from previous works, which rely primarily on either the RGB or the D channel, we adaptively fuse the RGB-D channels for hand detection. Evaluation results show that the D channel is crucial for hand detection in unconstrained environments: our RGB-D fusion-based approach improves hand detection accuracy from 69.1 to 74.1 compared with a state-of-the-art RGB-based hand detector. Existing RGB- or D-based methods are unstable under unseen lighting conditions: in dark conditions, the accuracy of the RGB-based method drops significantly to 48.9, and in back-light conditions, the accuracy of the D-based method drops dramatically to 28.3. Compared with these methods, our RGB-D fusion-based approach is much more robust, without accuracy degradation, achieving accuracies of 62.5 and 65.9, respectively, under these two extreme lighting conditions.


Subjects
Hands, Three-Dimensional Imaging, Lighting, Humans
10.
Sensors (Basel) ; 19(23)2019 Nov 22.
Article in English | MEDLINE | ID: mdl-31766772

ABSTRACT

This paper presents an omnidirectional RGB-D (RGB + Distance fusion) sensor prototype using an actuated LIDAR (Light Detection and Ranging) and an RGB camera. Besides the sensor, a novel mapping strategy is developed that takes the sensor's scanning characteristics into account. The sensor can gather RGB and 3D data from any direction by tilting a laser scanner 90 degrees and rotating it about its central axis. The mapping strategy is based on two environment maps: a local map for instantaneous perception and a global map for perception memory. The 2D local map represents the surface in front of the robot and may contain RGB data, allowing environment reconstruction and human detection, similar to a sliding window that moves with the robot and stores surface data.

11.
Sensors (Basel) ; 19(2)2019 Jan 21.
Article in English | MEDLINE | ID: mdl-30669645

ABSTRACT

Fruit detection in real outdoor conditions is necessary for automatic guava harvesting, and the branch-dependent pose of fruits is also crucial to guide a robot to approach and detach the target fruit without colliding with its mother branch. To conduct automatic, collision-free picking, this study investigates a fruit detection and pose estimation method by using a low-cost red-green-blue-depth (RGB-D) sensor. A state-of-the-art fully convolutional network is first deployed to segment the RGB image to output a fruit and branch binary map. Based on the fruit binary map and RGB-D depth image, Euclidean clustering is then applied to group the point cloud into a set of individual fruits. Next, a multiple three-dimensional (3D) line-segments detection method is developed to reconstruct the segmented branches. Finally, the 3D pose of the fruit is estimated using its center position and nearest branch information. A dataset was acquired in an outdoor orchard to evaluate the performance of the proposed method. Quantitative experiments showed that the precision and recall of guava fruit detection were 0.983 and 0.948, respectively; the 3D pose error was 23.43° ± 14.18°; and the execution time per fruit was 0.565 s. The results demonstrate that the developed method can be applied to a guava-harvesting robot.
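The Euclidean clustering step that groups the fruit point cloud can be sketched as a single-linkage flood fill; the tolerance and the toy points are illustrative, not values from the paper.

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, tol):
    """Single-linkage flood fill: points closer than `tol` to any member
    of a cluster join that cluster; each cluster gets an integer label."""
    labels = [-1] * len(points)
    cur = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        queue = deque([i])
        labels[i] = cur
        while queue:
            j = queue.popleft()
            near = np.linalg.norm(points - points[j], axis=1) < tol
            for k in np.nonzero(near)[0]:
                if labels[k] == -1:
                    labels[k] = cur
                    queue.append(k)
        cur += 1
    return labels

# Two points belonging to one fruit, one point far away on another fruit.
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [5.0, 5.0, 5.0]])
labels = euclidean_cluster(pts, tol=0.5)
```

Each resulting label corresponds to one candidate fruit, whose centroid then feeds the pose-estimation stage.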

12.
Sensors (Basel) ; 19(7)2019 Apr 06.
Article in English | MEDLINE | ID: mdl-30959936

ABSTRACT

Measuring pavement roughness and detecting pavement surface defects are two of the most important tasks in pavement management. Because existing pavement roughness measurement approaches are expensive, the primary aim of this paper is to use a cost-effective yet sufficiently accurate RGB-D sensor to estimate pavement roughness in the outdoor environment. An algorithm is proposed to process the RGB-D data and autonomously quantify road roughness. To this end, the RGB-D sensor is calibrated, and primary data for estimating pavement roughness are collected. The collected depth frames and RGB images are registered to create 3D road surfaces. We found a significant correlation between the International Roughness Index (IRI) estimated with the RGB-D sensor and the IRI measured manually using rod and level. Considering Power Spectral Density (PSD) analysis and the repeatability of measurement, the results show that the proposed solution can accurately estimate different levels of pavement roughness.
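The PSD analysis mentioned above can be illustrated with a plain periodogram of a road elevation profile. The sampling interval and the synthetic profile below are made up, and the paper's actual estimator may differ.

```python
import numpy as np

def profile_psd(elevation, dx):
    """One-sided periodogram of a road elevation profile sampled every
    `dx` metres, expressed over spatial frequency (cycles/m)."""
    n = len(elevation)
    spec = np.fft.rfft(elevation - np.mean(elevation))
    psd = (np.abs(spec) ** 2) * (2.0 * dx / n)   # one-sided scaling
    freqs = np.fft.rfftfreq(n, d=dx)
    return freqs, psd

# Synthetic profile: 0.625 cycles/m sinusoidal roughness, 5 cm sampling.
x = np.arange(256) * 0.05
profile = 0.01 * np.sin(2 * np.pi * 0.625 * x)
freqs, psd = profile_psd(profile, dx=0.05)
peak_freq = freqs[np.argmax(psd)]    # recovers the roughness wavelength
```

The spectrum peaks at the injected spatial frequency, which is the kind of check used when comparing sensor-derived and rod-and-level profiles.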

13.
Sensors (Basel) ; 18(3)2018 Mar 16.
Article in English | MEDLINE | ID: mdl-29547562

ABSTRACT

This paper deals with the 3D reconstruction problem for dynamic non-rigid objects using a single RGB-D sensor. It is a challenging task given the almost inevitable accumulation-error issue in previous sequential fusion methods and the possible failure of surface tracking over a long sequence. Therefore, we propose a global non-rigid registration framework and tackle the drifting problem via an explicit loop closure. Our scheme starts with a fusion step to obtain multiple partial scans from the input sequence, followed by a pairwise non-rigid registration and loop-detection step to obtain correspondences between neighboring partial pieces and between pieces that form a loop. Then, we perform a global registration procedure to align all pieces into a consistent canonical space, guided by the matches we have established. Finally, our model-update step helps fix potential misalignments that remain after the global registration. Both geometric and appearance constraints are enforced during alignment; therefore, we recover a model with accurate geometry as well as high-fidelity color maps for the mesh. Experiments on both synthetic and various real datasets demonstrate the capability of our approach to reconstruct complete and watertight deformable objects.

14.
Sensors (Basel) ; 18(5)2018 May 10.
Article in English | MEDLINE | ID: mdl-29748508

ABSTRACT

Navigational assistance aims to help visually impaired people move through the environment safely and independently. The topic is challenging, as it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have greatly improved the mobility of impaired people. However, running all detectors jointly increases latency and burdens computational resources. In this paper, we put forward pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs, and water hazards, but also for avoiding short-range obstacles and fast-approaching pedestrians and vehicles. The core of our unified proposal is a deep architecture aimed at efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments proves qualified accuracy over state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually impaired users, demonstrating the effectiveness and versatility of the assistive framework.


Subjects
Sensory Aids, Visually Impaired Persons/rehabilitation, Wearable Electronic Devices, Depth Perception, Humans, Computer-Assisted Image Interpretation, Automated Pattern Recognition, Walking
15.
Sensors (Basel) ; 17(7)2017 Jul 01.
Article in English | MEDLINE | ID: mdl-28671565

ABSTRACT

Studies on depth images containing three-dimensional information have been performed for many practical applications. However, depth images acquired from depth sensors have inherent problems, such as missing values and noisy boundaries, which significantly affect the performance of applications that take a depth image as input. This paper describes a depth enhancement algorithm based on a combination of color and depth information. To fill depth holes and recover object shapes, asynchronous cellular automata with neighborhood distance maps are used. Image segmentation and a weighted linear combination of spatial filtering algorithms are applied to extract object regions and fill disoccluded areas within them. Experimental results on both real-world and public datasets show that the proposed method enhances depth image quality with low computational complexity, outperforming conventional methods on a number of metrics. Furthermore, to verify the performance of the proposed method, we present stereoscopic images generated from the enhanced depth image to illustrate the improvement in quality.
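The hole-filling idea can be illustrated with a much simpler neighbor-averaging pass; this is not the asynchronous cellular-automata algorithm itself, just a minimal stand-in for depth-hole filling.

```python
import numpy as np

def fill_depth_holes(depth, iters=10):
    """Iteratively replace invalid (zero) depth pixels with the mean of
    their valid 4-neighbours until no holes remain."""
    d = depth.astype(float).copy()
    for _ in range(iters):
        invalid = d == 0
        if not invalid.any():
            break
        p = np.pad(d, 1)                 # zero padding: border counts invalid
        neigh = np.stack([p[:-2, 1:-1], p[2:, 1:-1],
                          p[1:-1, :-2], p[1:-1, 2:]])
        valid = neigh > 0
        counts = valid.sum(axis=0)
        fill = invalid & (counts > 0)
        d[fill] = (neigh * valid).sum(axis=0)[fill] / counts[fill]
    return d

depth = np.array([[1.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0],       # a single missing-depth pixel
                  [1.0, 1.0, 1.0]])
filled = fill_depth_holes(depth)
```

The paper's method additionally uses color information and distance maps to decide fill order, which this sketch omits.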

16.
Sensors (Basel) ; 18(1)2017 Dec 24.
Article in English | MEDLINE | ID: mdl-29295536

ABSTRACT

Crop monitoring is an essential practice within precision agriculture, since it is based on observing, measuring, and properly responding to inter- and intra-field variability. In particular, "on ground crop inspection" potentially allows early detection of certain crop problems and enables precision treatment to be carried out simultaneously with pest detection. "On ground monitoring" is also of great interest for woody crops. This paper explores the development of a low-cost crop monitoring system that can automatically create accurate 3D models (clouds of coloured points) of woody crop rows. The system consists of a mobile platform that allows easy acquisition of information in the field at an average speed of 3 km/h. The platform integrates, among other devices, an RGB-D sensor that provides RGB information as well as an array of distances to the objects closest to the sensor. The RGB-D information, plus the geographical positions of relevant points such as the starting and ending points of the row, allows the generation of a 3D reconstruction of a woody crop row in which every point of the cloud has a geographical location as well as RGB colour values. The proposed approach to automatic 3D reconstruction is not limited by the size of the sampled space and includes a method for removing the drift that appears in the reconstruction of large crop rows.


Subjects
Crops, Agriculture, Wood
17.
Sensors (Basel) ; 17(8)2017 Aug 17.
Article in English | MEDLINE | ID: mdl-28817069

ABSTRACT

The use of RGB-Depth (RGB-D) sensors for assisting visually impaired people (VIP) has been widely reported, as they offer portability, functional diversity, and cost-effectiveness. However, their cues for traversability awareness are weak when it comes to precautions against stepping into water areas. In this paper, a polarized RGB-Depth (pRGB-D) framework is proposed to detect traversable areas and water hazards simultaneously, using polarization-color-depth-attitude information to enhance safety during navigation. The approach has been tested on a pRGB-D dataset built for tuning parameters and evaluating performance. Moreover, the approach has been integrated into a wearable prototype that generates stereo sound feedback to guide VIP along the prioritized direction, avoiding obstacles and water hazards. Furthermore, a preliminary study with ten blindfolded participants suggests its effectiveness and reliability.

18.
Sensors (Basel) ; 16(11)2016 Nov 21.
Article in English | MEDLINE | ID: mdl-27879634

ABSTRACT

The introduction of RGB-Depth (RGB-D) sensors into the area of assisting visually impaired people (VIP) has stirred great interest among many researchers. However, the detection range of RGB-D sensors is limited by a narrow depth field angle and a depth map that grows sparse with distance, which hampers broader and longer-range traversability awareness. This paper proposes an effective approach to expand the detection of traversable area based on an RGB-D sensor, the Intel RealSense R200, which is compatible with both indoor and outdoor environments. The depth image of the RealSense is enhanced with large-scale IR image matching and RGB image-guided filtering. A preliminary traversable area is obtained with RANdom SAmple Consensus (RANSAC) segmentation and surface normal vector estimation. A seeded region-growing algorithm, combining the depth and RGB images, then greatly enlarges the preliminary traversable area. This is critical not only for avoiding close obstacles, but also for superior path planning during navigation. The proposed approach has been tested on a score of indoor and outdoor scenarios. Moreover, the approach has been integrated into an assistance system consisting of a wearable prototype and an audio interface. Furthermore, the presented approach proved useful and reliable in a field test with eight visually impaired volunteers.
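The preliminary RANSAC ground segmentation can be sketched as follows; the inlier threshold, iteration count, and synthetic point cloud are assumptions, not the paper's parameters.

```python
import numpy as np

def ransac_plane(points, thresh=0.02, iters=200, seed=0):
    """Fit the dominant plane (e.g., the ground ahead) by RANSAC: sample
    3 points, form the plane, and count points within `thresh` of it."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - p0) @ n) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic cloud: 50 ground points on z=0 plus 5 obstacle points at z=0.5.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.random((50, 2)), np.zeros(50)])
obstacle = np.column_stack([rng.random((5, 2)), np.full(5, 0.5)])
inliers = ransac_plane(np.vstack([ground, obstacle]))
```

The inlier set would form the preliminary traversable area that the seeded region-growing step then expands using RGB information.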


Subjects
Biosensing Techniques/methods, Visually Impaired Persons, Algorithms, Humans, Automated Pattern Recognition
19.
Sensors (Basel) ; 15(8): 18506-25, 2015 Jul 29.
Article in English | MEDLINE | ID: mdl-26230696

ABSTRACT

This work presents a procedure for refining depth maps acquired using RGB-D (depth) cameras. With the many new structured-light RGB-D cameras, acquiring high-resolution depth maps has become easy. However, these cameras suffer from problems such as undesired occlusion, inaccurate depth values, and temporal variation of pixel values. In this paper, a method based on exemplar-based inpainting is proposed to remove artefacts in depth maps obtained using RGB-D cameras. Exemplar-based inpainting has been used to repair images after object removal; the concept underlying this inpainting method is similar to that of padding the occlusions in depth data obtained using RGB-D cameras. Our proposed method therefore enhances and adapts the inpainting method to refine the quality of RGB-D depth data. For evaluation, the proposed method was tested on the Tsukuba Stereo Dataset, which contains a 3D video with ground-truth depth maps, occlusion maps, and RGB images; the peak signal-to-noise ratio and the computational time served as the evaluation metrics. Moreover, a set of self-recorded RGB-D depth maps and their refined versions are presented to show the effectiveness of the proposed method.
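The exemplar matching-and-copying core can be sketched as a search for the best complete patch; the priority ordering and color term of full exemplar-based inpainting are omitted, and the toy depth image is illustrative.

```python
import numpy as np

def exemplar_fill(depth, hole_mask, k=3):
    """Fill a small hole by copying from the best-matching complete k x k
    exemplar patch, scored by SSD on the hole patch's known pixels."""
    d = depth.copy()
    H, W = d.shape
    ys, xs = np.nonzero(hole_mask)
    r0 = min(ys.min(), H - k)            # top-left of the target patch
    c0 = min(xs.min(), W - k)            # (assumes the hole fits in one patch)
    target = d[r0:r0 + k, c0:c0 + k]     # view into d
    known = ~hole_mask[r0:r0 + k, c0:c0 + k]
    best, best_err = None, np.inf
    for r in range(H - k + 1):
        for c in range(W - k + 1):
            if hole_mask[r:r + k, c:c + k].any():
                continue                 # only complete patches are exemplars
            cand = d[r:r + k, c:c + k]
            err = ((cand - target)[known] ** 2).sum()
            if err < best_err:
                best, best_err = cand, err
    target[~known] = best[~known]        # writes through the view into d
    return d

# Toy scene: two identical tiles; punch a hole into the right-hand copy.
tile = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
depth = np.tile(tile, (1, 2))
hole = np.zeros_like(depth, dtype=bool)
depth[1, 4] = 0.0                        # lost depth value (true value: 5)
hole[1, 4] = True
filled = exemplar_fill(depth, hole)
```

Because the left tile is an exact exemplar for the damaged patch, the hole pixel is restored from it.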


Subjects
Algorithms, Automated Pattern Recognition/methods, Databases as Topic, Three-Dimensional Imaging, Signal-to-Noise Ratio, Time Factors
20.
J Dent ; 127: 104302, 2022 12.
Article in English | MEDLINE | ID: mdl-36152954

ABSTRACT

OBJECTIVES: This study aimed to evaluate and compare the accuracy and inter-operator reliability of a low-cost red-green-blue-depth (RGB-D) camera-based facial scanner (Bellus3D Arc7) with those of a stereophotogrammetry facial scanner (3dMD), and to explore the possibility of the former as a clinical substitute for the latter. METHODS: A mannequin head was selected as the research object. In the RGB-D camera-based facial scanner group, the head was scanned five times using the Bellus3D Arc7, and the outcome data of each scan were imported into CAD software (MeshLab) to reconstruct three-dimensional (3D) facial photographs. In the stereophotogrammetry facial scanner group, the mannequin head was scanned with the 3dMD scanner. Selected parameters were measured directly on the reconstructed 3D virtual faces using CAD software. The same parameters were then measured directly on the mannequin head by direct anthropometry (DA) as the gold standard for later comparison. The accuracy of the facial scanners was evaluated in terms of trueness and precision. Trueness was evaluated by comparing the measurement results of the two groups with each other and with those of DA using equivalence tests and average absolute deviations, while precision and inter-operator reliability were assessed using the intraclass correlation coefficient (ICC). A 3D facial mesh deviation between the two groups was also calculated for further reference using 3D metrology software (GOM inspect pro). RESULTS: In terms of trueness, the average absolute deviations between the RGB-D camera-based and stereophotogrammetry facial scanners, between the RGB-D camera-based scanner and DA, and between the stereophotogrammetry scanner and DA were statistically equivalent at 0.50±0.27 mm, 0.61±0.42 mm, and 0.28±0.14 mm, respectively. Equivalence test results confirmed that their equivalence was within clinical requirements (<1 mm). The ICC for each parameter was approximately 0.999 in terms of precision and inter-operator reliability. A 3D facial mesh analysis suggested that the deviation between the two groups was 0.37±0.01 mm. CONCLUSIONS: For facial scanners, an accuracy of <1 mm is commonly considered clinically acceptable. Both the RGB-D camera-based and stereophotogrammetry facial scanners in this study showed acceptable trueness, high precision, and inter-operator reliability. A low-cost RGB-D camera-based facial scanner could be an eligible clinical substitute for traditional stereophotogrammetry. CLINICAL SIGNIFICANCE: The low-cost RGB-D camera-based facial scanner showed clinically acceptable trueness, high precision, and inter-operator reliability; thus, it could be an eligible clinical substitute for traditional stereophotogrammetry.
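The average-absolute-deviation trueness summary and the <1 mm acceptance check can be expressed directly; the paired measurements below are hypothetical, not data from this study.

```python
import numpy as np

def mean_abs_dev(a, b):
    """Average absolute deviation between paired measurements (mm),
    the trueness summary used to compare two scanners."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.abs(a - b).mean()

# Hypothetical paired facial distances (mm) from the two scanners.
rgbd  = [62.1, 48.9, 33.4, 71.0]
photo = [61.7, 49.3, 33.0, 70.5]
mad = mean_abs_dev(rgbd, photo)
clinically_ok = mad < 1.0        # the <1 mm clinical acceptance threshold
```

A formal equivalence test (e.g., two one-sided tests against ±1 mm) would accompany this summary statistic in practice.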


Subjects
Three-Dimensional Imaging, Photogrammetry, Computer-Aided Design, Dental Impression Technique, Reproducibility of Results, Software