ABSTRACT
The inherent membrane tension of biological materials can strongly affect their response to contact loading but is generally ignored in existing indentation analyses. In this paper, the authors theoretically investigate the contact stiffness of axisymmetric indentations of elastic solids covered with thin tensed membranes. When the indentation size decreases to the same order as the ratio of membrane tension to elastic modulus, the contact stiffness accounting for the effect of membrane tension becomes much higher than the prediction of conventional contact theory. An explicit expression is derived for the contact stiffness, which is universal for axisymmetric indentations with indenters of arbitrary convex profiles. On this basis, a simple method of analysis is proposed to estimate the membrane tension and elastic modulus of biological materials from indentation load-depth data, and it is successfully applied to analyze indentation experiments on cells and lungs. This study may be helpful for the comprehensive assessment of the mechanical properties of soft biological systems.

STATEMENT OF SIGNIFICANCE: This paper highlights the crucial effect of inherent membrane tension on the indentation response of soft biomaterials, which has generally been ignored in existing analyses of experiments. For typical indentation tests on cells and organs, the contact stiffness can be twice the prediction of the conventional contact model, or higher. A universal expression of the contact stiffness accounting for the membrane tension effect is derived. On this basis, a simple method of analysis is proposed to extract the membrane tension of biomaterials from experimentally recorded indentation load-depth data. With this method, the elasticity of soft biomaterials can be characterized more comprehensively.
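The abstract does not reproduce the derived universal expression, so only its structure can be sketched. A hedged placeholder, built on the standard tension-free stiffness relation and an unspecified correction factor f whose limits follow from the text, is:

```latex
% Conventional axisymmetric contact stiffness (frictionless indenter on an
% elastic half-space): S = dP/d(delta) = 2 E* a, with reduced modulus
% E* = E / (1 - nu^2) and contact radius a. A hedged placeholder form that is
% merely consistent with the abstract (the function f is hypothetical) is
\[
  S \;=\; \frac{\mathrm{d}P}{\mathrm{d}\delta}
    \;=\; 2E^{*}a\, f\!\left(\frac{\tau}{E^{*}a}\right),
  \qquad E^{*} = \frac{E}{1-\nu^{2}}, \qquad f(0) = 1,
\]
% where tau is the membrane tension; f grows above 1 once the contact radius a
% becomes comparable to the length scale tau/E*, so the measured stiffness
% exceeds the classical tension-free value.
```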
Subject(s)
Biocompatible Materials, Elasticity, Elastic Modulus

ABSTRACT
The majority of falls leading to death occur among the elderly population. Fall detection technology can help to ensure quick assistance for fall victims by automatically informing caretakers. Our fall detection method is based on depth data and detects falls with a high level of reliability while maintaining a low false alarm rate. The technology has been deployed in over 1,200 installations, indicating user acceptance and technological maturity. We follow a privacy-by-design approach by using range maps for the analysis instead of RGB images and by processing all data in the sensor. The literature review shows that real-world fall detection evaluation is scarce and, where available, is conducted with a limited number of participants. To our knowledge, our depth-image-based fall detection method has undergone the largest field evaluation to date, with more than 100,000 events manually annotated and an evaluation on a dataset of 2.2 million events. We additionally present an eight-month study analysing more than 120,000 alarms generated by 214 sensors located in 16 care facilities in Austria. We learned that, on average, 2.3 times more falls happen than are documented. Consequently, the system helps to detect falls that would otherwise be overlooked. The presented solution has the potential to significantly reduce the risk associated with accidental falls.
Subject(s)
Accidental Falls, Privacy, Aged, Humans, Accidental Falls/prevention & control, Austria, Knowledge, Reproducibility of Results

ABSTRACT
Hyperspectral imaging and distance data have previously been used in aerial, forestry, agricultural, and medical imaging applications. Extracting meaningful information from a combination of different imaging modalities is difficult, as sensor fusion requires knowing the optical properties of the sensors, selecting the right optics, and finding the sensors' mutual reference frame through calibration. In this research we demonstrate a method for fusing data from a Fabry-Perot interferometer hyperspectral camera and a Kinect V2 time-of-flight depth-sensing camera. We created an experimental application that uses the depth-augmented hyperspectral data to measure emission-angle-dependent reflectance from a point cloud inferred from multiple views. We determined the intrinsic and extrinsic camera parameters through calibration, used global and local registration algorithms to combine point clouds from different viewpoints, created a dense point cloud, and determined the angle-dependent reflectances from it. The method successfully combined the 3D point cloud data and hyperspectral data from different viewpoints of a reference colorchecker board. The point cloud registrations achieved a fitness of 0.29-0.36 for inlier point correspondences with an RMSE of approximately 2, indicating a fairly reliable registration. The RMSE of the measured reflectances between the front view and side views of the targets varied between 0.01 and 0.05 on average, and the spectral angle varied between 1.5 and 3.2 degrees. The results suggest that changing the emission angle has a very small effect on the surface reflectance intensity and spectrum shapes, as expected for the colorchecker used.
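The abstract names global and local registration but not the library used to implement it; a minimal sketch of that step, assuming Open3D (0.12 or later) with RANSAC-on-FPFH global registration followed by point-to-plane ICP refinement, and with placeholder file names and voxel size, could look like this:

```python
# Hedged sketch: global + local registration of two point clouds, assuming
# Open3D; the actual pipeline, parameters, and data of the paper may differ.
import open3d as o3d

VOXEL = 0.01  # placeholder voxel size in metres

def preprocess(pcd, voxel):
    """Downsample, estimate normals, and compute FPFH features."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

source = o3d.io.read_point_cloud("view_front.ply")   # placeholder paths
target = o3d.io.read_point_cloud("view_side.ply")
src_down, src_fpfh = preprocess(source, VOXEL)
tgt_down, tgt_fpfh = preprocess(target, VOXEL)

# Global (coarse) registration via RANSAC on feature correspondences.
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src_down, tgt_down, src_fpfh, tgt_fpfh, True, VOXEL * 1.5,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3)

# Local refinement with point-to-plane ICP, seeded by the coarse result.
fine = o3d.pipelines.registration.registration_icp(
    src_down, tgt_down, VOXEL * 0.5, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

# fitness = inlier correspondence ratio, inlier_rmse = residual over inliers,
# i.e. the kind of registration quality figures quoted in the abstract.
print(fine.fitness, fine.inlier_rmse)
print(fine.transformation)
```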
Subject(s)
Algorithms, Calibration

ABSTRACT
This paper presents a pig carcass cutting dataset, captured with a bespoke frame structure with six Intel® RealSense™ Depth Camera D415 cameras attached, and later recorded with a single camera attached to a robotic arm cycling through the positions previously defined by the frame structure. The data are composed of bag files recorded with Intel's SDK, which include RGB-D data and camera intrinsic parameters for each sensor. In addition, ten JSON files with the transformation matrix of each camera relative to the left/front camera in the structure are provided: five JSON files for the data recorded with the bespoke frame and five JSON files for the data captured with the robotic arm.
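The abstract does not specify the internal layout of the JSON files; the sketch below, which assumes a key named "transformation" holding a flattened 4x4 matrix and uses the librealsense Python wrapper (pyrealsense2) to replay a bag file, is therefore illustrative only:

```python
# Hedged sketch: replaying one of the dataset's .bag files with pyrealsense2
# and applying a camera-to-reference transform loaded from a JSON file.
# File names and the JSON key "transformation" are assumptions, not the
# dataset's documented schema.
import json
import numpy as np
import pyrealsense2 as rs

# Replay a recorded bag instead of streaming from a live D415.
config = rs.config()
rs.config.enable_device_from_file(config, "camera_front_left.bag", repeat_playback=False)
pipeline = rs.pipeline()
pipeline.start(config)

frames = pipeline.wait_for_frames()
depth = frames.get_depth_frame()
color = frames.get_color_frame()

# Depth intrinsics are embedded in the stream profile of the recording.
intr = depth.profile.as_video_stream_profile().get_intrinsics()
depth_image = np.asanyarray(depth.get_data())   # uint16, in depth units
print(intr.width, intr.height, intr.fx, intr.fy, intr.ppx, intr.ppy)

# Load the (assumed) 4x4 transform of this camera w.r.t. the left/front camera.
with open("camera_front_left_to_reference.json") as f:
    T = np.array(json.load(f)["transformation"], dtype=float).reshape(4, 4)
print(T)

pipeline.stop()
```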
ABSTRACT
Animal dimensions are essential indicators for monitoring growth rate, diet efficiency, and health status. A computer vision system is a recently emerging precision livestock farming technology that overcomes previously unresolved challenges pertaining to labor and cost. Depth sensor cameras can be used to estimate the depth or height of an animal, in addition to two-dimensional information. Collecting top-view depth images is common when evaluating body mass or conformational traits in livestock species. However, the depth image acquisition process often involves manual intervention, such as controlling a camera from a laptop, and detailed steps for automated data collection are rarely documented. Furthermore, open-source image data acquisition implementations are rarely available. The objectives of this study were to 1) investigate the utility of automated top-view dairy cow depth data collection methods using picture- and video-based methods, 2) evaluate the performance of an infrared cut lens, and 3) make the source code available. Both methods can automatically perform animal detection, trigger recording, capture depth data, and terminate recording for individual animals. The picture-based method takes only a predetermined number of images, whereas the video-based method records a sequence of frames as a video. For the picture-based method, we evaluated 3- and 10-picture approaches. The depth sensor camera was mounted 2.75 m above the ground over a walk-through scale between the milking parlor and the free-stall barn. A total of 150 Holstein and 100 Jersey cows were evaluated. A pixel location where the depth was monitored was set up as a point of interest. More than 89% of cows were successfully captured using both picture- and video-based methods. The success rates of the picture- and video-based methods further improved to 92% and 98%, respectively, when combined with an infrared cut lens. Although both the picture-based method with 10 pictures and the video-based method yielded accurate results for collecting depth data on cows, the former was more efficient in terms of data storage. The current study demonstrates automated depth data collection frameworks and a Python implementation available to the community, which can help facilitate the deployment of computer vision systems for dairy cows.
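The study's own Python implementation is the authoritative reference; the fragment below is only a schematic reconstruction of the described trigger logic (monitor a point-of-interest pixel, start capturing when a cow occludes it, stop and re-arm when it clears), with a placeholder `read_depth_frame()` standing in for whichever camera SDK is used and with illustrative thresholds:

```python
# Hedged sketch of the described picture-based trigger: depth at a fixed
# point of interest (POI) drops when a cow walks under the overhead camera,
# which triggers capture of a predetermined number of frames.
# read_depth_frame() is a placeholder for the camera SDK call; the pixel,
# thresholds, and frame count are illustrative, not the study's values.
import time
import numpy as np

POI = (212, 256)          # (row, col) pixel monitored for occlusion
GROUND_DEPTH_MM = 2750    # camera mounted ~2.75 m above the ground
TRIGGER_DROP_MM = 700     # cow present if depth is this much closer than ground
N_PICTURES = 10           # picture-based method: capture a fixed number of frames

def read_depth_frame():
    """Placeholder: return one depth frame (mm) from the camera SDK."""
    raise NotImplementedError

def cow_present(frame):
    return frame[POI] < GROUND_DEPTH_MM - TRIGGER_DROP_MM

while True:
    frame = read_depth_frame()
    if cow_present(frame):
        # Animal detected: capture N_PICTURES frames, then wait until the POI
        # returns to ground depth before re-arming the trigger.
        captured = [read_depth_frame() for _ in range(N_PICTURES)]
        np.save(f"cow_{int(time.time())}.npy", np.stack(captured))
        while cow_present(read_depth_frame()):
            time.sleep(0.1)
```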
ABSTRACT
Recently, several computer applications, such as sign language recognition, robot control, games, appliance control, and smart surveillance, have provided operating modes based on pointing fingers, waving hands, and body movement instead of mouse, keyboard, audio, or touch input. With the increase in hand-pose-based applications, new challenges in this domain have also emerged. Support vector machines and neural networks have been used extensively in this domain with conventional RGB data, which often does not yield adequate performance. Recently, depth data have become popular because they capture posture attributes better. In this study, a multiple parallel stream 2D CNN (two-dimensional convolutional neural network) model is proposed to recognize hand postures. The proposed model comprises multiple steps and layers to detect hand poses from image maps obtained from depth data. The hyperparameters of the proposed model are tuned through experimental analysis. Three publicly available benchmark datasets (Kaggle, First Person, and Dexter) are used independently to train and test the proposed approach. The accuracy of the proposed method is 99.99%, 99.48%, and 98% on the Kaggle hand posture dataset, the First Person hand posture dataset, and the Dexter dataset, respectively. Further, the F1 and AUC scores are also near-optimal. Comparative analysis shows that the proposed model outperforms previous state-of-the-art methods.
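The abstract gives the overall idea (multiple parallel 2D convolutional streams over depth-derived image maps) but not the exact layer configuration; the minimal PyTorch sketch below, with three parallel streams of different kernel sizes, is an assumption rather than the authors' tuned architecture:

```python
# Hedged sketch of a multiple-parallel-stream 2D CNN for hand posture
# classification from single-channel depth maps. Stream count, kernel sizes,
# channel widths, and the input size are illustrative assumptions.
import torch
import torch.nn as nn

class ParallelStreamCNN(nn.Module):
    def __init__(self, num_classes: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.streams = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 16, k, padding=k // 2), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, k, padding=k // 2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),   # each stream -> 32 x 4 x 4 features
            )
            for k in kernel_sizes
        ])
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(len(kernel_sizes) * 32 * 4 * 4, 128), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                       # x: (B, 1, H, W) depth map
        feats = [s(x) for s in self.streams]    # run the parallel streams
        return self.classifier(torch.cat(feats, dim=1))

logits = ParallelStreamCNN(num_classes=10)(torch.randn(2, 1, 96, 96))
print(logits.shape)   # torch.Size([2, 10])
```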
Subject(s)
Hand, Neural Networks (Computer), Movement, Posture

ABSTRACT
Surface flatness assessment is necessary for quality control of metal sheets manufactured from steel coils by roll leveling and cutting. Mechanical-contact-based flatness sensors are being replaced by modern laser-based optical sensors that deliver accurate and dense reconstructions of metal sheet surfaces for flatness index computation. However, the surface range images captured by these optical sensors are corrupted by very specific kinds of noise due to vibrations caused by mechanical processes such as degreasing, cleaning, polishing, shearing, and transporting roll systems. Therefore, high-quality flatness optical measurement systems strongly depend on the quality of the image denoising methods applied to extract the true surface height image. This paper presents a deep learning architecture for removing these specific kinds of noise from the range images obtained by a laser-based range sensor installed in a rolling and shearing line, in order to allow accurate flatness measurements from the clean range images. The proposed convolutional blind residual denoising network (CBRDNet) is composed of a noise estimation module and a noise removal module, implemented by specific adaptation of semantic convolutional neural networks. CBRDNet is validated on both synthetic and real noisy range image data that exhibit the most critical kinds of noise arising throughout the metal sheet production process. Real data were obtained from a single laser line triangulation flatness sensor installed in a roll leveling and cut-to-length line. Computational experiments over both synthetic and real datasets clearly demonstrate that CBRDNet achieves superior performance in comparison to traditional 1D and 2D filtering methods and state-of-the-art CNN-based denoising techniques. The experimental validation shows a reduction in error that can be up to 15% relative to solutions based on traditional 1D and 2D filtering methods and between 3% and 10% relative to other deep learning denoising architectures recently reported in the literature.
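CBRDNet itself is not specified in the abstract beyond its two-module structure; the PyTorch sketch below only mirrors that structure (a noise-estimation subnetwork followed by a residual noise-removal subnetwork, in the spirit of CBDNet-style blind denoising) with assumed layer counts and widths, and should not be read as the authors' network:

```python
# Hedged sketch of a two-module blind residual denoiser for 1-channel range
# images: module 1 estimates a per-pixel noise level map, module 2 takes the
# noisy image concatenated with that map and predicts a noise residual.
# Layer counts and widths are assumptions, not the CBRDNet configuration.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class BlindResidualDenoiser(nn.Module):
    def __init__(self, width: int = 32):
        super().__init__()
        # Noise estimation module: noisy range image -> noise level map.
        self.estimator = nn.Sequential(
            conv_block(1, width), conv_block(width, width),
            nn.Conv2d(width, 1, 3, padding=1), nn.ReLU(inplace=True))
        # Noise removal module: [image, noise map] -> predicted noise residual.
        self.remover = nn.Sequential(
            conv_block(2, width), conv_block(width, width),
            conv_block(width, width), nn.Conv2d(width, 1, 3, padding=1))

    def forward(self, noisy):
        sigma_map = self.estimator(noisy)
        residual = self.remover(torch.cat([noisy, sigma_map], dim=1))
        clean = noisy - residual          # residual learning
        return clean, sigma_map

clean, sigma = BlindResidualDenoiser()(torch.randn(1, 1, 128, 128))
print(clean.shape, sigma.shape)
```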
ABSTRACT
Virtual training systems are in increasing demand because real-world training involves high cost or risk, whereas training can be conducted safely in virtual environments. For virtual training to be effective, it is important to provide realistic training situations; however, virtual reality (VR) content operated through VR controllers differs significantly from real experiential learning in terms of tangible interaction. In this paper, we propose a method for enhancing presence and immersion during virtual training by applying various sensors to tangible virtual training, tracking the movement of the real tools used during training, and virtualizing the entire body of the user for transfer to the virtual environment. The proposed training system connects virtual and real-world spaces through an actual object (e.g., an automobile) to provide the feeling of actual touch during virtual training. Furthermore, the system measures the posture of the tools (steam gun and mop) and the degree of touch and applies them during training (e.g., a steam car wash). User testing is conducted to validate the increase in the effectiveness of virtual job training.
Subject(s)
Virtual Reality, Movement, User-Computer Interface

ABSTRACT
This data article presents rich original experimental video sources and extensive collections of laboratory data on water levels, sediment depths, and wave-front celerity values arising from different multiphase dam-break scenarios. The required data on dam-break shock waves in highly silted-up reservoirs with various initial upstream and downstream hydraulic conditions are obtained directly from high-quality videos. The multi-layer shock waves were recorded by three professional cameras mounted along the laboratory channel. The extracted video images were rigorously scrutinized, and the datasets were obtained from the images via image processing. Different sediment depths in the upstream reservoir and dry- or wet-bed downstream conditions were considered as initial conditions, comprising a total of 32 different scenarios. A total of 198 original experimental videos are made available online in the public repository "Mendeley Data" in 8 groups based on 8 different initial upstream sediment depths [1], [2], [3], [4], [5], [6], [7], [8]. Twenty locations along the flume and 15 time snapshots after the dam break were considered for data collection. Consequently, a total of 18,000 water level and sediment depth data points were collected to prepare four datasets, which are uploaded in the public repository "Mendeley Data". A total of 9,600 water level data points can be accessed in [9], [10], while 8,400 sediment depth data points are available online in [11], [12] and can be used for validation and practical purposes by other researchers. This data article is related to another research article entitled "Experimental study and numerical verification of silted-up dam-break" [13].
ABSTRACT
Measuring pavement roughness and detecting pavement surface defects are two of the most important tasks in pavement management. Because existing pavement roughness measurement approaches are expensive, the primary aim of this paper is to use a cost-effective and sufficiently accurate RGB-D sensor to estimate pavement roughness in the outdoor environment. An algorithm is proposed to process the RGB-D data and autonomously quantify road roughness. To this end, the RGB-D sensor is calibrated and primary data for estimating pavement roughness are collected. The collected depth frames and RGB images are registered to create 3D road surfaces. We found a significant correlation between the International Roughness Index (IRI) estimated with the RGB-D sensor and the IRI measured manually using rod and level. Power Spectral Density (PSD) analysis and the repeatability of the measurements show that the proposed solution can accurately estimate different levels of pavement roughness.
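The abstract mentions PSD analysis of the reconstructed surface; computing the IRI proper additionally requires a quarter-car simulation, which is not shown here. A hedged NumPy/SciPy sketch of just the PSD step, assuming a uniformly sampled longitudinal elevation profile extracted from the 3D surface (the sampling interval is a placeholder), is:

```python
# Hedged sketch: power spectral density of a longitudinal elevation profile
# sampled from the reconstructed road surface. The sampling interval DX and
# the profile itself are placeholders; the IRI quarter-car computation used
# in the paper is not reproduced.
import numpy as np
from scipy.signal import detrend, welch

DX = 0.005                     # assumed longitudinal sample spacing (m)

def roughness_psd(elevation_m):
    """Return spatial frequencies (cycles/m) and PSD of the elevation profile."""
    z = detrend(np.asarray(elevation_m, dtype=float))    # remove slope/offset
    freqs, psd = welch(z, fs=1.0 / DX, nperseg=min(1024, len(z)))
    return freqs, psd

# Toy profile: 20 m of synthetic roughness just to exercise the function.
x = np.arange(0, 20, DX)
profile = 0.002 * np.sin(2 * np.pi * x / 3.0) + 0.0005 * np.random.randn(x.size)
f, p = roughness_psd(profile)
print(f[:5], p[:5])
```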
ABSTRACT
High-Level Structure (HLS) extraction in a set of images consists of recognizing 3D elements that carry information useful to the user or application. There are several approaches to HLS extraction; however, most of them are based on processing two or more images captured from different camera views or on processing 3D data in the form of point clouds extracted from the camera images. In contrast, motivated by the extensive work on depth estimation from a single image, where parallax constraints are not required, we propose a novel methodology for HLS extraction from a single image, with promising results. Our method has four steps. First, we use a CNN to predict the depth of a single image. Second, we propose a region-wise analysis to refine the depth estimates. Third, we introduce a graph analysis to segment the depth into semantic orientations, aiming to identify potential HLS. Finally, the depth sections are provided to a new CNN architecture that predicts HLS in the shape of cubes and rectangular parallelepipeds.
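Neither the depth-prediction CNN nor the graph analysis is detailed in the abstract; as a loose, hedged stand-in for the orientation-segmentation idea only, the NumPy fragment below estimates per-pixel surface normals from a depth map by finite differences and groups pixels into coarse orientation bins (it ignores the camera intrinsics and is illustrative rather than a reconstruction of the paper's method):

```python
# Hedged sketch: bin depth-map pixels by approximate surface-normal direction,
# a crude stand-in for segmenting depth into "semantic orientations".
# The finite-difference normals ignore perspective (no intrinsics), so this is
# illustrative only.
import numpy as np

def orientation_labels(depth):
    dz_dv, dz_du = np.gradient(depth.astype(float))       # image-space slopes
    normals = np.dstack([-dz_du, -dz_dv, np.ones_like(depth, dtype=float)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    nx, ny = normals[..., 0], normals[..., 1]
    labels = np.zeros(depth.shape, dtype=np.uint8)         # 0 = facing the camera
    labels[np.abs(nx) > 0.5] = 1                           # 1 = sloping sideways
    labels[np.abs(ny) > 0.5] = 2                           # 2 = sloping up/down
    return labels

# Toy ramp-shaped depth map just to exercise the function.
toy_depth = np.fromfunction(lambda v, u: 2.0 + 1.0 * u, (120, 160))
print(np.bincount(orientation_labels(toy_depth).ravel()))
```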
ABSTRACT
In this paper, a marker-based, single-person optical motion capture method (DeepMoCap) is proposed using multiple spatio-temporally aligned infrared-depth sensors and retro-reflective straps and patches (reflectors). DeepMoCap explores motion capture by automatically localizing and labeling reflectors on depth images and, subsequently, in 3D space. Introducing a non-parametric representation to encode the temporal correlation among pairs of colorized depthmaps and 3D optical flow frames, a multi-stage Fully Convolutional Network (FCN) architecture is proposed to jointly learn reflector locations and their temporal dependency among sequential frames. The extracted 2D reflector locations are spatially mapped into 3D space, resulting in robust 3D optical data extraction. The subject's motion is efficiently captured by applying a template-based fitting technique to the extracted optical data. Two datasets have been created and made publicly available for evaluation purposes: one comprising multi-view depth and 3D optical flow annotated images (DMC2.5D), and a second consisting of spatio-temporally aligned multi-view depth images along with skeleton, inertial, and ground truth MoCap data (DMC3D). The FCN model outperforms its competitors on the DMC2.5D dataset using the 2D Percentage of Correct Keypoints (PCK) metric, while the motion capture outcome is evaluated against RGB-D and inertial data fusion approaches on DMC3D, outperforming the next best method by 4.5% in total 3D PCK accuracy.
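PCK is a standard keypoint metric; the small function below shows how a 2D percentage of correct keypoints can be computed, with the normalization reference length (e.g., a torso or head size) treated as an assumption since the abstract does not state which one is used:

```python
# Hedged sketch: 2D Percentage of Correct Keypoints (PCK). A predicted
# keypoint counts as correct if it lies within alpha * ref_length of the
# ground truth; the choice of ref_length and alpha is an assumption.
import numpy as np

def pck_2d(pred, gt, ref_length, alpha=0.2, visible=None):
    """pred, gt: (N, K, 2) keypoint arrays; ref_length: (N,) per-sample scale."""
    dists = np.linalg.norm(pred - gt, axis=-1)             # (N, K) pixel errors
    correct = dists <= alpha * np.asarray(ref_length)[:, None]
    if visible is not None:                                 # ignore occluded joints
        correct = correct[visible]
    return float(np.mean(correct))

pred = np.random.rand(4, 17, 2) * 100
gt = pred + np.random.randn(4, 17, 2) * 5
print(pck_2d(pred, gt, ref_length=np.full(4, 80.0)))
```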
ABSTRACT
The challenge of describing real 3D scenes is tackled in this paper using qualitative spatial descriptors. A key point to study is which qualitative descriptors to use and how these descriptors must be organized to produce a suitable cognitive explanation. To find answers, a survey was carried out in which human participants freely described a scene containing some pieces of furniture. The data obtained in this survey are analysed, and on that basis the QSn3D computational approach was developed, which uses an Xbox 360 Kinect to obtain 3D data from a real indoor scene. Object features are computed on these 3D data to identify objects in indoor scenes. The object orientation is computed, and qualitative spatial relations between the objects are extracted. These qualitative spatial relations are the input to a grammar which applies saliency rules obtained from the survey study and generates cognitive natural language descriptions of scenes. Moreover, these qualitative descriptors can be expressed as first-order logical facts in Prolog for further reasoning. Finally, a validation study is carried out to test whether the descriptions provided by the QSn3D approach are human readable. The obtained results show that their acceptability is higher than 82%.
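The grammar and saliency rules are the paper's own; the fragment below only illustrates the kind of qualitative relation extraction described, deriving left-of / in-front-of relations from object centroids and printing them as Prolog-style facts (the relation names, coordinate convention, and threshold are assumptions, not the QSn3D vocabulary):

```python
# Hedged sketch: derive simple qualitative spatial relations between detected
# objects from their 3D centroids (camera frame: x right, y up, z away from
# the sensor) and emit them as Prolog-style facts for further reasoning.
from itertools import permutations

objects = {                      # placeholder centroids in metres
    "table": (0.0, 0.4, 2.0),
    "chair": (-0.6, 0.3, 2.1),
    "lamp":  (0.1, 0.4, 2.9),
}
MIN_OFFSET = 0.2                 # ignore differences smaller than this

def facts(objs):
    out = []
    for a, b in permutations(objs, 2):
        (xa, _, za), (xb, _, zb) = objs[a], objs[b]
        if xb - xa > MIN_OFFSET:
            out.append(f"left_of({a}, {b}).")       # a is left of b
        if zb - za > MIN_OFFSET:
            out.append(f"in_front_of({a}, {b}).")   # a is closer to the camera
    return out

print("\n".join(facts(objects)))
```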
Subject(s)
Logic, Machine Learning, Natural Language Processing, Pattern Recognition (Automated), Space Perception, Spatial Analysis, User-Computer Interface, Humans

ABSTRACT
Climbing and descending stairs are demanding daily activities, and monitoring them may reveal the presence of musculoskeletal diseases at an early stage. A markerless system is needed to monitor such stair walking activity without mentally or physically disturbing the subject. Microsoft Kinect v2 has been used for gait monitoring, as it provides a markerless skeleton tracking function. However, few studies have used this device for stair walking monitoring, and the accuracy of its skeleton tracking function during stair walking has not been evaluated. Moreover, skeleton tracking is unlikely to be suitable for estimating body joints during stair walking, as body posture differs from that during walking on level surfaces. In this study, a new method of estimating the 3D position of the knee joint was devised that uses the depth data of Kinect v2. The accuracy of this method was compared with that of the skeleton tracking function of Kinect v2 by simultaneously measuring subjects with a 3D motion capture system. The depth data method was found to be more accurate than skeleton tracking: the mean 3D Euclidean distance error of the depth data method was 43.2 ± 27.5 mm, while that of skeleton tracking was 50.4 ± 23.9 mm. This method indicates the possibility of stair walking monitoring for the early detection of musculoskeletal diseases.
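The abstract does not detail how a depth pixel is converted into a 3D knee position; a standard pinhole back-projection with nominal Kinect v2 depth intrinsics (approximate values that would need per-device calibration), followed by the mean 3D Euclidean error against motion capture references as used for the figures above, might look like this:

```python
# Hedged sketch: back-project a (u, v, depth) measurement from the Kinect v2
# depth image into 3D camera coordinates and compute the mean 3D Euclidean
# error against motion-capture reference positions. The intrinsics below are
# nominal 512x424 Kinect v2 values and are assumptions; the paper's knee
# localization on the depth map itself is not reproduced here.
import numpy as np

FX, FY = 365.0, 365.0      # approximate focal lengths (pixels)
CX, CY = 256.0, 212.0      # approximate principal point (pixels)

def backproject(u, v, depth_mm):
    """Pixel (u, v) with depth in mm -> (X, Y, Z) in mm, camera frame."""
    z = float(depth_mm)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

def mean_euclidean_error(estimates_mm, references_mm):
    e = np.asarray(estimates_mm, dtype=float)
    r = np.asarray(references_mm, dtype=float)
    return float(np.mean(np.linalg.norm(e - r, axis=1)))

knee_est = np.array([backproject(300, 250, 1800), backproject(305, 248, 1750)])
knee_ref = knee_est + np.random.randn(*knee_est.shape) * 20.0   # toy mocap data
print(mean_euclidean_error(knee_est, knee_ref))
```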