Results 1 - 20 of 31
1.
Hum Factors ; : 187208241272066, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39117017

ABSTRACT

OBJECTIVE: Physical and cognitive workloads and performance were studied for a corrective shared control (CSC) human-robot collaborative (HRC) sanding task. BACKGROUND: Manual sanding is physically demanding. Collaborative robots (cobots) can potentially reduce physical stress, but fully autonomous implementation has been particularly challenging due to skill requirements, task variability, and robot limitations. CSC is an HRC method in which the robot operates semi-autonomously while the human provides real-time corrections. METHODS: Twenty laboratory participants removed paint using an orbital sander, both manually and with a CSC robot. A fully automated robot was also tested. RESULTS: The CSC robot reduced subjective discomfort relative to manual sanding by 29.5% in the upper arm, 32% in the lower arm, 36.5% in the hand, 24% in the front of the shoulder, and 17.5% in the back of the shoulder. Muscle fatigue, measured using EMG, was observed in the medial deltoid and flexor carpi radialis for the manual condition. The composite cognitive workload on the NASA-TLX was 14.3% higher for manual sanding due to high physical demand and effort, while mental demand was 14% greater for the CSC robot. Digital imaging showed that the CSC robot outperformed the automated condition by 7.16% for uniformity, 4.96% for quantity, and 6.06% in total. CONCLUSIONS: In this example, we found that human skills and techniques were integral to sanding and can be successfully incorporated into HRC systems. Humans performed the task using the CSC robot with less fatigue and discomfort. APPLICATIONS: The results can inform the implementation of future HRC systems in manufacturing environments.

2.
IEEE Sens J ; 24(5): 6888-6897, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38476583

ABSTRACT

We developed an ankle-worn gait monitoring system, OptiGait, for tracking gait parameters, including gait length, width, and height. The system uses ankle bracelets equipped with wide-angle infrared (IR) stereo cameras that monitor a marker on the opposing ankle. A computer vision algorithm we also developed processes the imaged marker positions to estimate the length, width, and height of the person's gait. In tests on multiple participants, the prototype achieved average accuracies of 96.52%, 94.46%, and 95.29% for gait length, width, and height measurements, respectively, despite distorted wide-angle images. OptiGait offers a cost-effective and user-friendly alternative to existing gait parameter sensing systems, delivering comparable accuracy in measuring gait length and width. Notably, it measures gait height, a capability not previously reported in the literature.
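
As a hedged illustration of the underlying geometry (not the authors' code), the sketch below triangulates the opposing-ankle marker from a rectified stereo pair using an idealized pinhole model, then differences successive foot placements to obtain gait length, width, and height. The focal length, baseline, and pixel coordinates are hypothetical, and the principal point is assumed at the image origin.

```python
import numpy as np

def marker_position(f_px, baseline_m, u_left, u_right, v):
    """Triangulate a marker from rectified stereo coordinates (pinhole model)."""
    disparity = u_left - u_right             # pixels
    z = f_px * baseline_m / disparity        # depth (m)
    x = u_left * z / f_px                    # lateral offset (m)
    y = v * z / f_px                         # vertical offset (m)
    return np.array([x, y, z])

# Two successive placements of the tracked marker (hypothetical pixel coordinates)
p1 = marker_position(400.0, 0.04, 120.0, 95.0, 10.0)
p2 = marker_position(400.0, 0.04, 150.0, 118.0, 12.0)
dx, dy, dz = np.abs(p2 - p1)
print(f"gait width ~{dx:.3f} m, height ~{dy:.3f} m, length ~{dz:.3f} m")
```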

3.
Article in English | MEDLINE | ID: mdl-37719135

ABSTRACT

A novel online real-time video stabilization algorithm (LSstab) that suppresses unwanted motion jitter based on cinematography principles is presented. LSstab features a parallel realization of the a-contrario RANSAC (AC-RANSAC) algorithm to estimate the inter-frame camera motion parameters. A novel least-squares-based smoothing cost function is then proposed to mitigate undesirable camera jitter according to cinematography principles. A recursive least-squares solver is derived to minimize the smoothing cost function with linear computational complexity. LSstab is evaluated on a suite of publicly available videos against state-of-the-art video stabilization methods. Results show that LSstab achieves comparable or better performance and attains real-time processing speed when a GPU is used.
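
To make the smoothing idea concrete, here is a minimal least-squares path-smoothing sketch in the spirit of the abstract (not the paper's recursive solver): it finds a smooth 1D camera path that stays close to the estimated trajectory while penalizing frame-to-frame motion. The penalty weight and synthetic trajectory are assumptions.

```python
import numpy as np

def smooth_path(c, lam=50.0):
    """Minimize sum (s_t - c_t)^2 + lam * sum (s_{t+1} - s_t)^2 in closed form."""
    n = len(c)
    D = np.diff(np.eye(n), axis=0)        # first-difference operator, (n-1) x n
    A = np.eye(n) + lam * D.T @ D         # normal equations: (I + lam D^T D) s = c
    return np.linalg.solve(A, c)

jittery = np.cumsum(np.random.randn(200))  # synthetic camera x-translation per frame
stable = smooth_path(jittery)
correction = stable - jittery              # per-frame warp offsets to apply
```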

4.
Ergonomics ; 66(8): 1132-1141, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36227226

ABSTRACT

Observer, manual single-frame video, and automated computer vision measures of the Hand Activity Level (HAL) were compared. HAL can be measured three ways: (1) observer rating (HALO), (2) calculated from single-frame multimedia video task analysis measuring frequency (F) and duty cycle (D) (HALF), or (3) from automated computer vision (HALC). This study analysed videos collected from three prospective cohort studies to ascertain HALO, HALF, and HALC for 419 industrial videos. Although the differences among the three methods were relatively small on average (<1), they were statistically significant (p < .001). Agreement between the HALC and HALF ratings within ±1 point on the HAL scale was the most consistent: more than two thirds (68%) of all cases fell within that range, and a linear regression through the mean had a coefficient of 1.03 (R2 = 0.89). The results suggest that the computer vision methodology yields results comparable to single-frame video analysis. Practitioner summary: The ACGIH Hand Activity Level (HAL) was obtained for 419 industrial tasks using three methods: observation, calculation from single-frame video analysis, and computer vision. The computer vision methodology produced results comparable to single-frame video analysis.
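
For reference, HALF can be computed from F and D with the published frequency/duty-cycle equation for the ACGIH hand activity level; a minimal sketch follows. Treat the constants as assumptions to be checked against the original source.

```python
import math

def hal_f(frequency_hz, duty_cycle_pct):
    """HAL from exertion frequency F (exertions/s) and duty cycle D (%)."""
    f131 = frequency_hz ** 1.31
    return 6.56 * math.log(duty_cycle_pct) * f131 / (1 + 3.18 * f131)

print(round(hal_f(0.5, 40.0), 1))  # ~4.3 for 0.5 exertions/s at a 40% duty cycle
```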


Subject(s)
Hand, Task Performance and Analysis, Humans, Prospective Studies, Upper Extremity, Computers, Videotape Recording/methods
5.
Article in English | MEDLINE | ID: mdl-36367914

ABSTRACT

Spasticity is a common complication for patients with stroke, but few studies have investigated the relation between spasticity and voluntary movement. This study proposed a novel automatic system for assessing the severity of spasticity (SS) of four upper-limb joints, namely the elbow, wrist, thumb, and fingers, through voluntary movements. A wearable system combining 19 inertial measurement units and a pressure ball collects kinematic and force information while participants perform four tasks: cone stacking (CS), fast flexion and extension (FFE), slow ball squeezing (SBS), and fast ball squeezing (FBS). Several time- and frequency-domain features were extracted from the collected data, and two feature selection approaches based on recursive feature elimination were adopted to select the most influential features. The selected features were input into five machine learning techniques to assess the SS of each joint. The results indicated that using the CS task to assess the SS of the elbow and fingers, and the FBS task to assess the SS of the thumb and wrist, achieved the highest weighted-average F1-scores; FBS was the single best task for assessing all four upper-limb joints. Overall, the results show that the proposed automatic system can accurately assess the SS of four upper-limb joints through voluntary movements, a breakthrough in relating spasticity to voluntary movement.
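
A minimal sketch of the selection-plus-classification pipeline described above, using recursive feature elimination and a random forest as one of the five possible learners; the data shapes, labels, and hyperparameters are placeholders, not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 120))    # 80 trials x 120 time/frequency-domain features
y = rng.integers(0, 4, size=80)   # MAS-like severity grades 0-3 (synthetic)

selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=15)
X_sel = selector.fit_transform(X, y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X_sel, y, cv=5, scoring="f1_weighted").mean())
```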

6.
Sensors (Basel) ; 22(19)2022 Sep 23.
Article in English | MEDLINE | ID: mdl-36236314

ABSTRACT

A novel wearable multi-sensor data glove system is developed to explore the relation between finger spasticity and voluntary movement in patients with stroke. Many stroke patients suffer from finger spasticity, which is detrimental to their manual dexterity. Diagnosing and assessing the degree of spasticity requires neurological testing performed by trained professionals, who estimate finger spasticity scores via the modified Ashworth scale (MAS). The proposed system offers an objective, quantitative solution for assessing the finger spasticity of patients with stroke and complements the manual neurological test. In this work, the hardware and software components of the system are described. While patients perform five designated tasks, biomechanical measurements, including linear and angular speed, acceleration, and pressure at every finger joint and the upper limb, are recorded, yielding more than 1000 features for each task. We conducted a preliminary clinical test with 14 subjects using this system. Statistical analysis of the acquired measurements identifies a small subset of features that are most likely to discriminate healthy subjects from patients suffering from finger spasticity. This encouraging result validates the feasibility of the proposed system for quantitatively and objectively assessing finger spasticity.


Subject(s)
Stroke Rehabilitation, Stroke, Fingers, Humans, Muscle Spasticity/diagnosis, Stroke/diagnosis, Upper Extremity
7.
J Signal Process Syst ; 94(3): 329-343, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35663585

ABSTRACT

A real-time 3D visualization (RT3DV) system using a multiview RGB camera array is presented. RT3DV processes multiple synchronized video streams to produce a stereo video of a dynamic scene from a chosen view angle. Its design objective is to facilitate 3D visualization at the video frame rate with good viewing quality. To support 3D vision, RT3DV estimates and updates a surface mesh model formed directly from a set of sparse key points. The 3D coordinates of these key points are estimated by matching 2D key points across multiview video streams with the aid of epipolar geometry and the trifocal tensor. To capture scene dynamics, 2D key points in individual video streams are tracked between successive frames. We implemented a proof-of-concept RT3DV system tasked with processing five synchronous video streams acquired by an RGB camera array. It achieves a processing speed of 44 milliseconds per frame and a peak signal-to-noise ratio (PSNR) of 15.9 dB from a viewpoint coinciding with a reference view. By comparison, an image-based multiview stereo (MVS) algorithm utilizing a dense point cloud model and frame-by-frame feature detection and matching requires 7 seconds to render a frame and yields a reference-view PSNR of 16.3 dB.
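
The key-point triangulation step can be sketched with OpenCV as below; the projection matrices stand in for calibrated cameras (in practice P = K[R|t] from calibration), and the matched key points are hypothetical.

```python
import numpy as np
import cv2

# Two calibrated views: reference camera and one translated 0.1 m along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# Matched 2D key points from the two synchronized streams (2 x N arrays)
pts1 = np.array([[100.0, 220.0], [150.0, 240.0]]).T
pts2 = np.array([[ 96.0, 220.0], [145.0, 240.0]]).T

pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous, 4 x N
pts3d = (pts4d[:3] / pts4d[3]).T                   # Euclidean mesh vertices, N x 3
print(pts3d)
```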

8.
Hum Factors ; 64(3): 482-498, 2022 05.
Article in English | MEDLINE | ID: mdl-32972247

ABSTRACT

OBJECTIVE: A computer vision method was developed for estimating the trunk flexion angle, angular speed, and angular acceleration by extracting simple features from the moving image during lifting. BACKGROUND: Trunk kinematics is an important risk factor for lower back pain but is often difficult for practitioners to measure in lifting risk assessments. METHODS: Mannequins representing a wide range of hand locations for different lifting postures were systematically generated using the University of Michigan 3DSSPP software. A bounding box was drawn tightly around each mannequin, and regression models estimated trunk angles from its dimensions. The estimates were validated against human posture data for 216 lifts collected using a laboratory-grade motion capture system and synchronized video recordings. Trunk kinematics, based on bounding box dimensions drawn around the subjects in the video recordings of the lifts, were modeled for consecutive video frames. RESULTS: The mean absolute difference between predicted and motion-capture-measured trunk angles was 14.7°, and there was a significant linear relationship between predicted and measured trunk angles (R2 = .80, p < .001). The training error for the kinematics model was 2.3°. CONCLUSION: Using simple computer vision-extracted features, the bounding box method indirectly estimated trunk angle and associated kinematics, albeit with limited precision. APPLICATION: This computer vision method may be implemented on handheld devices such as smartphones to facilitate automatic lifting risk assessments in the workplace.
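
A hedged sketch of the idea: regress trunk flexion angle on stature-normalized bounding-box dimensions, then differentiate across frames for angular speed and acceleration. The linear form and its coefficients are invented placeholders, not the paper's fitted model.

```python
import numpy as np

def trunk_angle(width_norm, height_norm, coef=(-38.0, 95.0, 60.0)):
    """Placeholder linear model: angle = b0 + b1*width + b2*(1 - height)."""
    b0, b1, b2 = coef
    return b0 + b1 * width_norm + b2 * (1.0 - height_norm)

fps = 30.0
widths = np.array([0.45, 0.52, 0.60, 0.66])   # bbox width / stature, per frame
heights = np.array([0.98, 0.90, 0.80, 0.72])  # bbox height / stature, per frame

angles = trunk_angle(widths, heights)         # degrees
speed = np.gradient(angles) * fps             # deg/s
accel = np.gradient(speed) * fps              # deg/s^2
```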


Subject(s)
Lifting, Torso, Biomechanical Phenomena, Computers, Humans, Posture
9.
IEEE Trans Hum Mach Syst ; 51(6): 734-739, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35677387

ABSTRACT

A robust computer vision-based approach is developed to estimate the load asymmetry angle defined in the revised NIOSH lifting equation (RNLE). The angle of asymmetry enables the computation of a recommended weight limit for repetitive lifting operations in a workplace to prevent lower back injuries. The open-source package OpenPose is applied to estimate the 2D locations of the worker's skeletal joints from two synchronous videos. Combining these joint location estimates, a computer vision correspondence and depth estimation method is developed to estimate the 3D coordinates of the skeletal joints during lifting. The angle of asymmetry is then deduced from a subset of these 3D positions. Error analysis reveals unreliable angle estimates due to occlusion of the upper limbs, so a robust angle estimation method that mitigates this challenge is developed. We propose flagging unreliable angle estimates based on the average confidence level of the 2D joint estimates provided by OpenPose. An optimal threshold is derived that balances the percentage variance reduction of the estimation error against the percentage of angle estimates flagged. Tested on 360 lifting instances in a NIOSH-provided dataset, the method reduces the standard deviation of the angle estimation error from 10.13° to 4.99°. To realize this error variance reduction, 34% of estimated angles are flagged and require further validation.
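
One plausible way to deduce the asymmetry angle from 3D joints, sketched below under stated assumptions: z is up, the sagittal forward direction is taken perpendicular to the hip line, and the load direction runs from the mid-hip to the midpoint of the hands. This is an illustrative geometric construction, not the paper's exact formulation.

```python
import numpy as np

def asymmetry_angle(l_hip, r_hip, mid_hands):
    """Angle (deg) between forward and load directions in the horizontal plane."""
    mid_hip = (l_hip + r_hip) / 2.0
    hip_line = (r_hip - l_hip)[:2]
    forward = np.array([-hip_line[1], hip_line[0]])   # perpendicular to hip line
    forward /= np.linalg.norm(forward)
    load = (mid_hands - mid_hip)[:2]
    load /= np.linalg.norm(load)
    return np.degrees(np.arccos(np.clip(forward @ load, -1.0, 1.0)))

# Hypothetical OpenPose-derived 3D joints (metres, z up)
print(asymmetry_angle(np.array([-0.15, 0.0, 1.0]),
                      np.array([ 0.15, 0.0, 1.0]),
                      np.array([ 0.25, 0.45, 0.6])))  # ~29 deg
```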

10.
Micromachines (Basel) ; 11(5)2020 May 10.
Article in English | MEDLINE | ID: mdl-32397580

ABSTRACT

Existing laparoscopic surgery systems use a single laparoscope to visualize the surgical area with a limited field of view (FoV), requiring the surgeon to maneuver the laparoscope to search for a target region. In some cases, the laparoscope must be moved from one surgical port to another to detect target organs. These maneuvers lengthen surgical time and degrade operating efficiency. We hypothesized that if an array of cameras could be deployed to provide a stitched video with an expanded FoV and small blind spots, the time required to perform multiple tasks at different sites could be significantly reduced. We developed a micro-camera array that enlarges the FoV and reduces blind spots between the cameras by optimizing the camera angles. The video stream of this micro-camera array is processed in real time to provide a stitched video with the expanded FoV. We mounted this micro-camera array on a Fundamentals of Laparoscopic Surgery (FLS) laparoscopic trainer box and designed an experiment to validate the hypothesis above. Surgeons, residents, and a medical student were recruited to perform a modified bean drop task, and the completion time was compared against that measured using a traditional single-camera laparoscope. With the micro-camera array, the completion time of the modified bean drop task was 203 ± 55 s, versus 245 ± 114 s with the laparoscope (p = 0.00097). The benefit of the FoV-expanded camera array did not diminish for more experienced subjects. This test provides convincing evidence for the hypothesis that an expanded FoV with small blind spots can reduce the operation time for laparoscopic surgical tasks.

11.
Sensors (Basel) ; 19(11)2019 Jun 07.
Article in English | MEDLINE | ID: mdl-31181614

ABSTRACT

In this paper, we propose a novel multi-view image denoising algorithm based on a convolutional neural network (MVCNN). Multi-view images are arranged into 3D focus image stacks (3DFIS) according to different disparities. The MVCNN is trained to process each 3DFIS and generate a denoised image stack that contains the recovered image information for regions of particular disparities. The denoised image stacks are then fused to produce a denoised target-view image using the estimated disparity map. Unlike conventional multi-view denoising approaches, which first group similar patches and then denoise them, our CNN-based algorithm avoids exhaustive patch searching and greatly reduces computation time. In the proposed MVCNN, residual learning and batch normalization strategies are also used to enhance denoising performance and accelerate training. Experiments show that, compared with state-of-the-art single-image and multi-view denoising algorithms, the proposed CNN-based algorithm is a highly effective and efficient method for Gaussian denoising of multi-view images.
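
A compact sketch of a residual denoising CNN with batch normalization, in the spirit of the per-stack denoiser described above; the depth, width, and input shape are assumptions rather than the published MVCNN architecture.

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, channels=3, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)  # residual learning: the network predicts the noise

noisy = torch.randn(1, 3, 64, 64)   # one noisy slice of a 3D focus image stack
denoised = ResidualDenoiser()(noisy)
```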

12.
Ergonomics ; 62(8): 1043-1054, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31092146

ABSTRACT

A widely used risk prediction tool, the revised NIOSH lifting equation (RNLE), provides the recommended weight limit (RWL) but is limited by analyst subjectivity, experience, and resources. This paper describes a robust, non-intrusive, straightforward approach for automatically extracting the spatial and temporal factors necessary for the RNLE using a single video camera in the sagittal plane. The participant's silhouette is segmented using motion information, and the novel use of a ghosting effect provides accurate detection of lifting instances and prediction of hand and foot locations. Laboratory tests using 6 participants, each performing 36 lifts, showed that a nominal 640 pixel × 480 pixel 2D video, in comparison to 3D motion capture, provided RWL estimations within 0.2 kg (SD = 1.0 kg). The linear regression between the video and 3D tracking RWL was R2 = 0.96 (slope = 1.0, intercept = 0.2 kg). Since low-definition video was used in order to synchronise with motion capture, better performance is anticipated with high-definition video. Practitioner's summary: An algorithm for automatically calculating the revised NIOSH lifting equation using a single video camera was evaluated against laboratory 3D motion capture. The results indicate that this method has suitable accuracy for practical use and may be particularly useful when multiple lifts are evaluated. Abbreviations: 2D: two-dimensional; 3D: three-dimensional; ACGIH: American Conference of Governmental Industrial Hygienists; AM: asymmetric multiplier; BOL: beginning of lift; CM: coupling multiplier; DM: distance multiplier; EOL: end of lift; FIRWL: frequency independent recommended weight limit; FM: frequency multiplier; H: horizontal distance; HM: horizontal multiplier; IMU: inertial measurement unit; ISO: International Organization for Standardization; LC: load constant; NIOSH: National Institute for Occupational Safety and Health; RGB: red, green, blue; RGB-D: red, green, blue - depth; RNLE: revised NIOSH lifting equation; RWL: recommended weight limit; SD: standard deviation; TLV: threshold limit value; VM: vertical multiplier; V: vertical distance.
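
For context, the RNLE recommended weight limit is the product of the load constant and six multipliers; a minimal metric-unit sketch follows. FM and CM normally come from published lookup tables, so they are passed in directly here, and the multiplier formulas should be verified against the NIOSH documentation before use.

```python
def rwl_kg(H, V, D, A, FM, CM):
    """H: horizontal (cm), V: vertical (cm), D: travel distance (cm), A: asymmetry (deg)."""
    LC = 23.0                        # load constant (kg)
    HM = 25.0 / H                    # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75)   # vertical multiplier
    DM = 0.82 + 4.5 / D              # distance multiplier
    AM = 1.0 - 0.0032 * A            # asymmetric multiplier
    return LC * HM * VM * DM * AM * FM * CM

# Example lift: H=30 cm, V=60 cm, D=50 cm, A=20 deg, table values FM=0.88, CM=0.95
print(round(rwl_kg(30, 60, 50, 20, 0.88, 0.95), 1))  # ~13.0 kg
```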


Subject(s)
Human Engineering/methods, Lifting, Monitoring, Physiologic/methods, Occupational Health, Videotape Recording/methods, Adult, Female, Humans, Linear Models, Male, Risk Assessment, United States
13.
Hum Factors ; 61(8): 1326-1339, 2019 12.
Article in English | MEDLINE | ID: mdl-31013463

ABSTRACT

OBJECTIVE: This study explores how common machine learning techniques can predict surgical maneuvers from a continuous video record of surgical benchtop simulations. BACKGROUND: Automatic computer vision recognition of surgical maneuvers (suturing, tying, and transition) could expedite video review and objective assessment of surgeries. METHOD: We recorded hand movements of 37 clinicians performing simple and running subcuticular suturing benchtop simulations, and applied three machine learning techniques (decision trees, random forests, and hidden Markov models) to classify surgical maneuvers every 2 s (60 frames) of video. RESULTS: Random forest predictions correctly classified 74% of all video segments into suturing, tying, and transition states for a randomly selected test set. Hidden Markov model adjustments improved the random forest predictions to 79% for simple interrupted suturing on a subset of randomly selected participants. CONCLUSION: Random forest predictions aided by hidden Markov modeling provided the best prediction of surgical maneuvers. Training models across all users improved prediction accuracy by 10% compared with training on a random selection of participants. APPLICATION: Marker-less video hand tracking can predict surgical maneuvers from a continuous video record with accuracy similar to that of robot-assisted surgical platforms, and may enable more efficient video review of surgical procedures for training and coaching.
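
A hedged sketch of the two-stage idea: a random forest emits per-window probabilities for the three maneuver classes, and a Viterbi pass with a sticky transition matrix smooths the sequence, standing in for the paper's hidden Markov model step. All data and transition probabilities are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(300, 20)), rng.integers(0, 3, 300)  # synthetic
X_test = rng.normal(size=(50, 20))

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
emis = rf.predict_proba(X_test) + 1e-9            # (T, 3) per-window probabilities

trans = np.full((3, 3), 0.05) + np.eye(3) * 0.85  # sticky: favor staying in a state
logp, back = np.log(emis[0] / 3.0), []
for e in emis[1:]:                                 # Viterbi recursion
    scores = logp[:, None] + np.log(trans)
    back.append(scores.argmax(axis=0))
    logp = scores.max(axis=0) + np.log(e)
path = [int(logp.argmax())]
for b in reversed(back):
    path.append(int(b[path[-1]]))
smoothed = path[::-1]                              # maneuver label per 2-s window
```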


Subject(s)
Hand, Image Interpretation, Computer-Assisted, Machine Learning, Motor Skills, Pattern Recognition, Automated, Surgical Procedures, Operative, Humans, Videotape Recording
14.
Hum Factors ; 61(1): 64-77, 2019 02.
Article in English | MEDLINE | ID: mdl-30091947

ABSTRACT

OBJECTIVE: A method for automatically classifying lifting postures from simple features in video recordings was developed and tested. We explored whether an "elastic" rectangular bounding box, drawn tightly around the subject, can be used to classify standing, stooping, and squatting at the lift origin and destination. BACKGROUND: Current marker-less video tracking methods depend on a priori skeletal human models, which are prone to error from poor illumination, obstructions, and difficulty placing cameras in the field. Robust computer vision algorithms based on spatiotemporal features were previously applied for evaluating repetitive motion tasks, exertion frequency, and duty cycle. METHODS: Mannequin poses were systematically generated using the Michigan 3DSSPP software for a wide range of hand locations and lifting postures. The stature-normalized height and width of a bounding box were measured in the sagittal plane and when rotated horizontally by 30°. After randomly ordering the data, a classification and regression tree algorithm was trained to classify the lifting postures. RESULTS: The resulting tree had four levels and four splits, misclassifying 0.36% of training-set cases. The algorithm was tested on 30 video clips of industrial lifting tasks, misclassifying 3.33% of test-set cases. The sensitivity and specificity, respectively, were 100.0% and 100.0% for squatting, 90.0% and 100.0% for stooping, and 100.0% and 95.0% for standing. CONCLUSIONS: The tree classification algorithm is capable of classifying lifting postures based only on the dimensions of bounding boxes. APPLICATIONS: It is anticipated that this practical algorithm can be implemented on handheld devices such as smartphones, making it readily accessible to practitioners.
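
A minimal sketch of this approach: a CART classifier over stature-normalized bounding-box dimensions from the sagittal and 30°-rotated views. The synthetic data and the height-based labeling rule are purely illustrative, not the paper's mannequin data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.uniform(0.2, 1.0, size=(200, 4))   # [h_sag, w_sag, h_30, w_30] / stature
y = np.digitize(X[:, 0], [0.55, 0.80])     # 0=squat, 1=stoop, 2=stand (synthetic rule)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print(tree.predict([[0.95, 0.35, 0.93, 0.40]]))  # tall box -> stand (class 2)
```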


Subject(s)
Lifting, Posture/physiology, Task Performance and Analysis, Algorithms, Biomechanical Phenomena, Decision Trees, Humans, Manikins, Reproducibility of Results, Videotape Recording
15.
Micromachines (Basel) ; 9(9)2018 Aug 25.
Article in English | MEDLINE | ID: mdl-30424364

ABSTRACT

The quality and extent of intra-abdominal visualization are critical to a laparoscopic procedure. Currently, a single laparoscope is inserted into one of the laparoscopic ports to provide intra-abdominal visualization. The extent of this field of view (FoV) is rather restricted and may limit the efficiency and range of operations. Here we report a trocar-camera assembly (TCA) that promises a large FoV and improved efficiency and range of operations. A video stitching program processes video data from multiple miniature cameras and combines the videos in real time. The stitched video is displayed on an operating monitor with a much larger FoV than that of a single camera. Using the TCA and taking advantage of its FoV, which is larger than that of current laparoscopic cameras, we successfully performed a standard and a modified bean drop task in a simulator box without any distortion, demonstrating improved efficiency and range of operations. The TCA frees up a surgical port and potentially eliminates the need for an assistant to physically maneuver the laparoscopic camera.

16.
Sensors (Basel) ; 18(7)2018 Jul 14.
Article in English | MEDLINE | ID: mdl-30011930

ABSTRACT

An optimal camera placement problem is investigated. The objective is to maximize the field of view (FoV) area of a stitched video obtained by combining video streams from an array of cameras whose positions and poses are restricted to a given set of selections. The camera array is designed to be placed inside the abdomen to support minimally invasive laparoscopic surgery, which imposes a few non-traditional requirements: adjacent views must overlap to support image registration for seamless video stitching, and the resulting effective FoV should be a contiguous, hole-free region forming a convex polygon. With these requirements, traditional camera placement algorithms cannot be applied directly. In this work, we show that the complexity of this problem grows exponentially with the problem size, and we present a greedy polynomial-time heuristic that approximates the globally optimal solution well. We present a new approach to directly evaluate the combined coverage area (area of FoV) as the union of a set of quadrilaterals, and we propose a graph-based approach to ensure the stitching requirement (overlap between adjacent views) is satisfied. We also present a method to find a maximum-area convex polygon within a given polygon. Several design examples show that the proposed algorithm achieves a larger FoV area while using much less computing time.
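
The coverage-evaluation step can be sketched with shapely: the combined FoV is the union of per-camera quadrilaterals, the pairwise intersection checks the stitching overlap, and interior rings reveal holes. The footprint coordinates are hypothetical.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

views = [
    Polygon([(0, 0), (4, 0), (3.5, 3), (0.5, 3)]),  # camera 1 ground footprint
    Polygon([(3, 0), (7, 0), (6.5, 3), (3.5, 3)]),  # camera 2 ground footprint
]

union = unary_union(views)                           # combined FoV
overlap = views[0].intersection(views[1]).area       # stitching requirement
print(f"coverage {union.area:.2f}, overlap {overlap:.2f}, holes {len(union.interiors)}")
```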

17.
Ergonomics ; 60(12): 1730-1738, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28640656

ABSTRACT

Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC), and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and computer vision was -5.8% for the Decision Tree (DT) algorithm and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence of DC deviations on HAL, found that HAL remained unaffected when the DC error was less than 5%, and that a DC error under 10% changes HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle, and hand activity level from videos of workers performing industrial tasks.


Subject(s)
Algorithms, Hand/physiology, Physical Exertion, Ergonomics, Computers, Humans, Videotape Recording
18.
Appl Ergon ; 65: 461-472, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28284701

ABSTRACT

Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and the techniques for identifying them are underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used to evaluate repetitive motion tasks for hand activity level (HAL) using conventional 2D videos. The approach was made practical by relaxing the need for high precision and by adopting a semi-automatic approach to measuring the spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors using this computer vision approach is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track, and the hand location and its associated kinematics are measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed, and duty cycle components of tasks that are part of the threshold limit value for hand activity, for the purpose of identifying patterns of exposure associated with specific job factors and suggesting task improvements. The localized variables are plotted as a heat map superimposed over the video and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors contribute most to HAL and readily identify the work elements in the task that contribute most to increased injury risk. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements.
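
A small sketch of the visualization idea: per-pixel exposure intensity (for example, local hand speed) rendered as a translucent heat map over a video frame with matplotlib. The frame and intensity grid are synthetic placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

frame = np.random.rand(240, 320)                 # stand-in for a grayscale video frame
heat = np.zeros_like(frame)
heat[100:160, 140:220] = np.linspace(0, 1, 80)   # hot region around the tracked hand

plt.imshow(frame, cmap="gray")
plt.imshow(heat, cmap="jet", alpha=0.4)          # translucent exposure overlay
plt.axis("off")
plt.show()
```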


Subject(s)
Human Engineering/methods, Task Performance and Analysis, Videotape Recording/methods, Biomechanical Phenomena, Cumulative Trauma Disorders/etiology, Hand/physiology, Humans, Image Processing, Computer-Assisted/methods, Motion, Movement/physiology, Occupational Diseases/etiology
19.
Appl Opt ; 55(29): 8316-8334, 2016 Oct 10.
Article in English | MEDLINE | ID: mdl-27828081

ABSTRACT

Atmospheric lidar observations provide a unique capability to directly observe the vertical column of cloud and aerosol scattering properties. Detector and solar-background noise, however, hinder the ability of lidar systems to provide reliable backscatter and extinction cross-section estimates. Standard methods for solving this inverse problem are most effective with high signal-to-noise ratio observations that are only available at low resolution in uniform scenes. This paper describes a novel method for solving the inverse problem with high-resolution, lower signal-to-noise ratio observations that remains effective in non-uniform scenes. The novelty is twofold. First, the backscatter and extinction are inferred from images, whereas current lidar algorithms use only the information content of single profiles; the latent spatial and temporal information in noisy images is thus exploited to infer the cross-sections. Second, the noise associated with photon-counting lidar observations can be modeled using a Poisson distribution, so state-of-the-art tools for solving Poisson inverse problems are adapted to the atmospheric lidar problem. Photon-counting high spectral resolution lidar (HSRL) simulations demonstrate that the proposed algorithm yields inverted backscatter and extinction cross-sections (per unit volume) with smaller mean squared error at higher spatial and temporal resolutions than the standard approach. Two case studies on real experimental data are also provided, in which the proposed algorithm is applied to HSRL observations and the inverted backscatter and extinction cross-sections are compared against the standard approach.

20.
Sensors (Basel) ; 16(5)2016 May 23.
Article in English | MEDLINE | ID: mdl-27223287

ABSTRACT

A compressive sensing joint sparse representation direction-of-arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly sampled acoustic sensor data. Since random sampling is performed at the remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at the sensor nodes can be avoided. To investigate spatial sparsity, an upper bound on the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources that ensures the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator, which quantifies the theoretical DoA estimation performance, is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.
