1.
IEEE Sens J ; 24(5): 6888-6897, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38476583

ABSTRACT

We developed an ankle-worn gait monitoring system for tracking gait parameters, including length, width, and height. The system uses ankle bracelets equipped with wide-angle infrared (IR) stereo cameras that monitor a marker on the opposing ankle. A computer vision algorithm we also developed processes the imaged marker positions to estimate the length, width, and height of the person's gait. In tests on multiple participants, the prototype achieved average accuracies of 96.52%, 94.46%, and 95.29% for gait length, width, and height measurements, respectively, despite the distortion of the wide-angle images. The OptiGait system offers a cost-effective and user-friendly alternative to existing gait parameter sensing systems, delivering comparable accuracy in measuring gait length and width. Notably, it also measures gait height, a capability not previously reported in the literature.
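For illustration, a minimal sketch of the rectified-stereo triangulation such a system could use to localize the ankle marker (all camera parameters and pixel coordinates below are hypothetical, and real wide-angle frames would first need undistortion):

```python
import numpy as np

def triangulate_marker(u_left, u_right, v, f_px, baseline_m, cx, cy):
    """Recover a marker's 3D position from a rectified stereo pair.

    u_left/u_right: marker column (pixels) in the left/right image,
    v: marker row (pixels, same in both images after rectification),
    f_px: focal length in pixels, baseline_m: camera separation in meters,
    (cx, cy): principal point in pixels.
    """
    disparity = u_left - u_right            # pixels; > 0 for a point in front
    z = f_px * baseline_m / disparity       # depth along the optical axis
    x = (u_left - cx) * z / f_px            # lateral offset
    y = (v - cy) * z / f_px                 # vertical offset
    return np.array([x, y, z])

# Example: marker seen at column 412 (left) and 398 (right), row 240.
p = triangulate_marker(412.0, 398.0, 240.0, f_px=350.0, baseline_m=0.04,
                       cx=320.0, cy=240.0)
print(p)  # gait length/width/height follow from differences of such points
```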

2.
Article in English | MEDLINE | ID: mdl-37719135

ABSTRACT

A novel online real-time video stabilization algorithm (LSstab) that suppresses unwanted motion jitter based on cinematography principles is presented. LSstab features a parallel realization of the a-contrario RANSAC (AC-RANSAC) algorithm to estimate the inter-frame camera motion parameters. A novel least-squares smoothing cost function is then proposed to mitigate undesirable camera jitter according to cinematography principles, and a recursive least-squares solver is derived to minimize this cost function with linear computational complexity. LSstab is evaluated on a suite of publicly available videos against state-of-the-art video stabilization methods. Results show that LSstab achieves comparable or better performance and attains real-time processing speed when a GPU is used.
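As a hedged illustration of the least-squares smoothing idea (not LSstab's actual recursive solver or its cinematography-aware cost), a batch version for a single motion parameter reduces to one linear solve:

```python
import numpy as np

def smooth_path(p, lam=50.0):
    """Least-squares smoothing of a 1D camera-path parameter.

    Minimizes sum (c_i - p_i)^2 + lam * sum (c_{i+1} - c_i)^2,
    a convex cost whose minimizer solves (I + lam * D^T D) c = p.
    """
    n = len(p)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n first-difference matrix
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, p)

raw = np.cumsum(np.random.randn(200))     # jittery cumulative motion parameter
smooth = smooth_path(raw)                 # smoothed trajectory
# warping each frame by (smooth - raw) would cancel the estimated jitter
```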

3.
Ergonomics ; 66(8): 1132-1141, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36227226

ABSTRACT

Observer, manual single-frame video, and automated computer vision measures of the Hand Activity Level (HAL) were compared. HAL can be measured three ways: (1) observer rating (HALO), (2) calculated from single-frame multimedia video task analysis measuring frequency (F) and duty cycle (D) (HALF), or (3) from automated computer vision (HALC). This study analysed videos collected from three prospective cohort studies to ascertain HALO, HALF, and HALC for 419 industrial videos. Although the differences among the three methods were relatively small on average (<1), they were statistically significant (p < .001). Agreement between the HALC and HALF ratings within ±1 point on the HAL scale was the most consistent: more than two thirds (68%) of all cases fell within that range, and a linear regression through the mean yielded a coefficient of 1.03 (R2 = 0.89). The results suggest that the computer vision methodology yields results comparable to single-frame video analysis. Practitioner summary: The ACGIH Hand Activity Level (HAL) was obtained for 419 industrial tasks using three methods: observation, calculation from single-frame video analysis, and computer vision. The computer vision methodology produced results comparable to single-frame video analysis.
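For reference, HALF-style ratings can be computed from F and D with the frequency-duty cycle HAL equation reported by Radwin and colleagues (Ergonomics, 2015); the sketch below assumes F in exertions/s and D in percent:

```python
import math

def hal_from_f_and_d(freq_hz, duty_pct):
    """Hand Activity Level from exertion frequency F (exertions/s) and
    duty cycle D (%), using the frequency-duty cycle HAL equation
    (Radwin et al., Ergonomics 2015); clipped to the 0-10 HAL scale."""
    g = freq_hz ** 1.31
    hal = 6.56 * math.log(duty_pct) * g / (1.0 + 3.18 * g)
    return max(0.0, min(10.0, hal))

print(hal_from_f_and_d(0.5, 50.0))  # ~4.5 for one exertion every 2 s at 50% duty
```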


Subject(s)
Hand, Task Performance and Analysis, Humans, Prospective Studies, Upper Extremity, Computers, Video Recording/methods
4.
Article in English | MEDLINE | ID: mdl-36367914

ABSTRACT

Spasticity is a common complication for patients with stroke, but few studies have investigated the relation between spasticity and voluntary movement. This study proposes a novel automatic system for assessing the severity of spasticity (SS) of four upper-limb joints (elbow, wrist, thumb, and fingers) through voluntary movements. A wearable system combining 19 inertial measurement units and a pressure ball collects kinematic and force information while participants perform four tasks: cone stacking (CS), fast flexion and extension (FFE), slow ball squeezing (SBS), and fast ball squeezing (FBS). Several time- and frequency-domain features were extracted from the collected data, and two feature selection approaches based on recursive feature elimination were adopted to select the most influential features. The selected features were input into five machine learning techniques for assessing the SS of each joint. The results indicated that using the CS task to assess the SS of the elbow and fingers, and the FBS task to assess the SS of the thumb and wrist, achieves the highest weighted-average F1-score. The study also concluded that FBS is the optimal single task for assessing all four upper-limb joints. Overall, the results show that the proposed automatic system can accurately assess four upper-limb joints through voluntary movements, an advance toward establishing the relation between spasticity and voluntary movement.
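A minimal sketch of this selection-plus-classification pipeline using scikit-learn (synthetic stand-in data; the study's actual features, labels, and five classifiers are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 60))    # stand-in for time/frequency-domain features
y = rng.integers(0, 3, size=120)  # stand-in for severity classes

selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=10)   # keep the 10 most influential features
X_sel = selector.fit_transform(X, y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X_sel, y, cv=5, scoring="f1_weighted")
print(scores.mean())  # weighted-average F1, as in the study's evaluation
```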

5.
Sensors (Basel) ; 22(19)2022 Sep 23.
Article in English | MEDLINE | ID: mdl-36236314

ABSTRACT

A novel wearable multi-sensor data glove system is developed to explore the relation between finger spasticity and voluntary movement in patients with stroke. Many stroke patients suffer from finger spasticity, which is detrimental to their manual dexterity. Diagnosing and assessing the degree of spasticity requires neurological testing performed by trained professionals to estimate finger spasticity scores on the modified Ashworth scale (MAS). The proposed system offers an objective, quantitative solution to assess the finger spasticity of patients with stroke and complements the manual neurological test. In this work, the hardware and software components of this system are described. As patients perform five designated tasks, biomechanical measurements including linear and angular speed, acceleration, and pressure at every finger joint and the upper limb are recorded, yielding more than 1000 features per task. We conducted a preliminary clinical test with 14 subjects using this system. Statistical analysis was performed on the acquired measurements to identify a small subset of features that are most likely to discriminate healthy subjects from patients suffering from finger spasticity. This encouraging result validates the feasibility of the proposed system for quantitatively and objectively assessing finger spasticity.
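One plausible form of such a screening step, sketched with synthetic data (the study's actual features and statistics are not reproduced): a per-feature Welch t-test ranks the >1000 glove features by how well they separate the two groups.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(7, 1000))   # 7 subjects x 1000 features
spastic = rng.normal(0.3, 1.0, size=(7, 1000))   # placeholder glove features

# Welch's t-test per feature; keep the features that separate the groups best.
t, p = ttest_ind(healthy, spastic, axis=0, equal_var=False)
top = np.argsort(p)[:20]                         # 20 most discriminative features
print(list(zip(top[:5], p[top[:5]].round(4))))
```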


Subject(s)
Stroke Rehabilitation, Stroke, Fingers, Humans, Muscle Spasticity/diagnosis, Stroke/diagnosis, Upper Extremity
6.
J Signal Process Syst ; 94(3): 329-343, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35663585

ABSTRACT

A real-time 3D visualization (RT3DV) system using a multiview RGB camera array is presented. RT3DV processes multiple synchronized video streams to produce a stereo video of a dynamic scene from a chosen view angle. Its design objective is to facilitate 3D visualization at the video frame rate with good viewing quality. To support 3D vision, RT3DV estimates and updates a surface mesh model formed directly from a set of sparse key points. The 3D coordinates of these key points are estimated by matching 2D key points across the multiview video streams with the aid of epipolar geometry and the trifocal tensor. To capture scene dynamics, 2D key points in individual video streams are tracked between successive frames. We implemented a proof-of-concept RT3DV system that processes five synchronous video streams acquired by an RGB camera array. It achieves a processing speed of 44 milliseconds per frame and a peak signal-to-noise ratio (PSNR) of 15.9 dB from a viewpoint coinciding with a reference view. By comparison, an image-based multiview stereo (MVS) algorithm using a dense point cloud model and frame-by-frame feature detection and matching requires 7 seconds to render a frame, yielding a reference-view PSNR of 16.3 dB.
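A hedged sketch of the two-view triangulation step using OpenCV (toy calibration and matches; the actual system also exploits the trifocal tensor across five views):

```python
import cv2
import numpy as np

# Toy calibration for two views of a camera array (values are illustrative).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])          # reference view
R = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))[0]            # second view, rotated
t = np.array([[-0.2], [0.0], [0.0]])                       # and shifted 20 cm
P2 = K @ np.hstack([R, t])

# Matched 2D key points across the two streams (2xN: row 0 = x, row 1 = y).
pts1 = np.array([[310.0, 420.0], [200.0, 260.0]], dtype=np.float32)
pts2 = np.array([[350.0, 465.0], [205.0, 262.0]], dtype=np.float32)

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4xN homogeneous points
X = (X_h[:3] / X_h[3]).T                          # Nx3 Euclidean key points
print(X)  # sparse 3D key points that would seed the RT3DV surface mesh
```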

7.
Hum Factors ; 64(3): 482-498, 2022 05.
Article in English | MEDLINE | ID: mdl-32972247

ABSTRACT

OBJECTIVE: A computer vision method was developed for estimating the trunk flexion angle, angular speed, and angular acceleration by extracting simple features from the moving image during lifting. BACKGROUND: Trunk kinematics is an important risk factor for lower back pain, but is often difficult to measure by practitioners for lifting risk assessments. METHODS: Mannequins representing a wide range of hand locations for different lifting postures were systematically generated using the University of Michigan 3DSSPP software. A bounding box was drawn tightly around each mannequin and regression models estimated trunk angles. The estimates were validated against human posture data for 216 lifts collected using a laboratory-grade motion capture system and synchronized video recordings. Trunk kinematics, based on bounding box dimensions drawn around the subjects in the video recordings of the lifts, were modeled for consecutive video frames. RESULTS: The mean absolute difference between predicted and motion capture measured trunk angles was 14.7°, and there was a significant linear relationship between predicted and measured trunk angles (R2 = .80, p < .001). The training error for the kinematics model was 2.3°. CONCLUSION: Using simple computer vision-extracted features, the bounding box method indirectly estimated trunk angle and associated kinematics, albeit with limited precision. APPLICATION: This computer vision method may be implemented on handheld devices such as smartphones to facilitate automatic lifting risk assessments in the workplace.
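A minimal sketch of the bounding-box regression idea on synthetic data (the paper's models were trained on 3DSSPP mannequins; the feature set and coefficients here are illustrative only):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative training data: stature-normalized bounding-box height and
# width for synthetic lifting postures with known trunk flexion angles.
rng = np.random.default_rng(1)
angle = rng.uniform(0, 90, size=500)                     # trunk flexion (deg)
height = np.cos(np.radians(angle)) * 0.55 + 0.45 + rng.normal(0, 0.02, 500)
width = np.sin(np.radians(angle)) * 0.35 + 0.25 + rng.normal(0, 0.02, 500)

X = np.column_stack([height, width, width / height])     # simple box features
model = LinearRegression().fit(X, angle)

box = np.array([[0.78, 0.42, 0.54]])                     # one video frame's box
print(model.predict(box))   # estimated trunk angle; differencing consecutive
                            # frames gives angular speed and acceleration
```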


Subject(s)
Lifting, Torso, Biomechanical Phenomena, Computers, Humans, Posture
8.
IEEE Trans Hum Mach Syst ; 51(6): 734-739, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35677387

ABSTRACT

A robust computer vision-based approach is developed to estimate the load asymmetry angle defined in the revised NIOSH lifting equation (RNLE). The angle of asymmetry enables the computation of a recommended weight limit for repetitive lifting operations in a workplace to prevent lower back injuries. An open-source package OpenPose is applied to estimate the 2D locations of skeletal joints of the worker from two synchronous videos. Combining these joint location estimates, a computer vision correspondence and depth estimation method is developed to estimate the 3D coordinates of skeletal joints during lifting. The angle of asymmetry is then deduced from a subset of these 3D positions. Error analysis reveals unreliable angle estimates due to occlusions of upper limbs. A robust angle estimation method that mitigates this challenge is developed. We propose a method to flag unreliable angle estimates based on the average confidence level of 2D joint estimates provided by OpenPose. An optimal threshold is derived that balances the percentage variance reduction of the estimation error and the percentage of angle estimates flagged. Tested with 360 lifting instances in a NIOSH-provided dataset, the standard deviation of angle estimation error is reduced from 10.13° to 4.99°. To realize this error variance reduction, 34% of estimated angles are flagged and require further validation.
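A hedged sketch of how an asymmetry angle can be computed from triangulated 3D joints (a simplified geometric definition with made-up coordinates, not the paper's exact procedure; in practice the sign of the forward direction must also be resolved):

```python
import numpy as np

def asymmetry_angle(l_ankle, r_ankle, l_hand, r_hand):
    """Approximate the RNLE asymmetry angle (degrees) from 3D joints.

    Projects onto the horizontal (x-y) plane, takes the body's forward
    direction as the perpendicular to the ankle-to-ankle line, and measures
    the angle to the line from the mid-ankle point to the hands' midpoint.
    """
    mid_ankle = (np.asarray(l_ankle) + np.asarray(r_ankle)) / 2.0
    mid_hand = (np.asarray(l_hand) + np.asarray(r_hand)) / 2.0
    lateral = (np.asarray(r_ankle) - np.asarray(l_ankle))[:2]   # x-y only
    forward = np.array([-lateral[1], lateral[0]])               # 90 deg rotation
    to_load = (mid_hand - mid_ankle)[:2]
    cosang = forward @ to_load / (np.linalg.norm(forward) * np.linalg.norm(to_load))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Joints as (x, y, z), z up, e.g. from the two-view OpenPose triangulation.
print(asymmetry_angle((-0.15, 0, 0), (0.15, 0, 0),
                      (0.2, 0.35, 1.0), (0.3, 0.3, 1.0)))  # ~37.6 deg
```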

9.
Micromachines (Basel) ; 11(5)2020 May 10.
Article in English | MEDLINE | ID: mdl-32397580

ABSTRACT

Existing laparoscopic surgery systems use a single laparoscope to visualize the surgical area with a limited field of view (FoV), requiring the surgeon to maneuver the laparoscope to search a target region. In some cases, the laparoscope must be moved from one surgical port to another to view target organs. These maneuvers lengthen surgical time and degrade operating efficiency. We hypothesize that if an array of cameras is deployed to provide a stitched video with an expanded FoV and small blind spots, the time required to perform multiple tasks at different sites can be significantly reduced. We developed a micro-camera array that enlarges the FoV and reduces blind spots between the cameras by optimizing camera angles. The video stream of this micro-camera array is processed in real time to provide a stitched video with the expanded FoV. We mounted this micro-camera array on a Fundamentals of Laparoscopic Surgery (FLS) trainer box and designed an experiment to validate the hypothesis above. Surgeons, residents, and a medical student were recruited to perform a modified bean drop task, and the completion time was compared against that measured using a traditional single-camera laparoscope. With the micro-camera array, the completion time of the modified bean drop task was 203 ± 55 s, versus 245 ± 114 s with the laparoscope (p = 0.00097). The benefit of the FoV-expanded camera array did not diminish for more experienced subjects. This test provides convincing evidence for the hypothesis that an expanded FoV with small blind spots reduces the operation time for laparoscopic surgical tasks.
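For illustration, frame-level stitching can be prototyped with OpenCV's general-purpose stitcher (file names are placeholders; the paper's system uses a custom real-time pipeline rather than this offline API):

```python
import cv2

# Frames captured simultaneously by the cameras in the array (placeholder
# file names); in the real system this runs per frame on live streams.
frames = [cv2.imread(f) for f in ("cam0.png", "cam1.png", "cam2.png")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("stitched_fov.png", panorama)   # expanded-FoV view for display
else:
    print("stitching failed:", status)          # e.g. too little overlap
```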

10.
Sensors (Basel) ; 19(11)2019 Jun 07.
Article in English | MEDLINE | ID: mdl-31181614

ABSTRACT

In this paper, we propose a novel multi-view image denoising algorithm based on a convolutional neural network (MVCNN). Multi-view images are arranged into 3D focus image stacks (3DFIS) according to different disparities. The MVCNN is trained to process each 3DFIS and generate a denoised image stack containing the recovered image information for regions of particular disparities. The denoised image stacks are then fused to produce a denoised target-view image using the estimated disparity map. Unlike conventional multi-view denoising approaches that first group similar patches and then denoise those patches, our CNN-based algorithm avoids exhaustive patch searching and greatly reduces computation time. The proposed MVCNN also uses residual learning and batch normalization strategies to enhance denoising performance and accelerate training. Experiments show that, compared with state-of-the-art single-image and multi-view denoising algorithms, the proposed CNN-based algorithm is a highly effective and efficient method for Gaussian denoising of multi-view images.
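A simplified single-stack analogue of such a residual denoiser in PyTorch (layer counts and sizes are illustrative; the actual MVCNN processes 3DFIS inputs and fuses stacks with a disparity map):

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """DnCNN-style denoiser: predicts the noise residual and subtracts it,
    with batch normalization, mirroring the residual-learning and batch-norm
    strategies the paper adopts (single-stack version for illustration)."""
    def __init__(self, channels=3, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)   # residual learning: estimate noise

net = ResidualDenoiser()
noisy_stack = torch.randn(1, 3, 64, 64)   # one slice of a 3D focus image stack
print(net(noisy_stack).shape)             # torch.Size([1, 3, 64, 64])
```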

11.
Ergonomics ; 62(8): 1043-1054, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31092146

ABSTRACT

A widely used risk prediction tool, the revised NIOSH lifting equation (RNLE), provides the recommended weight limit (RWL), but is limited by analyst subjectivity, experience, and resources. This paper describes a robust, non-intrusive, straightforward approach for automatically extracting the spatial and temporal factors necessary for the RNLE using a single video camera in the sagittal plane. The participant's silhouette is segmented using motion information, and the novel use of a ghosting effect provides accurate detection of lifting instances and prediction of hand and foot locations. Laboratory tests with 6 participants, each performing 36 lifts, showed that nominal 640 pixel × 480 pixel 2D video, compared with 3D motion capture, provided RWL estimates within 0.2 kg (SD = 1.0 kg). The linear regression between the video and 3D tracking RWL was R2 = 0.96 (slope = 1.0, intercept = 0.2 kg). Since low-definition video was used in order to synchronise with motion capture, better performance is anticipated with high-definition video. Practitioner's summary: An algorithm for automatically calculating the revised NIOSH lifting equation using a single video camera was evaluated against laboratory 3D motion capture. The results indicate that this method has suitable accuracy for practical use and may be particularly useful when multiple lifts are evaluated. Abbreviations: 2D: two-dimensional; 3D: three-dimensional; ACGIH: American Conference of Governmental Industrial Hygienists; AM: asymmetric multiplier; BOL: beginning of lift; CM: coupling multiplier; DM: distance multiplier; EOL: end of lift; FIRWL: frequency independent recommended weight limit; FM: frequency multiplier; H: horizontal distance; HM: horizontal multiplier; IMU: inertial measurement unit; ISO: International Organization for Standardization; LC: load constant; NIOSH: National Institute for Occupational Safety and Health; RGB: red, green, blue; RGB-D: red, green, blue - depth; RNLE: revised NIOSH lifting equation; RWL: recommended weight limit; SD: standard deviation; TLV: threshold limit value; VM: vertical multiplier; V: vertical distance.
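For reference, the RNLE multipliers named in the abbreviation list combine multiplicatively into the RWL; a metric-form sketch (FM and CM come from the NIOSH frequency and coupling tables and are passed in directly here):

```python
def rwl_kg(H, V, D, A, FM, CM):
    """Revised NIOSH lifting equation RWL (kg), metric form.

    H: horizontal distance (cm), V: vertical height (cm), D: vertical
    travel (cm), A: asymmetry angle (deg); FM and CM are table lookups.
    """
    LC = 23.0                                   # load constant, kg
    HM = 25.0 / max(H, 25.0)                    # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75.0)            # vertical multiplier
    DM = 0.82 + 4.5 / max(D, 25.0)              # distance multiplier
    AM = 1.0 - 0.0032 * A                       # asymmetric multiplier
    return LC * HM * VM * DM * AM * FM * CM

# A lift with hands 30 cm out, origin 40 cm high, 50 cm of travel, 20 deg
# of twisting, and table-derived FM/CM values:
print(round(rwl_kg(30, 40, 50, 20, FM=0.88, CM=0.95), 1))  # ~12.2 kg
```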


Subject(s)
Ergonomics/methods, Lifting, Physiological Monitoring/methods, Occupational Health, Video Recording/methods, Adult, Female, Humans, Linear Models, Male, National Institute for Occupational Safety and Health, U.S., Risk Assessment, United States
12.
Hum Factors ; 61(8): 1326-1339, 2019 12.
Article in English | MEDLINE | ID: mdl-31013463

ABSTRACT

OBJECTIVE: This study explores how common machine learning techniques can predict surgical maneuvers from a continuous video record of surgical benchtop simulations. BACKGROUND: Automatic computer vision recognition of surgical maneuvers (suturing, tying, and transition) could expedite video review and objective assessment of surgeries. METHOD: We recorded hand movements of 37 clinicians performing simple and running subcuticular suturing benchtop simulations, and applied three machine learning techniques (decision trees, random forests, and hidden Markov models) to classify surgical maneuvers every 2 s (60 frames) of video. RESULTS: Random forest predictions of surgical video correctly classified 74% of all video segments into suturing, tying, and transition states for a randomly selected test set. Hidden Markov model adjustments improved the random forest predictions to 79% for simple interrupted suturing on a subset of randomly selected participants. CONCLUSION: Random forest predictions aided by hidden Markov modeling provided the best prediction of surgical maneuvers. Training models across all users improved prediction accuracy by 10% compared with a random selection of participants. APPLICATION: Marker-less video hand tracking can predict surgical maneuvers from a continuous video record with accuracy similar to that of robot-assisted surgical platforms, and may enable more efficient video review of surgical procedures for training and coaching.
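A hedged sketch of the random-forest-plus-HMM idea with synthetic features: the forest supplies per-segment class probabilities and a sticky transition matrix is decoded with Viterbi (the study's actual features, labels, and HMM parameters are not reproduced):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

STATES = ["suturing", "tying", "transition"]

def viterbi_smooth(log_emission, log_trans):
    """Most likely state path given per-segment log P(state | features)
    and an HMM-style log transition matrix."""
    T, S = log_emission.shape
    dp = log_emission[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = dp[:, None] + log_trans            # S x S predecessor scores
        back[t] = scores.argmax(axis=0)
        dp = scores.max(axis=0) + log_emission[t]
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                      # hand-motion features / 2 s
y = rng.integers(0, 3, size=300)                    # placeholder labels 0..2
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

probs = np.clip(rf.predict_proba(X), 1e-9, None)    # columns follow rf.classes_
trans = np.full((3, 3), 0.05) + 0.85 * np.eye(3)    # sticky: maneuvers persist
smoothed = viterbi_smooth(np.log(probs), np.log(trans))
print([STATES[s] for s in smoothed[:10]])
```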


Subject(s)
Hand, Image Interpretation, Computer-Assisted, Machine Learning, Motor Skills, Pattern Recognition, Automated, Surgical Procedures, Operative, Humans, Video Recording
13.
Hum Factors ; 61(1): 64-77, 2019 02.
Article in English | MEDLINE | ID: mdl-30091947

ABSTRACT

OBJECTIVE: A method for automatically classifying lifting postures from simple features in video recordings was developed and tested. We explored whether an "elastic" rectangular bounding box, drawn tightly around the subject, can be used for classifying standing, stooping, and squatting at the lift origin and destination. BACKGROUND: Current marker-less video tracking methods depend on a priori skeletal human models, which are prone to error from poor illumination, obstructions, and difficulty placing cameras in the field. Robust computer vision algorithms based on spatiotemporal features were previously applied for evaluating repetitive motion tasks, exertion frequency, and duty cycle. METHODS: Mannequin poses were systematically generated using the Michigan 3DSSPP software for a wide range of hand locations and lifting postures. The stature-normalized height and width of a bounding box were measured in the sagittal plane and when rotated horizontally by 30°. After randomly ordering the data, a classification and regression tree algorithm was trained to classify the lifting postures. RESULTS: The resulting tree had four levels and four splits, misclassifying 0.36% of training-set cases. The algorithm was tested using 30 video clips of industrial lifting tasks, misclassifying 3.33% of test-set cases. The sensitivity and specificity, respectively, were 100.0% and 100.0% for squatting, 90.0% and 100.0% for stooping, and 100.0% and 95.0% for standing. CONCLUSIONS: The tree classification algorithm is capable of classifying lifting postures based only on dimensions of bounding boxes. APPLICATIONS: It is anticipated that this practical algorithm can be implemented on handheld devices such as a smartphone, making it readily accessible to practitioners.
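A minimal scikit-learn sketch of the same idea on synthetic stand-in data (the paper trained on 3DSSPP mannequin poses; features and labels here are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Stature-normalized bounding-box height and width in the sagittal plane and
# when rotated 30 degrees: illustrative stand-ins for the mannequin data.
X = rng.uniform(0.3, 1.1, size=(600, 4))
y = rng.choice(["stand", "stoop", "squat"], size=600)   # placeholder labels

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)    # shallow, like the paper's
print(export_text(tree, feature_names=["h_sag", "w_sag", "h_30", "w_30"]))
```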


Subject(s)
Lifting, Posture/physiology, Task Performance and Analysis, Algorithms, Biomechanical Phenomena, Decision Trees, Humans, Manikins, Reproducibility of Results, Video Recording
14.
Micromachines (Basel) ; 9(9)2018 Aug 25.
Article in English | MEDLINE | ID: mdl-30424364

ABSTRACT

The quality and extent of intra-abdominal visualization are critical to a laparoscopic procedure. Currently, a single laparoscope is inserted into one of the laparoscopic ports to provide intra-abdominal visualization. The extent of this field of view (FoV) is rather restricted and may limit efficiency and the range of operations. Here we report a trocar-camera assembly (TCA) that promises a large FoV and improved efficiency and range of operations. A video stitching program processes video data from multiple miniature cameras and combines the videos in real time. The stitched video is displayed on an operating monitor with a much larger FoV than that of a single camera. Using the TCA and taking advantage of its FoV, which is larger than that of current laparoscopic cameras, we successfully performed a standard and a modified bean drop task in a simulator box without any distortion, demonstrating improved efficiency and range of operations. The TCA frees up a surgical port and potentially eliminates the need for physical maneuvering of the laparoscopic camera by an assistant.

15.
Sensors (Basel) ; 18(7)2018 Jul 14.
Article in English | MEDLINE | ID: mdl-30011930

ABSTRACT

An optimal camera placement problem is investigated. The objective is to maximize the field of view (FoV) area of a stitched video obtained by stitching video streams from an array of cameras whose positions and poses are restricted to a given set of selections. The camera array is designed to be placed inside the abdomen to support minimally invasive laparoscopic surgery, which imposes a few non-traditional requirements/constraints: adjacent views must overlap to support image registration for seamless video stitching, and the resulting effective FoV must be a contiguous region without any holes and form a convex polygon. With these requirements, traditional camera placement algorithms cannot be directly applied. In this work, we show that the complexity of this problem grows exponentially with the problem size, and we present a greedy polynomial-time heuristic that closely approximates the globally optimal solution. We present a new approach to directly evaluate the combined coverage area (FoV area) as the union of a set of quadrilaterals, propose a graph-based approach to ensure the stitching requirement (overlap between adjacent views) is satisfied, and present a method to find a maximum-area convex polygon within a given polygon. Several design examples show that the proposed algorithm achieves a larger FoV area while using much less computing time.
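A hedged sketch of the greedy heuristic using shapely to evaluate the union-of-quadrilaterals FoV area (toy footprints and threshold; the paper's version also enforces the contiguity and convexity constraints):

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

# Candidate camera placements, each contributing a quadrilateral FoV
# footprint (illustrative coordinates; real ones come from pose and optics).
candidates = {
    "c0": Polygon([(0, 0), (4, 0), (3, 3), (1, 3)]),
    "c1": Polygon([(2, 0), (6, 0), (5, 3), (3, 3)]),
    "c2": Polygon([(4, 0), (8, 0), (7, 3), (5, 3)]),
    "c3": Polygon([(1, 2), (5, 2), (4, 5), (2, 5)]),
}

def greedy_place(candidates, k, min_overlap=0.1):
    """Greedily add the camera that most increases the union FoV area while
    overlapping the current mosaic enough to allow image registration."""
    chosen, mosaic = [], None
    for _ in range(k):
        best = None
        for name, quad in candidates.items():
            if name in chosen:
                continue
            if mosaic is not None and mosaic.intersection(quad).area < min_overlap:
                continue                     # stitching constraint violated
            gain = (quad.area if mosaic is None
                    else unary_union([mosaic, quad]).area - mosaic.area)
            if best is None or gain > best[1]:
                best = (name, gain)
        if best is None:
            break
        chosen.append(best[0])
        mosaic = unary_union([candidates[n] for n in chosen])
    return chosen, mosaic.area

print(greedy_place(candidates, k=3))
```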

16.
Ergonomics ; 60(12): 1730-1738, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28640656

ABSTRACT

Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC), and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and computer vision was -5.8% for the Decision Tree (DT) algorithm and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence of DC deviations on HAL, found that HAL remained unaffected when the DC error was less than 5%, and that a DC error of less than 10% changes HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
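For illustration, exertion time and DC can be derived from a tracked hand-speed trace by thresholding (synthetic signal; the threshold value and the papers' DT/FVT algorithms are not reproduced); HAL then follows from F and D as in the equation shown for entry 3:

```python
import numpy as np

def duty_cycle_and_freq(speed, fps, rest_thresh):
    """Duty cycle (%) and exertion frequency (Hz) from a per-frame hand-speed
    trace: frames above the rest threshold count as exertion time, and each
    rest-to-exertion transition counts as one exertion."""
    active = speed > rest_thresh
    duty = 100.0 * active.mean()
    onsets = np.count_nonzero(active[1:] & ~active[:-1])
    freq = onsets * fps / len(speed)
    return duty, freq

fps = 30
t = np.arange(0, 10, 1 / fps)
speed = np.abs(np.sin(2 * np.pi * 0.5 * t))     # synthetic repetitive motion
print(duty_cycle_and_freq(speed, fps, rest_thresh=0.2))  # ~(87%, 1.0 Hz)
```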


Subject(s)
Algorithms, Hand/physiology, Physical Exertion, Time and Motion Studies, Computers, Humans, Video Recording
17.
Appl Ergon ; 65: 461-472, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28284701

ABSTRACT

Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and the techniques for identifying them are underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used for evaluating repetitive motion tasks for hand activity level (HAL) using conventional 2D videos. The approach was made practical by relaxing the need for high precision and by adopting a semi-automatic approach to measuring the spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors using this computer vision approach is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track, and the hand location and its associated kinematics are measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed, and duty cycle components of tasks that are part of the threshold limit value for hand activity, in order to identify patterns of exposure associated with specific job factors and to suggest task improvements. The localized variables are plotted as a heat map superimposed over the video and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors contribute most to HAL and readily identify the work elements in the task that contribute most to injury risk. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements.
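A hedged sketch of the overlay mechanics with a synthetic speed map (the real method localizes frequency, speed, and duty cycle from tracked hand kinematics; only the colormap-over-frame step is shown):

```python
import cv2
import numpy as np

# Per-pixel speed map accumulated from tracking (synthetic here), overlaid on
# a video frame so high-speed regions of the task show up as hot colors.
frame = np.full((240, 320, 3), 128, dtype=np.uint8)        # placeholder frame
speed = np.zeros((240, 320), dtype=np.float32)
cv2.circle(speed, (200, 120), 40, 1.0, -1)                 # fake high-speed zone
speed = cv2.GaussianBlur(speed, (51, 51), 0)

heat = cv2.applyColorMap((255 * speed / speed.max()).astype(np.uint8),
                         cv2.COLORMAP_JET)
overlay = cv2.addWeighted(frame, 0.6, heat, 0.4, 0)        # heat map over video
cv2.imwrite("hal_heatmap.png", overlay)
```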


Subject(s)
Ergonomics/methods, Task Performance and Analysis, Video Recording/methods, Biomechanical Phenomena, Cumulative Trauma Disorders/etiology, Hand/physiology, Humans, Image Processing, Computer-Assisted/methods, Motion, Movement/physiology, Occupational Diseases/etiology
18.
Appl Opt ; 55(29): 8316-8334, 2016 Oct 10.
Article in English | MEDLINE | ID: mdl-27828081

ABSTRACT

Atmospheric lidar observations provide a unique capability to directly observe the vertical column of cloud and aerosol scattering properties. Detector and solar-background noise, however, hinder the ability of lidar systems to provide reliable backscatter and extinction cross-section estimates. Standard methods for solving this inverse problem are most effective with high signal-to-noise ratio observations that are only available at low resolution in uniform scenes. This paper describes a novel method for solving the inverse problem with high-resolution, lower signal-to-noise ratio observations that are effective in non-uniform scenes. The novelty is twofold. First, the inferences of the backscatter and extinction are applied to images, whereas current lidar algorithms only use the information content of single profiles. Hence, the latent spatial and temporal information in noisy images are utilized to infer the cross-sections. Second, the noise associated with photon-counting lidar observations can be modeled using a Poisson distribution, and state-of-the-art tools for solving Poisson inverse problems are adapted to the atmospheric lidar problem. It is demonstrated through photon-counting high spectral resolution lidar (HSRL) simulations that the proposed algorithm yields inverted backscatter and extinction cross-sections (per unit volume) with smaller mean squared error values at higher spatial and temporal resolutions, compared to the standard approach. Two case studies of real experimental data are also provided where the proposed algorithm is applied on HSRL observations and the inverted backscatter and extinction cross-sections are compared against the standard approach.
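A toy 1D analogue of the Poisson inverse problem: a smooth photon rate is estimated from counts by minimizing the Poisson negative log-likelihood plus a quadratic smoothness penalty (the paper works on 2D images and inverts the full HSRL forward model; the weight and profile below are made up):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
truth = 5 + 40 * np.exp(-0.5 * ((np.arange(100) - 50) / 8.0) ** 2)  # smooth rate
y = rng.poisson(truth).astype(float)        # photon-count observation

lam = 2.0                                   # smoothness weight

def neg_log_posterior(x):
    # Poisson negative log-likelihood (up to constants) + quadratic smoothness
    return np.sum(x - y * np.log(x)) + lam * np.sum(np.diff(x) ** 2)

res = minimize(neg_log_posterior, x0=np.maximum(y, 1.0),
               method="L-BFGS-B", bounds=[(1e-6, None)] * len(y))
estimate = res.x                            # denoised rate profile
print(float(np.mean((estimate - truth) ** 2)))  # MSE vs. the true profile
```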

19.
Sensors (Basel) ; 16(5)2016 May 23.
Article in English | MEDLINE | ID: mdl-27223287

ABSTRACT

A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.
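A hedged single-snapshot sketch of sparse-recovery DoA on a uniform linear array using orthogonal matching pursuit (the CSJSR-DoA approach jointly exploits spatial and spectral structure with randomly sampled data, which this toy omits):

```python
import numpy as np

def steering_matrix(m, d_over_lambda, grid_deg):
    """Steering vectors of an m-element ULA for each candidate angle."""
    angles = np.radians(grid_deg)
    k = np.arange(m)[:, None]
    return np.exp(-2j * np.pi * d_over_lambda * k * np.sin(angles)[None, :])

def omp_doa(y, A, n_sources):
    """Orthogonal matching pursuit: greedily pick the steering vectors that
    best explain the array snapshot, then re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(n_sources):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)

grid = np.arange(-90, 90.5, 0.5)            # candidate DoAs, degrees
A = steering_matrix(8, 0.5, grid)           # 8-mic half-wavelength ULA
rng = np.random.default_rng(1)
s1, s2 = A[:, np.searchsorted(grid, -20)], A[:, np.searchsorted(grid, 35)]
y = s1 + 0.7 * s2 + 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
print([grid[i] for i in omp_doa(y, A, 2)])  # ~[-20.0, 35.0]
```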

20.
J Acoust Soc Am ; 139(4): 1848, 2016 04.
Article in English | MEDLINE | ID: mdl-27106332

ABSTRACT

An acoustic-signature-based method for estimating the flight trajectory of low-altitude aircraft using only a stationary microphone array is proposed. The method leverages the Doppler shift of the engine sound to estimate the closest-point-of-approach (CPA) distance, time, and speed, and leverages the acoustic phase shift across the microphone array to estimate the direction of arrival of the target. Combining these parameters, the algorithm provides a total least squares estimate of the target trajectory under the assumption of constant target height, direction, and speed. Analytical bounds on the potential performance degradation due to noise are derived, the estimation error caused by signal propagation delay is analyzed, and both are verified with extensive simulations. The proposed algorithm is also validated by processing data collected in field experiments.
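A minimal sketch of the Doppler-based CPA estimation for one microphone: fit the received engine-tone frequency track to the constant-velocity fly-by model (synthetic data; the full method adds array phase for DoA and a total least squares trajectory solve):

```python
import numpy as np
from scipy.optimize import curve_fit

C = 343.0   # speed of sound, m/s

def doppler_freq(t, f0, v, t_cpa, d_cpa):
    """Received engine-tone frequency for straight, level, constant-speed
    flight: radial speed from the CPA geometry drives the Doppler shift."""
    dt = t - t_cpa
    v_radial = v * v * dt / np.sqrt(d_cpa**2 + (v * dt)**2)
    return f0 * C / (C + v_radial)

t = np.linspace(-10, 10, 200)
true = doppler_freq(t, 120.0, 60.0, 0.5, 300.0)          # synthetic fly-by
meas = true + np.random.default_rng(0).normal(0, 0.3, t.size)

popt, _ = curve_fit(doppler_freq, t, meas, p0=(100.0, 50.0, 0.0, 200.0))
print(popt)   # recovered [f0, speed, CPA time, CPA distance]
```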
