Results 1 - 20 of 30
1.
IEEE Sens J; 24(5): 6888-6897, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38476583

ABSTRACT

We developed an ankle-worn gait monitoring system for tracking gait parameters, including length, width, and height. The system utilizes ankle bracelets equipped with wide-angle infrared (IR) stereo cameras tasked with monitoring a marker on the opposing ankle. A computer vision algorithm we have also developed processes the imaged marker positions to estimate the length, width, and height of the person's gait. Through testing on multiple participants, the prototype of the proposed gait monitoring system exhibited notable performance, achieving an average accuracy of 96.52%, 94.46%, and 95.29% for gait length, width, and height measurements, respectively, despite distorted wide-angle images. The OptiGait system offers a cost-effective and user-friendly alternative compared to existing gait parameter sensing systems, delivering comparable accuracy in measuring gait length and width. Notably, the system demonstrates a novel capability in measuring gait height, a feature not previously reported in the literature.
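
A minimal sketch of how stereo marker observations could map to gait parameters, assuming rectified, undistorted image pairs with known focal length and baseline; the pixel values, calibration numbers, and axis convention below are illustrative, and the paper's actual algorithm additionally handles wide-angle distortion:

```python
import numpy as np

def triangulate_marker(uL, vL, uR, f, baseline, cx, cy):
    """Triangulate a marker from a rectified stereo pair.
    (uL, vL) and (uR, ~vL) are pixel coordinates in the left/right images."""
    disparity = uL - uR
    Z = f * baseline / disparity      # depth, same units as the baseline
    X = (uL - cx) * Z / f             # lateral offset
    Y = (vL - cy) * Z / f             # vertical offset
    return np.array([X, Y, Z])

# Marker positions at two consecutive foot placements (assumed axis mapping:
# Z forward, X lateral, Y vertical)
p0 = triangulate_marker(412.0, 250.1, 388.5, f=320.0, baseline=0.06, cx=320.0, cy=240.0)
p1 = triangulate_marker(455.3, 231.7, 437.2, f=320.0, baseline=0.06, cx=320.0, cy=240.0)
step = p1 - p0
gait_length, gait_width, gait_height = abs(step[2]), abs(step[0]), abs(step[1])
```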

2.
Ergonomics; 66(8): 1132-1141, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36227226

ABSTRACT

Observer, manual single-frame video, and automated computer vision measures of the Hand Activity Level (HAL) were compared. HAL can be measured three ways: (1) observer rating (HALO), (2) calculated from single-frame multimedia video task analysis for measuring frequency (F) and duty cycle (D) (HALF), or (3) from automated computer vision (HALC). This study analysed videos collected from three prospective cohort studies to ascertain HALO, HALF, and HALC for 419 industrial videos. Although the differences among the three methods were relatively small on average (<1), they were statistically significant (p < .001). Agreement between the HALC and HALF ratings within ±1 point on the HAL scale was the most consistent: more than two thirds (68%) of all cases fell within that range, with a linear regression through the mean giving a coefficient of 1.03 (R² = 0.89). The results suggest that the computer vision methodology yields results comparable to single-frame video analysis. Practitioner summary: The ACGIH Hand Activity Level (HAL) was obtained for 419 industrial tasks using three methods: observation, calculation from single-frame video analysis, and computer vision. The computer vision methodology produced results comparable to single-frame video analysis.
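
For context, the HALF-style calculation can be illustrated with a frequency/duty-cycle HAL equation. The logistic-like form and coefficients below are recalled from the published HAL literature rather than taken from this abstract, so verify them before any real use:

```python
import math

def hal_from_frequency(F, D):
    """HAL from exertion frequency F (exertions/s) and duty cycle D (%).
    One published frequency/duty-cycle form (coefficients assumed here;
    check against the cited cohort studies before relying on them)."""
    return 6.56 * math.log(D) * (F ** 1.31) / (1 + 3.18 * F ** 1.31)

print(round(hal_from_frequency(F=0.5, D=50.0), 1))   # ~4.5 on the 0-10 scale
```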


Subjects
Hand , Task Performance and Analysis , Humans , Prospective Studies , Upper Extremity , Computers , Video Recording/methods
3.
Article in English | MEDLINE | ID: mdl-37719135

ABSTRACT

A novel online real-time video stabilization algorithm (LSstab) that suppresses unwanted motion jitter based on cinematography principles is presented. LSstab features a parallel realization of the a-contrario RANSAC (AC-RANSAC) algorithm to estimate inter-frame camera motion parameters. A novel least-squares-based smoothing cost function is then proposed to mitigate undesirable camera jitter according to cinematography principles. A recursive least squares solver is derived that minimizes the smoothing cost function with linear computational complexity. LSstab is evaluated on a suite of publicly available videos against state-of-the-art video stabilization methods. Results show that LSstab achieves comparable or better performance and attains real-time processing speed when a GPU is used.
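
A generic recursive least squares smoother over a single per-frame motion parameter, sketching the kind of linear-complexity recursive solver described; the forgetting factor, the affine-in-time model, and the initialization are assumptions, not the paper's exact cost function:

```python
import numpy as np

def rls_smooth(path, lam=0.95):
    """Recursively fit a local linear trend to a 1-D camera-motion
    parameter (e.g., horizontal translation per frame)."""
    w = np.zeros(2)                      # [intercept, slope]
    P = np.eye(2) * 1e3                  # inverse correlation matrix
    smoothed = []
    for t, y in enumerate(path):
        x = np.array([1.0, t])           # regressor: affine in time
        k = P @ x / (lam + x @ P @ x)    # gain vector
        w = w + k * (y - x @ w)          # update the trend estimate
        P = (P - np.outer(k, x @ P)) / lam
        smoothed.append(x @ w)           # smoothed motion value
    return np.array(smoothed)

jittery = np.cumsum(np.random.normal(0.5, 2.0, 120))   # synthetic camera path
stable = rls_smooth(jittery)
```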

4.
Sensors (Basel); 22(19), 2022 Sep 23.
Article in English | MEDLINE | ID: mdl-36236314

ABSTRACT

A novel wearable multi-sensor data glove system is developed to explore the relation between finger spasticity and voluntary movement in patients with stroke. Many stroke patients suffer from finger spasticity, which is detrimental to their manual dexterity. Diagnosing and assessing the degrees of spasticity require neurological testing performed by trained professionals to estimate finger spasticity scores via the modified Ashworth scale (MAS). The proposed system offers an objective, quantitative solution to assess the finger spasticity of patients with stroke and complements the manual neurological test. In this work, the hardware and software components of this system are described. By requiring patients to perform five designated tasks, biomechanical measurements including linear and angular speed, acceleration, and pressure at every finger joint and upper limb are recorded, making up more than 1000 features for each task. We conducted a preliminary clinical test with 14 subjects using this system. Statistical analysis is performed on the acquired measurements to identify a small subset of features that are most likely to discriminate a healthy patient from patients suffering from finger spasticity. This encouraging result validates the feasibility of this proposed system to quantitatively and objectively assess finger spasticity.
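
A sketch of the kind of feature screening described, ranking glove-derived features by a univariate F-test; the array shapes, labels, and the choice of scikit-learn's SelectKBest are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

# X: one row of biomechanical features per subject/task; y: 0 = healthy,
# 1 = finger spasticity (labels as assessed via MAS). Synthetic stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(14, 1000))
y = rng.integers(0, 2, size=14)

selector = SelectKBest(score_func=f_classif, k=10).fit(X, y)
top_features = np.argsort(selector.scores_)[::-1][:10]
print("most discriminative feature indices:", top_features)
```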


Subjects
Stroke Rehabilitation , Stroke , Fingers , Humans , Muscle Spasticity/diagnosis , Stroke/diagnosis , Upper Extremity
5.
Hum Factors; 64(3): 482-498, 2022 May.
Article in English | MEDLINE | ID: mdl-32972247

ABSTRACT

OBJECTIVE: A computer vision method was developed for estimating the trunk flexion angle, angular speed, and angular acceleration by extracting simple features from the moving image during lifting. BACKGROUND: Trunk kinematics is an important risk factor for lower back pain, but is often difficult for practitioners to measure in lifting risk assessments. METHODS: Mannequins representing a wide range of hand locations for different lifting postures were systematically generated using the University of Michigan 3DSSPP software. A bounding box was drawn tightly around each mannequin and regression models estimated trunk angles. The estimates were validated against human posture data for 216 lifts collected using a laboratory-grade motion capture system and synchronized video recordings. Trunk kinematics, based on bounding box dimensions drawn around the subjects in the video recordings of the lifts, were modeled for consecutive video frames. RESULTS: The mean absolute difference between predicted and motion-capture-measured trunk angles was 14.7°, and there was a significant linear relationship between predicted and measured trunk angles (R² = .80, p < .001). The training error for the kinematics model was 2.3°. CONCLUSION: Using simple computer vision-extracted features, the bounding box method indirectly estimated trunk angle and associated kinematics, albeit with limited precision. APPLICATION: This computer vision method may be implemented on handheld devices such as smartphones to facilitate automatic lifting risk assessments in the workplace.
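
A sketch of the bounding-box idea: derive box dimensions from a segmented silhouette and regress trunk angle. The features, training values, and linear model below are hypothetical stand-ins for the paper's 3DSSPP-trained regression:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def box_features(mask):
    """Width/height of a tight bounding box around a binary silhouette."""
    ys, xs = np.nonzero(mask)
    w = xs.max() - xs.min() + 1
    h = ys.max() - ys.min() + 1
    return [w / h, h]                 # illustrative features, not the paper's set

# Hypothetical training pairs: box features -> trunk flexion angle (deg),
# as would be generated from 3DSSPP mannequin renderings
X_train = [[0.45, 310], [0.80, 260], [1.20, 190]]
y_train = [10.0, 45.0, 90.0]
model = LinearRegression().fit(X_train, y_train)

mask = np.zeros((400, 300), dtype=bool)
mask[80:330, 60:220] = True           # stand-in for a segmented lifter
angle = model.predict([box_features(mask)])[0]
# Differencing per-frame angles then gives angular speed/acceleration
```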


Subjects
Lifting , Torso , Biomechanical Phenomena , Computers , Humans , Posture
6.
Sensors (Basel); 19(11), 2019 Jun 07.
Article in English | MEDLINE | ID: mdl-31181614

ABSTRACT

In this paper, we propose a novel multi-view image denoising algorithm based on a convolutional neural network (MVCNN). Multi-view images are arranged into 3D focus image stacks (3DFIS) according to different disparities. The MVCNN is trained to process each 3DFIS and generate a denoised image stack that contains the recovered image information for regions of particular disparities. The denoised image stacks are then fused to produce a denoised target view image using the estimated disparity map. Unlike conventional multi-view denoising approaches that first group similar patches and then denoise them, our CNN-based algorithm avoids exhaustive patch searching and greatly reduces computational time. In the proposed MVCNN, residual learning and batch normalization strategies are also used to enhance denoising performance and accelerate training. Experiments show that, compared with state-of-the-art single-image and multi-view denoising algorithms, the proposed CNN-based algorithm is a highly effective and efficient method for Gaussian denoising of multi-view images.
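
A compact residual-learning denoiser in the DnCNN style, illustrating the residual plus batch-normalization strategy mentioned; the depth, width, and PyTorch framing are assumptions for illustration, not the MVCNN architecture itself:

```python
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    """Residual denoiser: the network predicts the noise and subtracts it
    (residual learning), with batch norm in the hidden layers."""
    def __init__(self, channels=3, depth=8, width=64):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)      # subtract the predicted noise

stack = torch.randn(1, 3, 64, 64)    # one noisy slice of a 3DFIS
denoised = DenoiseCNN()(stack)
```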

7.
Hum Factors; 61(8): 1326-1339, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31013463

ABSTRACT

OBJECTIVE: This study explores how common machine learning techniques can predict surgical maneuvers from a continuous video record of surgical benchtop simulations. BACKGROUND: Automatic computer vision recognition of surgical maneuvers (suturing, tying, and transition) could expedite video review and objective assessment of surgeries. METHOD: We recorded hand movements of 37 clinicians performing simple and running subcuticular suturing benchtop simulations, and applied three machine learning techniques (decision trees, random forests, and hidden Markov models) to classify surgical maneuvers in every 2 s (60 frames) of video. RESULTS: Random forest predictions correctly classified 74% of all video segments into suturing, tying, and transition states for a randomly selected test set. Hidden Markov model adjustments improved the random forest predictions to 79% for simple interrupted suturing on a subset of randomly selected participants. CONCLUSION: Random forest predictions aided by hidden Markov modeling provided the best prediction of surgical maneuvers. Training models across all users improved prediction accuracy by 10% compared with training on a random selection of participants. APPLICATION: Marker-less video hand tracking can predict surgical maneuvers from a continuous video record with accuracy similar to that of robot-assisted surgical platforms, and may enable more efficient video review of surgical procedures for training and coaching.
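
A sketch of the winning pipeline: random-forest posteriors per 2 s window, smoothed by a Viterbi pass over a "sticky" transition matrix. The features, labels, and transition probabilities are placeholders, and using forest posteriors as HMM emission scores is an assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins: per-window motion features and maneuver labels
# (0 = suturing, 1 = tying, 2 = transition)
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))
y = rng.integers(0, 3, size=300)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
probs = rf.predict_proba(X)                  # per-window class posteriors

A = np.full((3, 3), 0.075) + np.eye(3) * 0.775   # sticky transitions, rows sum to 1
logA, logp = np.log(A), np.log(probs + 1e-12)
delta, back = logp[0].copy(), []
for t in range(1, len(logp)):
    scores = delta[:, None] + logA           # scores[i, j]: prev state i -> state j
    back.append(scores.argmax(axis=0))
    delta = scores.max(axis=0) + logp[t]
path = [int(delta.argmax())]
for bp in reversed(back):
    path.append(int(bp[path[-1]]))
smoothed = path[::-1]                        # HMM-adjusted maneuver sequence
```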


Subjects
Hand , Image Interpretation, Computer-Assisted , Machine Learning , Motor Skills , Pattern Recognition, Automated , Surgical Procedures, Operative , Humans , Video Recording
8.
Hum Factors; 61(1): 64-77, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30091947

ABSTRACT

OBJECTIVE: A method for automatically classifying lifting postures from simple features in video recordings was developed and tested. We explored whether an "elastic" rectangular bounding box, drawn tightly around the subject, can be used to classify standing, stooping, and squatting at the lift origin and destination. BACKGROUND: Current marker-less video tracking methods depend on a priori skeletal human models, which are prone to error from poor illumination, obstructions, and difficulty placing cameras in the field. Robust computer vision algorithms based on spatiotemporal features were previously applied for evaluating repetitive motion tasks, exertion frequency, and duty cycle. METHODS: Mannequin poses were systematically generated using the Michigan 3DSSPP software for a wide range of hand locations and lifting postures. The stature-normalized height and width of a bounding box were measured in the sagittal plane and when rotated horizontally by 30°. After randomly ordering the data, a classification and regression tree algorithm was trained to classify the lifting postures. RESULTS: The resulting tree had four levels and four splits, misclassifying 0.36% of training-set cases. The algorithm was tested using 30 video clips of industrial lifting tasks, misclassifying 3.33% of test-set cases. The sensitivity and specificity, respectively, were 100.0% and 100.0% for squatting, 90.0% and 100.0% for stooping, and 100.0% and 95.0% for standing. CONCLUSIONS: The tree classification algorithm is capable of classifying lifting postures based only on the dimensions of bounding boxes. APPLICATIONS: It is anticipated that this practical algorithm can be implemented on handheld devices such as a smartphone, making it readily accessible to practitioners.
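
A minimal version of the classification-and-regression-tree idea using scikit-learn; the four bounding-box features (normalized height/width in two view rotations) and the training rows are illustrative, not the study's mannequin data:

```python
from sklearn.tree import DecisionTreeClassifier

# Features: stature-normalized box height and width in the sagittal plane
# and at a 30-degree horizontal rotation (hypothetical values)
X = [[0.98, 0.32, 0.95, 0.40],   # standing
     [0.62, 0.55, 0.60, 0.62],   # stooping
     [0.55, 0.48, 0.53, 0.50]]   # squatting
y = ["stand", "stoop", "squat"]

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)   # four-level tree as in the study
print(tree.predict([[0.60, 0.53, 0.58, 0.60]]))
```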


Subjects
Lifting , Posture/physiology , Task Performance and Analysis , Algorithms , Biomechanical Phenomena , Decision Trees , Humans , Manikins , Reproducibility of Results , Video Recording
9.
Ergonomics; 62(8): 1043-1054, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31092146

ABSTRACT

A widely used risk prediction tool, the revised NIOSH lifting equation (RNLE), provides the recommended weight limit (RWL), but is limited by analyst subjectivity, experience, and resources. This paper describes a robust, non-intrusive, straightforward approach for automatically extracting the spatial and temporal factors necessary for the RNLE using a single video camera in the sagittal plane. The participant's silhouette is segmented using motion information, and the novel use of a ghosting effect provides accurate detection of lifting instances and prediction of hand and foot locations. Laboratory tests using 6 participants, each performing 36 lifts, showed that a nominal 640 pixel × 480 pixel 2D video, in comparison to 3D motion capture, provided RWL estimations within 0.2 kg (SD = 1.0 kg). The linear regression between the video and 3D tracking RWL was R² = 0.96 (slope = 1.0, intercept = 0.2 kg). Since low-definition video was used in order to synchronise with motion capture, better performance is anticipated using high-definition video. Practitioner summary: An algorithm for automatically calculating the revised NIOSH lifting equation using a single video camera was evaluated against laboratory 3D motion capture. The results indicate that this method has suitable accuracy for practical use and may be particularly useful when multiple lifts are evaluated. Abbreviations: 2D: two-dimensional; 3D: three-dimensional; ACGIH: American Conference of Governmental Industrial Hygienists; AM: asymmetric multiplier; BOL: beginning of lift; CM: coupling multiplier; DM: distance multiplier; EOL: end of lift; FIRWL: frequency-independent recommended weight limit; FM: frequency multiplier; H: horizontal distance; HM: horizontal multiplier; IMU: inertial measurement unit; ISO: International Organization for Standardization; LC: load constant; NIOSH: National Institute for Occupational Safety and Health; RGB: red, green, blue; RGB-D: red, green, blue - depth; RNLE: revised NIOSH lifting equation; RWL: recommended weight limit; SD: standard deviation; TLV: threshold limit value; VM: vertical multiplier; V: vertical distance.
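
The RWL itself is a closed-form product of a load constant and six multipliers, so the video method only has to supply the spatial and temporal inputs. A metric-form sketch of the standard equation follows (FM and CM must still be looked up in the NIOSH tables; this is a reference implementation of the published equation, not the paper's code):

```python
def rwl_metric(H, V, D, A, FM=1.0, CM=1.0):
    """Revised NIOSH lifting equation, metric form.
    H: horizontal distance (cm), V: vertical hand height (cm),
    D: vertical travel distance (cm), A: asymmetry angle (deg);
    FM and CM come from the NIOSH frequency/coupling tables."""
    LC = 23.0                      # load constant, kg
    HM = 25.0 / H                  # horizontal multiplier
    VM = 1 - 0.003 * abs(V - 75)   # vertical multiplier
    DM = 0.82 + 4.5 / D            # distance multiplier
    AM = 1 - 0.0032 * A            # asymmetric multiplier
    return LC * HM * VM * DM * AM * FM * CM

# Example: H = 30 cm, V = 70 cm, D = 40 cm, A = 0 deg -> about 17.6 kg
print(round(rwl_metric(30, 70, 40, 0), 1), "kg")
```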


Subjects
Ergonomics/methods , Lifting , Monitoring, Physiologic/methods , Occupational Health , Video Recording/methods , Adult , Female , Humans , Linear Models , Male , National Institute for Occupational Safety and Health, U.S. , Risk Assessment , United States
10.
Sensors (Basel); 18(7), 2018 Jul 14.
Article in English | MEDLINE | ID: mdl-30011930

ABSTRACT

An optimal camera placement problem is investigated. The objective is to maximize the area of the field of view (FoV) of a stitched video obtained by stitching video streams from an array of cameras. The positions and poses of these cameras are restricted to a given set of selections. The camera array is designed to be placed inside the abdomen to support minimally invasive laparoscopic surgery. Hence, a few non-traditional requirements/constraints are imposed: adjacent views are required to overlap to support image registration for seamless video stitching, and the resulting effective FoV should be a contiguous region without any holes and should be a convex polygon. With these requirements, traditional camera placement algorithms cannot be directly applied to this problem. In this work, we show that the complexity of this problem grows exponentially with the problem size, and then present a greedy polynomial-time heuristic that approximates the globally optimal solution well. We present a new approach to directly evaluate the combined coverage area (area of FoV) as the union of a set of quadrilaterals. We also propose a graph-based approach to ensure the stitching requirement (overlap between adjacent views) is satisfied, and a method to find a convex polygon with maximum area within a given polygon. Several design examples show that the proposed algorithm achieves a larger FoV area while using much less computing time.
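
A sketch of evaluating the combined coverage as a union of quadrilaterals and greedily growing the camera set, using shapely for the polygon union; the paper's overlap and convexity constraints (the graph-based check) are deliberately omitted here for brevity:

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

def fov_area(quads):
    """Combined coverage = area of the union of per-camera FoV quadrilaterals."""
    return unary_union([Polygon(q) for q in quads]).area

def greedy_placement(candidates, k):
    """Greedy heuristic: repeatedly add the candidate quad that most
    increases the union area (stitching/convexity checks omitted)."""
    chosen = []
    for _ in range(k):
        best = max(candidates, key=lambda q: fov_area(chosen + [q]))
        chosen.append(best)
        candidates = [q for q in candidates if q is not best]
    return chosen

quads = [[(0, 0), (2, 0), (2, 2), (0, 2)],
         [(1, 1), (3, 1), (3, 3), (1, 3)],
         [(4, 0), (6, 0), (6, 2), (4, 2)]]
print(fov_area(greedy_placement(quads, 2)))   # 8.0: picks the disjoint pair
```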

11.
Ergonomics; 60(12): 1730-1738, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28640656

ABSTRACT

Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and computer vision was -5.8% for the Decision Tree (DT) algorithm and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found that HAL remained unaffected when the DC error was less than 5%; a DC error of less than 10% changes HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.


Subjects
Algorithms , Hand/physiology , Physical Exertion , Time and Motion Studies , Computers , Humans , Video Recording
12.
Appl Opt; 55(29): 8316-8334, 2016 Oct 10.
Article in English | MEDLINE | ID: mdl-27828081

ABSTRACT

Atmospheric lidar observations provide a unique capability to directly observe the vertical column of cloud and aerosol scattering properties. Detector and solar-background noise, however, hinder the ability of lidar systems to provide reliable backscatter and extinction cross-section estimates. Standard methods for solving this inverse problem are most effective with high signal-to-noise ratio observations that are only available at low resolution in uniform scenes. This paper describes a novel method for solving the inverse problem with high-resolution, lower signal-to-noise ratio observations that is effective in non-uniform scenes. The novelty is twofold. First, the backscatter and extinction are inferred from images, whereas current lidar algorithms use only the information content of single profiles; hence, the latent spatial and temporal information in noisy images is utilized to infer the cross-sections. Second, the noise associated with photon-counting lidar observations can be modeled using a Poisson distribution, and state-of-the-art tools for solving Poisson inverse problems are adapted to the atmospheric lidar problem. It is demonstrated through photon-counting high spectral resolution lidar (HSRL) simulations that the proposed algorithm yields inverted backscatter and extinction cross-sections (per unit volume) with smaller mean squared error values at higher spatial and temporal resolutions, compared to the standard approach. Two case studies of real experimental data are also provided, in which the proposed algorithm is applied to HSRL observations and the inverted backscatter and extinction cross-sections are compared against the standard approach.
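
A generic Poisson inverse-problem sketch: estimate a smooth, positive rate profile from photon counts by minimizing the Poisson negative log-likelihood plus a smoothness penalty. This is a stand-in for the class of solvers the paper adapts, not a reproduction of its algorithm:

```python
import numpy as np
from scipy.optimize import minimize

def poisson_denoise(counts, lam=5.0):
    """Poisson NLL + squared second-difference penalty, solved in the
    log domain to enforce positivity of the rate."""
    counts = np.asarray(counts, dtype=float)

    def objective(log_rate):
        rate = np.exp(log_rate)
        nll = np.sum(rate - counts * log_rate)        # Poisson NLL (up to a constant)
        penalty = lam * np.sum(np.diff(log_rate, 2) ** 2)
        return nll + penalty

    res = minimize(objective, np.log(counts + 1.0), method="L-BFGS-B")
    return np.exp(res.x)

rng = np.random.default_rng(0)
truth = 10 + 5 * np.sin(np.linspace(0, 3, 200))       # synthetic rate profile
estimate = poisson_denoise(rng.poisson(truth))
```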

13.
J Acoust Soc Am; 139(4): 1848, 2016 Apr.
Article in English | MEDLINE | ID: mdl-27106332

ABSTRACT

An acoustic-signature-based method for estimating the flight trajectory of low-altitude flying aircraft that requires only a stationary microphone array is proposed. This method leverages the Doppler shift of the engine sound to estimate the closest-point-of-approach distance, time, and speed, and leverages the acoustic phase shift across the microphone array to estimate the direction of arrival of the target. Combining these parameters, the algorithm provides a total least squares estimate of the target trajectory under the assumption of constant target height, direction, and speed. Analytical bounds on the potential performance degradation due to noise are derived and the estimation error caused by signal propagation delay is analyzed; both are verified with extensive simulation. The proposed algorithm is also validated by processing data collected in field experiments.
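
The closest-point-of-approach (CPA) geometry behind the Doppler part of the method can be written down directly. The model below assumes level flight on a straight track at constant speed (as the abstract states) and fits the tone frequency f0, speed v, CPA distance d, and CPA time tc to a tracked engine tone; it is a sketch, not the paper's total-least-squares estimator:

```python
import numpy as np
from scipy.optimize import curve_fit

C = 343.0   # speed of sound, m/s

def doppler_model(t, f0, v, d, tc):
    """Received frequency of a tone f0 from a source on a straight,
    constant-speed track with CPA distance d at time tc."""
    r = v * (t - tc)                       # along-track distance past CPA
    vr = v * r / np.sqrt(r**2 + d**2)      # radial (range-rate) speed
    return f0 * C / (C + vr)

# Fit CPA parameters to a tracked spectral line (synthetic example)
t = np.linspace(0, 20, 400)
f_obs = doppler_model(t, 120.0, 60.0, 150.0, 10.0) + np.random.normal(0, 0.1, t.size)
(f0, v, d, tc), _ = curve_fit(doppler_model, t, f_obs, p0=[110.0, 50.0, 100.0, 9.0])
```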

14.
Sensors (Basel); 16(5), 2016 May 23.
Article in English | MEDLINE | ID: mdl-27223287

ABSTRACT

A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.
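
A single-snapshot sketch of sparse-recovery DoA on an angle grid for a uniform linear array, using l1-regularized least squares (Lasso). The array geometry, source amplitudes (assumed real), and regularization weight are illustrative; the joint spatial-spectral structure of CSJSR-DoA is not modeled here:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Narrowband ULA model: x = A(theta) s + n, with s sparse on an angle grid
M, wavelength, spacing = 8, 0.34, 0.17              # 8 mics, half-wavelength spacing
grid = np.deg2rad(np.arange(-90, 91, 1))
m = np.arange(M)[:, None]
A = np.exp(-2j * np.pi * spacing / wavelength * m * np.sin(grid))

# Two sources at -20 and 35 degrees (synthetic snapshot, real amplitudes)
x = A[:, [70, 125]] @ np.array([1.0, 0.8]) + 0.01 * np.random.randn(M)

# Real-valued sparse recovery: stack real/imag parts, then l1-regularized LS
A_ri = np.vstack([A.real, A.imag])
x_ri = np.concatenate([x.real, x.imag])
s = Lasso(alpha=0.05, max_iter=10000).fit(A_ri, x_ri).coef_
doa_deg = np.rad2deg(grid[np.argsort(np.abs(s))[::-1][:2]])
print(doa_deg)   # should land near [-20, 35]
```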

15.
Hum Factors; 58(3): 427-440, 2016 May.
Article in English | MEDLINE | ID: mdl-26546381

ABSTRACT

OBJECTIVE: This study investigates using marker-less video tracking to evaluate hands-on clinical skills during simulated clinical breast examinations (CBEs). BACKGROUND: There are currently no standardized and widely accepted CBE screening techniques. METHODS: Experienced physicians attending a national conference conducted simulated CBEs presenting different pathologies with distinct tumorous lesions. Single-hand exam motion was recorded and analyzed using marker-less video tracking. Four kinematic measures were developed to describe temporal (time pressing and time searching) and spatial (area covered and distance explored) patterns. RESULTS: Mean differences in time pressing, area covered, and distance explored varied across the simulated lesions. Exams were objectively categorized as sporadic, localized, thorough, or efficient in both the temporal and spatial categories based on spatiotemporal characteristics. The majority of trials were temporally or spatially thorough (78% and 91%), exhibiting proportionally greater time pressing and time searching (temporally thorough) and greater area probed with greater distance explored (spatially thorough). More efficient exams exhibited proportionally more time pressing with less time searching (temporally efficient) and greater area probed with less distance explored (spatially efficient). Just two (5.9%) of the trials exhibited both high temporal and spatial efficiency. CONCLUSIONS: Marker-less video tracking was used to discriminate different examination techniques and to measure when an exam changes from general searching to specific probing. The majority of participants exhibited more thorough than efficient patterns. APPLICATION: Marker-less video kinematic tracking may be useful for quantifying clinical skills for training and assessment.
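
Two of the spatial measures have simple kinematic realizations, sketched here under the assumption that the tracker yields a sequence of 2-D hand positions; the convex-hull choice for "area covered" is one plausible definition, not necessarily the study's:

```python
import numpy as np
from scipy.spatial import ConvexHull

def spatial_measures(xy):
    """Distance explored = total path length of the tracked hand;
    area covered = convex-hull area of the visited positions."""
    steps = np.diff(xy, axis=0)
    distance = np.linalg.norm(steps, axis=1).sum()
    area = ConvexHull(xy).volume      # for a 2-D hull, 'volume' is the area
    return distance, area

track = np.array([[0, 0], [2, 1], [3, 4], [1, 5], [0, 2]], dtype=float)
print(spatial_measures(track))
```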


Subjects
Breast/diagnostic imaging , Image Processing, Computer-Assisted/methods , Physical Examination/methods , Video Recording/methods , Algorithms , Computer Simulation , Female , Humans , Models, Theoretical
16.
Ergonomics; 59(11): 1514-1525, 2016 Nov.
Article in English | MEDLINE | ID: mdl-26848051

ABSTRACT

A marker-less 2D video algorithm measured hand kinematics (location, velocity and acceleration) in a paced repetitive laboratory task for varying hand activity levels (HAL). The decision tree (DT) algorithm identified the trajectory of the hand using spatiotemporal relationships during the exertion and rest states. The feature vector training (FVT) method utilised a k-nearest neighbour classifier trained using a set of samples or the first cycle. The average duty cycle (DC) error using the DT algorithm was 2.7%. The FVT algorithm had an average error of 3.3% when trained using the first cycle of each repetitive task, and 2.8% when trained using several representative repetitive cycles. The error for HAL was 0.1 for both algorithms, which was considered negligible. Elemental times, stratified by task and subject, were not statistically different from ground truth (p < 0.05). Both algorithms performed well for automatically measuring elapsed time, DC and HAL. Practitioner Summary: A completely automated approach for measuring elapsed time and DC was developed using marker-less video tracking and the tracked kinematic record. Such an approach is automatic, repeatable, objective and unobtrusive, and is suitable for evaluating repetitive exertions, muscle fatigue and manual tasks.
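
A simple thresholding stand-in showing how duty cycle and elapsed time follow from a tracked speed record once frames are labeled exertion versus rest; the DT and FVT classifiers are more sophisticated than this, and the threshold and frame rate below are assumptions:

```python
import numpy as np

def duty_cycle(speed, threshold, fps=30):
    """Label frames as exertion when hand speed exceeds a threshold;
    duty cycle = exertion time / total cycle time (percent)."""
    exerting = speed > threshold
    dc = 100.0 * exerting.mean()
    elapsed = len(speed) / fps            # elapsed time, seconds
    return dc, elapsed

speed = np.abs(np.sin(np.linspace(0, 6 * np.pi, 900))) * 250   # mm/s, 30 s clip
print(duty_cycle(speed, threshold=100.0))
```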


Subjects
Algorithms , Hand/physiology , Image Processing, Computer-Assisted , Task Performance and Analysis , Video Recording , Acceleration , Biomechanical Phenomena , Female , Humans , Male , Movement , Muscle Fatigue
17.
Ergonomics; 58(12): 2057-2066, 2015.
Article in English | MEDLINE | ID: mdl-25978764

ABSTRACT

Marker-less 2D video tracking was studied as a practical means to measure upper limb kinematics for ergonomics evaluations. Hand activity level (HAL) can be estimated from speed and duty cycle. Accuracy was measured using a cross-correlation template-matching algorithm for tracking a region of interest on the upper extremities. Ten participants performed a paced load transfer task while varying HAL (2, 4, and 5) and load (2.2 N, 8.9 N and 17.8 N). Speed and acceleration measured from 2D video were compared against ground-truth measurements from 3D infrared motion capture. The median absolute difference between 2D video and 3D motion capture was 86.5 mm/s for speed and 591 mm/s² for acceleration, and less than 93 mm/s for speed and 656 mm/s² for acceleration when camera pan and tilt were within ±30 degrees. Single-camera 2D video had sufficient accuracy (< 100 mm/s) for evaluating HAL. Practitioner Summary: This study demonstrated that 2D video tracking had sufficient accuracy to measure HAL for ascertaining the American Conference of Governmental Industrial Hygienists Threshold Limit Value® for repetitive motion when the camera is located within ±30 degrees off the plane of motion, when compared against 3D motion capture for a simulated repetitive motion task.
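
A sketch of the cross-correlation template-matching loop using OpenCV. The OpenCV calls (cv2.matchTemplate, cv2.minMaxLoc) are real API; the frame source, template, frame rate, and pixel-to-millimetre scale are assumptions:

```python
import cv2
import numpy as np

def track_roi(frames, template):
    """Track a region of interest across frames with normalized
    cross-correlation template matching."""
    path = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = cv2.matchTemplate(gray, template, cv2.TM_CCORR_NORMED)
        _, _, _, top_left = cv2.minMaxLoc(score)   # best-match location
        path.append(top_left)
    return np.array(path, dtype=float)

def speeds(path, fps=30, mm_per_px=1.0):
    """Per-frame hand speed from the tracked path (mm/s)."""
    return np.linalg.norm(np.diff(path, axis=0), axis=1) * fps * mm_per_px
```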


Subjects
Acceleration , Algorithms , Movement , Occupational Exposure/analysis , Upper Extremity/physiology , Video Recording/methods , Adolescent , Adult , Biomechanical Phenomena , Ergonomics , Female , Humans , Male , Musculoskeletal Diseases , Occupational Diseases , Young Adult
18.
Ergonomics; 58(2): 184-194, 2015.
Article in English | MEDLINE | ID: mdl-25343278

ABSTRACT

An equation was developed for estimating hand activity level (HAL) directly from tracked root mean square (RMS) hand speed (S) and duty cycle (D). Table lookup, an equation, or marker-less video tracking can estimate HAL from motion/exertion frequency (F) and D. Since automatically estimating F is sometimes complex, HAL may be more readily assessed using S. Hands from 33 videos originally used for the HAL rating were tracked to estimate S, scaled relative to hand breadth (HB), and single-frame analysis was used to measure D. Since HBs were unknown, a Monte Carlo method was employed to iteratively estimate the regression coefficients from US Army anthropometry survey data. The resulting equation, HAL = 10e^z/(1 + e^z) with z = -15.87 + 0.02D + 2.25 ln S (R² = 0.97), had a residual range of ±0.5 HAL. The S equation fits the Latko et al. (1997) data better and predicted independently observed HAL values (Harris 2011) better (MSE = 0.16) than the F equation (MSE = 1.28).
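
The reported equation drops straight into code. Units for S follow the study (RMS hand speed scaled relative to hand breadth) and D is the duty cycle in percent; the example input values are arbitrary:

```python
import math

def hal_from_speed(S, D):
    """HAL from RMS hand speed S (hand-breadth-scaled) and duty cycle D (%),
    using the logistic equation reported in the abstract:
    HAL = 10*e^z / (1 + e^z), z = -15.87 + 0.02*D + 2.25*ln(S)."""
    z = -15.87 + 0.02 * D + 2.25 * math.log(S)
    return 10.0 * math.exp(z) / (1.0 + math.exp(z))

print(round(hal_from_speed(S=500.0, D=50.0), 1))
```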


Subjects
Hand/physiology , Physical Exertion , Task Performance and Analysis , Work/physiology , Anthropometry/methods , Biomechanical Phenomena , Humans , Military Personnel , Movement , Occupational Health , Regression Analysis , Threshold Limit Values , United States
19.
Environ Monit Assess; 186(2): 919-934, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24046241

ABSTRACT

Here, we describe and evaluate two low-power wireless sensor networks (WSNs) designed to remotely monitor wetland hydrochemical dynamics over time scales ranging from minutes to decades. Each WSN (one student-built and one commercial) has multiple nodes to monitor water level, precipitation, evapotranspiration, temperature, and major solutes at user-defined time intervals. Both WSNs can be configured to report data in near real time via the internet. Based on deployments in two isolated wetlands, we report highly resolved water budgets, transient reversals of flow path, rates of transpiration from peatlands, and the dynamics of chromophoric dissolved organic matter and bulk ionic solutes (specific conductivity), all on daily or subdaily time scales. Initial results indicate that direct precipitation and evapotranspiration dominate the hydrologic budget of both study wetlands, despite their relatively flat geomorphology and proximity to elevated uplands. Rates of transpiration from peatland sites were typically greater than evaporation from open waters but were more challenging to integrate spatially. Due to the high specific yield of peat, the hydrologic gradient between peatland and open water varied with precipitation events and intervening periods of dry-out. The resultant flow path reversals implied that the flux of solutes across the riparian boundary varied over daily time scales. We conclude that WSNs can be deployed in remote wetland-dominated ecosystems at relatively low cost to assess the hydrochemical impacts of weather, climate, and other perturbations.


Subjects
Environmental Monitoring/methods , Remote Sensing Technology , Wetlands , Wireless Technology , Climate , Internet , Weather
20.
J Signal Process Syst; 94(3): 329-343, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35663585

ABSTRACT

A real-time 3D visualization (RT3DV) system using a multiview RGB camera array is presented. RT3DV can process multiple synchronized video streams to produce a stereo video of a dynamic scene from a chosen view angle. Its design objective is to facilitate 3D visualization at the video frame rate with good viewing quality. To facilitate 3D vision, RT3DV estimates and updates a surface mesh model formed directly from a set of sparse key points. The 3D coordinates of these key points are estimated by matching 2D key points across multiview video streams with the aid of epipolar geometry and the trifocal tensor. To capture the scene dynamics, 2D key points in individual video streams are tracked between successive frames. We implemented a proof-of-concept RT3DV system tasked with processing five synchronous video streams acquired by an RGB camera array. It achieves a processing speed of 44 milliseconds per frame and a peak signal-to-noise ratio (PSNR) of 15.9 dB from a viewpoint coinciding with a reference view. As a comparison, an image-based MVS algorithm utilizing a dense point cloud model and frame-by-frame feature detection and matching requires 7 seconds to render a frame and yields a reference-view PSNR of 16.3 dB.
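
The PSNR figure of merit used in the comparison is standard and easy to reproduce; the image shapes and noise level below are arbitrary:

```python
import numpy as np

def psnr(rendered, reference, max_val=255.0):
    """Peak signal-to-noise ratio between a rendered view and the
    reference camera image: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((rendered.astype(float) - reference.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.random.randint(0, 256, (480, 640, 3))
out = np.clip(ref + np.random.normal(0, 40, ref.shape), 0, 255)
print(round(psnr(out, ref), 1))
```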
