Results 1 - 20 of 49
1.
Sensors (Basel) ; 23(21)2023 Nov 02.
Article in English | MEDLINE | ID: mdl-37960620

ABSTRACT

Indoor human action recognition, essential across various applications, faces significant challenges such as orientation constraints and identification limitations, particularly in systems reliant on non-contact devices; self-occlusion and non-line-of-sight (NLOS) situations are prominent examples. To address these challenges, this paper presents a novel system built on dual Kinect V2 sensors, enhanced by an advanced Transmission Control Protocol (TCP) scheme and sophisticated ensemble learning techniques, tailored to handle self-occlusion and NLOS situations. Our main contributions are as follows: (1) a data-adaptive adjustment mechanism, anchored on localization outcomes, to mitigate self-occlusion in dynamic orientations; (2) the adoption of sophisticated ensemble learning techniques, including a chirp acoustic signal identification method based on an optimized fuzzy c-means-AdaBoost algorithm, to improve positioning accuracy in NLOS contexts; and (3) a combination of the Random Forest model and the bat algorithm, providing innovative action identification strategies for intricate scenarios. Extensive experiments show that the proposed system improves human action recognition precision by a substantial 30.25%, surpassing the benchmarks set by current state-of-the-art works.
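To make the dual-sensor architecture above more concrete, the following is a minimal sketch of how per-frame skeleton joints from a second Kinect V2 could be streamed to a fusion host over TCP. The message format, addresses, and the frame source are assumptions for illustration, not the authors' protocol.

```python
# Hypothetical sketch: streaming one Kinect's skeleton joints to a fusion host
# over TCP, as a dual-Kinect setup implies. Message format, host/port, and the
# frame source are assumptions, not the authors' protocol.
import json
import socket
import struct
import time

HOST, PORT = "192.168.0.10", 9000   # fusion PC (hypothetical address)

def send_frames(frame_source, host=HOST, port=PORT):
    """Send length-prefixed JSON skeleton frames to the fusion host."""
    with socket.create_connection((host, port)) as sock:
        for joints in frame_source:                  # e.g. {"SpineBase": [x, y, z], ...}
            payload = json.dumps({"t": time.time(), "joints": joints}).encode()
            sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_frames(listen_port=PORT):
    """Yield skeleton frames received from the remote Kinect."""
    srv = socket.socket()
    srv.bind(("", listen_port))
    srv.listen(1)
    conn, _ = srv.accept()
    while True:
        header = conn.recv(4)
        if len(header) < 4:
            return
        (length,) = struct.unpack("!I", header)
        data = b""
        while len(data) < length:
            chunk = conn.recv(length - len(data))
            if not chunk:
                return
            data += chunk
        yield json.loads(data)
```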

2.
Sensors (Basel) ; 23(7)2023 Mar 25.
Article in English | MEDLINE | ID: mdl-37050520

ABSTRACT

Anthropometric measurements of the human body are an important problem that affects many aspects of human life. However, anthropometric measurement often requires an appropriate measurement procedure and specialized, sometimes expensive, measurement tools. The procedure can also be complicated, time-consuming, and dependent on properly trained personnel. This study aimed to develop a system for estimating human anthropometric parameters based on a three-dimensional scan of the complete body made with an inexpensive depth camera, the Kinect v2 sensor. The research included 129 men aged 18 to 28. The developed system consists of a rotating platform, a depth sensor (Kinect v2), and a PC used to record the 3D data and to estimate the individual anthropometric parameters. Experimental studies showed that the precision of the proposed system is satisfactory for a significant part of the parameters; the largest error was found for the waist circumference. The results obtained confirm that this method can be used in anthropometric measurements.


Subjects
Anthropometry, Male, Humans, Anthropometry/methods, Biomechanical Phenomena
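As an illustration of how a circumference such as the waist could be estimated from a whole-body scan like the one described above, the sketch below slices a point cloud at a given height and measures the perimeter of the slice's convex hull. The slice height, thickness, and point-cloud layout are assumptions, not the authors' pipeline.

```python
# Sketch: estimating a circumference (e.g. waist) from a 3D body scan by
# slicing the point cloud at a given height and measuring the convex-hull
# perimeter of the projected slice. The point format (N x 3 array, metres,
# y-up) and slice parameters are assumptions, not the authors' method.
import numpy as np
from scipy.spatial import ConvexHull

def circumference_at_height(points, height, thickness=0.01):
    """Approximate body circumference (m) at a given height (m)."""
    y = points[:, 1]
    ring = points[np.abs(y - height) < thickness / 2][:, [0, 2]]  # project slice to x-z plane
    if len(ring) < 3:
        raise ValueError("slice contains too few points")
    hull = ConvexHull(ring)
    verts = ring[hull.vertices]                      # hull vertices in counter-clockwise order
    return np.linalg.norm(np.roll(verts, -1, axis=0) - verts, axis=1).sum()

# Example with synthetic data: a cylinder of radius 0.15 m gives ~0.94 m.
theta = np.random.uniform(0, 2 * np.pi, 20000)
cloud = np.column_stack([0.15 * np.cos(theta),
                         np.random.uniform(0.8, 1.2, theta.size),
                         0.15 * np.sin(theta)])
print(circumference_at_height(cloud, height=1.0))
```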
3.
Sensors (Basel) ; 23(1)2022 Dec 20.
Article in English | MEDLINE | ID: mdl-36616603

ABSTRACT

Motion analysis is an area with several applications in health, sports, and entertainment. The high cost of state-of-the-art equipment in the health field makes it unfeasible to apply this technique in routine clinical practice. In this vein, RGB-D and RGB devices that provide joint tracking are being tested as portable and low-cost solutions to enable computational motion analysis. The recent release of Google MediaPipe, a joint inference tracking technique that uses conventional RGB cameras, can be considered a milestone due to its ability to estimate depth coordinates from planar images. In light of this, this work evaluates the measurement of angular variation from RGB-D and RGB sensor data against the Qualisys Tracking Manager gold standard. A total of 60 recordings were performed for each upper- and lower-limb movement in two different position configurations with respect to the sensors. Google MediaPipe produced results close to those of the Kinect V2 sensor in terms of absolute error, RMS error, and correlation with the gold standard, while presenting lower dispersion and error values, which is favourable. In comparison with equipment commonly used in physical evaluations, MediaPipe's error was within the error range of short- and long-arm goniometers.


Subjects
Movement, Sports, Biomechanical Phenomena, Motion (Physics), Benchmarking
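The angular variation compared above reduces, for both Kinect joints and MediaPipe landmarks, to the angle formed at a joint by two limb segments. A minimal sketch of that generic computation, not the paper's exact processing, follows.

```python
# Sketch: angular variation at a joint (e.g. elbow) from three 3D landmarks,
# applicable to Kinect joints or MediaPipe world landmarks. This is the
# generic vector formulation, not necessarily the paper's exact pipeline.
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at point b formed by segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Example: shoulder, elbow, wrist positions (metres) -> elbow angle (~90 deg).
print(joint_angle([0.0, 1.4, 0.0], [0.0, 1.1, 0.0], [0.2, 1.1, 0.2]))
```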
4.
Sensors (Basel) ; 22(7)2022 Mar 23.
Article in English | MEDLINE | ID: mdl-35408082

ABSTRACT

The Azure Kinect represents the latest generation of Microsoft Kinect depth cameras. Of interest in this article are the depth and spatial accuracy of the Azure Kinect and how they compare to those of its predecessor, the Kinect v2. In one experiment, the two sensors are used to capture a planar whiteboard at 15 locations in a grid pattern, with laser scanner data serving as ground truth. A set of histograms reveals the temporal random depth error inherent in each Kinect. Additionally, a two-dimensional cone of accuracy illustrates the systematic spatial error. At distances greater than 2.5 m, we find the Azure Kinect to have improved accuracy in both the spatial and temporal domains compared to the Kinect v2, while for distances less than 2.5 m, the spatial and temporal accuracies were found to be comparable. In another experiment, we compare the distribution of random depth error between the two Kinect sensors by capturing a flat wall across the field of view in the horizontal and vertical directions. We find the Azure Kinect to have improved temporal accuracy over the Kinect v2 in the range of 2.5 to 3.5 m for measurements close to the optical axis. The results indicate that the Azure Kinect is a suitable substitute for the Kinect v2 in 3D scanning applications.


Subjects
Computer Systems, Light, Biomechanical Phenomena
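A hedged sketch of how the temporal (random) depth error described above can be characterized: capture a stack of depth frames of a static target, compute the per-pixel standard deviation, and histogram it. The frame source below is synthetic, not the study's data.

```python
# Sketch: characterising temporal (random) depth error of a static scene,
# in the spirit of the per-pixel histograms described above. The frame
# source (a stack of N depth frames in millimetres) is an assumption.
import numpy as np

def temporal_depth_error(frames, invalid=0):
    """frames: (N, H, W) depth stack. Returns per-pixel std with invalid pixels masked."""
    stack = np.ma.masked_equal(np.asarray(frames, dtype=float), invalid)
    return stack.std(axis=0)                      # (H, W) masked array of std in mm

# Example with synthetic frames: a 1500 mm wall plus 3 mm Gaussian sensor noise,
# at the Kinect v2 depth resolution of 512 x 424 pixels.
rng = np.random.default_rng(0)
frames = 1500 + rng.normal(0, 3, size=(100, 424, 512))
sigma = temporal_depth_error(frames)
hist, edges = np.histogram(sigma.compressed(), bins=50)   # histogram of per-pixel std
print(f"median per-pixel std: {np.ma.median(sigma):.2f} mm")
```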
5.
Sensors (Basel) ; 22(6)2022 Mar 15.
Article in English | MEDLINE | ID: mdl-35336429

ABSTRACT

The interruption of rehabilitation activities caused by the COVID-19 lockdown has had significant negative health consequences for people with physical disabilities. Thus, measuring the range of motion (ROM) using remotely taken photographs, which are then sent to specialists for formal assessment, has been recommended. Currently, low-cost Kinect motion capture sensors with a natural user interface are the most feasible implementations for upper limb motion analysis. An active range of motion (AROM) measuring system based on a Kinect v2 sensor for upper limb motion analysis using Fugl-Meyer Assessment (FMA) scoring is described in this paper. Two test groups of children, each having eighteen participants, were analyzed in the experimental stage, where the upper limbs' AROM and motor performance were assessed using the FMA. Participants in the control group (mean age of 7.83 ± 2.54 years) had no cognitive impairment or upper limb musculoskeletal problems. The study test group comprised children aged 8.28 ± 2.32 years with spastic hemiparesis. A total of 30 samples of elbow flexion and 30 samples of shoulder abduction of both limbs for each participant were analyzed using the Kinect v2 sensor at 30 Hz. In both upper limbs, no significant differences (p < 0.05) in the measured angles and FMA assessments were observed between those obtained using the described Kinect v2-based system and those obtained directly using a universal goniometer. The measurement error achieved by the proposed system was less than ±1° compared to the specialist's measurements. According to the obtained results, the developed measuring system is a good alternative and an effective tool for FMA assessment of AROM and motor performance of upper limbs, while avoiding direct contact, in both healthy children and children with spastic hemiparesis.


Subjects
COVID-19, COVID-19/diagnosis, Child, Preschool Child, Communicable Disease Control, Hemiplegia, Humans, Articular Range of Motion, Upper Extremity
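As a rough illustration of how an AROM value can be derived from a 30 Hz joint-angle stream and compared with a goniometer reading, the following sketch uses placeholder data; it is not the study's FMA scoring pipeline.

```python
# Sketch: deriving an active range of motion (AROM) value from a 30 Hz stream
# of per-frame joint angles and comparing it with a goniometer reading.
# The angle stream and goniometer value are placeholders, not study data.
import numpy as np

def arom(angle_series_deg):
    """AROM = maximum minus minimum joint angle over the recorded movement."""
    s = np.asarray(angle_series_deg, dtype=float)
    return float(s.max() - s.min())

angles = 10 + 120 * np.sin(np.linspace(0, np.pi, 90)) ** 2   # synthetic elbow flexion, 3 s at 30 Hz
goniometer_deg = 119.0                                        # hypothetical reference measurement
print(f"Kinect AROM: {arom(angles):.1f} deg, error vs goniometer: {arom(angles) - goniometer_deg:+.1f} deg")
```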
6.
Sensors (Basel) ; 22(16)2022 Aug 17.
Article in English | MEDLINE | ID: mdl-36015911

ABSTRACT

Wind tunnel tests often require deformation and displacement measurements to determine the behavior of structures and to evaluate their response to wind excitation. However, common measurement techniques can capture these quantities only at a few specific points. Moreover, such measurements, e.g., linear variable differential transformers (LVDTs) or fiber optic sensors, usually influence the downstream and upstream air flow as well as the structure under test. In order to characterize the displacement not just at a few points but over the entire structure, this article presents the application of 3D cameras during a wind tunnel test. To validate the measurement technique in this application field, a wind tunnel test was executed in which three Kinect V2 depth sensors were used for 3D displacement measurement of a test structure without any optical markers or features. The results show that, with a low-cost and user-friendly measurement system, it is possible to obtain 3D measurements in a volume of several cubic meters (a 4 m × 4 m × 4 m wind tunnel chamber) without significant disturbance of the wind flow and with only a simple calibration of the sensors, executed directly inside the wind tunnel. The measurements revealed a displacement directed towards the interior of the structure on the side most exposed to the wind, whereas the sides parallel to the wind flow were more subject to vibration and exhibited an outward average displacement. These results are compliant with the expected behavior of the structure.
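One way the per-point displacement of the structure could be extracted from two registered point clouds (reference versus loaded) is with nearest-neighbour distances, sketched below under the assumption that the clouds have already been calibrated into a common frame.

```python
# Sketch: estimating structure displacement from two registered point clouds
# (reference vs. under wind load) using nearest-neighbour distances.
# Sensor calibration/registration into a common frame is assumed done.
import numpy as np
from scipy.spatial import cKDTree

def displacement_field(reference, deformed):
    """For each reference point, distance (m) to its nearest deformed point."""
    tree = cKDTree(np.asarray(deformed))
    dist, _ = tree.query(np.asarray(reference), k=1)
    return dist

# Synthetic example: a 1 x 1 m planar patch pushed inwards by 5 mm.
rng = np.random.default_rng(1)
ref = np.column_stack([rng.uniform(0, 1, 5000), rng.uniform(0, 1, 5000), np.zeros(5000)])
defo = ref + np.array([0.0, 0.0, -0.005])
print(f"mean displacement: {displacement_field(ref, defo).mean() * 1000:.1f} mm")
```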

7.
Sensors (Basel) ; 22(8)2022 Apr 11.
Article in English | MEDLINE | ID: mdl-35458903

ABSTRACT

This paper proposes a time-series deep-learning 3D Kinect camera scheme to classify the respiratory phases with a lung tumor and predict the lung tumor displacement. Specifically, the proposed scheme is driven by two time-series deep-learning algorithmic models: the respiratory-phase classification model and the regression-based prediction model. To assess the performance of the proposed scheme, the classification and prediction models were tested with four categories of datasets: patient-based datasets with regular and irregular breathing patterns; and pseudopatient-based datasets with regular and irregular breathing patterns. In this study, 'pseudopatients' refer to a dynamic thorax phantom with a lung tumor programmed with varying breathing patterns and breaths per minute. The total accuracy of the respiratory-phase classification model was 100%, 100%, 100%, and 92.44% for the four dataset categories, with a corresponding mean squared error (MSE), mean absolute error (MAE), and coefficient of determination (R2) of 1.2-1.6%, 0.65-0.8%, and 0.97-0.98, respectively. The results demonstrate that the time-series deep-learning classification and regression-based prediction models can classify the respiratory phases and predict the lung tumor displacement with high accuracy. Essentially, the novelty of this research lies in the use of a low-cost 3D Kinect camera with time-series deep-learning algorithms in the medical field to efficiently classify the respiratory phase and predict the lung tumor displacement.


Subjects
Deep Learning, Lung Neoplasms, Algorithms, Humans, Lung Neoplasms/diagnosis, Imaging Phantoms, Thorax
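A minimal sketch of a time-series classifier of the kind described above, mapping windows of chest-surface displacement to respiratory phases. The window length, feature count, number of phases, and network architecture are assumptions, not the paper's models.

```python
# Sketch: a time-series deep-learning classifier mapping windows of
# chest-surface displacement to respiratory phases. Window length, feature
# count, class count and architecture are assumptions, not the paper's models.
import numpy as np
import tensorflow as tf

WINDOW, FEATURES, PHASES = 60, 1, 4     # 2 s at 30 Hz, 1 displacement channel, 4 phases (assumed)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(PHASES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Synthetic stand-in data just to show the expected tensor shapes.
x = np.random.randn(256, WINDOW, FEATURES).astype("float32")
y = np.random.randint(0, PHASES, size=256)
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(x[:1], verbose=0).shape)      # (1, PHASES) phase probabilities
```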
8.
Sensors (Basel) ; 22(10)2022 May 17.
Article in English | MEDLINE | ID: mdl-35632211

ABSTRACT

Analysing the dynamics in social interactions in indoor spaces entails evaluating spatial-temporal variables from the event, such as location and time. Additionally, social interactions include invisible spaces that we unconsciously acknowledge due to social constraints, e.g., space between people having a conversation with each other. Nevertheless, current sensor arrays focus on detecting the physically occupied spaces from social interactions, i.e., areas inhabited by physically measurable objects. Our goal is to detect the socially occupied spaces, i.e., spaces not physically occupied by subjects and objects but inhabited by the interaction they sustain. We evaluate the social representation of the space structure between two or more active participants, so-called F-Formation for small gatherings. We propose calculating body orientation and location from skeleton joint data sets by integrating depth cameras. The body orientation is derived by integrating the shoulders and spine joint data with head/face rotation data and spatial-temporal information from trajectories. From the physically occupied measurements, we can detect socially occupied spaces. In our user study implementing the system, we compared the capabilities and skeleton tracking datasets from three depth camera sensors, the Kinect v2, Azure Kinect, and Zed 2i. We collected 32 walking patterns for individual and dyad configurations and evaluated the system's accuracy regarding the intended and socially accepted orientations. Experimental results show accuracy above 90% for the Kinect v2, 96% for the Azure Kinect, and 89% for the Zed 2i for assessing socially relevant body orientation. Our algorithm contributes to the anonymous and automated assessment of socially occupied spaces. The depth sensor system is promising in detecting more complex social structures. These findings impact research areas that study group interactions within complex indoor settings.


Subjects
Musculoskeletal System, Algorithms, Biomechanical Phenomena, Humans, Skeleton, Walking
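A minimal sketch of the shoulder-based part of the body-orientation estimate described above: the ground-plane facing direction taken perpendicular to the shoulder line. The fusion with head rotation and trajectory information is omitted, and the sign convention is an assumption.

```python
# Sketch: ground-plane body orientation from the two shoulder joints; the
# facing direction is taken perpendicular to the shoulder line. The fusion
# with head rotation and trajectory cues mentioned above is omitted here.
import numpy as np

def body_orientation_deg(l_shoulder, r_shoulder):
    """Facing direction in the x-z ground plane, degrees in [0, 360)."""
    l, r = np.asarray(l_shoulder, float), np.asarray(r_shoulder, float)
    sx, sz = (r - l)[0], (r - l)[2]          # shoulder-line vector projected to the ground
    fx, fz = -sz, sx                          # perpendicular = facing direction (convention assumed)
    return float(np.degrees(np.arctan2(fz, fx)) % 360.0)

# Example: shoulders aligned along x -> subject faces along +z (90 deg here).
print(body_orientation_deg([-0.2, 1.4, 2.0], [0.2, 1.4, 2.0]))
```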
9.
Sensors (Basel) ; 21(6)2021 Mar 17.
Article in English | MEDLINE | ID: mdl-33802731

ABSTRACT

Children with cerebral palsy (CP) have a high risk of falling, so it is necessary to evaluate gait stability for children with CP. In comparison to traditional motion capture techniques, the Kinect has the potential to be utilised as a cost-effective gait stability assessment tool, enabling frequent and uninterrupted gait monitoring. To evaluate the validity and reliability of this measurement, in this study, ten children with CP performed two testing sessions in which gait data were recorded by a Kinect V2 sensor and a reference Motion Analysis system. The margin of stability (MOS) and gait spatiotemporal metrics were examined. For the spatiotemporal parameters, intraclass correlation coefficient (ICC2,k) values were from 0.83 to 0.99 between the two devices and from 0.78 to 0.88 between the two testing sessions. For the MOS outcomes, ICC2,k values ranged from 0.42 to 0.99 between the two devices and from 0.28 to 0.69 between the two test sessions. The Kinect V2 was able to provide valid and reliable spatiotemporal gait parameters, and it could also offer accurate outcome measures for the minimum MOS. The reliability of the Kinect V2 when assessing time-specific MOS variables was limited. The Kinect V2 shows the potential to be used as a cost-effective tool for CP gait stability assessment.


Subjects
Cerebral Palsy, Gait Analysis, Biomechanical Phenomena, Cerebral Palsy/diagnosis, Child, Gait, Humans, Reproducibility of Results
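The margin of stability referenced above is commonly computed from the extrapolated centre of mass (XCoM = CoM + v_CoM/ω0, with ω0 = sqrt(g/leg length)); a one-dimensional sketch under that standard formulation follows, with placeholder values rather than study data.

```python
# Sketch: the usual margin-of-stability (MOS) formulation via the extrapolated
# centre of mass (XCoM = CoM + v_CoM / omega0, omega0 = sqrt(g / leg_length)),
# computed here for a single mediolateral direction. The numbers below are
# placeholders, and the CoM/BoS definitions are assumptions, not the paper's.
import numpy as np

G = 9.81

def margin_of_stability(com_pos, com_vel, bos_boundary, leg_length):
    """1-D MOS (m): signed distance from the XCoM to the base-of-support boundary."""
    omega0 = np.sqrt(G / leg_length)
    xcom = com_pos + com_vel / omega0
    return bos_boundary - xcom          # positive = XCoM inside the BoS

# Example: CoM 2 cm medial of the lateral BoS edge, moving laterally at 0.10 m/s.
print(f"MOS = {margin_of_stability(0.10, 0.10, 0.12, leg_length=0.75) * 1000:.1f} mm")
```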
10.
J Appl Res Intellect Disabil ; 34(2): 606-614, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33258262

ABSTRACT

BACKGROUND: Individuals with intellectual disabilities (ID) may have difficulties in performing daily living tasks. Among other daily living tasks, independent oral hygiene is an essential life skill for people with ID. MATERIALS AND METHODS: Four children with intellectual disabilities (two males and two females, ages 7-11) participated in the experiment. We employed the Kinect V2 sensor to gamify oral hygiene skill training. Specifically, a non-concurrent multiple baseline design was adopted to demonstrate the relation between the game-based intervention and independent oral hygiene skills. RESULTS: With the introduction of the game-based training, all students learned how to brush their teeth independently and maintained the skill 4 weeks later. Social validity results showed that the teachers and parents considered the video game useful. CONCLUSIONS: The proposed Kinect-based video game might be used for effective training of elementary students with ID to perform oral hygiene independently.


Subjects
Intellectual Disability, Video Games, Child, Female, Humans, Male, Oral Hygiene, Students
11.
Sensors (Basel) ; 20(23)2020 Dec 03.
Article in English | MEDLINE | ID: mdl-33287285

ABSTRACT

A non-destructive measuring technique was applied to assess major vine geometric traits from measurements collected by a contactless sensor. Three-dimensional optical sensors have evolved over the past decade, and these advancements may be useful in improving phenomics technologies for other crops, such as woody perennials. Red, green, blue plus depth (RGB-D) cameras, namely the Microsoft Kinect, have had a significant influence on recent computer vision and robotics research. In this experiment, an adaptable mobile platform was used to acquire depth images for the non-destructive assessment of branch volume (pruning weight) and its relation to grape yield in vineyard crops. Vineyard yield prediction provides useful insights about the anticipated yield to the winegrower, guiding strategic decisions to accomplish optimal quantity and efficiency and supporting decision-making. A Kinect v2 sensor mounted on an on-ground electric vehicle was capable of producing precise 3D point clouds of vine rows under six different management cropping systems. The generated models demonstrated strong consistency between the 3D images and the actual physical parameters of the vine structures when average values were calculated. Correlations of Kinect branch volume with pruning weight (dry biomass) resulted in a high coefficient of determination (R2 = 0.80). In the study of vineyard yield correlations, the measured volume showed a good power-law relationship (R2 = 0.87). However, due to the low capability of most depth cameras to properly reconstruct the 3D shape of small details, the results for each treatment, when calculated separately, were not consistent. Nonetheless, the Kinect v2 has tremendous potential as a 3D sensor in agricultural applications for proximal sensing operations, benefiting from its high frame rate, low price in comparison with other depth cameras, and high robustness.
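As a hedged illustration of how branch volume could be estimated from a vine-row point cloud and related to yield by a power law, the sketch below uses voxel occupancy and a log-log least-squares fit; the voxel size, volume definition, and sample numbers are assumptions, not the paper's method.

```python
# Sketch: a voxel-occupancy estimate of branch/canopy volume from a point
# cloud, and a power-law fit (y = a * x^b) between measured volume and yield
# on log-log axes. Voxel size and the sample data below are placeholders.
import numpy as np

def voxel_volume(points, voxel=0.02):
    """Occupied volume (m^3): number of unique occupied voxels times voxel volume."""
    idx = np.floor(np.asarray(points) / voxel).astype(np.int64)
    return len(np.unique(idx, axis=0)) * voxel ** 3

def fit_power_law(x, y):
    """Fit y = a * x^b by least squares in log-log space; returns (a, b, R2)."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    pred = log_a + b * np.log(x)
    ss_res = np.sum((np.log(y) - pred) ** 2)
    ss_tot = np.sum((np.log(y) - np.mean(np.log(y))) ** 2)
    return np.exp(log_a), b, 1 - ss_res / ss_tot

# Placeholder per-vine data: (measured volume m^3, yield kg).
vol = np.array([0.10, 0.14, 0.18, 0.25, 0.31, 0.40])
yld = np.array([1.1, 1.6, 2.0, 2.9, 3.6, 4.8])
print(fit_power_law(vol, yld))
```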

12.
Sensors (Basel) ; 20(24)2020 Dec 10.
Article in English | MEDLINE | ID: mdl-33321817

ABSTRACT

The use of 3D sensors combined with appropriate data processing and analysis has provided tools to optimise agricultural management through the application of precision agriculture. The recent development of low-cost RGB-Depth cameras has presented an opportunity to introduce 3D sensors into the agricultural community. However, due to the sensitivity of these sensors to highly illuminated environments, it is necessary to know under which conditions RGB-D sensors are capable of operating. This work presents a methodology to evaluate the performance of RGB-D sensors under different lighting and distance conditions, considering both geometrical and spectral (colour and NIR) features. The methodology was applied to evaluate the performance of the Microsoft Kinect v2 sensor in an apple orchard. The results show that sensor resolution and precision decreased significantly under middle to high ambient illuminance (>2000 lx). However, this effect was minimised when measurements were conducted closer to the target. In contrast, illuminance levels below 50 lx affected the quality of colour data and may require the use of artificial lighting. The methodology was useful for characterizing sensor performance throughout the full range of ambient conditions in commercial orchards. Although Kinect v2 was originally developed for indoor conditions, it performed well under a range of outdoor conditions.

13.
J Sports Sci Med ; 19(3): 585-595, 2020 09.
Article in English | MEDLINE | ID: mdl-32874112

ABSTRACT

The Test of Gross Motor Development 2 (TGMD-2) is currently the standard approach for assessing fundamental movement skills (FMS), including locomotor and object control skills. However, its extensive application is restricted by its low efficiency and the requirement of expert training for large-scale evaluations. This study evaluated the accuracy of a newly developed video-based classification system (VCS) with a marker-less sensor to assess children's locomotor skills. A total of 203 typically developing children aged three to eight years executed six locomotor skills, following the TGMD-2 guidelines. A Kinect v2 sensor was used to capture their activities, and videos were recorded for further evaluation by a trained rater. A series of computational kinematic-based algorithms was developed for instant performance rating. The VCS exhibited moderate-to-very good levels of agreement with the rater, ranging from 66.1% to 87.5% for each skill, and 72.4% for descriptive ratings. A paired t-test revealed no significant differences, but a significant positive correlation, between the standard scores determined by the two approaches. A Tukey mean difference plot suggested there was no bias, with a mean difference (SD) of -0.16 (1.8) and a corresponding 95% confidence interval of 3.5. The kappa agreement for the descriptive ratings between the two approaches was moderate (k = 0.54, p < 0.01). Overall, the results suggest the VCS could potentially be an alternative to the conventional TGMD-2 assessment approach for assessing children's locomotor skills without requiring an experienced rater to administer it.


Subjects
Child Development/classification, Motor Skills/classification, Video Recording/methods, Algorithms, Biomechanical Phenomena, Child, Preschool Child, Humans, Locomotion, Time and Motion Studies
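The agreement statistics named above (percentage agreement, paired t-test, mean difference, and Cohen's kappa) can be reproduced with standard library calls; a sketch with placeholder ratings follows.

```python
# Sketch: the agreement statistics named above (percentage agreement, Cohen's
# kappa, paired t-test and mean difference) for system vs. rater scores.
# The score arrays are placeholders, not study data.
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

rater  = np.array([1, 2, 2, 3, 1, 2, 3, 3, 2, 1])   # hypothetical ordinal ratings
system = np.array([1, 2, 3, 3, 1, 2, 3, 2, 2, 1])

agreement = np.mean(rater == system) * 100
kappa = cohen_kappa_score(rater, system)
t, p = stats.ttest_rel(rater, system)
diff = system - rater
print(f"agreement {agreement:.1f}%, kappa {kappa:.2f}, paired t p={p:.3f}, "
      f"mean diff {diff.mean():.2f} (SD {diff.std(ddof=1):.2f})")
```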
14.
BMC Med Inform Decis Mak ; 19(Suppl 9): 243, 2019 12 12.
Article in English | MEDLINE | ID: mdl-31830986

ABSTRACT

BACKGROUND: Assessment and rating of Parkinson's Disease (PD) are commonly based on the medical observation of several clinical manifestations, including the analysis of motor activities. In particular, medical specialists refer to the MDS-UPDRS (Movement Disorder Society-sponsored revision of the Unified Parkinson's Disease Rating Scale), which is the most widely used clinical scale for PD rating. However, clinical scales rely on the observation of subtle motor phenomena that are either difficult to capture with the human eye or can be misclassified. This limitation has motivated several researchers to develop intelligent systems based on machine learning algorithms able to automatically recognize PD. Nevertheless, most previous studies investigated the classification between healthy subjects and PD patients without considering the automatic rating of different levels of severity. METHODS: In this context, we implemented a simple and low-cost clinical tool that extracts postural and kinematic features with the Microsoft Kinect v2 sensor in order to classify and rate PD. Thirty participants were enrolled for the purpose of the present study: sixteen PD patients rated according to the MDS-UPDRS and fourteen paired healthy subjects. In order to investigate the motor abilities of the upper and lower body, we acquired and analyzed three main motor tasks: (1) gait, (2) finger tapping, and (3) foot tapping. After preliminary feature selection, different classifiers based on Support Vector Machines (SVM) and Artificial Neural Networks (ANN) were trained and evaluated to find the best solution. RESULTS: Concerning the gait analysis, the ANN classifier performed best, reaching 89.4% accuracy with only nine features in diagnosing PD and 95.0% accuracy with only six features in rating PD severity. Regarding the finger and foot tapping analysis, an SVM using the extracted features was able to classify healthy subjects versus PD patients with good performance, reaching 87.1% accuracy. The results of the classification between mild and moderate PD patients indicated that the foot tapping features were the most representative for discriminating between them (81.0% accuracy). CONCLUSIONS: The results of this study show how a low-cost vision-based system can automatically detect subtle phenomena characteristic of PD. Our findings suggest that the proposed tool can support medical specialists in the assessment and rating of PD patients in a real clinical scenario.


Subjects
Cost-Benefit Analysis, Motor Activity/physiology, Parkinson Disease/physiopathology, Severity of Illness Index, Aged, Aged 80 and Over, Algorithms, Female, Gait Analysis, Humans, Machine Learning, Male, Middle Aged, Support Vector Machine
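A minimal sketch of the classification stage described above: feature selection followed by an SVM evaluated with cross-validation. The feature matrix and labels are random placeholders, and the pipeline details are assumptions rather than the authors' exact configuration.

```python
# Sketch: feature selection followed by an SVM with cross-validation, as the
# abstract describes. The feature matrix X (postural/kinematic features) and
# labels y (healthy vs. PD) are random placeholders, not study data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 40))            # 30 subjects x 40 extracted features (placeholder)
y = rng.integers(0, 2, size=30)          # 0 = healthy, 1 = PD (placeholder)

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=9),   # keep the k most discriminative features
                    SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```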
15.
J Neuroeng Rehabil ; 16(1): 97, 2019 07 26.
Article in English | MEDLINE | ID: mdl-31349868

ABSTRACT

BACKGROUND: Gait is usually assessed by clinical tests, which may have poor accuracy and be biased, or by instrumented systems, which potentially solve these limitations at the cost of being time-consuming and expensive. The different versions of the Microsoft Kinect have enabled human motion tracking without wearable sensors, at a low cost and with acceptable reliability. This study aims, first, to determine the sensitivity of an open-access Kinect v2-based gait analysis system to motor disability and aging; second, to determine its concurrent validity with standardized clinical tests in individuals with stroke; third, to quantify its inter- and intra-rater reliability, standard error of measurement, and minimal detectable change; and, finally, to investigate its ability to identify fall risk after stroke. METHODS: The most widely used spatiotemporal and kinematic gait parameters of 82 individuals post-stroke and 355 healthy subjects were estimated with the Kinect v2-based system. In addition, participants with stroke were assessed with the Dynamic Gait Index, the 1-min Walking Test, and the 10-m Walking Test. RESULTS: The system successfully characterized the performance of both groups. Significant concurrent validity, with correlations of variable strength, was detected between all clinical tests and gait measures. Excellent inter- and intra-rater reliability was evidenced for almost all measures. The minimal detectable change was variable, with poorer results for kinematic parameters. Almost all gait parameters proved able to identify fall risk. CONCLUSIONS: The results suggest that, despite its limited sensitivity to kinematic parameters, the Kinect v2-based gait analysis system could be used as a low-cost alternative to laboratory-grade systems to complement gait assessment in clinical settings.


Subjects
Gait Analysis/instrumentation, Neurologic Gait Disorders/diagnosis, Software, Adult, Biomechanical Phenomena, Female, Neurologic Gait Disorders/etiology, Neurologic Gait Disorders/physiopathology, Healthy Volunteers, Humans, Male, Middle Aged, Reproducibility of Results, Stroke/complications
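The reliability indices named above are usually computed with the standard formulas SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM; a small sketch with placeholder values follows.

```python
# Sketch: the standard formulas behind the reliability indices named above:
# SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM.
# The gait-speed SD and ICC values below are placeholders, not study results.
import math

def sem(sd, icc):
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

sd_gait_speed, icc_gait_speed = 0.15, 0.95      # m/s, placeholder values
print(f"SEM = {sem(sd_gait_speed, icc_gait_speed):.3f} m/s, "
      f"MDC95 = {mdc95(sd_gait_speed, icc_gait_speed):.3f} m/s")
```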
16.
Sensors (Basel) ; 19(5)2019 Mar 11.
Article in English | MEDLINE | ID: mdl-30862049

ABSTRACT

Since the release of the first Kinect in 2011, low-cost technologies for upper-limb evaluation have been employed frequently for rehabilitation purposes. However, only a limited number of studies have assessed the potential of the Kinect V2 for motor evaluation. In this paper, a simple biomechanical protocol is developed to assess the performance of healthy people and patients during daily-life reaching movements, with a focus on some of the patients' common compensatory strategies. The assessment considers shoulder range of motion, elbow range of motion, trunk compensatory strategies, and movement smoothness. Seventy-seven healthy people and twenty post-stroke patients participated in testing the biomechanical assessment. The testing protocol included four experimental conditions: (1) the dominant limb and (2) the non-dominant limb of the 77 healthy people, (3) the more-impaired limb of the 20 post-stroke hemiparetic patients, and (4) the less-impaired limb of 11 patients (a subgroup of the original 20). The biomechanical performances of the four groups were compared. Results showed that the dominant and non-dominant limbs of healthy people had comparable performance (p > 0.05). On the contrary, condition (3) showed statistically significant differences between the healthy dominant/non-dominant limbs and the more-affected limb of the hemiparetic patients for all assessment parameters (p < 0.001). In some cases, the less-affected limb of the patients also showed statistical differences with respect to the healthy people (p < 0.05). These results suggest that the Kinect V2 has the potential to be employed in home, laboratory, or clinical environments for the evaluation of patients' motor performance.


Subjects
Biosensing Techniques/methods, Female, Gestures, Humans, Male, Stroke/physiopathology, Stroke Rehabilitation, Upper Extremity/physiology
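The abstract above lists movement smoothness among the assessed parameters but does not state which index was used; one common choice for reaching movements is the log dimensionless jerk, sketched below under that assumption with a synthetic minimum-jerk reach.

```python
# Sketch: one common smoothness index for reaching, the log dimensionless jerk
# (LDJ = -ln( T^5 / D^2 * integral of squared jerk )). The abstract does not
# specify its smoothness metric, so this is an assumption, not the paper's index.
import numpy as np

def log_dimensionless_jerk(position, fs):
    """position: 1-D array of hand displacement along the reach (m); fs: sample rate (Hz)."""
    dt = 1.0 / fs
    jerk = np.gradient(np.gradient(np.gradient(position, dt), dt), dt)
    duration = (len(position) - 1) * dt
    amplitude = position.max() - position.min()
    dimensionless = (duration ** 5 / amplitude ** 2) * np.sum(jerk ** 2) * dt
    return -np.log(dimensionless)        # less negative = smoother movement

# Example: a smooth minimum-jerk-like reach sampled at 30 Hz (Kinect V2 frame rate).
t = np.linspace(0, 1, 30)
reach = 0.3 * (10 * t**3 - 15 * t**4 + 6 * t**5)      # 0.3 m minimum-jerk profile
print(log_dimensionless_jerk(reach, fs=30))
```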
17.
Sensors (Basel) ; 19(11)2019 Jun 04.
Article in English | MEDLINE | ID: mdl-31167494

ABSTRACT

Based on computer vision technology, this paper proposes a method for identifying and locating crops so that they can be successfully grasped during automatic picking. The method innovatively combines the YOLOv3 algorithm, implemented in the DarkNet framework, with a point cloud image coordinate matching method. First, RGB (red, green, blue) images and depth images are obtained with the Kinect v2 depth camera. Second, the YOLOv3 algorithm is used to identify the various types of target crops in the RGB images and to determine the feature points of the target crops. Finally, the 3D coordinates of the feature points are displayed on the point cloud images. Compared with other methods, this crop identification approach has high accuracy and small positioning error, which lays a good foundation for the subsequent harvesting of crops with robotic arms. In summary, the method used in this paper can be considered effective.
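The final step above, attaching 3D coordinates to detected feature points, amounts to back-projecting a pixel and its depth through the pinhole camera model; a sketch with placeholder (uncalibrated) intrinsics follows.

```python
# Sketch: back-projecting a detected feature pixel (u, v) and its depth Z into
# 3D camera coordinates with the pinhole model. The intrinsics below are
# placeholders, not calibrated values or the paper's exact matching step.
import numpy as np

def pixel_to_camera(u, v, depth_m, fx, fy, cx, cy):
    """Back-project an image pixel with metric depth to (X, Y, Z) in the camera frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Placeholder intrinsics for a Kinect v2-like depth camera (512 x 424 image).
fx = fy = 365.0
cx, cy = 256.0, 212.0
print(pixel_to_camera(300, 200, depth_m=0.85, fx=fx, fy=fy, cx=cx, cy=cy))
```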

18.
Sensors (Basel) ; 19(16)2019 Aug 08.
Article in English | MEDLINE | ID: mdl-31398825

ABSTRACT

Using consumer depth cameras at close range yields a higher surface resolution of the object, but it also introduces more severe noise. This noise tends to be located at or near the edges of the real surface and can cover a large area, which is an obstacle for real-time applications that cannot rely on point cloud post-processing. To fill this gap, we analyze the noise regions based on their position and shape and propose a composite filtering system for consumer depth cameras used at close range. The system consists of three main modules that eliminate different types of noise regions. Taking a depth image of the human hand as an example, the proposed filtering system can eliminate most of the noise regions. None of the algorithms in the system are based on window smoothing, and all are accelerated on the GPU. A large number of comparative experiments with the Kinect v2 and SR300 show that the system achieves good results with extremely high real-time performance, and it can serve as a pre-processing step for real-time human-computer interaction, real-time 3D reconstruction, and further filtering.
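One simple ingredient of this kind of edge filtering is the removal of "flying pixels" whose depth jumps sharply relative to their neighbours; the CPU sketch below illustrates only that idea and is not the paper's GPU-accelerated modules.

```python
# Sketch: one simple ingredient of edge-noise suppression - removing "flying
# pixels" whose depth jump relative to any 4-neighbour exceeds a threshold.
# This is a CPU illustration only, not the paper's GPU-accelerated system.
import numpy as np

def remove_flying_pixels(depth_mm, jump_mm=30, invalid=0):
    """Zero out pixels whose depth differs from any valid 4-neighbour by more than jump_mm."""
    d = depth_mm.astype(float)
    out = depth_mm.copy()
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        nb = np.roll(d, shift, axis=axis)
        bad = (np.abs(d - nb) > jump_mm) & (d != invalid) & (nb != invalid)
        out[bad] = invalid
    return out

# Synthetic example: a hand-like blob at 500 mm over a background at 800 mm;
# only pixels straddling the 300 mm discontinuity are removed.
depth = np.full((424, 512), 800, dtype=np.uint16)
depth[150:300, 200:350] = 500
print((remove_flying_pixels(depth) == 0).sum(), "edge pixels removed")
```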

19.
J Appl Res Intellect Disabil ; 32(4): 942-951, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30941883

ABSTRACT

BACKGROUND: Individuals with intellectual disabilities (ID) may have difficulties in performing daily living tasks. Among other daily living tasks, independent personal hygiene is an essential life skill for people with ID. MATERIALS AND METHODS: Four children in a special education class participated in the experiment. We employed the Kinect V2 sensor to gamify hand washing. Specifically, a non-concurrent multiple baseline design was adopted to demonstrate the relation between the game-based intervention and washing hands independently. RESULTS: Data showed that the percentage of correct task steps increased among all four participants. Social validity results showed that the parents considered the video game very useful and that it had helped their children learn hand hygiene skills effectively. CONCLUSIONS: Although the game is a highly accepted training tool for school use, it currently remains error-prone. A more technically robust system will likely result in higher participant motivation and task performance.


Subjects
Operant Conditioning, Children with Disabilities/rehabilitation, Inclusive Education/methods, Hand Hygiene, Intellectual Disability/rehabilitation, Video Games, Child, Female, Humans, Male, Treatment Outcome
20.
J Neuroeng Rehabil ; 15(1): 104, 2018 11 14.
Article in English | MEDLINE | ID: mdl-30428896

ABSTRACT

BACKGROUND: After a stroke, during seated reaching with their paretic upper limb, many patients spontaneously replace the use of their arm with trunk compensation movements, even though they are able to use their arm when forced to do so. We previously quantified this proximal arm non-use (PANU) with a motion capture system (Zebris, CMS20s). The aim of this study was to validate a low-cost Microsoft Kinect-based system against the CMS20s reference system for diagnosing PANU. METHODS: In 19 hemiparetic stroke individuals, the PANU score, reach length, trunk length, and proximal arm use (PAU) were measured during seated reaching simultaneously by the Kinect (v2) and the CMS20s over two testing sessions separated by two hours. RESULTS: Intraclass correlation coefficients (ICC) and linear regression analysis showed that the PANU score (ICC = 0.96, r2 = 0.92), reach length (ICC = 0.81, r2 = 0.68), trunk length (ICC = 0.97, r2 = 0.94) and PAU (ICC = 0.97, r2 = 0.94) measured using the Kinect were strongly related to those measured using the CMS20s. The PANU scores showed good test-retest reliability for both the Kinect (ICC = 0.76) and the CMS20s (ICC = 0.72). Bland and Altman plots showed slightly reduced PANU scores in the re-test session for both systems (Kinect: -4.25 ± 6.76; CMS20s: -4.71 ± 7.88), which suggests a practice effect. CONCLUSION: We showed that the Kinect can accurately and reliably assess PANU, reach length, trunk length and PAU during seated reaching in post-stroke individuals. We conclude that the Kinect offers a low-cost and widely available solution for the clinical assessment of PANU, for individualised rehabilitation and for monitoring the progress of paretic arm recovery. TRIAL REGISTRATION: The study was approved by the Ethics Committee of Montpellier, France (N°ID-RCB: 2014-A00395-42) and registered as a clinical trial (N° NCT02326688, registered on 15 December 2014, https://clinicaltrials.gov/ct2/show/results/NCT02326688 ).


Subjects
Three-Dimensional Imaging/methods, Stroke Rehabilitation, Stroke/physiopathology, Ultrasonography/methods, Adult, Biomechanical Phenomena, Female, Humans, Male, Middle Aged, Reproducibility of Results, Upper Extremity/diagnostic imaging
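The Bland-Altman quantities used above (the bias and 95% limits of agreement between test and re-test scores) can be computed as in the sketch below, with placeholder PANU scores rather than the study's data.

```python
# Sketch: Bland-Altman bias (mean difference) and 95% limits of agreement
# between paired test/re-test measurements. The PANU score arrays below are
# placeholders, not the study's data.
import numpy as np

def bland_altman(a, b):
    """Return (bias, lower limit, upper limit) of agreement between paired measures."""
    diff = np.asarray(b, float) - np.asarray(a, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

test   = np.array([42.0, 55.0, 18.0, 30.5, 61.0, 25.0])   # hypothetical PANU scores, session 1
retest = np.array([38.5, 49.0, 15.0, 27.0, 58.0, 21.5])   # session 2 (two hours later)
bias, lo, hi = bland_altman(test, retest)
print(f"bias {bias:.2f}, 95% limits of agreement [{lo:.2f}, {hi:.2f}]")
```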