Results 1 - 20 of 416
1.
Proc Natl Acad Sci U S A ; 120(23): e2216162120, 2023 06 06.
Article in English | MEDLINE | ID: mdl-37253013

ABSTRACT

Across the United States, police chiefs, city officials, and community leaders alike have highlighted the need to de-escalate police encounters with the public. This concern about escalation extends from encounters involving use of force to routine car stops, where Black drivers are disproportionately pulled over. Yet, despite the calls for action, we know little about the trajectory of police stops or how escalation unfolds. In study 1, we use methods from computational linguistics to analyze police body-worn camera footage from 577 stops of Black drivers. We find that stops with escalated outcomes (those ending in arrest, handcuffing, or a search) diverge from stops without these outcomes in their earliest moments, even in the first 45 words spoken by the officer. In stops that result in escalation, officers are more likely to issue commands as their opening words to the driver and less likely to tell drivers the reason why they are being stopped. In study 2, we expose Black males to audio clips of the same stops and find differences in how escalated stops are perceived: Participants report more negative emotion, appraise officers more negatively, worry about force being used, and predict worse outcomes after hearing only the officer's initial words in escalated versus non-escalated stops. Our findings show that car stops that end in escalated outcomes sometimes begin in an escalated fashion, with adverse effects for Black male drivers and, in turn, police-community relations.


Subjects
Black or African American, Law Enforcement, Police, Humans, Male, Law Enforcement/methods, United States, Racism, Emotions
2.
BMC Health Serv Res ; 24(1): 681, 2024 May 29.
Article in English | MEDLINE | ID: mdl-38812029

ABSTRACT

BACKGROUND: Body worn cameras (BWCs) are mobile audio and video capture devices that can be secured to clothing, allowing the wearer to record some of what they see and hear. This technology is being introduced in a range of healthcare settings as part of larger violence reduction strategies aimed at reducing incidents of aggression and violence on inpatient wards; however, limited evidence exists on whether this technology achieves such goals. AIM: This study aimed to evaluate the implementation of BWCs on two inpatient mental health wards, including the impact on incidents, the acceptability to staff and patients, the sustainability of the resource use, and the ability to manage the use of BWCs on these wards. METHODS: The study used a mixed-methods design comparing quantitative measures, including ward activity and routinely collected incident data, at three time points (before, during, and after the pilot implementation of BWCs on one acute ward and one psychiatric intensive care unit), alongside pre- and post-pilot qualitative interviews with patients and staff, analysed using a framework based on the Consolidated Framework for Implementation Research. RESULTS: Results showed no clear relationship between the use of BWCs and rates or severity of incidents on either ward, with limited impact of using BWCs on levels of incidents. Qualitative findings noted mixed perceptions about the use of BWCs and highlighted the complexity of implementing such technology as a violence reduction method within a busy healthcare setting. Furthermore, the qualitative data collected during the pilot period highlighted potential systemic and contextual factors, such as low staffing, that may affect the incident data presented. CONCLUSION: This study sheds light on the complexities of using BWCs as a tool for 'maximising safety' in mental health settings. The findings suggest that BWCs have a limited impact on levels of incidents on wards, something that is likely to be largely influenced by the process of implementation as well as a range of contextual factors. As a result, while BWCs may see success at one hospital site, this is not guaranteed at another, as such factors will have a considerable impact on efficacy, acceptability, and feasibility.


Subjects
Hospital Psychiatric Unit, Humans, Pilot Projects, Male, Female, Adult, Violence/prevention & control, Video Recording, Middle Aged, Qualitative Research, Wearable Electronic Devices
3.
J Dairy Sci ; 107(2): 917-932, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37777006

ABSTRACT

The objective of this study was to document the milking efficiency of a sample of Irish dairy farms and to understand the effects of (1) seasonality, (2) management practices, (3) parlor infrastructure, and (4) parlor automations on milking efficiency metrics. A novel methodology based on empirical data from video cameras, infrastructure surveys, and milk yield data allowed for the accurate computation of milking efficiency metrics and quantification of the effects of seasonality, number of operators, and parlor automations on milking efficiency across 2 parlor types. The data for this study were collected over 2 periods: period 1 (July 28, 2020, to October 23, 2020, peak-late production) and period 2 (April 12, 2021, to May 19, 2021, early-peak production) from a sample of 16 herringbone and 10 rotary commercial Irish dairy farms. Milking efficiency was evaluated on each farm using 3 key performance indicators: (1) cows milked per hour (cows/h), (2) cows milked per operator per hour (cows/h per operator), and (3) liters of milk harvested per hour (L/h). Milking efficiency key performance indicators were calculated using "total process time," defined as the time between the first cow entering the holding yard and the end of the cleaning process. Average herd sizes for herringbone and rotary farms were 180 and 425 cows, respectively. Average system sizes for herringbone and rotary farms were 20 and 50 clusters, respectively. For herringbone farms, the average milking efficiency was 94 cows/h, 73 cows/h per operator, and 1,012 L/h, whereas rotary farms achieved an average milking efficiency of 170 cows/h, 132 cows/h per operator, and 1,534 L/h. Parlor size was strongly correlated with milking efficiency (cows/h) for herringbone parlors (0.91) but only moderately correlated for rotary parlors (0.50). Hence, we documented that the effect of parlor size on milking efficiency depends on parlor type. Cluster utilization values on herringbone farms were 5 cows/cluster per h, 4 cows/cluster per operator per h, and 51 L/cluster per h, which were 67%, 33%, and 65% greater than on rotary farms, respectively. We found that for both herringbone and rotary farms, hourly cow throughput (cows/h, cows/h per operator) was greatest during period 1 and that the volume of milk harvested per hour (L/h) was greatest during period 2. Thus, we documented an inverse seasonal relationship between hourly rates of cows milked and milk harvested. We observed that for herringbone farms, milking efficiency (cows/h, L/h) had a strong positive correlation (0.75, 0.74) with the level of automation use. However, the minimal variation in automations used among rotary farms made it difficult to evaluate their effect on milking efficiency. Similarly, we found that the effect of automations on milking efficiency was dependent on parlor type. On average, a second operator at milking for both herringbone (H) and rotary (R) farms increased values for cows/h (+19%, H; +34%, R) and L/h (+21%, H; +12%, R) but lowered values for cows/h per operator (-35%, H; -12%, R). The holistic methodology applied in this study allowed us to add novel data to the literature by quantifying the effects of seasonality, the number of operators present at milking, and parlor automation use on milking efficiency across 2 parlor types.
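The three key performance indicators described above are simple ratios over the "total process time". A minimal sketch, with hypothetical numbers rather than data from the study, showing how they could be computed:

```python
# Milking-efficiency KPIs over "total process time" (first cow entering the
# holding yard to the end of cleaning). All input values are hypothetical.

def milking_kpis(cows_milked, litres_harvested, n_operators,
                 process_start_h, process_end_h):
    total_process_time_h = process_end_h - process_start_h
    cows_per_hour = cows_milked / total_process_time_h
    cows_per_operator_hour = cows_per_hour / n_operators
    litres_per_hour = litres_harvested / total_process_time_h
    return cows_per_hour, cows_per_operator_hour, litres_per_hour

# Hypothetical herringbone milking: 180 cows, 3,600 L, 1 operator, 2 h process.
print(milking_kpis(180, 3600, 1, process_start_h=6.0, process_end_h=8.0))
```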


Subjects
Lactation, Milk, Female, Cattle, Animals, Ireland, Dairying/methods, Automation, Farms
4.
J Arthroplasty ; 39(1): 255-260, 2024 01.
Article in English | MEDLINE | ID: mdl-37295618

ABSTRACT

BACKGROUND: Traffic in the operating room (OR) creates turbulence and contaminates the air through bacterial shedding. Therefore, we examined: (1) whether the number and duration of door openings were associated with increased particles during arthroplasty surgery; (2) whether traffic cameras installed in the operating room were an effective intervention to decrease traffic and particles during arthroplasty surgery; and (3) the effectiveness of traffic cameras over time. METHODS: Fifty cases were included between November 3, 2021, and June 22, 2022, with 25 cases in each group. Two particle counters were used to count particles sized 0.5 to 10 µm. One counter was positioned within the sterile field, and another between the OR doors. Two door counters were mounted to count door openings. For the intervention, traffic cameras were mounted facing each door and took snapshots with door openings. RESULTS: The number of door openings per minute was 30% lower in the Intervention group (P < .001). The Intervention group had 26 to 43% fewer particles in the operative field (0.5 µm, P = .01; 0.7 µm, P = .008; 1 µm, P = .007; 2.5 µm, P = .006; 5 µm, P = .01; and 10 µm, P = .01). Particles between the OR doors were decreased by 2 to 42% in the Intervention group, and the difference was significant for 0.5 µm (P = .03), 0.7 µm (P = .02), and 1 µm (P = .03). The decreases in door openings and particles were sustained over the study period. CONCLUSION: The use of traffic cameras was an effective and sustainable method to limit OR traffic and door openings, which resulted in a reduction in particles in the operating room.
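The abstract reports group comparisons without naming the statistical test used. Purely as an illustration, the sketch below compares simulated door-opening rates for two groups of 25 cases with a Mann-Whitney U test; both the data and the choice of test are assumptions, not details from the paper.

```python
# Illustrative comparison of door-opening rates (openings/min) between
# Baseline and Intervention cases; data and test choice are assumptions.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.60, scale=0.10, size=25)      # openings/min
intervention = rng.normal(loc=0.42, scale=0.10, size=25)  # roughly 30% lower

stat, p = mannwhitneyu(baseline, intervention, alternative="two-sided")
reduction = 1 - intervention.mean() / baseline.mean()
print(f"relative reduction: {reduction:.0%}, P = {p:.3g}")
```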


Subjects
Arthroplasty, Operating Rooms, Humans
5.
Sensors (Basel) ; 24(3)2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38339742

ABSTRACT

Scientific-grade cameras are frequently employed in fields such as spectral imaging, aerospace, medical detection, and astronomy, and are characterized by high precision, high quality, fast speed, and high sensitivity. In astronomy in particular, obtaining information about faint light often requires long exposures with high-resolution cameras, which means that external disturbances can destabilize the camera and increase errors in the detection results. This paper investigates the effect of displacement introduced by various vibration factors on the imaging of an astronomical camera during long exposure. The sources of vibration are divided into external and internal vibration. External vibration mainly includes environmental vibration and resonance effects, while internal vibration refers mainly to vibration caused by the forces generated by the refrigeration module inside the camera during operation. The cooling module operates in either a water-cooled or an air-cooled mode. Displacement and vibration experiments conducted on the camera show that the air-cooled mode produces greater displacement changes than the water-cooled mode, blurring the imaging results and lowering the accuracy of astronomical detection. This paper compares the displacements produced by the two methods, fan cooling and water-circulation cooling, and proposes improvements to minimize displacement variations in the camera and improve imaging quality. This study provides a reference for the design of astronomical detection instruments and for identifying the vibration sources of cameras, which helps to promote the further development of astronomical detection.

6.
Sensors (Basel) ; 24(13)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39000817

ABSTRACT

Parallax handling and structure preservation have long been important and challenging tasks in image stitching. In this paper, an image stitching method is proposed that uses a sliding camera to eliminate perspective deformation and asymmetric optical flow to resolve parallax. By maintaining the viewpoints of the two input images in the non-overlapping areas of the mosaic and creating a virtual camera by interpolation in the overlapping area, the viewpoint is gradually transformed from one to the other, completing a smooth transition between the two image viewpoints and reducing perspective deformation. Two coarsely aligned warped images are generated with the help of a global projection plane. After that, optical flow propagation and a gradient descent method are used to quickly calculate the bidirectional asymmetric optical flow between the two warped images, and an optical-flow-based method is used to further align them and reduce parallax. In the image blending, a softmax function and the registration error are used to adjust the width of the blending area, further eliminating ghosting and reducing parallax. Finally, comparison with APAP, AANAP, SPHP, SPW, TFT, and REW shows that our method not only effectively handles perspective deformation but also gives more natural transitions between images. At the same time, our method robustly reduces local misalignment in various scenarios, with a higher structural similarity index. A scoring method combining subjective and objective evaluations of perspective deformation, local alignment, and runtime is defined and used to rate all methods, where our method ranks first.
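The full pipeline is not reproduced here, but the bidirectional dense optical flow step it relies on can be sketched with standard OpenCV calls; the file names and Farneback parameters below are illustrative assumptions, not values from the paper.

```python
# Generic sketch: bidirectional dense optical flow between two coarsely
# aligned (warped) images, plus a simple flow-based warp. Not the paper's
# propagation/gradient-descent scheme.
import cv2
import numpy as np

img1 = cv2.imread("warp1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
img2 = cv2.imread("warp2.png", cv2.IMREAD_GRAYSCALE)

# Farneback flow (pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)
flow_12 = cv2.calcOpticalFlowFarneback(img1, img2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
flow_21 = cv2.calcOpticalFlowFarneback(img2, img1, None, 0.5, 3, 15, 3, 5, 1.2, 0)

h, w = img1.shape
grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
# Pull pixels of img2 back toward img1 along the forward flow; flow_21 would be
# used symmetrically for the other side of the blend.
aligned_2 = cv2.remap(img2, grid_x + flow_12[..., 0], grid_y + flow_12[..., 1],
                      cv2.INTER_LINEAR)
```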

7.
Sensors (Basel) ; 24(8)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38676139

ABSTRACT

Among the common applications of plenoptic cameras are depth reconstruction and post-shot refocusing. These require a calibration relating the camera-side light field to that of the scene. Numerous methods with this goal have been developed based on thin lens models for the plenoptic camera's main lens and microlenses. Our work addresses the often-overlooked role of the main lens exit pupil in these models, specifically in the decoding process of standard plenoptic camera (SPC) images. We formally deduce the connection between the refocusing distance and the resampling parameter for the decoded light field and provide an analysis of the errors that arise when the exit pupil is not considered. In addition, previous work is revisited with respect to the exit pupil's role, and all theoretical results are validated through a ray tracing-based simulation. With the public release of the evaluated SPC designs alongside our simulation and experimental data, we aim to contribute to a more accurate and nuanced understanding of plenoptic camera optics.

8.
Sensors (Basel) ; 24(7)2024 Mar 31.
Article in English | MEDLINE | ID: mdl-38610457

ABSTRACT

This paper presents a visual compass method utilizing global features, specifically spherical moments. One of the primary challenges faced by photometric methods employing global features is the variation in the image caused by the appearance and disappearance of regions within the camera's field of view as it moves. Additionally, modeling the impact of translational motion on the values of global features poses a significant challenge, as it is dependent on scene depths, particularly for non-planar scenes. To address these issues, this paper combines the utilization of image masks to mitigate abrupt changes in global feature values and the application of neural networks to tackle the modeling challenge posed by translational motion. By employing masks at various locations within the image, multiple estimations of rotation corresponding to the motion of each selected region can be obtained. Our contribution lies in offering a rapid method for implementing numerous masks on the image with real-time inference speed, rendering it suitable for embedded robot applications. Extensive experiments have been conducted on both real-world and synthetic datasets generated using Blender. The results obtained validate the accuracy, robustness, and real-time performance of the proposed method compared to a state-of-the-art method.

9.
J Exp Bot ; 74(17): 5255-5272, 2023 09 13.
Article in English | MEDLINE | ID: mdl-37249250

ABSTRACT

Pistia stratiotes is an aquatic plant with a complex structure that allows it to stay afloat. It grows quickly and, in large numbers, becomes an undesirable invasive species. Describing the dynamics of a water drop splash on P. stratiotes leaves can contribute to increasing knowledge of its behavior and to finding alternative methods for eradicating it or using it for the benefit of the environment. The non-wettable surface of P. stratiotes presents a complex structure: simple uniseriate trichomes as well as ridges and veins. We analyzed the drop impact on a leaf placed on the water surface and recorded it with high-speed cameras. Based on the recordings, quantitative and qualitative analyses were performed. After impacting the leaf, the water drop spread until it reached its maximum surface area, accompanied by the ejection of early droplets in the initial stage. Thereafter, three scenarios of water behavior were observed: (i) drop receding and stabilization; (ii) drop receding and ejection of late droplets formed in the later stage as an effect of elastic deformation of the leaf; and (iii) drop breaking apart and ejection of late droplets. The results indicated that the increasing kinetic energy of the impacting drops, expressed by the Weber number, and the complex leaf surface affect the course of the splash. The simple uniseriate trichomes of the P. stratiotes leaf and the high energy of the falling drops were responsible for the formation and characteristics of the early droplets. The presence of ridges and veins and the leaf's mechanical response had an impact on the occurrence of late droplets.
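The Weber number mentioned above compares inertial to surface-tension forces, We = ρv²D/σ. A minimal sketch using generic properties of water and a hypothetical drop (not values from the study):

```python
# Weber number We = rho * v**2 * D / sigma; all values are generic/hypothetical.

def weber_number(density, velocity, diameter, surface_tension):
    return density * velocity**2 * diameter / surface_tension

rho = 998.0     # kg/m^3, water at ~20 degC
sigma = 0.0728  # N/m, water-air surface tension
D = 3.0e-3      # m, hypothetical drop diameter
v = 2.5         # m/s, hypothetical impact velocity
print(f"We = {weber_number(rho, v, D, sigma):.0f}")
```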


Subjects
Araceae, Hydrophobic and Hydrophilic Interactions, Plants, Plant Leaves/physiology, Water/analysis
10.
Biol Cybern ; 117(4-5): 389-406, 2023 10.
Article in English | MEDLINE | ID: mdl-37733033

ABSTRACT

Foveation can be defined as the organic action of directing the gaze towards a visual region of interest to acquire relevant information selectively. With the recent advent of event cameras, we believe that taking advantage of this visual neuroscience mechanism would greatly improve the efficiency of event data processing. Indeed, applying foveation to event data would make it possible to comprehend the visual scene while significantly reducing the amount of raw data to handle. In this respect, we demonstrate the stakes of neuromorphic foveation theoretically and empirically across several computer vision tasks, namely semantic segmentation and classification. We show that foveated event data have a significantly better trade-off between quantity and quality of the information conveyed than high- or low-resolution event data. Furthermore, this compromise extends even over fragmented datasets. Our code is publicly available online at: https://github.com/amygruel/FoveationStakes_DVS.


Subjects
Computers, Ocular Vision
11.
Nutr J ; 22(1): 7, 2023 01 12.
Article in English | MEDLINE | ID: mdl-36635676

ABSTRACT

BACKGROUND: Traditional recall approaches of data collection for assessing dietary intake and time use are prone to recall bias. Studies in high- and middle-income countries show that automated wearable cameras are a promising method for collecting objective health behavior data and may improve study participants' recall of foods consumed and daily activities performed. This study aimed to evaluate the feasibility of using automated wearable cameras in rural eastern Uganda to collect dietary and time-use data. METHODS: Mothers of young children (n = 211) wore an automated wearable camera on 2 non-consecutive days while continuing their usual activities. The day after wearing the camera, participants' dietary diversity and time use were assessed using an image-assisted recall. Their experiences of the method were assessed via a questionnaire. RESULTS: Most study participants reported their experiences with the automated wearable camera and image-assisted recall to be good (36%) or very good (56%) and would participate in a similar study in the future (97%). None of the eight study withdrawals could be definitively attributed to the camera. Fifteen percent of the data were lost due to device malfunction, and twelve percent of the images were "uncodable" due to insufficient lighting. Processing and analyzing the images were labor-intensive, time-consuming, and prone to human error. Half (53%) of participants had difficulty interpreting the images captured by the camera. CONCLUSIONS: Using an automated wearable camera in rural eastern Uganda was feasible, although improvements are needed to overcome the challenges common to rural, low-income country contexts and to reduce the burdens posed on both participants and researchers. To improve the quality of data obtained, future automated wearable camera-based image-assisted recall studies should use a structured data format to reduce image coding time; electronically code the data in the field, as an output of the image review process, to eliminate ex post facto data entry; and, ideally, use computer-assisted personal interviewing software to ensure completion and reduce errors. In-depth formative work in partnership with key local stakeholders (e.g., researchers from low-income countries, representatives from government and/or other institutional review boards, and community representatives and local leaders) is also needed to identify practical approaches to ensuring that the ethical rights of automated wearable camera study participants in low-income countries are adequately protected.


Subjects
Diet, Wearable Electronic Devices, Child, Humans, Preschool Child, Feasibility Studies, Uganda, Cross-Sectional Studies
12.
BMC Health Serv Res ; 23(1): 899, 2023 Aug 23.
Article in English | MEDLINE | ID: mdl-37612649

ABSTRACT

BACKGROUND: There is growing public policy and research interest in the development and use of various technologies for managing violence in healthcare settings to protect the health and well-being of patients and workers. However, little research exists on the impact of technologies on violence prevention, and in particular in the context of rehabilitation settings. Our study addresses this gap by exploring the perceptions and experiences of rehabilitation professionals regarding how technologies are used (or not) for violence prevention, and their perceptions regarding their efficacy and impact. METHODS: This was a descriptive qualitative study with 10 diverse professionals (e.g., physical therapy, occupational therapy, recreation therapy, nursing) who worked across inpatient and outpatient settings in one rehabilitation hospital. Data collection consisted of semi-structured interviews with all participants. A conventional approach to content analysis was used to identify key themes. RESULTS: We found that participants used three types of technologies for violence prevention: an electronic patient flagging system, fixed and portable emergency alarms, and cameras. All of these were perceived by participants as being largely ineffective for violence prevention due to poor design features, malfunction, limited resources, and incompatibility with the culture of care. Our analysis further suggests that professionals' perception that these technologies would not prevent violence may be linked to their focus on individual patients, with a corresponding lack of attention to structural factors, including the culture of care and the organizational and physical environment. CONCLUSIONS: Our findings suggest an urgent need for greater consideration of structural factors in efforts to develop effective interventions for violence prevention in rehabilitation settings, including the design and implementation of new technologies.


Subjects
Occupational Therapy, Humans, Rehabilitation Hospitals, Data Collection, Electronics, Violence/prevention & control
13.
J Dairy Sci ; 106(12): 9006-9015, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37641284

ABSTRACT

Recording complex phenotypes on a large scale is becoming possible with the incorporation of recently developed new technologies. One of these new technologies is the use of 3-dimensional (3D) cameras on commercial farms to measure feed intake and body weight (BW) daily. Residual feed intake (RFI) has been proposed as a proxy for feed efficiency in several species, including cattle, pigs, and poultry. Dry matter intake (DMI) and BW records are required to calculate RFI, and the use of this new technology will help increase the number of individual records more efficiently. The aim of this study was to estimate genetic parameters (including genetic correlations) for DMI and BW obtained by 3D cameras from 6,000 cows on commercial farms from the breeds Danish Holstein, Jersey, and Nordic Red. Additionally, heritabilities per parity and genetic correlations among parities were estimated for DMI and BW in the 3 breeds. Data included 158,000 weekly records of DMI and BW obtained between 2019 and 2022 on 17 commercial farms. Estimated heritabilities for DMI ranged from 0.17 to 0.25, whereas for BW they ranged from 0.44 to 0.58. The genetic correlations between DMI and BW were moderately positive (0.58-0.65). Genetic correlations among parities were high for both traits in the 3 breeds, except for DMI between first parity and later parities in Holstein, where they dropped to 0.62. Based on these results, we conclude that DMI and BW phenotypes measured by 3D cameras are heritable in the 3 dairy breeds and that their heritabilities are comparable to those obtained by traditional methods (scales and feed bins). The high heritabilities and the correlations of 3D measurements with the true trait in previous studies demonstrate the potential of this new technology for measuring feed intake and BW in real time. In conclusion, 3D camera technology has the potential to become a valuable tool for automatic and continuous recording of feed intake and BW on commercial farms.
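RFI is conventionally obtained as the residual of a regression of DMI on energy sinks such as metabolic body weight and milk yield; the abstract does not give the authors' exact model, so the sketch below is a generic version on simulated data.

```python
# Generic residual feed intake (RFI) computation: regress DMI on metabolic
# body weight (BW**0.75) and milk yield, then take the residuals.
# Simulated data; not the study's model or records.
import numpy as np

rng = np.random.default_rng(1)
n = 500
bw = rng.normal(600, 60, n)       # kg
milk = rng.normal(30, 5, n)       # kg/day
dmi = 0.10 * bw**0.75 + 0.35 * milk + rng.normal(0, 1.0, n)  # kg DM/day

X = np.column_stack([np.ones(n), bw**0.75, milk])
beta, *_ = np.linalg.lstsq(X, dmi, rcond=None)
rfi = dmi - X @ beta              # negative RFI = eats less than predicted
print(rfi[:5].round(2))
```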


Subjects
Eating, Lactation, Animals, Cattle/genetics, Female, Pregnancy, Animal Feed/analysis, Body Weight/genetics, Denmark, Eating/genetics, Farms, Lactation/genetics
14.
Sensors (Basel) ; 23(4)2023 Feb 14.
Article in English | MEDLINE | ID: mdl-36850751

ABSTRACT

The event camera efficiently detects scene radiance changes and produces an asynchronous event stream with low latency, high dynamic range (HDR), high temporal resolution, and low power consumption. However, the large output data volume caused by the asynchronous imaging mechanism limits increases in the spatial resolution of the event camera. In this paper, we propose a novel event camera super-resolution (SR) network (EFSR-Net) based on a deep learning approach to address the problems of low spatial resolution and poor visualization of event cameras. The network model is capable of reconstructing high-resolution (HR) intensity images using event streams and active pixel sensor (APS) frame information. We design coupled response blocks (CRB) in the network that fuse the feature information of both data sources to recover detailed textures in the shadows of real images. We demonstrate that our method is able to reconstruct high-resolution intensity images with more detail and less blurring on both synthetic and real datasets. The proposed EFSR-Net can improve the peak signal-to-noise ratio (PSNR) metric by 1-2 dB compared with state-of-the-art methods.
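The PSNR figure quoted above has a standard definition; a minimal sketch (not the authors' evaluation code):

```python
# Peak signal-to-noise ratio between a reference image and an estimate.
import numpy as np

def psnr(reference, estimate, max_value=255.0):
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_value**2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)
print(f"PSNR = {psnr(ref, noisy):.2f} dB")
```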

15.
Sensors (Basel) ; 23(15)2023 Jul 25.
Article in English | MEDLINE | ID: mdl-37571439

ABSTRACT

Event cameras, also known as dynamic vision sensors, are emerging bio-mimetic sensors with microsecond-level responsiveness. Due to the inherent sensitivity of event camera hardware to light sources and interference from various external factors, various types of noise are inevitably present in the camera's output. This noise can degrade the camera's perception of events and the performance of algorithms for processing event streams. Moreover, since the output of event cameras is in the form of address-event representation, efficient denoising methods for traditional frame images are not applicable in this case. Most existing denoising methods for event cameras target background activity noise and sometimes remove real events as noise. Furthermore, these methods are ineffective in handling noise generated by high-frequency flickering light sources and changes in diffuse light reflection. To address these issues, we propose an event stream denoising method based on salient region recognition. This method can effectively remove conventional background activity noise as well as irregular noise caused by diffuse reflection and flickering light source changes without significantly losing real events. Additionally, we introduce an evaluation metric that can be used to assess the noise removal efficacy and the preservation of real events for various denoising methods.
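For context only, the sketch below implements a classic background-activity filter of the kind the authors contrast with their approach: an event is kept if any pixel in its neighbourhood fired recently. It is not the salient-region method proposed above, and the sensor size, time window, and demo data are assumptions.

```python
# Classic background-activity filter for address-event data (illustrative).
import numpy as np

def background_activity_filter(events, width, height, dt_us=5000):
    """events: (N, 4) array of (x, y, t_us, polarity) rows, sorted by t_us.
    Keep an event if any pixel in its 3x3 neighbourhood fired within dt_us."""
    last_ts = np.full((height, width), -np.inf)
    keep = np.zeros(len(events), dtype=bool)
    for i, (x, y, t, _p) in enumerate(events):
        x, y = int(x), int(y)
        patch = last_ts[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
        keep[i] = (t - patch.max()) <= dt_us
        last_ts[y, x] = t
    return events[keep]

# Tiny demo on random, noise-like events for a hypothetical 64x48 sensor.
rng = np.random.default_rng(0)
evts = np.column_stack([rng.integers(0, 64, 200), rng.integers(0, 48, 200),
                        np.sort(rng.integers(0, 1_000_000, 200)),
                        rng.integers(0, 2, 200)])
print(len(background_activity_filter(evts, width=64, height=48)))
```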

16.
Sensors (Basel) ; 23(3)2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36772578

ABSTRACT

In this paper, a procedure for calibrating the image sensors of mobile devices and evaluating the results was developed and implemented in a software application. For the calibration, two methods were used: an OpenCV function and a photogrammetry method, both based on the same camera model. For evaluating the calibration results, a method is proposed that uses single-image rectification to examine the performance of the calibration parameters in a practical and supervisory way. Based on an experiment and a follow-up study, a standard is proposed for the number and shooting angles of the photographs that should be used in the calibration. During development, problems related to processing large images and automating processes were solved. Finally, the procedure and software application were tested in a case study.
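The OpenCV route mentioned above can be sketched with the library's standard calibration calls, followed by single-image rectification via undistortion; the chessboard size and file names are assumptions, not details from the paper.

```python
# Sketch: chessboard-based calibration with cv2.calibrateCamera, then
# single-image rectification with cv2.undistort. Board size and file names
# are assumptions.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                    # inner corners of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("calib_*.jpg"):               # hypothetical image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size,
                                                 None, None)
rectified = cv2.undistort(cv2.imread("test.jpg"), K, dist)  # hypothetical test image
```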

17.
Sensors (Basel) ; 23(4)2023 Feb 14.
Article in English | MEDLINE | ID: mdl-36850759

ABSTRACT

In the context of setting up a stereo high-speed camera system for accurate 3D measurements in highly dynamic experiments, the potential of a "Fastcam SA-X2" stereo system is evaluated by testing different camera configurations and motion scenarios. A thorough accuracy analysis is performed using spatial rigid-body transformations and relative measurement analyses of photogrammetrically reconstructed surfaces of nondeformable objects. The effects of camera calibration, exposure time, object velocity, and object surface pattern quality on the quality of adjusted 3D coordinates are taken into consideration. While the exposure time does not significantly influence the quality of the static measurements, the results of dynamic experiments demonstrate that not only an insufficient frame rate but also an increased noise level resulting from short exposure times affects 3D coordinate accuracy. Using appropriate configurations to capture dynamic events, the errors in dynamic experiments do not differ significantly from the errors obtained in static measurements. A spatial mapping error of less than 1 µm is obtained through the experiments, with proper testing configurations for an object surface area of 5×20 mm. These findings are relevant for users of high-speed stereo imaging techniques to perform geometric 3D measurements, deformation, and crack analyses.
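One common way to quantify the kind of rigid-body mapping error analysed above is a Kabsch/Procrustes fit between two reconstructions of the same rigid object, followed by inspection of the residuals. The sketch below uses simulated points, not the study's measurements.

```python
# Least-squares rigid-body fit (Kabsch) and residual mapping error.
import numpy as np

def rigid_fit(A, B):
    """Rotation R and translation t minimizing ||A @ R.T + t - B||."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cB - cA @ R.T
    return R, t

A = np.random.default_rng(0).normal(size=(100, 3))  # simulated surface points
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
B = A @ R_true.T + np.array([0.1, -0.2, 0.05])      # rigidly moved copy
R, t = rigid_fit(A, B)
residuals = np.linalg.norm(A @ R.T + t - B, axis=1)
print(f"mean mapping error: {residuals.mean():.2e}")
```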

18.
Sensors (Basel) ; 23(15)2023 Aug 04.
Article in English | MEDLINE | ID: mdl-37571727

ABSTRACT

Three-dimensional (3D) cameras used for gait assessment obviate the need for bodily markers or sensors, making them particularly interesting for clinical applications. Due to their limited field of view, their application has predominantly focused on evaluating gait patterns over short walking distances. However, assessment of gait consistency requires testing over a longer walking distance. The aim of this study is to validate, for gait assessment, the accuracy of a previously developed method that determines spatiotemporal walking parameters and kinematics measured with a 3D camera mounted on a mobile robot base (ROBOGait). Walking parameters measured with this system were compared with measurements from Xsens IMUs. The experiments were performed on a non-linear corridor of approximately 50 m, resembling the environment of a conventional rehabilitation facility. Eleven individuals exhibiting normal motor function were recruited to walk and to simulate gait patterns representative of common neurological conditions: cerebral palsy, multiple sclerosis, and cerebellar ataxia. Generalized estimating equations were used to determine statistical differences between the measurement systems and between walking conditions. When comparing walking parameters between paired measures of the systems, significant differences were found for eight of the 18 descriptors: range of motion (ROM) of trunk and pelvis tilt, maximum knee flexion in loading response, knee position at toe-off, stride length, step time, cadence, and stance duration. When analyzing how ROBOGait can distinguish simulated pathological gait from physiological gait, a mean accuracy of 70.4%, a sensitivity of 49.3%, and a specificity of 74.4% were found when compared with the Xsens system. The most important gait abnormalities related to the clinical conditions were successfully detected by ROBOGait. The descriptors that best distinguished simulated pathological walking from normal walking in both systems were step width and stride length. This study underscores the promising potential of 3D cameras and encourages exploring their use in clinical gait analysis.
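For orientation, the sketch below shows how basic spatiotemporal descriptors such as stride time, stride length, and cadence follow from heel-strike times and positions of one foot; it is a simplification with hypothetical numbers, not the ROBOGait or Xsens processing pipeline.

```python
# Stride time, stride length, and cadence from same-foot heel strikes.
# Times/positions are hypothetical.
import numpy as np

heel_strike_t = np.array([0.00, 1.10, 2.18, 3.30, 4.41])  # s
heel_strike_x = np.array([0.00, 1.25, 2.52, 3.74, 5.01])  # m along the path

stride_time = np.diff(heel_strike_t)          # strike-to-strike of the same foot
stride_length = np.diff(heel_strike_x)
cadence = 2 * 60.0 / stride_time.mean()       # steps/min (two steps per stride)
print(stride_time.round(2), stride_length.round(2), round(cadence, 1))
```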


Subjects
Gait, Walking, Humans, Gait/physiology, Walking/physiology, Lower Extremity, Knee, Knee Joint, Biomechanical Phenomena
19.
Sensors (Basel) ; 23(15)2023 Aug 05.
Article in English | MEDLINE | ID: mdl-37571742

ABSTRACT

The identification of respiratory patterns based on the movement of the chest wall can assist in monitoring an individual's health status, particularly those with neuromuscular disorders, such as hemiplegia and Duchenne muscular dystrophy. Thoraco-abdominal asynchrony (TAA) refers to the lack of coordination between the rib cage and abdominal movements, characterized by a time delay in their expansion. Motion capture systems, like optoelectronic plethysmography (OEP), are commonly employed to assess these asynchronous movements. However, alternative technologies able to capture chest wall movements without physical contact, such as RGB digital cameras and time-of-flight digital cameras, can also be utilized due to their accessibility, affordability, and non-invasive nature. This study explores the possibility of using a single RGB digital camera to record the kinematics of the thoracic and abdominal regions by placing four non-reflective markers on the torso. In order to choose the positions of these markers, we previously investigated the movements of 89 chest wall landmarks using OEP. Laboratory tests and volunteer experiments were conducted to assess the viability of the proposed system in capturing the kinematics of the chest wall and estimating various time-related respiratory parameters (i.e., fR, Ti, Te, and Ttot) as well as TAA indexes. The results demonstrate a high level of agreement between the detected chest wall kinematics and the reference data. Furthermore, the system shows promising potential in estimating time-related respiratory parameters and identifying phase shifts indicative of TAA, thus suggesting its feasibility in detecting abnormal chest wall movements without physical contact with a single RGB camera.
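One common way to estimate a thoraco-abdominal phase shift is to cross-correlate the rib-cage and abdominal displacement signals and convert the lag of the correlation peak into degrees of the breathing cycle; the study's own TAA indexes may be defined differently. An illustrative sketch on synthetic signals:

```python
# Phase shift between rib-cage and abdominal motion via cross-correlation.
# Synthetic signals; frame rate, breathing rate, and lag are assumptions.
import numpy as np

fs = 30.0                                   # Hz, hypothetical camera frame rate
t = np.arange(0, 30, 1 / fs)
f_breath = 0.25                             # Hz, ~15 breaths/min
rib = np.sin(2 * np.pi * f_breath * t)
abd = np.sin(2 * np.pi * f_breath * t - np.deg2rad(40))  # abdomen lags by 40 deg

xcorr = np.correlate(abd - abd.mean(), rib - rib.mean(), mode="full")
lag_s = (np.argmax(xcorr) - (len(t) - 1)) / fs            # positive: abdomen lags
print(f"estimated phase shift: {360.0 * f_breath * lag_s:.1f} deg")
```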


Subjects
Thoracic Wall, Humans, Feasibility Studies, Biomechanical Phenomena, Respiratory Mechanics, Respiration, Plethysmography/methods
20.
Sensors (Basel) ; 23(9)2023 Apr 24.
Article in English | MEDLINE | ID: mdl-37177448

ABSTRACT

The purpose of the present study was to create a two-dimensional model that illustrates the landscape of shooting opportunities at goal during a competitive football match. For that purpose, we analysed exemplar attacking subphases of each team when the ball was in the last 30 m of the field. The players' positional data (x and y coordinates) and the ball's position were captured at 25 fps and processed to create heatmaps illustrating the shooting opportunities available in the first and second halves in different field areas. Moreover, the time for which the shooting opportunities were available was estimated. Results show that in the observed match, most of the shooting opportunities lasted between 1 and 2 s, with only a few opportunities lasting more than 2 s. The shooting opportunities did not display a homogenous distribution over the field. The obtained heatmaps provide valuable and specific information about each team's shooting opportunities, allowing the identification of the most vulnerable areas. Additionally, the amount, duration, and location of the shooting opportunities showed significant differences between teams. This customizable model is sensitive to the features of shooting opportunities and can be used in real-time video analysis for individual and collective performance analysis.
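A heatmap like the one described can be built by binning the (x, y) positions of frames flagged as shooting opportunities, sampled at 25 fps. The sketch below uses simulated coordinates and an arbitrary opportunity flag, not the match data.

```python
# Bin flagged (x, y) samples into a field grid and convert frame counts to
# seconds at 25 fps. Pitch size, grid, and the opportunity flag are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(75, 105, n)          # last 30 m of a 105 m pitch
y = rng.uniform(0, 68, n)
is_opportunity = rng.random(n) < 0.05

H, x_edges, y_edges = np.histogram2d(x[is_opportunity], y[is_opportunity],
                                     bins=[15, 10], range=[[75, 105], [0, 68]])
dwell_s = H / 25.0                   # frames -> seconds at 25 fps
print(dwell_s.sum().round(1), dwell_s.max().round(2))
```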
