Results 1 - 20 of 28,622
1.
BMC Psychiatry ; 24(1): 375, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773509

ABSTRACT

BACKGROUND: Obsessive slowness, a symptom of obsessive-compulsive disorder (OCD), is characterized by compulsive behavior and significant slowness of movement. Primary obsessive slowness (POS) is defined as a condition in which a series of actions is segmented, and the patient spends an unlimited amount of time performing each action while checking each one, resulting in cessation or slowness of movement. POS is often difficult to treat with exposure and response prevention, which is considered effective in general OCD, and no standard treatment has been established. Here, we discuss the effectiveness of psychoeducation and modeling using video recordings in the treatment of POS. CASE PRESENTATION: We report a case of POS in a 19-year-old woman. Each action was subdivided and ordered, and the patient could not proceed to the next action without confirming that the previous step had been performed. Therefore, she could not live her daily life independently; for instance, toileting and bathing required more than 1 h, even with assistance. After more than 5 months of long-term treatment, including pharmacotherapy, psychoeducation, and modeling with video recordings, she recovered sufficiently to live her daily life independently. CONCLUSION: Psychoeducation and behavioral therapy can effectively treat POS. In particular, modeling with video recordings may be an easy-to-use option for POS treatment.


Subject(s)
Obsessive-Compulsive Disorder, Video Recording, Humans, Female, Obsessive-Compulsive Disorder/therapy, Young Adult, Hospitalization, Patient Education as Topic/methods, Adult, Treatment Outcome
2.
BMC Anesthesiol ; 24(1): 181, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773386

ABSTRACT

BACKGROUND: Endotracheal intubation is challenging during cardiopulmonary resuscitation, and video laryngoscopy has shown benefits for this procedure. The aim of this study was to compare the effectiveness of various intubation approaches, including the bougie-first, preloaded bougie, endotracheal tube (ETT) with stylet, and ETT without stylet, on first-attempt success using video laryngoscopy during chest compression. METHODS: This was a randomized crossover trial conducted in a general tertiary teaching hospital. We included anesthesia residents in postgraduate years one to three who passed the screening test. Each resident performed intubation with video laryngoscopy using the four approaches in a randomized sequence on an adult manikin during continuous chest compression. The primary outcome was first-attempt success, defined as starting ventilation within one minute. RESULTS: A total of 260 endotracheal intubations conducted by 65 residents were randomized and analyzed, with 65 procedures in each group. First-attempt success occurred in 64 (98.5%), 57 (87.7%), 56 (86.2%), and 46 (70.8%) intubations in the bougie-first, preloaded bougie, ETT with stylet, and ETT without stylet approaches, respectively. The bougie-first approach had a significantly higher probability of first-attempt success than the preloaded bougie approach [risk ratio (RR) 8.00, 95% confidence interval (CI) 1.03 to 62.16, P = 0.047], the ETT with stylet approach (RR 9.00, 95% CI 1.17 to 69.02, P = 0.035), and the ETT without stylet approach (RR 19.00, 95% CI 2.62 to 137.79, P = 0.004) in a generalized estimating equation logistic model accounting for clustering of intubations performed by the same resident. In addition, the bougie-first approach did not result in prolonged intubation or increased self-reported difficulty among the study participants.
CONCLUSIONS: The bougie-first approach with video laryngoscopy had the highest probability of first-attempt success during chest compression. These results can help inform the choice of intubation approach during CPR. However, further studies in an actual clinical environment are warranted to validate these findings. TRIAL REGISTRATION: Clinicaltrials.gov; identifier: NCT05689125; date: January 18, 2023.
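The quoted ratios and confidence intervals are numerically consistent with risk ratios of first-attempt failure computed from the raw counts (e.g., 19 failures without a stylet versus 1 with the bougie-first approach) together with a Wald interval on the log scale. The sketch below is offered as an illustration of that arithmetic, not as a reconstruction of the authors' GEE model:

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio of group A vs. group B with a 95% Wald CI on the log scale."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rr = p_a / p_b
    # Standard error of log(RR) via the usual delta-method formula
    se = math.sqrt((1 - p_a) / events_a + (1 - p_b) / events_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# First-attempt failures: 19/65 for ETT without stylet vs. 1/65 for bougie-first
rr, (lo, hi) = risk_ratio(19, 65, 1, 65)
```

With these counts the point estimate is 19.0 and the interval is close to the published 2.62 to 137.79, suggesting this is the quantity being reported.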


Subject(s)
Cardiopulmonary Resuscitation, Cross-Over Studies, Intubation, Intratracheal, Laryngoscopy, Manikins, Video Recording, Intubation, Intratracheal/methods, Intubation, Intratracheal/instrumentation, Humans, Laryngoscopy/methods, Laryngoscopy/instrumentation, Cardiopulmonary Resuscitation/methods, Male, Female, Adult, Internship and Residency/methods, Video-Assisted Techniques and Procedures
3.
Biosens Bioelectron ; 258: 116318, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38701538

ABSTRACT

We report a massive field-of-view and high-speed videography platform for measuring the sub-cellular traction forces of more than 10,000 biological cells over 13 mm² at 83 frames per second. Our Single-Pixel Optical Tracers (SPOT) tool uses 2-dimensional diffraction gratings embedded into a soft substrate to convert cells' mechanical traction force into optical colors detectable by a video camera. The platform measures the sub-cellular traction forces of diverse cell types, including tightly connected tissue sheets and near-isolated cells. We used this platform to explore mechanical wave propagation in a tightly connected sheet of Neonatal Rat Ventricular Myocytes (NRVMs) and discovered that the activation times of some tissue regions deviate from the overall spiral wave behavior of the cardiac wave.


Subject(s)
Myocytes, Cardiac, Animals, Rats, Myocytes, Cardiac/cytology, Biosensing Techniques/methods, Biosensing Techniques/instrumentation, Equipment Design, Video Recording, Cells, Cultured
4.
Brain Behav ; 14(5): e3510, 2024 May.
Article in English | MEDLINE | ID: mdl-38715394

ABSTRACT

BACKGROUND: Multiple system atrophy (MSA) is a neurodegenerative disease that progresses rapidly and has a poor prognosis. This study aimed to assess the value of video oculomotor evaluation (VOE) in the differential diagnosis of MSA and Parkinson's disease (PD). METHODS: In total, 28 patients with MSA, 31 patients with PD, and 30 age- and sex-matched healthy controls (HC) were screened and included in this study. The evaluation consisted of a gaze-holding test, smooth pursuit eye movement (SPEM), random saccade, and optokinetic nystagmus (OKN) tests. RESULTS: The MSA and PD groups had more abnormalities and decreased SPEM gain compared with the HC group (64.29%, 35.48%, 10%, p < .001). The SPEM gain in the MSA group was significantly lower than that in the PD group at specific frequencies. Patients with MSA and PD showed prolonged latencies in all saccade directions compared with HC. However, the two diseases showed no significant differences in the saccade parameters. The OKN gain gradually decreased from the HC to the PD and MSA groups (p < .05). Compared with the PD group, the gain in the MSA group was further decreased in the OKN test at 30°/s (left, p = .010; right, p = .016). Receiver operating characteristic curves showed that combining oculomotor parameters with age and disease course could aid in the differential diagnosis of patients with MSA and PD, with a sensitivity of 89.29% and a specificity of 70.97%. CONCLUSIONS: The combination of oculomotor parameters and clinical data may aid in the differential diagnosis of MSA and PD. Furthermore, VOE is valuable in the identification of neurodegenerative diseases.
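Sensitivity and specificity figures like those reported from the ROC analysis reduce to simple counts over binary predictions. A generic sketch with toy labels (not the study's data):

```python
def sens_spec(y_true, y_pred):
    """Sensitivity and specificity for binary labels (1 = positive class)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # missed positives
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Toy labels: 1 = MSA, 0 = PD
sens, spec = sens_spec([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

Sweeping the decision threshold of a classifier and recomputing these two numbers at each step traces out the ROC curve used in the study.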


Subject(s)
Multiple System Atrophy, Parkinson Disease, Saccades, Humans, Multiple System Atrophy/diagnosis, Multiple System Atrophy/physiopathology, Parkinson Disease/diagnosis, Parkinson Disease/physiopathology, Male, Diagnosis, Differential, Female, Middle Aged, Aged, Saccades/physiology, Video Recording, Nystagmus, Optokinetic/physiology, Pursuit, Smooth/physiology
5.
Sci Rep ; 14(1): 10579, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38720014

ABSTRACT

The complex dynamics of animal manoeuvrability in the wild is extremely challenging to study. The cheetah (Acinonyx jubatus) is a perfect example: despite great interest in its unmatched speed and manoeuvrability, obtaining complete whole-body motion data from these animals remains an unsolved problem. This is especially difficult in wild cheetahs, where it is essential that the methods used are remote and do not constrain the animal's motion. In this work, we use data obtained from cheetahs in the wild to present a trajectory optimisation approach for estimating the 3D kinematics and joint torques of subjects remotely. We call this approach kinetic full trajectory estimation (K-FTE). We validate the method on a dataset comprising synchronised video and force plate data. We are able to reconstruct the 3D kinematics with an average reprojection error of 17.69 pixels (62.94% PCK using the nose-to-eye(s) length segment as a threshold), while the estimates produce an average root-mean-square error of 171.3 N (≈17.16% of peak force during stride) for the estimated ground reaction force when compared against the force plate data. While the joint torques cannot be directly validated against ground truth data, as no such data is available for cheetahs, the estimated torques agree with previous studies of quadrupeds in controlled settings. These results will enable deeper insight into the study of animal locomotion in a more natural environment for both biologists and roboticists.


Subject(s)
Acinonyx, Acinonyx/physiology, Animals, Biomechanical Phenomena, Imaging, Three-Dimensional, Locomotion/physiology, Torque, Video Recording
6.
Sci Rep ; 14(1): 10560, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38720020

ABSTRACT

Research on video analytics, particularly human behavior recognition, has grown rapidly in recent years. It is widely applied in virtual reality, video surveillance, and video retrieval. With the advancement of deep learning algorithms and computer hardware, the conventional two-dimensional convolution technique for training video models has been replaced by three-dimensional convolution, which enables the extraction of spatio-temporal features. The use of 3D convolution in human behavior recognition has accordingly attracted growing interest. However, the added dimension brings challenges: a dramatic increase in the number of parameters, higher time complexity, and a strong dependence on GPUs for effective spatio-temporal feature extraction. Training can be considerably slow without powerful GPU hardware. To address these issues, this study proposes an Adaptive Time Compression (ATC) module. Functioning as an independent component, ATC can be seamlessly integrated into existing architectures and achieves data compression by eliminating redundant frames within video data. The ATC module effectively reduces GPU computing load and time complexity with negligible loss of accuracy, thereby facilitating real-time human behavior recognition.
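The frame-elimination idea behind ATC can be illustrated with a simple difference-threshold filter. This is a hypothetical stand-in for the learned module, not the authors' implementation:

```python
import numpy as np

def drop_redundant_frames(frames, threshold=5.0):
    """Keep a frame only when its mean absolute pixel difference
    from the last kept frame exceeds the threshold."""
    kept = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(np.int16)
                      - frames[kept[-1]].astype(np.int16)).mean()
        if diff > threshold:
            kept.append(i)
    return kept

video = np.zeros((10, 8, 8), dtype=np.uint8)  # 10 identical frames
video[7] = 200                                # one frame with a real change
kept = drop_redundant_frames(video)           # keeps 0, 7, and 8 (change back)
```

A learned module could replace the fixed threshold with a content-adaptive one, which is presumably where the "adaptive" in ATC comes in.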


Subject(s)
Algorithms, Data Compression, Video Recording, Humans, Data Compression/methods, Human Activities, Deep Learning, Image Processing, Computer-Assisted/methods, Pattern Recognition, Automated/methods
7.
J Exp Biol ; 227(9)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38722696

ABSTRACT

Animals deliver and withstand physical impacts in diverse behavioral contexts, from competing rams clashing their antlers together to archerfish impacting prey with jets of water. Though the ability of animals to withstand impact has generally been studied by focusing on morphology, behaviors may also influence impact resistance. Mantis shrimp exchange high-force strikes on each other's coiled, armored telsons (tailplates) during contests over territory. Prior work has shown that telson morphology has high impact resistance. I hypothesized that the behavior of coiling the telson also contributes to impact energy dissipation. By measuring impact dynamics from high-speed videos of strikes exchanged during contests between freely moving animals, I found that approximately 20% more impact energy was dissipated by the telson as compared with findings from a prior study that focused solely on morphology. This increase is likely due to behavior: because the telson is lifted off the substrate, the entire body flexes after contact, dissipating more energy than exoskeletal morphology does on its own. While variation in the degree of telson coil did not affect energy dissipation, proportionally more energy was dissipated from higher velocity strikes and from strikes from more massive appendages. Overall, these findings show that analysis of both behavior and morphology is crucial to understanding impact resistance, and suggest future research on the evolution of structure and function under the selective pressure of biological impacts.


Subject(s)
Crustacea, Animals, Biomechanical Phenomena, Crustacea/physiology, Crustacea/anatomy & histology, Energy Metabolism, Predatory Behavior/physiology, Behavior, Animal/physiology, Video Recording
8.
J Exp Biol ; 227(9)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38726757

ABSTRACT

Differences in the physical and behavioral attributes of prey are likely to impose disparate demands of force and speed on the jaws of a predator. Because of biomechanical trade-offs between force and speed, this presents an interesting conundrum for predators of diverse prey types. Loggerhead shrikes (Lanius ludovicianus) are medium-sized (∼50 g) passeriform birds that dispatch and feed on a variety of arthropod and vertebrate prey, primarily using their beaks. We used high-speed video of shrikes biting a force transducer in lateral view to obtain corresponding measurements of bite force, upper and lower bill linear and angular displacements, and velocities. Our results show that upper bill depression (about the craniofacial hinge) is more highly correlated with bite force, whereas lower bill elevation is more highly correlated with jaw-closing velocity. These results suggest that the upper and lower jaws might play different roles in generating force and speed, respectively, in these and perhaps other birds as well. We hypothesize that a division of labor between the jaws may allow shrikes to capitalize on elements of force and speed without compromising performance. As expected on theoretical grounds, bite force trades off against jaw-closing velocity during the act of biting, although peak bite force and jaw-closing velocity across individual shrikes show no clear signs of a force-velocity trade-off. As a result, shrikes appear to bite with jaw-closing velocities and forces that maximize biting power, which may be selectively advantageous for predators of diverse prey that require both jaw-closing force and speed.
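The closing observation follows from the classic linear force-velocity trade-off: if F(v) = Fmax(1 - v/vmax), then power P = F·v is a parabola that peaks at v = vmax/2, so biting near the middle of the velocity range maximizes power. A quick numerical check with a generic muscle-style model (arbitrary constants, not data from the paper):

```python
import numpy as np

f_max, v_max = 4.0, 10.0               # arbitrary units
v = np.linspace(0.0, v_max, 10001)
force = f_max * (1.0 - v / v_max)      # linear force-velocity trade-off
power = force * v                      # P = F * v
v_opt = v[np.argmax(power)]            # numerically recovers v_max / 2
```

At the optimum, P = Fmax·vmax/4, i.e., half the maximum force delivered at half the maximum velocity.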


Subject(s)
Bite Force, Jaw, Animals, Biomechanical Phenomena, Jaw/physiology, Passeriformes/physiology, Predatory Behavior/physiology, Beak/physiology, Video Recording
11.
Sensors (Basel) ; 24(9)2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38732772

ABSTRACT

In mobile eye-tracking research, the automatic annotation of fixation points is an important yet difficult task, especially in varied and dynamic environments such as outdoor urban landscapes. This complexity is increased by the constant movement and dynamic nature of both the observer and their environment in urban spaces. This paper presents a novel approach that integrates the capabilities of two foundation models, YOLOv8 and Mask2Former, as a pipeline to automatically annotate fixation points without requiring additional training or fine-tuning. Our pipeline leverages YOLO's extensive training on the MS COCO dataset for object detection and Mask2Former's training on the Cityscapes dataset for semantic segmentation. This integration not only streamlines the annotation process but also improves accuracy and consistency, ensuring reliable annotations, even in complex scenes with multiple objects side by side or at different depths. Validation through two experiments showcases its efficiency, achieving 89.05% accuracy in a controlled data collection and 81.50% accuracy in a real-world outdoor wayfinding scenario. With an average runtime per frame of 1.61 ± 0.35 s, our approach stands as a robust solution for automatic fixation annotation.
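The final step in such a pipeline is the lookup that maps each fixation point to a semantic label, checking object detections first and falling back to the segmentation mask. A toy sketch with hypothetical stand-ins for the YOLOv8 and Mask2Former outputs:

```python
import numpy as np

def annotate_fixation(point, boxes, seg_mask, class_names):
    """Label a fixation point: prefer the smallest detection box containing it,
    otherwise fall back to the semantic-segmentation class at that pixel."""
    x, y = point
    hits = [((x2 - x1) * (y2 - y1), label)
            for (x1, y1, x2, y2, label) in boxes
            if x1 <= x <= x2 and y1 <= y <= y2]
    if hits:
        return min(hits)[1]                 # smallest enclosing box wins
    return class_names[seg_mask[y, x]]      # semantic-segmentation fallback

# Toy stand-ins for detector boxes and a class-id segmentation mask
boxes = [(10, 10, 50, 50, "person"), (20, 20, 30, 30, "phone")]
mask = np.full((100, 100), 2, dtype=int)    # class id 2 everywhere
names = {2: "road"}
```

Preferring the smallest enclosing box is one simple way to resolve nested objects (a phone held by a person); the segmentation fallback covers "stuff" classes like road or sky that detectors do not box.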


Subject(s)
Eye-Tracking Technology, Fixation, Ocular, Humans, Fixation, Ocular/physiology, Video Recording/methods, Algorithms, Eye Movements/physiology
13.
Comput Methods Programs Biomed ; 250: 108195, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38692251

ABSTRACT

BACKGROUND AND OBJECTIVE: Timely stroke treatment can limit brain damage and improve outcomes, which depends on early recognition of the symptoms. However, stroke cases are often missed by first-responder paramedics. One of the earliest external symptoms of stroke appears in facial expressions. METHODS: We propose a computerized analysis of facial expressions using action units to distinguish between post-stroke and healthy people. Action units enable analysis of subtle and specific facial movements and are interpretable in terms of facial expressions. The RGB videos from the Toronto Neuroface Dataset, recorded during standard orofacial examinations of 14 post-stroke (PS) individuals and 11 healthy controls (HC), were used in this study. Action units were computed using XGBoost, which was trained using HC, and classified using regression analysis for each of the nine facial expressions. The analysis was performed without manual intervention. RESULTS: The results were evaluated using leave-one-out validation. The accuracy was 82% for Kiss and Spread, with a best sensitivity of 91% in differentiating PS from HC. The features corresponding to mouth muscles were the most suitable. CONCLUSIONS: This pilot study has shown that our method can detect PS based on two simple facial expressions. However, it needs to be tested in real-world conditions, with people of different ethnicities and with smartphone recordings. The method has the potential to support computerized assessment of videos by first responders using a smartphone to perform screening tests, which could facilitate a timely start of treatment.
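Although the paper's pipeline uses XGBoost-derived action units with regression classifiers, the underlying intuition (post-stroke faces often move asymmetrically, especially around the mouth) can be sketched with a simple left-right asymmetry index over hypothetical action-unit intensities:

```python
import numpy as np

def asymmetry_score(left_au, right_au):
    """Mean normalized left-right difference across action-unit intensities."""
    left = np.asarray(left_au, dtype=float)
    right = np.asarray(right_au, dtype=float)
    return float(np.mean(np.abs(left - right) / (left + right + 1e-9)))

def screen(left_au, right_au, threshold=0.2):
    """Flag faces whose action units are markedly asymmetric (toy rule)."""
    return "flag" if asymmetry_score(left_au, right_au) > threshold else "normal"
```

The threshold and the per-side intensity vectors here are illustrative assumptions; a trained classifier, as in the study, would learn the decision boundary from data instead.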


Subject(s)
Facial Expression, Stroke, Humans, Pilot Projects, Female, Male, Middle Aged, Aged, Case-Control Studies, Video Recording
14.
BMC Vet Res ; 20(1): 172, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38702691

ABSTRACT

BACKGROUND: Lameness examinations are commonly performed in equine medicine. Advancements in digital technology have increased the use of video recordings for lameness assessment; however, no standard for the ideal video angle exists, which can yield videos of poor diagnostic quality. The objective of this study was to evaluate the effect of video angle on the subjective assessment of forelimb lameness. A randomized, blinded, crossover study was performed. Six horses with and without mechanically induced forelimb solar pain were recorded from nine video angles: trotting directly away from and towards the video camera, trotting away from and towards a camera placed to the left and right of midline, and trotting in a circle with the camera placed on the inside and outside of the circle. Videos were randomized and assessed by three expert equine veterinarians using a 0-5-point scoring system. Objective lameness parameters were collected using a body-mounted inertial sensor system (Lameness Locator®, Equinosis LLC). Interobserver agreement for subjective lameness scores and ease-of-grading scores was determined. RESULTS: Induction of lameness was successful in all horses. There was excellent agreement between objective lameness parameters and subjective lameness scores (AUC of the ROC = 0.87). For horses in the "lame" trials, interobserver agreement was moderate for video angle 2 when degree of lameness was considered and perfect for video angles 2 and 9 when lameness was considered as a binary outcome. All other angles had no to fair agreement. For horses in the "sound" trials, interobserver agreement was perfect for video angle 5. All other video angles had slight to moderate agreement. CONCLUSIONS: When video assessment of forelimb lameness is required, a video of the horse trotting directly towards the camera at a minimum is recommended.
Other video angles may provide supportive information regarding lameness characteristics.
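Interobserver agreement of the no/slight/fair/moderate/perfect kind is conventionally quantified with a kappa statistic; a minimal two-rater Cohen's kappa over binary lame/sound calls (illustrative ratings, not the study's data, which involved three observers):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n                 # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2  # chance agreement
    return (p_o - p_e) / (1 - p_e)
```

With more than two raters, Fleiss' kappa or averaged pairwise kappas would be the usual extensions.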


Subject(s)
Cross-Over Studies, Horse Diseases, Lameness, Animal, Video Recording, Animals, Horses, Lameness, Animal/diagnosis, Horse Diseases/diagnosis, Forelimb, Female, Male
15.
J Neuroeng Rehabil ; 21(1): 72, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38702705

ABSTRACT

BACKGROUND: Neurodegenerative diseases, such as Parkinson's disease (PD), necessitate frequent clinical visits and monitoring to identify changes in motor symptoms and provide appropriate care. By applying machine learning techniques to video data, automated video analysis has emerged as a promising approach to track and analyze motor symptoms, which could facilitate more timely intervention. However, existing solutions often rely on specialized equipment and recording procedures, which limits their usability in unstructured settings like the home. In this study, we developed a method to detect PD symptoms from unstructured videos of clinical assessments, without the need for specialized equipment or recording procedures. METHODS: Twenty-eight individuals with Parkinson's disease completed a video-recorded motor examination that included the finger-to-nose and hand pronation-supination tasks. Clinical staff provided ground truth scores for the level of Parkinsonian symptoms present. For each video, we used a pre-existing model called PIXIE to measure the location of several joints on the person's body and quantify how they were moving. Features derived from the joint angles and trajectories, designed to be robust to recording angle, were then used to train two types of machine-learning classifiers (random forests and support vector machines) to detect the presence of PD symptoms. RESULTS: The support vector machine trained on the finger-to-nose task had an F1 score of 0.93 while the random forest trained on the same task yielded an F1 score of 0.85. The support vector machine and random forest trained on the hand pronation-supination task had F1 scores of 0.20 and 0.33, respectively. CONCLUSION: These results demonstrate the feasibility of developing video analysis tools to track motor symptoms across variable perspectives. These tools do not work equally well for all tasks, however. 
This technology has the potential to overcome barriers to access for many individuals with degenerative neurological diseases like PD, providing them with a more convenient and timely method to monitor symptom progression, without requiring a structured video recording procedure. Ultimately, more frequent and objective home assessments of motor function could enable more precise telehealth optimization of interventions to improve clinical outcomes inside and outside of the clinic.
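Features "robust to recording angle" can be built from joint angles, since the angle between three 3D joint positions is unchanged by rigid camera rotation and translation. A minimal sketch of such a feature (generic geometry, not the study's feature set):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c.
    Invariant to rigid transforms of the 3D joint coordinates."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))))

# e.g., an elbow angle from shoulder, elbow, and wrist positions
angle = joint_angle([0, 1, 0], [0, 0, 0], [1, 0, 0])
```

Trajectories of such angles over time (range, speed, smoothness) are the kind of derived quantities a classifier can consume without caring where the camera stood.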


Subject(s)
Machine Learning, Parkinson Disease, Video Recording, Humans, Parkinson Disease/diagnosis, Parkinson Disease/physiopathology, Male, Female, Video Recording/methods, Middle Aged, Aged, Support Vector Machine
16.
J Prev Med Hyg ; 65(1): E25-E35, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38706763

ABSTRACT

Background: Tobacco use and exposure are leading causes of morbidity and mortality worldwide. In the past decade, educational efforts to reduce tobacco use and exposure have extended to social media, including video-sharing platforms. YouTube is one of the most widely accessed video-sharing platforms. Purpose: This cross-sectional descriptive study was conducted to identify and describe the sources, formats, and content of widely viewed YouTube videos on smoking cessation. Methods: From August to September 2023, the keywords "stop quit smoking" were used to search YouTube and identify the 100 videos with the highest view counts. Results: Collectively, these videos were viewed over 220 million times. The largest share (n = 35) was posted by nongovernmental organizations, with a smaller number posted by consumers (n = 25), and only eleven posted by governmental agencies. The format used in the largest number of videos was the testimonial (n = 32 videos, over 77 million views). Other popular formats included animation (n = 23 videos, over 90 million views) and talks by professionals (n = 20 videos, almost 43 million views). Video content included evidence-based and non-evidence-based practices. Evidence-based strategies aligned with the U.S. Public Health Service Tobacco Treatment Guidelines (e.g., a health-systems approach to tobacco treatment, medication management). Non-evidence-based strategies included mindfulness and hypnotherapy. One key finding was that environmental tobacco exposure received scant coverage across the videos. Conclusions: Social media platforms such as YouTube promise to reach large audiences at low cost without requiring high reading literacy. Additional attention is needed to create videos with up-to-date, accurate information that can engage consumers.


Subject(s)
Smoking Cessation, Social Media, Humans, Cross-Sectional Studies, Smoking Cessation/methods, Video Recording, Tobacco Use Cessation/methods
17.
PeerJ ; 12: e17091, 2024.
Article in English | MEDLINE | ID: mdl-38708339

ABSTRACT

Monitoring the diversity and distribution of species in an ecosystem is essential to assess the success of restoration strategies. Implementing biomonitoring methods, which provide a comprehensive assessment of species diversity and mitigate biases in data collection, holds significant importance in biodiversity research. Additionally, ensuring that these methods are cost-efficient and require minimal effort is crucial for effective environmental monitoring. In this study we compare the efficiency of species detection, the cost and the effort of two non-destructive sampling techniques: Baited Remote Underwater Video (BRUV) and environmental DNA (eDNA) metabarcoding to survey marine vertebrate species. Comparisons were conducted along the Sussex coast upon the introduction of the Nearshore Trawling Byelaw. This Byelaw aims to boost the recovery of the dense kelp beds and the associated biodiversity that existed in the 1980s. We show that overall BRUV surveys are more affordable than eDNA, however, eDNA detects almost three times as many species as BRUV. eDNA and BRUV surveys are comparable in terms of effort required for each method, unless eDNA analysis is carried out externally, in which case eDNA requires less effort for the lead researchers. Furthermore, we show that increased eDNA replication yields more informative results on community structure. We found that using both methods in conjunction provides a more complete view of biodiversity, with BRUV data supplementing eDNA monitoring by recording species missed by eDNA and by providing additional environmental and life history metrics. The results from this study will serve as a baseline of the marine vertebrate community in Sussex Bay allowing future biodiversity monitoring research projects to understand community structure as the ecosystem recovers following the removal of trawling fishing pressure. 
Although this study was regional, the findings presented herein have relevance to marine biodiversity and conservation monitoring programs around the globe.
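The complementarity between the two survey methods comes down to set operations over the species each detects. A schematic with hypothetical species lists (not the study's data):

```python
# Species recorded by each method (hypothetical lists)
edna = {"bass", "black bream", "smelt", "sole", "thornback ray", "wrasse"}
bruv = {"bass", "black bream", "grey mullet"}

both = edna & bruv        # confirmed by both methods
edna_only = edna - bruv   # extra detections contributed by eDNA
bruv_only = bruv - edna   # species BRUV adds on top of eDNA
combined = edna | bruv    # the more complete joint species list
```

The study's finding that eDNA detects roughly three times as many species as BRUV, while BRUV still adds species eDNA misses, corresponds to a large `edna_only` set alongside a small but non-empty `bruv_only` set.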


Subject(s)
Biodiversity, DNA, Environmental, Environmental Monitoring, DNA, Environmental/analysis, DNA, Environmental/genetics, Animals, Environmental Monitoring/methods, Aquatic Organisms/genetics, Video Recording/methods, Ecosystem, DNA Barcoding, Taxonomic/methods
18.
Am J Occup Ther ; 78(3)2024 May 01.
Article in English | MEDLINE | ID: mdl-38691580

ABSTRACT

IMPORTANCE: Static picture (SP) schedules are an established intervention for children with autism spectrum disorder (ASD), but the use of video modeling (VM) has not been thoroughly investigated. OBJECTIVE: To compare the effectiveness of VM prompts versus SP prompts in improving autistic children's independence with daily living skills. DESIGN: An experimental alternating treatment design. SETTING: Approved private school for children with disabilities. PARTICIPANTS: Seventeen participants (13 male and 4 female; ages 9-18 yr) with an ASD diagnosis. INTERVENTION: Visual prompts using a tablet were provided during task participation, with data collected in two phases. OUTCOMES AND MEASURES: Type and frequency of the prompts required to complete the task were documented for each participant during the intervention session. RESULTS: Both VM and SP conditions resulted in improvements in at least one phase. Most participants demonstrated a decrease in the number of required cues to complete the task and an increase in independence to complete the task. The decrease in number of cues required from baseline to end of data collection indicated clinically meaningful improvement in task completion. CONCLUSION: Both VM and SP prompts resulted in an increase in independence in daily living skills, with most participants demonstrating improvement in either condition, indicating that the use of visual prompts (either VM or SP) is effective with the ASD population. Plain-Language Summary: Occupational therapy practitioners who work with autistic children and adolescents often identify improving daily living skills as a goal area. Findings from this study build on evidence that supports the use of a visual aid (either static picture or video modeling) to improve autistic children's acquisition of daily living skills. The findings also highlight emerging evidence related to the level of function and effectiveness associated with the type of visual cue. 
Positionality Statement: This article primarily uses identity-first language (i.e., autistic person) and at times person-first language (i.e., person with autism) to reflect the variability in the language preferences of the autism community (Lord et al., 2022).


Subject(s)
Activities of Daily Living, Autism Spectrum Disorder, Occupational Therapy, Humans, Child, Female, Male, Adolescent, Occupational Therapy/methods, Autism Spectrum Disorder/rehabilitation, Cues, Video Recording
19.
Article in English | MEDLINE | ID: mdl-38717248

ABSTRACT

A video can highlight the real-time steps, anatomy and technical aspects of a case that may be difficult to convey with text or static images alone. Editing with a regimented workflow transmits only essential information to the viewer while keeping the editing process efficient. This video tutorial breaks down the fundamentals of surgical video editing, with tips and pointers to simplify the workflow.


Subject(s)
Video Recording, Humans, Surgical Procedures, Operative/methods, Workflow
20.
IEEE Trans Image Process ; 33: 3256-3270, 2024.
Article in English | MEDLINE | ID: mdl-38696298

ABSTRACT

Video-based referring expression comprehension is a challenging task that requires locating the referred object in each video frame of a given video. While many existing approaches treat this task as an object-tracking problem, their performance is heavily reliant on the quality of the tracking templates. Furthermore, when there is not enough annotation data to assist in template selection, the tracking may fail. Other approaches are based on object detection, but they often use only one adjacent frame of the key frame for feature learning, which limits their ability to establish the relationship between different frames. In addition, improving the fusion of features from multiple frames and referring expressions to effectively locate the referents remains an open problem. To address these issues, we propose a novel approach called the Multi-Stage Image-Language Cross-Generative Fusion Network (MILCGF-Net), which is based on one-stage object detection. Our approach includes a Frame Dense Feature Aggregation module for dense feature learning of adjacent time sequences. Additionally, we propose an Image-Language Cross-Generative Fusion module as the main body of multi-stage learning to generate cross-modal features by calculating the similarity between video and expression, and then refining and fusing the generated features. To further enhance the cross-modal feature generation capability of our model, we introduce a consistency loss that constrains the image-language similarity and language-image similarity matrices during feature generation. We evaluate our proposed approach on three public datasets and demonstrate its effectiveness through comprehensive experimental results.
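One plausible reading of the consistency loss, which constrains the image-language and language-image similarity matrices to agree, can be sketched with row-normalized attention distributions. This is an interpretation for illustration, not the authors' exact formulation:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(sim):
    """Penalize disagreement between image-to-language and language-to-image
    attention distributions derived from one similarity matrix."""
    img2lang = softmax(sim, axis=1)     # each video row attends over expressions
    lang2img = softmax(sim.T, axis=1)   # each expression row attends over videos
    return float(np.mean((img2lang - lang2img.T) ** 2))
```

For a symmetric similarity matrix the two views coincide and the loss vanishes; asymmetry between the two directions is what gets penalized during training.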


Subject(s)
Algorithms, Image Processing, Computer-Assisted, Video Recording, Video Recording/methods, Image Processing, Computer-Assisted/methods, Humans