Results 1 - 4 of 4
1.
J Neurotrauma; 41(5-6): 646-659, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37624747

ABSTRACT

Eye-tracking assessments are clinician-dependent and can contribute to misclassification of coma. We investigated responsiveness to videos with and without audio in traumatic brain injury (TBI) subjects using video eye-tracking (VET). We recruited 20 healthy volunteers and 10 unresponsive TBI subjects. Clinicians were surveyed on whether the subject was tracking on their bedside assessment. The Coma Recovery Scale-Revised (CRS-R) was also performed. Eye movements in response to three different 30-second videos with and without sound were recorded using VET. The videos consisted of moving characters (a dancer, a person skateboarding, and Spiderman). Tracking on VET was defined as visual fixation on the character and gaze movement in the same direction as the character on two separate occasions. Subjects were classified as "covert tracking" (tracking using VET only), "overt tracking" (tracking on both VET and the clinical exam), and "no tracking". A k-nearest-neighbors model was also used to identify tracking computationally. Thalamocortical connectivity and structural integrity were evaluated with EEG and MRI. The ability to obey commands was evaluated at 6- and 12-month follow-up. The average age was 29 (± 17) years. Three subjects demonstrated "covert tracking" (CRS-R of 6, 8, and 7), two "overt tracking" (CRS-R of 22 and 11), and five "no tracking" (CRS-R of 8, 6, 5, 6, and 7). Of the 84 trials tested across all subjects, 11 (13%) met the criteria for "covert tracking". Using the k-nearest-neighbors approach, 14 trials (17%) were classified as "covert tracking". Subjects with tracking had higher thalamocortical connectivity and fewer injured structures in the eye-tracking network than those without. At follow-up, 2 of 3 "covert" and all "overt" subjects recovered consciousness, versus only 2 subjects in the "no tracking" group. Immersive stimuli may serve as important objective tools to differentiate subtle tracking using VET.
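The abstract does not specify which gaze features fed the k-nearest-neighbors model, so the sketch below is only an illustration of the technique under assumed features (per-trial fixation fraction and a gaze-target direction correlation) and made-up labeled trials, not the authors' data or pipeline:

```python
# Hedged sketch: k-nearest-neighbors classification of VET trials into
# "tracking" vs "no tracking". Feature names and labeled examples are
# illustrative assumptions, not the study's actual features or data.
from collections import Counter
import math

def knn_classify(trial, labeled_trials, k=3):
    """Label a trial by majority vote of its k nearest labeled trials
    (Euclidean distance in feature space)."""
    dists = sorted(
        (math.dist(trial, feats), label) for feats, label in labeled_trials
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Illustrative labeled trials: (fixation_fraction, gaze_target_correlation)
labeled = [
    ((0.85, 0.9), "tracking"), ((0.8, 0.7), "tracking"),
    ((0.7, 0.8), "tracking"),
    ((0.2, 0.1), "no tracking"), ((0.3, 0.0), "no tracking"),
    ((0.1, 0.2), "no tracking"),
]

print(knn_classify((0.75, 0.85), labeled))  # near the tracking cluster
```

With features like these, a trial only counts as covert tracking when it sits closer to previously identified tracking trials than to non-tracking ones, which is what makes the approach less dependent on a single clinician's judgment.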


Subjects
Brain Injuries, Traumatic; Coma; Humans; Adult; Consciousness; Consciousness Disorders/diagnostic imaging; Consciousness Disorders/etiology; Brain Injuries, Traumatic/diagnostic imaging; Cluster Analysis
2.
Neurology; 101(11): 489-494, 2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37076304

ABSTRACT

OBJECTIVES: This study investigated video eye tracking (VET) in comatose patients with traumatic brain injury (TBI). METHODS: We recruited healthy participants and unresponsive patients with TBI. We surveyed the patients' clinicians on whether the patient was tracking and performed the Coma Recovery Scale-Revised (CRS-R). We recorded eye movements in response to motion of a finger, a face, a mirror, and an optokinetic stimulus using VET glasses. Patients were classified as covert tracking (tracking on VET alone) and overt tracking (VET and clinical examination). The ability to obey commands was evaluated at 6-month follow-up. RESULTS: We recruited 20 healthy participants and 10 patients with TBI. The use of VET was feasible in all participants and patients. Two patients demonstrated covert tracking (CRS-R of 6 and 8), 2 demonstrated overt tracking (CRS-R of 22 and 11), and 6 patients had no tracking (CRS-R of 8, 6, 5, 7, 6, and 7). Five of 56 (9%) tracking assessments were missed on clinical examination. All patients with tracking recovered consciousness at follow-up, whereas only 2 of 6 patients without tracking recovered at follow-up. DISCUSSION: VET is a feasible method to measure covert tracking. Future studies are needed to confirm the prognostic value of covert tracking.


Subjects
Brain Injuries, Traumatic; Coma; Humans; Coma/etiology; Brain Injuries, Traumatic/complications; Consciousness/physiology; Prognosis; Physical Examination
3.
Front Psychol; 12: 731618, 2021.
Article in English | MEDLINE | ID: mdl-35126224

ABSTRACT

In early 2020, in-person data collection dramatically slowed or was completely halted across the world as many labs were forced to close due to the COVID-19 pandemic. Developmental researchers who assess looking time (especially those who rely heavily on in-lab eye-tracking or live coding techniques) were forced to rethink their methods of data collection. While a variety of remote or online platforms are available for gathering behavioral data outside of the typical lab setting, few are specifically designed for collecting and processing looking time data in infants and young children. To address these challenges, our lab developed several novel approaches for continuing data collection and coding for a remotely administered audiovisual looking time protocol. First, we detail a comprehensive approach for successfully administering the Multisensory Attention Assessment Protocol (MAAP), developed by our lab to assess multisensory attention skills (MASks; duration of looking, speed of shifting/disengaging, accuracy of audiovisual matching). The MAAP is administered from a distance (remotely) by using Zoom, Gorilla Experiment Builder, an internet connection, and a home computer. This new data collection approach has the advantage that participants can be tested in their homes. We discuss challenges and successes in implementing our approach for remote testing and data collection during an ongoing longitudinal project. Second, we detail an approach for estimating gaze direction and duration collected remotely from webcam recordings using a post-processing toolkit (OpenFace) and demonstrate its effectiveness and precision. However, because OpenFace derives gaze estimates without translating them to an external frame of reference (i.e., the participant's screen), we developed a machine learning (ML) approach to overcome this limitation.
Thus, third, we trained an ML algorithm (an artificial neural network; ANN) to classify gaze estimates from OpenFace with respect to areas of interest (AOI) on the participant's screen (i.e., left, right, and center). We then demonstrate reliability between this approach and traditional coding approaches (e.g., coding gaze live). The combination of OpenFace and ML will provide a method to automate the coding of looking time for data collected remotely. Finally, we outline a series of best practices for developmental researchers conducting remote data collection for looking time studies.
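The gaze-to-AOI step can be pictured as a forward pass from OpenFace's horizontal gaze-angle output to one of three screen labels. The authors trained an ANN on coded data; the tiny hand-weighted network below is only an illustrative stand-in (the weights, biases, and the ±0.15 rad decision region are assumptions, not learned parameters):

```python
# Hedged sketch: mapping an OpenFace-style gaze angle to a screen AOI
# (left / center / right) with a minimal one-layer softmax network.
# Weights are hand-set for illustration; the study used a trained ANN.
import math

AOIS = ("left", "center", "right")

def aoi_from_gaze(gaze_angle_x):
    """Linear scores per AOI, softmax, argmax. gaze_angle_x follows
    OpenFace's convention: horizontal gaze angle in radians, negative
    toward the viewer's left."""
    weights = (-20.0, 0.0, 20.0)   # illustrative, not trained
    biases = (-3.0, 0.0, -3.0)
    scores = [w * gaze_angle_x + b for w, b in zip(weights, biases)]
    exps = [math.exp(s - max(scores)) for s in scores]
    probs = [e / sum(exps) for e in exps]
    return AOIS[probs.index(max(probs))]

print(aoi_from_gaze(-0.3), aoi_from_gaze(0.0), aoi_from_gaze(0.3))
```

In a real pipeline the weights would be fit to frames whose AOI a human coder has labeled, which is what lets the classifier absorb each participant's camera geometry instead of relying on fixed angle thresholds.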

4.
IEEE Trans Image Process; 23(12): 5743-55, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25248188

ABSTRACT

In scattering media, such as underwater or in atmospheric haze and fog, image contrast deteriorates significantly due to backscatter. This adversely affects the performance of many computer vision techniques developed for clear open-air conditions, including stereo matching, when applied to images acquired in these environments. Since the strength of the scattering depends on the distance to the scene points, the scattering field embodies range information that can be exploited for 3-D reconstruction. In this paper, we present an integrated solution for 3-D structure from stereovision that incorporates the visual cues from both disparity and scattering. The method applies to images of scenes illuminated by artificial sources or natural lighting, and performance improves with the discrepancy between the backscatter fields in the two views. Neither source calibration nor knowledge of the medium's optical properties is required. Instead, backscatter fields at infinity, i.e., stereo images taken with no target in the field of view, are employed directly in the estimation process. Results from experiments with synthetic and real data demonstrate the key advantages of our method.
