ABSTRACT
Since the seminal work of Yarbus, multiple studies have demonstrated the influence of task set and the current cognitive state on oculomotor behavior. In more recent years, this field of research has expanded to evaluating the costs of abruptly switching between such tasks. At the same time, the field of classifying oculomotor behavior has been moving toward more advanced, data-driven decoding methods. For the current study, we used a large dataset compiled over multiple experiments and implemented separate state-of-the-art machine learning methods for decoding both cognitive state and task-switching. We found that, by extracting a wide range of oculomotor features, we were able to train robust classifier models for decoding both cognitive state and task-switching. Our decoding performance highlights the feasibility of this approach, independent of image statistics. Additionally, we present a feature ranking for both models, indicating the relative importance of different oculomotor features for each classifier. These rankings indicate a distinct set of important predictors for each decoding problem. Finally, we discuss the implications of the current approach for interpreting the decoding results.
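The abstract does not detail the classifier pipeline. As a rough illustration only, a minimal sketch of decoding a cognitive-state label from oculomotor features and ranking feature importance might look as follows; the logistic-regression model is assumed (consistent with the "Logistic Models" subject heading below), and the feature names and data are hypothetical placeholders, not the authors' dataset or method.

```python
# Minimal sketch (not the authors' pipeline): decode a binary cognitive-state
# label from oculomotor features with logistic regression, then rank features
# by the magnitude of their standardized coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical oculomotor features (placeholders for illustration)
feature_names = ["fixation_duration", "saccade_amplitude",
                 "saccade_velocity", "pupil_size", "blink_rate"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))  # trials x oculomotor features
y = rng.integers(0, 2, size=500)                # cognitive-state / task label

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated decoding accuracy: {scores.mean():.2f}")

clf.fit(X, y)
coefs = np.abs(clf.named_steps["logisticregression"].coef_[0])
for name, weight in sorted(zip(feature_names, coefs), key=lambda t: -t[1]):
    print(f"{name}: |coef| = {weight:.3f}")
```

With real eye-tracking data, the random arrays would be replaced by per-trial feature vectors and task labels; the cross-validated accuracy and coefficient ranking correspond to the decoding performance and feature ranking described in the abstract.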
Subjects
Eye Movements/physiology, Machine Learning, Cognition/physiology, Humans, Logistic Models

ABSTRACT
BACKGROUND: Cognitive performance on neuropsychological paper-and-pencil tests is generally evaluated quantitatively by examining a final score (e.g., total duration). Digital tests allow for a quantitative evaluation of "how" a patient attained a final score, which opens the possibility of assessing more subtle cognitive impairment even when final scores are evaluated as normal. We assessed performance stability (i.e., the number of fluctuations in test performance) to investigate (1) differences in performance stability between patients with acquired brain injury (ABI) and healthy controls; (2) the added value of performance stability measures in patients with ABI; and (3) the relation between performance stability and cognitive complaints in daily life in patients with ABI. METHODS: We administered three digital neuropsychological tests (Rey Auditory Verbal Learning Test, Trail Making Test, Stroop Colour and Word Test) and the Cognitive Complaints-Participation (CoCo-P) inventory in patients with ABI (n = 161) and healthy controls (n = 91). RESULTS: Patients with ABI fluctuated more in their performance on all tests, when compared to healthy controls. Furthermore, 4-15% of patients who performed inside the normal range on the conventional final scores were outside the normal range on the performance stability measures. Neither the performance stability measures nor the conventional final scores were associated with cognitive complaints in daily life. CONCLUSIONS: Stability in test performance of patients was clearly dissociable from that of healthy controls, and may capture additional cognitive weaknesses which might not be observed or objectified with paper-and-pencil tests. More research is needed to develop measures that are better associated with cognitive complaints.
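The abstract defines performance stability as the number of fluctuations in test performance but does not specify how a fluctuation was operationalized. As one plausible illustration only, not the study's scoring rule, a fluctuation could be counted whenever successive trial-to-trial changes in response time reverse direction by more than a chosen threshold; the function name, threshold, and example data below are assumptions.

```python
# Illustrative sketch only: one possible way to count fluctuations in a series
# of per-item response times. The threshold and the definition of a
# "fluctuation" are assumptions, not the scoring rule used in the study.
from typing import Sequence

def count_fluctuations(response_times: Sequence[float], threshold: float = 0.5) -> int:
    """Count direction reversals in successive response-time differences
    whose magnitude exceeds `threshold` (in seconds)."""
    diffs = [b - a for a, b in zip(response_times, response_times[1:])]
    fluctuations = 0
    for prev, curr in zip(diffs, diffs[1:]):
        if abs(curr) > threshold and prev * curr < 0:  # sign change = reversal
            fluctuations += 1
    return fluctuations

# Example: per-item response times (seconds) on a hypothetical digital test
rts = [1.2, 1.1, 2.4, 1.0, 1.1, 2.9, 1.3]
print(count_fluctuations(rts))  # number of large trial-to-trial reversals
```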