Results 1 - 4 of 4
1.
Brain Sci ; 12(8)2022 Aug 03.
Article in English | MEDLINE | ID: mdl-36009094

ABSTRACT

It is known that dyslexics present eye movement abnormalities. Previously, we have shown that eye movement abnormalities during reading, or during saccade and vergence testing, can predict dyslexia successfully. The current study examines this issue further, focusing on eye movements during free exploration of paintings; the dataset was provided by a study in our laboratory carried out by Ward and Kapoula. Machine learning (ML) classifiers were applied to eye movement features extracted by the AIDEAL software: a velocity-threshold analysis reporting the amplitude, speed, and disconjugacy of horizontal saccades. In addition, a new feature was introduced that concerns only the very short periods during which the eyes move in opposite directions, one to the left and the other to the right; such periods occurred mostly during fixations between saccades. We calculated a global index of the frequency, duration, and amplitude of these disconjugacy segments. This continuous evaluation of disconjugacy throughout the eye movement time series differs from the disconjugacy feature that describes the inequality of saccade amplitude between the two eyes. The results show that both the AIDEAL features and the Disconjugacy Global Index (DGI) enable successful categorization of dyslexics versus non-dyslexics, at least when the analysis is applied to the specific paintings used in the present study. We suggest that this high predictive power arises both from the content of the selected paintings and from the physiological relevance of the eye movement features extracted by AIDEAL and the DGI.
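The abstract describes the DGI as a summary of the frequency, duration, and amplitude of brief periods when the two eyes move in opposite horizontal directions. The following is a hypothetical sketch of such an index, not the AIDEAL implementation: the function name, signature, and velocity threshold are all assumptions made for illustration.

```python
import numpy as np

def disconjugacy_global_index(left_x, right_x, dt, velocity_threshold=1.0):
    """Hypothetical sketch: find segments where the eyes move in opposite
    horizontal directions above a velocity threshold, then summarize their
    frequency, duration, and amplitude.
    left_x, right_x: horizontal eye positions (deg); dt: sample period (s)."""
    vl = np.gradient(left_x, dt)   # left-eye horizontal velocity
    vr = np.gradient(right_x, dt)  # right-eye horizontal velocity
    # Samples where the eyes move in opposite directions, both above threshold
    opposite = (np.sign(vl) != np.sign(vr)) & \
               (np.abs(vl) > velocity_threshold) & (np.abs(vr) > velocity_threshold)
    # Group consecutive opposite-direction samples into segments
    edges = np.diff(opposite.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if opposite[0]:
        starts = np.r_[0, starts]
    if opposite[-1]:
        ends = np.r_[ends, len(opposite)]
    durations = (ends - starts) * dt
    vergence = left_x - right_x  # disconjugate component of the movement
    amplitudes = [abs(vergence[e - 1] - vergence[s]) for s, e in zip(starts, ends)]
    total_time = len(left_x) * dt
    return {"frequency_hz": len(starts) / total_time,
            "mean_duration_s": float(durations.mean()) if len(durations) else 0.0,
            "mean_amplitude_deg": float(np.mean(amplitudes)) if amplitudes else 0.0}
```

On synthetic traces where the eyes briefly diverge, the index reports a nonzero segment frequency and amplitude; on perfectly conjugate traces all three summaries are zero.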

2.
Brain Sci ; 11(10)2021 Oct 11.
Article in English | MEDLINE | ID: mdl-34679400

ABSTRACT

There is evidence that abnormalities in eye movements exist during reading in dyslexic individuals. A few recent studies have applied Machine Learning (ML) classifiers to such eye movement data to predict dyslexia. A general problem with these studies is that the eye movement data sets are limited to reading saccades and fixations, which are confounded by reading difficulty: it is unclear whether the abnormalities are the consequence or the cause of reading difficulty. Recently, Ward and Kapoula used LED targets (with the REMOBI & AIDEAL method) to demonstrate abnormalities of large saccades and of vergence eye movements in depth, establishing intrinsic eye movement problems in dyslexia that are independent from reading. In another study, binocular eye movements were recorded while reading two texts: the "Alouette" text, which has no meaning and requires word decoding, and a meaningful text. It was found that the Alouette text exacerbates eye movement abnormalities in dyslexics. In this paper, we quantify more precisely the quality of such eye movement descriptors for dyslexia detection. We use the descriptors produced in the four different setups as input to multiple classifiers and compare their generalization performance. Our results demonstrate that eye movement data from the Alouette test predict dyslexia with an accuracy of 81.25%; similarly, we predicted dyslexia with an accuracy of 81.25% using data from saccades to LED targets on the Remobi device, and with 77.3% using vergence movements to LED targets. Notably, eye movement data from the meaningful text produced the lowest accuracy (70.2%). In a subsequent analysis, ML algorithms were applied to predict reading speed from eye movement descriptors extracted first from the meaningful-text reading, then from the Remobi saccade and vergence tests. Remobi vergence eye movement descriptors can predict reading speed even better than descriptors from the meaningful reading test.
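The pipeline described above — descriptor tables from each setup fed to classifiers whose cross-validated accuracy is then compared — can be sketched minimally as follows. This is not the study's method or data: the descriptors are synthetic, and a simple nearest-centroid rule stands in for the ML classifiers compared in the paper.

```python
import numpy as np

def nearest_centroid_cv(X, y, k=5, seed=0):
    """k-fold cross-validated accuracy of a nearest-centroid classifier:
    a minimal stand-in for comparing descriptor sets by generalization."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        c0 = X[train][y[train] == 0].mean(axis=0)  # control centroid
        c1 = X[train][y[train] == 1].mean(axis=0)  # dyslexic centroid
        d0 = np.linalg.norm(X[fold] - c0, axis=1)
        d1 = np.linalg.norm(X[fold] - c1, axis=1)
        pred = (d1 < d0).astype(int)
        accs.append((pred == y[fold]).mean())
    return float(np.mean(accs))

# Synthetic descriptor table: 96 participants x 8 features, with one
# feature shifted in the "dyslexic" group to make the classes separable.
rng = np.random.default_rng(1)
X = rng.normal(size=(96, 8))
y = np.repeat([0, 1], 48)
X[y == 1, 0] += 2.0
print(f"mean CV accuracy: {nearest_centroid_cv(X, y):.3f}")
```

Running the same cross-validation on descriptor tables from different setups (Alouette, meaningful text, Remobi saccade, Remobi vergence) and comparing the resulting accuracies mirrors the comparison the study performs.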

3.
Front Big Data ; 3: 577974, 2020.
Article in English | MEDLINE | ID: mdl-33693418

ABSTRACT

The use of artificial intelligence (AI) in a variety of research fields is speeding up multiple digital revolutions, from shifting paradigms in healthcare, precision medicine, and wearable sensing, to public services and education offered to the masses around the world, to future cities made optimally efficient by autonomous driving. When a revolution happens, the consequences are not obvious straight away, and to date there is no uniformly adopted framework to guide AI research toward a sustainable societal transition. To address this need, we analyze three key challenges to interdisciplinary AI research and draw three broad conclusions: (1) future development of AI should not only impact other scientific domains but should also take inspiration from, and benefit from, other fields of science; (2) AI research must be accompanied by decision explainability and dataset-bias transparency, as well as by the development of evaluation methodologies and the creation of regulatory agencies to ensure responsibility; and (3) AI education should receive more attention, effort, and innovation from the educational and scientific communities. Our analysis is of interest not only to AI practitioners but also to other researchers and the general public, as it offers ways to guide emerging collaborations and interactions toward the most fruitful outcomes.

4.
Article in English | MEDLINE | ID: mdl-30136982

ABSTRACT

A common challenge faced by many domain experts working with time series data is how to identify and compare similar patterns. This operation is fundamental in high-level tasks, such as detecting recurring phenomena or creating clusters of similar temporal sequences. While automatic measures exist to compute time series similarity, human intervention is often required to visually inspect these automatically generated results. The visualization literature has examined similarity perception and its relation to automatic similarity measures for line charts, but has not yet considered if alternative visual representations, such as horizon graphs and colorfields, alter this perception. Motivated by how neuroscientists evaluate epileptiform patterns, we conducted two experiments that study how these three visualization techniques affect similarity perception in EEG signals. We seek to understand if the time series results returned from automatic similarity measures are perceived in a similar manner, irrespective of the visualization technique; and if what people perceive as similar with each visualization aligns with different automatic measures and their similarity constraints. Our findings indicate that horizon graphs align with similarity measures that allow local variations in temporal position or speed (i.e., dynamic time warping) more than the two other techniques. On the other hand, horizon graphs do not align with measures that are insensitive to amplitude and y-offset scaling (i.e., measures based on z-normalization), but the inverse seems to be the case for line charts and colorfields. Overall, our work indicates that the choice of visualization affects what temporal patterns we consider as similar, i.e., the notion of similarity in time series is not visualization independent.
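The two families of measures contrasted above — dynamic time warping, which tolerates local shifts in temporal position or speed, and z-normalization-based measures, which are insensitive to amplitude and y-offset scaling — can be illustrated with a short sketch. This is a generic textbook formulation, not the exact measures or constraints used in the study.

```python
import numpy as np

def z_normalize(series):
    """Remove amplitude and y-offset differences: zero mean, unit variance."""
    s = np.asarray(series, dtype=float)
    return (s - s.mean()) / s.std()

def dtw_distance(a, b):
    """Classic dynamic time warping distance: allows local stretching or
    compression along the time axis, unlike pointwise comparison."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# A time-shifted copy of a signal is far apart pointwise but close under DTW:
t = np.linspace(0, 2 * np.pi, 60)
x, y = np.sin(t), np.sin(t - 0.4)
print("pointwise:", np.abs(x - y).sum(), " dtw:", dtw_distance(x, y))
```

Scaling or offsetting a signal leaves its z-normalized form unchanged, which is exactly why z-normalization-based measures ignore amplitude and y-offset differences that a viewer of a line chart or colorfield might still perceive.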
