Results 1 - 8 of 8
1.
BMC Neurol ; 22(1): 266, 2022 Jul 18.
Article in English | MEDLINE | ID: mdl-35850660

ABSTRACT

BACKGROUND: The worldwide prevalence of dementia is rapidly rising. Alzheimer's disease (AD) accounts for 70% of cases and has a 10-20-year preclinical period, during which brain pathology covertly progresses before cognitive symptoms appear. The 2020 Lancet Commission estimates that 40% of dementia cases could be prevented by modifying lifestyle and medical risk factors. To optimise the effectiveness of dementia prevention, there is an urgent need to identify individuals with preclinical AD for targeted risk reduction. Current preclinical AD tests are too invasive, specialist or costly for population-level assessment. We have developed a new online test, TAS Test, that assesses a range of motor-cognitive functions and can be delivered at significant scale. TAS Test combines two innovations: hand-movement analysis to detect preclinical AD, and computer-human interface technologies that enable robust 'self-testing' data collection. The aims are to validate TAS Test to [1] identify preclinical AD, and [2] predict risk of cognitive decline and AD dementia.
METHODS: Aim 1 will be addressed through a cross-sectional study of 500 cognitively healthy older adults, who will complete TAS Test items comprising measures of motor control, processing speed, attention, visuospatial ability, memory and language. TAS Test measures will be compared to a blood-based AD biomarker, phosphorylated tau 181 (p-tau181). Aim 2 will be addressed through a 5-year prospective cohort study of 10,000 older adults. Participants will complete TAS Test annually and subtests of the Cambridge Neuropsychological Test Battery (CANTAB) biennially; 300 participants will undergo in-person clinical assessments. We will apply machine learning to motor-cognitive performance on TAS Test to develop an algorithm that classifies preclinical AD risk (p-tau181-defined), and determine its precision in prospectively estimating 5-year risks of cognitive decline and AD.
DISCUSSION: This study will establish the precision of TAS Test to identify preclinical AD and estimate risk of cognitive decline and AD. If accurate, TAS Test will provide a low-cost, accessible enrichment strategy to pre-screen individuals for their likelihood of AD pathology prior to more expensive tests such as blood or imaging biomarkers. This would have wide applications in public health initiatives and clinical trials. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT05194787, 18 January 2022. Retrospectively registered.
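The protocol's planned classification step hinges on how well a motor-cognitive score discriminates p-tau181-defined risk groups, typically summarised by the area under the ROC curve. A minimal sketch of that metric, using entirely hypothetical scores and labels (nothing here is from the study):

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a random positive outscores a random negative."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    # Ties count half, as in the standard definition
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

# Hypothetical risk scores vs. hypothetical p-tau181-defined labels
print(auroc([0.2, 0.4, 0.6, 0.9], [0, 0, 1, 1]))  # → 1.0
```

The rank-sum identity avoids explicitly tracing the ROC curve and is exact for any finite sample.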


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Aged , Alzheimer Disease/diagnosis , Alzheimer Disease/epidemiology , Alzheimer Disease/psychology , Amyloid beta-Peptides , Biomarkers , Cognitive Dysfunction/diagnosis , Cognitive Dysfunction/epidemiology , Cognitive Dysfunction/psychology , Cross-Sectional Studies , Humans , Neuropsychological Tests , Prospective Studies , tau Proteins
2.
J Neurol Sci ; 463: 123089, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38991323

ABSTRACT

BACKGROUND: The core clinical sign of Parkinson's disease (PD) is bradykinesia, for which a standard test is finger tapping: the clinician observes a person repetitively tap finger and thumb together. That requires an expert eye, a scarce resource, and even experts show variability and inaccuracy. Existing applications of technology to finger tapping reduce the tapping signal to one-dimensional measures, with researcher-defined features derived from those measures. OBJECTIVES: (1) To apply a deep learning neural network directly to video of finger tapping, without human-defined measures/features, and determine classification accuracy for idiopathic PD versus controls. (2) To visualise the features learned by the model. METHODS: 152 smartphone videos of 10 s finger tapping were collected from 40 people with PD and 37 controls. We down-sampled pixel dimensions and split the videos into 1 s clips. A 3D convolutional neural network was trained on these clips. RESULTS: For discriminating PD from controls, our model showed training accuracy 0.91 and test accuracy 0.69, with test precision 0.73, test recall 0.76 and test AUROC 0.76. We also report class activation maps for the five most predictive features. These show the spatial and temporal sections of video on which the network focuses attention to make a prediction, including an apparent dropping thumb movement distinct to the PD group. CONCLUSIONS: A deep learning neural network can be applied directly to standard video of finger tapping to distinguish PD from controls, without the need to extract a one-dimensional signal from the video or to pre-define tapping features.
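The preprocessing described (down-sampling pixel dimensions, splitting 10 s videos into 1 s clips) can be sketched as below; the frame rate, output resolution, nearest-neighbour striding and array layout are all illustrative assumptions, not details from the paper:

```python
import numpy as np

def make_clips(video, fps=30, clip_seconds=1, out_hw=(64, 64)):
    """Split a (frames, H, W, C) video into fixed-length clips,
    spatially down-sampled by simple striding (nearest-neighbour)."""
    clip_len = fps * clip_seconds
    n_clips = video.shape[0] // clip_len      # drop any trailing partial clip
    video = video[: n_clips * clip_len]
    h_step = video.shape[1] // out_hw[0]
    w_step = video.shape[2] // out_hw[1]
    small = video[:, ::h_step, ::w_step][:, : out_hw[0], : out_hw[1]]
    return small.reshape(n_clips, clip_len, *out_hw, video.shape[-1])

# A hypothetical 10 s, 30 fps, 256x256 RGB recording -> ten 1 s clips
video = np.zeros((300, 256, 256, 3), dtype=np.uint8)
clips = make_clips(video)
print(clips.shape)  # → (10, 30, 64, 64, 3)
```

Each resulting clip is a small 4D tensor (time, height, width, channels), the natural input shape for a 3D convolutional network.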


Subject(s)
Deep Learning , Parkinson Disease , Video Recording , Humans , Parkinson Disease/physiopathology , Parkinson Disease/diagnosis , Male , Female , Aged , Middle Aged , Video Recording/methods , Fingers/physiopathology , Movement/physiology , Neural Networks, Computer , Hypokinesia/physiopathology , Hypokinesia/diagnosis , Smartphone
3.
Cryst Growth Des ; 24(8): 3277-3288, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38659658

ABSTRACT

Precision measurement of the growth rate of individual single-crystal facets (hkl) represents an important component in the design of industrial crystallization processes. Current approaches for crystal growth measurement using optical microscopy are labor intensive and prone to error. An automated process using state-of-the-art computer vision and machine learning to segment and measure the crystal images is presented. The accuracy and efficiency of the new crystal sizing approach are evaluated against existing manual and semi-automatic methods, demonstrating equivalent accuracy over a much shorter time, thereby enabling a more complete kinetic analysis of the overall crystallization process. This is applied to measure in situ the crystal growth rates and thereby determine the associated kinetic mechanisms for the crystallization of β-form L-glutamic acid from the solution phase. Growth on the {101} capping faces is consistent with a Birth and Spread mechanism, in agreement with the literature, while the growth rate of the {021} prismatic faces, previously not available in the literature, is consistent with a Burton-Cabrera-Frank screw dislocation mechanism. At a typical supersaturation of σ = 0.78, the growth rate of the {101} capping faces (3.2 × 10⁻⁸ m s⁻¹) is found to be 17 times that of the {021} prismatic faces (1.9 × 10⁻⁹ m s⁻¹). Both capping and prismatic faces are found to have dead zones in their growth kinetic profiles, with the dead-zone supersaturation of the capping faces (σc = 0.23) being about half that of the prismatic faces (σc = 0.46). The importance of this overall approach as an integral component of the digital design of industrial crystallization processes is highlighted.
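The quoted 17-fold ratio and the "about half" dead-zone comparison follow directly from the four numbers given in the abstract; a quick arithmetic check:

```python
# Measured growth rates at supersaturation sigma = 0.78 (values from the abstract)
g_capping = 3.2e-8     # {101} capping faces, m/s
g_prismatic = 1.9e-9   # {021} prismatic faces, m/s

print(round(g_capping / g_prismatic))  # → 17

# Dead-zone supersaturations: capping is half that of prismatic
sigma_c_capping, sigma_c_prismatic = 0.23, 0.46
print(sigma_c_capping / sigma_c_prismatic)  # → 0.5
```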

4.
Eur J Heart Fail ; 25(10): 1724-1738, 2023 10.
Article in English | MEDLINE | ID: mdl-37403669

ABSTRACT

AIMS: Multivariable prediction models can be used to estimate risk of incident heart failure (HF) in the general population. A systematic review and meta-analysis was performed to determine the performance of these models. METHODS AND RESULTS: From inception to 3 November 2022, the MEDLINE and EMBASE databases were searched for studies of multivariable models derived, validated and/or augmented for HF prediction in community-based cohorts. Discrimination measures for models with c-statistic data from ≥3 cohorts were pooled by Bayesian meta-analysis, with heterogeneity assessed through a 95% prediction interval (PI). Risk of bias was assessed using PROBAST. We included 36 studies with 59 prediction models. In meta-analysis, the Atherosclerosis Risk in Communities (ARIC) risk score (summary c-statistic 0.802, 95% confidence interval [CI] 0.707-0.883), GRaph-based Attention Model (GRAM; 0.791, 95% CI 0.677-0.885), Pooled Cohort equations to Prevent Heart Failure (PCP-HF) white men model (0.820, 95% CI 0.792-0.843), PCP-HF white women model (0.852, 95% CI 0.804-0.895), and REverse Time AttentIoN model (RETAIN; 0.839, 95% CI 0.748-0.916) had a statistically significant 95% PI and excellent discrimination performance. The ARIC risk score and PCP-HF models had significant summary discrimination among cohorts with a uniform prediction window. Overall, 77% of model results were at high risk of bias, certainty of evidence was low, and no model had a clinical impact study. CONCLUSIONS: Prediction models for estimating risk of incident HF in the community demonstrate excellent discrimination performance. Their usefulness remains uncertain due to high risk of bias, low certainty of evidence, and an absence of clinical effectiveness research.
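The review pooled per-cohort c-statistics by Bayesian meta-analysis; as a simpler stand-in, the general shape of random-effects pooling can be sketched with the frequentist DerSimonian-Laird estimator on hypothetical cohort data (none of these numbers are from the review):

```python
import numpy as np

def pool_random_effects(estimates, variances):
    """DerSimonian-Laird random-effects pooling: a frequentist sketch of
    the idea behind pooling per-cohort discrimination estimates."""
    est = np.asarray(estimates, float)
    var = np.asarray(variances, float)
    w = 1.0 / var                                # fixed-effect weights
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)           # Cochran's Q heterogeneity
    df = len(est) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (var + tau2)                  # random-effects weights
    pooled = np.sum(w_star * est) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical c-statistics and variances from three cohorts
pooled, se, tau2 = pool_random_effects([0.78, 0.81, 0.83], [0.001, 0.002, 0.0015])
print(round(pooled, 3))  # → 0.802
```

When between-cohort heterogeneity (tau²) is zero, as in this toy example, the result collapses to the inverse-variance fixed-effect pool.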


Subject(s)
Atherosclerosis , Heart Failure , Male , Humans , Female , Heart Failure/epidemiology , Bayes Theorem , Risk Factors
5.
Exp Brain Res ; 214(1): 131-7, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21822674

ABSTRACT

Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382-390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search, where stimuli are in constant motion and where the 'target' of the visual search is abstract and semantic in nature. Here, we investigate this issue as participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events. We recorded eye movements whilst participants watched real CCTV footage and moved a joystick to continuously indicate perceived suspiciousness. We find that when multiple areas of a display compete for attention, gaze is allocated according to relative levels of reported suspiciousness. Furthermore, this measure of task relevance accounted for twice as much variance in gaze likelihood as low-level visual change over time in the video stimuli.
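The variance-accounted-for comparison at the heart of this result can be sketched as comparing R² values for two competing predictors of gaze likelihood; the data below are simulated purely to illustrate the comparison, not the study's data:

```python
import numpy as np

def r_squared(x, y):
    """Proportion of variance in y explained by a linear fit on x."""
    r = np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]
    return r ** 2

# Simulated per-region data: gaze driven more by rated suspiciousness
# (task relevance) than by low-level visual change (salience)
rng = np.random.default_rng(1)
suspiciousness = rng.random(100)
salience = rng.random(100)
gaze = 0.8 * suspiciousness + 0.4 * salience + rng.normal(0, 0.3, 100)

print(r_squared(suspiciousness, gaze) > r_squared(salience, gaze))
```

A ratio of the two R² values near 2 would correspond to the "twice the variance" finding reported above.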


Subject(s)
Attention/physiology , Fixation, Ocular/physiology , Visual Perception/physiology , Adolescent , Adult , Female , Humans , Male , Photic Stimulation/methods , Predictive Value of Tests , Videotape Recording , Young Adult
6.
Front Robot AI ; 8: 686368, 2021.
Article in English | MEDLINE | ID: mdl-34409071

ABSTRACT

We present O2A, a novel method for learning to perform robotic manipulation tasks from a single (one-shot) third-person demonstration video. To our knowledge, this is the first time this has been done from a single demonstration. The key novelty lies in pre-training a feature extractor that creates a perceptual representation for actions, which we call "action vectors". The action vectors are extracted using a 3D-CNN model pre-trained as an action classifier on a generic action dataset. The distance between the action vectors from the observed third-person demonstration and from trial robot executions is used as a reward for reinforcement learning of the demonstrated task. We report on experiments in simulation and on a real robot, with changes in viewpoint of observation, properties of the objects involved, scene background and morphology of the manipulator between the demonstration and the learning domains. O2A outperforms baseline approaches under different domain shifts and has comparable performance with an Oracle (which uses an ideal reward function). Videos of the results, including demonstrations, can be found on our project website.
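The reward construction described above, a distance between the demonstration's action vector and a trial execution's action vector, can be sketched as follows. The 3D-CNN feature extractor is stubbed out here; the vectors and the choice of Euclidean distance are illustrative assumptions:

```python
import numpy as np

def action_vector_reward(demo_vec, trial_vec):
    """RL reward: negative Euclidean distance between the demonstration's
    action vector and the trial's action vector (extractor stubbed out)."""
    demo_vec = np.asarray(demo_vec, float)
    trial_vec = np.asarray(trial_vec, float)
    return -np.linalg.norm(demo_vec - trial_vec)

demo = np.array([0.2, 0.9, 0.1])       # hypothetical demonstration embedding
close_trial = np.array([0.25, 0.85, 0.1])
far_trial = np.array([0.9, 0.1, 0.8])

# Trials whose embedding is closer to the demonstration earn higher reward
print(action_vector_reward(demo, close_trial) > action_vector_reward(demo, far_trial))  # → True
```

Because the reward depends only on the embedding distance, it needs no task-specific reward engineering: the policy is driven toward executions that "look like" the demonstration in action-vector space.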

7.
PLoS One ; 10(6): e0127769, 2015.
Article in English | MEDLINE | ID: mdl-26126116

ABSTRACT

Today, the workflows involved in industrial assembly and production activities are becoming increasingly complex. Performing these workflows efficiently and safely is demanding for workers, in particular when it comes to infrequent or repetitive tasks. This burden on the workers can be eased by introducing smart assistance systems. This article presents a scalable concept and an integrated system demonstrator designed for this purpose. The basic idea is to learn workflows from observing multiple expert operators and then transfer the learnt workflow models to novice users. Being entirely learning-based, the proposed system can be applied to various tasks and domains. The above idea has been realized in a prototype, which combines components pushing the state of the art of hardware and software designed with interoperability in mind. The emphasis of this article is on the algorithms developed for the prototype: 1) fusion of inertial and visual sensor information from an on-body sensor network (BSN) to robustly track the user's pose in magnetically polluted environments; 2) learning-based computer vision algorithms to map the workspace, localize the sensor with respect to the workspace and capture objects, even as they are carried; 3) domain-independent and robust workflow recovery and monitoring algorithms based on spatiotemporal pairwise relations deduced from object and user movement with respect to the scene; and 4) context-sensitive augmented reality (AR) user feedback using a head-mounted display (HMD). A distinguishing key feature of the developed algorithms is that they all operate solely on data from the on-body sensor network; no external instrumentation is needed. The feasibility of the chosen approach for the complete action-perception-feedback loop is demonstrated on three increasingly complex datasets representing manual industrial tasks. These limited-size datasets highlight the potential of the chosen technology as a combined entity, while also pointing out limitations of the system.
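One of the "spatiotemporal pairwise relations" that item 3) could be built on is a simple per-frame proximity relation between tracked positions. The sketch below is an illustration under assumed 3D tracks and an arbitrary threshold, not the paper's algorithm:

```python
import numpy as np

def pairwise_near(track_a, track_b, threshold=0.1):
    """Per-frame 'near' relation between two 3D position tracks:
    True where the tracked points are within `threshold` metres
    (threshold chosen purely for illustration)."""
    a = np.asarray(track_a, float)
    b = np.asarray(track_b, float)
    return np.linalg.norm(a - b, axis=1) < threshold

# Hypothetical tracks: a hand approaching, touching, then leaving a part
hand = np.array([[0.5, 0, 0], [0.2, 0, 0], [0.05, 0, 0], [0.4, 0, 0]])
part = np.zeros((4, 3))
print(pairwise_near(hand, part).tolist())  # → [False, False, True, False]
```

Sequences of such boolean relations over time form a symbolic trace from which workflow steps ("pick part", "place part") can be recovered and monitored.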


Subject(s)
Algorithms , Occupational Health , Workflow , Cognition , Humans , Imaging, Three-Dimensional , Learning , Occupational Medicine , Systems Integration , User-Computer Interface
8.
Front Hum Neurosci ; 7: 441, 2013.
Article in English | MEDLINE | ID: mdl-23986671

ABSTRACT

Perception of scenes has typically been investigated using static or simplified visual displays. How attention is used to perceive and evaluate dynamic, realistic scenes is more poorly understood, in part due to the problem of comparing eye fixations to moving stimuli across observers. When the task and stimulus are common across observers, consistent fixation location can indicate that a region has high goal-based relevance. Here we investigated these issues when an observer has a specific, and naturalistic, task: closed-circuit television (CCTV) monitoring. We concurrently recorded eye movements and ratings of perceived suspiciousness as different observers watched the same set of clips from real CCTV footage. Trained CCTV operators showed greater consistency in fixation location and greater consistency in suspiciousness judgements than untrained observers. Training appears to increase between-operator consistency by teaching operators what to look for in these scenes. We used a novel "Dynamic Area of Focus (DAF)" analysis to show that in CCTV monitoring there is a temporal relationship between eye movements and subsequent manual responses, as we have previously found for a sports video watching task. For both trained CCTV operators and untrained observers, manual responses were most strongly related to between-observer eye position spread when a temporal lag was introduced between the fixation and response data. Several hundred milliseconds after between-observer eye positions became most similar, observers tended to push the joystick to indicate perceived suspiciousness. Conversely, several hundred milliseconds after between-observer eye positions became dissimilar, observers tended to rate suspiciousness as low. These data provide further support for the DAF method as an important tool for examining goal-directed fixation behavior when the stimulus is a real moving image.
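The lagged relationship between gaze spread and later joystick responses can be sketched as a lagged-correlation search: slide the response series back in time and find the lag that maximises the (absolute) correlation. The toy series and sign convention below are illustrative, not the DAF implementation:

```python
import numpy as np

def best_lag(gaze_spread, response, max_lag):
    """Return the lag (in samples) at which gaze spread best predicts the
    later manual response, by maximising absolute correlation."""
    corrs = []
    for lag in range(max_lag + 1):
        g = gaze_spread[: len(gaze_spread) - lag] if lag else gaze_spread
        r = response[lag:]
        corrs.append(np.corrcoef(g, r)[0, 1])
    return int(np.argmax(np.abs(corrs)))

# Toy series: the response tracks (negated) gaze spread, 3 samples later --
# lower spread (more similar eye positions) pushing the response up
rng = np.random.default_rng(2)
spread = rng.normal(size=200)
response = np.concatenate([np.zeros(3), -spread[:-3]])
print(best_lag(spread, response, max_lag=10))  # → 3
```

In the study, that recovered lag corresponds to the several-hundred-millisecond delay between eye positions converging and the joystick being pushed.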
