Deep learning of Parkinson's movement from video, without human-defined measures.
Yang, Jiacheng; Williams, Stefan; Hogg, David C; Alty, Jane E; Relton, Samuel D.
Affiliation
  • Yang J; School of Computing, University of Leeds, UK.
  • Williams S; Leeds Institute of Health Sciences, University of Leeds, UK; Leeds Teaching Hospitals NHS Trust, UK. Electronic address: stefanwilliams@doctors.org.uk.
  • Hogg DC; School of Computing, University of Leeds, UK.
  • Alty JE; Leeds Teaching Hospitals NHS Trust, UK; Wicking Dementia Research and Education Centre, University of Tasmania, Australia.
  • Relton SD; Leeds Institute of Health Sciences, University of Leeds, UK.
J Neurol Sci; 463: 123089, 2024 Jun 10.
Article in En | MEDLINE | ID: mdl-38991323
ABSTRACT

BACKGROUND:

The core clinical sign of Parkinson's disease (PD) is bradykinesia, for which a standard test is finger tapping: the clinician observes a person repetitively tapping finger and thumb together. This requires an expert eye, a scarce resource, and even experts show variability and inaccuracy. Existing applications of technology to finger tapping reduce the tapping signal to one-dimensional measures, with researcher-defined features derived from those measures.

OBJECTIVES:

(1) To apply a deep learning neural network directly to video of finger tapping, without human-defined measures/features, and determine classification accuracy for idiopathic PD versus controls. (2) To visualise the features learned by the model.

METHODS:

152 smartphone videos of 10-s finger tapping were collected from 40 people with PD and 37 controls. We down-sampled the pixel dimensions and split the videos into 1-s clips, then trained a 3D convolutional neural network on these clips.
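
For context, the sketch below illustrates the kind of pipeline the Methods describe: splitting each 10-s video into non-overlapping 1-s clips and scoring them with a small 3D convolutional network. The architecture, clip dimensions (30 fps, 112x112 pixels) and the names Small3DCNN and split_into_clips are illustrative assumptions, not the authors' published model.

```python
# Minimal sketch (PyTorch) of a 3D CNN over 1-s video clips.
# All shapes and hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # input: (batch, 3, frames, height, width), e.g. 30 frames of 112x112
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)

def split_into_clips(video: torch.Tensor, fps: int = 30) -> torch.Tensor:
    """Split a (3, total_frames, H, W) video into non-overlapping 1-s clips."""
    n_clips = video.shape[1] // fps
    clips = video[:, : n_clips * fps].reshape(
        video.shape[0], n_clips, fps, *video.shape[2:]
    )
    return clips.permute(1, 0, 2, 3, 4)  # (n_clips, 3, fps, H, W)

if __name__ == "__main__":
    model = Small3DCNN()
    video = torch.randn(3, 300, 112, 112)  # dummy 10-s video at 30 fps
    clips = split_into_clips(video)         # -> (10, 3, 30, 112, 112)
    logits = model(clips)                   # one PD-vs-control score per clip
    print(logits.shape)                     # torch.Size([10, 2])
```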

RESULTS:

For discriminating PD from controls, our model showed a training accuracy of 0.91 and a test accuracy of 0.69, with test precision 0.73, test recall 0.76 and test AUROC 0.76. We also report class activation maps for the five most predictive features. These show the spatial and temporal sections of video on which the network focuses attention to make a prediction, including an apparent dropping thumb movement distinctive of the PD group.
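
As a hedged illustration, the sketch below computes a class activation map for a 3D CNN like the Small3DCNN sketch above, following the classic CAM recipe (weight the final convolutional feature maps by the classifier weights for the target class). The paper's exact visualisation procedure may differ; class_activation_map and its signature are assumptions for illustration.

```python
# Sketch of class activation mapping (CAM) for the Small3DCNN above.
import torch
import torch.nn.functional as F

@torch.no_grad()
def class_activation_map(model, clip: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return a (frames, H, W) heat map of where/when the model attends.

    `clip` has shape (3, frames, H, W); the map is up-sampled to match it.
    """
    # run the conv stack only, stopping before the global pooling layer
    feats = model.features[:-1](clip.unsqueeze(0))    # -> (1, C, T', H', W')
    weights = model.classifier.weight[target_class]   # (C,)
    cam = torch.einsum("c,bcthw->bthw", weights, feats)
    cam = F.relu(cam)                                 # keep positive evidence only
    cam = F.interpolate(
        cam.unsqueeze(1), size=clip.shape[1:],
        mode="trilinear", align_corners=False,
    ).squeeze()
    return cam / (cam.max() + 1e-8)                   # normalise to [0, 1]
```

The map highlights the spatio-temporal regions, such as frames around a thumb drop, that most strongly drive the prediction for the chosen class.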

CONCLUSIONS:

A deep learning neural network can be applied directly to standard video of finger tapping to distinguish PD from controls, without the need to extract a one-dimensional signal from the video or to pre-define tapping features.

Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: J Neurol Sci Year: 2024 Document type: Article
