Identifying daily activities of patient work for type 2 diabetes and co-morbidities: a deep learning and wearable camera approach.
Xiong, Hao; Phan, Hoai Nam; Yin, Kathleen; Berkovsky, Shlomo; Jung, Joshua; Lau, Annie Y S.
Affiliation
  • Xiong H; Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, New South Wales, Australia.
  • Phan HN; Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, New South Wales, Australia.
  • Yin K; Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, New South Wales, Australia.
  • Berkovsky S; Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, New South Wales, Australia.
  • Jung J; Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, New South Wales, Australia.
  • Lau AYS; Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, New South Wales, Australia.
J Am Med Inform Assoc ; 29(8): 1400-1408, 2022 07 12.
Article in En | MEDLINE | ID: mdl-35582885
ABSTRACT

OBJECTIVE:

People are increasingly encouraged to self-manage their chronic conditions; however, many struggle to practise self-management effectively. Most studies that investigate patient work (ie, the tasks involved in self-management and the contexts influencing those tasks) rely on self-reports, which are subject to recall and other biases. Few studies use wearable cameras and deep learning to capture and classify patient work activities automatically.

MATERIALS AND METHODS:

We propose a deep learning approach to classify activities of patient work captured by wearable cameras, enabling self-management routines to be studied more effectively. Twenty-six people with type 2 diabetes and comorbidities each wore a wearable camera for a day, generating more than 400 h of video across 12 daily activities. To classify these video images, we developed a weighted ensemble network that combines Linear Discriminant Analysis, Deep Convolutional Neural Networks, and Object Detection algorithms. The performance of our model was assessed using Top-1 and Top-5 metrics, compared against manual classification conducted by 2 independent researchers.
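The weighted ensemble and the Top-1/Top-5 evaluation described above can be sketched as follows. This is a minimal illustration only: the ensemble weights, the number of component models, and the toy data are hypothetical, not the authors' actual configuration.

```python
import numpy as np

def weighted_ensemble(prob_list, weights):
    """Weighted average of per-model class-probability arrays, each (n_samples, n_classes)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()              # normalize weights to sum to 1
    stacked = np.stack(prob_list)                  # (n_models, n_samples, n_classes)
    return np.tensordot(weights, stacked, axes=1)  # (n_samples, n_classes)

def top_k_accuracy(probs, labels, k):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    top_k = np.argsort(probs, axis=1)[:, -k:]      # indices of the k largest scores per row
    return float(np.mean([labels[i] in top_k[i] for i in range(len(labels))]))

# Toy example: 3 component models, 4 samples, 12 activity classes
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(12), size=4) for _ in range(3)]
labels = np.array([0, 3, 7, 11])

ensemble = weighted_ensemble(probs, weights=[0.5, 0.3, 0.2])
top1 = top_k_accuracy(ensemble, labels, k=1)
top5 = top_k_accuracy(ensemble, labels, k=5)
```

Top-5 accuracy is always at least Top-1 accuracy, since the top-1 prediction is contained in the top-5 set.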

RESULTS:

Across the 12 daily activities, our model achieved the best average Top-1 and Top-5 scores, 81.9 and 86.8, respectively. It also outperformed non-ensemble techniques in Top-1 and Top-5 scores for most activity classes, demonstrating the benefit of weighted ensembling.

CONCLUSIONS:

Deep learning can automatically classify daily activities of patient work captured by wearable cameras with high accuracy. Combining wearable cameras with a deep learning approach offers an alternative way to investigate patient work, one not subject to the biases commonly associated with self-report methods.

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Diabetes Mellitus, Type 2 / Wearable Electronic Devices / Deep Learning Study type: Prognostic_studies Limits: Humans Language: En Publication year: 2022 Document type: Article