Results 1 - 2 of 2
1.
Front Neurogenom; 4: 994969, 2023.
Article in English | MEDLINE | ID: mdl-38234474

ABSTRACT

Background: While efforts to establish best practices with functional near-infrared spectroscopy (fNIRS) signal processing have been published, there are still no community standards for applying machine learning to fNIRS data. Moreover, the lack of open-source benchmarks and standard expectations for reporting means that published works often claim high generalisation capabilities, but with poor practices or missing details in the papers themselves. These issues make it hard to evaluate model performance when choosing models for brain-computer interfaces. Methods: We present an open-source benchmarking framework, BenchNIRS, to establish a best practice machine learning methodology to evaluate models applied to fNIRS data, using five open access datasets for brain-computer interface (BCI) applications. The BenchNIRS framework, using a robust methodology with nested cross-validation, enables researchers to optimise models and evaluate them without bias. The framework also enables us to produce useful metrics and figures to detail the performance of new models for comparison. To demonstrate the utility of the framework, we present a benchmarking of six baseline models [linear discriminant analysis (LDA), support-vector machine (SVM), k-nearest neighbours (kNN), artificial neural network (ANN), convolutional neural network (CNN), and long short-term memory (LSTM)] on the five datasets and investigate the influence of different factors on the classification performance, including the number of training examples and the size of the time window of each fNIRS sample used for classification. We also present results with a sliding window as opposed to simple classification of epochs, and with a personalised approach (within-subject data classification) as opposed to a generalised approach (unseen-subject data classification).
Results and discussion: Results show that the performance is typically lower than the scores often reported in the literature, and without great differences between models, highlighting that predicting unseen data remains a difficult task. Our benchmarking framework provides future authors who achieve significantly high classification scores with a tool to demonstrate their advances in a comparable way. To complement our framework, we contribute a set of recommendations for methodology decisions and paper writing when applying machine learning to fNIRS data.
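The nested cross-validation methodology described in the abstract can be sketched with scikit-learn: an inner loop selects hyperparameters, and an outer loop scores the tuned model on data it never saw during tuning, avoiding optimistic bias. This is a generic illustration with synthetic data standing in for epoched fNIRS features; the BenchNIRS API itself is not shown and the feature shape is an assumption.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for epoched fNIRS features: 100 samples x 20 features
# (shape chosen for illustration only, not taken from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = rng.integers(0, 2, size=100)

# Inner loop: hyperparameter optimisation on the training folds only.
inner_cv = KFold(n_splits=3, shuffle=True, random_state=0)
grid = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=inner_cv)

# Outer loop: unbiased estimate of generalisation for the tuned model.
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(grid, X, y, cv=outer_cv)
print(scores.mean())
```

Because the labels here are random, the mean score should hover around chance level (0.5), which is exactly the kind of baseline the abstract argues published works should be compared against.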

2.
JMIR Serious Games; 10(2): e32489, 2022 Jun 20.
Article in English | MEDLINE | ID: mdl-35723912

ABSTRACT

BACKGROUND: Cognitive training and assessment technologies offer the promise of dementia risk reduction and a more timely diagnosis of dementia, respectively. Cognitive training games may help reduce the lifetime risk of dementia by helping to build cognitive reserve, whereas cognitive assessment technologies offer the opportunity for a more convenient approach to early detection or screening. OBJECTIVE: This study aims to elicit perspectives of potential end users on factors related to the acceptability of cognitive training games and assessment technologies, including their opinions on the meaningfulness of measurement of cognition, barriers to and facilitators of adoption, motivations to use games, and interrelationships with existing health care infrastructure. METHODS: Four linked workshops were conducted with the same group, each focusing on a specific topic: meaningful improvement, learning and motivation, trust in digital diagnosis, and barriers to technology adoption. Participants in the workshops included local involvement team members acting as facilitators and those recruited via Join Dementia Research through a purposive selection and volunteer sampling method. Group activities were recorded, and transcripts were analyzed using thematic analysis with a combination of a priori and data-driven themes. Using a mixed methods approach, we investigated the relationships between the categories of the Capability, Opportunity, and Motivation-Behavior change model along with data-driven themes by measuring the φ coefficient between coded excerpts and ensuring the reliability of our coding scheme by using independent reviewers and assessing interrater reliability. Finally, we explored these themes and their relationships to address our research objectives. 
RESULTS: In addition to discussions around the capability, motivation, and opportunity categories, several important themes emerged during the workshops: family and friends, cognition and mood, work and hobbies, and technology. Group participants mentioned the importance of functional and objective measures of cognitive change, the social aspect of activities as a motivating factor, and the opportunities and potential shortcomings of digital health care provision. Our quantitative results indicated at least moderate agreement on all but one of the coding schemes and good independence of our coding categories. Positive and statistically significant φ coefficients were observed between several coding themes between categories, including a relatively strong positive φ coefficient between capability and cognition (0.468; P<.001). CONCLUSIONS: The implications for researchers and technology developers include assessing how cognitive training and screening pathways would integrate into existing health care systems; however, further work needs to be undertaken to address barriers to adoption and the potential real-world impact of cognitive training and screening technologies. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.1007/978-3-030-49065-2_4.
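The φ coefficient reported between coded themes (e.g. 0.468 between capability and cognition) is, for two binary codings, equivalent to the Pearson correlation of 0/1 indicators, which scikit-learn exposes as `matthews_corrcoef`. The sketch below uses hypothetical binary codes (1 = excerpt tagged with the theme, 0 = not); the values are invented for illustration and are not the study's data.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef  # equals φ for binary variables

# Hypothetical coding of 10 excerpts against two themes (not real study data).
capability = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1])
cognition = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 1])

phi = matthews_corrcoef(capability, cognition)
print(round(phi, 3))  # a positive φ: the two themes tend to co-occur
```

A positive, statistically significant φ, as in the study, indicates that excerpts coded with one theme are disproportionately likely to also be coded with the other.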
