Results 1 - 3 of 3
1.
Front Neuroinform ; 18: 1320189, 2024.
Article in English | MEDLINE | ID: mdl-38420133

ABSTRACT

Introduction: Pain assessment is extremely important in patients who are unable to communicate, and it often relies on clinical judgement. However, assessing pain using observable indicators can be challenging for clinicians due to subjective perceptions, individual differences in pain expression, and potential confounding factors. There is therefore a need for an objective pain assessment method that can assist medical practitioners. Functional near-infrared spectroscopy (fNIRS) has shown promising results for assessing neural function in response to nociception and pain. Previous studies have explored the use of machine learning with hand-crafted features in the assessment of pain. Methods: In this study, we aim to expand on previous studies by exploring the use of deep learning models, namely a Convolutional Neural Network (CNN), a Long Short-Term Memory (LSTM) network, and a hybrid CNN-LSTM, to automatically extract features from fNIRS data, and by comparing these with classical machine learning models using hand-crafted features. Results: The deep learning models performed favourably in identifying different types of pain in our experiment using only fNIRS input data. The combination of CNN and LSTM in a hybrid model (CNN-LSTM) exhibited the highest performance (accuracy = 91.2%) in our problem setting. Statistical analysis of the accuracies using one-way ANOVA with Tukey's post-hoc test showed that the deep learning models significantly outperformed the baseline models. Discussion: Overall, the deep learning models showed their potential to learn features automatically without relying on manually extracted features, and the CNN-LSTM model could serve as a possible method of pain assessment in non-verbal patients. Future research is needed to evaluate the generalisation of this method of pain assessment to independent populations and in real-life scenarios.
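The hybrid CNN-LSTM architecture described in this abstract lends itself to a compact sketch. Below is a minimal, hypothetical Keras implementation of such a model for multichannel fNIRS time series; the window length, channel count, number of pain classes, and layer sizes are assumptions for illustration, not the configuration reported in the paper.

```python
# Hypothetical CNN-LSTM sketch for fNIRS pain classification.
# Inputs are assumed to be windows of shape (time_steps, channels);
# all sizes below are illustrative, not the authors' published setup.
import numpy as np
from tensorflow.keras import layers, models

N_TIMESTEPS = 200   # assumed samples per trial window
N_CHANNELS = 24     # assumed number of fNIRS channels
N_CLASSES = 4       # assumed number of pain categories

model = models.Sequential([
    layers.Input(shape=(N_TIMESTEPS, N_CHANNELS)),
    # 1D convolutions extract local temporal features from the signal
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # LSTM models longer-range temporal dependencies in the feature maps
    layers.LSTM(64),
    layers.Dropout(0.3),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy usage with random tensors standing in for real fNIRS recordings
X = np.random.rand(32, N_TIMESTEPS, N_CHANNELS).astype("float32")
y = np.random.randint(0, N_CLASSES, size=32)
model.fit(X, y, epochs=1, batch_size=8, verbose=0)
```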

2.
Pattern Recognit Lett ; 153: 67-74, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34876763

ABSTRACT

Coronavirus disease 2019 (COVID-19) is severely impacting the health and lives of many across the globe. Several methods are currently used to detect and monitor the progress of the disease, such as radiological imaging of patients' chests, assessing symptoms, and applying reverse transcription polymerase chain reaction (RT-PCR) tests. X-ray imaging is one of the popular techniques used to visualise the impact of the virus on the lungs. Although manual detection of this disease from radiology images is more common, it can be time-consuming and prone to human error. Hence, automated detection of lung pathologies due to COVID-19 using deep learning (DL) techniques can help yield accurate results for large databases. Large volumes of data are needed to achieve generalizable DL models; however, there are very few public databases available for automatically detecting COVID-19 disease pathologies. Standard data augmentation methods can be used to enhance the models' generalizability. In this research, the Extensive COVID-19 X-ray and CT Chest Images Dataset was used, and a generative adversarial network (GAN) coupled with a trained, semi-supervised CycleGAN (SSA-CycleGAN) was applied to augment the training dataset. A newly designed, fine-tuned Inception V3 transfer learning model was then developed to detect COVID-19. The results obtained from the proposed Inception-CycleGAN model indicated an accuracy of 94.2%, an area under the curve of 92.2%, a mean squared error of 0.27, and a mean absolute error of 0.16. The developed Inception-CycleGAN framework is ready to be tested with further COVID-19 chest X-ray images.
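For illustration, the transfer-learning stage of such a pipeline can be sketched as follows. This is a minimal, hypothetical Keras example of a fine-tuned Inception V3 binary classifier for chest X-ray images; it assumes the GAN/CycleGAN augmentation step has already produced the training images, and the classification head and training settings are illustrative rather than the authors' published configuration.

```python
# Hypothetical transfer-learning sketch: InceptionV3 backbone with a small
# binary classification head for COVID-19 vs. non-COVID chest X-rays.
# The GAN-augmented dataset is assumed to exist; random tensors are used here.
import numpy as np
from tensorflow.keras import layers, models, metrics
from tensorflow.keras.applications import InceptionV3

IMG_SIZE = (299, 299)  # InceptionV3's native input resolution

# ImageNet weights are downloaded on first use; freeze them for the first pass
base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # COVID-19 vs. non-COVID
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", metrics.AUC(name="auc")])

# Toy usage with random tensors standing in for (augmented) X-ray images
X = np.random.rand(8, 299, 299, 3).astype("float32")
y = np.random.randint(0, 2, size=8).astype("float32")
model.fit(X, y, epochs=1, batch_size=4, verbose=0)
```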

3.
Artif Intell Med ; 109: 101954, 2020 Sep.
Article in English | MEDLINE | ID: mdl-34756219

ABSTRACT

This paper reports on research to design an ensemble deep learning framework that integrates a fine-tuned, three-stream hybrid deep neural network (the Ensemble Deep Learning Model, EDLM), employing Convolutional Neural Networks (CNNs) to extract facial image features and to detect and accurately classify pain. To develop the approach, VGGFace is fine-tuned, integrated with Principal Component Analysis, and employed to extract features from images in the Multimodal Intensity Pain database at the early phase of model fusion. Subsequently, a late-fusion, three-layer hybrid CNN and recurrent neural network algorithm is developed, with the streams' outputs merged to produce image-classified features for classifying pain levels. The EDLM is then benchmarked against a single-stream deep learning model and several competing deep learning-based models. The results indicate that the proposed framework outperforms the competing methods on a multi-level pain detection database, producing a feature classification accuracy exceeding 89%, with a receiver operating characteristic of 93%. To evaluate the generalization of the proposed EDLM, the UNBC-McMaster Shoulder Pain dataset is used as a test dataset for all of the modelling experiments, which reveals the efficacy of the proposed method for pain classification from facial images. The study concludes that the proposed EDLM can accurately classify pain and generate multi-class pain levels for potential applications in medical informatics, and should therefore be explored further in expert systems for detecting, classifying, and automatically evaluating patients' pain intensity.
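As a rough illustration of the late-fusion idea, the sketch below builds a hypothetical two-stream Keras model: one convolutional stream over single face images (standing in for the fine-tuned VGGFace + PCA stream) and one per-frame CNN followed by a recurrent layer over image sequences, with the streams concatenated before a multi-class pain-level classifier. All input sizes, layer widths, and the number of pain levels are assumptions, and the actual EDLM uses three streams and different components.

```python
# Hypothetical late-fusion sketch in the spirit of the EDLM (not the
# authors' architecture): an image stream and a sequence (CNN + RNN) stream
# merged before a multi-class pain-level classifier.
import numpy as np
from tensorflow.keras import layers, models

IMG = (64, 64, 3)   # assumed face-crop size
SEQ_LEN = 8         # assumed number of frames per sample
N_LEVELS = 4        # assumed number of pain-intensity classes

# Stream 1: single-image convolutional features
img_in = layers.Input(shape=IMG)
x1 = layers.Conv2D(32, 3, activation="relu")(img_in)
x1 = layers.MaxPooling2D()(x1)
x1 = layers.Conv2D(64, 3, activation="relu")(x1)
x1 = layers.GlobalAveragePooling2D()(x1)

# Stream 2: a small CNN applied per frame, followed by a recurrent layer
seq_in = layers.Input(shape=(SEQ_LEN,) + IMG)
frame_cnn = models.Sequential([
    layers.Input(shape=IMG),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
])
x2 = layers.TimeDistributed(frame_cnn)(seq_in)
x2 = layers.LSTM(64)(x2)

# Late fusion: concatenate both streams and classify the pain level
merged = layers.Concatenate()([x1, x2])
out = layers.Dense(N_LEVELS, activation="softmax")(merged)

model = models.Model(inputs=[img_in, seq_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy usage with random tensors standing in for facial images
imgs = np.random.rand(4, *IMG).astype("float32")
seqs = np.random.rand(4, SEQ_LEN, *IMG).astype("float32")
labels = np.random.randint(0, N_LEVELS, size=4)
model.fit([imgs, seqs], labels, epochs=1, verbose=0)
```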


Subject(s)
Facial Expression; Neural Networks, Computer; Algorithms; Databases, Factual; Humans; Pain