Results 1 - 5 of 5
1.
Digit Health ; 10: 20552076241236635, 2024.
Article in English | MEDLINE | ID: mdl-38434792

ABSTRACT

Background: ChatGPT is an artificial intelligence-based large language model (LLM). ChatGPT has been widely applied in medicine, but its application in occupational therapy has been lacking. Objective: This study examined the accuracy of ChatGPT on the National Korean Occupational Therapy Licensing Examination (NKOTLE) and investigated its potential for application in the field of occupational therapy. Methods: ChatGPT 3.5 was tested on the most recent five years of NKOTLE questions using Korean prompts. Multiple-choice questions were entered manually by three independent encoders and scored according to the number of correct answers. Results: Across the five years, ChatGPT did not reach the passing threshold of 60% accuracy, while interrater agreement was 0.6 or higher. Conclusion: ChatGPT could not pass the NKOTLE but showed a high level of agreement between raters. Although ChatGPT cannot currently pass the NKOTLE, it performed very close to the passing level even with Korean-only prompts.
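The interrater agreement mentioned above can be illustrated with a pairwise Cohen's kappa computation, a sketch in pure Python; the encoder labels below are hypothetical, not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Pairwise Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters recorded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical transcriptions of ChatGPT's answer choices by two encoders.
enc1 = ["A", "B", "B", "C", "D", "A", "C", "B", "A", "D"]
enc2 = ["A", "B", "C", "C", "D", "A", "C", "B", "A", "D"]
kappa = cohens_kappa(enc1, enc2)
```

With three encoders, agreement would typically be summarized as the mean of the three pairwise kappas or with Fleiss' kappa.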

2.
Bioengineering (Basel) ; 10(8)2023 Jul 27.
Article in English | MEDLINE | ID: mdl-37627775

ABSTRACT

The increasing prevalence of machine learning (ML) and automated machine learning (AutoML) applications across diverse industries necessitates rigorous comparative evaluation of their predictive accuracy under various computational environments. The purpose of this research was to compare and analyze the predictive accuracy of several machine learning algorithms, including RNNs, LSTMs, GRUs, XGBoost, and LightGBM, when implemented on different platforms such as Google Colab Pro, AWS SageMaker, GCP Vertex AI, and MS Azure. The predictive performance of each model within its respective environment was assessed using metrics such as accuracy, precision, recall, F1-score, and log loss. All algorithms were trained on the same dataset and implemented on their specified platforms to ensure consistent comparisons. The dataset comprised fitness images covering 41 exercise types and totaling 6 million samples. The images were acquired from AI-hub, and joint coordinate values (x, y, z) were extracted using the Mediapipe library and stored in CSV format. Among the ML algorithms, LSTM demonstrated the highest performance, achieving an accuracy of 73.75%, precision of 74.55%, recall of 73.68%, F1-score of 73.11%, and a log loss of 0.71. Among the AutoML configurations, XGBoost performed exceptionally well on AWS SageMaker, with an accuracy of 99.6%, precision of 99.8%, recall of 99.2%, F1-score of 99.5%, and a log loss of 0.014, whereas LightGBM exhibited the poorest performance on MS Azure, with an accuracy of 84.2%, precision of 82.2%, recall of 81.8%, F1-score of 81.5%, and a log loss of 1.176. The unnamed algorithm implemented on GCP Vertex AI produced relatively favorable results, with an accuracy of 89.9%, precision of 94.2%, recall of 88.4%, F1-score of 91.2%, and a log loss of 0.268. Despite LightGBM's weak performance on MS Azure, the GRU implemented in Google Colab Pro produced encouraging results, with an accuracy of 88.2%, precision of 88.5%, recall of 88.1%, F1-score of 88.4%, and a log loss of 0.44. Overall, this study revealed significant variation in performance across algorithms and platforms. In particular, AWS SageMaker's implementation of XGBoost outperformed the other configurations, highlighting the importance of carefully choosing both the algorithm and the computational environment for predictive tasks. Further investigation is recommended to understand the factors contributing to these performance discrepancies.
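The metrics used above (accuracy plus macro-averaged precision, recall, and F1-score) can be computed directly from predicted and true labels; a minimal pure-Python sketch, with three made-up exercise classes standing in for the study's 41 types:

```python
from collections import defaultdict

def classification_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision, recall, and F1-score."""
    classes = sorted(set(y_true) | set(y_pred))
    tp = defaultdict(int); fp = defaultdict(int); fn = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct prediction for class t
        else:
            fp[p] += 1          # p was predicted but t was correct
            fn[t] += 1          # t was missed
    precisions, recalls, f1s = [], [], []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if (tp[c] + fn[c]) else 0.0
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        precisions.append(prec); recalls.append(rec); f1s.append(f1)
    n = len(classes)
    return {
        "accuracy": sum(tp.values()) / len(y_true),
        "precision": sum(precisions) / n,   # macro average
        "recall": sum(recalls) / n,
        "f1": sum(f1s) / n,
    }

# Hypothetical exercise-type labels, not the study's data.
y_true = ["squat", "lunge", "plank", "squat", "lunge", "plank"]
y_pred = ["squat", "lunge", "plank", "lunge", "lunge", "plank"]
metrics = classification_metrics(y_true, y_pred)
```

Macro averaging weights every class equally, which matters when the 41 exercise types are imbalanced; the study does not state which averaging it used.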

3.
Article in English | MEDLINE | ID: mdl-36834197

ABSTRACT

BACKGROUND: The concept of virtual reality (VR)-based rehabilitation therapy for treating people with low back pain is of growing research interest. However, the effectiveness of such therapy for pain reduction in clinical settings remains controversial. METHODS: The present study was conducted according to the reporting guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. We searched the PubMed, Embase, CENTRAL, and ProQuest databases for both published and unpublished papers. The Cochrane risk of bias tool (version 2) was used to evaluate the quality of the selected studies, GRADEprofiler software (version 3.6.4) was used to evaluate the level of evidence, and the included results were analyzed using RevMan software (version 5.4.1). RESULTS: We included a total of 11 articles in the systematic review and meta-analysis, with a total of 1761 subjects. Quality assessment showed that the risk of bias was generally low, although heterogeneity was high. The results revealed a small to medium effect (standardized mean difference = -0.37, 95% confidence interval: -0.75 to 0) based on evidence of moderate overall quality. CONCLUSION: There is evidence that VR-based treatment reduces patients' pain. The effect size was small to medium, and the underlying evidence was of moderate overall quality. Because VR-based treatment can reduce pain, it may be a useful component of rehabilitation therapy.
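A standardized mean difference of the kind reported above is commonly computed as Cohen's d with Hedges' small-sample correction; a sketch on made-up post-treatment pain scores (not the review's data), where a negative value favors the VR group:

```python
import math

def hedges_g(group_a, group_b):
    """Standardized mean difference with Hedges' small-sample correction."""
    n1, n2 = len(group_a), len(group_b)
    m1 = sum(group_a) / n1
    m2 = sum(group_b) / n2
    # Unbiased sample variances.
    v1 = sum((x - m1) ** 2 for x in group_a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group_b) / (n2 - 1)
    # Pooled standard deviation across both groups.
    sd_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    # Hedges' correction factor J removes small-sample bias from d.
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Hypothetical post-treatment pain scores (lower = less pain).
vr_group      = [3.6, 2.9, 4.4, 3.2, 4.8, 3.9]
control_group = [4.1, 3.4, 4.9, 3.8, 5.2, 4.3]
g = hedges_g(vr_group, control_group)  # negative: VR group reports less pain
```

In the meta-analysis itself, such per-study effects would then be pooled across the 11 trials (e.g. in RevMan) rather than read off a single comparison.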


Subject(s)
Low Back Pain ; Virtual Reality ; Humans
4.
Article in English | MEDLINE | ID: mdl-36497989

ABSTRACT

Previous studies have reported that virtual reality (VR)-based exposure therapy (VRET) is a clinically beneficial intervention for specific phobias. However, little is known about the efficacy of one form of VRET, VR-based graded exposure therapy (VR-GET), for posttraumatic stress disorder (PTSD) symptoms. This meta-analysis therefore investigated the effects of VR-GET on PTSD symptoms. A literature search yielded seven randomized controlled trials. For each study, the between-condition difference in the primary outcome of PTSD symptoms was calculated as an effect size using Hedges' g. VR-GET showed a significantly larger effect on PTSD symptoms than controls (g = 1.100, p = 0.001), whereas no significant difference was found between conventional VRET and controls (g = -0.279, p = 0.970). These findings indicate the superiority of VR-GET over controls for PTSD symptoms, supporting the importance of immersive PTSD treatments. Nevertheless, the results should be interpreted with caution because a substantial proportion of the included studies involved military service personnel. Future trials that use individually tailored virtual-environment scenarios covering a wider range of trauma types are required to strengthen the evidence for treating PTSD.
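Combining per-study Hedges' g values into a single pooled effect, as a meta-analysis like this one does, is typically done by inverse-variance weighting; a minimal fixed-effect sketch on hypothetical study-level values (not the seven trials' actual data):

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of per-study effect sizes."""
    weights = [1.0 / v for v in variances]   # more precise studies weigh more
    w_sum = sum(weights)
    g_pooled = sum(w * g for w, g in zip(weights, effects)) / w_sum
    se = math.sqrt(1.0 / w_sum)              # standard error of the pooled estimate
    ci = (g_pooled - 1.96 * se, g_pooled + 1.96 * se)
    return g_pooled, ci

# Hypothetical per-study Hedges' g values and their variances.
effects   = [1.3, 0.9, 1.2, 0.8]
variances = [0.10, 0.08, 0.12, 0.09]
g_pooled, ci = pooled_effect(effects, variances)
```

A random-effects model, which adds a between-study variance component to each weight, is usually preferred when the trials differ in population or protocol, as the military-heavy sample here suggests.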


Asunto(s)
Terapia Implosiva , Personal Militar , Trastornos por Estrés Postraumático , Terapia de Exposición Mediante Realidad Virtual , Humanos , Terapia Implosiva/métodos , Trastornos por Estrés Postraumático/terapia , Trastornos por Estrés Postraumático/diagnóstico , Resultado del Tratamiento , Terapia de Exposición Mediante Realidad Virtual/métodos
5.
Healthcare (Basel) ; 9(11)2021 Nov 18.
Article in English | MEDLINE | ID: mdl-34828625

ABSTRACT

The purpose of this study was to classify upper limb tension test (ULTT) videos through transfer learning with pre-trained deep learning models and to compare the performance of those models. We conducted transfer learning by incorporating pre-trained convolutional neural network (CNN) models into a Python-based deep learning pipeline. Videos were sourced from YouTube, and 103,116 frames converted from the video clips were analyzed. In the modeling implementation, the steps of importing the required modules, preprocessing the data for training, defining the model, compiling it, and fitting it were applied in sequence. The compared models were Xception, InceptionV3, DenseNet201, NASNetMobile, DenseNet121, VGG16, VGG19, and ResNet101, and fine-tuning was performed. They were trained in a high-performance computing environment, and validation accuracy and validation loss were measured as comparative indicators of performance. The Xception, InceptionV3, and DenseNet201 models achieved relatively low validation loss and high validation accuracy, indicating that they performed well compared with the other models, whereas VGG16, VGG19, and ResNet101 showed relatively high validation loss and low validation accuracy. The differences in validation accuracy and validation loss among the Xception, InceptionV3, and DenseNet201 models were small. This study suggests that training with transfer learning can classify ULTT videos and that performance differs between models.
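The transfer-learning sequence described above (import modules, preprocess, define, compile, fit) can be sketched with Keras; this is a minimal illustration, not the study's implementation, and the input size and class count are assumptions:

```python
# Minimal transfer-learning sketch with Keras; assumes TensorFlow is installed.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 4  # hypothetical number of ULTT video categories

# Load an Xception backbone without its classification head.
# In practice weights="imagenet" loads pre-trained weights; None avoids a download here.
base = tf.keras.applications.Xception(
    weights=None,
    include_top=False,
    input_shape=(224, 224, 3),
)
base.trainable = False  # freeze the backbone for feature extraction

# Attach a new classification head for the target classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# A single dummy frame stands in for a preprocessed video frame.
frame = np.random.rand(1, 224, 224, 3).astype("float32")
probs = model.predict(frame, verbose=0)  # one probability per class
```

Fine-tuning, as performed in the study, would follow this feature-extraction stage by unfreezing some top backbone layers and continuing training at a low learning rate.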
