Results 1 - 2 of 2
1.
Int Wound J; 21(4): e14565, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38146127

ABSTRACT

Chronic wounds impose a significant healthcare and economic burden worldwide. Wound assessment remains challenging given its complex and dynamic nature. The use of artificial intelligence (AI) and machine learning methods in wound analysis is promising, and explainable modelling can aid their integration and acceptance in healthcare systems. We aimed to develop an explainable AI model for analysing vascular wound images among an Asian population. Two thousand nine hundred and fifty-seven wound images from a vascular wound image registry at a tertiary institution in Singapore were used, and the dataset was split into training, validation and test sets. Wound images were classified into four types (neuroischaemic ulcer [NIU], surgical site infection [SSI], venous leg ulcer [VLU], pressure ulcer [PU]), measured with automatic estimation of width, length and depth, and segmented into 18 wound and peri-wound features. Data pre-processing was performed using oversampling and augmentation techniques. Convolutional and deep learning models were used for model development, and the model was evaluated with accuracy, F1 score and receiver operating characteristic (ROC) curves. Explainability methods were used to interpret the model's decision reasoning, and a web browser application was developed to demonstrate the results of the wound AI model together with its explanations. After development on the training and validation sets, the model was tested on an additional 15 476 unlabelled images to evaluate its effectiveness. On unseen labelled images in the test set, the model achieved an AUROC of 0.99 for wound classification with a mean accuracy of 95.9%. For wound measurements, the model achieved an AUROC of 0.97 with a mean accuracy of 85.0% for depth classification, and an AUROC of 0.92 with a mean accuracy of 87.1% for width and length determination. For wound segmentation, an AUROC of 0.95 and a mean accuracy of 87.8% were achieved. On the unlabelled images, the model's confidence score for wound classification was 82.8%, with an explainability score of 60.6%. The confidence score for depth classification was 87.6% with a 68.0% explainability score, while width and length measurement obtained a 93.0% accuracy score with 76.6% explainability. The confidence score for wound segmentation was 83.9%, while explainability was 72.1%. Using explainable AI models, we have developed an algorithm and application for the analysis of vascular wound images from an Asian population that is both accurate and explainable. With further development, it could be used as a clinical decision support system and integrated into existing electronic healthcare systems.
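As an illustration of the evaluation described above, the following is a minimal sketch of how the reported metrics (accuracy, F1 score and AUROC) can be computed for a four-class classifier with scikit-learn. The class labels follow the abstract (NIU, SSI, VLU, PU); the labels, probabilities and predictions below are synthetic placeholders, not the paper's model or data.

```python
# Minimal metric-computation sketch; all data are synthetic stand-ins.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

rng = np.random.default_rng(0)
classes = ["NIU", "SSI", "VLU", "PU"]              # class order from the abstract

y_true = rng.integers(0, len(classes), size=500)   # ground-truth labels
logits = rng.normal(size=(500, len(classes)))
logits[np.arange(500), y_true] += 2.0              # make the fake "model" informative
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
y_pred = probs.argmax(axis=1)

print("accuracy:", accuracy_score(y_true, y_pred))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
print("AUROC (one-vs-rest):", roc_auc_score(y_true, probs, multi_class="ovr"))
```

The one-vs-rest AUROC treats each wound type in turn as the positive class and averages the per-class curves, a common convention when a single multi-class AUROC is reported; the paper does not state which averaging it used.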


Subject(s)
Algorithms, Artificial Intelligence, Humans, Software, Machine Learning, Health Facilities
2.
Heliyon; 10(11): e31692, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38845865

ABSTRACT

Background: Few studies have examined the efficacy of immersive virtual reality (iVR) for teaching surgical skills, especially using real-world iVR recordings rather than simulations. This study aimed to investigate whether viewing 360° iVR instructional recordings produces greater improvements in the basic suturing skills of students without prior medical training than traditional methods such as reading written manuals or watching 2D instructional videos. Materials and methods: This was a partially blinded randomized cohort study. Forty-four pre-university students (aged 17) were randomized equally to a written instruction manual, a 2D instructional video, or iVR recordings. All students first watched a silent 2D demonstration video of the suturing task before attempting to place three simple interrupted sutures on a piece of meat as a baseline; the time taken for this first attempt was recorded. Students were then given an hour to train using their allocated modality, after which they attempted the suturing task again and timings were re-recorded. Four blinded, surgically trained judges independently assessed the quality of the stitches placed both pre- and post-intervention. One-way analysis of variance (ANOVA) tests and independent two-sample t-tests were used to determine the effect of training modality on improvements in suturing scores and in time taken to complete suturing from pre- to post-training. Results: For suturing scores, the iVR group showed significantly larger improvements than the Written Manual group (p = 0.031, Cohen's d = 0.92), while its advantage over the 2D Video group was less pronounced (p = 0.16, Cohen's d = 0.65). Similarly, for time taken to complete suturing, the iVR group had significantly larger improvements than the Written Manual group (p = 0.045), although its difference from the 2D Instructional Video group did not reach significance (p = 0.34). Conclusion: This study demonstrates that iVR training using real-world 360° instructional recordings produced significantly greater training gains in suturing scores and efficiency than reading a written manual. iVR training also led to larger gains in both outcome measures than viewing 2D instructional videos, although these differences did not reach statistical significance.
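As an illustration of the statistical analysis described in the methods, the following is a minimal sketch of a one-way ANOVA across the three training groups, followed by one pairwise two-sample t-test with a pooled-standard-deviation Cohen's d, using SciPy. The group names follow the abstract; the improvement scores and group sizes are synthetic placeholders, not the study's data.

```python
# Minimal ANOVA / t-test / Cohen's d sketch; all scores are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
manual = rng.normal(1.0, 1.0, size=15)  # score improvement, Written Manual group
video = rng.normal(1.4, 1.0, size=15)   # 2D Instructional Video group
ivr = rng.normal(2.0, 1.0, size=14)     # iVR recordings group

# Omnibus test: does training modality affect score improvement at all?
f_stat, p_anova = stats.f_oneway(manual, video, ivr)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

def cohens_d(a, b):
    """Pooled-standard-deviation effect size for two independent samples."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Pairwise comparison, e.g. iVR vs Written Manual
t_stat, p_pair = stats.ttest_ind(ivr, manual)
print(f"iVR vs Manual: t = {t_stat:.2f}, p = {p_pair:.3f}, d = {cohens_d(ivr, manual):.2f}")
```

With 44 students split across three groups the sizes cannot be exactly equal; the 15/15/14 split here is only an assumption for the sketch.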
