Results 1 - 3 of 3
1.
Br J Pain ; 17(3): 239-243, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37342397

ABSTRACT

The objective of this work was to evaluate the inter-rater and intra-rater reliability and the minimal detectable difference (MDD) of pressure pain thresholds (PPTs) in pain-free participants, with two examiners over two consecutive days, in a cross-sectional study design. Examiners used a standardized method to locate a specific testing site over tibialis anterior and measured PPTs with a hand-held algometer. The mean of each examiner's three PPT measurements was used to calculate intraclass correlation coefficients for inter-rater and intra-rater reliability, and the MDD was calculated. Eighteen participants were recruited (11 female). Inter-rater reliability was 0.94 and 0.96 on day 1 and day 2, respectively; intra-rater reliability for the examiners was 0.96 and 0.92 on day 1 and day 2, respectively. The MDD was 1.24 kg/cm² (CI: 0.76-2.03) on day 1 and 0.88 kg/cm² (CI: 0.54-1.43) on day 2. This study demonstrates high inter- and intra-rater reliability and reports MDD values for this method of pressure algometry.
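To illustrate the statistics reported in this abstract, the following is a minimal sketch (not the study's analysis code) of how an intraclass correlation coefficient and the MDD can be computed from a subjects-by-raters matrix of PPT values. The ICC(2,1) form, the 1.96 multiplier, and the simulated data are assumptions made for demonstration only.

```python
# Minimal sketch: ICC(2,1) from a subjects x raters matrix, plus SEM and MDD.
# The ICC form, z multiplier, and data below are illustrative assumptions.
import numpy as np

def icc_2_1(data):
    """Two-way random-effects, absolute-agreement, single-measurement ICC(2,1).

    data: (n_subjects, k_raters) array of ratings (e.g. mean PPT in kg/cm^2).
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)                       # between-subjects mean square
    ms_cols = ss_cols / (k - 1)                       # between-raters mean square
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def mdd(data, icc, z=1.96):
    """Minimal detectable difference: MDD = z * sqrt(2) * SEM, with SEM = SD * sqrt(1 - ICC)."""
    sd = np.asarray(data, dtype=float).std(ddof=1)
    sem = sd * np.sqrt(1.0 - icc)
    return z * np.sqrt(2.0) * sem

# Hypothetical PPT data: 18 participants x 2 examiners (mean of 3 trials each).
rng = np.random.default_rng(0)
ppt = rng.normal(5.0, 1.2, size=(18, 1)) + rng.normal(0.0, 0.3, size=(18, 2))
icc = icc_2_1(ppt)
print(f"inter-rater ICC(2,1) = {icc:.2f}, MDD = {mdd(ppt, icc):.2f} kg/cm^2")
```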

2.
Pain ; 164(10): 2148-2190, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37027149

ABSTRACT

Interpatient variability is frequently observed among individuals with chronic low back pain (cLBP). This review aimed to identify phenotypic domains and characteristics that account for interpatient variability in cLBP. We searched the MEDLINE ALL (through Ovid), Embase Classic and EMBASE (through Ovid), Scopus, and CINAHL Complete (through EBSCOhost) databases. Studies that aimed to identify or predict different cLBP phenotypes were included; studies that focused on specific treatments were excluded. Methodological quality was assessed using an adaptation of the Downs and Black tool. Forty-three studies were included. Although the patient- and pain-related characteristics used to identify phenotypes varied considerably across studies, the most commonly identified phenotypic domains and characteristics accounting for interpatient variability in cLBP were: pain-related characteristics (including location, severity, qualities, and duration) and pain impact (including disability, sleep, and fatigue); psychological domains (including anxiety and depression); behavioral domains (including coping, somatization, fear avoidance, and catastrophizing); social domains (including employment and social support); and sensory profiling (including pain sensitivity and sensitization). Despite these findings, our review showed that the evidence on pain phenotyping still requires further investigation. The assessment of methodological quality revealed several limitations. We recommend adopting a standard methodology to enhance the generalizability of results, and implementing a comprehensive and feasible assessment framework to facilitate personalized treatments in clinical settings.


Subject(s)
Chronic Pain , Low Back Pain , Humans , Low Back Pain/psychology , Anxiety , Adaptation, Psychological , Fear/psychology , Catastrophization , Chronic Pain/psychology
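The review does not prescribe an algorithm, but many phenotyping studies of the kind it summarizes derive phenotypes by clustering patients on multi-domain scores. The sketch below shows one such approach (standardized domain scores followed by k-means) on hypothetical data; the domain names, number of clusters, and data are assumptions for illustration, not findings of the review.

```python
# Illustrative sketch only: cluster patients into candidate cLBP phenotypes
# from standardized multi-domain scores. All names and data are hypothetical.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

domains = ["pain_severity", "disability", "anxiety", "depression",
           "catastrophizing", "pain_sensitivity"]

rng = np.random.default_rng(1)
scores = rng.normal(size=(200, len(domains)))       # hypothetical patient-level domain scores

X = StandardScaler().fit_transform(scores)          # put domains on a common scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Inspect each candidate phenotype as its mean standardized profile across domains.
for c in range(3):
    profile = X[labels == c].mean(axis=0)
    print(f"phenotype {c}:", dict(zip(domains, profile.round(2))))
```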
3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 4002-4005, 2021 11.
Article in English | MEDLINE | ID: mdl-34892108

ABSTRACT

Ultrasound (US) imaging is a widely used clinical technique that requires extensive training to use correctly. Good-quality US images are essential for effective interpretation of the results; however, numerous sources of error can impair quality. Currently, image quality assessment is performed by an experienced sonographer through visual inspection, which is usually unachievable for inexperienced users. An autoencoder (AE) is a machine learning technique that has been shown to be effective at anomaly detection and could be used for fast and effective image quality assessment. In this study, we explored the use of an AE to distinguish between good- and poor-quality US images (degraded by artifacts and noise) by using the reconstruction error to train and test a random forest classifier (RFC). Good- and poor-quality ultrasound images were obtained from forty-nine healthy subjects and were used to train an AE with two different loss functions, one based on the structural similarity index measure (SSIM) and the other on the mean squared error (MSE). The reconstruction error of each image was then used to train and test an RFC that classified the images into two groups based on quality. Using the SSIM-based AE, the classifier showed an average accuracy of 71% ± 4.0% when classifying images based on user errors and an accuracy of 91% ± 1.0% when sorting images based on noise. The respective accuracies obtained with the MSE-based AE were 76% ± 2.0% and 83% ± 2.0%. These results demonstrate that an AE has the potential to differentiate good-quality US images from poor-quality ones, which could help less experienced researchers and clinicians obtain a more objective measure of image quality when using US.


Subject(s)
Artifacts , Machine Learning , Humans , Ultrasonography
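The following is a minimal sketch of an AE + RFC pipeline of the kind described in this abstract: a small convolutional autoencoder trained with MSE reconstruction loss, with per-image reconstruction errors used as the feature for a random forest classifier. The architecture, image size, training split (good-quality images only, a common anomaly-detection convention), and toy data are assumptions, not the authors' implementation.

```python
# Minimal sketch: autoencoder reconstruction error as input to a random forest
# classifier for image-quality classification. Hyperparameters are illustrative.
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_ae(model, images, epochs=20, lr=1e-3):
    """Train the AE to reconstruct its inputs using MSE loss (SSIM could be swapped in)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), images)
        loss.backward()
        opt.step()
    return model

def reconstruction_errors(model, images):
    """Per-image mean squared reconstruction error, used as the classifier feature."""
    with torch.no_grad():
        recon = model(images)
        err = ((recon - images) ** 2).mean(dim=(1, 2, 3))
    return err.numpy().reshape(-1, 1)

# Toy data standing in for good/poor-quality ultrasound frames (64x64 grayscale).
good = torch.rand(32, 1, 64, 64)
poor = torch.clamp(torch.rand(32, 1, 64, 64) + 0.3 * torch.randn(32, 1, 64, 64), 0, 1)

ae = train_ae(ConvAE(), good)                        # trained on good-quality images only
X = reconstruction_errors(ae, torch.cat([good, poor]))
y = [0] * len(good) + [1] * len(poor)                # 0 = good, 1 = poor quality
rfc = RandomForestClassifier(n_estimators=100).fit(X, y)
print("training accuracy:", rfc.score(X, y))
```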