Results 1 - 5 of 5
1.
J Health Care Poor Underserved; 34(1): 399-424, 2023.
Article in English | MEDLINE | ID: mdl-37464502

ABSTRACT

Hispanic/Latino representation in medical research remains poor. We describe factors affecting rates of recruitment, participation, adherence, and retention of Hispanics/Latinos in clinical studies in the United States and characterize proposed strategies to improve these rates. A targeted literature review was conducted; relevant studies were identified from Embase, MEDLINE®, and CENTRAL from January 1, 2010 to September 4, 2020. Sixty-eight studies were included. Key facilitators of research involvement were establishing trust between research staff and participants, incorporating familism, and using culturally appropriate language. Common elements of successful strategies for improving research involvement included engaging community partners, employing bilingual and culturally competent research staff, building continuous engagement and relationships between participants and staff, and incorporating Hispanic/Latino cultural values. There is no universal strategy for improving the research involvement of Hispanics/Latinos; the best approach is likely a combination of key elements from several strategies, tailored to each unique study population. Further research is needed.


Subject(s)
Clinical Trials as Topic, Hispanic or Latino, Observational Studies as Topic, Patient Participation, Humans, United States
2.
J Med Internet Res; 24(11): e37683, 2022 Nov 21.
Article in English | MEDLINE | ID: mdl-36409538

ABSTRACT

BACKGROUND: With the advent of smart sensing technology, mobile and wearable devices can provide continuous and objective monitoring and assessment of motor function outcomes. OBJECTIVE: We aimed to describe the existing scientific literature on wearable and mobile technologies that are being used or tested for assessing motor function in mobility-impaired and healthy adults and to evaluate the degree to which these devices provide clinically valid measures of motor function in these populations. METHODS: A systematic literature review was conducted by searching Embase, MEDLINE, CENTRAL (January 1, 2015, to June 24, 2020), the United States and European Union clinical trial registries, and the United States Food and Drug Administration website using predefined study selection criteria. Study selection, data extraction, and quality assessment were performed by 2 independent reviewers. RESULTS: A total of 91 publications representing 87 unique studies were included. The most represented clinical condition was Parkinson disease (n=51 studies), followed by stroke (n=5), Huntington disease (n=5), and multiple sclerosis (n=2). A total of 42 motion-detecting devices were identified; the majority (n=27, 64%) were created specifically for health care-related data collection, whereas approximately 25% were personal electronic devices (eg, smartphones and watches) and 11% were entertainment consoles (eg, Microsoft Kinect or Xbox and Nintendo Wii). The primary motion outcomes were related to gait (n=30), gross motor movements (n=25), and fine motor movements (n=23). As a group, sensor-derived motion data showed a mean sensitivity of 0.83 (SD 7.27), a mean specificity of 0.84 (SD 15.40), and a mean accuracy of 0.90 (SD 5.87) in discriminating between diseased individuals and healthy controls, and a mean Pearson r validity coefficient of 0.52 (SD 0.22) relative to clinical measures. We did not find significant differences in the degree of validity between in-laboratory and at-home sensor-based assessments or between device classes (ie, health care-related devices, personal electronic devices, and entertainment consoles). CONCLUSIONS: Sensor-derived motion data can be leveraged to classify and quantify disease status for a variety of neurological conditions. However, most of the recent research on digital clinical measures is derived from proof-of-concept studies with considerable variation in methodological approaches, and much of the reviewed literature has focused on clinical validation, with less than one-quarter of the studies performing analytical validation. Overall, further research is crucially needed to establish whether sensor-derived motion data can support the development of robust and transformative digital measures for predicting, diagnosing, and quantifying neurological disease states and their longitudinal change.
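
To make the pooled validity metrics concrete, the following is a minimal sketch of how sensitivity, specificity, accuracy, and a Pearson r validity coefficient are computed for a sensor-based classifier against a clinical reference. All arrays below are hypothetical illustration data, not values from the review.

    # Hypothetical example: validity metrics for sensor-derived motion data.
    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical labels: 1 = diseased, 0 = healthy control
    y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 1, 1, 0])  # sensor-based classification

    tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
    tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives

    sensitivity = tp / (tp + fn)        # diseased correctly detected
    specificity = tn / (tn + fp)        # controls correctly ruled out
    accuracy = (tp + tn) / len(y_true)

    # Hypothetical continuous scores: sensor metric vs. clinical scale
    sensor_score = np.array([4.1, 3.8, 5.2, 2.9, 3.3, 4.7, 2.5, 3.0, 4.9, 2.7])
    clinical_score = np.array([40, 35, 52, 28, 30, 46, 22, 31, 50, 25])
    r, p = pearsonr(sensor_score, clinical_score)  # validity coefficient

    print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
          f"accuracy={accuracy:.2f} r={r:.2f}")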


Subject(s)
Parkinson Disease, Wearable Electronic Devices, Adult, Humans, Gait, Health Status
3.
J Clin Epidemiol; 136: 157-167, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33979663

ABSTRACT

OBJECTIVES: To evaluate the impact of guidance and training on the inter-rater reliability (IRR), inter-consensus reliability (ICR), and evaluator burden of the Risk of Bias (RoB) in Non-randomized Studies (NRS) of Interventions (ROBINS-I) tool and the RoB instrument for NRS of Exposures (ROB-NRSE). STUDY DESIGN AND SETTING: In a before-and-after study, seven reviewers appraised the RoB using ROBINS-I (n = 44) and ROB-NRSE (n = 44), before and after guidance and training. We used Gwet's AC1 statistic to calculate IRR and ICR. RESULTS: After guidance and training, the IRR and ICR of the overall bias domain of ROBINS-I and ROB-NRSE improved significantly, with many individual domains showing either a significant improvement (IRR and ICR of ROB-NRSE; ICR of ROBINS-I) or a nonsignificant improvement (IRR of ROBINS-I). Evaluator burden decreased significantly after guidance and training for ROBINS-I, whereas for ROB-NRSE there was a slight, nonsignificant increase. CONCLUSION: Overall, guidance and training were beneficial for both tools. We strongly recommend providing guidance and training to reviewers prior to RoB assessments, and that future research investigate which aspects of guidance and training are most effective.
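
For readers unfamiliar with Gwet's AC1, the following is a minimal sketch of the statistic for two raters and a binary judgment (e.g., overall RoB dichotomized as high vs. not high). The ratings are hypothetical, not data from this study; for multi-category RoB judgments the chance-agreement term generalizes across categories.

    # Gwet's AC1 for two raters, binary ratings (hypothetical data).
    import numpy as np

    rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
    rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])

    p_a = np.mean(rater_a == rater_b)           # observed agreement
    pi = (rater_a.mean() + rater_b.mean()) / 2  # mean propensity for category 1
    p_e = 2 * pi * (1 - pi)                     # chance agreement (binary case)
    ac1 = (p_a - p_e) / (1 - p_e)               # Gwet's AC1

    print(f"observed={p_a:.2f} chance={p_e:.2f} AC1={ac1:.2f}")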


Subject(s)
Biomedical Research/standards, Epidemiologic Research Design, Observer Variation, Peer Review/standards, Research Design/standards, Research Personnel/education, Adult, Biomedical Research/statistics & numerical data, Canada, Cross-Sectional Studies, Female, Humans, Male, Middle Aged, Psychometrics/methods, Reproducibility of Results, Research Design/statistics & numerical data, United Kingdom
4.
J Clin Epidemiol; 128: 140-147, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32987166

ABSTRACT

OBJECTIVE: To assess the real-world interrater reliability (IRR), interconsensus reliability (ICR), and evaluator burden of the Risk of Bias (RoB) in Nonrandomized Studies (NRS) of Interventions (ROBINS-I) tool and the ROB Instrument for NRS of Exposures (ROB-NRSE). STUDY DESIGN AND SETTING: A six-center cross-sectional study in which seven reviewers (2 reviewer pairs) assessed the RoB using ROBINS-I (n = 44 NRS) or ROB-NRSE (n = 44 NRS). We used Gwet's AC1 statistic to calculate the IRR and ICR. To measure the evaluator burden, we assessed the total time taken to apply the tool and reach a consensus. RESULTS: For ROBINS-I, both IRR and ICR for individual domains ranged from poor to substantial agreement, and IRR and ICR on overall RoB were poor. The evaluator burden was 48.45 minutes (95% CI 45.61 to 51.29). For ROB-NRSE, the IRR and ICR for the majority of domains were poor, while the rest ranged from fair to perfect agreement; IRR and ICR on overall RoB were slight and poor, respectively. The evaluator burden was 36.98 minutes (95% CI 34.80 to 39.16). CONCLUSIONS: We found both tools to have low reliability, although reliability was slightly higher for ROBINS-I. Measures to increase agreement between raters (e.g., detailed training and supportive guidance material) may improve reliability and decrease evaluator burden.
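
The evaluator-burden figures above are means with 95% confidence intervals; the sketch below shows one standard way to compute such an interval from per-study assessment times using a t distribution. The timing values are hypothetical, not the study's data.

    # Mean assessment time with a 95% CI (hypothetical timings, in minutes).
    import numpy as np
    from scipy import stats

    times = np.array([44.0, 51.5, 47.2, 50.8, 46.1, 49.3, 52.0, 45.5, 48.9, 47.7])

    mean_time = times.mean()
    sem = stats.sem(times)  # standard error of the mean
    ci_low, ci_high = stats.t.interval(0.95, df=len(times) - 1,
                                       loc=mean_time, scale=sem)

    print(f"mean={mean_time:.2f} min, 95% CI {ci_low:.2f} to {ci_high:.2f}")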


Subject(s)
Consensus, Epidemiologic Research Design, Research Personnel/statistics & numerical data, Bias, Cross-Sectional Studies, Humans, Observer Variation, Reproducibility of Results, Risk Assessment
5.
Syst Rev; 9(1): 32, 2020 Feb 12.
Article in English | MEDLINE | ID: mdl-32051035

ABSTRACT

BACKGROUND: A new tool, the "risk of bias (ROB) instrument for non-randomized studies of exposures (ROB-NRSE)," was recently developed. It is important to establish consistency in its application and interpretation across review teams, and to understand whether specialized training and guidance will improve the reliability of the resulting assessments. Therefore, the objective of this cross-sectional study is to establish the inter-rater reliability (IRR), inter-consensus reliability (ICR), and concurrent validity of the new ROB-NRSE tool. Furthermore, as this is a relatively new tool, it is important to understand the barriers to using it (e.g., the time taken to conduct assessments and reach consensus, i.e., the evaluator burden). METHODS: Reviewers from four participating centers will appraise the ROB of a sample of NRSE publications using the ROB-NRSE tool in two stages. For IRR and ICR, two pairs of reviewers will assess the ROB for each NRSE publication. In the first stage, reviewers will assess the ROB without any formal guidance. In the second stage, reviewers will be provided with customized training and guidance. At each stage, each pair of reviewers will resolve conflicts and arrive at a consensus. To calculate the IRR and ICR, we will use Gwet's AC1 statistic. For concurrent validity, reviewers will appraise a sample of NRSE publications using both the Newcastle-Ottawa Scale (NOS) and the ROB-NRSE tool. We will analyze the concordance between the two tools for similar domains and for the overall judgments using Kendall's tau coefficient. To measure evaluator burden, we will assess the time taken to apply the ROB-NRSE tool (without and with guidance) and the NOS. To assess the impact of customized training and guidance on the evaluator burden, we will use generalized linear models. We will use Microsoft Excel and SAS 9.4 to manage and analyze study data, respectively. DISCUSSION: The quality of evidence from systematic reviews that include NRSE depends partly on the study-level ROB assessments. The findings of this study will contribute to an improved understanding of ROB-NRSE and how best to use it.
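
As an illustration of the planned concurrent-validity analysis, the sketch below computes Kendall's tau between overall judgments from two tools. The ordinal codes are hypothetical (e.g., 0 = low, 1 = moderate, 2 = serious/high RoB) and merely stand in for the NOS and ROB-NRSE judgments the protocol describes.

    # Kendall's tau between two tools' overall judgments (hypothetical codes).
    from scipy.stats import kendalltau

    nos_judgment      = [0, 1, 2, 2, 1, 0, 2, 1, 0, 2]  # hypothetical NOS codes
    rob_nrse_judgment = [0, 1, 2, 1, 1, 0, 2, 2, 0, 2]  # hypothetical ROB-NRSE codes

    tau, p_value = kendalltau(nos_judgment, rob_nrse_judgment)
    print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")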


Subject(s)
Bias, Consensus, Reproducibility of Results, Research Design, Cross-Sectional Studies, Humans