Results 1 - 8 of 8
1.
J Med Internet Res ; 24(4): e33537, 2022 04 14.
Article in English | MEDLINE | ID: mdl-35436221

ABSTRACT

BACKGROUND: Suboptimal adherence to data collection procedures or a study intervention is often the cause of a failed clinical trial. Data from connected sensors, including wearables, referred to here as biometric monitoring technologies (BioMeTs), are capable of capturing adherence to both digital therapeutics and digital data collection procedures, thereby providing the opportunity to identify the determinants of adherence and thereafter, methods to maximize adherence. OBJECTIVE: We aim to describe the methods and definitions by which adherence has been captured and reported using BioMeTs in recent years. Identifying key gaps allowed us to make recommendations regarding minimum reporting requirements and consistency of definitions for BioMeT-based adherence data. METHODS: We conducted a systematic review of studies published between 2014 and 2019, which deployed a BioMeT outside the clinical or laboratory setting for which a quantitative, nonsurrogate, sensor-based measurement of adherence was reported. After systematically screening the manuscripts for eligibility, we extracted details regarding study design, participants, the BioMeT or BioMeTs used, and the definition and units of adherence. The primary definitions of adherence were categorized as a continuous variable based on duration (highest resolution), a continuous variable based on the number of measurements completed, or a categorical variable (lowest resolution). RESULTS: Our PubMed search terms identified 940 manuscripts; 100 (10.6%) met our eligibility criteria and contained descriptions of 110 BioMeTs. During literature screening, we found that 30% (53/177) of the studies that used a BioMeT outside of the clinical or laboratory setting failed to report a sensor-based, nonsurrogate, quantitative measurement of adherence. 
We identified 37 unique definitions of adherence reported for the 110 BioMeTs and observed that uniformity of adherence definitions was associated with the resolution of the data reported. When adherence was reported as a continuous time-based variable, the same definition of adherence was adopted for 92% (46/50) of the tools. However, when adherence data were simplified to a categorical variable, we observed 25 unique definitions of adherence reported for 37 tools. CONCLUSIONS: We recommend that quantitative, nonsurrogate, sensor-based adherence data be reported for all BioMeTs when feasible; a clear description of the sensor or sensors used to capture adherence data, the algorithm or algorithms that convert sample-level measurements to a metric of adherence, and the analytic validation data demonstrating that BioMeT-generated adherence is an accurate and reliable measurement of actual use be provided when available; and primary adherence data be reported as a continuous variable followed by categorical definitions if needed, and that the categories adopted are supported by clinical validation data and/or consistent with previous reports.
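The three resolutions of adherence definition described in the abstract above can be sketched in a few lines (a minimal illustration only; the metric names, the 24-hour expected wear time, and the 80% threshold are hypothetical choices, not values drawn from the review):

```python
def adherence_metrics(wear_hours, expected_hours=24.0, threshold=0.8):
    """Compute adherence at three resolutions from daily sensor wear time.

    wear_hours: list of per-day wear-time hours for one participant.
    """
    # Continuous, duration-based (highest resolution): mean fraction of
    # the expected daily wear time actually achieved
    continuous = sum(h / expected_hours for h in wear_hours) / len(wear_hours)
    # Continuous, count-based: number of days with any valid measurement
    n_days_measured = sum(1 for h in wear_hours if h > 0)
    # Categorical (lowest resolution): adherent only if the continuous
    # metric meets a pre-specified threshold
    categorical = "adherent" if continuous >= threshold else "non-adherent"
    return continuous, n_days_measured, categorical

print(adherence_metrics([24.0, 20.0, 0.0, 18.0]))
```

Collapsing the duration-based metric into the categorical one discards information, which is why the review observes far more divergent definitions at the categorical level.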


Subject(s)
Biometry, Cimetidine, Biometry/methods, Data Collection, Humans, Research Design, Technology
2.
Ther Innov Regul Sci ; 56(3): 394-404, 2022 05.
Article in English | MEDLINE | ID: mdl-35142989

ABSTRACT

BACKGROUND: Visual analogue scales (VASs) are used in a variety of patient-, observer- and clinician-reported outcome measures. While typically included in measures originally developed for pen-and-paper completion, a greater number of clinical trials currently use electronic approaches to their collection. This leads researchers to question whether the measurement properties of the scale have been conserved during the migration to an electronic format, particularly because electronic formats often use a different scale length than the 100 mm paper standard. METHODS: We performed a review of published studies investigating the measurement comparability of paper and electronic formats of the VAS. RESULTS: Our literature search yielded 26 studies published between 1997 and 2018 that reported comparison of paper and electronic formats using the VAS. After excluding 2 publications, 23 of the remaining 24 studies included in this review reported electronic formats of the VAS (eVAS) and paper formats (pVAS) to be equivalent. A further study concluded that eVAS and pVAS were both acceptable but should not be interchanged. eVAS length varied from 21 to 200 mm, indicating that 100 mm length is not a requirement. CONCLUSIONS: The literature supports the hypothesis that eVAS and pVAS provide comparable results regardless of the VAS length. When implementing a VAS on a screen-based electronic mode, we recommend following industry best practices for faithful migration to minimise the likelihood of non-comparability with pVAS.


Subject(s)
Paper, Quality of Life, Electronics, Humans, Pain Measurement/methods, Visual Analog Scale
3.
Ther Innov Regul Sci ; 53(4): 426-430, 2019 07.
Article in English | MEDLINE | ID: mdl-30157687

ABSTRACT

A growing number of clinical trials employ electronic media, in particular smartphones and tablets, to collect patient-reported outcome data. This is driven by the ubiquity of the technology, and an increased awareness of associated improvements in data integrity, quality and timeliness. Despite this, there remains a lingering question relating to the measurement equivalence of an instrument when migrated from paper to a screen-based format. As a result, researchers often must provide evidence demonstrating the measurement equivalence of paper and electronic versions, such as that recommended by the ISPOR ePRO Good Research Practices Task Force. In the last decade, a considerable body of work has emerged that overwhelmingly supports the measurement equivalence of instruments using screen-based electronic formats. From our review of key works we derive recommendations on the evidence needed to support electronic implementation. We recommend that application of best practice recommendations be accepted as sufficient to conclude measurement equivalence with paper PROMs. In addition, we recommend that previous usability evidence in a representative group is sufficient, as opposed to per-study testing. Further, we conclude that this also applies to studies using multiple screen-based devices, including bring-your-own-device, if a minimum device specification can be ensured and the instrument is composed of standard response scale types.


Subject(s)
Patient Reported Outcome Measures, Telemedicine, Computers, Handheld, Electronic Health Records, Humans
4.
J Med Internet Res ; 20(6): e220, 2018 06 19.
Article in English | MEDLINE | ID: mdl-29921563

ABSTRACT

BACKGROUND: Despite an aging population, older adults are typically underrecruited in clinical trials, often because of the perceived burden associated with participation, particularly travel associated with clinic visits. Conducting a clinical trial remotely presents an opportunity to leverage mobile and wearable technologies to bring the research to the patient. However, the burden associated with shifting clinical research to a remote site requires exploration. While a remote trial may reduce patient burden, the extent to which this shifts burden on the other stakeholders needs to be investigated. OBJECTIVE: The aim of this study was to explore the burden associated with a remote trial in a nursing home setting on both staff and residents. METHODS: Using results from a grounded analysis of qualitative data, this study explored and characterized the burden associated with a remote trial conducted in a nursing home in Dublin, Ireland. A total of 11 residents were recruited to participate in this trial (mean age: 80 years; age range: 67-93 years). To support research activities, we also recruited 10 nursing home staff members, including health care assistants, an activities co-ordinator, and senior nurses. This study captured the lived experience of this remote trial among staff and residents and explored the burden associated with participation. At the end of the trial, a total of 6 residents and 8 members of staff participated in semistructured interviews (n=14). They reviewed clinical data generated by mobile and wearable devices and reflected upon their trial-related experiences. RESULTS: Staff reported extensive burden in fulfilling their roles and responsibilities to support activities of the trial. Among staff, we found eight key characteristics of burden: (1) comprehension, (2) time, (3) communication, (4) emotional load, (5) cognitive load, (6) research engagement, (7) logistical burden, and (8) product accountability. 
Residents reported comparatively less burden. Among residents, we found only four key characteristics of burden: (1) comprehension, (2) adherence, (3) emotional load, and (4) personal space. CONCLUSIONS: A remote trial in a nursing home setting can minimize the burden on residents and enable inclusive participation. However, it arguably creates additional burden on staff, particularly where they have a role to play in locally supporting and maintaining technology as part of data collection. Future research should examine how to measure and minimize the burden associated with data collection in remote trials.


Subject(s)
Nursing Homes/standards, Qualitative Research, Aged, 80 and over, Female, Humans, Male, Social Behavior
5.
Value Health ; 21(5): 581-589, 2018 05.
Article in English | MEDLINE | ID: mdl-29753356

ABSTRACT

OBJECTIVES: The aim of this study was to assess the measurement equivalence of individual response scale types by using a patient reported outcome measure (PROM) collected on paper and migrated into electronic format for use on the subject's own mobile device (BYOD) and on a provisioned device (site device). METHODS: Subjects suffering from chronic health conditions causing daily pain or discomfort were invited to participate in this single-site, single visit, three-way crossover study. Association between individual item and instrument subscale scores was assessed by using the intraclass correlation coefficient (ICC) and its CI. Participant attitudes toward the use of BYOD in a clinical trial were assessed through use of a questionnaire. RESULTS: In this study, 155 subjects (females 83 [54%]; males 72 [46%]) ages 19 to 69 years (mean ± SD: 48.6 ± 13.1) were recruited. High association between the modes of administration (paper, BYOD, site device) was shown with analysis of ICCs (0.79-0.98) for each response scale type, including visual analogue scale, numeric rating scale, verbal response scale, and Likert scale. Of the subjects, 94% (146 of 155) stated that they would definitely or probably be willing to download an app onto their own mobile device for a forthcoming clinical trial. Forty-five percent of subjects felt BYOD would be more convenient compared with 15% preferring a provisioned device (40% had no preference). CONCLUSIONS: This study provides strong evidence supporting the use of BYOD for PROM collection in terms of the conservation of instrument measurement equivalence across the most widely used response scale types, and high patient acceptance of the approach.
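The intraclass correlation analysis referred to above can be illustrated with a small sketch. This implements the standard two-way random-effects, absolute-agreement, single-measurement ICC(2,1); the study does not state which ICC form it used, and the example data are invented:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random-effects, absolute agreement, single rater.

    x: (n_subjects, k_modes) array of scores, e.g. the same PROM item
    completed on paper, the subject's own device, and a site device.
    """
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # modes
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical scores: 5 subjects x 3 administration modes
# (paper, BYOD, site device)
scores = np.array([
    [70, 72, 71],
    [40, 41, 39],
    [55, 55, 57],
    [82, 80, 81],
    [30, 32, 31],
])
print(round(icc_2_1(scores), 3))
```

High between-subject variance relative to between-mode variance yields an ICC near 1, which is the pattern the study reports (0.79-0.98) across response scale types.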


Subject(s)
Chronic Pain/psychology, Computers, Handheld, Patient Acceptance of Health Care, Patient Reported Outcome Measures, Adult, Aged, Cross-Over Studies, Female, Humans, Male, Middle Aged, Mobile Applications, Pain Measurement, Surveys and Questionnaires, Young Adult
6.
Clin Pharmacol Ther ; 104(1): 59-71, 2018 07.
Article in English | MEDLINE | ID: mdl-29574776

ABSTRACT

The increasing miniaturization and affordability of sensors and circuitry has led to the current level of innovation in the area of wearable and microsensor solutions for health monitoring. This facilitates the development of solutions that can be used to measure complex health outcomes in nonspecialist and remote settings. In this article, we review a number of innovations related to brain monitoring including portable and wearable solutions to directly measure brain electrical activity, and solutions measuring aspects related to brain function such as sleep patterns, gait, cognition, voice acoustics, and gaze analysis. Despite the need for more scientific validation work, we conclude that there is enough understanding of how to implement these approaches as exploratory tools that may provide additional valuable insights due to the rich and frequent data they produce, to justify their inclusion in clinical study protocols.


Subject(s)
Brain, Telemedicine, Wearable Electronic Devices, Actigraphy, Cognition, Electroencephalography, Eye Movement Measurements, Gait Analysis, Humans, Mobile Applications, Neurophysiological Monitoring, Sleep, Smartphone
7.
Value Health ; 21(1): 41-48, 2018 01.
Article in English | MEDLINE | ID: mdl-29304939

ABSTRACT

OBJECTIVES: To synthesize the findings of cognitive interview and usability studies performed to assess the measurement equivalence of patient-reported outcome (PRO) instruments migrated from paper to electronic formats (ePRO), and make recommendations regarding future migration validation requirements and ePRO design best practice. METHODS: We synthesized findings from all cognitive interview and usability studies performed by a contract research organization between 2012 and 2015: 53 studies comprising 68 unique instruments and 101 instrument evaluations. We summarized study findings to make recommendations for best practice and future validation requirements. RESULTS: Five studies (9%) identified minor findings during cognitive interview that may possibly affect instrument measurement properties. All findings could be addressed by application of ePRO best practice, such as eliminating scrolling, ensuring appropriate font size, ensuring suitable thickness of visual analogue scale lines, and providing suitable instructions. Similarly, regarding solution usability, 49 of the 53 studies (92%) recommended no changes in display clarity, navigation, operation, and completion without help. Reported usability findings could be eliminated by following good product design such as the size, location, and responsiveness of navigation buttons. CONCLUSIONS: With the benefit of accumulating evidence, it is possible to relax the need to routinely conduct cognitive interview and usability studies when implementing minor changes during instrument migration. Application of design best practice and selecting vendor solutions with good user interface and user experience properties that have been assessed in a representative group may enable many instrument migrations to be accepted without formal validation studies by instead conducting a structured expert screen review.


Subject(s)
Electronic Health Records/standards, Interviews as Topic, Patient Reported Outcome Measures, Adolescent, Adult, Aged, Aged, 80 and over, Benchmarking, Child, Child, Preschool, Cognition, Decision Making, Evidence-Based Medicine, Female, Humans, Male, Middle Aged, Paper, Qualitative Research
8.
Health Qual Life Outcomes ; 13: 167, 2015 Oct 07.
Article in English | MEDLINE | ID: mdl-26446159

ABSTRACT

OBJECTIVE: To conduct a systematic review and meta-analysis of the equivalence between electronic and paper administration of patient reported outcome measures (PROMs) in studies conducted subsequent to those included in Gwaltney et al.'s 2008 review. METHODS: A systematic literature review of PROM equivalence studies conducted between 2007 and 2013 identified 1,997 records, from which 72 studies met pre-defined inclusion/exclusion criteria. PRO data from each study were extracted, in terms of both correlation coefficients (ICCs, Spearman and Pearson correlations, Kappa statistics) and mean differences (standardized by the standard deviation, SD, and the response scale range). Pooled estimates of correlation and mean difference were estimated. The modifying effects of mode of administration, year of publication, study design, time interval between administrations, mean age of participants and publication type were examined. RESULTS: Four hundred thirty-five individual correlations were extracted; these correlations were highly variable (I² = 93.8) but showed generally good equivalence, with ICCs ranging from 0.65 to 0.99 and a pooled correlation coefficient of 0.88 (95% CI 0.87 to 0.88). Standardised mean differences for 307 studies were small and less variable (I² = 33.5), with a pooled standardised mean difference of 0.037 (95% CI 0.031 to 0.042). Average administration mode/platform-specific correlations from 56 studies (61 estimates) had a pooled estimate of 0.88 (95% CI 0.86 to 0.90) and were still highly variable (I² = 92.1). Similarly, average platform-specific ICCs from 39 studies (42 estimates) had a pooled estimate of 0.90 (95% CI 0.88 to 0.92) with an I² of 91.5. After excluding 20 studies with outlying correlation coefficients (≥3 SD from the mean), the I² was 54.4, with equivalence still high, the overall pooled correlation coefficient being 0.88 (95% CI 0.87 to 0.88). 
Agreement was found to be greater in more recent studies (p < 0.001), in randomized studies compared with non-randomized studies (p < 0.001), in studies with a shorter interval (<1 day) between administrations (p < 0.001), and in respondents of mean age 28 to 55 compared with those either younger or older (p < 0.001). In terms of mode/platform, paper vs Interactive Voice Response System (IVRS) comparisons had the lowest pooled agreement and paper vs tablet/touch screen the highest (p < 0.001). CONCLUSION: The present study supports the conclusion of Gwaltney's previous meta-analysis showing that PROMs administered on paper are quantitatively comparable with measures administered on an electronic device. It also confirms the ISPOR Taskforce's conclusion that quantitative equivalence studies are not required for migrations with minor changes only. This finding should be reassuring to investigators, regulators and sponsors using questionnaires on electronic devices after migration using best practices. Although there are data indicating that migrations with moderate changes also produce equivalent instrument versions, and hence do not require quantitative equivalence studies, additional work is necessary to establish this. Furthermore, there is a need to standardize migration and reporting practices (i.e. include copies of tested instrument versions and screenshots) so that clear recommendations regarding equivalence testing can be made in the future.
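The kind of pooling and heterogeneity assessment reported in this abstract can be sketched as a DerSimonian-Laird random-effects combination of Fisher-z-transformed correlations with Cochran's Q and I². This is a standard model assumed for illustration; the review's exact method and data are not reproduced here:

```python
import math

def pool_correlations(rs, ns):
    """Random-effects pooling of correlations; returns (pooled_r, I² in %).

    rs: per-study correlation coefficients; ns: per-study sample sizes.
    """
    # Fisher z transform stabilizes the variance of r
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]
    ws = [n - 3 for n in ns]  # inverse variance of Fisher z is 1/(n - 3)
    zbar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    # Cochran's Q and the I² heterogeneity statistic
    q = sum(w * (z - zbar) ** 2 for w, z in zip(ws, zs))
    df = len(rs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # Between-study variance (tau²) and random-effects weights
    c = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    ws_re = [1 / (1 / w + tau2) for w in ws]
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    return math.tanh(z_re), i2  # back-transform z to r

# Hypothetical study-level inputs, not the review's data
print(pool_correlations([0.88, 0.92, 0.79, 0.95], [40, 120, 60, 200]))
```

An I² above 90, as reported for the 435 extracted correlations, indicates that nearly all observed variability reflects genuine between-study differences rather than sampling error.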


Subject(s)
Outcome Assessment, Health Care/statistics & numerical data, Quality of Life, Surveys and Questionnaires/standards, Adult, Female, Humans, Male, Paper, Patient Outcome Assessment, Reproducibility of Results, Statistics as Topic