Results 1 - 3 of 3
1.
BMC Med Inform Decis Mak; 18(1): 87, 2018 Oct 19.
Article in English | MEDLINE | ID: mdl-30340488

ABSTRACT

BACKGROUND: Online health information is unregulated and can be of highly variable quality. There is currently no single quantitative tool that has undergone a validation process, can be applied to a broad range of health information, and strikes a balance between ease of use, concision, and comprehensiveness. To address this gap, we developed the QUality Evaluation Scoring Tool (QUEST). Here we report on the analysis of the reliability and validity of the QUEST in assessing the quality of online health information.

METHODS: The QUEST and three existing tools designed to measure the quality of online health information were applied to two randomized samples of articles containing information about the treatment (n = 16) and prevention (n = 29) of Alzheimer disease as a sample health condition. Inter-rater reliability was assessed using a weighted Cohen's kappa (κ) for each item of the QUEST. To compare the quality scores generated by each pair of tools, convergent validity was measured using Kendall's tau (τ) rank correlation.

RESULTS: The QUEST demonstrated high inter-rater reliability for the seven quality items included in the tool (κ ranging from 0.7387 to 1.0, P < .05). The tool also demonstrated high convergent validity: for both treatment- and prevention-related articles, all six pairs of tests exhibited a strong correlation between the tools (τ ranging from 0.41 to 0.65, P < .05).

CONCLUSIONS: Our findings support the QUEST as a reliable and valid tool for evaluating online articles about health. The results provide evidence that the QUEST integrates the strengths of existing tools and evaluates quality with equal efficacy using a concise, seven-item questionnaire. The QUEST can serve as a rapid, effective, and accessible method of appraising the quality of online health information for researchers and clinicians alike.
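The two statistics named in this abstract are standard and can be computed with common scientific-Python libraries. The sketch below is illustrative only: the rater scores and tool scores are hypothetical stand-ins, not data from the study, and the study's own analysis code is not shown here.

    # Minimal sketch of the two statistics the abstract reports: weighted
    # Cohen's kappa (inter-rater reliability) and Kendall's tau (convergent
    # validity). All data below are hypothetical, not from the study.
    from sklearn.metrics import cohen_kappa_score
    from scipy.stats import kendalltau

    # Hypothetical scores from two raters on one QUEST item across 8 articles.
    rater_a = [2, 1, 0, 2, 1, 2, 0, 1]
    rater_b = [2, 1, 1, 2, 1, 2, 0, 1]

    # A linear-weighted kappa penalizes larger disagreements more heavily.
    kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
    print(f"weighted Cohen's kappa: {kappa:.4f}")

    # Hypothetical overall quality scores for the same articles from two
    # tools that use different scales; tau compares their rank orderings.
    tool_1 = [14, 9, 11, 16, 8, 13, 7, 12]
    tool_2 = [3, 2, 2, 4, 1, 3, 1, 3]
    tau, p_value = kendalltau(tool_1, tool_2)
    print(f"Kendall's tau: {tau:.2f} (p = {p_value:.3f})")

Kendall's tau is a natural choice for comparing tools that score on different scales, because only the rank ordering of the articles is compared, not the raw scores.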


Subject(s)
Consumer Health Information , Data Accuracy , Health Information Management , Internet , Humans , Reproducibility of Results , Research Design , Surveys and Questionnaires
2.
Internet Interv; 17: 100243, 2019 Sep.
Article in English | MEDLINE | ID: mdl-30949436

ABSTRACT

OBJECTIVE: To assess the availability, readability, and privacy-related content of the privacy policies and terms of agreement of mental health apps available through popular digital stores.

MATERIALS AND METHODS: Popular smartphone app stores were searched using combinations of the keywords "track" and "mood" and their synonyms. The first 100 apps from each search were evaluated against inclusion and exclusion criteria. Apps were assessed for the availability of a privacy policy (PP) and terms of agreement (ToA); where available, these documents were evaluated for both content and readability.

RESULTS: Most of the apps in the sample did not include a PP or ToA. PPs could be accessed for 18% of iOS apps and 4% of Android apps, whereas ToAs were available for 15% of iOS apps and 3% of Android apps. Many PPs stated that users' information may be shared with third parties (71% iOS, 46% Android).

DISCUSSION: The results demonstrate that data collection occurs in the majority of apps that allow users to track the status of their mental health. Most of the apps in the initial sample did not include a PP or ToA, despite this being a store requirement. The majority of the PPs and ToAs that were evaluated were written at a post-secondary reading level and disclosed that extensive data collection was occurring.

CONCLUSION: Our findings raise concerns about consent, transparency, and data sharing associated with mental health apps, and highlight the importance of improved regulation in the mobile app environment.
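The abstract reports that most policies were written at a post-secondary reading level but does not name the readability metric used. As one plausible illustration (an assumption, not the study's stated method), the Flesch-Kincaid grade level can be estimated from sentence, word, and syllable counts:

    # Hedged sketch: estimating the reading grade level of a privacy policy.
    # The abstract does not name the readability metric; Flesch-Kincaid grade
    # level is shown here as one common choice (an assumption).
    import re

    def count_syllables(word: str) -> int:
        """Crude syllable estimate: count groups of consecutive vowels."""
        groups = re.findall(r"[aeiouy]+", word.lower())
        return max(1, len(groups))

    def flesch_kincaid_grade(text: str) -> float:
        """FK grade = 0.39*(words/sentences) + 11.8*(syllables/word) - 15.59."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * (len(words) / len(sentences))
                + 11.8 * (syllables / len(words)) - 15.59)

    # Hypothetical policy excerpt, not taken from any app in the study.
    policy_excerpt = (
        "We may disclose aggregated, de-identified information to third "
        "parties for analytics purposes. Continued use of the application "
        "constitutes acceptance of these terms."
    )
    print(f"Estimated grade level: {flesch_kincaid_grade(policy_excerpt):.1f}")

A Flesch-Kincaid grade above 12 corresponds roughly to post-secondary reading material, which is the threshold relevant to the finding above.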

3.
Alzheimers Dement (N Y); 4: 297-303, 2018.
Article in English | MEDLINE | ID: mdl-30090850

ABSTRACT

INTRODUCTION: Computerized assessments are becoming widely accepted in the clinical setting and as a potential outcome measure in clinical trials. To gain patient perspectives on this experience, the present study investigated patient attitudes toward, and perceptions of, the Cognigram (Cogstate), a computerized cognitive assessment.

METHODS: Semi-structured interviews were conducted with 19 older adults undergoing a computerized cognitive assessment at the University of British Columbia Hospital Clinic for Alzheimer Disease and Related Disorders. Thematic analysis was applied to identify key themes and relationships within the data.

RESULTS: The analysis yielded three categories: attitudes toward computers in healthcare, the cognitive assessment process, and evaluation of the computerized assessment experience. The results show shared views on the need for balance between human and computer intervention, as well as room for improvement in test design and utility.

DISCUSSION: Careful design and user testing should be prioritized in the development of computerized assessment interfaces, and the cognitive assessment process should be reevaluated to minimize patient anxiety and discomfort. Future research should move toward continuous data capture within clinical trials and toward instruments of high reliability to reduce variance.
