Results 1 - 2 of 2
1.
Mhealth ; 10: 27, 2024.
Article in English | MEDLINE | ID: mdl-39114464

ABSTRACT

Background: There is growing scientific evidence that wearable devices for seizure detection (WDD) perform well in controlled environments. However, their impact on the health and experience of patients with epilepsy (PWE) in community-based settings is less well documented. We aimed to synthesize the scientific evidence on the performance of wearable devices used by PWE in community-based settings and on their impact on health outcomes and patient experience.

Methods: We performed a mixed methods systematic review. We searched PubMed, Google Scholar, Web of Science, and Embase from inception until December 2022. Independent reviewers screened studies published in English for eligibility against predefined inclusion and exclusion criteria. We collected information about the studies, the wearable devices, their performance, and their impact on health outcomes and patient experience. We used a narrative method to synthesize the data for each question separately. We assessed the quality of the included studies with the QUADAS-C and MMAT tools.

Results: Of a total of 9,595 publications, 10 studies met our eligibility criteria. Study populations mostly included young PWE (≤18 years) and/or their caregivers. Participants were living at home in most studies. The accelerometer was the most commonly used wearable device for seizure detection. Wearable device performance was high (sensitivity ≥80% and false alarm rate ≤1/day), but qualitative studies reported lingering concerns about false alarms. There was no significant effect of wearable devices on quality of life (QoL) measures, and no study quantitatively reported other health outcomes. Qualitative studies reported positive effects of wearable devices on QoL, seizure management, and seizure-related injuries. Overall, patients found the devices, especially the accelerometer, suitable, but they found devices uncomfortable when they were too visible. Study quality was low to medium.

Conclusions: There is low-quality scientific evidence supporting the performance of WDD in a home environment. Although qualitative findings support the positive impacts of wearable devices on patients and caregivers, more quantitative studies are needed to assess their impact on health outcomes such as QoL and seizure-related injuries.
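As a minimal sketch of how the reported performance metrics (sensitivity and false alarm rate per day) are typically computed at the event level, the Python snippet below matches alarms to true seizure events within a time tolerance. The function name, tolerance, and data are hypothetical illustrations, not values or methods drawn from the reviewed studies.

def evaluate_detector(true_seizure_times, alarm_times, monitoring_days, tolerance_s=60):
    # Count true seizures with at least one alarm within the tolerance window (seconds),
    # consuming each alarm at most once; remaining alarms count as false alarms.
    detected = 0
    unmatched_alarms = list(alarm_times)
    for seizure in true_seizure_times:
        match = next((a for a in unmatched_alarms if abs(a - seizure) <= tolerance_s), None)
        if match is not None:
            detected += 1
            unmatched_alarms.remove(match)
    sensitivity = detected / len(true_seizure_times)
    false_alarm_rate = len(unmatched_alarms) / monitoring_days  # false alarms per day
    return sensitivity, false_alarm_rate

# Hypothetical example: 10 true seizures and 11 alarms over 14 days of home monitoring
true_events = [3600 * h for h in (5, 30, 52, 80, 101, 140, 170, 200, 250, 300)]  # seconds
alarms = true_events[:9] + [3600 * 45, 3600 * 260]  # 9 correct detections, 2 false alarms
sens, far = evaluate_detector(true_events, alarms, monitoring_days=14)
print(f"Sensitivity: {sens:.0%}, false alarms per day: {far:.2f}")  # Sensitivity: 90%, false alarms per day: 0.14

A detector meeting the thresholds cited above would report sensitivity ≥80% and a false alarm rate ≤1/day under this kind of event-level accounting.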

2.
JMIR Res Protoc ; 12: e46684, 2023 Jun 26.
Article in English | MEDLINE | ID: mdl-37358896

ABSTRACT

BACKGROUND: The current literature identifies several potential benefits of artificial intelligence models for population health and the efficiency of health care systems. However, there is a lack of understanding of how the risk of bias is considered in the development of artificial intelligence algorithms for primary health care and community health services, and of the extent to which these algorithms perpetuate or introduce potential biases toward groups that could be considered vulnerable because of their characteristics. To the best of our knowledge, no reviews are currently available that identify relevant methods to assess the risk of bias in these algorithms. The primary research question of this review is the following: which strategies can be used to assess the risk of bias toward vulnerable or diverse groups in primary health care algorithms?

OBJECTIVE: This review aims to identify relevant methods to assess the risk of bias toward vulnerable or diverse groups in the development or deployment of algorithms in community-based primary health care, as well as mitigation interventions deployed to promote and increase equity, diversity, and inclusion. This review also looks at which attempts to mitigate bias have been documented and which vulnerable or diverse groups have been considered.

METHODS: A rapid systematic review of the scientific literature will be conducted. In November 2022, an information specialist developed a search strategy based on the main concepts of our primary review question, covering 4 relevant databases and limited to the last 5 years. We completed the search in December 2022, and 1022 sources were identified. Since February 2023, two reviewers have independently screened the titles and abstracts in the Covidence systematic review software. Conflicts are resolved through consensus and discussion with a senior researcher. We include all studies on methods developed or tested to assess the risk of bias in algorithms relevant to community-based primary health care.

RESULTS: By early May 2023, almost 47% (479/1022) of the titles and abstracts had been screened. We completed this first stage in May 2023. In June and July 2023, two reviewers will independently apply the same criteria to the full texts, and all reasons for exclusion will be recorded. Data from the selected studies will be extracted using a validated grid in August 2023 and analyzed in September 2023. Results will be presented as structured qualitative narrative summaries and submitted for publication by the end of 2023.

CONCLUSIONS: The approach of this review to identifying methods and target populations is primarily qualitative. However, we will consider a meta-analysis if quantitative data and results are sufficient. This review will produce structured qualitative summaries of strategies to mitigate bias toward vulnerable populations and diverse groups in artificial intelligence models. These could be useful to researchers and other stakeholders in identifying potential sources of bias in algorithms and trying to reduce or eliminate them.

TRIAL REGISTRATION: OSF Registries qbph8; https://osf.io/qbph8.

INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/46684.
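To make concrete what a risk-of-bias check toward specific groups can look like in practice, the sketch below compares an algorithm's false negative rate across population subgroups, one common disparity check. This is a generic illustration, not a method identified by the review; the group labels, data, and function name are assumptions made for the example only.

from collections import defaultdict

def false_negative_rate_by_group(records):
    # records: iterable of (group, true_label, predicted_label), where 1 means
    # "needs follow-up care"; returns the miss rate per group among true positives.
    misses = defaultdict(int)
    positives = defaultdict(int)
    for group, truth, pred in records:
        if truth == 1:
            positives[group] += 1
            if pred == 0:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives}

# Hypothetical screening results for two illustrative subgroups
data = [
    ("urban", 1, 1), ("urban", 1, 1), ("urban", 1, 0), ("urban", 0, 0),
    ("rural", 1, 0), ("rural", 1, 0), ("rural", 1, 1), ("rural", 0, 0),
]
print(false_negative_rate_by_group(data))
# {'urban': 0.333..., 'rural': 0.666...} -- a disparity that would warrant investigation

A large gap between subgroups in a metric such as this is one signal that an algorithm may be introducing or perpetuating bias; the review itself will catalog which assessment strategies of this kind have actually been documented.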
