Results 1 - 2 of 2
1.
J Healthc Inform Res ; 8(2): 244-285, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38681758

ABSTRACT

As medication adherence represents a critical challenge in healthcare, pill and medication dispensers have gained increasing attention as potential solutions to promote adherence and improve patient outcomes. Following the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) methodology, we carried out a systematic literature review of papers indexed in Scopus and PubMed that present solutions for pill or medication dispensers. Given the importance of user acceptance for these solutions, the research questions of the survey are driven by a human-centered perspective. We first provide an overview of the different solutions, classifying them according to their stage of development. We then analyze each solution in terms of its hardware/software architecture. Finally, we review the characteristics of user interfaces designed for interacting with pill and medication dispensers and analyze the involvement of different types of users in dispenser management. On the basis of this analysis, we draw findings and indications for future research aimed at providing insights to healthcare professionals, researchers, and designers who are interested in developing and using pill and medication dispensers.

2.
Front Artif Intell ; 6: 1099407, 2023.
Article in English | MEDLINE | ID: mdl-37091304

ABSTRACT

The pursuit of trust in and fairness of AI systems in order to enable human-centric goals has been gathering pace of late, often supported by the use of explanations for the outputs of these systems. Several properties of explanations have been highlighted as critical for achieving trustworthy and fair AI systems, but one that has thus far been overlooked is that of descriptive accuracy (DA), i.e., that the contents of an explanation correspond to the internal workings of the explained system. Indeed, the violation of this core property would lead to the paradoxical situation of systems producing explanations that are not suitably related to how the system actually works: clearly this may hinder user trust. Further, if explanations violate DA, then they can be deceitful, resulting in unfair behavior toward the users. Despite its apparent importance, DA has so far received little attention in the XAI literature. To address this problem, we consider the questions of formalizing DA and of analyzing its satisfaction by explanation methods. We provide formal definitions of naive, structural, and dialectical DA, using the family of probabilistic classifiers as the context for our analysis. We evaluate the satisfaction of our given notions of DA by several explanation methods, comprising two popular feature-attribution methods from the literature, variants thereof, and a novel form of explanation that we propose. We conduct experiments with a varied selection of concrete probabilistic classifiers and highlight, with a user study, the importance of our most demanding notion of dialectical DA, which our novel method satisfies by design and others may violate. We thus demonstrate how DA could be a critical component in achieving trustworthy and fair systems, in line with the principles of human-centric AI.
