Results 1 - 13 of 13
1.
Regul Gov ; 18(1): 3-32, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38435808

ABSTRACT

In its AI Act, the European Union chose to understand the trustworthiness of AI in terms of the acceptability of its risks. Based on a narrative systematic literature review on institutional trust and AI in the public sector, this article argues that the EU adopted a simplistic conceptualization of trust and is overselling its regulatory ambition. The paper begins by reconstructing the conflation of "trustworthiness" with "acceptability" in the AI Act. It continues by developing a prescriptive set of variables for reviewing trust research in the context of AI. The paper then uses those variables for a narrative review of prior research on trust and trustworthiness in AI in the public sector. Finally, it relates the findings of the review to the EU's AI policy. The EU's prospects of successfully engineering citizens' trust are uncertain, and there remains a threat of misalignment between levels of actual trust and the trustworthiness of applied AI.

2.
3.
Am J Bioeth ; 22(7): 65-68, 2022 07.
Article in English | MEDLINE | ID: mdl-35737503

Subject(s)
Privacy , Humans
4.
J Med Internet Res ; 23(11): e29386, 2021 11 03.
Article in English | MEDLINE | ID: mdl-34730544

ABSTRACT

BACKGROUND: Artificial intelligence (AI)-driven symptom checkers are available to millions of users globally and are advocated as a tool to deliver health care more efficiently. To achieve the promoted benefits of a symptom checker, laypeople must trust and subsequently follow its instructions. In AI, explanations are seen as a tool to communicate the rationale behind black-box decisions and thereby encourage trust and adoption. However, the effectiveness of the types of explanations used in AI-driven symptom checkers has not yet been studied. Explanations can take many forms, including why-explanations and how-explanations. Social theories suggest that why-explanations are better at communicating knowledge and cultivating trust among laypeople. OBJECTIVE: The aim of this study was to ascertain whether explanations provided by a symptom checker affect explanatory trust among laypeople and whether this trust is impacted by their existing knowledge of disease. METHODS: A cross-sectional survey of 750 healthy participants was conducted. The participants were shown a video of a chatbot simulation that resulted in a diagnosis of either migraine or temporal arteritis, chosen for their differing levels of epidemiological prevalence. These diagnoses were accompanied by one of four types of explanations. Each explanation type was selected either because of its current use in symptom checkers or because it was informed by theories of contrastive explanation. Exploratory factor analysis of participants' responses, followed by comparison-of-means tests, was used to evaluate group differences in trust. RESULTS: Depending on the treatment group, two or three variables were generated, reflecting the prior knowledge and subsequent mental model that the participants held. When explanation type was varied within each disease, the differences were nonsignificant for migraine (P=.65) and marginally significant for temporal arteritis (P=.09). When disease was varied within each explanation type, the differences were statistically significant for input influence (P=.001), social proof (P=.049), and no explanation (P=.006), and marginally significant for counterfactual explanation (P=.053). The results suggest that trust in explanations is significantly affected by the disease being explained. When laypeople have existing knowledge of a disease, explanations have little impact on trust. Where the need for information is greater, different explanation types engender significantly different levels of trust. These results indicate that, to be successful, symptom checkers need to tailor explanations to each user's specific question and discount the diseases that the user may already be aware of. CONCLUSIONS: System builders developing explanations for symptom-checking apps should consider the recipient's knowledge of a disease and tailor explanations to each user's specific need. Effort should be placed on generating explanations that are personalized to each user of a symptom checker, so as to fully discount the diseases they may be aware of and to close their information gap.


Subject(s)
Artificial Intelligence , Trust , Cross-Sectional Studies , Delivery of Health Care , Humans , Software
5.
J Med Ethics ; 2021 May 07.
Article in English | MEDLINE | ID: mdl-33963064
6.
J Alzheimers Dis ; 77(1): 339-353, 2020.
Article in English | MEDLINE | ID: mdl-32716354

ABSTRACT

BACKGROUND: Dementia has been described as the greatest global health challenge of the 21st century: longevity gains are increasing its incidence and escalating health and social care pressures. These pressures highlight ethical, social, and political challenges about healthcare resource allocation, about which health improvements matter to patients, and about how those improvements are measured. This study highlights the complexity of the ethical landscape, relating particularly to the balances that need to be struck when allocating resources, when measuring and prioritizing outcomes, and when individual preferences are sought. OBJECTIVE: Health outcome prioritization is the ranking, in order of desirability or importance, of a set of disease-related objectives and their associated costs or risks. We analyze the complex ethical landscape in which this takes place in the most common dementia, Alzheimer's disease. METHODS: We conducted a narrative review of literature published since 2007, incorporating snowball sampling where necessary. We identified, thematized, and discussed key issues of ethical salience. RESULTS: Eight areas of ethical salience for outcome prioritization emerged: 1) Public health and distributive justice, 2) Scarcity of resources, 3) Heterogeneity and changing circumstances, 4) Knowledge of treatment, 5) Values and circumstances, 6) Conflicting priorities, 7) Communication, autonomy, and caregiver issues, and 8) Disclosure of risk. CONCLUSION: These areas highlight the difficult balance to be struck when allocating resources, when measuring and prioritizing outcomes, and when individual preferences are sought. We conclude by reflecting on how tools from the social sciences and ethics can help address the challenges posed by resource allocation, outcome measurement and prioritization, and the elicitation of stakeholder preferences.


Subject(s)
Alzheimer Disease/diagnosis , Alzheimer Disease/therapy , Delivery of Health Care/ethics , Outcome Assessment, Health Care/ethics , Alzheimer Disease/psychology , Delivery of Health Care/methods , Humans , Outcome Assessment, Health Care/methods
7.
J Alzheimers Dis ; 76(3): 923-940, 2020.
Article in English | MEDLINE | ID: mdl-32597799

ABSTRACT

BACKGROUND: The therapeutic paradigm in Alzheimer's disease (AD) is shifting from symptom management toward prevention goals. Secondary prevention requires identifying individuals who show no clinical symptoms yet are "at risk" of developing AD dementia in the future, and thus the use of predictive modeling. OBJECTIVE: The objective of this study was to review the ethical concerns and social implications generated by this new approach. METHODS: We conducted a systematic literature review in Medline, Embase, PsycInfo, and Scopus, complemented by a gray literature search, between March and July 2018. We then analyzed the data qualitatively using a thematic analysis technique. RESULTS: We identified thirty-one ethical issues and social concerns corresponding to eight ethical principles: (i) respect for autonomy; (ii) beneficence; (iii) non-maleficence; (iv) equality, justice, and diversity; (v) identity and stigma; (vi) privacy; (vii) accountability, transparency, and professionalism; and (viii) uncertainty avoidance. Much of the literature sees the discovery of a disease-modifying treatment as a necessary and sufficient condition to justify AD risk assessment, overlooking the future challenges of providing equitable access to such treatment, establishing its long-term outcomes, and addressing the social consequences of this approach, e.g., medicalization. The ethical and social issues associated specifically with predictive models, such as adequate predictive power and reliability, infrastructural requirements, data privacy, the potential for personalized medicine in AD, and limiting access to future AD treatment based on risk stratification, were scarcely covered. CONCLUSION: The ethical discussion needs to advance to reflect recent scientific developments and to guide clinical practice now and in the future, so that the necessary safeguards are implemented for large-scale AD secondary prevention.


Subject(s)
Alzheimer Disease/prevention & control , Alzheimer Disease/physiopathology , Brain/physiopathology , Alzheimer Disease/diagnosis , Beneficence , Bioethical Issues , Humans , Publications , Reproducibility of Results , Social Justice
8.
J Alzheimers Dis ; 67(2): 495-501, 2019.
Article in English | MEDLINE | ID: mdl-30584137

ABSTRACT

ROADMAP is a public-private advisory partnership to evaluate the usability of multiple data sources, including real-world evidence, in the decision-making process for new treatments in Alzheimer's disease, and to advance key concepts in disease and pharmacoeconomic modeling. ROADMAP identified key disease and patient outcomes for stakeholders to make informed funding and treatment decisions, provided advice on data integration methods and standards, and developed conceptual cost-effectiveness and disease models designed in part to assess whether early treatment provides long-term benefit.


Subject(s)
Alzheimer Disease/therapy , Evidence-Based Medicine , Aged , Aged, 80 and over , Alzheimer Disease/economics , Clinical Decision-Making , Cost-Benefit Analysis , Data Interpretation, Statistical , Humans , Treatment Outcome
9.
Life Sci Soc Policy ; 14(1): 9, 2018 May 09.
Article in English | MEDLINE | ID: mdl-29744694

ABSTRACT

This paper poses the question of whether people have a duty to participate in digital epidemiology. While an implied duty to participate has been argued for in relation to biomedical research in general, digital epidemiology involves processing of non-medical, granular and proprietary data types that pose different risks to participants. We first describe traditional justifications for epidemiology that imply a duty to participate for the general public, which take account of the immediacy and plausibility of threats, and the identifiability of data. We then consider how these justifications translate to digital epidemiology, understood as an evolution of traditional epidemiology that includes personal and proprietary digital data alongside formal medical datasets. We consider the risks imposed by re-purposing such data for digital epidemiology and propose eight justificatory conditions that should be met in justifying a duty to participate for specific digital epidemiological studies. The conditions are then applied to three hypothetical cases involving usage of social media data for epidemiological purposes. We conclude with a list of questions to be considered in public negotiations of digital epidemiology, including the application of a duty to participate to third-party data controllers, and the important distinction between moral and legal obligations to participate in research.


Subject(s)
Biomedical Research/ethics , Biomedical Research/methods , Electronic Health Records/ethics , Epidemiologic Studies , Moral Obligations , Research Subjects/psychology , Social Responsibility , Humans , Research Design
10.
Sci Eng Ethics ; 24(2): 505-528, 2018 04.
Article in English | MEDLINE | ID: mdl-28353045

ABSTRACT

In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions of how to prepare society for the widespread use of artificial intelligence (AI). In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a 'good AI society'. To do so, we examine how each report addresses the following three topics: (a) the development of a 'good AI society'; (b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and (c) where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports adequately address various ethical, social, and economic topics, but fall short of providing an overarching political vision and long-term strategy for the development of a 'good AI society'. To help fill this gap, we suggest a two-pronged approach in the conclusion.


Subject(s)
Artificial Intelligence , Government Regulation , Private Sector , Research , Social Responsibility , Social Values , Technology , Artificial Intelligence/ethics , Artificial Intelligence/legislation & jurisprudence , Delivery of Health Care , Disclosure , Ethics, Research , European Union , Government , Humans , Leadership , Policy , Politics , Research Report , Robotics , Transportation , United Kingdom , United States , Universities , Weapons
11.
Sci Robot ; 2(6)2017 May 31.
Article in English | MEDLINE | ID: mdl-33157874

ABSTRACT

To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.

12.
Sci Eng Ethics ; 22(2): 303-41, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26002496

ABSTRACT

The capacity to collect and analyse data is growing exponentially. Referred to as 'Big Data', this scientific, social, and technological trend has helped create destabilising amounts of information, which can challenge accepted social and ethical norms. Big Data remains a fuzzy idea, emerging across social, scientific, and business contexts that sometimes seem related only by the gigantic size of the datasets being considered. As is often the case at the cutting edge of scientific and technological progress, understanding of the ethical implications of Big Data lags behind. To bridge this gap, this article systematically and comprehensively analyses the academic literature concerning the ethical implications of Big Data, providing a watershed for future ethical investigations and regulations. Particular attention is paid to biomedical Big Data due to the inherent sensitivity of medical information. By means of a meta-analysis of the literature, a thematic narrative is provided to guide ethicists, data scientists, regulators, and other stakeholders through what is already known or hypothesised about the ethical risks of this emerging and innovative phenomenon. Five key areas of concern are identified: (1) informed consent, (2) privacy (including anonymisation and data protection), (3) ownership, (4) epistemology and objectivity, and (5) 'Big Data Divides' created between those who have and those who lack the resources to analyse increasingly large datasets. Critical gaps in the treatment of these themes are identified, with suggestions for future research. Six additional areas of concern are then suggested which, although related, have not yet attracted extensive debate in the existing literature. It is argued that they will require much closer scrutiny in the immediate future: (6) the dangers of ignoring group-level ethical harms; (7) the importance of epistemology in assessing the ethics of Big Data; (8) the changing nature of fiduciary relationships that become increasingly data-saturated; (9) the need to distinguish between 'academic' and 'commercial' Big Data practices in terms of potential harm to data subjects; (10) future problems with ownership of intellectual property generated from the analysis of aggregated datasets; and (11) the difficulty of providing meaningful access rights to individual data subjects who lack the necessary resources. Considered together, these eleven themes provide a thorough critical framework to guide the ethical assessment and governance of emerging Big Data practices.


Subject(s)
Biomedical Research/ethics , Confidentiality , Data Collection/ethics , Informed Consent/ethics , Ownership , Patient Access to Records/ethics , Privacy , Bioethical Issues , Humans , Knowledge , Statistics as Topic/ethics
13.
Stud Health Technol Inform ; 187: 117-35, 2013.
Article in English | MEDLINE | ID: mdl-23920463

ABSTRACT

This chapter compares different approaches to the ethical assessment of novel technologies by looking at two recent research projects. ETICA was an FP7 sister project to PHM-Ethics, responsible for identifying and ethically evaluating information and communication technologies expected to emerge in the next 10-15 years. The aims, methods, outcomes, and recommendations of ETICA are compared with those of PHM-Ethics, and linkages and similar findings are identified. A relationship is identified between the two projects, in which the assessment methodologies they developed are shown to operate at separate but complementary levels. ETICA sought to reform EU ethics governance for emerging ICTs. The outcomes of PHM-Ethics are analyzed in light of the policy recommendations of ETICA, demonstrating how the PHM-Ethics toolbox can contribute to ethics governance reform and to the kind of context-sensitive ethical assessment called for by ETICA.


Subject(s)
Biomedical Technology/ethics , Confidentiality/ethics , Diagnostic Self Evaluation , Ethical Analysis/methods , Medical Informatics/ethics , Monitoring, Ambulatory/ethics , Telemedicine/ethics , Algorithms