Results 1 - 20 of 27
1.
Sci Eng Ethics ; 28(5): 37, 2022 08 23.
Article in English | MEDLINE | ID: mdl-35997901

ABSTRACT

In this report we focus on the definition of autonomous weapons systems (AWS). We provide a comparative analysis of existing official definitions of AWS as provided by States and international organisations, such as the ICRC and NATO. The analysis highlights that the definitions focus on different aspects of AWS and hence lead to different approaches to addressing the ethical and legal problems of these weapons systems. This fragmentation is detrimental both to fostering an understanding of AWS and to facilitating agreement on the conditions of their deployment, the regulation of their use, and, indeed, whether AWS are to be used at all. We draw on the comparative analysis to identify the essential aspects of AWS and then offer a definition that provides a value-neutral ground for addressing the relevant ethical and legal problems. In particular, we identify four key aspects (autonomy; adapting capabilities; human control; and purpose of use) as the essential factors for defining AWS and as key considerations for the related ethical and legal implications.


Subject(s)
Morals; Weapons; Humans
3.
Sci Eng Ethics ; 27(4): 44, 2021 07 06.
Article in English | MEDLINE | ID: mdl-34231029

ABSTRACT

Important decisions that impact human lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of automation. In this article, we consider the feasibility and efficacy of ethics-based auditing (EBA) as a governance mechanism that allows organisations to validate claims made about their ADMS. Building on previous work, we define EBA as a structured process whereby an entity's present or past behaviour is assessed for consistency with relevant principles or norms. We then offer three contributions to the existing literature. First, we provide a theoretical explanation of how EBA can contribute to good governance by promoting procedural regularity and transparency. Second, we propose seven criteria for how to design and implement EBA procedures successfully. Third, we identify and discuss the conceptual, technical, social, economic, organisational, and institutional constraints associated with EBA. We conclude that EBA should be considered an integral component of multifaceted approaches to managing the ethical risks posed by ADMS.

4.
Sci Eng Ethics ; 27(6): 68, 2021 11 12.
Article in English | MEDLINE | ID: mdl-34767085

ABSTRACT

Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential harms. This article provides a comparative analysis of the European Union (EU) and the United States' (US) AI strategies and considers (i) the visions of a 'Good AI Society' that are put forward in key policy documents and their opportunity costs, (ii) the extent to which the implementation of each vision is living up to stated aims, and (iii) the consequences that these differing visions of a 'Good AI Society' have for transatlantic cooperation. The article concludes by comparing the ethical desirability of each vision and identifies areas where the EU, and especially the US, need to improve in order to achieve ethical outcomes and deepen cooperation.


Subject(s)
Artificial Intelligence; Government; European Union; Policy; Societies; United States
5.
J Med Internet Res ; 22(8): e19311, 2020 08 03.
Article in English | MEDLINE | ID: mdl-32648850

ABSTRACT

Since 2016, social media companies and news providers have come under pressure to tackle the spread of political mis- and disinformation (MDI) online. However, despite evidence that online health MDI (on the web, on social media, and within mobile apps) also has negative real-world effects, there has been a lack of comparable action by either online service providers or state-sponsored public health bodies. We argue that this is problematic and seek to answer three questions: why has so little been done to control the flow of, and exposure to, health MDI online; how might more robust action be justified; and what specific, newly justified actions are needed to curb the flow of, and exposure to, online health MDI? In answering these questions, we show that four ethical concerns, related to paternalism, autonomy, freedom of speech, and pluralism, are partly responsible for the lack of intervention. We then suggest that these concerns can be overcome by relying on four arguments: (1) education is necessary but insufficient to curb the circulation of health MDI, (2) there is precedent for state control of internet content in other domains, (3) network dynamics adversely affect the spread of accurate health information, and (4) justice is best served by protecting those susceptible to inaccurate health information. These arguments provide a strong case for classifying the quality of the infosphere as a social determinant of health, thus making its protection a public health responsibility. In addition, they offer a strong justification for working to overcome the ethical concerns associated with state-led intervention in the infosphere to protect public health.


Subject(s)
Internet; Public Health; Social Determinants of Health; COVID-19; Communication; Coronavirus Infections/epidemiology; Health Education; Humans; Pandemics; Pneumonia, Viral/epidemiology; Social Media
6.
Sci Eng Ethics ; 26(4): 2313-2343, 2020 08.
Article in English | MEDLINE | ID: mdl-31933119

ABSTRACT

This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term 'digital well-being' is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human-computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.


Subject(s)
Delivery of Health Care; Personal Autonomy; Technology; Humans; Technology/ethics
7.
Sci Eng Ethics ; 26(3): 1771-1796, 2020 06.
Article in English | MEDLINE | ID: mdl-32246245

ABSTRACT

The idea of artificial intelligence for social good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.


Subject(s)
Artificial Intelligence; Morals; Humans
8.
Sci Eng Ethics ; 26(1): 89-120, 2020 02.
Article in English | MEDLINE | ID: mdl-30767109

ABSTRACT

Artificial intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area, spanning socio-legal studies to formal science, there is little certainty about what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing ethicists, policy-makers, and law enforcement organisations with a synthesis of the current problems and a possible solution space.


Subject(s)
Artificial Intelligence/trends; Crime/trends; Social Media; Commerce/legislation & jurisprudence; Commerce/trends; Drug Trafficking/legislation & jurisprudence; Drug Trafficking/trends; Forecasting; Fraud/legislation & jurisprudence; Fraud/trends; Humans; Interdisciplinary Research; Liability, Legal; Sex Offenses/legislation & jurisprudence; Sex Offenses/trends
10.
Nature ; 556(7701): 296-298, 2018 04.
Article in English | MEDLINE | ID: mdl-29662138
11.
Sci Eng Ethics ; 25(5): 1357-1387, 2019 10.
Article in English | MEDLINE | ID: mdl-30357557

ABSTRACT

This article argues that personal medical data should be made available for scientific research, by enabling and encouraging individuals to donate their medical records once deceased, similar to the way in which they can already donate organs or bodies. This research is part of a project on posthumous medical data donation developed by the Digital Ethics Lab at the Oxford Internet Institute at the University of Oxford. Ten arguments are provided to support the need to foster posthumous medical data donation. Two major risks that could follow from unregulated donation of medical data are also identified: harm to others, and lack of control over the use of data. The argument that record-based medical research should proceed without the need to secure informed consent is rejected; instead, a voluntary and participatory approach to using personal medical data should be followed. The analysis concludes by stressing the need to develop an ethical code for data donation to minimise the risks, and offers five foundational principles for ethical medical data donation, suggested as a draft code.


Subject(s)
Biomedical Research/ethics; Databases as Topic/ethics; Health Records, Personal/ethics; Informed Consent; Attitude to Death; Codes of Ethics; Confidentiality; Humans; Ownership; Patient Preference; Practice Guidelines as Topic
12.
Sci Eng Ethics ; 24(2): 505-528, 2018 04.
Article in English | MEDLINE | ID: mdl-28353045

ABSTRACT

In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions of how to prepare society for the widespread use of artificial intelligence (AI). In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a 'good AI society'. To do so, we examine how each report addresses the following three topics: (a) the development of a 'good AI society'; (b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and (c) where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports address adequately various ethical, social, and economic topics, but fall short of providing an overarching political vision and long-term strategy for the development of a 'good AI society'. To help fill this gap, in the conclusion we suggest a two-pronged approach.


Subject(s)
Artificial Intelligence; Government Regulation; Private Sector; Research; Social Responsibility; Social Values; Technology; Artificial Intelligence/ethics; Artificial Intelligence/legislation & jurisprudence; Delivery of Health Care; Disclosure; Ethics, Research; European Union; Government; Humans; Leadership; Policy; Politics; Research Report; Robotics; Transportation; United Kingdom; United States; Universities; Weapons
13.
Sci Eng Ethics ; 22(6): 1575-1603, 2016 12.
Article in English | MEDLINE | ID: mdl-26613596

ABSTRACT

Online service providers (OSPs), such as AOL, Facebook, Google, Microsoft, and Twitter, significantly shape the informational environment (infosphere) and influence users' experiences and interactions within it. There is general agreement on the centrality of OSPs in information societies, but little consensus about what principles should shape their moral responsibilities and practices. In this article, we analyse the main contributions to the debate on the moral responsibilities of OSPs. By endorsing the method of the levels of abstraction (LoAs), we first analyse the moral responsibilities of OSPs in the web (LoA_IN). These concern the management of online information, which includes information filtering, Internet censorship, the circulation of harmful content, and the implementation and fostering of human rights (including privacy). We then consider the moral responsibilities ascribed to OSPs on the web (LoA_ON) and focus on the existing legal regulation of access to users' data. The overall analysis provides an overview of the current state of the debate and highlights two main results. First, topics related to OSPs' public role, especially their gatekeeping function, their corporate social responsibilities, and their role in implementing and fostering human rights, have acquired increasing relevance in the specialised literature. Second, there is a lack of an ethical framework that can (a) define OSPs' responsibilities, and (b) provide the fundamental sharable principles necessary to guide OSPs' conduct within the multicultural and international context in which they operate. This article contributes to the ethical framework necessary to deal with (a) and (b) by endorsing a LoA enabling the definition of the responsibilities of OSPs with respect to the well-being of the infosphere and of the entities inhabiting it (LoA_For).


Subject(s)
Internet/ethics; Morals; Computer Security/ethics; Computer Security/standards; Human Rights/standards; Humans
14.
Sci Eng Ethics ; 21(5): 1125-38, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25209218

ABSTRACT

The "struggle between liberties and authorities", as described by Mill, refers to the tension between individual rights and the rules restricting them that are imposed by public authorities exerting their power over civil society. In this paper I argue that contemporary information societies are experiencing a new form of such a struggle, which now involves liberties and authorities in the cyber-sphere and, more specifically, refers to the tension between cyber-security measures and individual liberties. Ethicists, political philosophers and political scientists have long debated how to strike an ethically sound balance between security measures and individual rights. I argue that such a balance can only be reached once individual rights are clearly defined, and that such a definition cannot prescind from an analysis of individual well-being in the information age. Hence, I propose an analysis of individual well-being which rests on the capability approach, and I then identify a set of rights that individuals should claim for themselves. Finally, I consider a criterion for balancing the proposed set of individual rights with cyber-security measures in the information age.


Subject(s)
Ethical Analysis; Government Regulation; Human Rights; Internet; Security Measures/ethics; Freedom; Humans
15.
Digit Soc ; 2(1): 12, 2023.
Article in English | MEDLINE | ID: mdl-37034181

ABSTRACT

Intelligence agencies have identified artificial intelligence (AI) as a key technology for maintaining an edge over adversaries. As a result, efforts to develop, acquire, and employ AI capabilities for purposes of national security are growing. This article reviews the ethical challenges presented by the use of AI for augmented intelligence analysis. These challenges have been identified through a qualitative systematic review of the relevant literature. The article identifies five sets of ethical challenges relating to intrusion, explainability and accountability, bias, authoritarianism and political security, and collaboration and classification, and offers a series of recommendations targeted at intelligence agencies to address and mitigate these challenges.

16.
AI Soc ; : 1-16, 2023 Jan 28.
Article in English | MEDLINE | ID: mdl-36741972

ABSTRACT

Today, open source intelligence (OSINT), i.e., information derived from publicly available sources, makes up between 80 and 90 percent of all intelligence activities carried out by Law Enforcement Agencies (LEAs) and intelligence services in the West. Developments in data mining, machine learning, visual forensics and, most importantly, the growing computing power available for commercial use, have enabled OSINT practitioners to speed up, and sometimes even automate, intelligence collection and analysis, obtaining more accurate results more quickly. As the infosphere expands to accommodate ever-increasing online presence, so does the pool of actionable OSINT. These developments raise important concerns in terms of governance, ethical, legal, and social implications (GELSI). New and crucial oversight concerns emerge alongside standard privacy concerns, as some of the more advanced data analysis tools require little to no supervision. This article offers a systematic review of the relevant literature. It analyzes 571 publications to assess the current state of the literature on the use of AI-powered OSINT (and the development of OSINT software) as it relates to the GELSI framework, highlighting potential gaps and suggesting new research directions.

17.
AI Soc ; 38(1): 283-307, 2023.
Article in English | MEDLINE | ID: mdl-34690449

ABSTRACT

In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by the training of data-intensive and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI's greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based, and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combatting climate change, while reducing its impact on the environment.

18.
DNA Cell Biol ; 41(1): 11-15, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34941450

ABSTRACT

In this commentary, we focus on the ethical challenges of data sharing and its potential in supporting biomedical research. Taking human genomics (HG) and European governance for sharing genomic data as a case study, we consider how to balance competing rights and interests: protecting the privacy of data subjects and ensuring data security, while fostering scientific progress and meeting the need to promote public health. This is of particular relevance in light of the current pandemic, which stresses the urgent need for international collaborations to promote health for all. We draw from existing ethical codes for data sharing in HG to offer recommendations as to how to protect rights while fostering scientific research and open science.


Subject(s)
Information Dissemination
19.
AI Soc ; : 1-16, 2022 Sep 30.
Article in English | MEDLINE | ID: mdl-36212227

ABSTRACT

This paper considers a host of definitions and labels attached to the concept of smart cities to identify four dimensions that ground a review of ethical concerns emerging from the current debate. These are: (1) network infrastructure, with the corresponding concerns of control, surveillance, and data privacy and ownership; (2) post-political governance, embodied in the tensions between public and private decision-making and cities as post-political entities; (3) social inclusion, expressed in the aspects of citizen participation and inclusion, and inequality and discrimination; and (4) sustainability, with a specific focus on the environment as an element to protect but also as a strategic element for the future. Given the persisting disagreements around the definition of a smart city, the article identifies in these four dimensions a more stable reference framework within which ethical concerns can be clustered and discussed. Identifying these dimensions makes possible a review of the ethical implications of smart cities that is transversal to their different types and resilient towards the unsettled debate over their definition.

20.
Soc Sci Med ; 260: 113172, 2020 09.
Article in English | MEDLINE | ID: mdl-32702587

ABSTRACT

This article presents a mapping review of the literature concerning the ethics of artificial intelligence (AI) in health care. The goal of this review is to summarise current debates and identify open questions for future research. Five literature databases were searched to support the following research question: how can the primary ethical risks presented by AI-health be categorised, and what issues must policymakers, regulators and developers consider in order to be 'ethically mindful'? A series of screening stages were carried out, for example removing articles that focused on digital health in general (e.g. data sharing, data access, data privacy, surveillance/nudging, consent, ownership of health data, evidence of efficacy), yielding a total of 156 papers that were included in the review. We find that ethical issues can be (a) epistemic, related to misguided, inconclusive or inscrutable evidence; (b) normative, related to unfair outcomes and transformative effects; or (c) related to traceability. We further find that these ethical issues arise at six levels of abstraction: individual, interpersonal, group, institutional, and societal or sectoral. Finally, we outline a number of considerations for policymakers and regulators, mapping these to the existing literature, and categorising each as epistemic, normative or traceability-related, at the relevant level of abstraction. Our goal is to inform policymakers, regulators and developers of what they must consider if they are to enable health and care systems to capitalise on the dual advantage of ethical AI: maximising the opportunities to cut costs, improve care, and improve the efficiency of health and care systems, whilst proactively avoiding the potential harms. We argue that if action is not swiftly taken in this regard, a new 'AI winter' could occur due to chilling effects related to a loss of public trust in the benefits of AI for health care.


Subject(s)
Artificial Intelligence; Delivery of Health Care; Humans; Morals; Ownership; Privacy