ABSTRACT
Voting Advice Applications (VAAs) are interactive tools used to assist in one's choice of a party or candidate to vote for in an upcoming election. They have the potential to increase citizens' trust and participation in democratic structures. However, there is no established ground truth for one's electoral choice, and VAA recommendations depend strongly on architectural and design choices. We assessed several representative European VAAs according to the Ethics Guidelines for Trustworthy AI provided by the European Commission, using publicly available information. We found scores to be comparable across VAAs and low on most requirements, with differences reflecting the kind of developing institution. Across VAAs, we identify the need for improvement in (i) transparency regarding the subjectivity of recommendations, (ii) diversity of stakeholder participation, (iii) user-centric documentation of the algorithm, and (iv) disclosure of the underlying values and assumptions. Supplementary Information: The online version contains supplementary material available at 10.1007/s10676-024-09790-6.
Subjects
Information Dissemination, Peer Review, Research/methods, Peer Review, Research/standards, Program Evaluation, Confidentiality, Information Dissemination/ethics, Peer Review, Research/ethics, Periodicals as Topic, Pilot Projects
ABSTRACT
The rapid dynamics of COVID-19 call for quick and effective tracking of virus transmission chains and early detection of outbreaks, especially in the "phase 2" of the pandemic, when lockdown and other restriction measures are progressively withdrawn, in order to avoid or minimize contagion resurgence. For this purpose, contact-tracing apps are being proposed for large-scale adoption by many countries. A centralized approach, where all data sensed by the app are sent to a nation-wide server, raises concerns about citizens' privacy and needlessly strong digital surveillance, alerting us to the need to minimize personal data collection and to avoid location tracking. We advocate the conceptual advantage of a decentralized approach, where both contact and location data are collected exclusively in individual citizens' "personal data stores", to be shared separately and selectively (e.g., with a backend system, but possibly also with other citizens), voluntarily, only when the citizen has tested positive for COVID-19, and with a privacy-preserving level of granularity. This approach better protects the personal sphere of citizens and affords multiple benefits: it allows for detailed information gathering about infected people in a privacy-preserving fashion, which in turn enables both contact tracing and the early detection of outbreak hotspots at a finer geographic scale. The decentralized approach is also scalable to large populations, in that only the data of positive patients need be handled at a central level. Our recommendation is two-fold. First, to extend existing decentralized architectures with a light touch, in order to manage the collection of location data locally on the device, and to allow the user to share spatio-temporal aggregates, if and when they want and for specific aims, with health authorities, for instance.
Second, we favour the longer-term pursuit of realizing a Personal Data Store vision, giving users the opportunity to contribute to the collective good to the extent they want, enhancing self-awareness, and cultivating collective efforts for rebuilding society.
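The sharing logic the abstract describes can be illustrated with a minimal sketch. This is not code from the paper; the class and method names (`PersonalDataStore`, `share_aggregates`) and the grid/hour coarsening are illustrative assumptions about how "share only on a positive test, voluntarily, at a privacy-preserving granularity" might look on-device.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalDataStore:
    """Device-local store: nothing leaves the device until the user shares it."""
    contacts: list = field(default_factory=list)   # ephemeral contact IDs
    locations: list = field(default_factory=list)  # (lat, lon, unix_ts) fixes

    def record_contact(self, ephemeral_id):
        self.contacts.append(ephemeral_id)

    def record_location(self, lat, lon, ts):
        self.locations.append((lat, lon, ts))

    def share_aggregates(self, tested_positive, consented, cell_deg=0.01):
        """Release coarse spatio-temporal aggregates only on a positive test
        AND explicit consent; raw trajectories never leave the device."""
        if not (tested_positive and consented):
            return None
        # Coarsen each fix to a grid cell and an hour bucket (hypothetical
        # granularity; a real app would negotiate this with the authority).
        return {(round(lat / cell_deg) * cell_deg,
                 round(lon / cell_deg) * cell_deg,
                 ts // 3600)
                for lat, lon, ts in self.locations}
```

The key design point mirrors the abstract: the default answer to any data request is `None`, and only aggregates, never raw contact or location records, cross the device boundary.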
ABSTRACT
People are able to comfort others by talking about their problems. In our research, we are exploring whether computers can provide social support in a similar manner. Recently, we proposed a design for an empathic virtual buddy that supports victims of cyberbullying. To validate our approach to providing social support and to gather feedback from potential users, we performed an experiment (N = 30) comparing interaction with the buddy to reading a text. Both the buddy and the text received high scores; scores for the buddy were consistently higher. The difference was significant for the extent to which feelings were taken into account. These results indicate that participants liked interacting with the buddy and that they recognized the emotional cues expressed by the buddy, thus validating our approach to comforting users.
Subjects
Bullying/psychology, Empathy, Social Support, User-Computer Interface, Adolescent, Female, Humans, Male
ABSTRACT
Ethical concerns about autonomous weapon systems (AWS) call for a process of human oversight to ensure accountability over targeting decisions and the use of force. To align the behavior of autonomous systems with human values and norms, the Design for Values approach can be used to consciously embody values in the deployment of AWS. One instrument for the elicitation of values during design is participative deliberation. In this paper, we describe a participative deliberation method and the results of a value elicitation by means of the value deliberation process, for which we organized two panels, each consisting of a mixture of experts in the field of AWS working in military operations, foreign policy, NGOs, and industry. The results of our qualitative study indicate not only that value discussion leads to changes in the perceived acceptability of alternatives, or options, in a scenario of AWS deployment, but also give insight into which values are deemed important, and highlight that trust in the decision-making of an AWS is crucial.
ABSTRACT
As Artificial Intelligence (AI) continues to expand its reach, the demand for human control and the development of AI systems that adhere to our legal, ethical, and social values also grows. Many (international and national) institutions have taken steps in this direction and published guidelines for the development and deployment of responsible AI systems. These guidelines, however, rely heavily on high-level statements that provide no clear criteria for system assessment, making effective control over systems a challenge. "Human oversight" is one of the requirements being put forward as a means to support human autonomy and agency. In this paper, we argue that human presence alone does not meet this requirement and that such a misconception may limit the use of automation where it can otherwise provide much benefit across industries. We therefore propose the development of systems with variable autonomy, that is, dynamically adjustable levels of autonomy, as a means of ensuring meaningful human control over an artefact by satisfying all three core values commonly advocated in ethical guidelines: accountability, responsibility, and transparency.
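The idea of dynamically adjustable autonomy can be sketched as a small controller. This is a hypothetical illustration, not the paper's proposal in code; the level names, the `confidence`/`risk` inputs, and the 0.5/0.8 thresholds are all assumed for the example.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    MANUAL = 0      # human performs the task
    SUPERVISED = 1  # system proposes, human approves
    AUTONOMOUS = 2  # system acts, human can intervene

def adjust_autonomy(current, confidence, risk, approve_threshold=0.8):
    """Lower the autonomy level when the system is uncertain or the situation
    is risky; only fully autonomous operation when conditions clearly allow."""
    if risk == "high" or confidence < 0.5:
        return AutonomyLevel.MANUAL          # hand control back to the human
    if confidence < approve_threshold:
        return min(current, AutonomyLevel.SUPERVISED)  # never escalate here
    return AutonomyLevel.AUTONOMOUS
```

The point of the sketch is that autonomy is an output recomputed from context at every step, rather than a fixed design-time property, which is what distinguishes variable autonomy from mere human presence.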
ABSTRACT
[This corrects the article DOI: 10.1007/s11023-020-09527-6.].
ABSTRACT
In their efforts to tackle the COVID-19 crisis, decision makers are considering the development and use of smartphone applications for contact tracing. Even though these applications differ in technology and methods, there is an increasing concern about their implications for privacy and human rights. Here we propose a framework to evaluate their suitability in terms of impact on the users, employed technology and governance methods. We illustrate its usage with three applications, and with the European Data Protection Board (EDPB) guidelines, highlighting their limitations.
ABSTRACT
During the COVID-19 crisis, governments and other decision makers have had to make many difficult decisions. For example, do we go for a total lockdown or keep schools open? How many people, and which people, should be tested? Although there are many good models, e.g. from epidemiologists, of the spread of the virus under certain conditions, these models do not directly translate into the interventions that can be taken by government. Nor can these models help us understand the economic and/or social consequences of the interventions. However, effective and sustainable solutions need to take this combination of factors into account. In this paper, we propose an agent-based social simulation tool, ASSOCC, that supports decision makers in understanding the possible consequences of policy interventions by exploring the combined social, health, and economic consequences of these interventions.
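The combined health-and-economy trade-off that such a simulation explores can be shown with a toy sketch. This is not ASSOCC itself: the contact model, transmission probability, and the per-day economic cost figures are invented purely to illustrate how one run yields both a health outcome and an economic one.

```python
import random

def simulate(n_agents=1000, days=60, contacts_per_day=8,
             p_transmit=0.05, lockdown=False, seed=1):
    """Toy agent-based run: a lockdown halves daily contacts but incurs a
    higher economic cost per agent per day. Returns (infected, econ_cost)."""
    rng = random.Random(seed)
    infected = set(rng.sample(range(n_agents), 5))   # 5 initial cases
    contacts = contacts_per_day // 2 if lockdown else contacts_per_day
    econ_cost = 0.0
    for _ in range(days):
        newly = set()
        for _agent in infected:
            for _ in range(contacts):                # random mixing
                other = rng.randrange(n_agents)
                if other not in infected and rng.random() < p_transmit:
                    newly.add(other)
        infected |= newly
        econ_cost += n_agents * (1.0 if lockdown else 0.2)  # assumed costs
    return len(infected), econ_cost
```

Comparing `simulate(lockdown=False)` with `simulate(lockdown=True)` gives a pair of (health, economy) outcomes per policy, which is the kind of combined view the abstract argues epidemiological models alone cannot provide.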
ABSTRACT
The emergence of artificial intelligence (AI) and its progressively wider impact on many sectors requires an assessment of its effect on the achievement of the Sustainable Development Goals. Using a consensus-based expert elicitation process, we find that AI can enable the accomplishment of 134 targets across all the goals, but it may also inhibit 59 targets. However, current research foci overlook important aspects. The fast development of AI needs to be supported by the necessary regulatory insight and oversight for AI-based technologies to enable sustainable development. Failure to do so could result in gaps in transparency, safety, and ethical standards.
ABSTRACT
This article reports the findings of AI4People, an Atomium-EISMD initiative designed to lay the foundations for a "Good AI Society". We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations (to assess, to develop, to incentivise, and to support good AI), which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.