Results 1 - 12 of 12
1.
Front Robot AI ; 11: 1323980, 2024.
Article in English | MEDLINE | ID: mdl-38361604

ABSTRACT

Introduction: Humans and robots are increasingly collaborating on complex tasks such as firefighting. As robots become more autonomous, collaboration in human-robot teams should be combined with meaningful human control. Variable autonomy approaches can ensure meaningful human control over robots by satisfying accountability, responsibility, and transparency. To verify whether variable autonomy approaches truly ensure meaningful human control, the concept should be operationalized to allow its measurement. So far, designers of variable autonomy approaches lack metrics to systematically address meaningful human control. Methods: Therefore, this qualitative focus group study (n = 5 experts) explored quantitative operationalizations of meaningful human control during dynamic task allocation using variable autonomy in human-robot teams for firefighting. This variable autonomy approach requires dynamically allocating moral decisions to humans and non-moral decisions to robots, based on the robot's identification of moral sensitivity. We analyzed the focus group data using reflexive thematic analysis. Results: The results highlight the usefulness of quantifying the traceability requirement of meaningful human control, and show how situation awareness and performance can be used to objectively measure aspects of that requirement. Moreover, the results emphasize that team and robot outcomes can be used to verify meaningful human control, but that identifying the reasons underlying these outcomes determines the level of meaningful human control. Discussion: Based on our results, we propose an evaluation method that can verify whether dynamic task allocation using variable autonomy in human-robot teams for firefighting ensures meaningful human control over the robot. This method involves subjectively and objectively quantifying traceability using human responses during and after simulations of the collaboration. In addition, the method involves semi-structured interviews after the simulation to identify reasons underlying outcomes and suggestions to improve the variable autonomy approach.
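The dynamic allocation scheme described in this abstract, routing morally sensitive decisions to the human and the rest to the robot, can be sketched as a simple dispatch rule. This is an illustrative sketch only: the `Decision` type, the `moral_sensitivity` score, and the threshold are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    description: str
    moral_sensitivity: float  # robot's estimated moral sensitivity, in [0, 1]


def allocate(decision: Decision, threshold: float = 0.5) -> str:
    """Dynamic task allocation sketch: decisions the robot identifies as
    morally sensitive go to the human; non-moral decisions stay with the robot."""
    return "human" if decision.moral_sensitivity >= threshold else "robot"
```

In a real variable autonomy system the sensitivity estimate and the threshold would themselves need validation, which is precisely what the traceability measurements proposed in the paper are meant to support.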

2.
AI Soc ; 38(3): 1151-1166, 2023.
Article in English | MEDLINE | ID: mdl-36776534

ABSTRACT

The urban traffic environment is characterized by a highly differentiated pool of users, including vulnerable ones. This makes vehicle automation particularly difficult to implement, as safe coordination among those users is hard to achieve in such an open scenario. Different strategies have been proposed to address these coordination issues, but all of them have been found to be costly because they negatively affect a range of human values (e.g. safety, democracy, accountability…). In this paper, we claim that the negative value impacts entailed by each of these strategies can be interpreted as a lack of what we call Meaningful Human Control over different parts of a sociotechnical system. We argue that Meaningful Human Control theory provides the conceptual tools to reduce those unwanted consequences, and show how "designing for meaningful human control" constitutes a valid strategy to address coordination issues. Furthermore, we showcase a possible application of this framework in a highly dynamic urban scenario, aiming to safeguard important values such as safety, democracy, individual autonomy, and accountability. Our meaningful human control framework offers a perspective on coordination issues that keeps human actors in control while minimizing the active, operational role of the drivers. This approach ultimately makes it possible to promote a safe and responsible transition to full automation.

3.
Ethics Inf Technol ; 25(1): 10, 2023.
Article in English | MEDLINE | ID: mdl-36789353

ABSTRACT

Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea of "meaningful human control" of intelligent systems. In this opinion paper, we outline generic configurations of control of AI and we present an alternative to human control of AI, namely the inverse idea of having AI control humans, and we discuss the normative consequences of this alternative.

4.
Camb Q Healthc Ethics ; : 1-10, 2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36624620

ABSTRACT

Human decisions are increasingly supported by decision support systems (DSS). Humans are required to remain "on the loop" by monitoring and approving/rejecting machine recommendations. However, use of DSS can lead to overreliance on machines, reducing human oversight. This paper proposes "reflection machines" (RM) to increase meaningful human control. An RM provides a medical expert not with suggestions for a decision, but with questions that stimulate reflection about decisions. It can refer to data points or suggest counterarguments that are less compatible with the planned decision. RMs think against the proposed decision in order to increase human resistance against automation complacency. Building on preliminary research, this paper will (1) make a case for deriving a set of design requirements for RMs from EU regulations, (2) suggest how RMs could support decision-making, (3) describe how a prototype RM could be applied to the medical domain of chronic low back pain, and (4) highlight the importance of exploring an RM's functionality and the experiences of users working with it.

5.
Sci Eng Ethics ; 28(5): 37, 2022 08 23.
Article in English | MEDLINE | ID: mdl-35997901

ABSTRACT

In this report we focus on the definition of autonomous weapons systems (AWS). We provide a comparative analysis of existing official definitions of AWS as provided by States and international organisations, such as the ICRC and NATO. The analysis highlights that the definitions focus on different aspects of AWS and hence lead to different approaches to addressing the ethical and legal problems of these weapons systems. This divergence is detrimental both to fostering an understanding of AWS and to facilitating agreement around conditions of deployment and regulations of their use, and, indeed, around whether AWS are to be used at all. We draw on the comparative analysis to identify essential aspects of AWS and then offer a definition that provides a value-neutral ground for addressing the relevant ethical and legal problems. In particular, we identify four key aspects (autonomy; adapting capabilities of AWS; human control; and purpose of use) as the essential factors to define AWS and as key considerations for the related ethical and legal implications.


Subjects
Morals, Weapons, Humans
6.
Minds Mach (Dordr) ; : 1-25, 2022 Jul 28.
Article in English | MEDLINE | ID: mdl-35915817

ABSTRACT

The paper presents a framework to realise "meaningful human control" over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project "Meaningful Human Control over Automated Driving Systems", led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework is based on the core assumption that human persons and institutions, not hardware and software and their algorithms, should ultimately (though not necessarily directly) remain in control of, and thus morally responsible for, the potentially dangerous operation of driving in mixed traffic. We propose that an Automated Driving System is under meaningful human control if it behaves according to the relevant reasons of the relevant human actors (tracking), and if any potentially dangerous event can be related to a human actor (tracing). We operationalise the requirements for meaningful human control through multidisciplinary work in philosophy, behavioural psychology, and traffic engineering. The tracking condition is operationalised via a proximal scale of reasons and the tracing condition via an evaluation cascade table. We review the implications and requirements for the behaviour and skills of human actors, in particular those related to supervisory control and driver education. We show how the evaluation cascade table can be applied in concrete engineering use cases, in combination with the definition of core components, to expose deficiencies in traceability and thereby avoid so-called responsibility gaps. Future research directions are proposed to expand the philosophical framework and use cases, supervisory control and driver education, real-world pilots, and institutional embedding.
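The tracing condition in this abstract lends itself to a minimal illustration: collect, for each potentially dangerous event, the human actors it can be traced back to, and flag events with none. The event and actor names below are invented for illustration; the paper's evaluation cascade table is considerably richer than this sketch.

```python
def traceability_gaps(trace):
    """Return the events that cannot be traced back to any human actor,
    i.e. candidate deficiencies in traceability ("responsibility gaps")."""
    return [event for event, actors in trace.items() if not actors]


# Hypothetical event-to-actor mapping for an automated driving use case.
event_trace = {
    "emergency_braking": ["safety_driver"],
    "sensor_degradation_unhandled": ["system_designer", "fleet_operator"],
    "unlogged_mode_switch": [],  # no responsible human actor identified
}

print(traceability_gaps(event_trace))  # -> ['unlogged_mode_switch']
```

The point of such a check is not the code itself but the design discipline it encodes: every hazardous outcome should map to at least one human along the chain of design and operation.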

7.
Front Big Data ; 5: 1017677, 2022.
Article in English | MEDLINE | ID: mdl-36700136

ABSTRACT

Meaningful human control over AI is exalted as a key tool for assuring safety, dignity, and responsibility for AI and automated decision systems. It is a central topic especially in fields that deal with the use of AI for decisions that could cause significant harm, such as AI-enabled weapons systems. This paper argues that discussions of meaningful human control commonly fail to identify the purpose behind the call for it, and that stating that purpose is a necessary step in deciding how best to institutionalize meaningful human control over AI. The paper identifies five common purposes for human control and sketches how different purposes translate into different institutional designs.

8.
Front Robot AI ; 8: 744590, 2021.
Article in English | MEDLINE | ID: mdl-34805290

ABSTRACT

Rapid developments in evolutionary computation, robotics, 3D printing, and materials science are enabling advanced systems of robots that can autonomously reproduce and evolve. The emerging technology of robot evolution challenges existing AI ethics because the inherent adaptivity, stochasticity, and complexity of evolutionary systems severely weaken human control and induce new types of hazards. In this paper we address the question of how robot evolution can be responsibly controlled to avoid safety risks. We discuss risks related to robot multiplication, maladaptation, and domination, and suggest solutions for meaningful human control. Such concerns may seem far-fetched now; however, we posit that awareness must be created before the technology becomes mature.

9.
Front Robot AI ; 8: 640647, 2021.
Article in English | MEDLINE | ID: mdl-34124173

ABSTRACT

With the progress of Artificial Intelligence, intelligent agents are increasingly being deployed in tasks for which ethical guidelines and moral values apply. As artificial agents do not have a legal position, humans should be held accountable if actions do not comply, which implies that humans need to exercise control. This is often labeled Meaningful Human Control (MHC). In this paper, achieving MHC is addressed as a design problem that defines the collaboration between humans and agents. We propose three possible team designs (Team Design Patterns), varying in the level of autonomy on the agent's part. The team designs include explanations given by the agent to clarify its reasoning and decision-making. The designs were implemented in a simulation of a medical triage task, to be executed by a domain expert and an artificial agent. The triage task simulates making decisions under time pressure, with too few resources available to comply with all medical guidelines all the time, hence involving moral choices. Domain experts (i.e., health care professionals) participated in the present study. The first goal was to assess the ecological relevance of the simulation; the second, to explore the control that the human has over the agent to warrant morally compliant behavior in each proposed team design; and the third, to evaluate the role of agent explanations in the human's understanding of the agent's reasoning. Results showed that the experts overall found the task a believable simulation of what might occur in reality. Domain experts experienced control over the team's moral compliance when consequences were quickly noticeable. When the consequences instead emerged much later, the experts experienced less control and felt less responsible. Possibly due to the experienced time pressure in the task, or to overtrust in the agent, the experts did not use the explanations much during the task; when asked afterwards, however, they considered them useful. It is concluded that a team design should emphasize and support the human in developing a sense of responsibility for the agent's behavior and for the team's decisions. The design should include explanations that fit the assigned team roles as well as the human's cognitive state.

10.
Water Res ; 170: 115287, 2020 Mar 01.
Article in English | MEDLINE | ID: mdl-31812813

ABSTRACT

The functional diversity of two planktonic functional compartments, the nano-microphytoplankton and the mesozooplankton, was used to better understand i) the functioning of drained marshes and their related ecological functions, and ii) the impacts of human control (replenishment) and of human activities on the catchment basin (urbanization and catchment basin size). The study was based on a monthly seasonal survey of 7 freshwater drained marshes. Both nano-microphyto- and mesozooplankton displayed high seasonal variations linked to environmental fluctuations and to human control of the sea lock gates. Winter showed the lowest biomasses of both compartments. Winter, characterized by low water temperature, low light availability and high flooding, was associated with the dominance of tychopelagic phytoplankton and K-strategist zooplankton. Spring and summer were characterized by i) a succession of large pelagic cells, small cells and then taxa with alternative feeding strategies, due to nitrogen limitation and phosphorus desorption from the sediment leading to eutrophication processes, and ii) the dominance of r-strategists among the mesozooplankton. Artificial summer replenishment acts positively on water quality by reducing eutrophication processes, since the nitrogen inputs limit the proliferation of mixotrophic and diazotrophic phytoplankton and increase ecological efficiency during the warm period. Both small and large catchment basins may lead to summer eutrophication processes in drained marshes: the largest imply stronger hydrodynamics and hence large inputs of nitrogen favoring phytoplankton development, while the smallest exhibit hypoxia problems due to high proliferation of macrophytes. Urbanized marshes are less subject to eutrophication during summer than non-urbanized marshes, owing to more regular nutrient inputs from urban waste; however, they exhibited a lower ecological efficiency. The results suggest that better management of the hydrodynamics of such anthropogenic systems can reduce eutrophication risks in coastal areas.


Subjects
Plankton, Wetlands, Animals, Eutrophication, Human Activities, Humans, Phytoplankton, Seasons
11.
Front Robot AI ; 5: 15, 2018.
Article in English | MEDLINE | ID: mdl-33500902

ABSTRACT

Debates on lethal autonomous weapon systems have proliferated in the past 5 years. Ethical concerns have been voiced about a possible rise in the number of wrongs and crimes in military operations and about the creation of a "responsibility gap" for harms caused by these systems. To address these concerns, the principle of "meaningful human control" has been introduced in the legal-political debate; according to this principle, humans, not computers and their algorithms, should ultimately remain in control of, and thus morally responsible for, relevant decisions about (lethal) military operations. However, policy-makers and technical designers lack a detailed theory of what "meaningful human control" exactly means. In this paper, we lay the foundation of a philosophical account of meaningful human control, based on the concept of "guidance control" as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of "Responsible Innovation" and "Value-sensitive Design", our account of meaningful human control is cast in the form of design requirements. We identify two general necessary conditions for an autonomous system to remain under meaningful human control: first, a "tracking" condition, according to which the system should be able to respond both to the relevant moral reasons of the humans designing and deploying the system and to the relevant facts in the environment in which the system operates; second, a "tracing" condition, according to which the system should be designed in such a way that the outcome of its operations can always be traced back to at least one human along the chain of design and operation. As we think that meaningful human control can be one of the central notions in the ethics of robotics and AI, in the last part of the paper we start exploring the implications of our account for the design and use of non-military autonomous systems, for instance self-driving cars.

12.
J R Soc Interface ; 11(99)2014 Oct 06.
Article in English | MEDLINE | ID: mdl-25056217

ABSTRACT

Understanding how humans control unstable systems is central to many research problems, with applications ranging from quiet standing to aircraft landing. Increasingly, evidence appears in favour of the event-driven control hypothesis: human operators only start actively controlling the system when the discrepancy between the current and desired system states becomes large enough. Event-driven models based on the concept of a threshold can explain many features of the experimentally observed dynamics. However, much still remains unclear about the dynamics of human-controlled systems, which likely indicates that humans use more intricate control mechanisms. This paper argues that control activation in humans may not be threshold-driven, but instead intrinsically stochastic and noise-driven. Specifically, we suggest that control activation stems from a stochastic interplay between the operator's need to keep the controlled system near the goal state, on the one hand, and the tendency to postpone interrupting the system dynamics, on the other. We propose a model capturing this interplay and show that it matches experimental data on human balancing of a virtual overdamped stick. Our results indicate that the noise-driven activation mechanism plays a crucial role at least in the considered task and, hypothetically, in a broad range of human-controlled processes.


Assuntos
Modelos Biológicos , Equilíbrio Postural/fisiologia , Desempenho Psicomotor/fisiologia , Fenômenos Biomecânicos , Feminino , Jogos Experimentais , Humanos , Masculino , Limiar Sensorial/fisiologia , Processos Estocásticos
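The contrast drawn in this last abstract between threshold-driven and noise-driven control activation can be illustrated with a toy sketch. The functional forms and parameter values below are assumptions chosen for illustration; they are not the authors' model.

```python
import random


def threshold_act(error, threshold=0.3):
    """Event-driven hypothesis: the operator intervenes if and only if the
    discrepancy |error| between current and desired state exceeds a threshold."""
    return abs(error) > threshold


def noise_driven_act(error, gain=2.0):
    """Noise-driven hypothesis: the probability of intervening grows with the
    discrepancy, so at intermediate errors activation remains stochastic,
    and both action and inaction are possible on any given step."""
    p = min(1.0, gain * abs(error))
    return random.random() < p
```

Under the threshold rule, behaviour at a given error is fully determined; under the noise-driven rule, only the extremes are certain (an error of zero never triggers action, a large error always does), which is the qualitative signature the paper attributes to human operators.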