ABSTRACT
The past decade has seen efforts to develop new forms of autonomous systems, with applications in domains ranging from underwater search and rescue to clinical diagnosis. All of these applications require risk analyses, but such analyses often focus on technical sources of risk without acknowledging its wider systemic and organizational dimensions. In this article, we illustrate this deficit, and a way of redressing it, by offering a more systematic analysis of the sociotechnical sources of risk in an autonomous system. To this end, the article explores the development, deployment, and operation of an autonomous robot swarm for use in a public cloakroom in light of Macrae's structural, organizational, technological, epistemic, and cultural framework of sociotechnical risk. We argue that this framework provides a useful tool for capturing the complex "nontechnical" dimensions of risk in this domain that might otherwise be overlooked in the more conventional risk analyses that inform regulation and policymaking.
ABSTRACT
With the introduction of artificial intelligence (AI) to healthcare, there is also a need for professional guidance to support its use. New (2022) reports from the National Health Service AI Lab and Health Education England focus on healthcare workers' understanding of and confidence in AI clinical decision support systems (AI-CDSSs), and are concerned with developing trust in, and the trustworthiness of, these systems. While they offer guidance to aid developers and purchasers of such systems, they offer little specific guidance for the clinical users who will be required to use them in patient care. This paper argues that clinical, professional, and reputational safety will be put at risk if this deficit of professional guidance for clinical users of AI-CDSSs is not redressed. We argue that it is not enough to develop training for clinical users without first establishing professional guidance regarding the rights and expectations of clinical users. We conclude with a call to action for clinical regulators: to unite in drafting guidance for users of AI-CDSSs that helps manage clinical, professional, and reputational risks. We further suggest that this exercise offers an opportunity to address fundamental issues in the use of AI-CDSSs, regarding, for example, the fair burden of responsibility for outcomes.
ABSTRACT
This paper looks at the dilemmas posed by 'expertise' in high-technology regulation by examining the US Federal Aviation Administration's (FAA) 'type-certification' process, through which it evaluates new designs of civil aircraft. It observes that the FAA delegates a large amount of this work to the manufacturers themselves, and discusses why it does so by invoking arguments from the sociology of science and technology. It suggests that, contrary to popular portrayal, regulators of high technologies face an inevitable epistemic barrier when making technological assessments, which forces them to delegate technical questions to people with more tacit knowledge, and hence to 'regulate' at a distance by evaluating 'trust' rather than 'technology'. It then unravels some of the implications of this argument and its relation to our theories of regulation and 'regulatory capture'.
Subject(s)
Aviation/organization & administration, Certification/organization & administration, Government Regulation, Social Values, Biomedical Technology Assessment, Trust, Attitude, Professional Delegation, Federal Government, Government Agencies/organization & administration, Humans, Knowledge, Public Opinion, Risk Assessment, Safety, Sociology, Biomedical Technology Assessment/organization & administration, United States
ABSTRACT
Publics and policymakers increasingly have to contend with the risks of complex, safety-critical technologies such as airframes and reactors. As such, 'technological risk' has become an important object of modern governance, with state regulators as core agents and 'reliability assessment' as the most essential metric. The Science and Technology Studies (STS) literature casts doubt on whether we should place our faith in these assessments, because predictively calculating the ultra-high reliability required of such systems poses seemingly insurmountable epistemological problems. This paper argues that these misgivings are warranted in the nuclear sphere, despite evidence from the aviation sphere suggesting that such calculations can be accurate. It explains why regulatory calculations that predict the reliability of new airframes cannot work in principle, and then why those calculations nevertheless work in practice. It builds on this explanation to argue that the means by which engineers manage reliability in aviation are highly domain-specific, and to suggest how a more nuanced understanding of jetliners could inform debates about nuclear energy.