ABSTRACT
This article investigates whether, in the context of rising nationalism, drawing attention to national innovation strategies influences public health behaviours, particularly vaccine uptake. It draws on an original two-wave panel study of United Kingdom (UK) respondents during the COVID-19 pandemic. The survey included an experimental design, which primed respondents with a nationalist framing of COVID-19 vaccines, drawing attention to the UK's role in developing the AstraZeneca vaccine and in the rapid approval and rollout of other vaccines. Our results show no significant impact of nationalist framing on vaccine willingness, even among those with nationalist or science-skeptical views. These findings suggest that public health authorities should be cautious with nationalist framing, as it may be ineffective or counterproductive.
Subject(s)
COVID-19, SARS-CoV-2, Humans, United Kingdom, COVID-19/prevention & control, COVID-19/epidemiology, COVID-19 Vaccines/administration & dosage, Public Health, Surveys and Questionnaires, Health Behaviour, Pandemics, Male, Female, Adult, Middle Aged
ABSTRACT
When it comes to making decisions about artificial intelligence (AI), Eric Schmidt is very clear. In 2023, the former Google CEO told NBC's Meet the Press, "there's no way a nonindustry person can understand what is possible. It's just too new, too hard, there's not the expertise." But if, as Schmidt believes, AI will be the next industrial revolution, then the technology is too important to be left to technology companies. AI poses huge challenges for democratic societies, and the decisions on it are currently being made by a very small group of people. Realizing the opportunities of AI, understanding its risks, and steering it toward the public interest will require a large dose of public participation.
ABSTRACT
There's a scene in the movie Oppenheimer in which the protagonist is trying to explain to General Groves, his military overseer, the hazards of their endeavor. Groves asks Oppenheimer, "Are you saying there's a chance that when we push that button, we destroy the world?" The physicist says, "The chances are near zero." When Groves, understandably alarmed, asks for clarification, Oppenheimer responds, "What do you want from theory alone?"
ABSTRACT
A survey published in October 2023 revealed what seemed to be a paradox. Over the past decade, self-driving vehicles have improved immeasurably, but public trust in the technology is low and falling. Only 37% of Americans said they would be comfortable riding in a self-driving vehicle, down from 39% in 2022 and 41% in 2021. Those who have used the technology express more enthusiasm, but the rest have seemingly had their confidence shaken by the failure of the technology to live up to its hype.
ABSTRACT
Alan Turing introduced his 1950 paper "Computing Machinery and Intelligence" with the question "Can machines think?" But rather than engaging in what he regarded as a never-ending subjective debate about definitions of intelligence, he proposed a thought experiment. His "imitation game" offered a test in which an evaluator held conversations with a human and a computer. If the evaluator failed to tell them apart, the computer could be said to have exhibited artificial intelligence (AI). In the decades since Turing's paper, AI has gone from being a fountain of scientific hype to an academic backwater to a gold rush. Throughout, the Turing test has given computer scientists a sense of direction: a quest for what Turing called a "universal machine." Although the debate continues about whether the Turing test is a reasonable measure of artificial intelligence, the real problem is that it asks the wrong question. AI is no longer an academic debate. It is a technological reality. For society to make good decisions about AI, we should instead look to another great 20th-century computer scientist, Joseph Weizenbaum. In his 1972 Science paper "On the impact of the computer on society," Weizenbaum argued that his fellow computer scientists should try to view their activities from the standpoint of a member of the public. Whereas computer scientists wonder how to get their technology to work and use "electronic wizardry" to make it safe, Weizenbaum argued that ordinary people would ask "is it good?" and "do we need these things?" As excitement builds about the possibilities of generative AI, rather than asking whether these machines are intelligent, we should instead ask whether they are useful.
ABSTRACT
The ideal of the self-driving car replaces an error-prone human with an infallible, artificially intelligent driver. This narrative of autonomy promises liberation from the downsides of automobility, even if that means taking control away from autonomous, free-moving individuals. We look behind this narrative to understand the attachments that so-called 'autonomous' vehicles (AVs) are likely to have to the world. Drawing on 50 interviews with AV developers, researchers and other stakeholders, we explore the social and technological attachments that stakeholders see inside the vehicle, on the road and with the wider world. These range from software and hardware to the behaviours of other road users and the material, social and economic infrastructure that supports driving and self-driving. We describe how innovators understand, engage with or seek to escape from these attachments in three categories: 'brute force', which sees attachments as problems to be solved with more data; 'solve the world one place at a time', which sees attachments as limits on the technology's reach; and 'reduce the complexity of the space', which sees attachments as solutions to the problems encountered by technology developers. Understanding attachments provides a powerful way to anticipate various possible constitutions for the technology.
Subject(s)
Traffic Accidents, Automobile Driving, Autonomous Vehicles, Humans, Software, Technology
Subject(s)
Epidermis/transplantation, Health Systems Agencies/economics, Regenerative Medicine/trends, Stem Cells/physiology, Tissue Engineering/trends, Adult, Animals, Bone Marrow Transplantation/legislation & jurisprudence, Bone Marrow Transplantation/methods, Burns/surgery, Burns/therapy, Clinical Trials as Topic, Cost-Benefit Analysis/statistics & numerical data, Embryonic Stem Cells/transplantation, Gene Therapy/economics, Gene Therapy/methods, Health Services Accessibility/standards, Health Systems Agencies/legislation & jurisprudence, Hematologic Diseases/surgery, Hematologic Diseases/therapy, Humans, Induced Pluripotent Stem Cells/transplantation, Animal Models, Regenerative Medicine/economics, Regenerative Medicine/legislation & jurisprudence, Stem Cell Research/ethics, Investigational Therapies/ethics, Tissue Engineering/economics, Tissue Engineering/legislation & jurisprudence
ABSTRACT
Self-driving cars, a quintessentially 'smart' technology, are not born smart. The algorithms that control their movements are learning as the technology emerges. Self-driving cars represent a high-stakes test of the powers of machine learning, as well as a test case for social learning in technology governance. Society is learning about the technology while the technology learns about society. Understanding and governing the politics of this technology means asking 'Who is learning, what are they learning and how are they learning?' Focusing on the successes and failures of social learning around the much-publicized crash of a Tesla Model S in 2016, I argue that trajectories and rhetorics of machine learning in transport pose a substantial governance challenge. 'Self-driving' or 'autonomous' cars are misnamed. As with other technologies, they are shaped by assumptions about social needs, solvable problems, and economic opportunities. Governing these technologies in the public interest means improving social learning by constructively engaging with the contingencies of machine learning.
Subject(s)
Traffic Accidents, Machine Learning, Motor Vehicles/statistics & numerical data, Social Learning, Traffic Accidents/psychology, Humans, Technology/ethics
ABSTRACT
Geoengineering is defined as the 'deliberate and large-scale intervention in the Earth's climatic system with the aim of reducing global warming'. The technological proposals for doing this are highly speculative. Research is at an early stage, but there is a strong consensus that technologies would, if realisable, have profound and surprising ramifications. Geoengineering would seem to be an archetype of technology as social experiment, blurring lines that separate research from deployment and scientific knowledge from technological artefacts. Looking into the experimental systems of geoengineering, we can see the negotiation of what is known and unknown. The paper argues that, in renegotiating such systems, we can approach a new mode of governance: collective experimentation. This has important ramifications not just for how we imagine future geoengineering technologies, but also for how we govern geoengineering experiments currently under discussion.
Subject(s)
Engineering/ethics, Global Warming/prevention & control, Engineering/trends, Inventions/ethics, Inventions/standards
ABSTRACT
This introductory essay looks back on the two decades since the journal Public Understanding of Science was launched. Drawing on the invited commentaries in this special issue, we can see narratives of continuity and change around the practice and politics of public engagement with science. Public engagement would seem to be a necessary but insufficient part of opening up science and its governance. Those of us who have been involved in advocating, conducting and evaluating public engagement practice could be accused of over-promising. If we, as social scientists, are going to continue a normative commitment to the idea of public engagement, we should therefore develop new lines of argument and analysis. Our support for the idea of public engagement needs qualifying, as part of a broader, more ambitious interest in the idea of publicly engaged science.
Subject(s)
Community Participation, Public Opinion, Science/organization & administration, Access to Information, Humans, Inventions, Politics
Subject(s)
Global Warming, Public Opinion, Atmosphere, Biotechnology, Climate, Engineering, Humans
ABSTRACT
The need for policy makers to understand science and for scientists to understand policy processes is widely recognised. However, the science-policy relationship is sometimes difficult and occasionally dysfunctional; it is also increasingly visible, because it must deal with contentious issues, or itself becomes a matter of public controversy, or both. We suggest that identifying key unanswered questions on the relationship between science and policy will catalyse and focus research in this field. To identify these questions, a collaborative procedure was employed with 52 participants selected to cover a wide range of experience in both science and policy, including people from government, non-governmental organisations, academia and industry. These participants consulted with colleagues and submitted 239 questions. An initial round of voting was followed by a workshop in which 40 of the most important questions were identified by further discussion and voting. The resulting list includes questions about the effectiveness of science-based decision-making structures; the nature and legitimacy of expertise; the consequences of changes such as increasing transparency; choices among different sources of evidence; the implications of new means of characterising and representing uncertainties; and ways in which policy and political processes affect what counts as authoritative evidence. We expect this exercise to identify important theoretical questions and to help improve the mutual understanding and effectiveness of those working at the interface of science and policy.
Subject(s)
Interdisciplinary Communication, Public Policy/trends, Research Design, Organizational Decision Making, England
ABSTRACT
UK scientific advice on the possible health risks of mobile phones has embraced (or seems to be embracing) broader engagement with interested non-experts. This paper explains the context of lost credibility that made such a development necessary, and the implications of greater engagement for the construction (and expert control) of "public concern." I narrate how scientific advice matured from an approach based on compliance with guidelines to a style of "public science" in which issues such as trust and democracy were intertwined with scientific risk assessment. This paper develops existing conceptions of the "public understanding of science" with an explanation based around the co-production of scientific and social order. Using a narrative drawn from a series of in-depth interviews with scientists and policymakers, I explain how expert reformulation of the state of scientific uncertainty within a public controversy reveals constructions of "The Public," and the desired extent of their engagement. Constructions of the public changed at the same time as a construction of uncertainty as solely an expert concern was molded into a state of politically workable public uncertainty. This paper demonstrates how publics can be constructed as instruments of credible policymaking, and suggests the potential for public alienation if non-experts feel they have not been fairly represented.