ABSTRACT
Computer technology has long been touted as a means of increasing the effectiveness of voluntary self-exclusion schemes, especially in terms of relieving gaming venue staff of the task of manually identifying and verifying the status of new customers. This paper reports on the government-led implementation of facial recognition technology as part of an automated self-exclusion program in the city of Adelaide in South Australia, one of the first jurisdiction-wide enforcements of this controversial technology in small-venue gambling. Drawing on stakeholder interviews, site visits, and documentary analysis over a two-year period, the paper contrasts initial claims that facial recognition offered a straightforward and benign improvement to the efficiency of the city's long-running self-excluded gambler program with subsequent concerns that the new technology was associated with heightened inconsistencies, inefficiencies, and uncertainties. The paper therefore contends that, regardless of the enthusiasm of government, the tech industry, and the gaming lobby, facial recognition does not offer a ready 'technical fix' to problem gambling. The South Australian case illustrates how this technology does not appear to better address the core issues underpinning problem gambling or to substantially improve conditions for problem gamblers to refrain from gambling. It is concluded that the gambling sector needs to pay close attention to the practical outcomes arising from initial cases such as this, and to resist industry pressures for the wider replication of this technology in other jurisdictions.
ABSTRACT
Generative artificial intelligence (AI) has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the potential impacts of generative AI on (mis)information and three information-intensive domains: work, education, and healthcare. Our goal is to highlight how generative AI could worsen existing inequalities while illuminating how AI may help mitigate pervasive social problems. In the information domain, generative AI can democratize content creation and access but may dramatically expand the production and proliferation of misinformation. In the workplace, it can boost productivity and create new jobs, but the benefits will likely be distributed unevenly. In education, it offers personalized learning, but may widen the digital divide. In healthcare, it might improve diagnostics and accessibility, but could deepen pre-existing inequalities. In each section, we cover a specific topic, evaluate existing research, identify critical gaps, and recommend research directions, including explicit trade-offs that complicate the derivation of a priori hypotheses. We conclude with a section highlighting the role of policymaking to maximize generative AI's potential to reduce inequalities while mitigating its harmful effects. We discuss strengths and weaknesses of existing policy frameworks in the European Union, the United States, and the United Kingdom, observing that each fails to fully confront the socioeconomic challenges we have identified. We propose several concrete policies that could promote shared prosperity through the advancement of generative AI. This article emphasizes the need for interdisciplinary collaborations to understand and address the complex challenges of generative AI.
ABSTRACT
Anatomy educators are often at the forefront of adopting innovative and advanced technologies for teaching, such as artificial intelligence (AI). While AI offers potential new opportunities for anatomical education, hard lessons learned from the deployment of AI tools in other domains (e.g., criminal justice, healthcare, and finance) suggest that these opportunities are likely to be tempered by disadvantages for at least some learners and within certain educational contexts. From the perspectives of an anatomy educator, a public health researcher, a medical ethicist, and an educational technology expert, this article examines five tensions between the promises and the perils of integrating AI into anatomy education. These tensions highlight the ways in which AI is currently ill-suited to incorporating the uncertainties intrinsic to anatomy education in the areas of (1) human variation, (2) healthcare practice, (3) diversity and social justice, (4) student support, and (5) student learning. Practical recommendations for a considered approach to working alongside AI in the contemporary (and future) anatomy education learning environment are provided, including enhanced transparency about how AI is integrated, AI developer diversity, inclusion of uncertainty and anatomical variations within deployed AI, provisions for educator awareness of AI benefits and limitations, building in curricular "AI-free" time, and engaging AI to extend human capacities. These recommendations serve as a guiding framework for how the clinical anatomy discipline, and anatomy educators, can work alongside AI and develop a more nuanced and considered approach to the role of AI in healthcare education.