Artificial intelligence and mental capacity legislation: Opening Pandora's modem.
Redahan, Maria; Kelly, Brendan D.
Affiliation
  • Redahan M; Department of Psychiatry, St Vincent's University Hospital, Elm Park, Dublin 4 D04 T6F4, Ireland; Department of Psychiatry, Trinity College Dublin, Trinity Centre for Health Sciences, Tallaght University Hospital, Tallaght, Dublin 24 D24 NR0A, Ireland. Electronic address: redaham@tcd.ie.
  • Kelly BD; Department of Psychiatry, Trinity College Dublin, Trinity Centre for Health Sciences, Tallaght University Hospital, Tallaght, Dublin 24 D24 NR0A, Ireland.
Int J Law Psychiatry; 94: 101985, 2024.
Article in English | MEDLINE | ID: mdl-38579525
ABSTRACT
People with impaired decision-making capacity enjoy the same rights to access technology as people with full capacity. Our paper looks at realising this right in the specific contexts of artificial intelligence (AI) and mental capacity legislation. Ireland's Assisted Decision-Making (Capacity) Act, 2015 commenced in April 2023 and refers to 'assistive technology' within its 'communication' criterion for capacity. We explore the potential benefits and risks of AI in assisting communication under this legislation and seek to identify principles or lessons which might be applicable in other jurisdictions. We focus especially on Ireland's provisions for advance healthcare directives because previous research demonstrates that common barriers to advance care planning include (i) lack of knowledge and skills, (ii) fear of starting conversations about advance care planning, and (iii) lack of time. We hypothesise that these barriers might be overcome, at least in part, by using generative AI which is already freely available worldwide. Bodies such as the United Nations have produced guidance about ethical use of AI and these guide our analysis. One of the ethical risks in the current context is that AI would reach beyond communication and start to influence the content of decisions, especially among people with impaired decision-making capacity. For example, when we asked one AI model to 'Make me an advance healthcare directive', its initial response did not explicitly suggest content for the directive, but it did suggest topics that might be included, which could be seen as setting an agenda. One possibility for circumventing this and other shortcomings, such as concerns around accuracy of information, is to look to foundational models of AI. With their capabilities to be trained and fine-tuned to downstream tasks, purpose-designed AI models could be adapted to provide education about capacity legislation, facilitate patient and staff interaction, and allow interactive updates by healthcare professionals. These measures could optimise the benefits of AI and minimise risks. Similar efforts have been made to use AI more responsibly in healthcare by training large language models to answer healthcare questions more safely and accurately. We highlight the need for open discussion about optimising the potential of AI while minimising risks in this population.

Full text: 1 Collection: 01-internacional Database: MEDLINE Health context: 1_ASSA2030 Health problem: 1_recursos_humanos_saude Main subject: Artificial Intelligence / Mental Competency Limit: Humans Country/Region as subject: Europe Language: English Journal: Int J Law Psychiatry Year: 2024 Document type: Article
