1 - 20 of 81
1.
Cell Genom ; 4(6): 100564, 2024 Jun 12.
Article En | MEDLINE | ID: mdl-38795704

Here, we examine the challenges posed by laws in the United States and China for generative-AI-assisted genomic research collaboration. We recommend renewing the Agreement on Cooperation in Science and Technology to promote responsible principles for sharing human genomic data and to enhance transparency in research.


Artificial Intelligence , Genomics , China , Humans , Genomics/legislation & jurisprudence , United States , Artificial Intelligence/legislation & jurisprudence
3.
Int J Law Psychiatry ; 94: 101985, 2024.
Article En | MEDLINE | ID: mdl-38579525

People with impaired decision-making capacity enjoy the same rights to access technology as people with full capacity. Our paper looks at realising this right in the specific contexts of artificial intelligence (AI) and mental capacity legislation. Ireland's Assisted Decision-Making (Capacity) Act, 2015 commenced in April 2023 and refers to 'assistive technology' within its 'communication' criterion for capacity. We explore the potential benefits and risks of AI in assisting communication under this legislation and seek to identify principles or lessons which might be applicable in other jurisdictions. We focus especially on Ireland's provisions for advance healthcare directives because previous research demonstrates that common barriers to advance care planning include (i) lack of knowledge and skills, (ii) fear of starting conversations about advance care planning, and (iii) lack of time. We hypothesise that these barriers might be overcome, at least in part, by using generative AI, which is already freely available worldwide. Bodies such as the United Nations have produced guidance on the ethical use of AI, and this guidance informs our analysis. One of the ethical risks in the current context is that AI would reach beyond communication and start to influence the content of decisions, especially among people with impaired decision-making capacity. For example, when we asked one AI model to 'Make me an advance healthcare directive', its initial response did not explicitly suggest content for the directive, but it did suggest topics that might be included, which could be seen as setting an agenda. One possibility for circumventing this and other shortcomings, such as concerns about the accuracy of information, is to look to foundation models of AI. Because they can be trained and fine-tuned for downstream tasks, purpose-designed AI models could be adapted to provide education about capacity legislation, facilitate patient and staff interaction, and allow interactive updates by healthcare professionals. These measures could optimise the benefits of AI and minimise its risks. Similar efforts have been made to use AI more responsibly in healthcare by training large language models to answer healthcare questions more safely and accurately. We highlight the need for open discussion about optimising the potential of AI while minimising risks in this population.


Artificial Intelligence , Mental Competency , Humans , Artificial Intelligence/legislation & jurisprudence , Mental Competency/legislation & jurisprudence , Ireland , Decision Making , Advance Directives/legislation & jurisprudence
4.
Eur J Radiol ; 175: 111462, 2024 Jun.
Article En | MEDLINE | ID: mdl-38608500

The integration of AI in radiology raises significant legal questions about responsibility for errors. Radiologists fear AI may introduce new legal challenges, despite its potential to enhance diagnostic accuracy. AI tools, even those cleared by the FDA or bearing the CE mark, are not perfect, posing a risk of failure. The key issue is how AI is implemented: as a stand-alone diagnostic tool or as an aid to radiologists. The latter approach could reduce undesired side effects. However, it is unclear who should be held liable for AI failures, with potential candidates ranging from the engineers and radiologists involved in AI development to the companies and department heads who integrate these tools into clinical practice. The EU's AI Act, recognizing AI's risks, categorizes applications by risk level, with many radiology-related AI tools considered high risk. Legal precedents concerning autonomous vehicles offer some guidance on assigning responsibility. Yet the existing legal challenges in radiology, such as diagnostic errors, persist. AI's potential to improve diagnostics raises questions about the legal implications of not using available AI tools. For instance, an AI tool improving the detection of pediatric fractures could reduce legal risks. This situation parallels innovations like car turn signals, where ignoring available safety enhancements could lead to legal problems. The debate underscores the need for further research and regulation to clarify AI's role in radiology, balancing innovation with legal and ethical considerations.


Artificial Intelligence , Liability, Legal , Radiology , Humans , Radiology/legislation & jurisprudence , Radiology/ethics , Artificial Intelligence/legislation & jurisprudence , Diagnostic Errors/legislation & jurisprudence , Diagnostic Errors/prevention & control , Radiologists/legislation & jurisprudence
7.
Australas Psychiatry ; 32(3): 214-219, 2024 Jun.
Article En | MEDLINE | ID: mdl-38545872

OBJECTIVE: This article explores the transformative impact of OpenAI and ChatGPT on Australian medical practitioners, particularly psychiatrists in the private practice setting. It delves into the benefits and limitations of integrating ChatGPT into medical practice, summarising current policies and scrutinising medicolegal implications. CONCLUSION: A careful assessment is imperative to determine whether the benefits of AI integration outweigh the associated risks. Practitioners are urged to review AI-generated content to ensure its accuracy, recognising that liability likely resides with them rather than with AI platforms, despite the current lack of Australian case law specific to negligence and AI. It is important to employ measures that ensure patient confidentiality is not breached, and practitioners are encouraged to seek counsel from their professional indemnity insurer. There is considerable potential for future development of specialised AI software tailored specifically to the medical profession, making the use of AI more suitable for the medical field within the Australian legal landscape. Moving forward, it is essential to embrace technology and actively address its challenges rather than dismiss AI integration into medical practice. It is becoming increasingly essential that the psychiatric community, the medical community at large, and policymakers develop comprehensive guidelines to fill existing policy gaps and adapt to the evolving landscape of AI technologies in healthcare.


Private Practice , Psychiatry , Humans , Australia , Psychiatry/legislation & jurisprudence , Psychiatry/standards , Private Practice/legislation & jurisprudence , Private Practice/organization & administration , Artificial Intelligence/legislation & jurisprudence , Confidentiality/legislation & jurisprudence , Confidentiality/standards
8.
JAMA ; 331(11): 909-910, 2024 03 19.
Article En | MEDLINE | ID: mdl-38373004

This Viewpoint summarizes a recent lawsuit alleging that a hospital violated patients' privacy by sharing electronic health record (EHR) data with Google for development of medical artificial intelligence (AI) and discusses how the federal court's decision in the case provides key insights for hospitals planning to share EHR data with for-profit companies developing medical AI.


Artificial Intelligence , Confidentiality , Delivery of Health Care , Search Engine , Humans , Artificial Intelligence/legislation & jurisprudence , Confidentiality/legislation & jurisprudence , Delivery of Health Care/legislation & jurisprudence , Delivery of Health Care/methods , Electronic Health Records/legislation & jurisprudence , Privacy/legislation & jurisprudence , Search Engine/legislation & jurisprudence
13.
JAMA ; 331(3): 185-187, 2024 01 16.
Article En | MEDLINE | ID: mdl-38117529

In this Medical News article, JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, and Alondra Nelson, PhD, the Harold F. Linder Professor at the Institute for Advanced Study, discuss effective AI regulation frameworks to accommodate innovation.


Artificial Intelligence , Biomedical Research , Health Policy , Inventions , Legislation, Medical , Education, Medical, Graduate , Medicine , Artificial Intelligence/legislation & jurisprudence , Health Policy/legislation & jurisprudence , Inventions/legislation & jurisprudence , Biomedical Research/legislation & jurisprudence
14.
Rev. derecho genoma hum ; (59): 129-148, Jul-Dec 2023.
Article Es | IBECS | ID: ibc-232451

The issue of bias presents a significant challenge for AI systems. These biases arise not only from existing data but are also introduced by the individuals using the systems, who are inherently biased, like all humans. This is a concerning reality because algorithms have the ability to significantly influence a doctor’s diagnosis. Recent analyses indicate that this phenomenon can occur even in situations where doctors are no longer receiving guidance from the system, which implies not only an inability to perceive bias but also a propensity to propagate it. The potential consequences include a self-perpetuating cycle capable of inflicting significant harm on individuals, especially when artificial intelligence (AI) systems are employed in sensitive contexts such as healthcare. In response, legal frameworks have devised governance mechanisms that, at first glance, seem sufficient, especially in the European Union. Recently adopted regulations on data, and those now addressing AI, serve as prime illustrations of how adequate supervision of AI systems might be achieved. In practice, however, many of these mechanisms are likely to prove ineffective at identifying biases that emerge after these systems enter the market. It is important to consider that, at that juncture, multiple agents may be involved, to whom responsibility has predominantly been delegated. Hence, it is imperative to insist that AI developers implement strict measures to control the biases inherent in their systems. If developers cannot detect these biases, it will be a significant challenge for others to do so, at least until their presence becomes highly noticeable. Otherwise, the long-term repercussions may be borne collectively. (AU)


Humans , Artificial Intelligence/ethics , Artificial Intelligence/legislation & jurisprudence , Artificial Intelligence/standards , Bias
18.
J Med Internet Res ; 25: e49989, 2023 09 11.
Article En | MEDLINE | ID: mdl-37695650

Health care is undergoing a profound transformation through the integration of artificial intelligence (AI). However, the rapid integration and expansive growth of AI within health care systems present ethical and legal challenges that warrant careful consideration. In this viewpoint, the author argues that the health care domain, due to its complexity, requires specialized approaches to regulating AI. Precise regulation can provide clear guidelines for addressing these challenges, thereby ensuring ethical and legal AI implementations.


Artificial Intelligence , Delivery of Health Care , Humans , Artificial Intelligence/legislation & jurisprudence
19.
Eur J Health Law ; 30(4): 406-427, 2023 02 07.
Article En | MEDLINE | ID: mdl-37582525

The AI Act is grounded in, and at the same time aims to protect, fundamental rights, requiring their protection while fulfilling the safety requirements it prescribes throughout the whole lifecycle of AI systems. Based on a risk classification, the AI Act provides a set of requirements that each risk class must meet in order for AI to be legitimately offered on the EU market and considered safe. However, despite their classification, some minimal-risk AI systems may still pose risks to fundamental rights and user safety, and therefore require attention. In this paper we explore the assumption that, although the AI Act can find broad ex litteris coverage, the significance of this applicability is limited.


Artificial Intelligence , Medicine , Artificial Intelligence/legislation & jurisprudence
...