1.
Asian Bioeth Rev ; 16(3): 303-305, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39022382
2.
Asian Bioeth Rev ; 16(3): 345-372, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39022378

ABSTRACT

With a focus on the development and use of artificial intelligence (AI) systems in the digital health context, we consider the following questions: How does the European Union (EU) seek to facilitate the development and uptake of trustworthy AI systems through the AI Act? What do trustworthiness and trust mean in the AI Act, and how are they linked to some of the ongoing discussions of these terms in bioethics, law, and philosophy? What are the normative components of trustworthiness? And how do the requirements of the AI Act relate to these components? We first explain how the EU seeks to create an epistemic environment of trust through the AI Act to facilitate the development and uptake of trustworthy AI systems. The legislation establishes a governance regime that operates as a socio-epistemological infrastructure of trust that enables a performative framing of trust and trustworthiness. The degree of success that performative acts of trust and trustworthiness have achieved in realising the legislative goals may then be assessed in terms of statutorily defined proxies of trustworthiness. We show that, to be trustworthy, these performative acts should be consistent with the ethical principles endorsed by the legislation; these principles are also manifested in at least four key features of the governance regime. However, the specified proxies of trustworthiness are not expected to be adequate for applications of AI systems within a regulatory sandbox or in real-world testing. We explain why different proxies of trustworthiness for these applications may be regarded as 'special' trust domains and why the nature of trust should be understood as participatory.

3.
Semin Nephrol ; 41(3): 282-293, 2021 05.
Article in English | MEDLINE | ID: mdl-34330368

ABSTRACT

Digitalization in nephrology has progressed in a manner that is disparate and siloed, even though learning (under a broader Learning Health System initiative) has been manifested in all the main areas of clinical application. Most applications based on artificial intelligence/machine learning (AI/ML) are still in the initial developmental stages and are yet to be adequately validated and shown to contribute to positive patient outcomes. There is also no consistent or comprehensive digitalization plan, and insufficient data are a limiting factor across all of these areas. In this article, we first consider how digitalization along nephrology care pathways relates to the Learning Health System initiative. We then consider the current state of AI/ML-based software and devices in nephrology and the ethical and regulatory challenges in scaling them up toward broader clinical application. We conclude with our proposal to establish a dedicated ethics and governance framework that is centered around health care providers in nephrology and the AI/ML-based software to which their work relates. This framework should help to integrate ethical and regulatory values and considerations, involve a wide range of stakeholders, and apply across normative domains that are conventionally demarcated as clinical, research, and public health.


Subject(s)
Artificial Intelligence, Nephrology, Humans, Public Health
4.
J Bioeth Inq ; 17(4): 657-661, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33169256

ABSTRACT

Following the outbreak of what would become the COVID-19 pandemic, social distancing measures were quickly introduced across East Asia, including drastic shelter-in-place orders in some cities, drawing on experience with the outbreak of severe acute respiratory syndrome (SARS) almost two decades ago. "Smart City" technologies and other digital tools were rapidly deployed for infection control purposes, ranging from conventional thermal scanning cameras to digital tracing in the surveillance of at-risk individuals. Chatbots endowed with artificial intelligence have also been deployed to shift part of healthcare provision away from hospitals and to support a number of programmes for self-management of chronic disease in the community. With the closure of schools and adults working from home, digital technologies have also sustained many aspects of both professional and social life at a pace and scale not considered to be practicable before the outbreak. This paper considers how these new experiences with digital technologies in public health surveillance are spurring digitalization in East Asian societies beyond the conventional public health context. It also considers some of the concerns and challenges that are likely to arise with rapid digitalization, particularly in healthcare.


Subject(s)
COVID-19/epidemiology, COVID-19/prevention & control, Communicable Disease Control/instrumentation, Public Health Practice, Artificial Intelligence, Eastern Asia/epidemiology, Humans, Pandemics, Population Surveillance, SARS-CoV-2
5.
Bull World Health Organ ; 98(4): 263-269, 2020 Apr 01.
Article in English | MEDLINE | ID: mdl-32284650

ABSTRACT

Technological advances in big data (large amounts of highly varied data from many different sources that may be processed rapidly), data sciences and artificial intelligence can improve health-system functions and promote personalized care and public good. However, these technologies will not replace the fundamental components of the health system, such as ethical leadership and governance, or avoid the need for a robust ethical and regulatory environment. In this paper, we discuss what a robust ethical and regulatory environment might look like for big data analytics in health insurance, and describe examples of safeguards and participatory mechanisms that should be established. First, a clear and effective data governance framework is critical. Legal standards need to be enacted and insurers should be encouraged and given incentives to adopt a human-centred approach in the design and use of big data analytics and artificial intelligence. Second, a clear and accountable process is necessary to explain what information can be used and how it can be used. Third, people whose data may be used should be empowered through their active involvement in determining how their personal data may be managed and governed. Fourth, insurers and governance bodies, including regulators and policy-makers, need to work together to ensure that the big data analytics based on artificial intelligence that are developed are transparent and accurate. Unless an enabling ethical environment is in place, the use of such analytics will likely contribute to the proliferation of unconnected data systems, worsen existing inequalities, and erode trustworthiness and trust.


Subject(s)
Artificial Intelligence, Big Data, Health Insurance, Trust, Artificial Intelligence/ethics, Data Science
7.
Asian Bioeth Rev ; 11(1): 1-3, 2019 Mar.
Article in English | MEDLINE | ID: mdl-33717296