Results 1 - 11 of 11
1.
Circulation ; 149(6): e296-e311, 2024 02 06.
Article in English | MEDLINE | ID: mdl-38193315

ABSTRACT

Multiple applications for machine learning and artificial intelligence (AI) in cardiovascular imaging are being proposed and developed. However, the processes involved in implementing AI in cardiovascular imaging are highly diverse, varying by imaging modality, patient subtype, features to be extracted and analyzed, and clinical application. This article establishes a framework that defines value from an organizational perspective, followed by value chain analysis to identify the activities in which AI might produce the greatest incremental value creation. The various perspectives that should be considered are highlighted, including clinicians, imagers, hospitals, patients, and payers. Integrating the perspectives of all health care stakeholders is critical for creating value and ensuring the successful deployment of AI tools in a real-world setting. Different AI tools are summarized, along with the unique aspects of AI applications to various cardiac imaging modalities, including cardiac computed tomography, magnetic resonance imaging, and positron emission tomography. AI is applicable and has the potential to add value to cardiovascular imaging at every step along the patient journey, from selecting the most appropriate test to optimizing image acquisition and analysis, interpreting the results for classification and diagnosis, and predicting the risk for major adverse cardiac events.


Subject(s)
American Heart Association, Artificial Intelligence, Humans, Machine Learning, Heart, Magnetic Resonance Imaging
2.
Radiology ; 310(2): e232030, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38411520

ABSTRACT

According to the World Health Organization, climate change is the single biggest health threat facing humanity. The global health care system, including medical imaging, must manage the health effects of climate change while at the same time addressing the large amount of greenhouse gas (GHG) emissions generated in the delivery of care. Data centers and computational efforts are increasingly large contributors to GHG emissions in radiology. This is due to the explosive increase in big data and artificial intelligence (AI) applications that have resulted in large energy requirements for developing and deploying AI models. However, AI also has the potential to improve environmental sustainability in medical imaging. For example, use of AI can shorten MRI scan times through accelerated acquisition, improve the scheduling efficiency of scanners, and optimize the use of decision-support tools to reduce low-value imaging. The purpose of this Radiology in Focus article is to discuss this duality at the intersection of environmental sustainability and AI in radiology. Further discussed are strategies and opportunities to decrease AI-related emissions and to leverage AI to improve sustainability in radiology, with a focus on health equity. Co-benefits of these strategies are explored, including lower cost and improved patient outcomes. Finally, knowledge gaps and areas for future research are highlighted.
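The energy-to-emissions relationship behind these concerns can be sketched with a back-of-envelope calculation. All figures below are illustrative assumptions, not values from the article: training emissions are approximated as GPU power draw × run time × data-centre overhead (PUE) × grid carbon intensity.

```python
# Back-of-envelope estimate of training-run emissions (illustrative only).
# Every parameter here is an assumed figure, not a value from the article.
def training_co2_kg(gpu_count: int, gpu_kw: float, hours: float,
                    pue: float = 1.5, kg_co2_per_kwh: float = 0.4) -> float:
    """kg CO2e ~= energy drawn (kWh, incl. facility overhead) x grid intensity."""
    energy_kwh = gpu_count * gpu_kw * hours * pue
    return energy_kwh * kg_co2_per_kwh

# e.g. 8 GPUs at an assumed 0.3 kW each, running for 100 hours:
print(training_co2_kg(8, 0.3, 100))  # -> 144.0 kg CO2e
```

Such estimates are sensitive to the assumed grid carbon intensity, which is one reason where and when a model is trained can matter as much as how long.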


Subject(s)
Artificial Intelligence, Radiology, Humans, Radiography, Big Data, Climate Change
3.
BMC Med Ethics ; 25(1): 46, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38637857

ABSTRACT

BACKGROUND: The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice. In this paper we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022. METHODS: The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, research ethics committee members and other actors to engage with challenges and opportunities specifically related to research ethics. In 2022 the focus of the GFBR was "Ethics of AI in Global Health Research". The forum consisted of 6 case study presentations, 16 governance presentations, and a series of small group and large group discussions. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. In this paper, we highlight central insights arising from GFBR 2022. RESULTS: We describe the significance of four thematic insights arising from the forum: (1) Appropriateness of building AI, (2) Transferability of AI systems, (3) Accountability for AI decision-making and outcomes, and (4) Individual consent. We then describe eight recommendations for governance leaders to enhance the ethical governance of AI in global health research, addressing issues such as AI impact assessments, environmental values, and fair partnerships. 
CONCLUSIONS: The 2022 Global Forum on Bioethics in Research illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.


Subject(s)
Artificial Intelligence, Bioethics, Humans, Global Health, South Africa, Research Ethics
4.
Can Assoc Radiol J ; 75(3): 558-567, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38445497

ABSTRACT

Artificial intelligence (AI) is rapidly evolving and has transformative potential for interventional radiology (IR) clinical practice. However, formal training in AI may be limited for many clinicians and therefore presents a challenge for initial implementation and trust in AI. An understanding of the foundational concepts in AI may help familiarize the interventional radiologist with the field of AI, thus facilitating understanding and participation in the development and deployment of AI. A pragmatic classification system of AI based on the complexity of the model may guide clinicians in the assessment of AI. Finally, the current state of AI in IR and the patterns of implementation are explored (pre-procedural, intra-procedural, and post-procedural).


Subject(s)
Artificial Intelligence, Interventional Radiology, Humans, Interventional Radiology/education, Interventional Radiology/methods
5.
Can Assoc Radiol J ; 75(3): 568-574, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38445517

ABSTRACT

The introduction of artificial intelligence (AI) in interventional radiology (IR) will bring about new challenges and opportunities for patients and clinicians. AI may comprise software as a medical device or AI-integrated hardware and will require a rigorous evaluation that should be guided by the level of risk of the implementation. A hierarchy of risk of harm and possible harms are described herein. A checklist to guide deployment of AI in a clinical IR environment is provided. As AI continues to evolve, regulation and evaluation of AI medical devices will need to continue to evolve to keep pace and ensure patient safety.


Subject(s)
Artificial Intelligence, Interventional Radiology, Humans, Interventional Radiology/methods, Patient Safety, Interventional Radiography/methods, Checklist
6.
PLOS Digit Health ; 3(2): e0000297, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38408043

ABSTRACT

Radiology-specific clinical decision support systems (CDSS) and artificial intelligence are poorly integrated into the radiologist workflow. Current research and development efforts for radiology CDSS focus on four main interventions, organized around exam-centric time points: after image acquisition, intra-report support, post-report analysis, and radiology-workflow adjacent. We review the literature surrounding CDSS tools at these time points, requirements for CDSS workflow augmentation, and technologies that support clinician-to-computer workflow augmentation. We develop a theory of radiologist-decision tool interaction using a sequential explanatory study design. The study consists of 2 phases, the first a quantitative survey and the second a qualitative interview study. The phase 1 survey identifies differences between average users and radiologist users in software interventions using the User Acceptance of Information Technology: Toward a Unified View (UTAUT) framework. Phase 2 semi-structured interviews provide narratives on why these differences are found. To build this theory, we propose a novel solution called Radibot: a conversational agent capable of engaging clinicians with CDSS as an assistant using existing instant messaging systems supporting hospital communications. This work contributes an understanding of how radiologist users differ from the average user and can be utilized by software developers to increase satisfaction with CDSS tools within radiology.

7.
NPJ Digit Med ; 7(1): 191, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39014060

ABSTRACT

Increasing evidence supports reduced accuracy of noninvasive assessment tools, such as pulse oximetry, temperature probes, and AI skin diagnosis benchmarks, in patients with darker skin tones. The FDA is exploring potential strategies for device regulation to improve performance across diverse skin tones by including skin tone criteria. However, there is no consensus about how prospective studies should perform skin tone assessment in order to take this bias into account. There are several tools available to conduct skin tone assessments including administered visual scales (e.g., Fitzpatrick Skin Type, Pantone, Monk Skin Tone) and color measurement tools (e.g., reflectance colorimeters, reflectance spectrophotometers, cameras), although none are consistently used or validated across multiple medical domains. Accurate and consistent skin tone measurement depends on many factors including standardized environments, lighting, body parts assessed, patient conditions, and choice of skin tone assessment tool(s). As race and ethnicity are inadequate proxies for skin tone, these considerations can be helpful in standardizing the effect of skin tone on studies such as AI dermatology diagnoses, pulse oximetry, and temporal thermometers. Skin tone bias in medical devices is likely due to systemic factors that lead to inadequate validation across diverse skin tones. There is an opportunity for researchers to use skin tone assessment methods with standardized considerations in prospective studies of noninvasive tools that may be affected by skin tone. We propose considerations that researchers must take in order to improve device robustness to skin tone bias.

8.
EBioMedicine ; 102: 105047, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38471396

ABSTRACT

BACKGROUND: It has been shown that AI models can learn race from medical images, leading to algorithmic bias. Our aim in this study was to enhance the fairness of medical image models by eliminating bias related to race, age, and sex. We hypothesise that models may be learning demographics via shortcut learning, and we combat this using image augmentation. METHODS: This study included 44,953 patients who identified as Asian, Black, or White (mean age, 60.68 years ±18.21; 23,499 women), for a total of 194,359 chest X-rays (CXRs) from the MIMIC-CXR database. CheXpert images from 45,095 patients (mean age, 63.10 years ±18.14; 20,437 women), for a total of 134,300 CXRs, were used for external validation. We also collected 1195 3D brain magnetic resonance imaging (MRI) scans from the ADNI database, covering 273 participants (mean age, 76.97 years ±14.22; 142 women). DL models were trained on either non-augmented or augmented images and assessed using disparity metrics. The features learned by the models were analysed using task transfer experiments and model visualisation techniques. FINDINGS: In the detection of radiological findings, training a model using augmented CXR images was shown to reduce disparities in error rate among racial groups (-5.45%), age groups (-13.94%), and sexes (-22.22%). For AD detection, the model trained with augmented MRI images showed 53.11% and 31.01% reductions of disparities in error rate among age and sex groups, respectively. Image augmentation led to a reduction in the model's ability to identify demographic attributes and resulted in the model trained for clinical purposes incorporating fewer demographic features. INTERPRETATION: The model trained using the augmented images was less likely to be influenced by demographic information in detecting image labels.
These results demonstrate that the proposed augmentation scheme could enhance the fairness of interpretations by DL models when dealing with data from patients with different demographic backgrounds. FUNDING: National Science and Technology Council (Taiwan), National Institutes of Health.
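A disparity metric of the kind described above can be sketched as follows. This is a minimal illustration, not the study's actual code, and defining disparity as the gap between the worst and best per-group error rates is an assumption:

```python
# Illustrative disparity metric: the gap between the highest and lowest
# per-group misclassification rates (0 means uniform performance).
def group_error_rates(y_true, y_pred, groups):
    """Per-group misclassification rate."""
    stats = {}  # group -> (n_examples, n_errors)
    for t, p, g in zip(y_true, y_pred, groups):
        n, e = stats.get(g, (0, 0))
        stats[g] = (n + 1, e + (t != p))
    return {g: e / n for g, (n, e) in stats.items()}

def error_rate_disparity(y_true, y_pred, groups):
    """Max minus min group error rate."""
    rates = group_error_rates(y_true, y_pred, groups)
    return max(rates.values()) - min(rates.values())

# Toy example: group A has one error in two cases, group B has none.
rates = group_error_rates([1, 0, 1, 0], [1, 1, 1, 0], ["A", "A", "B", "B"])
print(rates)  # -> {'A': 0.5, 'B': 0.0}
```

Under this definition, a reduction in disparity means the per-group error rates moved closer together, which is consistent with how the reported percentage reductions are framed.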


Subject(s)
Benchmarking, Learning, Aged, Female, Humans, Middle Aged, Black People, Brain, Demography, United States, Asian People, White People, Male, Black or African American
9.
Can J Cardiol ; 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38885787

ABSTRACT

The potential of artificial intelligence (AI) in medicine lies in its ability to enhance clinicians' capacity to analyse medical images, thereby improving diagnostic precision and accuracy and thus enhancing current tests. However, the integration of AI within health care is fraught with difficulties. Heterogeneity among health care system applications, reliance on proprietary closed-source software, and rising cybersecurity threats pose significant challenges. Moreover, before their deployment in clinical settings, AI models must demonstrate their effectiveness across a wide range of scenarios and must be validated by prospective studies, but doing so requires testing in an environment mirroring the clinical workflow, which is difficult to achieve without dedicated software. Finally, the use of AI techniques in health care raises significant legal and ethical issues, such as the protection of patient privacy, the prevention of bias, and the monitoring of the device's safety and effectiveness for regulatory compliance. This review describes challenges to AI integration in health care and provides guidelines on how to move forward. We describe an open-source solution that we developed, called PACS-AI, that integrates AI models into the Picture Archiving and Communication System (PACS). This approach aims to increase the evaluation of AI models by facilitating their integration and validation with existing medical imaging databases. PACS-AI may overcome many current barriers to AI deployment and offer a pathway toward responsible, fair, and effective deployment of AI models in health care. In addition, we propose a list of criteria and guidelines that AI researchers should adopt when publishing a medical AI model to enhance standardisation and reproducibility.

10.
PLOS Digit Health ; 3(1): e0000417, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38236824

ABSTRACT

The study provides a comprehensive review of OpenAI's Generative Pre-trained Transformer 4 (GPT-4) technical report, with an emphasis on applications in high-risk settings like healthcare. A diverse team, including experts in artificial intelligence (AI), natural language processing, public health, law, policy, social science, healthcare research, and bioethics, analyzed the report against established peer review guidelines. The GPT-4 report shows a significant commitment to transparent AI research, particularly in creating a systems card for risk assessment and mitigation. However, it reveals limitations such as restricted access to training data, inadequate confidence and uncertainty estimations, and concerns over privacy and intellectual property rights. Key strengths identified include the considerable time and economic investment in transparent AI research and the creation of a comprehensive systems card. On the other hand, the lack of clarity in training processes and data raises concerns about encoded biases and interests in GPT-4. The report also lacks confidence and uncertainty estimations, crucial in high-risk areas like healthcare, and fails to address potential privacy and intellectual property issues. Furthermore, this study emphasizes the need for diverse, global involvement in developing and evaluating large language models (LLMs) to ensure broad societal benefits and mitigate risks. The paper presents recommendations such as improving data transparency, developing accountability frameworks, establishing confidence standards for LLM outputs in high-risk settings, and enhancing industry research review processes. It concludes that while GPT-4's report is a step towards open discussions on LLMs, more extensive interdisciplinary reviews are essential for addressing bias, harm, and risk concerns, especially in high-risk domains. 
The review aims to expand the understanding of LLMs in general and highlights the need for new forms of reflection on how LLMs are reviewed, the data required for effective evaluation, and addressing critical issues like bias and risk.
