Results 1 - 20 of 48
1.
Circulation ; 149(6): e296-e311, 2024 02 06.
Article in English | MEDLINE | ID: mdl-38193315

ABSTRACT

Multiple applications for machine learning and artificial intelligence (AI) in cardiovascular imaging are being proposed and developed. However, the processes involved in implementing AI in cardiovascular imaging are highly diverse, varying by imaging modality, patient subtype, features to be extracted and analyzed, and clinical application. This article establishes a framework that defines value from an organizational perspective, followed by a value chain analysis to identify the activities in which AI might produce the greatest incremental value. The perspectives that should be considered are highlighted, including those of clinicians, imagers, hospitals, patients, and payers. Integrating the perspectives of all health care stakeholders is critical for creating value and ensuring the successful deployment of AI tools in a real-world setting. Different AI tools are summarized, along with the unique aspects of AI applications to various cardiac imaging modalities, including cardiac computed tomography, magnetic resonance imaging, and positron emission tomography. AI is applicable and has the potential to add value to cardiovascular imaging at every step along the patient journey, from selecting the most appropriate test to optimizing image acquisition and analysis, interpreting the results for classification and diagnosis, and predicting the risk for major adverse cardiac events.


Subject(s)
American Heart Association; Artificial Intelligence; Humans; Machine Learning; Heart; Magnetic Resonance Imaging
2.
Radiology ; 310(2): e232030, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38411520

ABSTRACT

According to the World Health Organization, climate change is the single biggest health threat facing humanity. The global health care system, including medical imaging, must manage the health effects of climate change while also addressing the large greenhouse gas (GHG) emissions generated in the delivery of care. Data centers and computational work are increasingly large contributors to GHG emissions in radiology, owing to the explosive growth of big data and artificial intelligence (AI) applications, which carry large energy requirements for developing and deploying AI models. However, AI also has the potential to improve environmental sustainability in medical imaging. For example, AI can shorten MRI scan times through accelerated acquisition, improve scanner scheduling efficiency, and optimize the use of decision-support tools to reduce low-value imaging. The purpose of this Radiology in Focus article is to discuss this duality at the intersection of environmental sustainability and AI in radiology. Further discussed are strategies and opportunities to decrease AI-related emissions and to leverage AI to improve sustainability in radiology, with a focus on health equity. Co-benefits of these strategies are explored, including lower cost and improved patient outcomes. Finally, knowledge gaps and areas for future research are highlighted.


Subject(s)
Artificial Intelligence; Radiology; Humans; Radiography; Big Data; Climate Change
3.
BMC Med Ethics ; 25(1): 46, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38637857

ABSTRACT

BACKGROUND: The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice. In this paper we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022. METHODS: The GFBR is an annual meeting organized by the World Health Organization and supported by the Wellcome Trust, the US National Institutes of Health, the UK Medical Research Council (MRC) and the South African MRC. The forum aims to bring together ethicists, researchers, policymakers, research ethics committee members and other actors to engage with challenges and opportunities specifically related to research ethics. In 2022 the focus of the GFBR was "Ethics of AI in Global Health Research". The forum consisted of 6 case study presentations, 16 governance presentations, and a series of small group and large group discussions. A total of 87 participants attended the forum from 31 countries around the world, representing disciplines of bioethics, AI, health policy, health professional practice, research funding, and bioinformatics. In this paper, we highlight central insights arising from GFBR 2022. RESULTS: We describe the significance of four thematic insights arising from the forum: (1) Appropriateness of building AI, (2) Transferability of AI systems, (3) Accountability for AI decision-making and outcomes, and (4) Individual consent. We then describe eight recommendations for governance leaders to enhance the ethical governance of AI in global health research, addressing issues such as AI impact assessments, environmental values, and fair partnerships. CONCLUSIONS: The 2022 Global Forum on Bioethics in Research illustrated several innovations in ethical governance of AI for global health research, as well as several areas in need of urgent attention internationally. This summary is intended to inform international and domestic efforts to strengthen research ethics and support the evolution of governance leadership to meet the demands of AI in global health research.


Subject(s)
Artificial Intelligence; Bioethics; Humans; Global Health; South Africa; Ethics, Research
4.
Can Assoc Radiol J ; : 8465371241236376, 2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38445497

ABSTRACT

Artificial intelligence (AI) is rapidly evolving and has transformative potential for interventional radiology (IR) clinical practice. However, formal training in AI may be limited for many clinicians, which presents a challenge for initial implementation of, and trust in, AI. An understanding of foundational AI concepts can familiarize the interventional radiologist with the field, facilitating participation in the development and deployment of AI. A pragmatic classification system for AI, based on model complexity, may guide clinicians in assessing AI. Finally, the current state of AI in IR and its patterns of implementation (pre-procedural, intra-procedural, and post-procedural) are explored.

5.
Can Assoc Radiol J ; : 8465371241236377, 2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38445517

ABSTRACT

The introduction of artificial intelligence (AI) in interventional radiology (IR) will bring new challenges and opportunities for patients and clinicians. AI may take the form of software as a medical device or AI-integrated hardware, and it will require rigorous evaluation guided by the level of risk of the implementation. A hierarchy of risk of harm and the possible harms are described herein. A checklist to guide deployment of AI in a clinical IR environment is provided. As AI continues to evolve, regulation and evaluation of AI medical devices will need to evolve in step to ensure patient safety.

6.
AJR Am J Roentgenol ; 221(3): 302-308, 2023 09.
Article in English | MEDLINE | ID: mdl-37095660

ABSTRACT

Artificial intelligence (AI) holds promise for helping patients access new and individualized health care pathways while increasing efficiencies for health care practitioners. Radiology has been at the forefront of this technology in medicine; many radiology practices are implementing and trialing AI-focused products. AI also holds great promise for reducing health disparities and promoting health equity. Radiology is ideally positioned to help reduce disparities given its central and critical role in patient care. The purposes of this article are to discuss the potential benefits and pitfalls of deploying AI algorithms in radiology, specifically highlighting the impact of AI on health equity; to explore ways to mitigate drivers of inequity; and to enhance pathways for creating better health care for all individuals, centering on a practical framework that helps radiologists address health equity during deployment of new tools.


Subject(s)
Health Equity; Radiology; Humans; Artificial Intelligence; Radiologists; Radiology/methods; Algorithms
7.
J Digit Imaging ; 36(1): 105-113, 2023 02.
Article in English | MEDLINE | ID: mdl-36344632

ABSTRACT

Improving detection and follow-up of recommendations made in radiology reports is a critical unmet need. The long, unstructured nature of radiology reports limits clinicians' ability to assimilate the full report and identify all the information pertinent to prioritizing critical cases. We developed an automated NLP pipeline using a transformer-based ClinicalBERT++ model, fine-tuned on 3 million radiology reports and compared against the traditional BERT model. We validated the models on internal held-out ED cases from EUH as well as external cases from Mayo Clinic. We also evaluated the models on different combinations of radiology report sections. On the internal test set of 3819 reports, the ClinicalBERT++ model achieved a 0.96 F1-score; BERT matched this performance when given the reason-for-exam and impression sections. However, ClinicalBERT++ outperformed BERT on the external test dataset of 2039 reports and achieved the highest performance for classifying critical-finding reports (0.81 precision and 0.54 recall). The ClinicalBERT++ model has been successfully applied to large-scale radiology reports from 5 different sites. An automated NLP system that can analyze free-text radiology reports, along with the reason for the exam, to identify critical radiology findings and recommendations could enable automated alerts notifying clinicians of the need for clinical follow-up. The clinical significance of our proposed model is that it could serve as an additional safeguard in clinical practice, reducing the chance that important findings reported in a radiology report are overlooked by clinicians, and it provides a way to retrospectively search large hospital databases to evaluate the documentation of critical findings.
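The pipeline described above pairs radiology report sections as input to a fine-tuned BERT-family classifier. As a rough illustration of that setup, here is a minimal Python sketch using Hugging Face transformers; the checkpoint name, the binary label head, and the reason-for-exam/impression pairing are illustrative assumptions, not the authors' released ClinicalBERT++ artifacts.

```python
# Hedged sketch of a critical-finding report classifier in the spirit of
# the ClinicalBERT++ pipeline above; checkpoint and labels are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINT = "emilyalsentzer/Bio_ClinicalBERT"  # assumed clinical-domain base model
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)

# The abstract reports best results when the reason-for-exam and impression
# sections are combined; encoding them as a sentence pair is one plausible way.
reason = "Chest pain; rule out pneumonia."
impression = "Incidental 8 mm pulmonary nodule. Recommend 3-month CT follow-up."
inputs = tokenizer(reason, impression, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    logits = model(**inputs).logits
prob_critical = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"P(critical finding) = {prob_critical:.3f}")  # head untrained here: demo only
```

In practice the classification head would first be fine-tuned on labeled reports; the pairing above simply mirrors the abstract's finding that the reason-for-exam and impression sections carry most of the signal.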


Subject(s)
Natural Language Processing; Radiology; Humans; Retrospective Studies; Radiography; Research Report
8.
J Biomed Inform ; 123: 103918, 2021 11.
Article in English | MEDLINE | ID: mdl-34560275

ABSTRACT

OBJECTIVE: With patients of increasing complexity whose data are stored in fragmented health information systems, automated and time-efficient ways of gathering important information from a patient's medical history are needed for effective clinical decision making. Using COVID-19 as a case study, we developed a query-bot information retrieval system with user feedback that allows clinicians to ask natural questions to retrieve data from patient notes. MATERIALS AND METHODS: We applied clinicalBERT, a pre-trained contextual language model, to our dataset of patient notes to obtain sentence embeddings, using K-Means clustering to reduce computation time for real-time interaction. The Rocchio algorithm was then employed to incorporate user feedback and improve retrieval performance. RESULTS: In an iterative feedback-loop experiment, the MAP for the final iteration was 0.93/0.94 versus an initial MAP of 0.66/0.52 for generic queries, and 1.00/1.00 versus 0.79/0.83 for COVID-19-specific queries, confirming that the contextual model handles ambiguity in natural-language queries and that feedback improves retrieval performance. The user-in-the-loop experiment also outperformed the automated pseudo-relevance feedback method. Moreover, the null hypothesis of identical precision between initial retrieval and relevance feedback was rejected with high statistical significance (p ≪ 0.05). Compared with Word2Vec, TF-IDF, and bioBERT models, clinicalBERT offers the best balance between response precision and user feedback. DISCUSSION: Our model works well for generic as well as COVID-19-specific queries. However, some generic queries are answered less well than others, because clustering reduces query performance and vague relations between queries and sentences are treated as non-relevant. We also tested the model on queries with the same meaning but different phrasings and demonstrated that these variations yielded similar performance after incorporating user feedback. CONCLUSION: We developed an NLP-based query-bot that handles synonyms and natural-language ambiguity in order to retrieve relevant information from the patient chart. User feedback is critical to improving model performance.
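As a sketch of the Rocchio relevance-feedback step described above, the snippet below nudges a query embedding toward sentences the user marked relevant and away from non-relevant ones, then re-ranks candidates by cosine similarity. The alpha/beta/gamma weights are conventional defaults, not values from the paper, and random vectors stand in for real clinicalBERT sentence embeddings.

```python
# Minimal Rocchio update over sentence embeddings (illustrative assumptions:
# default weights, random vectors standing in for clinicalBERT embeddings).
import numpy as np

def rocchio_update(query_vec, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query vector toward user-marked relevant embeddings and
    away from non-relevant ones."""
    q = alpha * query_vec
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return q

def rank(query_vec, sentence_vecs):
    """Rank candidate sentences by cosine similarity to the query."""
    sims = sentence_vecs @ query_vec / (
        np.linalg.norm(sentence_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    return np.argsort(-sims)

rng = np.random.default_rng(0)
sentences = rng.normal(size=(100, 768))   # 100 candidate sentence embeddings
query = rng.normal(size=768)              # initial query embedding
top5 = rank(query, sentences)[:5]
# Suppose the user marks the first two hits relevant and the last two not:
query = rocchio_update(query, sentences[top5[:2]], sentences[top5[3:]])
reranked = rank(query, sentences)[:5]     # feedback-adjusted ranking
```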


Subject(s)
COVID-19; Algorithms; Feedback; Humans; Information Storage and Retrieval; SARS-CoV-2
9.
J Digit Imaging ; 34(4): 1005-1013, 2021 08.
Article in English | MEDLINE | ID: mdl-34405297

ABSTRACT

Real-time execution of machine learning (ML) pipelines on radiology images is difficult due to limited computing resources in clinical environments, whereas running them in research clusters requires efficient data-transfer capabilities. We developed Niffler, an open-source Digital Imaging and Communications in Medicine (DICOM) framework that enables ML and processing pipelines in research clusters by efficiently retrieving images from hospital PACS and extracting metadata from the images. We deployed Niffler at our institution (Emory Healthcare, the largest healthcare network in the state of Georgia) and retrieved data from 715 scanners spanning 12 sites, up to 350 GB/day, continuously in real time as a DICOM data stream over the past 2 years. We also used Niffler to retrieve images in bulk on demand, based on user-provided filters, to facilitate several research projects. This paper presents the architecture of Niffler and three such use cases. First, we executed an IVC filter detection and segmentation pipeline on abdominal radiographs in real time, which classified 989 test images with an accuracy of 96.0%. Second, we applied the Niffler Metadata Extractor to understand the operational efficiency of individual MRI systems based on calculated metrics. We benchmarked the accuracy of the calculated exam time windows by comparing Niffler against the Clinical Data Warehouse (CDW). Niffler accurately identified the scanners' examination timeframes and idle times, whereas the CDW falsely depicted several exam overlaps due to human errors. Third, with metadata extracted from the images by Niffler, we identified scanners with misconfigured clocks and reconfigured five of them. Our evaluations highlight how Niffler enables real-time ML and processing pipelines in a research cluster.
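The metadata-extraction use case can be pictured with a small pydicom sketch like the one below. This illustrates reading selected DICOM header fields into a CSV; it is not Niffler's actual API or configuration (see the project's repository for those), and the directory path and field list are assumptions.

```python
# Illustrative DICOM metadata extraction with pydicom; NOT Niffler's API.
# The field list and paths are assumptions for the sketch.
from pathlib import Path
import csv
import pydicom

FIELDS = ["PatientID", "StudyDate", "Modality", "StationName",
          "AcquisitionTime", "SeriesDescription"]

def extract_metadata(dicom_dir: str, out_csv: str) -> None:
    """Walk a directory of DICOM files and dump selected header fields to CSV."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(FIELDS)
        for path in Path(dicom_dir).rglob("*.dcm"):
            # stop_before_pixels skips pixel data, so only headers are read
            ds = pydicom.dcmread(path, stop_before_pixels=True)
            writer.writerow([getattr(ds, field, "") for field in FIELDS])

extract_metadata("/data/incoming_dicom", "metadata.csv")
```

Header-only reads like this are what make metrics such as exam time windows and scanner clock checks cheap to compute at scale.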


Subject(s)
Radiology Information Systems; Radiology; Data Warehousing; Humans; Machine Learning; Radiography
10.
Can Assoc Radiol J ; 70(4): 329-334, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31585825

ABSTRACT

This is a condensed summary of an international multisociety statement on the ethics of artificial intelligence (AI) in radiology produced by the ACR, European Society of Radiology, RSNA, Society for Imaging Informatics in Medicine, European Society of Medical Imaging Informatics, Canadian Association of Radiologists, and American Association of Physicists in Medicine. AI has great potential to increase efficiency and accuracy throughout radiology, but it also carries inherent pitfalls and biases. Widespread use of AI-based intelligent and autonomous systems in radiology can increase the risk of systemic errors with high consequence and highlights complex ethical and societal issues. Currently, there is little experience using AI for patient care in diverse clinical settings. Extensive research is needed to understand how best to deploy AI in clinical practice. This statement highlights our consensus that the ethical use of AI in radiology should promote well-being, minimize harm, and ensure that benefits and harms are distributed among stakeholders in a just manner. We believe AI should respect human rights and freedoms, including dignity and privacy. It should be designed for maximum transparency and dependability. Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future. The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good, and should block uses of radiology data and algorithms for financial gain that lack those two attributes.


Subject(s)
Artificial Intelligence/ethics; Radiology/ethics; Canada; Consensus; Europe; Humans; Radiologists/ethics; Societies, Medical; United States
11.
J Digit Imaging ; 30(5): 602-608, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28623557

ABSTRACT

Numerous initiatives are in place to support value-based care in radiology, including decision support using appropriateness criteria, quality metrics such as radiation dose monitoring, and efforts to improve the quality of the radiology report for consumption by referring providers. These initiatives are largely data driven. Organizations can choose to purchase proprietary registry systems, pay for a software-as-a-service solution, or deploy/build their own registry systems. Traditionally, registries are created for a single purpose, such as radiation dose monitoring or tracking a specific disease, as with a diabetes registry. This results in a fragmented view of the patient and increases the overhead of maintaining such single-purpose registry systems, which require an alternative data-entry workflow and additional infrastructure to host and maintain multiple registries for different clinical needs. This complexity is magnified in the health care enterprise, where radiology systems usually run in parallel with other clinical systems because of radiologists' distinct clinical workflow. In the new era of value-based care, where data needs are growing along with demand for shorter turnaround times on data used for information and decision making, there is a critical gap: registries better adapted to the radiology workflow, with minimal overhead for setup and maintenance. We share our experience developing and implementing an open-source registry system for quality improvement and research at our academic institution, driven by our radiology workflow.


Subject(s)
Quality Improvement; Radiology Information Systems; Radiology; Registries; Workflow; Humans
13.
PLOS Digit Health ; 3(2): e0000297, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38408043

ABSTRACT

Radiology-specific clinical decision support systems (CDSS) and artificial intelligence are poorly integrated into the radiologist workflow. Current research and development efforts for radiology CDSS focus on 4 main interventions, based around exam-centric time points: after image acquisition, intra-report support, post-report analysis, and radiology-workflow adjacent. We review the literature on CDSS tools at these time points, the requirements for CDSS workflow augmentation, and the technologies that support clinician-to-computer workflow augmentation. We develop a theory of radiologist-decision tool interaction using a sequential explanatory study design. The study consists of 2 phases: a quantitative survey followed by a qualitative interview study. The phase 1 survey identifies differences between average users and radiologist users of software interventions using the User Acceptance of Information Technology: Toward a Unified View (UTAUT) framework. The phase 2 semi-structured interviews provide narratives on why these differences arise. To build this theory, we propose a novel solution called Radibot: a conversational agent capable of engaging clinicians with CDSS as an assistant through existing instant-messaging systems that support hospital communications. This work contributes an understanding of how radiologist users differ from average users, which software developers can use to increase satisfaction with CDSS tools in radiology.

14.
EBioMedicine ; 102: 105047, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38471396

ABSTRACT

BACKGROUND: It has been shown that AI models can learn race from medical images, leading to algorithmic bias. Our aim in this study was to enhance the fairness of medical image models by eliminating bias related to race, age, and sex. We hypothesise that models may be learning demographics via shortcut learning, and we combat this using image augmentation. METHODS: This study included 44,953 patients who identified as Asian, Black, or White (mean age, 60.68 years ±18.21; 23,499 women), for a total of 194,359 chest X-rays (CXRs), from the MIMIC-CXR database. The included CheXpert images, comprising 45,095 patients (mean age 63.10 years ±18.14; 20,437 women) for a total of 134,300 CXRs, were used for external validation. We also collected 1195 3D brain magnetic resonance imaging (MRI) scans from the ADNI database, covering 273 participants (mean age 76.97 years ±14.22; 142 women). DL models were trained on either non-augmented or augmented images and assessed using disparity metrics. The features learned by the models were analysed using task-transfer experiments and model-visualisation techniques. FINDINGS: In the detection of radiological findings, training a model on augmented CXR images reduced disparities in error rate among racial groups (-5.45%), age groups (-13.94%), and sexes (-22.22%). For AD detection, the model trained with augmented MRI images showed 53.11% and 31.01% reductions in error-rate disparities among age and sex groups, respectively. Image augmentation reduced the models' ability to identify demographic attributes, so that models trained for clinical purposes incorporated fewer demographic features. INTERPRETATION: The model trained on the augmented images was less likely to be influenced by demographic information when detecting image labels. These results demonstrate that the proposed augmentation scheme could enhance the fairness of interpretations by DL models when dealing with data from patients of different demographic backgrounds. FUNDING: National Science and Technology Council (Taiwan), National Institutes of Health.
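The disparity metrics in this study compare per-group error rates between models trained on augmented versus non-augmented images. A minimal sketch of that idea, assuming a max-minus-min gap across groups as the disparity measure (the paper's exact formulation may differ):

```python
# Sketch of an error-rate disparity metric across demographic groups;
# the max-minus-min gap is one common formulation, assumed here.
import numpy as np

def error_rate_disparity(y_true, y_pred, groups):
    """Return per-group error rates and the max-min gap across groups."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[str(g)] = float(np.mean(y_true[mask] != y_pred[mask]))
    return rates, max(rates.values()) - min(rates.values())

# Toy usage with three self-reported race groups:
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["Asian", "Asian", "Black", "Black", "Black", "White", "White", "White"]
per_group, gap = error_rate_disparity(y_true, y_pred, groups)
print(per_group, f"disparity gap = {gap:.2f}")
```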


Subject(s)
Benchmarking; Learning; Aged; Female; Humans; Middle Aged; Black People; Brain; Demography; United States; Asian People; White People; Male
15.
Can J Cardiol ; 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38885787

ABSTRACT

The potential of artificial intelligence (AI) in medicine lies in its ability to enhance clinicians' capacity to analyze medical images, thereby improving diagnostic precision and accuracy and enhancing current tests. However, integrating AI within healthcare is fraught with difficulties. Heterogeneity among healthcare system applications, reliance on proprietary closed-source software, and rising cyber-security threats pose significant challenges. Moreover, before deployment in clinical settings, AI models must demonstrate their effectiveness across a wide range of scenarios and be validated by prospective studies; doing so requires testing in an environment that mirrors the clinical workflow, which is difficult to achieve without dedicated software. Finally, the use of AI techniques in healthcare raises significant legal and ethical issues, such as protecting patient privacy, preventing bias, and monitoring device safety and effectiveness for regulatory compliance. This review describes the challenges to AI integration in healthcare and provides guidelines on how to move forward. We describe an open-source solution that we developed, called PACS-AI, which integrates AI models into the picture archiving and communication system (PACS). This approach aims to improve the evaluation of AI models by facilitating their integration and validation with existing medical imaging databases. PACS-AI may overcome many current barriers to AI deployment and offers a pathway toward the responsible, fair, and effective deployment of AI models in healthcare. Additionally, we propose a list of criteria and guidelines that AI researchers should adopt when publishing a medical AI model, to enhance standardization and reproducibility.

16.
PLOS Digit Health ; 3(1): e0000417, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38236824

ABSTRACT

The study provides a comprehensive review of OpenAI's Generative Pre-trained Transformer 4 (GPT-4) technical report, with an emphasis on applications in high-risk settings like healthcare. A diverse team, including experts in artificial intelligence (AI), natural language processing, public health, law, policy, social science, healthcare research, and bioethics, analyzed the report against established peer-review guidelines. The GPT-4 report shows a significant commitment to transparent AI research, particularly in creating a systems card for risk assessment and mitigation. However, it reveals limitations such as restricted access to training data, inadequate confidence and uncertainty estimations, and concerns over privacy and intellectual property rights. Key strengths identified include the considerable time and economic investment in transparent AI research and the creation of a comprehensive systems card. On the other hand, the lack of clarity about training processes and data raises concerns about encoded biases and interests in GPT-4. The report also lacks confidence and uncertainty estimations, which are crucial in high-risk areas like healthcare, and fails to address potential privacy and intellectual property issues. Furthermore, this study emphasizes the need for diverse, global involvement in developing and evaluating large language models (LLMs) to ensure broad societal benefits and mitigate risks. The paper presents recommendations such as improving data transparency, developing accountability frameworks, establishing confidence standards for LLM outputs in high-risk settings, and enhancing industry research review processes. It concludes that while GPT-4's report is a step toward open discussion of LLMs, more extensive interdisciplinary reviews are essential for addressing concerns about bias, harm, and risk, especially in high-risk domains. The review aims to expand the understanding of LLMs in general and highlights the need for new forms of reflection on how LLMs are reviewed, the data required for effective evaluation, and how to address critical issues like bias and risk.

17.
Science ; 381(6654): 149-150, 2023 07 14.
Article in English | MEDLINE | ID: mdl-37440627

ABSTRACT

AI-predicted race variables pose risks and opportunities for studying health disparities.


Subject(s)
Artificial Intelligence; Diagnostic Imaging; Healthcare Disparities; Racial Groups; Humans
18.
IEEE J Biomed Health Inform ; 27(8): 3936-3947, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37167055

ABSTRACT

Automated curation of noisy external data in the medical domain has long been in high demand, as AI technologies need to be validated using various sources of clean, annotated data. Identifying the variance between internal and external sources is a fundamental step in curating a high-quality dataset, as the data distributions from different sources can vary significantly and subsequently affect the performance of AI models. The primary challenges in detecting data shifts are (1) accessing private data across healthcare institutions for manual detection and (2) the lack of automated approaches for learning efficient shift-data representations without training samples. To overcome these problems, we propose an automated pipeline called MedShift that detects top-level shift samples and evaluates the significance of shift data without sharing data between internal and external organizations. MedShift employs unsupervised anomaly detectors to learn the internal distribution and to identify samples in external datasets that show significant shift, and then compares their performance. To quantify the effects of detected shift data, we train a multi-class classifier that learns internal domain knowledge and evaluate its classification performance for each class in external domains after dropping the shift data. We also propose a data quality metric to quantify the dissimilarity between internal and external datasets. We verify the efficacy of MedShift using musculoskeletal radiographs (MURA) and chest X-ray datasets from multiple external sources. Our experiments show that the proposed shift-data detection pipeline can help medical centers curate high-quality datasets more efficiently.
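The core MedShift mechanism, fitting an unsupervised anomaly detector on internal data and scoring external samples for shift, can be sketched as below. IsolationForest and the synthetic feature vectors are stand-ins for illustration; the paper evaluates its own set of unsupervised detectors on medical image data.

```python
# Hedged sketch of anomaly-detector-based shift detection in the spirit of
# MedShift; IsolationForest and synthetic features are stand-ins.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
internal = rng.normal(0.0, 1.0, size=(2000, 64))   # internal-institution features
external = np.vstack([
    rng.normal(0.0, 1.0, size=(900, 64)),          # in-distribution external data
    rng.normal(3.0, 1.5, size=(100, 64)),          # shifted external samples
])

# Learn the internal distribution, then score external samples;
# lower score_samples values indicate more anomalous (more shifted) data.
detector = IsolationForest(random_state=0).fit(internal)
scores = detector.score_samples(external)

k = 100                                            # flag top-k shift candidates
shift_idx = np.argsort(scores)[:k]
print(f"flagged {k} candidate shift samples; "
      f"{np.mean(shift_idx >= 900):.0%} come from the truly shifted block")
```

The flagged samples would then be dropped or reviewed before evaluating a downstream classifier, mirroring the quantification step the abstract describes.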

19.
J Am Coll Radiol ; 20(6): 554-560, 2023 06.
Article in English | MEDLINE | ID: mdl-37148953

ABSTRACT

PURPOSE: Artificial intelligence (AI) is rapidly reshaping how radiology is practiced. Its susceptibility to biases, however, is a primary concern as more AI algorithms become available for widespread use. So far, there has been limited evaluation of how sociodemographic variables are reported in radiology AI research. This study aims to evaluate the presence and extent of sociodemographic reporting in original human-subjects radiology AI research. METHODS: All original human-subjects radiology AI articles published from January to December 2020 in the top six US radiology journals, as determined by impact factor, were reviewed. Reporting of sociodemographic variables (age, gender, and race or ethnicity), as well as any sociodemographic-based results, was extracted. RESULTS: Of the 160 included articles, 54% reported at least one sociodemographic variable: 53% reported age, 47% gender, and 4% race or ethnicity. Six percent reported any sociodemographic-based results. Reporting of at least one sociodemographic variable varied significantly by journal, ranging from 33% to 100%. CONCLUSIONS: Reporting of sociodemographic variables in original human-subjects radiology AI research remains poor, putting the results and subsequent algorithms at increased risk of bias.


Subject(s)
Artificial Intelligence; Radiology; Humans; Radiology/methods; Algorithms; Radiography; Ethnicity
20.
PLOS Digit Health ; 2(11): e0000386, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37983258

ABSTRACT

Numerous ethics guidelines have been handed down over the last few years on the ethical application of machine learning models. Virtually every one of them mentions the importance of "fairness" in the development and use of these models. Unfortunately, though, these ethics documents omit a consensually adopted definition or characterization of fairness. As one group of authors observed, these documents treat fairness as an "afterthought" whose importance is undeniable but whose essence seems strikingly elusive. In this essay, which offers a distinctly American treatment of "fairness," we comment on a number of fairness formulations and on qualitative and statistical methods that have been encouraged to achieve fairness. We argue that none of them, at least from an American moral perspective, provides a one-size-fits-all definition of, or methodology for securing, fairness that could inform or standardize fairness across the universe of use cases witnessing machine learning applications. Instead, we argue that because fairness comprehensions and applications reflect a vast range of use contexts, model developers and clinician users will need to engage in thoughtful collaborations that examine how fairness should be conceived and operationalized in the use case at issue. Part II of this paper illustrates key moments in these collaborations, especially when disagreement occurs within and between model-developer and clinician-user groups over whether a model is fair or unfair. We conclude by noting that these collaborations will likely occur over the lifetime of a model if its claim to fairness is to advance beyond "afterthought" status.
