(De)troubling transparency: artificial intelligence (AI) for clinical applications.
Med Humanit; 49(1): 17-26, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35545432
Artificial intelligence (AI) and machine learning (ML) techniques occupy a prominent role in medical research, driving the innovation and development of new technologies. However, while many perceive AI as a technology of promise and hope, one that allows for earlier and more accurate diagnosis, the acceptance of AI and ML technologies in hospitals remains low. A major reason for this is the lack of transparency associated with these technologies, in particular epistemic transparency, which results in AI disturbing or troubling established knowledge practices in clinical contexts. In this article, we describe the development process of one AI application for a clinical setting. We show how epistemic transparency is negotiated and co-produced in close collaboration between AI developers, clinicians and biomedical scientists, forming the context in which AI is accepted as an epistemic operator. Drawing on qualitative research with collaborative researchers developing an AI technology for the early diagnosis of a rare respiratory disease (pulmonary hypertension/PH), this paper examines how including clinicians and clinical scientists in the collaborative practices of AI developers de-troubles transparency. Our research shows how de-troubling transparency occurs in three dimensions of AI development relating to PH: querying of data sets, building software and training the model. The close collaboration results in an AI application that is at once social and technological: it integrates and inscribes into the technology the knowledge processes of the different participants in its development. We suggest that it is a misnomer to call these applications 'artificial' intelligence, and that they would be better developed and implemented if they were reframed as forms of sociotechnical intelligence.
Full text: 1
Collections: 01-internacional
Database: MEDLINE
Main subject: Physicians / Artificial Intelligence
Study type: Qualitative_research / Screening_studies
Limit: Humans
Language: En
Publication year: 2023
Document type: Article