Results 1 - 3 of 3
1.
BMC Med Ethics; 23(1): 50, 2022 05 06.
Article in English | MEDLINE | ID: mdl-35524301

ABSTRACT

Research regarding the drivers of acceptance of clinical decision support systems (CDSS) by physicians is still rather limited. The literature that does exist, however, tends to focus on problems regarding the user-friendliness of CDSS. We performed a thematic analysis of 24 interviews with physicians concerning specific clinical case vignettes, in order to explore their underlying opinions and attitudes regarding the introduction of CDSS in clinical practice and to allow a more in-depth analysis of the factors underlying (non-)acceptance of CDSS. We identified three general themes in the results. First, 'the perceived role of the AI', comprising items referring to the tasks that, according to the respondents, may properly be assigned to the CDSS. Second, 'the perceived role of the physician', referring to the aspects of clinical practice that were seen as fundamentally 'human' or non-automatable. Third, 'concerns regarding AI', comprising items referring to more general issues raised by the respondents about the introduction of CDSS in general and/or in clinical medicine in particular. Apart from the overall concerns expressed by the respondents regarding user-friendliness, we explain how our results indicate that respondents were primarily occupied with distinguishing between the parts of their job that should be automated and the aspects that should be kept in human hands. We refer to this distinction as 'the division of clinical labor.' This division is not based on knowledge regarding AI or medicine, but rather on which parts of a physician's job the respondents saw as central to who they are as physicians and as human beings. Often, the respondents' view that certain core parts of their job ought to be shielded from automation was closely linked to claims concerning the uniqueness of medicine as a domain. Finally, although almost all respondents claimed to highly value their final responsibility, closer investigation of this concept suggests that their view of 'final responsibility' was not that demanding after all.


Subjects
Clinical Decision Support Systems, Physicians, Artificial Intelligence, Attitude, Humans, Qualitative Research, Rome
2.
BMC Med Inform Decis Mak; 22(1): 185, 2022 07 16.
Article in English | MEDLINE | ID: mdl-35842722

ABSTRACT

BACKGROUND: There is increasing interest in incorporating clinical decision support (CDS) into electronic health records (EHR). Successful implementation of CDS systems depends on their acceptance by healthcare workers. We used a mix of quantitative and qualitative methods, starting from Q-sort methodology, to explore the expectations and perceptions of practicing physicians on the use of CDS incorporated in the EHR. METHODS: The study was performed in a large tertiary care academic hospital. We used a mixed approach combining a Q-sort based classification of pre-defined reactions to clinical case vignettes with a thinking-aloud approach, taking into account COREQ recommendations. The open-source software Ken-Q Analysis, version 1.0.6, was used for the quantitative analysis, using principal components and a Varimax rotation. For the qualitative analysis, a thematic analysis based on the four main themes was performed on the audiotapes and field notes. RESULTS: Thirty physicians were interviewed (7 in training, 8 junior staff and 15 senior staff; 16 females). Nearly all respondents were strongly averse to interruptive messages, especially when these were also obstructive. Obstructive interruption was considered acceptable only when it increases safety, is adjustable to the user's expertise level, and/or allows deviations when the end-user explains why a deviation is desirable in the case at issue. Transparency was deemed an essential feature, which seems to boil down to providing sufficient clarification of the factors underlying the recommendations of the CDS, so that these can be compared against the physicians' existing knowledge, beliefs and convictions. CONCLUSION: Avoiding workflow disruption and making the underlying decision processes transparent are important points to consider when developing CDS systems incorporated in the EHR.
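The quantitative step described here (by-person principal components of the Q-sorts followed by a Varimax rotation) was carried out in Ken-Q Analysis. Purely as an illustration of that step, the sketch below reproduces it in NumPy on hypothetical data; the statement count, participant count, and three-factor choice are assumptions for the example, not values taken from the study.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Kaiser's varimax rotation of a factor-loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        new_var = s.sum()
        if new_var < var * (1 + tol):   # stop when the rotation criterion plateaus
            break
        var = new_var
    return loadings @ rotation

# Hypothetical Q-sort data: rows = statements, columns = participants (values -4..+4).
rng = np.random.default_rng(0)
qsorts = rng.integers(-4, 5, size=(36, 30)).astype(float)

# Q methodology correlates *persons* rather than items.
person_corr = np.corrcoef(qsorts, rowvar=False)          # 30 x 30

# Principal components of the person-by-person correlation matrix.
eigvals, eigvecs = np.linalg.eigh(person_corr)
top = np.argsort(eigvals)[::-1][:3]                       # keep three factors (illustrative choice)
loadings = eigvecs[:, top] * np.sqrt(eigvals[top])

rotated_loadings = varimax(loadings)                      # participants' loadings on rotated factors
print(rotated_loadings.shape)                             # (30, 3)
```

Participants loading strongly on the same rotated factor share a viewpoint; the themes reported above are then interpreted from those groupings together with the think-aloud material.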


Subjects
Clinical Decision Support Systems, Physicians, Electronic Health Records, Female, Health Personnel, Humans, Motivation, Software
3.
Front Genet; 13: 903600, 2022.
Article in English | MEDLINE | ID: mdl-36199569

ABSTRACT

The combination of "Big Data" and Artificial Intelligence (AI) is frequently promoted as having the potential to deliver valuable health benefits when applied to medical decision-making. However, the responsible adoption of AI-based clinical decision support systems faces several challenges at both the individual and societal level. One of the features that has given rise to particular concern is the issue of explainability, since, if the way an algorithm arrived at a particular output is not known (or knowable) to a physician, this may lead to multiple challenges, including an inability to evaluate the merits of the output. This "opacity" problem has led to questions about whether physicians are justified in relying on the algorithmic output, with some scholars insisting on the centrality of explainability, while others see no reason to require of AI that which is not required of physicians. We consider that there is merit in both views but find that greater nuance is necessary in order to elucidate the underlying function of explainability in clinical practice and, therefore, its relevance in the context of AI for clinical use. In this paper, we explore explainability by examining what it requires in clinical medicine and draw a distinction between the function of explainability for the current patient versus the future patient. This distinction has implications for what explainability requires in the short and long term. We highlight the role of transparency in explainability, and identify semantic transparency as fundamental to the issue of explainability itself. We argue that, in day-to-day clinical practice, accuracy is sufficient as an "epistemic warrant" for clinical decision-making, and that the most compelling reason for requiring explainability in the sense of scientific or causal explanation is the potential for improving future care by building a more robust model of the world. We identify the goal of clinical decision-making as being to deliver the best possible outcome as often as possible, and find that accuracy is sufficient justification for intervention for today's patient, as long as efforts to uncover scientific explanations continue to improve healthcare for future patients.
