Results 1 - 4 of 4
1.
JMIR Res Protoc ; 13: e54857, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38557315

ABSTRACT

BACKGROUND: Kidney transplant recipients eventually face the risk of graft loss and the concomitant need for dialysis or retransplantation. Choosing the right kidney replacement therapy after graft loss is an important preference-sensitive decision for kidney transplant recipients. However, previous studies have found that conversations about treatment options after kidney graft loss occur at rates as low as 13%. It is unknown whether implementing artificial intelligence (AI)-based risk prediction models can increase the number of conversations about treatment options after graft loss, or how this might influence the associated shared decision-making (SDM).

OBJECTIVE: This study aims to explore the impact of AI-based prediction of the risk of graft loss on the frequency of conversations about treatment options after graft loss, as well as on the associated SDM process.

METHODS: This is a 2-year, prospective, randomized, 2-armed, parallel-group, single-center trial at a German kidney transplant center. All patients will receive the same routine post-kidney transplant care, which usually includes follow-up visits every 3 months at the kidney transplant center. For patients in the intervention arm, physicians will be assisted by a validated and previously published AI-based risk prediction system that estimates the risk of graft loss in the next year, from 3 months after randomization until 24 months after randomization. The study population will consist of 122 kidney transplant recipients >12 months after transplantation who are at least 18 years of age, are able to communicate in German, and have an estimated glomerular filtration rate <30 mL/min/1.73 m². Patients with a multi-organ transplant, patients unable to communicate in German, and underage patients are excluded. For the primary end point, the proportion of patients who have had a conversation about their treatment options after graft loss will be compared between the 2 groups at 12 months after randomization. Additionally, 2 assessment tools for SDM, the CollaboRATE mean score and the Control Preference Scale, will be compared between the 2 groups at 12 and 24 months after randomization. Furthermore, recordings of patient-physician conversations, as well as semistructured interviews with patients, support persons, and physicians, will be used to support the quantitative results.

RESULTS: Enrollment for the study is ongoing. The first results are expected to be submitted for publication in 2025.

CONCLUSIONS: This is the first study to examine the influence of AI-based risk prediction on physician-patient interaction in the context of kidney transplantation. We use a mixed methods approach, combining a randomized design with a simple quantitative end point (frequency of conversations), different quantitative measurements of SDM, and several qualitative research methods (eg, recordings of physician-patient conversations and semistructured interviews) to examine the implementation of AI-based risk prediction in the clinic.

TRIAL REGISTRATION: ClinicalTrials.gov NCT06056518; https://clinicaltrials.gov/study/NCT06056518.

INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/54857.
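The primary end point described above is a comparison of proportions between two arms. As a minimal illustrative sketch (not the trial's prespecified analysis plan, and with entirely hypothetical counts), a two-sided two-proportion z-test could look like this:

```python
# Illustrative sketch only: comparing the proportion of patients with a
# documented conversation about post-graft-loss treatment options between
# the two arms at 12 months. All counts below are hypothetical.
from math import sqrt, erfc

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)       # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))                     # two-sided normal p
    return z, p_value

# Hypothetical: 61 patients per arm (122 total); conversation documented
# for 30 intervention vs. 10 control patients.
z, p = two_proportion_z(30, 61, 10, 61)
```

With these made-up counts the test yields a large z and a p-value well below 0.05; the actual trial analysis may well use a different test or adjust for covariates.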

2.
PLoS One ; 18(4): e0282619, 2023.
Article in English | MEDLINE | ID: mdl-37093808

ABSTRACT

Scientific publications about the application of machine learning models in healthcare often focus on improving performance metrics. However, beyond often short-lived improvements, many additional aspects need to be taken into consideration to make sustainable progress. What does it take to implement a clinical decision support system, what makes it usable for the domain experts, and what eventually brings it into practical use? So far, there has been little research to answer these questions. This work presents a multidisciplinary view of machine learning in medical decision support systems, covering information technology, medical, and ethical aspects. The target audience is computer scientists who plan to do research in a clinical context. The paper starts from a relatively straightforward risk prediction system in the subspecialty of nephrology that was evaluated on historic patient data, both intrinsically and in a reader study with medical doctors. Although the results were quite promising, the focus of this article is not on the model itself or potential performance improvements. Instead, we want to share the lessons we learned and the insights we gained when implementing and evaluating our system in a clinical setting, within a highly interdisciplinary pilot project involving computer scientists, medical doctors, ethicists, and legal experts.


Subjects
Decision Support Systems, Clinical; Physicians; Humans; Pilot Projects; Delivery of Health Care; Publications
4.
Front Med (Lausanne) ; 9: 1016366, 2022.
Article in English | MEDLINE | ID: mdl-36606050

ABSTRACT

Introduction: Artificial intelligence-driven decision support systems (AI-DSS) have the potential to help physicians analyze data and facilitate the search for a correct diagnosis or suitable intervention. The potential of such systems is often emphasized, but their implementation in clinical practice deserves continuous attention. This article aims to shed light on the needs and challenges arising from the use of AI-DSS from physicians' perspectives.

Methods: The basis for this study is a qualitative content analysis of expert interviews with experienced nephrologists after testing an AI-DSS in a straightforward usage scenario.

Results: The results provide insights into the basics of clinical decision-making, expected challenges when using AI-DSS, and a reflection on the test run.

Discussion: While we can confirm the somewhat expected demand for better explainability and control, other insights highlight the need to uphold the classical strengths of the medical profession when using AI-DSS, as well as the importance of broadening the view of AI-related challenges to the clinical environment, especially during treatment. Our results stress the necessity of adjusting AI-DSS to shared decision-making. We conclude that explainability must be context-specific while fostering meaningful interaction with the systems available.
