Results 1 - 3 of 3
1.
BMJ Oncol; 3(1), 2024 Jan.
Article in English | MEDLINE | ID: mdl-39086924

ABSTRACT

Background: Mismatch repair deficiency (dMMR) and microsatellite instability-high (MSI-H) occur in a subset of cancers and have been shown to confer sensitivity to immune checkpoint inhibition (ICI); however, there is a lack of prospective data in urothelial carcinoma (UC). Methods and analysis: We performed a systematic review to estimate the prevalence of dMMR and MSI-H in UC, including survival and clinical outcomes. We searched for studies published up to 26 October 2022 in major scientific databases. We screened 1745 studies and included 110. Meta-analyses were performed if the extracted data were suitable. Results: The pooled weighted prevalences of dMMR in bladder cancer (BC) and upper tract UC (UTUC) were 2.30% (95% CI 1.12% to 4.65%) and 8.95% (95% CI 6.81% to 11.67%), respectively. The pooled weighted prevalences of MSI-H in BC and UTUC were 2.11% (95% CI 0.82% to 5.31%) and 8.36% (95% CI 5.50% to 12.53%), respectively. Comparing localised versus metastatic disease, the pooled weighted prevalences for MSI-H in BC were 5.26% (95% CI 0.86% to 26.12%) and 0.86% (95% CI 0.59% to 1.25%), respectively; and in UTUC, they were 18.04% (95% CI 13.36% to 23.91%) and 4.96% (95% CI 2.72% to 8.86%), respectively. Cumulatively, the response rate in dMMR/MSI-H metastatic UC treated with an ICI was 22/34 (64.7%) compared with 1/9 (11.1%) with chemotherapy. Conclusion: Both dMMR and MSI-H occur more frequently in UTUC than in BC. In UC, MSI-H occurs more frequently in localised disease than in metastatic disease. These biomarkers may predict sensitivity to ICI in metastatic UC and resistance to cisplatin-based chemotherapy.
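The pooled weighted prevalences reported above come from a meta-analysis of proportions. The abstract does not describe the authors' exact model, so the following is only a minimal illustrative sketch: the logit transformation, continuity correction, and fixed-effect inverse-variance weighting are assumptions, and the study counts are invented.

```python
# Minimal sketch of pooling study-level prevalences on the logit scale.
# Illustrative only; NOT the authors' actual analysis pipeline.
import math

def logit_pool(events_and_totals):
    """Pool proportions with inverse-variance weights on the logit scale."""
    num = den = 0.0
    for events, total in events_and_totals:
        # 0.5 continuity correction avoids infinite logits for 0% or 100% studies
        e, t = events + 0.5, total + 1.0
        p = e / t
        logit = math.log(p / (1 - p))
        var = 1.0 / e + 1.0 / (t - e)   # approximate variance of the logit
        w = 1.0 / var                   # inverse-variance weight
        num += w * logit
        den += w
    pooled_logit = num / den
    se = math.sqrt(1.0 / den)
    to_prop = lambda x: 1.0 / (1.0 + math.exp(-x))
    return (to_prop(pooled_logit),
            to_prop(pooled_logit - 1.96 * se),   # lower 95% CI bound
            to_prop(pooled_logit + 1.96 * se))   # upper 95% CI bound

# Hypothetical study counts (dMMR cases, patients screened), for illustration only.
est, lo, hi = logit_pool([(3, 120), (7, 250), (1, 90)])
print(f"pooled prevalence {est:.2%} (95% CI {lo:.2%} to {hi:.2%})")
```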

2.
J Med Internet Res; 26: e54758, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38758582

ABSTRACT

BACKGROUND: Artificial intelligence is increasingly being applied to many workflows. Large language models (LLMs) are publicly accessible platforms trained to understand, interact with, and produce human-readable text; their ability to deliver relevant and reliable information is of particular interest to health care providers and patients. Hematopoietic stem cell transplantation (HSCT) is a complex medical field requiring extensive knowledge, background, and training to practice successfully, and it can be challenging for a nonspecialist audience to comprehend. OBJECTIVE: We aimed to test the applicability of 3 prominent LLMs, namely ChatGPT-3.5 (OpenAI), ChatGPT-4 (OpenAI), and Bard (Google AI), in guiding nonspecialist health care professionals and advising patients seeking information regarding HSCT. METHODS: We submitted 72 open-ended HSCT-related questions of variable difficulty to the LLMs and rated their responses on consistency (defined as replicability of the response), response veracity, language comprehensibility, specificity to the topic, and the presence of hallucinations. We then rechallenged the 2 best-performing chatbots by resubmitting the most difficult questions, prompting them to respond as if communicating with either a health care professional or a patient and to provide verifiable sources of information. Responses were then rerated with the additional criterion of language appropriateness, defined as language adaptation for the intended audience. RESULTS: ChatGPT-4 outperformed both ChatGPT-3.5 and Bard in response consistency (66/72, 92%; 54/72, 75%; and 63/69, 91%, respectively; P=.007), response veracity (58/66, 88%; 40/54, 74%; and 16/63, 25%, respectively; P<.001), and specificity to the topic (60/66, 91%; 43/54, 80%; and 27/63, 43%, respectively; P<.001). Both ChatGPT-4 and ChatGPT-3.5 outperformed Bard in language comprehensibility (64/66, 97%; 53/54, 98%; and 52/63, 83%, respectively; P=.002). All three displayed episodes of hallucination. ChatGPT-3.5 and ChatGPT-4 were then rechallenged with a prompt to adapt their language to the audience and to provide sources of information, and their responses were rated. ChatGPT-3.5 adapted its language to a nonmedical audience better than ChatGPT-4 (17/21, 81% and 10/22, 46%, respectively; P=.03); however, both failed to consistently provide correct and up-to-date information resources, reporting out-of-date materials, incorrect URLs, or unfocused references, which left their output unverifiable by the reader. CONCLUSIONS: Despite the potential capability of LLMs in confronting challenging medical topics such as HSCT, the presence of mistakes and the lack of clear references make them not yet appropriate for routine, unsupervised clinical use or patient counseling. Enabling LLMs to access and reference current websites and research papers, as well as developing LLMs trained on specialized domain knowledge data sets, may offer potential solutions for their future clinical application.
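The abstract reports pairwise comparisons of proportions with P values but does not name the statistical test used. As one plausible reconstruction only (Fisher's exact test on a 2x2 table is an assumption, not confirmed by the source), the language-adaptation comparison of 17/21 versus 10/22 could be checked like this:

```python
# Sketch of a pairwise proportion comparison such as the one reported above
# (17/21 for ChatGPT-3.5 vs 10/22 for ChatGPT-4). The actual test used by the
# authors is not stated in the abstract; Fisher's exact test is assumed here.
from scipy.stats import fisher_exact

table = [[17, 21 - 17],   # ChatGPT-3.5: adequate vs inadequate language adaptation
         [10, 22 - 10]]   # ChatGPT-4:   adequate vs inadequate language adaptation
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.3f}")
```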


Subjects
Health Personnel, Hematopoietic Stem Cell Transplantation, Humans, Artificial Intelligence, Language
3.
Oncologist; 29(5): 407-414, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38309720

ABSTRACT

BACKGROUND: The capability of large language models (LLMs) to understand and generate human-readable text has prompted investigation of their potential as educational and management tools for patients with cancer and health care providers. MATERIALS AND METHODS: We conducted a cross-sectional study evaluating the ability of ChatGPT-4, ChatGPT-3.5, and Google Bard to answer questions related to 4 domains of immuno-oncology (Mechanisms, Indications, Toxicities, and Prognosis). We generated 60 open-ended questions (15 for each section). Questions were manually submitted to the LLMs, and responses were collected on June 30, 2023. Two reviewers evaluated the answers independently. RESULTS: ChatGPT-4 and ChatGPT-3.5 answered all questions, whereas Google Bard answered only 53.3% (P < .0001). The proportion of questions with reproducible answers was higher for ChatGPT-4 (95%) and ChatGPT-3.5 (88.3%) than for Google Bard (50%) (P < .0001). In terms of accuracy, the proportion of answers deemed fully correct was 75.4%, 58.5%, and 43.8% for ChatGPT-4, ChatGPT-3.5, and Google Bard, respectively (P = .03). Furthermore, the proportion of responses deemed highly relevant was 71.9%, 77.4%, and 43.8% for ChatGPT-4, ChatGPT-3.5, and Google Bard, respectively (P = .04). Regarding readability, the proportion of highly readable answers was higher for ChatGPT-4 (98.1%) and ChatGPT-3.5 (100%) than for Google Bard (87.5%) (P = .02). CONCLUSION: ChatGPT-4 and ChatGPT-3.5 are potentially powerful tools in immuno-oncology, whereas Google Bard demonstrated comparatively poor performance. However, the risk of inaccuracy or incompleteness was evident in the responses of all 3 LLMs, highlighting the importance of expert-driven verification of the outputs returned by these technologies.
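Two reviewers evaluated the answers independently, but the abstract does not report an agreement statistic. A minimal sketch of how inter-rater agreement could be quantified with Cohen's kappa, using entirely hypothetical ratings, follows:

```python
# Illustrative only: the ratings below are invented, and Cohen's kappa is an
# assumed (not source-confirmed) choice for measuring agreement between the
# two independent reviewers before discrepancies are resolved.
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings for 10 answers: 1 = fully correct, 0 = not fully correct.
reviewer_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
reviewer_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```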


Subjects
Neoplasms, Humans, Cross-Sectional Studies, Neoplasms/immunology, Neoplasms/therapy, Medical Oncology/methods, Medical Oncology/standards, Surveys and Questionnaires, Language, Immunotherapy/methods