1.
Learn Health Syst ; 8(3): e10409, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39036532

ABSTRACT

Purpose: In a learning health system (LHS), data gathered from clinical practice inform both care and scientific investigation. This study aimed to demonstrate how a novel data and analytics platform can enable an LHS at a regional cancer center by characterizing the care provided to breast cancer patients. Methods: Socioeconomic information, tumor characteristics, treatments, and outcomes were extracted from the platform and combined to characterize the patient population and their clinical course. Oncologists were asked to identify examples where clinical practice guidelines (CPGs) or policy changes had varying impacts on practice, and these constructs were evaluated by extracting the corresponding data. Results: A total of 5768 breast cancer patients seen at the Juravinski Cancer Centre between January 2014 and June 2022 were included. The average age was 62.5 years. The most common histology was invasive ductal carcinoma (74.6%); 77% of tumors were estrogen receptor-positive and 15.5% were HER2/neu-positive. Breast-conserving surgery (BCS) was performed in 56% of patients. Of the 4294 patients who received systemic therapy, the initial indications were adjuvant (3096), neoadjuvant (828), and palliative (370). Metastases occurred in 531 patients, and 495 patients died. Patients with the lowest incomes had a higher mortality rate. With respect to CPG adoption, uptake of adjuvant bisphosphonates was very low (8%), as predicted, compared with 64% for pertuzumab, a HER2-targeted agent, and 40.2% for CDK4/6 inhibitors in the metastatic setting. During COVID-19, the provincial cancer agency issued a policy to shorten the duration of radiation after BCS; the average number of fractions delivered to the breast fell significantly, by five fractions. Conclusion: Our platform characterized the care and clinical course of breast cancer patients and measured practice changes in response to regulatory developments and policy changes. Establishing a data platform is an important step toward an LHS; the next step is for the data to feed back into and change practice, that is, to close the loop.
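
As an illustration of the kind of descriptive cohort characterization described in this abstract, the Python sketch below computes summaries similar to those reported. The DataFrame, its column names ("age", "histology", "er_status", "surgery", "income_quintile", "died"), and its values are hypothetical stand-ins, not the platform's actual schema or data.

# Hypothetical, minimal cohort extract for illustration only
import pandas as pd

cohort = pd.DataFrame({
    "age":             [58, 71, 49, 66, 62],
    "histology":       ["IDC", "IDC", "ILC", "IDC", "IDC"],
    "er_status":       ["positive", "positive", "negative", "positive", "positive"],
    "surgery":         ["BCS", "mastectomy", "BCS", "BCS", "mastectomy"],
    "income_quintile": [1, 3, 5, 1, 2],
    "died":            [True, False, False, True, False],
})

# Descriptive summaries of the kind reported in the abstract
summary = {
    "n_patients":      len(cohort),
    "mean_age":        cohort["age"].mean(),
    "pct_idc":         (cohort["histology"] == "IDC").mean() * 100,
    "pct_er_positive": (cohort["er_status"] == "positive").mean() * 100,
    "pct_bcs":         (cohort["surgery"] == "BCS").mean() * 100,
}

# Mortality rate by income group, mirroring the reported income-mortality gradient
mortality_by_income = cohort.groupby("income_quintile")["died"].mean() * 100

print(summary)
print(mortality_by_income)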

2.
JMIR Med Educ ; 9: e50514, 2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37725411

ABSTRACT

BACKGROUND: Large language model (LLM)-based chatbots are evolving at an unprecedented pace with the release of ChatGPT, specifically GPT-3.5, and its successor, GPT-4. Their capabilities in general-purpose tasks and language generation have advanced to the point of performing excellently on various educational examination benchmarks, including medical knowledge tests. Comparing the performance of these two LLMs with that of Family Medicine residents on a multiple-choice medical knowledge test can provide insights into their potential as medical education tools. OBJECTIVE: This study aimed to quantitatively and qualitatively compare the performance of GPT-3.5, GPT-4, and Family Medicine residents on a multiple-choice medical knowledge test appropriate for the level of a Family Medicine resident. METHODS: An official University of Toronto Department of Family and Community Medicine Progress Test consisting of multiple-choice questions was entered into GPT-3.5 and GPT-4. The artificial intelligence chatbots' responses were manually reviewed to determine the selected answer, response length, response time, whether a rationale was provided for the outputted response, and the root cause of all incorrect responses (classified into arithmetic, logical, and information errors). The performance of the artificial intelligence chatbots was compared against that of a cohort of Family Medicine residents who concurrently attempted the test. RESULTS: GPT-4 performed significantly better than GPT-3.5 (difference 25.0%, 95% CI 16.3%-32.8%; McNemar test: P<.001); it correctly answered 89/108 (82.4%) questions, while GPT-3.5 answered 62/108 (57.4%) questions correctly. Further, GPT-4 scored higher across all 11 categories of Family Medicine knowledge. In 86.1% (n=93) of its responses, GPT-4 provided a rationale for why the other multiple-choice options were not chosen, compared with 16.7% (n=18) for GPT-3.5. Qualitatively, for both GPT-3.5 and GPT-4, logical errors were the most common and arithmetic errors the least common. The average performance of Family Medicine residents was 56.9% (95% CI 56.2%-57.6%). The performance of GPT-3.5 was similar to that of the average Family Medicine resident (P=.16), while the performance of GPT-4 exceeded that of the top-performing Family Medicine resident (P<.001). CONCLUSIONS: GPT-4 significantly outperforms both GPT-3.5 and Family Medicine residents on a multiple-choice medical knowledge test designed for Family Medicine residents. GPT-4 provides a logical rationale for its answer choice, ruling out the other options efficiently and with concise justification. Its high degree of accuracy and advanced reasoning capabilities support potential applications in medical education, including the creation of examination questions and scenarios, as well as serving as a resource for medical knowledge or information on community services.
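
A note on the reported McNemar comparison: the abstract gives aggregate scores (GPT-4: 89/108; GPT-3.5: 62/108) but not the per-question discordant counts the test actually uses. The Python sketch below shows how an exact McNemar test is computed from discordant pairs; the counts b and c are hypothetical stand-ins consistent with the reported 27-question gap, not data from the study.

# Exact McNemar test on paired (per-question) outcomes; standard library only
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar test on discordant pairs.

    b = questions GPT-4 answered correctly that GPT-3.5 answered incorrectly
    c = questions GPT-3.5 answered correctly that GPT-4 answered incorrectly
    """
    n = b + c
    k = max(b, c)
    # Under H0, each discordant question favours either model with probability 0.5
    p = 2 * sum(comb(n, i) for i in range(k, n + 1)) * 0.5 ** n
    return min(p, 1.0)

# Hypothetical discordant split consistent with the 89 - 62 = 27 net difference
b, c = 29, 2
print(f"exact McNemar p-value: {mcnemar_exact(b, c):.4g}")  # well below .001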
