Results 1 - 2 of 2
1.
Aesthet Surg J; 41(10): NP1303-NP1309, 2021 Sep 14.
Article in English | MEDLINE | ID: mdl-34077508

ABSTRACT

BACKGROUND: The use of autologous fat grafting (AFG) is becoming increasingly common as an adjunct to breast reconstruction. However, there is a paucity of data comparing the varying processing devices.

OBJECTIVES: The goal of this study was to compare the outcomes of 2 commercially available AFG processing devices.

METHODS: A retrospective review was conducted of patients who underwent AFG with dual-filter (Puregraft) or single-filter (Revolve) processing systems between 2016 and 2019. Propensity score matching was utilized to adjust for confounding. A total of 38 breasts from the Puregraft group were matched with 38 breasts from the Revolve group.

RESULTS: Matching was successful in achieving a similar distribution of baseline characteristics between the 2 groups. The mean number of AFG sessions was comparable between the 2 groups (P = 0.37), with a similar median total volume (Puregraft, 159 mL vs Revolve, 130 mL; P = 0.23). Complication rates were similar between the 2 devices (Puregraft, 26%; Revolve, 18%; P = 0.47). Patients with at least 1 complication had a higher overall AFG volume (median, 200 mL vs 130 mL; P = 0.03) and number of sessions (mean, 2.4 vs 1.8; P = 0.009) compared with those without any postoperative complication.

CONCLUSIONS: Overall complication rates were comparable between 2 commonly used, commercially available AFG processing systems, and therefore the choice of which to use should be based on surgeon preference. Future studies are underway to decipher whether either system offers superior graft retention, cosmetic, or patient-reported outcomes.


Subjects
Breast Neoplasms, Mammaplasty, Adipose Tissue, Female, Humans, Mammaplasty/adverse effects, Propensity Score, Retrospective Studies, Transplantation, Autologous
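The matching step described above (pairing Puregraft breasts with Revolve breasts that have similar baseline characteristics) can be illustrated with a minimal sketch of 1:1 greedy nearest-neighbor propensity-score matching within a caliper. The function name, caliper value, and scores below are illustrative assumptions, not the study's actual procedure or data.

```python
# Minimal sketch of 1:1 greedy nearest-neighbor propensity-score matching
# with a caliper. Propensity scores would normally come from a logistic
# regression of treatment on baseline covariates; here they are given.
# All names and values are illustrative, not from the study.

def match_pairs(treated, control, caliper=0.05):
    """Greedily pair each treated score with the closest unused control
    score within the caliper; returns a list of (treated_idx, control_idx).
    Treated units with no control inside the caliper stay unmatched."""
    used = set()
    pairs = []
    for i, t in enumerate(treated):
        best_j, best_d = None, caliper
        for j, c in enumerate(control):
            if j in used:
                continue
            d = abs(t - c)
            if d <= best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs
```

Greedy matching is order-dependent; published analyses often use optimal matching or check covariate balance after matching, as this study reports doing.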
2.
JMIR AI; 3: e50442, 2024 Jan 12.
Article in English | MEDLINE | ID: mdl-38875575

ABSTRACT

BACKGROUND: ChatGPT (OpenAI) is a state-of-the-art large language model that uses artificial intelligence (AI) to address questions across diverse topics. The American Society of Clinical Oncology Self-Evaluation Program (ASCO-SEP) created a comprehensive educational program to help physicians keep up to date with the many rapid advances in the field. The question bank consists of multiple-choice questions addressing the many facets of cancer care, including diagnosis, treatment, and supportive care. As ChatGPT applications rapidly expand, it becomes vital to ascertain whether the knowledge of ChatGPT-3.5 matches the established standards that oncologists are recommended to follow.

OBJECTIVE: This study aims to evaluate whether ChatGPT-3.5's knowledge aligns with the established benchmarks that oncologists are expected to adhere to. This will furnish us with a deeper understanding of the potential applications of this tool as a support for clinical decision-making.

METHODS: We conducted a systematic assessment of the performance of ChatGPT-3.5 on the ASCO-SEP, the leading educational and assessment tool for medical oncologists in training and practice. Over 1000 multiple-choice questions covering the spectrum of cancer care were extracted. Questions were categorized by cancer type or discipline, with subcategorization as treatment, diagnosis, or other. Answers were scored as correct if ChatGPT-3.5 selected the answer as defined by ASCO-SEP.

RESULTS: Overall, ChatGPT-3.5 achieved a score of 56.1% (583/1040) for the correct answers provided. The program demonstrated varying levels of accuracy across cancer types and disciplines. The highest accuracy was observed in questions related to developmental therapeutics (8/10; 80% correct), while the lowest accuracy was observed in questions related to gastrointestinal cancer (102/209; 48.8% correct). There was no significant difference in the program's performance across the predefined subcategories of diagnosis, treatment, and other (P=.16).

CONCLUSIONS: This study evaluated ChatGPT-3.5's oncology knowledge using the ASCO-SEP, aiming to address uncertainties regarding AI tools like ChatGPT in clinical decision-making. Our findings suggest that while ChatGPT-3.5 offers a hopeful outlook for AI in oncology, its present performance on ASCO-SEP tests necessitates further refinement to reach the requisite competency levels. Future assessments could explore ChatGPT's clinical decision support capabilities with real-world clinical scenarios, its ease of integration into medical workflows, and its potential to foster interdisciplinary collaboration and patient engagement in health care settings.
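The scoring rule stated in the methods (an answer counts as correct only when the model's choice equals the ASCO-SEP answer key) can be sketched as follows. The function name and the question/choice encoding are illustrative assumptions; the study's actual extraction and grading pipeline is not described in this level of detail.

```python
# Minimal sketch of exact-match scoring against an answer key, as the
# methods describe: a response is correct only if it equals the key.
# Question IDs and choice letters below are illustrative placeholders.

def score(responses, answer_key):
    """responses / answer_key: dicts mapping question ID -> choice letter.
    Returns (number correct, number of questions, percent correct)."""
    total = len(answer_key)
    correct = sum(1 for q, key in answer_key.items() if responses.get(q) == key)
    return correct, total, 100.0 * correct / total
```

Per-category accuracy (e.g. the 80% for developmental therapeutics vs 48.8% for gastrointestinal cancer) follows by applying the same function to each category's subset of questions.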
