Results 1 - 2 of 2
1.
J Arthroplasty; 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39025279

ABSTRACT

BACKGROUND: The outcomes and safety of "mix and match" in total hip arthroplasty (THA) using universal head-neck adapters (UHNA) remain a matter of ongoing discussion and concern due to legal disputes. This study aimed to analyze the "mix and match" use of UHNA and to evaluate complication and reoperation rates, possible risk factors, and implant survival.

METHODS: A total of 306 patients treated with THA (94.1% revisions) using a UHNA at our institution between 2006 and 2022 were identified and included. Diagnoses, comorbidities, implants, and UHNA specifications were recorded retrospectively. Outcomes, complications, and survival were analyzed, taking various possible risk factors into account.

RESULTS: Of the 306 included cases (58.5% women; median age 74 years; median follow-up 57 months), 19.9% had at least one complication. Forty-three patients (14.1%) required at least one re-revision surgery. The most common complication was postoperative recurrent dislocation (n = 27, 8.8%). One case of a prosthetic stem-neck fracture was registered. Statistically significant risk factors for postoperative recurrent dislocation and postoperative aseptic loosening were, respectively, dislocation as the indication for UHNA implantation (P < .001) and oversized neck lengths (≥2XL; P = .004). Overall revision-free survival was 92% at 1 year and 82% at 10 years. Statistically significantly better survival was registered in patients ≥60 years old, with fewer comorbidities (<2), and with normal neck lengths (S to XL).

CONCLUSIONS: The results of this study underline the overall safety of UHNA use in THA through "mix and match." Only one stem-neck fracture was identified. The highlighted risk factors for failure must be kept in mind during the decision-making process with patients.
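The abstract reports revision-free survival at fixed time points but does not name the estimator; implant survival in arthroplasty studies is conventionally estimated with Kaplan-Meier curves. A minimal sketch of that kind of computation, assuming Kaplan-Meier via the lifelines library and using entirely synthetic placeholder data (not the study's data), might look like this:

# Sketch only: Kaplan-Meier is an assumed method, and all numbers below
# are synthetic placeholders, not the cohort described in the abstract.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)

# Hypothetical follow-up times in months and re-revision events
# for a cohort of 306 cases (~14% observed re-revision rate).
follow_up_months = rng.exponential(scale=90, size=306).clip(1, 200)
rerevision = rng.random(306) < 0.14

kmf = KaplanMeierFitter()
kmf.fit(durations=follow_up_months, event_observed=rerevision)

# Revision-free survival at 1 and 10 years (12 and 120 months),
# analogous to the 92% / 82% figures reported above.
print(kmf.survival_function_at_times([12, 120]))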

2.
Front Public Health; 12: 1303319, 2024.
Article in English | MEDLINE | ID: mdl-38584922

ABSTRACT

Introduction: Since its introduction in November 2022, the artificial intelligence large language model ChatGPT has taken the world by storm. Among other applications, it can be used by patients as a source of information on diseases and their treatments. However, little is known about the quality of the sarcoma-related information ChatGPT provides. We therefore aimed to analyze how sarcoma experts evaluate the quality of ChatGPT's responses to sarcoma-related inquiries and to assess the bot's answers on specific evaluation metrics.

Methods: The ChatGPT responses to a sample of 25 sarcoma-related questions (5 definitions, 9 general questions, and 11 treatment-related inquiries) were evaluated by 3 independent sarcoma experts. Each response was compared with authoritative resources and international guidelines and graded on 5 metrics using a 5-point Likert scale: completeness, misleadingness, accuracy, being up-to-date, and appropriateness. This yielded a maximum of 25 and a minimum of 5 points per answer, with higher scores indicating higher response quality. Answers scoring ≥21 points were rated very good, 16 to 20 good, 11 to 15 poor, and ≤10 very poor.

Results: The median score of ChatGPT's answers was 18.3 points (interquartile range [IQR], 12.3-20.3 points). Six answers were classified as very good and 9 as good, while 5 answers each were rated poor and very poor. The best scores were documented for how appropriate the response was for patients (median, 3.7 points; IQR, 2.5-4.2 points), significantly higher than the accuracy scores (median, 3.3 points; IQR, 2.0-4.2 points; p = 0.035). ChatGPT fared considerably worse on treatment-related questions, with only 45% of its responses classified as good or very good, compared with general questions (78% good/very good) and definitions (60% good/very good).

Discussion: The answers ChatGPT provided on a rare disease such as sarcoma were of very inconsistent quality, with some answers classified as very good and others as very poor. Sarcoma physicians should be aware of the risk of misinformation that ChatGPT poses and advise their patients accordingly.
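The scoring rubric is fully specified in the Methods above: five 5-point metrics are summed to a 5-25 total, which is mapped to four quality bands. A minimal sketch of that mapping follows; the function and variable names are illustrative, not taken from the study.

# Sketch of the rubric described above. Each metric is a 1-5 Likert score;
# the band thresholds (>=21, 16-20, 11-15, <=10) come from the Methods.
def classify_answer(completeness, misleadingness, accuracy,
                    up_to_date, appropriateness):
    total = (completeness + misleadingness + accuracy
             + up_to_date + appropriateness)
    assert 5 <= total <= 25, "each metric must be scored 1-5"
    if total >= 21:
        return total, "very good"
    if total >= 16:
        return total, "good"
    if total >= 11:
        return total, "poor"
    return total, "very poor"

# Example: metric scores of 4, 3, 4, 4, 4 sum to 19 points -> "good".
print(classify_answer(4, 3, 4, 4, 4))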


Subject(s)
Artificial Intelligence, Sarcoma, Humans, Language, Awareness, Information Sources