ABSTRACT
Background: This study aims to compare the refractive outcomes of cataract surgery using two different biometry devices, the IOL Master 500 and IOL Master 700, and to investigate the influence of patient-related factors on these outcomes. Methods: In this retrospective study, we analyzed data from 2994 eyes that underwent cataract surgery. Multiple linear regression analyses were performed to examine the impact of the biometry device (IOL Master 500 or IOL Master 700), patient age, time elapsed between biometry and surgery, gender, and insurance status, as well as biometric parameters (anterior chamber depth, axial length, and corneal curvature), on postoperative refractive outcomes, specifically the deviation from target refraction. Results: The choice of biometry device had no statistically significant effect on the deviation from target refraction (p = 0.205). Age (p = 0.006) and gender (p = 0.001) were identified as significant predictors of refractive outcomes, with older patients and males experiencing slightly more hyperopic outcomes than younger patients and females, respectively. Neither the time elapsed between biometry and surgery nor insurance status significantly influenced the refractive outcomes. Conclusions: Our study, supported by a large cohort and a diverse group of patients representing typical anatomical variants seen in cataract surgery, supports the conclusion that the IOL Master 500 and IOL Master 700 can be regarded as equivalent and effective for biometry in cataract surgery. The differences between the devices were negligible. Therefore, switching between the devices is safe for patients undergoing bilateral surgery.
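The multiple linear regression described in the Methods can be sketched as follows. This is a minimal illustration with hypothetical variable names and toy data (the study's actual dataset and full predictor set are not shown); deviation from target refraction is modeled as a linear function of age, device, and sex via ordinary least squares.

```python
# Minimal OLS sketch of the regression in the Methods section.
# Data and coefficients below are toy values, NOT study results.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Ordinary least squares: beta = (X'X)^-1 X'y, via the normal equations."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    return solve(XtX, Xty)

# Toy design matrix rows: [intercept, age, device (0 = IOLM 500, 1 = IOLM 700), male]
X = [[1, 72, 0, 1], [1, 65, 1, 0], [1, 80, 0, 0],
     [1, 58, 1, 1], [1, 70, 1, 0], [1, 77, 0, 1]]
y = [0.25, -0.10, 0.30, -0.05, 0.00, 0.20]  # deviation from target (dioptres)
beta = ols(X, y)  # [intercept, age effect, device effect, sex effect]
```

In the actual analysis one would also include the remaining predictors (time to surgery, insurance status, anterior chamber depth, axial length, corneal curvature) and obtain p-values from the coefficient standard errors; a statistics package such as statsmodels or R's `lm` handles that directly.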
ABSTRACT
Background: The use of large language models (LLMs) as writing assistance for medical professionals is a promising approach to reducing the time required for documentation, but practical, ethical, and legal challenges in many jurisdictions may complicate the use of the most powerful commercial LLM solutions. Objective: In this study, we assessed the feasibility of using nonproprietary LLMs of the GPT variety as writing assistance for medical professionals in an on-premise setting with restricted compute resources, generating German medical text. Methods: We trained four 7-billion-parameter models spanning three different architectures for our task and evaluated their performance using a powerful commercial LLM, namely Anthropic's Claude-v2, as a rater. Based on this, we selected the best-performing model and evaluated its practical usability with two independent human raters on real-world data. Results: In the automated evaluation with Claude-v2, BLOOM-CLP-German, a model trained from scratch on German text, achieved the best results. In the manual evaluation by human experts, 95 (93.1%) of the 102 reports generated by that model were rated as usable as is, or with only minor changes, by both human raters. Conclusions: The results show that even with restricted compute resources, it is possible to generate medical texts that are suitable for documentation in routine clinical practice. However, the target language should be considered in model selection when processing non-English text.
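The two-rater usability figure in the Results (95/102, 93.1%) can be reproduced with a simple tally. This is a hypothetical sketch: the label names and the demo data are assumptions, and a report counts as usable only when both raters graded it "as-is" or "minor" (usable with only minor changes).

```python
# Hypothetical tally of the dual-rater usability evaluation.
# Label names ("as-is", "minor", "major") are illustrative assumptions.

USABLE = {"as-is", "minor"}

def usable_fraction(ratings):
    """ratings: list of (rater1_label, rater2_label) pairs, one per report.
    Returns (usable_count, total, percentage)."""
    ok = sum(1 for r1, r2 in ratings if r1 in USABLE and r2 in USABLE)
    return ok, len(ratings), 100.0 * ok / len(ratings)

# Toy example with four reports; the third fails because one rater said "major".
demo = [("as-is", "minor"), ("minor", "minor"),
        ("as-is", "major"), ("as-is", "as-is")]
ok, total, pct = usable_fraction(demo)  # 3 of 4 reports pass (75.0%)
```

With 95 of 102 reports passing, the same function yields 93.1% after rounding to one decimal, matching the figure reported above.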