ABSTRACT
BACKGROUND: Several polygenic risk score (PRS) methods are available for measuring the cumulative effect of multiple risk-associated single nucleotide polymorphisms (SNPs). Their performance in predicting risk at the individual level has not been well studied. METHODS: We compared the performance of three PRS methods for prostate cancer risk assessment in a clinical trial cohort: genetic risk score (GRS), pruning and thresholding (P + T), and linkage disequilibrium prediction (LDpred). Performance was evaluated for score deciles (broad-sense validity) and score values (narrow-sense validity). RESULTS: A training process was required to identify the best P + T model (397 SNPs) and LDpred model (3,011,362 SNPs). In contrast, GRS was calculated directly from 110 established risk-associated SNPs. For broad-sense validity in the testing population, higher deciles were significantly associated with higher observed risk; P-trend was 7.40 × 10⁻¹¹, 7.64 × 10⁻¹³, and 7.51 × 10⁻¹⁰ for GRS, P + T, and LDpred, respectively. For narrow-sense validity, the calibration slope (1 is best) was 1.03, 0.77, and 0.87, and the mean bias score (0 is best) was 0.09, 0.21, and 0.10 for GRS, P + T, and LDpred, respectively. CONCLUSIONS: GRS performed better than P + T and LDpred. Its smaller set of well-established SNPs also makes GRS more feasible and interpretable for genetic testing at the individual level.
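To illustrate the quantities compared above, the following is a minimal sketch of a generic PRS as a weighted sum of risk-allele dosages, together with a calibration slope computed by regressing observed risk on predicted risk. This is not the authors' exact pipeline: the weights, data, and the omission of the GRS population-frequency normalization are all illustrative assumptions.

```python
import numpy as np

def polygenic_risk_score(dosages, weights):
    """Generic PRS: weighted sum of risk-allele dosages (0, 1, or 2 per SNP).

    Note: the paper's GRS additionally standardizes by population allele
    frequencies; that normalization is omitted in this sketch.
    """
    return np.asarray(dosages, dtype=float) @ np.asarray(weights, dtype=float)

def calibration_slope(predicted, observed):
    """Slope of observed risk regressed on predicted risk (ideal value = 1)."""
    slope, _intercept = np.polyfit(predicted, observed, 1)
    return slope

# Toy example: 5 individuals genotyped at 4 SNPs (hypothetical data).
rng = np.random.default_rng(0)
dosages = rng.integers(0, 3, size=(5, 4))          # allele counts per SNP
weights = np.array([0.12, 0.08, 0.25, 0.05])       # hypothetical log-odds weights
scores = polygenic_risk_score(dosages, weights)    # one score per individual
```

In this framing, P + T and LDpred differ from GRS mainly in how the SNP set and weights are chosen (hence the training step the abstract mentions), not in the form of the score itself.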