A comprehensive analysis of the IEDB MHC class-I automated benchmark.
Trevizani, Raphael; Yan, Zhen; Greenbaum, Jason A; Sette, Alessandro; Nielsen, Morten; Peters, Bjoern.
Affiliation
  • Trevizani R; Division of Vaccine Discovery, La Jolla Institute for Immunology, La Jolla, California 92037, USA.
  • Yan Z; Fiocruz Ceará, Fundação Oswaldo Cruz, Rua São José s/n, Precabura, Eusébio/CE, Brazil.
  • Greenbaum JA; Bioinformatics Core, La Jolla Institute for Immunology, La Jolla, California 92037, USA.
  • Sette A; Bioinformatics Core, La Jolla Institute for Immunology, La Jolla, California 92037, USA.
  • Nielsen M; Division of Vaccine Discovery, La Jolla Institute for Immunology, La Jolla, California 92037, USA.
  • Peters B; Department of Medicine, University of California San Diego, La Jolla, California 92093, USA.
Brief Bioinform; 23(4), 2022 Jul 18.
Article in English | MEDLINE | ID: mdl-35794711
In 2014, the Immune Epitope Database (IEDB) automated benchmark was created to compare the performance of MHC class I binding predictors. However, such comparisons are not straightforward because the methods produce different, non-standardized outputs. Additionally, some methods are more restrictive regarding the HLA alleles and epitope sizes for which they predict binding affinities, while others are more comprehensive. To address how these problems impacted the ranking of the predictors, we developed an approach to assess the reliability of different metrics. We found that using percentile-ranked results improved the stability of the ranks and allowed the predictors to be reliably ranked despite not being evaluated on the same data. We also found that, given the rate at which new data are incorporated into the benchmark, a new method must wait at least 4 years to be ranked against the pre-existing methods. The best-performing tools with statistically indistinguishable scores in this benchmark were NetMHCcons, NetMHCpan4.0, ANN3.4, NetMHCpan3.0 and NetMHCpan2.8. The results of this study will be used to improve the evaluation and display of benchmark performance. We highly encourage anyone working on MHC binding predictions to participate in this benchmark to get an unbiased evaluation of their predictors.
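The central idea in the abstract, that percentile-rank normalization makes otherwise incompatible predictor outputs comparable, can be illustrated with a short sketch. The code below is not the IEDB benchmark implementation; the function name, the background score distributions, and the use of Kendall's tau as the rank-agreement metric are illustrative assumptions only.

```python
# Minimal sketch (assumptions, not the IEDB benchmark code): each raw predictor
# score is replaced by its percentile rank within a background distribution of
# scores, so the unit and scale of the original output no longer matter and
# predictors with different output formats can be scored with the same metric.
import numpy as np
from scipy.stats import kendalltau  # illustrative choice of rank-agreement metric


def percentile_rank(raw_scores, background_scores, smaller_is_better=True):
    """Map raw outputs (e.g. IC50 in nM, or an arbitrary 0-1 score) to
    percentile ranks in [0, 100]; lower rank = stronger predicted binder."""
    background = np.sort(np.asarray(background_scores))
    ranks = np.searchsorted(background, raw_scores, side="right") / background.size * 100.0
    return ranks if smaller_is_better else 100.0 - ranks


# Toy example: two hypothetical predictors score the same 50 peptides on
# incompatible scales.
rng = np.random.default_rng(0)
scores_a = rng.lognormal(5.0, 2.0, 50)   # predictor A: IC50-like values in nM (lower = better)
scores_b = rng.normal(0.5, 0.2, 50)      # predictor B: 0-1 binding score (higher = better)

pr_a = percentile_rank(scores_a, rng.lognormal(5.0, 2.0, 10_000))
pr_b = percentile_rank(scores_b, rng.normal(0.5, 0.2, 10_000), smaller_is_better=False)

# After normalization both predictors can be compared with a single rank-based
# metric, even though their raw outputs were never on a common scale.
measured = rng.random(50)  # placeholder for measured binding labels
tau_a, _ = kendalltau(pr_a, measured)
tau_b, _ = kendalltau(pr_b, measured)
print(f"rank agreement A: {tau_a:.3f}, B: {tau_b:.3f}")
```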
Subject(s)
Keywords

Full text: 1 | Collection: 01-international | Database: MEDLINE | Main subject: Benchmarking | Study type: Prognostic_studies | Language: English | Journal: Brief Bioinform | Journal subject: Biology / Medical Informatics | Year: 2022 | Document type: Article | Country of affiliation: United States