ABSTRACT
Large-scale, nationally representative surveys serve many vital functions, but these surveys are often long and burdensome for respondents. Cutting survey length can help reduce respondent burden and may improve data quality, but removing items from these surveys is not a trivial matter. We propose a method to empirically assess item importance and associated burden in national surveys and to guide this decision-making process using the research products derived from such surveys. This method is demonstrated using the Survey of Doctorate Recipients (SDR), a biennial survey administered to individuals with a Science, Engineering, or Health doctorate. We used three main sources of information on the SDR variables: 1) a bibliography of documents using SDR data, 2) the SDR website, which allows users to download summary data, and 3) web timing paradata and break-off rates. The bibliography was coded for SDR variable usage and citation counts. Combining this information, we identified 35 items (17% of the survey) that were unused by any of these sources and found that the most burdensome items are also highly important. We conclude with general recommendations for those hoping to employ similar methodologies in the future.