ABSTRACT
Recent advances in proteomic technologies now allow unparalleled assessment of the molecular composition of a wide range of sample types. However, the application of such technologies and techniques should not be undertaken lightly. Here, we describe why the design of a proteomics experiment itself is only the first step in yielding high-quality, translatable results. Indeed, the effectiveness and/or impact of the majority of contemporary proteomics screens are hindered not by commonly considered technical limitations such as low proteome coverage but rather by insufficient analyses. Proteomic experimentation requires careful methodological selection to account for variables at each stage: from sample collection, through database searches for peptide identification, to a standardised post-mass spectrometry analysis workflow. These options should be adjusted for each study, from determining when and how to filter proteomic data to choosing holistic versus trend-wise analyses to identify biologically relevant patterns. Finally, we highlight and discuss the difficulties inherent in the modelling and study of the majority of progressive neurodegenerative conditions. We provide evidence (in the context of neurodegenerative research) that applying the above considerations to the alignment of publicly available pre-existing data sets offers a comparative approach capable of identifying potential novel regulators of neuronal stability.