ABSTRACT
An array of large observational programs using ground-based and space-borne telescopes is planned for the next decade. The forthcoming wide-field sky surveys are expected to deliver a total data volume exceeding an exabyte. Processing such large volumes of complex, multiplexed astronomical data is technically challenging, and fully automated technologies based on machine learning (ML) and artificial intelligence are urgently needed. Maximizing the scientific return from these big data requires community-wide effort. We summarize recent progress in ML applications in observational cosmology. We also address crucial issues in high-performance computing that are needed for data processing and statistical analysis.
ABSTRACT
In general relativity, the average velocity field of dark matter around galaxy clusters is uniquely determined by the mass profile, which can in turn be measured through weak lensing. We propose a new method of measuring the velocity field (the phase-space density) by stacking redshifts of surrounding galaxies from a spectroscopic sample. In combination with lensing, this yields a direct test of gravity on scales of 1-30 Mpc. Using N-body simulations, we show that this method, applied to upcoming imaging and redshift surveys, can improve upon current constraints on f(R) and Dvali-Gabadadze-Porrati (DGP) model parameters by several orders of magnitude.
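The core of the stacking step can be illustrated with a short sketch. This is not the authors' pipeline; all function names and the synthetic data below are hypothetical. It only shows the kinematic conversion assumed here: a galaxy redshift is translated into a cluster-frame line-of-sight velocity via v = c (z_gal - z_cl) / (1 + z_cl), and the velocities from all clusters are stacked into one sample.

```python
# Illustrative sketch (hypothetical, not the paper's method): stacking
# cluster-frame line-of-sight velocities of galaxies around clusters.
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def stack_velocities(z_clusters, z_galaxies_per_cluster):
    """Convert galaxy redshifts to line-of-sight velocities in each
    cluster's rest frame, then stack them across all clusters."""
    v_all = []
    for z_cl, z_gals in zip(z_clusters, z_galaxies_per_cluster):
        # First-order peculiar velocity relative to the cluster
        v_los = C_KMS * (np.asarray(z_gals) - z_cl) / (1.0 + z_cl)
        v_all.append(v_los)
    return np.concatenate(v_all)

# Synthetic example: 3 clusters, 50 galaxies each, Gaussian scatter
rng = np.random.default_rng(0)
z_cls = [0.10, 0.15, 0.20]
sigma_v = 1000.0  # assumed velocity dispersion in km/s
z_gals = [z + (1.0 + z) * rng.normal(0.0, sigma_v, 50) / C_KMS
          for z in z_cls]

v = stack_velocities(z_cls, z_gals)
print(v.shape)  # (150,)
```

In practice the stacked velocities would be binned by projected cluster-centric radius to build the phase-space density that is compared against the lensing-derived mass profile.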