Speaker
Description
Simulation-based inference (SBI) is emerging as a powerful tool for cosmological inference. However, accurate and robust inference requires large numbers of simulations: both to train deep-learning-based neural compression techniques and to fit well-calibrated models of the posterior. This poses a challenge when working with expensive, high-fidelity simulations, where generating large datasets is infeasible. In this work, we explore transfer learning strategies that reduce the number of required simulations while maintaining—or even improving—the accuracy of inference. Transfer learning is widely used in deep learning but remains underexplored in SBI, where domain adaptation between simulations of different fidelities could significantly improve efficiency. We apply established transfer learning techniques to CNN-based map-level inference models. Using the CAMELS Multifield Dataset, we demonstrate our approach by inferring cosmological parameters from 2D dark matter maps, transferring from N-body to hydrodynamical simulations such as Illustris. By reducing dependence on large simulation suites, transfer learning could allow inference to be built on fewer, more accurate simulations, improving both the efficiency and precision of SBI.
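The core transfer-learning idea described above—pretrain a network on plentiful low-fidelity (N-body-like) data, then fine-tune only part of it on scarce high-fidelity (hydro-like) data—can be sketched in a toy form. This is not the authors' pipeline: the two-layer NumPy network, the synthetic "maps", and all function names here are illustrative stand-ins for a CNN trained on CAMELS maps.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_maps(n, shift=0.0):
    """Toy stand-in for simulation maps: features X, target parameter y.

    `shift` mimics the physics difference between fidelities
    (e.g. baryonic effects absent from N-body runs)."""
    X = rng.normal(size=(n, 16))
    w_true = np.linspace(-1.0, 1.0, 16)
    y = X @ w_true + shift * X[:, 0] ** 2 + 0.05 * rng.normal(size=n)
    return X, y

def forward(X, W1, w2):
    """Two-layer net: shared feature extractor W1 + linear head w2."""
    H = np.tanh(X @ W1)
    return H @ w2, H

def mse(pred, y):
    return np.mean((pred - y) ** 2)

def train(X, y, W1, w2, lr=0.01, steps=1000, freeze_body=False):
    """Full-batch gradient descent; optionally freeze the extractor."""
    for _ in range(steps):
        pred, H = forward(X, W1, w2)
        err = (pred - y) / len(y)          # d(mse)/d(pred) up to factor 2
        w2 = w2 - lr * 2 * H.T @ err       # head update
        if not freeze_body:
            grad_H = 2 * np.outer(err, w2) * (1 - H ** 2)  # tanh' = 1 - H^2
            W1 = W1 - lr * X.T @ grad_H    # extractor update
    return W1, w2

# 1) Pretrain the whole network on plentiful low-fidelity data.
X_src, y_src = make_maps(2000, shift=0.0)
W1 = rng.normal(scale=0.3, size=(16, 32))
w2 = np.zeros(32)
W1, w2 = train(X_src, y_src, W1, w2)

# 2) Fine-tune only the head on a small high-fidelity set.
X_tgt, y_tgt = make_maps(64, shift=0.5)
_, w2_ft = train(X_tgt, y_tgt, W1, w2, steps=500, freeze_body=True)

loss_before = mse(forward(X_tgt, W1, w2)[0], y_tgt)
loss_after = mse(forward(X_tgt, W1, w2_ft)[0], y_tgt)
print(f"target loss: pretrained only {loss_before:.4f} -> fine-tuned {loss_after:.4f}")
```

Freezing the extractor is only one established strategy; others (lower learning rates on early layers, gradual unfreezing) follow the same pattern of reusing cheaply learned representations.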