Over the last few years, machine learning, the discipline of automatically fitting mathematical models or rules to data, has revolutionized science, engineering, and society at large. This revolution is powered by ever-increasing amounts of digitally recorded data, which are growing at an exponential rate. However, these advances do not come for free: they incur substantial computational costs in terms of memory, execution time, and energy consumption. To reconcile learning from large-scale data with a reasoned use of computational resources, it seems crucial to investigate new learning paradigms. A particularly promising candidate is the compressive statistical learning framework. In a nutshell, the idea of this framework is to first compress the dataset into a compact summary, called a sketch, whose size does not grow with the number of data points, and then to learn from this sketch alone.
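To make the sketching idea concrete, here is a minimal illustrative example in Python. It assumes the common setting in which the sketch is the empirical average of random Fourier features of the data points; the dataset size, dimension, and sketch size used here are arbitrary toy values, not taken from any specific experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: n points in dimension d (hypothetical sizes).
n, d, m = 1000, 2, 50
X = rng.normal(size=(n, d))

# Random frequency vectors defining the sketching map
# (Gaussian frequencies are one standard choice).
Omega = rng.normal(size=(d, m))

# Empirical sketch: the mean of random Fourier features over the dataset.
# Its size m is fixed in advance and independent of n, so it serves as
# a compact summary from which learning can then be performed.
sketch = np.exp(1j * X @ Omega).mean(axis=0)

print(sketch.shape)  # (50,)
```

Note that once the sketch is computed, the original n data points can be discarded: the memory footprint of the summary is m complex numbers regardless of how large n grows, which is the source of the computational savings.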