We investigate the performance of distributed learning for large-scale linear regression where the model unknowns are distributed over the network. We provide high-probability bounds on the generalization error of this distributed learning setting for isotropic as well as correlated Gaussian regressors. Our investigations show that the generalization error of the distributed solution can grow unboundedly even though the training error is low. We highlight the effect of the partitioning of the training data over the network of learners on the generalization error. Our results are particularly interesting for the overparametrized scenario, illustrating fast convergence of the training error but also a possibly unbounded generalization error.
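The phenomenon described above can be illustrated with a small simulation. The sketch below is not the algorithm analyzed in the abstract; it is a minimal block-coordinate scheme in which each node owns a block of the model unknowns and repeatedly solves a local least-squares problem, treating the other nodes' current estimates as fixed. All dimensions and the parameter-error proxy for the generalization error are illustrative assumptions. In the overparametrized regime (more unknowns than samples), the training error is driven to essentially zero while the distance to the true parameter vector remains large.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, K = 40, 80, 4                    # overparametrized: p > n unknowns, K nodes
A = rng.standard_normal((n, p))        # isotropic Gaussian regressors
x_true = rng.standard_normal(p)
y = A @ x_true                         # noiseless observations

# Partition the model unknowns over the network: node k owns one block of coordinates.
blocks = np.array_split(np.arange(p), K)

x = np.zeros(p)
for _ in range(100):
    for blk in blocks:
        # Residual with this node's current contribution removed.
        r = y - A @ x + A[:, blk] @ x[blk]
        # Local least-squares update for this node's block of unknowns.
        x[blk] = np.linalg.lstsq(A[:, blk], r, rcond=None)[0]

train_err = np.mean((y - A @ x) ** 2)
gen_err = np.mean((x - x_true) ** 2)   # parameter error as a generalization proxy
print(f"training error: {train_err:.2e}, parameter error: {gen_err:.2e}")
```

Because the overall least-squares objective is convex and each block update is an exact local minimization, the training error converges toward zero; the recovered `x`, however, agrees with `x_true` only on the row space of `A`, so the parameter error stays bounded away from zero, consistent with the low-training-error / large-generalization-error gap highlighted above.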
In this paper, we use tools from rate-distortion theory to establish new upper bounds on the general...
We consider information-theoretic bounds on the expected generalization error for statistical learni...
Distributed learning provides an attractive framework for scaling the learning task by sharing the c...
Distributed learning facilitates the scaling-up of data processing by distributing the computational...
Machine learning models are typically configured by minimizing the training error over a given train...