Keynote Speaker: On-Device Intelligence Workshop at MLSys 2021

Link to MLSys:

We propose a novel method for federated learning that is customized to the objective of a given edge device. In our proposed method, a server trains a global meta-model by collaborating with devices without actually sharing data. The trained global meta-model is then customized locally by each device to meet its specific objective. Unlike the conventional federated learning setting, training a customized model for each device is hindered both by the inherent data biases of the various devices and by the requirements imposed by the federated architecture. We present an algorithm that locally de-biases model updates while leveraging distributed data, so that each device can be effectively customized towards its own objectives. Our method is fully agnostic to device heterogeneity and imbalanced data, scales to a massive number of devices, and allows for arbitrary partial participation. It has built-in convergence guarantees, and on benchmark datasets we demonstrate that it outperforms other state-of-the-art methods.
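To make the two-stage structure concrete, here is a minimal simulation sketch of the general recipe the abstract describes: a server meta-trains a shared model over heterogeneous devices with partial participation, and each device then personalizes it with local fine-tuning. This is a generic FedAvg-plus-fine-tuning illustration, not the speaker's actual algorithm; the data generator, learning rates, and round counts are all hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 5  # hypothetical model dimension for this toy linear task

def make_device_data(bias, n=50):
    # Each device has its own biased linear task (simulates data heterogeneity).
    X = rng.normal(size=(n, DIM))
    w_true = np.ones(DIM) + bias
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def local_update(w, X, y, lr=0.05, steps=10):
    # A few steps of local gradient descent on squared loss, done on-device.
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Heterogeneous devices: each draws from a differently-biased task.
devices = [make_device_data(bias) for bias in (-0.5, 0.0, 0.5, 1.0)]

# Server rounds: only a subset of devices participates each round
# (arbitrary partial participation); raw data never leaves the devices,
# only locally updated model parameters are averaged.
w_global = np.zeros(DIM)
for _ in range(30):
    participants = rng.choice(len(devices), size=2, replace=False)
    updates = [local_update(w_global, *devices[i]) for i in participants]
    w_global = np.mean(updates, axis=0)

# Personalization stage: each device fine-tunes the global model locally
# to meet its own objective.
global_losses = [mse(w_global, X, y) for X, y in devices]
personal_losses = [
    mse(local_update(w_global, X, y, steps=20), X, y) for X, y in devices
]
```

In this sketch the personalized models fit each device's local data at least as well as the shared global model, which is the gap the talk's de-biasing algorithm targets in a principled way.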