Abstract: Distributed machine learning paradigms have benefited from the concurrent advancement of deep learning and the Internet of Things (IoT). Federated learning is among the most promising of these frameworks, in which a central server collaborates with local learners to train a global model. The inherent heterogeneity of IoT devices, i.e., non-independent and identically distributed (non-i.i.d.) data, together with inconsistent communication network conditions, results in degraded learning performance and slow convergence. Moreover, most weight-averaging-based model aggregation schemes raise learning fairness concerns. In this paper, we propose a peer-to-peer decentralized learning framework to tackle these issues. Specifically, each local client iteratively finds a learning pair with which to exchange its local model. In this way, multiple learning objectives are jointly optimized, which promotes learning fairness and avoids domination by a small group of clients. The proposed fairness-aware approach allows each local client to adaptively aggregate the received model according to its local learning performance. The experimental results demonstrate that the proposed approach significantly improves the efficacy of federated learning and outperforms state-of-the-art schemes in real-world scenarios, including balanced i.i.d., unbalanced i.i.d., balanced non-i.i.d., and unbalanced non-i.i.d. environments.
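To make the two mechanisms named in the abstract concrete, the following is a minimal sketch of a fairness-aware pairwise aggregation step: a client receives a peer's model and blends it into its own, weighted by performance on local data. Everything here is an illustrative assumption rather than the paper's actual algorithm: the function names (`pairwise_aggregate`, `evaluate`), the use of local validation loss as the performance signal, and the inverse-loss convex-combination weighting.

```python
import copy
import torch
import torch.nn.functional as F

def pairwise_aggregate(local_model, peer_model, val_loader, device="cpu"):
    """Hypothetical sketch: adaptively blend a peer's model into the local one.

    The mixing weight is derived from losses on the local validation set, so
    the model that performs better on this client's data receives a larger
    weight. (Assumed mechanism; the paper's exact rule may differ.)
    """
    def evaluate(model):
        # Mean cross-entropy loss on the local validation set.
        model.eval()
        total_loss, n = 0.0, 0
        with torch.no_grad():
            for x, y in val_loader:
                x, y = x.to(device), y.to(device)
                total_loss += F.cross_entropy(model(x), y, reduction="sum").item()
                n += y.size(0)
        return total_loss / max(n, 1)

    local_loss = evaluate(local_model)
    peer_loss = evaluate(peer_model)
    # Inverse-loss weighting: a lower local loss yields a larger local weight.
    w_local = peer_loss / (local_loss + peer_loss)

    merged = copy.deepcopy(local_model)
    merged_state = merged.state_dict()
    local_state = local_model.state_dict()
    peer_state = peer_model.state_dict()
    for k in merged_state:
        # Blend floating-point parameters; leave integer buffers
        # (e.g., BatchNorm counters) untouched.
        if merged_state[k].dtype.is_floating_point:
            merged_state[k] = w_local * local_state[k] + (1.0 - w_local) * peer_state[k]
    merged.load_state_dict(merged_state)
    return merged
```

Under this assumed rule, a peer model that generalizes poorly to the local data distribution is discounted rather than averaged in uniformly, which is one plausible way to realize the adaptive, performance-based aggregation the abstract describes.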