uu.se: Publications from Uppsala University
Toward efficient resource utilization at edge nodes in federated learning
Blekinge Inst Technol, Dept Comp Sci, S-37179 Karlskrona, Sweden; Univ Santiago De Compostela, Comp Graph & Data Engn COGRADE Res Grp, Santiago De Compostela, Spain.
Univ Skövde, Sch Informat, Högskolevägen 1, S-54128 Skövde, Sweden.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computational Science; Division of Scientific Computing. ORCID iD: 0000-0003-0302-6276
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Numerical Analysis; Computational Science; Division of Scientific Computing. ORCID iD: 0000-0001-7273-7923
2024 (English). In: Progress in Artificial Intelligence, ISSN 2192-6352, E-ISSN 2192-6360, Vol. 13, no. 2, p. 101-117. Article in journal (Refereed). Published.
Abstract [en]

Federated learning (FL) enables edge nodes to collaboratively contribute to constructing a global model without sharing their data. This is accomplished by devices computing local, private model updates that are then aggregated by a server. However, computational resource constraints and network communication can become a severe bottleneck for the larger model sizes typical of deep learning (DL) applications. Edge nodes tend to have limited hardware resources (RAM, CPU), and network bandwidth and reliability at the edge are a concern for scaling federated fleet applications. In this paper, we propose and evaluate an FL strategy inspired by transfer learning that reduces resource utilization on devices, as well as the load on the server and network, in each global training round. For each local model update, we randomly select the layers to train and freeze the remaining part of the model. In doing so, we reduce both server load and communication costs per round by excluding all untrained layer weights from being transferred to the server. The goal of this study is to empirically explore the potential trade-off between resource utilization on devices and global model convergence under the proposed strategy. We implement the approach using the FL framework FEDn and carry out experiments on different datasets (CIFAR-10, CASA, and IMDB), performing different tasks with different DL model architectures. Our results show that training the model partially accelerates the training process, makes efficient use of on-device resources, and reduces data transmission by around 75% and 53% when we train 25% and 50% of the model layers, respectively, without harming the resulting global model accuracy. Furthermore, our results demonstrate a negative correlation between the number of clients participating in training and the number of layers that need to be trained on each client's side: as the number of clients increases, the required number of layers decreases. This observation highlights the potential of the approach, particularly in cross-device use cases.
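
To make the mechanism concrete, the sketch below illustrates one way the partial-training idea could look on a client: a random subset of layers is selected for training, the rest are frozen, and only the trained layers' weights are packaged for upload. This is a minimal PyTorch-style illustration, not the authors' FEDn implementation; the toy model, the layer-level selection granularity, and the 25% training fraction are assumptions made here purely for illustration.

```python
# Illustrative sketch only (not the authors' FEDn code). Shows random layer
# selection, freezing of the remaining layers, and a reduced upload payload.
import random
import torch.nn as nn

def select_and_freeze(model: nn.Module, train_fraction: float = 0.25):
    # Consider only top-level child modules that actually hold parameters.
    layers = [(name, m) for name, m in model.named_children()
              if sum(1 for _ in m.parameters()) > 0]
    k = max(1, int(len(layers) * train_fraction))
    trainable = {name for name, _ in random.sample(layers, k)}
    for name, m in layers:
        for p in m.parameters():
            p.requires_grad = name in trainable  # freeze unselected layers
    return trainable

def partial_update(model: nn.Module, trainable_names):
    # Keep only parameters belonging to the trained layers; untrained
    # layer weights are never included in the payload sent to the server.
    return {n: p.detach().cpu() for n, p in model.named_parameters()
            if n.split(".")[0] in trainable_names}

# Hypothetical toy model: train 25% of the layers in one local round.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 10))
trained = select_and_freeze(model, train_fraction=0.25)
payload = partial_update(model, trained)  # only the trained layers' weights
```

Under this scheme, the communication saving comes directly from the payload construction: weights of frozen layers never leave the device, which is consistent with the roughly 75% and 53% reductions in transmitted data reported when 25% and 50% of the layers are trained.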

Place, publisher, year, edition, pages
Springer Nature, 2024. Vol. 13, no. 2, p. 101-117
Keywords [en]
Distributed training, Data privacy, Federated learning, Machine learning, Training parallelization, Partial training
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:uu:diva-543660
DOI: 10.1007/s13748-024-00322-3
ISI: 001242726300001
Scopus ID: 2-s2.0-85195583160
OAI: oai:DiVA.org:uu-543660
DiVA, id: diva2:1915837
Available from: 2024-11-25. Created: 2024-11-25. Last updated: 2024-11-25. Bibliographically approved.

Open Access in DiVA

fulltext (1738 kB)
File name: FULLTEXT01.pdf
File size: 1738 kB
Checksum (SHA-512): 63e8bda675fe5202281e232aff5fb0826a5b000f0cc92bb2a47c2757c1c3d41a06978ede21efb1fb4ae0da3132b1158838d99d38ec5ea2c3b5b6443560d4f5cd
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Toor, Salman; Hellander, Andreas

