WAFFLE: Watermarking in Federated Learning

Buse G. A. Tekgul, Yuxi Xia, Samuel Marchal, N. Asokan

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

11 Citations (Scopus)


Federated learning is a distributed learning technique in which machine learning models are trained on client devices where the local training data resides. The training is coordinated via a central server that is typically controlled by the intended owner of the resulting model. By avoiding the need to transport the training data to the central server, federated learning improves privacy and efficiency. But it raises the risk of model theft by clients, because the resulting model is available on every client device. Even though the application software used for local training may attempt to prevent direct access to the model, a malicious client can bypass any such restrictions by reverse-engineering the application software. Watermarking is a well-known deterrence method against model theft: it provides model owners the means to demonstrate ownership of their models. Several recent deep neural network (DNN) watermarking techniques use backdooring: training the models with additional mislabeled data. Backdooring requires full access to the training data and control of the training process. This is feasible when a single party trains the model in a centralized manner, but not in a federated learning setting, where the training process and training data are distributed among several client devices. In this paper, we present WAFFLE, the first approach to watermark DNN models trained using federated learning. It introduces a retraining step at the server after each aggregation of local models into the global model. We show that WAFFLE efficiently embeds a resilient watermark into models, incurring only negligible degradation in test accuracy (-0.17%), and does not require access to training data. We also introduce a novel technique to generate the backdoor used as a watermark. It outperforms prior techniques, imposing no communication overhead and only low computational overhead (+3.2%).¹

¹ The research report version of this paper is also available at https://arxiv.org/abs/2008.07298, and the code for reproducing our work can be found at https://github.com/ssg-research/WAFFLE.
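The core idea in the abstract — the server re-embeds a backdoor-based watermark by retraining on a trigger set after every aggregation round — can be illustrated with a minimal sketch. This is not the paper's implementation: NumPy logistic regression stands in for a DNN, plain FedAvg for the aggregation rule, and a random trigger set for WAFFLE's actual backdoor-generation technique; the function names (`local_train`, `embed_watermark`) are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.1, epochs=5):
    # A client's local update: gradient descent on logistic loss
    # over its private data (stand-in for DNN training).
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def aggregate(client_weights):
    # FedAvg-style aggregation: average the client models.
    return np.mean(client_weights, axis=0)

def embed_watermark(w, X_wm, y_wm, lr=0.1, max_steps=100):
    # WAFFLE-style step: after each aggregation, the server retrains
    # the global model on the watermark trigger set (no training data
    # needed) until the trigger set is classified as intended.
    for _ in range(max_steps):
        p = 1.0 / (1.0 + np.exp(-X_wm @ w))
        if np.all((p > 0.5) == y_wm.astype(bool)):
            break
        w -= lr * X_wm.T @ (p - y_wm) / len(y_wm)
    return w

# Toy setup: 3 clients with private data (hypothetical dimensions).
d, n_clients = 20, 3
client_data = [
    (rng.normal(size=(50, d)), rng.integers(0, 2, 50).astype(float))
    for _ in range(n_clients)
]
# Trigger set: random inputs with fixed labels (a placeholder for
# WAFFLE's generated backdoor patterns).
X_wm = rng.normal(size=(10, d))
y_wm = rng.integers(0, 2, 10).astype(float)

w = np.zeros(d)
for _ in range(10):  # federated rounds
    updates = [local_train(w, X, y) for X, y in client_data]
    w = aggregate(updates)               # server aggregates local models
    w = embed_watermark(w, X_wm, y_wm)   # then re-embeds the watermark

wm_acc = np.mean((1.0 / (1.0 + np.exp(-X_wm @ w)) > 0.5) == y_wm.astype(bool))
print(f"watermark accuracy: {wm_acc:.2f}")
```

The design point the sketch captures is that the watermarking loop runs entirely on the server and touches only the trigger set, which is why the approach needs no access to clients' training data and adds no communication overhead.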
Original language: English
Title of host publication: Proceedings of the 40th International Symposium on Reliable Distributed Systems, SRDS 2021
Number of pages: 11
ISBN (Electronic): 978-1-6654-3819-3
ISBN (Print): 978-1-6654-3820-9
Publication status: Published - 22 Nov 2021
MoE publication type: A4 Conference publication
Event: International Symposium on Reliable Distributed Systems - Virtual, online, Chicago, United States
Duration: 20 Sept 2021 - 23 Sept 2021
Conference number: 40


Conference: International Symposium on Reliable Distributed Systems
Abbreviated title: SRDS
Country/Territory: United States


  • Training
  • Reverse engineering
  • Training data
  • Process control
  • Watermarking
  • Collaborative work
  • Data models


