Fake or Compromised? Making Sense of Malicious Clients in Federated Learning.
Exploiting Internal Randomness for Privacy in Vertical Federated Learning.
VFLIP: A Backdoor Defense for Vertical Federated Learning via Identification and Purification.
Exploiting Layerwise Feature Representation Similarity For Backdoor Defence in Federated Learning.
FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models.
FLMJR: Improving Robustness of Federated Learning via Model Stability.
Long-Short History of Gradients is All You Need: Detecting Malicious and Unreliable Clients in Federated Learning.
Local Differential Privacy for Federated Learning in Industrial Settings.