Defending Against Data Reconstruction Attacks in Federated Learning: An Information Theory Approach.
Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning.
FAMOS: Robust Privacy-Preserving Authentication on Payment Apps via Federated Multi-Modal Contrastive Learning.
Efficient Privacy Auditing in Federated Learning.
Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning.
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning.
BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federated Learning.
Gradient Obfuscation Gives a False Sense of Security in Federated Learning.
Every Vote Counts: Ranking-Based Training of Federated Learning to Resist Poisoning Attacks.
PrivateFL: Accurate, Differentially Private Federated Learning via Personalized Data Transformation.
VILLAIN: Backdoor Attacks Against Vertical Split Learning.
SIMC: ML Inference Secure Against Malicious Clients at Semi-Honest Cost.
Efficient Differentially Private Secure Aggregation for Federated Learning via Hardness of Learning with Errors.
Label Inference Attacks Against Vertical Federated Learning.
FLAME: Taming Backdoors in Federated Learning.
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning.