Defending Against Data Reconstruction Attacks in Federated Learning: An Information Theory Approach
Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning
FAMOS: Robust Privacy-Preserving Authentication on Payment Apps via Federated Multi-Modal Contrastive Learning
Efficient Privacy Auditing in Federated Learning
Lurking in the Shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning
BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federated Learning
Gradient Obfuscation Gives a False Sense of Security in Federated Learning
Every Vote Counts: Ranking-Based Training of Federated Learning to Resist Poisoning Attacks
PrivateFL: Accurate, Differentially Private Federated Learning via Personalized Data Transformation
VILLAIN: Backdoor Attacks Against Vertical Split Learning
SIMC: ML Inference Secure Against Malicious Clients at Semi-Honest Cost
Efficient Differentially Private Secure Aggregation for Federated Learning via Hardness of Learning with Errors
Label Inference Attacks Against Vertical Federated Learning
FLAME: Taming Backdoors in Federated Learning
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning