Passive Inference Attacks on Split Learning via Adversarial Regularization
CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling
RAIFLE: Reconstruction Attacks on Interaction-based Federated Learning with Adversarial Data Manipulation
SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split Learning
URVFL: Undetectable Data Reconstruction Attack on Vertical Federated Learning
FP-Fed: Privacy-Preserving Federated Detection of Browser Fingerprinting
CrowdGuard: Federated Backdoor Detection in Federated Learning
Automatic Adversarial Adaption for Stealthy Poisoning Attacks in Federated Learning
FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning
Securing Federated Sensitive Topic Classification against Poisoning Attacks
PPA: Preference Profiling Attack Against Federated Learning
Local and Central Differential Privacy for Robustness and Privacy in Federated Learning
DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection
FedCRI: Federated Mobile Cyber-Risk Intelligence
Interpretable Federated Transformer Log Learning for Cloud Threat Forensics
POSEIDON: Privacy-Preserving Federated Neural Network Learning
FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning