Passive Inference Attacks on Split Learning via Adversarial Regularization.
CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling.
RAIFLE: Reconstruction Attacks on Interaction-based Federated Learning with Adversarial Data Manipulation.
SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split Learning.
URVFL: Undetectable Data Reconstruction Attack on Vertical Federated Learning.
FP-Fed: Privacy-Preserving Federated Detection of Browser Fingerprinting.
CrowdGuard: Federated Backdoor Detection in Federated Learning.
Automatic Adversarial Adaption for Stealthy Poisoning Attacks in Federated Learning.
FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning.
Securing Federated Sensitive Topic Classification against Poisoning Attacks.
PPA: Preference Profiling Attack Against Federated Learning.
Local and Central Differential Privacy for Robustness and Privacy in Federated Learning.
DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection.
FedCRI: Federated Mobile Cyber-Risk Intelligence.
Interpretable Federated Transformer Log Learning for Cloud Threat Forensics.
POSEIDON: Privacy-Preserving Federated Neural Network Learning.
FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping.
Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning.