FedHome

S&P

2024

  • BadVFL: Backdoor Attacks in Vertical Federated Learning.

  • LOKI: Large-scale Data Reconstruction Attack against Federated Learning through Model Manipulation.

  • Protecting Label Distribution in Cross-Silo Federated Learning.

  • FLShield: A Validation Based Federated Learning Framework to Defend Against Poisoning Attacks.

  • SHERPA: Explainable Robust Algorithms for Privacy-preserved Federated Learning in Future Networks to Defend against Data Poisoning Attacks.

2023

  • FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information.

  • RoFL: Robustness of Secure Federated Learning.

  • Flamingo: Multi-Round Single-Server Secure Aggregation with Applications to Private Federated Learning.

  • BayBFed: Bayesian Backdoor Defense for Federated Learning.

  • ADI: Adversarial Dominating Inputs in Vertical Federated Learning Systems.

  • 3DFed: Adaptive and Extensible Framework for Covert Backdoor Attack in Federated Learning.

  • Scalable and Privacy-Preserving Federated Principal Component Analysis.

  • ELSA: Secure Aggregation for Federated Learning with Malicious Actors.

2022

  • SNARKBlock: Federated Anonymous Blocklisting from Hidden Common Input Aggregate Proofs.

  • Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning.

