FedHome

USENIX Security

2024

  • Defending Against Data Reconstruction Attacks in Federated Learning: An Information Theory Approach.

  • Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning.

  • FAMOS: Robust Privacy-Preserving Authentication on Payment Apps via Federated Multi-Modal Contrastive Learning.

  • Efficient Privacy Auditing in Federated Learning.

  • Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning.

  • ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning.

  • BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federated Learning.

2023

  • Gradient Obfuscation Gives a False Sense of Security in Federated Learning.

  • Every Vote Counts: Ranking-Based Training of Federated Learning to Resist Poisoning Attacks.

  • PrivateFL: Accurate, Differentially Private Federated Learning via Personalized Data Transformation.

  • VILLAIN: Backdoor Attacks Against Vertical Split Learning.

2022

  • SIMC: ML Inference Secure Against Malicious Clients at Semi-Honest Cost.

  • Efficient Differentially Private Secure Aggregation for Federated Learning via Hardness of Learning with Errors.

  • Label Inference Attacks Against Vertical Federated Learning.

  • FLAME: Taming Backdoors in Federated Learning.

2020

  • Local Model Poisoning Attacks to Byzantine-Robust Federated Learning.


Last updated 8 months ago