About me
I am a Principal Applied Scientist in the AI & AGI Security team at Amazon,
where I build secure foundations for AI systems. My work spans post-quantum
cryptography, AI security, and privacy-enhancing technologies. I co-authored
CRYSTALS-Kyber and CRYSTALS-Dilithium, the lattice-based schemes that NIST
standardized as ML-KEM and ML-DSA, now replacing decades-old algorithms in
TLS and Signal to protect billions of connections against quantum computers.
My work on homomorphic encryption, anonymous credentials, and differential
privacy has been deployed at scale by Amazon, Google, and Apple.
Previously, I was a Principal Applied Scientist at AWS
(2022–2025), where I led data protection and AI security in
the Provable Security & Automation organization. Before that, I
was a Cryptography Engineer at Apple (2021–2022) and a
Research Scientist at Google (2018–2021), working on
post-quantum cryptography, secure computation, anonymous
credentials, and privacy-preserving analytics. Earlier, I worked on
post-quantum cryptography and homomorphic encryption at SRI
International (2016–2018) and CryptoExperts (2011–2016).
I hold a Ph.D. from École Normale Supérieure and
University of Luxembourg (Gilles Kahn Prize, 2014).
Selected work
Post-quantum cryptography
Quantum computers will eventually break the cryptographic algorithms
that secure today’s internet. I have worked on designing,
analyzing, and deploying their replacements.
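The core hardness assumption behind lattice schemes such as Kyber is Learning With Errors (LWE). The sketch below is a toy illustration of LWE bit encryption with deliberately small, hypothetical parameters (the modulus 3329 is Kyber's, but the dimension and noise here are illustrative only); real ML-KEM works over module lattices with degree-256 polynomials and is far more involved, so treat this as a teaching aid, not the standardized scheme.

```python
import random

q, n = 3329, 8  # Kyber's modulus; toy dimension (real schemes use degree-256 polynomials)

def keygen():
    """Toy secret key: a random vector s in Z_q^n."""
    return [random.randrange(q) for _ in range(n)]

def encrypt(s, bit):
    """Encrypt one bit as (a, b) with b = <a, s> + e + bit * q/2 mod q."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-2, 3)  # small noise; without it, recovery of s is trivial
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b

def decrypt(s, ct):
    """Recover v = e + bit * q/2 and round: near q/2 means 1, near 0 means 0."""
    a, b = ct
    v = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return 1 if q // 4 < v < 3 * q // 4 else 0
```

Because the noise e is small, rounding always recovers the bit; the security intuition is that without s, the pair (a, b) looks uniformly random.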
AI security
I work on building secure and reliable AI systems, from private
training with federated learning and secure aggregation to testing
LLM-integrated applications.
- Advances and open problems in federated learning — the foundational survey of federated learning, covering privacy, robustness, and systems challenges. Foundations and Trends in Machine Learning, 2021.
- Secure single-server aggregation with (poly)logarithmic overhead — efficient secure aggregation protocol for privacy-preserving machine learning. Deployed by Google for federated learning with distributed differential privacy. Published at ACM CCS 2020.
- Delta debugging for LLM-integrated systems — a systematic approach to isolating faults in systems that integrate large language models. ICSE-SEIP 2026.
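The pairwise-masking idea behind secure aggregation can be sketched in a few lines. This toy version shares mask seeds in the clear purely for illustration; the actual protocols derive them via key agreement, handle client dropouts, and achieve the (poly)logarithmic overhead mentioned above. All parameter choices here are hypothetical.

```python
import random

MOD = 2**32  # arithmetic modulo a power of two

def masked_updates(updates):
    """Each client adds pairwise masks that cancel in the server's sum."""
    n = len(updates)
    # One random mask per client pair (in practice derived via key agreement).
    masks = {(i, j): random.randrange(MOD)
             for i in range(n) for j in range(i + 1, n)}
    out = []
    for i, x in enumerate(updates):
        m = x
        for j in range(n):
            if i < j:
                m = (m + masks[(i, j)]) % MOD  # add the mask shared with client j
            elif j < i:
                m = (m - masks[(j, i)]) % MOD  # subtract that same mask
        out.append(m)
    return out

updates = [3, 5, 7]
# Each masked value looks random, but the masks cancel pairwise in the sum,
# so the server learns only the aggregate.
assert sum(masked_updates(updates)) % MOD == sum(updates) % MOD
```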
Privacy-enhancing technologies
I design cryptographic systems that let organizations use sensitive
data without exposing it—from private information retrieval
and anonymous credentials to differential privacy and
contact-tracing analytics. Several of these systems have been
deployed at scale by Apple and Google.
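As one concrete differential privacy building block, the Laplace mechanism releases a query answer plus noise scaled to the query's sensitivity divided by the privacy budget epsilon. This is a minimal sketch with illustrative parameters; deployed systems use carefully calibrated, securely sampled noise.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value plus Laplace noise of scale sensitivity / epsilon."""
    scale = sensitivity / epsilon
    # The difference of two i.i.d. Exp(1) samples is Laplace(0, 1).
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_value + noise

# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
noisy_count = laplace_mechanism(1234, sensitivity=1, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy; the guarantee holds for any neighboring pair of datasets, not just on average.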
All publications & preprints →