Three tools for practical differential privacy

Privacy Preserving Machine Learning (NeurIPS 2018 workshop)

Differentially private learning on real-world data poses challenges for standard machine learning practice: privacy guarantees are difficult to interpret, hyperparameter tuning on private data consumes part of the privacy budget, and ad-hoc privacy attacks are often required to test a model's privacy empirically. We introduce three tools that make differentially private machine learning more practical:

  1. simple sanity checks which can be carried out in a centralized manner before training (a minimal example is sketched below),
  2. an adaptive clipping bound to reduce the effective number of tunable privacy parameters (see the second sketch after this list), and
  3. large-batch training, which we show improves model performance.
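The abstract does not spell out what the sanity checks are, so the following is only one plausible instance: a centralized, pre-training check that per-example gradient clipping really bounds the influence of any single record, since swapping one example can move the clipped gradient sum by at most 2C. The helper `clipped_sum` and all constants here are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def clipped_sum(grads, clip_bound):
    # Clip each per-example gradient to norm <= clip_bound, then sum.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    return (grads * np.minimum(1.0, clip_bound / (norms + 1e-12))).sum(axis=0)

C = 1.0
grads = rng.normal(size=(32, 8))            # stand-in per-example gradients
neighbor = grads.copy()
neighbor[0] = 100.0 * rng.normal(size=8)    # swap one example for an extreme outlier

# Replacing a single example can move the clipped sum by at most 2 * C,
# which is the sensitivity the noise is calibrated against.
diff = np.linalg.norm(clipped_sum(grads, C) - clipped_sum(neighbor, C))
assert diff <= 2 * C + 1e-9, "clipping does not bound per-example influence"
```

A check like this runs before any private training begins, so it costs no privacy budget.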
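Below is a minimal sketch of how an adaptive clipping bound might plug into DP-SGD, assuming a simple rule that tracks a running median of observed gradient norms. The rule, the helper `dp_sgd_update`, and the constants are illustrative assumptions rather than the paper's exact method; note also that in a complete DP pipeline the norm statistic would itself need to be estimated privately, since raw norms depend on the training data.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_update(per_example_grads, clip_bound, noise_multiplier):
    # Standard DP-SGD noising: clip each per-example gradient, average,
    # and add Gaussian noise calibrated to the clipping bound.
    norms = np.linalg.norm(per_example_grads, axis=1)
    scale = np.minimum(1.0, clip_bound / (norms + 1e-12))
    clipped = per_example_grads * scale[:, None]
    noise = rng.normal(0.0, noise_multiplier * clip_bound,
                       size=per_example_grads.shape[1])
    return clipped.mean(axis=0) + noise / len(per_example_grads), norms

clip_bound = 1.0        # initial guess only; adapted during training
noise_multiplier = 1.1  # the remaining fixed privacy knob

for step in range(100):
    grads = rng.normal(size=(64, 10))  # stand-in for real per-example gradients
    update, norms = dp_sgd_update(grads, clip_bound, noise_multiplier)
    # Adapt the bound toward a running median of gradient norms, removing
    # one hand-tuned privacy hyperparameter. (Assumed rule; a full DP
    # pipeline must compute this statistic under differential privacy.)
    clip_bound = 0.9 * clip_bound + 0.1 * np.median(norms)
```

The design point is that the clipping bound stops being a free hyperparameter searched over private data, which is exactly the kind of tuning the abstract identifies as eating into the privacy budget.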