In this post I will provide a brief overview of the paper *“Certifying Some Distributional Robustness with Principled Adversarial Training”*. It assumes good knowledge of stochastic optimisation and adversarial robustness. This work is a positive step towards training neural networks that are robust to small perturbations of their inputs, which may stem from adversarial attacks.

*If you are interested in a PyTorch re-implementation or any derivations, feel free to get in touch!*

**Contributions**

This work makes two key contributions:

- proposes an adversarial procedure to train distributionally robust neural network models, a problem that is otherwise intractable. This involves augmenting parameter updates with worst-case adversarial distributions within a certain Wasserstein distance from a nominal distribution created from the training data
- for smooth losses and medium levels of required robustness, the training procedure has theoretical guarantees on its computational and statistical performance; for higher adversarial robustness, it can be used as a heuristic approach. Its major benefit is then simplicity and applicability across many models and machine learning paradigms (e.g. supervised and reinforcement learning).

**Adversarial training procedure**

In stochastic optimisation, the aim is to minimise an expected loss $\mathbb{E}_{P_0}[\ell(\theta; Z)]$ over a parameter $\theta \in \Theta$, for a training distribution $Z \sim P_0$. In distributionally robust optimisation, the aim is to minimise the following:

$$
\underset{\theta \in \Theta}{\text{minimise}} \; \sup_{P \in \mathcal{P}} \mathbb{E}_{P}\left[\ell(\theta; Z)\right] \tag{1}
$$

where $\mathcal{P}$ is a postulated class of distributions around the data-generating distribution $P_0$. The choice of $\mathcal{P}$ influences both the robustness guarantees and computability. This work considers robustness regions defined by Wasserstein-based uncertainty sets $\mathcal{P} = \{P : W_c(P, P_0) \le \rho\}$, where $\rho$ defines the neighbourhood around $P_0$, and $c(z, z')$ is the “cost” for an adversary to perturb $z$ to $z'$ (the authors typically use $c(z, z') = \lVert z - z' \rVert_p^2$ and set $p = 2$ in their experiments).
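As a concrete illustration of the transport cost above, here is a minimal sketch; the function name `transport_cost` and the `p` parameter default are my own choices, not from the paper:

```python
import numpy as np

def transport_cost(z, z_prime, p=2):
    """Cost c(z, z') = ||z - z'||_p^2 for an adversary to move sample z to z'.

    p = 2 (squared Euclidean distance) matches the setting the authors use
    in their experiments.
    """
    return np.linalg.norm(z - z_prime, ord=p) ** 2

z = np.array([1.0, 2.0])
z_adv = np.array([1.5, 2.0])
transport_cost(z, z_adv)  # squared Euclidean distance: 0.5**2 = 0.25
```

Larger costs mean the adversary must “pay” more to move a sample, so for a fixed budget $\rho$ the perturbations stay closer to the data.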

As solving the min-max problem (1) is analytically intractable for deep learning and other complex models for arbitrary $\rho$, the authors reformulate it using a Lagrangian relaxation with a fixed penalty parameter $\gamma \ge 0$:

$$
\underset{\theta \in \Theta}{\text{minimise}} \; \sup_{P} \left\{ \mathbb{E}_{P}\left[\ell(\theta; Z)\right] - \gamma W_c(P, P_0) \right\} = \underset{\theta \in \Theta}{\text{minimise}} \; \mathbb{E}_{Z \sim P_0}\left[\phi_\gamma(\theta; Z)\right] \tag{2}
$$

$$
\phi_\gamma(\theta; z_0) := \sup_{z \in \mathcal{Z}} \left\{ \ell(\theta; z) - \gamma \, c(z, z_0) \right\} \tag{2b}
$$

the usual loss $\ell(\theta; z)$ has been replaced by the robust surrogate $\phi_\gamma(\theta; z)$, which allows for adversarial perturbations of $z$, modulated by the penalty $\gamma$. As $P_0$ is unknown, the penalty problem (2) is solved with the empirical distribution $\hat{P}_n$:

$$
\underset{\theta \in \Theta}{\text{minimise}} \; \mathbb{E}_{Z \sim \hat{P}_n}\left[\phi_\gamma(\theta; Z)\right] \tag{3}
$$
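The robust surrogate can be approximated by gradient ascent on the perturbed sample. The sketch below is my own toy setting (a smooth loss $\ell(z) = \sum_i \sin z_i$ and cost $c(z, z_0) = \tfrac{1}{2}\lVert z - z_0 \rVert_2^2$), not the authors' code:

```python
import numpy as np

def robust_surrogate_argmax(loss_grad_z, z0, gamma, lr=0.1, steps=100):
    """Approximate argmax_z [ loss(z) - gamma * ||z - z0||^2 / 2 ] by
    gradient ascent starting from the clean sample z0.

    loss_grad_z: callable returning the gradient of the loss w.r.t. z.
    """
    z = z0.copy()
    for _ in range(steps):
        # Ascend on the penalised objective: grad of loss minus grad of penalty.
        z += lr * (loss_grad_z(z) - gamma * (z - z0))
    return z

# Toy smooth loss l(z) = sum(sin(z)): its z-gradient is cos(z).
z0 = np.zeros(2)
z_adv = robust_surrogate_argmax(lambda z: np.cos(z), z0, gamma=5.0)
# z_adv solves cos(z) = 5 z component-wise, i.e. z ~ 0.196 per coordinate.
```

Because the penalty term pulls the maximiser back towards $z_0$, a larger $\gamma$ yields smaller perturbations, matching the duality between $\gamma$ and the robustness level $\rho$.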

As we will discuss later on, the reformulated objective ensures that moderate levels of robustness against adversarial perturbations are achievable at no computational or statistical cost for smooth losses $\ell$. This exploits the key insight that for a large enough penalty $\gamma$ (by duality, a small enough robustness $\rho$), the robust surrogate function in (2b) is strongly concave and hence easy to optimise if $\ell(\theta; z)$ is smooth in $z$. This implies that stochastic gradient methods applied to (2) have convergence guarantees similar to those of non-robust methods.

By inspection, we can see that for large $\gamma$, the term $-\gamma c(z, z_0)$ dominates. If $c$ is designed to be strongly convex, then $-\gamma c(z, z_0)$, and hence the surrogate objective, would be strongly concave. More formally, this key insight relies on the assumptions that the cost $c$ is 1-strongly convex and that the loss is smooth, in the sense that there is some $L$ for which $z \mapsto \nabla_z \ell(\theta; z)$ is $L$-Lipschitz. The former gives a lower bound for $c$:

$$
c(z', z_0) \ge c(z, z_0) + \langle \nabla_z c(z, z_0), z' - z \rangle + \tfrac{1}{2} \lVert z' - z \rVert^2
$$

and the latter, along with a Taylor series expansion of $\ell$ around $z' = z$, an upper bound for $\ell$:

$$
\ell(\theta; z') \le \ell(\theta; z) + \langle \nabla_z \ell(\theta; z), z' - z \rangle + \tfrac{L}{2} \lVert z' - z \rVert^2
$$

Combining them results in:

$$
\ell(\theta; z') - \gamma c(z', z_0) \le \ell(\theta; z) - \gamma c(z, z_0) + \langle \nabla_z \left( \ell(\theta; z) - \gamma c(z, z_0) \right), z' - z \rangle + \tfrac{L - \gamma}{2} \lVert z' - z \rVert^2 \tag{4}
$$

The last term combines the quadratic terms of the two bounds, using that $c$ is twice differentiable. For $\gamma \ge L$, i.e. $L - \gamma \le 0$ (a negative quadratic coefficient), (4) gives us the first-order condition for $(\gamma - L)$-strong concavity of $z \mapsto \ell(\theta; z) - \gamma c(z, z_0)$. To reiterate: when the loss is smooth enough in $z$ and the penalty $\gamma$ is large enough (corresponding to less robustness), computing the surrogate (2b) is a strongly-concave optimisation problem.
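The threshold can be checked numerically. In this sketch (my own hypothetical quadratic loss, chosen so the smoothness constant is exactly $L = 3$), a central finite difference recovers the second derivative $L - \gamma$ of the inner objective, which flips sign at $\gamma = L$:

```python
import numpy as np

L, z0 = 3.0, 1.0
loss = lambda z: 0.5 * L * z**2        # gradient L*z is L-Lipschitz with L = 3
cost = lambda z: 0.5 * (z - z0)**2     # 1-strongly convex transport cost

def inner_objective(z, gamma):
    # The robust surrogate's inner objective: l(z) - gamma * c(z, z0).
    return loss(z) - gamma * cost(z)

def second_diff(f, z, h=1e-4):
    # Central finite-difference estimate of f''(z).
    return (f(z + h) - 2 * f(z) + f(z - h)) / h**2

concave_case = second_diff(lambda z: inner_objective(z, 5.0), 0.3)     # gamma > L: ~ -2
nonconcave_case = second_diff(lambda z: inner_objective(z, 1.0), 0.3)  # gamma < L: ~ +2
```

With $\gamma = 5 > L$ the inner problem is $(\gamma - L)$-strongly concave and easy to maximise; with $\gamma = 1 < L$ it is convex in $z$ and the supremum is no longer well-behaved.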

**Computational guarantees**

The penalty formulation (3) relaxes the requirement for a prescribed amount of robustness $\rho$, replacing the constraint with a fixed Lagrangian penalty $\gamma$. The authors develop a stochastic gradient descent (SGD) procedure to optimise it, motivated by the observation:

$$
\nabla_\theta \phi_\gamma(\theta; z_0) = \nabla_\theta \ell(\theta; z^\star(z_0, \theta)), \quad \text{where } z^\star(z_0, \theta) = \underset{z \in \mathcal{Z}}{\arg\max} \left\{ \ell(\theta; z) - \gamma \, c(z, z_0) \right\}
$$

which holds under two assumptions:

- the cost function $c$ is continuous and 1-strongly convex in its first argument (e.g. $c(z, z_0) = \lVert z - z_0 \rVert_2^2$)
- the loss satisfies certain Lipschitzian smoothness conditions
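The gradient identity above (an envelope-theorem argument) can be sanity-checked numerically. This is my own toy construction, with a hypothetical loss $\ell(\theta; z) = \theta \sin z$ and cost $c(z, z_0) = \tfrac{1}{2}(z - z_0)^2$:

```python
import numpy as np

gamma, z0 = 4.0, 0.5

def zstar(theta, lr=0.05, steps=500):
    """Inner maximiser of theta*sin(z) - gamma*(z - z0)**2/2 via gradient ascent."""
    z = z0
    for _ in range(steps):
        z += lr * (theta * np.cos(z) - gamma * (z - z0))
    return z

def phi(theta):
    """Robust surrogate phi_gamma(theta; z0) evaluated at its maximiser."""
    z = zstar(theta)
    return theta * np.sin(z) - 0.5 * gamma * (z - z0) ** 2

theta, h = 1.3, 1e-5
fd_grad = (phi(theta + h) - phi(theta - h)) / (2 * h)  # finite-difference gradient
envelope_grad = np.sin(zstar(theta))  # grad_theta of the loss at the maximiser
# The two agree: differentiating through z*(theta) contributes nothing,
# because the inner problem is stationary at its maximiser.
```

This is exactly why plugging the adversarial point $z^\star$ into the ordinary gradient $\nabla_\theta \ell$ gives an unbiased ascent direction for the surrogate, so standard SGD machinery applies unchanged.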

The resulting SGD procedure is summarised in Algorithm 1:
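The procedure can be sketched as follows. This is a paraphrase of Algorithm 1 on a toy linear-regression problem of my own choosing (perturbing inputs only, with cost $\tfrac{1}{2}\lVert x - x_0 \rVert_2^2$); all hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y = <theta_true, x> + noise; the adversary perturbs x.
theta_true = np.array([1.0, -2.0])
X = rng.normal(size=(64, 2))
y = X @ theta_true + 0.1 * rng.normal(size=64)

def loss_grads(theta, x, yi):
    """Gradients of the squared loss 0.5*(theta.x - y)^2 w.r.t. theta and x."""
    r = theta @ x - yi
    return r * x, r * theta

def wrm_sgd(gamma=10.0, lr=0.05, inner_lr=0.1, inner_steps=15, epochs=30):
    """Sketch of Algorithm 1: for each sample, run inner gradient ascent to find
    an adversarial x_hat maximising l(theta; x) - gamma*||x - x0||^2/2, then
    take an SGD step on theta evaluated at x_hat."""
    theta = np.zeros(2)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x0, yi = X[i], y[i]
            x = x0.copy()
            for _ in range(inner_steps):            # inner maximisation
                _, gx = loss_grads(theta, x, yi)
                x += inner_lr * (gx - gamma * (x - x0))
            gt, _ = loss_grads(theta, x, yi)
            theta -= lr * gt                        # outer descent step
    return theta

theta_wrm = wrm_sgd()
```

Note that with $\gamma = 10$ the inner problem stays strongly concave here (the loss's $z$-curvature is $\lVert \theta \rVert^2 \approx 5$ near the solution), so the inner ascent provably converges.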

The convergence properties of this algorithm depend on the loss:

- when the loss $\ell$ is convex in $\theta$ and $\gamma$ is large enough (not too much robustness) so that $z \mapsto \ell(\theta; z) - \gamma c(z, z_0)$ is concave, Algorithm 1 is efficiently solvable, converging at the standard $O(1/\sqrt{T})$ rate of stochastic gradient methods
- when the loss is non-convex in $\theta$, the SGD method can converge to a stationary point at the same rate as standard smooth non-convex optimisation when $\gamma \ge L$ (as shown by Theorem 2 in the paper). This theorem also suggests that only approximately maximising the surrogate objective has a limited effect on convergence.

*If the loss is not smooth in $z$, the inner supremum (2b) is in general NP-hard to compute, as is the case for non-smooth deep networks. In practice, distributionally robust optimisation can easily become tractable for deep learning by replacing ReLUs with sigmoids, ELUs or other smooth activations.*
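The smoothness distinction is easy to see at the activation level. A small check (my own illustration): the ReLU derivative jumps at the origin, whereas the ELU derivative is continuous there, which is what makes losses built on ELUs smooth in $z$:

```python
import numpy as np

# Derivatives of the two activations (ELU with alpha = 1).
relu_grad = lambda z: (z > 0).astype(float)
elu_grad = lambda z: np.where(z > 0, 1.0, np.exp(z))

eps = 1e-6
# Size of the derivative's jump across the origin.
relu_jump = relu_grad(np.array([eps]))[0] - relu_grad(np.array([-eps]))[0]
elu_jump = elu_grad(np.array([eps]))[0] - elu_grad(np.array([-eps]))[0]
# relu_jump is exactly 1.0 (discontinuous gradient, non-smooth loss);
# elu_jump is ~1e-6 (continuous gradient, smooth loss).
```

A discontinuous gradient means no finite Lipschitz constant $L$ for $\nabla_z \ell$, so no admissible penalty $\gamma \ge L$ exists and the strong-concavity argument breaks down.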

**Certificate of robustness and generalisation**

Algorithm 1 provably learns to protect against adversarial perturbations of the form (3) *on the training set*. The authors also show that such procedures generalise, helping to prevent attacks on the test set. There are two key results in the corresponding section:

- an efficiently computable, data-dependent upper bound on the worst-case population objective $\sup_{P : W_c(P, P_0) \le \rho} \mathbb{E}_P[\ell(\theta; Z)]$, for any arbitrary level of robustness $\rho$. This bound is tight for $\rho = \hat{\rho}_n$, the level of robustness achieved for the empirical distribution by solving (3) (this gives parameters $\theta_{\text{WRM}}$).

- the adversarial perturbations on the training set generalise: solving the empirical penalty problem (3) guarantees a similar level of robustness as directly solving its population counterpart (2).
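Schematically, the certificate has the shape "penalty times budget plus average surrogate loss". The sketch below is my own simplification that omits the concentration term of the actual bound; the function name and the example numbers are hypothetical:

```python
import numpy as np

def certificate(phi_values, gamma, rho):
    """Data-dependent upper bound (omitting the concentration/model-complexity
    term) on the worst-case objective sup_{P: W_c(P, P0) <= rho} E_P[loss]:

        gamma * rho + (1/n) * sum_i phi_gamma(theta; z_i)

    phi_values: robust surrogate values on the n training samples.
    """
    return gamma * rho + np.mean(phi_values)

phi_vals = np.array([0.2, 0.35, 0.15, 0.3])
certificate(phi_vals, gamma=2.0, rho=0.1)  # 2*0.1 + 0.25 = 0.45
```

Everything on the right-hand side is computable from the training run itself, which is what makes the certificate practical: no access to the population distribution is needed.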

**Bounds on smoothness of neural networks**

Since the above guarantees only apply for losses satisfying the smoothness condition $\gamma \ge L$, the authors provide conservative upper bounds on the Lipschitz constant $L$ of the loss. However, due to the conservative nature of the bound, choosing $\gamma$ larger than this value (so that the aforementioned theoretical results apply) may not yield appreciable robustness in practice.

**Visualising the benefits of certified robustness**

To demonstrate the certified robustness of their WRM approach (short for Wasserstein Risk Minimisation), the authors devise a simple supervised learning task. The underlying model is a small neural network with either all-ReLU or all-ELU activations between layers. It is benchmarked against two common baselines: ERM (Empirical Risk Minimisation) and FGM (Fast Gradient Method).

Figure 1 shows the classification boundary learnt by each training procedure (separating blue from orange samples). For both activations, WRM pushes the classification boundary further outwards than ERM and FGM; intuitively, adversarial examples come from pushing blue points outwards across the boundary. WRM also seems less affected by sensitivities in the data than ERM and FGM, as evidenced by its more symmetrical shape. WRM with ELU in particular yields an axisymmetric classification boundary that hedges against adversarial perturbations in all directions. This demonstrates the certified level of robustness proven in this work.

The authors also demonstrate the certificate of robustness on the worst-case performance for various levels of robustness $\rho$, both on the same toy dataset and on MNIST:

Experimental results can be reproduced using the official implementation of the paper in TensorFlow.

**Limitations**

- the adversarial training procedure is only tractable for smooth losses (i.e. the gradient of the loss must not change abruptly). Specifically, for the inner supremum in (3) to be strongly concave, the Lagrangian penalty parameter must satisfy $\gamma \ge L$, where $L$ is a problem-dependent smoothness parameter that is most often unknown and hard to approximate
- convergence for non-convex SGD only applies for small levels of robustness and to a limited set of Wasserstein costs. In practice, the method does not outperform baseline models under large adversarial attacks either
- the upper bound on the worst-case population objective and the generalisation guarantee rely on a measure of model complexity that can become prohibitively large for neural networks