Deep Declarative Networks

A New Hope — Paper Review
Andrew Sonin

The paper “Deep Declarative Networks: A New Hope” by Gould et al. proposes an architectural paradigm that blends optimization directly into deep learning models. Rather than composing only explicitly defined differentiable functions, Deep Declarative Networks (DDNs) let certain layers be defined implicitly as the solution of an optimization problem, in a way that remains fully differentiable and compatible with backpropagation.

🔍 What’s the big idea?

At the core of DDNs is the insight that many useful computations in machine learning (e.g., projections onto constraint sets, solving small QPs, estimating parameters under constraints) are more naturally posed as optimization problems than as function approximations. Instead of approximating such operations with trainable neural nets, DDNs define them declaratively, as an optimization problem, and embed that problem in the network as a layer.
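
Concretely, and roughly in the paper's notation, a declarative node takes an input x and returns

$$
y(x) \in \operatorname*{argmin}_{u \in C} \; f(x, u),
$$

so the layer is specified by an objective f and a constraint set C rather than by an explicit formula for its output.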

These “declarative nodes” return the solution to an optimization problem, and the authors show how to compute gradients through these nodes using the implicit function theorem — a classic tool in applied math that’s having a bit of a renaissance in differentiable programming.
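
In the unconstrained, twice-differentiable case the recipe is short: differentiate the stationarity condition $\mathrm{D}_{Y} f(x, y(x)) = 0$ with respect to x to obtain

$$
\mathrm{D} y(x) = -\bigl(\mathrm{D}^{2}_{YY} f(x, y)\bigr)^{-1} \, \mathrm{D}^{2}_{XY} f(x, y),
$$

provided the Hessian $\mathrm{D}^{2}_{YY} f$ is invertible at the solution; the paper extends the same idea to equality- and inequality-constrained problems through the Lagrangian. A nice consequence is that the gradient depends only on the solution y, not on how the inner solver found it, so there is no need to unroll or differentiate through solver iterations.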

🧠 Why does this matter?

This approach gives models inductive bias grounded in optimization structure. For example, a model can directly project predictions onto a feasible region, or perform MAP estimation over structured variables as part of the forward pass. Instead of learning this structure from scratch, you just declare it.
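
To give one concrete shape this can take, a Euclidean projection layer onto a feasible set C can be declared as

$$
y(x) = \operatorname*{argmin}_{u \in C} \; \tfrac{1}{2} \lVert u - x \rVert^{2},
$$

and its backward pass comes from the same implicit differentiation machinery, with no hand-derived gradient.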

DDNs thus combine the flexibility of deep learning with the precision and control of optimization. This opens up powerful hybrid modeling options, especially in settings that call for structured outputs, constrained inference, or optimization in the loop.

🛠️ Technical highlights
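
To make the mechanics concrete, below is a minimal, self-contained sketch of my own (not the authors' reference implementation) of a scalar "robust pooling" declarative node in the spirit of the paper's examples: the forward pass calls a generic solver for y(x) = argmin_u Σᵢ φ(u - xᵢ) with a pseudo-Huber penalty φ, and the backward pass gets dy/dx from the implicit function theorem instead of unrolling the solver. The function names, the penalty scale DELTA, and the use of SciPy are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical illustration, not the authors' code: a scalar "robust pooling"
# declarative node y(x) = argmin_u sum_i phi(u - x_i) with a pseudo-Huber penalty.

DELTA = 1.0  # pseudo-Huber scale (assumed value)


def phi(z):
    """Pseudo-Huber penalty: quadratic near zero, roughly linear in the tails."""
    return DELTA ** 2 * (np.sqrt(1.0 + (z / DELTA) ** 2) - 1.0)


def phi_pp(z):
    """Second derivative of the pseudo-Huber penalty."""
    return (1.0 + (z / DELTA) ** 2) ** (-1.5)


def forward(x):
    """Forward pass: solve the inner problem y = argmin_u sum_i phi(u - x_i)."""
    return minimize_scalar(lambda u: np.sum(phi(u - x))).x


def backward(x, y):
    """Backward pass via the implicit function theorem.

    Differentiating the stationarity condition sum_i phi'(y - x_i) = 0 in x gives
    dy/dx_i = phi''(y - x_i) / sum_j phi''(y - x_j), which uses only the solution y.
    """
    w = phi_pp(y - x)
    return w / np.sum(w)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=5)
    y = forward(x)
    grad = backward(x, y)

    # Sanity check against central finite differences.
    eps = 1e-5
    fd = np.array([(forward(x + eps * e) - forward(x - eps * e)) / (2 * eps)
                   for e in np.eye(len(x))])
    print("implicit gradient: ", grad)
    print("finite differences:", fd)
```

In a real DDN these pieces would be wired into an autograd framework, so the solver-backed layer trains end to end like any other layer.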

🤔 Final thoughts

This paper is an excellent example of the growing trend toward differentiable programming and hybrid learning systems. It offers a clean, elegant way to incorporate domain knowledge (via optimization problems) into deep networks without sacrificing end-to-end training.

If you’re working on models that require structured outputs, constrained inference, or optimization-in-the-loop, Deep Declarative Networks deserve a close look. They might just be the right abstraction layer between deep learning and mathematical modeling.

Recommended for: ML researchers, applied scientists in control/vision, and anyone interested in the convergence of optimization and learning.

📚 References

  1. Gould, S., Hartley, R., & Campbell, D. (2020). Deep Declarative Networks: A New Hope. arXiv:1909.04866.
  2. Gould, S., et al. (2016). On Differentiating Parameterized Argmin and Argmax Problems with Application to Bi-level Optimization. arXiv:1607.05447.
  3. Agrawal, A., Amos, B., Barratt, S., Boyd, S., & Kolter, J. Z. (2019). Differentiable Convex Optimization Layers. arXiv:1910.12430.

