From Discrete to Smooth

A Practical Guide to Optimization Problems
Andrew Sonin

Many real-world tasks can be posed as optimization problems, from resource allocation and signal processing to machine learning and operations research. The general form is:

$$ \min_{x} \; f(x) \quad \text{subject to} \quad x \in \mathcal{C} $$

where \( f(x) \) is the objective function and \( \mathcal{C} \) is the feasible region.

In practice, especially in fields like high-frequency trading and quantitative finance, it’s common to see practitioners get stuck in discrete formulations of optimization problems. While discrete methods have their place, there are many cases where insisting on a discrete approach leads to significant inefficiencies — both computationally and financially. I’ve often observed situations where a problem that naturally lends itself to smooth optimization techniques is instead attacked with brute-force grid search or other combinatorial methods.

This article is a response to that pattern — it’s a guide to recognizing when and how a discrete problem can (and should) be relaxed into a smooth one, saving time, money, and complexity. It outlines a pragmatic strategy: reduce complexity step by step, ideally toward smooth and convex formulations.


1. Discrete vs. Smooth Optimization

🚫 Discrete Optimization

When some variables must be integers (e.g., \( x \in \{0,1\}^n \)), the problem becomes discrete or combinatorial. These problems are notoriously hard:

  - the search space grows exponentially with the number of variables;
  - many formulations (e.g., integer programming) are NP-hard in general;
  - there are no gradients to exploit, so solvers fall back on enumeration, branch-and-bound, or heuristics.

⚠️ Avoid discrete optimization if possible. Instead, try relaxing the problem into a continuous one.

✅ Smooth Optimization

If variables are allowed to take real values (e.g., \( x \in [0,1]^n \)), then gradient-based methods and other powerful tools become available:

  - gradients (and Hessians) provide cheap local information about where to move next;
  - mature solvers with strong guarantees exist for convex and quadratic problems;
  - automatic differentiation makes even high-dimensional problems tractable.


2. A Practical Strategy: Reduce to Smooth Optimization

Whenever you’re facing a hard problem — especially one involving integer variables or non-differentiable objectives — follow this pragmatic funnel:

  1. Try to reach a convex formulation (Step 1).
  2. If that fails, keep the problem quadratic with linear constraints and use Lagrange multipliers (Step 2).
  3. As a last resort, settle for anything smooth and differentiable and use gradient-based optimization (Step 3).


✅ Step 1: Try Convex Optimization

If you can reformulate the problem so that:

  1. the objective \( f(x) \) is convex, and
  2. the feasible region \( \mathcal{C} \) is convex (e.g., linear equality constraints and convex inequality constraints),

then convex optimization solvers will likely give reliable global minima efficiently.
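
As a concrete illustration, here is a minimal CVXPY sketch of a convex (in fact linear) program with box constraints. The data \( c \), \( A \), and \( b \) are synthetic placeholders, not taken from any real problem:

```python
# Minimal CVXPY sketch: linear objective, linear equality constraints, box bounds.
# All data below is randomly generated for illustration only.
import cvxpy as cp
import numpy as np

n, m = 5, 2
rng = np.random.default_rng(0)
c = rng.standard_normal(n)          # linear cost vector (placeholder)
A = rng.standard_normal((m, n))     # constraint matrix (placeholder)
b = A @ np.full(n, 0.5)             # chosen so the problem is feasible

x = cp.Variable(n)
objective = cp.Minimize(c @ x)
constraints = [A @ x == b, x >= 0, x <= 1]
prob = cp.Problem(objective, constraints)
prob.solve()                         # convex problem: the solver returns a global optimum

print("status:", prob.status)
print("x* =", x.value)
```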

🧠 You can verify convexity by checking that the Hessian is positive semi-definite, or by using composition rules (see Boyd & Vandenberghe).
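
For a quadratic objective \( f(x) = \tfrac{1}{2} x^T Q x + c^T x \), the Hessian is simply \( Q \), so the check reduces to inspecting eigenvalues. A tiny numerical sketch (the matrix here is made up for illustration):

```python
# Numerical convexity check for a quadratic objective
# f(x) = 0.5 * x^T Q x + c^T x, whose Hessian is Q.
import numpy as np

Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # illustrative symmetric matrix

eigenvalues = np.linalg.eigvalsh(Q)          # Q is symmetric, so eigvalsh is appropriate
is_convex = np.all(eigenvalues >= -1e-12)    # PSD up to numerical tolerance
print(eigenvalues, "convex:", is_convex)
```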

If convexity cannot be guaranteed — say, due to regularization or relaxations — the solver may fail or return local minima.


🧮 Step 2: Use the Lagrangian Method with Quadratic Forms

If convexity fails, keep things quadratic and smooth. That is:

  1. the objective is quadratic, e.g. \( f(x) = \tfrac{1}{2} x^T Q x + c^T x \), and
  2. the constraints are linear equalities, \( Ax = b \).

Then solve using the method of Lagrange multipliers. Construct:

$$ \mathcal{L}(x, \lambda) = f(x) + \lambda^T (Ax - b) $$

Solving the KKT (Karush-Kuhn-Tucker) conditions gives the stationarity and feasibility equations:

$$ \nabla_x \mathcal{L}(x, \lambda) = \nabla f(x) + A^T \lambda = 0, \qquad Ax = b $$

For a quadratic objective these reduce to \( Qx + c + A^T \lambda = 0 \) together with \( Ax = b \), i.e. a single linear system in \( (x, \lambda) \).

⚠️ This works only if the constraint matrix has full rank and satisfies regularity conditions (e.g., constraint qualification). Otherwise, numerical instabilities may arise.
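
A small sketch of assembling and solving that KKT system, assuming \( f(x) = \tfrac{1}{2} x^T Q x + c^T x \) with a positive semi-definite \( Q \) and a full-row-rank \( A \); all numbers below are illustrative:

```python
# Equality-constrained quadratic program solved via its KKT linear system:
#   [ Q  A^T ] [x]   [-c]
#   [ A   0  ] [λ] = [ b]
import numpy as np

Q = np.array([[2.0, 0.0],
              [0.0, 1.0]])          # illustrative PSD Hessian
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])          # single constraint: x1 + x2 = 1
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
KKT = np.block([[Q, A.T],
                [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])

sol = np.linalg.solve(KKT, rhs)
x_opt, lam = sol[:n], sol[n:]
print("x* =", x_opt, "lambda =", lam)
```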


🔧 Step 3: Use Gradient-Based Optimization

If all else fails and the objective is non-convex but differentiable, use gradient descent or variants like Adam. Key tools:

  - automatic differentiation frameworks (e.g., PyTorch or JAX) to obtain gradients;
  - step-size schedules and momentum to stabilize convergence;
  - multiple random restarts, since only local minima are guaranteed.
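
As an illustration, here is a compact hand-rolled Adam loop on a made-up non-convex objective; in practice you would typically rely on an autodiff framework rather than coding the gradient and update by hand:

```python
# Compact Adam loop on a toy non-convex (but smooth) objective.
import numpy as np

def f(x):
    return np.sum(x**4 - 3 * x**2 + x)       # non-convex, smooth, illustrative

def grad_f(x):
    return 4 * x**3 - 6 * x + 1              # analytic gradient of f

x = np.array([0.5, -0.5, 2.0])               # arbitrary starting point
m = np.zeros_like(x)
v = np.zeros_like(x)
lr, beta1, beta2, eps = 0.05, 0.9, 0.999, 1e-8

for t in range(1, 2001):
    g = grad_f(x)
    m = beta1 * m + (1 - beta1) * g          # first-moment estimate
    v = beta2 * v + (1 - beta2) * g**2       # second-moment estimate
    m_hat = m / (1 - beta1**t)               # bias correction
    v_hat = v / (1 - beta2**t)
    x -= lr * m_hat / (np.sqrt(v_hat) + eps)

print("x =", x, "f(x) =", f(x))              # a local minimum, not necessarily global
```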


3. Illustrative Example: Relaxing a Binary Problem

Suppose we are tasked with choosing a subset of assets to include in a portfolio. Each asset either is included (1) or not (0). The goal is to minimize risk while satisfying certain constraints on return, sector exposure, or liquidity. Such selection naturally leads to a binary optimization problem. Let’s formalize a simplified version of this situation:

Given:

  - a cost (or risk) vector \( c \in \mathbb{R}^n \),
  - a constraint matrix \( A \) and right-hand side \( b \) encoding the return, sector-exposure, or liquidity requirements,
  - binary decision variables \( x \in \{0,1\}^n \), where \( x_i = 1 \) means asset \( i \) is included,

the problem is:

$$ \min_{x \in \{0,1\}^n} \; c^T x \quad \text{s.t.} \quad Ax = b $$

This is discrete and hard. Let’s relax it:

  1. Allow \( x \in [0,1]^n \) instead of binary.
  2. Add a regularization term that encourages binary-like solutions.

New problem:

$$ \min_{x \in [0,1]^n} \; c^T x + \lambda \sum_{i=1}^n x_i(1 - x_i) \quad \text{s.t.} \quad Ax = b $$

Here, \(\lambda>0\) is a regularization parameter — not a Lagrange multiplier — that balances the original objective and the penalty term encouraging binary solutions.

This objective is differentiable but not convex due to the \( x_i(1 - x_i) \) terms. You can now:

  1. solve the relaxed problem with the tools from Step 2 or Step 3, respecting the box constraints \( x \in [0,1]^n \);
  2. increase \( \lambda \) gradually to push the solution toward binary values;
  3. round the final solution and verify feasibility against the original constraints.
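
A possible sketch of this recipe with SciPy's SLSQP solver, which handles both the box bounds and the linear equality constraint. The data and the choice of \( \lambda \) below are illustrative only; in practice c, A, and b come from your portfolio model:

```python
# Relaxed binary selection: min c^T x + lam * sum(x * (1 - x))
# subject to A x = b and 0 <= x <= 1. Illustrative data only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n = 8
c = rng.standard_normal(n)                 # per-asset cost / risk proxy (placeholder)
A = np.ones((1, n))                        # e.g. "select exactly 3 assets"
b = np.array([3.0])
lam = 0.5                                  # regularization weight, tuned in practice

def objective(x):
    return c @ x + lam * np.sum(x * (1 - x))

def objective_grad(x):
    return c + lam * (1 - 2 * x)

res = minimize(
    objective,
    x0=np.full(n, 0.5),
    jac=objective_grad,
    bounds=[(0.0, 1.0)] * n,
    constraints=[{"type": "eq", "fun": lambda x: A @ x - b}],
    method="SLSQP",
)

x_relaxed = res.x
x_binary = (x_relaxed > 0.5).astype(int)   # round, then re-check feasibility
print("relaxed:", np.round(x_relaxed, 3))
print("rounded:", x_binary)
```

Because the penalty term is non-convex, SLSQP only guarantees a local minimum here, which is exactly the trade-off described in Step 3.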


4. Summary: A Decision Table for Optimization

| Problem Type | Tractability | Theory | Recommended Method |
| --- | --- | --- | --- |
| Discrete / integer | Very difficult | Weak | Avoid or relax |
| Smooth, non-convex | Moderate | Limited | Gradient descent (Adam) |
| Quadratic objective + linear constraints | Solvable | Strong | Lagrange multipliers |
| Smooth & convex | Easy | Strong | Convex solvers / CVXPY |

Golden Rule: When in doubt, reduce your problem to something smooth, quadratic, and convex — in that order.


References

  - Boyd, S., and Vandenberghe, L. Convex Optimization. Cambridge University Press, 2004.
