Probability of constraint satisfaction and Robust Optimization

This post explores some of the properties of robust mean-variance optimization.

Consider the classic mean-variance optimization (MVO) model to select a portfolio $\mathbf{x} \in \mathbb{R}^N$:

$$ \min_{\mathbf{x}} \; \mathbf{x}^{\intercal} \mathbf{Q} \mathbf{x} \quad \text{s.t.} \quad \mathbf{1}^{\intercal}\mathbf{x} = 1, \ \mathbf{x} \geq \mathbf{0}, \ \boldsymbol{\mu}^{\intercal}\mathbf{x} \geq r_{\text{min}}, $$

where $r_{\text{min}}$ is the target return, $\mathbf{Q}$ is the covariance matrix of asset returns, and $\boldsymbol{\mu}$ is the vector of expected returns.

We do not know the true values of $\boldsymbol{\mu}$ and $\mathbf{Q}$. Instead, we only have data to estimate these quantities. We form the corresponding estimates $\hat{\boldsymbol{\mu}}$ and $\hat{\mathbf{Q}}$ from a dataset of returns $S = \{\mathbf{r}^{(i)}\}_{i=1}^T$ and solve the MVO model with $\boldsymbol{\mu}$ and $\mathbf{Q}$ replaced by $\hat{\boldsymbol{\mu}}$ and $\hat{\mathbf{Q}}$. Let $\mathbf{x}_{\text{MVO}}(\hat{\boldsymbol{\mu}}, \hat{\mathbf{Q}})$ denote an MVO solution obtained using the estimates $\hat{\boldsymbol{\mu}}$ and $\hat{\mathbf{Q}}$.
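As a concrete sketch of this estimation step (the asset parameters, sample size, and distribution below are illustrative assumptions, not from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 4, 250  # number of assets and number of return observations (illustrative)

# Hypothetical "true" parameters, used here only to generate a sample
mu_true = np.array([0.06, 0.08, 0.10, 0.12])
Q_true = np.diag([0.03, 0.04, 0.06, 0.09])

# Dataset S = {r^(i)}_{i=1..T}: one row per observed return vector
S = rng.multivariate_normal(mu_true, Q_true, size=T)

mu_hat = S.mean(axis=0)          # estimate of mu: the sample mean
Q_hat = np.cov(S, rowvar=False)  # estimate of Q: the N x N sample covariance
```

Re-running this with a different seed gives different `mu_hat` and `Q_hat`, which is exactly the randomness discussed next.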

What is random here? The estimates $\hat{\boldsymbol{\mu}}$ and $\hat{\mathbf{Q}}$ are random, since they depend on the random sample $S$. We can then ask, "What is the probability of meeting the return constraint?"

When the true parameters are used, $\mathbb{P}_S[\boldsymbol{\mu}^{\intercal} \mathbf{x}_{\text{MVO}}(\boldsymbol{\mu}, \mathbf{Q}) \geq r_{\text{min}}] = 1$, since $\mathbf{x}_{\text{MVO}}(\boldsymbol{\mu}, \mathbf{Q})$ is a deterministic quantity and satisfies the return constraint by construction.

Similarly, one can consider the probability of meeting the return constraint when using the estimates: $\mathbb{P}_S[\boldsymbol{\mu}^{\intercal} \mathbf{x}_{\text{MVO}}(\hat{\boldsymbol{\mu}}, \hat{\mathbf{Q}}) \geq r_{\text{min}}] = \,?$ Unfortunately, this probability does not equal 1: the optimizer tends to tilt toward assets whose returns are overestimated, so the portfolio's true expected return can fall short of the target. This drawback motivates the use of robust optimization, where we replace the constraint

$$ \boldsymbol{\mu}^{\intercal}\mathbf{x} \geq r_{\text{min}} $$

with

$$ \boldsymbol{\mu}^{\intercal}\mathbf{x} \geq r_{\text{min}} \quad \forall \boldsymbol{\mu} \in \mathcal{U}, $$

where $\mathcal{U}$ is an uncertainty set for the mean. A popular uncertainty set is the ellipsoid centered at the estimate $\hat{\boldsymbol{\mu}}$ with shape parameter

$$ \boldsymbol{\Theta} = \frac{1}{T}\begin{pmatrix} \hat{Q}_{11} & 0 & \cdots & 0 \\ 0 & \hat{Q}_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \hat{Q}_{NN} \end{pmatrix} $$

and radius $\delta$, i.e., $\mathcal{U} = \{\boldsymbol{\mu} : \|\boldsymbol{\Theta}^{-1/2}(\boldsymbol{\mu} - \hat{\boldsymbol{\mu}})\|_2 \leq \delta\}$. In this setting, the robust optimization problem can be written as:

$$ \begin{aligned} \min_{\mathbf{x}} \quad & \mathbf{x}^{\intercal} \hat{\mathbf{Q}}\, \mathbf{x} \\ \text{s.t.} \quad & \hat{\boldsymbol{\mu}}^{\intercal} \mathbf{x} - \delta \|\boldsymbol{\Theta}^{1/2} \mathbf{x}\|_2 \geq r_{\text{min}} \\ & \mathbf{1}^{\intercal} \mathbf{x} = 1 \\ & \mathbf{x} \geq \mathbf{0} \end{aligned} $$
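This is a second-order cone program, so any conic solver handles it. As a minimal sketch (not the notebook's implementation), one can also pass it to a general-purpose NLP routine such as `scipy.optimize.minimize` with SLSQP; the estimates, horizon, and parameters below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def solve_robust_mvo(mu_hat, Q_hat, T, delta, r_min):
    """Sketch: robust MVO with the diagonal ellipsoidal uncertainty set."""
    N = len(mu_hat)
    theta_sqrt = np.sqrt(np.diag(Q_hat) / T)  # diagonal of Theta^{1/2}
    constraints = [
        # budget: 1' x = 1
        {"type": "eq", "fun": lambda x: np.sum(x) - 1.0},
        # robust return: mu_hat' x - delta * ||Theta^{1/2} x||_2 >= r_min
        {"type": "ineq",
         "fun": lambda x: mu_hat @ x - delta * np.linalg.norm(theta_sqrt * x) - r_min},
    ]
    res = minimize(lambda x: x @ Q_hat @ x, np.full(N, 1.0 / N),
                   method="SLSQP", bounds=[(0.0, None)] * N,
                   constraints=constraints)
    return res.x

# Illustrative (assumed) estimates
mu_hat = np.array([0.05, 0.10, 0.15])
Q_hat = np.diag([0.04, 0.05, 0.06])
x_rob = solve_robust_mvo(mu_hat, Q_hat, T=300, delta=1.0, r_min=0.11)
```

Setting `delta = 0` removes the norm penalty and recovers the plain estimated MVO problem, which makes the two models easy to compare side by side.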

Now let $\mathbf{x}_{\text{ROB}}(\hat{\boldsymbol{\mu}}, \hat{\mathbf{Q}})$ denote a robust MVO solution obtained using the estimates $\hat{\boldsymbol{\mu}}$ and $\hat{\mathbf{Q}}$, and consider the probability of constraint satisfaction $\mathbb{P}_S[\boldsymbol{\mu}^{\intercal} \mathbf{x}_{\text{ROB}}(\hat{\boldsymbol{\mu}}, \hat{\mathbf{Q}}) \geq r_{\text{min}}] = \,?$ This probability should be higher than in the estimated MVO case whenever the ellipsoid covers a large portion of the support of the distribution of $\hat{\boldsymbol{\mu}}$. However, in practice, we cannot easily evaluate these probabilities analytically.

Google Colab Notebook

The notebook located here explores these facts in a contrived setting where we have sampling access to the return distribution. Indeed, the robust model increases the probability of satisfying the return constraint.
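A much smaller, self-contained sketch of that kind of experiment (all parameters are illustrative assumptions, and the solver is a plain SLSQP stand-in, not the notebook's code): repeatedly draw a sample, form the estimates, solve both models, and check the return constraint against the true mean. An infeasible robust problem is counted as a miss.

```python
import numpy as np
from scipy.optimize import minimize

def solve(mu_hat, Q_hat, T, delta, r_min):
    """Robust MVO sketch; delta = 0 recovers the plain estimated-MVO model."""
    theta_sqrt = np.sqrt(np.diag(Q_hat) / T)
    cons = [
        {"type": "eq", "fun": lambda x: np.sum(x) - 1.0},
        {"type": "ineq",
         "fun": lambda x: mu_hat @ x - delta * np.linalg.norm(theta_sqrt * x) - r_min},
    ]
    n = len(mu_hat)
    res = minimize(lambda x: x @ Q_hat @ x, np.full(n, 1.0 / n),
                   method="SLSQP", bounds=[(0.0, None)] * n, constraints=cons)
    return res.x if res.success else None  # None when the problem is infeasible

rng = np.random.default_rng(7)
mu_true = np.array([0.05, 0.10, 0.15])   # assumed "true" parameters we can sample from
Q_true = np.diag([0.04, 0.05, 0.06])
T, r_min, delta, trials = 300, 0.11, 1.0, 200

hits = {"mvo": 0, "rob": 0}
for _ in range(trials):
    S = rng.multivariate_normal(mu_true, Q_true, size=T)
    mu_hat, Q_hat = S.mean(axis=0), np.cov(S, rowvar=False)
    for key, d in (("mvo", 0.0), ("rob", delta)):
        x = solve(mu_hat, Q_hat, T, d, r_min)
        # check the constraint against the TRUE mean; infeasibility counts as a miss
        if x is not None and mu_true @ x >= r_min:
            hits[key] += 1

p_mvo, p_rob = hits["mvo"] / trials, hits["rob"] / trials
print(f"P[constraint met] -- MVO: {p_mvo:.2f}, robust: {p_rob:.2f}")
```

With these (assumed) settings the estimated-MVO portfolio meets the return target only around half the time, while the robust portfolio meets it noticeably more often, consistent with the discussion above.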

CC BY-SA 4.0 David Islip. Last modified: April 04, 2026. Website built with Franklin.jl and the Julia programming language.