I remember that in my undergrad years, one of the biggest sources of confusion was how to compute error bars for experimental data points. One day in the chemical engineering lab, my professor showed us the equation for propagation of uncertainty (POU), but we never had the foggiest idea of how it was derived, what justified its use, or what assumptions lay behind it. To this day I don't fully understand the statistical arguments behind the POU formula. Furthermore, the formula is complicated and unwieldy for large equations, and I have no idea how to apply it to an implicit equation.

# Constrained Optimization Paradigm

An alternate method of computing error bounds on a function is to recast the problem as two optimization problems. Suppose we have a function $f(x, y)$, with uncertainties $\Delta x$ and $\Delta y$. What then is the uncertainty in $f$, $\Delta f$?

The way to find $\Delta f$ is to first attempt to maximize $f$ subject to the uncertainty bounds on $x$ and $y$. Once that problem is complete, attempt to *minimize* $f$ subject to the same constraints. This yields quantities $f_{\max}$ and $f_{\min}$. Call the nominal value of $f$ $f_0 = f(x_0, y_0)$. Then the upper bound on the error is $f_{\max} - f_0$, and the lower bound on the error is $f_0 - f_{\min}$.
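Written out explicitly (assuming the uncertainties define a box around the nominal point $(x_0, y_0)$), the two problems are

$$f_{\max} = \max_{x,\,y} f(x, y), \qquad f_{\min} = \min_{x,\,y} f(x, y),$$

both subject to

$$x_0 - \Delta x \le x \le x_0 + \Delta x, \qquad y_0 - \Delta y \le y \le y_0 + \Delta y.$$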

In MATLAB, this problem is easily solved using the Optimization Toolbox with `fmincon()`. The objective function is $f(x, y)$, and the upper and lower bounds on $x$ and $y$ form the vectors of upper and lower bounds on the decision variables.
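For readers without MATLAB, the same calculation can be sketched in Python with SciPy's `minimize`, which accepts simple bound constraints just like `fmincon`. The function $f(x, y) = xy$ and the nominal values below are made up purely for illustration; the maximization is done by minimizing $-f$:

```python
# Sketch of the boxed-uncertainty approach with SciPy (a stand-in for
# MATLAB's fmincon). The function and numbers are illustrative only.
from scipy.optimize import minimize

def f(v):
    x, y = v
    return x * y  # hypothetical measured relationship

x0, dx = 2.0, 0.1   # nominal value and uncertainty of x
y0, dy = 3.0, 0.2   # nominal value and uncertainty of y
bounds = [(x0 - dx, x0 + dx), (y0 - dy, y0 + dy)]
v0 = [x0, y0]

f_nom = f(v0)
# Minimize f directly; maximize f by minimizing -f.
f_min = minimize(f, v0, bounds=bounds).fun
f_max = -minimize(lambda v: -f(v), v0, bounds=bounds).fun

print(f"f = {f_nom:.3f}  +{f_max - f_nom:.3f} / -{f_nom - f_min:.3f}")
```

Note that the two error bounds come out separately, so there is no need to assume they are symmetric about the nominal value.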

The advantage of this approach is that it is intuitively easy to understand. It can also expose asymmetry in your uncertainty (for example, the lower bound might be much closer to the nominal value than the upper bound). Lastly, this approach works for implicit functions.
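As a sketch of the implicit case, suppose $z$ is defined implicitly by $g(z, x, y) = z + z^3 - (x + y) = 0$ (a made-up relation, chosen because it is monotone in $z$ and so has a unique root). A root finder evaluates $z$ inside the objective, and the optimizer then pushes $x$ and $y$ around their box exactly as before:

```python
# Hedged sketch: error bounds on an implicitly defined quantity.
# z satisfies g(z, x, y) = z + z**3 - (x + y) = 0 (illustrative relation).
from scipy.optimize import minimize, brentq

def g(z, x, y):
    return z + z**3 - (x + y)

def z_of(v):
    x, y = v
    # g is strictly increasing in z, so the root in [0, 10] is unique here.
    return brentq(g, 0.0, 10.0, args=(x, y))

x0, dx = 2.0, 0.1
y0, dy = 3.0, 0.2
bounds = [(x0 - dx, x0 + dx), (y0 - dy, y0 + dy)]

z_nom = z_of([x0, y0])
z_min = minimize(z_of, [x0, y0], bounds=bounds).fun
z_max = -minimize(lambda v: -z_of(v), [x0, y0], bounds=bounds).fun
```

The POU formula would require differentiating the implicit relation; here the root solve is just another function evaluation.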