[/          Copyright Matthew Pulver 2018 - 2019.
// Distributed under the Boost Software License, Version 1.0.
//    (See accompanying file LICENSE_1_0.txt or copy at
//          https://www.boost.org/LICENSE_1_0.txt)]

[section:autodiff Automatic Differentiation]
[template autodiff_equation[name]  '''<inlinemediaobject><imageobject><imagedata fileref="../equations/autodiff/'''[name]'''"></imagedata></imageobject></inlinemediaobject>''']

[h1:synopsis Synopsis]

    #include <boost/math/differentiation/autodiff.hpp>

    namespace boost {
    namespace math {
    namespace differentiation {

    // Function returning a single variable of differentiation. Recommended: Use auto for type.
    template <typename RealType, size_t Order, size_t... Orders>
    autodiff_fvar<RealType, Order, Orders...> make_fvar(RealType const& ca);

    // Function returning multiple independent variables of differentiation in a std::tuple.
    template<typename RealType, size_t... Orders, typename... RealTypes>
    auto make_ftuple(RealTypes const&... ca);

    // Type of combined autodiff types. Recommended: Use auto for return type (C++14).
    template <typename RealType, typename... RealTypes>
    using promote = typename detail::promote_args_n<RealType, RealTypes...>::type;

    namespace detail {

    // Single autodiff variable. Use make_fvar() or make_ftuple() to instantiate.
    template <typename RealType, size_t Order>
    class fvar {
     public:
      // Query return value of function to get the derivatives.
      template <typename... Orders>
      get_type_at<RealType, sizeof...(Orders) - 1> derivative(Orders... orders) const;

      // All of the arithmetic and comparison operators are overloaded.
      template <typename RealType2, size_t Order2>
      fvar& operator+=(fvar<RealType2, Order2> const&);

      fvar& operator+=(root_type const&);

      // ...
    };

    // Standard math functions are overloaded and called via argument-dependent lookup (ADL).
    template <typename RealType, size_t Order>
    fvar<RealType, Order> floor(fvar<RealType, Order> const&);

    template <typename RealType, size_t Order>
    fvar<RealType, Order> exp(fvar<RealType, Order> const&);

    // ...

    }  // namespace detail

    }  // namespace differentiation
    }  // namespace math
    }  // namespace boost

[h1:description Description]

Autodiff is a header-only C++ library that facilitates the [@https://en.wikipedia.org/wiki/Automatic_differentiation
automatic differentiation] (forward mode) of mathematical functions of single and multiple variables.

This implementation is based upon the [@https://en.wikipedia.org/wiki/Taylor_series Taylor series] expansion of
an analytic function /f/ at the point ['x[sub 0]]:

[/ Thanks to http://www.tlhiv.org/ltxpreview/ for LaTeX-to-SVG conversions. ]
[/ \Large\begin{align*}
f(x_0+\varepsilon) &= f(x_0) + f'(x_0)\varepsilon + \frac{f''(x_0)}{2!}\varepsilon^2 + \frac{f'''(x_0)}{3!}\varepsilon^3 + \cdots \\
  &= \sum_{n=0}^N\frac{f^{(n)}(x_0)}{n!}\varepsilon^n + O\left(\varepsilon^{N+1}\right).
\end{align*} ]
[:[:[autodiff_equation taylor_series.svg]]]

The essential idea of autodiff is the substitution of numbers with polynomials in the evaluation of ['f(x[sub 0])].
By substituting the number ['x[sub 0]] with the first-order polynomial ['x[sub 0]+\u03b5], and using the same
algorithm to compute ['f(x[sub 0]+\u03b5)], the resulting polynomial in ['\u03b5] contains the function's derivatives
['f'(x[sub 0])], ['f''(x[sub 0])], ['f\'\'\'(x[sub 0])], ... within the coefficients. Each coefficient is equal to the
derivative of its respective order, divided by the factorial of the order.
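
For example, with ['f(x)=x[super 2]] and ['x[sub 0]=3], the same arithmetic that evaluates ['f(3)=9] evaluates
['f(3+\u03b5)=(3+\u03b5)[super 2]=9+6\u03b5+\u03b5[super 2]], whose coefficients 9, 6, and 1 give ['f(3)=9],
['f'(3)=1!*6=6], and ['f''(3)=2!*1=2].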

In greater detail, assume one is interested in calculating the first /N/ derivatives of /f/ at ['x[sub 0]]. Without
loss of precision to the calculation of the derivatives, all terms ['O(\u03b5[super N+1])] that include powers
of ['\u03b5] greater than /N/ can be discarded. (This is due to the fact that each term in a polynomial depends
only upon equal and lower-order terms under arithmetic operations.) Under these truncation rules, /f/ provides a
polynomial-to-polynomial transformation:

[/ \Large$$f \qquad : \qquad x_0+\varepsilon \qquad \mapsto \qquad
    \sum_{n=0}^Ny_n\varepsilon^n=\sum_{n=0}^N\frac{f^{(n)}(x_0)}{n!}\varepsilon^n.$$ ]
[:[:[autodiff_equation polynomial_transform.svg]]]

C++'s ability to overload operators and functions allows for the creation of a class `fvar` ([_f]orward-mode autodiff
[_var]iable) that represents polynomials in ['\u03b5]. Thus the same algorithm /f/ that calculates the numeric value
of ['y[sub 0]=f(x[sub 0])], when written to accept and return variables of a generic (template) type, is also used
to calculate the polynomial ['\u03a3[sub n]y[sub n]\u03b5[super n]=f(x[sub 0]+\u03b5)].  The derivatives
['f[super (n)](x[sub 0])] are then found from the product of the respective factorial ['n!] and coefficient
['y[sub n]]:

[/ \Large$$\frac{d^nf}{dx^n}(x_0)=n!y_n.$$ ]
[:[:[autodiff_equation derivative_formula.svg]]]


[h1:examples Examples]

[h2:example-single-variable Example 1: Single-variable derivatives]

[h3 Calculate derivatives of ['f(x)=x[super 4]] at /x/=2.]

In this example, `make_fvar<double, Order>(2.0)` instantiates the polynomial 2+['\u03b5]. The `Order=5`
means that enough space is allocated (on the stack) to hold a polynomial of up to degree 5 during the ensuing
computation.

Internally, this is modeled by a `std::array<double,6>` whose elements `{2, 1, 0, 0, 0, 0}` correspond
to the 6 coefficients of the polynomial upon initialization. Its fourth power, at the end of the computation, is
a polynomial with coefficients `y = {16, 32, 24, 8, 1, 0}`.  The derivatives are obtained using the formula
['f[super (n)](2)=n!*y[n]].

    #include <boost/math/differentiation/autodiff.hpp>
    #include <iostream>

    template <typename T>
    T fourth_power(T const& x) {
      T x4 = x * x;  // retval in operator*() uses x4's memory via NRVO.
      x4 *= x4;      // No copies of x4 are made within operator*=() even when squaring.
      return x4;     // x4 uses y's memory in main() via NRVO.
    }

    int main() {
      using namespace boost::math::differentiation;

      constexpr unsigned Order = 5;                  // Highest order derivative to be calculated.
      auto const x = make_fvar<double, Order>(2.0);  // Find derivatives at x=2.
      auto const y = fourth_power(x);
      for (unsigned i = 0; i <= Order; ++i)
        std::cout << "y.derivative(" << i << ") = " << y.derivative(i) << std::endl;
      return 0;
    }
    /*
    Output:
    y.derivative(0) = 16
    y.derivative(1) = 32
    y.derivative(2) = 48
    y.derivative(3) = 48
    y.derivative(4) = 24
    y.derivative(5) = 0
    */

The above calculates

[/ \Large\begin{alignat*}{3}
{\tt y.derivative(0)} &=& f(2) =&& \left.x^4\right|_{x=2} &= 16\\
{\tt y.derivative(1)} &=& f'(2) =&& \left.4\cdot x^3\right|_{x=2} &= 32\\
{\tt y.derivative(2)} &=& f''(2) =&& \left.4\cdot 3\cdot x^2\right|_{x=2} &= 48\\
{\tt y.derivative(3)} &=& f'''(2) =&& \left.4\cdot 3\cdot2\cdot x\right|_{x=2} &= 48\\
{\tt y.derivative(4)} &=& f^{(4)}(2) =&& 4\cdot 3\cdot2\cdot1 &= 24\\
{\tt y.derivative(5)} &=& f^{(5)}(2) =&& 0 &
\end{alignat*} ]
[:[:[autodiff_equation example1.svg]]]

[h2:example-multiprecision
Example 2: Multi-variable mixed partial derivatives with multi-precision data type]

[/ \Large$\frac{\partial^{12}f}{\partial w^{3}\partial x^{2}\partial y^{4}\partial z^{3}}(11,12,13,14)$]
[/ \Large$f(w,x,y,z)=\exp\left(w\sin\left(\frac{x\log(y)}{z}\right)+\sqrt{\frac{wz}{xy}}\right)+\frac{w^2}{\tan(z)}$]
[h3 Calculate [autodiff_equation mixed12.svg] with a precision of about 50 decimal digits,
where [autodiff_equation example2f.svg].]

In this example, `make_ftuple<float50, Nw, Nx, Ny, Nz>(11, 12, 13, 14)` returns a `std::tuple` of 4
independent `fvar` variables, with values of 11, 12, 13, and 14, whose maximum orders of differentiation
are 3, 2, 4, and 3, respectively. The order of the variables is important, as it is the same
order used when calling `v.derivative(Nw, Nx, Ny, Nz)` in the example below.

    #include <boost/math/differentiation/autodiff.hpp>
    #include <boost/multiprecision/cpp_bin_float.hpp>
    #include <iomanip>
    #include <iostream>

    using namespace boost::math::differentiation;

    template <typename W, typename X, typename Y, typename Z>
    promote<W, X, Y, Z> f(const W& w, const X& x, const Y& y, const Z& z) {
      using namespace std;
      return exp(w * sin(x * log(y) / z) + sqrt(w * z / (x * y))) + w * w / tan(z);
    }

    int main() {
      using float50 = boost::multiprecision::cpp_bin_float_50;

      constexpr unsigned Nw = 3;  // Max order of derivative to calculate for w
      constexpr unsigned Nx = 2;  // Max order of derivative to calculate for x
      constexpr unsigned Ny = 4;  // Max order of derivative to calculate for y
      constexpr unsigned Nz = 3;  // Max order of derivative to calculate for z
      // Declare 4 independent variables together into a std::tuple.
      auto const variables = make_ftuple<float50, Nw, Nx, Ny, Nz>(11, 12, 13, 14);
      auto const& w = std::get<0>(variables);  // Up to Nw derivatives at w=11
      auto const& x = std::get<1>(variables);  // Up to Nx derivatives at x=12
      auto const& y = std::get<2>(variables);  // Up to Ny derivatives at y=13
      auto const& z = std::get<3>(variables);  // Up to Nz derivatives at z=14
      auto const v = f(w, x, y, z);
      // Calculated from Mathematica symbolic differentiation.
      float50 const answer("1976.319600747797717779881875290418720908121189218755");
      std::cout << std::setprecision(std::numeric_limits<float50>::digits10)
                << "mathematica   : " << answer << '\n'
                << "autodiff      : " << v.derivative(Nw, Nx, Ny, Nz) << '\n'
                << std::setprecision(3)
                << "relative error: " << (v.derivative(Nw, Nx, Ny, Nz) / answer - 1) << '\n';
      return 0;
    }
    /*
    Output:
    mathematica   : 1976.3196007477977177798818752904187209081211892188
    autodiff      : 1976.3196007477977177798818752904187209081211892188
    relative error: 2.67e-50
    */
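
The indices passed to `derivative()` follow the same order in which the variables were declared in `make_ftuple`.
The following minimal sketch (not one of the library's shipped examples; the function and values are arbitrary
choices for demonstration) applies the same pattern to a polynomial whose mixed partial derivative is easy to
verify by hand: for ['f(x,y)=xy[super 2]], differentiating once in /x/ and twice in /y/ gives 2 everywhere.

    #include <boost/math/differentiation/autodiff.hpp>
    #include <iostream>
    #include <tuple>

    int main() {
      using namespace boost::math::differentiation;

      // x=3 with derivatives up to order 1; y=4 with derivatives up to order 2.
      auto const variables = make_ftuple<double, 1, 2>(3.0, 4.0);
      auto const& x = std::get<0>(variables);
      auto const& y = std::get<1>(variables);
      auto const v = x * y * y;  // f(x,y) = x*y^2
      // Differentiate once in x and twice in y. Expected value: 2.
      std::cout << v.derivative(1, 2) << '\n';
      return 0;
    }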

[h2:example-black_scholes
Example 3: Black-Scholes Option Pricing with Greeks Automatically Calculated]
[h3 Calculate Greeks directly from the Black-Scholes pricing function.]

Below is the standard Black-Scholes pricing function written as a function template, where the price, volatility
(sigma), time to expiration (tau) and interest rate are template parameters. This means that any Greek based on
these 4 variables can be calculated using autodiff. The example below calculates delta and gamma, where the variable
of differentiation is only the price. For examples of more exotic Greeks, see `example/black_scholes.cpp`.

```
#include <boost/math/differentiation/autodiff.hpp>
#include <iostream>

using namespace boost::math::constants;
using namespace boost::math::differentiation;

// Equations and function/variable names are from
// https://en.wikipedia.org/wiki/Greeks_(finance)#Formulas_for_European_option_Greeks

// Standard normal cumulative distribution function
template <typename X>
X Phi(X const& x) {
  return 0.5 * erfc(-one_div_root_two<X>() * x);
}

enum class CP { call, put };

// Assume zero annual dividend yield (q=0).
template <typename Price, typename Sigma, typename Tau, typename Rate>
promote<Price, Sigma, Tau, Rate> black_scholes_option_price(CP cp,
                                                            double K,
                                                            Price const& S,
                                                            Sigma const& sigma,
                                                            Tau const& tau,
                                                            Rate const& r) {
  using namespace std;
  auto const d1 = (log(S / K) + (r + sigma * sigma / 2) * tau) / (sigma * sqrt(tau));
  auto const d2 = (log(S / K) + (r - sigma * sigma / 2) * tau) / (sigma * sqrt(tau));
  switch (cp) {
    case CP::call:
      return S * Phi(d1) - exp(-r * tau) * K * Phi(d2);
    case CP::put:
      return exp(-r * tau) * K * Phi(-d2) - S * Phi(-d1);
  }
}

int main() {
  double const K = 100.0;                    // Strike price.
  auto const S = make_fvar<double, 2>(105);  // Stock price.
  double const sigma = 5;                    // Volatility.
  double const tau = 30.0 / 365;             // Time to expiration in years. (30 days).
  double const r = 1.25 / 100;               // Interest rate.
  auto const call_price = black_scholes_option_price(CP::call, K, S, sigma, tau, r);
  auto const put_price = black_scholes_option_price(CP::put, K, S, sigma, tau, r);

  std::cout << "black-scholes call price = " << call_price.derivative(0) << '\n'
            << "black-scholes put  price = " << put_price.derivative(0) << '\n'
            << "call delta = " << call_price.derivative(1) << '\n'
            << "put  delta = " << put_price.derivative(1) << '\n'
            << "call gamma = " << call_price.derivative(2) << '\n'
            << "put  gamma = " << put_price.derivative(2) << '\n';
  return 0;
}
/*
Output:
black-scholes call price = 56.5136
black-scholes put  price = 51.4109
call delta = 0.773818
put  delta = -0.226182
call gamma = 0.00199852
put  gamma = 0.00199852
*/
```
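
Because the pricing function is templated on the price, volatility, expiration, and rate, any of these can be
promoted to a variable of differentiation. As a minimal sketch (not the shipped `example/black_scholes.cpp`), the
`main()` above could be replaced with one that follows the `make_ftuple` pattern of Example 2, making `sigma` a
second variable of differentiation so that vega is produced by the same call:

```
// Sketch only: reuses the includes, Phi(), CP, and black_scholes_option_price()
// from the example above, and replaces its main().
int main() {
  double const K = 100.0;  // Strike price.
  // S=105 with derivatives up to order 2; sigma=5 with derivatives up to order 1.
  auto const variables = make_ftuple<double, 2, 1>(105.0, 5.0);
  auto const& S = std::get<0>(variables);      // Stock price.
  auto const& sigma = std::get<1>(variables);  // Volatility.
  double const tau = 30.0 / 365;               // Time to expiration in years (30 days).
  double const r = 1.25 / 100;                 // Interest rate.
  auto const call_price = black_scholes_option_price(CP::call, K, S, sigma, tau, r);

  std::cout << "call price = " << call_price.derivative(0, 0) << '\n'
            << "call delta = " << call_price.derivative(1, 0) << '\n'   // d/dS
            << "call gamma = " << call_price.derivative(2, 0) << '\n'   // d^2/dS^2
            << "call vega  = " << call_price.derivative(0, 1) << '\n';  // d/dsigma
  return 0;
}
```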

[h1 Advantages of Automatic Differentiation]
The above examples illustrate some of the advantages of using autodiff:

* Elimination of code redundancy. The existence of /N/ separate functions to calculate derivatives is a form
  of code redundancy, with all the liabilities that come with it:
    * Changes to one function require /N/ additional changes to other functions. In the 3rd example above,
      consider how much larger and inter-dependent the above code base would be if a separate function were
      written for [@https://en.wikipedia.org/wiki/Greeks_(finance)#Formulas_for_European_option_Greeks
      each Greek] value.
    * Dependencies upon a derivative function for a different purpose will break when changes are made to
      the original function. What doesn't need to exist cannot break.
    * Code bloat, reducing conceptual integrity. Control over the evolution of code is easier/safer when
      the code base is smaller and able to be intuitively grasped.
* Accuracy of derivatives over finite difference methods. Single-iteration finite difference methods always include
  a ['\u0394x] free variable that must be carefully chosen for each application. If ['\u0394x] is too small, then
  numerical errors become large. If ['\u0394x] is too large, then mathematical errors become large. With autodiff,
  there are no free variables to set, and the accuracy of the answer is generally superior to finite difference
  methods even with the best choice of ['\u0394x]. A short sketch contrasting the two approaches follows this list.
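
As an illustrative sketch (not one of the library's shipped examples; the function `exp(x) / x`, the evaluation
point /x/=2, and the step size are arbitrary choices for demonstration), the following compares the autodiff
derivative against a central finite difference, whose accuracy depends on the chosen step:

    #include <boost/math/differentiation/autodiff.hpp>
    #include <cmath>
    #include <iostream>

    template <typename T>
    T f(T const& x) {
      using std::exp;  // Unqualified call: std::exp for double, the fvar overload via ADL.
      return exp(x) / x;
    }

    int main() {
      using namespace boost::math::differentiation;

      // Autodiff: no step size to choose.
      auto const x = make_fvar<double, 1>(2.0);
      double const d_autodiff = f(x).derivative(1);

      // Central finite difference: accuracy depends on the free parameter dx.
      double const dx = 1e-6;
      double const d_finite = (f(2.0 + dx) - f(2.0 - dx)) / (2 * dx);

      // Closed form for comparison: f'(x) = exp(x)*(x-1)/x^2, so f'(2) = exp(2)/4.
      std::cout << "autodiff          : " << d_autodiff << '\n'
                << "finite difference : " << d_finite << '\n'
                << "closed form       : " << std::exp(2.0) / 4 << '\n';
      return 0;
    }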

[h1 Manual]
Additional details are in the [@../differentiation/autodiff.pdf autodiff manual].

[endsect]
