[section:root_finding_examples Examples of Root-Finding (with and without derivatives)]

[import ../../example/root_finding_example.cpp]
[import ../../example/root_finding_n_example.cpp]
[import ../../example/root_finding_multiprecision_example.cpp]

The examples demonstrate how to use the various tools for
[@http://en.wikipedia.org/wiki/Root-finding_algorithm root finding].

We start with the simple cube root function `cbrt` (C++ standard function name
[@http://en.cppreference.com/w/cpp/numeric/math/cbrt cbrt])
showing root finding __cbrt_no_derivatives.

We then show how the use of derivatives can improve the speed of convergence.

(These examples are only a demonstration and do not attempt the refinements of an
'industrial-strength' implementation such as `boost::math::cbrt`, which, most importantly,
uses a better computed initial 'guess' at
[@boost:/libs/math/include/boost/math/special_functions/cbrt.hpp cbrt.hpp].)

Then we show how a higher root (__fifth_root) [super 5][radic] can be computed,
and in
[@../../example/root_finding_n_example.cpp root_finding_n_example.cpp]
a generic method for the __nth_root that constructs the derivatives at compile-time.

These methods should be applicable to other functions that can be differentiated easily.

[section:cbrt_eg Finding the Cubed Root With and Without Derivatives]

First some `#includes` that will be needed.

[root_finding_include_1]

[tip For clarity, `using` statements are provided to list what functions are being used in this example:
you can, of course, partly or fully qualify the names in other ways.
(For your application, you may wish to extract some parts into header files,
but you should never use `using` statements globally in header files.)]

Let's suppose we want to find the root of a number ['a], and to start, compute the cube root.
So the equation we want to solve is:

[expression ['f(x) = x[cubed] - a]]

We will first solve this without using any information
about the slope or curvature of the cube root function.

Fortunately, the cube-root function is 'Really Well Behaved' in that it is monotonic
and has only one root (we leave negative values 'as an exercise for the student').

We then show how adding what we can know about this function, first just the slope
or 1st derivative ['f'(x)], will speed homing in on the solution.

Lastly, we show how adding the curvature ['f''(x)] too will speed convergence even more.

[h3:cbrt_no_derivatives Cube root function without derivatives]

First we define a function object (functor):

[root_finding_noderiv_1]

Implementing the cube-root function itself is now fairly trivial:
the hardest part is finding a good approximation to begin with.
In this case we'll just divide the exponent by three.
(There are better but more complex guess algorithms used in 'real life'.)

[root_finding_noderiv_2]

This snippet from `main()` in [@../../example/root_finding_example.cpp root_finding_example.cpp]
shows how it can be used.

[root_finding_main_1]

[pre
  cbrt_noderiv(27) = 3
  cbrt_noderiv(28) = 3.0365889718756618
]

The result of `bracket_and_solve_root` is a [@http://www.cplusplus.com/reference/utility/pair/ pair]
of values that could be displayed.

The number of bits separating them can be found using `float_distance(r.first, r.second)`.
The distance is zero (closest representable) for 3[super 3] = 27,
but `float_distance(r.first, r.second) = 3` for the cube root of 28 with this function.
The result (avoiding overflow) is midway between these two values.
[h3:cbrt_1st_derivative Cube root function with 1st derivative (slope)]

We now solve the same problem, but using more information about the function,
to show how this can speed up finding the best estimate of the root.

For the root function, the 1st differential (the slope of the tangent to the curve at any point) is known.

This algorithm is similar to this [@http://en.wikipedia.org/wiki/Nth_root_algorithm nth root algorithm].

If you need some reminders, then
[@http://en.wikipedia.org/wiki/Derivative#Derivatives_of_elementary_functions derivatives of elementary functions]
may help.

Using the rule that the derivative of ['x[super n]] for positive n (actually all nonzero n) is ['n x[super n-1]]
allows us to get the 1st differential as ['3x[super 2]].

To see how this extra information is used to find a root, view
[@http://en.wikipedia.org/wiki/Newton%27s_method Newton-Raphson iterations]
and the [@http://en.wikipedia.org/wiki/Newton%27s_method#mediaviewer/File:NewtonIteration_Ani.gif animation].

We define a better functor `cbrt_functor_deriv` that returns
both the evaluation of the function to solve and its first derivative.

To '['return]' two values, we use a [@http://en.cppreference.com/w/cpp/utility/pair std::pair]
of floating-point values.

[root_finding_1_deriv_1]

The result of the [@boost:/libs/math/include/boost/math/tools/roots.hpp `newton_raphson_iterate`]
function is a single value.

[tip There is a compromise between accuracy and speed when choosing the value of `digits`.
It is tempting to simply choose `std::numeric_limits<T>::digits`,
but this may mean some inefficient and unnecessary iterations as the function thrashes around
trying to locate the last bit. In theory, since the precision doubles with each step,
it is sufficient to stop when half the bits are correct, as the last step will have doubled
that to full precision.
Of course the function has no way to tell if that is actually the case
unless it does one more step to be sure. In practice setting the precision to slightly more
than `std::numeric_limits<T>::digits / 2` is a good choice.]

Note that it is up to the caller of the function to check the iteration count
after the call to see if iteration stopped as a result of running out of iterations
rather than meeting the required precision.

Using the test data in [@../../test/test_cbrt.cpp /test/test_cbrt.cpp] this found the cube root
exact to the last digit in every case, and in no more than 6 iterations at double
precision. However, you will note that a high precision was used in this
example, exactly what was warned against earlier in these docs! In this
particular case it is possible to compute ['f(x)] exactly and without undue
cancellation error, so a high limit is not too much of an issue.

However, reducing the limit to `std::numeric_limits<T>::digits * 2 / 3` gave full
precision in all but one of the test cases (and that one was out by just one bit).
The maximum number of iterations remained 6, but in most cases was reduced by one.

Note also that the above code omits a probable optimization (computing z[super 2]
and reusing it), omits error handling, and does not handle
negative values of z correctly. (These are left as the customary exercise for the reader!)

The `boost::math::cbrt` function also includes these and other improvements:
most importantly it uses a much better initial guess, which reduces the iteration count to
just 1 in almost all cases.
[h3:cbrt_2_derivatives Cube root with 1st & 2nd derivative (slope & curvature)]

Next we define yet another even better functor `cbrt_functor_2deriv` that returns
the evaluation of the function to solve,
along with its first [*and second] derivative:

[expression ['f''(x) = 6x]]

using information about both slope and curvature to speed convergence.

To '['return]' three values, we use a `tuple` of three floating-point values:

[root_finding_2deriv_1]

The function `halley_iterate` also returns a single value,
and the number of iterations will reveal if it met the convergence criterion set by `get_digits`.

The no-derivative method gives a result of

  cbrt_noderiv(28) = 3.0365889718756618

with 3 bits distance between the bracketed values, whereas the derivative methods both converge to a single value

  cbrt_2deriv(28) = 3.0365889718756627

which we can compare with [@boost:/libs/math/doc/html/math_toolkit/powers/cbrt.html boost::math::cbrt]

  cbrt(28) = 3.0365889718756627

Note that the iterations are set to stop at just one-half of full precision,
and yet, even so, not one of the test cases had a single bit wrong.
What's more, the maximum number of iterations was now just 4.

Just to complete the picture, we could have called
[link math_toolkit.roots_deriv.schroder `schroder_iterate`] in the last
example: and in fact it makes no difference to the accuracy or number of iterations
in this particular case. However, the relative performance of these two methods
may vary depending upon the nature of ['f(x)] and the accuracy to which the initial
guess can be computed. There appear to be no generalisations that can be made
except "try them and see".
Finally, had we called `cbrt` with [@http://shoup.net/ntl/doc/RR.txt NTL::RR]
set to 1000-bit precision (about 300 decimal digits),
then full precision could be obtained with just 7 iterations.
To put that in perspective,
an increase in precision by a factor of 20 has less than doubled the number of
iterations. That just goes to emphasise that most of the iterations are used
up getting the first few digits correct: after that these methods can churn out
further digits with remarkable efficiency.

Or to put it another way: ['nothing beats a really good initial guess!]

Full code of this example is at
[@../../example/root_finding_example.cpp root_finding_example.cpp].

[endsect] [/section:cbrt_eg Finding the Cubed Root With and Without Derivatives]


[section:lambda Using C++11 Lambdas]

Since all the root-finding functions accept a function object, they can be made to
work (often in a lot less code) with C++11 lambdas. Here's the much-reduced code for our "toy" cube root function:

[root_finding_2deriv_lambda]

Full code of this example is at
[@../../example/root_finding_example.cpp root_finding_example.cpp].

[endsect] [/section:lambda Using C++11 Lambdas]


[section:5th_root_eg Computing the Fifth Root]

Let's now suppose we want to find the [*fifth root] of a number ['a].

The equation we want to solve is:

[expression ['f(x) = x[super 5] - a]]

If your differentiation is a little rusty
(or you are faced with a function whose complexity makes differentiation daunting),
then you can get help, for example, from the invaluable
[@http://www.wolframalpha.com/ WolframAlpha site.]
For example, entering the command: `differentiate x ^ 5`

or the Wolfram Language command: `D[x ^ 5, x]`

gives the output: `d/dx(x ^ 5) = 5 x ^ 4`

and to get the second differential, enter: `second differentiate x ^ 5`

or the Wolfram Language command: `D[x ^ 5, { x, 2 }]`

to get the output: `d ^ 2 / dx ^ 2(x ^ 5) = 20 x ^ 3`

To get a reference value, we can enter: [^fifth root 3126]

or: `N[3126 ^ (1 / 5), 50]`

to get a result with a precision of 50 decimal digits:

  5.0003199590478625588206333405631053401128722314376

(We could also get a reference value using __multiprecision_root.)

The 1st and 2nd derivatives of ['x[super 5]] are:

[expression ['f\'(x) = 5x[super 4]]]

[expression ['f\'\'(x) = 20x[super 3]]]

[root_finding_fifth_functor_2deriv]

[root_finding_fifth_2deriv]

Full code of this example is at
[@../../example/root_finding_example.cpp root_finding_example.cpp] and
[@../../example/root_finding_n_example.cpp root_finding_n_example.cpp].

[endsect] [/section:5th_root_eg Computing the Fifth Root]


[section:multiprecision_root Root-finding using Boost.Multiprecision]

The astute reader might, by now, be asking "How do we know if this computes the 'right' answer?".

For most values there is, sadly, no 'right' answer.
This is because values can only rarely be ['exactly represented] by C++ floating-point types.
What we do want is the 'best' representation - the one that is the nearest __representable value.
(For more about how numbers are represented see __floating_point.)

Of course, we might start by finding an external reference source like
__WolframAlpha, as above, but this is not always possible.

Another way to reassure ourselves is to compute 'reference' values at higher precision
with which to compare the results of our iterative computations using built-in types like `double`.
They should agree within the tolerance that was set.

The result of `static_cast`ing to `double` from a higher-precision type like `cpp_bin_float_50` is guaranteed
to be the [*nearest representable] `double` value.

For example, the cube root functions in our example for `cbrt(28.)` compute

`std::cbrt<double>(28.) = 3.0365889718756627`

WolframAlpha says `3.036588971875662519420809578505669635581453977248111123242141...`

`static_cast<double>(3.03658897187566251942080957850) = 3.0365889718756627`

This example `cbrt(28.) = 3.0365889718756627`

[tip To ensure that all potentially significant decimal digits are displayed, use `std::numeric_limits<T>::max_digits10`
(or, if that is not available on older platforms or compilers, use `2 + std::numeric_limits<double>::digits * 3010/10000`).[br]

Ideally, values should agree to `std::numeric_limits<T>::digits10` decimal digits.

This also means that a 'reference' value to be [*input] or `static_cast` should have
at least `max_digits10` decimal digits (17 for 64-bit `double`).
]

If we wish to compute [*higher-precision values] then, on some platforms, we may be able to use `long double`
with a higher precision than `double` to compare with the very common `double`,
and/or a more efficient built-in quad floating-point type like `__float128`.

Almost all platforms can easily use __multiprecision,
for example __cpp_dec_float or a binary type like __cpp_bin_float,
to compute values at very much higher precision.

[note With multiprecision types, it is debatable whether to use the type `T` for computing the initial guesses.
Type `double` is likely to be accurate enough for the method used in these examples,
but this would limit the exponent range of possible values to that of `double`.
There is also the cost of conversion to and from type `T` to consider.
In these examples, `double` is used, via `typedef double guess_type`.]
Since the functors and functions used above are templated on the value type,
we can very simply use them with any of the __multiprecision types. As a reminder,
here's our toy cube root function using 2 derivatives and C++11 lambda functions to find the root:

[root_finding_2deriv_lambda]

The examples below use 50-decimal-digit decimal and binary types
(and, on some platforms, a much faster `float128` or `quad_float` type)
that we can use with these includes:

[root_finding_multiprecision_include_1]

Some `using` statements simplify their use:

[root_finding_multiprecision_example_1]

They can be used thus:

[root_finding_multiprecision_example_2]

A reference value computed by __WolframAlpha is

  N[2^(1/3), 50]  1.2599210498948731647672106072782283505702514647015

which agrees exactly.

To [*show] values to their full precision, it is necessary to adjust the `std::ostream` `precision` to suit the type, for example:

[root_finding_multiprecision_show_1]

[root_finding_multiprecision_example_3]

which outputs:

[pre
cbrt(2) = 1.2599210498948731647672106072782283505702514647015

value = 2, cube root =1.25992104989487
value = 2, cube root =1.25992104989487
value = 2, cube root =1.2599210498948731647672106072782283505702514647015
]

[tip Be [*very careful] about the floating-point type `T` that is passed to the root-finding function.
Carelessly passing an integer by writing
`cpp_dec_float_50 r = cbrt_2deriv(2);` or `show_cube_root(2);`
will provoke many warnings and compile errors.

Even `show_cube_root(2.F);` will produce warnings, because `typedef double guess_type` defines the type
used to compute the guess and bracket values as `double`.
Even more treacherous is passing a `double`, as in `cpp_dec_float_50 r = cbrt_2deriv(2.);`,
which silently gives the 'wrong' result, computing a `double` result and [*then] converting to `cpp_dec_float_50`!
All digits beyond `max_digits10` will be incorrect.
Making the type explicit with `cbrt_2deriv<cpp_dec_float_50>(2.);` will give you the desired 50-decimal-digit precision result.
] [/tip]

Full code of this example is at
[@../../example/root_finding_multiprecision_example.cpp root_finding_multiprecision_example.cpp].

[endsect] [/section:multiprecision_root Root-finding using Boost.Multiprecision]

[section:nth_root Generalizing to Compute the nth root]

If desired, we can now further generalize to compute the ['n]th root by computing the derivatives [*at compile time],
using the rules for differentiation and `boost::math::pow<N>`,
where the template parameter `N` is an integer and a compile-time constant. Our functor and function now have an additional template parameter `N`,
for the root required.

[note Since the powers and derivatives are fixed at compile time, the resulting code is as efficient as if hand-coded like the cube- and fifth-root examples above.
A good compiler should also optimise any repeated multiplications.]

Our ['n]th root functor is

[root_finding_nth_functor_2deriv]

and our ['n]th root function is

[root_finding_nth_function_2deriv]

[root_finding_n_example_2]

produces an output similar to this:

[root_finding_example_output_1]

[tip Take care with the type passed to the function. It is best to pass a `double` or greater-precision floating-point type.

Passing an integer value, for example `nth_2deriv<5>(2)`, will be rejected, while `nth_2deriv<5, double>(2)` converts the integer to `double`.
Avoid passing a `float` value, which will provoke (actually spurious) warnings from the compiler about potential loss of data,
as noted above.]

[warning Asking for unreasonable roots, for example `show_nth_root<1000000>(2.);`, may lead to
[@http://en.wikipedia.org/wiki/Loss_of_significance loss of significance], like
`Type double value = 2, 1000000th root = 1.00000069314783`.
Use of the `pow` function is more sensible for this unusual need.
]

Full code of this example is at
[@../../example/root_finding_n_example.cpp root_finding_n_example.cpp].

[endsect]

[section:elliptic_eg A More Complex Example - Inverting the Elliptic Integrals]

The arc length of an ellipse with radii ['a] and ['b] is given by:

[pre L(a, b) = 4aE(k)]

with:

[pre k = [sqrt](1 - b[super 2]/a[super 2])]

where ['E(k)] is the complete elliptic integral of the second kind - see __ellint_2.

Let's suppose we know the arc length and one radius; we can then calculate the other
radius by inverting the formula above. We'll begin by encoding the above formula
into a functor that our root-finding algorithms can call.
Note that, while not
completely obvious from the formula above, the function is completely symmetrical
in the two radii - which can be interchanged at will - so in this case we need to
make sure that `a >= b` so that we don't accidentally take the square root of a negative number:

[import ../../example/root_elliptic_finding.cpp]

[elliptic_noderv_func]

We'll also need a decent estimate to start searching from; the approximation:

[pre L(a, b) [approx] 4[sqrt](a[super 2] + b[super 2])]

is easily inverted to give us what we need, which using derivative-free root
finding leads to the algorithm:

[elliptic_root_noderiv]

This function generally finds the root within 8-10 iterations, so given that the runtime
is completely dominated by the cost of calling the elliptic integral, it would be nice to
reduce that count somewhat. We'll try to do that by using a derivative-based method;
the derivatives of this function are rather hard to work out by hand, but fortunately
[@http://www.wolframalpha.com/input/?i=d%2Fda+\[4+*+a+*+EllipticE%281+-+b^2%2Fa^2%29\]
Wolfram Alpha] can do the grunt work for us to give:

[pre d/da L(a, b) = 4(a[super 2]E(k) - b[super 2]K(k)) / (a[super 2] - b[super 2])]

Note that now we have [*two] elliptic integral calls to get the derivative, so our
functor will be at least twice as expensive to call as the derivative-free one above:
we'll have to reduce the iteration count quite substantially to make a difference!

Here's the revised functor:

[elliptic_1deriv_func]

The root-finding code is now almost the same as before, but we'll make use of
Newton iteration to get the result:

[elliptic_1deriv]

The number of iterations required for `double` precision is now usually around 4 -
so we've slightly more than halved the number of iterations, but made the
functor twice as expensive to call!
Interestingly though, the second derivative requires no more expensive
elliptic integral calls than the first does; in other words it comes
essentially "for free", in which case we might as well make use of it
and use Halley iteration. This is quite a typical situation when
inverting special functions. Here's the revised functor:

[elliptic_2deriv_func]

The actual root-finding code is almost the same as before, except we can
use Halley, rather than Newton, iteration:

[elliptic_2deriv]

While this function uses only slightly fewer iterations (typically around 3)
to find the root, compared to the original derivative-free method we've moved from
8-10 elliptic integral calls to 6.

Full code of this example is at
[@../../example/root_elliptic_finding.cpp root_elliptic_finding.cpp].

[endsect]


[endsect] [/section:root_finding_examples Examples of Root-Finding (with and without derivatives)]

[section:bad_guess The Effect of a Poor Initial Guess]

It's instructive to take our "toy" example algorithms and use deliberately bad initial guesses to see how the
various root-finding algorithms fare.
We'll start with the cube root, using the cube root of 500 as the test case:

[table
[[Initial Guess=][-500% ([approx]1.323)][-100% ([approx]3.97)][-50% ([approx]3.96)][-20% ([approx]6.35)][-10% ([approx]7.14)][-5% ([approx]7.54)][5% ([approx]8.33)][10% ([approx]8.73)][20% ([approx]9.52)][50% ([approx]11.91)][100% ([approx]15.87)][500% ([approx]47.6)]]
[[bracket_and_solve_root][12][8][8][10][11][11][11][11][11][11][7][13]]
[[newton_iterate][12][7][7][5][5][4][4][5][5][6][7][9]]
[[halley_iterate][7][4][4][3][3][3][3][3][3][4][4][6]]
[[schroder_iterate][11][6][6][4][3][3][3][3][4][5][5][8]]
]

As you can see, `bracket_and_solve_root` is relatively insensitive to the starting location - as long as you don't start many orders of magnitude away from the root, it will
take roughly the same number of steps to bracket the root and solve it. On the other hand, the derivative-based methods are slow to start, but once they have some digits
correct they increase precision exceptionally fast: they are therefore quite sensitive to the initial starting location.

The next table shows the number of iterations required to find the second radius of an ellipse with first radius 50 and arc length 500:

[table
[[Initial Guess=][-500% ([approx]20.6)][-100% ([approx]61.81)][-50% ([approx]61.81)][-20% ([approx]98.9)][-10% ([approx]111.3)][-5% ([approx]117.4)][5% ([approx]129.8)][10% ([approx]136)][20% ([approx]148.3)][50% ([approx]185.4)][100% ([approx]247.2)][500% ([approx]741.7)]]
[[bracket_and_solve_root][11][5][5][8][8][7][7][8][9][8][6][10]]
[[newton_iterate][4][4][4][3][3][3][3][3][3][4][4][4]]
[[halley_iterate][4][3][3][3][3][2][2][3][3][3][3][3]]
[[schroder_iterate][4][3][3][3][3][2][2][3][3][3][3][3]]
]

Interestingly, this function is much more resistant to a poor initial guess when using derivatives.
[endsect]

[section:bad_roots Examples Where Root Finding Goes Wrong]

There are many reasons why root finding can fail; here are just a few of the more common examples:

[h3 Local Minima]

If you start in the wrong place, such as z[sub 0] here:

[$../roots/bad_root_1.svg]

then almost any root-finding algorithm will descend into a local minimum rather than find the root.

[h3 Flatlining]

In this example, we're starting from a location (z[sub 0]) where the first derivative is essentially zero:

[$../roots/bad_root_2.svg]

In this situation the next iteration will shoot off to infinity (assuming we're using derivatives, that is). Our
code guards against this by insisting that the root is always bracketed, and then never stepping outside those bounds.
In a case like this, no root-finding algorithm can do better than bisecting until the root is found.

Note that there is no scale on the graph: we have seen examples of this situation occur in practice ['even when
several decimal places of the initial guess z[sub 0] are correct.]

This is really a special case of a more common situation where root finding with derivatives is ['divergent]. Consider
starting at z[sub 0] in this case:

[$../roots/bad_root_4.svg]

An initial Newton step would take you further from the root than you started, as will all subsequent steps.

[h3 Micro-stepping / Non-convergence]

Consider starting at z[sub 0] in this situation:

[$../roots/bad_root_3.svg]

The first derivative is essentially infinite, and the second close to zero (and so offers no correction if we use it);
as a result we take a very small first step. In the worst case, the first step is so small
- perhaps even so small that subtracting it from z[sub 0] has no effect at the current working precision - that our algorithm
will assume we are at the root already and terminate.
Otherwise we will take lots of very small steps which never converge
on the root: our algorithms protect against that by reverting to bisection.

An example of this situation would be trying to find the root of e[super -1/z[super 2]] - this function has a single
root at ['z = 0], but for ['z[sub 0] < 0] neither Newton nor Halley steps will ever converge on the root, and for ['z[sub 0] > 0]
the steps are actually divergent.

[endsect]

[/
  Copyright 2015 John Maddock and Paul A. Bristow.
  Distributed under the Boost Software License, Version 1.0.
  (See accompanying file LICENSE_1_0.txt or copy at
  http://www.boost.org/LICENSE_1_0.txt).
]