
Lines Matching +full:float +full:- +full:divide +full:- +full:by +full:- +full:zero

1 /* Math module -- standard C math library functions, pi and e */
6 special values, IEEE-754 floating-point exceptions, and Python
12 large to approximate by a machine float, overflow is signaled and the
16 small to approximate by a machine float, underflow is signaled and the
17 result is a zero (with the appropriate sign).
20 approaches x exists and is an infinity), "divide by zero" is signaled
22 complicated a little by the fact that the left-side and right-side limits may
23 not be the same; e.g., 1/x approaches +inf or -inf as x approaches 0
25 sign of the zero determines the result of 1/0.
36 For #2, return a zero (with the appropriate sign if that happens by
37 accident ;-)).
41 raised for division by zero and mod by zero.
46 In general, on an IEEE-754 platform the aim is to follow the C99
48 standard recommends raising the 'divide-by-zero' or 'invalid'
49 floating-point exceptions, Python should raise a ValueError. Where
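The error-signal mapping described in these comments (rules #1–#3 above) is directly observable from Python; a quick sketch of the three cases:

```python
import math

# 'overflow': result too large for a machine float -> OverflowError
try:
    math.exp(1000.0)
except OverflowError:
    print("overflow -> OverflowError")

# 'invalid': no useful result exists -> ValueError
try:
    math.sqrt(-1.0)
except ValueError:
    print("invalid -> ValueError")

# Underflow is not an error: the result is a zero with the
# appropriate sign, and no exception is raised.
print(math.exp(-1000.0))   # 0.0
```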
70 754-2008 for finite arguments, but not for infinities or nans.
94 r = cos(pi*(y-0.5)); in m_sinpi()
97 /* N.B. -sin(pi*(y-1.0)) is *not* equivalent: it would give in m_sinpi()
98 -0.0 instead of 0.0 when y == 1.0. */ in m_sinpi()
99 r = sin(pi*(1.0-y)); in m_sinpi()
102 r = -cos(pi*(y-1.5)); in m_sinpi()
105 r = sin(pi*(y-2.0)); in m_sinpi()
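The branches matched above come from m_sinpi's reduction of sin(pi*x) to a quadrant computation; a Python sketch of the same scheme (note that Python's round() ties to even where C's round() ties away from zero, which only moves exact-boundary cases between two equally valid branches):

```python
import math

def sinpi(x):
    """Accurate sin(pi*x) for finite x, following the branch
    structure of m_sinpi (a sketch, not the CPython source)."""
    y = math.fmod(abs(x), 2.0)      # reduce to [0, 2)
    n = round(2.0 * y)              # nearest half-integer index, 0..4
    if n == 0:
        r = math.sin(math.pi * y)
    elif n == 1:
        r = math.cos(math.pi * (y - 0.5))
    elif n == 2:
        # sin(pi*(1-y)), not -sin(pi*(y-1)): preserves +0.0 at y == 1.0
        r = math.sin(math.pi * (1.0 - y))
    elif n == 3:
        r = -math.cos(math.pi * (y - 1.5))
    else:  # n == 4
        r = math.sin(math.pi * (y - 2.0))
    return math.copysign(1.0, x) * r
```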
113 /* Implementation of the real gamma function. In extensive but non-exhaustive
115 entire float domain. Note that accuracy may depend on the quality of the
122 used by the Boost library. Following Boost (again), we re-express the
125 double-checked against the coefficients in the Boost source code.
130 Gamma(x) involves computing pow(x+g-0.5, x-0.5) / exp(x+g-0.5). For many x
131 values, x+g-0.5 can be represented exactly. However, in cases where it
132 can't be represented exactly the small error in x+g-0.5 can be magnified
133 significantly by the pow and exp calls, especially for large x. A cheap
134 correction is to multiply by (1 + e*g/(x+g-0.5)), where e is the error
135 involved in the computation of x+g-0.5 (that is, e = computed value of
136 x+g-0.5 - exact value of x+g-0.5). Here's the proof:
139 -----------------
140 Write x+g-0.5 = y-e, where y is exactly representable as an IEEE 754
143 pow(x+g-0.5,x-0.5)/exp(x+g-0.5) = pow(y-e, x-0.5)/exp(y-e)
144 = pow(y, x-0.5)/exp(y) * C,
146 where the correction_factor C is given by
148 C = pow(1-e/y, x-0.5) * exp(e)
150 Since e is tiny, pow(1-e/y, x-0.5) ~ 1-(x-0.5)*e/y, and exp(e) ~ 1+e, so:
152 C ~ (1-(x-0.5)*e/y) * (1+e) ~ 1 + e*(y-(x-0.5))/y
154 But y-(x-0.5) = g+e, and g+e ~ g. So we get C ~ 1 + e*g/y, and
156 pow(x+g-0.5,x-0.5)/exp(x+g-0.5) ~ pow(y, x-0.5)/exp(y) * (1 + e*g/y),
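The error term e is itself computable in floating point: when the larger of the two addends is known, Dekker's Fast2Sum step recovers the rounding error of an addition exactly. A sketch (the value of lanczos_g is assumed here to be the constant CPython uses; the q/z code further down in m_tgamma is exactly this branch split applied to y = absx + lanczos_g_minus_half):

```python
from fractions import Fraction

# lanczos_g - 0.5; the g value is an assumption matching CPython's constant
LANCZOS_G_MINUS_HALF = 6.024680040776729583740234375 - 0.5

def sum_and_error(x):
    """Return y = fl(x + g - 0.5) and e = y - (x + g - 0.5), exactly.

    Fast2Sum: when |a| >= |b|, the rounding error of fl(a + b) is
    recoverable as (s - a) - b with no further rounding error.
    """
    gmh = LANCZOS_G_MINUS_HALF
    y = x + gmh
    if abs(x) >= gmh:
        e = (y - x) - gmh      # x is the larger operand
    else:
        e = (y - gmh) - x      # gmh is the larger operand
    return y, e
```

The Fraction-based check below verifies the "exact" claim against rational arithmetic.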
189 /* denominator is x*(x+1)*...*(x+LANCZOS_N-2) */
215 function by x**(1-LANCZOS_N) and treat this as a in lanczos_sum()
221 for (i = LANCZOS_N; --i >= 0; ) { in lanczos_sum()
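The two evaluation orders in lanczos_sum() compute the same ratio: dividing numerator and denominator by x**(LANCZOS_N-1) and running Horner in 1/x keeps the intermediates bounded for large x. A generic sketch with made-up coefficients (the real Lanczos arrays are not reproduced here):

```python
def rational_eval(num_coeffs, den_coeffs, x, cutoff=5.0):
    """Evaluate p(x)/q(x), coefficients given lowest degree first.
    For x >= cutoff, evaluate both polynomials divided by
    x**(N-1) via Horner in 1/x, as lanczos_sum() does."""
    num = den = 0.0
    if x < cutoff:
        # ordinary Horner, highest coefficient first
        for a, b in zip(reversed(num_coeffs), reversed(den_coeffs)):
            num = num * x + a
            den = den * x + b
    else:
        # Horner in 1/x: both results are scaled by x**(1-N),
        # which cancels in the ratio
        for a, b in zip(num_coeffs, den_coeffs):
            num = num / x + a
            den = den / x + b
    return num / den
```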
235 /* Constant for +infinity, generated in the same way as float('inf'). */
247 /* Constant nan value, generated in the same way as float('nan'). */
275 return Py_NAN; /* tgamma(-inf) = nan, invalid */ in m_tgamma()
280 /* tgamma(+-0.0) = +-inf, divide-by-zero */ in m_tgamma()
291 return gamma_integral[(int)x - 1]; in m_tgamma()
296 if (absx < 1e-20) { in m_tgamma()
304 x > 200, and underflows to +-0.0 for x < -200, not a negative in m_tgamma()
319 /* note: the correction can be foiled by an optimizing in m_tgamma()
321 a + b - a - b can be optimized to 0.0. This shouldn't in m_tgamma()
322 happen in a standards-conforming compiler. */ in m_tgamma()
323 double q = y - absx; in m_tgamma()
324 z = q - lanczos_g_minus_half; in m_tgamma()
327 double q = y - lanczos_g_minus_half; in m_tgamma()
328 z = q - absx; in m_tgamma()
332 r = -pi / m_sinpi(absx) / absx * exp(y) / lanczos_sum(absx); in m_tgamma()
333 r -= z * r; in m_tgamma()
335 r /= pow(y, absx - 0.5); in m_tgamma()
338 sqrtpow = pow(y, absx / 2.0 - 0.25); in m_tgamma()
347 r *= pow(y, absx - 0.5); in m_tgamma()
350 sqrtpow = pow(y, absx / 2.0 - 0.25); in m_tgamma()
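The special cases above, as they surface through math.gamma (NaN and divide-by-zero results are turned into ValueError by the wrapper layer, overflow into OverflowError):

```python
import math

print(math.gamma(5.0))             # 24.0: small integers come from a table
print(math.gamma(float('inf')))    # inf

for bad in (0.0, -1.0, float('-inf')):
    try:
        math.gamma(bad)
    except ValueError:
        # poles at 0 and negative integers; nan at -inf
        print("gamma(%r) -> ValueError" % (bad,))

try:
    math.gamma(1000.0)             # exp(y) overflows for large x
except OverflowError:
    print("gamma(1000.0) -> OverflowError")
```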
376 return Py_HUGE_VAL; /* lgamma(+-inf) = +inf */ in m_lgamma()
382 errno = EDOM; /* lgamma(n) = inf, divide-by-zero for */ in m_lgamma()
391 /* tiny arguments: lgamma(x) ~ -log(fabs(x)) for small x */ in m_lgamma()
392 if (absx < 1e-20) in m_lgamma()
393 return -log(absx); in m_lgamma()
395 /* Lanczos' formula. We could save a fraction of a ulp in accuracy by in m_lgamma()
397 absorbed the exp(-lanczos_g) term, and throwing out the lanczos_g in m_lgamma()
399 r = log(lanczos_sum(absx)) - lanczos_g; in m_lgamma()
400 r += (absx - 0.5) * (log(absx + lanczos_g - 0.5) - 1); in m_lgamma()
403 r = logpi - log(fabs(m_sinpi(absx))) - log(absx) - r; in m_lgamma()
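And the lgamma counterparts: exact zeros at 1 and 2, ValueError at the poles, and the sinpi-based reflection formula for negative non-integers:

```python
import math

print(math.lgamma(1.0), math.lgamma(2.0))   # 0.0 0.0 (exact special cases)
print(math.lgamma(5.0))                     # log(4!) = log(24)

for pole in (0.0, -1.0, -2.0):
    try:
        math.lgamma(pole)
    except ValueError:
        # lgamma(n) = inf, divide-by-zero, for integers n <= 0
        print("lgamma(%r) -> ValueError" % (pole,))

# gamma(-0.5) = -2*sqrt(pi), so lgamma(-0.5) = log(2*sqrt(pi))
print(math.lgamma(-0.5))
```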
417 combined with the relations erf(-x) = -erf(x) and erfc(x) = 1.0 - erf(x),
422 erf(x) = x*exp(-x*x)/sqrt(pi) * [
425 The coefficient of x**(2k-2) here is 4**k*factorial(k)/factorial(2*k).
430 erfc(x) = x*exp(-x*x)/sqrt(pi) * [1/(0.5 + x**2 -) 0.5/(2.5 + x**2 -)
431 3.0/(4.5 + x**2 -) 7.5/(6.5 + x**2 -) ...]
435 k*(k-0.5)/(2*k+0.5 + x**2 - ...).
441 example, erfc(30.0) is approximately 2.56e-393).
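The series half is short enough to sketch directly; this mirrors m_erf_series, summing the power series by Horner's rule from the inside out using the term ratio x**2/(k + 0.5):

```python
import math

def erf_series(x, nterms=25):
    """Power-series approximation to erf(x), accurate for small |x|
    (CPython uses it for |x| < 1.5; a sketch without the errno care)."""
    x2 = x * x
    acc = 0.0
    fk = nterms + 0.5
    for _ in range(nterms):
        acc = 2.0 + x2 * acc / fk   # accumulate nested series terms
        fk -= 1.0
    return acc * x * math.exp(-x2) / math.sqrt(math.pi)
```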
456 Given a finite float x, return an approximation to erf(x).
471 fk -= 1.0; in m_erf_series()
476 result = acc * x * exp(-x2) / sqrtpi; in m_erf_series()
484 Given a positive float x, return an approximation to erfc(x). Converges
488 than the smallest representable nonzero float. */
509 temp = p; p = b*p - a*p_last; p_last = temp; in m_erfc_contfrac()
510 temp = q; q = b*q - a*q_last; q_last = temp; in m_erfc_contfrac()
512 /* Issue #8986: On some platforms, exp sets errno on underflow to zero; in m_erfc_contfrac()
515 result = p / q * x * exp(-x2) / sqrtpi; in m_erfc_contfrac()
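A sketch of the corresponding continued-fraction evaluation for the larger-x regime (mirroring m_erfc_contfrac, without the errno bookkeeping around exp underflow):

```python
import math

def erfc_contfrac(x, nterms=25):
    """Continued-fraction approximation to erfc(x) for positive x.
    Numerator p and denominator q follow the standard three-term
    recurrence w(k) = b(k)*w(k-1) - a(k)*w(k-2)."""
    if x >= 30.0:
        return 0.0                 # smaller than the smallest nonzero float
    x2 = x * x
    a, da = 0.0, 0.5
    p, p_last = 1.0, 0.0
    q, q_last = da + x2, 1.0
    for _ in range(nterms):
        a += da                    # a(k) = k*(k - 0.5), built incrementally
        da += 2.0
        b = da + x2                # b(k) = 2*k + 0.5 + x**2
        p, p_last = b * p - a * p_last, p
        q, q_last = b * q - a * q_last, q
    return p / q * x * math.exp(-x2) / math.sqrt(math.pi)
```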
539 return x > 0.0 ? 1.0 - cf : cf - 1.0; in m_erf()
558 return 1.0 - m_erf_series(x); in m_erfc()
561 return x > 0.0 ? cf : 2.0 - cf; in m_erfc()
582 /* atan2(+-inf, +inf) == +-pi/4 */ in m_atan2()
585 /* atan2(+-inf, -inf) == +-pi*3/4 */ in m_atan2()
588 /* atan2(+-inf, x) == +-pi/2 for finite x */ in m_atan2()
593 /* atan2(+-y, +inf) = atan2(+-0, +x) = +-0. */ in m_atan2()
596 /* atan2(+-y, -inf) = atan2(+-0., -x) = +-pi. */ in m_atan2()
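The atan2 special cases enumerated above, checked through Python's math.atan2 (the wrapper returns exact multiples of pi rather than deferring to libm):

```python
import math

inf = float('inf')

print(math.atan2(inf, inf))     # pi/4
print(math.atan2(-inf, inf))    # -pi/4
print(math.atan2(inf, -inf))    # 3*pi/4
print(math.atan2(inf, 7.0))     # pi/2, for any finite x
print(math.atan2(3.0, inf))     # 0.0, like atan2(+0., +x)
print(math.atan2(3.0, -inf))    # pi,  like atan2(+0., -x)
```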
603 /* IEEE 754-style remainder operation: x - n*y where n*y is the nearest
605 binary floating-point format, the result is always exact. */
628 m with the complement c = absy - m: m < 0.5*absy if and only if m < in m_remainder()
629 c, and so on. The catch is that absy - m might also not be in m_remainder()
632 - if m > 0.5*absy then absy - m is exactly representable, by in m_remainder()
634 - if m == 0.5*absy then again absy - m is exactly representable in m_remainder()
636 - if m < 0.5*absy then either (i) 0.5*absy is exactly representable, in m_remainder()
637 in which case 0.5*absy < absy - m, so 0.5*absy <= c and hence m < in m_remainder()
639 binade. Then absy - m is exactly representable and again m < c. in m_remainder()
642 c = absy - m; in m_remainder()
647 r = -c; in m_remainder()
658 return -m. in m_remainder()
662 0.5 * (absx - m) = (n/2) * absy in m_remainder()
667 fmod(0.5 * (absx - m), absy) = | in m_remainder()
670 Now m - 2.0 * fmod(...) gives the desired result: m in m_remainder()
671 if n is even, -m if n is odd. in m_remainder()
673 Note that all steps in fmod(0.5 * (absx - m), absy) in m_remainder()
678 r = m - 2.0 * fmod(0.5 * (absx - m), absy); in m_remainder()
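The observable behavior of this IEEE 754 remainder: the result is exact, n is the integer nearest to x/y, and halfway cases round n to even:

```python
import math

# x - n*y with n the nearest integer to x/y; ties round n to even.
print(math.remainder(5.0, 2.0))    # 1.0   (x/y = 2.5: tie, n = 2)
print(math.remainder(3.0, 2.0))    # -1.0  (x/y = 1.5: tie, n = 2)
print(math.remainder(1.0, 2.0))    # 1.0   (x/y = 0.5: tie, n = 0)
print(math.remainder(-3.0, 2.0))   # 1.0   (odd function in x)
```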
700 log(-ve), log(NaN). Here are wrappers for log and log10 that deal with
701 special values directly, passing positive non-special values through to
713 return -Py_HUGE_VAL; /* log(0) = -inf */ in m_log()
715 return Py_NAN; /* log(-ve) = nan */ in m_log()
723 return Py_NAN; /* log(-inf) = nan */ in m_log()
747 return Py_NAN; /* log2(-inf) = nan, invalid-operation */ in m_log2()
765 return log(2.0 * m) / log(2.0) + (e - 1); in m_log2()
774 return -Py_HUGE_VAL; /* log2(0) = -inf, divide-by-zero */ in m_log2()
778 return Py_NAN; /* log2(-inf) = nan, invalid-operation */ in m_log2()
790 return -Py_HUGE_VAL; /* log10(0) = -inf */ in m_log10()
792 return Py_NAN; /* log10(-ve) = nan */ in m_log10()
800 return Py_NAN; /* log10(-inf) = nan */ in m_log10()
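Because the wrappers handle zeros, negatives, and -inf before libm sees them, all three log variants raise ValueError uniformly; and the frexp-based reduction in m_log2 keeps exact powers of two exact:

```python
import math

for f in (math.log, math.log2, math.log10):
    for bad in (0.0, -1.0, float('-inf')):
        try:
            f(bad)
        except ValueError:
            pass   # log(0), log(-ve), log(-inf) all raise ValueError

print(math.log2(2.0 ** 900))    # 900.0, exactly
```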
844 assert(errno); /* non-zero errno is a precondition for calling */ in is_error()
855 * should return a zero on underflow, and +- HUGE_VAL on in is_error()
856 * overflow, so testing the result for zero suffices to in is_error()
861 * to zero. So to be safe, we'll ignore ERANGE whenever the in is_error()
884 - a NaN result from non-NaN inputs causes ValueError to be raised
885 - an infinite result from finite inputs causes OverflowError to be
888 - if the result is finite and errno == EDOM then ValueError is
890 - if the result is finite and nonzero and errno == ERANGE then
896 For the majority of one-argument functions these rules are enough
898 of the C99 standard, with the 'invalid' and 'divide-by-zero'
899 floating-point exceptions mapping to Python's ValueError and the
900 'overflow' floating-point exception mapping to OverflowError.
913 if (x == -1.0 && PyErr_Occurred()) in math_1_to_whatever()
941 set errno properly (that is, errno = EDOM for invalid or divide-by-zero,
949 if (x == -1.0 && PyErr_Occurred()) in math_1a()
968 - a NaN result from non-NaN inputs causes ValueError to be raised
969 - an infinite result from finite inputs causes OverflowError to be
971 - if the result is finite and errno == EDOM then ValueError is
973 - if the result is finite and nonzero and errno == ERANGE then
979 For most two-argument functions (copysign, fmod, hypot, atan2)
982 'divide-by-zero' floating-point exceptions mapping to Python's
983 ValueError and the 'overflow' floating-point exception mapping to
1008 if ((x == -1.0 || y == -1.0) && PyErr_Occurred()) in math_2()
1051 "acos($module, x, /)\n--\n\n"
1054 "acosh($module, x, /)\n--\n\n"
1057 "asin($module, x, /)\n--\n\n"
1060 "asinh($module, x, /)\n--\n\n"
1063 "atan($module, x, /)\n--\n\n"
1066 "atan2($module, y, x, /)\n--\n\n"
1070 "atanh($module, x, /)\n--\n\n"
1103 "copysign($module, x, y, /)\n--\n\n"
1104 "Return a float with the magnitude (absolute value) of x but the sign of y.\n\n"
1105 "On platforms that support signed zeros, copysign(1.0, -0.0)\n"
1106 "returns -1.0.\n")
1108 "cos($module, x, /)\n--\n\n"
1111 "cosh($module, x, /)\n--\n\n"
1114 "erf($module, x, /)\n--\n\n"
1117 "erfc($module, x, /)\n--\n\n"
1120 "exp($module, x, /)\n--\n\n"
1123 "expm1($module, x, /)\n--\n\n"
1124 "Return exp(x)-1.\n\n"
1126 "evaluation of exp(x)-1 for small x.")
1128 "fabs($module, x, /)\n--\n\n"
1129 "Return the absolute value of the float x.")
1161 "gamma($module, x, /)\n--\n\n"
1164 "lgamma($module, x, /)\n--\n\n"
1167 "log1p($module, x, /)\n--\n\n"
1169 "The result is computed in a way which is accurate for x near zero.")
1171 "remainder($module, x, y, /)\n--\n\n"
1173 "Return x - n*y where n*y is the closest integer multiple of y.\n"
1177 "sin($module, x, /)\n--\n\n"
1180 "sinh($module, x, /)\n--\n\n"
1183 "sqrt($module, x, /)\n--\n\n"
1186 "tan($module, x, /)\n--\n\n"
1189 "tanh($module, x, /)\n--\n\n"
1192 /* Precision summation function as msum() by Raymond Hettinger in
1199 but the current implementation does not re-establish special
1200 value semantics across iterations (i.e. handling -Inf + Inf).
1203 therefore, sum([1e+308, 1e-308, 1e+308]) returns 1e+308 while
1204 sum([1e+308, 1e+308, 1e-308]) raises an OverflowError due to the
1210 regular doubles instead of extended long precision (80-bit) values. This
1212 can be resolved exactly into double-sized hi and lo values. As long as the
1215 exactly zero and therefore can be losslessly stored back into a double,
1225 accurate result returned by sum(itertools.chain(seq1, seq2)).
1230 /* Extend the partials array p[] by doubling its size. */
1231 static int /* non-zero on error */
1261 partials = [] # sorted, non-overlapping partial sums
1268 lo = y - (hi - x)
1281 result (using the round-half-to-even rule). The items in partials remain
1282 non-zero, non-special, non-overlapping and strictly increasing in
1285 Depends on IEEE 754 arithmetic guarantees and half-even rounding.
1296 Assumes IEEE-754 floating point arithmetic.
1338 yr = hi - x; in math_fsum()
1339 lo = y - yr; in math_fsum()
1373 "-inf + inf in fsum"); in math_fsum()
1381 hi = p[--n]; in math_fsum()
1386 y = p[--n]; in math_fsum()
1389 yr = hi - x; in math_fsum()
1390 lo = y - yr; in math_fsum()
1394 /* Make half-even rounding work across multiple partials. in math_fsum()
1395 Needed so that sum([1e-16, 1, 1e16]) will round-up the last in math_fsum()
1396 digit to two instead of down to zero (the 1e-16 makes the 1 in math_fsum()
1398 error fixed-up, math.fsum() can guarantee commutativity. */ in math_fsum()
1399 if (n > 0 && ((lo < 0.0 && p[n-1] < 0.0) || in math_fsum()
1400 (lo > 0.0 && p[n-1] > 0.0))) { in math_fsum()
1403 yr = x - hi; in math_fsum()
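The pure-Python msum recipe that the C implementation is modeled on (without the C version's special-value handling and the half-even fix-up described in the comments above):

```python
def msum(iterable):
    """Full-precision summation via Shewchuk-style partials (sketch)."""
    partials = []                  # sorted, non-overlapping partial sums
    for x in iterable:
        i = 0
        for y in partials:
            if abs(x) < abs(y):
                x, y = y, x
            hi = x + y
            lo = y - (hi - x)      # exact rounding error of hi = x + y
            if lo:
                partials[i] = lo
                i += 1
            x = hi
        partials[i:] = [x]
    return sum(partials, 0.0)
```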
1422 * Equivalent to floor(lg(x))+1. Also equivalent to: bitwidth_of_type -
1449 n &= n - 1; /* clear least significant bit */ in count_set_bits()
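The bit trick matched above is Kernighan's popcount: each n &= n - 1 clears exactly one set bit, so the loop runs once per 1-bit. A direct Python transcription:

```python
def count_set_bits(n):
    """Number of 1-bits in n (n >= 0)."""
    count = 0
    while n:
        n &= n - 1     # clear least significant set bit
        count += 1
    return count
```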
1454 /* Divide-and-conquer factorial algorithm
1456 * Based on the formula and pseudo-code provided at:
1463 * ----------------------
1485 * Each term can be computed from the last by multiplying by the extra odd
1487 * we multiply by (11 * 13 * 15 * 17 * 19).
1491 * each subterm in the product for i, we multiply that subterm by 2**i:
1508 * The two-valuation k (i.e., the exponent of the largest power of 2 dividing
1510 * function. By standard results, its value is:
1514 * It can be shown (e.g., by complete induction on n) that two_valuation is
1515 * equal to n - count_set_bits(n), where count_set_bits(n) gives the number of
1516 * '1'-bits in the binary expansion of n.
1520 * divide and conquer. Assumes start and stop are odd and stop > start.
1521 * max_bits must be >= bit_length(stop - 2). */
1536 * conveniently the value returned by bit_length(z). The in factorial_partial_product()
1541 * We know that stop - 2 is the largest number to be multiplied. From in factorial_partial_product()
1543 * bit_length(stop - 2) in factorial_partial_product()
1546 num_operands = (stop - start) / 2; in factorial_partial_product()
1560 bit_length(midpoint - 2)); in factorial_partial_product()
1590 for (i = bit_length(n) - 2; i >= 0; i--) { in factorial_odd_part()
1600 partial = factorial_partial_product(lower, upper, bit_length(upper-2)); in factorial_odd_part()
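The whole scheme can be sketched compactly: build the odd part as a product of balanced partial products over the ranges (n >> (i+1), n >> i], then shift left by the two-valuation n - count_set_bits(n). This is a simplified sketch of the algorithm (it omits the bit-count tuning the C code uses to pick multiplication order):

```python
def partial_product(start, stop):
    """Product of odd integers in [start, stop); start and stop odd.
    Splits at an odd midpoint to keep operand sizes balanced."""
    if start >= stop:
        return 1
    if stop - start <= 4:          # at most two factors: multiply directly
        total = start
        for k in range(start + 2, stop, 2):
            total *= k
        return total
    mid = (start + (stop - start) // 2) | 1
    return partial_product(start, mid) * partial_product(mid, stop)

def factorial(n):
    """n! = odd_part(n) << (n - popcount(n))."""
    if n < 2:
        return 1
    inner = outer = 1
    upper = 3
    for i in range(n.bit_length() - 2, -1, -1):
        v = n >> i
        if v > 2:
            lower = upper
            upper = (v + 1) | 1    # least odd integer > n >> i
            inner *= partial_product(lower, upper)
        outer *= inner
    return outer << (n - bin(n).count("1"))
```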
1650 Raise a ValueError if x is negative or non-integral.
1678 if (x == -1 && PyErr_Occurred()) { in math_factorial()
1687 else if (overflow == -1 || x < 0) { in math_factorial()
1702 two_valuation = PyLong_FromLong(x - count_set_bits(x)); in math_factorial()
1732 if (Py_TYPE(x)->tp_dict == NULL) { in math_trunc()
1742 Py_TYPE(x)->tp_name); in math_trunc()
1759 m is a float and e is an int, such that x = m * 2.**e.
1806 if (exp == -1 && PyErr_Occurred()) in math_ldexp_impl()
1826 /* underflow to +-0 */ in math_ldexp_impl()
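The m, e decomposition and its inverse round-trip exactly, since both only move the binary exponent:

```python
import math

m, e = math.frexp(40.0)
print(m, e)                      # 0.625 6, since 40.0 == 0.625 * 2**6
print(math.ldexp(m, e))          # 40.0, recovered exactly

# for any nonzero finite x, the mantissa satisfies 0.5 <= abs(m) < 1
m, e = math.frexp(-3.14)
print(0.5 <= abs(m) < 1.0)       # True
```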
1878 do that by itself -- loghelper can. func is log or log10, and name is
1894 /* Negative or zero inputs give a ValueError. */ in loghelper()
1902 if (x == -1.0 && PyErr_Occurred()) { in loghelper()
1909 if (x == -1.0 && PyErr_Occurred()) in loghelper()
1920 /* Else let libm handle it by itself. */ in loghelper()
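loghelper is what lets the log functions accept integers far too large for a float: it splits off the exponent (log(m * 2**e) = log(m) + e*log(2)) instead of converting to double first:

```python
import math

n = 2 ** 5000                  # float(n) would raise OverflowError
print(math.log(n))             # close to 5000 * log(2)
print(math.log2(n))            # 5000.0, exactly
```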
2015 /* fmod(x, +/-Inf) returns x for finite x. */ in math_fmod_impl()
2050 /* hypot(x, +/-Inf) returns Inf, even if x is a NaN. */ in math_hypot_impl()
2081 e.g. 0.**-5. (for which ValueError needs to be raised.)
2125 r = -y; /* result is +inf */ in math_pow_impl()
2126 if (x == 0.) /* 0**-inf: divide-by-zero */ in math_pow_impl()
2139 /* a NaN result should arise only from (-ve)**(finite in math_pow_impl()
2140 non-integer); in this case we want to raise ValueError. */ in math_pow_impl()
2147 (A) (+/-0.)**negative (-> divide-by-zero) in math_pow_impl()
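The fmod, hypot, and pow special cases above, seen from Python:

```python
import math

inf = float('inf')
nan = float('nan')

print(math.fmod(3.0, inf))       # 3.0: fmod(x, +/-inf) is x for finite x
print(math.hypot(nan, inf))      # inf, even though one argument is a NaN

try:
    math.pow(0.0, -5.0)          # (+/-0.)**negative: divide-by-zero
except ValueError:
    print("pow(0.0, -5.0) -> ValueError")
```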
2255 math.isclose -> bool
2260 rel_tol: double = 1e-09
2274 -inf, inf and NaN behave similarly to the IEEE 754 Standard. That
2275 is, NaN is not close to anything, even itself. inf and -inf are
2289 "tolerances must be non-negative"); in math_isclose_impl()
2290 return -1; in math_isclose_impl()
2294 /* short circuit exact equality -- needed to catch two infinities of in math_isclose_impl()
2303 Two infinities of the same sign are caught by the equality check in math_isclose_impl()
2315 diff = fabs(b - a); in math_isclose_impl()
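A pure-Python equivalent of the symmetric closeness test implemented here, including the special-value short circuits noted in the comments:

```python
import math

def isclose(a, b, rel_tol=1e-09, abs_tol=0.0):
    """Symmetric test: |a-b| within rel_tol of the larger magnitude,
    or within abs_tol absolutely (a sketch of math.isclose)."""
    if rel_tol < 0.0 or abs_tol < 0.0:
        raise ValueError("tolerances must be non-negative")
    if a == b:
        return True      # also catches two infinities of the same sign
    if math.isinf(a) or math.isinf(b):
        return False     # one inf vs anything else is never close
    diff = abs(b - a)    # NaN propagates here and fails both comparisons
    return (diff <= abs(rel_tol * b) or
            diff <= abs(rel_tol * a) or
            diff <= abs_tol)
```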
2375 "mathematical functions defined by the C standard.");
2382 -1,