#3.0.1
#3.1.1
#3.2.0
3.2.4
#5745:37f59e65eb6c
5891:d8652709345d  # introduce AVX
#5893:24b4dc92c6d3  # merge
5895:997c2ef9fc8b  # introduce FMA
#5904:e1eafd14eaa1  # complex and AVX
5908:f8ee3c721251  # improve packing with ptranspose
#5921:ca808bb456b0  # merge
#5927:8b1001f9e3ac
5937:5a4ca1ad8c53  # New gebp kernel handling up to 3 packets x 4 register-level blocks
#5949:f3488f4e45b2  # merge
#5969:e09031dccfd9  # Disable 3pX4 kernel on Altivec
#5992:4a429f5e0483  # merge
before-evaluators
#6334:f6a45e5b8b7c  # Implement evaluator for sparse outer products
#6639:c9121c60b5c7
#6655:06f163b5221f  # Properly detect FMA support on ARM
#6677:700e023044e7  # FMA has been wrongly disabled
#6681:11d31dafb0e3
#6699:5e6e8e10aad1  # merge default to tensors
#6726:ff2d2388e7b9  # merge default to tensors
#6742:0cbd6195e829  # merge default to tensors
#6747:853d2bafeb8f  # Generalized the gebp apis
6765:71584fd55762  # Made the blocking computation aware of the l3 cache; Also optimized the blocking parameters to take into account the number of threads used for a computation
#6781:9cc5a931b2c6  # generalized gemv
#6792:f6e1daab600a  # ensured that contractions that can be reduced to a matrix vector product
#6844:039efd86b75c  # merge tensor
6845:7333ed40c6ef  # change prefetching in gebp
#6856:b5be5e10eb7f  # merge index conversion
#6893:c3a64aba7c70  # clean blocking size computation
#6898:6fb31ebe6492  # rotating kernel for ARM
6899:877facace746  # rotating kernel for ARM only
#6904:c250623ae9fa  # result_of
6921:915f1b1fc158  # fix prefetching change for ARM
6923:9ff25f6dacc6  # prefetching
6933:52572e60b5d3  # blocking size strategy
6937:c8c042f286b2  # avoid redundant pack_rhs
6981:7e5d6f78da59  # dynamic loop swapping
6984:45f26866c091  # rm dynamic loop swapping, adjust lhs's micro panel height to fully exploit L1 cache
6986:a675d05b6f8f  # blocking heuristic: block on the rhs in L1 if the lhs fit in L1.
7013:f875e75f07e5  # organize a little our default cache sizes, and use a saner default L1 outside of x86 (10% faster on Nexus 5)
7015:8aad8f35c955  # Refactor computeProductBlockingSizes to make room for the possibility of using lookup tables
7016:a58d253e8c91  # Polish lookup tables generation
7018:9b27294a8186  # actual_panel_rows computation should always be resilient to parameters not consistent with the known L1 cache size, see comment
7019:c758b1e2c073  # Provide an empirical lookup table for blocking sizes measured on a Nexus 5. Only for float, only for Android on ARM 32bit for now.
7085:627e039fba68  # Bug 986: add support for coefficient-based product with 0 depth.
7098:b6f1db9cf9ec  # Bug 992: don't select a 3p GEMM path with non-vectorizable scalar types, this hits unsupported paths in symm/triangular products code
7591:09a8e2186610  # 3.3-alpha1
7650:b0f3c8f43025  # help clang inlining
#8744:74b789ada92a  # Improved the matrix multiplication blocking in the case where mr is not a power of 2 (e.g on Haswell CPUs)
8789:efcb912e4356  # Made the index type a template parameter to evaluateProductBlockingSizes. Use numext::mini and numext::maxi instead of std::min/std::max to compute blocking sizes
8972:81d53c711775  # Don't optimize the processing of the last rows of a matrix matrix product in cases that violate the assumptions made by the optimized code path
8985:d935df21a082  # Remove the rotating kernel.
8988:6c2dc56e73b3  # Bug 256: enable vectorization with unaligned loads/stores.
9148:b8b8c421e36c  # Relax mixing-type constraints for binary coefficient-wise operators
9174:d228bc282ac9  # merge
9212:c90098affa7b  # Fix performance regression introduced in changeset 8aad8f35c955
9213:9f1c14e4694b  # Fix performance regression in dgemm introduced by changeset 81d53c711775