Searched refs:CSE (Results 1 – 25 of 61) sorted by relevance
86 ; RUN: %s 2>&1 | FileCheck %s --check-prefix=CHECK-EARLY-CSE
87 ; CHECK-EARLY-CSE: BISECT: running pass ({{[0-9]+}}) Early CSE on function (f1)
88 ; CHECK-EARLY-CSE: BISECT: running pass ({{[0-9]+}}) Early CSE on function (f2)
89 ; CHECK-EARLY-CSE: BISECT: running pass ({{[0-9]+}}) Early CSE on function (f3)
92 ; RUN: 2>&1 | FileCheck %s --check-prefix=CHECK-NOT-EARLY-CSE
93 ; CHECK-NOT-EARLY-CSE: BISECT: NOT running pass ({{[0-9]+}}) Early CSE on function (f1)
94 ; CHECK-NOT-EARLY-CSE: BISECT: NOT running pass ({{[0-9]+}}) Early CSE on function (f2)
95 ; CHECK-NOT-EARLY-CSE: BISECT: NOT running pass ({{[0-9]+}}) Early CSE on function (f3)
16 3. Compute live ranges for CSE
18 5. [t] CSE
27 14. [t] CSE
45 certainly want to move LLVM emission from step 8 down until at least CSE
3 ; Can we CSE a known condition to a constant?
23 ; We can CSE the condition, but we *don't* know its value after the merge
58 ; Replace a use rather than CSE
12 %D = zext i8 %V to i32 ;; CSE
27 %G = add nuw i32 %C, %C ;; not a CSE with E
81 ;; Simple call CSE'ing.
5 ; CSE between "icmp reg reg" and "sub reg reg".
36 ; CSE between "icmp reg imm" and "sub reg imm".
3 ; Check that the kill flag is cleared between CSE'd instructions on their
23 ; anyway to get CSE effects.
6 ; Check that @llvm.aarch64.neon.ld2 is optimized away by Early CSE.
40 ; Check that the first @llvm.aarch64.neon.st2 is optimized away by Early CSE.
76 ; Check that the first @llvm.aarch64.neon.ld2 is optimized away by Early CSE.
111 ; away by Early CSE.
146 ; Check that @llvm.aarch64.neon.ld3 is not optimized away by Early CSE due
181 ; Check that @llvm.aarch64.neon.st3 is not optimized away by Early CSE due to
50 define i32 @CSE() nounwind {
57 ; CHECK-LABEL: CSE:
5 ; trouble with CSE in DAGCombine.
42 ; node becomes identical to %load2. CSE replaces %load1 which leaves its
49 ; xor in exit block will be CSE'ed and load will be folded to xor in entry.
5 ; Instcombine should be able to do trivial CSE of loads.
854 EarlyCSE CSE(TLI, TTI, DT, AC); in run() local
856 if (!CSE.run()) in run()
892 EarlyCSE CSE(TLI, TTI, DT, AC); in runOnFunction() local
894 return CSE.run(); in runOnFunction()
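Note: the hit above appears to be the EarlyCSE pass entry points (new pass manager run() and legacy runOnFunction()). Below is a minimal sketch, not taken from that file, of driving EarlyCSE standalone through the new pass manager; the sample IR, the main() driver, and the choice of UseMemorySSA=true are assumptions for illustration only.

// Minimal sketch (assumed driver, not from the search hit): parse a module
// from an IR string and run EarlyCSE on each function via the new pass manager.
#include "llvm/AsmParser/Parser.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Passes/PassBuilder.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/Scalar/EarlyCSE.h"
#include <memory>

int main() {
  llvm::LLVMContext Ctx;
  llvm::SMDiagnostic Err;
  // Hypothetical input: %b repeats %a, so EarlyCSE should reuse %a for it.
  std::unique_ptr<llvm::Module> M = llvm::parseAssemblyString(
      "define i32 @f(i32 %x) {\n"
      "  %a = add i32 %x, 1\n"
      "  %b = add i32 %x, 1\n"
      "  %c = add i32 %a, %b\n"
      "  ret i32 %c\n"
      "}\n",
      Err, Ctx);
  if (!M)
    return 1;

  llvm::PassBuilder PB;
  llvm::FunctionAnalysisManager FAM;
  // Registers the function analyses EarlyCSE queries (DominatorTree,
  // AssumptionCache, TLI, TTI, MemorySSA, ...).
  PB.registerFunctionAnalyses(FAM);

  llvm::FunctionPassManager FPM;
  FPM.addPass(llvm::EarlyCSEPass(/*UseMemorySSA=*/true));

  for (llvm::Function &F : *M)
    if (!F.isDeclaration())
      FPM.run(F, FAM);

  M->print(llvm::outs(), nullptr);
  return 0;
}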
6 ; Without CSE of libcalls, there are two calls in the output instead of one.
5 ; Don't CSE a cmp across a call that clobbers CPSR.
24 ; CSE of cmp across BB boundary
5 ; Without CSE of libcalls, there are two calls in the output instead of one.
19 ; to add the pseudo instructions to make sure they are CSE'ed at the same
33 ; investigate why address computations are not CSE'd. Or implement it.
28 ; but that's a CSE problem, not a LVI/jump threading problem)
475 else if (CStyleCastExpr *CSE = dyn_cast<CStyleCastExpr>(CE)) { in VisitCastExpr() local
476 if (CSE->getType()->isVoidType()) { in VisitCastExpr()
480 classify(CSE->getSubExpr(), Ignore); in VisitCastExpr()