Name                         | Date        | Size     | #Lines | LOC
-----------------------------|-------------|----------|--------|-------
ARM/                         | 03-May-2024 | -        | 92,785 | 75,023
CellSPU/                     | 03-May-2024 | -        | 14,716 | 10,948
CppBackend/                  | 03-May-2024 | -        | 2,288  | 1,977
Hexagon/                     | 03-May-2024 | -        | 33,228 | 25,597
MBlaze/                      | 03-May-2024 | -        | 10,405 | 7,496
MSP430/                      | 03-May-2024 | -        | 6,066  | 4,356
Mips/                        | 03-May-2024 | -        | 18,810 | 13,341
NVPTX/                       | 03-May-2024 | -        | 15,883 | 12,702
PowerPC/                     | 03-May-2024 | -        | 31,498 | 25,200
Sparc/                       | 03-May-2024 | -        | 5,280  | 3,834
X86/                         | 03-May-2024 | -        | 74,649 | 57,209
XCore/                       | 03-May-2024 | -        | 6,174  | 4,477
Android.mk                   | 03-May-2024 | 862 B    | 42     | 26
CMakeLists.txt               | 03-May-2024 | 427 B    | 21     | 19
LLVMBuild.txt                | 03-May-2024 | 1.7 KiB  | 57     | 50
Makefile                     | 03-May-2024 | 662 B    | 21     | 6
Mangler.cpp                  | 03-May-2024 | 8.5 KiB  | 238    | 148
README.txt                   | 03-May-2024 | 71.9 KiB | 2,370  | 1,785
Target.cpp                   | 03-May-2024 | 3.5 KiB  | 109    | 74
TargetData.cpp               | 03-May-2024 | 22.3 KiB | 664    | 467
TargetELFWriterInfo.cpp      | 03-May-2024 | 840 B    | 26     | 9
TargetInstrInfo.cpp          | 03-May-2024 | 3.1 KiB  | 89     | 44
TargetIntrinsicInfo.cpp      | 03-May-2024 | 923 B    | 31     | 14
TargetJITInfo.cpp            | 03-May-2024 | 438 B    | 15     | 3
TargetLibraryInfo.cpp        | 03-May-2024 | 9.5 KiB  | 353    | 299
TargetLoweringObjectFile.cpp | 03-May-2024 | 12.1 KiB | 320    | 183
TargetMachine.cpp            | 03-May-2024 | 4.8 KiB  | 165    | 112
TargetMachineC.cpp           | 03-May-2024 | 4.8 KiB  | 198    | 155
TargetRegisterInfo.cpp       | 03-May-2024 | 8.7 KiB  | 247    | 165
TargetSubtargetInfo.cpp      | 03-May-2024 | 1 KiB    | 34     | 13
README.txt
Target Independent Opportunities:

//===---------------------------------------------------------------------===//

We should recognize various "overflow detection" idioms and translate them into
llvm.uadd.with.overflow and similar intrinsics. Here is a multiply idiom:

unsigned int mul(unsigned int a, unsigned int b) {
  if ((unsigned long long)a*b > 0xffffffff)
    exit(0);
  return a*b;
}

The legalization code for mul-with-overflow needs to be made more robust before
this can be implemented, though.

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :) This isn't
safe in general, even on darwin. See the libm implementation of hypot for
examples (which special-case when x/y are exactly zero to get signed zeros etc.
right).

//===---------------------------------------------------------------------===//

On targets with an expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;

into:
 long long tmp = 1;
 for (i = ...; ++i, tmp += tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)

//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
   return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.

First, the intrinsic needs to be extended to support integers, and second the
code generator needs to be enhanced to lower these to multiplication trees.

//===---------------------------------------------------------------------===//

An interesting testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x + y + x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

This is blocked on not handling X*X*X -> powi(X, 3) (see note above). The issue
is that we end up getting t = 2*X, s = t*t, and don't turn this into 4*X*X,
which is the same number of multiplies and is canonical, because the 2*X has
multiple uses. Here's a simple example:

define i32 @test15(i32 %X1) {
  %B = mul i32 %X1, 47   ; X1*47
  %C = mul i32 %B, %B
  ret i32 %C
}

//===---------------------------------------------------------------------===//

Reassociate should handle the example in GCC PR16157:

extern int a0, a1, a2, a3, a4;
extern int b0, b1, b2, b3, b4;
void f () {  /* this can be optimized to four additions... */
  b4 = a4 + a3 + a2 + a1 + a0;
  b3 = a3 + a2 + a1 + a0;
  b2 = a2 + a1 + a0;
  b1 = a1 + a0;
}

This requires reassociating to forms of expressions that are already available,
something that reassoc doesn't think about yet.
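
For reference, the four-addition form the comment is asking for reuses each
partial sum. A sketch of the target form only, not something any pass
produces today:

void f () {
  b1 = a1 + a0;
  b2 = b1 + a2;
  b3 = b2 + a3;
  b4 = b3 + a4;
}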

//===---------------------------------------------------------------------===//

This function (derived from GCC PR19988):

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x + -0.1234 * y));
}

compiles to:

_foo:
        movapd  %xmm1, %xmm2
        mulsd   LCPI1_1(%rip), %xmm1
        mulsd   LCPI1_0(%rip), %xmm2
        addsd   %xmm0, %xmm1
        addsd   %xmm0, %xmm2
        movapd  %xmm1, %xmm0
        mulsd   %xmm2, %xmm0
        ret

Reassociate should be able to turn it into:

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x - 0.1234 * y));
}

which allows the multiply by constant to be CSE'd, producing:

_foo:
        mulsd   LCPI1_0(%rip), %xmm1
        movapd  %xmm1, %xmm2
        addsd   %xmm0, %xmm2
        subsd   %xmm1, %xmm0
        mulsd   %xmm2, %xmm0
        ret

This doesn't need -ffast-math support at all. This is particularly bad because
the llvm-gcc frontend is canonicalizing the latter into the former, but clang
doesn't have this problem.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1, 2, 4, and 8 bytes.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

and teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
stuff too.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works, but can be overly conservative, as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should produce an unaligned load from code like this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3]};
}

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns. Instead
of:

        movl 136(%esp), %eax
        cmpl $0, %eax
        je LBB16_2      #cond_next
LBB16_1:        #cond_true
        incl _foo
LBB16_2:        #cond_next

emit:
        movl    _foo, %eax
        cmpl    $1, %edi
        sbbl    $-1, %eax
        movl    %eax, _foo

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs. We could even make an intrinsic for this
if anyone cared enough about sincos.
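
As a concrete sketch of the combine at the source level (assuming a GNU-style
sincos is available in the target libm):

  double a = sin(x);
  double b = cos(x);

could become:

  double a, b;
  sincos(x, &a, &b);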

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
        {
          /* Flip the target bit of each basis state */
          reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
        }

Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but this requires TBAA.

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//

[LOOP DELETION]

We don't delete this output-free loop, because trip count analysis doesn't
realize that it is finite (if it were infinite, it would be undefined). Not
having this blocks Loop Idiom from matching strlen and friends.

void foo(char *C) {
  int x = 0;
  while (*C)
    ++x, ++C;
}

//===---------------------------------------------------------------------===//

[LOOP RECOGNITION]

These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v) {
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

BITBOARD = unsigned long long
int PopCnt(register BITBOARD a) {
  register int c = 0;
  while (a) {
    c++;
    a &= a - 1;
  }
  return c;
}

unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

This should be recognized as CLZ: rdar://8459039

unsigned clz_a(unsigned a) {
  int i;
  for (i = 0; i < 32; i++)
    if (a & (1 << (31 - i)))
      return i;
  return 32;
}

This sort of thing should be added to the loop idiom pass.

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors.

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}

//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X, C1), C2
when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match.
See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.

//===---------------------------------------------------------------------===//

[LOOP OPTIMIZATION]

SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
opportunities in its double_array_divs_variable function: it needs loop
interchange, memory promotion (which LICM already does), vectorization, and
variable trip count loop unrolling (since it has a constant trip count). ICC
apparently produces this very nice code with -ffast-math:

..B1.70:                        # Preds ..B1.70 ..B1.69
        mulpd     %xmm0, %xmm1                                  #108.2
        mulpd     %xmm0, %xmm1                                  #108.2
        mulpd     %xmm0, %xmm1                                  #108.2
        mulpd     %xmm0, %xmm1                                  #108.2
        addl      $8, %edx                                      #
        cmpl      $131072, %edx                                 #108.2
        jb        ..B1.70       # Prob 99%                      #108.2

It would be better to count down to zero, but this is a lot better than what we
do.

//===---------------------------------------------------------------------===//

Consider:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags. On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

PHI Slicing could be extended to do this.

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
        %tmp.1 = and i32 %a, 1          ; <i32> [#uses=1]
        %tmp.2 = icmp ne i32 %tmp.1, 0          ; <i1> [#uses=1]
        br i1 %tmp.2, label %then.0, label %else.0

then.0:         ; preds = %entry
        %tmp.5 = add i32 %a, -1         ; <i32> [#uses=1]
        %tmp.3 = call i32 @t4( i32 %tmp.5 )             ; <i32> [#uses=1]
        br label %return

else.0:         ; preds = %entry
        %tmp.7 = icmp ne i32 %a, 0              ; <i1> [#uses=1]
        br i1 %tmp.7, label %then.1, label %return

then.1:         ; preds = %else.0
        %tmp.11 = add i32 %a, -2                ; <i32> [#uses=1]
        %tmp.9 = call i32 @t4( i32 %tmp.11 )            ; <i32> [#uses=1]
        br label %return

return:         ; preds = %then.1, %else.0, %then.0
        %result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                            [ %tmp.9, %then.1 ]
        ret i32 %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1(n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative. "return foo() << 1" can be tail recursion eliminated.

//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
        %tmp = load i32* %x             ; <i32> [#uses=0]
        %tmp.foo = call i32 @foo( i32* %x )             ; <i32> [#uses=1]
        ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
        %tmp3 = call i32 @foo( i32* %x )                ; <i32> [#uses=1]
        ret i32 %tmp3
}

//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass. Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

we compile this to:
_foo:
        subl    $28, %esp
        call    "L1$pb"
"L1$pb":
        popl    %eax
        cmpl    $0, 32(%esp)
        je      LBB1_2  # cond_true
LBB1_1: # return
        # ...
        addl    $28, %esp
        ret
LBB1_2: # cond_true
...

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block. It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it. This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86. If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html

//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations.
On a yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar. On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.

//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable. For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-m64 -O3 -fno-exceptions -static -fomit-frame-pointer):

__Z9GetHotKeyv:                         ## @_Z9GetHotKeyv
        movq    _m_HotKey@GOTPCREL(%rip), %rax
        movzwl  (%rax), %ecx
        movzbl  2(%rax), %edx
        shlq    $16, %rdx
        orq     %rcx, %rdx
        movzbl  3(%rax), %ecx
        shlq    $24, %rcx
        orq     %rdx, %rcx
        movzbl  4(%rax), %eax
        shlq    $32, %rax
        orq     %rcx, %rax
        ret

//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//

Consider:

int test() {
  long long input[8] = {1,0,1,0,1,0,1,0};
  foo(input);
}

Clang compiles this into:

  call void @llvm.memset.p0i8.i64(i8* %tmp, i8 0, i64 64, i32 16, i1 false)
  %0 = getelementptr [8 x i64]* %input, i64 0, i64 0
  store i64 1, i64* %0, align 16
  %1 = getelementptr [8 x i64]* %input, i64 0, i64 2
  store i64 1, i64* %1, align 16
  %2 = getelementptr [8 x i64]* %input, i64 0, i64 4
  store i64 1, i64* %2, align 16
  %3 = getelementptr [8 x i64]* %input, i64 0, i64 6
  store i64 1, i64* %3, align 16

Which gets codegen'd into:

        pxor    %xmm0, %xmm0
        movaps  %xmm0, -16(%rbp)
        movaps  %xmm0, -32(%rbp)
        movaps  %xmm0, -48(%rbp)
        movaps  %xmm0, -64(%rbp)
        movq    $1, -64(%rbp)
        movq    $1, -48(%rbp)
        movq    $1, -32(%rbp)
        movq    $1, -16(%rbp)

It would be better to have 4 movq's of 0 instead of the movaps's.

//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The following code should compile into "ret int undef". Instead, LLVM
produces "ret int 0":

int f() {
  int x = 4;
  int y;
  if (x == 3) y = 0;
  return y;
}

//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop. One trivial example is:

#include <stdio.h>
int main() {
    int nRet = 17;
    int nLoop;
    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        if ( nLoop & 1 )
            nRet += 2;
        else
            nRet -= 1;
    }
    return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size. The resultant code would then also be suitable for
exit value computation.
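
For illustration, the body after unrolling by 2 (a sketch of the target form,
not current output):

    for ( nLoop = 0; nLoop < 1000; nLoop += 2 ) {
        nRet -= 1;      /* nLoop is even: the '& 1' test folds to false */
        nRet += 2;      /* nLoop+1 is odd: the '& 1' test folds to true */
    }

which simplifies to nRet += 1 per pair, i.e. nRet += 500 overall.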

//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:

unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

//===---------------------------------------------------------------------===//

This (and similar related idioms):

unsigned int foo(unsigned char i) {
  return i | (i<<8) | (i<<16) | (i<<24);
}

compiles into:

define i32 @foo(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %shl5 = shl i32 %conv, 16
  %shl9 = shl i32 %conv, 24
  %or = or i32 %shl9, %conv
  %or6 = or i32 %or, %shl5
  %or10 = or i32 %or6, %shl
  ret i32 %or10
}

it would be better as:

unsigned int bar(unsigned char i) {
  unsigned int j = i | (i << 8);
  return j | (j << 16);
}

aka:

define i32 @bar(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %or = or i32 %shl, %conv
  %shl5 = shl i32 %or, 16
  %or6 = or i32 %shl5, %or
  ret i32 %or6
}

or even i*0x01010101, depending on the speed of the multiplier. The best way to
handle this is to canonicalize it to a multiply in IR and have codegen handle
lowering multiplies to shifts on cpus where shifts are faster.

//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy. This can only
be done safely if "b" isn't modified between the strlen and memcpy, of course.

//===---------------------------------------------------------------------===//

We compile this program (from GCC PR11680):
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
1.821u 0.003s 0:01.82 100.0%    0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
0.821u 0.001s 0:00.82 100.0%    0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mul lo (cheaper).
Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

This is equivalent to the following, where 2863311531 is the multiplicative
inverse of 3, and 1431655766 is ((2^32)-1)/3+1:

void bar(unsigned n) {
  if (n * 2863311531U < 1431655766U)
    true();
}

The same transformation can work with an even modulo with the addition of a
rotate: rotate the result of the multiply to the right by the number of bits
which need to be zero for the condition to be true, and shrink the compare RHS
by the same amount. Unless the target supports rotates, though, that
transformation probably isn't worthwhile.

The transformation can also easily be made to work with non-zero equality
comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".

//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

#include <cstdio>
struct test {
  int val;
  virtual ~test() {}
};

int main() {
  test t;
  std::scanf("%d", &t.val);
  std::printf("%d\n", t.val);
}

//===---------------------------------------------------------------------===//

These functions perform the same computation, but produce different assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6       ;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//

From GCC Bug 24696:
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}
Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 20192:
#define PMD_MASK (~((1UL << 23) - 1))
void clear_pmd_range(unsigned long start, unsigned long end)
{
  if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
    f();
}
The expression should optimize to something like
"!((start|end)&~PMD_MASK)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
These should combine to the same thing. Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//

From GCC Bug 15784:
#define abs(x) x>0?x:-x
int f(int x, int y)
{
 return (abs(x)) >= 0;
}
This should optimize to x != INT_MIN (the expression is false only there, given
-fwrapv). Currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 14753:
void
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    bar ();
}
void
minus_cst (unsigned int a)
{
  unsigned int tem;

  tem = 20 - a;
  if (tem == 5)
    bar ();
}
void
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15.  */
  if ((a & ~7) > 8)
    bar ();
}
void
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23.  */
  if ((a >> 2) > 5)
    bar ();
}

All should simplify to a single comparison. All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 32605:
int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)". Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane. Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c). Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int g(int x) { return (x - 10) < 0; }
Should combine to "x <= 9" (the sub has nsw). Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int g(int x) { return (x + 10) < 0; }
Should combine to "x < -10" (the add has nsw). Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int f(int i, int j) { return i < j + 1; }
int g(int i, int j) { return j > i - 1; }
Should combine to "i <= j" (the add/sub has nsw). Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned f(unsigned x) { return ((x & 7) + 1) & 15; }
The & 15 part should be optimized away; it doesn't change the result. Currently
not optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:

        (!tmp && decl_context == 1)

This allows recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

        %tmp23 = icmp eq i32 %decl_context_addr.1, 0            ; <i1> [#uses=1]
        %tmp24 = xor i1 %tmp1, true             ; <i1> [#uses=1]
        %or.cond8 = and i1 %tmp23, %tmp24               ; <i1> [#uses=1]

later.

//===---------------------------------------------------------------------===//

[STORE SINKING]

Store sinking: This code:

void f (int n, int *cond, int *res) {
  int i;
  *res = 0;
  for (i = 0; i < n; i++)
    if (*cond)
      *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out.
This gives us this code:

bb:             ; preds = %bb2, %entry
        %.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
        %i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
        %1 = load i32* %cond, align 4
        %2 = icmp eq i32 %1, 0
        br i1 %2, label %bb2, label %bb1

bb1:            ; preds = %bb
        %3 = xor i32 %.rle, 234
        store i32 %3, i32* %res, align 4
        br label %bb2

bb2:            ; preds = %bb, %bb1
        %.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
        %indvar.next = add i32 %i.05, 1
        %exitcond = icmp eq i32 %indvar.next, %n
        br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.

Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395

//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.

//===---------------------------------------------------------------------===//

This simple function from 179.art:

int winner, numf2s;
struct { double y; int reset; } *Y;

void find_match() {
   int i;
   winner = 0;
   for (i=0;i<numf2s;i++)
       if (Y[i].y > Y[winner].y)
           winner = i;
}

compiles into (with clang TBAA):

for.body:                                         ; preds = %for.inc, %bb.nph
  %indvar = phi i64 [ 0, %bb.nph ], [ %indvar.next, %for.inc ]
  %i.01718 = phi i32 [ 0, %bb.nph ], [ %i.01719, %for.inc ]
  %tmp4 = getelementptr inbounds %struct.anon* %tmp3, i64 %indvar, i32 0
  %tmp5 = load double* %tmp4, align 8, !tbaa !4
  %idxprom7 = sext i32 %i.01718 to i64
  %tmp10 = getelementptr inbounds %struct.anon* %tmp3, i64 %idxprom7, i32 0
  %tmp11 = load double* %tmp10, align 8, !tbaa !4
  %cmp12 = fcmp ogt double %tmp5, %tmp11
  br i1 %cmp12, label %if.then, label %for.inc

if.then:                                          ; preds = %for.body
  %i.017 = trunc i64 %indvar to i32
  br label %for.inc

for.inc:                                          ; preds = %for.body, %if.then
  %i.01719 = phi i32 [ %i.01718, %for.body ], [ %i.017, %if.then ]
  %indvar.next = add i64 %indvar, 1
  %exitcond = icmp eq i64 %indvar.next, %tmp22
  br i1 %exitcond, label %for.cond.for.end_crit_edge, label %for.body

It is good that we hoisted the reloads of numf2s and Y out of the loop and
sunk the store to winner out.

However, this is awful on several levels: the conditional truncate in the loop
(-indvars at fault? why can't we completely promote the IV to i64?).

Beyond that, we have a partially redundant load in the loop: if "winner" (aka
%i.01718) isn't updated, we reload Y[winner].y the next time through the loop.
Similarly, the addressing that feeds it (including the sext) is redundant.
In the end we get this generated assembly:

LBB0_2:                                 ## %for.body
                                        ## =>This Inner Loop Header: Depth=1
        movsd   (%rdi), %xmm0
        movslq  %edx, %r8
        shlq    $4, %r8
        ucomisd (%rcx,%r8), %xmm0
        jbe     LBB0_4
        movl    %esi, %edx
LBB0_4:                                 ## %for.inc
        addq    $16, %rdi
        incq    %rsi
        cmpq    %rsi, %rax
        jne     LBB0_2

All things considered this isn't too bad, but we shouldn't need the movslq or
the shlq instruction, or the load folded into ucomisd every time through the
loop.

On an x86-specific topic, if the loop can't be restructured, the movl should be
a cmov.

//===---------------------------------------------------------------------===//

[STORE SINKING]

GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

for () {
  *P += 1;
  if ()
    call();
  else
    ...
->
tmp = *P
for () {
  tmp += 1;
  if () {
    *P = tmp;
    call();
    tmp = *P;
  } else ...
}
*P = tmp;

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store. We need partially dead store sinking.

//===---------------------------------------------------------------------===//

[LOAD PRE CRIT EDGE SPLITTING]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack,
leading to excess stack traffic. This could be handled by GVN with some crazy
symbolic phi translation. The code we get looks like (g is on the stack):

bb2:            ; preds = %bb1
..
        %9 = getelementptr %struct.f* %g, i32 0, i32 0
        store i32 %8, i32* %9, align 4
        br label %bb3

bb3:            ; preds = %bb1, %bb2, %bb
        %c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
        %b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
        %10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
        %11 = load i32* %10, align 4

%11 is partially redundant; in BB2 it should have the value %8.

GCC PR33344 and PR35287 are similar cases.

//===---------------------------------------------------------------------===//

[LOAD PRE]

There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite; ones we don't get yet are (checked through loadpre25):

[CRIT EDGE BREAKING]
loadpre3.c predcom-4.c

[PRE OF READONLY CALL]
loadpre5.c

[TURN SELECT INTO BRANCH]
loadpre14.c loadpre15.c

actually a conditional increment: loadpre18.c loadpre19.c

//===---------------------------------------------------------------------===//

[LOAD PRE / STORE SINKING / SPEC HACK]

This is a chunk of code from 456.hmmer:

int f(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
      int *tpdm, int xmb, int *bp, int *ms) {
  int k, sc;
  for (k = 1; k <= M; k++) {
     mc[k] = mpp[k-1] + tpmm[k-1];
     if ((sc = ip[k-1] + tpim[k-1]) > mc[k]) mc[k] = sc;
     if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k]) mc[k] = sc;
     if ((sc = xmb + bp[k]) > mc[k]) mc[k] = sc;
     mc[k] += ms[k];
  }
}

It is very profitable for this benchmark to turn the conditional stores to mc[k]
into a conditional move (select instr in IR) and allow the final store to do the
store. See GCC PR27313 for more details.
Note that this is valid to xform even
with the new C++ memory model, since mc[k] is previously loaded and later
stored.

//===---------------------------------------------------------------------===//

[SCALAR PRE]

There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
GCC testsuite.

//===---------------------------------------------------------------------===//

There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite. For example, we get the first example in predcom-1.c, but
miss the second one:

unsigned fib[1000];
unsigned avg[1000];

__attribute__ ((noinline))
void count_averages(int n) {
  int i;
  for (i = 1; i < n; i++)
    avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
}

which compiles into two loads instead of one in the loop.

predcom-2.c is the same as predcom-1.c

predcom-3.c is very similar but needs loads feeding each other instead of
store->load.

//===---------------------------------------------------------------------===//

[ALIAS ANALYSIS]

Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

We should do better analysis of posix_memalign. At the least it should
nocapture its pointer argument; at best, we should know that the out-value
result doesn't point to anything (like malloc). One example of this is in
SingleSource/Benchmarks/Misc/dt.c

//===---------------------------------------------------------------------===//

Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629

With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
      opt -mem2reg -gvn -instcombine | llvm-dis

we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS

//===---------------------------------------------------------------------===//

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633

We could eliminate the branch condition here, since loading from null is
undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}

//===---------------------------------------------------------------------===//

simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character. See PR3253 for some notes on
codegen.

456.hmmer apparently uses strcspn and strspn a lot. 471.omnetpp uses strspn.
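
For illustration, a sketch of the switch form for a constant reject set
(hypothetical code; 'a' and 'b' stand in for the constant characters of the
reject string):

size_t strcspn_ab(const char *s) {   /* strcspn(s, "ab") */
  size_t result = 0;
  for (;;) {
    switch (s[result]) {
    case '\0': case 'a': case 'b':   /* stop at NUL or any reject char */
      return result;
    default:
      ++result;
    }
  }
}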

//===---------------------------------------------------------------------===//

simplifylibcalls should turn these snprintf idioms into memcpy (GCC PR47917):

char buf1[6], buf2[6], buf3[4], buf4[4];
int i;

int foo (void) {
  int ret = snprintf (buf1, sizeof buf1, "abcde");
  ret += snprintf (buf2, sizeof buf2, "abcdef") * 16;
  ret += snprintf (buf3, sizeof buf3, "%s", i++ < 6 ? "abc" : "def") * 256;
  ret += snprintf (buf4, sizeof buf4, "%s", i++ > 10 ? "abcde" : "defgh")*4096;
  return ret;
}

//===---------------------------------------------------------------------===//

"gas" uses this idiom:
     else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
..
     else if (strchr ("<>", *intel_parser.op_string)

Those should be turned into a switch.

//===---------------------------------------------------------------------===//

252.eon contains this interesting code:

        %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)    ; uses = 1
        %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
        call void @llvm.memcpy.i32(i8* %endptr,
          i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
        %3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple of reasons. First, the strlen following the
memcpy can be replaced with:

        %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

because the destination was just copied into the specified memory buffer. This,
in turn, can be constant folded to "4".

In other code, it contains:

        %endptr6978 = bitcast i8* %endptr69 to i32*
        store i32 7107374, i32* %endptr6978, align 1
        %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

which could also be constant folded. Whatever is producing this should probably
be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:            ; preds = %_ZN18eonImageCalculatorC1Ev.exit
        %682 = getelementptr i8** %argv, i32 6          ; <i8**> [#uses=2]
        %683 = load i8** %682, align 4          ; <i8*> [#uses=4]
        %684 = load i8* %683, align 1           ; <i8> [#uses=1]
        %685 = icmp eq i8 %684, 0               ; <i1> [#uses=1]
        br i1 %685, label %bb10, label %bb9

bb9:            ; preds = %bb8
        %686 = call i32 @strlen(i8* %683) nounwind readonly
        %687 = icmp ugt i32 %686, 254           ; <i1> [#uses=1]
        br i1 %687, label %bb10, label %bb11

bb10:           ; preds = %bb9, %bb8
        %688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.

//===---------------------------------------------------------------------===//

I see an interesting fully redundant call to strlen left in
186.crafty:InputMove, which looks like:

        %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0

bb62:           ; preds = %bb55, %bb53
        %promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
        %171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
        %172 = add i32 %171, -1         ; <i32> [#uses=1]
        %173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172

...  no stores ...
        br i1 %or.cond, label %bb65, label %bb72

bb65:           ; preds = %bb62
        store i8 0, i8* %173, align 1
        br label %bb72

bb72:           ; preds = %bb65, %bb62
        %trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
        %177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially
redundant with the %171 call. At worst, we could shove the %177 strlen call
up into the bb65 block, moving it out of the bb62->bb72 path. However, note
that bb65 stores to the string, zeroing out the last byte. This means that on
that path the value of %177 is actually just %171-1. A sub is cheaper than a
strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.

//===---------------------------------------------------------------------===//

186.crafty has this interesting pattern with the "out.4543" variable:

call void @llvm.memcpy.i32(
        i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
        i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(..., globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out". Once all the printfs
stop using "out", all that is left is the memcpy's into it. This should allow
globalopt to remove the "stored only" global.

//===---------------------------------------------------------------------===//

This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift.
For example, on x86 this currently gets this:

        movb    (%eax), %al
        sarb    $5, %al
        movsbl  %al, %eax

while it could get this:

        movsbl  (%eax), %eax
        sarl    $5, %eax

//===---------------------------------------------------------------------===//

GCC PR31029:

int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants; what is the rule for even?

//===---------------------------------------------------------------------===//

PR 3381: GEP to field of size 0 inside a struct could be turned into a GEP
for the next field in the struct (which is at the same address).

For example: a store of float into { {{}}, float } could be turned into a
store to the float directly.

//===---------------------------------------------------------------------===//

The arg promotion pass should make use of nocapture to make its alias analysis
stuff much more precise.

//===---------------------------------------------------------------------===//

The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int int_char(char m) {if(m>7) return 0; return m;}

//===---------------------------------------------------------------------===//

int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }

Generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %a, 128                            ; <i32> [#uses=1]
  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
  %2 = or i32 %b, 128                             ; <i32> [#uses=1]
  %3 = and i32 %b, -129                           ; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2        ; <i32> [#uses=1]
  ret i32 %b_addr.0
}

However, it's functionally equivalent to:

         b = (b & ~0x80) | (a & 0x80);

Which generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %b, -129                           ; <i32> [#uses=1]
  %1 = and i32 %a, 128                            ; <i32> [#uses=1]
  %2 = or i32 %0, %1                              ; <i32> [#uses=1]
  ret i32 %2
}

This can be generalized for other forms:

     b = (b & ~0x80) | (a & 0x40) << 1;

//===---------------------------------------------------------------------===//

These two functions produce different code. They shouldn't:

#include <stdint.h>

uint8_t p1(uint8_t b, uint8_t a) {
  b = (b & ~0xc0) | (a & 0xc0);
  return (b);
}

uint8_t p2(uint8_t b, uint8_t a) {
  b = (b & ~0x40) | (a & 0x40);
  b = (b & ~0x80) | (a & 0x80);
  return (b);
}

define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %1 = and i8 %a, -64                             ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  ret i8 %2
}

define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %.masked = and i8 %a, 64                        ; <i8> [#uses=1]
  %1 = and i8 %a, -128                            ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  %3 = or i8 %2, %.masked                         ; <i8> [#uses=1]
  ret i8 %3
}

//===---------------------------------------------------------------------===//

IPSCCP does not currently propagate argument dependent constants through
functions where it does not know all of the callers. This includes functions
with normal external linkage as well as templates, C99 inline functions, etc.
Specifically, it does nothing to:

define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
entry:
  %0 = add nsw i32 %y, %z
  %1 = mul i32 %0, %x
  %2 = mul i32 %y, %z
  %3 = add nsw i32 %1, %2
  ret i32 %3
}

define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}

It would be interesting to extend IPSCCP to be able to handle simple cases like
this, where all of the arguments to a call are constant. Because IPSCCP runs
before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.
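
For the example above, the expected result is easy to state: test(1, 2, 4)
evaluates to (2+4)*1 + 2*4 = 14, so IPSCCP could rewrite test2 as:

define i32 @test2() nounwind {
entry:
  ret i32 14
}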

//===---------------------------------------------------------------------===//

The libcall constant folding stuff should be moved out of SimplifyLibcalls into
libanalysis' constantfolding logic. This would allow IPSCCP to be able to
handle simple things like this:

static int foo(const char *X) { return strlen(X); }
int bar() { return foo("abcd"); }

//===---------------------------------------------------------------------===//

functionattrs doesn't know much about memcpy/memset. This function should be
marked readnone rather than readonly, since it only twiddles local memory, but
functionattrs doesn't handle memset/memcpy/memmove aggressively:

struct X { int *p; int *q; };
int foo() {
  int i = 0, j = 1;
  struct X x, y;
  int **p;
  y.p = &i;
  x.q = &j;
  p = __builtin_memcpy (&x, &y, sizeof (int *));
  return **p;
}

This can be seen at:
$ clang t.c -S -o - -mkernel -O0 -emit-llvm | opt -functionattrs -S

//===---------------------------------------------------------------------===//

Missed instcombine transformation:

define i1 @a(i32 %x) nounwind readnone {
entry:
  %cmp = icmp eq i32 %x, 30
  %sub = add i32 %x, -30
  %cmp2 = icmp ugt i32 %sub, 9
  %or = or i1 %cmp, %cmp2
  ret i1 %or
}

This should be optimized to a single compare. Testcase derived from gcc.

//===---------------------------------------------------------------------===//

Missed instcombine or reassociate transformation:
int a(int a, int b) { return (a==12)&(b>47)&(b<58); }

The sgt and slt should be combined into a single comparison. Testcase derived
from gcc.

//===---------------------------------------------------------------------===//

Missed instcombine transformation:

  %382 = srem i32 %tmp14.i, 64                    ; [#uses=1]
  %383 = zext i32 %382 to i64                     ; [#uses=1]
  %384 = shl i64 %381, %383                       ; [#uses=1]
  %385 = icmp slt i32 %tmp14.i, 64                ; [#uses=1]

The srem can be transformed to an and because if %tmp14.i is negative, the
shift is undefined. Testcase derived from 403.gcc.

//===---------------------------------------------------------------------===//

This is a range comparison on a divided result (from 403.gcc):

  %1337 = sdiv i32 %1336, 8                       ; [#uses=1]
  %.off.i208 = add i32 %1336, 7                   ; [#uses=1]
  %1338 = icmp ult i32 %.off.i208, 15             ; [#uses=1]

We already catch this (removing the sdiv) if there isn't an add; we should
handle the 'add' as well. This is a common idiom with its builtin_alloca code.
C testcase:

int a(int x) { return (unsigned)(x/16+7) < 15; }

Another similar case involves truncations on 64-bit targets:

  %361 = sdiv i64 %.046, 8                        ; [#uses=1]
  %362 = trunc i64 %361 to i32                    ; [#uses=2]
...
  %367 = icmp eq i32 %362, 0                      ; [#uses=1]

//===---------------------------------------------------------------------===//

Missed instcombine/dagcombine transformation:

define void @lshift_lt(i8 zeroext %a) nounwind {
entry:
  %conv = zext i8 %a to i32
  %shl = shl i32 %conv, 3
  %cmp = icmp ult i32 %shl, 33
  br i1 %cmp, label %if.then, label %if.end

if.then:
  tail call void @bar() nounwind
  ret void

if.end:
  ret void
}
declare void @bar() nounwind

The shift should be eliminated. Testcase derived from gcc.
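
A sketch of the expected fold: %conv is a zero-extended i8, so %shl is
%conv * 8 with no overflow in i32, and the compare can be rewritten on %conv
directly:

  %cmp = icmp ult i32 %conv, 5     ; %conv*8 < 33  <=>  %conv <= 4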
//===---------------------------------------------------------------------===//

These compile into different code; one gets recognized as a switch and the
other doesn't, due to phase ordering issues (PR6212):

int test1(int mainType, int subType) {
  if (mainType == 7)
    subType = 4;
  else if (mainType == 9)
    subType = 6;
  else if (mainType == 11)
    subType = 9;
  return subType;
}

int test2(int mainType, int subType) {
  if (mainType == 7)
    subType = 4;
  if (mainType == 9)
    subType = 6;
  if (mainType == 11)
    subType = 9;
  return subType;
}

//===---------------------------------------------------------------------===//

The following test case (from PR6576):

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %cond1 = icmp eq i32 %b, 0                      ; <i1> [#uses=1]
  br i1 %cond1, label %exit, label %bb.nph
bb.nph:                                           ; preds = %entry
  %tmp = mul i32 %b, %a                           ; <i32> [#uses=1]
  ret i32 %tmp
exit:                                             ; preds = %entry
  ret i32 0
}

could be reduced to:

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %tmp = mul i32 %b, %a
  ret i32 %tmp
}

//===---------------------------------------------------------------------===//

We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.
See GCC PR34949.

Another interesting case is that something related could be used for variables
that go const after their ctor has finished.  In these cases, globalopt (which
can statically run the constructor) could mark the global const, so it gets put
in the readonly section.  A testcase would be:

#include <complex>
using namespace std;
const complex<char> should_be_in_rodata (42,-42);
complex<char> should_be_in_data (42,-42);
complex<char> should_be_in_bss;

We currently evaluate the ctors, but the globals don't become const because the
optimizer doesn't know they "become const" after the ctor is done.  See GCC
PR4131 for more examples.

//===---------------------------------------------------------------------===//

In this code:

long foo(long x) {
  return x > 1 ? x : 1;
}

LLVM emits a comparison with 1 instead of 0.  0 would be equivalent and
cheaper on most targets.

LLVM prefers comparisons with zero over non-zero in general, but in this case
it chooses instead to keep the max operation obvious.
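The two forms agree everywhere: the only input they classify differently is
x == 1, where both yield 1.  A hypothetical standalone spot-check:

#include <cassert>

int main() {
  // "x > 1 ? x : 1" and "x > 0 ? x : 1" select the same value for every x.
  for (long x = -1000; x <= 1000; ++x)
    assert((x > 1 ? x : 1) == (x > 0 ? x : 1));
  return 0;
}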
//===---------------------------------------------------------------------===//

define void @a(i32 %x) nounwind {
entry:
  switch i32 %x, label %if.end [
    i32 0, label %if.then
    i32 1, label %if.then
    i32 2, label %if.then
    i32 3, label %if.then
    i32 5, label %if.then
  ]
if.then:
  tail call void @foo() nounwind
  ret void
if.end:
  ret void
}
declare void @foo()

Generated code on x86-64 (other platforms give similar results):

a:
        cmpl    $5, %edi
        ja      .LBB0_2
        cmpl    $4, %edi
        jne     .LBB0_3
.LBB0_2:
        ret
.LBB0_3:
        jmp     foo  # TAILCALL

If we wanted to be really clever, we could simplify the whole thing to
something like the following, which eliminates a branch (xoring with 1 maps
the case values {0,1,2,3,5} onto the contiguous range {0,1,2,3,4}):

        xorl    $1, %edi
        cmpl    $4, %edi
        jbe     .LBB0_2
        ret
.LBB0_2:
        jmp     foo  # TAILCALL

//===---------------------------------------------------------------------===//

We compile this:

int foo(int a) { return (a & (~15)) / 16; }

Into:

define i32 @foo(i32 %a) nounwind readnone ssp {
entry:
  %and = and i32 %a, -16
  %div = sdiv i32 %and, 16
  ret i32 %div
}

but (X & -A)/A is X >> log2(A) when A is a power of 2, because the mask makes
the dividend an exact multiple of A.  This case should be instcombined into
just "a >> 4".

We do get this at the codegen level, so something knows about it, but
instcombine should catch it earlier:

_foo:                                   ## @foo
## BB#0:                                ## %entry
        movl    %edi, %eax
        sarl    $4, %eax
        ret

//===---------------------------------------------------------------------===//

This code (from GCC PR28685):

int test(int a, int b) {
  int lt = a < b;
  int eq = a == b;
  if (lt)
    return 1;
  return eq;
}

Is compiled to:

define i32 @test(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %cmp = icmp slt i32 %a, %b
  br i1 %cmp, label %return, label %if.end

if.end:                                           ; preds = %entry
  %cmp5 = icmp eq i32 %a, %b
  %conv6 = zext i1 %cmp5 to i32
  ret i32 %conv6

return:                                           ; preds = %entry
  ret i32 1
}

it could be:

define i32 @test__(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = icmp sle i32 %a, %b
  %retval = zext i1 %0 to i32
  ret i32 %retval
}

//===---------------------------------------------------------------------===//

This code can be seen in viterbi:

  %64 = call noalias i8* @malloc(i64 %62) nounwind
...
  %67 = call i64 @llvm.objectsize.i64(i8* %64, i1 false) nounwind
  %68 = call i8* @__memset_chk(i8* %64, i32 0, i64 %62, i64 %67) nounwind

llvm.objectsize.i64 should be taught about malloc/calloc, allowing it to
fold to %62.  This is a security win (overflows of malloc will get caught)
and also a performance win by exposing more memsets to the optimizer.

This occurs several times in viterbi.

Note that this would change the semantics of @llvm.objectsize, which by its
current definition always folds to a constant.
We also should make sure that we remove checking in code like

  char *p = malloc(strlen(s)+1);
  __strcpy_chk(p, s, __builtin_objectsize(p, 0));

//===---------------------------------------------------------------------===//

This code (from Benchmarks/Dhrystone/dry.c):

define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
entry:
  %sext = shl i32 %0, 24
  %conv = ashr i32 %sext, 24
  %sext6 = shl i32 %1, 24
  %conv4 = ashr i32 %sext6, 24
  %cmp = icmp eq i32 %conv, %conv4
  %. = select i1 %cmp, i32 10000, i32 0
  ret i32 %.
}

Should be simplified into something like:

define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
entry:
  %sext = shl i32 %0, 24
  %conv = and i32 %sext, 0xFF000000
  %sext6 = shl i32 %1, 24
  %conv4 = and i32 %sext6, 0xFF000000
  %cmp = icmp eq i32 %conv, %conv4
  %. = select i1 %cmp, i32 10000, i32 0
  ret i32 %.
}

and then to:

define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
entry:
  %conv = and i32 %0, 0xFF
  %conv4 = and i32 %1, 0xFF
  %cmp = icmp eq i32 %conv, %conv4
  %. = select i1 %cmp, i32 10000, i32 0
  ret i32 %.
}

//===---------------------------------------------------------------------===//

clang -O3 currently compiles this code:

int g(unsigned int a) {
  unsigned int c[100];
  c[10] = a;
  c[11] = a;
  unsigned int b = c[10] + c[11];
  if (b > a*2) a = 4;
  else a = 8;
  return a + 7;
}

into:

define i32 @g(i32 %a) nounwind readnone {
  %add = shl i32 %a, 1
  %mul = shl i32 %a, 1
  %cmp = icmp ugt i32 %add, %mul
  %a.addr.0 = select i1 %cmp, i32 11, i32 15
  ret i32 %a.addr.0
}

The icmp should fold to false.  This CSE opportunity is only available after
GVN and InstCombine have run.

//===---------------------------------------------------------------------===//

memcpyopt should turn this:

define i8* @test10(i32 %x) {
  %alloc = call noalias i8* @malloc(i32 %x) nounwind
  call void @llvm.memset.p0i8.i32(i8* %alloc, i8 0, i32 %x, i32 1, i1 false)
  ret i8* %alloc
}

into a call to calloc.  We should make sure that we analyze calloc as
aggressively as malloc, though.

//===---------------------------------------------------------------------===//

clang -O3 doesn't optimize this:

void f1(int* begin, int* end) {
  std::fill(begin, end, 0);
}

into a memset.  This is PR8942.
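For a contiguous range of ints, the fill is byte-for-byte equivalent to one
memset; what the optimizer should produce is roughly this (a sketch, not the
exact lowering clang would emit):

#include <cstring>

void f1(int* begin, int* end) {
  // std::fill over [begin, end) with 0 zeroes every byte of the range,
  // which is exactly one memset of (end - begin) * sizeof(int) bytes.
  std::memset(begin, 0, (end - begin) * sizeof(int));
}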
//===---------------------------------------------------------------------===//

clang -O3 -fno-exceptions currently compiles this code:

void f(int N) {
  std::vector<int> v(N);

  extern void sink(void*); sink(&v);
}

into:

define void @_Z1fi(i32 %N) nounwind {
entry:
  %v2 = alloca [3 x i32*], align 8
  %v2.sub = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 0
  %tmpcast = bitcast [3 x i32*]* %v2 to %"class.std::vector"*
  %conv = sext i32 %N to i64
  store i32* null, i32** %v2.sub, align 8, !tbaa !0
  %tmp3.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 1
  store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
  %tmp4.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 2
  store i32* null, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
  %cmp.i.i.i.i = icmp eq i32 %N, 0
  br i1 %cmp.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i, label %cond.true.i.i.i.i

_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i: ; preds = %entry
  store i32* null, i32** %v2.sub, align 8, !tbaa !0
  store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
  %add.ptr.i5.i.i = getelementptr inbounds i32* null, i64 %conv
  store i32* %add.ptr.i5.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
  br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit

cond.true.i.i.i.i:                                ; preds = %entry
  %cmp.i.i.i.i.i = icmp slt i32 %N, 0
  br i1 %cmp.i.i.i.i.i, label %if.then.i.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i

if.then.i.i.i.i.i:                                ; preds = %cond.true.i.i.i.i
  call void @_ZSt17__throw_bad_allocv() noreturn nounwind
  unreachable

_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i:    ; preds = %cond.true.i.i.i.i
  %mul.i.i.i.i.i = shl i64 %conv, 2
  %call3.i.i.i.i.i = call noalias i8* @_Znwm(i64 %mul.i.i.i.i.i) nounwind
  %0 = bitcast i8* %call3.i.i.i.i.i to i32*
  store i32* %0, i32** %v2.sub, align 8, !tbaa !0
  store i32* %0, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
  %add.ptr.i.i.i = getelementptr inbounds i32* %0, i64 %conv
  store i32* %add.ptr.i.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
  call void @llvm.memset.p0i8.i64(i8* %call3.i.i.i.i.i, i8 0, i64 %mul.i.i.i.i.i, i32 4, i1 false)
  br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit

This is just handling the construction of the vector.  Most surprising here is
the fact that all three null stores in %entry are dead (because we do no
cross-block DSE).

Also surprising is that %conv isn't simplified to 0 in %....exit.thread.i.i.
This is because the client of LazyValueInfo doesn't simplify all instruction
operands, just selected ones.
//===---------------------------------------------------------------------===//

clang -O3 -fno-exceptions currently compiles this code:

void f(char* a, int n) {
  __builtin_memset(a, 0, n);
  for (int i = 0; i < n; ++i)
    a[i] = 0;
}

into:

define void @_Z1fPci(i8* nocapture %a, i32 %n) nounwind {
entry:
  %conv = sext i32 %n to i64
  tail call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %conv, i32 1, i1 false)
  %cmp8 = icmp sgt i32 %n, 0
  br i1 %cmp8, label %for.body.lr.ph, label %for.end

for.body.lr.ph:                                   ; preds = %entry
  %tmp10 = add i32 %n, -1
  %tmp11 = zext i32 %tmp10 to i64
  %tmp12 = add i64 %tmp11, 1
  call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %tmp12, i32 1, i1 false)
  ret void

for.end:                                          ; preds = %entry
  ret void
}

This shouldn't need the ((zext (%n - 1)) + 1) game, and it should ideally fold
the two memsets together.

The issue with the addition only occurs in 64-bit mode, and appears to be at
least partially caused by Scalar Evolution not keeping its cache updated: it
returns the "wrong" result immediately after indvars runs, but figures out the
expected result if it is run from scratch on IR resulting from running indvars.

//===---------------------------------------------------------------------===//

clang -O3 -fno-exceptions currently compiles this code:

struct S {
  unsigned short m1, m2;
  unsigned char m3, m4;
};

void f(int N) {
  std::vector<S> v(N);
  extern void sink(void*); sink(&v);
}

into poor code for zero-initializing 'v' when N is >0.  The problem is that S
is only 6 bytes, but each element is 8-byte aligned.  We generate a loop with
4 stores on each iteration.  If the struct were 8 bytes, this would get turned
into a memset.

In order to handle this we have to:
  A) Teach clang to generate metadata for memsets of structs that have holes
     in them.
  B) Teach clang to use such a memset for zero init of this struct (since it
     has a hole), instead of doing elementwise zeroing.

//===---------------------------------------------------------------------===//

clang -O3 currently compiles this code:

extern const int magic;
double f() { return 0.0 * magic; }

into:

@magic = external constant i32

define double @_Z1fv() nounwind readnone {
entry:
  %tmp = load i32* @magic, align 4, !tbaa !0
  %conv = sitofp i32 %tmp to double
  %mul = fmul double %conv, 0.000000e+00
  ret double %mul
}

We should be able to fold away this fmul to 0.0.  More generally, fmul(x, 0.0)
can be folded to 0.0 if we can prove that the LHS is not -0.0, not a NaN, and
not an INF.  The CannotBeNegativeZero predicate in value tracking should be
extended to support general "fpclassify" operations that can return
yes/no/unknown for each of these predicates.

In this predicate, we know that uitofp is trivially never NaN or -0.0, and we
know that it isn't +/-Inf if the floating point type has enough exponent bits
to represent the largest integer value as less than infinity.

//===---------------------------------------------------------------------===//

When optimizing a transformation that can change the sign of 0.0 (such as the
0.0*val -> 0.0 transformation above), it might be provable that the sign of
the expression doesn't matter.
For example, by the above rules, we can't transform fmul(sitofp(x), 0.0) into
0.0, because x might be -1, in which case the result of the expression is
defined to be -0.0.

If we look at the uses of the fmul, though, we might be able to prove that
none of them care about the sign of zero.  For example, if we have:

  fadd(fmul(sitofp(x), 0.0), 2.0)

Since we know that x+2.0 doesn't care about the sign of any zeros in x, we can
transform the fmul to 0.0, and then the fadd to 2.0.

//===---------------------------------------------------------------------===//

We should enhance memcpy/memmove/memset to allow a metadata node on them
indicating that some bytes of the transfer are undefined.  This is useful for
frontends like clang when lowering struct copies, when some elements of the
struct are undefined.  Consider something like this:

struct x {
  char a;
  int b[4];
};
void foo(struct x*P);
struct x testfunc() {
  struct x V1, V2;
  foo(&V1);
  V2 = V1;

  return V2;
}

We currently compile this to:
$ clang t.c -S -o - -O0 -emit-llvm | opt -scalarrepl -S

%struct.x = type { i8, [4 x i32] }

define void @testfunc(%struct.x* sret %agg.result) nounwind ssp {
entry:
  %V1 = alloca %struct.x, align 4
  call void @foo(%struct.x* %V1)
  %tmp1 = bitcast %struct.x* %V1 to i8*
  %0 = bitcast %struct.x* %V1 to i160*
  %srcval1 = load i160* %0, align 4
  %tmp2 = bitcast %struct.x* %agg.result to i8*
  %1 = bitcast %struct.x* %agg.result to i160*
  store i160 %srcval1, i160* %1, align 4
  ret void
}

This happens because SRoA sees that the temp alloca is being memcpy'd into and
out of, and that it has holes, so it has to be conservative.  If we knew about
the holes, this could be much better.

Having information about these holes would also improve memcpy (etc) lowering
at llc time when it gets inlined, because we can use smaller transfers.  This
also avoids partial register stalls in some important cases.

//===---------------------------------------------------------------------===//

We don't fold (icmp (add) (add)) unless the two adds each have a single use.
There are a lot of cases that we're refusing to fold, e.g. in 256.bzip2:

  %indvar.next90 = add i64 %indvar89, 1           ;; Has 2 uses
  %tmp96 = add i64 %tmp95, 1                      ;; Has 1 use
  %exitcond97 = icmp eq i64 %indvar.next90, %tmp96

We don't fold this because we don't want to introduce an overlapped live range
of the ivar.  However, we can make this more aggressive without causing
performance issues in two ways:

1. If *either* the LHS or RHS has a single use, we can definitely do the
   transformation.  In the overlapping live-range case we're trading one
   register use for one fewer operation, which is a reasonable trade.  Before
   doing this we should verify that the llc output actually shrinks for some
   benchmarks.
2. If both ops have multiple uses, we can still fold it if the operations are
   both sinkable to *after* the icmp (e.g. in a subsequent block), which
   doesn't increase register pressure.

There are a ton of icmps we aren't simplifying because of the reg pressure
concern.  Care is warranted here though, because many of these are induction
variables and other cases that matter a lot to performance, like the above.
Here's a blob of code that you can drop into the bottom of visitICmp to see
some missed cases:

  { Value *A, *B, *C, *D;
    if (match(Op0, m_Add(m_Value(A), m_Value(B))) &&
        match(Op1, m_Add(m_Value(C), m_Value(D))) &&
        (A == C || A == D || B == C || B == D)) {
      errs() << "OP0 = " << *Op0 << "  U=" << Op0->getNumUses() << "\n";
      errs() << "OP1 = " << *Op1 << "  U=" << Op1->getNumUses() << "\n";
      errs() << "CMP = " << I << "\n\n";
    }
  }

//===---------------------------------------------------------------------===//

define i1 @test1(i32 %x) nounwind {
  %and = and i32 %x, 3
  %cmp = icmp ult i32 %and, 2
  ret i1 %cmp
}

Can be folded to (x & 2) == 0.

define i1 @test2(i32 %x) nounwind {
  %and = and i32 %x, 3
  %cmp = icmp ugt i32 %and, 1
  ret i1 %cmp
}

Can be folded to (x & 2) != 0.

SimplifyDemandedBits shrinks the "and" constant to 2, but instcombine misses
the icmp transform.

//===---------------------------------------------------------------------===//

This code:

typedef struct {
  int f1:1;
  int f2:1;
  int f3:1;
  int f4:29;
} t1;

typedef struct {
  int f1:1;
  int f2:1;
  int f3:30;
} t2;

t1 s1;
t2 s2;

void func1(void)
{
  s1.f1 = s2.f1;
  s1.f2 = s2.f2;
}

Compiles into this IR (on x86-64 at least):

%struct.t1 = type { i8, [3 x i8] }
@s2 = global %struct.t1 zeroinitializer, align 4
@s1 = global %struct.t1 zeroinitializer, align 4

define void @func1() nounwind ssp noredzone {
entry:
  %0 = load i32* bitcast (%struct.t1* @s2 to i32*), align 4
  %bf.val.sext5 = and i32 %0, 1
  %1 = load i32* bitcast (%struct.t1* @s1 to i32*), align 4
  %2 = and i32 %1, -4
  %3 = or i32 %2, %bf.val.sext5
  %bf.val.sext26 = and i32 %0, 2
  %4 = or i32 %3, %bf.val.sext26
  store i32 %4, i32* bitcast (%struct.t1* @s1 to i32*), align 4
  ret void
}

The two or/and pairs should each be merged into one.

//===---------------------------------------------------------------------===//

Machine level code hoisting can be useful in some cases.  For example, PR9408
is about:

typedef union {
  void (*f1)(int);
  void (*f2)(long);
} funcs;

void foo(funcs f, int which) {
  int a = 5;
  if (which) {
    f.f1(a);
  } else {
    f.f2(a);
  }
}

which we compile to:

foo:                                    # @foo
# BB#0:                                 # %entry
        pushq   %rbp
        movq    %rsp, %rbp
        testl   %esi, %esi
        movq    %rdi, %rax
        je      .LBB0_2
# BB#1:                                 # %if.then
        movl    $5, %edi
        callq   *%rax
        popq    %rbp
        ret
.LBB0_2:                                # %if.else
        movl    $5, %edi
        callq   *%rax
        popq    %rbp
        ret

Note that the two call blocks are identical.  This doesn't get cleaned up at
the IR level because one call is passing an i32 and the other is passing an
i64.

//===---------------------------------------------------------------------===//

I see this sort of pattern in 176.gcc in a few places (e.g. the start of
store_bit_field).  The rem should be replaced with a multiply and subtract:

  %3 = sdiv i32 %A, %B
  %4 = srem i32 %A, %B

Similarly for udiv/urem.  Note that this shouldn't be done on X86 or ARM,
which can do this in a single operation (instruction or libcall).  It is
probably best to do this in the code generator.
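The rewrite is valid because truncating division satisfies
A == (A sdiv B) * B + (A srem B), so the remainder is just A - (A sdiv B) * B.
A hypothetical standalone spot-check of the identity (not LLVM code):

#include <cassert>

int main() {
  // C division truncates toward zero (as sdiv/srem do), so the identity
  // A % B == A - (A / B) * B holds whenever the division itself is defined.
  // Skip B == 0; the tested range stays clear of the INT_MIN / -1 overflow.
  for (int A = -100; A <= 100; ++A)
    for (int B = -9; B <= 9; ++B) {
      if (B == 0) continue;
      assert(A % B == A - (A / B) * B);
    }
  return 0;
}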
//===---------------------------------------------------------------------===//

unsigned foo(unsigned x, unsigned y) { return (x & y) == 0 || x == 0; }
should fold to (x & y) == 0.

//===---------------------------------------------------------------------===//

unsigned foo(unsigned x, unsigned y) { return x > y && x != 0; }
should fold to x > y.

//===---------------------------------------------------------------------===//
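Both folds hold because the extra clause is implied by the other operand:
x == 0 forces (x & y) == 0, and unsigned x > y forces x != 0.  A hypothetical
exhaustive check over small values:

#include <cassert>

int main() {
  for (unsigned x = 0; x < 64; ++x)
    for (unsigned y = 0; y < 64; ++y) {
      // "(x & y) == 0 || x == 0" collapses to "(x & y) == 0".
      assert((((x & y) == 0) || (x == 0)) == ((x & y) == 0));
      // "x > y && x != 0" collapses to "x > y".
      assert(((x > y) && (x != 0)) == (x > y));
    }
  return 0;
}

//===---------------------------------------------------------------------===//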