Target Independent Opportunities:

//===---------------------------------------------------------------------===//

We should recognize various "overflow detection" idioms and translate them into
llvm.uadd.with.overflow and similar intrinsics.  Here is a multiply idiom:

unsigned int mul(unsigned int a, unsigned int b) {
  if ((unsigned long long)a*b > 0xffffffff)
    exit(0);
  return a*b;
}

The legalization code for mul-with-overflow needs to be made more robust before
this can be implemented though.

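As a sketch of the target form, here is the same function written against the
Clang/GCC __builtin_mul_overflow builtin, which lowers to
llvm.umul.with.overflow; this illustrates the desired canonicalization, not
the pattern-matching code itself:

#include <stdlib.h>

unsigned int mul_checked(unsigned int a, unsigned int b) {
  unsigned int result;
  /* The overflow flag replaces the 64-bit widening compare above. */
  if (__builtin_mul_overflow(a, b, &result))
    exit(0);
  return result;
}
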
//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math).  Misc/mandel will like this. :)  This
isn't safe in general, even on darwin.  See the libm implementation of hypot
for examples (which special-case when x or y is exactly zero to get signed
zeros etc right).

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;
}

into:

 long long tmp = 1;
 for (i = ...; ++i, tmp += tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)

//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.

First, the intrinsic needs to be extended to support integers, and second the
code generator needs to be enhanced to lower these to multiplication trees.

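For reference, the balanced tree is just iterative squaring; a minimal C
sketch of the lowering (hypothetical helper, not the actual code generator
logic):

int powi_by_squaring(int x, unsigned n) {
  /* Square-and-multiply: O(log n) multiplies instead of n-1.  For n == 8
     this computes x2 = x*x; x4 = x2*x2; x8 = x4*x4, i.e. 3 multiplies. */
  int result = 1;
  while (n) {
    if (n & 1)
      result *= x;
    x *= x;
    n >>= 1;
  }
  return result;
}
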
//===---------------------------------------------------------------------===//

Interesting testcase for add/shift/mul reassociation:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

This is blocked on not handling X*X*X -> powi(X, 3) (see note above).  The issue
is that we end up getting t = 2*X, s = t*t and don't turn this into 4*X*X,
which is the same number of multiplies and is canonical, because the 2*X has
multiple uses.  Here's a simple example:

define i32 @test15(i32 %X1) {
  %B = mul i32 %X1, 47   ; X1*47
  %C = mul i32 %B, %B
  ret i32 %C
}

//===---------------------------------------------------------------------===//

Reassociate should handle the example in GCC PR16157:

extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
void f () {  /* this can be optimized to four additions... */
        b4 = a4 + a3 + a2 + a1 + a0;
        b3 = a3 + a2 + a1 + a0;
        b2 = a2 + a1 + a0;
        b1 = a1 + a0;
}

This requires reassociating to forms of expressions that are already available,
something that reassoc doesn't think about yet.

//===---------------------------------------------------------------------===//

This function: (derived from GCC PR19988)
double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x + -0.1234 * y));
}

compiles to:
_foo:
	movapd	%xmm1, %xmm2
	mulsd	LCPI1_1(%rip), %xmm1
	mulsd	LCPI1_0(%rip), %xmm2
	addsd	%xmm0, %xmm1
	addsd	%xmm0, %xmm2
	movapd	%xmm1, %xmm0
	mulsd	%xmm2, %xmm0
	ret

Reassociate should be able to turn it into:

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x - 0.1234 * y));
}

Which allows the multiply by constant to be CSE'd, producing:

_foo:
	mulsd	LCPI1_0(%rip), %xmm1
	movapd	%xmm1, %xmm2
	addsd	%xmm0, %xmm2
	subsd	%xmm1, %xmm0
	mulsd	%xmm2, %xmm0
	ret

This doesn't need -ffast-math support at all.  This is particularly bad because
the llvm-gcc frontend is canonicalizing the latter into the former, but clang
doesn't have this problem.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) {  return memcmp(j, l, 4);  }
int h(int *j, int *l) {  return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1, 2, 4, and 8 bytes.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
stuff too.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size.  It works but can be overly conservative as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should produce an unaligned load from code like this:

typedef float v4sf __attribute__((vector_size(16)));

v4sf example(float *P) {
  return (v4sf){ P[0], P[1], P[2], P[3] };
}

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns.  Instead
of:

	movl 136(%esp), %eax
	cmpl $0, %eax
	je LBB16_2	#cond_next
LBB16_1:	#cond_true
	incl _foo
LBB16_2:	#cond_next

emit:
	movl	_foo, %eax
	cmpl	$1, %edi
	sbbl	$-1, %eax
	movl	%eax, _foo

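The source idiom being matched is roughly this (a reconstruction for
illustration; the original C isn't shown above):

extern int foo;

void maybe_inc(int x) {
  if (x)    /* the cmp+sbb sequence increments without a branch */
    foo++;
}

The sbb trick works because "cmpl $1, %edi" sets the carry flag exactly when
x == 0, and subtracting -1 minus that carry adds 1 only when x != 0.
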
//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers.  See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs.  We could even make an intrinsic for this
if anyone cared enough about sincos.

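A source-level sketch of the combine (using the GNU sincos extension, which
returns void; availability is platform-dependent):

#define _GNU_SOURCE
#include <math.h>

void before(double x, double *s, double *c) {
  *s = sin(x);   /* two libcalls today */
  *c = cos(x);
}

void after(double x, double *s, double *c) {
  sincos(x, s, c);   /* one combined libcall */
}
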
//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
	{
	  /* Flip the target bit of each basis state */
	  reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
	}

Where MAX_UNSIGNED/state is a 64-bit int.  On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but this requires TBAA.

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

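A quick way to convince yourself it is bswap: check it against the compiler
builtin (a standalone harness, unrelated to the instcombine work itself):

#include <stdio.h>

unsigned long reverse(unsigned v);   /* the function above */

int main(void) {
  unsigned v = 0x12345678;
  /* Both print 78563412. */
  printf("%08x %08x\n", (unsigned)reverse(v), __builtin_bswap32(v));
  return 0;
}
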
//===---------------------------------------------------------------------===//

[LOOP DELETION]

We don't delete this output-free loop, because trip count analysis doesn't
realize that it is finite (if it were infinite, it would be undefined).  Not
having this blocks Loop Idiom from matching strlen and friends.

void foo(char *C) {
  int x = 0;
  while (*C)
    ++x, ++C;
}

//===---------------------------------------------------------------------===//

[LOOP RECOGNITION]

These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}
unsigned countbits_fast(unsigned v) {
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

typedef unsigned long long BITBOARD;
int PopCnt(register BITBOARD a) {
  register int c = 0;
  while (a) {
    c++;
    a &= a - 1;
  }
  return c;
}
unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

This should be recognized as CLZ:  rdar://8459039

unsigned clz_a(unsigned a) {
  int i;
  for (i=0; i<32; i++)
    if (a & (1<<(31-i)))
      return i;
  return 32;
}

This sort of thing should be added to the loop idiom pass.

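For reference, the canonical forms these should collapse to, written with the
GCC/Clang builtins (which lower to llvm.ctpop and llvm.ctlz):

unsigned popcount_canonical(unsigned v) {
  return __builtin_popcount(v);
}

unsigned clz_canonical(unsigned a) {
  /* clz_a above returns 32 for a == 0, but __builtin_clz is undefined at
     zero, so that case must stay guarded. */
  return a ? (unsigned)__builtin_clz(a) : 32;
}
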
//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors.

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}

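For comparison, a formulation that compilers already fold to a single
(possibly unaligned) 16-bit load; this little-endian variant assumes a
little-endian host:

#include <string.h>

unsigned short read_16_le_memcpy(const unsigned char *adr) {
  unsigned short v;
  memcpy(&v, adr, 2);   /* becomes one 16-bit load, no shift/or */
  return v;
}
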
312//===---------------------------------------------------------------------===//
313
314-instcombine should handle this transform:
315   icmp pred (sdiv X / C1 ), C2
316when X, C1, and C2 are unsigned.  Similarly for udiv and signed operands.
317
318Currently InstCombine avoids this transform but will do it when the signs of
319the operands and the sign of the divide match. See the FIXME in
320InstructionCombining.cpp in the visitSetCondInst method after the switch case
321for Instruction::UDiv (around line 4447) for more details.
322
323The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
324this construct.
325
326//===---------------------------------------------------------------------===//
327
328[LOOP OPTIMIZATION]
329
330SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
331opportunities in its double_array_divs_variable function: it needs loop
332interchange, memory promotion (which LICM already does), vectorization and
333variable trip count loop unrolling (since it has a constant trip count). ICC
334apparently produces this very nice code with -ffast-math:
335
336..B1.70:                        # Preds ..B1.70 ..B1.69
337       mulpd     %xmm0, %xmm1                                  #108.2
338       mulpd     %xmm0, %xmm1                                  #108.2
339       mulpd     %xmm0, %xmm1                                  #108.2
340       mulpd     %xmm0, %xmm1                                  #108.2
341       addl      $8, %edx                                      #
342       cmpl      $131072, %edx                                 #108.2
343       jb        ..B1.70       # Prob 99%                      #108.2
344
345It would be better to count down to zero, but this is a lot better than what we
346do.
347
//===---------------------------------------------------------------------===//

Consider:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used.  On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags.  On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

PHI Slicing could be extended to do this.

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
	%tmp.1 = and i32 %a, 1		; <i32> [#uses=1]
	%tmp.2 = icmp ne i32 %tmp.1, 0		; <i1> [#uses=1]
	br i1 %tmp.2, label %then.0, label %else.0

then.0:		; preds = %entry
	%tmp.5 = add i32 %a, -1		; <i32> [#uses=1]
	%tmp.3 = call i32 @t4( i32 %tmp.5 )		; <i32> [#uses=1]
	br label %return

else.0:		; preds = %entry
	%tmp.7 = icmp ne i32 %a, 0		; <i1> [#uses=1]
	br i1 %tmp.7, label %then.1, label %return

then.1:		; preds = %else.0
	%tmp.11 = add i32 %a, -2		; <i32> [#uses=1]
	%tmp.9 = call i32 @t4( i32 %tmp.11 )		; <i32> [#uses=1]
	br label %return

return:		; preds = %then.1, %else.0, %then.0
	%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                            [ %tmp.9, %then.1 ]
	ret i32 %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1(n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative.  "return foo() << 1" can be tail recursion eliminated.

//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
	%tmp = load i32* %x		; <i32> [#uses=0]
	%tmp.foo = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
	%tmp3 = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp3
}

//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass.  Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

we compile this to:
_foo:
	subl	$28, %esp
	call	"L1$pb"
"L1$pb":
	popl	%eax
	cmpl	$0, 32(%esp)
	je	LBB1_2	# cond_true
LBB1_1:	# return
	# ...
	addl	$28, %esp
	ret
LBB1_2:	# cond_true
...

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block.  It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it.  This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86.  If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html

//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations.  On a yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar.  On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.

//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable.  For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-m64 -O3 -fno-exceptions -static -fomit-frame-pointer):

__Z9GetHotKeyv:                         ## @_Z9GetHotKeyv
	movq	_m_HotKey@GOTPCREL(%rip), %rax
	movzwl	(%rax), %ecx
	movzbl	2(%rax), %edx
	shlq	$16, %rdx
	orq	%rcx, %rdx
	movzbl	3(%rax), %ecx
	shlq	$24, %rcx
	orq	%rdx, %rcx
	movzbl	4(%rax), %eax
	shlq	$32, %rax
	orq	%rcx, %rax
	ret

//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//

Consider:

int test() {
  long long input[8] = {1,0,1,0,1,0,1,0};
  foo(input);
}

Clang compiles this into:

  call void @llvm.memset.p0i8.i64(i8* %tmp, i8 0, i64 64, i32 16, i1 false)
  %0 = getelementptr [8 x i64]* %input, i64 0, i64 0
  store i64 1, i64* %0, align 16
  %1 = getelementptr [8 x i64]* %input, i64 0, i64 2
  store i64 1, i64* %1, align 16
  %2 = getelementptr [8 x i64]* %input, i64 0, i64 4
  store i64 1, i64* %2, align 16
  %3 = getelementptr [8 x i64]* %input, i64 0, i64 6
  store i64 1, i64* %3, align 16

Which gets codegen'd into:

	pxor	%xmm0, %xmm0
	movaps	%xmm0, -16(%rbp)
	movaps	%xmm0, -32(%rbp)
	movaps	%xmm0, -48(%rbp)
	movaps	%xmm0, -64(%rbp)
	movq	$1, -64(%rbp)
	movq	$1, -48(%rbp)
	movq	$1, -32(%rbp)
	movq	$1, -16(%rbp)

It would be better to have 4 movq's of 0 instead of the movaps's.

//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The following code should compile into "ret int undef".  Instead, LLVM
produces "ret int 0":

int f() {
  int x = 4;
  int y;
  if (x == 3) y = 0;
  return y;
}

//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop.  One trivial example is:

#include <stdio.h>
int main() {
    int nRet = 17;
    int nLoop;
    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        if ( nLoop & 1 )
            nRet += 2;
        else
            nRet -= 1;
    }
    return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size.  The resultant code would then also be suitable for
exit value computation.

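A sketch of the body after unrolling by 2 (1000 is even, so there is no
remainder iteration and the parity test folds away in both copies):

    for ( nLoop = 0; nLoop < 1000; nLoop += 2 ) {
        nRet -= 1;   /* even iteration: nLoop & 1 == 0 */
        nRet += 2;   /* odd iteration:  nLoop & 1 == 1 */
    }
    /* Net +1 per pair, so exit value computation gives 17 + 500 = 517. */
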
//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc.  On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough.  Here are some testcases reduced from GCC PR17886:

unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

//===---------------------------------------------------------------------===//

This (and similar related idioms):

unsigned int foo(unsigned char i) {
  return i | (i<<8) | (i<<16) | (i<<24);
}

compiles into:

define i32 @foo(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %shl5 = shl i32 %conv, 16
  %shl9 = shl i32 %conv, 24
  %or = or i32 %shl9, %conv
  %or6 = or i32 %or, %shl5
  %or10 = or i32 %or6, %shl
  ret i32 %or10
}

it would be better as:

unsigned int bar(unsigned char i) {
  unsigned int j = i | (i << 8);
  return j | (j << 16);
}

aka:

define i32 @bar(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %or = or i32 %shl, %conv
  %shl5 = shl i32 %or, 16
  %or6 = or i32 %shl5, %or
  ret i32 %or6
}

or even i*0x01010101, depending on the speed of the multiplier.  The best way to
handle this is to canonicalize it to a multiply in IR and have codegen handle
lowering multiplies to shifts on cpus where shifts are faster.

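The multiply form, for completeness (0x01010101 is the byte-splat constant):

unsigned int baz(unsigned char i) {
  return i * 0x01010101u;   /* splat the byte with one multiply */
}
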
//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together.  For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy.  This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.

//===---------------------------------------------------------------------===//

We compile this program: (from GCC PR11680)
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

Into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
1.821u 0.003s 0:01.82 100.0%	0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
0.821u 0.001s 0:00.82 100.0%	0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mullo (cheaper).  Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

This is equivalent to the following, where 2863311531 is the multiplicative
inverse of 3 (mod 2^32), and 1431655766 is ((2^32)-1)/3+1:
void bar(unsigned n) {
  if (n * 2863311531U < 1431655766U)
    true();
}

The same transformation can work with an even modulo with the addition of a
rotate: rotate the result of the multiply to the right by the number of bits
which need to be zero for the condition to be true, and shrink the compare RHS
by the same amount.  Unless the target supports rotates, though, that
transformation probably isn't worthwhile.

The transformation can also easily be made to work with non-zero equality
comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".

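A standalone sanity check of the arithmetic behind the transform (sampling the
32-bit range; the property holds exactly because multiplying by the odd
constant 2863311531 is a bijection mod 2^32 that sends the multiples of 3 to
the range [0, 1431655766)):

#include <assert.h>

int main(void) {
  for (unsigned long long i = 0; i <= 0xFFFFFFFFULL; i += 9973) {
    unsigned n = (unsigned)i;
    assert((n % 3 == 0) == (n * 2863311531U < 1431655766U));
  }
  return 0;
}
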
//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

#include <cstdio>
struct test {
    int val;
    virtual ~test() {}
};

int main() {
    test t;
    std::scanf("%d", &t.val);
    std::printf("%d\n", t.val);
}

//===---------------------------------------------------------------------===//

These functions perform the same computation, but produce different assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6       ;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//

From gcc bug 24696:
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}
int
f2 (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}
Both should combine to ((a|b) & (c-1)) != 0.  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 20192:
#define PMD_MASK    (~((1UL << 23) - 1))
void clear_pmd_range(unsigned long start, unsigned long end)
{
   if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
       f();
}
The expression should optimize to something like
"!((start|end) & ~PMD_MASK)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned int f(unsigned int i, unsigned int n) {
  ++i; if (i == n) ++i; return i;
}
unsigned int f2(unsigned int i, unsigned int n) {
  ++i; i += i == n; return i;
}
These should combine to the same thing.  Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//

From GCC Bug 15784:
#define abs(x) x>0?x:-x
int f(int x, int y)
{
 return (abs(x)) >= 0;
}
This should optimize to x != INT_MIN. (With -fwrapv.)  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 14753:
void
rotate_cst (unsigned int a)
{
 a = (a << 10) | (a >> 22);
 if (a == 123)
   bar ();
}
void
minus_cst (unsigned int a)
{
 unsigned int tem;

 tem = 20 - a;
 if (tem == 5)
   bar ();
}
void
mask_gt (unsigned int a)
{
 /* This is equivalent to a > 15.  */
 if ((a & ~7) > 8)
   bar ();
}
void
rshift_gt (unsigned int a)
{
 /* This is equivalent to a > 23.  */
 if ((a >> 2) > 5)
   bar ();
}

All should simplify to a single comparison.  All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 32605:
int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)".  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a".  Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane.  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c).  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)".  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int g(int x) { return (x - 10) < 0; }
Should combine to "x <= 9" (the sub has nsw).  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int g(int x) { return (x + 10) < 0; }
Should combine to "x < -10" (the add has nsw).  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int f(int i, int j) { return i < j + 1; }
int g(int i, int j) { return j > i - 1; }
Should combine to "i <= j" (the add/sub has nsw).  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
  (!tmp && decl_context == 1)

This allows recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

        %tmp23 = icmp eq i32 %decl_context_addr.1, 0            ; <i1> [#uses=1]
        %tmp24 = xor i1 %tmp1, true             ; <i1> [#uses=1]
        %or.cond8 = and i1 %tmp23, %tmp24               ; <i1> [#uses=1]

later.

//===---------------------------------------------------------------------===//

[STORE SINKING]

Store sinking: This code:

void f (int n, int *cond, int *res) {
    int i;
    *res = 0;
    for (i = 0; i < n; i++)
        if (*cond)
            *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out.  This gives us this code:

bb:		; preds = %bb2, %entry
	%.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
	%i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
	%1 = load i32* %cond, align 4
	%2 = icmp eq i32 %1, 0
	br i1 %2, label %bb2, label %bb1

bb1:		; preds = %bb
	%3 = xor i32 %.rle, 234
	store i32 %3, i32* %res, align 4
	br label %bb2

bb2:		; preds = %bb, %bb1
	%.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
	%indvar.next = add i32 %i.05, 1
	%exitcond = icmp eq i32 %indvar.next, %n
	br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.

Here's another partially dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395

//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.

//===---------------------------------------------------------------------===//

This simple function from 179.art:

int winner, numf2s;
struct { double y; int   reset; } *Y;

void find_match() {
   int i;
   winner = 0;
   for (i=0; i<numf2s; i++)
       if (Y[i].y > Y[winner].y)
           winner = i;
}

Compiles into (with clang TBAA):

for.body:                                         ; preds = %for.inc, %bb.nph
  %indvar = phi i64 [ 0, %bb.nph ], [ %indvar.next, %for.inc ]
  %i.01718 = phi i32 [ 0, %bb.nph ], [ %i.01719, %for.inc ]
  %tmp4 = getelementptr inbounds %struct.anon* %tmp3, i64 %indvar, i32 0
  %tmp5 = load double* %tmp4, align 8, !tbaa !4
  %idxprom7 = sext i32 %i.01718 to i64
  %tmp10 = getelementptr inbounds %struct.anon* %tmp3, i64 %idxprom7, i32 0
  %tmp11 = load double* %tmp10, align 8, !tbaa !4
  %cmp12 = fcmp ogt double %tmp5, %tmp11
  br i1 %cmp12, label %if.then, label %for.inc

if.then:                                          ; preds = %for.body
  %i.017 = trunc i64 %indvar to i32
  br label %for.inc

for.inc:                                          ; preds = %for.body, %if.then
  %i.01719 = phi i32 [ %i.01718, %for.body ], [ %i.017, %if.then ]
  %indvar.next = add i64 %indvar, 1
  %exitcond = icmp eq i64 %indvar.next, %tmp22
  br i1 %exitcond, label %for.cond.for.end_crit_edge, label %for.body

It is good that we hoisted the reloads of numf2s and Y out of the loop and
sunk the store to winner out.

However, this is awful on several levels: the conditional truncate in the loop
(-indvars at fault? why can't we completely promote the IV to i64?).

Beyond that, we have a partially redundant load in the loop: if "winner" (aka
%i.01718) isn't updated, we reload Y[winner].y the next time through the loop.
Similarly, the addressing that feeds it (including the sext) is redundant. In
the end we get this generated assembly:

LBB0_2:                                 ## %for.body
                                        ## =>This Inner Loop Header: Depth=1
	movsd	(%rdi), %xmm0
	movslq	%edx, %r8
	shlq	$4, %r8
	ucomisd	(%rcx,%r8), %xmm0
	jbe	LBB0_4
	movl	%esi, %edx
LBB0_4:                                 ## %for.inc
	addq	$16, %rdi
	incq	%rsi
	cmpq	%rsi, %rax
	jne	LBB0_2

All things considered this isn't too bad, but we shouldn't need the movslq or
the shlq instruction, or the load folded into ucomisd every time through the
loop.

On an x86-specific topic, if the loop can't be restructured, the movl should be
a cmov.

//===---------------------------------------------------------------------===//

[STORE SINKING]

GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

for () {
  *P += 1;
  if ()
    call();
  else
    ...
}
->
tmp = *P
for () {
  tmp += 1;
  if () {
    *P = tmp;
    call();
    tmp = *P;
  } else ...
}
*P = tmp;

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store.  We need partially dead store sinking.

//===---------------------------------------------------------------------===//

[LOAD PRE CRIT EDGE SPLITTING]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic.  This could be handled by GVN with some crazy
symbolic phi translation.  The code we get looks like (g is on the stack):

bb2:		; preds = %bb1
..
	%9 = getelementptr %struct.f* %g, i32 0, i32 0
	store i32 %8, i32* %9, align 4
	br label %bb3

bb3:		; preds = %bb1, %bb2, %bb
	%c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
	%b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
	%11 = load i32* %10, align 4

%11 is partially redundant, and in BB2 it should have the value %8.

GCC PR33344 and PR35287 are similar cases.

//===---------------------------------------------------------------------===//

[LOAD PRE]

There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite, ones we don't get yet are (checked through loadpre25):

[CRIT EDGE BREAKING]
loadpre3.c predcom-4.c

[PRE OF READONLY CALL]
loadpre5.c

[TURN SELECT INTO BRANCH]
loadpre14.c loadpre15.c

actually a conditional increment: loadpre18.c loadpre19.c

//===---------------------------------------------------------------------===//

[LOAD PRE / STORE SINKING / SPEC HACK]

This is a chunk of code from 456.hmmer:

int f(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
      int *tpdm, int xmb, int *bp, int *ms) {
  int k, sc;
  for (k = 1; k <= M; k++) {
      mc[k] = mpp[k-1]   + tpmm[k-1];
      if ((sc = ip[k-1]  + tpim[k-1]) > mc[k])  mc[k] = sc;
      if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k])  mc[k] = sc;
      if ((sc = xmb  + bp[k])         > mc[k])  mc[k] = sc;
      mc[k] += ms[k];
    }
}

It is very profitable for this benchmark to turn the conditional stores to mc[k]
into a conditional move (select instr in IR) and allow the final store to do the
store.  See GCC PR27313 for more details.  Note that this is valid to xform even
with the new C++ memory model, since mc[k] is previously loaded and later
stored.

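A source-level sketch of the select form: accumulate the max in a local using
the conditional operator (which becomes select in IR) and store once:

int f_sel(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
          int *tpdm, int xmb, int *bp, int *ms) {
  int k, sc, t;
  for (k = 1; k <= M; k++) {
    t = mpp[k-1] + tpmm[k-1];
    sc = ip[k-1] + tpim[k-1];  t = sc > t ? sc : t;   /* select, not branch */
    sc = dpp[k-1] + tpdm[k-1]; t = sc > t ? sc : t;
    sc = xmb + bp[k];          t = sc > t ? sc : t;
    mc[k] = t + ms[k];   /* single unconditional store */
  }
  return 0;
}
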
//===---------------------------------------------------------------------===//

[SCALAR PRE]
There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
GCC testsuite.

//===---------------------------------------------------------------------===//

There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite.  For example, we get the first example in predcom-1.c, but
miss the second one:

unsigned fib[1000];
unsigned avg[1000];

__attribute__ ((noinline))
void count_averages(int n) {
  int i;
  for (i = 1; i < n; i++)
    avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
}

which compiles into two loads instead of one in the loop.

predcom-2.c is the same as predcom-1.c

predcom-3.c is very similar but needs loads feeding each other instead of
store->load.

//===---------------------------------------------------------------------===//

[ALIAS ANALYSIS]

Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

We should do better analysis of posix_memalign.  At the least it should
no-capture its pointer argument; at best, we should know that the out-value
result doesn't point to anything (like malloc).  One example of this is in
SingleSource/Benchmarks/Misc/dt.c

//===---------------------------------------------------------------------===//

Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629
With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
             opt -mem2reg -gvn -instcombine | llvm-dis
we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS

//===---------------------------------------------------------------------===//

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
We could eliminate the branch condition here, loading from null is undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}

//===---------------------------------------------------------------------===//

simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character.  See PR3253 for some notes on
codegen.

456.hmmer apparently uses strcspn and strspn a lot.  471.omnetpp uses strspn.

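A sketch of the switch form after the rejected characters become constants,
e.g. for a call site strcspn(x, "ab") (hypothetical specialization, for
illustration):

#include <stddef.h>

size_t strcspn_ab(const char *s) {
  size_t result = 0;
  for (;; ++result) {
    switch (s[result]) {    /* one switch instead of a chain of compares */
    case '\0': case 'a': case 'b':
      return result;
    default:
      break;
    }
  }
}
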
//===---------------------------------------------------------------------===//

simplifylibcalls should turn these snprintf idioms into memcpy (GCC PR47917)

char buf1[6], buf2[6], buf3[4], buf4[4];
int i;

int foo (void) {
  int ret = snprintf (buf1, sizeof buf1, "abcde");
  ret += snprintf (buf2, sizeof buf2, "abcdef") * 16;
  ret += snprintf (buf3, sizeof buf3, "%s", i++ < 6 ? "abc" : "def") * 256;
  ret += snprintf (buf4, sizeof buf4, "%s", i++ > 10 ? "abcde" : "defgh")*4096;
  return ret;
}

//===---------------------------------------------------------------------===//

"gas" uses this idiom:
  else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
..
  else if (strchr ("<>", *intel_parser.op_string))

Those should be turned into a switch.

//===---------------------------------------------------------------------===//

252.eon contains this interesting code:

        %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)    ; uses = 1
        %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
        call void @llvm.memcpy.i32(i8* %endptr,
          i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
        %3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple of reasons.  First, the strlen that follows
the memcpy can be replaced with:

        %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

because the destination was just copied into the specified memory buffer.  This,
in turn, can be constant folded to "4".

In other code, it contains:

        %endptr6978 = bitcast i8* %endptr69 to i32*
        store i32 7107374, i32* %endptr6978, align 1
        %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

Which could also be constant folded.  Whatever is producing this should probably
be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:            ; preds = %_ZN18eonImageCalculatorC1Ev.exit
        %682 = getelementptr i8** %argv, i32 6          ; <i8**> [#uses=2]
        %683 = load i8** %682, align 4          ; <i8*> [#uses=4]
        %684 = load i8* %683, align 1           ; <i8> [#uses=1]
        %685 = icmp eq i8 %684, 0               ; <i1> [#uses=1]
        br i1 %685, label %bb10, label %bb9

bb9:            ; preds = %bb8
        %686 = call i32 @strlen(i8* %683) nounwind readonly
        %687 = icmp ugt i32 %686, 254           ; <i1> [#uses=1]
        br i1 %687, label %bb10, label %bb11

bb10:           ; preds = %bb9, %bb8
        %688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.

//===---------------------------------------------------------------------===//

I see an interesting fully redundant call to strlen left in 186.crafty:InputMove
which looks like:

       %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0

bb62:           ; preds = %bb55, %bb53
        %promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
        %171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
        %172 = add i32 %171, -1         ; <i32> [#uses=1]
        %173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172

...  no stores ...
       br i1 %or.cond, label %bb65, label %bb72

bb65:           ; preds = %bb62
        store i8 0, i8* %173, align 1
        br label %bb72

bb72:           ; preds = %bb65, %bb62
        %trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
        %177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially redundant
with the %171 call.  At worst, we could shove the %177 strlen call up into the
bb65 block, moving it out of the bb62->bb72 path.  However, note that bb65
stores to the string, zeroing out the last byte.  This means that on that path
the value of %177 is actually just %171-1.  A sub is cheaper than a strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.

//===---------------------------------------------------------------------===//

186.crafty has this interesting pattern with the "out.4543" variable:

call void @llvm.memcpy.i32(
        i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
       i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call @printf(i8* ...   @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(...,  globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out".  Once all the printfs
stop using "out", all that is left is the memcpy's into it.  This should allow
globalopt to remove the "stored only" global.

//===---------------------------------------------------------------------===//

This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift.
For example, on x86 this currently gets this:

	movb	(%eax), %al
	sarb	$5, %al
	movsbl	%al, %eax

while it could get this:

	movsbl	(%eax), %eax
	sarl	$5, %eax

//===---------------------------------------------------------------------===//

GCC PR31029:

int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants, what is the rule for even?

//===---------------------------------------------------------------------===//

PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
for next field in struct (which is at same address).

For example: store of float into { {{}}, float } could be turned into a store to
the float directly.

//===---------------------------------------------------------------------===//

The arg promotion pass should make use of nocapture to make its alias analysis
stuff much more precise.

//===---------------------------------------------------------------------===//

The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int int_char(char m) {if(m>7) return 0; return m;}

//===---------------------------------------------------------------------===//

int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }

Generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %a, 128                            ; <i32> [#uses=1]
  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
  %2 = or i32 %b, 128                             ; <i32> [#uses=1]
  %3 = and i32 %b, -129                           ; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2        ; <i32> [#uses=1]
  ret i32 %b_addr.0
}

However, it's functionally equivalent to:

         b = (b & ~0x80) | (a & 0x80);

Which generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %b, -129                           ; <i32> [#uses=1]
  %1 = and i32 %a, 128                            ; <i32> [#uses=1]
  %2 = or i32 %0, %1                              ; <i32> [#uses=1]
  ret i32 %2
}

This can be generalized for other forms:

     b = (b & ~0x80) | (a & 0x40) << 1;

//===---------------------------------------------------------------------===//

These two functions produce different code.  They shouldn't:

#include <stdint.h>

uint8_t p1(uint8_t b, uint8_t a) {
  b = (b & ~0xc0) | (a & 0xc0);
  return (b);
}

uint8_t p2(uint8_t b, uint8_t a) {
  b = (b & ~0x40) | (a & 0x40);
  b = (b & ~0x80) | (a & 0x80);
  return (b);
}

define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %1 = and i8 %a, -64                             ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  ret i8 %2
}

define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %.masked = and i8 %a, 64                        ; <i8> [#uses=1]
  %1 = and i8 %a, -128                            ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  %3 = or i8 %2, %.masked                         ; <i8> [#uses=1]
  ret i8 %3
}

//===---------------------------------------------------------------------===//

IPSCCP does not currently propagate argument dependent constants through
functions where it does not know all of the callers.  This includes functions
with normal external linkage as well as templates, C99 inline functions etc.
Specifically, it does nothing to:

define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
entry:
  %0 = add nsw i32 %y, %z
  %1 = mul i32 %0, %x
  %2 = mul i32 %y, %z
  %3 = add nsw i32 %1, %2
  ret i32 %3
}

define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}

It would be interesting to extend IPSCCP to be able to handle simple cases like
this, where all of the arguments to a call are constant.  Because IPSCCP runs
before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.

//===---------------------------------------------------------------------===//

The libcall constant folding stuff should be moved out of SimplifyLibcalls into
libanalysis' constantfolding logic.  This would allow IPSCCP to be able to
handle simple things like this:

static int foo(const char *X) { return strlen(X); }
int bar() { return foo("abcd"); }

//===---------------------------------------------------------------------===//

functionattrs doesn't know much about memcpy/memset.  This function should be
marked readnone rather than readonly, since it only twiddles local memory, but
functionattrs doesn't handle memset/memcpy/memmove aggressively:

struct X { int *p; int *q; };
int foo() {
 int i = 0, j = 1;
 struct X x, y;
 int **p;
 y.p = &i;
 x.q = &j;
 p = __builtin_memcpy (&x, &y, sizeof (int *));
 return **p;
}

This can be seen at:
$ clang t.c -S -o - -mkernel -O0 -emit-llvm | opt -functionattrs -S

1609//===---------------------------------------------------------------------===//
1610
1611Missed instcombine transformation:
1612define i1 @a(i32 %x) nounwind readnone {
1613entry:
1614  %cmp = icmp eq i32 %x, 30
1615  %sub = add i32 %x, -30
1616  %cmp2 = icmp ugt i32 %sub, 9
1617  %or = or i1 %cmp, %cmp2
1618  ret i1 %or
1619}
1620This should be optimized to a single compare.  Testcase derived from gcc.
1621
1622//===---------------------------------------------------------------------===//
1623
1624Missed instcombine or reassociate transformation:
1625int a(int a, int b) { return (a==12)&(b>47)&(b<58); }
1626
1627The sgt and slt should be combined into a single comparison. Testcase derived
1628from gcc.
1629
1630//===---------------------------------------------------------------------===//
1631
1632Missed instcombine transformation:
1633
1634  %382 = srem i32 %tmp14.i, 64                    ; [#uses=1]
1635  %383 = zext i32 %382 to i64                     ; [#uses=1]
1636  %384 = shl i64 %381, %383                       ; [#uses=1]
1637  %385 = icmp slt i32 %tmp14.i, 64                ; [#uses=1]
1638
1639The srem can be transformed to an and because if %tmp14.i is negative, the
1640shift is undefined.  Testcase derived from 403.gcc.
1641
1642//===---------------------------------------------------------------------===//
1643
1644This is a range comparison on a divided result (from 403.gcc):
1645
1646  %1337 = sdiv i32 %1336, 8                       ; [#uses=1]
  %.off.i208 = add i32 %1337, 7                   ; [#uses=1]
1648  %1338 = icmp ult i32 %.off.i208, 15             ; [#uses=1]
1649
We already catch this (removing the sdiv) if there isn't an add; we should
handle the 'add' as well.  This is a common idiom in 403.gcc's builtin_alloca code.
1652C testcase:
1653
1654int a(int x) { return (unsigned)(x/16+7) < 15; }
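
x/16 is in [-7, 7] exactly when x is in [-127, 127], so assuming the combined
fold, the expected IR is roughly:

define i32 @a(i32 %x) nounwind readnone {
entry:
  %off = add i32 %x, 127
  %cmp = icmp ult i32 %off, 255
  %conv = zext i1 %cmp to i32
  ret i32 %conv
}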
1655
1656Another similar case involves truncations on 64-bit targets:
1657
1658  %361 = sdiv i64 %.046, 8                        ; [#uses=1]
1659  %362 = trunc i64 %361 to i32                    ; [#uses=2]
1660...
1661  %367 = icmp eq i32 %362, 0                      ; [#uses=1]
1662
1663//===---------------------------------------------------------------------===//
1664
1665Missed instcombine/dagcombine transformation:
1666define void @lshift_lt(i8 zeroext %a) nounwind {
1667entry:
1668  %conv = zext i8 %a to i32
1669  %shl = shl i32 %conv, 3
1670  %cmp = icmp ult i32 %shl, 33
1671  br i1 %cmp, label %if.then, label %if.end
1672
1673if.then:
1674  tail call void @bar() nounwind
1675  ret void
1676
1677if.end:
1678  ret void
1679}
1680declare void @bar() nounwind
1681
1682The shift should be eliminated.  Testcase derived from gcc.
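
Since (zext i8 %a) << 3 is ult 33 exactly when %a is ult 5, the compare can be
done directly on the byte; a sketch:

  %cmp = icmp ult i8 %a, 5
  br i1 %cmp, label %if.then, label %if.end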
1683
1684//===---------------------------------------------------------------------===//
1685
These compile into different code: one gets recognized as a switch and the
other doesn't, due to phase ordering issues (PR6212):
1688
1689int test1(int mainType, int subType) {
1690  if (mainType == 7)
1691    subType = 4;
1692  else if (mainType == 9)
1693    subType = 6;
1694  else if (mainType == 11)
1695    subType = 9;
1696  return subType;
1697}
1698
1699int test2(int mainType, int subType) {
1700  if (mainType == 7)
1701    subType = 4;
1702  if (mainType == 9)
1703    subType = 6;
1704  if (mainType == 11)
1705    subType = 9;
1706  return subType;
1707}
1708
1709//===---------------------------------------------------------------------===//
1710
1711The following test case (from PR6576):
1712
1713define i32 @mul(i32 %a, i32 %b) nounwind readnone {
1714entry:
1715 %cond1 = icmp eq i32 %b, 0                      ; <i1> [#uses=1]
1716 br i1 %cond1, label %exit, label %bb.nph
1717bb.nph:                                           ; preds = %entry
1718 %tmp = mul i32 %b, %a                           ; <i32> [#uses=1]
1719 ret i32 %tmp
1720exit:                                             ; preds = %entry
1721 ret i32 0
1722}
1723
1724could be reduced to:
1725
1726define i32 @mul(i32 %a, i32 %b) nounwind readnone {
1727entry:
1728 %tmp = mul i32 %b, %a
1729 ret i32 %tmp
1730}
1731
1732//===---------------------------------------------------------------------===//
1733
1734We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.
1735See GCC PR34949
1736
Another interesting case is that something similar could be used for variables
that become const after their ctor has finished.  In these cases, globalopt
(which can statically run the constructor) could mark the global const (so it
gets put in the readonly section).  A testcase would be:
1741
1742#include <complex>
1743using namespace std;
1744const complex<char> should_be_in_rodata (42,-42);
1745complex<char> should_be_in_data (42,-42);
1746complex<char> should_be_in_bss;
1747
We currently evaluate the ctors, but the globals don't become const because
the optimizer doesn't know they "become const" after the ctor is done.  See
GCC PR4131 for more examples.
1751
1752//===---------------------------------------------------------------------===//
1753
1754In this code:
1755
1756long foo(long x) {
1757  return x > 1 ? x : 1;
1758}
1759
1760LLVM emits a comparison with 1 instead of 0. 0 would be equivalent
1761and cheaper on most targets.
1762
1763LLVM prefers comparisons with zero over non-zero in general, but in this
case it chooses instead to keep the max operation obvious.
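
The compare-with-zero form would be something like (assuming a 64-bit long):

  %cmp = icmp sgt i64 %x, 0
  %sel = select i1 %cmp, i64 %x, i64 1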
1765
1766//===---------------------------------------------------------------------===//
1767
1768define void @a(i32 %x) nounwind {
1769entry:
1770  switch i32 %x, label %if.end [
1771    i32 0, label %if.then
1772    i32 1, label %if.then
1773    i32 2, label %if.then
1774    i32 3, label %if.then
1775    i32 5, label %if.then
1776  ]
1777if.then:
1778  tail call void @foo() nounwind
1779  ret void
1780if.end:
1781  ret void
1782}
1783declare void @foo()
1784
1785Generated code on x86-64 (other platforms give similar results):
1786a:
1787	cmpl	$5, %edi
	ja	.LBB0_2
	cmpl	$4, %edi
	jne	.LBB0_3
1791.LBB0_2:
1792	ret
1793.LBB0_3:
1794	jmp	foo  # TAILCALL
1795
1796If we wanted to be really clever, we could simplify the whole thing to
1797something like the following, which eliminates a branch:
1798	xorl    $1, %edi
1799	cmpl	$4, %edi
	jbe	.LBB0_2
	ret
.LBB0_2:
	jmp	foo  # TAILCALL
1804
1805//===---------------------------------------------------------------------===//
1806
1807We compile this:
1808
1809int foo(int a) { return (a & (~15)) / 16; }
1810
1811Into:
1812
1813define i32 @foo(i32 %a) nounwind readnone ssp {
1814entry:
1815  %and = and i32 %a, -16
1816  %div = sdiv i32 %and, 16
1817  ret i32 %div
1818}
1819
but (X & -A)/A is X >> log2(A) when A is a power of 2 (the mask makes the
sdiv exact), so this case should be instcombined into just "a >> 4".
1822
1823We do get this at the codegen level, so something knows about it, but
1824instcombine should catch it earlier:
1825
1826_foo:                                   ## @foo
1827## BB#0:                                ## %entry
1828	movl	%edi, %eax
1829	sarl	$4, %eax
1830	ret
1831
1832//===---------------------------------------------------------------------===//
1833
1834This code (from GCC PR28685):
1835
1836int test(int a, int b) {
1837  int lt = a < b;
1838  int eq = a == b;
1839  if (lt)
1840    return 1;
1841  return eq;
1842}
1843
1844Is compiled to:
1845
1846define i32 @test(i32 %a, i32 %b) nounwind readnone ssp {
1847entry:
1848  %cmp = icmp slt i32 %a, %b
1849  br i1 %cmp, label %return, label %if.end
1850
1851if.end:                                           ; preds = %entry
1852  %cmp5 = icmp eq i32 %a, %b
1853  %conv6 = zext i1 %cmp5 to i32
1854  ret i32 %conv6
1855
1856return:                                           ; preds = %entry
1857  ret i32 1
1858}
1859
1860it could be:
1861
1862define i32 @test__(i32 %a, i32 %b) nounwind readnone ssp {
1863entry:
1864  %0 = icmp sle i32 %a, %b
1865  %retval = zext i1 %0 to i32
1866  ret i32 %retval
1867}
1868
1869//===---------------------------------------------------------------------===//
1870
1871This code can be seen in viterbi:
1872
1873  %64 = call noalias i8* @malloc(i64 %62) nounwind
1874...
1875  %67 = call i64 @llvm.objectsize.i64(i8* %64, i1 false) nounwind
1876  %68 = call i8* @__memset_chk(i8* %64, i32 0, i64 %62, i64 %67) nounwind
1877
1878llvm.objectsize.i64 should be taught about malloc/calloc, allowing it to
1879fold to %62.  This is a security win (overflows of malloc will get caught)
1880and also a performance win by exposing more memsets to the optimizer.
1881
1882This occurs several times in viterbi.
1883
Note that this would change the semantics of @llvm.objectsize, which by its
current definition always folds to a constant.  We should also make sure that
we remove the checking in code like:
1887
1888  char *p = malloc(strlen(s)+1);
1889  __strcpy_chk(p, s, __builtin_objectsize(p, 0));
1890
1891//===---------------------------------------------------------------------===//
1892
1893This code (from Benchmarks/Dhrystone/dry.c):
1894
1895define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
1896entry:
1897  %sext = shl i32 %0, 24
1898  %conv = ashr i32 %sext, 24
1899  %sext6 = shl i32 %1, 24
1900  %conv4 = ashr i32 %sext6, 24
1901  %cmp = icmp eq i32 %conv, %conv4
1902  %. = select i1 %cmp, i32 10000, i32 0
1903  ret i32 %.
1904}
1905
1906Should be simplified into something like:
1907
1908define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
1909entry:
1910  %sext = shl i32 %0, 24
  %conv = and i32 %sext, -16777216              ; 0xFF000000
  %sext6 = shl i32 %1, 24
  %conv4 = and i32 %sext6, -16777216            ; 0xFF000000
1914  %cmp = icmp eq i32 %conv, %conv4
1915  %. = select i1 %cmp, i32 10000, i32 0
1916  ret i32 %.
1917}
1918
1919and then to:
1920
1921define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
1922entry:
  %conv = and i32 %0, 255                       ; 0xFF
  %conv4 = and i32 %1, 255                      ; 0xFF
1925  %cmp = icmp eq i32 %conv, %conv4
1926  %. = select i1 %cmp, i32 10000, i32 0
1927  ret i32 %.
1928}
1929//===---------------------------------------------------------------------===//
1930
1931clang -O3 currently compiles this code
1932
1933int g(unsigned int a) {
1934  unsigned int c[100];
1935  c[10] = a;
1936  c[11] = a;
1937  unsigned int b = c[10] + c[11];
1938  if(b > a*2) a = 4;
1939  else a = 8;
1940  return a + 7;
1941}
1942
1943into
1944
define i32 @g(i32 %a) nounwind readnone {
1946  %add = shl i32 %a, 1
1947  %mul = shl i32 %a, 1
1948  %cmp = icmp ugt i32 %add, %mul
1949  %a.addr.0 = select i1 %cmp, i32 11, i32 15
1950  ret i32 %a.addr.0
1951}
1952
1953The icmp should fold to false. This CSE opportunity is only available
1954after GVN and InstCombine have run.
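
Once the icmp folds to false, the whole function should collapse to a
constant:

define i32 @g(i32 %a) nounwind readnone {
  ret i32 15
}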
1955
1956//===---------------------------------------------------------------------===//
1957
1958memcpyopt should turn this:
1959
1960define i8* @test10(i32 %x) {
1961  %alloc = call noalias i8* @malloc(i32 %x) nounwind
1962  call void @llvm.memset.p0i8.i32(i8* %alloc, i8 0, i32 %x, i32 1, i1 false)
1963  ret i8* %alloc
1964}
1965
1966into a call to calloc.  We should make sure that we analyze calloc as
1967aggressively as malloc though.
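
A sketch of the expected result (keeping the i32 size from the example; the
real calloc prototype takes two size_t arguments):

define i8* @test10(i32 %x) {
  %alloc = call noalias i8* @calloc(i32 1, i32 %x) nounwind
  ret i8* %alloc
}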
1968
1969//===---------------------------------------------------------------------===//
1970
1971clang -O3 doesn't optimize this:
1972
1973void f1(int* begin, int* end) {
1974  std::fill(begin, end, 0);
1975}
1976
1977into a memset.  This is PR8942.
1978
1979//===---------------------------------------------------------------------===//
1980
1981clang -O3 -fno-exceptions currently compiles this code:
1982
1983void f(int N) {
1984  std::vector<int> v(N);
1985
1986  extern void sink(void*); sink(&v);
1987}
1988
1989into
1990
1991define void @_Z1fi(i32 %N) nounwind {
1992entry:
1993  %v2 = alloca [3 x i32*], align 8
1994  %v2.sub = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 0
1995  %tmpcast = bitcast [3 x i32*]* %v2 to %"class.std::vector"*
1996  %conv = sext i32 %N to i64
1997  store i32* null, i32** %v2.sub, align 8, !tbaa !0
1998  %tmp3.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 1
1999  store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
2000  %tmp4.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 2
2001  store i32* null, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
2002  %cmp.i.i.i.i = icmp eq i32 %N, 0
2003  br i1 %cmp.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i, label %cond.true.i.i.i.i
2004
2005_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i: ; preds = %entry
2006  store i32* null, i32** %v2.sub, align 8, !tbaa !0
2007  store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
2008  %add.ptr.i5.i.i = getelementptr inbounds i32* null, i64 %conv
2009  store i32* %add.ptr.i5.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
2010  br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit
2011
2012cond.true.i.i.i.i:                                ; preds = %entry
2013  %cmp.i.i.i.i.i = icmp slt i32 %N, 0
2014  br i1 %cmp.i.i.i.i.i, label %if.then.i.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i
2015
2016if.then.i.i.i.i.i:                                ; preds = %cond.true.i.i.i.i
2017  call void @_ZSt17__throw_bad_allocv() noreturn nounwind
2018  unreachable
2019
2020_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i:    ; preds = %cond.true.i.i.i.i
2021  %mul.i.i.i.i.i = shl i64 %conv, 2
2022  %call3.i.i.i.i.i = call noalias i8* @_Znwm(i64 %mul.i.i.i.i.i) nounwind
2023  %0 = bitcast i8* %call3.i.i.i.i.i to i32*
2024  store i32* %0, i32** %v2.sub, align 8, !tbaa !0
2025  store i32* %0, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
2026  %add.ptr.i.i.i = getelementptr inbounds i32* %0, i64 %conv
2027  store i32* %add.ptr.i.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
2028  call void @llvm.memset.p0i8.i64(i8* %call3.i.i.i.i.i, i8 0, i64 %mul.i.i.i.i.i, i32 4, i1 false)
2029  br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit
2030
This is just the handling of the construction of the vector.  Most surprising
here is the fact that all three null stores in %entry are dead (because we do
no cross-block DSE).

Also surprising is that %conv isn't simplified to 0 in %....exit.thread.i.i.
This is because the client of LazyValueInfo doesn't simplify all instruction
operands, just selected ones.
2038
2039//===---------------------------------------------------------------------===//
2040
2041clang -O3 -fno-exceptions currently compiles this code:
2042
2043void f(char* a, int n) {
2044  __builtin_memset(a, 0, n);
2045  for (int i = 0; i < n; ++i)
2046    a[i] = 0;
2047}
2048
2049into:
2050
2051define void @_Z1fPci(i8* nocapture %a, i32 %n) nounwind {
2052entry:
2053  %conv = sext i32 %n to i64
2054  tail call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %conv, i32 1, i1 false)
2055  %cmp8 = icmp sgt i32 %n, 0
2056  br i1 %cmp8, label %for.body.lr.ph, label %for.end
2057
2058for.body.lr.ph:                                   ; preds = %entry
2059  %tmp10 = add i32 %n, -1
2060  %tmp11 = zext i32 %tmp10 to i64
2061  %tmp12 = add i64 %tmp11, 1
2062  call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %tmp12, i32 1, i1 false)
2063  ret void
2064
2065for.end:                                          ; preds = %entry
2066  ret void
2067}
2068
This shouldn't need the ((zext (%n - 1)) + 1) game, and it should ideally fold
the two memsets together.
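
Assuming the loop memset is fully covered by the first one (for positive %n),
the ideal output would be just:

define void @_Z1fPci(i8* nocapture %a, i32 %n) nounwind {
entry:
  %conv = sext i32 %n to i64
  tail call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %conv, i32 1, i1 false)
  ret void
}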
2071
2072The issue with the addition only occurs in 64-bit mode, and appears to be at
2073least partially caused by Scalar Evolution not keeping its cache updated: it
2074returns the "wrong" result immediately after indvars runs, but figures out the
2075expected result if it is run from scratch on IR resulting from running indvars.
2076
2077//===---------------------------------------------------------------------===//
2078
2079clang -O3 -fno-exceptions currently compiles this code:
2080
2081struct S {
2082  unsigned short m1, m2;
2083  unsigned char m3, m4;
2084};
2085
2086void f(int N) {
2087  std::vector<S> v(N);
2088  extern void sink(void*); sink(&v);
2089}
2090
into poor code for zero-initializing 'v' when N > 0.  The problem is that
S is only 6 bytes, but each element is 8-byte aligned.  We generate a loop with
4 stores per iteration.  If the struct were 8 bytes, this would get turned into
a memset.
2095
2096In order to handle this we have to:
2097  A) Teach clang to generate metadata for memsets of structs that have holes in
2098     them.
2099  B) Teach clang to use such a memset for zero init of this struct (since it has
2100     a hole), instead of doing elementwise zeroing.
2101
2102//===---------------------------------------------------------------------===//
2103
2104clang -O3 currently compiles this code:
2105
2106extern const int magic;
2107double f() { return 0.0 * magic; }
2108
2109into
2110
2111@magic = external constant i32
2112
2113define double @_Z1fv() nounwind readnone {
2114entry:
2115  %tmp = load i32* @magic, align 4, !tbaa !0
2116  %conv = sitofp i32 %tmp to double
2117  %mul = fmul double %conv, 0.000000e+00
2118  ret double %mul
2119}
2120
We should be able to fold away this fmul to 0.0.  More generally, fmul(x,0.0)
can be folded to 0.0 if we can prove that x is not a NaN, not an INF, and not
negative (including -0.0, since those cases produce -0.0 instead).  The
CannotBeNegativeZero predicate in value tracking should be extended to support
general "fpclassify" operations that can return yes/no/unknown for each of
these predicates.
2126
In this predicate, we know that uitofp is trivially never NaN or -0.0, and we
know that it isn't +/-Inf if the floating point type has enough exponent bits
to represent the largest integer value as finite.
2130
2131//===---------------------------------------------------------------------===//
2132
2133When optimizing a transformation that can change the sign of 0.0 (such as the
21340.0*val -> 0.0 transformation above), it might be provable that the sign of the
2135expression doesn't matter.  For example, by the above rules, we can't transform
2136fmul(sitofp(x), 0.0) into 0.0, because x might be -1 and the result of the
2137expression is defined to be -0.0.
2138
2139If we look at the uses of the fmul for example, we might be able to prove that
2140all uses don't care about the sign of zero.  For example, if we have:
2141
2142  fadd(fmul(sitofp(x), 0.0), 2.0)
2143
Since we know that x+2.0 doesn't care about the sign of any zeros in x, we can
transform the fmul to 0.0, and then the fadd to 2.0.
2146
2147//===---------------------------------------------------------------------===//
2148
We should enhance memcpy/memmove/memset to allow a metadata node on them
2150indicating that some bytes of the transfer are undefined.  This is useful for
2151frontends like clang when lowering struct copies, when some elements of the
2152struct are undefined.  Consider something like this:
2153
2154struct x {
2155  char a;
2156  int b[4];
2157};
2158void foo(struct x*P);
2159struct x testfunc() {
2160  struct x V1, V2;
2161  foo(&V1);
2162  V2 = V1;
2163
2164  return V2;
2165}
2166
2167We currently compile this to:
2168$ clang t.c -S -o - -O0 -emit-llvm | opt -scalarrepl -S
2169
2170
2171%struct.x = type { i8, [4 x i32] }
2172
2173define void @testfunc(%struct.x* sret %agg.result) nounwind ssp {
2174entry:
2175  %V1 = alloca %struct.x, align 4
2176  call void @foo(%struct.x* %V1)
2177  %tmp1 = bitcast %struct.x* %V1 to i8*
2178  %0 = bitcast %struct.x* %V1 to i160*
2179  %srcval1 = load i160* %0, align 4
2180  %tmp2 = bitcast %struct.x* %agg.result to i8*
2181  %1 = bitcast %struct.x* %agg.result to i160*
2182  store i160 %srcval1, i160* %1, align 4
2183  ret void
2184}
2185
This happens because SRoA sees that the temp alloca is being memcpy'd into and
out of, notices that it has holes, and has to be conservative.  If we knew
about the holes, then this could be much better.
2189
2190Having information about these holes would also improve memcpy (etc) lowering at
2191llc time when it gets inlined, because we can use smaller transfers.  This also
2192avoids partial register stalls in some important cases.
2193
2194//===---------------------------------------------------------------------===//
2195
2196We don't fold (icmp (add) (add)) unless the two adds only have a single use.
2197There are a lot of cases that we're refusing to fold in (e.g.) 256.bzip2, for
2198example:
2199
2200 %indvar.next90 = add i64 %indvar89, 1     ;; Has 2 uses
2201 %tmp96 = add i64 %tmp95, 1                ;; Has 1 use
2202 %exitcond97 = icmp eq i64 %indvar.next90, %tmp96
2203
We don't fold this because we don't want to introduce an overlapped live range
of the ivar.  However, we could make this more aggressive without causing
performance issues, in two ways:
2207
22081. If *either* the LHS or RHS has a single use, we can definitely do the
2209   transformation.  In the overlapping liverange case we're trading one register
2210   use for one fewer operation, which is a reasonable trade.  Before doing this
2211   we should verify that the llc output actually shrinks for some benchmarks.
22122. If both ops have multiple uses, we can still fold it if the operations are
2213   both sinkable to *after* the icmp (e.g. in a subsequent block) which doesn't
2214   increase register pressure.
2215
2216There are a ton of icmp's we aren't simplifying because of the reg pressure
2217concern.  Care is warranted here though because many of these are induction
2218variables and other cases that matter a lot to performance, like the above.
2219Here's a blob of code that you can drop into the bottom of visitICmp to see some
2220missed cases:
2221
2222  { Value *A, *B, *C, *D;
2223    if (match(Op0, m_Add(m_Value(A), m_Value(B))) &&
2224        match(Op1, m_Add(m_Value(C), m_Value(D))) &&
2225        (A == C || A == D || B == C || B == D)) {
2226      errs() << "OP0 = " << *Op0 << "  U=" << Op0->getNumUses() << "\n";
2227      errs() << "OP1 = " << *Op1 << "  U=" << Op1->getNumUses() << "\n";
2228      errs() << "CMP = " << I << "\n\n";
2229    }
2230  }
2231
2232//===---------------------------------------------------------------------===//
2233
2234define i1 @test1(i32 %x) nounwind {
2235  %and = and i32 %x, 3
2236  %cmp = icmp ult i32 %and, 2
2237  ret i1 %cmp
2238}
2239
2240Can be folded to (x & 2) == 0.
2241
2242define i1 @test2(i32 %x) nounwind {
2243  %and = and i32 %x, 3
2244  %cmp = icmp ugt i32 %and, 1
2245  ret i1 %cmp
2246}
2247
2248Can be folded to (x & 2) != 0.
2249
2250SimplifyDemandedBits shrinks the "and" constant to 2 but instcombine misses the
2251icmp transform.
2252
2253//===---------------------------------------------------------------------===//
2254
2255This code:
2256
typedef struct {
  int f1:1;
  int f2:1;
  int f3:1;
  int f4:29;
} t1;

typedef struct {
  int f1:1;
  int f2:1;
  int f3:30;
} t2;

t1 s1;
t2 s2;

void func1(void)
{
  s1.f1 = s2.f1;
  s1.f2 = s2.f2;
}
2278
2279Compiles into this IR (on x86-64 at least):
2280
2281%struct.t1 = type { i8, [3 x i8] }
2282@s2 = global %struct.t1 zeroinitializer, align 4
2283@s1 = global %struct.t1 zeroinitializer, align 4
2284define void @func1() nounwind ssp noredzone {
2285entry:
2286  %0 = load i32* bitcast (%struct.t1* @s2 to i32*), align 4
2287  %bf.val.sext5 = and i32 %0, 1
2288  %1 = load i32* bitcast (%struct.t1* @s1 to i32*), align 4
2289  %2 = and i32 %1, -4
2290  %3 = or i32 %2, %bf.val.sext5
2291  %bf.val.sext26 = and i32 %0, 2
2292  %4 = or i32 %3, %bf.val.sext26
2293  store i32 %4, i32* bitcast (%struct.t1* @s1 to i32*), align 4
2294  ret void
2295}
2296
The two and's of %0 (with 1 and 2) and the two or's should each be merged into
one.
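
A sketch of the merged form:

  %bf.val = and i32 %0, 3
  %2 = and i32 %1, -4
  %3 = or i32 %2, %bf.val
  store i32 %3, i32* bitcast (%struct.t1* @s1 to i32*), align 4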
2298
2299//===---------------------------------------------------------------------===//
2300
2301Machine level code hoisting can be useful in some cases.  For example, PR9408
2302is about:
2303
2304typedef union {
2305 void (*f1)(int);
2306 void (*f2)(long);
2307} funcs;
2308
2309void foo(funcs f, int which) {
2310 int a = 5;
2311 if (which) {
2312   f.f1(a);
2313 } else {
2314   f.f2(a);
2315 }
2316}
2317
2318which we compile to:
2319
2320foo:                                    # @foo
2321# BB#0:                                 # %entry
2322       pushq   %rbp
2323       movq    %rsp, %rbp
2324       testl   %esi, %esi
2325       movq    %rdi, %rax
2326       je      .LBB0_2
2327# BB#1:                                 # %if.then
2328       movl    $5, %edi
2329       callq   *%rax
2330       popq    %rbp
2331       ret
2332.LBB0_2:                                # %if.else
2333       movl    $5, %edi
2334       callq   *%rax
2335       popq    %rbp
2336       ret
2337
Note that the two call blocks are identical.  This doesn't get unified at the
IR level because one call is passing an i32 and the other is passing an i64.
2340
2341//===---------------------------------------------------------------------===//
2342
2343I see this sort of pattern in 176.gcc in a few places (e.g. the start of
2344store_bit_field).  The rem should be replaced with a multiply and subtract:
2345
2346  %3 = sdiv i32 %A, %B
2347  %4 = srem i32 %A, %B
2348
2349Similarly for udiv/urem.  Note that this shouldn't be done on X86 or ARM,
2350which can do this in a single operation (instruction or libcall).  It is
2351probably best to do this in the code generator.
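
Sketched at the IR level (though the rewrite would live in the code
generator), the srem becomes:

  %3 = sdiv i32 %A, %B
  %t = mul i32 %3, %B
  %4 = sub i32 %A, %t                     ; %A srem %B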
2352
2353//===---------------------------------------------------------------------===//
2354
2355unsigned foo(unsigned x, unsigned y) { return (x & y) == 0 || x == 0; }
2356should fold to (x & y) == 0.
2357
2358//===---------------------------------------------------------------------===//
2359
2360unsigned foo(unsigned x, unsigned y) { return x > y && x != 0; }
2361should fold to x > y.
2362
2363//===---------------------------------------------------------------------===//
2364
2365int f(double x) { return __builtin_fabs(x) < 0.0; }
2366should fold to false.
2367
2368//===---------------------------------------------------------------------===//
2369