
README-Thumb.txt

//===---------------------------------------------------------------------===//
// Random ideas for the ARM backend (Thumb specific).
//===---------------------------------------------------------------------===//

* Add support for compiling functions in both ARM and Thumb mode, then taking
  the smallest.

* Add support for compiling individual basic blocks in thumb mode, when in a
  larger ARM function.  This can be used for presumed cold code, like paths
  to abort (failure path of asserts), EH handling code, etc.

* Thumb doesn't have normal pre/post increment addressing modes, but you can
  load/store 32-bit integers with pre/postinc by using load/store multiple
  instrs with a single register.

* Make better use of high registers r8, r10, r11, r12 (ip). Some variants of
  add and cmp instructions can use high registers. Also, we can use them as
  temporaries to spill values into.

* In thumb mode, short, byte, and bool preferred alignments are currently set
  to 4 to accommodate an ISA restriction (e.g. for add sp, #imm, the immediate
  must be a multiple of 4).

Potential jumptable improvements:

* If we know the function size is less than (1 << 16) * 2 bytes, we can use
  16-bit jumptable entries (e.g. (L1 - L2) >> 1), or even smaller entries if
  the function is smaller still. This also applies to ARM.

* Thumb jumptable codegen can be improved with some help from the assembler.
  This is what we generate right now:

	.set PCRELV0, (LJTI1_0_0-(LPCRELL0+4))
LPCRELL0:
	mov r1, #PCRELV0
	add r1, pc
	ldr r0, [r0, r1]
	mov pc, r0
	.align	2
LJTI1_0_0:
	.long	 LBB1_3
        ...

Note there is another pc-relative add that we can take advantage of:
     add r1, pc, #imm_8 * 4

We should be able to generate:

LPCRELL0:
	add r1, LJTI1_0_0
	ldr r0, [r0, r1]
	mov pc, r0
	.align	2
LJTI1_0_0:
	.long	 LBB1_3

if the assembler can translate the add to:
       add r1, pc, #((LJTI1_0_0-(LPCRELL0+4))&0xfffffffc)

Note the assembler already does something similar for constant pool loads:
LPCRELL0:
     ldr r0, LCPI1_0
=>
     ldr r0, pc, #((LCPI1_0-(LPCRELL0+4))&0xfffffffc)


//===---------------------------------------------------------------------===//

We compile the following:

define i16 @func_entry_2E_ce(i32 %i) {
        switch i32 %i, label %bb12.exitStub [
                 i32 0, label %bb4.exitStub
                 i32 1, label %bb9.exitStub
                 i32 2, label %bb4.exitStub
                 i32 3, label %bb4.exitStub
                 i32 7, label %bb9.exitStub
                 i32 8, label %bb.exitStub
                 i32 9, label %bb9.exitStub
        ]

bb12.exitStub:
        ret i16 0

bb4.exitStub:
        ret i16 1

bb9.exitStub:
        ret i16 2

bb.exitStub:
        ret i16 3
}

into:

_func_entry_2E_ce:
        mov r2, #1
        lsl r2, r0
        cmp r0, #9
        bhi LBB1_4      @bb12.exitStub
LBB1_1: @newFuncRoot
        mov r1, #13
        tst r2, r1
        bne LBB1_5      @bb4.exitStub
LBB1_2: @newFuncRoot
        ldr r1, LCPI1_0
        tst r2, r1
        bne LBB1_6      @bb9.exitStub
LBB1_3: @newFuncRoot
        mov r1, #1
        lsl r1, r1, #8
        tst r2, r1
        bne LBB1_7      @bb.exitStub
LBB1_4: @bb12.exitStub
        mov r0, #0
        bx lr
LBB1_5: @bb4.exitStub
        mov r0, #1
        bx lr
LBB1_6: @bb9.exitStub
        mov r0, #2
        bx lr
LBB1_7: @bb.exitStub
        mov r0, #3
        bx lr
LBB1_8:
        .align  2
LCPI1_0:
        .long   642


gcc compiles to:

	cmp	r0, #9
	@ lr needed for prologue
	bhi	L2
	ldr	r3, L11
	mov	r2, #1
	mov	r1, r2, asl r0
	ands	r0, r3, r2, asl r0
	movne	r0, #2
	bxne	lr
	tst	r1, #13
	beq	L9
L3:
	mov	r0, r2
	bx	lr
L9:
	tst	r1, #256
	movne	r0, #3
	bxne	lr
L2:
	mov	r0, #0
	bx	lr
L12:
	.align 2
L11:
	.long	642


GCC is doing a few clever things here:
  1. It is predicating one of the returns.  This isn't a clear win though: in
     cases where that return isn't taken, it is replacing one condbranch with
     two 'ne' predicated instructions.
  2. It is sinking the shift of "1 << i" into the tst, and using ands instead
     of tst.  This will probably require whole-function isel.
  3. GCC emits:
  	tst	r1, #256
     we emit:
        mov r1, #1
        lsl r1, r1, #8
        tst r2, r1

//===---------------------------------------------------------------------===//

When spilling in thumb mode and the sp offset is too large to fit in the ldr /
str offset field, we load the offset from a constpool entry and add it to sp:

ldr r2, LCPI
add r2, sp
ldr r2, [r2]

These instructions preserve the condition codes, which is important if the
spill is between a cmp and a bcc instruction. However, we can use the
(potentially) cheaper sequence below if we know it's ok to clobber the
condition register:

add r2, sp, #255 * 4
add r2, #132
ldr r2, [r2, #7 * 4]

This is especially bad when dynamic alloca is used, since all fixed-size stack
objects are referenced off the frame pointer with negative offsets. See
oggenc for an example.

//===---------------------------------------------------------------------===//

Poor codegen for f7 in test/CodeGen/ARM/select.ll:

	ldr r5, LCPI1_0
LPC0:
	add r5, pc
	ldr r6, LCPI1_1
	ldr r2, LCPI1_2
	mov r3, r6
	mov lr, pc
	bx r5

//===---------------------------------------------------------------------===//

Make the register allocator / spiller smarter so we can re-materialize
"mov r, imm", etc. Almost all Thumb instructions clobber the condition codes.

//===---------------------------------------------------------------------===//

Thumb load / store address mode offsets are scaled. The values kept in the
instruction operands are pre-scale values. This probably ought to be changed
to avoid extra work when we convert Thumb2 instructions to Thumb1 instructions.

//===---------------------------------------------------------------------===//

We need to make (some of the) Thumb1 instructions predicable. That will allow
shrinking of predicated Thumb2 instructions. To allow this, we need to be able
to toggle the 's' bit, since these instructions do not set CPSR when they are
inside IT blocks.

//===---------------------------------------------------------------------===//

Make use of hi register variants of cmp: tCMPhir / tCMPZhir.

//===---------------------------------------------------------------------===//

Thumb1 immediate fields sometimes keep pre-scaled values. See
ThumbRegisterInfo::eliminateFrameIndex. This is inconsistent with ARM and
Thumb2.

//===---------------------------------------------------------------------===//

Rather than having tBR_JTr print a ".align 2" and having the constant island
pass pad it, add a target-specific ALIGN instruction instead. That way,
GetInstSizeInBytes won't have to over-estimate. It can also be used by the
loop alignment pass.

//===---------------------------------------------------------------------===//

We generate conditional code for icmp when we don't need to. This code:

  int foo(int s) {
    return s == 1;
  }

produces:

foo:
        cmp     r0, #1
        mov.w   r0, #0
        it      eq
        moveq   r0, #1
        bx      lr

when it could use subs + adcs. This is GCC PR46975.


README-Thumb2.txt

//===---------------------------------------------------------------------===//
// Random ideas for the ARM backend (Thumb2 specific).
//===---------------------------------------------------------------------===//

Make sure jumptable destinations are below the jumptable in order to make use
of tbb / tbh.


README.txt

//===---------------------------------------------------------------------===//
// Random ideas for the ARM backend.
//===---------------------------------------------------------------------===//

Reimplement 'select' in terms of 'SEL'.

* We would really like to support UXTAB16, but we need to prove that the
  add doesn't need to overflow between the two 16-bit chunks.

* Implement pre/post increment support.  (e.g. PR935)
* Implement smarter constant generation for binops with large immediates.

A few ARMv6T2 ops should be pattern matched: BFI, SBFX, and UBFX.

An interesting optimization for PIC codegen on arm-linux:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43129

//===---------------------------------------------------------------------===//

Crazy idea:  Consider code that uses lots of 8-bit or 16-bit values.  By the
time regalloc happens, these values are now in a 32-bit register, usually with
the top bits known to be sign- or zero-extended.  If spilled, we should be
able to spill these to an 8-bit or 16-bit stack slot, zero- or sign-extending
as part of the reload.

Doing this reduces the size of the stack frame (important for thumb etc), and
also increases the likelihood that we will be able to reload multiple values
from the stack with a single load.

//===---------------------------------------------------------------------===//

The constant island pass is in good shape.  Some cleanups might be desirable,
but there is unlikely to be much improvement in the generated code.

1.  There may be some advantage to trying to be smarter about the initial
    placement, rather than putting everything at the end.

2.  There might be some compile-time efficiency to be had by representing
    consecutive islands as a single block rather than multiple blocks.

3.  Use a priority queue to sort constant pool users in inverse order of
    position, so we always process the one closest to the end of the function
    first. This may simplify CreateNewWater.

//===---------------------------------------------------------------------===//

Eliminate the copysign custom expansion. We are still generating poor code
with the default expansion + if-conversion.

//===---------------------------------------------------------------------===//

Eliminate one instruction from:

define i32 @_Z6slow4bii(i32 %x, i32 %y) {
        %tmp = icmp sgt i32 %x, %y
        %retval = select i1 %tmp, i32 %x, i32 %y
        ret i32 %retval
}

__Z6slow4bii:
        cmp r0, r1
        movgt r1, r0
        mov r0, r1
        bx lr
=>

__Z6slow4bii:
        cmp r0, r1
        movle r0, r1
        bx lr

//===---------------------------------------------------------------------===//

Implement long long "X-3" with instructions that fold the immediate in.  These
were disabled due to badness with the ARM carry flag on subtracts.

//===---------------------------------------------------------------------===//

More load / store optimizations:

1) Better representation for block transfer? This is from Olden/power:

	fldd d0, [r4]
	fstd d0, [r4, #+32]
	fldd d0, [r4, #+8]
	fstd d0, [r4, #+40]
	fldd d0, [r4, #+16]
	fstd d0, [r4, #+48]
	fldd d0, [r4, #+24]
	fstd d0, [r4, #+56]

If we can spare the registers, it would be better to use fldm and fstm here.
This needs a major register allocator enhancement though.

2) Can we recognize the relative position of constantpool entries? i.e. Treat

	ldr r0, LCPI17_3
	ldr r1, LCPI17_4
	ldr r2, LCPI17_5

   as

	ldr r0, LCPI17
	ldr r1, LCPI17+4
	ldr r2, LCPI17+8

   Then the ldr's can be combined into a single ldm. See Olden/power.

Note that for ARM v4, gcc uses ldmia to load a pair of 32-bit values to
represent a double 64-bit FP constant:

	adr	r0, L6
	ldmia	r0, {r0-r1}

	.align 2
L6:
	.long	-858993459
	.long	1074318540

3) struct copies appear to be done field by field
instead of by words, at least sometimes:

struct foo { int x; short s; char c1; char c2; };
void cpy(struct foo*a, struct foo*b) { *a = *b; }

llvm code (-O2)
        ldrb r3, [r1, #+6]
        ldr r2, [r1]
        ldrb r12, [r1, #+7]
        ldrh r1, [r1, #+4]
        str r2, [r0]
        strh r1, [r0, #+4]
        strb r3, [r0, #+6]
        strb r12, [r0, #+7]
gcc code (-O2)
        ldmia   r1, {r1-r2}
        stmia   r0, {r1-r2}

In this benchmark, poor handling of aggregate copies has shown up as
having a large effect on size, and possibly speed as well (we don't have
a good way to measure on ARM).

//===---------------------------------------------------------------------===//

* Consider this silly example:

double bar(double x) {
  double r = foo(3.1);
  return x+r;
}

_bar:
        stmfd sp!, {r4, r5, r7, lr}
        add r7, sp, #8
        mov r4, r0
        mov r5, r1
        fldd d0, LCPI1_0
        fmrrd r0, r1, d0
        bl _foo
        fmdrr d0, r4, r5
        fmsr s2, r0
        fsitod d1, s2
        faddd d0, d1, d0
        fmrrd r0, r1, d0
        ldmfd sp!, {r4, r5, r7, pc}

Ignoring the prologue and epilogue for a second, note
	mov r4, r0
	mov r5, r1
the copies to callee-save registers, and the fact that they are only used by
the fmdrr instruction. It would have been better had the fmdrr been scheduled
before the call and its result placed in a callee-save DPR register. The two
mov ops would then not have been necessary.

//===---------------------------------------------------------------------===//

Calling convention related stuff:

* gcc's parameter passing implementation is terrible and we suffer as a result:

e.g.
struct s {
  double d1;
  int s1;
};

void foo(struct s S) {
  printf("%g, %d\n", S.d1, S.s1);
}

'S' is passed via registers r0, r1, r2. But gcc stores them to the stack, and
then reloads them to r1, r2, and r3 before issuing the call (r0 contains the
address of the format string):

	stmfd	sp!, {r7, lr}
	add	r7, sp, #0
	sub	sp, sp, #12
	stmia	sp, {r0, r1, r2}
	ldmia	sp, {r1-r2}
	ldr	r0, L5
	ldr	r3, [sp, #8]
L2:
	add	r0, pc, r0
	bl	L_printf$stub

Instead of a stmia, ldmia, and a ldr, wouldn't it be better to do three moves?

* Returning an aggregate type is even worse:

e.g.
struct s foo(void) {
  struct s S = {1.1, 2};
  return S;
}

	mov	ip, r0
	ldr	r0, L5
	sub	sp, sp, #12
L2:
	add	r0, pc, r0
	@ lr needed for prologue
	ldmia	r0, {r0, r1, r2}
	stmia	sp, {r0, r1, r2}
	stmia	ip, {r0, r1, r2}
	mov	r0, ip
	add	sp, sp, #12
	bx	lr

r0 (and later ip) is the hidden parameter from the caller to store the value
in. The first ldmia loads the constants into r0, r1, r2. The last stmia stores
r0, r1, r2 into the address passed in. However, there is one additional stmia
that stores r0, r1, and r2 to some stack location. That store is dead.

The llvm-gcc generated code looks like this:

csretcc void %foo(%struct.s* %agg.result) {
entry:
	%S = alloca %struct.s, align 4		; <%struct.s*> [#uses=1]
	%memtmp = alloca %struct.s		; <%struct.s*> [#uses=1]
	cast %struct.s* %S to sbyte*		; <sbyte*>:0 [#uses=2]
	call void %llvm.memcpy.i32( sbyte* %0, sbyte* cast ({ double, int }* %C.0.904 to sbyte*), uint 12, uint 4 )
	cast %struct.s* %agg.result to sbyte*		; <sbyte*>:1 [#uses=2]
	call void %llvm.memcpy.i32( sbyte* %1, sbyte* %0, uint 12, uint 0 )
	cast %struct.s* %memtmp to sbyte*		; <sbyte*>:2 [#uses=1]
	call void %llvm.memcpy.i32( sbyte* %2, sbyte* %1, uint 12, uint 0 )
	ret void
}

llc ends up issuing two memcpy's (the first memcpy becomes 3 loads from the
constantpool). Perhaps we should 1) fix llvm-gcc so the memcpy is translated
into a number of loads and stores, or 2) custom lower memcpy (of small size)
to be ldmia / stmia. I think option 2 is better, but the current register
allocator cannot allocate a chunk of registers at a time.

A feasible temporary solution is to use specific physical registers at
lowering time for small (<= 4 words?) transfer sizes.

* The ARM CSRet calling convention requires the hidden argument to be
returned by the callee.

//===---------------------------------------------------------------------===//

We can definitely do a better job on BB placement to eliminate some branches.
It's very common to see llvm generated assembly code that looks like this:

LBB3:
 ...
LBB4:
...
  beq LBB3
  b LBB2

If BB4 is the only predecessor of BB3, then we can emit BB3 after BB4. We can
then eliminate the beq and turn the unconditional branch to LBB2 into a bne.

See McCat/18-imp/ComputeBoundingBoxes for an example.

//===---------------------------------------------------------------------===//

Pre-/post- indexed load / stores:

1) We should not make the pre/post-indexed load/store transform if the base
ptr is guaranteed to be live beyond the load/store. This can happen if the
base ptr is live out of the block we are performing the optimization on. e.g.

mov r1, r2
ldr r3, [r1], #4
...

vs.

ldr r3, [r2]
add r1, r2, #4
...

In most cases, this is just a wasted optimization. However, sometimes it can
negatively impact performance because two-address code is more restrictive
when it comes to scheduling.

Unfortunately, liveout information is currently unavailable during DAG combine
time.

2) Consider splitting an indexed load / store into a pair of add/sub +
   load/store to solve #1 (in TwoAddressInstructionPass.cpp).

3) Enhance LSR to generate more opportunities for indexed ops.

4) Once we add support for multiple result patterns, write indexed load
   patterns instead of C++ instruction selection code.

5) Use VLDM / VSTM to emulate indexed FP load / store.

//===---------------------------------------------------------------------===//

Implement support for some more tricky ways to materialize immediates.  For
example, to get 0xffff8000, we can use:

mov r9, #&3f8000
sub r9, r9, #&400000

//===---------------------------------------------------------------------===//

We sometimes generate multiple add / sub instructions to update sp in the
prologue and epilogue if the inc / dec value is too large to fit in a single
immediate operand. In some cases, perhaps it might be better to load the value
from a constantpool instead.

//===---------------------------------------------------------------------===//

GCC generates significantly better code for this function:

int foo(int StackPtr, unsigned char *Line, unsigned char *Stack, int LineLen) {
    int i = 0;

    if (StackPtr != 0) {
       while (StackPtr != 0 && i < (((LineLen) < (32768))? (LineLen) : (32768)))
          Line[i++] = Stack[--StackPtr];
        if (LineLen > 32768)
        {
            while (StackPtr != 0 && i < LineLen)
            {
                i++;
                --StackPtr;
            }
        }
    }
    return StackPtr;
}

//===---------------------------------------------------------------------===//

This should compile to the mlas instruction:
int mlas(int x, int y, int z) { return ((x * y + z) < 0) ? 7 : 13; }

//===---------------------------------------------------------------------===//

At some point, we should triage these to see if they still apply to us:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19598
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18560
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27016

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11826
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11825
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11824
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11823
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11820
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10982

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10242
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9760
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9759
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9703
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9702
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9663

http://www.inf.u-szeged.hu/gcc-arm/
http://citeseer.ist.psu.edu/debus04linktime.html

//===---------------------------------------------------------------------===//

gcc generates smaller code for this function at -O2 or -Os:

void foo(signed char* p) {
  if (*p == 3)
     bar();
   else if (*p == 4)
    baz();
  else if (*p == 5)
    quux();
}

llvm decides it's a good idea to turn the repeated if...else into a
binary tree, as if it were a switch; the resulting code requires one fewer
compare-and-branch when *p<=2 or *p==5, the same number if *p==4
or *p>6, and one more if *p==3.  So it should be a speed win
(on balance).  However, the revised code is larger, with 4 conditional
branches instead of 3.

More seriously, there is a byte->word extend before
each comparison, where there should be only one, and the condition codes
are not remembered when the same two values are compared twice.

//===---------------------------------------------------------------------===//

More LSR enhancements possible:

1. Teach LSR about pre- and post-indexed ops so the iv increment can be
   merged into a load / store.
2. Allow iv reuse even when a type conversion is required. For example, i8
   and i32 load / store addressing modes are identical.

//===---------------------------------------------------------------------===//

This:

int foo(int a, int b, int c, int d) {
  long long acc = (long long)a * (long long)b;
  acc += (long long)c * (long long)d;
  return (int)(acc >> 32);
}

should compile to use SMLAL (Signed Multiply Accumulate Long), which
multiplies two signed 32-bit values to produce a 64-bit value and accumulates
it with a 64-bit value.

We currently get this with both v4 and v6:

_foo:
        smull r1, r0, r1, r0
        smull r3, r2, r3, r2
        adds r3, r3, r1
        adc r0, r2, r0
        bx lr

//===---------------------------------------------------------------------===//

This:
        #include <algorithm>
        std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
        { return std::make_pair(a + b, a + b < a); }
        bool no_overflow(unsigned a, unsigned b)
        { return !full_add(a, b).second; }

should compile to:

_Z8full_addjj:
	adds	r2, r1, r2
	movcc	r1, #0
	movcs	r1, #1
	str	r2, [r0, #0]
	strb	r1, [r0, #4]
	mov	pc, lr

_Z11no_overflowjj:
	cmn	r0, r1
	movcs	r0, #0
	movcc	r0, #1
	mov	pc, lr

not:

__Z8full_addjj:
        add r3, r2, r1
        str r3, [r0]
        mov r2, #1
        mov r12, #0
        cmp r3, r1
        movlo r12, r2
        str r12, [r0, #+4]
        bx lr
__Z11no_overflowjj:
        add r3, r1, r0
        mov r2, #1
        mov r1, #0
        cmp r3, r0
        movhs r1, r2
        mov r0, r1
        bx lr

//===---------------------------------------------------------------------===//

Some of the NEON intrinsics may be appropriate for more general use, either
as target-independent intrinsics or perhaps elsewhere in the ARM backend.
Some of them may also be lowered to target-independent SDNodes, and perhaps
some new SDNodes could be added.

For example, maximum, minimum, and absolute value operations are well-defined
and standard operations, both for vector and scalar types.

The current NEON-specific intrinsics for count leading zeros and count one
bits could perhaps be replaced by the target-independent ctlz and ctpop
intrinsics.  It may also make sense to add a target-independent "ctls"
intrinsic for "count leading sign bits".  Likewise, the backend could use
the target-independent SDNodes for these operations.

ARMv6 has scalar saturating and halving adds and subtracts.  The same
intrinsics could possibly be used for both NEON's vector implementations of
those operations and the ARMv6 scalar versions.

//===---------------------------------------------------------------------===//

Split out LDR (literal) from the normal ARM LDR instruction. Also consider
splitting LDR into imm12 and so_reg forms. This allows us to clean up some
code; e.g. ARMLoadStoreOptimizer does not need to look at LDR (literal) and
LDR (so_reg), while ARMConstantIslandPass only needs to worry about LDR
(literal).

//===---------------------------------------------------------------------===//

The constant island pass should make use of full-range SoImm values for
LEApcrel. Be careful though, as the last attempt caused infinite looping on
lencod.

//===---------------------------------------------------------------------===//

Predication issue. This function:

extern unsigned array[ 128 ];
int     foo( int x ) {
  int     y;
  y = array[ x & 127 ];
  if ( x & 128 )
     y = 123456789 & ( y >> 2 );
  else
     y = 123456789 & y;
  return y;
}

compiles to:

_foo:
	and r1, r0, #127
	ldr r2, LCPI1_0
	ldr r2, [r2]
	ldr r1, [r2, +r1, lsl #2]
	mov r2, r1, lsr #2
	tst r0, #128
	moveq r2, r1
	ldr r0, LCPI1_1
	and r0, r2, r0
	bx lr

It would be better to do something like this, to fold the shift into the
conditional move:

	and r1, r0, #127
	ldr r2, LCPI1_0
	ldr r2, [r2]
	ldr r1, [r2, +r1, lsl #2]
	tst r0, #128
	movne r1, r1, lsr #2
	ldr r0, LCPI1_1
	and r0, r1, r0
	bx lr

It saves an instruction and a register.

//===---------------------------------------------------------------------===//

It might be profitable to cse MOVi16 if there are lots of 32-bit immediates
with the same bottom half.

//===---------------------------------------------------------------------===//

Robert Muth started working on an alternate jump table implementation that
does not put the tables in-line in the text.  This is more like the llvm
default jump table implementation.  This might be useful sometime.  Several
revisions of patches are on the mailing list, beginning at:
http://lists.llvm.org/pipermail/llvm-dev/2009-June/022763.html

//===---------------------------------------------------------------------===//

Make use of the "rbit" instruction.

//===---------------------------------------------------------------------===//

Take a look at test/CodeGen/Thumb2/machine-licm.ll. ARM should be taught how
to licm and cse the unnecessary load from cp#1.

//===---------------------------------------------------------------------===//

The CMN instruction sets the flags like an ADD instruction, while CMP sets
them like a subtract. Therefore, to be able to use CMN for comparisons other
than the Z bit, we'll need additional logic to reverse the conditionals
associated with the comparison. Perhaps a pseudo-instruction for the
comparison, with a post-codegen pass to clean up and handle the condition
codes? See PR5694 for a testcase.

//===---------------------------------------------------------------------===//

Given the following on armv5:

int test1(int A, int B) {
  return (A&-8388481)|(B&8388480);
}

we currently generate:

	ldr	r2, .LCPI0_0
	and	r0, r0, r2
	ldr	r2, .LCPI0_1
	and	r1, r1, r2
	orr	r0, r1, r0
	bx	lr

We should be able to replace the second ldr+and with a bic (i.e. reuse the
constant which was already loaded).  Not sure what's necessary to do that.
606
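The identity that makes the bic rewrite legal can be checked directly: the two
masks are bitwise complements, so one of the and-with-constant operations is a
bic with the register already holding the other constant. A sketch:

```c
/* -8388481 == ~8388480: the masks are complements, so once 8388480 is in
   a register, (A & -8388481) is just "bic" with that same register. */
int test1(int A, int B) { return (A & -8388481) | (B & 8388480); }
```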
//===---------------------------------------------------------------------===//

The code generated for bswap on armv4/5 (CPUs without rev) is less than ideal:

int a(int x) { return __builtin_bswap32(x); }

a:
	mov	r1, #255, 24
	mov	r2, #255, 16
	and	r1, r1, r0, lsr #8
	and	r2, r2, r0, lsl #8
	orr	r1, r1, r0, lsr #24
	orr	r0, r2, r0, lsl #24
	orr	r0, r0, r1
	bx	lr

Something like the following would be better (fewer instructions/registers):
	eor     r1, r0, r0, ror #16
	bic     r1, r1, #0xff0000
	mov     r1, r1, lsr #8
	eor     r0, r1, r0, ror #8
	bx	lr

A custom Thumb version would also be a slight improvement over the generic
version.

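The shorter sequence can be sanity-checked in C; the rotate amounts and mask
below mirror the assembly above line for line.

```c
#include <stdint.h>

static uint32_t ror32(uint32_t x, unsigned n) {
    return (x >> n) | (x << (32 - n));   /* n in 1..31 here */
}

/* C model of the 4-instruction eor/ror bswap sequence above. */
uint32_t bswap_eor(uint32_t x) {
    uint32_t t = x ^ ror32(x, 16);  /* eor r1, r0, r0, ror #16 */
    t &= ~0x00ff0000u;              /* bic r1, r1, #0xff0000 */
    t >>= 8;                        /* mov r1, r1, lsr #8 */
    return t ^ ror32(x, 8);         /* eor r0, r1, r0, ror #8 */
}
```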
//===---------------------------------------------------------------------===//

Consider the following simple C code:

void foo(unsigned char *a, unsigned char *b, int *c) {
 if ((*a | *b) == 0) *c = 0;
}

Currently llvm-gcc generates something like this (nice branchless code, I'd
say):

       ldrb    r0, [r0]
       ldrb    r1, [r1]
       orr     r0, r1, r0
       tst     r0, #255
       moveq   r0, #0
       streq   r0, [r2]
       bx      lr

Note that both "tst" and "moveq" are redundant: "orrs" would set the flags,
and r0 is already zero whenever the store condition holds.

//===---------------------------------------------------------------------===//

When loading immediate constants with movt/movw, if there are multiple
constants needed with the same low 16 bits, and those values are not live at
the same time, it would be possible to use a single movw instruction, followed
by multiple movt instructions to rewrite the high bits to different values.
For example:

  volatile store i32 -1, i32* inttoptr (i32 1342210076 to i32*), align 4, !tbaa !0
  volatile store i32 -1, i32* inttoptr (i32 1342341148 to i32*), align 4, !tbaa !0

is compiled and optimized to:

    movw    r0, #32796
    mov.w   r1, #-1
    movt    r0, #20480
    str     r1, [r0]
    movw    r0, #32796    @ <= this MOVW is not needed, value is there already
    movt    r0, #20482
    str     r1, [r0]

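The two addresses in the example decode as follows; a quick check of the
shared low half:

```c
#include <stdint.h>

/* 1342210076 == 0x5000801C -> movw #32796 (0x801C), movt #20480 (0x5000)
   1342341148 == 0x5002801C -> movw #32796 (0x801C), movt #20482 (0x5002)
   Identical low halves, so the second movw is redundant. */
static const uint32_t ADDR_A = 1342210076u;
static const uint32_t ADDR_B = 1342341148u;
```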
//===---------------------------------------------------------------------===//

Improve codegen for selects:
if (x != 0) x = 1
if (x == 1) x = 1

ARM codegen used to look like this:
       mov     r1, r0
       cmp     r1, #1
       mov     r0, #0
       moveq   r0, #1

The naive lowering selects between two different values. It should recognize
that the test is an equality test, so this is really a conditional move rather
than a general select:
       cmp     r0, #1
       movne   r0, #0

Currently this is an ARM-specific dag combine. We probably should make it into
a target-neutral one.

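A C sketch of the pattern being discussed; with the combine, the equality
select collapses to a compare plus a single predicated move.

```c
/* select (x == 1) ? 1 : 0 -- ideally "cmp r0, #1; movne r0, #0",
   leaving x in place when the test succeeds. */
int isone(int x) { return x == 1 ? 1 : 0; }
```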
//===---------------------------------------------------------------------===//

Optimize unnecessary checks for zero with __builtin_clz/ctz.  Those builtins
are specified to be undefined at zero, so portable code must check for zero
and handle it as a special case.  That is unnecessary on ARM where those
operations are implemented in a way that is well-defined for zero.  For
example:

int f(int x) { return x ? __builtin_clz(x) : sizeof(int)*8; }

should just be implemented with a CLZ instruction.  Since there are other
targets, e.g., PPC, that share this behavior, it would be best to implement
this in a target-independent way: we should probably fold that (when using
"undefined at zero" semantics) to set the "defined at zero" bit and have
the code generator expand out the right code.

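The guarded form from the example; on ARM the whole function should lower to
one CLZ, because the ARM CLZ instruction returns 32 for a zero input.

```c
#include <limits.h>

/* Portable zero guard around the builtin; on ARM the guard is dead weight
   since CLZ already yields 32 for an input of 0. */
int clz_guarded(int x) {
    return x ? __builtin_clz((unsigned)x) : (int)sizeof(int) * CHAR_BIT;
}
```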
//===---------------------------------------------------------------------===//

Clean up the test/MC/ARM files to have more robust register choices.

R0 should not be used as a register operand in the assembler tests, because
zero is the default value for the binary encoder and it is then impossible to
distinguish a correct encoding from a missing-operand encoding.
e.g.,
    add r0, r0  // bad
    add r3, r5  // good

Register operands should be distinct. That is, when the encoding does not
require two syntactic operands to refer to the same register, two different
registers should be used in the test so as to catch errors where the
operands are swapped in the encoding.
e.g.,
    subs.w r1, r1, r1 // bad
    subs.w r1, r2, r3 // good
