//===- README_ALTIVEC.txt - Notes for improving Altivec code gen ----------===//

Implement PPCInstrInfo::isLoadFromStackSlot/isStoreToStackSlot for vector
registers, to generate better spill code.

//===----------------------------------------------------------------------===//

The first should be a single lvx from the constant pool, the second should be
an xor/stvx:

extern void bar(int *);

void foo(void) {
  int x[8] __attribute__((aligned(128))) = { 1, 1, 1, 17, 1, 1, 1, 1 };
  bar(x);
}

#include <string.h>
extern void bar(int *);

void foo(void) {
  int x[8] __attribute__((aligned(128)));
  memset(x, 0, sizeof(x));
  bar(x);
}

//===----------------------------------------------------------------------===//

Altivec: Codegen'ing MUL with vector FMADD should add -0.0, not 0.0:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=8763

A +0.0 addend turns a -0.0 product into +0.0, whereas x + -0.0 == x for every
x, so -0.0 is the safe identity.  When -ffast-math is on, we can use 0.0.

//===----------------------------------------------------------------------===//

Consider this:
  v4f32 Vector;
  v4f32 Vector2 = { Vector.X, Vector.X, Vector.X, Vector.X };

Since we know that "Vector" is 16-byte aligned and we know the element offset
of ".X", we should change the load into a lve*x instruction, instead of doing
a load/store/lve*x sequence.

//===----------------------------------------------------------------------===//

For functions that use altivec AND have calls, we are VRSAVE'ing all
call-clobbered regs.

//===----------------------------------------------------------------------===//

Implement passing vectors by value into calls and receiving them as arguments.

//===----------------------------------------------------------------------===//

GCC apparently tries to codegen { C1, C2, Variable, C3 } as a constant pool
load of C1/C2/C3, then a load and vperm of Variable.

//===----------------------------------------------------------------------===//

We need a way to teach tblgen that some operands of an intrinsic are required
to be constants.  The verifier should enforce this constraint.

//===----------------------------------------------------------------------===//

We currently codegen SCALAR_TO_VECTOR as a store of the scalar to a 16-byte
aligned stack slot, followed by a load/vperm.  We should probably just store
it to a scalar stack slot, then use lvsl/vperm to load it.  If the value is
already in memory, this is a big win.

//===----------------------------------------------------------------------===//

extract_vector_elt of an arbitrary constant vector can be done with the
following instructions:

vTemp = vec_splat(v0, 2);     // 2 is the element the src is in.
vec_ste(&destloc, 0, vTemp);

We can do an arbitrary non-constant value by using lvsr/perm/ste.

//===----------------------------------------------------------------------===//

If we want to tie instruction selection into the scheduler, we can do some
constant formation with different instructions.  For example, we can generate
"vsplti -1" with "vcmpequw R,R" and 1,1,1,1 with "vsubcuw R,R", and 0,0,0,0
with "vsplti 0" or "vxor", each of which uses a different execution unit and
thus could help scheduling.

This is probably only reasonable for a post-pass scheduler.
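
For reference, the equivalences look like this (hand-written, illustrative
sequences; which unit each instruction dispatches to depends on the
implementation):

    vcmpequw v2, v2, v2     ; v2 = {-1,-1,-1,-1}
    vsubcuw  v3, v3, v3     ; v3 = { 1, 1, 1, 1}  (carry out of x-x is 1)
    vspltisw v4, 0          ; v4 = { 0, 0, 0, 0}
    vxor     v5, v5, v5     ; v5 = { 0, 0, 0, 0}  (alternative)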

//===----------------------------------------------------------------------===//

For this function:

void test(vector float *A, vector float *B) {
  vector float C = (vector float)vec_cmpeq(*A, *B);
  if (!vec_any_eq(*A, *B))
    *B = (vector float){0,0,0,0};
  *A = C;
}

we get the following basic block:

    ...
    lvx v2, 0, r4
    lvx v3, 0, r3
    vcmpeqfp v4, v3, v2
    vcmpeqfp. v2, v3, v2
    bne cr6, LBB1_2    ; cond_next

The vcmpeqfp/vcmpeqfp. instructions currently cannot be merged when the
vcmpeqfp. result is used by a branch.  This can be improved.

//===----------------------------------------------------------------------===//

The code generated for this is truly awful:

vector float test(float a, float b) {
  return (vector float){ 0.0, a, 0.0, 0.0};
}

LCPI1_0:    ; float
    .space 4
    .text
    .globl _test
    .align 4
_test:
    mfspr r2, 256
    oris r3, r2, 4096
    mtspr 256, r3
    lis r3, ha16(LCPI1_0)
    addi r4, r1, -32
    stfs f1, -16(r1)
    addi r5, r1, -16
    lfs f0, lo16(LCPI1_0)(r3)
    stfs f0, -32(r1)
    lvx v2, 0, r4
    lvx v3, 0, r5
    vmrghw v3, v3, v2
    vspltw v2, v2, 0
    vmrghw v2, v2, v3
    mtspr 256, r2
    blr

//===----------------------------------------------------------------------===//

int foo(vector float *x, vector float *y) {
  if (vec_all_eq(*x, *y)) return 3245;
  else return 12;
}

A predicate compare being used in a select_cc should have the same peephole
applied to it as a predicate compare used by a br_cc.  There should be no
mfcr here:

_foo:
    mfspr r2, 256
    oris r5, r2, 12288
    mtspr 256, r5
    li r5, 12
    li r6, 3245
    lvx v2, 0, r4
    lvx v3, 0, r3
    vcmpeqfp. v2, v3, v2
    mfcr r3, 2
    rlwinm r3, r3, 25, 31, 31
    cmpwi cr0, r3, 0
    bne cr0, LBB1_2    ; entry
LBB1_1:    ; entry
    mr r6, r5
LBB1_2:    ; entry
    mr r3, r6
    mtspr 256, r2
    blr

//===----------------------------------------------------------------------===//

CodeGen/PowerPC/vec_constants.ll has an and operation that should be
codegen'd to andc.  The issue is that the 'all ones' build vector is
SelectNodeTo'd to a VSPLTISB instruction node before the and/xor is selected,
which prevents the vnot pattern from matching.

//===----------------------------------------------------------------------===//

An alternative to the store/store/load approach for illegal insert element
lowering would be:

1. store element to any ol' slot
2. lvx the slot
3. lvsl 0; splat index; vcmpeq to generate a select mask
4. lvsl slot + x; vperm to rotate result into correct slot
5. vsel result together.

//===----------------------------------------------------------------------===//

Should codegen branches on vec_any/vec_all to avoid mfcr.  Two examples:

#include <altivec.h>
int f(vector float a, vector float b) {
  int aa = 0;
  if (vec_all_ge(a, b))
    aa |= 0x1;
  if (vec_any_ge(a, b))
    aa |= 0x2;
  return aa;
}

vector float f(vector float a, vector float b) {
  if (vec_any_eq(a, b))
    return a;
  else
    return b;
}
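
For the second example, the code we would like is along these lines (a
hand-written sketch, not actual compiler output; register assignments are
illustrative):

_f:
    vcmpeqfp. v4, v2, v3    ; compare a (v2) and b (v3), setting CR6
    bne cr6, LBB1_2         ; any-equal <=> CR6 "all false" bit is clear
    vor v2, v3, v3          ; no element equal: return b
LBB1_2:
    blr                     ; result in v2

No mfcr, no rlwinm/cmpwi.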

//===----------------------------------------------------------------------===//

We should do a little better with eliminating dead stores.
The stores to the stack are dead since %a and %b are not needed:

; Function Attrs: nounwind
define <16 x i8> @test_vpmsumb() #0 {
entry:
  %a = alloca <16 x i8>, align 16
  %b = alloca <16 x i8>, align 16
  store <16 x i8> <i8 1, i8 2, i8 3, i8 4, i8 5, i8 6, i8 7, i8 8, i8 9, i8 10, i8 11, i8 12, i8 13, i8 14, i8 15, i8 16>, <16 x i8>* %a, align 16
  store <16 x i8> <i8 113, i8 114, i8 115, i8 116, i8 117, i8 118, i8 119, i8 120, i8 121, i8 122, i8 123, i8 124, i8 125, i8 126, i8 127, i8 112>, <16 x i8>* %b, align 16
  %0 = load <16 x i8>* %a, align 16
  %1 = load <16 x i8>* %b, align 16
  %2 = call <16 x i8> @llvm.ppc.altivec.crypto.vpmsumb(<16 x i8> %0, <16 x i8> %1)
  ret <16 x i8> %2
}

; Function Attrs: nounwind readnone
declare <16 x i8> @llvm.ppc.altivec.crypto.vpmsumb(<16 x i8>, <16 x i8>) #1

Produces the following code with -mtriple=powerpc64-unknown-linux-gnu:
# %bb.0:    # %entry
    addis 3, 2, .LCPI0_0@toc@ha
    addis 4, 2, .LCPI0_1@toc@ha
    addi 3, 3, .LCPI0_0@toc@l
    addi 4, 4, .LCPI0_1@toc@l
    lxvw4x 0, 0, 3
    addi 3, 1, -16
    lxvw4x 35, 0, 4
    stxvw4x 0, 0, 3
    ori 2, 2, 0
    lxvw4x 34, 0, 3
    addi 3, 1, -32
    stxvw4x 35, 0, 3
    vpmsumb 2, 2, 3
    blr
    .long 0
    .quad 0

The two stxvw4x instructions are not needed.
With -mtriple=powerpc64le-unknown-linux-gnu, the associated permutes
are present too.

//===----------------------------------------------------------------------===//

The following example is found in test/CodeGen/PowerPC/vec_add_sub_doubleword.ll:

define <2 x i64> @increment_by_val(<2 x i64> %x, i64 %val) nounwind {
  %tmpvec = insertelement <2 x i64> <i64 0, i64 0>, i64 %val, i32 0
  %tmpvec2 = insertelement <2 x i64> %tmpvec, i64 %val, i32 1
  %result = add <2 x i64> %x, %tmpvec2
  ret <2 x i64> %result
}

This will generate the following instruction sequence:
    std 5, -8(1)
    std 5, -16(1)
    addi 3, 1, -16
    ori 2, 2, 0
    lxvd2x 35, 0, 3
    vaddudm 2, 2, 3
    blr

This will almost certainly cause a load-hit-store hazard.
Since %val is a value parameter, it should not need to be saved onto
the stack, unless it's being done to set up the vector register.  Instead,
it would be better to splat the value into a vector register, and then
remove the (dead) stores to the stack.

//===----------------------------------------------------------------------===//

At the moment we always generate a lxsdx in preference to lfd, or stxsdx in
preference to stfd.  When we have a reg-immediate addressing mode, this is a
poor choice, since we have to load the address into an index register.  This
should be fixed for P7/P8.
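
For example, for a load at a small constant offset (illustrative sequences
only):

    lfd f1, 16(r3)          ; reg+imm form: one instruction

versus what we currently generate:

    addi r4, r3, 16         ; materialize the address, since
    lxsdx vs1, 0, r4        ; lxsdx is indexed-only (vs1 aliases f1)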

//===----------------------------------------------------------------------===//

Right now, ShuffleKind 0 is supported only on BE, and ShuffleKind 2 only on
LE.  However, we could actually support both kinds on either endianness, if
we check for the appropriate shufflevector pattern for each case ... this
would cause some additional shufflevectors to be recognized and implemented
via the "swapped" form.

//===----------------------------------------------------------------------===//

There is a utility program called PerfectShuffle that generates a table of
the shortest instruction sequence for implementing a shufflevector operation
on PowerPC.  However, this was designed for big-endian code generation.  We
could modify this program to create a little-endian version of the table.
The table is used in PPCISelLowering.cpp,
PPCTargetLowering::LowerVECTOR_SHUFFLE().

//===----------------------------------------------------------------------===//

Opportunities to use instructions from PPCInstrVSX.td during code gen
  - Conversion instructions (Sections 7.6.1.5 and 7.6.1.6 of ISA 2.07)
  - Scalar comparisons (xscmpodp and xscmpudp)
  - Min and max (xsmaxdp, xsmindp, xvmaxdp, xvmindp, xvmaxsp, xvminsp)

Related to this: we currently do not generate the lxvw4x instruction for
either v4f32 or v4i32, probably because adding a dag pattern to the
recognizer requires a single target type.  This should probably be addressed
in the PPCISelDAGToDAG logic.

//===----------------------------------------------------------------------===//

Currently EXTRACT_VECTOR_ELT and INSERT_VECTOR_ELT are type-legal only
for v2f64 with VSX available.  We should create custom lowering
support for the other vector types.  Without this support, we generate
sequences with load-hit-store hazards.

v4f32 can be supported with VSX by shifting the correct element into
big-endian lane 0, using xscvspdpn to produce a double-precision
representation of the single-precision value in big-endian
double-precision lane 0, and reinterpreting lane 0 as an FPR or
vector-scalar register.

v2i64 can be supported with VSX and P8Vector in the same manner as
v2f64, followed by a direct move to a GPR.

v4i32 can be supported with VSX and P8Vector by shifting the correct
element into big-endian lane 1, using a direct move to a GPR, and
sign-extending the 32-bit result to 64 bits.

v8i16 can be supported with VSX and P8Vector by shifting the correct
element into big-endian lane 3, using a direct move to a GPR, and
sign-extending the 16-bit result to 64 bits.

v16i8 can be supported with VSX and P8Vector by shifting the correct
element into big-endian lane 7, using a direct move to a GPR, and
sign-extending the 8-bit result to 64 bits.
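
As an illustration of the v4i32 case, extracting big-endian element 2 might
look something like this (hand-written sketch, not actual compiler output;
register choices are arbitrary):

    xxsldwi vs0, vs34, vs34, 1    ; rotate element 2 into big-endian lane 1
    mfvsrwz r3, vs0               ; direct move of lane 1 to a GPR (P8)
    extsw r3, r3                  ; sign-extend the 32-bit result to 64 bits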