# Unofficial GCN/RDNA ISA reference errata

## `v_sad_u32`

The Vega ISA reference writes its behaviour as:

```
D.u = abs(S0.i - S1.i) + S2.u.
```

This is incorrect. The actual behaviour is what is written in the GCN3 reference
guide:

```
ABS_DIFF (A,B) = (A>B) ? (A-B) : (B-A)
D.u = ABS_DIFF (S0.u,S1.u) + S2.u
```

The instruction doesn't subtract S0 and S1 and take the absolute value (the
_signed_ distance); it uses the _unsigned_ distance between the operands. So
`v_sad_u32(-5, 0, 0)` would return `4294967291` (`-5` interpreted as unsigned),
not `5`.

## `s_bfe_*`

The RDNA, Vega and GCN3 ISA references all write that these instructions don't
write SCC. They do.

## `v_bcnt_u32_b32`

The Vega ISA reference writes its behaviour as:

```
D.u = 0;
for i in 0 ... 31 do
D.u += (S0.u[i] == 1 ? 1 : 0);
endfor.
```

This is incorrect. The actual behaviour (and number of operands) is what
is written in the GCN3 reference guide:

```
D.u = CountOneBits(S0.u) + S1.u.
```

## `v_alignbyte_b32`

All versions of the ISA document are vague about it, but after some trial and
error we discovered that only 2 bits of the 3rd operand are used.
Therefore, this instruction can't shift more than 24 bits.

The correct description of `v_alignbyte_b32` is probably the following:

```
D.u = ({S0, S1} >> (8 * S2.u[1:0])) & 0xffffffff
```

## SMEM stores

The Vega ISA reference doesn't say this (or doesn't make it clear), but
the offset for SMEM stores must be in m0 if IMM == 0.

The RDNA ISA doesn't mention SMEM stores at all, but they seem to be supported
by the chip and are present in LLVM. AMD devs however highly recommend avoiding
these instructions.

## SMEM atomics

RDNA ISA: same as the SMEM stores, the ISA pretends they don't exist, but they
are there in LLVM.
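
As a sanity check, the corrected `v_sad_u32` and `v_alignbyte_b32` behaviours
described above can be emulated in a few lines of Python (a sketch; the function
names mirror the instruction mnemonics but are ours, not part of any API):

```python
MASK32 = 0xffffffff

def v_sad_u32(s0, s1, s2):
    """Unsigned distance between s0 and s1, plus s2 (per the GCN3 doc)."""
    s0, s1 = s0 & MASK32, s1 & MASK32
    diff = s0 - s1 if s0 > s1 else s1 - s0
    return (diff + s2) & MASK32

def v_alignbyte_b32(s0, s1, s2):
    """Shift the 64-bit value {s0,s1} right by 8 * s2[1:0], keep the low dword."""
    return (((s0 & MASK32) << 32 | (s1 & MASK32)) >> (8 * (s2 & 0x3))) & MASK32

# -5 as unsigned is 4294967291, so the result is 4294967291, not 5:
print(v_sad_u32(-5 & MASK32, 0, 0))                      # 4294967291
# Only 2 bits of the shift operand are used, so a shift amount of 4 acts like 0:
print(hex(v_alignbyte_b32(0x11223344, 0x55667788, 4)))   # 0x55667788
```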

## VMEM stores

All reference guides say (under "Vector Memory Instruction Data Dependencies"):

> When a VM instruction is issued, the address is immediately read out of VGPRs
> and sent to the texture cache. Any texture or buffer resources and samplers
> are also sent immediately. However, write-data is not immediately sent to the
> texture cache.

Reading that, one might think that waitcnts need to be added when writing to
the registers used for a VMEM store's data. Experimentation has shown that this
does not seem to be the case on GFX8 and GFX9 (GFX6 and GFX7 are untested). It
also seems unlikely, since NOPs are apparently needed in a subset of these
situations.

## MIMG opcodes on GFX8/GCN3

The `image_atomic_{swap,cmpswap,add,sub}` opcodes in the GCN3 ISA reference
guide are incorrect. The Vega ISA reference guide has the correct ones.

## VINTRP encoding

The VEGA ISA doc says the encoding should be `110010`, but `110101` works.

## VOP1 instructions encoded as VOP3

The RDNA ISA doc says that `0x140` should be added to the opcode, but that
doesn't work. What works is adding `0x180`, which LLVM also does.

## FLAT, Scratch, Global instructions

The NV bit was removed in RDNA, but some parts of the doc still mention it.

RDNA ISA doc 13.8.1 says that SADDR should be set to 0x7f when ADDR is used, but
9.3.1 says it should be set to NULL. We assume 9.3.1 is correct and set it to
SGPR_NULL.

## Legacy instructions

Some instructions have a `_LEGACY` variant which implements "DX9 rules", in which
the zero "wins" in multiplications, i.e. `0.0*x` is always `0.0`. The VEGA ISA
mentions `V_MAC_LEGACY_F32`, but this instruction is not actually present on VEGA.
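
The DX9 multiplication rule above can be sketched in Python. This is a
simplification: real hardware also defines sign and denormal behaviour that we
don't model here.

```python
import math

def v_mul_legacy_f32(a, b):
    """DX9-rules multiply: zero "wins", so 0.0 * x is 0.0 even for inf or NaN x."""
    if a == 0.0 or b == 0.0:
        return 0.0
    return a * b

print(v_mul_legacy_f32(0.0, math.inf))  # 0.0 (an IEEE multiply would give NaN)
print(v_mul_legacy_f32(2.0, 3.0))       # 6.0
```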

## `m0` with LDS instructions on Vega and newer

The Vega ISA doc (both the old one and the "7nm" one) claims that LDS instructions
use the `m0` register for address clamping like older GPUs, but this is not the case.

In reality, only the `_addtid` variants of LDS instructions use `m0` on Vega and
newer GPUs, so the relevant section of the RDNA ISA doc seems to apply.
LLVM also doesn't emit any initialization of `m0` for LDS instructions, and this
was also confirmed by AMD devs.

## RDNA L0, L1 cache and DLC, GLC bits

The old L1 cache was renamed to L0, and a new L1 cache was added to RDNA. There
is one L1 cache per shader array. Some instruction encodings have DLC and
GLC bits that interact with the cache:

* DLC ("device level coherent") bit: controls the L1 cache
* GLC ("globally coherent") bit: controls the L0 cache

The recommendation from AMD devs is to always set these two bits at the same time,
as it doesn't make much sense to set them independently, aside from some
circumstances (e.g. we needn't set DLC when only one shader array is used).

Stores and atomics always bypass the L1 cache, so they don't support the DLC bit,
and it shouldn't be set in these cases. Setting the DLC bit in these cases can
result in graphical glitches or hangs.

## RDNA `s_dcache_wb`

The `s_dcache_wb` instruction is not mentioned in the RDNA ISA doc, but it is
needed in order to achieve correct behavior in some SSBO CTS tests.

## RDNA subvector mode

The documentation of `s_subvector_loop_begin` and `s_subvector_mode_end` is not clear
on what sort of addressing should be used, but it says that it
"is equivalent to an `S_CBRANCH` with extra math", so the subvector loop handling
in ACO is done according to the `s_cbranch` doc.
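
The DLC/GLC recommendation above boils down to a simple rule. A Python sketch
(the helper and its flag names are ours, for illustration only):

```python
def cache_bits(coherent, is_store_or_atomic):
    """Pick the GLC (L0) and DLC (L1) bits for an RDNA memory instruction.

    Per the advice above: set both bits together for coherent accesses, but
    never set DLC on stores/atomics, which bypass L1 anyway and can glitch
    or hang if DLC is set.
    """
    glc = coherent
    dlc = coherent and not is_store_or_atomic
    return glc, dlc

print(cache_bits(coherent=True, is_store_or_atomic=False))  # (True, True)
print(cache_bits(coherent=True, is_store_or_atomic=True))   # (True, False)
```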

## RDNA early rasterization

The ISA documentation says about `s_endpgm`:

> The hardware implicitly executes S_WAITCNT 0 and S_WAITCNT_VSCNT 0
> before executing this instruction.

What the doc doesn't say is that in case of NGG (and legacy VS) when there
are no param exports, the driver sets `NO_PC_EXPORT=1` for optimal performance,
and when this is set, the hardware will start clipping and rasterization
as soon as it encounters a position export with `DONE=1`, without waiting
for the NGG (or VS) to finish.

It can even launch PS waves before NGG (or VS) ends.

When this happens, any store performed by a VS is not guaranteed
to be complete when PS tries to load it, so we need to manually
make sure to insert wait instructions before the position exports.

## A16 and G16

On GFX9, the A16 field enables both 16-bit addresses and derivatives.
On GFX10+, these are fully independent of each other: A16 controls 16-bit
addresses and G16 controls 16-bit derivatives. A16 without G16 uses 32-bit
derivatives.

# Hardware Bugs

## SMEM corrupts VCCZ on SI/CI

[See this LLVM source.](https://github.com/llvm/llvm-project/blob/acb089e12ae48b82c0b05c42326196a030df9b82/llvm/lib/Target/AMDGPU/SIInsertWaits.cpp#L580-L616)

After issuing an SMEM instruction, we need to wait for it to finish and then
write to vcc (for example, `s_mov_b64 vcc, vcc`) to correct vccz.

Currently, we don't do this.

## SGPR offset on MUBUF prevents addr clamping on SI/CI

[See this LLVM source.](https://github.com/llvm/llvm-project/blob/main/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp#L1917-L1922)

This leads to wrong bounds checking; using a VGPR offset fixes it.
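
The A16/G16 interaction described above reduces to a small decision function.
A sketch (the helper name and return convention are ours):

```python
def mimg_bit_widths(gfx_level, a16, g16):
    """Return (address_bits, derivative_bits) for a MIMG instruction."""
    addr_bits = 16 if a16 else 32
    if gfx_level >= 10:
        # A16 and G16 are independent; A16 without G16 keeps 32-bit derivatives.
        deriv_bits = 16 if g16 else 32
    else:
        # On GFX9, A16 controls both addresses and derivatives.
        deriv_bits = 16 if a16 else 32
    return addr_bits, deriv_bits

print(mimg_bit_widths(9, a16=True, g16=False))   # (16, 16)
print(mimg_bit_widths(10, a16=True, g16=False))  # (16, 32)
```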

## GCN / GFX6 hazards

### VINTRP followed by a read with `v_readfirstlane` or `v_readlane`

It's required to insert 1 wait state if the dst VGPR of any `v_interp_*` is
followed by a read with `v_readfirstlane` or `v_readlane` to fix GPU hangs on GFX6.
Note that `v_writelane_*` is apparently not affected. This hazard isn't
documented anywhere, but AMD confirmed it.

## RDNA / GFX10 hazards

### SMEM store followed by a load with the same address

We found that an `s_buffer_load` will produce incorrect results if it is preceded
by an `s_buffer_store` with the same address. Inserting an `s_nop` between them
does not mitigate the issue, so an `s_waitcnt lgkmcnt(0)` must be inserted.
This is not mentioned by LLVM among the other GFX10 bugs, but LLVM doesn't use
SMEM stores, so it's not surprising that they didn't notice it.

### VMEMtoScalarWriteHazard

Triggered by:
A VMEM/FLAT/GLOBAL/SCRATCH/DS instruction reads an SGPR (or EXEC, or M0).
Then, a SALU/SMEM instruction writes the same SGPR.

Mitigated by:
A VALU instruction or an `s_waitcnt` between the two instructions.

### SMEMtoVectorWriteHazard

Triggered by:
An SMEM instruction reads an SGPR. Then, a VALU instruction writes that same SGPR.

Mitigated by:
Any non-SOPP SALU instruction (except `s_setvskip`, `s_version`, and any non-lgkmcnt `s_waitcnt`).

### Offset3fBug

Any branch that is located at offset 0x3f will be buggy. Just insert some NOPs
to make sure no branch is located at this offset.

### InstFwdPrefetchBug

According to LLVM, the `s_inst_prefetch` instruction can cause a hang.
There are no further details.

### LdsMisalignedBug

When there is a misaligned multi-dword FLAT load/store instruction in WGP mode,
it needs to be split into multiple single-dword FLAT instructions.

ACO doesn't use FLAT load/store on GFX10, so is unaffected.
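
Of the GFX10 hazards above, the SMEM store-then-load case lends itself to a
tiny mitigation pass. A Python sketch (the `(opcode, address)` instruction
representation is made up purely for illustration):

```python
def mitigate_smem_store_load(insns):
    """Insert `s_waitcnt lgkmcnt(0)` between an SMEM store and a later SMEM
    load from the same address (per the hazard above, an s_nop is not enough)."""
    out = []
    pending_store_addr = None
    for op, addr in insns:
        if op.startswith("s_buffer_load") and addr == pending_store_addr:
            out.append(("s_waitcnt lgkmcnt(0)", None))
            pending_store_addr = None
        out.append((op, addr))
        if op.startswith("s_buffer_store"):
            pending_store_addr = addr
    return out

prog = [("s_buffer_store_dword", "s[0:3]+0x0"),
        ("s_buffer_load_dword", "s[0:3]+0x0")]
for insn in mitigate_smem_store_load(prog):
    print(insn[0])  # the wait appears between the store and the load
```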

### FlatSegmentOffsetBug

The 12-bit immediate OFFSET field of FLAT instructions must always be 0.
GLOBAL and SCRATCH are unaffected.

ACO doesn't use FLAT load/store on GFX10, so is unaffected.

### VcmpxPermlaneHazard

Triggered by:
Any permlane instruction that follows any VOPC instruction.
AMD devs confirmed that despite the name, this doesn't only affect `v_cmpx`.

Mitigated by:
Any VALU instruction except `v_nop`.

### VcmpxExecWARHazard

Triggered by:
Any non-VALU instruction reads the EXEC mask. Then, any VALU instruction writes the EXEC mask.

Mitigated by:
A VALU instruction that writes an SGPR (or has a valid SDST operand), or `s_waitcnt_depctr 0xfffe`.
Note: `s_waitcnt_depctr` is an internal instruction, so there is no further information
about what it does or what its operand means.

### LdsBranchVmemWARHazard

Triggered by:
A VMEM/GLOBAL/SCRATCH instruction, then a branch, then a DS instruction,
or vice versa: a DS instruction, then a branch, then a VMEM/GLOBAL/SCRATCH instruction.

Mitigated by:
Only `s_waitcnt_vscnt null, 0`. Needed even if the first instruction is a load.

### NSAClauseBug

"MIMG-NSA in a hard clause has unpredictable results on GFX10.1"

### NSAMaxSize5

NSA MIMG instructions should be limited to 3 dwords before GFX10.3 to avoid
stability issues: https://reviews.llvm.org/D103348
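
As an example of how such hazard rules translate into a compiler check, the
LdsBranchVmemWARHazard above reduces to a simple pair test. A sketch using a
hypothetical instruction-class string for each side of the branch:

```python
VMEM_CLASSES = {"VMEM", "GLOBAL", "SCRATCH"}

def lds_branch_vmem_war_hazard(first_class, second_class):
    """True if a branch between these two instructions needs
    `s_waitcnt_vscnt null, 0` (needed even if the first is a load)."""
    return ((first_class in VMEM_CLASSES and second_class == "DS") or
            (first_class == "DS" and second_class in VMEM_CLASSES))

print(lds_branch_vmem_war_hazard("GLOBAL", "DS"))  # True
print(lds_branch_vmem_war_hazard("DS", "SMEM"))    # False
```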