# Unofficial GCN/RDNA ISA reference errata

## `v_sad_u32`

The Vega ISA reference writes its behaviour as:

```
D.u = abs(S0.i - S1.i) + S2.u.
```

This is incorrect. The actual behaviour is what is written in the GCN3 reference
guide:

```
ABS_DIFF (A,B) = (A>B) ? (A-B) : (B-A)
D.u = ABS_DIFF (S0.u,S1.u) + S2.u
```

The instruction doesn't subtract S0 and S1 and take the absolute value (the
_signed_ distance); it uses the _unsigned_ distance between the operands. So
`v_sad_u32(-5, 0, 0)` would return `4294967291` (`-5` interpreted as unsigned),
not `5`.
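
For illustration, a minimal C sketch of this unsigned behaviour (the function
name and the use of `uint32_t` are just for the example):

```
#include <stdint.h>

/* Unsigned sum-of-absolute-difference, matching the GCN3 description:
 * ABS_DIFF uses the unsigned distance between S0 and S1. */
static uint32_t v_sad_u32(uint32_t s0, uint32_t s1, uint32_t s2)
{
   uint32_t abs_diff = (s0 > s1) ? (s0 - s1) : (s1 - s0);
   return abs_diff + s2;
}

/* v_sad_u32((uint32_t)-5, 0, 0) == 4294967291u, not 5. */
```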

## `s_bfe_*`

The RDNA, Vega and GCN3 ISA references all state that these instructions don't write
SCC. They do.
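
For illustration, a hedged C sketch of `s_bfe_u32` including the SCC write; the
bit-field encoding (offset in S1[4:0], width in S1[22:16]) and the "SCC = result
is non-zero" rule are assumptions based on common SALU behaviour, not statements
from this errata:

```
#include <stdint.h>

/* Assumed semantics: extract 'width' bits starting at 'offset' from S0,
 * and set SCC to whether the result is non-zero. */
static uint32_t s_bfe_u32(uint32_t s0, uint32_t s1, uint32_t *scc)
{
   uint32_t offset = s1 & 0x1f;
   uint32_t width = (s1 >> 16) & 0x7f;
   uint32_t mask = width >= 32 ? 0xffffffffu : (1u << width) - 1u;
   uint32_t result = (s0 >> offset) & mask;
   *scc = (result != 0); /* the point of the erratum: SCC is written */
   return result;
}
```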

## `v_bcnt_u32_b32`

The Vega ISA reference writes its behaviour as:

```
D.u = 0;
for i in 0 ... 31 do
D.u += (S0.u[i] == 1 ? 1 : 0);
endfor.
```

This is incorrect. The actual behaviour (and number of operands) is what
is written in the GCN3 reference guide:

```
D.u = CountOneBits(S0.u) + S1.u.
```
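
A minimal C sketch of the corrected two-operand form (a plain loop stands in for
the hardware popcount):

```
#include <stdint.h>

/* Population count of S0 plus S1, per the GCN3 description. */
static uint32_t v_bcnt_u32_b32(uint32_t s0, uint32_t s1)
{
   uint32_t count = 0;
   for (unsigned i = 0; i < 32; i++)
      count += (s0 >> i) & 1;
   return count + s1;
}
```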

## `v_alignbyte_b32`

All versions of the ISA document are vague about it, but after some trial and
error we discovered that only 2 bits of the 3rd operand are used.
Therefore, this instruction can't shift more than 24 bits.

The correct description of `v_alignbyte_b32` is probably the following:

```
D.u = ({S0, S1} >> (8 * S2.u[1:0])) & 0xffffffff
```
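
In C, that description corresponds roughly to the following sketch, assuming
`{S0, S1}` places S0 in the high 32 bits and S1 in the low 32 bits:

```
#include <stdint.h>

/* Concatenate S0 (high) and S1 (low) and shift right by whole bytes;
 * only the low 2 bits of S2 select the shift, so at most 24 bits. */
static uint32_t v_alignbyte_b32(uint32_t s0, uint32_t s1, uint32_t s2)
{
   uint64_t concat = ((uint64_t)s0 << 32) | s1;
   return (uint32_t)(concat >> (8 * (s2 & 0x3)));
}
```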

## SMEM stores

The Vega ISA reference doesn't say this (or doesn't make it clear), but
the offset for SMEM stores must be in m0 if IMM == 0.

The RDNA ISA doesn't mention SMEM stores at all, but they seem to be supported
by the chip and are present in LLVM. AMD devs, however, highly recommend avoiding
these instructions.

## SMEM atomics

RDNA ISA: same as the SMEM stores, the ISA pretends they don't exist, but they
are there in LLVM.

## VMEM stores

All reference guides say (under "Vector Memory Instruction Data Dependencies"):

> When a VM instruction is issued, the address is immediately read out of VGPRs
> and sent to the texture cache. Any texture or buffer resources and samplers
> are also sent immediately. However, write-data is not immediately sent to the
> texture cache.

Reading that, one might think that waitcnts need to be added when writing to
the registers used for a VMEM store's data. Experimentation has shown that this
does not seem to be the case on GFX8 and GFX9 (GFX6 and GFX7 are untested). It
also seems unlikely, since NOPs are apparently needed in a subset of these
situations.

## MIMG opcodes on GFX8/GCN3

The `image_atomic_{swap,cmpswap,add,sub}` opcodes in the GCN3 ISA reference
guide are incorrect. The Vega ISA reference guide has the correct ones.

## VINTRP encoding

VEGA ISA doc says the encoding should be `110010` but `110101` works.

## VOP1 instructions encoded as VOP3

RDNA ISA doc says that `0x140` should be added to the opcode, but that doesn't
work. What works is adding `0x180`, which LLVM also does.

## FLAT, Scratch, Global instructions

The NV bit was removed in RDNA, but some parts of the doc still mention it.

RDNA ISA doc 13.8.1 says that SADDR should be set to 0x7f when ADDR is used, but
9.3.1 says it should be set to NULL. We assume 9.3.1 is correct and set it to
SGPR_NULL.

## Legacy instructions

Some instructions have a `_LEGACY` variant which implements "DX9 rules", in which
zero "wins" in multiplications, i.e. `0.0*x` is always `0.0`. The VEGA ISA
mentions `V_MAC_LEGACY_F32` but this instruction is not really there on VEGA.
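
As an illustration of the "DX9 rules", a minimal C sketch of a legacy multiply;
it glosses over signed-zero and NaN details that the text above doesn't cover:

```
/* "DX9 rules" multiply: zero wins, so 0.0 * x is 0.0 even when x is Inf or NaN. */
static float mul_legacy_f32(float a, float b)
{
   if (a == 0.0f || b == 0.0f)
      return 0.0f;
   return a * b;
}
```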

## LDS size and allocation granule

GFX7-8 ISA manuals are mistaken about the available LDS size.

* GFX7+ workgroups can use 64KB LDS.
  There is 64KB LDS per CU.
* GFX6 workgroups can use 32KB LDS.
  There is 64KB LDS per CU, but a single workgroup can only use half of it.

Regarding the LDS allocation granule, Mesa has the correct details and
the ISA manuals are mistaken.

## `m0` with LDS instructions on Vega and newer

The Vega ISA doc (both the old one and the "7nm" one) claims that LDS instructions
use the `m0` register for address clamping like older GPUs, but this is not the case.

In reality, only the `_addtid` variants of LDS instructions use `m0` on Vega and
newer GPUs, so the relevant section of the RDNA ISA doc seems to apply.
LLVM also doesn't emit any initialization of `m0` for LDS instructions, and this
was also confirmed by AMD devs.

## RDNA L0, L1 cache and DLC, GLC bits

The old L1 cache was renamed to L0, and a new L1 cache was added to RDNA. There
is one L1 cache per shader array. Some instruction encodings have DLC and
GLC bits that interact with the cache.

* DLC ("device level coherent") bit: controls the L1 cache
* GLC ("globally coherent") bit: controls the L0 cache

The recommendation from AMD devs is to always set these two bits at the same time,
as it doesn't make much sense to set them independently, aside from some
circumstances (eg. we needn't set DLC when only one shader array is used).

Stores and atomics always bypass the L1 cache, so they don't support the DLC bit,
and it shouldn't be set in these cases. Setting the DLC for these cases can result
in graphical glitches or hangs.
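
A hypothetical helper (not a real driver API) capturing that recommendation,
ignoring the single-shader-array exception mentioned above:

```
#include <stdbool.h>

/* Set GLC and DLC together for coherent loads; never set DLC for stores and
 * atomics, which bypass the L1 cache. */
struct cache_bits { bool glc; bool dlc; };

static struct cache_bits choose_cache_bits(bool coherent, bool is_store_or_atomic)
{
   struct cache_bits bits = { false, false };
   if (coherent) {
      bits.glc = true;                  /* GLC: controls the L0 cache */
      bits.dlc = !is_store_or_atomic;   /* DLC: controls the L1 cache */
   }
   return bits;
}
```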

## RDNA `s_dcache_wb`

The `s_dcache_wb` instruction is not mentioned in the RDNA ISA doc, but it is needed in order
to achieve correct behavior in some SSBO CTS tests.

## RDNA subvector mode

The documentation of `s_subvector_loop_begin` and `s_subvector_mode_end` is not clear
on what sort of addressing should be used, but it says that it
"is equivalent to an `S_CBRANCH` with extra math", so the subvector loop handling
in ACO is done according to the `s_cbranch` doc.

## RDNA early rasterization

The ISA documentation says about `s_endpgm`:

> The hardware implicitly executes S_WAITCNT 0 and S_WAITCNT_VSCNT 0
> before executing this instruction.

What the doc doesn't say is that in case of NGG (and legacy VS) when there
are no param exports, the driver sets `NO_PC_EXPORT=1` for optimal performance,
and when this is set, the hardware will start clipping and rasterization
as soon as it encounters a position export with `DONE=1`, without waiting
for the NGG (or VS) to finish.

It can even launch PS waves before the NGG (or VS) waves have ended.

When this happens, any store performed by a VS is not guaranteed
to be complete when the PS tries to load it, so we need to manually
insert wait instructions before the position exports.

## A16 and G16

On GFX9, the A16 field enables both 16-bit addresses and derivatives.
On GFX10+ these are fully independent of each other: A16 controls 16-bit addresses
and G16 controls 16-bit derivatives. A16 without G16 uses 32-bit derivatives.

## POPS collision wave ID argument (GFX9-10.3)

The 2020 RDNA and RDNA 2 ISA references contain incorrect offsets and widths of
the fields of the "POPS collision wave ID" SGPR argument.

According to the code generated for Rasterizer Ordered View usage in Direct3D,
the correct layout is as follows (see the extraction sketch after this list):

* [31]: Whether overlap has occurred.
* [29:28] (GFX10+) / [28] (GFX9): ID of the packer the wave should be associated
  with.
* [25:16]: Newest overlapped wave ID.
* [9:0]: Current wave ID.
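
A minimal C sketch of extracting these fields; the helper names are illustrative:

```
#include <stdbool.h>
#include <stdint.h>

/* Field extraction per the layout above. The packer ID is 1 bit on GFX9
 * and 2 bits on GFX10+. */
static inline bool pops_overlap_occurred(uint32_t arg)              { return (arg >> 31) & 0x1; }
static inline uint32_t pops_packer_id_gfx9(uint32_t arg)            { return (arg >> 28) & 0x1; }
static inline uint32_t pops_packer_id_gfx10(uint32_t arg)           { return (arg >> 28) & 0x3; }
static inline uint32_t pops_newest_overlapped_wave_id(uint32_t arg) { return (arg >> 16) & 0x3ff; }
static inline uint32_t pops_current_wave_id(uint32_t arg)           { return arg & 0x3ff; }
```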

## RDNA3 `v_pk_fmac_f16_dpp`

"Table 30. Which instructions support DPP" in the RDNA3 ISA documentation has no exception for
VOP2 `v_pk_fmac_f16`, but like all other packed math opcodes, DPP does not function with it in
practice. RDNA1 and RDNA2 do support `v_pk_fmac_f16_dpp`.

## `ds_swizzle_b32` rotate/fft modes

These are first mentioned in the GFX9 (Vega) ISA doc, but information from the LLVM bug tracker
and testing show they were already present on GFX8.

# Hardware Bugs

## SMEM corrupts VCCZ on SI/CI

[See this LLVM source.](https://github.com/llvm/llvm-project/blob/acb089e12ae48b82c0b05c42326196a030df9b82/llvm/lib/Target/AMDGPU/SIInsertWaits.cpp#L580-L616)

After issuing an SMEM instruction, we need to wait for it to finish and then
write to vcc (for example, `s_mov_b64 vcc, vcc`) to correct vccz.

Currently, we don't do this.

## SGPR offset on MUBUF prevents addr clamping on SI/CI

[See this LLVM source.](https://github.com/llvm/llvm-project/blob/main/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp#L1917-L1922)

This leads to wrong bounds checking; using a VGPR offset fixes it.

## Unused VMEM/DS destination lanes can't be used without waiting

On GFX11, we can't safely read/write unused lanes of VMEM/DS destination
VGPRs without waiting for the load to finish.

## GCN / GFX6 hazards

### VINTRP followed by a read with `v_readfirstlane` or `v_readlane`

It's required to insert 1 wait state if the dst VGPR of any `v_interp_*` is
followed by a read with `v_readfirstlane` or `v_readlane` to fix GPU hangs on GFX6.
Note that `v_writelane_*` is apparently not affected. This hazard isn't
documented anywhere but AMD confirmed it.

## RDNA / GFX10 hazards

### SMEM store followed by a load with the same address

We found that an `s_buffer_load` will produce incorrect results if it is preceded
by an `s_buffer_store` with the same address. Inserting an `s_nop` between them
does not mitigate the issue, so an `s_waitcnt lgkmcnt(0)` must be inserted.
This is not mentioned by LLVM among the other GFX10 bugs, but LLVM doesn't use
SMEM stores, so it's not surprising that they didn't notice it.

### VMEMtoScalarWriteHazard

Triggered by:
VMEM/FLAT/GLOBAL/SCRATCH/DS instruction reads an SGPR (or EXEC, or M0).
Then, a SALU/SMEM instruction writes the same SGPR.

Mitigated by:
A VALU instruction or an `s_waitcnt` between the two instructions.

### SMEMtoVectorWriteHazard

Triggered by:
An SMEM instruction reads an SGPR. Then, a VALU instruction writes that same SGPR.

Mitigated by:
Any non-SOPP SALU instruction (except `s_setvskip`, `s_version`, and any non-lgkmcnt `s_waitcnt`).

### Offset3fBug

Any branch that is located at offset 0x3f will be buggy. Just insert some NOPs to make sure no branch
is located at this offset.

### InstFwdPrefetchBug

According to LLVM, the `s_inst_prefetch` instruction can cause a hang on GFX10.
Seems to be resolved on GFX10.3+. There are no further details.

### LdsMisalignedBug

When there is a misaligned multi-dword FLAT load/store instruction in WGP mode,
it needs to be split into multiple single-dword FLAT instructions.

ACO doesn't use FLAT load/store on GFX10, so is unaffected.

### FlatSegmentOffsetBug

The 12-bit immediate OFFSET field of FLAT instructions must always be 0.
GLOBAL and SCRATCH are unaffected.

ACO doesn't use FLAT load/store on GFX10, so is unaffected.

### VcmpxPermlaneHazard

Triggered by:
Any permlane instruction that follows any VOPC instruction which writes exec.

Mitigated by:
Any VALU instruction except `v_nop`.

### VcmpxExecWARHazard

Triggered by:
Any non-VALU instruction reads the EXEC mask. Then, any VALU instruction writes the EXEC mask.

Mitigated by:
A VALU instruction that writes an SGPR (or has a valid SDST operand), or `s_waitcnt_depctr 0xfffe`.
Note: `s_waitcnt_depctr` is an internal instruction, so there is no further information
about what it does or what its operand means.

### LdsBranchVmemWARHazard

Triggered by:
A VMEM/GLOBAL/SCRATCH instruction, then a branch, then a DS instruction,
or vice versa: a DS instruction, then a branch, then a VMEM/GLOBAL/SCRATCH instruction.

Mitigated by:
Only `s_waitcnt_vscnt null, 0`. Needed even if the first instruction is a load.

### NSAClauseBug

"MIMG-NSA in a hard clause has unpredictable results on GFX10.1"

### NSAMaxSize5

NSA MIMG instructions should be limited to 3 dwords before GFX10.3 to avoid
stability issues: https://reviews.llvm.org/D103348

## RDNA3 / GFX11 hazards

### VcmpxPermlaneHazard

Same as GFX10.

### LdsDirectVALUHazard

Triggered by:
LDSDIR instruction writing a VGPR soon after it's used by a VALU instruction.

Mitigated by:
A vdst wait, preferably using the LDSDIR's field.

### LdsDirectVMEMHazard

Triggered by:
LDSDIR instruction writing a VGPR after it's used by a VMEM/DS instruction.

Mitigated by:
Waiting for the VMEM/DS instruction to finish, a VALU or export instruction, or
`s_waitcnt_depctr 0xffe3`.

### VALUTransUseHazard

Triggered by:
A VALU instruction reading a VGPR written by a transcendental VALU instruction without 6+ VALU or 2+
transcendental instructions in-between.

Mitigated by:
A va_vdst=0 wait: `s_waitcnt_depctr 0x0fff`

### VALUPartialForwardingHazard

Triggered by:
A VALU instruction reading two VGPRs: one written before an exec write by SALU and one after. To
trigger, there must be less than 3 VALU between the first and second VGPR writes and less than 5
VALU between the second VGPR write and the current instruction.

Mitigated by:
A va_vdst=0 wait: `s_waitcnt_depctr 0x0fff`

### VALUMaskWriteHazard

Triggered by:
SALU writing then SALU or VALU reading an SGPR that was previously used as a lane mask for a VALU.

Mitigated by:
A VALU instruction reading a non-exec SGPR before the SALU write, or a sa_sdst=0 wait after the
SALU write: `s_waitcnt_depctr 0xfffe`