
README-ISA.md

# Unofficial GCN/RDNA ISA reference errata

## `v_sad_u32`

The Vega ISA reference writes its behaviour as:

```
D.u = abs(S0.i - S1.i) + S2.u.
```

This is incorrect. The actual behaviour is what is written in the GCN3 reference
guide:

```
ABS_DIFF (A,B) = (A>B) ? (A-B) : (B-A)
D.u = ABS_DIFF (S0.u,S1.u) + S2.u
```

The instruction doesn't subtract S0 and S1 and take the absolute value (the
_signed_ distance); it uses the _unsigned_ distance between the operands. So
`v_sad_u32(-5, 0, 0)` would return `4294967291` (`-5` interpreted as unsigned),
not `5`.
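The corrected behaviour can be modelled in a few lines of Python; this is purely an illustrative model (the function name just mirrors the instruction, it is not ACO code):

```python
def v_sad_u32(s0: int, s1: int, s2: int) -> int:
    # All operands are interpreted as unsigned 32-bit values.
    a = s0 & 0xFFFFFFFF
    b = s1 & 0xFFFFFFFF
    # Unsigned distance, as in the GCN3 ABS_DIFF definition.
    abs_diff = a - b if a > b else b - a
    return (abs_diff + (s2 & 0xFFFFFFFF)) & 0xFFFFFFFF

# -5 reinterpreted as unsigned is 4294967291, so its unsigned
# distance from 0 is 4294967291, not 5.
print(v_sad_u32(-5, 0, 0))  # 4294967291
```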
## `s_bfe_*`

The RDNA, Vega and GCN3 ISA references all state that these instructions don't
write SCC. They do.

## `v_bcnt_u32_b32`

The Vega ISA reference writes its behaviour as:

```
D.u = 0;
for i in 0 ... 31 do
D.u += (S0.u[i] == 1 ? 1 : 0);
endfor.
```

This is incorrect. The actual behaviour (and number of operands) is what
is written in the GCN3 reference guide:

```
D.u = CountOneBits(S0.u) + S1.u.
```
## `v_alignbyte_b32`

All versions of the ISA document are vague about it, but after some trial and
error we discovered that only 2 bits of the 3rd operand are used.
Therefore, this instruction can't shift by more than 24 bits.

The correct description of `v_alignbyte_b32` is probably the following:

```
D.u = ({S0, S1} >> (8 * S2.u[1:0])) & 0xffffffff
```
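A Python sketch of this description, with the 2-bit masking of the third operand made explicit (illustrative only, not ACO code):

```python
def v_alignbyte_b32(s0: int, s1: int, s2: int) -> int:
    # Concatenate {S0, S1} into a 64-bit value with S0 in the high half.
    combined = ((s0 & 0xFFFFFFFF) << 32) | (s1 & 0xFFFFFFFF)
    # Only S2[1:0] is used, so the byte shift is at most 3 (24 bits).
    shift = 8 * (s2 & 0x3)
    return (combined >> shift) & 0xFFFFFFFF

print(hex(v_alignbyte_b32(0x11223344, 0xAABBCCDD, 1)))  # 0x44aabbcc
# S2 = 4 behaves like S2 = 0, because only 2 bits are used:
print(hex(v_alignbyte_b32(0x11223344, 0xAABBCCDD, 4)))  # 0xaabbccdd
```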
## SMEM stores

The Vega ISA reference doesn't say this (or doesn't make it clear), but
the offset for SMEM stores must be in m0 if IMM == 0.

The RDNA ISA doesn't mention SMEM stores at all, but they seem to be supported
by the chip and are present in LLVM. AMD devs however highly recommend avoiding
these instructions.

## SMEM atomics

RDNA ISA: same as the SMEM stores, the ISA pretends they don't exist, but they
are there in LLVM.

## VMEM stores

All reference guides say (under "Vector Memory Instruction Data Dependencies"):

> When a VM instruction is issued, the address is immediately read out of VGPRs
> and sent to the texture cache. Any texture or buffer resources and samplers
> are also sent immediately. However, write-data is not immediately sent to the
> texture cache.

Reading that, one might think that waitcnts need to be added when writing to
the registers used for a VMEM store's data. Experimentation has shown that this
does not seem to be the case on GFX8 and GFX9 (GFX6 and GFX7 are untested). It
also seems unlikely, since NOPs are apparently needed in a subset of these
situations.

## MIMG opcodes on GFX8/GCN3

The `image_atomic_{swap,cmpswap,add,sub}` opcodes in the GCN3 ISA reference
guide are incorrect. The Vega ISA reference guide has the correct ones.

## VINTRP encoding

The Vega ISA doc says the encoding should be `110010`, but `110101` works.

## VOP1 instructions encoded as VOP3

The RDNA ISA doc says that `0x140` should be added to the opcode, but that
doesn't work. What works is adding `0x180`, which LLVM also does.

## FLAT, Scratch, Global instructions

The NV bit was removed in RDNA, but some parts of the doc still mention it.

RDNA ISA doc section 13.8.1 says that SADDR should be set to 0x7f when ADDR is
used, but section 9.3.1 says it should be set to NULL. We assume 9.3.1 is
correct and set it to SGPR_NULL.
## Legacy instructions

Some instructions have a `_LEGACY` variant which implements "DX9 rules", in which
zero "wins" in multiplications, ie. `0.0*x` is always `0.0`. The Vega ISA
mentions `V_MAC_LEGACY_F32`, but this instruction is not actually present on Vega.
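The "zero wins" rule can be illustrated with a small Python model (a hypothetical helper, not ACO code), contrasted with IEEE multiplication where `0.0 * Inf` produces NaN:

```python
import math

def v_mul_legacy_f32(a: float, b: float) -> float:
    # DX9 rule: if either factor is zero, the result is zero,
    # even when the other factor is Inf or NaN under IEEE rules.
    if a == 0.0 or b == 0.0:
        return 0.0
    return a * b

print(v_mul_legacy_f32(0.0, math.inf))  # 0.0
print(math.isnan(0.0 * math.inf))       # True (the IEEE result is NaN)
```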
## LDS size and allocation granule

The GFX7-8 ISA manuals are mistaken about the available LDS size.

* GFX7+ workgroups can use 64KB LDS.
  There is 64KB LDS per CU.
* GFX6 workgroups can use 32KB LDS.
  There is 64KB LDS per CU, but a single workgroup can only use half of it.

Regarding the LDS allocation granule, Mesa has the correct details and
the ISA manuals are mistaken.
## `m0` with LDS instructions on Vega and newer

The Vega ISA doc (both the old one and the "7nm" one) claims that LDS instructions
use the `m0` register for address clamping like older GPUs, but this is not the case.

In reality, only the `_addtid` variants of LDS instructions use `m0` on Vega and
newer GPUs, so the relevant section of the RDNA ISA doc seems to apply.
LLVM also doesn't emit any initialization of `m0` for LDS instructions, and this
was also confirmed by AMD devs.

## RDNA L0, L1 cache and DLC, GLC bits

The old L1 cache was renamed to L0, and a new L1 cache was added in RDNA. There
is one L1 cache per shader array. Some instruction encodings have DLC and
GLC bits that interact with the cache.

* DLC ("device level coherent") bit: controls the L1 cache
* GLC ("globally coherent") bit: controls the L0 cache

The recommendation from AMD devs is to always set these two bits at the same time,
as it doesn't make much sense to set them independently, aside from some
circumstances (eg. we needn't set DLC when only one shader array is used).

Stores and atomics always bypass the L1 cache, so they don't support the DLC bit,
and it shouldn't be set in these cases. Setting the DLC bit for these cases can
result in graphical glitches or hangs.

## RDNA `s_dcache_wb`

The `s_dcache_wb` instruction is not mentioned in the RDNA ISA doc, but it is
needed in order to achieve correct behavior in some SSBO CTS tests.

## RDNA subvector mode

The documentation of `s_subvector_loop_begin` and `s_subvector_mode_end` is not clear
on what sort of addressing should be used, but it says that it
"is equivalent to an `S_CBRANCH` with extra math", so the subvector loop handling
in ACO is done according to the `s_cbranch` doc.

## RDNA early rasterization

The ISA documentation says about `s_endpgm`:

> The hardware implicitly executes S_WAITCNT 0 and S_WAITCNT_VSCNT 0
> before executing this instruction.

What the doc doesn't say is that in case of NGG (and legacy VS) when there
are no param exports, the driver sets `NO_PC_EXPORT=1` for optimal performance,
and when this is set, the hardware will start clipping and rasterization
as soon as it encounters a position export with `DONE=1`, without waiting
for the NGG (or VS) to finish.

It can even launch PS waves before the NGG (or VS) wave ends.

When this happens, any store performed by a VS is not guaranteed
to be complete when PS tries to load it, so we need to manually
insert wait instructions before the position exports.
## A16 and G16

On GFX9, the A16 field enables both 16-bit addresses and derivatives.
Since GFX10 these are fully independent of each other: A16 controls 16-bit
addresses, and G16 controls 16-bit derivatives. A16 without G16 uses 32-bit
derivatives.

## POPS collision wave ID argument (GFX9-10.3)

The 2020 RDNA and RDNA 2 ISA references contain incorrect offsets and widths of
the fields of the "POPS collision wave ID" SGPR argument.

According to the code generated for Rasterizer Ordered View usage in Direct3D,
the correct layout is:

* [31]: Whether overlap has occurred.
* [29:28] (GFX10+) / [28] (GFX9): ID of the packer the wave should be associated
  with.
* [25:16]: Newest overlapped wave ID.
* [9:0]: Current wave ID.
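A sketch of unpacking this layout in Python (the function and field names are made up for illustration):

```python
def unpack_pops_collision_wave_id(arg: int, gfx_level: int) -> dict:
    # Packer ID is bits [29:28] on GFX10+, but only bit [28] on GFX9.
    packer_bits = 2 if gfx_level >= 10 else 1
    return {
        "overlap": (arg >> 31) & 0x1,
        "packer_id": (arg >> 28) & ((1 << packer_bits) - 1),
        "newest_overlapped_wave_id": (arg >> 16) & 0x3FF,
        "current_wave_id": arg & 0x3FF,
    }
```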
## RDNA3 `v_pk_fmac_f16_dpp`

"Table 30. Which instructions support DPP" in the RDNA3 ISA documentation lists
no exception for VOP2 `v_pk_fmac_f16`, but like all other packed math opcodes,
DPP does not function with it in practice. RDNA1 and RDNA2 do support
`v_pk_fmac_f16_dpp`.

## `ds_swizzle_b32` rotate/fft modes

These are first mentioned in the GFX9 (Vega) ISA doc, but information from the
LLVM bug tracker and testing show they were already present on GFX8.
# Hardware Bugs

## SMEM corrupts VCCZ on SI/CI

[See this LLVM source.](https://github.com/llvm/llvm-project/blob/acb089e12ae48b82c0b05c42326196a030df9b82/llvm/lib/Target/AMDGPU/SIInsertWaits.cpp#L580-L616)

After issuing an SMEM instruction, we need to wait for the SMEM instruction to
finish and then write to vcc (for example, `s_mov_b64 vcc, vcc`) to correct vccz.

Currently, we don't do this.

## SGPR offset on MUBUF prevents addr clamping on SI/CI

[See this LLVM source.](https://github.com/llvm/llvm-project/blob/main/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp#L1917-L1922)

This leads to wrong bounds checking; using a VGPR offset fixes it.

## Unused VMEM/DS destination lanes can't be used without waiting

On GFX11, we can't safely read/write unused lanes of VMEM/DS destination
VGPRs without waiting for the load to finish.

## GCN / GFX6 hazards

### VINTRP followed by a read with `v_readfirstlane` or `v_readlane`

It's required to insert 1 wait state if the dst VGPR of any `v_interp_*` is
followed by a read with `v_readfirstlane` or `v_readlane` to fix GPU hangs on GFX6.
Note that `v_writelane_*` is apparently not affected. This hazard isn't
documented anywhere, but AMD confirmed it.

## RDNA / GFX10 hazards

### SMEM store followed by a load with the same address

We found that an `s_buffer_load` will produce incorrect results if it is preceded
by an `s_buffer_store` with the same address. Inserting an `s_nop` between them
does not mitigate the issue, so an `s_waitcnt lgkmcnt(0)` must be inserted.
This is not mentioned by LLVM among the other GFX10 bugs, but LLVM doesn't use
SMEM stores, so it's not surprising that they didn't notice it.
### VMEMtoScalarWriteHazard

Triggered by:
A VMEM/FLAT/GLOBAL/SCRATCH/DS instruction reads an SGPR (or EXEC, or M0).
Then, a SALU/SMEM instruction writes the same SGPR.

Mitigated by:
A VALU instruction or an `s_waitcnt` between the two instructions.

### SMEMtoVectorWriteHazard

Triggered by:
An SMEM instruction reads an SGPR. Then, a VALU instruction writes that same SGPR.

Mitigated by:
Any non-SOPP SALU instruction (except `s_setvskip`, `s_version`, and any non-lgkmcnt `s_waitcnt`).

### Offset3fBug

Any branch that is located at offset 0x3f will be buggy. Just insert some NOPs to make sure no branch
is located at this offset.

### InstFwdPrefetchBug

According to LLVM, the `s_inst_prefetch` instruction can cause a hang on GFX10.
It seems to be resolved on GFX10.3+. There are no further details.

### LdsMisalignedBug

When there is a misaligned multi-dword FLAT load/store instruction in WGP mode,
it needs to be split into multiple single-dword FLAT instructions.

ACO doesn't use FLAT load/store on GFX10, so it is unaffected.

### FlatSegmentOffsetBug

The 12-bit immediate OFFSET field of FLAT instructions must always be 0.
GLOBAL and SCRATCH are unaffected.

ACO doesn't use FLAT load/store on GFX10, so it is unaffected.

### VcmpxPermlaneHazard

Triggered by:
Any permlane instruction that follows any VOPC instruction which writes exec.

Mitigated by:
Any VALU instruction except `v_nop`.

### VcmpxExecWARHazard

Triggered by:
Any non-VALU instruction reads the EXEC mask. Then, any VALU instruction writes the EXEC mask.

Mitigated by:
A VALU instruction that writes an SGPR (or has a valid SDST operand), or `s_waitcnt_depctr 0xfffe`.
Note: `s_waitcnt_depctr` is an internal instruction, so there is no further information
about what it does or what its operand means.

### LdsBranchVmemWARHazard

Triggered by:
A VMEM/GLOBAL/SCRATCH instruction, then a branch, then a DS instruction,
or vice versa: a DS instruction, then a branch, then a VMEM/GLOBAL/SCRATCH instruction.

Mitigated by:
Only `s_waitcnt_vscnt null, 0`. Needed even if the first instruction is a load.

### NSAClauseBug

"MIMG-NSA in a hard clause has unpredictable results on GFX10.1"

### NSAMaxSize5

NSA MIMG instructions should be limited to 3 dwords before GFX10.3 to avoid
stability issues: https://reviews.llvm.org/D103348

## RDNA3 / GFX11 hazards

### VcmpxPermlaneHazard

Same as GFX10.

### LdsDirectVALUHazard

Triggered by:
An LDSDIR instruction writing a VGPR soon after it's used by a VALU instruction.

Mitigated by:
A vdst wait, preferably using the LDSDIR instruction's own field.

### LdsDirectVMEMHazard

Triggered by:
An LDSDIR instruction writing a VGPR after it's used by a VMEM/DS instruction.

Mitigated by:
Waiting for the VMEM/DS instruction to finish, a VALU or export instruction, or
`s_waitcnt_depctr 0xffe3`.

### VALUTransUseHazard

Triggered by:
A VALU instruction reading a VGPR written by a transcendental VALU instruction without 6+ VALU or 2+
transcendental instructions in-between.

Mitigated by:
A va_vdst=0 wait: `s_waitcnt_depctr 0x0fff`

### VALUPartialForwardingHazard

Triggered by:
A VALU instruction reading two VGPRs: one written before an exec write by SALU and one after. To
trigger, there must be less than 3 VALU instructions between the first and second VGPR writes and
less than 5 VALU instructions between the second VGPR write and the current instruction.

Mitigated by:
A va_vdst=0 wait: `s_waitcnt_depctr 0x0fff`

### VALUMaskWriteHazard

Triggered by:
A SALU write, then a SALU or VALU read, of an SGPR that was previously used as a lane mask for a VALU.

Mitigated by:
A VALU instruction reading a non-exec SGPR before the SALU write, or a sa_sdst=0 wait after the
SALU write: `s_waitcnt_depctr 0xfffe`

README.md

# Welcome to ACO

ACO (short for *AMD compiler*) is a back-end compiler for AMD GCN / RDNA GPUs, based on the NIR compiler infrastructure.
Simply put, ACO translates shader programs from the NIR intermediate representation into a GCN / RDNA binary which the GPU can execute.

## Motivation

Why did we choose to develop a new compiler backend?

1. We'd like to give gamers a fluid, stutter-free experience, so we prioritize compilation speed.
2. Good divergence analysis allows us to better optimize runtime performance.
3. Issues can be fixed within mesa releases, independently of the schedule of other projects.
## Control flow

Modern GPUs are SIMD machines that execute the shader in parallel.
In case of GCN / RDNA, the parallelism is achieved by executing the shader on several waves, and each wave has several lanes (32 or 64).
When every lane executes exactly the same instructions and takes the same path, the control flow is uniform;
when some lanes take one path while other lanes take a different path, it's divergent.

Each hardware lane corresponds to a shader invocation from a software perspective.

The hardware doesn't directly support divergence,
so in case of divergent control flow, the GPU must execute both code paths, each with some lanes disabled.
This is why divergence is a performance concern in shader programming.

ACO deals with divergent control flow by maintaining two control flow graphs (CFG):

* logical CFG - directly translated from NIR and shows the intended control flow of the program.
* linear CFG - created according to Whole-Function Vectorization by Ralf Karrenberg and Sebastian Hack.
  The linear CFG represents how the program is physically executed on the GPU and may contain additional blocks for control flow handling and to avoid critical edges.
  Note that all nodes of the logical CFG also participate in the linear CFG, but not vice versa.
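The way the hardware executes both paths of a divergent branch with some lanes disabled can be sketched with a toy SIMT interpreter in Python (purely illustrative, all names are made up):

```python
def run_divergent_if(cond, then_fn, else_fn, values):
    # cond: per-lane branch condition; values: per-lane inputs.
    # Each lane's result comes from exactly one path, but the wave
    # steps through BOTH paths, masking off the inactive lanes.
    results = list(values)
    exec_mask = list(cond)                 # lanes taking the "then" path
    for lane, active in enumerate(exec_mask):
        if active:
            results[lane] = then_fn(values[lane])
    exec_mask = [not c for c in cond]      # invert exec for the "else" path
    for lane, active in enumerate(exec_mask):
        if active:
            results[lane] = else_fn(values[lane])
    return results

# A 4-lane wave where lanes 0 and 2 take the "then" path:
print(run_divergent_if([True, False, True, False],
                       lambda x: x + 1, lambda x: x - 1,
                       [10, 20, 30, 40]))  # [11, 19, 31, 39]
```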
## Compilation phases

#### Instruction Selection

The instruction selection is based around the divergence analysis and works in 3 passes on the NIR shader.

1. The divergence analysis pass calculates, for each SSA definition, whether its value is guaranteed to be uniform across all threads of the wave (subgroup).
2. We determine the register class for each SSA definition.
3. Actual instruction selection. The advanced divergence analysis allows for better usage of the scalar unit, scalar memory loads and the scalar register file.

We have two types of instructions:

* Hardware instructions as specified by the GCN / RDNA instruction set architecture manuals.
* Pseudo instructions, which are helpers that encapsulate more complex functionality.
  They eventually get lowered to real hardware instructions.

Each instruction can have operands (temporaries that it reads) and definitions (temporaries that it writes).
Temporaries can be fixed to a specific register, or just specify a register class (either a single register, or a vector of several registers).
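The first pass can be sketched as a simple forward propagation over SSA definitions: anything derived from a per-lane source is divergent, everything else stays uniform. This is a toy illustration with hypothetical opcode names, not ACO's actual analysis:

```python
def divergence_analysis(instructions):
    # instructions: (dest, opcode, operands) triples in program order.
    # A value is divergent if it comes from a per-lane source (here,
    # the lane id) or if any of its operands is divergent.
    divergent = set()
    for dest, opcode, operands in instructions:
        if opcode == "load_lane_id" or any(op in divergent for op in operands):
            divergent.add(dest)
    return divergent

prog = [
    ("a", "load_constant", []),
    ("b", "load_lane_id", []),   # per-lane source: divergent
    ("c", "add", ["a", "a"]),    # uniform operands: uniform
    ("d", "add", ["a", "b"]),    # depends on the lane id: divergent
]
print(sorted(divergence_analysis(prog)))  # ['b', 'd']
```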
#### Repair SSA

This pass repairs SSA in the case of mismatches between the logical and linear CFG, where the definition of a linear temporary logically dominates its users but not linearly. It is followed by lower_phis to lower the phis created by this pass.

Instruction selection might create mismatches between the logical CFG (the input NIR's CFG) and the linear CFG in the following situations:
- We add a break at the end of a loop in case it has no active invocations (an empty exec can prevent any logical breaks from being taken). This creates a linear edge but no logical edge, and SGPR uses outside the loop can require a phi.
- We add an empty exec skip over a block. This is a branch which skips most contents of a sequence of instructions if exec is empty. To avoid critical edges, the inside of the construct logically dominates the merge but not linearly.
- An SGPR is defined in one side of a divergent IF but used in or after the merge block. If the other side of the IF ends in a branch, a phi is not necessary according to the logical CFG, but it is for the linear CFG. However, `sanitize_cf_list()` should already resolve this before translation from NIR for additional reasons.

#### Lower Phis

After instruction selection, some phi instructions need further lowering. This includes booleans, which are represented as scalar values. Because the scalar ALU doesn't respect the execution mask, divergent boolean phis need to be lowered to SALU shuffle code. This pass also inserts the necessary code in order to fix phis with subdword access and repairs phis in case of mismatches between the logical and linear CFG.

#### Lower Subdword

For GFX6 and GFX7, this pass lowers subdword pseudo instructions early.
#### Value Numbering

The value numbering pass is necessary for two reasons: the lack of descriptor load representation in NIR,
and the fact that every NIR instruction that gets emitted as multiple ACO instructions also has potential for CSE.
This pass does dominator-tree value numbering.
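The CSE idea behind the pass can be sketched with a block-local simplification (ACO's pass actually works across the dominator tree; this toy version handles a single block and made-up instruction triples):

```python
def local_value_numbering(instructions):
    # instructions: (dest, opcode, operands) triples within one block.
    # Two instructions with the same opcode and (canonicalized)
    # operands compute the same value, so the later one is dropped.
    available = {}   # (opcode, operands) -> dest that already computes it
    replaced = {}    # dropped dest -> canonical dest
    out = []
    for dest, opcode, operands in instructions:
        operands = tuple(replaced.get(op, op) for op in operands)
        key = (opcode, operands)
        if key in available:
            replaced[dest] = available[key]
        else:
            available[key] = dest
            out.append((dest, opcode, operands))
    return out, replaced

out, replaced = local_value_numbering([
    ("x", "mul", ("a", "b")),
    ("y", "mul", ("a", "b")),   # same value as x: dropped
    ("z", "add", ("y", "c")),   # rewritten to use x
])
print(out)       # [('x', 'mul', ('a', 'b')), ('z', 'add', ('x', 'c'))]
print(replaced)  # {'y': 'x'}
```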
#### Optimization

In this phase, simpler instructions are combined into more complex instructions (like the different versions of multiply-add as well as neg, abs, clamp, and output modifiers), constants are inlined, moves are eliminated, etc.
Exactly which optimizations are performed depends on the hardware for which the shader is being compiled.
After this, repair_ssa needs to be run again in case the optimizer moved an SGPR use to a different block.

#### Setup of reduction temporaries

This pass is responsible for making sure that register allocation is correct for reductions, by adding pseudo instructions that utilize linear VGPRs.
When a temporary has a linear VGPR register class, this means that the variable is considered *live* in the linear control flow graph.

#### Insert exec mask

In the GCN/RDNA architecture, there is a special register called `exec` which is used for manually controlling which VALU threads (aka. *lanes*) are active. The value of `exec` has to change in divergent branches, loops, etc. and it needs to be restored after the branch or loop is complete. This pass ensures that the correct lanes are active in every branch.
#### Live-Variable Analysis

A live-variable analysis is used to calculate the register need of the shader.
This information is used for spilling and scheduling before register allocation.
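The classic backward dataflow behind such an analysis can be sketched in Python (a fixed-point iteration on a toy CFG; this illustrates the idea only, not ACO's implementation):

```python
def live_variables(blocks, succs):
    # blocks: name -> list of (defs, uses) per instruction.
    # succs:  name -> list of successor block names.
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:  # iterate until a fixed point is reached
        changed = False
        for b, insts in blocks.items():
            # A variable is live-out if it is live-in in any successor.
            out = set().union(*(live_in[s] for s in succs[b])) if succs[b] else set()
            live = set(out)
            for defs, uses in reversed(insts):  # walk the block backwards
                live -= set(defs)
                live |= set(uses)
            if live != live_in[b] or out != live_out[b]:
                live_in[b], live_out[b] = live, out
                changed = True
    return live_in, live_out

blocks = {"A": [(("x",), ()), (("y",), ("x",))],  # A defines x, then y = f(x)
          "B": [((), ("y",))]}                    # B uses y
succs = {"A": ["B"], "B": []}
live_in, live_out = live_variables(blocks, succs)
print(live_out["A"])  # {'y'}
```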
#### Spilling

First, we lower the shader program to CSSA form.
Then, if the register demand exceeds the global limit, this pass lowers register usage by temporarily storing excess scalar values in free vector registers, or excess vector values in scratch memory, and reloading them when needed. It is based on the paper "Register Spilling and Live-Range Splitting for SSA-Form Programs".

#### Instruction Scheduling

Scheduling is another NP-complete problem where basically all known heuristics suffer from unpredictable changes in register pressure. For that reason, the implemented scheduler does not completely re-schedule all instructions, but only aims to move up memory loads as far as possible without exceeding the maximum register limit for the pre-calculated wave count. The reason this works is that ILP is very limited on GCN. This approach looks promising so far.

#### Register Allocation

The register allocator works on SSA (as opposed to LLVM's, which works on virtual registers). The SSA properties guarantee that there are always as many registers available as needed. The problem is that some instructions require a vector of neighboring registers to be available, but the free regs might be scattered. In this case, the register allocator inserts shuffle code (moving some temporaries to other registers) to make space for the variable. The assumption is that it is (almost) always better to have a few more moves than to sacrifice a wave. The RA does SSA-reconstruction on the fly, which makes its runtime linear.
#### Optimization (post-RA)

Optimizations which depend on register assignment (like branching on VCCZ) are performed.

#### SSA Elimination

The next step is a pass out of SSA, inserting parallelcopies at the end of blocks to match the phi nodes' semantics.

#### Jump Threading

This pass aims to eliminate empty or unnecessary basic blocks. As this introduces critical edges, it can only be performed after SSA elimination.

#### Lower to HW instructions

Most pseudo instructions are lowered to actual machine instructions.
These are mostly parallel copy instructions created by instruction selection or register allocation and spill/reload code.

#### VOPD Scheduling

This pass makes use of the VOPD instruction encoding on GFX11+. When using wave32 mode, this pass works on a partial dependency graph in order to combine pairs of VALU instructions into single VOPD instructions.

#### ILP Scheduling

This second scheduler works on registers rather than SSA values to determine dependencies. It implements a forward list scheduling algorithm using a partial dependency graph of a few instructions at a time, and aims to create larger memory clauses and improve ILP.

#### Insert wait states

GCN requires some wait states to be manually inserted in order to ensure correct behavior on memory instructions and some register dependencies.
This means that we need to insert `s_waitcnt` instructions (and their variants) so that the shader program waits until e.g. a memory operation is complete.

#### Resolve hazards and insert NOPs

Some instructions require wait states or other instructions to resolve hazards which are not handled by the hardware.
This pass makes sure that no known hazards occur.

#### Insert delay_alu and form clauses

These passes introduce optional instructions which provide performance hints to the hardware. `s_delay_alu` is available on GFX11+ and describes ALU dependencies in order to allow the hardware to execute instructions from a different wave in the meantime. `s_clause` is available on GFX10+ with the purpose of completing an entire set of memory instructions before switching to a different wave.

#### Emit program - Assembler

The assembler emits the actual binary that will be sent to the hardware for execution. ACO's assembler is straightforward because all instructions have their format, opcode, registers and potential fields already available, so it only needs to cater to the differences between each hardware generation.
## Supported shader stages

Hardware stages (as executed on the chip) don't exactly match software stages (as defined in OpenGL / Vulkan).
Which software stage gets executed on which hardware stage depends on what kind of software stages are present in the current pipeline.

An important difference is that VS is always the first stage to run in SW models,
whereas HW VS refers to the last HW stage before fragment shading in GCN/RDNA terminology.
That's why, among other things, the HW VS is no longer used to execute the SW VS when tessellation or geometry shading are used.

#### Glossary of software stages

* VS = Vertex Shader
* TCS = Tessellation Control Shader, equivalent to D3D HS = Hull Shader
* TES = Tessellation Evaluation Shader, equivalent to D3D DS = Domain Shader
* GS = Geometry Shader
* FS = Fragment Shader, equivalent to D3D PS = Pixel Shader
* CS = Compute Shader
* TS = Task Shader
* MS = Mesh Shader

#### Glossary of hardware stages

* LS = Local Shader (merged into HS on GFX9+), only runs the SW VS when tessellation is used
* HS = Hull Shader, the HW equivalent of a Tessellation Control Shader, runs before the fixed function hardware performs tessellation
* ES = Export Shader (merged into GS on GFX9+), if there is a GS in the SW pipeline, the preceding stage (ie. SW VS or SW TES) always has to run on this HW stage
* GS = Geometry Shader, also known as legacy GS
* VS = Vertex Shader, **not equivalent to SW VS**: when there is a GS in the SW pipeline this stage runs a "GS copy" shader, otherwise it always runs the SW stage before FS
* NGG = Next Generation Geometry, a new hardware stage that replaces the legacy HW GS and HW VS on RDNA GPUs
* PS = Pixel Shader, the HW equivalent to SW FS
* CS = Compute Shader

##### Notes about HW VS and the "GS copy" shader

HW PS reads its inputs from a special ring buffer called the Parameter Cache (PC) that only HW VS can write to, using export instructions.
However, legacy GS stores its output in VRAM (before GFX10/NGG).
So in order for HW PS to be able to read the GS outputs, we must run something on the VS stage which reads the GS outputs
from VRAM and exports them to the PC. This is what we call a "GS copy" shader.
From a HW perspective the "GS copy" shader is in fact a VS (it runs on the HW VS stage),
but from a SW perspective it's not part of the traditional pipeline,
it's just some "glue code" that we need for outputs to play nicely.

On GFX10/NGG this limitation no longer exists, because NGG can export directly to the PC.

##### Notes about the attribute ring

Starting with GFX11, the parameter cache is replaced by the attribute ring,
which is a discardable ring buffer located in VRAM.
The outputs of the last pre-rasterization stage (VS, TES, GS or MS) are stored here.

The attribute ring is designed to utilize the Infinity Cache.
Store instructions are arranged so that each instruction writes a full cache line,
so the GPU will never actually have to write any of that to VRAM.

##### Notes about merged shaders

The merged stages on GFX9 (and GFX10/legacy) are: LSHS and ESGS. On GFX10/NGG, ESGS is merged with HW VS into NGG.

This might be confusing due to a mismatch between the number of invocations of these shaders.
For example, ES is per-vertex, but GS is per-primitive.
This is why merged shaders get an argument called `merged_wave_info` which tells how many invocations each part needs,
and there is some code at the beginning of each part to ensure the correct number of invocations by disabling some threads.
So, think about these as two independent shader programs slapped together.
215### Which software stage runs on which hardware stage?
216
217#### Graphics Pipeline
218
219##### GFX6-8:
220
221* Each SW stage has its own HW stage
222* LS and HS share the same LDS space, so LS can store its output to LDS, where HS can read it
223* HS, ES, GS outputs are stored in VRAM, next stage reads these from VRAM
224* GS outputs got to VRAM, so they have to be copied by a GS copy shader running on the HW VS stage
225
226| GFX6-8 HW stages:       | LS  | HS  | ES  | GS  | VS     | PS | ACO terminology |
227| -----------------------:|:----|:----|:----|:----|:-------|:---|:----------------|
228| SW stages: only VS+PS:  |     |     |     |     | VS     | FS | `vertex_vs`, `fragment_fs` |
229|            with tess:   | VS  | TCS |     |     | TES    | FS | `vertex_ls`, `tess_control_hs`, `tess_eval_vs`, `fragment_fs` |
230|            with GS:     |     |     | VS  | GS  | GS copy| FS | `vertex_es`, `geometry_gs`, `gs_copy_vs`, `fragment_fs` |
231|            with both:   | VS  | TCS | TES | GS  | GS copy| FS | `vertex_ls`, `tess_control_hs`, `tess_eval_es`, `geometry_gs`, `gs_copy_vs`, `fragment_fs` |
232
##### GFX9+ (including GFX10/legacy):

* HW LS and HS stages are merged, and the merged shader still uses LDS in the same way as before
* HW ES and GS stages are merged, so ES outputs can go to LDS instead of VRAM
* LSHS outputs and ESGS outputs are still stored in VRAM, so a GS copy shader is still necessary

| GFX9+ HW stages:        | LSHS      | ESGS      | VS     | PS | ACO terminology |
| -----------------------:|:----------|:----------|:-------|:---|:----------------|
| SW stages: only VS+PS:  |           |           | VS     | FS | `vertex_vs`, `fragment_fs` |
|            with tess:   | VS + TCS  |           | TES    | FS | `vertex_tess_control_hs`, `tess_eval_vs`, `fragment_fs` |
|            with GS:     |           | VS + GS   | GS copy| FS | `vertex_geometry_gs`, `gs_copy_vs`, `fragment_fs` |
|            with both:   | VS + TCS  | TES + GS  | GS copy| FS | `vertex_tess_control_hs`, `tess_eval_geometry_gs`, `gs_copy_vs`, `fragment_fs` |

##### NGG (GFX10+ only):

* HW GS and VS stages are now merged, and NGG can export directly to the PC
* GS copy shaders are no longer needed
* On GFX10.3+, per-primitive attributes (parameters) are also supported
* On GFX11+, parameter exports are replaced by attribute ring stores

| GFX10/NGG HW stages:    | LSHS      | NGG                | PS | ACO terminology |
| -----------------------:|:----------|:-------------------|:---|:----------------|
| SW stages: only VS+PS:  |           | VS                 | FS | `vertex_ngg`, `fragment_fs` |
|            with tess:   | VS + TCS  | TES                | FS | `vertex_tess_control_hs`, `tess_eval_ngg`, `fragment_fs` |
|            with GS:     |           | VS + GS            | FS | `vertex_geometry_ngg`, `fragment_fs` |
|            with both:   | VS + TCS  | TES + GS           | FS | `vertex_tess_control_hs`, `tess_eval_geometry_ngg`, `fragment_fs` |

#### Mesh Shading Graphics Pipeline

GFX10.3+:

* TS runs as a CS and stores its output payload to VRAM
* MS runs on NGG, loads its inputs from VRAM and stores its outputs to LDS, then to the PC (or attribute ring)
* Pixel shaders work the same way as before

| GFX10.3+ HW stages:     | CS    | NGG   | PS | ACO terminology |
| -----------------------:|:------|:------|:---|:----------------|
| SW stages: only MS+PS:  |       | MS    | FS | `mesh_ngg`, `fragment_fs` |
|            with task:   | TS    | MS    | FS | `task_cs`, `mesh_ngg`, `fragment_fs` |

#### Compute pipeline

GFX6-10:

* Note that the SW CS always runs on the HW CS stage on all HW generations.

| GFX6-10 HW stage:       | CS   | ACO terminology |
| -----------------------:|:-----|:----------------|
| SW stage:               | CS   | `compute_cs`    |

## How to debug

Handy `RADV_DEBUG` options that help with ACO debugging:

* `nocache` - you always want to use this when debugging, otherwise you risk using a broken shader from the cache.
* `shaders` - makes ACO print the IR after register allocation, as well as the disassembled shader binary.
* `metashaders` - same as `shaders`, but for built-in RADV shaders.
* `preoptir` - makes ACO print the final NIR shader before instruction selection, as well as the ACO IR after instruction selection.
* `nongg` - disables NGG support.

We also have `ACO_DEBUG` options:

* `validateir` - Validates the ACO IR between compilation stages. Enabled by default in debug builds, disabled in release builds.
* `validatera` - Performs a register allocation (RA) validation.
* `force-waitcnt` - Forces ACO to emit a wait state after each instruction when there is something to wait for. Harms performance.
* `novn` - Disables the ACO value numbering stage.
* `noopt` - Disables the ACO optimizer.
* `nosched` - Disables the ACO pre-RA and post-RA schedulers.
* `nosched-ilp` - Disables the ACO post-RA ILP scheduler.

Note that you need to **combine these options into a comma-separated list**, for example `RADV_DEBUG=nocache,shaders`; if you set the same variable more than once, only the last value takes effect. (This is how all environment variables work, yet it is a common mistake.) Example:

```
RADV_DEBUG=nocache,shaders ACO_DEBUG=validateir,validatera vkcube
```

### Using GCC sanitizers

GCC has several sanitizers which can help track down hard-to-diagnose issues. To use these, you need to pass
the `-Db_sanitize` option to `meson` when building mesa. For example, `-Db_sanitize=undefined` will add support for
the undefined behavior sanitizer.

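As a concrete sketch, a fresh sanitized build might be configured like this (`b_sanitize` and `buildtype` are meson's built-in base options; the build directory name is arbitrary):

```shell
# Configure a debug build with the undefined behavior sanitizer enabled
meson setup build-ubsan -Dbuildtype=debug -Db_sanitize=undefined
ninja -C build-ubsan
```

Keep in mind that sanitized builds run noticeably slower, so use them only while debugging.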
### Hardened builds and glibc++ assertions

Several Linux distributions ship "hardened" builds, meaning that downstream packaging adds several special
compiler flags which are not used in mesa builds by default. These may be responsible for
some bug reports of inexplicable crashes with assertion failures that you can't reproduce.

Most notable are the glibc++ debug flags, which you can enable by adding the `-D_GLIBCXX_ASSERTIONS=1` and
`-D_GLIBCXX_DEBUG=1` flags to your compiler flags.

To see the full list of downstream compiler flags, you can use eg. `rpm --eval "%optflags"`
on Red Hat based distros like Fedora.

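One way to reproduce such an issue locally (an illustrative sketch, not the only approach) is to pass those flags through `CXXFLAGS` when configuring the build; meson picks up compiler flags from the environment at setup time:

```shell
# Enable the libstdc++ assertion/debug macros for a local mesa build
CXXFLAGS="-D_GLIBCXX_ASSERTIONS=1 -D_GLIBCXX_DEBUG=1" meson setup build-hardened
ninja -C build-hardened
```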
### Good practices

Here are some good practices we learned while debugging visual corruption and hangs.

1. Bisecting shaders:
    * Use RenderDoc when examining shaders. It is deterministic, while real games often use multi-threading or change the order in which shaders get compiled.
    * Edit `radv_shader.c` or `radv_pipeline.c` to change whether shaders are compiled with LLVM or ACO.
2. Things to check early:
    * Disable value numbering, the optimizer and/or the schedulers.
      Note that if any of these change the output, it does not necessarily mean that the error is there, as register assignment also changes.
3. Finding the instruction causing a hang:
    * The ability to directly manipulate the binaries gives us an easy way to find the exact instruction which causes the hang.
      Use NULL exports (for FS and VS) and `s_endpgm` to end the shader early and narrow down the problematic instruction.
4. Other faulty instructions:
    * Use print_asm and check for illegal instructions.
    * Compare the assembly to the ACO IR to see if it matches what we want (this can take a while).
      Typical issues are a wrong instruction format leading to a wrong opcode, or an SGPR used for a VGPR field.
5. Comparing to the LLVM backend:
   * If nothing else helps, we are probably just doing something wrong. The LLVM backend is quite mature, so its output might help find differences, but this can be a long road.
6. Investigating regressions in shaders:
   * If you know that something used to work and are sure it's a shader problem,
     use `RADV_DEBUG=shaders` to print both the correct and incorrect version of the shader.
     You can further filter by shader stage and compilation stage, eg. `RADV_DEBUG=ps,nir,asm`.
   * Copy the printed shaders into a diff viewer, eg. Meld, to quickly find what changed
     between the two versions.

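The regression workflow in point 6 can be sketched as follows (assuming `vkcube` as the reproducer and Meld as the diff viewer; substitute your own application and tools):

```shell
# Dump the shaders from a known-good and a regressed driver build,
# then diff the two dumps side by side.
RADV_DEBUG=nocache,shaders vkcube &> good.txt   # run against the known-good build
RADV_DEBUG=nocache,shaders vkcube &> bad.txt    # run against the regressed build
meld good.txt bad.txt
```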