=====================
Booting AArch64 Linux
=====================

Author: Will Deacon <will.deacon@arm.com>

Date  : 07 September 2012

This document is based on the ARM booting document by Russell King and
is relevant to all public releases of the AArch64 Linux kernel.

The AArch64 exception model is made up of a number of exception levels
(EL0 - EL3), with EL0, EL1 and EL2 having a secure and a non-secure
counterpart.  EL2 is the hypervisor level, EL3 is the highest priority
level and exists only in secure mode. Both are architecturally optional.

For the purposes of this document, we will use the term `boot loader`
simply to define all software that executes on the CPU(s) before control
is passed to the Linux kernel.  This may include secure monitor and
hypervisor code, or it may just be a handful of instructions for
preparing a minimal boot environment.

Essentially, the boot loader should provide (as a minimum) the
following:

1. Setup and initialise the RAM
2. Setup the device tree
3. Decompress the kernel image
4. Call the kernel image


1. Setup and initialise RAM
---------------------------

Requirement: MANDATORY

The boot loader is expected to find and initialise all RAM that the
kernel will use for volatile data storage in the system.  It performs
this in a machine dependent manner.  (It may use internal algorithms
to automatically locate and size all RAM, or it may use knowledge of
the RAM in the machine, or any other method the boot loader designer
sees fit.)


2. Setup the device tree
------------------------

Requirement: MANDATORY

The device tree blob (dtb) must be placed on an 8-byte boundary and must
not exceed 2 megabytes in size. Since the dtb will be mapped cacheable
using blocks of up to 2 megabytes in size, it must not be placed within
any 2M region which must be mapped with any specific attributes.

NOTE: versions prior to v4.2 also require that the DTB be placed within
the 512 MB region starting at text_offset bytes below the kernel Image.
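
For illustration only, a boot loader could sanity check a candidate dtb
placement with a C sketch along these lines.  The helper name and the
SZ_2M constant are invented for this example, and the "no 2M region with
specific attributes" rule is platform specific, so it is not checked
here::

  #include <stdbool.h>
  #include <stdint.h>

  #define SZ_2M  (UINT64_C(2) * 1024 * 1024)

  /* Check the two rules above: 8-byte alignment and a maximum size of
   * 2 megabytes. */
  static bool dtb_placement_ok(uint64_t dtb_phys, uint64_t dtb_size)
  {
          return (dtb_phys % 8 == 0) && (dtb_size <= SZ_2M);
  }
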
3. Decompress the kernel image
------------------------------

Requirement: OPTIONAL

The AArch64 kernel does not currently provide a decompressor and
therefore requires decompression (gzip etc.) to be performed by the boot
loader if a compressed Image target (e.g. Image.gz) is used.  For
bootloaders that do not implement this requirement, the uncompressed
Image target is available instead.


4. Call the kernel image
------------------------

Requirement: MANDATORY

The decompressed kernel image contains a 64-byte header as follows::

  u32 code0;                    /* Executable code */
  u32 code1;                    /* Executable code */
  u64 text_offset;              /* Image load offset, little endian */
  u64 image_size;               /* Effective Image size, little endian */
  u64 flags;                    /* kernel flags, little endian */
  u64 res2      = 0;            /* reserved */
  u64 res3      = 0;            /* reserved */
  u64 res4      = 0;            /* reserved */
  u32 magic     = 0x644d5241;   /* Magic number, little endian, "ARM\x64" */
  u32 res5;                     /* reserved (used for PE COFF offset) */
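
For reference, the same 64-byte layout can be expressed as a C structure
that a boot loader might use when inspecting an Image file.  The struct,
field and macro names below are illustrative only and are not part of any
kernel interface::

  #include <stdint.h>

  #define ARM64_IMAGE_MAGIC  0x644d5241u        /* "ARM\x64", little endian */

  struct arm64_image_header {
          uint32_t code0;         /* executable code */
          uint32_t code1;         /* executable code */
          uint64_t text_offset;   /* image load offset, little endian */
          uint64_t image_size;    /* effective Image size, little endian */
          uint64_t flags;         /* kernel flags, little endian */
          uint64_t res2;          /* reserved */
          uint64_t res3;          /* reserved */
          uint64_t res4;          /* reserved */
          uint32_t magic;         /* magic number, little endian */
          uint32_t res5;          /* reserved (used for PE COFF offset) */
  };

  /* Sanity check a header already loaded into memory.  A little-endian
   * boot loader can compare the fields directly; a big-endian one must
   * byte-swap them first. */
  static int arm64_image_header_valid(const struct arm64_image_header *h)
  {
          return h->magic == ARM64_IMAGE_MAGIC;
  }

With natural alignment the structure is exactly 64 bytes, matching the
header size above.
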
Header notes:

- As of v3.17, all fields are little endian unless stated otherwise.

- code0/code1 are responsible for branching to stext.

- when booting through EFI, code0/code1 are initially skipped.
  res5 is an offset to the PE header and the PE header has the EFI
  entry point (efi_stub_entry).  When the stub has done its work, it
  jumps to code0 to resume the normal boot process.

- Prior to v3.17, the endianness of text_offset was not specified.  In
  these cases image_size is zero and text_offset is 0x80000 in the
  endianness of the kernel.  Where image_size is non-zero image_size is
  little-endian and must be respected.  Where image_size is zero,
  text_offset can be assumed to be 0x80000.

- The flags field (introduced in v3.17) is a little-endian 64-bit field
  composed as follows:

  ============= ===============================================================
  Bit 0         Kernel endianness.  1 if BE, 0 if LE.
  Bit 1-2       Kernel Page size.

                * 0 - Unspecified.
                * 1 - 4K
                * 2 - 16K
                * 3 - 64K
  Bit 3         Kernel physical placement

                0
                  2MB aligned base should be as close as possible
                  to the base of DRAM, since memory below it is not
                  accessible via the linear mapping
                1
                  2MB aligned base such that all image_size bytes
                  counted from the start of the image are within
                  the 48-bit addressable range of physical memory
  Bits 4-63     Reserved.
  ============= ===============================================================

- When image_size is zero, a bootloader should attempt to keep as much
  memory as possible free for use by the kernel immediately after the
  end of the kernel image. The amount of space required will vary
  depending on selected features, and is effectively unbound.
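
As an illustration, the flags field can be decoded along the following
lines; the macro and helper names are invented for this sketch::

  #include <stdint.h>

  #define ARM64_FLAG_BE           (UINT64_C(1) << 0)   /* bit 0: 1 = BE, 0 = LE */
  #define ARM64_FLAG_PAGE_SHIFT   1                    /* bits 1-2: page size */
  #define ARM64_FLAG_PAGE_MASK    (UINT64_C(3) << 1)
  #define ARM64_FLAG_PHYS_BASE    (UINT64_C(1) << 3)   /* bit 3: placement */

  /* Returns the kernel page size in KB, or 0 if unspecified. */
  static unsigned int kernel_page_size_kb(uint64_t flags)
  {
          switch ((flags & ARM64_FLAG_PAGE_MASK) >> ARM64_FLAG_PAGE_SHIFT) {
          case 1: return 4;
          case 2: return 16;
          case 3: return 64;
          default: return 0;
          }
  }
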
The Image must be placed text_offset bytes from a 2MB aligned base
address anywhere in usable system RAM and called there. The region
between the 2MB aligned base address and the start of the image has no
special significance to the kernel, and may be used for other purposes.
At least image_size bytes from the start of the image must be free for
use by the kernel.

NOTE: versions prior to v4.6 cannot make use of memory below the
physical offset of the Image, so it is recommended that the Image be
placed as close as possible to the start of system RAM.

If an initrd/initramfs is passed to the kernel at boot, it must reside
entirely within a 1 GB aligned physical memory window of up to 32 GB in
size that fully covers the kernel Image as well.

Any memory described to the kernel (even that below the start of the
image) which is not marked as reserved from the kernel (e.g., with a
memreserve region in the device tree) will be considered as available to
the kernel.
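
Putting the placement rules together, a boot loader following the "as
close as possible to the base of DRAM" policy (flags bit 3 == 0) might
pick a load address with a sketch like the following.  dram_base comes
from the platform and text_offset from the Image header, and the caller
remains responsible for keeping at least image_size bytes free at the
returned address; the function name is invented for this example::

  #include <stdint.h>

  #define SZ_2M  (UINT64_C(2) * 1024 * 1024)

  /* Round the start of DRAM up to a 2MB aligned base, then place the
   * Image text_offset bytes above it. */
  static uint64_t choose_load_address(uint64_t dram_base, uint64_t text_offset)
  {
          uint64_t base = (dram_base + SZ_2M - 1) & ~(SZ_2M - 1);

          return base + text_offset;
  }
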
Before jumping into the kernel, the following conditions must be met:

- Quiesce all DMA capable devices so that memory does not get
  corrupted by bogus network packets or disk data.  This will save
  you many hours of debug.

- Primary CPU general-purpose register settings:

  - x0 = physical address of device tree blob (dtb) in system RAM.
  - x1 = 0 (reserved for future use)
  - x2 = 0 (reserved for future use)
  - x3 = 0 (reserved for future use)

- CPU mode

  All forms of interrupts must be masked in PSTATE.DAIF (Debug, SError,
  IRQ and FIQ).
  The CPU must be in non-secure state, either in EL2 (RECOMMENDED in order
  to have access to the virtualisation extensions), or in EL1.

- Caches, MMUs

  The MMU must be off.

  The instruction cache may be on or off, and must not hold any stale
  entries corresponding to the loaded kernel image.

  The address range corresponding to the loaded kernel image must be
  cleaned to the PoC.  In the presence of a system cache or other
  coherent masters with caches enabled, this will typically require
  cache maintenance by VA rather than set/way operations.
  System caches which respect the architected cache maintenance by VA
  operations must be configured and may be enabled.
  System caches which do not respect architected cache maintenance by VA
  operations (not recommended) must be configured and disabled.

- Architected timers

  CNTFRQ must be programmed with the timer frequency and CNTVOFF must
  be programmed with a consistent value on all CPUs.  If entering the
  kernel at EL1, CNTHCTL_EL2 must have EL1PCTEN (bit 0) set where
  available.

- Coherency

  All CPUs to be booted by the kernel must be part of the same coherency
  domain on entry to the kernel.  This may require IMPLEMENTATION DEFINED
  initialisation to enable the receiving of maintenance operations on
  each CPU.

- System registers

  All writable architected system registers at or below the exception
  level where the kernel image will be entered must be initialised by
  software at a higher exception level to prevent execution in an UNKNOWN
  state.

  For all systems:

  - If EL3 is present:

    - SCR_EL3.FIQ must have the same value across all CPUs the kernel is
      executing on.
    - The value of SCR_EL3.FIQ must be the same as the one present at boot
      time whenever the kernel is executing.

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.HCE (bit 8) must be initialised to 0b1.

  For systems with a GICv3 interrupt controller to be used in v3 mode:

  - If EL3 is present:

    - ICC_SRE_EL3.Enable (bit 3) must be initialised to 0b1.
    - ICC_SRE_EL3.SRE (bit 0) must be initialised to 0b1.
    - ICC_CTLR_EL3.PMHE (bit 6) must be set to the same value across
      all CPUs the kernel is executing on, and must stay constant
      for the lifetime of the kernel.

  - If the kernel is entered at EL1:

    - ICC_SRE_EL2.Enable (bit 3) must be initialised to 0b1.
    - ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b1.

  - The DT or ACPI tables must describe a GICv3 interrupt controller.

  For systems with a GICv3 interrupt controller to be used in
  compatibility (v2) mode:

  - If EL3 is present:

    - ICC_SRE_EL3.SRE (bit 0) must be initialised to 0b0.

  - If the kernel is entered at EL1:

    - ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.

  - The DT or ACPI tables must describe a GICv2 interrupt controller.

  For CPUs with pointer authentication functionality:

  - If EL3 is present:

    - SCR_EL3.APK (bit 16) must be initialised to 0b1.
    - SCR_EL3.API (bit 17) must be initialised to 0b1.

  - If the kernel is entered at EL1:

    - HCR_EL2.APK (bit 40) must be initialised to 0b1.
    - HCR_EL2.API (bit 41) must be initialised to 0b1.

  For CPUs with Activity Monitors Unit v1 (AMUv1) extension present:

  - If EL3 is present:

    - CPTR_EL3.TAM (bit 30) must be initialised to 0b0.
    - CPTR_EL2.TAM (bit 30) must be initialised to 0b0.
    - AMCNTENSET0_EL0 must be initialised to 0b1111.
    - AMCNTENSET1_EL0 must be initialised to a platform specific value
      having 0b1 set for the corresponding bit for each of the auxiliary
      counters present.

  - If the kernel is entered at EL1:

    - AMCNTENSET0_EL0 must be initialised to 0b1111.
    - AMCNTENSET1_EL0 must be initialised to a platform specific value
      having 0b1 set for the corresponding bit for each of the auxiliary
      counters present.

  For CPUs with the Fine Grained Traps (FEAT_FGT) extension present:

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.FGTEn (bit 27) must be initialised to 0b1.

  For CPUs with the Fine Grained Traps 2 (FEAT_FGT2) extension present:

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.FGTEn2 (bit 59) must be initialised to 0b1.

  For CPUs with support for HCRX_EL2 (FEAT_HCX) present:

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.HXEn (bit 38) must be initialised to 0b1.

  For CPUs with Advanced SIMD and floating point support:

  - If EL3 is present:

    - CPTR_EL3.TFP (bit 10) must be initialised to 0b0.

  - If EL2 is present and the kernel is entered at EL1:

    - CPTR_EL2.TFP (bit 10) must be initialised to 0b0.

  For CPUs with the Scalable Vector Extension (FEAT_SVE) present:

  - If EL3 is present:

    - CPTR_EL3.EZ (bit 8) must be initialised to 0b1.

    - ZCR_EL3.LEN must be initialised to the same value for all CPUs the
      kernel is executed on.

  - If the kernel is entered at EL1 and EL2 is present:

    - CPTR_EL2.TZ (bit 8) must be initialised to 0b0.

    - CPTR_EL2.ZEN (bits 17:16) must be initialised to 0b11.

    - ZCR_EL2.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

  For CPUs with the Scalable Matrix Extension (FEAT_SME):

  - If EL3 is present:

    - CPTR_EL3.ESM (bit 12) must be initialised to 0b1.

    - SCR_EL3.EnTP2 (bit 41) must be initialised to 0b1.

    - SMCR_EL3.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

  - If the kernel is entered at EL1 and EL2 is present:

    - CPTR_EL2.TSM (bit 12) must be initialised to 0b0.

    - CPTR_EL2.SMEN (bits 25:24) must be initialised to 0b11.

    - SCTLR_EL2.EnTP2 (bit 60) must be initialised to 0b1.

    - SMCR_EL2.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

    - HFGRTR_EL2.nTPIDR2_EL0 (bit 55) must be initialised to 0b1.

    - HFGWTR_EL2.nTPIDR2_EL0 (bit 55) must be initialised to 0b1.

    - HFGRTR_EL2.nSMPRI_EL1 (bit 54) must be initialised to 0b1.

    - HFGWTR_EL2.nSMPRI_EL1 (bit 54) must be initialised to 0b1.

  For CPUs with the Scalable Matrix Extension FA64 feature (FEAT_SME_FA64):

  - If EL3 is present:

    - SMCR_EL3.FA64 (bit 31) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - SMCR_EL2.FA64 (bit 31) must be initialised to 0b1.

  For CPUs with the Memory Tagging Extension feature (FEAT_MTE2):

  - If EL3 is present:

    - SCR_EL3.ATA (bit 26) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - HCR_EL2.ATA (bit 56) must be initialised to 0b1.

  For CPUs with the Scalable Matrix Extension version 2 (FEAT_SME2):

  - If EL3 is present:

    - SMCR_EL3.EZT0 (bit 30) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - SMCR_EL2.EZT0 (bit 30) must be initialised to 0b1.

  For CPUs with the Performance Monitors Extension (FEAT_PMUv3p9):

  - If EL3 is present:

    - MDCR_EL3.EnPM2 (bit 7) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - HDFGRTR2_EL2.nPMICNTR_EL0 (bit 2) must be initialised to 0b1.
    - HDFGRTR2_EL2.nPMICFILTR_EL0 (bit 3) must be initialised to 0b1.
    - HDFGRTR2_EL2.nPMUACR_EL1 (bit 4) must be initialised to 0b1.

    - HDFGWTR2_EL2.nPMICNTR_EL0 (bit 2) must be initialised to 0b1.
    - HDFGWTR2_EL2.nPMICFILTR_EL0 (bit 3) must be initialised to 0b1.
    - HDFGWTR2_EL2.nPMUACR_EL1 (bit 4) must be initialised to 0b1.

  For CPUs with Memory Copy and Memory Set instructions (FEAT_MOPS):

  - If the kernel is entered at EL1 and EL2 is present:

    - HCRX_EL2.MSCEn (bit 11) must be initialised to 0b1.

  For CPUs with the Extended Translation Control Register feature (FEAT_TCR2):

  - If EL3 is present:

    - SCR_EL3.TCR2En (bit 43) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - HCRX_EL2.TCR2En (bit 14) must be initialised to 0b1.

  For CPUs with the Stage 1 Permission Indirection Extension feature (FEAT_S1PIE):

  - If EL3 is present:

    - SCR_EL3.PIEn (bit 45) must be initialised to 0b1.

  - If the kernel is entered at EL1 and EL2 is present:

    - HFGRTR_EL2.nPIR_EL1 (bit 58) must be initialised to 0b1.

    - HFGWTR_EL2.nPIR_EL1 (bit 58) must be initialised to 0b1.

    - HFGRTR_EL2.nPIRE0_EL1 (bit 57) must be initialised to 0b1.

    - HFGWTR_EL2.nPIRE0_EL1 (bit 57) must be initialised to 0b1.

The requirements described above for CPU mode, caches, MMUs, architected
timers, coherency and system registers apply to all CPUs.  All CPUs must
enter the kernel in the same exception level.  Where the values documented
disable traps it is permissible for these traps to be enabled so long as
those traps are handled transparently by higher exception levels as though
the values documented were set.
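
For illustration, a minimal hand-over on the primary CPU might look like
the following C sketch.  It assumes a GCC/Clang style toolchain, that the
Image has already been copied to load_addr (the 2MB aligned base plus
text_offset), that the MMU is off and that the image range has been
cleaned to the PoC.  Under AAPCS64 the four arguments are passed in
x0-x3, matching the register settings listed above; the function and
type names are invented for this example::

  #include <stdint.h>

  typedef void (*kernel_entry_t)(uint64_t dtb, uint64_t x1,
                                 uint64_t x2, uint64_t x3);

  static void enter_kernel(uint64_t load_addr, uint64_t dtb_phys)
  {
          kernel_entry_t entry = (kernel_entry_t)(uintptr_t)load_addr;

          /* Mask Debug, SError, IRQ and FIQ before jumping in. */
          asm volatile("msr daifset, #0xf" ::: "memory");

          entry(dtb_phys, 0, 0, 0);       /* x0 = dtb, x1-x3 = 0 */
  }
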
The boot loader is expected to enter the kernel on each CPU in the
following manner:

- The primary CPU must jump directly to the first instruction of the
  kernel image.  The device tree blob passed by this CPU must contain
  an 'enable-method' property for each cpu node.  The supported
  enable-methods are described below.

  It is expected that the bootloader will generate these device tree
  properties and insert them into the blob prior to kernel entry.

- CPUs with a "spin-table" enable-method must have a 'cpu-release-addr'
  property in their cpu node.  This property identifies a
  naturally-aligned 64-bit zero-initialised memory location.

  These CPUs should spin outside of the kernel in a reserved area of
  memory (communicated to the kernel by a /memreserve/ region in the
  device tree) polling their cpu-release-addr location, which must be
  contained in the reserved region.  A wfe instruction may be inserted
  to reduce the overhead of the busy-loop and a sev will be issued by
  the primary CPU.  When a read of the location pointed to by the
  cpu-release-addr returns a non-zero value, the CPU must jump to this
  value.  The value will be written as a single 64-bit little-endian
  value, so CPUs must convert the read value to their native endianness
  before jumping to it.  A minimal holding-pen sketch is given after
  this list.

- CPUs with a "psci" enable method should remain outside of
  the kernel (i.e. outside of the regions of memory described to the
  kernel in the memory node, or in a reserved area of memory described
  to the kernel by a /memreserve/ region in the device tree).  The
  kernel will issue CPU_ON calls as described in ARM document number ARM
  DEN 0022A ("Power State Coordination Interface System Software on ARM
  processors") to bring CPUs into the kernel.

  The device tree should contain a 'psci' node, as described in
  Documentation/devicetree/bindings/arm/psci.yaml.

- Secondary CPU general-purpose register settings:

  - x0 = 0 (reserved for future use)
  - x1 = 0 (reserved for future use)
  - x2 = 0 (reserved for future use)
  - x3 = 0 (reserved for future use)
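
As an illustration of the spin-table enable-method, a holding pen for a
little-endian boot loader might look like the following sketch.  A
big-endian boot loader would additionally need to byte-swap the value
read from cpu-release-addr, and the function name is invented for this
example::

  #include <stdint.h>

  /* Each secondary CPU spins here, polling its zero-initialised release
   * location.  wfe keeps the busy-loop cheap and is woken by the sev
   * issued from the primary CPU. */
  static void secondary_holding_pen(volatile uint64_t *cpu_release_addr)
  {
          uint64_t entry;

          while ((entry = *cpu_release_addr) == 0)
                  asm volatile("wfe");

          ((void (*)(void))(uintptr_t)entry)();
  }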