switch (uarch) {
	case cpuinfo_uarch_cortex_a5:
		/*
		 * Cortex-A5 Technical Reference Manual:
		 * 6.3.1. Micro TLB
		 *   The first level of caching for the page table information is a micro TLB
		 *   of 10 entries that is implemented on each of the instruction and data sides.
		 * 6.3.2. Main TLB
		 *   Misses from the instruction and data micro TLBs are handled by a unified
		 *   main TLB. The main TLB is 128-entry two-way set-associative.
		 */
		break;
	case cpuinfo_uarch_cortex_a7:
		/*
		 * Cortex-A7 MPCore Technical Reference Manual:
		 * 5.3.1. Micro TLB
		 *   The first level of caching for the page table information is a micro TLB
		 *   of 10 entries that is implemented on each of the instruction and data sides.
		 * 5.3.2. Main TLB
		 *   Misses from the micro TLBs are handled by a unified main TLB. This is a
		 *   256-entry 2-way set-associative structure. The main TLB supports all the
		 *   VMSAv7 page sizes of 4KB, 64KB, 1MB and 16MB in addition to the LPAE page
		 *   sizes of 2MB and 1GB.
		 */
		break;
	case cpuinfo_uarch_cortex_a8:
		/*
		 * Cortex-A8 Technical Reference Manual:
		 * 6.1. About the MMU
		 *   The MMU features include the following:
		 *   - separate, fully-associative, 32-entry data and instruction TLBs
		 *   - TLB entries that support 4KB, 64KB, 1MB, and 16MB pages
		 */
		break;
	case cpuinfo_uarch_cortex_a9:
		/*
		 * ARM Cortex-A9 Technical Reference Manual:
		 * 6.2.1. Micro TLB
		 *   The first level of caching for the page table information is a micro TLB
		 *   of 32 entries on the data side, and configurable 32 or 64 entries on the
		 *   instruction side.
		 * 6.2.2. Main TLB
		 *   The main TLB is implemented as a combination of:
		 *   - A fully-associative, lockable array of four elements.
		 *   - A 2-way associative structure of 2x32, 2x64, 2x128 or 2x256 entries.
		 */
		break;
	case cpuinfo_uarch_cortex_a15:
		/*
		 * ARM Cortex-A15 MPCore Processor Technical Reference Manual:
		 * 5.2.1. L1 instruction TLB
		 *   The L1 instruction TLB is a 32-entry fully-associative structure. This TLB
		 *   caches entries at the 4KB granularity of Virtual Address (VA) to Physical
		 *   Address (PA) mapping only. If the page tables map the memory region to a
		 *   larger granularity than 4KB, it only allocates one mapping for the
		 *   particular 4KB region to which the current access corresponds.
		 * 5.2.2. L1 data TLB
		 *   There are two separate 32-entry fully-associative TLBs that are used for
		 *   data loads and stores, respectively. Similar to the L1 instruction TLB,
		 *   both of these cache entries at the 4KB granularity of VA to PA mappings
		 *   only. At implementation time, the Cortex-A15 MPCore processor can be
		 *   configured with the -l1tlb_1m option, to have the L1 data TLB cache
		 *   entries at both the 4KB and 1MB granularity. With this configuration, any
		 *   translation that results in a 1MB or larger page is cached in the L1 data
		 *   TLB as a 1MB entry. Any translation that results in a page smaller than
		 *   1MB is cached in the L1 data TLB as a 4KB entry. By default, all
		 *   translations are cached in the L1 data TLB as a 4KB entry.
		 * 5.2.3. L2 TLB
		 *   Misses from the L1 instruction and data TLBs are handled by a unified L2
		 *   TLB. This is a 512-entry 4-way set-associative structure. The L2 TLB
		 *   supports all the VMSAv7 page sizes of 4KB, 64KB, 1MB and 16MB in addition
		 *   to the LPAE page sizes of 2MB and 1GB.
		 */
		break;
	case cpuinfo_uarch_cortex_a17:
		/*
		 * ARM Cortex-A17 MPCore Processor Technical Reference Manual:
		 * 5.2.1. Instruction micro TLB
		 *   The instruction micro TLB is implemented as a 32, 48 or 64 entry,
		 *   fully-associative structure. This TLB caches entries at the 4KB and 1MB
		 *   granularity of Virtual Address (VA) to Physical Address (PA) mapping only.
		 *   If the translation tables map the memory region to a larger granularity
		 *   than 4KB or 1MB, it only allocates one mapping for the particular 4KB
		 *   region to which the current access corresponds.
		 * 5.2.2. Data micro TLB
		 *   The data micro TLB is a 32-entry fully-associative TLB that is used for
		 *   data loads and stores. The cache entries have a 4KB and 1MB granularity
		 *   of VA to PA mappings only.
		 * 5.2.3. Unified main TLB
		 *   Misses from the instruction and data micro TLBs are handled by a unified
		 *   main TLB. This is a 1024-entry 4-way set-associative structure. The main
		 *   TLB supports all the VMSAv7 page sizes of 4KB, 64KB, 1MB and 16MB in
		 *   addition to the LPAE page sizes of 2MB and 1GB.
		 */
		break;
	case cpuinfo_uarch_cortex_a35:
		/*
		 * ARM Cortex-A35 Processor Technical Reference Manual:
		 * A6.2. TLB Organization
		 *   Micro TLB
		 *     The first level of caching for the translation table information is a
		 *     micro TLB of ten entries that is implemented on each of the instruction
		 *     and data sides.
		 *   Main TLB
		 *     A unified main TLB handles misses from the micro TLBs. It has a
		 *     512-entry, 2-way, set-associative structure and supports all VMSAv8
		 *     block sizes, except 1GB. If it fetches a 1GB block, the TLB splits it
		 *     into 512MB blocks and stores the appropriate block for the lookup.
		 */
		break;
	case cpuinfo_uarch_cortex_a53:
		/*
		 * ARM Cortex-A53 MPCore Processor Technical Reference Manual:
		 * 5.2.1. Micro TLB
		 *   The first level of caching for the translation table information is a
		 *   micro TLB of ten entries that is implemented on each of the instruction
		 *   and data sides.
		 * 5.2.2. Main TLB
		 *   A unified main TLB handles misses from the micro TLBs. This is a
		 *   512-entry, 4-way, set-associative structure. The main TLB supports all
		 *   VMSAv8 block sizes, except 1GB. If a 1GB block is fetched, it is split
		 *   into 512MB blocks and the appropriate block for the lookup is stored.
		 */
		break;
	case cpuinfo_uarch_cortex_a57:
		/*
		 * ARM Cortex-A57 MPCore Processor Technical Reference Manual:
		 * 5.2.1. L1 instruction TLB
		 *   The L1 instruction TLB is a 48-entry fully-associative structure. This
		 *   TLB caches entries of three different page sizes, natively 4KB, 64KB, and
		 *   1MB, of VA to PA mappings. If the page tables map the memory region to a
		 *   larger granularity than 1MB, it only allocates one mapping for the
		 *   particular 1MB region to which the current access corresponds.
		 * 5.2.2. L1 data TLB
		 *   The L1 data TLB is a 32-entry fully-associative TLB that is used for data
		 *   loads and stores. This TLB caches entries of three different page sizes,
		 *   natively 4KB, 64KB, and 1MB, of VA to PA mappings.
		 * 5.2.3. L2 TLB
		 *   Misses from the L1 instruction and data TLBs are handled by a unified L2
		 *   TLB. This is a 1024-entry 4-way set-associative structure. The L2 TLB
		 *   supports the page sizes of 4KB, 64KB, 1MB and 16MB. It also supports page
		 *   sizes of 2MB and 1GB for the long descriptor format translation in
		 *   AArch32 state and in AArch64 state when using the 4KB translation
		 *   granule. In addition, the L2 TLB supports the 512MB page map size defined
		 *   for the AArch64 translations that use a 64KB translation granule.
		 */
		break;
}
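
/*
 * Illustrative sketch only, not part of the original file: one way the unified
 * main/L2 TLB figures quoted in the comments above could be tabulated. The
 * struct and array names below are invented for illustration; the
 * cpuinfo_uarch_* values are the same enum constants used in the switch above
 * (declared in cpuinfo.h). Cortex-A8 is omitted because its TRM documents only
 * the 32-entry L1 TLBs, and Cortex-A9 because its main TLB size is an
 * implementation-time configuration option (2x32 up to 2x256 entries).
 */
struct example_main_tlb {
	enum cpuinfo_uarch uarch; /* microarchitecture the figures apply to */
	unsigned int entries;     /* total entries in the unified main/L2 TLB */
	unsigned int ways;        /* set-associativity of the unified main/L2 TLB */
};

static const struct example_main_tlb example_main_tlbs[] = {
	{ cpuinfo_uarch_cortex_a5,   128, 2 }, /* 6.3.2: 128-entry two-way */
	{ cpuinfo_uarch_cortex_a7,   256, 2 }, /* 5.3.2: 256-entry 2-way */
	{ cpuinfo_uarch_cortex_a15,  512, 4 }, /* 5.2.3: 512-entry 4-way L2 TLB */
	{ cpuinfo_uarch_cortex_a17, 1024, 4 }, /* 5.2.3: 1024-entry 4-way */
	{ cpuinfo_uarch_cortex_a35,  512, 2 }, /* A6.2: 512-entry 2-way */
	{ cpuinfo_uarch_cortex_a53,  512, 4 }, /* 5.2.2: 512-entry 4-way */
	{ cpuinfo_uarch_cortex_a57, 1024, 4 }, /* 5.2.3: 1024-entry 4-way L2 TLB */
};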