From the hardware perspective, a NUMA system is a computer platform that
comprises multiple components or assemblies, each of which may contain zero
or more CPUs, local memory, and/or IO buses. For brevity, and to
disambiguate the hardware view of these physical components from the
software abstraction thereof, this document calls the components "cells".
The cells of a NUMA system are connected together with some sort of system
interconnect--e.g., a crossbar or point-to-point link are common types of
NUMA system interconnects.
For Linux, the NUMA platforms of interest are primarily what is known as Cache
Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
to and accessible from any CPU attached to any cell, and cache coherency
is handled in hardware by the processor caches and/or the system interconnect.
Memory access time and effective memory bandwidth vary depending on how far
away the cell containing the CPU or IO bus making the memory access is from the
cell containing the target memory.
To achieve scalable memory bandwidth, system and application software must
arrange for a large majority of the memory references
[cache misses] to be to "local" memory--memory on the same cell, if any--or
to the closest cell with memory.
Linux maps the nodes onto the physical cells of the hardware platform,
abstracting away some of the details for some architectures. As with
physical cells, software nodes may contain zero or more CPUs, memory,
and/or IO buses. And, again, memory accesses to memory on
"closer" nodes--nodes that map to closer cells--will generally experience
faster access times and higher effective bandwidth than accesses to more
remote cells.
For NUMA emulation, Linux will carve up
the existing nodes--or the system memory for non-NUMA platforms--into multiple
nodes. Each emulated node manages a fraction of the underlying cells'
physical memory.
In addition, for
each memory zone [one or more of DMA, DMA32, NORMAL, HIGH_MEMORY, MOVABLE],
Linux constructs an ordered "zonelist". A zonelist specifies the zones/nodes
to visit when a selected zone/node cannot satisfy the allocation request.
This situation, when a zone has no available memory to satisfy a request,
is called "overflow" or "fallback".
Because some nodes contain multiple zones containing different types of
memory, Linux must decide whether to order the zonelists such that allocations
fall back to the same zone type on a different node, or to a different zone
type on the same node. This is an important consideration because some zones,
such as DMA or DMA32, represent relatively scarce resources. Linux chooses
a default Node-ordered zonelist: it tries the zones of the nearest node
first, before spilling onto more remote nodes.
System administrators can restrict the CPUs and nodes' memories that a
non-privileged user can specify in the scheduling or NUMA commands and
functions using control groups and CPUsets.
Some kernel allocations do not want or cannot tolerate this allocation fallback
behavior. Rather, they want to be sure they get memory from the specified node
or get notified that the node has no free memory. This is usually the case when
a subsystem allocates per-CPU memory resources, for example.
A typical model for making such an allocation is to obtain the node id of the
node to which the "current CPU" is attached, using one of the kernel's
numa_node_id() or CPU_to_node() functions, and then request memory from only
the node id returned. When such an allocation fails, the requesting subsystem
may revert to its own fallback path; the slab kernel memory allocator is an
example of this. Or, the subsystem may choose to disable or not to enable
itself on allocation failure.
CPUs attached to memoryless nodes would always incur the fallback path overhead,
or some subsystems would fail to initialize if they attempted to allocate
memory exclusively from a node without memory. To support such architectures
transparently, kernel subsystems can use the numa_mem_id()
or cpu_to_mem() function to locate the "local memory node" for the calling or
specified CPU.