Lines Matching full:asid
10 * -Major rewrite of Core ASID allocation routine get_new_mmu_context
24 /* ARC700 ASID Management
26 * ARC MMU provides 8-bit ASID (0..255) to TAG TLB entries, allowing entries
30 * Linux assigns each task a unique ASID. A simple round-robin allocation
31 * of H/w ASID is done using software tracker @asid_cpu.
33 * the entire TLB and wrapping ASID back to zero.
35 * A new allocation cycle, post rollover, could potentially reassign an ASID
36 * to a different task. Hence the rule: a task's ASID must be refreshed in each new cycle.
37 * The 32-bit @asid_cpu (and mm->asid) hold the 8-bit MMU PID in the low bits, and the remaining 24 bits
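To make that split concrete, here is a minimal sketch of the counter layout. The MM_CTXT_* names follow the ARC port's conventions but are not shown by this search, so treat them as assumptions:

	#define MM_CTXT_ASID_MASK	0x000000ff		/* low 8 bits: h/w ASID (MMU PID) */
	#define MM_CTXT_CYCLE_MASK	(~MM_CTXT_ASID_MASK)	/* upper 24 bits: alloc cycle/generation */
	#define MM_CTXT_FIRST_CYCLE	(MM_CTXT_ASID_MASK + 1)	/* first valid generation (0x100) */
	#define MM_CTXT_NO_ASID		0UL			/* "never allocated" marker */

With this layout, a plain 32-bit unsigned increment of @asid_cpu bumps the generation for free whenever the low 8 ASID bits wrap from 0xff to 0x00.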
48 #define asid_mm(mm, cpu) mm->context.asid[cpu]
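The macro indexes a per-cpu copy of the ASID stored in the mm; the backing type is presumably along these lines (field name taken from the macro, per-cpu array sizing assumed):

	typedef struct {
		unsigned long asid[NR_CPUS];	/* 8-bit h/w PID + 24-bit generation, per cpu */
	} mm_context_t;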
55 * Get a new ASID if the task doesn't have a valid one (unallocated or from a prev cycle)
56 * Also set the MMU PID register to the existing/updated ASID
66 * Move to a new ASID if the existing one is not from the current alloc-cycle/generation. in get_new_mmu_context()
67 * This is done by checking that the generation bits in both mm->ASID in get_new_mmu_context()
68 * and the cpu's ASID counter are exactly the same. in get_new_mmu_context()
70 * Note: Callers needing a new ASID unconditionally, independent of in get_new_mmu_context()
78 /* move to new ASID and handle rollover */ in get_new_mmu_context()
84 * The check above catches rollover of the 8-bit ASID within its 32-bit container. in get_new_mmu_context()
92 /* Assign new ASID to tsk */ in get_new_mmu_context()
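Pieced together from the fragments above, the allocation core plausibly reads as the sketch below. asid_cpu() is assumed to be the per-cpu counterpart of the asid_mm() tracker, and ARC_REG_PID/MMU_ENABLE are assumed register names following ARC conventions; this is a reconstruction, not the verbatim routine:

	static inline void get_new_mmu_context(struct mm_struct *mm)
	{
		const unsigned int cpu = smp_processor_id();
		unsigned long flags;

		local_irq_save(flags);

		/* Generation bits match the cpu tracker: existing ASID still
		 * valid, only the h/w PID register needs (re)programming.
		 */
		if (!((asid_mm(mm, cpu) ^ asid_cpu(cpu)) & MM_CTXT_CYCLE_MASK))
			goto set_hw;

		/* Move to a new ASID; low 8 bits wrapping to 0 starts a new
		 * cycle: flush the entire TLB so recycled ASIDs can't match
		 * stale entries.
		 */
		if (unlikely(!(++asid_cpu(cpu) & MM_CTXT_ASID_MASK))) {
			local_flush_tlb_all();

			/* If the 32-bit container itself wrapped to 0, move to
			 * a non-zero generation to stay distinct from the
			 * "never allocated" value.
			 */
			if (!asid_cpu(cpu))
				asid_cpu(cpu) = MM_CTXT_FIRST_CYCLE;
		}

		/* Assign the new ASID to the task */
		asid_mm(mm, cpu) = asid_cpu(cpu);

	set_hw:
		write_aux_reg(ARC_REG_PID,
			      (asid_mm(mm, cpu) & MM_CTXT_ASID_MASK) | MMU_ENABLE);

		local_irq_restore(flags);
	}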
126 /* Prepare the MMU for the task: set up the PID reg with the allocated ASID
127 If the task doesn't have an ASID (never allocated or stolen), get a new one
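A minimal sketch of the switch_mm() flow that comment describes; caching the pgd in a scratch register is an assumption about how the port spares task->mm->pgd dereferences on TLB refill:

	static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
				     struct task_struct *tsk)
	{
		/* cache pgd in an MMU scratch reg for the TLB refill path
		 * (assumed ARC_REG_SCRATCH_DATA0 convention)
		 */
		write_aux_reg(ARC_REG_SCRATCH_DATA0, (unsigned long)next->pgd);

		/* refresh/validate @next's ASID and program the MMU PID register */
		get_new_mmu_context(next);
	}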
156 * Called at the time of execve() to get a new ASID
158 * vs. in switch_mm(). Here it always returns a new ASID, because mm has
159 * an unallocated "initial" value, while in the latter, it moves to a new ASID,
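That "initial" value would be planted when the mm is created, so the generation check in get_new_mmu_context() can never match on the execve() path. A sketch, assuming the MM_CTXT_NO_ASID marker from the layout sketch earlier:

	static inline int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
	{
		int i;

		/* start every cpu's copy in the "never allocated" state */
		for_each_possible_cpu(i)
			asid_mm(mm, i) = MM_CTXT_NO_ASID;

		return 0;
	}

	/* execve() path: always reaches get_new_mmu_context() via switch_mm() */
	#define activate_mm(prev, next)		switch_mm(prev, next, NULL)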
168 * there is a good chance that the task gets sched-out/in, making its ASID valid
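Which is why the final invalidation belongs in destroy_context(), once the mm is truly gone; a sketch, again assuming MM_CTXT_NO_ASID:

	static inline void destroy_context(struct mm_struct *mm)
	{
		unsigned long flags;

		local_irq_save(flags);
		asid_mm(mm, smp_processor_id()) = MM_CTXT_NO_ASID;
		local_irq_restore(flags);
	}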