Lines Matching full:fault
38 * Returns 0 if mmiotrace is disabled, or if the fault is not
116 * If it was an exec (instruction fetch) fault on an NX page, then in is_prefetch()
117 * do not ignore the fault: in is_prefetch()
201 * Handle a fault on the vmalloc or module mapping area
212 * unhandled page fault when they are accessed.
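Hits 201-212 describe the classic lazy vmalloc-fault scheme: vmalloc and module mappings are installed only in the reference page table (init_mm.pgd), and other page tables pick up the missing top-level entry the first time they fault on that range. A minimal sketch of the idea, assuming the standard pgd_offset_k()/pgd_offset() helpers; the real handler also walks and validates the lower levels:

#include <linux/mm.h>
#include <linux/sched.h>

static noinline int vmalloc_fault_sketch(unsigned long address)
{
	pgd_t *pgd_ref, *pgd;

	/* Only addresses in the vmalloc range can be handled here. */
	if (address < VMALLOC_START || address >= VMALLOC_END)
		return -1;

	/* The reference entry lives in init_mm's page table. */
	pgd_ref = pgd_offset_k(address);
	if (pgd_none(*pgd_ref))
		return -1;	/* not mapped anywhere: a real fault */

	/* Lazily copy the missing top-level entry into the current table. */
	pgd = pgd_offset(current->active_mm, address);
	if (pgd_none(*pgd))
		set_pgd(pgd, *pgd_ref);

	return 0;
}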
423 * The OS sees this as a page fault with the upper 32bits of RIP cleared.
457 * We catch this in the page fault handler because these addresses
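Hits 423-457 refer to AMD erratum #93: a RET to a non-canonical address can clear the upper 32 bits of RIP, so the page fault handler sees a fault at a truncated address. A hedged sketch of the fixup idea, assuming _stext/_etext and the module range as the valid targets; the real is_errata93() in fault.c is close to this but additionally gated on CPU vendor checks:

#include <asm/sections.h>

static int errata93_fixup_sketch(struct pt_regs *regs, unsigned long address)
{
	/* The fault must be an instruction fetch at RIP itself ... */
	if (address != regs->ip)
		return 0;
	/* ... with the upper 32 bits already zero. */
	if (address >> 32)
		return 0;

	/* Restore the clipped upper bits; do we land back in kernel text? */
	address |= 0xffffffffUL << 32;
	if ((address >= (unsigned long)_stext && address < (unsigned long)_etext) ||
	    (address >= MODULES_VADDR && address < MODULES_END)) {
		regs->ip = address;	/* fix up RIP and retry the fetch */
		return 1;
	}
	return 0;
}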
543 pr_alert("BUG: unable to handle page fault for address: %px\n", in show_fault_oops()
566 * contributory exception from user code and gets a page fault in show_fault_oops()
567 * during delivery, the page fault can be delivered as though in show_fault_oops()
651 /* Are we prepared to handle this kernel fault? */ in no_context()
654 * Any interrupt that takes a fault gets the fixup. This makes in no_context()
655 * the below recursive fault logic only apply to faults from in no_context()
682 * Stack overflow? During boot, we can fault near the initial in no_context()
693 * double-fault even before we get this far, in which case in no_context()
694 * we're fine: the double-fault handler will deal with it. in no_context()
697 * and then double-fault, though, because we're likely to in no_context()
704 : "D" ("kernel stack overflow (page fault)"), in no_context()
714 * Valid to do another page fault here, because if this fault in no_context()
729 * Buggy firmware could access regions which might page fault, try to in no_context()
807 * Valid to do another page fault here because this one came in __bad_area_nosemaphore()
900 * A protection key fault means that the PKRU value did not allow in bad_area_access_error()
907 * fault and that there was a VMA once we got in the fault in bad_area_access_error()
915 * 5. T1 : enters fault handler, takes mmap_lock, etc... in bad_area_access_error()
929 vm_fault_t fault) in do_sigbus() argument
937 /* User-space => ok to do another page fault: */ in do_sigbus()
944 if (fault & (VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE)) { in do_sigbus()
949 "MCE: Killing %s:%d due to hardware memory corruption fault at %lx\n", in do_sigbus()
951 if (fault & VM_FAULT_HWPOISON_LARGE) in do_sigbus()
952 lsb = hstate_index_to_shift(VM_FAULT_GET_HINDEX(fault)); in do_sigbus()
953 if (fault & VM_FAULT_HWPOISON) in do_sigbus()
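Hits 944-953 compute how much memory the hardware-poison signal covers: the shift of the least significant bit tells user space the granularity of the damage. A minimal sketch of that fragment, assuming force_sig_mceerr() delivering BUS_MCEERR_AR as quoted above:

#include <linux/hugetlb.h>
#include <linux/sched/signal.h>

static void hwpoison_sigbus_sketch(unsigned long address, vm_fault_t fault)
{
	unsigned lsb = 0;

	if (fault & VM_FAULT_HWPOISON_LARGE)
		/* Huge page: the whole huge mapping is suspect. */
		lsb = hstate_index_to_shift(VM_FAULT_GET_HINDEX(fault));
	if (fault & VM_FAULT_HWPOISON)
		/* Base page: one PAGE_SIZE unit. */
		lsb = PAGE_SHIFT;

	/* BUS_MCEERR_AR: "action required", poison consumed by this access. */
	force_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, lsb);
}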
964 unsigned long address, vm_fault_t fault) in mm_fault_error() argument
971 if (fault & VM_FAULT_OOM) { in mm_fault_error()
981 * userspace (which will retry the fault, or kill us if we got in mm_fault_error()
986 if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON| in mm_fault_error()
988 do_sigbus(regs, error_code, address, fault); in mm_fault_error()
989 else if (fault & VM_FAULT_SIGSEGV) in mm_fault_error()
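Hits 964-989 show mm_fault_error() fanning a failed handle_mm_fault() result out to the right reporter. A simplified, hedged sketch of that dispatch (the kernel-mode OOM and no_context details are elided to their essentials):

static void mm_fault_error_sketch(struct pt_regs *regs, unsigned long error_code,
				  unsigned long address, vm_fault_t fault)
{
	if (fault & VM_FAULT_OOM) {
		/* Let the OOM killer pick a victim; the fault is retried. */
		pagefault_out_of_memory();
	} else if (fault & (VM_FAULT_SIGBUS | VM_FAULT_HWPOISON |
			    VM_FAULT_HWPOISON_LARGE)) {
		do_sigbus(regs, error_code, address, fault);
	} else if (fault & VM_FAULT_SIGSEGV) {
		bad_area_nosemaphore(regs, error_code, address);
	} else {
		BUG();	/* unknown VM_FAULT_ERROR bit */
	}
}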
1008 * Handle a spurious fault caused by a stale TLB entry.
1023 * Returns non-zero if a spurious fault was handled, zero otherwise.
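Hits 1008-1023 describe spurious faults from lazy TLB invalidation: the page tables already permit the access, and only a stale TLB entry disagrees. The leaf-level check is essentially the following sketch; the real spurious_kernel_fault() applies the same test while walking pgd/p4d/pud/pmd down to the pte:

static int spurious_fault_check_sketch(unsigned long error_code, pte_t *pte)
{
	if (!pte_present(*pte))
		return 0;	/* genuinely unmapped: a real fault */
	if ((error_code & X86_PF_WRITE) && !pte_write(*pte))
		return 0;	/* genuinely read-only: a real fault */
	if ((error_code & X86_PF_INSTR) && !pte_exec(*pte))
		return 0;	/* genuinely non-executable: a real fault */

	return 1;		/* tables allow it: stale TLB, flush and go */
}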
1106 * a follow-up action to resolve the fault, like a COW. in access_error()
1169 * We can fault in kernel-space virtual memory on demand. The in do_kern_addr_fault()
1178 * fault is not any of the following: in do_kern_addr_fault()
1179 * 1. A fault on a PTE with a reserved bit set. in do_kern_addr_fault()
1180 * 2. A fault caused by a user-mode access. (Do not demand- in do_kern_addr_fault()
1181 * fault kernel memory due to user-mode accesses). in do_kern_addr_fault()
1182 * 3. A fault caused by a page-level protection violation. in do_kern_addr_fault()
1183 * (A demand fault would be on a non-present page which in do_kern_addr_fault()
1198 /* Was the fault spurious, caused by lazy TLB invalidation? */ in do_kern_addr_fault()
1209 * and handling kernel code that can fault, like get_user(). in do_kern_addr_fault()
1212 * fault we could otherwise deadlock: in do_kern_addr_fault()
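Hits 1178-1183 enumerate when a kernel-address fault may be handled lazily at all. Folded into one test, the gate at the top of do_kern_addr_fault() amounts to this sketch, using the X86_PF_* error-code bits:

/*
 * Reserved-bit faults mean corrupted page tables; user-mode accesses
 * must never demand-fault kernel memory; and a protection violation
 * means the page was present, so it cannot be a demand fault.
 */
static bool can_demand_fault_kernel(unsigned long hw_error_code)
{
	return !(hw_error_code & (X86_PF_RSVD | X86_PF_USER | X86_PF_PROT));
}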
1227 vm_fault_t fault; in do_user_addr_fault() local
1261 * in a region with pagefaults disabled then we must not take the fault in do_user_addr_fault()
1270 * vmalloc fault has been handled. in do_user_addr_fault()
1273 * potential system fault or CPU buglet: in do_user_addr_fault()
1311 * tables. But, an erroneous kernel fault occurring outside one of in do_user_addr_fault()
1313 * to validate the fault against the address space. in do_user_addr_fault()
1323 * Fault from code in kernel from in do_user_addr_fault()
1367 * If for any reason at all we couldn't handle the fault, in do_user_addr_fault()
1369 * the fault. Since we never set FAULT_FLAG_RETRY_NOWAIT, if in do_user_addr_fault()
1374 * repeat the page fault later with a VM_FAULT_NOPAGE retval in do_user_addr_fault()
1379 fault = handle_mm_fault(vma, address, flags, regs); in do_user_addr_fault()
1382 if (fault_signal_pending(fault, regs)) { in do_user_addr_fault()
1394 if (unlikely((fault & VM_FAULT_RETRY) && in do_user_addr_fault()
1401 if (unlikely(fault & VM_FAULT_ERROR)) { in do_user_addr_fault()
1402 mm_fault_error(regs, hw_error_code, address, fault); in do_user_addr_fault()
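Hits 1367-1402 are the heart of the user-address path: call handle_mm_fault() and, if it dropped mmap_lock and asked for a retry, go around again with FAULT_FLAG_TRIED set. A condensed sketch, assuming the modern handle_mm_fault(vma, address, flags, regs) signature; per-CPU accounting and the signal/return bookkeeping are omitted:

static void user_fault_retry_sketch(struct pt_regs *regs, struct mm_struct *mm,
				    struct vm_area_struct *vma,
				    unsigned long hw_error_code,
				    unsigned long address)
{
	unsigned int flags = FAULT_FLAG_DEFAULT | FAULT_FLAG_USER;
	vm_fault_t fault;

retry:
	fault = handle_mm_fault(vma, address, flags, regs);

	/* A fatal signal may have interrupted the fault; just bail out. */
	if (fault_signal_pending(fault, regs))
		return;

	if (unlikely((fault & VM_FAULT_RETRY) &&
		     (flags & FAULT_FLAG_ALLOW_RETRY))) {
		/* mmap_lock was dropped while we slept; try once more. */
		flags |= FAULT_FLAG_TRIED;
		goto retry;
	}

	mmap_read_unlock(mm);

	if (unlikely(fault & VM_FAULT_ERROR))
		mm_fault_error(regs, hw_error_code, address, fault);
}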
1432 /* Was the fault on kernel-controlled part of the address space? */ in handle_page_fault()
1438 * User address page fault handling might have reenabled in handle_page_fault()
1457 * (asynchronous page fault mechanism). The event happens when a in DEFINE_IDTENTRY_RAW_ERRORCODE()
1482 * be invoked because a kernel fault on a user space address might in DEFINE_IDTENTRY_RAW_ERRORCODE()
1485 * In case the fault hit an RCU idle region the conditional entry in DEFINE_IDTENTRY_RAW_ERRORCODE()