
Lines Matching full:stack

15  *			at the top of the kernel process stack.
73 * are not needed). SYSCALL does not save anything on the stack
106 /* Construct struct pt_regs on stack */
204 * Save old stack pointer and switch to trampoline stack.
214 * We are on the trampoline stack. All regs except RDI are live.
243 /* switch stack */
253 * When switching from a shallower to a deeper call stack
325 * @has_error_code: Hardware pushed error code on stack
349 * @has_error_code: Hardware pushed error code on stack
352 * and simple IDT entries. No IST stack, no paranoid entry checks.
386 + The interrupt stubs push (vector) onto the stack, which is the error_code
446 /* Switch to the regular task stack and use the noist entry point */
462 * runs on an IST stack and needs to be able to cause nested #VC exceptions.
465 * an IST stack by switching to the task stack if coming from user-space (which
466 * includes early SYSCALL entry path) or back to the stack in the IRET frame if
469 * If entered from kernel-mode the return stack is validated first, and if it is
470 * not safe to use (e.g. because it points to the entry stack) the #VC handler
471 * will switch to a fall-back stack (VC2) and call a special handler function.
497 * Switch off the IST stack to make it free for nested exceptions. The
499 * stack if it is safe to do so. If not it switches to the VC fall-back
500 * stack.
504 movq %rax, %rsp /* Switch to new stack */
517 * No need to switch back to the IST stack. The current stack is either
518 * identical to the stack in the IRET frame or the VC fall-back stack,
523 /* Switch to the regular task stack */
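The #VC comments above describe a three-way stack choice: the task stack when entering from user space (including the early SYSCALL path), the stack recorded in the IRET frame when the kernel return stack is validated as safe, and the VC2 fall-back stack otherwise. A hedged C sketch of that decision; the enum and function names are invented for illustration and are not the kernel's actual identifiers:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative names only -- not the kernel's symbols. */
enum vc_target { VC_TASK_STACK, VC_IRET_STACK, VC_FALLBACK_STACK };

/* Pick the stack for a #VC exception, per the rules quoted above:
 * user-space entry -> task stack; kernel entry with a safe return
 * stack -> the stack in the IRET frame; otherwise (e.g. the return
 * stack points at the entry stack) -> the VC2 fall-back stack. */
static enum vc_target vc_pick_stack(bool from_user, bool ret_stack_safe)
{
    if (from_user)
        return VC_TASK_STACK;
    return ret_stack_safe ? VC_IRET_STACK : VC_FALLBACK_STACK;
}
```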
589 * The stack is now user RDI, orig_ax, RIP, CS, EFLAGS, RSP, SS.
590 * Save old stack pointer and switch to trampoline stack.
596 /* Copy the IRET frame to the trampoline stack. */
603 /* Push user RDI on the trampoline stack. */
607 * We are on the trampoline stack. All regs except RDI are live.
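The matches around source lines 589-607 describe the exit-to-user trampoline switch: the five-word IRET frame (RIP, CS, EFLAGS, RSP, SS) is copied from the task stack to the trampoline stack, then user RDI is pushed on top. A minimal C model of that copy; the function and parameter names are hypothetical:

```c
#include <assert.h>
#include <string.h>

#define IRET_WORDS 5  /* RIP, CS, EFLAGS, RSP, SS */

/* Model of the trampoline switch described above: copy the IRET frame
 * from the task stack to the top of the trampoline stack, then push
 * user RDI, and return the new stack pointer. */
static unsigned long *switch_to_trampoline(const unsigned long *task_sp,
                                           unsigned long *tramp_top,
                                           unsigned long user_rdi)
{
    unsigned long *tramp_sp = tramp_top - IRET_WORDS;

    memcpy(tramp_sp, task_sp, IRET_WORDS * sizeof(*tramp_sp));
    *--tramp_sp = user_rdi;   /* push user RDI on the trampoline stack */
    return tramp_sp;
}
```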
639 * Are we returning to a stack segment from the LDT? Note: in
640 * 64-bit mode SS:RSP on the exception stack is always valid.
660 * values. We have a percpu ESPFIX stack that is eight slots
662 * of the ESPFIX stack.
665 * normal stack and RAX on the ESPFIX stack.
667 * The ESPFIX stack layout we set up looks like this:
669 * --- top of ESPFIX stack ---
676 * --- bottom of ESPFIX stack ---
706 * still points to an RO alias of the ESPFIX stack.
718 * At this point, we cannot write to the stack any more, but we can
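The ESPFIX matches (source lines 639-718) concern returning to a 16-bit stack segment from the LDT. As a hedged aside on the background: on such a return the CPU restores only the low 16 bits of the saved stack pointer, so the high bits of a kernel RSP would be exposed or lost; the per-cpu ESPFIX stack is laid out so this truncation is harmless. The truncation itself can be modeled as:

```c
#include <assert.h>

/* What the CPU effectively does to the stack pointer on a return to a
 * 16-bit SS: only the low 16 bits are restored. */
static unsigned long esp16_truncate(unsigned long rsp)
{
    return rsp & 0xffffUL;
}
```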
769 * rdi: New stack pointer points to the top word of the stack
778 * unwinder to handle the stack switch.
784 * The unwinder relies on the word at the top of the new stack
804 /* Restore the previous stack pointer from RBP. */
815 * popping the stack frame (can't be done atomically) and so it would still
816 * be possible to get enough handler activations to overflow the stack.
832 movq %rdi, %rsp /* we don't return, adjust the stack frame */
849 * to pop the stack frame we end up in an infinite loop of failsafe callbacks.
975 * "Paranoid" exit path from exception stack. This is invoked
1056 /* Put us onto the real thread stack. */
1060 movq %rax, %rsp /* switch stack */
1132 * Runs on exception stack. Xen PV does not go through this path at all,
1146 * NMI is using the top of the stack of the previous NMI. We
1148 * stack of the previous NMI. NMI handlers are not re-entrant
1152  * Check a special location on the stack that contains
1154 * The interrupted task's stack is also checked to see if it
1155 * is an NMI stack.
1156 * If the variable is not set and the stack is not the NMI
1157 * stack then:
1158 * o Set the special variable on the stack
1160 * stack
1161 * o Copy the interrupt frame into an "iret" location on the stack
1163 * If the variable is set or the previous stack is the NMI stack:
1167 * Now on exit of the first NMI, we first clear the stack variable
1168 * The NMI stack will tell any nested NMIs at that point that it is
1169 * nested. Then we pop the stack normally with iret, and if there was
1170 * a nested NMI that updated the copy interrupt stack frame, a
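The matched comments around source lines 1146-1170 outline the nested-NMI protocol: a special variable on the stack marks an NMI in progress, the interrupt frame is copied to an "iret" location, and a nested NMI modifies that copy so the outer NMI repeats after its normal `iret`. A minimal C model of that bookkeeping; all names here are hypothetical, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

struct iret_frame { unsigned long rip, cs, eflags, rsp, ss; };

struct nmi_state {
    bool nmi_executing;          /* the "special variable" on the stack */
    struct iret_frame iret_copy; /* the copied "iret" frame */
    bool repeat_pending;         /* a nested NMI asked for a repeat */
};

/* Entry: first-level NMIs set the variable and copy the frame;
 * nested NMIs only flag the outer NMI to run again. */
static void nmi_enter(struct nmi_state *s, const struct iret_frame *hw)
{
    if (s->nmi_executing) {
        s->repeat_pending = true;   /* update the copied frame's intent */
        return;
    }
    s->nmi_executing = true;
    s->iret_copy = *hw;
}

/* Exit: clear the variable; report whether the handler must repeat
 * because a nested NMI arrived while it ran. */
static bool nmi_exit(struct nmi_state *s)
{
    s->nmi_executing = false;
    if (s->repeat_pending) {
        s->repeat_pending = false;
        s->nmi_executing = true;    /* re-enter for the latched NMI */
        return true;
    }
    return false;
}
```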
1189 * NMI from user mode. We need to run on the thread stack, but we
1195 * We also must not push anything to the stack before switching
1220 * At this point we no longer need to worry about stack damage
1221 * due to nesting -- we're on the normal thread stack and we're
1222 * done with the NMI stack.
1237 * Here's what our stack frame will look like:
1305 * Now test if the previous stack was an NMI stack. This covers
1308 * there is one case in which RSP could point to the NMI stack
1317 /* Compare the NMI stack (rdx) with the stack we came from (4*8(%rsp)) */
1319 /* If the stack pointer is above the NMI stack, this is a normal NMI */
1324 /* If it is below the NMI stack, it is a normal NMI */
1327 /* Ah, it is within the NMI stack. */
1347 /* Put stack back */
1397 * This makes it safe to copy to the stack frame that a nested
1490 * iretq reads the "iret" frame and exits the NMI stack in a