Lines matching "we" (full-word match)
76 * The decoder _should_ fail nicely if we pass it a short buffer. in mpx_insn_decode()
77 * But, let's not depend on that implementation detail. If we in mpx_insn_decode()
85 * copy_from_user() tries to get as many bytes as we could see in in mpx_insn_decode()
86 * the largest possible instruction. If the instruction we are in mpx_insn_decode()
87 * after is shorter than that _and_ we attempt to copy from in mpx_insn_decode()
88 * something unreadable, we might get a short read. This is OK in mpx_insn_decode()
90 * instruction. Check to see if we got a partial instruction. in mpx_insn_decode()
97 * We only _really_ need to decode bndcl/bndcn/bndcu in mpx_insn_decode()
117 * Userspace could have, by the time we get here, written
118 * anything it wants into the instructions. We cannot
138 * We know at this point that we are only dealing with in mpx_generate_siginfo()
179 * We were not able to extract an address from the instruction, in mpx_generate_siginfo()
203 * only accessible if we first do an xsave. in mpx_get_bounds_dir()
238 * directory here means that we do not have to do xsave in the in mpx_enable_management()
239 * unmap path; we can just use mm->context.bd_addr instead. in mpx_enable_management()
281 * the pointer that we pass to it to figure out how much in mpx_cmpxchg_bd_entry()
282 * data to cmpxchg. We have to be careful here not to in mpx_cmpxchg_bd_entry()
283 * pass a pointer to a 64-bit data type when we only want in mpx_cmpxchg_bd_entry()
330 * we may race with another CPU instantiating the same table. in allocate_bt()
334 * This can fault, but that's OK because we do not hold in allocate_bt()
345 * for faults, *not* if the cmpxchg itself fails. Now we must in allocate_bt()
349 * We expected an empty 'expected_old_val', but instead found in allocate_bt()
350 * an apparently valid entry. Assume we raced with another in allocate_bt()
358 * We found a non-empty bd_entry but it did not have the in allocate_bt()
378 * the directory, a #BR is generated and we get here in order to
400 * entry via BNDSTATUS, so we don't have to go look it up. in do_mpx_bt_fault()
404 * Make sure the directory entry is within where we think in do_mpx_bt_fault()
439 * 0 means we failed to fault in and get anything, in mpx_resolve_fault()
464 * are ignored by the hardware, so we do the same. in mpx_bd_entry_to_bt_addr()
475 * We only want to do a 4-byte get_user() on 32-bit. Otherwise,
476 * we might run off the end of the bounds table if we are on
524 * If we could not resolve the fault, consider it in get_bt_addr()
537 * *OR* be completely empty. If we see a !valid entry *and* some in get_bt_addr()
538 * data in the address field, we know something is wrong. This in get_bt_addr()
544 * Do we have a completely zeroed bt entry? That is OK. It in get_bt_addr()
585 * We know the size of the table into which we are in mpx_get_bt_entry_offset_bytes()
586 * indexing, and we have eliminated all the low bits in mpx_get_bt_entry_offset_bytes()
589 * Mask out all the high bits which we do not need in mpx_get_bt_entry_offset_bytes()
596 * We now have an entry offset in terms of *entries* in in mpx_get_bt_entry_offset_bytes()
597 * the table. We need to scale it back up to bytes. in mpx_get_bt_entry_offset_bytes()
607 * Note, we need a long long because 4GB doesn't fit in
645 * if we 'end' on a boundary, the offset will be 0 which in zap_bt_entries_mapping()
646 * is not what we want. Back it up a byte to get the in zap_bt_entries_mapping()
647 * last bt entry. Then once we have the entry itself, in zap_bt_entries_mapping()
670 * be split. So we need to look across the entire 'start -> end' in zap_bt_entries_mapping()
677 * We followed a bounds directory entry down in zap_bt_entries_mapping()
678 * here. If we find a non-MPX VMA, that's bad, in zap_bt_entries_mapping()
699 * There are several ways to derive the bd offsets. We in mpx_get_bd_entry_offset()
701 * 1. We know the size of the virtual address space in mpx_get_bd_entry_offset()
702 * 2. We know the number of entries in a bounds table in mpx_get_bd_entry_offset()
703 * 3. We know that each entry covers a fixed amount of in mpx_get_bd_entry_offset()
705 * So, we can just divide the virtual address by the in mpx_get_bd_entry_offset()
724 * The two return calls above are exact copies. If we in mpx_get_bd_entry_offset()
726 * realize that we're doing a power-of-2 divide and use in mpx_get_bd_entry_offset()
727 * shifts. It uses a real divide. If we put them up in mpx_get_bd_entry_offset()
752 * If we could not resolve the fault, consider it in unmap_entire_bt()
764 * That is OK, since we were both trying to do in unmap_entire_bt()
771 * entry. We hold mmap_sem for read or write in unmap_entire_bt()
779 * Note, we are likely being called under do_munmap() already. To in unmap_entire_bt()
793 * bounds table that we are unmapping. in try_unmap_single_bt()
801 * We already unlinked the VMAs from the mm's rbtree so 'start' in try_unmap_single_bt()
809 * Although theoretically possible, we do not allow bounds in try_unmap_single_bt()
811 * If we count them as neighbors here, we may end up with in try_unmap_single_bt()
812 * lots of tables even though we have no actual table in try_unmap_single_bt()
820 * We know 'start' and 'end' lie within an area controlled in try_unmap_single_bt()
823 * then we can "expand" the area we are unmapping to possibly in try_unmap_single_bt()
849 * We are unmapping an entire table. Either because the in try_unmap_single_bt()
874 * move it back so we only deal with a single one in mpx_unmap_tables()
912 * (start->end), we will not continue follow-up work. This in mpx_notify_unmap()
915 * helps ensure that we do not have an exploitable stack overflow. in mpx_notify_unmap()
940 * Requested len is larger than the whole area we're allowed to map in. in mpx_unmapped_area_check()