| /kernel/linux/linux-5.10/fs/btrfs/ |
| D | delalloc-space.c | 23 * We call into btrfs_reserve_data_bytes() for the user request bytes that 24 * they wish to write. We make this reservation and add it to 25 * space_info->bytes_may_use. We set EXTENT_DELALLOC on the inode io_tree 27 * make a real allocation if we are pre-allocating or doing O_DIRECT. 30 * At writepages()/prealloc/O_DIRECT time we will call into 31 * btrfs_reserve_extent() for some part or all of this range of bytes. We 35 * may allocate a smaller on-disk extent than we previously reserved. 46 * This is the simplest case: we haven't completed our operation and we know 47 * how much we reserved, so we can simply call 60 * We keep track of two things on a per-inode basis [all …]
|
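The reservation flow sketched in this comment is easy to model. Below is a minimal userspace sketch of the bytes_may_use accounting it describes; all names (toy_space_info, toy_reserve_data_bytes) and the ENOSPC policy are illustrative stand-ins, not btrfs's actual API:

```c
/* Toy model of the bytes_may_use accounting described above. */
#include <stdio.h>

struct toy_space_info {
	unsigned long long total_bytes;
	unsigned long long bytes_used;     /* real, allocated extents */
	unsigned long long bytes_may_use;  /* outstanding delalloc reservations */
};

/* Reserve bytes the writer *may* use; fail if they cannot possibly fit. */
static int toy_reserve_data_bytes(struct toy_space_info *si,
				  unsigned long long bytes)
{
	if (si->bytes_used + si->bytes_may_use + bytes > si->total_bytes)
		return -1; /* ENOSPC */
	si->bytes_may_use += bytes;
	return 0;
}

/* At writepages()/prealloc/O_DIRECT time the reservation becomes a real
 * extent; the on-disk extent may be smaller than what was reserved. */
static void toy_alloc_extent(struct toy_space_info *si,
			     unsigned long long reserved,
			     unsigned long long allocated)
{
	si->bytes_may_use -= reserved;  /* drop the whole reservation...  */
	si->bytes_used += allocated;    /* ...and account what hit disk   */
}

int main(void)
{
	struct toy_space_info si = { .total_bytes = 1ULL << 20 };

	if (toy_reserve_data_bytes(&si, 4096) == 0)
		toy_alloc_extent(&si, 4096, 4096);
	printf("used=%llu may_use=%llu\n", si.bytes_used, si.bytes_may_use);
	return 0;
}
```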
| D | space-info.c | 22 * 1) space_info. This is the ultimate arbiter of how much space we can use. 25 * reservations we care about total_bytes - SUM(space_info->bytes_) when 30 * metadata reservation we have. You can see the comment in the block_rsv 34 * 3) btrfs_calc*_size. These are the worst-case calculations we use, based 35 * on the number of items we will want to modify. We have one for changing 36 * items, and one for inserting new items. Generally we use these helpers to 42 * We call into either btrfs_reserve_data_bytes() or 43 * btrfs_reserve_metadata_bytes(), depending on which we're looking for, with 44 * num_bytes we want to reserve. 61 * Assume we are unable to simply make the reservation because we do not have [all …]
|
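The btrfs_calc*_size helpers named above compute worst-case byte counts from the number of items touched. A hedged sketch of that shape with invented constants (the real multipliers live in the btrfs headers): changing an item may CoW one node per btree level, and an insert may additionally split one node per level, hence the factor of two:

```c
/* Worst-case metadata reservation, modelled on the btrfs_calc*_size helpers
 * the comment names. Constants here are illustrative. */
#include <stdio.h>

#define TOY_MAX_LEVEL 8          /* cf. BTRFS_MAX_LEVEL */

static unsigned long long calc_metadata_size(unsigned int nodesize,
					     unsigned int num_items)
{
	/* changing existing items: one CoW per level per item */
	return (unsigned long long)nodesize * TOY_MAX_LEVEL * num_items;
}

static unsigned long long calc_insert_metadata_size(unsigned int nodesize,
						    unsigned int num_items)
{
	/* inserting: a CoW plus a possible split per level per item */
	return (unsigned long long)nodesize * TOY_MAX_LEVEL * 2 * num_items;
}

int main(void)
{
	printf("update 3 items: %llu bytes\n", calc_metadata_size(16384, 3));
	printf("insert 3 items: %llu bytes\n",
	       calc_insert_metadata_size(16384, 3));
	return 0;
}
```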
| D | locking.h | 20 * We are limited in number of subclasses by MAX_LOCKDEP_SUBCLASSES, which at 21 * the time of this patch is 8, exactly how many we use. Keep this in mind if 28 * When we COW a block we are holding the lock on the original block, 30 * when we lock the newly allocated COW'd block. Handle this by having 36 * Oftentimes we need to lock adjacent nodes on the same level while 37 * still holding the lock on the original node we searched to, such as 40 * Because of this we need to indicate to lockdep that this is 48 * When splitting we will be holding a lock on the left/right node when 49 * we need to cow that node, thus we need a new set of subclasses for 56 * When splitting we may push nodes to the left or right, but still use [all …]
|
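A minimal sketch of the subclass idea, assuming lockdep's rule that a nested acquisition of the same lock class is legal when annotated with a distinct subclass. The enum mirrors the roles the comment describes (original, CoW copy, left/right neighbours); the names are illustrative, not btrfs's actual nesting values, and the printf stands in for mutex_lock_nested():

```c
/* Sketch: lockdep keys nesting on (lock class, subclass), so taking "the
 * same" btree-level lock twice is legal when the second acquisition carries
 * a different subclass. Build with -pthread. */
#include <pthread.h>
#include <stdio.h>

enum toy_lock_nesting {
	TOY_NESTING_NORMAL,    /* first lock taken during the search */
	TOY_NESTING_COW,       /* the freshly allocated CoW copy     */
	TOY_NESTING_LEFT,      /* adjacent node to the left          */
	TOY_NESTING_RIGHT,     /* adjacent node to the right         */
	TOY_NESTING_MAX,       /* must stay <= MAX_LOCKDEP_SUBCLASSES (8) */
};

struct toy_block {
	pthread_mutex_t lock;
};

/* In the kernel this would be mutex_lock_nested(&b->lock, nesting); here we
 * just record the subclass so the pattern is visible. */
static void toy_lock_block(struct toy_block *b, enum toy_lock_nesting nesting)
{
	pthread_mutex_lock(&b->lock);
	printf("locked block %p with subclass %d\n", (void *)b, nesting);
}

int main(void)
{
	struct toy_block orig = { PTHREAD_MUTEX_INITIALIZER };
	struct toy_block cow  = { PTHREAD_MUTEX_INITIALIZER };

	/* Hold the original while locking its CoW copy: a different
	 * subclass, so a lockdep-style checker sees no self-deadlock. */
	toy_lock_block(&orig, TOY_NESTING_NORMAL);
	toy_lock_block(&cow, TOY_NESTING_COW);

	pthread_mutex_unlock(&cow.lock);
	pthread_mutex_unlock(&orig.lock);
	return 0;
}
```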
| /kernel/linux/linux-5.10/arch/powerpc/mm/nohash/ |
| D | tlb_low_64e.S | 95 /* We need _PAGE_PRESENT and _PAGE_ACCESSED set */ 97 /* We do the user/kernel test for the PID here along with the RW test 99 /* We pre-test some combination of permissions to avoid double 102 * We move the ESR:ST bit into the position of _PAGE_BAP_SW in the PTE 107 * writeable, we will take a new fault later, but that should be 110 * We also move ESR_ST into the _PAGE_DIRTY position 113 * MAS1 is preset for all we need except for TID that needs to 134 * We are entered with: 182 /* Now we build the MAS: 224 /* We need to check if it was an instruction miss */ [all …]
|
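The permission pre-test these comments describe is a bit trick: fold the fault's store/load direction into the position of the PTE's software-write bit so that present, accessed, and write permission fall out of a single AND and compare. A C model of that trick, with invented bit positions (the real ones are the book3e PTE and ESR layouts):

```c
/* The single-test permission check the handler comments describe, in C. */
#include <stdbool.h>
#include <stdio.h>

#define PTE_PRESENT   (1u << 0)
#define PTE_ACCESSED  (1u << 1)
#define PTE_SW_WRITE  (1u << 2)   /* cf. _PAGE_BAP_SW */
#define ESR_ST        (1u << 8)   /* "this fault was a store" (illustrative) */

static bool tlb_miss_ok(unsigned int pte, unsigned int esr)
{
	/* Bits that must be set in the PTE for this access to proceed. */
	unsigned int required = PTE_PRESENT | PTE_ACCESSED;

	/* Move ESR:ST into the write-permission position: a store demands
	 * PTE_SW_WRITE as well; a load demands nothing extra. */
	if (esr & ESR_ST)
		required |= PTE_SW_WRITE;

	/* One test instead of separate present/accessed/permission checks. */
	return (pte & required) == required;
}

int main(void)
{
	unsigned int pte = PTE_PRESENT | PTE_ACCESSED;   /* read-only page */

	printf("load:  %s\n", tlb_miss_ok(pte, 0) ? "ok" : "fault");
	printf("store: %s\n", tlb_miss_ok(pte, ESR_ST) ? "ok" : "fault");
	return 0;
}
```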
| /kernel/linux/linux-4.19/arch/powerpc/mm/ |
| D | tlb_low_64e.S | 105 /* We need _PAGE_PRESENT and _PAGE_ACCESSED set */ 107 /* We do the user/kernel test for the PID here along with the RW test 109 /* We pre-test some combination of permissions to avoid double 112 * We move the ESR:ST bit into the position of _PAGE_BAP_SW in the PTE 117 * writeable, we will take a new fault later, but that should be 120 * We also move ESR_ST into the _PAGE_DIRTY position 123 * MAS1 is preset for all we need except for TID that needs to 145 * We are entered with: 195 /* Now we build the MAS: 238 /* We need to check if it was an instruction miss */ [all …]
|
| /kernel/linux/linux-4.19/drivers/md/bcache/ |
| D | journal.h | 9 * never spans two buckets. This means (not implemented yet) we can resize the 15 * We also keep some things in the journal header that are logically part of the 20 * rewritten when we want to move/wear level the main journal. 22 * Currently, we don't journal BTREE_REPLACE operations - this will hopefully be 25 * moving gc we work around it by flushing the btree to disk before updating the 35 * We track this by maintaining a refcount for every open journal entry, in a 38 * zero, we pop it off - thus, the size of the fifo tells us the number of open 41 * We take a refcount on a journal entry when we add some keys to a journal 42 * entry that we're going to insert (held by struct btree_op), and then when we 43 * insert those keys into the btree the btree write we're setting up takes a [all …]
|
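A compact model of the open-entry FIFO these comments describe: one refcount per open journal entry, entries pinned while keys destined for them are still being inserted, and reclaim only ever from the front, in order. Structure and function names here are invented for the sketch:

```c
/* Minimal model of the open-journal-entry refcount FIFO described above.
 * The fifo length equals the number of entries that may still have
 * outstanding btree writes. */
#include <assert.h>
#include <stdio.h>

#define FIFO_SIZE 8

struct toy_journal {
	unsigned int refs[FIFO_SIZE];
	unsigned int front, back;   /* back - front == open entries */
};

static unsigned int entry_open(struct toy_journal *j)
{
	unsigned int idx = j->back++;

	j->refs[idx % FIFO_SIZE] = 0;
	return idx;
}

/* Adding keys destined for an entry pins it. */
static void entry_get(struct toy_journal *j, unsigned int idx)
{
	j->refs[idx % FIFO_SIZE]++;
}

/* When the btree write holding the pin completes, drop it and reclaim any
 * fully released entries from the front, strictly in order. */
static void entry_put(struct toy_journal *j, unsigned int idx)
{
	assert(j->refs[idx % FIFO_SIZE] > 0);
	j->refs[idx % FIFO_SIZE]--;
	while (j->front != j->back && j->refs[j->front % FIFO_SIZE] == 0) {
		printf("journal entry %u closed\n", j->front);
		j->front++;
	}
}

int main(void)
{
	struct toy_journal j = { 0 };
	unsigned int a = entry_open(&j), b = entry_open(&j);

	entry_get(&j, a);
	entry_get(&j, b);
	entry_put(&j, b);   /* b done first, but a still pins the front */
	entry_put(&j, a);   /* now both a and b are reclaimed, in order */
	return 0;
}
```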
| /kernel/linux/linux-5.10/drivers/md/bcache/ |
| D | journal.h | 9 * never spans two buckets. This means (not implemented yet) we can resize the 15 * We also keep some things in the journal header that are logically part of the 20 * rewritten when we want to move/wear level the main journal. 22 * Currently, we don't journal BTREE_REPLACE operations - this will hopefully be 25 * moving gc we work around it by flushing the btree to disk before updating the 35 * We track this by maintaining a refcount for every open journal entry, in a 38 * zero, we pop it off - thus, the size of the fifo tells us the number of open 41 * We take a refcount on a journal entry when we add some keys to a journal 42 * entry that we're going to insert (held by struct btree_op), and then when we 43 * insert those keys into the btree the btree write we're setting up takes a [all …]
|
| /kernel/linux/linux-5.10/fs/xfs/ |
| D | xfs_log_cil.c | 24 * recover, so we don't allow failure here. Also, we allocate in a context that 25 * we don't want to be issuing transactions from, so we need to tell the 28 * We don't reserve any space for the ticket - we are going to steal whatever 29 * space we require from transactions as they commit. To ensure we reserve all 30 * the space required, we need to set the current reservation of the ticket to 31 * zero so that we know to steal the initial transaction overhead from the 43 * set the current reservation to zero so we know to steal the basic in xlog_cil_ticket_alloc() 51 * After the first stage of log recovery is done, we know where the head and 52 * tail of the log are. We need this log initialisation done before we can 55 * Here we allocate a log ticket to track space usage during a CIL push. This [all …]
|
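The "steal space from committing transactions" scheme reads naturally as code. A hedged sketch, assuming a donation rule invented for illustration (the CIL ticket takes only what it still lacks, capped by what the committing transaction holds); this is not xfs_log_cil.c's actual accounting:

```c
/* Sketch: the CIL ticket starts with a zero reservation, and committing
 * transactions fund its overhead rather than the CIL reserving up front. */
#include <stdio.h>

struct toy_ticket {
	long reserved;   /* bytes this ticket currently holds */
};

/* cf. xlog_cil_ticket_alloc(): no space reserved up front. */
static struct toy_ticket cil_ticket_alloc(void)
{
	return (struct toy_ticket){ .reserved = 0 };
}

/* On commit, steal what the CIL still needs from the transaction's ticket
 * rather than failing: the transaction reserved generously at start. */
static void cil_commit(struct toy_ticket *cil, struct toy_ticket *tx,
		       long cil_needs)
{
	long steal = cil_needs - cil->reserved;

	if (steal > 0) {
		if (steal > tx->reserved)
			steal = tx->reserved;
		tx->reserved -= steal;
		cil->reserved += steal;
	}
	printf("cil=%ld tx=%ld\n", cil->reserved, tx->reserved);
}

int main(void)
{
	struct toy_ticket cil = cil_ticket_alloc();
	struct toy_ticket tx1 = { .reserved = 4096 };
	struct toy_ticket tx2 = { .reserved = 4096 };

	cil_commit(&cil, &tx1, 1024);  /* first commit funds the overhead */
	cil_commit(&cil, &tx2, 1024);  /* later commits steal nothing     */
	return 0;
}
```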
| D | xfs_log_priv.h | 63 * By covering, we mean changing the h_tail_lsn in the last on-disk 72 * might include space beyond the EOF. So if we just push the EOF a 80 * system is idle. We need two dummy transactions because the h_tail_lsn 92 * we are done covering previous transactions. 93 * NEED -- logging has occurred and we need a dummy transaction 95 * DONE -- we were in the NEED state and have committed a dummy 97 * NEED2 -- we detected that a dummy transaction has gone to the 99 * DONE2 -- we committed a dummy transaction when in the NEED2 state. 101 * There are two places where we switch states: 103 * 1.) In xfs_sync, when we detect an idle log and are in NEED or NEED2. [all …]
|
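The covering states listed here form a small state machine: two dummy transactions must reach disk before the log counts as covered. A sketch of the transitions as the comment narrates them; the enum shadows the XLOG_STATE_COVER_* names but the code itself is illustrative:

```c
/* Log-covering state machine sketched from the comment above. */
#include <stdbool.h>
#include <stdio.h>

enum cover_state {
	COVER_IDLE, COVER_NEED, COVER_DONE, COVER_NEED2, COVER_DONE2,
};

/* 1.) idle-log sync: commit a dummy transaction if one is needed. */
static enum cover_state on_idle_sync(enum cover_state s)
{
	if (s == COVER_NEED)
		return COVER_DONE;
	if (s == COVER_NEED2)
		return COVER_DONE2;
	return s;            /* all other states: do nothing */
}

/* 2.) finished writing the on-disk log. */
static enum cover_state on_log_write(enum cover_state s, bool was_dummy)
{
	if (!was_dummy)
		return COVER_NEED;     /* real logging restarts covering */
	if (s == COVER_DONE)
		return COVER_NEED2;    /* first dummy is on disk  */
	if (s == COVER_DONE2)
		return COVER_IDLE;     /* second dummy: covered   */
	return COVER_NEED;
}

int main(void)
{
	enum cover_state s = COVER_NEED;

	s = on_idle_sync(s);           /* NEED  -> DONE  */
	s = on_log_write(s, true);     /* DONE  -> NEED2 */
	s = on_idle_sync(s);           /* NEED2 -> DONE2 */
	s = on_log_write(s, true);     /* DONE2 -> IDLE  */
	printf("covered: %s\n", s == COVER_IDLE ? "yes" : "no");
	return 0;
}
```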
| /kernel/linux/linux-4.19/fs/xfs/ |
| D | xfs_log_cil.c | 27 * recover, so we don't allow failure here. Also, we allocate in a context that 28 * we don't want to be issuing transactions from, so we need to tell the 31 * We don't reserve any space for the ticket - we are going to steal whatever 32 * space we require from transactions as they commit. To ensure we reserve all 33 * the space required, we need to set the current reservation of the ticket to 34 * zero so that we know to steal the initial transaction overhead from the 47 * set the current reservation to zero so we know to steal the basic in xlog_cil_ticket_alloc() 55 * After the first stage of log recovery is done, we know where the head and 56 * tail of the log are. We need this log initialisation done before we can 59 * Here we allocate a log ticket to track space usage during a CIL push. This [all …]
|
| D | xfs_log_priv.h | 69 * By covering, we mean changing the h_tail_lsn in the last on-disk 78 * might include space beyond the EOF. So if we just push the EOF a 86 * system is idle. We need two dummy transactions because the h_tail_lsn 98 * we are done covering previous transactions. 99 * NEED -- logging has occurred and we need a dummy transaction 101 * DONE -- we were in the NEED state and have committed a dummy 103 * NEED2 -- we detected that a dummy transaction has gone to the 105 * DONE2 -- we committed a dummy transaction when in the NEED2 state. 107 * There are two places where we switch states: 109 * 1.) In xfs_sync, when we detect an idle log and are in NEED or NEED2. [all …]
|
| /kernel/linux/linux-4.19/net/ipv4/ |
| D | tcp_vegas.c | 14 * o We do not change the loss detection or recovery mechanisms of 18 * only every-other RTT during slow start, we increase during 21 * we use the rate at which ACKs come back as the "actual" 23 * o To speed convergence to the right rate, we set the cwnd 24 * to achieve the right ("actual") rate when we exit slow start. 25 * o To filter out the noise caused by delayed ACKs, we use the 54 /* There are several situations when we must "re-start" Vegas: 59 * o when we send a packet and there is no outstanding 62 * In these circumstances we cannot do a Vegas calculation at the 63 * end of the first RTT, because any calculation we do is using [all …]
|
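The rate comparison this comment lists is the heart of Vegas and fits in a few lines: estimate how many packets are sitting in queues from the gap between the expected rate (cwnd over the minimum RTT) and the actual rate (cwnd over the measured RTT), then hold that estimate between alpha and beta. A sketch of the published algorithm, assuming the measured RTT is at least the base RTT; this is not tcp_vegas.c verbatim:

```c
/* Core Vegas arithmetic: diff = cwnd * (rtt - base_rtt) / rtt estimates
 * the packets queued in the network. */
#include <stdio.h>

#define ALPHA 2   /* queued packets allowed before we grow cwnd      */
#define BETA  4   /* queued packets that force us to shrink cwnd     */

static unsigned int vegas_cwnd_update(unsigned int cwnd,
				      unsigned int base_rtt_us,
				      unsigned int rtt_us)
{
	/* assumes rtt_us >= base_rtt_us (base RTT is the minimum seen) */
	unsigned int diff = cwnd * (rtt_us - base_rtt_us) / rtt_us;

	if (diff < ALPHA)
		cwnd++;          /* the path has spare capacity       */
	else if (diff > BETA)
		cwnd--;          /* we are building a queue; back off */
	return cwnd;             /* between alpha and beta: hold      */
}

int main(void)
{
	/* base RTT 100 ms; a measured RTT of 125 ms with cwnd 20 means
	 * roughly 4 packets are queued, so cwnd holds steady. */
	printf("cwnd -> %u\n", vegas_cwnd_update(20, 100000, 125000));
	return 0;
}
```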
| /kernel/linux/linux-5.10/net/ipv4/ |
| D | tcp_vegas.c | 15 * o We do not change the loss detection or recovery mechanisms of 19 * only every-other RTT during slow start, we increase during 22 * we use the rate at which ACKs come back as the "actual" 24 * o To speed convergence to the right rate, we set the cwnd 25 * to achieve the right ("actual") rate when we exit slow start. 26 * o To filter out the noise caused by delayed ACKs, we use the 55 /* There are several situations when we must "re-start" Vegas: 60 * o when we send a packet and there is no outstanding 63 * In these circumstances we cannot do a Vegas calculation at the 64 * end of the first RTT, because any calculation we do is using [all …]
|
| /kernel/linux/linux-5.10/drivers/misc/vmw_vmci/ |
| D | vmci_route.c | 33 * which comes from the VMX, so we know it is coming from a in vmci_route() 36 * To avoid inconsistencies, test these once. We will test in vmci_route() 37 * them again when we do the actual send to ensure that we do in vmci_route() 49 * If this message already came from a guest then we in vmci_route() 57 * We must be acting as a guest in order to send to in vmci_route() 63 /* And we cannot send if the source is the host context. */ in vmci_route() 71 * then they probably mean ANY, in which case we in vmci_route() 87 * If it is not from a guest but we are acting as a in vmci_route() 88 * guest, then we need to send it down to the host. in vmci_route() 89 * Note that if we are also acting as a host then this in vmci_route() [all …]
|
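The routing decisions these fragments walk through condense into a small decision function. A sketch with invented names and simplified context-ID handling; the real vmci_route() covers many more cases:

```c
/* Condensed model of the routing rules the comments describe. */
#include <stdbool.h>
#include <stdio.h>

#define CTX_HYPERVISOR 0u
#define CTX_HOST       2u
#define CTX_ANY        ~0u

enum route { ROUTE_NONE, ROUTE_AS_GUEST, ROUTE_AS_HOST };

static enum route toy_route(unsigned int src_ctx, unsigned int dst_ctx,
			    bool from_guest, bool has_guest_device,
			    bool is_host)
{
	if (dst_ctx == CTX_HYPERVISOR) {
		/* Hypervisor-bound: must not already have come from a
		 * guest, we must be acting as a guest to send it, and the
		 * source may not be the host context. */
		if (from_guest || !has_guest_device || src_ctx == CTX_HOST)
			return ROUTE_NONE;
		return ROUTE_AS_GUEST;
	}

	if (dst_ctx == CTX_HOST || dst_ctx == CTX_ANY) {
		/* Not from a guest, but we are acting as a guest: send it
		 * down to the host through the guest device. An ANY
		 * destination probably means the host anyway. */
		if (!from_guest && has_guest_device)
			return ROUTE_AS_GUEST;
		if (is_host)
			return ROUTE_AS_HOST;
	}
	return ROUTE_NONE;
}

int main(void)
{
	printf("%d\n", toy_route(5, CTX_HYPERVISOR, false, true, false));
	return 0;
}
```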
| /kernel/linux/linux-4.19/arch/powerpc/kernel/ |
| D | machine_kexec_64.c | 45 * Since we use the kernel fault handlers and paging code to in default_machine_kexec_prepare() 46 * handle the virtual mode, we must make sure no destination in default_machine_kexec_prepare() 53 /* We also should not overwrite the tce tables */ in default_machine_kexec_prepare() 83 * We rely on kexec_load to create a list that properly in copy_segments() 85 * We will still crash if the list is wrong, but at least in copy_segments() 117 * After this call we may not use anything allocated in dynamic in kexec_copy_flush() 125 * we need to clear the icache for all dest pages sometime, in kexec_copy_flush() 142 mb(); /* make sure our irqs are disabled before we say they are */ in kexec_smp_down() 149 * Now every CPU has IRQs off, we can clear out any pending in kexec_smp_down() 165 /* Make sure each CPU has at least made it to the state we need. in kexec_prepare_cpus_wait() [all …]
|
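The destination check the first comments describe amounts to rejecting any kexec segment whose target range overlaps the running kernel (or other reserved areas such as the TCE tables). A minimal sketch of that overlap test, with an invented segment struct and invented bounds rather than the kernel's actual kexec structures:

```c
/* Sketch of the "no destination may overwrite the running kernel" check. */
#include <stdio.h>

struct toy_segment {
	unsigned long mem;     /* destination physical address */
	unsigned long memsz;
};

static int check_destinations(const struct toy_segment *segs, int nr,
			      unsigned long kstart, unsigned long kend)
{
	for (int i = 0; i < nr; i++) {
		unsigned long begin = segs[i].mem;
		unsigned long end = begin + segs[i].memsz;

		/* reject any segment that overlaps [kstart, kend) */
		if (begin < kend && end > kstart)
			return -1;
	}
	return 0;
}

int main(void)
{
	struct toy_segment segs[] = { { 0x2000000, 0x100000 } };

	printf("%s\n", check_destinations(segs, 1, 0x0, 0x1000000) == 0 ?
	       "ok" : "would overwrite kernel");
	return 0;
}
```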
| /kernel/linux/linux-5.10/arch/powerpc/kexec/ |
| D | core_64.c | 45 * Since we use the kernel fault handlers and paging code to in default_machine_kexec_prepare() 46 * handle the virtual mode, we must make sure no destination in default_machine_kexec_prepare() 53 /* We also should not overwrite the tce tables */ in default_machine_kexec_prepare() 83 * We rely on kexec_load to create a list that properly in copy_segments() 85 * We will still crash if the list is wrong, but at least in copy_segments() 117 * After this call we may not use anything allocated in dynamic in kexec_copy_flush() 125 * we need to clear the icache for all dest pages sometime, in kexec_copy_flush() 142 mb(); /* make sure our irqs are disabled before we say they are */ in kexec_smp_down() 149 * Now every CPU has IRQs off, we can clear out any pending in kexec_smp_down() 167 /* Make sure each CPU has at least made it to the state we need. in kexec_prepare_cpus_wait() [all …]
|
| /kernel/linux/linux-5.10/drivers/gpu/drm/i915/ |
| D | i915_request.c | 70 * We could extend the life of a context to beyond that of all in i915_fence_get_timeline_name() 72 * or we just give them a false name. Since in i915_fence_get_timeline_name() 118 * freed when the slab cache itself is freed, and so we would get in i915_fence_release() 127 * We do not hold a reference to the engine here and so have to be in i915_fence_release() 128 * very careful in what rq->engine we poke. The virtual engine is in i915_fence_release() 129 * referenced via the rq->context and we released that ref during in i915_fence_release() 130 * i915_request_retire(), ergo we must not dereference a virtual in i915_fence_release() 131 * engine here. Not that we would want to, as the only consumer of in i915_fence_release() 136 * we know that it will have been processed by the HW and will in i915_fence_release() 142 * power-of-two we assume that rq->engine may still be a virtual in i915_fence_release() [all …]
|
| /kernel/linux/linux-4.19/Documentation/filesystems/ |
| D | xfs-delayed-logging-design.txt | 25 That is, if we have a sequence of changes A through to F, and the object was 26 written to disk after change D, we would see in the log the following series 91 relogging technique XFS uses is that we can be relogging changed objects 92 multiple times before they are committed to disk in the log buffers. If we 98 contains all the changes from the previous changes. In other words, we have one 100 wasting space. When we are doing repeated operations on the same set of 103 log would greatly reduce the amount of metadata we write to the log, and this 110 formatting the changes in a transaction to the log buffer. Hence we cannot avoid 113 Delayed logging is the name we've given to keeping and tracking transactional 163 changes to the log buffers, we need to ensure that the object we are formatting [all …]
|
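A rough worked example of the saving described in this design document (numbers invented for illustration): with immediate logging, relogging a 4 KiB buffer that is modified by 1,000 back-to-back transactions formats the full 4 KiB into the log 1,000 times, roughly 4 MB of log traffic for one object. With delayed logging, each relog simply replaces the in-memory copy, and only the final version is formatted into the next checkpoint, so those 1,000 modifications can cost as little as one 4 KiB copy plus checkpoint overhead.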
| /kernel/linux/linux-5.10/fs/xfs/scrub/ |
| D | bitmap.c | 90 * @bitmap as the list of blocks that are not accounted for, which we assume 120 * Now that we've sorted both lists, we iterate bitmap once, rolling in xbitmap_disunion() 121 * forward through sub and/or bitmap as necessary until we find an in xbitmap_disunion() 122 * overlap or reach the end of either list. We do not reset lp to the in xbitmap_disunion() 123 * head of bitmap nor do we reset sub_br to the head of sub. The in xbitmap_disunion() 124 * list traversal is similar to merge sort, but we're deleting in xbitmap_disunion() 125 * instead. In this manner we avoid O(n^2) operations. in xbitmap_disunion() 134 * Advance sub_br and/or br until we find a pair that in xbitmap_disunion() 135 * intersects or we run out of extents. in xbitmap_disunion() 147 /* trim sub_br to fit the extent we have */ in xbitmap_disunion() [all …]
|
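The single-pass subtraction these comments describe works because both extent lists are sorted: one forward walk, never resetting to the head of either list, like the merge step of merge sort but deleting overlap instead of merging. A sketch over plain arrays (the kernel code walks linked lists and trims extents in place); all names here are invented:

```c
/* out = a minus b over sorted, non-overlapping extent lists, in one pass. */
#include <stdio.h>

struct ext { unsigned long start, len; };

static int disunion(const struct ext *a, int na,
		    const struct ext *b, int nb,
		    struct ext *out)
{
	int i = 0, j = 0, n = 0;
	unsigned long start = na ? a[0].start : 0;

	while (i < na) {
		unsigned long end = a[i].start + a[i].len;

		/* roll b forward until it could overlap [start, end) */
		while (j < nb && b[j].start + b[j].len <= start)
			j++;

		if (j == nb || b[j].start >= end) {
			/* no overlap: keep the rest of this extent */
			out[n++] = (struct ext){ start, end - start };
		} else {
			/* keep the piece before the overlap, if any */
			if (b[j].start > start)
				out[n++] = (struct ext){ start,
							 b[j].start - start };
			/* resume after the overlap, inside the same extent */
			if (b[j].start + b[j].len < end) {
				start = b[j].start + b[j].len;
				continue;
			}
		}
		if (++i < na)
			start = a[i].start;
	}
	return n;
}

int main(void)
{
	struct ext a[] = { { 0, 10 }, { 20, 10 } };
	struct ext b[] = { { 4, 2 }, { 25, 100 } };
	struct ext out[8];
	int n = disunion(a, 2, b, 2, out);

	for (int i = 0; i < n; i++)
		printf("[%lu, +%lu) ", out[i].start, out[i].len);
	printf("\n");   /* expect: [0, +4) [6, +4) [20, +5) */
	return 0;
}
```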
| /kernel/linux/linux-4.19/fs/xfs/scrub/ |
| D | bitmap.c | 95 * @bitmap as the list of blocks that are not accounted for, which we assume 125 * Now that we've sorted both lists, we iterate bitmap once, rolling in xfs_bitmap_disunion() 126 * forward through sub and/or bitmap as necessary until we find an in xfs_bitmap_disunion() 127 * overlap or reach the end of either list. We do not reset lp to the in xfs_bitmap_disunion() 128 * head of bitmap nor do we reset sub_br to the head of sub. The in xfs_bitmap_disunion() 129 * list traversal is similar to merge sort, but we're deleting in xfs_bitmap_disunion() 130 * instead. In this manner we avoid O(n^2) operations. in xfs_bitmap_disunion() 139 * Advance sub_br and/or br until we find a pair that in xfs_bitmap_disunion() 140 * intersects or we run out of extents. in xfs_bitmap_disunion() 152 /* trim sub_br to fit the extent we have */ in xfs_bitmap_disunion() [all …]
|
| /kernel/linux/linux-4.19/arch/x86/mm/ |
| D | mpx.c | 76 * The decoder _should_ fail nicely if we pass it a short buffer. in mpx_insn_decode() 77 * But, let's not depend on that implementation detail. If we in mpx_insn_decode() 85 * copy_from_user() tries to get as many bytes as we could see in in mpx_insn_decode() 86 * the largest possible instruction. If the instruction we are in mpx_insn_decode() 87 * after is shorter than that _and_ we attempt to copy from in mpx_insn_decode() 88 * something unreadable, we might get a short read. This is OK in mpx_insn_decode() 90 * instruction. Check to see if we got a partial instruction. in mpx_insn_decode() 97 * We only _really_ need to decode bndcl/bndcn/bndcu in mpx_insn_decode() 117 * Userspace could have, by the time we get here, written 118 * anything it wants in to the instructions. We can not [all …]
|
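The short-read logic described here is worth seeing in miniature: always ask for the longest possible instruction, tolerate a short copy, and fail only if the decoded instruction needed more bytes than the copy produced. The stand-in below fakes copy_from_user() with a readable-bytes cutoff, and instruction-length decoding is reduced to a constant, so everything beyond the check itself is an assumption:

```c
/* Miniature of the partial-instruction check from mpx_insn_decode(). */
#include <string.h>
#include <stdio.h>

#define MAX_INSN_SIZE 15   /* longest possible x86 instruction */

/* Pretend everything past `readable` bytes of `src` faults. Returns the
 * number of bytes NOT copied, like copy_from_user(). */
static size_t fake_copy_from_user(unsigned char *dst, const unsigned char *src,
				  size_t len, size_t readable)
{
	size_t ok = len < readable ? len : readable;

	memcpy(dst, src, ok);
	return len - ok;
}

static int decode_at(const unsigned char *src, size_t readable)
{
	unsigned char buf[MAX_INSN_SIZE];
	size_t not_copied = fake_copy_from_user(buf, src, sizeof(buf),
						readable);
	size_t got = sizeof(buf) - not_copied;
	size_t need = 4;   /* stand-in for the decoded instruction length */

	/* A short read is fine as long as the instruction we are after
	 * fits in what we got; otherwise it is a partial instruction. */
	if (need > got)
		return -1;
	return 0;
}

int main(void)
{
	unsigned char text[16] = { 0x0f, 0x1a, 0x00, 0x90 };  /* illustrative */

	printf("full page:    %d\n", decode_at(text, 16));
	printf("2 bytes left: %d\n", decode_at(text, 2));
	return 0;
}
```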
| /kernel/linux/linux-5.10/drivers/usb/dwc2/ |
| D | hcd_queue.c | 61 /* If we get a NAK, wait this long before retrying */ 150 * @num_bits: The number of bits we need per period we want to reserve 152 * @interval: How often we need to be scheduled for the reservation this 156 * the interval or we return failure right away. 157 * @only_one_period: Normally we'll allow picking a start anywhere within the 158 * first interval, since we can still make all repetition 160 * here then we'll return failure if we can't fit within 163 * The idea here is that we want to schedule time for repeating events that all 168 * To keep things "simple", we'll represent our schedule with a bitmap that 170 * but does mean that we need to handle things specially (and non-ideally) if [all …]
|
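The bitmap scheduling described in these parameter notes can be brute-forced in a few lines: a start offset is acceptable only if every repetition of it at the given interval is free across the whole schedule, and searching within the first interval suffices because any later start is a repetition of an earlier one. A sketch with an invented schedule length; the real pmap code handles alignment, partial periods, and the only_one_period restriction far more carefully:

```c
/* Reserve num_bits slots repeating every `interval` slots in a bitmap. */
#include <stdbool.h>
#include <stdio.h>

#define SCHED_SLOTS 64   /* illustrative schedule length */

static bool range_free(const unsigned char *map, int start, int len)
{
	for (int i = start; i < start + len; i++)
		if (map[i])
			return false;
	return true;
}

/* Returns the chosen start slot, or -1 if nothing fits. */
static int schedule_reserve(unsigned char *map, int num_bits, int interval)
{
	for (int start = 0; start + num_bits <= interval; start++) {
		bool ok = true;

		/* every repetition of this start must be free */
		for (int rep = start; rep + num_bits <= SCHED_SLOTS;
		     rep += interval)
			if (!range_free(map, rep, num_bits)) {
				ok = false;
				break;
			}
		if (!ok)
			continue;
		for (int rep = start; rep + num_bits <= SCHED_SLOTS;
		     rep += interval)
			for (int i = 0; i < num_bits; i++)
				map[rep + i] = 1;
		return start;
	}
	return -1;
}

int main(void)
{
	unsigned char map[SCHED_SLOTS] = { 0 };

	printf("first:  slot %d\n", schedule_reserve(map, 4, 16));
	printf("second: slot %d\n", schedule_reserve(map, 4, 16));
	return 0;
}
```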
| /kernel/linux/linux-4.19/drivers/usb/dwc2/ |
| D | hcd_queue.c | 61 /* If we get a NAK, wait this long before retrying */ 150 * @num_bits: The number of bits we need per period we want to reserve 152 * @interval: How often we need to be scheduled for the reservation this 156 * the interval or we return failure right away. 157 * @only_one_period: Normally we'll allow picking a start anywhere within the 158 * first interval, since we can still make all repetition 160 * here then we'll return failure if we can't fit within 163 * The idea here is that we want to schedule time for repeating events that all 168 * To keep things "simple", we'll represent our schedule with a bitmap that 170 * but does mean that we need to handle things specially (and non-ideally) if [all …]
|
| /kernel/linux/linux-4.19/drivers/misc/vmw_vmci/ |
| D | vmci_route.c | 41 * which comes from the VMX, so we know it is coming from a in vmci_route() 44 * To avoid inconsistencies, test these once. We will test in vmci_route() 45 * them again when we do the actual send to ensure that we do in vmci_route() 57 * If this message already came from a guest then we in vmci_route() 65 * We must be acting as a guest in order to send to in vmci_route() 71 /* And we cannot send if the source is the host context. */ in vmci_route() 79 * then they probably mean ANY, in which case we in vmci_route() 95 * If it is not from a guest but we are acting as a in vmci_route() 96 * guest, then we need to send it down to the host. in vmci_route() 97 * Note that if we are also acting as a host then this in vmci_route() [all …]
|
| /kernel/linux/linux-4.19/arch/x86/entry/ |
| D | entry_64.S | 76 * We need to change the IDT table before calling TRACE_IRQS_ON/OFF to 151 * fixed address). So we can't reference any symbols outside the entry 154 * Instead, we carefully abuse %rip-relative addressing. 156 * trampoline. We can thus find cpu_entry_area with this macro: 187 * x86 lacks a near absolute jump, and we can't jump to the real 188 * entry text with a relative jump. We could push the target 210 * We do not frame this tiny irq-off block with TRACE_IRQS_OFF/ON, 240 TRACE_IRQS_IRETQ /* we're about to change IF */ 243 * Try to use SYSRET instead of IRET if we're returning to 244 * a completely clean 64-bit userspace context. If we're not, [all …]
|