/kernel/linux/linux-5.10/block/partitions/
  Kconfig
    8: Say Y here if you would like to use hard disks under Linux which
    29: Say Y here if you would like to use hard disks under Linux which
    42: Say Y here if you would like to use hard disks under Linux which
    75: Say Y here if you would like to be able to read the hard disk
    87: Say Y here if you would like to use hard disks under Linux which
    94: Say Y here if you would like to use hard disks under Linux which
    101: Say Y here if you would like to use hard disks under Linux which
    108: Say Y here if you would like to be able to read the hard disk
    116: Say Y here if you would like to use hard disks under Linux which
    180: Say Y here if you would like to use hard disks under Linux which
    [all …]

/kernel/linux/linux-5.10/Documentation/filesystems/
  directory-locking.rst
    77: the parent of object and it would have to lock the parent).
    110: Otherwise the set of contended objects would be infinite - each of them
    111: would have a contended child and we had assumed that no object is its
    117: would again have an infinite set of contended objects). But that
    128: source), such loop would have to contain these objects and the rest of it
    129: would have to exist before rename(). I.e. at the moment of loop creation
    130: rename() responsible for that would be holding filesystem lock and new parent
    131: would have to be equal to or a descendent of source. But that means that
    133: we had acquired filesystem lock and rename() would fail with -ELOOP in that
    139: also preserved by all operations (cross-directory rename on a tree that would
    [all …]

  ocfs2-online-filecheck.rst
    13: necessary, since turning the filesystem read-only would affect other running
    15: Then, a mount option (errors=continue) is introduced, which would return the
    34: the offline fsck should/would be recommended.
    43: by the inode number which caused the error. This inode number would be the
    51: mounted. The file above would accept inode numbers. This could be used to
    91: On receiving the inode, the filesystem would read the inode and the
    92: file metadata. In case of errors, the filesystem would fix the errors
    97: small linked list buffer which would contain the last (N) inodes

  fsverity.rst
    35: subject to the caveat that reads which would violate the hash will
    60: accessed on a particular device. It would be slow and wasteful to
    254: would circumvent the data verification.
    481: are marked Uptodate. Merely hooking ``->read_iter()`` would be
    548: direct I/O would bypass fs-verity. (They also do the same for
    618: then you could simply do sha256(file) instead. That would be much
    623: first read. However, it would be inefficient because every time a
    650: :A: Write support would be very difficult and would require a
    652: fs-verity. Write support would require:
    662: - Rebuilding the Merkle tree after every write, which would be
    [all …]

/kernel/linux/linux-5.10/Documentation/w1/masters/
  ds2490.rst
    32: was added to the API. The name is just a suggestion. It would take
    52: clear the entire bulk in buffer. It would be possible to read the
    60: with a OHCI controller, ds2490 running in the guest would operate
    64: would fail. qemu sets a 50ms timeout and the bulk in would timeout
    65: even when the status shows data available. A bulk out write would
    66: show a successful completion, but the ds2490 status register would
    68: reattaching would clear the problem. usbmon output in the guest and

/kernel/linux/linux-5.10/Documentation/RCU/
  UP.rst
    26: from softirq, the list scan would find itself referencing a newly freed
    47: its arguments would cause it to fail to make the fundamental guarantee
    61: call_rcu() were to directly invoke the callback, the result would
    64: In some cases, it would possible to restructure to code so that
    69: the same critical section, then the code would need to create
    81: or API changes would be required.
    127: the process-context critical section. This would result in
    141: simply immediately returned, it would prematurely signal the
    142: end of the grace period, which would come as a nasty shock to

  lockdep-splat.rst
    78: which would permit us to invoke rcu_dereference_protected as follows::
    83: With this change, there would be no lockdep-RCU splat emitted if this
    85: or with the ->queue_lock held. In particular, this would have suppressed
    104: read-side critical section, which again would have suppressed the
    115: this change would also suppress the above lockdep-RCU splat.

/kernel/linux/linux-5.10/Documentation/bpf/
  ringbuf.rst
    27: would solve the second problem automatically.
    36: One way would be to, similar to ``BPF_MAP_TYPE_PERF_EVENT_ARRAY``, make
    38: enforce "same CPU only" rule. This would be more familiar interface compatible
    39: with existing perf buffer use in BPF, but would fail if application needed more
    42: Additionally, given the performance of BPF ringbuf, many use cases would just
    44: approach would be an overkill.
    48: with lookup/update/delete operations. This approach would add a lot of extra
    50: would also add another concept that BPF developers would have to familiarize
    51: themselves with, new syntax in libbpf, etc. But then would really provide no
    60: ring buffer for all CPUs, it's as simple and straightforward, as would be with
    [all …]

/kernel/linux/linux-5.10/Documentation/networking/
  snmp_counter.rst
    44: multicast packets, and would always be updated together with
    137: would be increased even if the ICMP packet has an invalid type. The
    139: IcmpOutMsgs would still be updated if the IP header is constructed by
    207: IcmpMsgOutType8 would increase 1. And if kernel gets an ICMP Echo Reply
    208: packet, IcmpMsgInType0 would increase 1.
    215: IcmpInMsgs would be updated but none of IcmpMsgInType[N] would be updated.
    225: counters would be updated. The receiving packet path use IcmpInErrors
    227: is increased, IcmpInErrors would always be increased too.
    263: packets would be delivered to the TCP layer, but the TCP layer will discard
    266: counter would only increase 1.
    [all …]

/kernel/linux/linux-5.10/Documentation/firmware-guide/acpi/
  osi.rst
    73: The ACPI BIOS flow would include an evaluation of _OS, and the AML
    74: interpreter in the kernel would return to it a string identifying the OS:
    86: of every possible version of the OS that would run on it, and needed to know
    87: all the quirks of those OS's. Certainly it would make more sense
    94: that anybody would install those old operating systems
    107: eg. _OSI("3.0 Thermal Model") would return TRUE if the OS knows how
    109: An old OS that doesn't know about those extensions would answer FALSE,
    124: and its successors. To do otherwise would virtually guarantee breaking
    159: which would increment, based on the version of the spec supported.
    161: Unfortunately, _REV was also misused. eg. some BIOS would check

/kernel/linux/linux-5.10/drivers/net/wireless/intel/iwlwifi/cfg/
  22000.c
    322: * HT size; mac80211 would otherwise pick the HE max (256) by default.
    334: * HT size; mac80211 would otherwise pick the HE max (256) by default.
    384: * HT size; mac80211 would otherwise pick the HE max (256) by default.
    397: * HT size; mac80211 would otherwise pick the HE max (256) by default.
    410: * HT size; mac80211 would otherwise pick the HE max (256) by default.
    422: * HT size; mac80211 would otherwise pick the HE max (256) by default.
    435: * HT size; mac80211 would otherwise pick the HE max (256) by default.
    448: * HT size; mac80211 would otherwise pick the HE max (256) by default.
    460: * HT size; mac80211 would otherwise pick the HE max (256) by default.
    474: * HT size; mac80211 would otherwise pick the HE max (256) by default.
    [all …]

/kernel/linux/linux-5.10/Documentation/scsi/
  lpfc.rst
    36: the LLDD would simply be queued for a short duration, allowing the device
    38: to the system. If the driver did not hide these conditions, i/o would be
    39: errored by the driver, the mid-layer would exhaust its retries, and the
    40: device would be taken offline. Manual intervention would be required to

/kernel/linux/linux-5.10/Documentation/admin-guide/device-mapper/
  log-writes.rst
    31: The log would show the following:
    36: cases where a power failure at a particular point in time would create an
    42: Any REQ_OP_DISCARD requests are treated like WRITE requests. Otherwise we would
    48: If we logged DISCARD when it completed, the replay would look like this:
    82: we're fsck'ing something reasonable, you would do something like
    89: This would allow you to replay the log up to the mkfs mark and
    104: Say you want to test fsync on your file system. You would do something like

/kernel/linux/linux-5.10/Documentation/admin-guide/LSM/
  SafeSetID.rst
    36: program would still need CAP_SETUID to do any kind of transition, but the
    37: additional restrictions imposed by this LSM would mean it is a "safer" version
    53: For candidate applications that would like to have restricted setid capabilities
    54: as implemented in this LSM, an alternative option would be to simply take away
    58: number of semantics around process spawning that would be affected by this, such
    63: userspace would likely be less appealing to incorporate into existing projects
    68: Another possible approach would be to run a given process tree in its own user

/kernel/linux/linux-5.10/drivers/block/paride/
  Transition-notes
    63: the thread was holding pd_lock and found pd_busy not set, which would
    73: we would have to be called for the PIA that got ->claimed_cont
    85: But that code does not reset pd_busy, so pd_busy would have to be
    87: we were acquiring the lock, (1) would be already false, since
    88: the thread that had reset it would be in the area simulateously.
    89: If it was 0 before we tried to acquire pd_lock, (2) would be
    108: point, we would have violated either (2.1) (if it was set while ps_set_intr()

/kernel/linux/linux-5.10/Documentation/
  Kconfig
    19: written, it would be possible that some of those files would
    20: have errors that would break them for being parsed by

  atomic_t.txt
    70: There was a bug in UBSAN prior to GCC-8 that would generate UB warnings for
    110: In this case we would expect the atomic_set() from CPU1 to either happen
    111: before the atomic_add_unless(), in which case that latter one would no-op, or
    150: - misc; the special purpose operations that are commonly used and would,
    259: (void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
    260: because it would not order the W part of the RMW against the following

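The atomic_t.txt hits above (file lines 110-111) concern atomic_add_unless() racing with a concurrent atomic_set(). As a loose illustration of the add-unless pattern only -- a hypothetical userspace sketch in C11 atomics, not the kernel's atomic_t API -- it can be written as follows::

    /*
     * Hypothetical userspace sketch: the add-unless pattern done with C11
     * atomics. This is NOT the kernel implementation; add_unless() is a
     * made-up name for this illustration.
     */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool add_unless(atomic_int *v, int a, int u)
    {
        int old = atomic_load(v);

        do {
            if (old == u)
                return false;        /* excluded value seen: no-op */
        } while (!atomic_compare_exchange_weak(v, &old, old + a));

        return true;                 /* the add took effect */
    }

    int main(void)
    {
        atomic_int v = 0;

        printf("%d\n", add_unless(&v, 1, 0));   /* 0: v was 0, skipped */
        atomic_store(&v, 5);
        printf("%d\n", add_unless(&v, 1, 0));   /* 1: v is now 6 */
        return 0;
    }
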
/kernel/linux/linux-5.10/Documentation/userspace-api/media/
  gen-errors.rst
    25: is also returned when the ioctl would need to wait for an event,
    36: change something that would affect the stream, or would require
    69: that this request would overcommit the usb bandwidth reserved for

/kernel/linux/linux-5.10/drivers/leds/
  TODO
    50: RGB LEDs are quite common, and it would be good to be able to turn LED
    67: It would be also nice to have useful listing mode -- name, type,
    70: In future, it would be good to be able to set rgb led to particular
    74: ethernet interface would be nice.

/kernel/linux/linux-5.10/Documentation/core-api/
  unaligned-memory-access.rst
    25: reading 4 bytes of data from address 0x10005 would be an unaligned memory
    56: to architecture. It would be easy to write a whole document on the differences
    94: starting at address 0x10000. With a basic level of understanding, it would
    95: not be unreasonable to expect that accessing field2 would cause an unaligned
    101: above case it would insert 2 bytes of padding in between field1 and field2.
    116: where padding would otherwise be inserted, and hence reduce the overall
    126: For a natural alignment scheme, the compiler would only have to add a single
    172: Think about what would happen if addr1 was an odd address such as 0x10003.
    218: To avoid the unaligned memory access, you would rewrite it as follows::

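The unaligned-memory-access.rst hits above mention 2 bytes of padding being inserted between field1 and field2 and the hazard of dereferencing an odd address such as 0x10003. The stand-alone sketch below only illustrates those two points under a typical ABI; it is not the document's own example, struct layout is compiler-dependent, and the names struct sample and read_u32() are invented::

    /*
     * Rough, ABI-dependent illustration -- not the kernel documentation's
     * example code.
     */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <stdio.h>

    struct sample {
        uint16_t field1;
        /* a typical ABI inserts 2 bytes of padding here so that
         * field2 starts on a 4-byte boundary */
        uint32_t field2;
    };

    static uint32_t read_u32(const unsigned char *buf, size_t off)
    {
        uint32_t v;

        /* Copy the bytes instead of dereferencing
         * (uint32_t *)(buf + off), which would be an unaligned
         * access whenever off is odd (e.g. 3). */
        memcpy(&v, buf + off, sizeof(v));
        return v;
    }

    int main(void)
    {
        unsigned char buf[16] = { 1, 2, 3, 4, 5, 6, 7, 8 };

        printf("offsetof(field2) = %zu\n", offsetof(struct sample, field2));
        printf("u32 at offset 3  = 0x%x\n", (unsigned int)read_u32(buf, 3));
        return 0;
    }
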
/kernel/linux/linux-5.10/Documentation/block/
  biovecs.rst
    11: More specifically, old code that needed to partially complete a bio would
    13: ended up partway through a biovec, it would increment bv_offset and decrement
    85: It used to be the case that submitting a partially completed bio would work
    87: norm, not all drivers would respect bi_idx and those would break. Now,
    96: where previously you would have used bi_idx you'd now use a bvec_iter,

/kernel/linux/linux-5.10/Documentation/scheduler/
  sched-nice-design.rst
    19: rule so that nice +19 level would be _exactly_ 1 jiffy. To better
    39: So that if someone wanted to really renice tasks, +19 would give a much
    40: bigger hit than the normal linear rule would do. (The solution of
    47: millisec) rescheduling. (and would thus trash the cache, etc. Remember,
    78: and another task with +2, the CPU split between the two tasks would

/kernel/linux/linux-5.10/Documentation/locking/
  futex-requeue-pi.rst
    7: left without an owner if it has waiters; doing so would break the PI
    20: implementation would wake the highest-priority waiter, and leave the
    55: upon a successful futex_wait system call, the caller would return to
    57: would be modified as follows::
    93: acquire the rt_mutex as it would open a race window between the

/kernel/linux/linux-5.10/drivers/gpu/drm/exynos/
  exynos_drm_gem.h
    22: * - a new handle to this gem object would be created
    35: * P.S. this object would be transferred to user as kms_bo.handle so
    73: * with this function call, gem object reference count would be increased.
    80: * gem object reference count would be decreased.

/kernel/linux/linux-5.10/tools/memory-model/Documentation/
  explanation.txt
    157: Some predictions are trivial. For instance, no sane memory model would
    170: unconditionally then we would instead have r1 = 0 and r2 = 1.)
    173: If this were to occur it would mean the driver contains a bug, because
    174: incorrect data would get sent to the user: 0 instead of 1. As it
    203: would be 1 since a load obtains its value from the most recent
    213: Consistency memory model; doing so would rule out too many valuable
    266: Z ordered before X, because this would mean that X is ordered before
    278: if those accesses would form a cycle, then the memory model predicts
    410: Given this version of the code, the LKMM would predict that the load
    620: If the final value stored in x after this code ran was 17, you would
    [all …]

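Several explanation.txt hits above (file lines 170-174) describe a writer publishing data plus a ready flag, and a reader that could otherwise observe the flag yet still read stale data ("0 instead of 1"). Below is a userspace sketch of that message-passing shape using C11 acquire/release atomics and pthreads; the kernel memory model's own examples use READ_ONCE()/WRITE_ONCE() and kernel barriers, so treat this purely as an analogy (build with -pthread)::

    /*
     * Illustrative only: writer publishes buf, then sets flag with release
     * ordering; reader spins on flag with acquire ordering, so seeing
     * flag == 1 guarantees it also sees buf == 1.
     */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static int buf;            /* the data being published */
    static atomic_int flag;    /* "buf is ready" indicator */

    static void *writer(void *arg)
    {
        (void)arg;
        buf = 1;                                                /* data first */
        atomic_store_explicit(&flag, 1, memory_order_release);  /* then flag  */
        return NULL;
    }

    static void *reader(void *arg)
    {
        (void)arg;
        while (!atomic_load_explicit(&flag, memory_order_acquire))
            ;   /* wait until the flag is observed */

        /* With the acquire/release pairing this prints buf = 1, never the
         * stale 0 the document warns about for unordered accesses. */
        printf("buf = %d\n", buf);
        return NULL;
    }

    int main(void)
    {
        pthread_t w, r;

        pthread_create(&r, NULL, reader, NULL);
        pthread_create(&w, NULL, writer, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;
    }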