/Documentation/devicetree/bindings/pinctrl/
D  sprd,pinctrl.txt
     12  to choose one function (like: UART0) for which system, since we
     15  There are too much various configuration that we can not list all
     16  of them, so we can not make every Spreadtrum-special configuration
     18  global configuration in future. Then we add one "sprd,control" to
     19  set these various global control configuration, and we need use
     22  Moreover we recognise every fields comprising one bit or several
     23  bits in one global control register as one pin, thus we should
     32  Now we have 4 systems for sleep mode on SC9860 SoC: AP system,
     42  In some situation we need set the pin sleep mode and pin sleep related
     45  sleep mode. For example, if we set the pin sleep mode as PUBCP_SLEEP
     [all …]
|
/Documentation/filesystems/
D  xfs-delayed-logging-design.txt
     25  That is, if we have a sequence of changes A through to F, and the object was
     26  written to disk after change D, we would see in the log the following series
     91  relogging technique XFS uses is that we can be relogging changed objects
     92  multiple times before they are committed to disk in the log buffers. If we
     98  contains all the changes from the previous changes. In other words, we have one
    100  wasting space. When we are doing repeated operations on the same set of
    103  log would greatly reduce the amount of metadata we write to the log, and this
    110  formatting the changes in a transaction to the log buffer. Hence we cannot avoid
    113  Delayed logging is the name we've given to keeping and tracking transactional
    163  changes to the log buffers, we need to ensure that the object we are formatting
     [all …]
|
D  directory-locking.rst
     10  When taking the i_rwsem on multiple non-directory objects, we
     11  always acquire the locks in order by increasing address. We'll call
     16  1) read access. Locking rules: caller locks directory we are accessing.
     29  lock it. If we need to lock both, lock them in inode pointer order.
     31  NB: we might get away with locking the the source (and target in exchange
     55  lock it. If we need to lock both, do so in inode pointer order.
     58  All ->i_rwsem are taken exclusive. Again, we might get away with locking
     69  First of all, at any moment we have a partial ordering of the
     75  attempts to acquire lock on B, A will remain the parent of B until we
     81  renames will be blocked on filesystem lock and we don't start changing
     [all …]
|
D  path-lookup.txt
     49  the path given by the name's starting point (which we know in advance -- eg.
     55  A parent, of course, must be a directory, and we must have appropriate
     79  In order to lookup a dcache (parent, name) tuple, we take a hash on the tuple
     81  in that bucket is then walked, and we do a full comparison of each entry
    148  However, when inserting object 2 onto a new list, we end up with this:
    161  Because we didn't wait for a grace period, there may be a concurrent lookup
    182  As explained above, we would like to do path walking without taking locks or
    188  than reloading from the dentry later on (otherwise we'd have interesting things
    192  no non-atomic stores to shared data), and to recheck the seqcount when we are
    194  Avoiding destructive or changing operations means we can easily unwind from
     [all …]
|
D  xfs-self-describing-metadata.txt
     28  However, if we scale the filesystem up to 1PB, we now have 10x as much metadata
     40  magic number in the metadata block, we have no other way of identifying what it
     41  is supposed to be. We can't even identify if it is the right place. Put simply,
     53  Hence we need to record more information into the metadata to allow us to
     55  of analysis. We can't protect against every possible type of error, but we can
     62  hence parse and verify the metadata object. IF we can't independently identify
     68  magic numbers. Hence we can change the on-disk format of all these objects to
     72  self identifying and we can do much more expansive automated verification of the
     76  integrity checking. We cannot trust the metadata if we cannot verify that it has
     77  not been changed as a result of external influences. Hence we need some form of
     [all …]
|
/Documentation/x86/
D  entry_64.rst
     58  so. If we mess that up even slightly, we crash.
     60  So when we have a secondary entry, already in kernel mode, we *must
     61  not* use SWAPGS blindly - nor must we forget doing a SWAPGS when it's
     87  If we are at an interrupt or user-trap/gate-alike boundary then we can
     89  whether SWAPGS was already done: if we see that we are a secondary
     90  entry interrupting kernel mode execution, then we know that the GS
     91  base has already been switched. If it says that we interrupted
     92  user-space execution then we must do the SWAPGS.
     94  But if we are in an NMI/MCE/DEBUG/whatever super-atomic entry context,
     96  stack but before we executed SWAPGS, then the only safe way to check
     [all …]
|
D  kernel-stacks.rst
    121  We always scan the full kernel stack for return addresses stored on
    125  If it fits into the frame pointer chain, we print it without a question
    128  If the address does not fit into our expected frame pointer chain we
    129  still print it, but we print a '?'. It can mean two things:
    136  up properly within the function, so we don't recognize it.
    138  This way we will always print out the real call chain (plus a few more
    140  or not - but in most cases we'll get the call chain right as well. The
    144  The most important property of this method is that we _never_ lose
    145  information: we always strive to print _all_ addresses on the stack(s)
    147  we still print out the real call chain as well - just with more question
     [all …]
|
D  intel_mpx.rst
     37  is how we expect the compiler, application and kernel to work together.
     54  expected to keep the bounds directory at that location. We note it
     65  6) Whenever memory is freed, we know that it can no longer contain valid
     66  pointers, and we attempt to free the associated space in the bounds
     67  tables. If an entire table becomes unused, we will attempt to free
    105  We hook #BR handler to handle these two new situations.
    131  are a few ways this could be done. We don't think any of them are practical
    134  :Q: Can virtual space simply be reserved for the bounds tables so that we
    139  even if we clean them up aggressively. In the worst-case scenario, the
    143  If we were to preallocate them for the 128TB of user virtual address
     [all …]
|
/Documentation/arm64/
D  perf.txt
     35  For a VHE host this attribute is ignored as we consider the host kernel to
     38  For a non-VHE host this attribute will exclude EL2 as we consider the
     56  Due to the overlapping exception levels between host and guests we cannot
     57  exclusively rely on the PMU's hardware exception filtering - therefore we
     61  For non-VHE systems we exclude EL2 for exclude_host - upon entering and
     62  exiting the guest we disable/enable the event as appropriate based on the
     65  For VHE systems we exclude EL1 for exclude_guest and exclude both EL0,EL2
     66  for exclude_host. Upon entering and exiting the guest we modify the event
     77  On non-VHE hosts we enable/disable counters on the entry/exit of host/guest
     79  enabling/disabling the counters and entering/exiting the guest. We are
|
/Documentation/powerpc/
D  pci_iov_resource_on_powernv.rst
     40  The following section provides a rough description of what we have on P8
     52  For DMA, MSIs and inbound PCIe error messages, we have a table (in
     55  We call this the RTT.
     57  - For DMA we then provide an entire address space for each PE that can
     63  - For MSIs, we have two windows in the address space (one at the top of
     87  32-bit PCIe accesses. We configure that window at boot from FW and
     91  reserved for MSIs but this is not a problem at this point; we just
     93  ignores that however and will forward in that space if we try).
    100  Now, this is the "main" window we use in Linux today (excluding
    101  SR-IOV). We basically use the trick of forcing the bridge MMIO windows
     [all …]
|
/Documentation/virt/kvm/
D  locking.txt
     31  tracking i.e. the SPTE_SPECIAL_MASK is set. That means we need to
     35  caused by write-protect. That means we just need to change the W bit of the
     38  What we use to avoid all the race is the SPTE_HOST_WRITEABLE bit and
     45  On fast page fault path, we will use cmpxchg to atomically set the spte W
     50  But we need carefully check these cases:
     52  The mapping from gfn to pfn may be changed since we can only ensure the pfn
     79  We dirty-log for gfn1, that means gfn2 is lost in dirty-bitmap.
     81  For direct sp, we can easily avoid it since the spte of direct sp is fixed
     82  to gfn. For indirect sp, before we do cmpxchg, we call gfn_to_pfn_atomic()
     84  - We have held the refcount of pfn that means the pfn can not be freed and
     [all …]
|
/Documentation/networking/
D  fib_trie.txt
     33  verify that they actually do match the key we are searching for.
     59  We have tried to keep the structure of the code as close to fib_hash as
     68  fib_find_node(). Inserting a new node means we might have to run the
    103  slower than the corresponding fib_hash function, as we have to walk the
    119  The lookup is in its simplest form just like fib_find_node(). We descend the
    120  trie, key segment by key segment, until we find a leaf. check_leaf() does
    123  If we find a match, we are done.
    125  If we don't find a match, we enter prefix matching mode. The prefix length,
    127  and we backtrack upwards through the trie trying to find a longest matching
    133  the child index until we find a match or the child index consists of nothing but
     [all …]
|
/Documentation/process/
D  kernel-enforcement-statement.rst
      6  As developers of the Linux kernel, we have a keen interest in how our software
     12  contributions made to our community, we share an interest in ensuring that
     16  actions, we agree that it is in the best interests of our development
     20  Notwithstanding the termination provisions of the GPL-2.0, we agree that
     41  software. We want companies and individuals to use, modify and distribute
     42  this software. We want to work with users in an open and transparent way to
     44  enforcement that might limit adoption of our software. We view legal action
     48  Finally, once a non-compliance issue is resolved, we hope the user will feel
     49  welcome to join us in our efforts on this project. Working together, we will
     52  Except where noted below, we speak only for ourselves, and not for any company
     [all …]
|
/Documentation/power/
D  freezing-of-tasks.rst
     22  we only consider hibernation, but the description also applies to suspend).
     33  it loop until PF_FROZEN is cleared for it. Then, we say that the task is
     80  - freezes all tasks (including kernel threads) because we can't freeze
     84  - thaws only kernel threads; this is particularly useful if we need to do
     86  userspace tasks, or if we want to postpone the thawing of userspace tasks
     89  - thaws all tasks (including kernel threads) because we can't thaw userspace
    101  IV. Why do we do that?
    107  hibernation. At the moment we have no simple means of checkpointing
    109  metadata on disks, we cannot bring them back to the state from before the
    114  usually making them almost impossible to repair). We therefore freeze
     [all …]
|
/Documentation/devicetree/bindings/i2c/
D  i2c-arb-gpio-challenge.txt
     21  others can see. These are all active low with pull-ups enabled. We'll
     24  - OUR_CLAIM: output from us signaling to other hosts that we want the bus
     31  Let's say we want to claim the bus. We:
     35  3. Check THEIR_CLAIMS. If none are asserted then the we have the bus and we are
     44  - our-claim-gpio: The GPIO that we use to claim the bus.
     51  - wait-retry-us: we'll attempt another claim after this many microseconds.
     53  - wait-free-us: we'll give up after this many microseconds. Default is 50000 us.
|
/Documentation/RCU/
D  rculist_nulls.txt
     23  * reuse these object before the RCU grace period, we
     26  if (obj->key != key) { // not the object we expected
     73  We need to make sure a reader cannot read the new 'obj->obj_next' value
     87  * we need to make sure obj->key is updated before obj->next
     98  Nothing special here, we can use a standard RCU hlist deletion.
    112  With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup()
    115  For example, if we choose to store the slot number as the 'nulls'
    116  end-of-list marker for each slot of the hash table, we can detect
    120  is not the slot number, then we must restart the lookup at
    135  if (obj->key != key) { // not the object we expected
     [all …]
|
/Documentation/gpu/
D  vkms.rst
     29  We want to be able to reconfigure vkms instance without having to reload the
     35  - Configure planes/crtcs/connectors (we'd need some code to have more than 1 of
     47  There's lots of plane features we could add support for:
     61  For all of these, we also want to review the igt test coverage and make sure all
     67  Currently vkms only computes a CRC for each frame. Once we have additional plane
     68  features, we could write back the entire composited frame, and expose it as:
     79  We already have vgem, which is a gem driver for testing rendering, similar to
     90  sharing support, so that we can use vgem fences to simulate rendering in
    106  we could add support for eBPF to validate any kind of atomic state, and
|
/Documentation/x86/x86_64/
D  5level-paging.rst
     10  space and 64 TiB of physical address space. We are already bumping into
     42  To mitigate this, we are not going to allocate virtual address space
     48  If hint address set above 47-bit, but MAP_FIXED is not specified, we try
     50  occupied, we look for unmapped area in *full* address space, rather than
     64  One important case we need to handle here is interaction with MPX.
     65  MPX (without MAWA extension) cannot handle addresses above 47-bit, so we
     66  need to make sure that MPX cannot be enabled we already have VMA above
|
/Documentation/block/
D  inline-encryption.rst
     10  We want to support inline encryption (IE) in the kernel.
     11  To allow for testing, we also want a crypto API fallback when actual
     12  IE hardware is absent. We also want IE to work with layered devices
     13  like dm and loopback (i.e. we want to be able to use the IE hardware
     25  that specified keyslot. When possible, we want to make multiple requests with
     28  - We need a way for filesystems to specify an encryption context to use for
     32  - We need a way for device drivers to expose their capabilities in a unified
     39  We add a struct bio_crypt_ctx to struct bio that can represent an
     40  encryption context, because we need to be able to pass this encryption
     47  We introduce a keyslot manager (KSM) that handles the translation from
     [all …]
|
D  deadline-iosched.rst
     20  service time for a request. As we focus mainly on read latencies, this is
     49  When we have to move requests from the io scheduler queue to the block
     50  device dispatch queue, we always give a preference to reads. However, we
     52  how many times we give preference to reads over writes. When that has been
     53  done writes_starved number of times, we dispatch some writes based on the
     68  that comes at basically 0 cost we leave that on. We simply disable the
|
/Documentation/devicetree/bindings/usb/
D  generic.txt
      4  - maximum-speed: tells USB controllers we want to work up to a certain
      9  - dr_mode: tells Dual-Role USB controllers that we want to work on a
     14  - phy_type: tells USB controllers that we want to configure the core to support
     26  - hnp-disable: tells OTG controllers we want to disable OTG HNP, normally HNP
     29  - srp-disable: tells OTG controllers we want to disable OTG SRP, SRP is
     31  - adp-disable: tells OTG controllers we want to disable OTG ADP, ADP is
|
/Documentation/ia64/
D  efirtc.rst
     20  driver. We describe those calls as well the design of the driver in the
     31  Because we wanted to minimize the impact on existing user-level apps using
     32  the CMOS clock, we decided to expose an API that was very similar to the one
     40  The Epoch is January 1st 1998. For backward compatibility reasons we don't
     41  expose this new way of representing time. Instead we use something very
     50  As of today we don't offer a /proc/sys interface.
     53  we have created the include/linux/rtc.h header file to contain only the
    108  RTC which is some kind of interval timer alarm. For this reason we don't use
    109  the same ioctl()s to get access to the service. Instead we have
    112  We have added 2 new ioctl()s that are specific to the EFI driver:
|
/Documentation/vm/
D  overcommit-accounting.rst
     74  * We account mmap memory mappings
     75  * We account mprotect changes in commit
     76  * We account mremap changes in size
     77  * We account brk
     78  * We account munmap
     79  * We report the commit status in /proc
|
D  active_mm.rst
     26  - we have "real address spaces" and "anonymous address spaces". The
     28  user-level page tables at all, so when we do a context switch into an
     29  anonymous address space we just leave the previous address space
     44  - however, we obviously need to keep track of which address space we
     45  "stole" for such an anonymous user. For that, we have "tsk->active_mm",
     81  we have a user context", and is generally done by the page fault handler
|
/Documentation/admin-guide/LSM/
D  tomoyo.rst
     32  Materials we prepared for seminars and symposiums are available at
     57  We believe that inode based security and name based security are complementary
     58  and both should be used together. But unfortunately, so far, we cannot enable
     59  multiple LSM modules at the same time. We feel sorry that you have to give up
     62  We hope that LSM becomes stackable in future. Meanwhile, you can use non-LSM
     64  LSM version of TOMOYO is a subset of non-LSM version of TOMOYO. We are planning
|