
Searched full:writes (Results 1 – 25 of 314) sorted by relevance


/Documentation/driver-api/
io_ordering.rst:2 Ordering I/O writes to memory-mapped addresses
6 platforms, driver writers are responsible for ensuring that I/O writes to
9 chipset to flush pending writes to the device before any reads are posted. A
12 subsequent writes to I/O space arrived only after all prior writes (much like a
50 pending writes before actually posting the read to the chipset, preventing
device-io.rst:76 are burned by the fact that PCI bus writes are posted asynchronously. A
78 writes have occurred in the specific cases the author cares. This kind
107 outstanding DMA writes from that bus, since for some devices the result of
110 next readb() call has no relation to any previous DMA writes
176 Note that posted writes are not strictly ordered against a spinlock, see
305 * Uncached - CPU-side caches are bypassed, and all reads and writes are handled
313 * No repetition - The CPU may not issue multiple reads or writes for a single
316 being issued to the device, and multiple writes are not combined into larger
317 writes. This may or may not be enforced when using __raw I/O accessors or
323 On many platforms and buses (e.g. PCI), writes issued through ioremap()
[all …]
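
The io_ordering.rst and device-io.rst matches above all describe the same rule: PCI and similar bus writes are posted, so a driver that needs an MMIO write to reach the device before another CPU can issue its own I/O must read back from the device to flush it. A minimal sketch of that read-back pattern follows; the device structure, register offsets and lock are hypothetical placeholders, not taken from any real driver:

    #include <linux/io.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>

    struct my_dev {                         /* hypothetical device */
            void __iomem *base;
            spinlock_t lock;
    };

    #define MY_RING_PTR 0x10                /* hypothetical register offsets */
    #define MY_SAFE_REG 0x00

    static void my_dev_kick(struct my_dev *dev, u32 newval)
    {
            unsigned long flags;

            spin_lock_irqsave(&dev->lock, flags);
            writel(newval, dev->base + MY_RING_PTR);
            /*
             * Read back from a harmless register before dropping the lock.
             * The read cannot complete until the chipset has flushed the
             * posted write above, so the next CPU to take the lock cannot
             * get its own write to the device ahead of this one.
             */
            (void)readl(dev->base + MY_SAFE_REG);
            spin_unlock_irqrestore(&dev->lock, flags);
    }

The device-io.rst line 176 quoted above makes the same point from the other side: posted writes are not strictly ordered against a spinlock by themselves, which is exactly why the read-back is needed before the unlock.
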
/Documentation/filesystems/
fuse-io.rst:17 In direct-io mode the page cache is completely bypassed for reads and writes.
23 after any writes to the file. All mmap modes are supported.
25 The cached mode has two sub modes controlling how writes are handled. The
32 uncached, but fully written pages). No READ requests are ever sent for writes,
35 In writeback-cache mode (enabled by the FUSE_WRITEBACK_CACHE flag) writes go to
/Documentation/driver-api/md/
raid5-cache.rst:19 In both modes, all writes to the array will hit cache disk first. This means
28 disks and it's possible the writes don't hit all RAID disks yet before the
53 write. For non-full-stripe writes, MD must read old data before the new parity
54 can be calculated. These synchronous reads hurt write throughput. Some writes
90 order in which MD writes data to cache disk and RAID disks. Specifically, in
91 write-through mode, MD calculates parity for IO data, writes both IO data and
92 parity to the log, writes the data and parity to RAID disks after the data and
96 In write-back mode, MD writes IO data to the log and reports IO completion. The
110 they are discarded too. MD then loads valid data and writes them to RAID disks
/Documentation/ABI/testing/
sysfs-block-bcache:55 Sum of all reads and writes that have bypassed the cache (due
64 writes will be buffered in the cache. When off, caching is in
65 writethrough mode; reads and writes will be added to the
74 used to buffer writes until it is mostly full, at which point
75 writes transparently revert to writethrough mode. Intended only
94 place and reducing total number of writes sent to the backing
102 switched on and off. In synchronous mode all writes are ordered
104 if disabled bcache will not generally wait for writes to
156 For a cache, sum of all btree writes in human readable units.
debugfs-msi-wmi-platform:9 at file offset 0. Partial writes or writes at a different offset are not
sysfs-devices-platform-docg3:10 writes or both.
27 writes or both.
sysfs-class-mei:23 The ME FW writes its status information into fw status
55 Set maximal number of pending writes
90 The ME FW writes Glitch Detection HW (TRC)
sysfs-class-net-grcan:8 and writes the "Enable 0" bit of the configuration register.
20 and writes the "Enable 1" bit of the configuration register.
procfs-diskstats:17 8 writes completed
18 9 writes merged
/Documentation/admin-guide/device-mapper/
log-writes.rst:2 dm-log-writes
24 the FLUSH request completes we log all of the WRITEs and then the FLUSH. Only
25 completed WRITEs, at the time the REQ_PREFLUSH is issued, are added in order to
59 log-writes <dev_path> <log_dev_path>
93 Every log has a mark at the end labeled "dm-log-writes-end".
99 It can be found here: https://github.com/josefbacik/log-writes
107 TABLE="0 $(blockdev --getsz /dev/sdb) log-writes /dev/sdb /dev/sdc"
127 TABLE="0 $(blockdev --getsz /dev/sdb) log-writes /dev/sdb /dev/sdc"
writecache.rst:5 The writecache target caches writes on persistent memory or on SSD. It
37 when the application writes this amount of blocks without
58 new writes (however, writes to already cached blocks are
60 writes) and it will gradually writeback any cached
delay.rst:5 Device-Mapper's "delay" target delays reads and/or writes
43 # splitting reads to device $1 but writes and flushes to different device $2
51 # Create mapped device delaying reads for 50ms, writes for 100ms and flushes for 333ms
/Documentation/admin-guide/
iostats.rst:68 Field 2 -- # of reads merged, field 6 -- # of writes merged (unsigned long)
69 Reads and writes which are adjacent to each other may be merged for
81 Field 5 -- # of writes completed (unsigned long)
82 This is the total number of writes completed successfully.
84 Field 6 -- # of writes merged (unsigned long)
91 This is the total number of milliseconds spent by all writes (as
168 Field 3 -- # of writes issued
169 This is the total number of writes issued to this partition.
183 reads/writes before merges for partitions and after for disks. Since a
185 the number of reads/writes issued can be several times higher than the
[all …]
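
The iostats.rst fields quoted above map onto the columns of /proc/diskstats: each line begins with the major number, minor number and device name, and field 1 onwards follows, so field 5 (writes completed) is the eighth whitespace-separated column, matching the procfs-diskstats entry above. A small user-space sketch that prints the write counters described by those fields (a rough illustration, not a replacement for iostat):

    #include <stdio.h>

    int main(void)
    {
            unsigned int major, minor;
            char name[64];
            unsigned long long rd_ios, rd_merges, rd_sectors, rd_ticks;
            unsigned long long wr_ios, wr_merges, wr_sectors, wr_ticks;
            FILE *f = fopen("/proc/diskstats", "r");

            if (!f)
                    return 1;
            /* Fields 1-8 from iostats.rst follow the major/minor/name columns. */
            while (fscanf(f, "%u %u %63s %llu %llu %llu %llu %llu %llu %llu %llu%*[^\n]",
                          &major, &minor, name,
                          &rd_ios, &rd_merges, &rd_sectors, &rd_ticks,
                          &wr_ios, &wr_merges, &wr_sectors, &wr_ticks) == 11) {
                    printf("%-10s writes completed=%llu merged=%llu sectors=%llu ms=%llu\n",
                           name, wr_ios, wr_merges, wr_sectors, wr_ticks);
            }
            fclose(f);
            return 0;
    }

As the iostats.rst lines above also note, reads and writes are counted before merges for partitions and after merges for disks, so the issued counts for a partition can run several times higher than the per-disk numbers.
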
bcache.rst:20 designed to avoid random writes at all costs; it fills up an erase block
27 writes as completed until they're on stable storage).
29 Writeback caching can use most of the cache for buffering writes - writing
138 - For writethrough writes, if the write to the cache errors we just switch to
142 - For writeback writes, we currently pass that error back up to the
383 The default is 2000 us (2 milliseconds) for reads, and 20000 for writes.
399 Solution: warm the cache by doing writes, or use the testing branch (there's
496 Amount of IO (both reads and writes) that has bypassed the cache
556 Journal writes will delay for up to this many milliseconds, unless a cache
610 Minimum granularity of writes - should match hardware sector size.
[all …]
/Documentation/block/
kyber-iosched.rst:6 reads and synchronous writes. Kyber will throttle requests in order to meet
15 Target latency for synchronous writes (in nanoseconds).
deadline-iosched.rst:29 Similar to read_expire mentioned above, but for writes.
51 don't want to starve writes indefinitely either. So writes_starved controls
52 how many times we give preference to reads over writes. When that has been
53 done writes_starved number of times, we dispatch some writes based on the
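
The deadline-iosched.rst lines above describe how the scheduler keeps writes from starving: reads are preferred, but once they have been preferred writes_starved times while writes are waiting, a batch of writes is dispatched. A schematic sketch of that bookkeeping (illustrative only, not the real mq-deadline code or data structures):

    enum data_dir { DIR_READ, DIR_WRITE };

    struct dd_sketch {
            int starved;            /* times writes have been passed over */
            int writes_starved;     /* tunable limit before forcing writes */
    };

    static enum data_dir pick_dir(struct dd_sketch *dd,
                                  int reads_pending, int writes_pending)
    {
            if (reads_pending) {
                    /* Writes have waited long enough: dispatch them now. */
                    if (writes_pending && dd->starved >= dd->writes_starved) {
                            dd->starved = 0;
                            return DIR_WRITE;
                    }
                    if (writes_pending)
                            dd->starved++;  /* reads win again this time */
                    return DIR_READ;
            }
            dd->starved = 0;
            return DIR_WRITE;
    }
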
/Documentation/litmus-tests/rcu/
RCU+sync+free.litmus:7 * follows a grace period, if it did not see writes that precede that grace
11 * period assigns a pointer, and the writes following the grace period destroy
/Documentation/ABI/removed/
sysfs-selinux-checkreqprot:25 but will discard writes of the "0" value and will reject writes of the
/Documentation/filesystems/ext4/
allocators.rst:13 effect of concentrating writes on a single erase block, which can speed
22 speculation is correct (typically the case for full writes of small
25 Under this scheme, when a file needs more blocks to absorb file writes,
/Documentation/admin-guide/nfs/
nfsd-admin-interfaces.rst:29 or down by additional writes to nfsd/threads or by writes to
/Documentation/device-mapper/
dm-bow.txt:19 State 1: All writes to the device cause the underlying data to be backed up to
21 However, the writes, with one exception, then happen exactly as they would
25 isn't enough free space, writes are failed with -ENOSPC.
61 writes from the true sector zero are redirected to. Note that like any backup
/Documentation/driver-api/rapidio/
mport_cdev.rst:27 - Reads and writes from/to configuration registers of mport devices
29 - Reads and writes from/to configuration registers of remote RapidIO devices.
30 These operations are defined as RapidIO Maintenance reads/writes in the RIO spec.
42 port-writes or both (RIO_SET_EVENT_MASK/RIO_GET_EVENT_MASK)
/Documentation/cdrom/
packet-writing.rst:49 shall implement "true random writes with 2KB granularity", which means
59 host to perform aligned writes at 32KB boundaries. Other drives do
61 writes are not 32KB aligned.
64 generates aligned writes::
/Documentation/devicetree/bindings/timer/
arm,arch_timer.yaml:75 affects writes to the tval register, due to the implicit counter read.
81 by back-to-back reads. This also affects writes to the tval register, due
88 return a value 32 beyond the correct value. This also affects writes to
