Searched full:flush (Results 1 – 25 of 125) sorted by relevance
/Documentation/arch/x86/
  D  tlb.rst
      10: 1. Flush the entire TLB with a two-instruction sequence. This is
      12: from areas other than the one we are trying to flush will be
      21: 1. The size of the flush being performed. A flush of the entire
      25: be no collateral damage caused by doing the global flush, and
      26: all of the individual flush will have ended up being wasted
      29: damage we do with a full flush. So, the larger the TLB, the
      30: more attractive an individual flush looks. Data and
      37: especially the contents of the TLB during a given flush. The
      38: sizes of the flush will vary greatly depending on the workload as
      48: This will cause us to do the global flush for more cases. [all …]

  D  pti.rst
      98: PCID support, the context switch code must flush both the user
      99: and kernel entries out of the TLB. The user PCID TLB flush is
     121: flushing a kernel address, we need to flush all PCIDs, so a
     122: single kernel address flush will require a TLB-flushing CR3
|
/Documentation/admin-guide/device-mapper/
  D  writecache.rst
      38: issuing the FLUSH request, the blocks are automatically
      42: committed if this time passes and no FLUSH request is
      50: flag when writing back data and send the FLUSH request
      91: 13. the number of flush requests
      95: flush
      96: Flush the cache device. The message returns successfully
      99: Flush the cache device on next suspend. Use this message

  D  delay.rst
      15: 3: apply offset and delay to read, write and flush operations on device
      18: to write and flush operations on optionally different write_device with
      35: # Create mapped device named "delayed" delaying read, write and flush operations for 500ms.
      42: # Create mapped device delaying write and flush operations for 400ms and

  D  log-writes.rst
      24: the FLUSH request completes we log all of the WRITEs and then the FLUSH. Only
      33: W3,W2,flush,W1....
      43: have all the DISCARD requests, and then the WRITE requests and then the FLUSH
      46: WRITE block 1, DISCARD block 1, FLUSH
      50: DISCARD 1, WRITE 1, FLUSH
|
/Documentation/block/
  D  stat.rst
      44: flush I/Os     requests      number of flush I/Os processed
      45: flush ticks    milliseconds  total wait time for flush requests
      53: flush I/Os
      56: These values increment when a flush I/O request completes.
      58: Block layer combines flush requests and executes at most one at a time.
      59: This counts flush requests executed by disk. Not tracked for partitions.
      75: read ticks, write ticks, discard ticks, flush ticks

  D  writeback_cache_control.rst
      17: a forced cache flush, and the Force Unit Access (FUA) flag for requests.
      29: flush without any dependent I/O. It is recommended to use
      30: the blkdev_issue_flush() helper for a pure cache flush.
      82: devices, and a global flush needs to be implemented for bios with the
|
/Documentation/admin-guide/hw-vuln/
  D  l1tf.rst
     145: - L1D Flush mode:
     150: 'L1D conditional cache flushes'  L1D flush is conditionally enabled
     152: 'L1D cache flushes'  L1D flush is unconditionally enabled
     170: 1. L1D flush on VMENTER
     187: The kernel provides two L1D flush modes:
     202: The general recommendation is to enable L1D flush on VMENTER. The kernel
     205: **Note**, that L1D flush does not prevent the SMT problem because the
     209: L1D flush can be controlled by the administrator via the kernel command
     345: line parameter in combination with L1D flush control. See
     375: SMT control and L1D flush control via the sysfs interface [all …]

  D  l1d_flush.rst
       6: mechanism to flush the L1D cache on context switch.
      34: When PR_SET_L1D_FLUSH is enabled for a task a flush of the L1D cache is
      44: The kernel command line allows to control the L1D flush mitigations at boot
|
/Documentation/devicetree/bindings/dma/xilinx/
  D  xilinx_dma.txt
      59: - xlnx,flush-fsync: Tells which channel to Flush on Frame sync.
      61: {1}, flush both channels
      62: {2}, flush mm2s channel
      63: {3}, flush s2mm channel
      96: xlnx,flush-fsync = <0x1>;
|
/Documentation/core-api/
  D  cachetlb.rst
      20: on a cpu (see mm_cpumask()), one need not perform a flush
      29: invoke one of the following flush methods _after_ the page table
      34: The most severe flush of all. After this interface runs,
     124: The cache level flush will always be first, because this allows
     299: flush here to handle D-cache aliasing, to make sure these kernel stores
     315: actual flush if there are currently no user processes mapping this
     323: of this flag bit, and if set the flush is done and the flag bit
     328: It is often important, if you defer the flush,
     329: that the actual flush occurs on the same CPU
     346: likely that you will need to flush the instruction cache [all …]
|
/Documentation/devicetree/bindings/arm/mstar/
  D  mstar,l3bridge.yaml
      19: The l3bridge region contains registers that allow such a flush
      23: are and install a barrier that triggers the required pipeline flush.
|
/Documentation/features/vm/TLB/
  D  arch-support.txt
       2: # Feature name: batch-unmap-tlb-flush
       4: # description: arch supports deferral of TLB flush until multiple pages are unmapped
|
/Documentation/devicetree/bindings/interrupt-controller/
  D  qca,ath79-cpu-intc.txt
       3: On most SoCs the IRQ controller needs to flush the DDR FIFO before running
      21: buffer flush
|
/Documentation/devicetree/bindings/memory-controllers/
  D  qca,ath79-ddr-controller.yaml
      14: flush the FIFO between various devices and the DDR. This is mainly used by
      15: the IRQ controller to flush the FIFO before running the interrupt handler of
|
/Documentation/ABI/testing/
  D  procfs-diskstats
      36: Kernel 5.5+ appends two more fields for flush requests:
      39: 19  flush requests completed successfully

  D  sysfs-bus-surface_aggregator-tabletsw
      19: part-ways, but does not lie flush with the back side of the
      24: lies flush with the back side of the device.

  D  sysfs-bus-coresight-devices-etb10
      72: Description: (Read) Shows the value held by the ETB Formatter and Flush Status
      80: Description: (Read) Shows the value held by the ETB Formatter and Flush Control
|
/Documentation/arch/riscv/
  D  cmodx.rst
      30: an icache flush, this deferred icache flush will be skipped as it is redundant.
      31: Therefore, there will be no additional flush when using the riscv_flush_icache()
|
/Documentation/devicetree/bindings/i2c/
  D  i2c-mux-pca954x.yaml
      80: maxim,send-flush-out-sequence:
      82: description: Send a flush-out sequence to stuck auxiliary buses
     125: maxim,send-flush-out-sequence: false
|
/Documentation/filesystems/xfs/
  D  xfs-delayed-logging-design.rst
     141: a "log force" to flush the outstanding committed transactions to stable storage
     307: the log at any given time. This allows the log to avoid needing to flush each
     459: trying to get the lock on object A to flush it to the log buffer. This appears
     479: Hence we avoid the need to lock items when we need to flush outstanding
     633: the CIL would look like this before the flush::
     653: And after the flush the CIL head is empty, and the checkpoint context log
     679: start, while the checkpoint flush code works over the log vector chain to
     736: that are currently committing to the log. When we flush a checkpoint, the
     758: is, we need to flush the CIL and potentially wait for it to complete. This is a
     760: and push if required. Indeed, placing the current sequence checkpoint flush in [all …]
|
/Documentation/driver-api/
  D  io_ordering.rst
       9: chipset to flush pending writes to the device before any reads are posted. A
      49: Here, the reads from safe_register will cause the I/O chipset to flush any
|
/Documentation/admin-guide/
  D  iostats.rst
     125: Field 16 -- # of flush requests completed
     126: This is the total number of flush requests completed successfully.
     128: Block layer combines flush requests and executes at most one at a time.
     129: This counts flush requests executed by disk. Not tracked for partitions.
     132: This is the total number of milliseconds spent by all flush requests.
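The flush counters described in the iostats.rst and procfs-diskstats hits above can be read from userspace. A minimal sketch in Python, assuming a 5.5+ kernel where each /proc/diskstats line carries major, minor, and device name followed by the stats fields, so iostats "Field 16" (flush requests completed) lands in column 19 and "Field 17" (flush time) in column 20; the helper name is hypothetical, not a kernel API:

```python
def parse_flush_stats(line):
    """Return (flush_ios_completed, flush_time_ms) for one /proc/diskstats line.

    Columns: 1=major, 2=minor, 3=device name, 4..18 the classic and
    discard counters, 19=flush requests completed, 20=time spent
    flushing (ms).  Returns None on pre-5.5 kernels, which emit fewer
    columns because the flush fields are absent.
    """
    fields = line.split()
    if len(fields) < 20:          # flush fields not present before kernel 5.5
        return None
    return int(fields[18]), int(fields[19])


# Illustrative line with made-up counter values:
sample = ("259 0 nvme0n1 "
          "100 2 800 30 50 1 400 20 0 25 55 "   # classic 11 fields
          "4 0 32 6 "                            # discard fields (4.18+)
          "7 12")                                # flush fields (5.5+)
print(parse_flush_stats(sample))                 # → (7, 12)
```

Note that, as the excerpt says, the block layer merges flush requests and issues at most one to the disk at a time, so this counter reflects flushes executed by the device, not flushes submitted by applications, and it is not tracked for partitions.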
|
/Documentation/translations/zh_CN/
  D  glossary.rst
      19: * flush: generally refers to a clean-out operation on a cache.
|
/Documentation/driver-api/md/
  D  raid5-cache.rst
      56: overhead too. Write-back cache will aggregate the data and flush the data to
      98: memory cache. If some conditions are met, MD will flush the data to RAID disks.
     101: release the memory cache. The flush conditions could be stripe becomes a full
|