Lines Matching refs:i
56 2. New flexible and generic but minimalist i/o structure or descriptor
57 (instead of using buffer heads at the i/o layer)
96 capabilities to the maximum extent for better i/o performance. This is
103 Sophisticated devices with large built-in caches, intelligent i/o scheduling
115 i. Per-queue limits/values exported to the generic layer by the driver
117 Various parameters that the generic i/o scheduler logic uses are set at
170 ii. High-mem i/o capabilities are now considered the default
173 by default copyin/out i/o requests on high-memory buffers to low-memory buffers
176 for which the device cannot handle i/o. A driver can specify this by
179 where a device is capable of handling high memory i/o.
181 In order to enable high-memory i/o where the device is capable of supporting
193 Special handling is required only for cases where i/o needs to happen on
196 is used for performing the i/o with copyin/copyout as needed depending on
198 data read has to be copied to the original buffer on i/o completion, so a
222 routine on its own to bounce highmem i/o to low memory for specific requests
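
As a rough sketch of the bounce handling these lines describe (assuming the 2.5/2.6-era helpers blk_queue_bounce_limit() and blk_queue_bounce() named in the original document; the my_driver_* routines are made-up placeholders and constants/signatures may differ between kernel versions):

    #include <linux/blkdev.h>

    /* Device can DMA to any address: say so at queue setup time, so the
     * block layer never bounces highmem i/o for this queue by default. */
    static void my_driver_setup(request_queue_t *q)
    {
            blk_queue_bounce_limit(q, BLK_BOUNCE_ANY);
    }

    /* A driver that must touch the data through a kernel mapping (e.g. a
     * PIO path) can instead bounce selected bios down to low memory. */
    static void my_driver_pio_submit(request_queue_t *q, struct bio *bio)
    {
            blk_queue_bounce(q, &bio);  /* may substitute a bounced copy */
            /* ... proceed with the (possibly substituted) bio ... */
    }
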
225 iii. The i/o scheduler algorithm itself can be replaced/set as appropriate
227 As in 2.4, it is possible to plug in a brand new i/o scheduler for a particular
230 of the i/o scheduler. There are more pluggable callbacks, e.g. for init,
232 i/o scheduling algorithm aspects and details outside of the generic loop.
234 the i/o scheduler from block drivers.
241 i. Application capabilities for raw i/o
244 requirements where an application prefers to make its own i/o scheduling
245 decisions based on an understanding of the access patterns and i/o
251 Kernel components like filesystems could also take their own i/o scheduling
253 some control over i/o ordering.
258 from above, e.g. indicating that an i/o is just a readahead request, or for
266 There is a way to enforce strict ordering for i/os through barriers.
273 A flag in the bio structure, BIO_BARRIER, is used to identify a barrier i/o.
274 The generic i/o scheduler would make sure that it places the barrier request and
284 control (high/med/low) over the priority of an i/o request vs other pending
299 to the device bypassing some of the intermediate i/o layers.
304 it possible to perform bottom up validation of the i/o path, layer by
307 The normal i/o submission interfaces, e.g. submit_bio, could be bypassed
323 bio segments or uses the block layer end*request* functions for i/o
356 in the command bytes. (i.e. rq->cmd is now 16 bytes in size, and meant for
368 called by elv_next_request(), i.e. typically just before servicing a request.
373 Pre-building could possibly even be done early, i.e. before placing the
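
A hedged sketch of the command pre-building idea in the lines above, written as a 2.6-era prep_rq_fn hook (invoked by the block layer from elv_next_request()); the READ(10)/WRITE(10) byte layout is only an illustrative choice and my_prep_rq_fn is a placeholder, not code from the original document:

    #include <linux/blkdev.h>
    #include <linux/string.h>

    static int my_prep_rq_fn(request_queue_t *q, struct request *rq)
    {
            u32 lba = rq->sector;           /* truncated view; sketch only */
            u16 len = rq->nr_sectors;

            if (!blk_fs_request(rq))        /* only translate normal fs requests */
                    return BLKPREP_OK;

            /* rq->cmd is 16 bytes; build the command bytes in place. */
            memset(rq->cmd, 0, sizeof(rq->cmd));
            rq->cmd[0] = (rq_data_dir(rq) == READ) ? 0x28 : 0x2a;
            rq->cmd[2] = lba >> 24;
            rq->cmd[3] = lba >> 16;
            rq->cmd[4] = lba >> 8;
            rq->cmd[5] = lba;
            rq->cmd[7] = len >> 8;
            rq->cmd[8] = len;
            return BLKPREP_OK;              /* ready to be serviced */
    }

    /* Registered once at init time: blk_queue_prep_rq(q, my_prep_rq_fn); */
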
383 2. Flexible and generic but minimalist i/o structure/descriptor.
387 Prior to 2.5, buffer heads were used as the unit of i/o at the generic block
389 buffer heads for a contiguous i/o request. This led to certain inefficiencies
390 when it came to large i/o requests and readv/writev style operations, as it
392 on to the generic block layer, only to be merged by the i/o scheduler
393 when the underlying device was capable of handling the i/o in one shot.
394 Also, using the buffer head as an i/o structure for i/os that didn't originate
399 redesign of the block i/o data structure in 2.5.
401 i. Should be appropriate as a descriptor for both raw and buffered i/o -
402 avoid cache related fields which are irrelevant in the direct/page i/o path,
404 for raw i/o.
407 iii. Ability to represent large i/os w/o unnecessarily breaking them up (i.e.
409 iv. At the same time, ability to retain independent identity of i/os from
410 different sources or i/o units requiring individual completion (e.g. for
412 v. Ability to represent an i/o involving multiple physical memory segments
426 is uniformly used for all i/o at the block layer; it forms a part of the
427 bh structure for buffered i/o, and in the case of raw/direct i/o kiobufs are
433 of <page, offset, len> to describe the i/o buffer, and has various other
434 fields describing i/o parameters and state that needs to be maintained for
435 performing the i/o.
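
For reference, the <page, offset, len> tuple these lines refer to is the bio_vec, and below is a trimmed view of the bio structure as it looked in the bio code of that era (exact field widths and the full field list vary by kernel version):

    struct bio_vec {
            struct page     *bv_page;       /* page the segment lives in */
            unsigned int    bv_len;         /* length of the segment in bytes */
            unsigned int    bv_offset;      /* offset of the segment within the page */
    };

    struct bio {
            sector_t        bi_sector;      /* device address in 512-byte sectors */
            struct bio      *bi_next;       /* request queue link */
            struct block_device *bi_bdev;
            unsigned long   bi_rw;          /* read/write and hint bits */
            unsigned short  bi_vcnt;        /* number of bio_vec entries */
            unsigned short  bi_idx;         /* current index into bi_io_vec */
            unsigned int    bi_size;        /* residual i/o count in bytes */
            struct bio_vec  *bi_io_vec;     /* the <page, offset, len> vector */
            bio_end_io_t    *bi_end_io;     /* called on completion of the whole bio */
            void            *bi_private;    /* owner-private cookie */
            /* ... flags, reference count, destructor, etc. elided ... */
    };
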
473 - Large i/os can be sent down in one go using a bio_vec list consisting
476 - Splitting of an i/o request across multiple devices (as in the case of
493 bi_end_io() i/o callback gets called on i/o completion of the entire bio.
505 which in turn means that only raw I/O uses it (direct i/o may not work
577 available. Some bits are used by the block layer or i/o scheduler.
582 be one of the many segments in the current bio (i.e. i/o completion unit).
587 end_that_request_first, i.e. every time the driver completes a part of the
596 of the i/o buffer in cases where the buffer resides in low-memory. For high
597 memory i/o, this field is not valid and must not be used by drivers.
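
To illustrate the restriction in the last two lines: a driver that needs a kernel virtual address for a segment (a PIO path, say) should map the page rather than rely on a precomputed low-memory address. A minimal sketch, with copy_segment_out() being a made-up helper:

    #include <linux/bio.h>
    #include <linux/highmem.h>
    #include <linux/string.h>

    static void copy_segment_out(struct bio_vec *bvec, void *dst)
    {
            char *src = kmap(bvec->bv_page);    /* valid for low and high memory */

            memcpy(dst, src + bvec->bv_offset, bvec->bv_len);
            kunmap(bvec->bv_page);
    }
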
638 amount of time (in the case of bio, that would be after the i/o is completed).
640 case i/o) must already be in progress and memory would be available when it
655 to i/o submission, if the bio fields are likely to be accessed after the
656 i/o is issued (since the bio may otherwise get freed in case i/o completion
660 shares the bio_vec_list with the original bio (i.e. both point to the
661 same bio_vec_list). This would typically be used for splitting i/o requests
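
Putting several of the points above together, a minimal, hedged sketch of building and submitting a single-segment bio with the 2.5/2.6-era interfaces (bio_alloc from the bio mempool, an extra reference held across submission because the submitter touches the bio afterwards, and the three-argument bi_end_io convention of that era; read_one_page and my_end_io are made-up names):

    #include <linux/bio.h>
    #include <linux/blkdev.h>
    #include <linux/completion.h>

    static int my_end_io(struct bio *bio, unsigned int bytes_done, int err)
    {
            if (bio->bi_size)               /* partial completion, more to come */
                    return 1;
            complete((struct completion *)bio->bi_private);
            return 0;
    }

    static void read_one_page(struct block_device *bdev, struct page *page,
                              sector_t sector)
    {
            DECLARE_COMPLETION(done);
            struct bio *bio = bio_alloc(GFP_NOIO, 1);   /* backed by a mempool */

            bio->bi_bdev = bdev;
            bio->bi_sector = sector;
            bio->bi_io_vec[0].bv_page = page;
            bio->bi_io_vec[0].bv_len = PAGE_SIZE;
            bio->bi_io_vec[0].bv_offset = 0;
            bio->bi_vcnt = 1;
            bio->bi_size = PAGE_SIZE;
            bio->bi_end_io = my_end_io;
            bio->bi_private = &done;

            bio_get(bio);                   /* we look at the bio after submission */
            submit_bio(READ, bio);
            wait_for_completion(&done);
            bio_put(bio);                   /* drop the extra reference */
    }
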
694 the i/o hardware can handle, based on various queue properties.
700 DMA remapping (hw_segments) (i.e. IOMMU aware limits).
705 hw data segments in a request (i.e. the maximum number of address/length
709 of physical data segments in a request (i.e. the largest sized scatter list
715 end_that_request_first and end_that_request_last can be used for i/o
716 completion (and setting things up so the rest of the i/o or the next
725 segments and do not support i/o into high memory addresses (require bounce
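
A short sketch of how a driver exports these per-queue limits at init time, using the 2.6-era blk_queue_* helpers referred to above (the specific numbers are invented for illustration):

    #include <linux/blkdev.h>

    static void my_export_queue_limits(request_queue_t *q)
    {
            blk_queue_max_sectors(q, 255);          /* largest request, in sectors */
            blk_queue_max_phys_segments(q, 64);     /* entries in the physical scatter list */
            blk_queue_max_hw_segments(q, 64);       /* address/length pairs after IOMMU merging */
            blk_queue_max_segment_size(q, 65536);   /* largest single segment, in bytes */
            blk_queue_segment_boundary(q, 0xffff);  /* no segment may cross this address mask */
            blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH); /* bounce i/o above low memory */
    }
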
842 The routine submit_bio() is used to submit a single i/o. Higher level i/o
845 (a) Buffered i/o:
849 (b) Kiobuf i/o (for raw/direct i/o):
852 perform the i/o on each of these.
856 blocks array as well, but it's currently in there to kludge around direct i/o.]
863 So right now it wouldn't work for direct i/o on non-contiguous blocks.
867 Badari Pulavarty has a patch to implement direct i/o correctly using
871 (c) Page i/o:
886 (d) Direct access i/o:
892 Kvec i/o:
908 to continue to use the vector descriptor (kvec) after i/o completes. Instead,
929 A block layer call to the i/o scheduler follows the convention elv_xxx(). This
1004 i. add_req_fn -> (merged_fn ->)* -> dispatch_fn -> activate_req_fn ->
1012 The generic i/o scheduler algorithm attempts to sort/merge/batch requests for
1015 i. improved throughput
1021 i. Binary tree
1022 AS and deadline i/o schedulers use red-black binary trees for disk position
1044 support is anticipated for 2.5. Also with a priority-based i/o scheduler,
1047 Plugging is an approach that the current i/o scheduling algorithm resorts to so
1079 for an example of usage in an i/o scheduler.
1093 granularity). The locking semantics are the same, i.e. locking is
1120 so the i/o scheduler also gets to operate on whole disk sector numbers. This
1154 etc per queue now. Drivers that used to define their own merge functions i
1184 - orig kiobuf & raw i/o patches (now in 2.4 tree)
1185 - direct kiobuf based i/o to devices (no intermediate bh's)
1186 - page i/o using kiobuf
1192 8.5. Direct i/o implementation (Andrea Arcangeli) since 2.4.10-pre11
1193 8.6. Async i/o implementation patch (Ben LaHaise)
1201 8.12. Multiple block-size transfers for faster raw i/o (Shailabh Nagar,
1203 8.13 Priority based i/o scheduler - prepatches (Arjan van de Ven)
1204 8.14 IDE Taskfile i/o patch (Andre Hedrick)
1206 8.16 Direct i/o patches for 2.5 using kvec and bio (Badari Pulavarty)