
Lines Matching full:i

18 September 2003: Updated I/O Scheduler portions
59 - Highmem I/O support
60 - I/O scheduler modularization
66 2. New flexible and generic but minimalist i/o structure or descriptor
67 (instead of using buffer heads at the i/o layer)
76 3.2.3 I/O completion
79 3.3 I/O submission
80 4. The I/O scheduler
105 capabilities to the maximum extent for better i/o performance. This is
113 Sophisticated devices with large built-in caches, intelligent i/o scheduling
125 i. Per-queue limits/values exported to the generic layer by the driver
127 Various parameters that the generic i/o scheduler logic uses are set at
141 Enable I/O to highmem pages, dma_address being the
180 ii. High-mem i/o capabilities are now considered the default
183 by default copyin/out i/o requests on high-memory buffers to low-memory buffers
186 for which the device cannot handle i/o. A driver can specify this by
189 where a device is capable of handling high memory i/o.
191 In order to enable high-memory i/o where the device is capable of supporting
203 Special handling is required only for cases where i/o needs to happen on
206 is used for performing the i/o with copyin/copyout as needed depending on
208 data read has to be copied to the original buffer on i/o completion, so a
231 the blk_queue_bounce() routine on its own to bounce highmem i/o to low
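The bounce-buffer idea described above (copy a high-memory buffer to a low-memory one the device can reach, and copy back on read completion) can be sketched in userspace C. This is a toy illustration, not the kernel's `blk_queue_bounce()`; the struct and function names are invented for the sketch.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins: `orig` plays the role of a highmem buffer the
 * device cannot reach, `bounce` the lowmem copy the device actually uses. */
struct bounce_io {
    char *orig;      /* caller's (highmem) buffer */
    char *bounce;    /* lowmem bounce buffer */
    size_t len;
};

/* For a write: copy the caller's data down before issuing the i/o. */
static void bounce_write_prepare(struct bounce_io *b, char *orig, size_t len)
{
    b->orig = orig;
    b->len = len;
    b->bounce = malloc(len);
    memcpy(b->bounce, orig, len);   /* copyout to low memory */
}

/* For a read: the device fills the bounce buffer; on completion the
 * data must be copied back to the original buffer. */
static void bounce_read_complete(struct bounce_io *b)
{
    memcpy(b->orig, b->bounce, b->len);  /* copyin on i/o completion */
    free(b->bounce);
    b->bounce = NULL;
}
```

A driver that can address high memory directly skips this copy entirely, which is why the bounce path is reserved for devices that cannot.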
234 iii. The i/o scheduler algorithm itself can be replaced/set as appropriate
236 As in 2.4, it is possible to plugin a brand new i/o scheduler for a particular
239 of the i/o scheduler. There are more pluggable callbacks, e.g for init,
241 i/o scheduling algorithm aspects and details outside of the generic loop.
243 the i/o scheduler from block drivers.
245 I/O scheduler wrappers are to be used instead of accessing the queue directly.
246 See section 4. The I/O scheduler for details.
251 i. Application capabilities for raw i/o
254 requirements where an application prefers to make its own i/o scheduling
255 decisions based on an understanding of the access patterns and i/o
261 Kernel components like filesystems could also take their own i/o scheduling
263 some control over i/o ordering.
268 from above e.g indicating that an i/o is just a readahead request, or priority
279 control (high/med/low) over the priority of an i/o request vs other pending
296 to the device bypassing some of the intermediate i/o layers.
301 it possible to perform bottom up validation of the i/o path, layer by
304 The normal i/o submission interfaces, e.g submit_bio, could be bypassed
320 bio segments or uses the block layer end*request* functions for i/o
339 <JENS: I don't understand the above, why is end_that_request_first() not
340 usable? Or _last for that matter. I must be missing something>
342 <SUP: What I meant here was that if the request doesn't have a bio, then
349 _last works OK in this case, and is not a problem, as I mentioned earlier
357 in the command bytes. (i.e rq->cmd is now 16 bytes in size, and meant for
369 called by elv_next_request(), i.e. typically just before servicing a request.
374 Pre-building could possibly even be done early, i.e before placing the
384 2. Flexible and generic but minimalist i/o structure/descriptor
390 Prior to 2.5, buffer heads were used as the unit of i/o at the generic block
392 buffer heads for a contiguous i/o request. This led to certain inefficiencies
393 when it came to large i/o requests and readv/writev style operations, as it
395 on to the generic block layer, only to be merged by the i/o scheduler
396 when the underlying device was capable of handling the i/o in one shot.
397 Also, using the buffer head as an i/o structure for i/os that didn't originate
402 redesign of the block i/o data structure in 2.5.
404 1. Should be appropriate as a descriptor for both raw and buffered i/o -
405 avoid cache related fields which are irrelevant in the direct/page i/o path,
407 for raw i/o.
410 3. Ability to represent large i/os w/o unnecessarily breaking them up (i.e
412 4. At the same time, ability to retain independent identity of i/os from
413 different sources or i/o units requiring individual completion (e.g. for
415 5. Ability to represent an i/o involving multiple physical memory segments
429 is uniformly used for all i/o at the block layer; it forms a part of the
430 bh structure for buffered i/o, and in the case of raw/direct i/o kiobufs are
437 of <page, offset, len> to describe the i/o buffer, and has various other
438 fields describing i/o parameters and state that needs to be maintained for
439 performing the i/o.
453 * main unit of I/O for the block layer and lower layers (ie drivers)
476 - Large i/os can be sent down in one go using a bio_vec list consisting
479 - Splitting of an i/o request across multiple devices (as in the case of
498 bi_end_io() i/o callback gets called on i/o completion of the entire bio.
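The per-bio (rather than per-segment) completion model noted above can be sketched with a simplified struct. The field names echo the bio, but this is a toy userspace model, not kernel code.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified bio: a vector of segments plus one completion callback
 * that fires only when the whole bio is done. */
struct toy_bio_vec { void *bv_page; unsigned bv_len; unsigned bv_offset; };

struct toy_bio {
    struct toy_bio_vec *bi_io_vec;      /* segment array */
    int bi_vcnt;                        /* number of segments */
    unsigned bi_size;                   /* bytes remaining */
    void (*bi_end_io)(struct toy_bio *);
};

/* Driver completes `bytes` of the bio; the callback runs exactly once,
 * when the whole bio is done -- per-bio, not per-segment. */
static void toy_bio_endio(struct toy_bio *bio, unsigned bytes)
{
    bio->bi_size -= bytes;
    if (bio->bi_size == 0 && bio->bi_end_io)
        bio->bi_end_io(bio);
}

/* Demo callback counting completions. */
static int toy_done_count;
static void toy_done(struct toy_bio *bio) { (void)bio; toy_done_count++; }
```

Even if a driver completes the bio in several partial chunks, the caller sees a single completion event for the whole unit.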
510 which in turn means that only raw I/O uses it (direct i/o may not work
583 flags available. Some bits are used by the block layer or i/o scheduler.
588 be one of the many segments in the current bio (i.e i/o completion unit).
593 end_that_request_first, i.e. every time the driver completes a part of the
602 of the i/o buffer in cases where the buffer resides in low-memory. For high
603 memory i/o, this field is not valid and must not be used by drivers.
649 amount of time (in the case of bio, that would be after the i/o is completed).
651 case i/o) must already be in progress and memory would be available when it
662 to i/o submission, if the bio fields are likely to be accessed after the
663 i/o is issued (since the bio may otherwise get freed in case i/o completion
667 shares the bio_vec_list with the original bio (i.e. both point to the
668 same bio_vec_list). This would typically be used for splitting i/o requests
688 I/O completion callbacks are per-bio rather than per-segment, so drivers
706 the i/o hardware can handle, based on various queue properties.
712 DMA remapping (hw_segments) (i.e. IOMMU aware limits).
717 hw data segments in a request (i.e. the maximum number of address/length
721 of physical data segments in a request (i.e. the largest sized scatter list
724 3.2.3 I/O completion
728 end_that_request_first and end_that_request_last can be used for i/o
729 completion (and setting things up so the rest of the i/o or the next
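The two-stage completion convention (complete part of a request, then tear the whole thing down at the end) might be sketched like this. The names mirror `end_that_request_first`/`_last`, but the bodies are purely illustrative.

```c
#include <assert.h>

struct toy_request {
    unsigned nr_sectors;   /* sectors left in the request */
    int finished;          /* set once the request is fully torn down */
};

/* Complete `sectors` worth of the request; returns non-zero while more
 * of the request remains (toy version of the _first convention). */
static int toy_end_request_first(struct toy_request *rq, unsigned sectors)
{
    rq->nr_sectors = (sectors >= rq->nr_sectors) ? 0
                                                 : rq->nr_sectors - sectors;
    return rq->nr_sectors != 0;
}

/* Final teardown once everything has completed (toy _last). */
static void toy_end_request_last(struct toy_request *rq)
{
    rq->finished = 1;
}
```

A driver loops on the first call as partial transfers finish, and invokes the last call exactly once when nothing remains.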
740 segments and do not support i/o into high memory addresses (require bounce
751 3.3 I/O Submission
754 The routine submit_bio() is used to submit a single io. Higher level i/o
757 (a) Buffered i/o:
762 (b) Kiobuf i/o (for raw/direct i/o):
766 perform the i/o on each of these.
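The pattern of mapping a larger buffer into per-chunk units and performing the i/o on each can be sketched as follows; the 4 KB chunk size and the `submit` hook are assumptions for the sketch, not the actual submission interface.

```c
#include <assert.h>

#define TOY_CHUNK 4096u

/* Split `len` bytes starting at `offset` into chunk-aligned pieces and
 * hand each to `submit`, mimicking per-unit submission of a larger io.
 * Returns the number of pieces submitted. */
static int toy_submit_all(unsigned offset, unsigned len,
                          void (*submit)(unsigned off, unsigned n))
{
    int count = 0;
    while (len) {
        unsigned n = TOY_CHUNK - (offset % TOY_CHUNK); /* stay aligned */
        if (n > len)
            n = len;
        submit(offset, n);
        offset += n;
        len -= n;
        count++;
    }
    return count;
}

/* Demo hook: just total up what was submitted. */
static unsigned toy_submitted_bytes;
static void toy_submit(unsigned off, unsigned n)
{
    (void)off;
    toy_submitted_bytes += n;
}
```

An unaligned starting offset simply produces a short first piece, after which every piece is chunk-aligned.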
770 blocks array as well, but it's currently in there to kludge around direct i/o.]
777 So right now it wouldn't work for direct i/o on non-contiguous blocks.
781 Badari Pulavarty has a patch to implement direct i/o correctly using
785 (c) Page i/o:
801 (d) Direct access i/o:
808 Kvec i/o:
824 to continue to use the vector descriptor (kvec) after i/o completes. Instead,
829 4. The I/O scheduler
832 I/O scheduler, a.k.a. elevator, is implemented in two layers. Generic dispatch
833 queue and specific I/O schedulers. Unless stated otherwise, elevator is used
834 to refer to both parts and I/O scheduler to specific I/O schedulers.
840 Specific I/O schedulers are responsible for ordering normal filesystem
843 multiple I/O schedulers. They can be built as modules but at least one should
847 A block layer call to the i/o scheduler follows the convention elv_xxx(). This
853 4.1. I/O scheduler API
863 never seen by I/O scheduler again. IOW, after
883 I/O schedulers are free to postpone requests by
885 is non-zero. Once dispatched, I/O schedulers
903 I/O schedulers can use this callback to
914 4.2 Request flows seen by I/O schedulers
917 All requests seen by I/O schedulers strictly follow one of the following three
922 i. add_req_fn -> (merged_fn ->)* -> dispatch_fn -> activate_req_fn ->
929 4.3 I/O scheduler implementation
932 The generic i/o scheduler algorithm attempts to sort/merge/batch requests for
936 i. improved throughput
942 i. Binary tree
943 AS and deadline i/o schedulers use red black binary trees for disk position
955 multiple I/O streams are being performed at once on one disk.
958 are far less common than "back merges" due to the nature of most I/O patterns.
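The back/front merge distinction above reduces to simple sector arithmetic: a bio merges at the back when it starts where the request ends, and at the front when it ends where the request starts. A sketch, with invented field names:

```c
#include <assert.h>

struct toy_rq  { unsigned long sector; unsigned nr_sectors; };
struct toy_bio { unsigned long sector; unsigned nr_sectors; };

/* Back merge: bio appended at the end of the request. Sequential i/o
 * naturally produces these, which is why they dominate. */
static int toy_back_mergeable(const struct toy_rq *rq,
                              const struct toy_bio *bio)
{
    return rq->sector + rq->nr_sectors == bio->sector;
}

/* Front merge: bio prepended before the request's first sector. */
static int toy_front_mergeable(const struct toy_rq *rq,
                               const struct toy_bio *bio)
{
    return bio->sector + bio->nr_sectors == rq->sector;
}
```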
964 Plugging is an approach that the current i/o scheduling algorithm resorts to so
988 4.4 I/O contexts
991 I/O contexts provide a dynamically allocated per process data area. They may
992 be used in I/O schedulers, and in the block layer (could be used for IO stats,
994 for an example of usage in an i/o scheduler.
1010 granularity). The locking semantics are the same, i.e. locking is
1040 so the i/o scheduler also gets to operate on whole disk sector numbers. This
1075 etc per queue now. Drivers that used to define their own merge functions i
1097 transfer a virtual mapping is needed. If the driver supports highmem I/O,
1108 - orig kiobuf & raw i/o patches (now in 2.4 tree)
1109 - direct kiobuf based i/o to devices (no intermediate bh's)
1110 - page i/o using kiobuf
1121 8.5. Direct i/o implementation (Andrea Arcangeli) since 2.4.10-pre11
1123 8.6. Async i/o implementation patch (Ben LaHaise)
1138 8.12. Multiple block-size transfers for faster raw i/o (Shailabh Nagar, Badari)
1140 8.13 Priority based i/o scheduler - prepatches (Arjan van de Ven)
1142 8.14 IDE Taskfile i/o patch (Andre Hedrick)
1146 8.16 Direct i/o patches for 2.5 using kvec and bio (Badari Pulavarty)
1152 9.1 The Splice I/O Model