Lines Matching full:and
11 (Note, frontswap -- and :ref:`cleancache` (merged at 3.0) -- are the "frontends"
12 and the only necessary changes to the core kernel for transcendent memory;
15 for a detailed overview of frontswap and related kernel parts)
25 kernel and is of unknown and possibly time-varying size. The driver
27 frontswap_ops funcs appropriately and the functions it provides must
32 copy the page to transcendent memory and associate it with the type and
36 from transcendent memory and an "invalidate_area" will remove ALL pages
37 associated with the swap type (e.g., like swapoff) and notify the "device"
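The fragments above sketch the backend contract: a driver fills in frontswap_ops and registers it, and its callbacks implement "store", "load", "invalidate_page" and "invalidate_area" with the semantics described. The skeleton below only illustrates that shape; the member names follow the historical struct frontswap_ops, but the exact prototypes (and the signature of frontswap_register_ops) changed across kernel versions, so treat it as a sketch rather than the current header:

  /* Skeleton frontswap backend: a sketch only. */
  #include <linux/module.h>
  #include <linux/frontswap.h>

  static void sketch_init(unsigned type)
  {
          /* A swap area of this type was just swapon'ed. */
  }

  static int sketch_store(unsigned type, pgoff_t offset, struct page *page)
  {
          /* Copy the page into transcendent memory and associate it with
           * (type, offset).  0 = success; nonzero = rejected. */
          return -1;
  }

  static int sketch_load(unsigned type, pgoff_t offset, struct page *page)
  {
          /* Fill @page from the copy stored at (type, offset), if any. */
          return -1;
  }

  static void sketch_invalidate_page(unsigned type, pgoff_t offset)
  {
          /* The copy at (type, offset) is no longer needed; drop it. */
  }

  static void sketch_invalidate_area(unsigned type)
  {
          /* Like swapoff: drop ALL pages associated with this swap type. */
  }

  static struct frontswap_ops sketch_ops = {
          .init            = sketch_init,
          .store           = sketch_store,
          .load            = sketch_load,
          .invalidate_page = sketch_invalidate_page,
          .invalidate_area = sketch_invalidate_area,
  };

  static int __init sketch_backend_init(void)
  {
          frontswap_register_ops(&sketch_ops);
          return 0;
  }
  module_init(sketch_backend_init);
  MODULE_LICENSE("GPL");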
43 success, the data has been successfully saved to transcendent memory and
44 a disk write and, if the data is later read back, a disk read are avoided.
45 If a store returns failure, transcendent memory has rejected the data, and the
50 in swap device writes is lost (and also a non-trivial performance advantage)
54 Note that if a page is stored and the page already exists in transcendent memory
55 (a "duplicate" store), either the store succeeds and the data is overwritten,
56 or the store fails AND the page is invalidated. This ensures stale data may
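From the swap write path's point of view, those store semantics reduce to "try frontswap first, fall back to block I/O if rejected". The sketch below is loosely modeled on the kernel's swap write path; helper names differ across versions, and write_to_swap_device() here is a hypothetical stand-in for building and submitting the normal swap bio:

  #include <linux/frontswap.h>
  #include <linux/mm.h>
  #include <linux/pagemap.h>
  #include <linux/writeback.h>

  /* Hypothetical stand-in for the normal "submit block I/O to the
   * swap device" path. */
  extern int write_to_swap_device(struct page *page,
                                  struct writeback_control *wbc);

  static int sketch_swap_writepage(struct page *page,
                                   struct writeback_control *wbc)
  {
          if (frontswap_store(page) == 0) {
                  /* Accepted: the disk write, and any later disk read
                   * of this page, are avoided entirely. */
                  set_page_writeback(page);
                  unlock_page(page);
                  end_page_writeback(page);
                  return 0;
          }
          /* Rejected: swap to the real device exactly as if frontswap
           * were not configured at all. */
          return write_to_swap_device(page, wbc);
  }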
84 providing a clean, dynamic interface to read and write swap pages to
87 and size (such as with compression) or secretly moved (as might be
88 useful for write-balancing for some RAM-like devices). Swap pages (and
90 but-much-faster-than-disk "pseudo-RAM device" and the frontswap (and
92 and write -- and indirectly "name" -- the pages.
94 Frontswap -- and cleancache -- with a fairly small impact on the kernel,
98 In the single kernel case, aka "zcache", pages are compressed and
109 allows RAM to be dynamically load-balanced back-and-forth as needed,
110 i.e. when system A is overcommitted, it can swap to system B, and
118 virtual machines. This is really hard to do with RAM and efforts to do
123 virtual machines, but the pages can be compressed and deduplicated to
124 optimize RAM utilization. And when guest OS's are induced to surrender
127 to be swapped to and from hypervisor RAM (if overall host system memory
131 A KVM implementation is underway and has been RFC'ed to lkml. And,
139 nothingness and the only overhead is a few extra bytes per swapon'ed
143 AND a frontswap backend registers AND the backend fails every "store"
145 CPU overhead is still negligible -- and since every frontswap fail
147 to be I/O bound and using a small fraction of a percent of a CPU
150 As for space, if CONFIG_FRONTSWAP is enabled AND a frontswap backend
173 entirely dynamic and random.
182 consults with the frontswap backend and if the backend says it does NOT
183 have room, frontswap_store returns -1 and the kernel swaps the page
188 has already been copied and associated with the type and offset,
189 and the backend guarantees the persistence of the data. In this case,
197 it was, the page of data is filled from the frontswap backend and
202 and (potentially) a swap device write are replaced by a "frontswap backend
203 store" and (possibly) a "frontswap backend loads", which are presumably much
214 assumes a swap device is of fixed size and any page in it is linearly
216 and works around the constraints of the block I/O subsystem to provide
217 a great deal of flexibility and dynamicity.
223 "Poorly" compressible pages can be rejected, and "poorly" can itself be
227 device is, by definition, asynchronous and uses block I/O. The
231 required to ensure the dynamicity of the backend and to avoid thorny race
232 conditions that would unnecessarily and greatly complicate frontswap
233 and/or the block I/O subsystem. That said, only the initial "store"
234 and "load" operations need be synchronous. A separate asynchronous thread
239 and use "batched" hypercalls.
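One way to reconcile the synchronous store/load requirement with a slow or batched transport (as RAMster does between machines, or a Xen backend using batched hypercalls) is to complete each store synchronously into local memory and let a worker ship accumulated pages later; a matching load would have to check the local staging list first (omitted here). A rough sketch, where ship_batch_to_remote() is a hypothetical transport call:

  #include <linux/highmem.h>
  #include <linux/list.h>
  #include <linux/slab.h>
  #include <linux/spinlock.h>
  #include <linux/string.h>
  #include <linux/workqueue.h>

  struct staged_page {
          struct list_head list;
          unsigned type;
          pgoff_t offset;
          u8 data[PAGE_SIZE];         /* the synchronous local copy */
  };

  static LIST_HEAD(staging);
  static DEFINE_SPINLOCK(staging_lock);

  /* Hypothetical batched transport (hypercall, RPC to another node, ...). */
  extern void ship_batch_to_remote(struct list_head *batch);

  static void ship_work_fn(struct work_struct *work)
  {
          LIST_HEAD(batch);

          spin_lock(&staging_lock);
          list_splice_init(&staging, &batch);
          spin_unlock(&staging_lock);

          ship_batch_to_remote(&batch);  /* one batched call, not one per page */
  }
  static DECLARE_WORK(ship_work, ship_work_fn);

  static int sketch_store(unsigned type, pgoff_t offset, struct page *page)
  {
          struct staged_page *sp = kmalloc(sizeof(*sp), GFP_ATOMIC);
          void *src;

          if (!sp)
                  return -1;          /* reject; the kernel swaps to disk */

          sp->type = type;
          sp->offset = offset;
          src = kmap_atomic(page);
          memcpy(sp->data, src, PAGE_SIZE);   /* the store completes here */
          kunmap_atomic(src);

          spin_lock(&staging_lock);
          list_add_tail(&sp->list, &staging);
          spin_unlock(&staging_lock);

          schedule_work(&ship_work);  /* shipping happens asynchronously */
          return 0;
  }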
251 and the possibility that it might hold no pages at all. This means
256 some kind of "ghost" swap device and ensure that it is never used.
263 where data is compressed and the original 4K page has been compressed
265 is non-compressible and so would take the entire 4K. But the backend
268 the old data and ensure that it is no longer accessible. Since the
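Seen from the backend's side, the duplicate-store rule means that even when the new, incompressible version cannot be accommodated, the old compressed copy must be dropped before the store fails. A sketch using hypothetical pool helpers (lookup_entry(), drop_entry(), alloc_space()):

  #include <linux/types.h>

  struct entry;                               /* backend's per-page record */
  extern struct entry *lookup_entry(unsigned type, pgoff_t offset);
  extern void drop_entry(struct entry *e);    /* frees the stored copy */
  extern void *alloc_space(size_t len);

  static int sketch_duplicate_store(unsigned type, pgoff_t offset,
                                    const void *cdata, size_t clen)
  {
          struct entry *old = lookup_entry(type, offset);  /* e.g. a 1K copy */
          void *space = alloc_space(clen);                 /* maybe 4K now */

          if (!space) {
                  /* The store fails, but the stale copy must not survive:
                   * the kernel will write the page to the swap device, and
                   * a later load must come from disk, never from here. */
                  if (old)
                          drop_entry(old);
                  return -1;
          }
          if (old)
                  drop_entry(old);    /* a successful overwrite replaces it */
          /* ... copy cdata into space and index it under (type, offset) ... */
          return 0;
  }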
279 of the memory managed by frontswap and back into kernel-addressable memory.
288 structures that have, over the years, moved back and forth between
289 static and global. This seemed a reasonable compromise: Define