Cleancache can be thought of as a page-granularity victim cache for clean
pages that the kernel's pageframe replacement algorithm (PFRA) would like
to keep around, but can't since there isn't enough memory.  So when the
PFRA "evicts" a page, it first attempts to use cleancache code to
put the data contained in that page into "transcendent memory", memory
that is not directly accessible or addressable by the kernel and is
of unknown and possibly time-varying size.

Later, when a cleancache-enabled filesystem wishes to access a page
in a file on disk, it first checks cleancache to see if it already
contains it; if it does, the page of data is copied into the kernel
and a disk access is avoided.

Transcendent memory "drivers" for cleancache are currently implemented
in Xen (using hypervisor memory) and zcache (using in-kernel compressed
memory) and other implementations are in development.

Mounting a cleancache-enabled filesystem should call "init_fs" to obtain a
pool id which, if positive, must be saved in the filesystem's superblock;
a negative return value indicates failure.  A "put_page" will copy a
(presumably about-to-be-evicted) page into cleancache and associate it with
the pool id, a file key, and a page index into the file.  (The combination
of a pool id, a file key, and an index is sometimes called a "handle".)
A "get_page" will copy the page, if found, from cleancache into kernel
memory.  An "invalidate_page" will ensure the page no longer is present in
cleancache; an "invalidate_inode" will invalidate all pages associated with
the specified file; and, when a filesystem is unmounted, an "invalidate_fs"
will invalidate all pages in all files specified by the given pool id and
also surrender the pool id.
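
These operations make up the backend-facing API.  The sketch below follows
the shape of the cleancache_ops structure in include/linux/cleancache.h of
this era; treat the exact field types (and the value of CLEANCACHE_KEY_MAX)
as illustrative rather than authoritative, since they vary by kernel
version.  ("init_shared_fs" is described just below.)

    #define CLEANCACHE_KEY_MAX 6

    /* A file key: an inode number for simple filesystems, or an
     * exportfs-style filehandle for filesystems whose inode numbers
     * are not stable for the life of the on-disk file. */
    struct cleancache_filekey {
            union {
                    ino_t ino;
                    __u32 fh[CLEANCACHE_KEY_MAX];
                    u32 key[CLEANCACHE_KEY_MAX];
            } u;
    };

    /* Every data operation is keyed by the (pool id, file key, page
     * index) "handle" described above. */
    struct cleancache_ops {
            int (*init_fs)(size_t pagesize);
            int (*init_shared_fs)(char *uuid, size_t pagesize);
            int (*get_page)(int pool_id, struct cleancache_filekey key,
                            pgoff_t index, struct page *page);
            void (*put_page)(int pool_id, struct cleancache_filekey key,
                             pgoff_t index, struct page *page);
            void (*invalidate_page)(int pool_id,
                                    struct cleancache_filekey key,
                                    pgoff_t index);
            void (*invalidate_inode)(int pool_id,
                                     struct cleancache_filekey key);
            void (*invalidate_fs)(int pool_id);
    };

A backend claims these operations by passing the filled-in structure to
cleancache_register_ops.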
61 An "init_shared_fs", like init_fs, obtains a pool id but tells cleancache
62 to treat the pool as shared using a 128-bit UUID as a key. On systems
65 may be shared among those kernels, calls to init_shared_fs that specify the
66 same UUID will receive the same pool id, thus allowing the pages to
67 be shared. Note that any security requirements must be imposed outside
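
A rough mount-time sketch of the private-vs-shared choice (the helper and
its arguments are hypothetical; the real call sites and wrapper signatures
vary by filesystem and kernel version):

    /* Hypothetical mount-path helper.  "clustered" and "sb_uuid" are
     * stand-ins for however the FS knows it is clustered and where it
     * keeps its on-disk 128-bit UUID. */
    static void example_enable_cleancache(struct super_block *sb,
                                          bool clustered, char *sb_uuid)
    {
            if (clustered)
                    cleancache_init_shared_fs(sb_uuid, sb);
            else
                    cleancache_init_fs(sb);
            /* On failure, sb->cleancache_poolid stays negative and every
             * cleancache hook becomes a cheap no-op for this FS. */
    }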

If a get_page is successful on a non-shared pool, the page is invalidated
(thus making cleancache an "exclusive" cache).  On a shared pool, the page
is NOT invalidated on a successful get_page so that it remains accessible
to other sharers.  The kernel is responsible for ensuring coherency between
cleancache (shared or not), the page cache, and the filesystem, using
cleancache invalidate operations as required.

Note that cleancache must enforce put-put-get coherency and get-get
coherency.  For the former, if two puts are made to the same handle but
with different data, say AAA by the first put and BBB by the second, a
subsequent get can never return the stale data (AAA).  For get-get
coherency, if a get for a given handle fails, subsequent gets for that
handle will never succeed unless preceded by a successful put with that
handle.
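
Both rules fall out naturally if a put unconditionally overwrites and an
exclusive get removes the handle.  The standalone C toy below (a
hypothetical userspace mock, not kernel code) demonstrates exactly that
behavior:

    #include <stdio.h>
    #include <string.h>

    #define NHANDLES 16
    #define PAGE 8                  /* toy "page" size */

    static char store[NHANDLES][PAGE];
    static int  present[NHANDLES];

    /* put overwrites any previous data for the handle, so a later get
     * can never observe the older data: put-put-get coherency */
    static void put_page(int h, const char *data)
    {
            memcpy(store[h], data, PAGE);
            present[h] = 1;
    }

    /* exclusive get: success copies the data out and invalidates the
     * handle; failure leaves the handle absent, so further gets keep
     * failing until the next successful put: get-get coherency */
    static int get_page(int h, char *out)
    {
            if (!present[h])
                    return -1;
            memcpy(out, store[h], PAGE);
            present[h] = 0;
            return 0;
    }

    int main(void)
    {
            char buf[PAGE];

            put_page(3, "AAAAAAA");         /* first put: AAA... */
            put_page(3, "BBBBBBB");         /* second put, same handle */
            if (get_page(3, buf) == 0)      /* must see BBB..., never AAA */
                    printf("got %s\n", buf);
            if (get_page(3, buf) < 0)       /* exclusive: already gone */
                    printf("second get fails until the next put\n");
            return 0;
    }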

* Where's the value?

Cleancache (and its sister code "frontswap") provide interfaces for
transcendent memory (aka "tmem"), which conceptually lies between
fast kernel-directly-addressable RAM and slower DMA/asynchronous devices.
Disallowing direct kernel or userland reads/writes to tmem
is ideal when data is transformed to a different form and size (such
as with compression) or secretly moved (as might be useful for write-
balancing for some RAM-like devices).  Evicted page-cache pages (and
swap pages) are a great use for this kind of slower-than-RAM-but-much-
faster-than-disk transcendent memory, and the cleancache (and frontswap)
"page-object-oriented" specification provides a nice way to read and
write -- and indirectly "name" -- the pages.

In a virtualized environment, attempts to statistically multiplex RAM
across virtual machines without kernel changes have essentially failed
(except in some well-publicized special-case workloads).  Cleancache --
and frontswap -- with a fairly small impact on the kernel, provide a huge
amount of flexibility for more dynamic, flexible RAM utilization.
Specifically, the Xen Transcendent Memory backend allows otherwise
"fallow" hypervisor-owned RAM to not only be "time-shared" between multiple
virtual machines, but the pages can be compressed and deduplicated to
optimize RAM utilization.  And when guest OS's are induced to surrender
underutilized RAM (e.g. with "self-ballooning"), page cache pages are among
the first to go, and cleancache allows those pages to be saved and
retrieved instead of refaulted from disk.

The identical interface can be used in non-virtualized
physical systems as well.  The zcache driver acts as a memory-hungry
device that stores pages of data in a compressed state.

* Why does cleancache have its sticky fingers so deep inside the
  filesystems and VFS?

The core hooks for cleancache in VFS are in most cases a single line
and the minimum set are placed precisely where needed to maintain
coherency (via cleancache_invalidate operations) between cleancache,
the page cache, and disk.  All hooks compile into nothingness if
cleancache is config'ed off and turn into a function-pointer-
compare-to-NULL if config'ed on but no backend claims the ops
functions, or to a compare-struct-element-to-negative if a
backend claims the ops functions but a filesystem doesn't enable
cleancache.
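
A hedged paraphrase of what those hooks look like, modeled on the
include/linux/cleancache.h wrappers of this era (the exact code varies by
kernel version; in some versions the "function-pointer-compare-to-NULL"
shows up as a test of a global enablement flag instead):

    /* With CONFIG_CLEANCACHE off, the stub version of this hook is an
     * empty inline and compiles into nothingness. */
    extern int cleancache_enabled;      /* set once a backend registers */

    static inline int cleancache_fs_enabled(struct page *page)
    {
            /* compare-struct-element-to-negative: a filesystem that
             * never called init_fs still has a negative pool id */
            return page->mapping->host->i_sb->cleancache_poolid >= 0;
    }

    static inline int cleancache_get_page(struct page *page)
    {
            int ret = -1;

            /* cheap global test, then the per-FS pool id test; only
             * when both pass does the real out-of-line call happen */
            if (cleancache_enabled && cleancache_fs_enabled(page))
                    ret = __cleancache_get_page(page);
            return ret;
    }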

Some filesystems are built entirely on top of VFS and the hooks in VFS
are sufficient; but for others (such as btrfs), the VFS hooks are
incomplete and one or more hooks in fs-specific code are required.  So it
seemed prudent to make cleancache strictly opt-in per filesystem: while
hooks in a handful of filesystems would
be sufficient to validate the concept, the opt-in approach means that a
filesystem is never affected by cleancache unless it explicitly enables it.

* Why not make cleancache asynchronous and batched so it can more
  easily interface with real devices with DMA instead of copying each
  individual page?

The one-page-at-a-time copy semantics simplifies the implementation
on both the frontend and backend and also allows the backend to
do fancy things on-the-fly like page compression and
page deduplication.  And since the data is "gone" (copied into/out of
the pageframe) before the cleancache get/put call returns, a great deal
of race conditions and potential coherency issues are avoided.  While the
interface seems odd for a "real device"
or for real kernel-addressable RAM, it makes perfect sense for
transcendent memory.
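
For example, because put_page receives the page synchronously, a backend
is free to transform the data before the call returns.  A hypothetical
sketch (example_compress() and example_pool_store() are stand-ins for
backend internals, not real kernel APIs; zcache does something similar
with LZO compression):

    static void example_put_page(int pool_id,
                                 struct cleancache_filekey key,
                                 pgoff_t index, struct page *page)
    {
            /* single-argument kmap_atomic() form of newer kernels */
            char *src = kmap_atomic(page);
            size_t clen;
            void *cbuf = example_compress(src, PAGE_SIZE, &clen);

            kunmap_atomic(src);
            /* It is only a cache: on any failure, silently drop the
             * page; a later get will simply miss and go to disk. */
            if (cbuf)
                    example_pool_store(pool_id, key, index, cbuf, clen);
    }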

* Why is non-shared cleancache "exclusive"?  And where is the
  page "invalidated" after a "get"?

The main reason is to free up space in transcendent memory and to avoid
unnecessary cleancache_invalidate calls.  If you want inclusive, the page
can be "put" immediately following the "get".  If
put-after-get for inclusive becomes common, the interface could
be easily extended to add a "get_no_invalidate" call.
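
Emulating inclusive behavior is then a two-liner at the call site
(a sketch; it reuses the frontend hook names shown earlier):

    /* Re-put the data right after a successful exclusive get so the
     * copy stays in cleancache: "inclusive" emulation. */
    if (cleancache_get_page(page) == 0)
            cleancache_put_page(page);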

* What's the performance impact?

Basically, cleancache replaces I/O with memory-copy-CPU-overhead; on older
single-core systems with slow memory-copy speeds, cleancache
has little value, but in newer multicore machines, especially
consolidated/virtualized machines, it has great value.

* How do I add cleancache support for filesystem X?

Filesystems that are well-behaved and conform to certain
restrictions can utilize cleancache simply by making a call to
cleancache_init_fs at mount time.  Unusual, misbehaving, or poorly
layered filesystems must either add additional hooks and/or undergo
extensive additional testing... or should just not enable the optional
cleancache.

Some points for a filesystem to consider:

- The FS should be block-device-based (e.g. a ram-based FS such
  as tmpfs should not enable cleancache).
- To ensure coherency/correctness, the FS must ensure that all
  file removal or truncation operations either go through VFS or
  add hooks to do the equivalent cleancache "invalidate" operations
  (see the sketch after this list).
- To ensure coherency/correctness, either inode numbers must
  be unique across the lifetime of the on-disk file OR the
  FS must provide an "encode_fh" function.
- The FS must call the VFS superblock alloc and deactivate routines
  or add hooks to do the equivalent cleancache calls done there.
- To maximize performance, all pages fetched from the FS should
  go through the do_mpage_readpage routine or the FS should add
  hooks to do the equivalent (cf. btrfs).
- Currently, the FS blocksize must be the same as PAGESIZE.  This
  is not an architectural restriction, but no backends currently
  support anything different.
- A clustered FS should invoke the "init_shared_fs" cleancache
  hook to get best performance for some backends.
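
On the removal/truncation point: the common VFS paths already invalidate,
so only a private path needs help.  A hypothetical sketch (the helper and
example_fs_free_blocks() are made up; cleancache_invalidate_inode() is the
real frontend wrapper, though its exact signature varies by version):

    /* Any fs-private path that frees on-disk blocks without going
     * through VFS truncation must drop the matching cleancache pages
     * itself, or a later get could return stale data. */
    static void example_fs_truncate(struct inode *inode, loff_t newsize)
    {
            truncate_setsize(inode, newsize);
            example_fs_free_blocks(inode, newsize);  /* fs-specific */

            /* drop every cleancache page belonging to this file */
            cleancache_invalidate_inode(inode->i_mapping);
    }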

If cleancache would use the inode virtual address instead of
inode/filehandle, the pool id could be eliminated.  But, this
won't work because cleancache retains pagecache data pages
persistently even when the inode has been pruned from the inode
unused list, and only invalidates the data page if the file gets
removed/truncated; a pruned inode's virtual address may meanwhile
be reused for an entirely different file.

The cleancache_enabled flag is checked in all of the frequently-used
cleancache hooks.  The alternative is a function call to check a static
variable.  Since cleancache is enabled dynamically at runtime, systems
that don't enable cleancache would suffer thousands (possibly
tens-of-thousands) of unnecessary function calls per second.  So the
global variable allows cleancache to be enabled by default at compile
time, but have insignificant performance impact when cleancache remains
disabled at runtime.
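
In the hook sketch shown earlier, that flag is the cleancache_enabled
test.  The rejected alternative would look roughly like this (the
accessor is hypothetical, shown only for contrast):

    /* Out-of-line accessor: functionally equivalent, but every hook
     * site now pays a full function call just to learn that
     * cleancache is disabled. */
    int cleancache_is_enabled(void);    /* hypothetical */

    if (cleancache_is_enabled() && cleancache_fs_enabled(page))
            ret = __cleancache_get_page(page);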