Lines matching 'd-cache-block-size'
12 * N-way replication of data across storage nodes
22 on symmetric access by all clients to shared block devices, Ceph
28 re-replicated in a distributed fashion by the storage nodes themselves
33 in-memory cache above the file namespace that is extremely scalable,
35 and can tolerate arbitrary (well, non-Byzantine) node failures. The
40 loaded into its cache with a single I/O operation. The contents of
57 files and bytes. That is, a 'getfattr -d foo' on any directory in the
68 setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir
69 getfattr -n ceph.quota.max_bytes /some/dir
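The two quota commands above combine into a short walkthrough. This is a sketch, not a definitive recipe: /some/dir is a placeholder, the 100000000-byte limit is illustrative, and these commands only take effect on a directory inside a mounted CephFS tree.

```shell
# Illustrative only: requires a mounted CephFS; /some/dir is a placeholder path.

# Set a quota of 100000000 bytes (~100 MB) on the directory:
setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir

# Read the quota back (getfattr prints the attribute name and value):
getfattr -n ceph.quota.max_bytes /some/dir

# Setting the value back to 0 removes the quota:
setfattr -n ceph.quota.max_bytes -v 0 /some/dir
```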
81 # mount -t ceph monip[:port][,monip2[:port]...]:/[subdir] mnt
89 # mount -t ceph 1.2.3.4:/ /mnt/ceph
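The mount line above can be extended with authentication options. A sketch, assuming CephX is enabled on the cluster: the monitor address 1.2.3.4 and the secret-file path are placeholders, and the name= and secretfile= options select the CephX user and its key.

```shell
# Illustrative: mount as CephX user 'admin', reading the key from a file.
# 1.2.3.4 stands in for a monitor address; the secretfile path is a placeholder.
mount -t ceph 1.2.3.4:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret
```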
99 ip=A.B.C.D[:N]
106 Specify the maximum write size in bytes. Default: 16 MB.
109 Specify the maximum read size in bytes. Default: 16 MB.
112 Specify the maximum readahead size in bytes. Default: 8 MB.
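The wsize, rsize, and rasize limits described above are set as mount options. A sketch with illustrative values only (the defaults quoted above are 16 MB, 16 MB, and 8 MB; the values below deliberately lower them):

```shell
# Illustrative: cap writes and reads at 4 MB and readahead at 1 MB.
# All three option values are in bytes.
mount -t ceph 1.2.3.4:/ /mnt/ceph -o wsize=4194304,rsize=4194304,rasize=1048576
```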
116 of a non-responsive Ceph file system. The default is 30
140 its cache. (This does not change correctness; the client uses
157 Don't use the RADOS 'copy-from' operation to perform remote object
182 https://github.com/ceph/ceph-client.git
183 git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git