.. SPDX-License-Identifier: GPL-2.0
 * Seamless scaling from 1 to many thousands of nodes
 * N-way replication of data across storage nodes
When storage nodes fail, data is re-replicated in a distributed fashion by
the storage nodes themselves (with some minimal coordination from a cluster
monitor), making the system extremely efficient and scalable.
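N-way replication means each object is stored on several distinct nodes, so any single placement must pick a deterministic, well-spread set of replicas. As a rough illustration only (this is a toy rendezvous-hashing sketch, not Ceph's actual CRUSH placement algorithm; all names are made up):

```python
# Toy sketch of N-way replica placement (NOT Ceph's CRUSH algorithm):
# rendezvous (highest-random-weight) hashing picks N distinct storage
# nodes per object, deterministically, with no central lookup table.
import hashlib

def place_replicas(obj_name, nodes, n=3):
    """Return the n nodes with the highest hash score for this object."""
    def score(node):
        return hashlib.sha256(f"{obj_name}:{node}".encode()).hexdigest()
    return sorted(nodes, key=score, reverse=True)[:n]

nodes = [f"osd{i}" for i in range(8)]
print(place_replicas("myfile.0001", nodes, n=3))  # three distinct nodes
```

Because the placement is a pure function of the object name and the node set, every client and storage node can compute it independently, which is the property that lets recovery proceed without central coordination.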
Metadata servers effectively form a large, consistent, distributed
in-memory cache above the file namespace that is extremely scalable,
dynamically redistributes metadata in response to workload changes,
and can tolerate arbitrary (well, non-Byzantine) node failures. The
metadata server takes a somewhat unconventional approach to metadata
storage to significantly improve performance for common workloads.
The system offers automatic data rebalancing/migration when scaling from
a small cluster of just a few nodes to many hundreds, without requiring
an administrator carve the data set into static volumes or go through
the tedious process of migrating data between servers.
Ceph also provides some recursive accounting on directories for nested
files and bytes. That is, a 'getfattr -d foo' on any directory in the
system will show the total number of nested regular files and
subdirectories, and a summation of all nested file sizes.
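The recursive accounting above amounts to a precomputed directory walk: the totals CephFS keeps per directory are what a full traversal would count. A local sketch of the same totals, computed the slow way (the xattr names in the comment, e.g. ceph.dir.rfiles, are the commonly documented ones; the demo paths are throwaway):

```python
# Sketch: compute the nested-file and nested-byte totals that CephFS
# maintains per directory (exposed via recursive-accounting xattrs such
# as ceph.dir.rfiles / ceph.dir.rbytes), here via an explicit walk.
import os
import tempfile

def recursive_accounting(root):
    rfiles = rbytes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            rfiles += 1
            rbytes += os.path.getsize(os.path.join(dirpath, name))
    return rfiles, rbytes

# Demo on a scratch directory:
with tempfile.TemporaryDirectory() as d:
    os.makedirs(os.path.join(d, "sub"))
    with open(os.path.join(d, "a"), "w") as f:
        f.write("hello")            # 5 bytes
    with open(os.path.join(d, "sub", "b"), "w") as f:
        f.write("world!")           # 6 bytes
    print(recursive_accounting(d))  # (2, 11)
```

The point of keeping these totals in the filesystem itself is that the getfattr lookup is O(1), whereas this walk is O(files).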
Quotas can be set using the extended attributes 'ceph.quota.max_files'
and 'ceph.quota.max_bytes', eg::

  setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir
  getfattr -n ceph.quota.max_bytes /some/dir
  # mount -t ceph monip[:port][,monip2[:port]...]:/[subdir] mnt
You only need to specify a single monitor, as the client will get the
full list when it connects. (However, if the monitor you specify
happens to be down, the mount won't succeed.) The port can be left
off if the monitor is using the default. So if the monitor is at
1.2.3.4::
  # mount -t ceph 1.2.3.4:/ /mnt/ceph
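For a persistent mount, the same syntax can be carried into /etc/fstab. A sketch of such an entry, using the monitor address from the example above (the client name and secret-file path here are placeholders, not values from this document):

```
# /etc/fstab entry (illustrative; name= and secretfile= are placeholders)
1.2.3.4:/  /mnt/ceph  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0
```

The _netdev option is a generic fstab hint that the filesystem needs the network, so boot-time mounting waits until networking is up.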
  mount_timeout=x
	Specify the timeout value for mount (in seconds), in the case
	of a non-responsive Ceph file system. The default is 60
	seconds.
  nodcache
	Do not use the dcache as above. This avoids a significant amount of
	complex code, sacrificing performance without affecting correctness,
	and is useful for tracking down bugs.
  nocopyfrom
	Don't use the RADOS 'copy-from' operation to perform remote object
	copies. Currently, it's only used in copy_file_range, which will
	revert to the default VFS implementation if this option is used.
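The nocopyfrom behavior follows a common pattern around copy_file_range: attempt the offloaded copy, and fall back to an ordinary read/write path when it is unavailable. A userspace sketch of that same pattern (this illustrates the fallback idea only, not the kernel's internal code path; os.copy_file_range requires Linux and Python 3.8+):

```python
# Sketch of the try-offload-then-fall-back pattern around
# copy_file_range: attempt the in-kernel copy, and fall back to a
# plain read/write if the call is unavailable or unsupported.
import os

def copy_range(src_fd, dst_fd, count):
    try:
        # Linux-only, Python 3.8+; copies within the kernel, no user-space
        # buffer, advancing both file offsets.
        return os.copy_file_range(src_fd, dst_fd, count)
    except (AttributeError, OSError):
        # Generic fallback: one read/write round trip through user space.
        data = os.read(src_fd, count)
        return os.write(dst_fd, data)
```

Callers should still loop until the requested byte count is transferred, since both paths may copy fewer bytes than asked for.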
The Linux kernel client source tree is available at:

	- https://github.com/ceph/ceph-client.git
	- git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git