Ceph Distributed File System
============================

Ceph is a distributed network file system designed to provide good
performance, reliability, and scalability.

Basic features include:

 * POSIX semantics
 * Seamless scaling from 1 to many thousands of nodes
 * High availability and reliability.  No single point of failure.
 * N-way replication of data across storage nodes
 * Fast recovery from node failures
 * Automatic rebalancing of data on node addition/removal
 * Easy deployment: most FS components are userspace daemons

Also,
 * Flexible snapshots (on any directory)
 * Recursive accounting (nested files, directories, bytes)

In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
on symmetric access by all clients to shared block devices, Ceph
separates data and metadata management into independent server
clusters, similar to Lustre.  Unlike Lustre, however, metadata and
storage nodes run entirely as user space daemons.  File data is striped
across storage nodes in large chunks to distribute workload and
facilitate high throughputs.  When storage nodes fail, data is
re-replicated in a distributed fashion by the storage nodes themselves
(with some minimal coordination from a cluster monitor), making the
system extremely efficient and scalable.

Metadata servers effectively form a large, consistent, distributed
in-memory cache above the file namespace that is extremely scalable,
dynamically redistributes metadata in response to workload changes,
and can tolerate arbitrary (well, non-Byzantine) node failures.  The
metadata server takes a somewhat unconventional approach to metadata
storage to significantly improve performance for common workloads.  In
particular, inodes with only a single link are embedded in
directories, allowing entire directories of dentries and inodes to be
loaded into its cache with a single I/O operation.  The contents of
extremely large directories can be fragmented and managed by
independent metadata servers, allowing scalable concurrent access.

The system offers automatic data rebalancing/migration when scaling
from a small cluster of just a few nodes to many hundreds, without
requiring an administrator to carve the data set into static volumes
or go through the tedious process of migrating data between servers.
When the file system approaches capacity, new nodes can easily be
added and things will "just work."

Ceph includes a flexible snapshot mechanism that allows a user to
create a snapshot of any subdirectory (and its nested contents) in the
system.  Snapshot creation and deletion are as simple as 'mkdir
.snap/foo' and 'rmdir .snap/foo'.

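For example, assuming the file system is mounted at /mnt/ceph (the
path and snapshot name here are illustrative):

 mkdir /mnt/ceph/some/dir/.snap/mysnap     (create a snapshot of 'dir')
 ls /mnt/ceph/some/dir/.snap               (list existing snapshots)
 rmdir /mnt/ceph/some/dir/.snap/mysnap     (remove the snapshot)
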
Ceph also provides some recursive accounting on directories for nested
files and bytes.  That is, a 'getfattr -d foo' on any directory in the
system will reveal the total number of nested regular files and
subdirectories, and a summation of all nested file sizes.  This makes
the identification of large disk space consumers relatively quick, as
no 'du' or similar recursive scan of the file system is required.

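For example, assuming the client exposes the recursive statistics as
ceph.dir.* virtual extended attributes (the attribute names and mount
point below follow the userspace Ceph conventions and are shown as an
illustration):

 getfattr -n ceph.dir.rfiles /mnt/ceph/some/dir    (nested regular files)
 getfattr -n ceph.dir.rsubdirs /mnt/ceph/some/dir  (nested subdirectories)
 getfattr -n ceph.dir.rbytes /mnt/ceph/some/dir    (total nested bytes)
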
Finally, Ceph also allows quotas to be set on any directory in the
system.  The quota can restrict the number of bytes or the number of
files stored beneath that point in the directory hierarchy.  Quotas can
be set using the extended attributes 'ceph.quota.max_files' and
'ceph.quota.max_bytes', e.g.:

 setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir
 getfattr -n ceph.quota.max_bytes /some/dir

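In the userspace Ceph documentation, a value of 0 means the quota is
not set, so a previously set limit can be cleared by writing 0 back,
e.g.:

 setfattr -n ceph.quota.max_bytes -v 0 /some/dir
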
A limitation of the current quotas implementation is that it relies on the
cooperation of the client mounting the file system to stop writers when a
limit is reached.  A modified or adversarial client cannot be prevented
from writing as much data as it needs.

Mount Syntax
============

The basic mount syntax is:

 # mount -t ceph monip[:port][,monip2[:port]...]:/[subdir] mnt

You only need to specify a single monitor, as the client will get the
full list when it connects.  (However, if the monitor you specify
happens to be down, the mount won't succeed.)  The port can be left
off if the monitor is using the default.  So if the monitor is at
1.2.3.4,

 # mount -t ceph 1.2.3.4:/ /mnt/ceph

is sufficient.  If /sbin/mount.ceph is installed, a hostname can be
used instead of an IP address.

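Several monitors may be listed, and a subdirectory of the file system
may be mounted in place of the root; for example (the addresses and
subdirectory here are illustrative):

 # mount -t ceph 1.2.3.4,1.2.3.5,1.2.3.6:/some/subdir /mnt/ceph
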
Mount Options
=============

  ip=A.B.C.D[:N]
        Specify the IP and/or port the client should bind to locally.
        There is normally not much reason to do this.  If the IP is not
        specified, the client's IP address is determined by looking at the
        address its connection to the monitor originates from.

  wsize=X
        Specify the maximum write size in bytes.  Default: 16 MB.

  rsize=X
        Specify the maximum read size in bytes.  Default: 16 MB.

  rasize=X
        Specify the maximum readahead size in bytes.  Default: 8 MB.

  mount_timeout=X
        Specify the timeout value for mount (in seconds), in the case
        of a non-responsive Ceph file system.  The default is 30
        seconds.

  caps_max=X
        Specify the maximum number of caps to hold.  Unused caps are
        released when the number of caps exceeds the limit.  The default
        is 0 (no limit).

  rbytes
        When stat() is called on a directory, set st_size to 'rbytes',
        the summation of file sizes over all files nested beneath that
        directory.  This is the default.

  norbytes
        When stat() is called on a directory, set st_size to the
        number of entries in that directory.

  nocrc
        Disable CRC32C calculation for data writes.  If set, the storage
        node must rely on TCP's checksum to detect corruption in the data
        payload.

  dcache
        Use the dcache contents to perform negative lookups and
        readdir when the client has the entire directory contents in
        its cache.  (This does not change correctness; the client uses
        cached metadata only when a lease or capability ensures it is
        valid.)

  nodcache
        Do not use the dcache as above.  This avoids a significant amount
        of complex code, sacrificing performance without affecting
        correctness, and is useful for tracking down bugs.

  noasyncreaddir
        Do not use the dcache as above for readdir.

  noquotadf
        Report overall filesystem usage in statfs instead of using the root
        directory quota.

  nocopyfrom
        Don't use the RADOS 'copy-from' operation to perform remote object
        copies.  Currently, it's only used in copy_file_range, which will
        revert to the default VFS implementation if this option is used.

  recover_session=<no|clean>
        Set auto reconnect mode in the case where the client is blacklisted.
        The available modes are "no" and "clean".  The default is "no".

        * no: never attempt to reconnect when the client detects that it
          has been blacklisted.  Operations will generally fail after being
          blacklisted.

        * clean: the client reconnects to the Ceph cluster automatically
          when it detects that it has been blacklisted.  During reconnect,
          the client drops dirty data/metadata, and invalidates page caches
          and writable file handles.  After reconnect, file locks become
          stale because the MDS loses track of them.  If an inode contains
          any stale file locks, read/write on the inode is not allowed
          until applications release all stale file locks.

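Options are passed to mount with -o in the usual way; for example, to
double the readahead size and disable asynchronous readdir (the values
shown are illustrative):

 # mount -t ceph 1.2.3.4:/ /mnt/ceph -o rasize=16777216,noasyncreaddir
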
More Information
================

For more information on Ceph, see the home page at
        https://ceph.com/

The Linux kernel client source tree is available at
        https://github.com/ceph/ceph-client.git
        git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git

and the source for the full system is at
        https://github.com/ceph/ceph.git