Introduction
============

This document describes a collection of device-mapper targets that
between them implement thin-provisioning and snapshots.

The main highlight of this implementation, compared to the previous
implementation of snapshots, is that it allows many virtual devices to
be stored on the same data volume.  This simplifies administration and
allows the sharing of data between volumes, thus reducing disk usage.

Another significant feature is support for an arbitrary depth of
recursive snapshots (snapshots of snapshots of snapshots ...).  The
previous implementation of snapshots did this by chaining together
lookup tables, and so performance was O(depth).  This new
implementation uses a single data structure to avoid this degradation
with depth.  Fragmentation may still be an issue, however, in some
scenarios.

Metadata is stored on a separate device from data, giving the
administrator some freedom, for example to:

- Improve metadata resilience by storing metadata on a mirrored volume
  but data on a non-mirrored one.

- Improve performance by storing the metadata on SSD.

Status
======

These targets are very much still in the EXPERIMENTAL state.  Please
do not yet rely on them in production.  But do experiment and offer us
feedback.  Different use cases will have different performance
characteristics, for example due to fragmentation of the data volume.

If you find this software is not performing as expected please mail
dm-devel@redhat.com with details and we'll try our best to improve
things for you.

Userspace tools for checking and repairing the metadata are under
development.

Cookbook
========

This section describes some quick recipes for using thin provisioning.
They use the dmsetup program to control the device-mapper driver
directly.  End users will be advised to use a higher-level volume
manager such as LVM2 once support has been added.

Pool device
-----------

The pool device ties together the metadata volume and the data volume.
It maps I/O linearly to the data volume and updates the metadata via
two mechanisms:

- Function calls from the thin targets

- Device-mapper 'messages' from userspace which control the creation of new
  virtual devices amongst other things.

Setting up a fresh pool device
------------------------------

Setting up a pool device requires a valid metadata device, and a
data device.  If you do not have an existing metadata device you can
make one by zeroing the first 4k to indicate empty metadata.

    dd if=/dev/zero of=$metadata_dev bs=4096 count=1

The amount of metadata you need will vary according to how many blocks
are shared between thin devices (i.e. through snapshots).  If you have
less sharing than average you'll need a larger-than-average metadata device.

As a guide, we suggest you calculate the number of bytes to use in the
metadata device as 48 * $data_dev_size / $data_block_size but round it up
to 2MB if the answer is smaller.  If you're creating large numbers of
snapshots which are recording large amounts of change, you may find you
need to increase this.
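
For example, a quick back-of-the-envelope calculation in shell (the
100GiB data device and 64KiB block size below are purely illustrative
values):

    data_dev_size=$((100 * 1024 * 1024 * 1024))        # bytes
    data_block_size=$((64 * 1024))                     # bytes
    echo $((48 * data_dev_size / data_block_size))     # ~75MiB of metadata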

The largest size supported is 16GB: if the device is larger,
a warning will be issued and the excess space will not be used.

Reloading a pool table
----------------------

You may reload a pool's table; indeed, this is how the pool is resized
if it runs out of space.  (N.B. While specifying a different metadata
device when reloading is not forbidden at the moment, things will go
wrong if it does not route I/O to exactly the same on-disk location as
previously.)
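
As an illustrative sketch (the 40960000-sector length and the device
variables are examples only), growing the pool after enlarging the data
device amounts to reloading the table with the new length:

    dmsetup suspend pool
    dmsetup reload pool --table "0 40960000 thin-pool $metadata_dev $data_dev \
                                 $data_block_size $low_water_mark"
    dmsetup resume pool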

Using an existing pool device
-----------------------------

    dmsetup create pool \
        --table "0 20971520 thin-pool $metadata_dev $data_dev \
                 $data_block_size $low_water_mark"

$data_block_size gives the smallest unit of disk space that can be
allocated at a time, expressed in units of 512-byte sectors.
$data_block_size must be between 128 (64KB) and 2097152 (1GB) and a
multiple of 128 (64KB).  $data_block_size cannot be changed after the
thin-pool is created.  People primarily interested in thin provisioning
may want to use a value such as 1024 (512KB).  People doing lots of
snapshotting may want a smaller value such as 128 (64KB).  If you are
not zeroing newly-allocated data, a larger $data_block_size in the
region of 256000 (128MB) is suggested.

$low_water_mark is expressed in blocks of size $data_block_size.  If
free space on the data device drops below this level then a dm event
will be triggered which a userspace daemon should catch allowing it to
extend the pool device.  Only one such event will be sent.

No special event is triggered if a just-resumed device's free space is
below the low water mark.  However, resuming a device always triggers an
event; a userspace daemon should verify that free space exceeds the low
water mark when handling this event.
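
A rough sketch of such a check, assuming the pool created as
/dev/mapper/pool above and that 'dmsetup status' prints the start,
length and target type before the status fields described in the
Reference section:

    # Hypothetical event handler: re-check free data space after each dm event.
    data=$(dmsetup status pool | awk '{ print $6 }')   # <used>/<total> data blocks
    used=${data%/*}
    total=${data#*/}
    if [ $((total - used)) -lt "$low_water_mark" ]; then
        echo "pool is low on space"    # grow the data device and reload the pool table
    fi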

A low water mark for the metadata device is maintained in the kernel and
will trigger a dm event if free space on the metadata device drops below
it.

Updating on-disk metadata
-------------------------

On-disk metadata is committed every time a FLUSH or FUA bio is written.
If no such requests are made then commits will occur every second.  This
means the thin-provisioning target behaves like a physical disk that has
a volatile write cache.  If power is lost you may lose some recent
writes.  The metadata should always be consistent in spite of any crash.

If data space is exhausted the pool will either error or queue IO
according to the configuration (see: error_if_no_space).  If metadata
space is exhausted or a metadata operation fails, the pool will error IO
until the pool is taken offline and repair is performed to 1) fix any
potential inconsistencies and 2) clear the flag that imposes repair.
Once the pool's metadata device is repaired it may be resized, which
will allow the pool to return to normal operation.  Note that if a pool
is flagged as needing repair, the pool's data and metadata devices
cannot be resized until repair is performed.  It should also be noted
that when the pool's metadata space is exhausted the current metadata
transaction is aborted.  Given that the pool will cache IO whose
completion may have already been acknowledged to upper IO layers
(e.g. filesystem) it is strongly suggested that consistency checks
(e.g. fsck) be performed on those layers when repair of the pool is
required.

Thin provisioning
-----------------

i) Creating a new thinly-provisioned volume.

  To create a new thinly-provisioned volume you must send a message to an
  active pool device, /dev/mapper/pool in this example.

    dmsetup message /dev/mapper/pool 0 "create_thin 0"

  Here '0' is an identifier for the volume, a 24-bit number.  It's up
  to the caller to allocate and manage these identifiers.  If the
  identifier is already in use, the message will fail with -EEXIST.

ii) Using a thinly-provisioned volume.

  Thinly-provisioned volumes are activated using the 'thin' target:

    dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0"

  The last parameter is the identifier for the thinp device.

Internal snapshots
------------------

i) Creating an internal snapshot.

  Snapshots are created with another message to the pool.

  N.B.  If the origin device that you wish to snapshot is active, you
  must suspend it before creating the snapshot to avoid corruption.
  This is NOT enforced at the moment, so please be careful!

    dmsetup suspend /dev/mapper/thin
    dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
    dmsetup resume /dev/mapper/thin

  Here '1' is the identifier for the volume, a 24-bit number.  '0' is the
  identifier for the origin device.

ii) Using an internal snapshot.

  Once created, the user doesn't have to worry about any connection
  between the origin and the snapshot.  Indeed the snapshot is no
  different from any other thinly-provisioned device and can be
  snapshotted itself via the same method.  It's perfectly legal to
  have only one of them active, and there's no ordering requirement on
  activating or removing them both.  (This differs from conventional
  device-mapper snapshots.)

  Activate it exactly the same way as any other thinly-provisioned volume:

    dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 1"

External snapshots
------------------

You can use an external _read only_ device as an origin for a
thinly-provisioned volume.  Any read to an unprovisioned area of the
thin device will be passed through to the origin.  Writes trigger
the allocation of new blocks as usual.

One use case for this is VM hosts that want to run guests on
thinly-provisioned volumes but have the base image on another device
(possibly shared between many VMs).

You must not write to the origin device if you use this technique!
Of course, you may write to the thin device and take internal snapshots
of the thin volume.

i) Creating a snapshot of an external device.

  This is the same as creating a thin device.
  You don't mention the origin at this stage.

    dmsetup message /dev/mapper/pool 0 "create_thin 0"

ii) Using a snapshot of an external device.

  Append an extra parameter to the thin target specifying the origin:

    dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 0 /dev/image"

  N.B. All descendants (internal snapshots) of this snapshot require the
  same extra origin parameter.

Deactivation
------------

All devices using a pool must be deactivated before the pool itself
can be.

    dmsetup remove thin
    dmsetup remove snap
    dmsetup remove pool

Reference
=========

'thin-pool' target
------------------

i) Constructor

    thin-pool <metadata dev> <data dev> <data block size (sectors)> \
              <low water mark (blocks)> [<number of feature args> [<arg>]*]

    Optional feature arguments:

      skip_block_zeroing: Skip the zeroing of newly-provisioned blocks.

      ignore_discard: Disable discard support.

      no_discard_passdown: Don't pass discards down to the underlying
                           data device, but just remove the mapping.

      read_only: Don't allow any changes to be made to the pool
                 metadata.

      error_if_no_space: Error IOs, instead of queueing, if no space.

    Data block size must be between 64KB (128 sectors) and 1GB
    (2097152 sectors) inclusive.
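
    As an illustration only (the devices, the 128-sector block size and
    the 32768-block low water mark are arbitrary), a pool using two of
    these feature arguments could be created with:

        dmsetup create pool \
            --table "0 20971520 thin-pool $metadata_dev $data_dev \
                     128 32768 2 skip_block_zeroing no_discard_passdown"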


ii) Status

    <transaction id> <used metadata blocks>/<total metadata blocks>
    <used data blocks>/<total data blocks> <held metadata root>
    [no_]discard_passdown ro|rw

    transaction id:
        A 64-bit number used by userspace to help synchronise with metadata
        from volume managers.

    used data blocks / total data blocks:
        If the number of free blocks drops below the pool's low water mark a
        dm event will be sent to userspace.  This event is edge-triggered and
        it will occur only once after each resume so volume manager writers
        should register for the event and then check the target's status.

    held metadata root:
        The location, in blocks, of the metadata root that has been
        'held' for userspace read access.  '-' indicates there is no
        held root.

    discard_passdown|no_discard_passdown
        Whether or not discards are actually being passed down to the
        underlying device.  Even if enabled when the table is loaded,
        it can be disabled if the underlying device doesn't support it.

    ro|rw|out_of_data_space
        If the pool encounters certain types of device failures it will
        drop into a read-only metadata mode in which no changes to
        the pool metadata (like allocating new blocks) are permitted.

        In serious cases where even a read-only mode is deemed unsafe
        no further I/O will be permitted and the status will just
        contain the string 'Fail'.  The userspace recovery tools
        should then be used.

    error_if_no_space|queue_if_no_space
        If the pool runs out of data or metadata space, the pool will
        either queue or error the IO destined to the data device.  The
        default is to queue the IO until more space is added or the
        'no_space_timeout' expires.  The 'no_space_timeout' dm-thin-pool
        module parameter can be used to change this timeout -- it
        defaults to 60 seconds but may be disabled using a value of 0.

    needs_check
        A metadata operation has failed, resulting in the needs_check
        flag being set in the metadata's superblock.  The metadata
        device must be deactivated and checked/repaired before the
        thin-pool can be made fully operational again.  '-' indicates
        needs_check is not set.

iii) Messages

    create_thin <dev id>

        Create a new thinly-provisioned device.
        <dev id> is an arbitrary unique 24-bit identifier chosen by
        the caller.

    create_snap <dev id> <origin id>

        Create a new snapshot of another thinly-provisioned device.
        <dev id> is an arbitrary unique 24-bit identifier chosen by
        the caller.
        <origin id> is the identifier of the thinly-provisioned device
        of which the new device will be a snapshot.

    delete <dev id>

        Deletes a thin device.  Irreversible.
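
        For example, deleting the thin device with id 1 (the id is
        illustrative):

            dmsetup message /dev/mapper/pool 0 "delete 1"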

    set_transaction_id <current id> <new id>

        Userland volume managers, such as LVM, need a way to
        synchronise their external metadata with the internal metadata of the
        pool target.  The thin-pool target offers to store an
        arbitrary 64-bit transaction id and return it on the target's
        status line.  To avoid races you must provide what you think
        the current transaction id is when you change it with this
        compare-and-swap message.
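
        For example, moving from transaction id 0 to 1 (the values are
        illustrative; the first argument must match the id currently
        reported in the pool's status):

            dmsetup message /dev/mapper/pool 0 "set_transaction_id 0 1"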

    reserve_metadata_snap

        Reserve a copy of the data mapping btree for use by userland.
        This allows userland to inspect the mappings as they were when
        this message was executed.  Use the pool's status command to
        get the root block associated with the metadata snapshot.

    release_metadata_snap

        Release a previously reserved copy of the data mapping btree.
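
        For example, reserving and later releasing a metadata snapshot
        on the pool device from the cookbook:

            dmsetup message /dev/mapper/pool 0 reserve_metadata_snap
            dmsetup status pool    # the held metadata root appears here
            dmsetup message /dev/mapper/pool 0 release_metadata_snap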
366
367'thin' target
368-------------
369
370i) Constructor
371
372    thin <pool dev> <dev id> [<external origin dev>]
373
374    pool dev:
375	the thin-pool device, e.g. /dev/mapper/my_pool or 253:0
376
377    dev id:
378	the internal device identifier of the device to be
379	activated.
380
381    external origin dev:
382	an optional block device outside the pool to be treated as a
383	read-only snapshot origin: reads to unprovisioned areas of the
384	thin target will be mapped to this device.
385
386The pool doesn't store any size against the thin devices.  If you
387load a thin target that is smaller than you've been using previously,
388then you'll have no access to blocks mapped beyond the end.  If you
389load a target that is bigger than before, then extra blocks will be
390provisioned as and when needed.
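
As a sketch (the doubled length of 4194304 sectors is only an example),
growing an active thin device therefore amounts to reloading its table
with a larger length:

    dmsetup suspend thin
    dmsetup reload thin --table "0 4194304 thin /dev/mapper/pool 0"
    dmsetup resume thin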

ii) Status

    <nr mapped sectors> <highest mapped sector>

        If the pool has encountered device errors and failed, the status
        will just contain the string 'Fail'.  The userspace recovery
        tools should then be used.
