
Thin provisioning
=================

Introduction
============

This document describes a collection of device-mapper targets that
between them implement thin-provisioning and snapshots.

The main highlight of this implementation, compared to the previous
implementation of snapshots, is that it allows many virtual devices to
be stored on the same data volume.  This simplifies administration and
allows the sharing of data between volumes, thus reducing disk usage.

Another significant feature is support for an arbitrary depth of
recursive snapshots (snapshots of snapshots of snapshots ...).  The
previous implementation of snapshots did this by chaining together
lookup tables, and so performance was O(depth).  This new
implementation uses a single data structure to avoid this degradation
with depth.  Fragmentation may still be an issue, however, in some
scenarios.

Metadata is stored on a separate device from data, giving the
administrator some freedom, for example to:

- Improve metadata resilience by storing metadata on a mirrored volume
  but data on a non-mirrored one.

- Improve performance by storing the metadata on SSD.

Status
======

If you find this software is not performing as expected please mail
dm-devel@redhat.com with details and we'll try our best to improve
things for you.

Userspace tools for checking and repairing the metadata have been fully
developed and are available as 'thin_check' and 'thin_repair'.  The name
of the package that provides these utilities varies by distribution (on
a Red Hat distribution it is named 'device-mapper-persistent-data').

Cookbook
========

This section describes some quick recipes for using thin provisioning.
They use the dmsetup program to control the device-mapper driver
directly.  End users will be advised to use a higher-level volume
manager such as LVM2 once support has been added.

Pool device
-----------

The pool device ties together the metadata volume and the data volume.
It maps I/O linearly to the data volume and updates the metadata via
two mechanisms:

- Function calls from the thin targets

- Device-mapper 'messages' from userspace which control the creation of new
  virtual devices amongst other things.

Setting up a fresh pool device
------------------------------

Setting up a pool device requires a valid metadata device, and a
data device.  If you do not have an existing metadata device you can
make one by zeroing the first 4k to indicate empty metadata::

    dd if=/dev/zero of=$metadata_dev bs=4096 count=1

The amount of metadata you need will vary according to how many blocks
are shared between thin devices (i.e. through snapshots).  If you have
less sharing than average you'll need a larger-than-average metadata device.

As a guide, we suggest you calculate the number of bytes to use for the
metadata device as 48 * $data_dev_size / $data_block_size, but round it
up to 2MB if the answer is smaller.  If you're creating large numbers of
snapshots which are recording large amounts of change, you may find you
need to increase this.

The largest size supported is 16GB: if the device is larger than this,
a warning will be issued and the excess space will not be used.

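As a concrete illustration of that guide (a sketch only; $data_dev is a
placeholder for your data device, and the block size chosen here is an
example value), the suggested metadata size can be computed with shell
arithmetic, keeping both quantities in the same unit::

    # Sketch: compute the suggested metadata device size in bytes.
    data_dev_size=$(blockdev --getsize64 $data_dev)   # data device size, bytes
    data_block_size=131072                            # example: 128KB, in bytes
    meta_bytes=$((48 * data_dev_size / data_block_size))
    # Round up to the suggested 2MB minimum.
    if [ "$meta_bytes" -lt $((2 * 1024 * 1024)) ]; then
        meta_bytes=$((2 * 1024 * 1024))
    fi
    echo "suggested metadata device size: $meta_bytes bytes"
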
Reusing a pool device
---------------------

A pool device can be reused with the same metadata and data devices.
Be careful if the data device mapping changes between activations: the
pool will go wrong if it does not route I/O to exactly the same on-disk
location as it did previously.

Using an existing pool device
-----------------------------

::

    dmsetup create pool \
        --table "0 20971520 thin-pool $metadata_dev $data_dev \
                 $data_block_size $low_water_mark"

$data_block_size gives the smallest unit of disk space that can be
allocated at a time, expressed in units of 512-byte sectors.
$data_block_size must be between 128 (64KB) and 2097152 (1GB) and a
multiple of 128 (64KB).  $data_block_size cannot be changed after the
thin-pool is created.  People primarily interested in thin provisioning
may want to use a block size such as 1024 (512KB), while people doing
lots of snapshotting may want a smaller value such as 128 (64KB).  If
you are not zeroing newly-allocated data, a larger $data_block_size in
the region of 256000 (128MB) is suggested.

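For instance (a sketch; the device names and the low water mark are
placeholders, not values from this document), the 20971520 in the table
line above is the length of the pool device in 512-byte sectors, which
can be taken from the data device itself::

    # Sketch: build the pool table from the data device's own size.
    metadata_dev=/dev/sdb1
    data_dev=/dev/sdb2
    data_block_size=128        # 64KB blocks
    low_water_mark=1024        # in blocks of $data_block_size
    data_dev_sectors=$(blockdev --getsz $data_dev)   # 512-byte sectors
    dmsetup create pool \
        --table "0 $data_dev_sectors thin-pool $metadata_dev $data_dev \
                 $data_block_size $low_water_mark"
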
$low_water_mark is expressed in blocks of size $data_block_size.  If
free space on the data device drops below this level then a dm event
will be triggered which a userspace daemon should catch, allowing it to
extend the pool device.  Only one such event will be sent.

A low water mark for the metadata device is maintained in the kernel and
will trigger a dm event if free space on the metadata device drops below
it.

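One way a monitoring daemon might consume these events (a minimal
sketch, assuming a pool named 'pool'; 'dmsetup wait' blocks until the
device's event counter exceeds the given value)::

    # Sketch of a watcher loop: block on the pool's next dm event, then
    # inspect the status line to decide whether the data device needs
    # extending.
    while true; do
        ev=$(dmsetup info -c --noheadings -o events pool)
        dmsetup wait pool "$ev"        # returns once a new event fires
        dmsetup status pool            # parse <used>/<total> data blocks here
        # ... extend the data device and reload the pool table if low ...
    done
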
Updating on-disk metadata
-------------------------

On-disk metadata is committed every time a FLUSH or FUA bio is written.
If no such requests are made then commits will occur every second.  This
means the thin-provisioning target behaves like a physical disk that has
a volatile write cache.  If power is lost you may lose some recent
writes.  The metadata should always be consistent in spite of any crash.

If data space is exhausted the pool will either error or queue IO
according to the configuration (see: error_if_no_space).  If metadata
space is exhausted or a metadata operation fails, the pool will error IO
until the pool is taken offline and repair is performed to 1) fix any
potential inconsistencies and 2) clear the flag that imposes repair.
Note that if a pool is flagged as needing repair, the pool's data and
metadata devices cannot be resized until repair is performed.

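A typical offline repair sequence (a sketch only; the device names are
placeholders, $fresh_metadata_dev is a hypothetical spare device, and
'thin_check'/'thin_repair' are the userspace tools introduced above)
might look like::

    # Sketch: the pool must be inactive while its metadata is checked
    # and rebuilt onto a spare device.
    dmsetup remove pool                  # take the pool offline
    thin_check $metadata_dev             # report metadata inconsistencies
    thin_repair -i $metadata_dev -o $fresh_metadata_dev
    # Re-create the pool against the repaired metadata device.
    dmsetup create pool \
        --table "0 20971520 thin-pool $fresh_metadata_dev $data_dev \
                 $data_block_size $low_water_mark"
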
Thin provisioning
-----------------

i) Creating a new thinly-provisioned volume.

  To create a new thinly-provisioned volume you must send a message to an
  active pool device, /dev/mapper/pool in this example::

    dmsetup message /dev/mapper/pool 0 "create_thin 0"

  Here '0' is an identifier for the volume, a 24-bit number.  It's up
  to the caller to allocate and manage these identifiers.  If the
  identifier is already in use, the message will fail with -EEXIST.

ii) Using a thinly-provisioned volume.

  Thinly-provisioned volumes are activated using the 'thin' target::

    dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0"

  The last parameter is the identifier for the thinp device.

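  Once activated it behaves like any other block device; for example (a
  sketch reusing the names above), you can put a filesystem on it and
  watch blocks get provisioned lazily::

    mkfs.ext4 /dev/mapper/thin      # writes trigger block allocation
    dmsetup status thin             # <nr mapped sectors> <highest mapped sector>
    dmsetup status pool             # used/total data blocks will have grown
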
Internal snapshots
------------------

i) Creating an internal snapshot.

  Snapshots are created with another message to the pool.

  N.B.  If the origin device that you wish to snapshot is active, you
  must suspend it before creating the snapshot to avoid corruption.
  This is NOT enforced at the moment, so please be careful!

  ::

    dmsetup suspend /dev/mapper/thin
    dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
    dmsetup resume /dev/mapper/thin

  Here '1' is the identifier for the volume, a 24-bit number.  '0' is the
  identifier for the origin device.

ii) Using an internal snapshot.

  Once created, the user doesn't have to worry about any connection
  between the origin and the snapshot.  Indeed the snapshot is no
  different from any other thinly-provisioned device and can be
  snapshotted itself via the same method.  It's perfectly legal to
  have only one of them active, and there's no ordering requirement on
  activating or removing them both.  (This differs from conventional
  device-mapper snapshots.)

  Activate it exactly the same way as any other thinly-provisioned volume::

    dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 1"

External snapshots
------------------

You can use an external **read only** device as an origin for a
thinly-provisioned volume.  Any read to an unprovisioned area of the
thin device will be passed through to the origin.  Writes trigger
the allocation of new blocks as usual.

One use case for this is VM hosts that want to run guests on
thinly-provisioned volumes but have the base image on another device
(possibly shared between many VMs).

You must not write to the origin device if you use this technique!
Of course, you may write to the thin device and take internal snapshots
of the thin volume.

i) Creating a snapshot of an external device.

  This is the same as creating a thin device.
  You don't mention the origin at this stage::

    dmsetup message /dev/mapper/pool 0 "create_thin 0"

ii) Using a snapshot of an external device.

  Append an extra parameter to the thin target specifying the origin::

    dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 0 /dev/image"

  N.B. All descendants (internal snapshots) of this snapshot require the
  same extra origin parameter.

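Putting the pieces together (a sketch; the guest device names are
illustrative and /dev/image stands for any read-only base image, as
above), a VM host could stamp out several guests over one shared
image::

    # Sketch: one shared read-only base image, one thin device per guest.
    for id in 0 1 2; do
        dmsetup message /dev/mapper/pool 0 "create_thin $id"
        dmsetup create guest$id \
            --table "0 2097152 thin /dev/mapper/pool $id /dev/image"
    done
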
Deactivation
------------

All devices using a pool must be deactivated before the pool itself
can be::

    dmsetup remove thin
    dmsetup remove snap
    dmsetup remove pool

Reference
=========

'thin-pool' target
------------------

i) Constructor

  ::

    thin-pool <metadata dev> <data dev> <data block size (sectors)> \
              <low water mark (blocks)> [<number of feature args> [<arg>]*]

  Optional feature arguments:

    skip_block_zeroing:
        Skip the zeroing of newly-provisioned blocks.

    ignore_discard:
        Disable discard support.

    no_discard_passdown:
        Don't pass discards down to the underlying data device, but
        just remove the mapping.

    read_only:
        Don't allow any changes to be made to the pool metadata.  This
        mode is only available after the thin-pool has been created and
        first used in full read/write mode.  It cannot be specified on
        initial thin-pool creation.

    error_if_no_space:
        Error IOs, instead of queueing, if no free space.

  Data block size must be between 64KB (128 sectors) and 1GB
  (2097152 sectors).  It must be a multiple of 64KB (128 sectors).

ii) Status

  ::

    <transaction id> <used metadata blocks>/<total metadata blocks>
    <used data blocks>/<total data blocks> <held metadata root>
    ro|rw|out_of_data_space [no_]discard_passdown [error|queue]_if_no_space
    needs_check|- metadata_low_watermark

  transaction id:
      A 64-bit number used by userspace to help synchronise with metadata
      from volume managers.

  used data blocks / total data blocks:
      If the number of free blocks drops below the pool's low water mark a
      dm event will be sent to userspace.  This event is edge-triggered and
      it will occur only once after each resume, so volume manager writers
      should register for the event and then check the target's status.

  held metadata root:
      The location, in blocks, of the metadata root that has been
      'held' for userspace read access.  '-' indicates there is no
      held root.

  ro|rw|out_of_data_space:
      If the pool encounters certain types of device failures it will
      drop into a read-only metadata mode in which no changes to
      the pool metadata (like allocating new blocks) are permitted.

      In serious cases where even a read-only mode is deemed unsafe
      no further I/O will be permitted and the status will just
      contain the string 'Fail'.  The userspace recovery tools
      should then be used.

  error_if_no_space|queue_if_no_space:
      If the pool runs out of data or metadata space, the pool will
      either queue or error the IO destined to the data device.  The
      default is to queue the IO until more space is added or the
      'no_space_timeout' expires.  The 'no_space_timeout' dm-thin-pool
      module parameter can be used to change this timeout; it defaults
      to 60 seconds but may be disabled using a value of 0 (see the
      sketch after this status list).

  needs_check:
      A metadata operation has failed, resulting in the needs_check
      flag being set in the metadata's superblock.  The metadata
      device must be deactivated and checked/repaired before the
      thin-pool can be made fully operational again.  '-' indicates
      needs_check is not set.

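If your kernel exposes 'no_space_timeout' as a writable module
parameter (an assumption; check your build), it can be adjusted at
runtime through sysfs, for example::

    # Sketch: disable the queue-if-no-space timeout entirely (0 = off).
    echo 0 > /sys/module/dm_thin_pool/parameters/no_space_timeout
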
iii) Messages

  create_thin <dev id>
      Create a new thinly-provisioned device.
      <dev id> is an arbitrary unique 24-bit identifier chosen by
      the caller.

  create_snap <dev id> <origin id>
      Create a new snapshot of another thinly-provisioned device.
      <dev id> is an arbitrary unique 24-bit identifier chosen by
      the caller.
      <origin id> is the identifier of the thinly-provisioned device
      of which the new device will be a snapshot.

  set_transaction_id <current id> <new id>
      Userland volume managers, such as LVM, need a way to
      synchronise their external metadata with the internal metadata of
      the pool target.  The thin-pool target offers to store an
      arbitrary 64-bit transaction id and return it on the target's
      status line.  To avoid races, you must provide what you think
      the current transaction id is when you change it with this
      compare-and-swap message.

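  For example (a sketch, assuming a pool named 'pool'; the transaction
  id is the fourth field of 'dmsetup status' output, after the start,
  length and target-type fields)::

    # Sketch of the compare-and-swap: read the current id, then pass it
    # back as <current id> while setting the new one.
    cur=$(dmsetup status pool | awk '{print $4}')
    dmsetup message pool 0 "set_transaction_id $cur $((cur + 1))"
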
'thin' target
-------------

i) Constructor

  ::

    thin <pool dev> <dev id> [<external origin dev>]

  pool dev:
      the thin-pool device, e.g. /dev/mapper/my_pool or 253:0

  dev id:
      the internal device identifier of the device to be
      activated.

  external origin dev:
      an optional block device outside the pool to be treated as a
      read-only snapshot origin: reads to unprovisioned areas of the
      thin target will be mapped to this device.

The pool doesn't store any size against the thin devices.  If you
load a thin target that is smaller than you've been using previously,
then you'll have no access to blocks mapped beyond the end.  If you
load a target that is bigger than before, then extra blocks will be
provisioned as and when needed.

ii) Status

  <nr mapped sectors> <highest mapped sector>
      If the pool has encountered device errors and failed, the status
      will just contain the string 'Fail'.  The userspace recovery
      tools should then be used.

      In the case where <nr mapped sectors> is 0, there is no highest
      mapped sector and the value of <highest mapped sector> is unspecified.

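To see those two fields in practice (a sketch, reusing the 'thin'
device from the cookbook; the printed values are purely illustrative)::

    # Sketch: the last two fields of the status line are
    # <nr mapped sectors> and <highest mapped sector>.
    dmsetup status thin
    # e.g. "0 2097152 thin 2048 2559": 2048 sectors mapped, highest is 2559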