11 This is the git repository of bcache-tools:
12 https://git.kernel.org/pub/scm/linux/kernel/git/colyli/bcache-tools.git/
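To build the utilities from that repository, a typical sequence looks like the following (this assumes the stock make-based build and that the libblkid development headers and pkg-config are installed; details may differ between bcache-tools releases)::

  git clone https://git.kernel.org/pub/scm/linux/kernel/git/colyli/bcache-tools.git
  cd bcache-tools
  make
  make install   # usually installs the utilities and the udev rules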
17 It's designed around the performance characteristics of SSDs - it only allocates
24 off, but can be switched on and off arbitrarily at runtime. Bcache goes to
25 great lengths to protect your data - it reliably handles unclean shutdown. (It
27 writes as completed until they're on stable storage).
29 Writeback caching can use most of the cache for buffering writes - writing
36 average is above the cutoff it will skip all IO from that task - instead of
40 In the event of a data IO error on the flash it will try to recover by reading
47 You'll need the bcache utility from the bcache-tools repository. Both the cache device
50 bcache make -B /dev/sdb
51 bcache make -C /dev/sdc
53 `bcache make` has the ability to format multiple devices at the same time - if
57 bcache make -B /dev/sda /dev/sdb -C /dev/sdc
59 If your bcache-tools is not updated to the latest version and does not have the
60 unified `bcache` utility, you may use the legacy `make-bcache` utility to format
61 the bcache device with the same -B and -C parameters.
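For example, the legacy equivalents of the commands above look like this (same -B/-C semantics; a sufficiently recent make-bcache also accepts multiple devices in one invocation)::

  make-bcache -B /dev/sdb
  make-bcache -C /dev/sdc
  # or format backing and cache devices together:
  make-bcache -B /dev/sda /dev/sdb -C /dev/sdc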
63 bcache-tools now ships udev rules, and bcache devices are known to the kernel
83 /dev/bcache/by-uuid/<uuid>
84 /dev/bcache/by-label/<label>
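Once the /dev/bcache<N> node exists, it can be used like any other block device; for example::

  mkfs.ext4 /dev/bcache0
  mount /dev/bcache0 /mnt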
92 You can also control them through /sys/fs/bcache/<cset-uuid>/.
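For instance, the cache mode of a backing device can be inspected and changed at runtime through its sysfs directory (writethrough is the default)::

  cat /sys/block/bcache0/bcache/cache_mode
  echo writeback > /sys/block/bcache0/bcache/cache_mode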
99 ---------
106 echo <CSET-UUID> > /sys/block/bcache0/bcache/attach
110 /dev/bcache<N> device won't be created until the cache shows up - particularly
111 important if you have writeback caching turned on.
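You can check whether a backing device currently has a cache, and whether it holds dirty data, via its state file::

  cat /sys/block/bcache0/bcache/state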
124 cache, don't expect the filesystem to be recoverable - you will have massive
128 --------------
135 - For reads from the cache, if they error we just retry the read from the
138 - For writethrough writes, if the write to the cache errors we just switch to
142 - For writeback writes, we currently pass that error back up to the
143 filesystem/userspace. This could be improved - we could retry it as a write
146 - When we detach, we first try to flush any dirty data (if we were running in
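A detach is triggered from sysfs; the flush of dirty data described above happens automatically::

  echo 1 > /sys/block/bcache0/bcache/detach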
152 --------------
173 host:/sys/block/md5/bcache# echo 0226553a-37cf-41d5-b3ce-8b1e944543a8 > attach
175 [ 1933.478179] bcache: __cached_dev_store() Can't attach 0226553a-37cf-41d5-b3ce-8b1e944543a8
179 or disappeared and came back, and needs to be (re-)registered::
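  # substitute the device that came back
  echo /dev/sdb > /sys/fs/bcache/register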
187 Please report it to the bcache development list: linux-bcache@vger.kernel.org
195 If bcache is not available in the kernel, a filesystem on the backing
197 of the backing device created with --offset 8K, or any value defined by
198 --data-offset when you originally formatted bcache with `bcache make`.
202 losetup -o 8192 /dev/loop0 /dev/your_bcache_backing_dev
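The filesystem can then be mounted from the loop device; a read-only mount is shown here as a cautious example::

  mount -o ro /dev/loop0 /mnt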
214 host:~# wipefs -a /dev/sdh2
220 host:~# bcache make -C /dev/sdh2
221 UUID: 7be7e175-8f4c-4f99-94b2-9c904d227045
222 Set UUID: 5bc072a8-ab17-446d-9744-e247949913c1
239 host:/sys/block/md5/bcache# echo 5bc072a8-ab17-446d-9744-e247949913c1 > attach
240 …6616] bcache: bch_cached_dev_attach() Caching md5 as bcache0 on set 5bc072a8-ab17-446d-9744-e24794…
248 host:~# wipefs -a /dev/nvme0n1p4
254 host:/sys/fs/bcache/b7ba27a1-2398-4649-8ae3-0959f57ba128# ls -l cache0
255 …lrwxrwxrwx 1 root root 0 Feb 25 18:33 cache0 -> ../../../devices/pci0000:00/0000:00:1d.0/0000:70:0…
256 host:/sys/fs/bcache/b7ba27a1-2398-4649-8ae3-0959f57ba128# echo 1 > stop
257 …kernel: [ 917.041908] bcache: cache_set_free() Cache set b7ba27a1-2398-4649-8ae3-0959f57ba128 unr…
261 host:~# wipefs -a /dev/nvme0n1p4
265 G) dm-crypt and bcache
267 First set up bcache unencrypted and then install dm-crypt on top of
269 and caching devices and then install bcache on top. [benchmarks?]
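A minimal sketch of the recommended layering, assuming cryptsetup with LUKS and an existing /dev/bcache0 (the mapping name bcache_crypt is only an example)::

  cryptsetup luksFormat /dev/bcache0
  cryptsetup open /dev/bcache0 bcache_crypt
  mkfs.ext4 /dev/mapper/bcache_crypt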
275 fdisk run and re-register a changed partition table, which won't work
276 if there are any active backing or caching devices left on it:
294 bcache: cache_set_free() Cache set 5bc072a8-ab17-446d-9744-e247949913c1 unregistered
302 host:/sys/fs/bcache# ls -l */{cache?,bdev?}
303 …xrwx 1 root root 0 Mar 5 09:39 0226553a-37cf-41d5-b3ce-8b1e944543a8/bdev1 -> ../../../devices/vir…
304 …xrwx 1 root root 0 Mar 5 09:39 0226553a-37cf-41d5-b3ce-8b1e944543a8/cache0 -> ../../../devices/vi…
305 …lrwxrwxrwx 1 root root 0 Mar 5 09:39 5bc072a8-ab17-446d-9744-e247949913c1/cache0 -> ../../../devi…
310 host:/sys/fs/bcache/5bc072a8-ab17-446d-9744-e247949913c1# echo 1 > stop
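Backing devices are stopped the same way, through their own sysfs stop file; for example::

  echo 1 > /sys/block/bcache0/bcache/stop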
318 ---------------------------
324 - Backing device alignment
328 width using `bcache make --data-offset`. If you intend to expand your
338 volume to the following data-spindle counts without re-aligning::
342 - Bad write performance
351 - Bad performance, or traffic not going to the SSD that you'd expect
353 By default, bcache doesn't cache everything. It tries to skip sequential IO -
359 writing an 8 gigabyte test file - so you want to disable that::
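  # disable the sequential bypass entirely
  echo 0 > /sys/block/bcache0/bcache/sequential_cutoff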
367 - Traffic's still going to the spindle/still getting cache misses
369 In the real world, SSDs don't always keep up with disks - particularly with
383 The default is 2000 us (2 milliseconds) for reads, and 20000 us (20 milliseconds) for writes.
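Both thresholds live in the cache set's sysfs directory and can be disabled by writing 0::

  echo 0 > /sys/fs/bcache/<cset-uuid>/congested_read_threshold_us
  echo 0 > /sys/fs/bcache/<cset-uuid>/congested_write_threshold_us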
385 - Still getting cache misses, of the same data
397 and there's no other traffic - that can be a problem.
403 Sysfs - backing device
404 ----------------------
407 (if attached) /sys/fs/bcache/<cset-uuid>/bdev*
425 updated unlike the cache set's version, but may be slightly off.
461 dirty data cached but the cache set was unavailable; whatever data was on the
479 Rate in sectors per second - if writeback_percent is nonzero, background
484 If off, writeback of dirty data will not take place at all. Dirty data will
486 benchmarking. Defaults to on.
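For example, to pause background writeback while benchmarking and to let dirty data build up to 20% of the cache before writeback starts (both are backing-device attributes; the values here are illustrative, not recommendations)::

  echo 0  > /sys/block/bcache0/bcache/writeback_running
  echo 20 > /sys/block/bcache0/bcache/writeback_percent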
488 Sysfs - backing device stats
511 Sysfs - cache set
514 Available at /sys/fs/bcache/<cset-uuid>
556 Journal writes will delay for up to this many milliseconds, unless a cache
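As a hedged example, the delay can be tuned through the cache set directory (100 is only an illustrative value)::

  echo 100 > /sys/fs/bcache/<cset-uuid>/journal_delay_ms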
564 Write to this file to shut down the cache set - waits until all attached
574 Sysfs - cache set internal
598 was reused and invalidated - i.e. where the pointer was stale after the read
604 Sysfs - Cache device
610 Minimum granularity of writes - should match hardware sector size.
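The block size is fixed when the cache device is formatted; if your bcache-tools supports it, it can be set explicitly at format time (the --block flag and the 4k value here are assumptions about your tooling and hardware)::

  bcache make -C --block 4k /dev/sdc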
622 Boolean; if on, a discard/TRIM will be issued to each bucket before it is
623 reused. Defaults to off, since SATA TRIM is an unqueued command (and thus
628 increase the number of buckets kept on the freelist, which lets you
631 since buckets are discarded when they move on to the freelist, this will also make
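As a sketch (not a recommendation), discard can be toggled at runtime, assuming the cache device's sysfs directory is at /sys/block/<cdev>/bcache::

  echo 1 > /sys/block/<cdev>/bcache/discard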