Lines Matching +full:erase +full:- +full:size

This is the git repository of bcache-tools:
  https://git.kernel.org/pub/scm/linux/kernel/git/colyli/bcache-tools.git/

It's designed around the performance characteristics of SSDs - it only allocates
in erase block sized buckets, and it uses a hybrid btree/log to track cached
extents (which can be anywhere from a single sector to the bucket size). It's
designed to avoid random writes at all costs; it fills up an erase block
sequentially, then issues a discard before reusing it.

Both writethrough and writeback caching are supported. Writeback defaults to
off, but can be switched on and off arbitrarily at runtime. Bcache goes to
great lengths to protect your data - it reliably handles unclean shutdown. (It
doesn't even have a notion of a clean shutdown; bcache simply doesn't return
writes as completed until they're on stable storage.)

Writeback caching can use most of the cache for buffering writes - writing
dirty data to the backing device is always done sequentially, scanning from the
start to the end of the index.

Bcache detects sequential IO and skips it; it also keeps a rolling average of
the IO sizes per task, and as long as the average is above the cutoff it will
skip all IO from that task - instead of caching the first 512k after every
seek. Backups and large file copies should thus entirely bypass the cache.

You'll need the bcache utility from the bcache-tools repository. Both the cache
device and backing device must be formatted before use::

  bcache make -B /dev/sdb
  bcache make -C /dev/sdc

`bcache make` has the ability to format multiple devices at the same time - if
you wish to create a backing device and cache device at the same time, you must
specify the cache device first::

  bcache make -B /dev/sda /dev/sdb -C /dev/sdc

If your bcache-tools is not updated to the latest version and does not have the
unified `bcache` utility, you may use the legacy `make-bcache` utility to format
bcache devices with the same -B and -C parameters.
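
For example, the legacy equivalent of the combined invocation above would look
like this (device names are illustrative)::

  make-bcache -B /dev/sda /dev/sdb -C /dev/sdc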

bcache-tools now ships udev rules, and bcache devices are known to the kernel
immediately.
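
Without udev, you can register devices manually by writing their paths to the
sysfs register file (sdb and sdc stand in for your own devices)::

  echo /dev/sdb > /sys/fs/bcache/register
  echo /dev/sdc > /sys/fs/bcache/register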

The devices show up as /dev/bcache<N>, as well as (with udev)::

  /dev/bcache/by-uuid/<uuid>
  /dev/bcache/by-label/<label>

You can control bcache devices through sysfs at /sys/block/bcache<N>/bcache.
You can also control them through /sys/fs/bcache/<cset-uuid>/ .

Attaching
---------

After your cache device and backing device are registered, the backing device
must be attached to your cache set to enable caching. Attaching is done with
the UUID of the cache set in /sys/fs/bcache::

  echo <CSET-UUID> > /sys/block/bcache0/bcache/attach

This only has to be done once. The next time you reboot, just reregister all
your bcache devices. If a backing device has data in a cache somewhere, the
/dev/bcache<N> device won't be created until the cache shows up - particularly
important if you have writeback caching turned on.
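
If you're booting up and your cache device is gone and never coming back, you
can force run the backing device (use /sys/block/sdb, or whatever your backing
device is called, not /sys/block/bcache0, because bcache0 doesn't exist yet)::

  echo 1 > /sys/block/sdb/bcache/running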

The backing device will still use that cache set if it shows up in the future,
but all the cached data will be invalidated. If there was dirty data in the
cache, don't expect the filesystem to be recoverable - you will have massive
filesystem corruption, though ext4's fsck does work miracles.

Error Handling
--------------

Bcache tries to transparently handle IO errors to/from the cache device without
affecting normal operation; if it sees too many errors it shuts down the cache
device and switches all the backing devices to passthrough mode (a sketch for
adjusting the error threshold follows this list).

- For reads from the cache, if they error we just retry the read from the
  backing device.

- For writethrough writes, if the write to the cache errors we just switch to
  invalidating the data at that lba in the cache (i.e. the same thing we do for
  a write that bypasses the cache).

- For writeback writes, we currently pass that error back up to the
  filesystem/userspace. This could be improved - we could retry it as a write
  that skips the cache so we don't have to error the write.

- When we detach, we first try to flush any dirty data (if we were running in
  writeback mode). It currently will just exit if that fails and doesn't do
  anything else.
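
A minimal sketch of raising that threshold; the io_error_limit attribute of the
cache set is an assumption here (check your kernel's sysfs), and the value is
arbitrary::

  # allow up to 8 IO errors before the cache device is shut down (assumed knob)
  echo 8 > /sys/fs/bcache/<cset-uuid>/io_error_limit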

Howto/cookbook
--------------

B) Bcache does not find its cache::

  host:/sys/block/md5/bcache# echo 0226553a-37cf-41d5-b3ce-8b1e944543a8 > attach
  [ 1933.478179] bcache: __cached_dev_store() Can't attach 0226553a-37cf-41d5-b3ce-8b1e944543a8
                 : cache set not found

In this case, the caching device was simply not registered at boot or it
disappeared and came back, and needs to be (re-)registered::
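
  host:/sys/block/md5/bcache# echo /dev/sdh2 > /sys/fs/bcache/register

(/dev/sdh2 stands for whatever your caching device is.)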

C) Corrupt bcache crashes the kernel at device registration time:

This should never happen. If it does happen, then you have found a bug!
Please report it to the bcache development list: linux-bcache@vger.kernel.org

D) Recovering data without bcache:

If bcache is not available in the kernel, a filesystem on the backing device is
still available at an 8KiB offset. So either via a loopdev of the backing device
created with --offset 8K, or any value defined by --data-offset when you
originally formatted bcache with `bcache make`.

For example::

  losetup -o 8192 /dev/loop0 /dev/your_bcache_backing_dev

This should present your unmodified backing device data in /dev/loop0.
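
From there, a read-only mount is the safest way to inspect the data; a minimal
sketch (the mount point is arbitrary)::

  mount -o ro /dev/loop0 /mnt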

E) Wiping a cache device::

  host:~# wipefs -a /dev/sdh2

After you boot back with bcache enabled, you recreate the cache and attach it::

  host:~# bcache make -C /dev/sdh2
  UUID:     7be7e175-8f4c-4f99-94b2-9c904d227045
  Set UUID: 5bc072a8-ab17-446d-9744-e247949913c1
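
Start the backing device with its missing cache (md5 is this example's backing
device), then attach the new cache::

  host:/sys/block/md5/bcache# echo 1 > running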

  host:/sys/block/md5/bcache# echo 5bc072a8-ab17-446d-9744-e247949913c1 > attach
  … bcache: bch_cached_dev_attach() Caching md5 as bcache0 on set 5bc072a8-ab17-446d-9744-e247949913c1

F) Remove or replace a caching device. After detaching, a wipefs of the caching
device fails with "Device or resource busy" while the cache is still registered
- it's disabled, but not unregistered, so it's still protected::

  host:~# wipefs -a /dev/nvme0n1p4

We need to go and unregister it::

  host:/sys/fs/bcache/b7ba27a1-2398-4649-8ae3-0959f57ba128# ls -l cache0
  lrwxrwxrwx 1 root root 0 Feb 25 18:33 cache0 -> ../../../devices/pci0000:00/0000:00:1d.0/0000:70:0…
  host:/sys/fs/bcache/b7ba27a1-2398-4649-8ae3-0959f57ba128# echo 1 > stop
  kernel: [ 917.041908] bcache: cache_set_free() Cache set b7ba27a1-2398-4649-8ae3-0959f57ba128 unregistered

Now we can wipe it::

  host:~# wipefs -a /dev/nvme0n1p4

G) dm-crypt and bcache

First set up bcache unencrypted and then install dm-crypt on top of
/dev/bcache<N>. This will work faster than if you dm-crypt both the backing and
caching devices and then install bcache on top.
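
A minimal sketch of that layering with cryptsetup (the mapping name and
filesystem are illustrative, not from the bcache docs)::

  cryptsetup luksFormat /dev/bcache0        # encrypt on top of the cached device
  cryptsetup open /dev/bcache0 bcache_crypt
  mkfs.ext4 /dev/mapper/bcache_crypt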

H) Stop/free a registered bcache to wipe and/or recreate it

Suppose that you need to free up all bcache references so that you can run
fdisk and re-register a changed partition table, which won't work if there are
any active backing or caching devices left on it:

If the backing device is gone, you may have to unregister the dm block device
that references this bcache (e.g. with dmsetup remove) to free it up; the
kernel then logs that the set is gone::

  bcache: cache_set_free() Cache set 5bc072a8-ab17-446d-9744-e247949913c1 unregistered

In other cases, you can also look in /sys/fs/bcache/::

  host:/sys/fs/bcache# ls -l */{cache?,bdev?}
  lrwxrwxrwx 1 root root 0 Mar 5 09:39 0226553a-37cf-41d5-b3ce-8b1e944543a8/bdev1 -> ../../../devices/vir…
  lrwxrwxrwx 1 root root 0 Mar 5 09:39 0226553a-37cf-41d5-b3ce-8b1e944543a8/cache0 -> ../../../devices/vi…
  lrwxrwxrwx 1 root root 0 Mar 5 09:39 5bc072a8-ab17-446d-9744-e247949913c1/cache0 -> ../../../devi…

The device names will show which UUID is relevant; cd into that directory and
stop the cache::

  host:/sys/fs/bcache/5bc072a8-ab17-446d-9744-e247949913c1# echo 1 > stop

This will free up the bcache references and let you reuse the partition for
other purposes.

Troubleshooting performance
---------------------------

- Backing device alignment

  The default metadata size in bcache is 8k. If your backing device is RAID
  based, then be careful to align this by a multiple of your stripe width using
  `bcache make --data-offset`. If you intend to expand your disk array in the
  future, then multiply a series of primes by your raid stripe size to get the
  disk multiples that you would like.

  For example: if you have a 64k stripe size, then the following offset would
  provide alignment for many common RAID5 data spindle counts::

    64k * 2*2*2*3*3*5*7 = 161280k

  That space is wasted, but for only 157.5MB you can grow your RAID 5 volume to
  the following data-spindle counts without re-aligning::

    3,4,5,6,7,8,9,10,12,14,15,18,20,21 ...
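
  A sketch of formatting with that offset, assuming --data-offset takes a count
  of 512-byte sectors (so 161280k = 322560 sectors; /dev/md0 is illustrative)::

    bcache make -B /dev/md0 --data-offset 322560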

- Bad write performance

  If write performance is not what you expected, you probably wanted to be
  running in writeback mode, which isn't the default (not due to a lack of
  maturity, but simply because in writeback mode you'll lose data if something
  happens to your SSD).
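
  Writeback can be enabled at runtime through the backing device's cache_mode
  attribute::

    echo writeback > /sys/block/bcache0/bcache/cache_mode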

- Bad performance, or traffic not going to the SSD that you'd expect

  By default, bcache doesn't cache everything. It tries to skip sequential IO -
  because you really want to be caching the random IO, and if you copy a 10
  gigabyte file you probably don't want that pushing 10 gigabytes of randomly
  accessed data out of your cache.

  But if you want to benchmark reads from cache, and you start out with fio
  writing an 8 gigabyte test file, then you'll want to disable that, as shown
  below.
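
  Disable the sequential cutoff, and restore the default (4 mb) when you're
  done benchmarking::

    echo 0 > /sys/block/bcache0/bcache/sequential_cutoff
    echo 4M > /sys/block/bcache0/bcache/sequential_cutoff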

- Traffic's still going to the spindle/still getting cache misses

  In the real world, SSDs don't always keep up with disks - particularly with
  slower SSDs, many disks being cached by one SSD, or mostly sequential IO. You
  don't want to be bottlenecked by the SSD and have it slow everything down, so
  bcache tracks latency to the cache device and gradually throttles traffic if
  the latency exceeds a threshold.
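
  You can disable this congestion avoidance if you need to by setting the
  thresholds to 0 (the defaults are 2000 us for reads and 20000 us for
  writes)::

    echo 0 > /sys/fs/bcache/<cset-uuid>/congested_read_threshold_us
    echo 0 > /sys/fs/bcache/<cset-uuid>/congested_write_threshold_us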

- Still getting cache misses, of the same data

  This can happen when you're benchmarking and trying to warm the cache by
  reading a bunch of data and there's no other traffic - that can be a problem.

Sysfs - backing device
----------------------

Available at /sys/block/<bdev>/bcache, /sys/block/bcache*/bcache and
(if attached) /sys/fs/bcache/<cset-uuid>/bdev*

readahead
  Size of readahead that should be performed. Defaults to 0. If set to e.g.
  1M, it will round cache miss reads up to that size, but without overlapping
  existing cache entries.

sequential_merge
  If non zero, bcache keeps a list of the last 128 requests submitted and
  compares new requests against it to detect sequential continuations of
  previous requests. This is necessary if the sequential cutoff value is
  greater than the maximum acceptable sequential size for any single request.

writeback_rate
  Rate in sectors per second - if writeback_percent is nonzero, background
  writeback is throttled to this rate. Continuously adjusted by bcache, but may
  also be set by the user.

Sysfs - backing device stats
----------------------------

Sysfs - cache set
-----------------

Available at /sys/fs/bcache/<cset-uuid>

block_size
  Block size of the cache devices.

bucket_size
  Size of buckets.

flash_vol_create
  Echoing a size to this file (in human readable units, k/M/G) creates a thinly
  provisioned volume backed by the cache set.
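
  For example (the size is arbitrary)::

    echo 100G > /sys/fs/bcache/<cset-uuid>/flash_vol_create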

stop
  Write to this file to shut down the cache set - waits until all attached
  backing devices have been shut down.

Sysfs - cache set internal
--------------------------

cache_read_races
  Counts instances where, while data was being read from the cache, the bucket
  was reused and invalidated - i.e. where the pointer was stale after the read
  completed. When this occurs the data is reread from the backing device.

Sysfs - cache device
--------------------

block_size
  Minimum granularity of writes - should match hardware sector size.

bucket_size
  Size of buckets.

freelist_percent
  Size of the freelist as a percentage of nbuckets. Can be written to in order
  to artificially reduce the size of the cache at runtime. Mostly for testing
  purposes (i.e. testing how different size caches affect your hit rate), but
  since buckets are discarded when they move on to the freelist, it will also
  make the SSD's garbage collection easier by effectively giving it more
  reserved space.

priority_stats
  Statistics about how recently data in the cache has been accessed. This can
  reveal your working set size. Unused is the percentage of the cache that
  doesn't contain any data.
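
  To inspect it, read the attribute from the cache device's directory inside
  its cache set (path layout as described above)::

    cat /sys/fs/bcache/<cset-uuid>/cache0/priority_stats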