#
# Block device driver configuration
#

menuconfig MD
	bool "Multiple devices driver support (RAID and LVM)"
	depends on BLOCK
	select SRCU
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

if MD

config BLK_DEV_MD
	tristate "RAID support"
	---help---
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.

	  More information about Software RAID on Linux is contained in the
	  Software RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.

	  If unsure, say N.

config MD_AUTODETECT
	bool "Autodetect RAID arrays during kernel boot"
	depends on BLK_DEV_MD=y
	default y
	---help---
	  If you say Y here, then the kernel will try to autodetect RAID
	  arrays as part of its boot process.

	  If you don't use RAID and say Y, this autodetection can cause
	  a several-second delay in the boot time due to various
	  synchronisation steps that are part of the autodetection.

	  If unsure, say Y.

config MD_LINEAR
	tristate "Linear (append) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.

	  To compile this as a module, choose M here: the module
	  will be called linear.

	  If unsure, say Y.

config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.

	  If unsure, say Y.

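# Illustrative usage note (not part of the kernel configuration itself):
# arrays built from these personalities are normally created and assembled
# from userspace with mdadm. Device names below are placeholders; a striped
# (RAID-0) array across two partitions might be created roughly like this:
#
#   mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
#   mkfs.ext4 /dev/md0
#   mdadm --detail /dev/md0
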
config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other. In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error-free MD (multiple device) to the higher levels of the
	  kernel. In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of (N - 1)
	  drives.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-1 set, say Y. To compile this code
	  as a module, choose M here: the module will be called raid1.

	  If unsure, say Y.

config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode"
	depends on BLK_DEV_MD
	---help---
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.
	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (or at least, only as much as the smallest device
	  will be used).
	  RAID-10 provides a variety of layouts that provide different levels
	  of redundancy and performance.

	  RAID-10 requires mdadm-1.7.0 or later, available at:

	  ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/

	  If unsure, say Y.

config MD_RAID456
	tristate "RAID-4/RAID-5/RAID-6 mode"
	depends on BLK_DEV_MD
	select RAID6_PQ
	select LIBCRC32C
	select ASYNC_MEMCPY
	select ASYNC_XOR
	select ASYNC_PQ
	select ASYNC_RAID6_RECOV
	---help---
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y. To
	  compile this code as a module, choose M here: the module
	  will be called raid456.

	  If unsure, say Y.

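# Worked example of the capacity formulas above (illustrative only): with
# N = 4 drives of C = 2000 MB each, RAID-5 yields 2000 * (4 - 1) = 6000 MB
# and survives one drive failure, while RAID-6 yields 2000 * (4 - 2) = 4000 MB
# and survives any two drive failures. Such a set would typically be created
# from userspace with mdadm, e.g. (placeholder device names):
#
#   mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1
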
config MD_MULTIPATH
	tristate "Multipath I/O support"
	depends on BLK_DEV_MD
	help
	  MD_MULTIPATH provides a simple multipath personality for use with
	  the MD framework. It is not under active development. New
	  projects should consider using DM_MULTIPATH, which has more
	  features and more testing.

	  If unsure, say N.

config MD_FAULTY
	tristate "Faulty test module for MD"
	depends on BLK_DEV_MD
	help
	  The "faulty" module allows for a block device that occasionally
	  returns read or write errors. It is useful for testing.

	  If unsure, say N.

config MD_CLUSTER
	tristate "Cluster Support for MD (EXPERIMENTAL)"
	depends on BLK_DEV_MD
	depends on DLM
	default n
	---help---
	  Clustering support for MD devices. This enables locking and
	  synchronization across multiple systems on the cluster, so all
	  nodes in the cluster can access the MD devices simultaneously.

	  This brings the redundancy (and uptime) of RAID levels across the
	  nodes of the cluster.

	  If unsure, say N.

source "drivers/md/bcache/Kconfig"

config BLK_DEV_DM_BUILTIN
	bool

config BLK_DEV_DM
	tristate "Device mapper support"
	select BLK_DEV_DM_BUILTIN
	---help---
	  Device-mapper is a low-level volume manager. It works by allowing
	  people to specify mappings for ranges of logical sectors. Various
	  mapping types are available; in addition, people may write their own
	  modules containing custom mappings if they wish.

	  Higher-level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.

	  If unsure, say N.

config DM_MQ_DEFAULT
	bool "request-based DM: use blk-mq I/O path by default"
	depends on BLK_DEV_DM
	---help---
	  This option enables the blk-mq based I/O path for request-based
	  DM devices by default. With the option, the dm_mod.use_blk_mq
	  module/boot option defaults to Y; without it, to N. It can
	  still be overridden either way.

	  If unsure, say N.

config DM_DEBUG
	bool "Device mapper debugging support"
	depends on BLK_DEV_DM
	---help---
	  Enable this for messages that may help debug device-mapper problems.

	  If unsure, say N.

config DM_BUFIO
	tristate
	depends on BLK_DEV_DM
	---help---
	  This interface allows you to do buffered I/O on a device and acts
	  as a cache, holding recently-read blocks in memory and performing
	  delayed writes.

config DM_BIO_PRISON
	tristate
	depends on BLK_DEV_DM
	---help---
	  Some bio locking schemes used by other device-mapper targets,
	  including thin provisioning.

source "drivers/md/persistent-data/Kconfig"

config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_CBC
	---help---
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.

	  For further information on dm-crypt and userspace tools see:
	  <https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt>

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.

	  If unsure, say N.

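# Illustrative usage note (not part of the kernel configuration): dm-crypt
# devices are usually managed with the cryptsetup tool referenced above.
# With a reasonably recent cryptsetup and a placeholder partition, an
# encrypted volume might be set up roughly like this:
#
#   cryptsetup luksFormat /dev/sdb2
#   cryptsetup open /dev/sdb2 secure_data
#   mkfs.ext4 /dev/mapper/secure_data
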
config DM_SNAPSHOT
	tristate "Snapshot target"
	depends on BLK_DEV_DM
	select DM_BUFIO
	---help---
	  Allow volume managers to take writable snapshots of a device.

config DM_THIN_PROVISIONING
	tristate "Thin provisioning target"
	depends on BLK_DEV_DM
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  Provides thin provisioning and snapshots that share a data store.

config DM_CACHE
	tristate "Cache target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  dm-cache attempts to improve performance of a block device by
	  moving frequently used data to a smaller, higher performance
	  device. Different 'policy' plugins can be used to change the
	  algorithms used to select which blocks are promoted, demoted,
	  cleaned etc. It supports writeback and writethrough modes.

config DM_CACHE_MQ
	tristate "MQ Cache Policy (EXPERIMENTAL)"
	depends on DM_CACHE
	default y
	---help---
	  A cache policy that uses a multiqueue ordered by recent hit
	  count to select which blocks should be promoted and demoted.
	  This is meant to be a general purpose policy. It prioritises
	  reads over writes.

config DM_CACHE_SMQ
	tristate "Stochastic MQ Cache Policy (EXPERIMENTAL)"
	depends on DM_CACHE
	default y
	---help---
	  A cache policy that uses a multiqueue ordered by recent hits
	  to select which blocks should be promoted and demoted.
	  This is meant to be a general purpose policy. It prioritises
	  reads over writes. This SMQ policy (vs MQ) offers the promise
	  of less memory utilization, improved performance and increased
	  adaptability in the face of changing workloads.

config DM_CACHE_CLEANER
	tristate "Cleaner Cache Policy (EXPERIMENTAL)"
	depends on DM_CACHE
	default y
	---help---
	  A simple cache policy that writes back all data to the
	  origin. Used when decommissioning a dm-cache.

config DM_ERA
	tristate "Era target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	---help---
	  dm-era tracks which parts of a block device are written to
	  over time. Useful for maintaining cache coherency when using
	  vendor snapshots.

config DM_MIRROR
	tristate "Mirror target"
	depends on BLK_DEV_DM
	---help---
	  Allow volume managers to mirror logical volumes, also
	  needed for live data migration tools such as 'pvmove'.

config DM_LOG_USERSPACE
	tristate "Mirror userspace logging"
	depends on DM_MIRROR && NET
	select CONNECTOR
	---help---
	  The userspace logging module provides a mechanism for
	  relaying the dm-dirty-log API to userspace. Log designs
	  which are more suited to userspace implementation (e.g.
	  shared storage logs) or experimental logs can be implemented
	  by leveraging this framework.

config DM_RAID
	tristate "RAID 1/4/5/6/10 target"
	depends on BLK_DEV_DM
	select MD_RAID0
	select MD_RAID1
	select MD_RAID10
	select MD_RAID456
	select BLK_DEV_MD
	---help---
	  A dm target that supports RAID1, RAID10, RAID4, RAID5 and RAID6
	  mappings.

	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

config DM_ZERO
	tristate "Zero target"
	depends on BLK_DEV_DM
	---help---
	  A target that discards writes, and returns all zeroes for
	  reads. Useful in some recovery situations.

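# Illustrative usage note (not part of the kernel configuration): simple
# device-mapper targets such as dm-zero can be driven directly with dmsetup.
# The table format is "<start_sector> <num_sectors> zero"; the example below
# creates a 1 GiB (2097152-sector) zero device named "zero0":
#
#   dmsetup create zero0 --table "0 2097152 zero"
#   dmsetup remove zero0
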
config DM_MULTIPATH
	tristate "Multipath target"
	depends on BLK_DEV_DM
	# nasty syntax but means make DM_MULTIPATH independent
	# of SCSI_DH if the latter isn't defined but if
	# it is, DM_MULTIPATH must depend on it. We get a build
	# error if SCSI_DH=m and DM_MULTIPATH=y
	depends on !SCSI_DH || SCSI
	---help---
	  Allow volume managers to support multipath hardware.

config DM_MULTIPATH_QL
	tristate "I/O Path Selector based on the number of in-flight I/Os"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path with the least number of in-flight I/Os.

	  If unsure, say N.

config DM_MULTIPATH_ST
	tristate "I/O Path Selector based on the service time"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path expected to complete the incoming I/O in the shortest
	  time.

	  If unsure, say N.

config DM_DELAY
	tristate "I/O delaying target"
	depends on BLK_DEV_DM
	---help---
	  A target that delays reads and/or writes and can send
	  them to different devices. Useful for testing.

	  If unsure, say N.

config DM_UEVENT
	bool "DM uevents"
	depends on BLK_DEV_DM
	---help---
	  Generate udev events for DM events.

config DM_FLAKEY
	tristate "Flakey target"
	depends on BLK_DEV_DM
	---help---
	  A target that intermittently fails I/O for debugging purposes.

config DM_VERITY
	tristate "Verity target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_HASH
	select DM_BUFIO
	---help---
	  This device-mapper target creates a read-only device that
	  transparently validates the data on one underlying device against
	  a pre-generated tree of cryptographic checksums stored on a second
	  device.

	  You'll need to activate the digests you're going to use in the
	  cryptoapi configuration.

	  To compile this code as a module, choose M here: the module will
	  be called dm-verity.

	  If unsure, say N.

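# Illustrative usage note (not part of the kernel configuration): dm-verity
# devices are usually managed with the veritysetup tool from the cryptsetup
# project. With placeholder data and hash partitions, the flow is roughly:
#
#   veritysetup format /dev/sdb1 /dev/sdb2      # prints the root hash
#   veritysetup open /dev/sdb1 vroot /dev/sdb2 <root hash from format>
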
config DM_VERITY_FEC
	bool "Verity forward error correction support"
	depends on DM_VERITY
	select REED_SOLOMON
	select REED_SOLOMON_DEC8
	---help---
	  Add forward error correction support to dm-verity. This option
	  makes it possible to use pre-generated error correction data to
	  recover from corrupted blocks.

	  If unsure, say N.

config DM_SWITCH
	tristate "Switch target support (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	---help---
	  This device-mapper target creates a device that supports an arbitrary
	  mapping of fixed-size regions of I/O across a fixed set of paths.
	  The path used for any specific region can be switched dynamically
	  by sending the target a message.

	  To compile this code as a module, choose M here: the module will
	  be called dm-switch.

	  If unsure, say N.

config DM_LOG_WRITES
	tristate "Log writes target support"
	depends on BLK_DEV_DM
	---help---
	  This device-mapper target takes two devices, one to use normally
	  and one to log all write operations done to the first device.
	  This is for use by file system developers wishing to verify that
	  their fs is writing a consistent file system at all times by allowing
	  them to replay the log in a variety of ways and to check the
	  contents.

	  To compile this code as a module, choose M here: the module will
	  be called dm-log-writes.

	  If unsure, say N.

config DM_VERITY_AVB
	tristate "Support AVB specific verity error behavior"
	depends on DM_VERITY
	---help---
	  Enables Android Verified Boot platform-specific error
	  behavior. In particular, it will modify the vbmeta partition
	  specified on the kernel command line when a non-transient error
	  occurs (followed by a panic).

	  If unsure, say N.

config DM_ANDROID_VERITY
	bool "Android verity target support"
	depends on DM_VERITY=y
	depends on X509_CERTIFICATE_PARSER
	depends on SYSTEM_TRUSTED_KEYRING
	depends on PUBLIC_KEY_ALGO_RSA
	depends on KEYS
	depends on ASYMMETRIC_KEY_TYPE
	depends on ASYMMETRIC_PUBLIC_KEY_SUBTYPE
	depends on MD_LINEAR=y
	---help---
	  This device-mapper target is essentially a verity target. The
	  target is set up by reading the metadata contents piggybacked
	  onto the actual data blocks in the block device. The signature
	  of the metadata contents is verified against the key included
	  in the system keyring. Upon success, the underlying verity
	  target is set up.

config DM_ANDROID_VERITY_AT_MOST_ONCE_DEFAULT_ENABLED
	bool "Verity will validate blocks at most once"
	depends on DM_VERITY
	---help---
	  Enables the at_most_once option for dm-verity by default.

	  Verify data blocks only the first time they are read from the
	  data device, rather than every time. This reduces the overhead
	  of dm-verity so that it can be used on systems that are memory
	  and/or CPU constrained. However, it provides a reduced level
	  of security because only offline tampering of the data device's
	  content will be detected, not online tampering.

	  Hash blocks are still verified each time they are read from the
	  hash device, since verification of hash blocks is less performance
	  critical than data blocks, and a hash block will not be verified
	  again once all the data blocks it covers have been verified.

	  If unsure, say N.

endif # MD