| /Documentation/devicetree/bindings/display/ |
| D | xylon,logicvc-display.yaml |
      1: # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
      4: ---
      5: $id: http://devicetree.org/schemas/display/xylon,logicvc-display.yaml#
      6: $schema: http://devicetree.org/meta-schemas/core.yaml#
      11: - Paul Kocialkowski <paul.kocialkowski@bootlin.com>
      16: with Xilinx Zynq-7000 SoCs and Xilinx FPGAs.
      20: synthesis time. As a result, many of the device-tree bindings are meant to
      24: Layers are declared in the "layers" sub-node and have dedicated configuration.
      25: In version 3 of the controller, each layer has fixed memory offset and address
      32: - xylon,logicvc-3.02.a-display
      [all …]
|
| /Documentation/scsi/ |
| D | lpfc.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      15: The proposed modifications to the transport layer for FC remote ports
      20: The driver now requires a 2.6.12 (if pre-release, 2.6.12-rc1) or later
      26: The following information is provided for additional background on the
      39: errored by the driver, the mid-layer would exhaust its retries, and the
      41: re-enable the device.
      45: queuing is unnecessary as the block layer already performs the
      56: The proposed patch was posted to the linux-scsi mailing list. The patch
      57: is contained in the 2.6.10-rc2 (and later) patch kits. As such, this
      71: At this time, the driver requires the 2.6.12 (if pre-release, 2.6.12-rc1)
|
| /Documentation/admin-guide/ |
| D | syscall-user-dispatch.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      7: Background chapter
      8: ----------
      11: calls of only a part of their process - the part that has the
      12: incompatible code - while being able to execute native syscalls without
      21: multiple-personality application can then flip the switch without
      22: invoking the kernel, when crossing the compatibility layer API
      27: The goal of this design is to provide very quick compatibility layer
      29: personality every time the compatibility layer executes. Instead, a
      40: non-native applications, it must function on syscalls whose invocation
      [all …]
|
| D | devices.txt |
      1: 0 Unnamed devices (e.g. non-device mounts)
      7: 2 = /dev/kmem OBSOLETE - replaced by /proc/kcore
      11: 6 = /dev/core OBSOLETE - replaced by /proc/kcore
      18: 12 = /dev/oldmem OBSOLETE - replaced by /proc/vmcore
      31: 2 char Pseudo-TTY masters
      37: Pseudo-tty's are named as follows:
      40: the 1st through 16th series of 16 pseudo-ttys each, and
      44: These are the old-style (BSD) PTY devices; Unix98
      106: 3 char Pseudo-TTY slaves
      112: These are the old-style (BSD) PTY devices; Unix98
      [all …]
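      The major/minor pairs in this registry translate directly into mknod calls.
      A sketch for the old-style BSD PTY entries excerpted above (illustrative
      only; modern systems create these nodes via devtmpfs and use the Unix98
      /dev/ptmx interface instead):

        mknod /dev/ptyp0 c 2 0    # first pseudo-TTY master  (char major 2, minor 0)
        mknod /dev/ttyp0 c 3 0    # matching pseudo-TTY slave (char major 3, minor 0)
        chmod 666 /dev/ttyp0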
|
| D | bcache.rst |
      2: A block layer cache (bcache)
      11: This is the git repository of bcache-tools:
      12: https://git.kernel.org/pub/scm/linux/kernel/git/colyli/bcache-tools.git/
      17: It's designed around the performance characteristics of SSDs - it only allocates
      25: great lengths to protect your data - it reliably handles unclean shutdown. (It
      29: Writeback caching can use most of the cache for buffering writes - writing
      36: average is above the cutoff it will skip all IO from that task - instead of
      47: You'll need bcache util from the bcache-tools repository. Both the cache device
      50: bcache make -B /dev/sdb
      51: bcache make -C /dev/sdc
      [all …]
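      A minimal setup sketch built around the two excerpted commands; /dev/sdb,
      /dev/sdc and the cache-set UUID are placeholders for your backing disk, your
      cache SSD and the UUID reported when the cache device is formatted:

        bcache make -B /dev/sdb    # format the backing device
        bcache make -C /dev/sdc    # format the cache (SSD) device
        # If udev does not register the devices automatically:
        echo /dev/sdb > /sys/fs/bcache/register
        echo /dev/sdc > /sys/fs/bcache/register
        # Attach the backing device to the cache set by its UUID:
        echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach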
|
| D | ext4.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      9: (64 bit) in keeping with increasing disk capacities and state-of-the-art
      12: Mailing list: linux-ext4@vger.kernel.org
      23: - The latest version of e2fsprogs can be found at:
      35: - Create a new filesystem using the ext4 filesystem type:
      37: # mke2fs -t ext4 /dev/hda1
      41: # tune2fs -O extents /dev/hda1
      46: # tune2fs -I 256 /dev/hda1
      48: - Mounting:
      50: # mount -t ext4 /dev/hda1 /wherever
      [all …]
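      The excerpted commands gathered into one sequence (a sketch only; /dev/hda1
      and the mount point stand in for your own partition and directory):

        mke2fs -t ext4 /dev/hda1          # create a fresh ext4 filesystem
        tune2fs -O extents /dev/hda1      # or: enable extents on an existing ext3 filesystem
        tune2fs -I 256 /dev/hda1          # enlarge inodes to 256 bytes if needed
        mount -t ext4 /dev/hda1 /wherever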
|
| /Documentation/block/ |
| D | blk-mq.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      4: Multi-Queue Block IO Queueing Mechanism (blk-mq)
      7: The Multi-Queue Block IO Queueing Mechanism is an API to enable fast storage
      15: Background section in Introduction
      16: ----------
      22: any layer on the storage stack. One example of such optimization technique
      26: However, with the development of Solid State Drives and Non-Volatile Memories
      30: in those devices' design, the multi-queue mechanism was introduced.
      36: to different CPUs) wanted to perform block IO. Instead of this, the blk-mq API
      42: ---------
      [all …]
|
| D | inline-encryption.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      9: Background chapter
      19: keys directly in low-level I/O requests. However, most inline encryption
      22: low-level I/O request then just contains a keyslot index and data unit number.
      28: managed by the block layer, not the kernel crypto API.
      30: Inline encryption hardware is also very different from "self-encrypting drives",
      31: such as those based on the TCG Opal or ATA Security standards. Self-encrypting
      32: drives don't provide fine-grained control of encryption and provide no way to
      34: provides fine-grained control of encryption, including the choice of key and
      43: layered devices like device-mapper and loopback (i.e. we want to be able to use
      [all …]
|
| /Documentation/driver-api/nfc/ |
| D | nfc-hci.rst |
      5: - Author: Eric Lapuyade, Samuel Ortiz
      6: - Contact: eric.lapuyade@intel.com, samuel.ortiz@intel.com
      9: -------
      11: The HCI layer implements much of the ETSI TS 102 622 V10.2.0 specification. It
      12: enables easy writing of HCI-based NFC drivers. The HCI layer runs as an NFC Core
      17: ---
      21: they are translated in a sequence of HCI commands sent to the HCI layer in the
      30: - one for executing commands : nfc_hci_msg_tx_work(). Only one command
      32: - one for dispatching received events and commands : nfc_hci_msg_rx_work().
      35: --------------------------
      [all …]
|
| /Documentation/core-api/ |
| D | swiotlb.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      7: swiotlb is a memory buffer allocator used by the Linux kernel DMA layer. It is
      10: the DMA layer calls swiotlb to allocate a temporary memory buffer that conforms
      17: the DMA layer of the DMA attributes of the devices they are managing, and use
      19: These APIs use the device DMA attributes and kernel-wide settings to determine
      20: if bounce buffering is necessary. If so, the DMA layer manages the allocation,
      30: ---------------
      33: only provide 32-bit DMA addresses. By allocating bounce buffer memory below
      40: directed to guest memory that is unencrypted. CoCo VMs set a kernel-wide option
      43: the Linux kernel DMA layer does "sync" operations to cause the CPU to copy the
      [all …]
|
| /Documentation/networking/ |
| D | multi-pf-netdev.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      5: Multi-PF Netdev
      11: - `Background`_
      12: - `Overview`_
      13: - `mlx5 implementation`_
      14: - `Channels distribution`_
      15: - `Observability`_
      16: - `Steering`_
      17: - `Mutually exclusive features`_
      19: Background chapter
      [all …]
|
| D | rds.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      10: This readme tries to provide some background on the hows and whys of RDS,
      14: http://oss.oracle.com/pipermail/rds-devel/2007-November/000228.html
      22: cluster - so in a cluster with N processes you need N sockets, in contrast
      23: to N*N if you use a connection-oriented socket transport like TCP.
      25: RDS is not Infiniband-specific; it was designed to support different
      29: The high-level semantics of RDS from the application's point of view are
      39: transport has to be IP-based. In fact, RDS over IB uses a
      59: a active-active HA scenario), but only as long as the address
      72: to create RDS sockets. SOL_RDS is the socket-level to be used
      [all …]
|
| /Documentation/filesystems/nfs/ |
| D | rpc-server-gss.rst |
      13: - RFC2203 v1: https://tools.ietf.org/rfc/rfc2203.txt
      14: - RFC5403 v2: https://tools.ietf.org/rfc/rfc5403.txt
      18: - RFC7861 v3: https://tools.ietf.org/rfc/rfc7861.txt
      20: Background chapter
      35: - initial context establishment
      36: - integrity/privacy protection (signing and encrypting of individual
      39: The former is more complex and policy-independent, but less
      40: performance-sensitive. The latter is simpler and needs to be very fast.
      42: Therefore, we perform per-packet integrity and privacy protection in the
      51: nfs-utils package.
      [all …]
|
| /Documentation/driver-api/media/drivers/ |
| D | pvrusb2.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      8: Background chapter
      9: ----------
      13: Its history started with the reverse-engineering effort by Björn
      29: 1. Low level wire-protocol implementation with the device.
      37: 4. A "context" layer which manages instancing of driver, setup,
      38: tear-down, arbitration, and interaction with high level
      45: The most important shearing layer is between the top 2 layers. A
      61: --------
      70: --------------------------------------
      [all …]
|
| /Documentation/networking/devlink/ |
| D | devlink-health.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      7: Background chapter
      49: auto-dump is set and there is no other dump which is already stored)
      52: - Auto-recovery configuration
      53: - Grace period vs. time passed since last recover
      63: json-like format. The API allows the driver to add nested attributes such as
      69: the data using SKBs to the netlink layer, it fragments the data between
      85: .. list-table:: List of devlink health interfaces
      88: * - Name
      89: - Description
      [all …]
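      A hedged sketch of exercising the health interfaces listed in that table
      with the iproute2 devlink tool; the device name (pci/0000:00:09.0) and the
      reporter name (fw) are placeholders:

        devlink health show                                      # list reporters and their state
        devlink health diagnose pci/0000:00:09.0 reporter fw     # retrieve diagnostics data
        devlink health dump show pci/0000:00:09.0 reporter fw    # read a stored dump
        devlink health recover pci/0000:00:09.0 reporter fw      # trigger a recovery
        devlink health set pci/0000:00:09.0 reporter fw grace_period 3500 auto_recover true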
|
| D | devlink-trap.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      7: Background chapter
      14: For example, a device acting as a multicast-aware bridge must be able to send
      31: The ``devlink-trap`` mechanism allows capable device drivers to register their
      35: Upon receiving trapped packets, ``devlink`` will perform a per-trap packets and
      38: port). This is especially useful for drop traps (see :ref:`Trap-Types`)
      42: The following diagram provides a general overview of ``devlink-trap``::
      49: +---------------------------------------------------+
      52: +-------+--------+
      56: +-------^--------+
      [all …]
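      A hedged sketch of inspecting and tuning traps from userspace with the
      iproute2 devlink tool; the device name, the blackhole_route trap and the
      l3_drops group are placeholders chosen for illustration:

        devlink trap show                                                   # list traps registered by the driver
        devlink -s trap show pci/0000:01:00.0 trap blackhole_route          # include packets/bytes statistics
        devlink trap set pci/0000:01:00.0 trap blackhole_route action trap  # send matching packets to the CPU
        devlink trap group set pci/0000:01:00.0 group l3_drops action drop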
|
| /Documentation/admin-guide/device-mapper/ |
| D | switch.rst |
      2: dm-switch
      5: The device-mapper switch target creates a device that supports an
      6: arbitrary mapping of fixed-size regions of I/O across a fixed set of
      11: number of fixed-sized address regions but there is no simple pattern
      13: dm-stripe.
      15: Background subtitle
      16: ----------
      42: A device-mapper table already lets you map different regions of a
      48: Using this device-mapper switch target we can now build a two-layer
      51: Upper Tier - Determine which array member the I/O should be sent to.
      [all …]
|
| /Documentation/userspace-api/media/dvb/ |
| D | fe_property_parameters.rst |
      1: .. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
      15: .. _DTV-UNDEFINED:
      24: .. _DTV-TUNE:
      33: .. _DTV-CLEAR:
      42: .. _DTV-FREQUENCY:
      57: of the transponder/channel. The exception is for ISDB-T, where
      60: #. For ISDB-T, the channels are usually transmitted with an offset of
      65: #. In ISDB-Tsb, the channel consists of only one or three segments the
      69: .. _DTV-MODULATION:
      88: ATSC (version 1) 8-VSB and 16-VSB.
      [all …]
|
| /Documentation/admin-guide/cgroup-v1/ |
| D | blkio-controller.rst |
      11: and based on user options switch IO policies in the background.
      15: generic block layer and can be used on leaf nodes as well as higher
      22: -----------------------------
      27: Enable throttling in block layer::
      33: mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
      92: Enable block device throttling support in block layer.
      98: --------------------------------
      106: see Documentation/block/bfq-iosched.rst.
      110: weight. For more details, see Documentation/block/bfq-iosched.rst.
      152: are further divided by the type of operation - read or write, sync
      [all …]
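      The excerpted mount command followed by a hedged example of the throttling
      knobs this controller exposes; the 8:16 major:minor pair and the 1 MB/s
      limit are placeholders:

        mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
        echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device   # cap reads on device 8:16 at 1 MB/s
        cat /sys/fs/cgroup/blkio/blkio.throttle.io_serviced                         # per-device IO counts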
|
| /Documentation/target/ |
| D | tcmu-design.rst |
      9: a) Background
      19: 2) Writing a user pass-through handler
      29: TCM is another name for LIO, an in-kernel iSCSI target (server).
      38: built-in modules are implemented entirely as kernel code.
      40: Background section in Design
      41: ----------
      52: use case that other non-kernel target solutions, such as tgt, are able
      55: in these non-traditional networked storage systems, while still only
      65: kernel, another approach is to create a userspace pass-through
      70: --------
      [all …]
|
| /Documentation/ABI/testing/ |
| D | sysfs-fs-f2fs |
      28: gc_idle = 3 will select the age-threshold based approach.
      49: Description: Controls the in-place-update policy.
      75: Description: Controls the FS utilization condition for the in-place-update
      81: Description: Controls the dirty page count condition for the in-place-update
      257: Supported on-disk features:
      308: Description: Do background GC aggressively when set. Set to 0 by default.
      315: and will override age-threshold GC approach if ATGC is enabled
      320: age-threshold GC approach if ATGC is enabled at the same time.
      343: - Query: cat /sys/fs/f2fs/<disk>/extension_list
      344: - Add: echo '[h/c]extension' > /sys/fs/f2fs/<disk>/extension_list
      [all …]
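      A sketch of poking the excerpted attributes, keeping the document's own
      <disk> and [h/c]extension placeholders (substitute the block device name and
      a hot/cold-prefixed file extension on a real system):

        cat /sys/fs/f2fs/<disk>/gc_idle                              # read the idle-GC policy
        echo 3 > /sys/fs/f2fs/<disk>/gc_idle                         # 3 = age-threshold based approach
        cat /sys/fs/f2fs/<disk>/extension_list                       # query the hot/cold extension list
        echo '[h/c]extension' > /sys/fs/f2fs/<disk>/extension_list   # add an extension to the list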
|
| /Documentation/filesystems/ |
| D | f2fs.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      4: WHAT IS Flash-Friendly File System (F2FS)?
      7: NAND flash memory-based storage devices, such as SSD, eMMC, and SD cards, have
      10: disks, a file system, an upper layer to the storage device, should adapt to the
      13: F2FS is a file system exploiting NAND flash memory-based storage devices, which
      14: is based on Log-structured File System (LFS). The design has been focused on
      18: Since a NAND flash memory-based storage device shows different characteristic
      20: F2FS and its tools support various parameters not only for configuring on-disk
      26: - git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs-tools.git
      30: - linux-f2fs-devel@lists.sourceforge.net
      [all …]
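      A minimal usage sketch, assuming f2fs-tools from the repository above is
      installed; the device path, label and mount point are placeholders:

        mkfs.f2fs -l myf2fs /dev/sdb1       # format the device with a volume label
        mount -t f2fs /dev/sdb1 /mnt/f2fs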
|
| /Documentation/arch/x86/ |
| D | sva.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      7: Background chapter
      19: application page-faults. For more information please refer to the PCIe
      34: Unlike Single Root I/O Virtualization (SR-IOV), Scalable IOV (SIOV) permits
      40: ID (PASID), which is a 20-bit number defined by the PCIe SIG.
      43: IOMMU to track I/O on a per-PASID granularity in addition to using the PCIe
      55: ENQCMD works with non-posted semantics and carries a status back if the
      67: A new thread-scoped MSR (IA32_PASID) provides the connection between
      69: accesses an SVA-capable device, this MSR is initialized with a newly
      70: allocated PASID. The driver for the device calls an IOMMU-specific API
      [all …]
|
| /Documentation/driver-api/pm/ |
| D | devices.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      10: :Copyright: |copy| 2010-2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
      18: management (PM) code is also driver-specific. Most drivers will do very
      22: This writeup gives an overview of how drivers interact with system-wide
      25: background for the domain-specific work you'd do with any specific driver.
      31: Drivers will use one or both of these models to put devices into low-power
      36: Drivers can enter low-power states as part of entering system-wide
      37: low-power states like "suspend" (also known as "suspend-to-RAM"), or
      39: "suspend-to-disk").
      42: by implementing various role-specific suspend and resume methods to
      [all …]
|
| /Documentation/admin-guide/pm/ |
| D | cpufreq.rst |
      1: .. SPDX-License-Identifier: GPL-2.0
      20: Operating Performance Points or P-states (in ACPI terminology). As a rule,
      24: time (or the more power is drawn) by the CPU in the given P-state. Therefore
      29: as possible and then there is no reason to use any P-states different from the
      30: highest one (i.e. the highest-performance frequency/voltage configuration
      38: put into different P-states.
      41: capacity, so as to decide which P-states to put the CPUs into. Of course, since
      64: information on the available P-states (or P-state ranges in some cases) and
      65: access platform-specific hardware interfaces to change CPU P-states as requested
      70: performance scaling algorithms for P-state selection can be represented in a
      [all …]
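      A hedged sketch of the sysfs policy interface this document goes on to
      describe; policy0 is a placeholder for whichever policy object covers the
      CPU of interest:

        cat /sys/devices/system/cpu/cpufreq/policy0/scaling_available_governors
        cat /sys/devices/system/cpu/cpufreq/policy0/scaling_cur_freq
        echo performance > /sys/devices/system/cpu/cpufreq/policy0/scaling_governor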
|