| /Documentation/scsi/ |
| D | st.rst | The SCSI Tape Driver. This file contains brief information about the SCSI tape driver. The driver is currently maintained by Kai Mäkisara (email … The driver is generic, i.e., it does not contain any code tailored to any specific tape drive. The tape parameters can be specified with one of the following three methods: 1. Each user can specify the tape parameters he/she wants to use … in a multiuser environment the next user finds the tape parameters in the state the previous user left them in. 2. The system manager (root) can define default values for some tape [all …]
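As a concrete illustration of the first method, a user can adjust tape parameters for the drive they are using through the MTIOCTOP ioctl. The following minimal user-space sketch sets the block size; the /dev/nst0 path and the 64 KiB value are assumptions made for the example::

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <sys/mtio.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/dev/nst0", O_RDONLY);   /* hypothetical no-rewind tape device */
      if (fd < 0) {
          perror("open");
          return 1;
      }

      /* per-session tape parameter: switch to 64 KiB fixed blocks */
      struct mtop op = { .mt_op = MTSETBLK, .mt_count = 65536 };
      if (ioctl(fd, MTIOCTOP, &op) < 0)
          perror("MTIOCTOP(MTSETBLK)");

      close(fd);
      return 0;
  }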
|
| /Documentation/crypto/ |
| D | userspace-if.rst | The concepts of the kernel crypto API visible to kernel space are fully applicable to the user space interface as well. Therefore, the kernel crypto API high level discussion for the in-kernel use cases applies … The major difference, however, is that user space can only act as a … The following covers the user space interface exported by the kernel … applications that require cryptographic services from the kernel. Some details of the in-kernel crypto API do not apply to user space, however. This includes the difference between synchronous and asynchronous invocations. The user space API call is fully … The kernel crypto API is accessible from user space. Currently, the [all …]
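For orientation, here is a minimal sketch of the AF_ALG socket interface this file describes: computing a SHA-256 digest from user space. Error handling is omitted for brevity::

  #include <linux/if_alg.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void)
  {
      struct sockaddr_alg sa = {
          .salg_family = AF_ALG,
          .salg_type   = "hash",      /* other types: "skcipher", "aead", "rng" */
          .salg_name   = "sha256",
      };
      unsigned char digest[32];
      const char msg[] = "hello";

      int tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
      bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
      int opfd = accept(tfmfd, NULL, 0);  /* one fd per operation instance */

      send(opfd, msg, strlen(msg), 0);    /* feed the data ...            */
      read(opfd, digest, sizeof(digest)); /* ... and read back the digest */

      for (size_t i = 0; i < sizeof(digest); i++)
          printf("%02x", digest[i]);
      printf("\n");

      close(opfd);
      close(tfmfd);
      return 0;
  }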
|
| /Documentation/input/ |
| D | multi-touch-protocol.rst | In order to utilize the full power of the new multi-touch and multi-user … objects in direct contact with the device surface, is needed. This document describes the multi-touch (MT) protocol which allows kernel … The protocol is divided into two types, depending on the capabilities of the hardware. For devices handling anonymous contacts (type A), the protocol describes how to send the raw data for all contacts to the receiver. For devices capable of tracking identifiable contacts (type B), the protocol … events. Only the ABS_MT events are recognized as part of a contact … applications, the MT protocol can be implemented on top of the ST protocol … input_mt_sync() at the end of each packet. This generates a SYN_MT_REPORT [all …]
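To make the event stream concrete, the user-space sketch below reads the contact stream of a type B device from an evdev node and prints the slot, tracking id and position axes. The /dev/input/event0 path is an assumption, and a real application would first query the device's MT axes with EVIOCGABS::

  #include <fcntl.h>
  #include <linux/input.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/dev/input/event0", O_RDONLY);  /* hypothetical touch device */
      if (fd < 0) {
          perror("open");
          return 1;
      }

      struct input_event ev;
      while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
          if (ev.type != EV_ABS)
              continue;
          switch (ev.code) {
          case ABS_MT_SLOT:          /* which contact the following events refer to */
              printf("slot %d\n", ev.value);
              break;
          case ABS_MT_TRACKING_ID:   /* -1 means the contact has been lifted */
              printf("  tracking id %d\n", ev.value);
              break;
          case ABS_MT_POSITION_X:
              printf("  x %d\n", ev.value);
              break;
          case ABS_MT_POSITION_Y:
              printf("  y %d\n", ev.value);
              break;
          }
      }
      close(fd);
      return 0;
  }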
|
| /Documentation/admin-guide/pm/ |
| D | cpuidle.rst | Modern processors are generally able to enter states in which the execution of … memory or executed. Those states are the *idle* states of the processor. Since part of the processor hardware is not used in idle states, entering them generally allows power drawn by the processor to be reduced and, in consequence, … the idle states of processors for this purpose. CPU idle time management operates on CPUs as seen by the *CPU scheduler* (that is, the part of the kernel responsible for the distribution of computational work in the system). In its view, CPUs are *logical* units. That is, they need … First, if the whole processor can only follow one sequence of instructions (one program) at a time, it is a CPU. In that case, if the hardware is asked to [all …]
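The idle states themselves are visible through the sysfs layout documented in this file; as a small, hedged sketch, the program below lists the names of the idle states the kernel exposes for CPU0 (how many states a CPU has depends on the hardware and the cpuidle driver)::

  #include <stdio.h>

  int main(void)
  {
      char path[128], name[64];

      for (int i = 0; ; i++) {
          snprintf(path, sizeof(path),
                   "/sys/devices/system/cpu/cpu0/cpuidle/state%d/name", i);
          FILE *f = fopen(path, "r");
          if (!f)
              break;                          /* no more states */
          if (fgets(name, sizeof(name), f))
              printf("state%d: %s", i, name); /* name already ends with a newline */
          fclose(f);
      }
      return 0;
  }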
|
| D | cpufreq.rst | The Concept of CPU Performance Scaling. The majority of modern processors are capable of operating in a number of … the higher the clock frequency and the higher the voltage, the more instructions can be retired by the CPU over a unit of time, but also the higher the clock frequency and the higher the voltage, the more energy is consumed over a unit of time (or the more power is drawn) by the CPU in the given P-state. Therefore there is a natural tradeoff between the CPU capacity (the number of instructions that can be executed over a unit of time) and the power drawn by the CPU. In some situations it is desirable or even necessary to run the program as fast as possible and then there is no reason to use any P-states different from the [all …]
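The policy limits and the currently selected frequency are exposed through the cpufreq interface in sysfs; as a small hedged sketch, the program below reads a few of the documented per-policy attributes for CPU0 (frequency values are in kHz)::

  #include <stdio.h>

  static void show(const char *attr)
  {
      char path[128], buf[64];

      snprintf(path, sizeof(path),
               "/sys/devices/system/cpu/cpu0/cpufreq/%s", attr);
      FILE *f = fopen(path, "r");
      if (f && fgets(buf, sizeof(buf), f))
          printf("%-20s %s", attr, buf);
      if (f)
          fclose(f);
  }

  int main(void)
  {
      show("scaling_governor");   /* which scaling algorithm is in control  */
      show("scaling_min_freq");   /* lower frequency bound, in kHz          */
      show("scaling_max_freq");   /* upper frequency bound, in kHz          */
      show("scaling_cur_freq");   /* most recently requested frequency, kHz */
      return 0;
  }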
|
| /Documentation/admin-guide/device-mapper/ |
| D | vdo-design.rst | The dm-vdo (virtual data optimizer) target provides inline deduplication, … Permabit was acquired by Red Hat. This document describes the design of dm-vdo. For usage, see vdo.rst in the same directory as this file. Because deduplication rates fall drastically as the block size increases, a … The design of dm-vdo is based on the idea that deduplication is a two-part problem. The first is to recognize duplicate data. The second is to avoid … maps from logical block addresses to the actual storage location of the … Due to the complexity of data optimization, the number of metadata … thread will access the portion of the data structure in that zone. … request object (the "data_vio") which will be added to a work queue when [all …]
|
| D | dm-integrity.rst | The dm-integrity target emulates a block device that has additional … writing the sector and the integrity tag must be atomic, i.e. in case of … To guarantee write atomicity, the dm-integrity target uses a journal: it writes sector data and integrity tags into the journal, commits the journal, and then copies the data and integrity tags to their respective locations. The dm-integrity target can be used with the dm-crypt target; in this situation the dm-crypt target creates the integrity data and passes it to the dm-integrity target via the bio_integrity_payload attached to the bio. In this mode, the dm-crypt and dm-integrity targets provide authenticated disk encryption: if the attacker modifies the encrypted device, an I/O [all …]
|
| /Documentation/core-api/ |
| D | debug-objects.rst | The object-lifetime debugging infrastructure. debugobjects is a generic infrastructure to track the lifetime of kernel objects and validate the operations on them. debugobjects is useful to check for the following error patterns: … debugobjects does not change the data structure of the real object, so it … A kernel subsystem needs to provide a data structure which describes the object type and add calls into the debug code at appropriate places. The data structure describing the object type needs at minimum the name of the object type. Optional functions can and should be provided to fix up detected problems so the kernel can continue to work and the debug [all …]
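A minimal sketch of that descriptor and the corresponding calls, for a hypothetical "my_timer" object; only the mandatory name field is filled in and the optional fixup callbacks are left out::

  #include <linux/debugobjects.h>

  struct my_timer {
      unsigned long expires;
      void (*fn)(struct my_timer *t);
  };

  /* the mandatory part: a descriptor naming the object type */
  static const struct debug_obj_descr my_timer_debug_descr = {
      .name = "my_timer",
  };

  static void my_timer_init(struct my_timer *t)
  {
      debug_object_init(t, &my_timer_debug_descr);   /* object becomes "initialized" */
      t->expires = 0;
  }

  static void my_timer_start(struct my_timer *t)
  {
      /* activating an object that was never initialized is reported */
      debug_object_activate(t, &my_timer_debug_descr);
  }

  static void my_timer_stop(struct my_timer *t)
  {
      debug_object_deactivate(t, &my_timer_debug_descr);
  }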
|
| /Documentation/power/ |
| D | userland-swsusp.rst | First, the warnings at the beginning of swsusp.txt still apply. Second, you should read the FAQ in swsusp.txt _now_ if you have not … Now, to use the userland interface for software suspend you need special utilities that will read/write the system memory snapshot from/to the … The interface consists of a character device providing the open(), … commands defined in include/linux/suspend_ioctls.h. The major and minor numbers of the device are, respectively, 10 and 231, and they can … The device can be opened either for reading or for writing. If opened for reading, it is considered to be in the suspend mode. Otherwise it is assumed to be in the resume mode. The device cannot be opened for simultaneous [all …]
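As a minimal sketch, assuming the device node is /dev/snapshot (the usual name for the 10:231 character device), a suspend utility starts by opening it read-only and freezing user space with the ioctls from that header; a real utility would create and save the image between the two steps shown here::

  #include <fcntl.h>
  #include <linux/suspend_ioctls.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/dev/snapshot", O_RDONLY);   /* read-only = suspend mode */
      if (fd < 0) {
          perror("open /dev/snapshot");
          return 1;
      }

      if (ioctl(fd, SNAPSHOT_FREEZE) < 0)         /* freeze user space tasks */
          perror("SNAPSHOT_FREEZE");
      else if (ioctl(fd, SNAPSHOT_UNFREEZE) < 0)  /* ... and thaw them again */
          perror("SNAPSHOT_UNFREEZE");

      close(fd);
      return 0;
  }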
|
| D | pci.rst | An overview of concepts and the Linux kernel's interfaces related to PCI power … This document only covers the aspects of power management specific to PCI devices. For a general description of the kernel's interfaces related to device … devices into states in which they draw less power (low-power states) at the … completely inactive. However, when it is necessary to use the device once again, it has to be put back into the "fully functional" state (full-power state). This may happen when there are some data for the device to handle or as a result of an external event requiring the device to be active, which may be signaled by the device itself. PCI devices may be put into low-power states in two ways, by using the device [all …]
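As a driver-side sketch of such a transition (not the full framework this document goes on to describe), a PCI driver can save the device's configuration space and request D3hot, and later return to D0; real drivers normally do this from their PM callbacks rather than open-coding it::

  #include <linux/pci.h>

  static void my_dev_go_to_sleep(struct pci_dev *pdev)
  {
      pci_save_state(pdev);                   /* save config space          */
      pci_set_power_state(pdev, PCI_D3hot);   /* request a low-power state  */
  }

  static void my_dev_wake_up(struct pci_dev *pdev)
  {
      pci_set_power_state(pdev, PCI_D0);      /* back to full power         */
      pci_restore_state(pdev);                /* restore saved config space */
  }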
|
| D | pm_qos_interface.rst | … one of the parameters. * The per-device PM QoS framework provides the API to manage the … The latency unit used in the PM QoS framework is the microsecond (usec). … (effective) target value. The aggregated target value is updated with changes to the request list or elements of the list. For CPU latency QoS, the aggregated target value is simply the min of the request values held in the list. … Note: the aggregated target value is implemented as an atomic variable so that reading the aggregated value does not require any locking mechanism. From kernel space the use of this interface is simple: … Will insert an element into the CPU latency QoS list with the target value. [all …]
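A minimal kernel-space sketch of that call sequence, assuming the cpu_latency_qos_*() helpers declared in linux/pm_qos.h; the 20 usec target is an arbitrary example value::

  #include <linux/pm_qos.h>

  static struct pm_qos_request my_qos_req;

  static void my_driver_enter_low_latency(void)
  {
      /* inserts an element into the CPU latency QoS list, target 20 usec */
      cpu_latency_qos_add_request(&my_qos_req, 20);
  }

  static void my_driver_relax_latency(void)
  {
      cpu_latency_qos_update_request(&my_qos_req, PM_QOS_DEFAULT_VALUE);
  }

  static void my_driver_exit(void)
  {
      cpu_latency_qos_remove_request(&my_qos_req);
  }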
|
| /Documentation/userspace-api/media/v4l/ |
| D | selection-api-configuration.rst | Applications can use the :ref:`selection API <VIDIOC_G_SELECTION>` to … factors, or have different scaling abilities in the horizontal and vertical directions. It may also not support scaling at all. At the same time the cropping/composing rectangles may have to be aligned, and both the source and the sink may have arbitrary upper and lower size limits. Therefore, as usual, drivers are expected to adjust the requested parameters and return the actual values selected. An application can control the rounding behaviour using … See figure :ref:`sel-targets-capture` for examples of the selection … configure the cropping targets before the composing targets. [all …]
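A short user-space sketch of that adjust-and-return behaviour: request a crop rectangle with VIDIOC_S_SELECTION and print what the driver actually selected. The device path and rectangle values are illustrative::

  #include <fcntl.h>
  #include <linux/videodev2.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/dev/video0", O_RDWR);
      if (fd < 0) {
          perror("open");
          return 1;
      }

      struct v4l2_selection sel;
      memset(&sel, 0, sizeof(sel));
      sel.type     = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      sel.target   = V4L2_SEL_TGT_CROP;   /* the active cropping rectangle */
      sel.r.left   = 0;
      sel.r.top    = 0;
      sel.r.width  = 640;
      sel.r.height = 480;

      if (ioctl(fd, VIDIOC_S_SELECTION, &sel) < 0)
          perror("VIDIOC_S_SELECTION");
      else
          printf("driver selected %ux%u @ (%d,%d)\n",
                 sel.r.width, sel.r.height, sel.r.left, sel.r.top);

      close(fd);
      return 0;
  }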
|
| /Documentation/filesystems/xfs/ |
| D | xfs-delayed-logging-design.rst | This document describes the design and algorithms that the XFS journalling subsystem is based on, so that readers may familiarize themselves with the general concepts of how transaction processing in XFS works. … the basic concepts covered, the design of the delayed logging mechanism is … XFS uses Write Ahead Logging for ensuring changes to the filesystem metadata are atomic and recoverable. For reasons of space and time efficiency, the … physical logging mechanisms to provide the necessary recovery guarantees the … Some objects, such as inodes and dquots, are logged in logical format where the details logged are made up of the changes to in-core structures rather than [all …]
|
| D | xfs-online-fsck-design.rst | … does in the kernel. This document captures the design of the online filesystem check feature for … The purpose of this document is threefold: - To help kernel distributors understand exactly what the XFS online fsck … - To help people reading the code to familiarize themselves with the relevant concepts and design points before they start digging into the code. - To help developers maintaining the system by capturing the reasons … As the online fsck code is merged, the links in this document to topic branches … This document is licensed under the terms of the GNU Public License, v2. The primary author is Darrick J. Wong. [all …]
|
| /Documentation/timers/ |
| D | highres.rst | Further information can be found in the paper of the OLS 2006 talk "hrtimers and beyond". The paper is part of the OLS 2006 Proceedings Volume 1, which can be found on the OLS website: … The slides for this talk are available from: … The slides contain five figures (pages 2, 15, 18, 20, 22), which illustrate the changes in the time(r)-related Linux subsystems. Figure #1 (p. 2) shows the design of the Linux time(r) system before hrtimers and other building blocks … Note: the paper and the slides talk about a "clock event source", while we have since switched to the name "clock event devices". The design contains the following basic building blocks: [all …]
|
| /Documentation/virt/hyperv/ |
| D | vpci.rst | … that are mapped directly into the VM's physical address space. Guest device drivers can interact directly with the hardware without intermediation by the host hypervisor. This approach provides higher bandwidth access to the device with lower latency, compared with devices that are virtualized by the hypervisor. The device should appear to the guest just as it … to the Linux device drivers for the device. … and produces the same benefits by allowing a guest device driver to interact directly with the hardware. See Hyper-V … it is operating, so the Linux device driver for the device can [all …]
|
| /Documentation/networking/ |
| D | ppp_generic.rst | The generic PPP driver in linux-2.4 provides an implementation of the … * the network interface unit (ppp0 etc.) * the interface to the networking code … * the interface to pppd, via a /dev/ppp character device … For sending and receiving PPP frames, the generic PPP driver calls on the services of PPP ``channels``. A PPP channel encapsulates a … has a very simple interface with the generic PPP code: it merely has … be linked to each ppp network interface unit. The generic layer is … See include/linux/ppp_channel.h for the declaration of the types and functions used to communicate between the generic PPP layer and PPP [all …]
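A hedged sketch of the channel side of that interface, assuming the ppp_channel / ppp_channel_ops layout declared in include/linux/ppp_channel.h; the transport ("my_chan") and the numbers used are hypothetical::

  #include <linux/errno.h>
  #include <linux/ppp_channel.h>
  #include <linux/skbuff.h>

  struct my_chan {
      struct ppp_channel chan;
      /* transport-specific state would live here */
  };

  static int my_chan_start_xmit(struct ppp_channel *chan, struct sk_buff *skb)
  {
      /* hand the PPP frame to the underlying transport; non-zero = consumed */
      dev_kfree_skb(skb);
      return 1;
  }

  static int my_chan_ioctl(struct ppp_channel *chan, unsigned int cmd,
                           unsigned long arg)
  {
      return -ENOTTY;
  }

  static const struct ppp_channel_ops my_chan_ops = {
      .start_xmit = my_chan_start_xmit,
      .ioctl      = my_chan_ioctl,
  };

  static int my_chan_setup(struct my_chan *mc)
  {
      mc->chan.private = mc;
      mc->chan.ops     = &my_chan_ops;
      mc->chan.mtu     = 1500;    /* largest frame the transport can carry */
      mc->chan.hdrlen  = 0;       /* headroom the transport needs          */
      return ppp_register_channel(&mc->chan);
  }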
|
| /Documentation/locking/ |
| D | rt-mutex-design.rst | Licensed under the GNU Free Documentation License, Version 1.2 … This document tries to describe the design of the rtmutex.c implementation. It doesn't describe the reasons why rtmutex.c exists. For that please see … that happen without this code, but that is mentioned only as background for understanding what the code actually does. The goal of this document is to help others understand the priority inheritance (PI) algorithm that is used, as well as the reasons for the decisions that were made to implement PI in the manner that was done. … most of the time it can't be helped. Anytime a high priority process wants … the high priority process must wait until the lower priority process is done [all …]
|
| /Documentation/filesystems/spufs/ |
| D | spufs.rst | spufs - the SPU file system … The SPU file system is used on PowerPC machines that implement the Cell … The file system provides a name space similar to POSIX shared memory or message queues. Users that have write permissions on the file system can use spu_create(2) to establish SPU contexts in the spufs root. … set of files. These files can be used for manipulating the state of the … set the user owning the mount point; the default is 0 (root). … set the group owning the mount point; the default is 0 (root). … The files in spufs mostly follow the standard behavior for regular sys… the operations supported on regular file systems. This list details the [all …]
|
| /Documentation/mm/ |
| D | hugetlbfs_reserv.rst | … in a task's address space at page fault time if the VMA indicates huge pages are to be used. If no huge page exists at page fault time, the task is sent … of huge pages at mmap() time. The idea is that if there were not enough huge pages to cover the mapping, the mmap() would fail. This was first done with a simple check in the code at mmap() time to determine if there were enough free huge pages to cover the mapping. Like most things in the kernel, the code has evolved over time. However, the basic idea was to … available for page faults in that mapping. The description below attempts to describe how huge page reserve processing is done in the v4.10 kernel. … The Data Structures [all …]
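The user-visible effect of reserving at mmap() time can be seen with a small sketch (assuming a populated huge page pool and a default 2 MiB huge page size): mmap() either succeeds with the reservation in place or fails up front, and the later first touch cannot run out of huge pages::

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
      size_t len = 2UL * 1024 * 1024;   /* one 2 MiB huge page (size is arch/config dependent) */

      void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
      if (p == MAP_FAILED) {            /* reservation failed at mmap() time */
          perror("mmap(MAP_HUGETLB)");
          return 1;
      }

      memset(p, 0, len);                /* first touch faults the huge page in */
      munmap(p, len);
      return 0;
  }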
|
| /Documentation/security/tpm/ |
| D | tpm-security.rst | The object of this document is to describe how we make the kernel's use of the TPM reasonably robust in the face of external snooping and … in the literature). The current security document is for TPM 2.0. … The TPM is usually a discrete chip attached to a PC via some type of low bandwidth bus. There are exceptions to this, such as the Intel … close to the CPU, which are subject to different attacks, but right at the moment, most hardened security environments require a discrete hardware TPM, which is the use case discussed here. Snooping and Alteration Attacks against the bus: The current state of the art for snooping the `TPM Genie`_ hardware [all …]
|
| /Documentation/power/powercap/ |
| D | dtpm.rst | In the embedded world, the complexity of the SoC leads to an … as a whole in order to prevent the temperature from going above the … Another aspect is to sustain performance for a given power budget, for example in virtual reality, where the user can feel dizziness if the … reduce the battery charging because the dissipated power is too high compared with the power consumed by other devices. … User space is the most adequate place to dynamically act on the … profile: it has the knowledge of the platform. … Dynamic Thermal Power Management (DTPM) is a technique acting on the device power by limiting and/or balancing a power budget among [all …]
|
| /Documentation/admin-guide/mm/ |
| D | userfaultfd.rst | Userfaults allow the implementation of on-demand paging from userland … memory page faults, something otherwise only kernel code could do. … of the ``PROT_NONE+SIGSEGV`` trick. … regions of virtual memory with it. Then, any page faults which occur within the region(s) result in a message being delivered to the userfaultfd, notifying userspace of the fault. The ``userfaultfd`` (aside from registering and unregistering virtual … 1) a ``read/POLLIN`` protocol to notify a userland thread of the faults … 2) various ``UFFDIO_*`` ioctls that can manage the virtual memory regions registered in the ``userfaultfd``, allowing userland to efficiently [all …]
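A compact sketch of that flow, condensed from the usual pattern: create the userfaultfd, register a region for missing-page faults, then let a second thread read the fault message and resolve it with UFFDIO_COPY. Error handling is trimmed to keep it short::

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <linux/userfaultfd.h>
  #include <pthread.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static long page_size;

  static void *fault_handler(void *arg)
  {
      int uffd = *(int *)arg;
      struct uffd_msg msg;

      /* zero-filled staging page used to resolve the fault */
      char *page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      read(uffd, &msg, sizeof(msg));            /* blocks until a fault arrives */
      if (msg.event == UFFD_EVENT_PAGEFAULT) {
          struct uffdio_copy copy = {
              .dst = msg.arg.pagefault.address & ~((unsigned long long)page_size - 1),
              .src = (unsigned long)page,
              .len = page_size,
          };
          ioctl(uffd, UFFDIO_COPY, &copy);      /* resolve and wake the faulter */
      }
      return NULL;
  }

  int main(void)
  {
      page_size = sysconf(_SC_PAGE_SIZE);

      int uffd = syscall(SYS_userfaultfd, O_CLOEXEC);
      struct uffdio_api api = { .api = UFFD_API };
      ioctl(uffd, UFFDIO_API, &api);            /* mandatory handshake */

      char *area = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      struct uffdio_register reg = {
          .range = { .start = (unsigned long)area, .len = page_size },
          .mode  = UFFDIO_REGISTER_MODE_MISSING, /* missing-page faults only */
      };
      ioctl(uffd, UFFDIO_REGISTER, &reg);

      pthread_t thr;
      pthread_create(&thr, NULL, fault_handler, &uffd);

      printf("first byte: %d\n", area[0]);      /* triggers the userfault */
      pthread_join(thr, NULL);
      return 0;
  }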
|
| /Documentation/networking/device_drivers/ethernet/toshiba/ |
| D | spider_net.rst | The Spidernet Device Driver … This document sketches the structure of portions of the spidernet device driver in the Linux kernel tree. The spidernet is a gigabit Ethernet device built into the Toshiba southbridge commonly used in the Sony PlayStation 3 and the IBM QS20 Cell blade. The Structure of the RX Ring: The receive (RX) ring is a circular linked list of RX descriptors, together with three pointers into the ring that are used to manage its … The elements of the ring are called "descriptors" or "descrs"; they describe the received data. This includes a pointer to a buffer [all …]
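As a purely illustrative sketch (these are not the driver's actual declarations), a ring of this kind can be pictured as a circular chain of descriptors plus the pointers used to manage it::

  struct rx_descr {
      unsigned int     status;    /* owned by the hardware or by the driver?          */
      void            *buf;       /* receive buffer this descr points to              */
      struct rx_descr *next;      /* next descr; the last one links back to the first */
  };

  struct rx_ring {
      struct rx_descr *head;      /* next descr the hardware will fill                */
      struct rx_descr *tail;      /* next descr the driver will process               */
      struct rx_descr *last;      /* end of the region currently handed to the hardware */
  };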
|
| /Documentation/driver-api/ |
| D | ipmi.rst | The Linux IPMI Driver. The Intelligent Platform Management Interface, or IPMI, is a … It provides for dynamic discovery of sensors in the system and the ability to monitor the sensors and be informed when the sensor's … management software that can use the IPMI system. This document describes how to use the IPMI driver for Linux. If you are not familiar with IPMI itself, see the web site at … The Linux IPMI driver is modular, which means you have to pick several … these are available in the 'Character Devices' menu, then the IPMI … The message handler does not provide any user-level interfaces. [all …]
|