Searched full:level (Results 1 – 25 of 1096) sorted by relevance
/Documentation/devicetree/bindings/opp/opp-v2-qcom-level.yaml:
    4: $id: http://devicetree.org/schemas/opp/opp-v2-qcom-level.yaml#
   17: const: operating-points-v2-qcom-level
   25: opp-level: true
   27: qcom,opp-fuse-level:
   29: A positive value representing the fuse corner/level associated with
   31: corner/level. A fuse corner/level contains e.g. ref uV, min uV,
   38: - opp-level
   39: - qcom,opp-fuse-level
   49: compatible = "operating-points-v2-qcom-level";
   52: opp-level = <1>;
  [all …]
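The matched schema describes per-fuse-corner OPP tables. A minimal sketch of such a table, with node labels and level values chosen for illustration:

```dts
cpr_opp_table: opp-table-cpr {
	compatible = "operating-points-v2-qcom-level";

	cpr_opp1: opp1 {
		opp-level = <1>;
		qcom,opp-fuse-level = <1>;	/* fuse corner backing this level */
	};

	cpr_opp2: opp2 {
		opp-level = <2>;
		qcom,opp-fuse-level = <2>;
	};
};
```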
/Documentation/infiniband/core_locking.rst:
    7: both low-level drivers that sit below the midlayer and upper level
   13: With the following exceptions, a low-level driver implementation of
   28: The corresponding functions exported to upper level protocol
   45: used by low-level drivers to dispatch asynchronous events through
   51: All of the methods in struct ib_device exported by a low-level
   52: driver must be fully reentrant. The low-level driver is required to
   59: Because low-level drivers are reentrant, upper level protocol
   69: A low-level driver must not perform a callback directly from the
   71: allowed for a low-level driver to call a consumer's completion event
   72: handler directly from its post_send method. Instead, the low-level
  [all …]
/Documentation/networking/netif-msg.rst:
    4: NETIF Msg Level
    7: The design of the network interface message level setting.
   18: integer variable that controls the debug message level. The message
   19: level ranged from 0 to 7, and monotonically increased in verbosity.
   21: The message level was not precisely defined past level 3, but were
   22: always implemented within +-1 of the specified level. Drivers tended
   23: to shed the more verbose level messages as they matured.
   34: Initially this message level variable was uniquely named in each driver
   44: - Using an ioctl() call to modify the level.
   45: - Per-interface rather than per-driver message level setting.
  [all …]
/Documentation/devicetree/bindings/power/supply/dlg,da9150-fuel-gauge.yaml:
   21: description: Interval time (milliseconds) between battery level checks.
   23: dlg,warn-soc-level:
   27: description: Battery discharge level (%) where warning event raised.
   29: dlg,crit-soc-level:
   34: Battery discharge level (%) where critical event raised.
   35: This value should be lower than the warning level.
   48: dlg,warn-soc-level = /bits/ 8 <15>;
   49: dlg,crit-soc-level = /bits/ 8 <5>;
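Per the matched lines, the two thresholds are 8-bit percentages, with the critical level below the warning level. A trimmed sketch of a node using them (the compatible string is assumed from the filename; other required properties are omitted):

```dts
fuel-gauge {
	compatible = "dlg,da9150-fuel-gauge";	/* assumed from the binding filename */
	dlg,warn-soc-level = /bits/ 8 <15>;	/* warn at 15% state of charge */
	dlg,crit-soc-level = /bits/ 8 <5>;	/* critical event below the warning level */
};
```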
/Documentation/devicetree/bindings/power/qcom,rpmpd.yaml:
   86: opp-level = <RPMH_REGULATOR_LEVEL_RETENTION>;
   90: opp-level = <RPMH_REGULATOR_LEVEL_MIN_SVS>;
   94: opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS>;
   98: opp-level = <RPMH_REGULATOR_LEVEL_SVS>;
  102: opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>;
  106: opp-level = <RPMH_REGULATOR_LEVEL_NOM>;
  110: opp-level = <RPMH_REGULATOR_LEVEL_NOM_L1>;
  114: opp-level = <RPMH_REGULATOR_LEVEL_NOM_L2>;
  118: opp-level = <RPMH_REGULATOR_LEVEL_TURBO>;
  122: opp-level = <RPMH_REGULATOR_LEVEL_TURBO_L1>;
  [all …]
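The matched lines enumerate the RPMh regulator corners used as `opp-level` values in a power-domain OPP table. A condensed sketch of how such a table hangs off the power controller (the SoC compatible and node labels are chosen for illustration):

```dts
#include <dt-bindings/power/qcom-rpmpd.h>

rpmhpd: power-controller {
	compatible = "qcom,sdm845-rpmhpd";	/* SoC compatible chosen for illustration */
	#power-domain-cells = <1>;
	operating-points-v2 = <&rpmhpd_opp_table>;

	rpmhpd_opp_table: opp-table {
		compatible = "operating-points-v2";

		rpmhpd_opp_svs: opp4 {
			opp-level = <RPMH_REGULATOR_LEVEL_SVS>;
		};

		rpmhpd_opp_nom: opp6 {
			opp-level = <RPMH_REGULATOR_LEVEL_NOM>;
		};
	};
};
```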
/Documentation/devicetree/bindings/cache/socionext,uniphier-system-cache.yaml:
   11: controller system. All of them have a level 2 cache controller, and some
   12: have a level 3 cache controller as well.
   43: cache-level:
   47: next-level-cache: true
   62: - cache-level
   75: cache-level = <2>;
   79: // L2 should specify the next level cache by 'next-level-cache'.
   88: cache-level = <2>;
   89: next-level-cache = <&l3>;
  100: cache-level = <3>;
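The matches show how the L2 node chains to the L3 node via `next-level-cache`. A trimmed sketch of that hierarchy (register addresses are placeholders, and required properties such as interrupts and cache geometry are omitted):

```dts
l2: cache-controller@500c0000 {
	compatible = "socionext,uniphier-system-cache";
	reg = <0x500c0000 0x2000>;	/* placeholder address/size */
	cache-unified;
	cache-level = <2>;
	next-level-cache = <&l3>;	/* L2 names the next (L3) cache node */
};

l3: cache-controller@500c8000 {
	compatible = "socionext,uniphier-system-cache";
	reg = <0x500c8000 0x2000>;	/* placeholder address/size */
	cache-unified;
	cache-level = <3>;	/* last level: no next-level-cache */
};
```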
/Documentation/arch/x86/x86_64/5level-paging.rst:
    4: 5-level paging
    9: Original x86-64 was limited by 4-level paging to 256 TiB of virtual address
   14: 5-level paging. It is a straight-forward extension of the current page
   20: QEMU 2.9 and later support 5-level paging.
   22: Virtual memory layout for 5-level paging is described in
   26: Enabling 5-level paging
   30: Kernel with CONFIG_X86_5LEVEL=y still able to boot on 4-level hardware.
   31: In this case additional page table level -- p4d -- will be folded at
   36: On x86, 5-level paging enables 56-bit userspace virtual address space.
   39: information. It collides with valid pointers with 5-level paging and
  [all …]
/Documentation/devicetree/bindings/cpufreq/cpufreq-qcom-hw.yaml:
  220: next-level-cache = <&L2_0>;
  226: cache-level = <2>;
  227: next-level-cache = <&L3_0>;
  231: cache-level = <3>;
  241: next-level-cache = <&L2_100>;
  247: cache-level = <2>;
  248: next-level-cache = <&L3_0>;
  257: next-level-cache = <&L2_200>;
  263: cache-level = <2>;
  264: next-level-cache = <&L3_0>;
  [all …]

/Documentation/devicetree/bindings/cpufreq/qcom-cpufreq-nvmem.yaml:
   17: on the CPU OPP in use. The CPUFreq driver sets the CPR power domain level
   53: const: operating-points-v2-qcom-level
   55: $ref: /schemas/opp/opp-v2-qcom-level.yaml#
  120: next-level-cache = <&L2_0>;
  134: next-level-cache = <&L2_0>;
  148: next-level-cache = <&L2_0>;
  162: next-level-cache = <&L2_0>;
  190: compatible = "operating-points-v2-qcom-level";
  193: opp-level = <1>;
  194: qcom,opp-fuse-level = <1>;
  [all …]
/Documentation/ABI/obsolete/sysfs-bus-usb:
    1: What: /sys/bus/usb/devices/.../power/level
    7: power/level. This file holds a power-level setting for
   17: level. The "on" level is meant for administrative uses.
   23: left in the "on" level. Although the USB spec requires
   26: initializes all non-hub devices in the "on" level. Some
/Documentation/scsi/megaraid.rst:
   14: interfaces with the applications on one side and all the low level drivers
   19: i. Avoid duplicate code from the low level drivers.
   20: ii. Unburden the low level drivers from having to export the
   24: multiple low level drivers.
   27: ioctl commands. But this module is envisioned to handle all user space level
   60: module acts as a registry for low level hba drivers. The low level drivers
   66: The lower level drivers now understand only a new improved ioctl packet called
   75: can easily be more than one. But since megaraid is the only low level driver
/Documentation/userspace-api/media/dvb/ca_high_level.rst:
    3: The High level CI API
   10: This document describes the high level CI API as in accordance to the
   14: With the High Level CI approach any new card with almost any random
   39: #define CA_CI 1 /* CI high level interface */
   40: #define CA_CI_LINK 2 /* CI link layer level interface */
   41: #define CA_CI_PHYS 4 /* CI physical layer level interface */
   50: This CI interface follows the CI high level interface, which is not
   65: With this High Level CI interface, the interface can be defined with the
   68: All these ioctls are also valid for the High level CI interface
   89: APP: CI High level interface
  [all …]
/Documentation/admin-guide/pm/intel_uncore_frequency_scaling.rst:
   72: The current sysfs interface supports controls at package and die level.
   74: fabric cluster level.
   80: To represent controls at fabric cluster level in addition to the
   81: controls at package and die level (like systems without TPMI
  100: The other attributes are same as presented at package_*_die_* level.
  103: is updated at "package_*_die_*" level. This model will be still supported
  106: When user uses controls at "package_*_die_*" level, then every fabric
  110: still update "max_freq_khz" at each uncore* level, which is more restrictive.
  111: Similarly, user can update "min_freq_khz" at "package_*_die_*" level
  112: to apply at each uncore* level.
  [all …]
/Documentation/core-api/genericirq.rst:
   36: - Level type
   51: This split implementation of high-level IRQ handlers allows us to
   59: and low-level hardware logic, and it also leads to unnecessary code
   61: ``ioapic_edge_irq`` IRQ-type which share many of the low-level details but
   69: and only need to add the chip-level specific code. The separation is
   74: Each interrupt descriptor is assigned its own high-level flow handler,
   75: which is normally one of the generic implementations. (This high-level
   82: IRQ-flow implementation for 'level type' interrupts and add a
  102: 1. High-level driver API
  104: 2. High-level IRQ flow handlers
  [all …]
/Documentation/admin-guide/mm/numaperf.rst:
  106: by the last memory level in the hierarchy. The system meanwhile uses
  110: The term "far memory" is used to denote the last level memory in the
  111: hierarchy. Each increasing cache level provides higher performing
  115: This numbering is different than CPU caches where the cache level (ex:
  116: L1, L2, L3) uses the CPU-side view where each increased level is lower
  117: performing. In contrast, the memory cache level is centric to the last
  118: level memory, so the higher numbered cache level corresponds to memory
  124: accesses the next level of memory until there is either a hit in that
  125: cache level, or it reaches far memory.
  142: The attributes for each level of cache is provided under its cache
  [all …]
/Documentation/driver-api/media/drivers/pvrusb2.rst:
   29: 1. Low level wire-protocol implementation with the device.
   34: 3. High level hardware driver implementation which coordinates all
   38: tear-down, arbitration, and interaction with high level
   42: 5. High level interfaces which glue the driver to various published
   54: right now the V4L high level interface is the most complete, the
   55: sysfs high level interface will work equally well for similar
   57: possible to produce a DVB high level interface that can sit right
   83: here. Hotplugging is ultimately coordinated here. All high level
  116: access to the driver should be through one of the high level
  118: level interfaces are restricted to the API defined in
  [all …]
/Documentation/virt/paravirt_ops.rst:
   16: corresponding to low-level critical instructions and high-level
   18: time by enabling binary patching of the low-level critical operations
   24: These operations correspond to high-level functionality where it is
   28: Usually these operations correspond to low-level critical instructions. They
/Documentation/arch/ia64/fsys.rst:
   18: switched over to kernel memory. The user-level state is saved
   23: user memory. The user-level state is contained in the
   28: interruption-handlers start execution in. The user-level
   34: - execution is at privilege level 0 (most-privileged)
   36: - CPU registers may contain a mixture of user-level and kernel-level
   38: security-sensitive kernel-level state is leaked back to
   39: user-level)
   46: in fsys-mode (they point to the user-level stacks, which may
   51: privilege level is at level 0, this means that fsys-mode requires some
   58: Linux operates in fsys-mode when (a) the privilege level is 0 (most
  [all …]
/Documentation/devicetree/bindings/interrupt-controller/img,pdc-intc.txt:
   38: - <2nd-cell>: The level-sense information, encoded using the Linux interrupt
   44: 4 = active-high level-sensitive (required for perip irqs)
   45: 8 = active-low level-sensitive
   73: interrupts = <18 4 /* level */>, /* Syswakes */
   74: <30 4 /* level */>, /* Peripheral 0 (RTC) */
   75: <29 4 /* level */>, /* Peripheral 1 (IR) */
   76: <31 4 /* level */>; /* Peripheral 2 (WDT) */
  102: // Interrupt source SysWake 0 that is active-low level-sensitive

/Documentation/devicetree/bindings/interrupt-controller/snps,archs-idu-intc.txt:
    3: This optional 2nd level interrupt controller can be used in SMP configurations
   18: - bits[3:0] trigger type and level flags
   21: 4 = active high level-sensitive <<< DEFAULT
   22: 8 = NOT SUPPORTED (active low level-sensitive)
   23: When no second cell is specified, the interrupt is assumed to be level
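Both matched bindings encode the trigger type in the second interrupt cell, with 4 meaning active-high level-sensitive. A hypothetical consumer node (device compatible and numbers invented for illustration) requesting such a line:

```dts
rtc: rtc@2000 {
	compatible = "acme,example-rtc";	/* hypothetical device */
	interrupt-parent = <&pdc>;
	interrupts = <30 4>;	/* source 30, 4 = active-high level-sensitive */
};
```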
/Documentation/scheduler/sched-nice-design.rst:
   12: scheduler, (otherwise we'd have done it long ago) because nice level
   19: rule so that nice +19 level would be _exactly_ 1 jiffy. To better
   34: -*----------------------------------*-----> [nice level]
   59: within the constraints of HZ and jiffies and their nasty design level
   63: about Linux's nice level support was its asymmetry around the origin
   65: accurately: the fact that nice level behavior depended on the _absolute_
   66: nice level as well, while the nice API itself is fundamentally
   74: Note that the 'inc' is relative to the current nice level. Tools like
   79: depend on the nice level of the parent shell - if it was at nice -10 the
   82: A third complaint against Linux's nice level support was that negative
  [all …]
/Documentation/mm/page_tables.rst:
   53: to mark large areas as unmapped at a higher level in the page table hierarchy.
   55: Additionally, on modern CPUs, a higher level page table entry can point directly
   57: megabytes or even gigabytes in a single high-level page table entry, taking
   97: this did refer to a single page table entry in the single top level page
   98: table, it was retrofitted to be an array of mapping elements when two-level
  106: the other levels to handle 4-level page tables. It is potentially unused,
  109: - **p4d**, `p4d_t`, `p4dval_t` = **Page Level 4 Directory** was introduced to
  110: handle 5-level page tables after the *pud* was introduced. Now it was clear
  112: directory level and that we cannot go on with ad hoc names any more. This
  124: To repeat: each level in the page table hierarchy is a *array of pointers*, so
  [all …]
/Documentation/misc-devices/bh1770glc.rst:
   37: interrupts the delayed work is pushed forward. So, when proximity level goes
   77: RW - HI level threshold value
   84: RW - LO level threshold value
  119: RW - Measurement rate (in Hz) when the level is above threshold
  123: RW - Measurement rate (in Hz) when the level is below threshold
  130: RW - threshold level which trigs proximity events.
  135: RW - threshold level which trigs event immediately
/Documentation/devicetree/bindings/leds/backlight/pwm-backlight.yaml:
   46: level (PWM duty cycle) will be interpolated from these values. 0 means a
   51: default-brightness-level:
   53: The default brightness level (index into the array defined by the
   61: having to list out every possible value in the brightness-level array.
   65: default-brightness-level: [brightness-levels]
   81: default-brightness-level = <6>;
   97: default-brightness-level = <4096>;

/Documentation/devicetree/bindings/leds/backlight/led-backlight.yaml:
   33: backlight brightness level into a LED brightness level. If it is not
   37: default-brightness-level:
   39: The default brightness level (index into the array defined by the
   56: default-brightness-level = <6>;
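As the matched lines note, `default-brightness-level` is an index into the `brightness-levels` array, not a brightness value itself. A minimal sketch of a pwm-backlight node (the PWM phandle and period are placeholders):

```dts
backlight {
	compatible = "pwm-backlight";
	pwms = <&pwm 0 5000000>;	/* PWM phandle, channel, period: placeholders */
	brightness-levels = <0 4 8 16 32 64 128 255>;
	default-brightness-level = <6>;	/* index 6 selects level 128 */
};
```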