
Searched +full:run +full:- +full:time (Results 1 – 25 of 355) sorted by relevance


/Documentation/scheduler/
sched-rt-group.rst
2 Real-Time group scheduling
12 2.1 System-wide settings
28 resolution, or the time it takes to handle the budget refresh itself.
33 are real-time processes).
40 ---------------
43 the amount of bandwidth (eg. CPU time) being constant. In order to schedule
45 of the CPU time available. Without a minimum guarantee a realtime group can
50 ----------------
52 CPU time is divided by means of specifying how much time can be spent running
53 in a given period. We allocate this "run time" for each realtime group which
[all …]
sched-bwc.rst
6 The SCHED_RT case is covered in Documentation/scheduler/sched-rt-group.rst ]
13 microseconds of CPU time. That quota is assigned to per-cpu run queues in
16 throttled. Throttled threads will not be able to run again until the next
21 is transferred to cpu-local "silos" on a demand basis. The amount transferred
25 ----------
28 cpu.cfs_quota_us: the total available run-time within a period (in microseconds)
35 cpu.cfs_quota=-1
37 A value of -1 for cpu.cfs_quota_us indicates that the group does not have any
39 bandwidth group. This represents the traditional work-conserving behavior for
55 --------------------
[all …]
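For illustration only, a minimal cgroup-v1 sketch of the knobs quoted above; the mount point and the group name "demo" are assumptions, not part of the document::

   # Assumes the cgroup-v1 cpu controller is mounted at /sys/fs/cgroup/cpu (run as root).
   mkdir /sys/fs/cgroup/cpu/demo
   # Allow 50 ms of run-time in every 100 ms period, i.e. at most half a CPU.
   echo 100000 > /sys/fs/cgroup/cpu/demo/cpu.cfs_period_us
   echo 50000 > /sys/fs/cgroup/cpu/demo/cpu.cfs_quota_us
   # A quota of -1 removes the limit again (work-conserving behaviour).
   echo -1 > /sys/fs/cgroup/cpu/demo/cpu.cfs_quota_us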
sched-design-CFS.rst
15 an "ideal, precise multi-tasking CPU" on real hardware.
17 "Ideal multi-tasking CPU" is a (non-existent :-)) CPU that has 100% physical
18 power and which can run each task at precise equal speed, in parallel, each at
20 each at 50% physical power --- i.e., actually in parallel.
22 On real hardware, we can run only a single task at once, so we have to
25 multi-tasking CPU described above. In practice, the virtual runtime of a task
33 In CFS the virtual runtime is expressed and tracked via the per-task
34 p->se.vruntime (nanosec-unit) value. This way, it's possible to accurately
35 timestamp and measure the "expected CPU time" a task should have gotten.
37 [ small detail: on "ideal" hardware, at any time all tasks would have the same
[all …]
/Documentation/admin-guide/
lockup-watchdogs.rst
10 details), without giving other tasks a chance to run. The current
14 "softlockup_panic" (see "Documentation/admin-guide/kernel-parameters.rst" for
20 details), without letting other interrupts have a chance to run.
24 'hardlockup_panic', a compile time knob, "BOOTPARAM_HARDLOCKUP_PANIC",
26 (see "Documentation/admin-guide/kernel-parameters.rst" for details).
31 of time.
43 (compile-time initialized to 10 and configurable through sysctl of the
45 does not receive any hrtimer interrupt during that time the
51 timestamp every time it is scheduled. If that timestamp is not updated
64 event. The right value for a particular environment is a trade-off
[all …]
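As a hedged illustration of the sysctl knobs mentioned in this excerpt (the values are arbitrary)::

   # Panic instead of only warning when a soft lockup is detected.
   sysctl -w kernel.softlockup_panic=1
   # Watchdog threshold in seconds (the compile-time default is 10).
   sysctl -w kernel.watchdog_thresh=20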
/Documentation/leds/
ledtrig-transient.rst
8 to be off. The delay_on value specifies the time period an LED should stay
11 gets deactivated. There is no provision for one time activation to implement
20 As a specific example of this use-case, let's look at vibrate feature on
42 that are active at the time driver gets suspended, continue to run, without
62 non-transient state. When driver gets suspended, irrespective of the transient
77 - duration allows setting timer value in msecs. The initial value is 0.
78 - activate allows activating and deactivating the timer specified by
81 - state allows user to specify a transient state to be held for the specified
85 - one shot timer activate mechanism.
96 - one shot timer value. When activate is set, duration value
[all …]
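A minimal sketch of the sysfs attributes named above, assuming an LED called <led> under /sys/class/leds/::

   echo transient > /sys/class/leds/<led>/trigger
   echo 1 > /sys/class/leds/<led>/state         # transient state to hold: LED on
   echo 1000 > /sys/class/leds/<led>/duration   # one-shot timer value, in msecs
   echo 1 > /sys/class/leds/<led>/activate      # start the one-shot timer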
/Documentation/virt/
paravirt_ops.rst
1 .. SPDX-License-Identifier: GPL-2.0
12 allows a single kernel binary to run on all supported execution environments
13 including native machine -- without any hypervisors.
17 functionalities in various areas. pv-ops allows for optimizations at run
18 time by enabling binary patching of the low-ops critical operations
19 at boot time.
23 - simple indirect call
27 - indirect call which allows optimization with binary patch
32 - a set of macros for hand written assembly code
/Documentation/virt/kvm/
halt-polling.txt
6 for some time period after the guest has elected to no longer run by ceding.
9 before giving up the cpu to the scheduler in order to let something else run.
11 Polling provides a latency advantage in cases where the guest can be run again
13 the order of a few micro-seconds, although performance benefits are workload
17 wakeup periods where the time spent halt polling is minimised and the time
24 The powerpc kvm-hv specific case is implemented in:
31 The maximum time for which to poll before invoking the scheduler, referred to
36 kvm_vcpu->halt_poll_ns
38 or in the case of powerpc kvm-hv, in the vcore struct:
40 kvmppc_vcore->halt_poll_ns
[all …]
/Documentation/power/
runtime_pm.rst
5 (C) 2009-2011 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
18 put their PM-related work items. It is strongly recommended that pm_wq be
20 them to be synchronized with system-wide power transitions (suspend to RAM,
53 The ->runtime_suspend(), ->runtime_resume() and ->runtime_idle() callbacks
57 1. PM domain of the device, if the device's PM domain object, dev->pm_domain,
60 2. Device type of the device, if both dev->type and dev->type->pm are present.
62 3. Device class of the device, if both dev->class and dev->class->pm are
65 4. Bus type of the device, if both dev->bus and dev->bus->pm are present.
69 dev->driver->pm directly (if present).
73 and bus type. Moreover, the high-priority one will always take precedence over
[all …]
/Documentation/locking/
locktorture.rst
18 acquire the lock and hold it for a specific amount of time, thus simulating
20 can be simulated by either enlarging this critical region hold time and/or
30 Locktorture-specific
31 --------------------
49 - "lock_busted":
52 - "spin_lock":
55 - "spin_lock_irq":
58 - "rw_lock":
61 - "rw_lock_irq":
65 - "mutex_lock":
[all …]
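A hedged example of loading the module with one of the torture_type values listed above; the parameter values are arbitrary::

   # Stress a mutex with four writer kthreads, printing stats every 60 s.
   modprobe locktorture torture_type=mutex_lock nwriters_stress=4 stat_interval=60
   # ... let it run for a while, then unload to get the final summary in the kernel log.
   rmmod locktorture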
/Documentation/admin-guide/pm/
cpuidle.rst
1 .. SPDX-License-Identifier: GPL-2.0
8 CPU Idle Time Management
27 CPU idle time management is an energy-efficiency feature concerned about using
31 ------------
33 CPU idle time management operates on CPUs as seen by the *CPU scheduler* (that
37 software as individual single-core processors. In other words, a CPU is an
43 program) at a time, it is a CPU. In that case, if the hardware is asked to
46 Second, if the processor is multi-core, each core in it is able to follow at
47 least one program at a time. The cores need not be entirely independent of each
48 other (for example, they may share caches), but still most of the time they
[all …]
/Documentation/RCU/
stallwarn.txt
5 options that can be used to fine-tune the detector's operation. Finally,
15 o A CPU looping in an RCU read-side critical section.
29 keep up with the boot-time console-message rate. For example,
30 a 115Kbaud serial console can be -way- too slow to keep up
31 with boot-time message rates, and will frequently result in
35 o Anything that prevents RCU's grace-period kthreads from running.
36 This can result in the "All QSes seen" console-log message.
38 ran and how often it should be expected to run. It can also
39 result in the "rcu_.*kthread starved for" console-log message,
42 o A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might
[all …]
/Documentation/driver-api/thermal/
cpu-cooling-api.rst
22 --------------------------------------------
30 "thermal-cpufreq-%x". This api can support multiple instances of cpufreq
42 the name "thermal-cpufreq-%x" linking it with a device tree node, in
54 This interface function unregisters the "thermal-cpufreq-%x" cooling device.
63 supported currently). This power model requires that the operating-points of
73 - The time the processor spends running, consuming dynamic power, as
74 compared to the time in idle states where dynamic consumption is
76 - The voltage and frequency levels as a result of DVFS. The DVFS
78 - In running time the 'execution' behaviour (instruction types, memory
85 Pdyn = f(run) * Voltage^2 * Frequency * Utilisation
[all …]
/Documentation/ABI/testing/
sysfs-devices-power
40 space to control the run-time power management of the device.
45 + "auto\n" to allow the device to be power managed at run time;
51 from power managing the device at run time. Doing that while
61 with the main suspend/resume thread) during system-wide power
86 attribute is read-only. If the device is not capable of waking up
98 is read-only. If the device is not capable of waking up the
110 state in progress. This attribute is read-only. If the device
122 read-only. If the device is not capable of waking up the system
133 the device is being processed (1). This attribute is read-only.
144 the total time of processing wakeup events associated with the
[all …]
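For illustration, the run-time PM control file described above, with <device> standing in for a real device path::

   # "auto" lets the kernel power-manage the device at run time,
   # "on" prevents that and keeps the device fully powered.
   echo auto > /sys/devices/<device>/power/control
   cat /sys/devices/<device>/power/runtime_status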
/Documentation/driver-api/dmaengine/
dmatest.rst
11 capability of the following: DMA_MEMCPY (memory-to-memory), DMA_MEMSET
12 (const-to-memory or memory-to-memory, when emulated), DMA_XOR, DMA_PQ.
18 Part 1 - How to build the test module
23 Device Drivers -> DMA Engine support -> DMA Test client
28 Part 2 - When dmatest is built as a module
33 % modprobe dmatest timeout=2000 iterations=1 channel=dma0chan0 run=1
41 % echo 1 > /sys/module/dmatest/parameters/run
45 dmatest.timeout=2000 dmatest.iterations=1 dmatest.channel=dma0chan0 dmatest.run=1
47 Example of multi-channel test usage (new in the 5.0 kernel)::
55 % echo 1 > /sys/module/dmatest/parameters/run
[all …]
/Documentation/sound/designs/
seq-oss.rst
15 What this does - it provides the emulation of the OSS sequencer, access
17 Most applications using OSS can run if the appropriate ALSA
51 You can run two or more applications simultaneously (even for OSS
53 However, each MIDI device is exclusive - that is, if a MIDI device
57 * Real-time event processing:
59 The events can be processed in real time without using out of bound
60 ioctl. To switch to real-time mode, send ABSTIME 0 event. The following
61 events will be processed in real time without being queued. To switch off the
62 real-time mode, send RELTIME 0 event.
67 ``/proc/asound/seq/oss`` at any time. In the later version,
[all …]
/Documentation/driver-api/mmc/
mmc-async-req.rst
11 pre-fetch makes the cache overhead relatively significant. If the DMA
15 The intention of non-blocking (asynchronous) MMC requests is to minimize the
16 time between when an MMC request ends and another MMC request begins.
19 dma_unmap_sg are processing. Using non-blocking MMC requests makes it
26 The mmc_blk_issue_rw_rq() in the MMC block driver is made non-blocking.
28 The increase in throughput is proportional to the time it takes to
31 more significant the prepare request time becomes. Roughly the expected
33 platform. In power save mode, when clocks run on a lower frequency, the DMA
34 preparation may cost even more. As long as these slower preparations are run
40 https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req
[all …]
/Documentation/admin-guide/mm/
transhuge.rst
28 requiring larger clear-page copy-page in page faults which is a
32 only matters the first time the memory is accessed for the lifetime of
38 1) the TLB miss will run faster (especially with virtualization using
48 going to run faster.
78 possible to disable hugepages system-wide and to only have them inside
83 only run faster.
95 -------------------
110 time to defrag memory, we would expect to gain even more by the fact we
149 should be self-explanatory.
168 -------------------
[all …]
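A brief sketch of the system-wide switch referred to above, using the conventional transparent-hugepage sysfs path::

   # "always", "madvise" or "never"; "madvise" restricts THP to regions
   # that ask for it with madvise(MADV_HUGEPAGE).
   echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
   cat /sys/kernel/mm/transparent_hugepage/enabled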
/Documentation/devicetree/bindings/leds/
leds-lm3532.txt
1 * Texas Instruments - lm3532 White LED driver with ambient light sensing
4 The LM3532 provides the 3 high-voltage, low-side current sinks. The device is
5 programmable over an I2C-compatible interface and has independent
11 each with 32 internal voltage setting resistors, 8-bit logarithmic and linear
16 - compatible : "ti,lm3532"
17 - reg : I2C slave address
18 - #address-cells : 1
19 - #size-cells : 0
22 - enable-gpios : gpio pin to enable (active high)/disable the device.
23 - ramp-up-us - The Run time ramp rates/step are from one current
[all …]
/Documentation/networking/
snmp_counter.rst
17 .. _RFC1213 ipInReceives: https://tools.ietf.org/html/rfc1213#page-26
30 .. _RFC1213 ipInDelivers: https://tools.ietf.org/html/rfc1213#page-28
41 .. _RFC1213 ipOutRequests: https://tools.ietf.org/html/rfc1213#page-28
60 .. _Explicit Congestion Notification: https://tools.ietf.org/html/rfc3168#page-6
73 .. _RFC1213 ipInHdrErrors: https://tools.ietf.org/html/rfc1213#page-27
81 .. _RFC1213 ipInAddrErrors: https://tools.ietf.org/html/rfc1213#page-27
98 .. _RFC1213 ipInUnknownProtos: https://tools.ietf.org/html/rfc1213#page-27
111 .. _RFC1213 ipInDiscards: https://tools.ietf.org/html/rfc1213#page-28
118 .. _RFC1213 ipOutDiscards: https://tools.ietf.org/html/rfc1213#page-28
125 .. _RFC1213 ipOutNoRoutes: https://tools.ietf.org/html/rfc1213#page-29
[all …]
/Documentation/trace/
events-nmi.rst
11 -----------
14 NMI handlers are hogging large amounts of CPU time. The kernel
15 will warn if it sees long-running handlers::
17 INFO: NMI handler took too long to run: 9.207 msecs
30 really hogging a lot of CPU time, like a millisecond at a time.
41 …<idle>-0 [000] d.h3 505.397558: nmi_handler: perf_event_nmi_handler() delta_ns: 3236765 hand…
42 …<idle>-0 [000] d.h3 505.805893: nmi_handler: perf_event_nmi_handler() delta_ns: 3174234 hand…
43 …<idle>-0 [000] d.h3 506.158206: nmi_handler: perf_event_nmi_handler() delta_ns: 3084642 hand…
44 …<idle>-0 [000] d.h3 506.334346: nmi_handler: perf_event_nmi_handler() delta_ns: 3080351 hand…
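A hedged example of capturing the nmi_handler trace event shown above, assuming tracefs is mounted at /sys/kernel/debug/tracing::

   # Enable the event, then watch live output for handlers with large delta_ns values.
   echo 1 > /sys/kernel/debug/tracing/events/nmi/nmi_handler/enable
   cat /sys/kernel/debug/tracing/trace_pipe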
/Documentation/arm64/
perf.txt
5 Date: 2019-03-06
8 ------------
16 --------------
20 The kernel runs at EL2 with VHE and EL1 without. Guest kernels always run
31 ----------
38 For a non-VHE host this attribute will exclude EL2 as we consider the
47 ----------------------------
51 The KVM host may run at EL0 (userspace), EL1 (non-VHE kernel) and EL2 (VHE
52 kernel or non-VHE hypervisor).
54 The KVM guest may run at EL0 (userspace) and EL1 (kernel).
[all …]
/Documentation/timers/
no_hz.rst
2 NO_HZ: Reducing Scheduling-Clock Ticks
7 reduce the number of scheduling-clock interrupts, thereby improving energy
9 some types of computationally intensive high-performance computing (HPC)
10 applications and for real-time applications.
12 There are three main ways of managing scheduling-clock interrupts
13 (also known as "scheduling-clock ticks" or simply "ticks"):
15 1. Never omit scheduling-clock ticks (CONFIG_HZ_PERIODIC=y or
16 CONFIG_NO_HZ=n for older kernels). You normally will -not-
19 2. Omit scheduling-clock ticks on idle CPUs (CONFIG_NO_HZ_IDLE=y or
23 3. Omit scheduling-clock ticks on CPUs that are either idle or that
[all …]
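As a sketch only, a kernel command-line fragment for the adaptive-tick case, assuming a CONFIG_NO_HZ_FULL=y kernel on an 8-CPU machine::

   # Keep the scheduling-clock tick off CPUs 1-7 while each runs a single
   # task; CPU 0 remains a housekeeping CPU and keeps its tick.
   nohz_full=1-7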
/Documentation/driver-api/serial/
moxa-smartio.rst
36 - 2 ports multiport board
37 CP-102U, CP-102UL, CP-102UF
38 CP-132U-I, CP-132UL,
39 CP-132, CP-132I, CP132S, CP-132IS,
40 CI-132, CI-132I, CI-132IS,
41 (C102H, C102HI, C102HIS, C102P, CP-102, CP-102S)
43 - 4 ports multiport board
44 CP-104EL,
45 CP-104UL, CP-104JU,
46 CP-134U, CP-134U-I,
[all …]
/Documentation/admin-guide/device-mapper/
dm-dust.txt
1 dm-dust
6 at an arbitrary time.
8 This target behaves similarly to a linear target. At a given time,
26 encounter more bad sectors, at an unknown time or location.
27 With dm-dust, the user can use the "addbadblock" and "removebadblock"
31 This allows the pre-writing of test data and metadata prior to
35 -----------------
45 -------------------
47 First, find the size (in 512-byte sectors) of the device to be used:
49 $ sudo blockdev --getsz /dev/vdb1
[all …]
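A hedged continuation of the example above, creating a dust device over /dev/vdb1 with 512-byte blocks (run as root)::

   dmsetup create dust1 --table "0 $(blockdev --getsz /dev/vdb1) dust /dev/vdb1 0 512"
   # Mark sector 60 as bad, then start failing I/O to bad blocks.
   dmsetup message dust1 0 addbadblock 60
   dmsetup message dust1 0 enable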
/Documentation/media/v4l-drivers/
cafe_ccic.rst
1 .. SPDX-License-Identifier: GPL-2.0
9 ------------
12 controller. This is the controller found in first-generation OLPC systems,
19 sensor is known to work with this controller at this time.
23 .. code-block:: none
25 $ mplayer tv:// -tv driver=v4l2:width=640:height=480 -nosound
26 $ mplayer tv:// -tv driver=v4l2:width=640:height=480:outfmt=bgr16 -nosound
30 Load time options
31 -----------------
33 There are a few load-time options, most of which can be changed after
[all …]
