
Searched +full:always +full:- +full:running (Results 1 – 25 of 1049) sorted by relevance


/kernel/linux/linux-6.6/drivers/thermal/
cpuidle_cooling.c
1 // SPDX-License-Identifier: GPL-2.0
21 * struct cpuidle_cooling_device - data for the idle cooling device
31 * cpuidle_cooling_runtime - Running time computation
35 * The running duration is computed from the idle injection duration
37 * means the running duration is zero. If we have a 50% ratio
39 * running duration.
43 * running = idle x ((100 / ratio) - 1)
47 * running = (idle x 100) / ratio - idle
50 * with 10ms of idle injection and 10ms of running duration.
60 return ((idle_duration_us * 100) / state) - idle_duration_us; in cpuidle_cooling_runtime()
[all …]
/kernel/linux/linux-5.10/drivers/thermal/
cpuidle_cooling.c
1 // SPDX-License-Identifier: GPL-2.0
20 * struct cpuidle_cooling_device - data for the idle cooling device
32 * cpuidle_cooling_runtime - Running time computation
36 * The running duration is computed from the idle injection duration
38 * means the running duration is zero. If we have a 50% ratio
40 * running duration.
44 * running = idle x ((100 / ratio) - 1)
48 * running = (idle x 100) / ratio - idle
51 * with 10ms of idle injection and 10ms of running duration.
61 return ((idle_duration_us * 100) / state) - idle_duration_us; in cpuidle_cooling_runtime()
[all …]
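Both thermal results above quote the same helper, so a single worked example covers them. The following is a minimal userspace sketch of the documented formula, running = (idle x 100) / ratio - idle; cooling_runtime is a hypothetical stand-in for the kernel's cpuidle_cooling_runtime(), taking the ratio directly instead of reading it from the cooling device state:

    #include <stdio.h>

    /*
     * Running-time computation from the snippet above: a 100% ratio
     * yields zero running time, a 50% ratio yields running == idle.
     */
    static unsigned int cooling_runtime(unsigned int idle_duration_us,
                                        unsigned int ratio_percent)
    {
        return ((idle_duration_us * 100) / ratio_percent) - idle_duration_us;
    }

    int main(void)
    {
        /* 10000us idle at a 50% ratio -> 10000us running, as documented. */
        printf("running = %u us\n", cooling_runtime(10000, 50));
        return 0;
    }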
/kernel/linux/linux-6.6/arch/x86/xen/
Kconfig
1 # SPDX-License-Identifier: GPL-2.0
29 Support running as a Xen PV guest.
32 bool "Limit Xen pv-domain memory to 512GB"
39 pv-domains with more than 512 GB of RAM. This option controls the
41 It is always possible to change the default via specifying the
65 Support running as a Xen PVHVM guest.
86 Support for running as a Xen PVH guest.
95 Support running as a Xen Dom0 guest.
98 bool "Always use safe MSR accesses in PV guests"
/kernel/linux/linux-6.6/include/uapi/linux/
membarrier.h
31 * enum membarrier_cmd - membarrier system call command
34 * @MEMBARRIER_CMD_GLOBAL: Execute a memory barrier on all running threads.
36 * is ensured that all running threads have passed
38 * user-space addresses match program order between
40 * (non-running threads are de facto in such a
42 * running on the system. This command returns 0.
44 * Execute a memory barrier on all running threads
48 * is ensured that all running threads have passed
50 * user-space addresses match program order between
52 * (non-running threads are de facto in such a
[all …]
/kernel/linux/linux-5.10/include/uapi/linux/
membarrier.h
31 * enum membarrier_cmd - membarrier system call command
34 * @MEMBARRIER_CMD_GLOBAL: Execute a memory barrier on all running threads.
36 * is ensured that all running threads have passed
38 * user-space addresses match program order between
40 * (non-running threads are de facto in such a
42 * running on the system. This command returns 0.
44 * Execute a memory barrier on all running threads
48 * is ensured that all running threads have passed
50 * user-space addresses match program order between
52 * (non-running threads are de facto in such a
[all …]
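The two membarrier.h results describe the same UAPI in linux-6.6 and linux-5.10. As a hedged illustration of how the documented commands are used, here is a minimal sketch; membarrier(2) has no glibc wrapper, so it goes through syscall(2), and MEMBARRIER_CMD_QUERY returns a bitmask of the commands the running kernel supports:

    #include <linux/membarrier.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdio.h>

    static int membarrier(int cmd, unsigned int flags, int cpu_id)
    {
        return syscall(__NR_membarrier, cmd, flags, cpu_id);
    }

    int main(void)
    {
        /* Ask the kernel which membarrier commands are available. */
        int supported = membarrier(MEMBARRIER_CMD_QUERY, 0, 0);

        if (supported < 0 || !(supported & MEMBARRIER_CMD_GLOBAL)) {
            fprintf(stderr, "MEMBARRIER_CMD_GLOBAL not supported\n");
            return 1;
        }

        /*
         * Memory barrier on all running threads; returns 0 once every
         * running thread is guaranteed to have passed through a state
         * where memory order matches program order.
         */
        return membarrier(MEMBARRIER_CMD_GLOBAL, 0, 0);
    }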
/kernel/linux/linux-6.6/Documentation/admin-guide/hw-vuln/
core-scheduling.rst
1 .. SPDX-License-Identifier: GPL-2.0
9 workloads may benefit from running on the same core as they don't need the same
15 ----------------
16 A cross-HT attack involves the attacker and victim running on different Hyper
18 full mitigation of cross-HT attacks is to disable Hyper Threading (HT). Core
19 scheduling is a scheduler feature that can mitigate some (not all) cross-HT
21 user-designated trusted group can share a core. This increase in core sharing
23 will always improve, though that is seen to be the case with a number of real
26 not always: as synchronizing scheduling decisions across 2 or more CPUs in a
27 core involves additional overhead - especially when the system is lightly
[all …]
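The core-scheduling document covers the prctl(2) interface that assigns tasks to user-designated trusted groups (cookies). A minimal sketch of that interface follows, assuming a kernel built with CONFIG_SCHED_CORE and the PR_SCHED_CORE constants from <linux/prctl.h>:

    #include <linux/prctl.h>
    #include <sys/prctl.h>
    #include <stdio.h>

    int main(void)
    {
        /*
         * Create a new core-scheduling cookie for the whole thread
         * group, so these threads only ever share an SMT core with
         * each other, never with untrusted tasks.
         */
        if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
                  PR_SCHED_CORE_SCOPE_THREAD_GROUP, 0)) {
            perror("PR_SCHED_CORE_CREATE");
            return 1;
        }
        return 0;
    }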
/kernel/linux/linux-5.10/arch/x86/xen/
Kconfig
1 # SPDX-License-Identifier: GPL-2.0
27 Support running as a Xen PV guest.
39 Support running as a Xen PV Dom0 guest.
46 Support running as a Xen PVHVM guest.
53 bool "Limit Xen pv-domain memory to 512GB"
60 pv-domains with more than 512 GB of RAM. This option controls the
62 It is always possible to change the default via specifying the
79 bool "Support for running as a Xen PVH guest"
/kernel/linux/linux-6.6/Documentation/scheduler/
sched-nice-design.rst
6 nice-levels implementation in the new Linux scheduler.
8 Nice levels were always pretty weak under Linux and people continuously
34 -*----------------------------------*-----> [nice level]
35 -20 | +19
49 people were running number crunching apps at nice +19.)
52 right minimal granularity - and this translates to 5% CPU utilization.
53 But the fundamental HZ-sensitive property for nice+19 still remained,
56 too _strong_ :-)
58 To sum it up: we always wanted to make nice levels more consistent, but
79 depend on the nice level of the parent shell - if it was at nice -10 the
[all …]
sched-util-clamp.rst
1 .. SPDX-License-Identifier: GPL-2.0
57 foreground, top-app, etc. Util clamp can be used to constrain how much
60 the ones belonging to the currently active app (top-app group). Beside this
65 1. The big cores are free to run top-app tasks immediately. top-app
90 UCLAMP_MIN=1024 will ensure such tasks will always see the highest performance
91 level when they start running.
106 Note that by design RT tasks don't have per-task PELT signal and must always
109 Note that using schedutil always implies a single delay to modify the frequency
111 helps picking what frequency to request instead of schedutil always requesting
114 See :ref:`section 3.4 <uclamp-default-values>` for default values and
[all …]
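Util clamp values such as the UCLAMP_MIN=1024 case mentioned above are set from userspace through sched_setattr(2). The sketch below declares struct sched_attr with the layout given in the sched_setattr(2) man page (the syscall has no glibc wrapper); treat it as an illustration rather than the document's own example:

    #include <linux/sched.h>   /* SCHED_FLAG_UTIL_CLAMP, SCHED_FLAG_KEEP_ALL */
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Layout per sched_setattr(2); no glibc wrapper exists. */
    struct sched_attr {
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;
        uint32_t sched_priority;
        uint64_t sched_runtime;
        uint64_t sched_deadline;
        uint64_t sched_period;
        uint32_t sched_util_min;
        uint32_t sched_util_max;
    };

    int main(void)
    {
        struct sched_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        /* Keep the current policy; only apply the clamp values. */
        attr.sched_flags = SCHED_FLAG_KEEP_ALL | SCHED_FLAG_UTIL_CLAMP;
        attr.sched_util_min = 1024;  /* always request maximum performance */
        attr.sched_util_max = 1024;

        if (syscall(SYS_sched_setattr, 0, &attr, 0)) {
            perror("sched_setattr");
            return 1;
        }
        return 0;
    }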
/kernel/linux/linux-5.10/Documentation/scheduler/
sched-nice-design.rst
6 nice-levels implementation in the new Linux scheduler.
8 Nice levels were always pretty weak under Linux and people continuously
34 -*----------------------------------*-----> [nice level]
35 -20 | +19
49 people were running number crunching apps at nice +19.)
52 right minimal granularity - and this translates to 5% CPU utilization.
53 But the fundamental HZ-sensitive property for nice+19 still remained,
56 too _strong_ :-)
58 To sum it up: we always wanted to make nice levels more consistent, but
79 depend on the nice level of the parent shell - if it was at nice -10 the
[all …]
/kernel/linux/linux-6.6/rust/kernel/
task.rs
1 // SPDX-License-Identifier: GPL-2.0
10 /// Returns the currently running task.
14 // SAFETY: Deref + addr-of below create a temporary `TaskRef` that cannot outlive the
26 /// Instances of this type are always ref-counted, that is, a call to `get_task_struct` ensures
56 /// fn new() -> Self {
90 pub unsafe fn current() -> impl Deref<Target = Task> { in current()
99 fn deref(&self) -> &Self::Target { in current()
108 // SAFETY: If the current thread is still running, the current task is valid. Given in current()
117 pub fn group_leader(&self) -> &Task { in group_leader()
118 // SAFETY: By the type invariant, we know that `self.0` is a valid task. Valid tasks always in group_leader()
[all …]
/kernel/linux/linux-6.6/Documentation/virt/kvm/x86/
running-nested-guests.rst
1 .. SPDX-License-Identifier: GPL-2.0
4 Running nested guests with KVM
8 can be KVM-based or a different hypervisor). The straightforward
12 .----------------. .----------------.
17 |----------------'--'----------------|
22 .------------------------------------------------------.
25 |------------------------------------------------------|
27 '------------------------------------------------------'
31 - L0 – level-0; the bare metal host, running KVM
33 - L1 – level-1 guest; a VM running on L0; also called the "guest
[all …]
/kernel/linux/linux-5.10/Documentation/virt/kvm/
running-nested-guests.rst
2 Running nested guests with KVM
6 can be KVM-based or a different hypervisor). The straightforward
10 .----------------. .----------------.
15 |----------------'--'----------------|
20 .------------------------------------------------------.
23 |------------------------------------------------------|
25 '------------------------------------------------------'
29 - L0 – level-0; the bare metal host, running KVM
31 - L1 – level-1 guest; a VM running on L0; also called the "guest
32 hypervisor", as it itself is capable of running KVM.
[all …]
/kernel/linux/linux-6.6/Documentation/leds/
leds-lp55xx.rst
8 -----------
14 Device attributes for user-space interface
15 Program memory for running LED patterns
50 - Maximum number of channels
51 - Reset command, chip enable command
52 - Chip specific initialization
53 - Brightness control register access
54 - Setting LED output current
55 - Program memory address access for running patterns
56 - Additional device specific attributes
[all …]
/kernel/linux/linux-5.10/Documentation/leds/
leds-lp55xx.rst
8 -----------
14 Device attributes for user-space interface
15 Program memory for running LED patterns
50 - Maximum number of channels
51 - Reset command, chip enable command
52 - Chip specific initialization
53 - Brightness control register access
54 - Setting LED output current
55 - Program memory address access for running patterns
56 - Additional device specific attributes
[all …]
/kernel/linux/linux-6.6/scripts/basic/
Makefile
1 # SPDX-License-Identifier: GPL-2.0-only
5 hostprogs-always-y += fixdep
7 # randstruct: the seed is needed before building the gcc-plugin or
8 # before running a Clang kernel build.
9 gen-randstruct-seed := $(srctree)/scripts/gen-randstruct-seed.sh
12 $(CONFIG_SHELL) $(gen-randstruct-seed) \
14 $(obj)/randstruct.seed: $(gen-randstruct-seed) FORCE
16 always-$(CONFIG_RANDSTRUCT) += randstruct.seed
/kernel/linux/linux-6.6/arch/powerpc/kvm/
book3s_hv_hmi.c
1 // SPDX-License-Identifier: GPL-2.0-or-later
23 * been loaded yet and hence no guests are running, or running in wait_for_subcore_guest_exit()
26 * If no KVM is in use, no need to co-ordinate among threads in wait_for_subcore_guest_exit()
27 * as all of them will always be in host and no one is going in wait_for_subcore_guest_exit()
34 if (!local_paca->sibling_subcore_state) in wait_for_subcore_guest_exit()
38 while (local_paca->sibling_subcore_state->in_guest[i]) in wait_for_subcore_guest_exit()
44 if (!local_paca->sibling_subcore_state) in wait_for_tb_resync()
48 &local_paca->sibling_subcore_state->flags)) in wait_for_tb_resync()
/kernel/linux/linux-5.10/drivers/remoteproc/
remoteproc_sysfs.c
1 // SPDX-License-Identifier: GPL-2.0-only
18 return sprintf(buf, "%s", rproc->recovery_disabled ? "disabled\n" : "enabled\n"); in recovery_show()
53 rproc->recovery_disabled = false; in recovery_store()
56 rproc->recovery_disabled = true; in recovery_store()
61 return -EINVAL; in recovery_store()
69 * A coredump-configuration-to-string lookup table, for exposing a
70 * human readable configuration via sysfs. Always keep in sync with
85 return sprintf(buf, "%s\n", rproc_coredump_str[rproc->dump_conf]); in coredump_show()
110 if (rproc->state == RPROC_CRASHED) { in coredump_store()
111 dev_err(&rproc->dev, "can't change coredump configuration\n"); in coredump_store()
[all …]
/kernel/linux/linux-6.6/sound/usb/
card.h
1 /* SPDX-License-Identifier: GPL-2.0 */
9 #define SYNC_URBS 4 /* always four urbs for sync */
16 unsigned int fmt_type; /* USB audio format type (1-3) */
18 unsigned int frame_size; /* samples per frame for non-audio */
68 int opened; /* open refcount; protect with chip->mutex */
69 atomic_t running; /* running status */ member
77 atomic_t state; /* running state */
122 unsigned int fill_max:1; /* fill max packet size always */
131 bool lowlatency_playback; /* low-latency playback mode */
132 bool need_setup; /* (re-)need for hw_params? */
[all …]
/kernel/linux/linux-5.10/Documentation/dev-tools/kunit/
usage.rst
1 .. SPDX-License-Identifier: GPL-2.0
21 to unit test code that was otherwise un-unit-testable.
27 --------------
33 the kernel; for example, it does not intend to be an end-to-end testing
37 ---------------------
48 -------------
57 .. code-block:: c
68 In the above example ``example_test_success`` always passes because it does
70 ``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
81 .. code-block:: c
[all …]
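The usage guide's example_test_success/example_test_failure pair is truncated in the snippet; as a compact stand-in, here is a self-contained KUnit suite using the same API (add() and the suite name are hypothetical; this builds as kernel code with CONFIG_KUNIT enabled):

    #include <kunit/test.h>

    /* Trivial code under test. */
    static int add(int a, int b)
    {
        return a + b;
    }

    /* This case always passes, like example_test_success above. */
    static void add_test_basic(struct kunit *test)
    {
        KUNIT_EXPECT_EQ(test, 3, add(1, 2));
        KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
    }

    static struct kunit_case example_test_cases[] = {
        KUNIT_CASE(add_test_basic),
        {}
    };

    static struct kunit_suite example_test_suite = {
        .name = "example",
        .test_cases = example_test_cases,
    };
    kunit_test_suite(example_test_suite);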
faq.rst
1 .. SPDX-License-Identifier: GPL-2.0
25 Does KUnit support running on architectures other than UML?
40 For more information, see :ref:`kunit-on-non-uml`.
45 test, or an end-to-end test.
47 - A unit test is supposed to test a single unit of code in isolation, hence the
52 - An integration test tests the interaction between a minimal set of components,
59 - An end-to-end test usually tests the entire system from the perspective of the
60 code under test. For example, someone might write an end-to-end test for the
71 1. Try running ``./tools/testing/kunit/kunit.py run`` with the ``--raw_output``
74 2. Instead of running ``kunit.py run``, try running ``kunit.py config``,
[all …]
/kernel/linux/linux-5.10/kernel/sched/
pelt.h
2 #include "sched-pelt.h"
7 int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
8 int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
15 return READ_ONCE(rq->avg_thermal.load_avg); in thermal_load_avg()
31 int update_irq_load_avg(struct rq *rq, u64 running);
34 update_irq_load_avg(struct rq *rq, u64 running) in update_irq_load_avg() argument
40 #define PELT_MIN_DIVIDER (LOAD_AVG_MAX - 1024)
44 return PELT_MIN_DIVIDER + avg->period_contrib; in get_pelt_divider()
55 enqueued = avg->util_est.enqueued; in cfs_se_util_change()
61 WRITE_ONCE(avg->util_est.enqueued, enqueued); in cfs_se_util_change()
[all …]
/kernel/linux/linux-6.6/Documentation/networking/
xfrm_sync.rst
1 .. SPDX-License-Identifier: GPL-2.0
21 This way a backup stays as closely up-to-date as an active member.
25 For this reason, we also add a nagle-like algorithm to restrict
28 These thresholds are set system-wide via sysctls or can be updated
32 - the lifetime byte counter
36 - the replay sequence for both inbound and outbound
39 ----------------------
41 nlmsghdr:aevent_id:optional-TLVs.
76 message (kernel<->user) as well the cause (config, query or event).
87 -----------------------------------------
[all …]
/kernel/linux/linux-5.10/Documentation/networking/
xfrm_sync.rst
1 .. SPDX-License-Identifier: GPL-2.0
21 This way a backup stays as closely up-to-date as an active member.
25 For this reason, we also add a nagle-like algorithm to restrict
28 These thresholds are set system-wide via sysctls or can be updated
32 - the lifetime byte counter
36 - the replay sequence for both inbound and outbound
39 ----------------------
41 nlmsghdr:aevent_id:optional-TLVs.
76 message (kernel<->user) as well the cause (config, query or event).
87 -----------------------------------------
[all …]
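Both xfrm_sync results describe messages laid out as nlmsghdr:aevent_id:optional-TLVs. As a hedged sketch of the payload that follows the netlink header, the following uses struct xfrm_aevent_id from the <linux/xfrm.h> UAPI header (the SPI and flag values here are arbitrary illustrations):

    #include <linux/xfrm.h>    /* struct xfrm_aevent_id, XFRM_AE_* flags */
    #include <netinet/in.h>    /* IPPROTO_ESP */
    #include <arpa/inet.h>     /* htonl */
    #include <stdio.h>

    int main(void)
    {
        struct xfrm_aevent_id id = {0};

        id.sa_id.spi   = htonl(0x100);  /* SA identity: SPI ... */
        id.sa_id.proto = IPPROTO_ESP;   /* ... and protocol */
        id.flags       = XFRM_AE_RVAL;  /* event carries a replay value */

        printf("aevent payload: %zu bytes before optional TLVs\n",
               sizeof(id));
        return 0;
    }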
/kernel/linux/linux-6.6/kernel/sched/
pelt.h
2 #include "sched-pelt.h"
7 int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
8 int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
15 return READ_ONCE(rq->avg_thermal.load_avg); in thermal_load_avg()
31 int update_irq_load_avg(struct rq *rq, u64 running);
34 update_irq_load_avg(struct rq *rq, u64 running) in update_irq_load_avg() argument
40 #define PELT_MIN_DIVIDER (LOAD_AVG_MAX - 1024)
44 return PELT_MIN_DIVIDER + avg->period_contrib; in get_pelt_divider()
55 enqueued = avg->util_est.enqueued; in cfs_se_util_change()
61 WRITE_ONCE(avg->util_est.enqueued, enqueued); in cfs_se_util_change()
[all …]
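The pelt.h snippets show the PELT divider as PELT_MIN_DIVIDER + avg->period_contrib. A standalone sketch of that arithmetic follows; the LOAD_AVG_MAX value of 47742 is assumed from the generated sched-pelt.h for the default 32ms half-life, and period_contrib is the sub-1024us contribution of the partially elapsed current period:

    #include <stdio.h>

    #define LOAD_AVG_MAX      47742  /* assumed: saturation of the PELT series */
    #define PELT_MIN_DIVIDER  (LOAD_AVG_MAX - 1024)

    /* Divider used when turning accumulated sums into averages. */
    static unsigned int get_pelt_divider(unsigned int period_contrib)
    {
        return PELT_MIN_DIVIDER + period_contrib;
    }

    int main(void)
    {
        printf("divider range: %u..%u\n",
               get_pelt_divider(0), get_pelt_divider(1023));
        return 0;
    }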
