The cpu controller testplan includes a complete set of testcases that test the
cpu controller in different scenarios.

**These testcases test the cpu controller under a flat hierarchy.**

General Note:
=============
How to calculate expected cpu time:

If there are n groups, then

total share weight = grp1 shares + grp2 shares + ... + grpn shares

then the expected cpu time of any group (say grp1)
	= 100 * (grp1 shares) / (total share weight) %

(* If there are, say, 2 tasks in a group, both tasks divide the time of this
 * group, and this has no effect on tasks in other groups. So a task will
 * always get time from its own group's time only and will never affect
 * another group's share of time.
 * Even if a task runs with a different nice value, it will affect only the
 * tasks in that group.
)
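
As an illustration, here is a minimal C sketch of the calculation above (a
hypothetical helper, not part of the test sources):

    #include <stdio.h>

    /* Expected cpu time (in %) of group i, given the shares of all n groups. */
    static double expected_cpu_time(const unsigned int *shares, int n, int i)
    {
            unsigned long total = 0;
            int j;

            for (j = 0; j < n; j++)
                    total += shares[j];

            return 100.0 * shares[i] / total;
    }

    int main(void)
    {
            unsigned int shares[] = { 2, 4, 6 };    /* groups A, B, C */
            int i;

            for (i = 0; i < 3; i++)
                    printf("group %d: %.2f%%\n", i + 1,
                           expected_cpu_time(shares, 3, i));
            return 0;       /* prints 16.67%, 33.33%, 50.00% */
    }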

!!! Users need not worry about this calculation at all, as the test
dynamically calculates the expected cpu time of each task and prints it for
each task in the results file. The calculation is given here for the
reference of interested users.
For any other information please refer to cgroup.txt in the kernel
documentation.

TESTCASE DESCRIPTION:
====================

Test 01: FAIRNESS TEST
----------------------

This test consists of two test cases.
The test plan for this testcase is as below:

First of all, mount the cpu controller on /dev/cpuctl and create n groups.
The number of groups should be greater than the number of cpus in order to
check scheduling fairness (as we will run 1 task per group). By default each
group is assigned 1024 shares. The cpu controller schedules the tasks in the
different groups on the basis of the shares assigned to each group. So the
cpu usage of a task depends on the number of shares its group has out of the
total number of shares (no upper limit, but a lower limit of 2) and on the
number of tasks in that group (in this case only 1).
So unless this ratio (group A's shares / total shares of all groups) changes,
the cpu time for group A remains constant.
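
A minimal sketch of this setup step, assuming cgroup v1 semantics and the
/dev/cpuctl mount point used above (hypothetical group names, error handling
trimmed):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mount.h>
    #include <sys/stat.h>

    int main(void)
    {
            char path[64];
            FILE *f;
            int i;

            mkdir("/dev/cpuctl", 0755);
            if (mount("cpuctl", "/dev/cpuctl", "cgroup", 0, "cpu") != 0)
                    perror("mount");

            for (i = 1; i <= 3; i++) {      /* create n = 3 groups */
                    snprintf(path, sizeof(path), "/dev/cpuctl/group_%d", i);
                    mkdir(path, 0755);

                    /* a new group starts with the default 1024 shares */
                    strcat(path, "/cpu.shares");
                    f = fopen(path, "w");
                    if (f) {
                            fprintf(f, "%u\n", 1024u);
                            fclose(f);
                    }
            }
            return 0;
    }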

Let us say we have 3 groups (1 task each) A, B and C, having 2, 4 and 6
shares respectively. Hence if the tasks run indefinitely they are supposed to
get 16.66%, 33.33% and 50% of the cpu time respectively. This test case tests
that each group gets cpu time in the same (above) ratio irrespective of the
absolute values of the shares, provided the ratio is not changed, i.e. the
cpu time per group should not change if we change the shares from 2, 4, 6 to
200, 400, 600 or to 20K, 40K, 60K etc. (provided the working conditions do
not change).
Thus the scheduling is proportional-bandwidth scheduling and not
absolute-bandwidth scheduling.
This is the test and outcome for test01. For test02 the setup is kept the
same.
Test 02 tests whether the fairness persists across different runs over a
period of time. In this test more than one set of readings is taken, and the
expected outcome is that the cpu time for a task remains constant across all
the runs, provided the working environment is the same for the test.
Currently the support to create an ideal environment for all the runs is not
available in the test, because of a required feature missing from the kernel.
Hence there may be some variation between runs, depending on the execution of
default system tasks, which can run at any time.
The fix for this is expected to be merged in the next release.


How to view the results:
------------------------

The cpu time of each group (task) is calculated in %. There are two expected
outcomes of the test:
1. A group should get cpu time in the same ratio as its shares.
2. This time should not change with changes in the share values as long as
   the ratio between those values stays the same.

NOTE: In case 1 a variation of 1-2% is acceptable.
(here we have 1 task per group)
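
The pass check implied by this note can be sketched as follows (the 2%
tolerance and the helper name are illustrative assumptions; compile with
-lm):

    #include <math.h>
    #include <stdio.h>

    /* calculated and expected cpu time must agree within a small tolerance */
    static int within_tolerance(double calc_pct, double exp_pct)
    {
            return fabs(calc_pct - exp_pct) <= 2.0;     /* 1-2% slack */
    }

    int main(void)
    {
            printf("%s\n", within_tolerance(17.1, 16.66) ? "PASS" : "FAIL");
            return 0;
    }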

Test 03: GRANULARITY TEST
-------------------------
Granularity test with respect to share values.
In this test the shares value of some of the groups is increased and that of
some other groups is decreased. The expected cpu time of each task is
recalculated accordingly. The outcome of the test is that the calculated cpu
time must change in accordance with the change in the share values
(i.e. the calculated cpu time and the expected cpu time should be the same).

Test 04: NICE VALUE TEST
-------------------------
Renice all tasks of one group to -20 and let the tasks in all other groups
run with normal priority. The aim is to test that the nice effect stays
within the group, i.e. that shares remain dominant over nice values.
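
A minimal sketch of the renice step, assuming the pids of the group's tasks
are already known (hypothetical pid list, error handling trimmed):

    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/time.h>
    #include <sys/types.h>

    int main(void)
    {
            pid_t pids[] = { 1234, 1235 };  /* tasks of the reniced group */
            int i;

            for (i = 0; i < 2; i++)
                    if (setpriority(PRIO_PROCESS, pids[i], -20) != 0)
                            perror("setpriority");
            return 0;
    }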

Test 05: TASK MIGRATION TEST
----------------------------
In this test the first run is done with, say, n tasks in m groups. After the
first run a task is migrated from one group to another group. This task now
consumes cpu time from the new group.
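
Under cgroup v1 the migration itself amounts to writing the pid into the
destination group's tasks file; a minimal sketch (hypothetical pid and group
name):

    #include <stdio.h>
    #include <sys/types.h>

    static int migrate_task(pid_t pid, const char *group)
    {
            char path[64];
            FILE *f;

            snprintf(path, sizeof(path), "/dev/cpuctl/%s/tasks", group);
            f = fopen(path, "w");
            if (!f)
                    return -1;
            fprintf(f, "%d\n", (int)pid);
            return fclose(f);
    }

    int main(void)
    {
            return migrate_task(1234, "group_2") ? 1 : 0;
    }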

Test 06-08: NUM GROUPS vs NUMBER OF TASKS TEST
----------------------------------------------
In the following three testcases the total number of tasks is the same, and
each task is expected to get the same cpu time. These testcases test the
effect of creating more groups on fairness. (However, a latency check will be
done in the future.)

Test 06:      N X M (N groups with M tasks each)
-------

Test 07:      N*M X 1 (N*M groups with 1 task each)
-------

Test 08:      1 X N*M (1 group with N*M tasks)
-------

Test 09-10: STRESS TEST
----------
The next two testcases put stress on the system by creating a large number of
groups, each running a large number of tasks.

Test 09:      Heavy stress test with nice value change
-------
Creates 4 windows of different nice values. Each window runs some n groups.

Test 10:      Heavy stress test (effect of heavy group on light group)
-------
In this test one group has very few tasks while the others have a large
number of tasks. This tests whether fairness is still maintained.


Test 11-12: LATENCY TESTS
----------
The latency tests check whether the cpu becomes available (within a
reasonable amount of time) to a task which wakes up after a sleep(), when the
system is under sufficient load.

In the following two testcases we run n (NUM_TASKS, set in the script) tasks
as load tasks, which simply hog the cpu by doing sqrt calculations on a
number of type double. A task named the latency check task is launched after
these tasks. This task sleeps frequently and measures the latency as the
difference between the actual and the expected sleep durations.
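
The two roles can be sketched as follows (hypothetical parameters, cleanup
trimmed; compile with -lm): a load task hogs the cpu with sqrt of a double,
while the latency check task measures how much longer a sleep actually took:

    #include <math.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <time.h>
    #include <unistd.h>

    static double elapsed_ms(struct timespec a, struct timespec b)
    {
            return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    int main(void)
    {
            struct timespec before, after;
            struct timespec interval = { 0, 100 * 1000 * 1000 };    /* 100 ms */
            volatile double x = 987654.321;
            pid_t pid = fork();

            if (pid == 0)                   /* load task */
                    for (;;)
                            x = sqrt(x) + x;        /* hog the cpu */

            clock_gettime(CLOCK_MONOTONIC, &before);
            nanosleep(&interval, NULL);     /* latency check task sleeps */
            clock_gettime(CLOCK_MONOTONIC, &after);

            /* latency = actual sleep duration - requested sleep duration */
            printf("latency: %.3f ms\n", elapsed_ms(before, after) - 100.0);
            kill(pid, SIGTERM);
            return 0;
    }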

In the case of test 12 the tasks run under different groups, created
dynamically depending on the number of cpus in the machine (a minimum of 2,
otherwise 1.5 * NUM_CPUS). The tasks migrate to their groups automatically
before they start hogging the cpu. The latency check task also runs under one
of the groups.

Test 11:      cpuctl latency test 1
-------
This test adds one testcase for testing the latency when the group scheduler
is compiled into the kernel but not mounted (i.e. no task group is created).

Test 12:      cpuctl latency test 2
-------
This test adds one testcase for testing the latency when the group scheduler
is mounted and has tasks in different groups.

NOTE: There is no clear consensus on the maximum latency that the scheduler
should guarantee. Latency may vary from a few milliseconds on normal desktops
to even a minute in virtual guests. However, a latency of more than 20 ms
(under normal load, as latency is load dependent) is not considered good.
This test is meant to keep an eye on the maximum latency in different kernel
versions with respect to the further development of the group scheduler.

The threshold for this test is calculated based on the value exported by the
kernel in /proc/sys/kernel/sched_wakeup_granularity_ns. In case the kernel is
not compiled to export it, we use the same logic as the kernel:
sysctl_sched_wakeup_granularity = (1 + ln(nr_cpus)) * default granularity.
To make the allowed latency more practical we multiply it by 2.
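
A sketch of this threshold calculation (the 10 ms default granularity below
is an assumption for illustration; the test reads the real value from /proc
when it is available; compile with -lm):

    #include <math.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            long nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);
            double default_gran_ns = 10 * 1e6;      /* assumed base value */
            double gran_ns = 0;
            FILE *f = fopen("/proc/sys/kernel/sched_wakeup_granularity_ns", "r");

            if (f) {
                    if (fscanf(f, "%lf", &gran_ns) != 1)
                            gran_ns = 0;
                    fclose(f);
            }
            if (gran_ns == 0)       /* fall back to the kernel's scaling logic */
                    gran_ns = (1 + log((double)nr_cpus)) * default_gran_ns;

            /* the allowed latency is twice the wakeup granularity */
            printf("threshold: %.3f ms\n", 2 * gran_ns / 1e6);
            return 0;
    }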

So even if the test shows FAIL, it may not be an actual failure.


(
 In all tests (1-10) the calculated cpu time and the expected cpu time should
 be the same.
 calc:- the calculated cpu time obtained for a running task
 exp:- the expected cpu time as per the shares of the group and the number of
 tasks in the group
)
181