
PYBENCH - A Python Benchmark Suite
Extendable suite of low-level benchmarks for measuring the performance
of the Python implementation (interpreter, compiler or virtual machine).
The command line interface for pybench is the file pybench.py. Run
this script with option '--help' to get a listing of the possible
options.
Micro-Manual
------------
Run 'pybench.py -h' to see the help screen. Run 'pybench.py' to run
the benchmark suite using default settings, or 'pybench.py -f <file>'
to also have the results stored in a file.
It is usually a good idea to run pybench.py multiple times to see
whether the environment, timers and benchmark run-times are suitable
for doing benchmark tests.
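One quick way to check whether a timer is fit for benchmarking is to
look at its resolution. The following standalone helper is written for
this document (it is not part of pybench) and estimates the smallest
time difference a given timer can resolve:

```python
import time

def timer_resolution(timer=time.perf_counter, samples=100):
    """Estimate the smallest time difference `timer` can resolve.

    Calls the timer in a tight loop and records the smallest nonzero
    difference between consecutive readings.
    """
    diffs = []
    for _ in range(samples):
        t0 = timer()
        t1 = timer()
        while t1 == t0:          # spin until the timer ticks
            t1 = timer()
        diffs.append(t1 - t0)
    return min(diffs)
```

Comparing timer_resolution(time.perf_counter) against
timer_resolution(time.time) on your system will often show why a
high-resolution timer is preferable for short benchmark runs.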
You can use the comparison feature of pybench.py ('pybench.py -c
<file>') to check how well the system behaves in comparison to a
reference run.
Note that other applications running on the same machine tend to make
the measurements inconsistent. Examples include: web-browsers, email
clients, RSS readers and similar background programs.
To run only a subset of the tests, use the filtering option,
e.g. 'pybench.py -t string' will only run/show the tests that have
'string' in their name.
This is the current output of pybench.py --help:
------------------------------------------------------------------------
PYBENCH - a benchmark test suite for Python interpreters/compilers.
------------------------------------------------------------------------

 -n arg           number of rounds (10)
 -f arg           save benchmark to file arg ()
 -c arg           compare benchmark with the one in file arg ()
 -s arg           show benchmark in file arg, then exit ()
 -w arg           set warp factor to arg (10)
 -t arg           run only tests with names matching arg ()
 -C arg           set the number of calibration runs to arg (20)
 -d               hide noise in comparisons (0)
 -v               verbose output (not recommended) (0)
 --with-gc        enable garbage collection (0)
 --with-syscheck  use default sys check interval (0)
 --timer arg      use given timer (time.time)
 -h               show this help text
 --help           show this help text
 --debug          enable debugging
 --copyright      show copyright
 --examples       show examples of usage
The normal operation is to run the suite and display the
results. Use -f to save them for later reuse or comparisons.
   python2.1 pybench.py -f p21.pybench
   python2.5 pybench.py -f p25.pybench
   python pybench.py -s p25.pybench -c p21.pybench
Sample output
-------------
-------------------------------------------------------------------------------
Benchmark: 2006-06-12 12:09:25
-------------------------------------------------------------------------------

Platform ID: Linux-2.6.8-24.19-default-x86_64-with-SuSE-9.2-x86-64
pybench tests are Python classes with a .test() method, which runs
.rounds number of .operations test operations each, and a .calibrate()
method, which does the same except that it does not actually
execute the operations.
class IntegerCounting(Test):

    # Version number of the test as float (x.yy); this is important
    # for comparisons of benchmark runs - tests with unequal version
    # numbers will not get compared.
    version = 2.0

    # The number of abstract operations done in each round of the
    # test. An operation is the basic unit of what you want to
    # measure. The benchmark will output the amount of run-time per
    # operation. Note that it may be necessary to repeat
    # sets of operations more than once per test round. The measured
    # overhead per test round should be less than 1 second.
    operations = 20

    # Number of rounds to execute per test run. This should be
    # adjusted to a figure that results in a test run-time of between
    # 1-2 seconds (at warp 1).
    rounds = 100000

    def test(self):
        """ Run the test: self.rounds rounds of
            self.operations operations each.
        """
        a = 1
        # Run test rounds
        for i in xrange(self.rounds):
            # Repeat the operations per round to raise the run-time
            # per operation significantly above the noise level of the
            # for-loop overhead.

            # Execute 20 operations (a += 1):
            a += 1
            # ... (repeated for a total of 20 operations)

    def calibrate(self):
        """ Do everything that is needed to setup and run the
            test - except for the actual operations.
        """
        # Run test rounds (without actually doing any operation)
        for i in xrange(self.rounds):
            # Skip the actual execution of the operations, since we
            # only want to measure the per-round overhead.
            pass
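The mechanics behind such a test class can be sketched in a
self-contained way. The following toy MiniTest class is a hypothetical
illustration written for this document (in Python 3 syntax), not
pybench's actual implementation; pybench's real Test base class
additionally handles versioning, warp factors, timers and statistics:

```python
import time

class MiniTest:
    """Toy illustration of the rounds/operations scheme."""

    operations = 20   # abstract operations per round
    rounds = 100000   # rounds per test run

    def test(self):
        # The measured work: 20 increments per round. (pybench
        # unrolls these; a short inner loop keeps this sketch
        # compact, at the cost of a little extra loop overhead.)
        a = 1
        for _ in range(self.rounds):
            for _ in range(self.operations):
                a += 1

    def calibrate(self):
        # Same round loop, but without the operations, so that the
        # loop overhead can be subtracted from the measurement.
        for _ in range(self.rounds):
            pass

    def run(self):
        # Time the test and the calibration run, subtract the
        # overhead, and normalize to seconds per operation.
        t0 = time.perf_counter()
        self.test()
        t1 = time.perf_counter()
        self.calibrate()
        t2 = time.perf_counter()
        return ((t1 - t0) - (t2 - t1)) / (self.rounds * self.operations)
```

MiniTest().run() returns the estimated run-time per single 'a += 1'
operation, which is the per-operation figure pybench reports.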
Version History
---------------
- made timer a parameter
- changed the platform default timer to use high-resolution
  timers rather than process timers
- added option to select timer
- added process time timer (using systimes.py)
- changed to use min() as timing estimator (the average
  is still computed for reference)
- garbage collection is turned off per default
- sys check interval is set to the highest possible value
- calibration is now a separate step and done using
  a separate set of calibration runs
- modified the tests to each give a run-time of between
  100-200ms using warp 10
- changed default warp factor to 10 (from 20)
- compared results with timeit.py and confirmed measurements
- bumped all test versions to 2.0
- updated platform.py to the latest version
- changed the output format a bit to make it look
  nicer
- refactored the APIs somewhat
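The switch from averaging to min() as the timing estimator follows a
common observation: system noise (scheduling, other processes) can
only ever make a run slower, never faster, so the minimum over
several repeats is the best estimate of the intrinsic run-time. A
standalone sketch of the idea (written for this document, not
pybench code):

```python
import time

def min_timing(func, repeats=5):
    """Return the minimum wall-clock time of `func` over `repeats` runs.

    Noise can only add time to a run, so the minimum is a better
    estimator of intrinsic run-time than the average.
    """
    timings = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        func()
        timings.append(time.perf_counter() - t0)
    return min(timings)

# Example: time a small computation
best = min_timing(lambda: sum(range(100000)))
```

The standard library's timeit module applies the same reasoning:
its repeat() documentation recommends taking the minimum of the
returned timings.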
--
Marc-Andre Lemburg