# VM Benchmarks

VMB is a tool for running benchmarks for a variety of Virtual Machines,
platforms and languages.

## Quick start

The only requirement is Python 3 (3.7+); no additional modules are needed.
The Virtual Machine under test should be installed on the host and/or device.
See [Notes on setup for platforms](#platforms).

#### Option 1: Using the wrapper script

This option lets you try VMB without any installation.

```shell
# Run all js and ts examples on Node (needs node and tsc):
./run-vmb.sh all -p node_host `pwd`/examples/benchmarks

# Run only js examples on Node (needs node installed):
./run-vmb.sh all -l js -p node_host `pwd`/examples/benchmarks

# See all options explained:
./run-vmb.sh help

# List all available plugins:
./run-vmb.sh list
```

#### Option 2: Build and install the vmb module (preferred)

The wrapper script `run-vmb.sh` provides all of the module's functionality,
but it requires absolute paths on the command line.
The Python package, built and installed, provides more flexibility
in dealing with various benchmark sources and repos.
Once installed, it exposes a single `vmb` command for generating,
running and reporting tests.

```sh
# This command will build and install the VMB python module on your machine.
# No root permissions required.
# After that, the `vmb` cli will appear in your $PATH

make vmb

# Run all js and ts tests in the current dir
vmb all -p node_host

# Run only js tests with tag 'sanity' from 'examples'
vmb all -p node_host -l js -T sanity ./examples
```

#### Usage Example: Compare 2 runs

```shell
export PANDA_BUILD=~/arkcompiler/runtime_core/static_core/build
# Run ets and ts tests on ArkTS in
# 1) interpretation mode
vmb all -p arkts_host --aot-skip-libs \
    --mode=int --report-json=int.json ./examples/benchmarks/
# 2) in JIT mode
vmb all -p arkts_host --aot-skip-libs \
    --mode=jit --report-json=jit.json ./examples/benchmarks/
# Compare results
vmb report --compare int.json jit.json

 Comparison: arkts_host-int vs arkts_host-jit
 ========================================
 Time: 1.99e+05->3.80e+04(better -80.9%); Size: 9.17e+04->9.17e+04(same); RSS: 4.94e+04->5.34e+04(worse +8.1%)
 =============================================================================================================
```

## Commands:
The `vmb` cli supports the following commands:
- `gen` - generate benchmark tests from [doclets](#doclet-format)
- `run` - run tests generated by `gen` or [non-doclet tests](#non-doclet-tests)
- `all` = `gen` + `run`
- `report` - work with json reports (display, compare)
- `list` - show info on supported langs and platforms
- `help` - print options for all commands (also `vmb <cmd> --help`)

## Selecting platform
The required `-p` (`--platform`) option to the `run` or `all` command should be the name
of a plugin from `<vmb-package-root>/plugins/platforms`,
or from `<extra-plugins-dir>/platforms` if specified.
E.g. `ark_js_vm_host` for ArkHz on the host machine,
or `arkts_ohos` for ArkTS on an OHOS device.

## Selecting language and source files
The `gen` command requires the `-l` (`--langs`) option.
E.g. `vmb gen -l ets,swift,ts,js ./examples/benchmarks`
will generate benches for all 4 languages in examples.
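For reference, a doclet source file is a plain class with annotations in comments. The sketch below is hypothetical (class, method and field names are made up) and follows the annotations described in the [Doclet format](#doclet-format) section; the exact comment style expected by the generator may differ:

```typescript
/**
 * @State
 * @Tags sanity
 */
class ArraySum {
    data: number[] = [];

    /**
     * @Setup
     */
    setup(): void {
        // Fill the state with 1000 elements before measurement
        for (let i = 0; i < 1000; i++) {
            this.data.push(i);
        }
    }

    /**
     * @Benchmark
     * @returns {Int}
     */
    sum(): number {
        // The returned value is consumed by the harness,
        // which helps prevent dead-code elimination
        let acc = 0;
        for (const x of this.data) {
            acc += x;
        }
        return acc;
    }
}
```

Running `vmb gen -l ts` over a file like this would emit a runnable benchmark program into the `generated` directory.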

A `--langs` option provided to the `all` command will override the languages supported by the platform.
E.g. `vmb all -l js -p v_8_host ./tests` will skip `ts` (TypeScript).

Source files (doclets) could be overridden
by the `--src-langs` option to the `all` or `run` command.

Each platform defines the languages it can deal with,
and each language defines its source file extension.
Defaults are:

| platform      | langs      | sources                     |
|---------------|------------|-----------------------------|
| `arkts_*`     | `ets`      | `*.ets`                     |
| `ark_js_vm_*` | `ts`       | `*.ts`                      |
| `swift_*`     | `swift`    | `*.swift`                   |
| `v_8_*`       | `ts`, `js` | `*.ts`, `*.js`              |
| `node_*`      | `ts`, `js` | `*.ts`, `*.js`              |
| `interop_s2d` | `ets`      | `*.ets` + `lib*.*`          |
| `interop_d2s` | `ts`, `js` | `*.ts`, `*.js` + `lib*.ets` |
| `interop_d2d` | `ts`, `js` | `*.ts`, `*.js` + `lib*.*`   |

## Selecting and filtering tests:
- Any positional argument to the `all` or `gen` command is treated
  as a path to doclets: `vmb all -p x ./test1/test1.js ./more-tests/ ./even-more`
- Any positional argument to the `run` command is treated
  as a path to generated tests: `vmb run -p x ./generated`
- Files with the `.lst` extension are treated as lists of paths, relative to `CWD`
- `--tags=sanity,my` (`-T`) will generate and run only tests tagged with `sanity` OR `my`
- `--tests=my_test` (`-t`) will run only tests containing `my_test` in the name
- To exclude some tests from a run, point `--exclude-list` to a file with test names to exclude
- To exclude tests with certain tags, use the `-ST` (`--skip-tags`) option. E.g. `--skip-tags=negative,flaky`

## Benchmark's measurement options:
* `-wi` (`--warmup-iters`) controls the number of warmup iterations,
  default is 2.
* `-mi` (`--measure-iters`) controls the number of measurement iterations,
  default is 3.
* `-it` (`--iter-time`) controls the duration of iterations in seconds,
  default is 1.
* `-wt` (`--warmup-time`) controls the duration of warmup iterations in seconds,
  default is 1.
* `-gc` (`--sys-gc-pause`) - a non-negative value forces GC (twice)
  and a `<value>`-millisecond sleep before each iteration
  (GC finish can't be guaranteed on all VMs), default is -1 (no forced GC).
* `-fi` (`--fast-iters`) - number of 'fast' iterations
  (no warmup, no tuning cycles).
  The benchmark will run this number of iterations, regardless of the time elapsed.

## Custom options
To pass an additional option to a compiler or virtual machine,
`--<tool>-custom-option` could be used. E.g. add cpu profiling for `node`:
`vmb all -p node_host --node-custom-option='--cpu-prof' -v debug ./examples/benchmarks/ts/VoidBench.ts`


## Reports:
* `--report-json=path.json` saves results in json format;
  this is the most detailed report.
  `--report-json-compact` disables prettifying of the json.
* `--report-csv=path.csv` saves results in csv format. Only basic info is included.

## Controlling log:
There are several log levels which can be set via the `--log-level` option:
- `fatal`: only critical errors will be printed
- `pass`: print a single line for each test after it finishes
- `error`
- `warn`
- `info`: default level
- `debug`
- `trace`: most verbose

Each level prints in its own color.
`--no-color` disables color and adds a `[%LEVEL]` prefix to each message.

## Extra (custom) plugins:

By providing the `--extra-plugins` option you can add any
language, tool, platform or hook.
This parameter should point to a directory containing any combination of
`hooks`, `langs`, `platforms`, `tools` sub-dirs.
In the simplest case, you can copy any of the existing plugins
and safely experiment with modifying it.
`extra-plugins` has higher 'priority', so any tools, platforms and langs
could be 'overridden' by custom ones.
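As a sketch, such a directory might be laid out as follows (the plugin file names are illustrative; only the sub-dirs you actually need have to exist):

```
my-plugins/
├── hooks/
├── langs/
├── platforms/
│   └── my_platform.py
└── tools/
    └── my_tool.py
```

It would then be picked up with `--extra-plugins=./my-plugins`.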

```sh
# Example:
# 2 demo plugins: Tool=dummy and Platform=dummy_host
mkdir bu_Fake
touch bu_Fake/bench_Fake.txt
vmb run -p dummy_host \
    --extra-plugins=./examples/plugins \
    ./bu_Fake/bench_Fake.txt
```

## Doclet format:
This is a means of declaring benchmarks in a source file.
In a single root class it is possible to define any number of benchmarks,
a setup method, benchmark parameters, run options and some additional meta-info.
See examples in the `examples/benchmarks` directory.

Using this test format involves a separate generation stage (the `gen` command).
The resulting benchmark programs will be generated using templates
in `plugins/templates` into the `generated` directory
(which could be overridden by the `--outdir` option).

Supported doclets are:
* `@State` on a root class which contains benchmark tests as its methods.
* `@Setup` on an initialization method of a state.
* `@Benchmark` on a method that is measured.
  It should not accept any parameters and
  (preferably) it may return a value which is consumed.
* `@Param p1 [, p2...]` on a state field.
  The attribute defines values used to create several benchmarks from the same code,
  one for each combination of params.
  A value can be an int, a string or a comma-separated list of ints or strings.
* `@Tags t1 [, t2...]` on a root class or method. List of benchmark tags.
* `@Bugs b1 [, b2...]` on a root class or method. List of associated issues.

JS-specific doclets (JSDoc-like) are:
* `@arg {<State name>} <name>` on a benchmark method.
  Links a proper state class to each parameter.
* `@returns {Bool|Char|Int|Float|Obj|Objs}` optionally on a benchmark method.
  Helps to build automatic `Consumer` usage.

## Non-doclet tests

It is possible to write 'free-style' tests which can be run as-is.
VMB will search for bench units in directories with the `bu_` prefix and files with
the `bench_` prefix.
The test result should be reported by the program itself
as `Benchmark result: <TestName> <time in ns>`.

## Platforms

### ArkTS (arkts_*)
On host: the env var `PANDA_BUILD` should be set to point to the `build` directory,
e.g. `~/arkcompiler/runtime_core/static_core/build`

On device: the `ark` and `ark_aot` binaries with the required libraries should be pushed to the device.
The default place is `/data/local/tmp/vmb` and can be configured via `--device-dir`

By default `etsstdlib` will be compiled into native code;
to avoid this, use the `--aot-skip-libs` (`-A`) option

### Ark JS VM (ark_js_vm_*)
On host: the env var `OHOS_SDK` should be set to point to the OHOS SDK source directory,
inside which `out/*.release` should contain the built binaries

On device: `ark_js_vm` and `ark_aot_compiler` should be pushed to the device.
The default place is `/data/local/tmp/vmb` and can be configured via `--device-dir`

### V_8
For device platforms: the required binaries should be pushed to the device.
The default place is `/data/local/tmp/vmb/v_8`.
The `/data/local/tmp/vmb` part can be configured via `--device-dir`

### Hap (Ability Package) - Experimental
This platform allows running `arkts/ets` benchmarks as an application ability

##### Prerequisites:
- The `PANDA_SDK` env var should point to an unpacked Panda OHOS SDK (package dir)
- The `OHOS_BASE_SDK_HOME` env var should point to an unpacked OHOS SDK
- `hvigorw` should be in PATH, or the `HVIGORW` env var should point to the hvigor script or binary
- `hdc` should be in PATH, or the `HDC` env var should point to the hdc script or binary
- For signing the package, the `HAP_SIGNING_CONFIG` env var should point to a json config
  as in the `app/signingConfigs/material` section of `build-profile.json5`

##### Run
```sh
vmb all -p hap -A examples/benchmarks/ets

# The --tests-per-batch option can be used to tune the number of benchmarks per hap package (25 is the default)
```
##### Limitations
- The total test run inside the app is limited to 5 sec, so `-wi` is limited to 0..1 and `-mi` to 1..2.
  `-wt` and `-it` have no effect for the `hap` platform.
- "Macro" benchmarks (i.e. test functions which run longer than 1 sec) won't produce a benchmark result.

## Platform Features

| platform       | int-mode | aot-mode | jit-mode | gc-stats | jit-stats | aot-stats | imports |
|----------------|:--------:|:--------:|:--------:|:--------:|:---------:|:---------:|:-------:|
| ark_js_vm_host | n/a      | V        | V        | n/a      | n/a       | n/a       | V       |
| ark_js_vm_ohos | n/a      | V        | V        | n/a      | n/a       | n/a       | V       |
| arkts_device   | V        | V        | V        | V        | V         | V         | V       |
| arkts_host     | V        | V        | V        | V        | V         | V         | V       |
| arkts_ohos     | V        | V        | V        | V        | V         | V         | V       |
| node_host      | n/a      | n/a      | V        | n/a      | n/a       | n/a       | V       |
| swift_device   | n/a      | n/a      | n/a      | n/a      | n/a       | n/a       | X       |
| swift_host     | n/a      | n/a      | n/a      | n/a      | n/a       | n/a       | X       |
| v_8_device     | V        | n/a      | V        | n/a      | n/a       | n/a       | V       |
| v_8_host       | V        | n/a      | V        | n/a      | n/a       | n/a       | V       |
| v_8_ohos       | V        | n/a      | V        | n/a      | n/a       | n/a       | V       |
| interop_d2s    | V        | n/a      | n/a      | n/a      | n/a       | n/a       | V       |
| interop_s2d    | V        | n/a      | n/a      | n/a      | n/a       | n/a       | V       |
| interop_d2d    | V        | n/a      | n/a      | n/a      | n/a       | n/a       | V       |

## Interoperability tests:

Please refer to [this manual](./examples/benchmarks/interop/readme.md) and the [examples](./examples/benchmarks/interop).

See also the [readme here](../../plugins/ets/tests/benchmarks/interop_js/README.md)

## Self tests and linters
```sh
make tox
```