This directory defines a large suite of benchmarks for both the memchr and
memmem APIs in this crate. A selection of "competitor" implementations are
chosen. In general, benchmarks are meant to be a tool for optimization. That's
why there are so many: we want to be sure we get enough coverage such that our
benchmarks approximate real world usage. When some benchmarks look a bit slower
than we expect (for one reason or another), we can use profiling tools to look
at codegen and attempt to improve that case.

Because there are so many benchmarks, if you run all of them, you might want to
step away for a cup of coffee (or two). Therefore, the typical way to run them
is to select a subset. For example,

```
$ cargo bench -- 'memmem/krate/.*never.*'
```

runs all benchmarks for the memmem implementation in this crate with searches
that never produce any matches. This will still take a bit, but perhaps only a
few minutes.

Running a specific benchmark can be useful for profiling. For example, if you
want to see where `memmem/krate/prebuiltiter/huge-en/common-one-space` is
spending all of its time, you would first want to run it (to make sure the code
is compiled):

```
$ cargo bench -- memmem/krate/prebuiltiter/huge-en/common-one-space
```

And then run it under your profiling tool (I use `perf` on Linux):

```
$ perfr --callgraph cargo bench -- memmem/krate/prebuiltiter/huge-en/common-one-space --profile-time 3
```

Where
[`perfr` is my own wrapper around `perf`](https://github.com/BurntSushi/dotfiles/blob/master/bin/perfr),
and the `--profile-time 3` flag means, "just run the code for 3 seconds, but
don't do anything else." This makes the benchmark harness get out of the way,
which lets the profile focus as much as possible on the code being measured.

See the README in the `runs` directory for a bit more info on how to use
`critcmp` to look at benchmark data in a way that makes it easy to do
comparisons.
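As a rough sketch of what such a comparison can look like (the `runs` README
is the authoritative reference; the baseline names `before` and `after` here
are arbitrary, and this assumes Criterion's `--save-baseline` flag and the
`critcmp` tool):

```
$ # record results for a subset of benchmarks before making changes
$ cargo bench -- 'memmem/krate/.*never.*' --save-baseline before
$ # ... hack on the code ...
$ cargo bench -- 'memmem/krate/.*never.*' --save-baseline after
$ # show the two baselines side by side
$ critcmp before after
```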