# Protocol Buffers Benchmarks

This directory contains benchmarking schemas and data sets that you
can use to test a variety of performance scenarios against your
protobuf language runtime. If you are looking for performance
numbers for the officially supported languages, see [here](
https://github.com/protocolbuffers/protobuf/blob/master/docs/performance.md).

## Prerequisites

First, follow the instructions in the root directory's README to
build the protobuf runtime for your language, then:

### CPP
You need to install [cmake](https://cmake.org/) before building the benchmark.

We are using [google/benchmark](https://github.com/google/benchmark) as the
benchmark tool for testing cpp; it is built automatically when you build the
cpp benchmark.

The cpp protobuf performance can be improved by linking with the [tcmalloc library](
https://gperftools.github.io/gperftools/tcmalloc.html). To use tcmalloc, you
need to build [gperftools](https://github.com/gperftools/gperftools) to generate
the libtcmalloc.so library.

### Java
We are using Maven to build the Java benchmarks, in the same way as the Java
protobuf itself, so there are no other tools to install. We are using
[google/caliper](https://github.com/google/caliper) as the benchmark tool; it
is pulled in automatically by Maven.

### Python
We are using the Python C++ API to test the generated CPP proto version of
Python protobuf, which is also a prerequisite for the Python protobuf cpp
implementation. You need to install the Python C++ extension package for the
correct Python version before running the CPP proto version of the Python
protobuf benchmark. For example, under Ubuntu, you need to run

```
$ sudo apt-get install python-dev
$ sudo apt-get install python3-dev
```

You also need to make sure `pkg-config` is installed.

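A quick sanity check for the `pkg-config` requirement might look like this (a minimal sketch; the install hint assumes a Debian-style package manager):

```shell
# Check whether pkg-config is on PATH before building the cpp extension.
if command -v pkg-config >/dev/null 2>&1; then
  echo "pkg-config found"
else
  echo "pkg-config missing; install it with your package manager (e.g. apt-get install pkg-config)" >&2
fi
```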
### Go
Go protobufs are maintained at [github.com/golang/protobuf](
http://github.com/golang/protobuf). If you have not done so already, you need
to install the Go toolchain and the protoc-gen-go plugin for protoc.

To install protoc-gen-go, run:

```
$ go get -u github.com/golang/protobuf/protoc-gen-go
$ export PATH=$PATH:$(go env GOPATH)/bin
```

The first command installs `protoc-gen-go` into the `bin` directory in your local `GOPATH`.
The second command adds the `bin` directory to your `PATH` so that `protoc` can locate the plugin later.

### PHP
The PHP benchmark's requirements are the same as the PHP protobuf's
requirements. The benchmark will automatically include the PHP protobuf
source and build the C extension if required.

### Node.js
The Node.js benchmark needs [node](https://nodejs.org/en/) (higher than V6) and
the [npm](https://www.npmjs.com/) package manager installed. The benchmark uses
the [benchmark](https://www.npmjs.com/package/benchmark) framework, which does
not need to be installed manually. The other prerequisite is
[protobuf js](https://github.com/protocolbuffers/protobuf/tree/master/js),
which does not need to be installed manually either.

### C#
The C# benchmark code is built as part of the main Google.Protobuf
solution. It requires the .NET Core SDK, and depends on
[BenchmarkDotNet](https://github.com/dotnet/BenchmarkDotNet), which
will be downloaded automatically.

### Big data

There is some optional big testing data which is not included in the directory
initially; you need to run the following command to download it:

```
$ ./download_data.sh
```

After doing this, the big data files will be automatically generated in the
benchmark directory.

## Run instructions

To run all the benchmark datasets:

### Java:

```
$ make java
```

### CPP:

```
$ make cpp
```

For linking with tcmalloc:

```
$ env LD_PRELOAD={directory to libtcmalloc.so} make cpp
```

### Python:

We have three versions of the Python protobuf implementation: pure Python, cpp
reflection, and cpp generated code. To run the benchmark for each version:

#### Pure Python:

```
$ make python-pure-python
```

#### CPP reflection:

```
$ make python-cpp-reflection
```

#### CPP generated code:

```
$ make python-cpp-generated-code
```

### Go
```
$ make go
```

### PHP
We have two versions of the PHP protobuf implementation: pure PHP, and PHP with
the C extension. To run the benchmark for each version:

#### Pure PHP

```
$ make php
```

#### PHP with c extension

```
$ make php_c
```

### Node.js
```
$ make js
```

To run a specific dataset or run with specific options:

### Java:

```
$ make java-benchmark
$ ./java-benchmark $(specific generated dataset file name) [$(caliper options)]
```

### CPP:

```
$ make cpp-benchmark
$ ./cpp-benchmark $(specific generated dataset file name) [$(benchmark options)]
```

### Python:

The Python benchmarks accept a `--json` flag for outputting the result as JSON.

#### Pure Python:

```
$ make python-pure-python-benchmark
$ ./python-pure-python-benchmark [--json] $(specific generated dataset file name)
```

#### CPP reflection:

```
$ make python-cpp-reflection-benchmark
$ ./python-cpp-reflection-benchmark [--json] $(specific generated dataset file name)
```

#### CPP generated code:

```
$ make python-cpp-generated-code-benchmark
$ ./python-cpp-generated-code-benchmark [--json] $(specific generated dataset file name)
```

### Go:
```
$ make go-benchmark
$ ./go-benchmark $(specific generated dataset file name) [go testing options]
```

### PHP
#### Pure PHP

```
$ make php-benchmark
$ ./php-benchmark $(specific generated dataset file name)
```

#### PHP with c extension

```
$ make php-c-benchmark
$ ./php-c-benchmark $(specific generated dataset file name)
```

### Node.js
```
$ make js-benchmark
$ ./js-benchmark $(specific generated dataset file name)
```

### C#
From `csharp/src/Google.Protobuf.Benchmarks`, run:

```
$ dotnet run -c Release
```

We intend to add support for this within the makefile in due course.

## Benchmark datasets

Each data set follows the schema described in `benchmarks.proto`:

1. `name` is the benchmark dataset's name.
2. `message_name` is the full name of the benchmark's message type (including its package).
3. `payload` is the list of raw data.

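Based on the field descriptions above, the dataset message looks roughly like this (a sketch only; consult `benchmarks.proto` itself for the authoritative definition, including the actual syntax version and field numbers):

```proto
syntax = "proto3";

message BenchmarkDataset {
  // Name of the benchmark dataset.
  string name = 1;

  // Fully-qualified name of the message type of the payloads,
  // including the package name.
  string message_name = 2;

  // The list of raw serialized messages to benchmark against.
  repeated bytes payload = 3;
}
```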
Benchmarks will likely want to run several tests against each data set (parse,
serialize, possibly JSON, possibly using different APIs, etc.).

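As a rough illustration of what such a per-dataset test measures, here is a minimal timing-loop sketch in Python. The workload below is a stand-in, not part of this suite; in a real run it would be something like `message.ParseFromString(payload)` or `message.SerializeToString()`:

```python
import timeit

def time_op(op, repeats=5, iterations=1000):
    """Time `op` (a zero-argument callable) and return the best
    nanoseconds-per-operation over `repeats` runs, the way a
    parse or serialize benchmark typically reports results."""
    best = min(timeit.repeat(op, number=iterations, repeat=repeats))
    return best / iterations * 1e9

# Stand-in workload: copying a 1 KiB buffer in place of a protobuf
# parse/serialize call.
payload = b"x" * 1024
ns_per_op = time_op(lambda: bytes(payload))
print(f"{ns_per_op:.0f} ns/op")
```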
We would like to add more data sets. In general we will favor data sets
that make the overall suite diverse without being too large or having
too many similar tests. Ideally everyone can run through the entire
suite without the test run getting too long.