# Protocol Buffers Benchmarks

This directory contains benchmarking schemas and data sets that you
can use to test a variety of performance scenarios against your
protobuf language runtime. If you are looking for performance
numbers for the officially supported languages, see [Protobuf Performance](
https://github.com/protocolbuffers/protobuf/blob/master/docs/performance.md).

## Prerequisites

First, follow the instructions in the root directory's README to build your
language's protobuf, then:

### CPP
You need to install [cmake](https://cmake.org/) before building the benchmark.

We use [google/benchmark](https://github.com/google/benchmark) as the
benchmark tool for testing C++; it is built automatically when you build the
C++ benchmark.

C++ protobuf performance can be improved by linking with
[TCMalloc](https://google.github.io/tcmalloc).

### Java
We use Maven to build the Java benchmarks, the same way the Java protobuf
itself is built; no other tools need to be installed. We use
[google/caliper](https://github.com/google/caliper) as the benchmark tool,
which Maven includes automatically.

### Python
We use the Python C++ API to test the CPP-generated-code version of Python
protobuf; it is also a prerequisite for the Python protobuf C++
implementation. You need to install the correct Python C++ extension package
before running the CPP-generated-code version of the Python protobuf
benchmark. For example, under Ubuntu you need:

```
$ sudo apt-get install python-dev
$ sudo apt-get install python3-dev
```

You also need to make sure `pkg-config` is installed.
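If you want to verify that the development headers those packages provide are visible to your interpreter, a quick sketch (an informal check, not part of the official tooling) is:

```python
import os
import sysconfig

# Sketch: python3-dev installs Python.h into the interpreter's include
# directory; report whether it is present on this machine.
include_dir = sysconfig.get_paths()["include"]
header = os.path.join(include_dir, "Python.h")
print(header)
print("present" if os.path.exists(header) else "missing -- install python3-dev")
```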

### Go
Go protobufs are maintained at [github.com/golang/protobuf](
http://github.com/golang/protobuf). If not done already, you need to install the
toolchain and the Go protoc-gen-go plugin for protoc.

To install protoc-gen-go, run:

```
$ go get -u github.com/golang/protobuf/protoc-gen-go
$ export PATH=$PATH:$(go env GOPATH)/bin
```

The first command installs `protoc-gen-go` into the `bin` directory in your local `GOPATH`.
The second command adds the `bin` directory to your `PATH` so that `protoc` can locate the plugin later.

### PHP
The PHP benchmark's requirements are the same as the PHP protobuf's requirements.
The benchmark will automatically include the PHP protobuf source and build the
C extension if required.

### Node.js
The Node.js benchmark needs [node](https://nodejs.org/en/) (higher than V6) and the [npm](https://www.npmjs.com/) package manager installed. The benchmark uses the [benchmark](https://www.npmjs.com/package/benchmark) framework, and also depends on [protobuf js](https://github.com/protocolbuffers/protobuf/tree/master/js); neither needs to be installed manually.

### C#
The C# benchmark code is built as part of the main Google.Protobuf
solution. It requires the .NET Core SDK, and depends on
[BenchmarkDotNet](https://github.com/dotnet/BenchmarkDotNet), which
will be downloaded automatically.

### Big data

Some optional big testing data is not included in the directory initially;
run the following command to download it:

```
$ ./download_data.sh
```

After doing this, the big data files will be generated automatically in the
benchmark directory.

## Run instructions

To run all the benchmark datasets:

### Java:

First build the Java binary in the usual way with Maven:

```
$ cd java
$ mvn install
```

Assuming that completes successfully,

```
$ cd ../benchmarks
$ make java
```

### CPP:

```
$ make cpp
```

For linking with TCMalloc:

```
$ env LD_PRELOAD={directory to libtcmalloc.so} make cpp
```

### Python:

We have three versions of the Python protobuf implementation: pure Python, C++
reflection, and C++ generated code. To run the benchmark for each version:

#### Pure Python:

```
$ make python-pure-python
```

#### CPP reflection:

```
$ make python-cpp-reflection
```

#### CPP generated code:

```
$ make python-cpp-generated-code
```

### Go
```
$ make go
```

### PHP
We have two versions of the PHP protobuf implementation: pure PHP and PHP with
the C extension. To run the benchmark for each version:
#### Pure PHP
```
$ make php
```
#### PHP with C extension
```
$ make php_c
```

### Node.js
```
$ make js
```

To run a specific dataset or run with specific options:

### Java:

```
$ make java-benchmark
$ ./java-benchmark $(specific generated dataset file name) [$(caliper options)]
```

### CPP:

```
$ make cpp-benchmark
$ ./cpp-benchmark $(specific generated dataset file name) [$(benchmark options)]
```

### Python:

The Python benchmarks accept a `--json` flag for outputting the result as JSON.

#### Pure Python:

```
$ make python-pure-python-benchmark
$ ./python-pure-python-benchmark [--json] $(specific generated dataset file name)
```

#### CPP reflection:

```
$ make python-cpp-reflection-benchmark
$ ./python-cpp-reflection-benchmark [--json] $(specific generated dataset file name)
```

#### CPP generated code:

```
$ make python-cpp-generated-code-benchmark
$ ./python-cpp-generated-code-benchmark [--json] $(specific generated dataset file name)
```

### Go:
```
$ make go-benchmark
$ ./go-benchmark $(specific generated dataset file name) [go testing options]
```

### PHP
#### Pure PHP
```
$ make php-benchmark
$ ./php-benchmark $(specific generated dataset file name)
```
#### PHP with C extension
```
$ make php-c-benchmark
$ ./php-c-benchmark $(specific generated dataset file name)
```

### Node.js
```
$ make js-benchmark
$ ./js-benchmark $(specific generated dataset file name)
```

### C#
From `csharp/src/Google.Protobuf.Benchmarks`, run:

```
$ dotnet run -c Release
```

We intend to add support for this within the makefile in due course.

## Benchmark datasets

Each data set is in the format of `benchmarks.proto`:

1. `name` is the benchmark dataset's name.
2. `message_name` is the full name of the benchmark's message type (including package and message name).
3. `payload` is the list of raw data payloads.

The schema for the datasets is described in `benchmarks.proto`.
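Based on the fields listed above, the schema looks roughly like the following sketch (the field numbers here are assumptions for illustration; `benchmarks.proto` is the authoritative definition):

```
syntax = "proto3";

message BenchmarkDataset {
  // The benchmark dataset's name.
  string name = 1;

  // Full name of the benchmark's message type, including the package.
  string message_name = 2;

  // Raw serialized messages of the type named above.
  repeated bytes payload = 3;
}
```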

A benchmark tool will likely want to run several benchmarks against each data
set (parse, serialize, possibly JSON, possibly using different APIs, etc.).
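As an illustration of that per-dataset loop, here is a minimal Python sketch; the payloads and the stand-in `parse` function are hypothetical, and a real harness would call the generated protobuf classes via the tools listed above:

```python
import timeit

# Hypothetical stand-in payloads; a real run would take them from a
# dataset's repeated `payload` field.
payloads = [b"\x08\x96\x01" * 100, b"\x12\x03abc" * 50]

def parse(data: bytes) -> bytes:
    # Placeholder for message.ParseFromString(data) in a real benchmark.
    return bytes(data)

def run_benchmark(fn, payloads, number=1000):
    """Time `fn` over every payload; return average seconds per call."""
    total = timeit.timeit(lambda: [fn(p) for p in payloads], number=number)
    return total / (number * len(payloads))

per_op = run_benchmark(parse, payloads)
print(f"parse: {per_op * 1e9:.0f} ns/op")
```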

We would like to add more data sets. In general we will favor data sets that
make the overall suite diverse without being too large or having too many
similar tests. Ideally everyone can run through the entire suite without the
test run getting too long.
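For contributors curious how a data set file is laid out on the wire, the length-delimited encoding is simple enough to sketch in pure Python. This is an illustration of the protobuf wire format only, not the official tooling; the field numbers follow the schema sketch above, and the names are made up — in practice you would build a dataset with the generated classes:

```python
def varint(n: int) -> bytes:
    """Encode a non-negative integer as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def length_delimited(field_number: int, data: bytes) -> bytes:
    """Encode a length-delimited (wire type 2) field."""
    key = (field_number << 3) | 2
    return varint(key) + varint(len(data)) + data

# Assumed field numbers: 1 = name, 2 = message_name, 3 = payload (repeated).
dataset = (
    length_delimited(1, b"example_dataset")
    + length_delimited(2, b"benchmarks.ExampleMessage")  # hypothetical type
    + length_delimited(3, b"\x08\x2a")  # one raw serialized payload
)
print(dataset[:1].hex())  # key byte for field 1, wire type 2 -> "0a"
```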