
Lines Matching +full:build +full:- +full:with +full:- +full:python

13 build your language's protobuf, then:
19 benchmark tool for testing cpp. This will be automatically made during build the
22 The cpp protobuf performance can be improved by linking with [tcmalloc library](
24 need to build [gperftools](https://github.com/gperftools/gperftools) to generate
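A minimal sketch of exercising the tcmalloc link-up at run time, assuming gperftools installed `libtcmalloc.so` under its default `/usr/local/lib` prefix; the library path and benchmark binary name are assumptions from the surrounding text, so the command is only printed here rather than executed:

```shell
# Assumed install location for libtcmalloc.so (gperftools default prefix).
TCMALLOC=/usr/local/lib/libtcmalloc.so
# Print the intended invocation; LD_PRELOAD swaps in tcmalloc's allocator.
echo "LD_PRELOAD=$TCMALLOC ./cpp-benchmark dataset.pb"
```

Preloading avoids relinking the binary, which is convenient when comparing allocator effects across runs.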
28 We're using maven to build the java benchmarks, which is the same as building
33 ### Python
34 We're using the Python C++ API for testing the generated
35 CPP proto version of Python protobuf, which is also a prerequisite for the Python
36 protobuf cpp implementation. You need to install the correct version of the Python
37 C++ extension package before running the generated CPP proto version of Python
41 $ sudo apt-get install python-dev
42 $ sudo apt-get install python3-dev
44 You also need to make sure `pkg-config` is installed.
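A small sanity check before building the CPP-backed Python extension, assuming the Debian-style setup implied by the `apt-get` lines above:

```shell
# Verify pkg-config is on PATH; the extension build relies on it.
if command -v pkg-config >/dev/null 2>&1; then
  echo "pkg-config found"
else
  echo "pkg-config missing; try: sudo apt-get install pkg-config"
fi
```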
49 toolchain and the Go protoc-gen-go plugin for protoc.
51 To install protoc-gen-go, run:
54 $ go get -u github.com/golang/protobuf/protoc-gen-go
58 The first command installs `protoc-gen-go` into the `bin` directory in your local `GOPATH`.
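A sketch of the PATH step this implies, assuming Go's default of `$HOME/go` when `GOPATH` is unset: protoc only finds `protoc-gen-go` if that `bin` directory is on `PATH`.

```shell
# Resolve the plugin directory: $GOPATH/bin, defaulting to $HOME/go/bin.
GOBIN_DIR="${GOPATH:-$HOME/go}/bin"
# Append it to PATH so protoc can locate the protoc-gen-go plugin.
export PATH="$PATH:$GOBIN_DIR"
echo "$PATH" | grep -q "$GOBIN_DIR" && echo "plugin dir on PATH"
```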
63 include PHP protobuf's src and build the c extension if required.
102 For linking with tcmalloc:
108 ### Python:
110 We have three versions of the python protobuf implementation: pure python, cpp
113 #### Pure Python:
116 $ make python-pure-python
122 $ make python-cpp-reflection
128 $ make python-cpp-generated-code
138 We have two versions of the php protobuf implementation: pure php, php with c extension. To run these vers…
143 #### PHP with c extension
153 To run a specific dataset or run with specific options:
158 $ make java-benchmark
159 $ ./java-benchmark $(specific generated dataset file name) [$(caliper options)]
165 $ make cpp-benchmark
166 $ ./cpp-benchmark $(specific generated dataset file name) [$(benchmark options)]
169 ### Python:
171 For the Python benchmark we have `--json` for outputting the result in JSON
173 #### Pure Python:
176 $ make python-pure-python-benchmark
177 $ ./python-pure-python-benchmark [--json] $(specific generated dataset file name)
183 $ make python-cpp-reflection-benchmark
184 $ ./python-cpp-reflection-benchmark [--json] $(specific generated dataset file name)
190 $ make python-cpp-generated-code-benchmark
191 $ ./python-cpp-generated-code-benchmark [--json] $(specific generated dataset file name)
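One way to use the `--json` flag is to capture the result to a file for later comparison. The record shape below is hypothetical (the real schema depends on the benchmark script), and `jq` would be sturdier than `grep` for real use:

```shell
# Hypothetical single result record; real field names may differ.
sample='{"filename": "dataset.pb", "behavior": "parse", "throughput": 42.0}'
# Save it so successive benchmark runs can be diffed.
printf '%s\n' "$sample" > result.json
# Crude field extraction; prints: "throughput": 42.0
grep -o '"throughput": [0-9.]*' result.json
```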
196 $ make go-benchmark
197 $ ./go-benchmark $(specific generated dataset file name) [go testing options]
203 $ make php-benchmark
204 $ ./php-benchmark $(specific generated dataset file name)
206 #### PHP with c extension
208 $ make php-c-benchmark
209 $ ./php-c-benchmark $(specific generated dataset file name)
214 $ make js-benchmark
215 $ ./js-benchmark $(specific generated dataset file name)
222 $ dotnet run -c Release