The CPP proto version of Python protobuf requires the Python C++ extension, which is
also a prerequisite for the Python protobuf C++ implementation. You need to install the
correct version of the Python C++ extension package before running the generated CPP
proto version of the Python protobuf benchmark. On Ubuntu, for example:
```
$ sudo apt-get install python-dev
$ sudo apt-get install python3-dev
```
And you also need to make sure `pkg-config` is installed.
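
As a quick sanity check (not part of the official steps), you can ask protobuf which
implementation it is using. `PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION` and
`api_implementation.Type()` are standard protobuf mechanisms; the command below should
print `cpp` once the C++ extension is installed correctly:

```
$ PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp python3 -c \
    "from google.protobuf.internal import api_implementation; print(api_implementation.Type())"
```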
### Go
Go protobufs are maintained at [github.com/golang/protobuf](
https://github.com/golang/protobuf). If not done already, you need to install the Go
toolchain and the Go protoc-gen-go plugin for protoc.
To install `protoc-gen-go`, run:
```
$ go get -u github.com/golang/protobuf/protoc-gen-go
$ export PATH=$PATH:$(go env GOPATH)/bin
```
The first command installs `protoc-gen-go` into the `bin` directory in your local `GOPATH`.
The second command adds that `bin` directory to your `PATH` so that `protoc` can find the plugin.
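
As a minimal sketch of how the plugin gets used, `protoc` looks up `protoc-gen-go` on
your `PATH` whenever Go output is requested; the proto file name below is only an
illustration, substitute whichever benchmark proto you are compiling:

```
$ protoc --go_out=. benchmarks.proto
```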
### Python
We have three versions of the Python protobuf implementation: pure Python, CPP
reflection, and CPP generated code. To run these versions of the benchmark, use the
corresponding target:
```
$ make python-pure-python
$ make python-cpp-reflection
$ make python-cpp-generated-code
```
### Go
```
$ make go
```
### PHP
We have two versions of the PHP protobuf implementation: pure PHP, and PHP with the C
extension. Each version has its own make target for running the benchmark.
To run a specific generated dataset, or to pass extra options, build the per-language
benchmark binary and invoke it directly.

### Java:
```
$ make java-benchmark
$ ./java-benchmark $(specific generated dataset file name) [$(caliper options)]
```
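
For example, assuming the dataset generation step produced a file named
`dataset.google_message1_proto3.pb` (the exact file names depend on which datasets you
generated):

```
$ ./java-benchmark dataset.google_message1_proto3.pb
```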
### CPP:
```
$ make cpp-benchmark
$ ./cpp-benchmark $(specific generated dataset file name) [$(benchmark options)]
```
### Python:
The Python benchmarks accept a `--json` flag for outputting the result as JSON.
```
$ make python-pure-python-benchmark
$ ./python-pure-python-benchmark [--json] $(specific generated dataset file name)

$ make python-cpp-reflection-benchmark
$ ./python-cpp-reflection-benchmark [--json] $(specific generated dataset file name)

$ make python-cpp-generated-code-benchmark
$ ./python-cpp-generated-code-benchmark [--json] $(specific generated dataset file name)
```
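
For example, to capture the pure-Python result as JSON (the dataset file name is
illustrative, and this assumes the result is written to stdout):

```
$ ./python-pure-python-benchmark --json dataset.google_message1_proto3.pb > result.json
```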
### Go:
```
$ make go-benchmark
$ ./go-benchmark $(specific generated dataset file name) [go testing options]
```
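
For instance, assuming the generated wrapper forwards standard `go test` flags (an
assumption; check the generated script) and the same illustrative dataset file:

```
$ ./go-benchmark dataset.google_message1_proto3.pb -test.benchtime=10s
```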
### PHP:
```
$ make php-benchmark
$ ./php-benchmark $(specific generated dataset file name)
```
```
$ make php-c-benchmark
$ ./php-c-benchmark $(specific generated dataset file name)
```
### Node.js:
```
$ make js-benchmark
$ ./js-benchmark $(specific generated dataset file name)
```
### C#:
```
$ dotnet run -c Release
```