
Searched +full:build +full:- +full:benchmarks (Results 1 – 25 of 569) sorted by relevance


/external/grpc-grpc/tools/profiling/microbenchmarks/bm_diff/
bm_build.py 9 # http://www.apache.org/licenses/LICENSE-2.0
16 """ Python utility to build opt and counters benchmarks """
30 "-b",
31 "--benchmarks",
35 help="Which benchmarks to build",
38 "-j",
39 "--jobs",
43 "Deprecated. Bazel chooses number of CPUs to build with"
48 "-n",
49 "--name",
[all …]
README.md 20 `tools/profiling/microbenchmarks/bm_diff/bm_main.py -b bm_error -l 5 -d master`
22 This will build the `bm_error` binary on your branch, and then it will checkout
23 master and build it there too. It will then run these benchmarks 5 times each.
28 If you have already invoked bm_main with `-d master`, you should instead use
29 `-o` for subsequent runs. This allows the script to skip re-building and
30 re-running the unchanged master branch. For example:
32 `tools/profiling/microbenchmarks/bm_diff/bm_main.py -b bm_error -l 5 -o`
34 This will only build and run `bm_error` on your branch. It will then compare
40 fine tuned benchmark comparisons. For example, you could build, run, and save
48 This script builds the benchmarks. It takes in a name parameter, and will
[all …]
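A minimal sketch of the two-pass workflow described in the README snippet above, using only the commands shown in that snippet and assuming it is run from the gRPC repository root:

```sh
# First pass: build bm_error on the current branch and on master, then run each 5 times
tools/profiling/microbenchmarks/bm_diff/bm_main.py -b bm_error -l 5 -d master

# Later passes: reuse the saved master results; only the current branch is rebuilt and rerun
tools/profiling/microbenchmarks/bm_diff/bm_main.py -b bm_error -l 5 -o
```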
/external/rust/crates/grpcio-sys/grpc/tools/profiling/microbenchmarks/bm_diff/
bm_build.py 9 # http://www.apache.org/licenses/LICENSE-2.0
16 """ Python utility to build opt and counters benchmarks """
29 argp.add_argument('-b',
30 '--benchmarks',
34 help='Which benchmarks to build')
36 '-j',
37 '--jobs',
41 'Deprecated. Bazel chooses number of CPUs to build with automatically.')
43 '-n',
44 '--name',
[all …]
README.md 20 `tools/profiling/microbenchmarks/bm_diff/bm_main.py -b bm_error -l 5 -d master`
22 This will build the `bm_error` binary on your branch, and then it will checkout
23 master and build it there too. It will then run these benchmarks 5 times each.
28 If you have already invoked bm_main with `-d master`, you should instead use
29 `-o` for subsequent runs. This allows the script to skip re-building and
30 re-running the unchanged master branch. For example:
32 `tools/profiling/microbenchmarks/bm_diff/bm_main.py -b bm_error -l 5 -o`
34 This will only build and run `bm_error` on your branch. It will then compare
40 fine tuned benchmark comparisons. For example, you could build, run, and save
48 This script builds the benchmarks. It takes in a name parameter, and will
[all …]
/external/grpc-grpc-java/benchmarks/src/generated/main/grpc/io/grpc/benchmarks/proto/
BenchmarkServiceGrpc.java 1 package io.grpc.benchmarks.proto;
18 private static volatile io.grpc.MethodDescriptor<io.grpc.benchmarks.proto.Messages.SimpleRequest,
19 io.grpc.benchmarks.proto.Messages.SimpleResponse> getUnaryCallMethod;
23 requestType = io.grpc.benchmarks.proto.Messages.SimpleRequest.class,
24 responseType = io.grpc.benchmarks.proto.Messages.SimpleResponse.class,
26 public static io.grpc.MethodDescriptor<io.grpc.benchmarks.proto.Messages.SimpleRequest,
27 io.grpc.benchmarks.proto.Messages.SimpleResponse> getUnaryCallMethod() { in getUnaryCallMethod()
28 …io.grpc.MethodDescriptor<io.grpc.benchmarks.proto.Messages.SimpleRequest, io.grpc.benchmarks.proto… in getUnaryCallMethod()
33 …io.grpc.MethodDescriptor.<io.grpc.benchmarks.proto.Messages.SimpleRequest, io.grpc.benchmarks.prot… in getUnaryCallMethod()
38 io.grpc.benchmarks.proto.Messages.SimpleRequest.getDefaultInstance())) in getUnaryCallMethod()
[all …]
WorkerServiceGrpc.java 1 package io.grpc.benchmarks.proto;
18 private static volatile io.grpc.MethodDescriptor<io.grpc.benchmarks.proto.Control.ServerArgs,
19 io.grpc.benchmarks.proto.Control.ServerStatus> getRunServerMethod;
23 requestType = io.grpc.benchmarks.proto.Control.ServerArgs.class,
24 responseType = io.grpc.benchmarks.proto.Control.ServerStatus.class,
26 public static io.grpc.MethodDescriptor<io.grpc.benchmarks.proto.Control.ServerArgs,
27 io.grpc.benchmarks.proto.Control.ServerStatus> getRunServerMethod() { in getRunServerMethod()
28 …io.grpc.MethodDescriptor<io.grpc.benchmarks.proto.Control.ServerArgs, io.grpc.benchmarks.proto.Con… in getRunServerMethod()
33 …io.grpc.MethodDescriptor.<io.grpc.benchmarks.proto.Control.ServerArgs, io.grpc.benchmarks.proto.Co… in getRunServerMethod()
38 io.grpc.benchmarks.proto.Control.ServerArgs.getDefaultInstance())) in getRunServerMethod()
[all …]
ReportQpsScenarioServiceGrpc.java 1 package io.grpc.benchmarks.proto;
18 private static volatile io.grpc.MethodDescriptor<io.grpc.benchmarks.proto.Control.ScenarioResult,
19 io.grpc.benchmarks.proto.Control.Void> getReportScenarioMethod;
23 requestType = io.grpc.benchmarks.proto.Control.ScenarioResult.class,
24 responseType = io.grpc.benchmarks.proto.Control.Void.class,
26 public static io.grpc.MethodDescriptor<io.grpc.benchmarks.proto.Control.ScenarioResult,
27 io.grpc.benchmarks.proto.Control.Void> getReportScenarioMethod() { in getReportScenarioMethod()
28 …io.grpc.MethodDescriptor<io.grpc.benchmarks.proto.Control.ScenarioResult, io.grpc.benchmarks.proto… in getReportScenarioMethod()
33 …io.grpc.MethodDescriptor.<io.grpc.benchmarks.proto.Control.ScenarioResult, io.grpc.benchmarks.prot… in getReportScenarioMethod()
38 io.grpc.benchmarks.proto.Control.ScenarioResult.getDefaultInstance())) in getReportScenarioMethod()
[all …]
/external/libcxx/docs/
TestingLibcxx.rst 12 libc++ tests is by using make check-libcxx. However since libc++ can be used
22 --------------------------
30 .. code-block:: bash
34 #. Tell LIT where to find your build configuration.
36 .. code-block:: bash
38 $ export LIBCXX_SITE_CONFIG=path/to/build-libcxx/test/lit.site.cfg
41 -------------
47 .. code-block:: bash
50 $ lit -sv test/std/re # Run all of the std::regex tests
51 $ lit -sv test/std/depr/depr.c.headers/stdlib_h.pass.cpp # Run a single test
[all …]
/external/cronet/third_party/libc++/src/docs/
TestingLibcxx.rst 15 The primary way to run the libc++ tests is by using ``make check-cxx``.
27 -----
30 running ``llvm-lit`` on a specified test or directory. If you're unsure
32 ``cxx-test-depends`` target. For example:
34 .. code-block:: bash
36 $ cd <monorepo-root>
37 $ make -C <build> cxx-test-depends # If you want to make sure the targets get rebuilt
38 $ <build>/bin/llvm-lit -sv libcxx/test/std/re # Run all of the std::regex tests
39 …$ <build>/bin/llvm-lit -sv libcxx/test/std/depr/depr.c.headers/stdlib_h.pass.cpp # Run a single te…
40 …$ <build>/bin/llvm-lit -sv libcxx/test/std/atomics libcxx/test/std/threads # Test std::thread and …
[all …]
/external/toolchain-utils/crosperf/
experiment_factory.py 1 # -*- coding: utf-8 -*-
3 # Use of this source code is governed by a BSD-style license that can be
70 ChromeOS benchmarks, but the idea is that in the future, other types
76 benchmarks, argument
89 """Add all the tests in a set to the benchmarks list."""
105 benchmarks.append(telemetry_benchmark)
199 # Construct benchmarks.
202 benchmarks = []
207 # Check if in cwp_dso mode, all benchmarks should have same iterations
217 # Rename benchmark name if 'story-filter' or 'story-tag-filter' specified
[all …]
experiment_factory_unittest.py 2 # -*- coding: utf-8 -*-
5 # Use of this source code is governed by a BSD-style license that can be
29 board: x86-alex
30 remote: chromeos-alex3
39 test_args: --story-filter=datachannel
52 board: x86-alex
53 remote: chromeos-alex3
77 # pylint: disable=too-many-function-args
91 self.assertEqual(exp.remote, ["chromeos-alex3"])
93 self.assertEqual(len(exp.benchmarks), 2)
[all …]
/external/harfbuzz_ng/perf/
README.md 3 Benchmarks are implemented using [Google Benchmark](https://github.com/google/benchmark).
5 To build the benchmarks in this directory you need to set the benchmark
6 option while configuring the build with meson:
9 meson build -Dbenchmark=enabled --buildtype=release
13 meson build -Dbenchmark=enabled --buildtype=debugoptimized
17 Then build a specific benchmark binaries with ninja:
19 ninja -Cbuild perf/benchmark-set
21 or just build the whole project:
23 ninja -Cbuild
26 Finally, to run one of the benchmarks:
[all …]
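A short sketch of the HarfBuzz benchmark build flow from the snippet above. The meson and ninja commands appear verbatim in the snippet; the final run step is truncated in this result, so the binary path on the last line is an assumption:

```sh
# Configure with benchmarks enabled (release build, per the snippet)
meson build -Dbenchmark=enabled --buildtype=release

# Build a single benchmark binary, or the whole project
ninja -Cbuild perf/benchmark-set
ninja -Cbuild

# Hypothetical run step: assumes the benchmark binary lands under build/perf/
./build/perf/benchmark-set
```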
/external/cronet/third_party/protobuf/kokoro/linux/benchmark/
run.sh 6 set -ex
16 pushd benchmarks
17 datasets=$(for file in $(find . -type f -name "dataset.*.pb" -not -path "./tmp/*"); do echo "$(pwd)…
21 # build Python protobuf
23 ./configure CXXFLAGS="-fPIC -O2"
24 make -j8
26 python3 -m venv env
28 python3 setup.py build --cpp_implementation
29 pip3 install --install-option="--cpp_implementation" .
32 # build and run Python benchmark
[all …]
/external/protobuf/kokoro/linux/benchmark/
run.sh 6 set -ex
16 pushd benchmarks
17 datasets=$(for file in $(find . -type f -name "dataset.*.pb" -not -path "./tmp/*"); do echo "$(pwd)…
21 # build Python protobuf
23 ./configure CXXFLAGS="-fPIC -O2"
24 make -j8
26 python3 -m venv env
28 python3 setup.py build --cpp_implementation
29 pip3 install --install-option="--cpp_implementation" .
32 # build and run Python benchmark
[all …]
/external/protobuf/csharp/
generate_protos.sh 6 set -e
14 if [ -z "$PROTOC" ]; then
16 if [ -x solution/Debug/protoc.exe ]; then
18 elif [ -x cmake/build/Debug/protoc.exe ]; then
19 PROTOC=cmake/build/Debug/protoc.exe
20 elif [ -x cmake/build/Release/protoc.exe ]; then
21 PROTOC=cmake/build/Release/protoc.exe
22 elif [ -x src/protoc ]; then
30 # descriptor.proto and well-known types
31 $PROTOC -Isrc --csharp_out=csharp/src/Google.Protobuf \
[all …]
/external/cronet/third_party/protobuf/csharp/
generate_protos.sh 6 set -e
14 if [ -z "$PROTOC" ]; then
16 if [ -x cmake/build/Debug/protoc.exe ]; then
17 PROTOC=cmake/build/Debug/protoc.exe
18 elif [ -x cmake/build/Release/protoc.exe ]; then
19 PROTOC=cmake/build/Release/protoc.exe
20 elif [ -x src/protoc ]; then
28 # descriptor.proto and well-known types
29 $PROTOC -Isrc --csharp_out=csharp/src/Google.Protobuf \
30 --csharp_opt=base_namespace=Google.Protobuf \
[all …]
/external/aws-sdk-java-v2/test/sdk-benchmarks/
README.md 7 JMH configurations tailored to SDK's build job and you might need to
11 There are three ways to run benchmarks.
13 - Using the executable JAR (Preferred usage per JMH site)
15 mvn clean install -P quick -pl :sdk-benchmarks --am
18 java -jar target/benchmarks.jar ApacheHttpClientBenchmark
20 # Run all benchmarks: 3 warm up iterations, 3 benchmark iterations, 1 fork
21 java -jar target/benchmarks.jar -wi 3 -i 3 -f 1
24 - Using `mvn exec:exec` commands to invoke `BenchmarkRunner` main method
26 mvn clean install -P quick -pl :sdk-benchmarks --am
27 mvn clean install -pl :bom-internal
[all …]
/external/toolchain-utils/pgo_tools_rust/
pgo_rust.py 3 # Use of this source code is governed by a BSD-style license that can be
6 # pylint: disable=line-too-long
13 gs://chromeos-localmirror/distfiles/rust-pgo-{rust_version}-frontend.profdata{s}.tz
15 gs://chromeos-localmirror/distfiles/rust-pgo-{rust_version}-llvm.profdata{s}.tz
22 you need to generate manifests for dev-lang/rust and dev-lang/rust-host before
25 variable SRC_URI in cros-rustc.eclass which refer to profdata files.
31 $ ./pgo_rust.py benchmark-pgo # benchmark with PGO
32 $ ./pgo_rust.py benchmark-nopgo # benchmark without PGO
33 $ ./pgo_rust.py upload-profdata # upload profdata to localmirror
39 gs://chromeos-toolchain-artifacts/rust-pgo/benchmarks/{rust_version}/
[all …]
/external/grpc-grpc-java/benchmarks/src/main/java/io/grpc/benchmarks/qps/
AsyncClient.java 8 * http://www.apache.org/licenses/LICENSE-2.0
17 package io.grpc.benchmarks.qps;
19 import static io.grpc.benchmarks.Utils.HISTOGRAM_MAX_VALUE;
20 import static io.grpc.benchmarks.Utils.HISTOGRAM_PRECISION;
21 import static io.grpc.benchmarks.Utils.saveHistogram;
22 import static io.grpc.benchmarks.qps.ClientConfiguration.ClientParam.ADDRESS;
23 import static io.grpc.benchmarks.qps.ClientConfiguration.ClientParam.CHANNELS;
24 import static io.grpc.benchmarks.qps.ClientConfiguration.ClientParam.CLIENT_PAYLOAD;
25 import static io.grpc.benchmarks.qps.ClientConfiguration.ClientParam.DIRECTEXECUTOR;
26 import static io.grpc.benchmarks.qps.ClientConfiguration.ClientParam.DURATION;
[all …]
/external/okhttp/okio/benchmarks/
README.md 1 Okio Benchmarks
2 ------------
4 … used to measure various aspects of performance for Okio buffers. Okio benchmarks are written usin…
7 -------------
9 To run benchmarks locally, first build and package the project modules:
15 This should create a `benchmarks.jar` file in the `target` directory, which is a typical JMH benchm…
18 $ java -jar benchmarks/target/benchmarks.jar -l
19 Benchmarks:
20 com.squareup.okio.benchmarks.BufferPerformanceBench.cold
21 com.squareup.okio.benchmarks.BufferPerformanceBench.threads16hot
[all …]
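A hedged sketch of running one of the Okio JMH benchmarks listed above. Only the `-l` listing command appears verbatim in the snippet; the Maven packaging step and the run-by-name invocation are assumptions based on standard JMH usage:

```sh
# Build and package the modules (assumed invocation; the snippet's exact command is truncated)
mvn clean package

# List the available JMH benchmarks (verbatim from the snippet)
java -jar benchmarks/target/benchmarks.jar -l

# Run a single benchmark by name (standard JMH jar usage; name taken from the listing above)
java -jar benchmarks/target/benchmarks.jar BufferPerformanceBench.cold
```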
/external/toolchain-utils/crosperf/experiment_files/
telemetry-crosperf-suites.exp 6 # You should replace all the placeholders, marked by angle-brackets,
11 board: <your-board-goes-here>
14 # multiple machines. e.g. "remote: test-machine-1.com test-machine2.com
15 # test-machine3.com"
16 remote: <your-remote-goes-here>
18 # The example below will run all the benchmarks in the perf_v2 suite.
19 # The exact list of benchmarks that will be run can be seen in
26 # The example below will run all the Telemetry page_cycler benchmarks.
27 # The exact list of benchmarks that will be run can be seen in
34 # The example below will run all the Telemetry page_cycler benchmarks.
[all …]
/external/protobuf/benchmarks/
README.md 2 # Protocol Buffers Benchmarks
13 build your language's protobuf, then:
19 benchmark tool for testing cpp. This will be automatically made during build the
26 We're using maven to build the java benchmarks, which is the same as to build
39 $ sudo apt-get install python-dev
40 $ sudo apt-get install python3-dev
42 And you also need to make sure `pkg-config` is installed.
47 toolchain and the Go protoc-gen-go plugin for protoc.
49 To install protoc-gen-go, run:
52 $ go get -u github.com/golang/protobuf/protoc-gen-go
[all …]
/external/swiftshader/third_party/subzero/docs/
ASAN.rst 4 AddressSanitizer is a powerful compile-time tool used to detect and report
7 <https://www.usenix.org/system/files/conference/atc12/atc12-final39.pdf>`_.
18 Furthermore, pnacl-clang automatically inlines some calls to calloc(),
20 sz-clang.py and sz-clang++.py, that normally just pass their arguments
21 through to pnacl-clang or pnacl-clang++, but add instrumentation to
23 -fsanitize-address.
27 sz-clang.py -fsanitize-address -o hello.nonfinal.pexe hello.c
28 pnacl-finalize --no-strip-syms -o hello.pexe hello.nonfinal.pexe
29 pnacl-sz -fsanitize-address -filetype=obj -o hello.o hello.pexe
31 The resulting object file must be linked with the Subzero-specific
[all …]
/external/google-fruit/extras/benchmark/
README.md 2 ## Fruit benchmarks
9 The `suites` folder contains suites of Fruit (or Fruit-related) benchmarks that can be run using `r…
14 --continue-benchmark=true \
15 --benchmark-definition ~/projects/fruit/extras/benchmark/suites/fruit_full.yml
16 --output-file ~/fruit_bench_results.txt \
17 --fruit-sources-dir ~/projects/fruit \
18 --fruit-benchmark-sources-dir ~/projects/fruit \
19 --boost-di-sources-dir ~/projects/boost-di
22 Once the benchmark run completes, you can format the results using some pre-defined tables, see the…
26 * `fruit_full.yml`: full set of Fruit benchmarks (using the Fruit 3.x API). argument
[all …]
/external/cronet/third_party/protobuf/benchmarks/
README.md 2 # Protocol Buffers Benchmarks
13 build your language's protobuf, then:
19 benchmark tool for testing cpp. This will be automatically made during build the
26 We're using maven to build the java benchmarks, which is the same as to build
39 $ sudo apt-get install python-dev
40 $ sudo apt-get install python3-dev
42 And you also need to make sure `pkg-config` is installed.
47 toolchain and the Go protoc-gen-go plugin for protoc.
49 To install protoc-gen-go, run:
52 $ go get -u github.com/golang/protobuf/protoc-gen-go
[all …]
