| /external/ltp/testcases/open_posix_testsuite/Documentation/ |
| D | HOWTO_RunTests | 4 3. Building and Running the Tests 7 ------------ 8 This document describes how to run the tests in the POSIX Test Suite. 10 Our framework currently has the ability to build and run conformance, 11 functional, and stress tests. All tests are built with make all, but 13 tests may leave the system in an indeterminate state. 16 ---------------------------------------------------- 18 * Conformance Tests 19 The build and execution process varies for conformance tests. 21 For definitions tests, the build and execution process is the same since [all …]
|
| /external/fsverity-utils/.github/workflows/ |
| D | ci.yml | 1 # SPDX-License-Identifier: MIT 4 # Use of this source code is governed by an MIT-style 12 static-linking-test: 14 runs-on: ubuntu-latest 16 - uses: actions/checkout@v4 17 - run: scripts/run-tests.sh static_linking 19 dynamic-linking-test: 21 runs-on: ubuntu-latest 23 - uses: actions/checkout@v4 24 - run: scripts/run-tests.sh dynamic_linking [all …]
|
| /external/sdv/vsomeip/third_party/boost/parameter/test/ |
| D | Jamfile.v2 | 14 default-build 22 [ run maybe.cpp : : : : : <preserve-target-tests>off ] 23 [ run singular.cpp : : : : : <preserve-target-tests>off ] 24 [ run tutorial.cpp : : : : : <preserve-target-tests>off ] 25 [ run compose.cpp 32 <preserve-target-tests>off 34 [ run sfinae.cpp 42 <preserve-target-tests>off 44 [ run efficiency.cpp 51 <preserve-target-tests>off [all …]
|
| /external/ltp/testcases/realtime/ |
| D | README | 17 /* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA */ 21 realtime tests is an open-source testsuite for testing real-time Linux. It is 24 Real-Time team. Please send bug reports, contributions, questions to 27 The testsuite contains some functional tests and a few performance 28 and latency measurement tests. This is still a work in (early) progress! 37 Most of the tests need the user to have privileges that allow 40 RUNNING TESTS THROUGH LTP 42 The simplest method to run realtime tests through LTP is: 43 The command will configure, build and run tests specified through 46 Run the command below from the LTP root directory with the argument: [all …]
|
| /external/skia/infra/wasm-common/docker/ |
| D | README.md | 6 emsdk-base 7 ---------- 19 docker build -t emsdk-base ./emsdk-base/ 20 # Run bash in it to poke around and make sure things are properly installed 21 docker run -it emsdk-base /bin/bash 23 …docker run -v $SKIA_ROOT:/SRC -v $SKIA_ROOT/out/dockerpathkit:/OUT emsdk-base /SRC/infra/pathkit/b… 25 karma-chrome-tests 26 ------------------ 29 be used to run JS tests. 32 it Skia-exclusive. [all …]
|
| /external/curl/tests/ |
| D | runtests.1 | 21 .\" * SPDX-License-Identifier: curl 27 runtests.pl \- run one or more test cases 29 .B runtests.pl [options] [tests] 34 .SH "TESTS" 35 Specify which test(s) to run by specifying test numbers or keywords. 37 If no test number or keyword is given, all existing tests that the script can 38 find will be considered for running. You can specify single test cases to run 39 by specifying test numbers space-separated, like "1 3 5 7 11", and you can 40 specify a range of tests like "45 to 67". 42 Specify tests to not run with a leading exclamation point, like "!66", which [all …]
|
| D | README.md | 1 <!-- 4 SPDX-License-Identifier: curl 5 --> 11 See the "Requires to run" section for prerequisites. 17 To run a specific set of tests (e.g. 303 and 410): 21 To run the tests faster, pass the -j (parallelism) flag: 23 make test TFLAGS="-j10" 26 perl script to run all the tests. The value of `TFLAGS` is passed 29 When you run tests via make, the flags `-a` and `-s` are passed, meaning 30 to continue running tests even after one fails, and to emit short output. [all …]
|
| /external/junit/src/main/java/org/junit/runner/ |
| D | JUnitCore.java | 12 * <code>JUnitCore</code> is a facade for running tests. It supports running JUnit 4 tests, 13 * JUnit 3.8.x tests, and mixtures. To run tests from the command line, run 15 * For one-shot test runs, use the static method {@link #runClasses(Class[])}. 17 * create an instance of {@link org.junit.runner.JUnitCore} first and use it to run the tests. 28 * Run the tests contained in the classes named in the <code>args</code>. 29 * If all tests run successfully, exit with a status of 0. Otherwise exit with a status of 1. 30 * Write feedback while tests are running and write 31 * stack traces for all failed tests after the tests all complete. 33 * @param args names of classes in which to find tests to run 41 * Run the tests contained in <code>classes</code>. Write feedback while the tests [all …]
|
| D | Request.java | 16 * A <code>Request</code> is an abstract description of tests to be run. Older versions of 17 * JUnit did not need such a concept--tests to be run were described either by classes containing 18 * tests or a tree of {@link org.junit.Test}s. However, we want to support filtering and sorting, 19 * so we need a more abstract specification than the tests themselves and a richer 22 …* <p>The flow when JUnit runs tests is that a <code>Request</code> specifies some tests to be run … 23 …{@link org.junit.runner.Runner} is created for each class implied by the <code>Request</code> -> 25 * which is a tree structure of the tests to be run. 31 * Create a <code>Request</code> that, when processed, will run a single test. 32 * This is done by filtering out all other tests. This method is used to support rerunning 33 * single tests. [all …]
|
| /external/llvm/docs/ |
| D | TestingGuide.rst | 18 infrastructure, the tools needed to use it, and how to add and run 19 tests. 28 If you intend to run the :ref:`test-suite <test-suite-overview>`, you will also 29 need a development version of zlib (zlib1g-dev is known to work on several Linux 35 The LLVM testing infrastructure contains two major categories of tests: 36 regression tests and whole programs. The regression tests are contained 38 to always pass -- they should be run before every commit. 40 The whole programs tests are referred to as the "LLVM test suite" (or 41 "test-suite") and are in the ``test-suite`` module in subversion. For 42 historical reasons, these tests are also referred to as the "nightly [all …]
|
| /external/angle/src/tests/perf_tests/ |
| D | README.md | 1 # ANGLE Performance Tests 4 tests for the OpenGL API. `angle_trace_tests` is a suite to run captured traces for correctness and 8 The tests currently run on the Chromium ANGLE infrastructure and report 14 ## Running the Tests 18 test scores are higher-is-better. You should also ensure `is_debug=false` in 22 Variance can be a problem when benchmarking. We have a test harness to run a 23 test repeatedly to find a lower-variance measurement. See `src/tests/run_perf_tests.py`. 27 `run_perf_tests.py` script. Use `--test-suite` to specify your test suite, 28 and `--filter` to specify a test filter. 30 ### Choosing the Test to Run [all …]
|
| /external/igt-gpu-tools/ |
| D | README.md | 5 ----------- 8 drivers. There are many macro-level test suites that get used against the 12 results. Therefore, IGT GPU Tools includes low-level tools and tests 22 The benchmarks require KMS to be enabled. When run with an X Server 23 running, they must be run as root to avoid the authentication 26 Note that a few other microbenchmarks are in tests (like gem_gtt_speed). 28 **tests/** 30 This is a set of automated tests to run against the DRM to validate 31 changes. Many of the tests have subtests, which can be listed by using 32 the --list-subtests command line option and then run using the [all …]
|
| /external/sdv/vsomeip/third_party/boost/algorithm/test/ |
| D | Jamfile.v2 | 1 # Boost algorithm library test suite Jamfile ---------------------------- 3 # Copyright Marshall Clow 2010-2012. Use, modification and 19 test-suite algorithm 20 # Search tests 21 : [ run empty_search_test.cpp unit_test_framework : : : : empty_search_test ] 22 [ run search_test1.cpp unit_test_framework : : : : search_test1 ] 23 [ run search_test2.cpp unit_test_framework : : : : search_test2 ] 24 [ run search_test3.cpp unit_test_framework : : : : search_test3 ] 25 [ run search_test4.cpp unit_test_framework : : : : search_test4 ] 26 [ compile-fail search_fail1.cpp : : : : ] [all …]
|
| /external/python/mobly/mobly/ |
| D | test_runner.py | 7 # http://www.apache.org/licenses/LICENSE-2.0 43 .. code-block:: python 66 tests = None 67 if args.tests: 68 tests = args.tests 77 runner.add_test_class(config, test_class, tests) 79 runner.run() 109 '-c', 110 '--config', 116 '-l', [all …]
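The test_runner.py docstring excerpted above sketches a hand-written entry point that parses -c/--config and --tests, builds a runner, adds test classes, and calls runner.run(). For the common case, a Mobly test module can delegate all of that to the runner's own main helper; the following is a minimal sketch assuming the usual mobly package layout (base_test.BaseTestClass, test_runner.main) — SampleTest and its test method are made-up names:

    # Minimal sketch of a Mobly test module; SampleTest is hypothetical.
    from mobly import base_test
    from mobly import test_runner


    class SampleTest(base_test.BaseTestClass):

        def test_something(self):
            # A real test would exercise the controllers loaded from the
            # config file passed via -c/--config.
            pass


    if __name__ == '__main__':
        # Parses -c/--config and --tests and runs the class, roughly what
        # the longer hand-written entry point in test_runner.py does by hand.
        test_runner.main()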
|
| /external/elfutils/tests/ |
| D | ChangeLog | 1 2023-09-27 Omar Sandoval <osandov@fb.com> 3 * dwarf-getmacros.c (mac): Add DW_MACRO_define_sup, 10 2023-09-27 Omar Sandoval <osandov@fb.com> 12 * run-varlocs.sh: Add entry PC to split units. 14 2023-04-21 Frank Ch. Eigler <fche@redhat.com> 16 * run-debuginfod-IXr.sh: New test. 17 * Makefile.am: Run it, ship it. 19 2023-02-10 Mark Wielaard <mark@klomp.org> 23 2023-02-07 Mark Wielaard <mark@klomp.org> 25 * tests/funcretval.c (handle_function): Check for [all …]
|
| /external/llvm/utils/lit/lit/ |
| D | main.py | 4 lit - LLVM Integrated Tester. 15 import lit.run 80 def write_test_results(run, lit_config, testing_time, output_path): argument 94 # Encode the tests. 95 data['tests'] = tests_data = [] 96 for test in run.tests: 125 def sort_by_incremental_cache(run): argument 129 return -os.path.getmtime(fname) 132 run.tests.sort(key = lambda t: sortIndex(t)) 141 parser = OptionParser("usage: %prog [options] {file-or-path}") [all …]
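The main.py excerpt shows sort_by_incremental_cache ordering run.tests by negative file mtime, apparently so that the most recently touched (e.g. most recently failing) tests run first. A standalone sketch of that sort key, using plain file paths rather than lit's Test objects (the paths in the usage comment are hypothetical):

    import os

    def sort_newest_first(test_paths):
        """Order test files so the most recently modified ones come first.

        Mirrors the negative-mtime key visible in lit's
        sort_by_incremental_cache: newer mtime -> smaller key -> earlier run.
        """
        def sort_index(path):
            try:
                return -os.path.getmtime(path)
            except OSError:
                return 0  # files with no readable mtime sort after touched ones
        return sorted(test_paths, key=sort_index)

    # Hypothetical usage:
    # ordered = sort_newest_first(["tests/a.test", "tests/b.test"])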
|
| /external/python/cpython3/Lib/idlelib/idle_test/ |
| D | README.txt | 1 README FOR IDLE TESTS IN IDLELIB.IDLE_TEST 5 Automated unit tests were added in 3.3 for Python 3.x. 6 To run the tests from a command line: 8 python -m test.test_idle 10 Human-mediated tests were added later in 3.4. 12 python -m idlelib.idle_test.htest 27 contains code needed or possibly needed for gui tests. See the next 28 section if doing gui tests. If not, and not needed for further classes, 42 2. GUI Tests 44 When run as part of the Python test suite, Idle GUI tests need to run [all …]
|
| /external/python/cpython2/Lib/idlelib/idle_test/ |
| D | README.txt | 1 README FOR IDLE TESTS IN IDLELIB.IDLE_TEST 5 Automated unit tests were added in 2.7 for Python 2.x and 3.3 for Python 3.x. 6 To run the tests from a command line: 8 python -m test.test_idle 10 Human-mediated tests were added later in 2.7 and in 3.4. 12 python -m idlelib.idle_test.htest 43 2. GUI Tests 45 When run as part of the Python test suite, Idle GUI tests need to run 50 tests must be 'guarded' by "requires('gui')" in a setUp function or method. 53 To avoid interfering with other GUI tests, all GUI objects must be destroyed and [all …]
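Both IDLE READMEs say that GUI tests must be guarded by "requires('gui')" in a setUp function or method, and that all GUI objects must be destroyed afterwards so other GUI tests are not disturbed. A minimal sketch of such a guarded test, assuming Python 3's test.support.requires helper (the 2.7 equivalent lived in test.test_support) and tkinter; the class and test names are made up:

    import unittest
    from test.support import requires

    class ExampleGuiTest(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            requires('gui')        # skipped unless the 'gui' resource is enabled
            import tkinter as tk
            cls.root = tk.Tk()
            cls.root.withdraw()    # keep the window off-screen

        @classmethod
        def tearDownClass(cls):
            cls.root.update_idletasks()
            cls.root.destroy()     # destroy GUI objects so later tests are unaffected
            del cls.root

        def test_widget_creation(self):
            import tkinter as tk
            text = tk.Text(self.root)
            self.assertIsInstance(text, tk.Text)

    if __name__ == '__main__':
        unittest.main(verbosity=2)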
|
| /external/sdv/vsomeip/third_party/boost/serialization/util/ |
| D | test.jam | 3 # (C) Copyright Robert Ramey 2002-2004. 11 # tests. 28 # enable the tests which don't depend on a particular archive 32 rule run-template ( test-name : sources * : files * : requirements * ) { 34 run 41 <toolset>borland:<cxxflags>"-w-8080 -w-8071 -w-8057 -w-8062 -w-8008 -w-0018 -w-8066" 43 <toolset>gcc:<cxxflags>"-Wno-unused-variable -Wno-long-long" 46 <toolset>darwin:<cxxflags>"-Wno-unused-variable -Wno-long-long" 52 <toolset>msvc:<cxxflags>"-wd4996" 53 <toolset>clang:<variant>debug:<cxxflags>"-fsanitize=memory" [all …]
|
| /external/grpc-grpc/src/python/grpcio_tests/ |
| D | commands.py | 7 # http://www.apache.org/licenses/LICENSE-2.0 53 def run(self): member in GatherProto 70 def run(self): member in BuildPy 75 build_py.build_py.run(self) 79 """Command to run tests without fetching or building anything.""" 81 description = "run tests without fetching or building anything." 91 def run(self): member in TestLite 92 import tests 94 loader = tests.Loader() 95 loader.loadTestsFromNames(["tests"]) [all …]
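The commands.py excerpt defines a TestLite setuptools command whose run() imports the tests package and loads everything under the "tests" name, without fetching or building anything first. A sketch of the same pattern using the stdlib unittest loader in place of the project's custom tests.Loader and runner (the 'tests' package name is taken from the excerpt):

    import unittest
    from setuptools import Command

    class TestLite(Command):
        """Run tests without fetching or building anything (sketch)."""

        description = "run tests without fetching or building anything."
        user_options = []

        def initialize_options(self):
            pass

        def finalize_options(self):
            pass

        def run(self):
            # Assumes an importable 'tests' package, as in the excerpt above.
            suite = unittest.defaultTestLoader.loadTestsFromNames(["tests"])
            result = unittest.TextTestRunner(verbosity=2).run(suite)
            if not result.wasSuccessful():
                raise SystemExit(1)

Registered through cmdclass={"test_lite": TestLite} in setup(), a command like this runs as python setup.py test_lite.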
|
| /external/bcc/.github/workflows/ |
| D | bcc-test.yml | 1 name: BCC Build and tests 6 - master 17 pull-requests: read # to read pull requests (dorny/paths-filter) 21 runs-on: ubuntu-20.04 26 - TYPE: Debug 29 - TYPE: Debug 32 - TYPE: Release 36 - uses: actions/checkout@v2 37 - uses: dorny/paths-filter@v2 42 - 'docker/build/**' [all …]
|
| /external/rust/crates/serde_json/.github/workflows/ |
| D | ci.yml | 12 RUSTFLAGS: -Dwarnings 17 runs-on: ${{matrix.os}}-latest 19 fail-fast: false 22 timeout-minutes: 45 24 - uses: actions/checkout@v3 25 - uses: dtolnay/rust-toolchain@nightly 26 - run: cargo test 27 - run: cargo test --features preserve_order --tests -- --skip ui --exact 28 - run: cargo test --features float_roundtrip --tests -- --skip ui --exact 29 - run: cargo test --features arbitrary_precision --tests -- --skip ui --exact [all …]
|
| /external/python/cpython3/Lib/test/libregrtest/ |
| D | cmdline.py | 9 python -m test [options] [test_name1 [test_name2 ...]] 14 Run Python regression tests. 18 them in alphabetical order (but see -M and -u, below, for exceptions). 23 python -E -Wd -m test [options] [test_name1 ...] 29 -r randomizes test execution order. You can use --randseed=int to provide an 33 -s On the first invocation of regrtest using -s, the first test file found 34 or the first test file given on the command line is run, and the name of 35 the next test is recorded in a file named pynexttest. If run from the 38 the test in pynexttest is run, and the next test is written to pynexttest. 39 When the last test has been run, pynexttest is deleted. In this way it [all …]
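The -s option described above runs a single test per invocation and chains runs through a pynexttest file: the first run starts at the first test, each run records the name of the next test, and the file is deleted once the last test has run. A small sketch of that bookkeeping (the function name and test list are made up):

    import os

    def pick_single_test(all_tests, state_file="pynexttest"):
        """Pick the one test this invocation should run, regrtest -s style."""
        if os.path.exists(state_file):
            with open(state_file) as f:
                current = f.read().strip()   # test recorded by the previous run
        else:
            current = all_tests[0]           # first invocation: start at the top

        idx = all_tests.index(current)
        if idx + 1 < len(all_tests):
            with open(state_file, "w") as f:
                f.write(all_tests[idx + 1])  # remember what the next run should do
        elif os.path.exists(state_file):
            os.remove(state_file)            # last test: clean up the state file

        return current                       # the caller runs just this test

    # Hypothetical usage:
    # print(pick_single_test(["test_os", "test_sys", "test_idle"]))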
|
| /external/tensorflow/tensorflow/compiler/tests/ |
| D | BUILD | 3 load("//tensorflow:tensorflow.bzl", "tf_cuda_cc_test") # buildifier: disable=same-origin-load 6 "//tensorflow/compiler/tests:build_defs.bzl", 35 "//platforms/xla/tests/neural_nets", 82 "no_pip", # TODO(b/149738646): fix pip install so these tests run on kokoro pip 96 "no_pip", # TODO(b/149738646): fix pip install so these tests run on kokoro pip 114 "no_pip", # TODO(b/149738646): fix pip install so these tests run on kokoro pip 134 "no_pip", # TODO(b/149738646): fix pip install so these tests run on kokoro pip 153 "no_pip", # TODO(b/149738646): fix pip install so these tests run on kokoro pip 169 # TensorList ops are not implemented in the on-demand compilation model yet. 174 "no_pip", # TODO(b/149738646): fix pip install so these tests run on kokoro pip [all …]
|
| /external/rappor/ |
| D | regtest.sh | 4 Run end-to-end tests in parallel. 11 run [<pattern> [<lang>]] - run tests matching <pattern> in 14 run-seq [<pattern> [<lang>]] - ditto, except that tests are run 16 run-all - run all tests, in parallel 19 $ ./regtest.sh run-seq unif-small-typical # Run, the unif-small-typical test 20 $ ./regtest.sh run-seq unif-small- # Sequential, the tests containing: 21 # 'unif-small-' 22 $ ./regtest.sh run unif- # Parallel run, matches multiple cases 23 $ ./regtest.sh run-all # Run all tests 25 The <pattern> argument is a regex in 'grep -E' format. (Detail: Don't [all …]
|