| /third_party/openssl/doc/man3/ |
| D | OPENSSL_LH_stats.pod | 7 OPENSSL_LH_node_stats_bio, OPENSSL_LH_node_usage_stats_bio - LHASH statistics 35 hash table. It prints the 'load' and the 'actual load'. The load is 36 the average number of data items per 'bucket' in the hash table. The 37 'actual load' is the average number of items per 'bucket', but only 38 for buckets which contain entries. So the 'actual load' is the 39 average number of searches that will need to find an item in the hash 40 table, while the 'load' is the average number that will be done to 62 Copyright 2000-2022 The OpenSSL Project Authors. All Rights Reserved.
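
The 'load' vs. 'actual load' distinction described in this man page excerpt can be illustrated with a small sketch (plain Python for illustration, not the OpenSSL LHASH API; the bucket contents are made up):

```python
# 'load'        = items per bucket, averaged over every bucket
# 'actual load' = items per bucket, averaged only over non-empty buckets
buckets = [[], ["a"], ["b", "c"], [], ["d", "e", "f"]]   # hypothetical hash table

items = sum(len(b) for b in buckets)
load = items / len(buckets)                  # average over all buckets
non_empty = [b for b in buckets if b]
actual_load = items / len(non_empty)         # average over buckets that hold entries

print(f"load={load:.2f} actual_load={actual_load:.2f}")  # load=1.20 actual_load=2.00
```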
|
| /third_party/icu/docs/ide4c/vscode/ |
| D | README.md | 1 <!--- © 2020 and later: Unicode, Inc. and others. ---> 2 <!--- License & terms of use: http://www.unicode.org/copyright.html ---> 6 - Create a `.vscode` folder in icu4c/source 7 - Copy the `tasks.json`, `launch.json` and `c_cpp_properties.json` files into 9 - To test only specific test targets, specify them under `args` in 11 - To adjust the parallelism when building, adjust the `args` in `tasks.json`. 12 - `-l20` tells VSCode to not launch jobs if the system load average is above 13 20 (note that the [system load 14 average](https://en.wikipedia.org/wiki/Load_(computing)) is *not* a CPU 16 - `-j24` limits the number of jobs launched in parallel to 24. The system [all …]
|
| D | tasks.json | 21 "--debug", // Enable debug mode. 22 "-j24", // Launch no more than 24 jobs in parallel 23 "-l20" // Stop launching jobs if system load average is too high 31 "-j24", // Launch no more than 24 jobs in parallel 32 "-l20" // Stop launching jobs if system load average is too high
|
| /third_party/mesa3d/src/gallium/drivers/r600/ |
| D | r600_query.c | 18 * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL 62 rscreen->b.fence_reference(&rscreen->b, &query->fence, NULL); in r600_query_sw_destroy() 98 switch(query->b.type) { in r600_query_sw_begin() 103 query->begin_result = rctx->num_draw_calls; in r600_query_sw_begin() 106 query->begin_result = rctx->num_decompress_calls; in r600_query_sw_begin() 109 query->begin_result = rctx->num_mrt_draw_calls; in r600_query_sw_begin() 112 query->begin_result = rctx->num_prim_restart_calls; in r600_query_sw_begin() 115 query->begin_result = rctx->num_spill_draw_calls; in r600_query_sw_begin() 118 query->begin_result = rctx->num_compute_calls; in r600_query_sw_begin() 121 query->begin_result = rctx->num_spill_compute_calls; in r600_query_sw_begin() [all …]
|
| /third_party/python/Doc/library/ |
| D | tracemalloc.rst | 1 :mod:`tracemalloc` --- Trace memory allocations 11 -------------- 18 total size, number and average size of allocated memory blocks 23 variable to ``1``, or by using :option:`-X` ``tracemalloc`` command line 30 :option:`-X` ``tracemalloc=25`` command line option. 34 -------- 58 <frozen importlib._bootstrap>:716: size=4855 KiB, count=39328, average=126 B 59 <frozen importlib._bootstrap>:284: size=521 KiB, count=3199, average=167 B 60 /usr/lib/python3.4/collections/__init__.py:368: size=244 KiB, count=2315, average=108 B 61 /usr/lib/python3.4/unittest/case.py:381: size=185 KiB, count=779, average=243 B [all …]
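
The excerpt above describes enabling tracing via `PYTHONTRACEMALLOC` or `-X tracemalloc=25` and the per-line statistics output. A minimal, self-contained usage sketch (the allocations being observed are made up):

```python
# Programmatic equivalent of running Python with `-X tracemalloc=25`:
# start tracing, allocate something, then print per-line statistics
# (each entry shows size, count and average size, as in the doc's sample output).
import tracemalloc

tracemalloc.start(25)                       # keep up to 25 frames per traceback

data = [bytes(1000) for _ in range(1000)]   # hypothetical allocations to observe

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)
```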
|
| /third_party/ltp/utils/benchmark/kernbench-0.42/ |
| D | README | 15 for the average of each group of runs and logs them to kernbench.log 20 Ideally it should be run in single user mode on a non-journalled filesystem. 37 kernbench [-n runs] [-o jobs] [-s] [-H] [-O] [-M] [-h] [-v] 41 H : don't perform half load runs (default do) 42 O : don't perform optimal load runs (default do) 43 M : don't perform maximal load runs (default do) 51 Changed -j to at least 4GB ram. 62 detect -j2 and change to -j3. Added syncs. Improved warnings and 65 v0.20 Change to average of runs, add options to choose which runs to perform
|
| D | kernbench | 20 echo "kernbench [-n runs] [-o jobs] [-s] [-H] [-O] [-M] [-h] [-v]" 24 echo "H : don't perform half load runs (default do)" 25 echo "O : don't perform optimal load runs (default do)" 26 echo "M : don't perform maximal load runs (default do)" 42 if [[ ! -f include/linux/kernel.h ]] ; then 50 if [[ ! -a $iname ]] ; then 58 if [[ $nruns -gt 0 ]] ; then 60 elif [[ $fast_run -eq 1 ]]; then 75 if [[ ! -d /proc ]] ; then 81 if [[ $mem -lt 4000000 && $max_runs -gt 0 ]] ; then [all …]
|
| /third_party/python/Lib/test/libregrtest/ |
| D | win_utils.py | 11 # Exponential damping factor to compute exponentially weighted moving average 14 # Initialize the load using the arithmetic mean of the first NVALUE values 22 the system load on Windows. A "raw" thread is used here to prevent 27 # Pre-flight test for access to the performance data; 92 # We use an exponentially weighted moving average, imitating the 93 # load calculation on Unix systems. 94 # https://en.wikipedia.org/wiki/Load_(computing)#Unix-style_load_calculation 98 + processor_queue_length * (1.0 - LOAD_FACTOR_1)) 117 _wait(self._stopped, -1)
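
The excerpt above refers to an exponentially weighted moving average that imitates Unix-style load calculation. A standalone sketch of that update rule (the damping factor and sample values below are assumptions for illustration; the real module derives its constants from its own sampling interval):

```python
# One EWMA update per sample: new_load = old_load * k + sample * (1 - k),
# where k is an exponential damping factor tied to the sampling interval.
import math

SAMPLING_INTERVAL = 5                               # seconds between samples (assumed)
LOAD_FACTOR_1 = math.exp(-SAMPLING_INTERVAL / 60)   # damping factor for a 1-minute average

load = 0.0
for processor_queue_length in (0, 2, 1, 3, 0, 1):   # hypothetical queue-length samples
    load = load * LOAD_FACTOR_1 + processor_queue_length * (1.0 - LOAD_FACTOR_1)
    print(f"estimated 1-minute load: {load:.2f}")
```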
|
| /third_party/mindspore/mindspore-src/source/tests/st/networks/ |
| D | test_mindcv_overfit.py | 7 # http://www.apache.org/licenses/LICENSE-2.0 16 # For more details, please refer to MindCV (https://github.com/mindspore-lab/mindcv) 201 … data1 = ms.Tensor(np.load(os.path.join(test_datapath, "image.npy")))[: args.batch_size, :, :, :] 202 data2 = ms.Tensor(np.load(os.path.join(test_datapath, "label.npy")))[: args.batch_size] 204 …data1 = ms.Tensor(np.load(os.path.join(test_datapath, "image_299.npy")))[: args.batch_size, :, :, … 205 data2 = ms.Tensor(np.load(os.path.join(test_datapath, "label_299.npy")))[: args.batch_size] 216 step_time = time() - step_start 229 compile_time = first_step_time - step_time 231 if i == train_steps - 1: 235 print(f"Average step time is: {average_step_time:.2f}ms") [all …]
|
| /third_party/lz4/tests/ |
| D | README.md | 5 - `datagen` : Synthetic and parametrable data generator, for tests 6 - `frametest` : Test tool that checks lz4frame integrity on target platform 7 - `fullbench` : Precisely measure speed for each lz4 inner functions 8 - `fuzzer` : Test tool, to check lz4 integrity on target platform 9 - `test-lz4-speed.py` : script for testing lz4 speed difference between commits 10 - `test-lz4-versions.py` : compatibility test between lz4 versions stored on Github 13 #### `test-lz4-versions.py` - script for testing lz4 interoperability between versions 20 #### `test-lz4-speed.py` - script for testing lz4 speed difference between commits 28 If second results are also lower than `lowerLimit` the warning e-mail is sent to recipients from th… 31 - To be sure that speed results are accurate the script should be run on a "stable" target system w… [all …]
|
| /third_party/musl/libc-test/src/functionalext/supplement/legacy/ |
| D | getloadavg.c | 7 * http://www.apache.org/licenses/LICENSE-2.0 22 …* @tc.desc : Verify the average number of processes in the system running queue in different … 28 double load[3]; in getloadavg_0100() local 30 result = getloadavg(load, -1); in getloadavg_0100() 31 EXPECT_EQ("getloadavg_0100", result, -1); in getloadavg_0100() 32 result = getloadavg(load, INT_MIN); in getloadavg_0100() 33 EXPECT_EQ("getloadavg_0100", result, -1); in getloadavg_0100() 35 result = getloadavg(load, 0); in getloadavg_0100() 38 result = getloadavg(load, 1); in getloadavg_0100() 40 result = getloadavg(load, 2); in getloadavg_0100() [all …]
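
For reference, the `getloadavg(3)` interface exercised by this C test is also exposed in Python as `os.getloadavg()`; a minimal sketch of the successful case (not part of the test file itself):

```python
# os.getloadavg() wraps the same libc call the test above exercises:
# it returns the 1-, 5- and 15-minute load averages, or raises OSError
# if the values cannot be obtained.
import os

one, five, fifteen = os.getloadavg()
print(f"load average: {one:.2f}, {five:.2f}, {fifteen:.2f}")
```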
|
| /third_party/skia/third_party/externals/oboe/src/common/ |
| D | StabilizedCallback.cpp | 8 * http://www.apache.org/licenses/LICENSE-2.0 47 int64_t durationSinceEpochNanos = startTimeNanos - mEpochTimeNanos; in onAudioReady() 53 int64_t idealStartTimeNanos = (mFrameCount * kNanosPerSecond) / oboeStream->getSampleRate(); in onAudioReady() 54 int64_t lateStartNanos = durationSinceEpochNanos - idealStartTimeNanos; in onAudioReady() 63 int64_t numFramesAsNanos = (numFrames * kNanosPerSecond) / oboeStream->getSampleRate(); in onAudioReady() 65 (numFramesAsNanos * kPercentageOfCallbackToUse) - lateStartNanos); in onAudioReady() 67 Trace::beginSection("Actual load"); in onAudioReady() 68 DataCallbackResult result = mCallback->onAudioReady(oboeStream, audioData, numFrames); in onAudioReady() 71 int64_t executionDurationNanos = AudioClock::getNanoseconds() - startTimeNanos; in onAudioReady() 72 int64_t stabilizingLoadDurationNanos = targetDurationNanos - executionDurationNanos; in onAudioReady() [all …]
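
The arithmetic visible in this excerpt budgets how much synthetic load to generate after the real callback work finishes. A rough sketch of that calculation (all values, including the fraction of the callback period to use, are hypothetical):

```python
# Sketch of the stabilization budget: take a fraction of the callback period,
# subtract how late the callback started and how long the real work took,
# and the remainder is how long to run synthetic load.
NANOS_PER_SECOND = 1_000_000_000
sample_rate = 48_000
num_frames = 192
fraction_of_callback_to_use = 0.8    # assumed, analogous to kPercentageOfCallbackToUse

num_frames_as_nanos = num_frames * NANOS_PER_SECOND // sample_rate
late_start_nanos = 150_000           # hypothetical
execution_duration_nanos = 1_200_000 # hypothetical

target_duration_nanos = int(num_frames_as_nanos * fraction_of_callback_to_use) - late_start_nanos
stabilizing_load_nanos = target_duration_nanos - execution_duration_nanos
print(f"run synthetic load for {stabilizing_load_nanos} ns")
```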
|
| /third_party/rust/crates/nix/src/sys/ |
| D | sysinfo.rs | 13 // The fields are c_ulong on 32-bit linux, u64 on 64-bit linux; x32's ulong is u32 20 /// Returns the load average tuple. 22 /// The returned values represent the load average over time intervals of 24 pub fn load_average(&self) -> (f64, f64, f64) { in load_average() 35 pub fn uptime(&self) -> Duration { in uptime() 41 pub fn process_count(&self) -> u16 { in process_count() 46 pub fn swap_total(&self) -> u64 { in swap_total() 51 pub fn swap_free(&self) -> u64 { in swap_free() 56 pub fn ram_total(&self) -> u64 { in ram_total() 65 pub fn ram_unused(&self) -> u64 { in ram_unused() [all …]
|
| /third_party/skia/src/shaders/gradients/ |
| D | Sk4fLinearGradient.cpp | 4 * Use of this source code is governed by a BSD-style license that can be 38 n -= 4; in ramp() 101 SkASSERT(fIntervals->count() > 0); in LinearGradient4fContext() 102 fCachedInterval = fIntervals->begin(); in LinearGradient4fContext() 107 SkASSERT(in_range(fx, fIntervals->front().fT0, fIntervals->back().fT1)); in findInterval() 111 SkASSERT(fCachedInterval >= fIntervals->begin()); in findInterval() 112 SkASSERT(fCachedInterval < fIntervals->end()); in findInterval() 113 const int search_dir = fDstToPos.getScaleX() >= 0 ? 1 : -1; in findInterval() 114 while (!in_range(fx, fCachedInterval->fT0, fCachedInterval->fT1)) { in findInterval() 116 if (fCachedInterval >= fIntervals->end()) { in findInterval() [all …]
|
| /third_party/ffmpeg/libavfilter/dnn/ |
| D | dnn_backend_native_layer_avgpool.h | 18 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA 37 * @brief Load Average Pooling Layer. 39 * It assigns the Average Pooling layer with AvgPoolParams 54 * @brief Execute the Average Pooling Layer. 60 * @param parameters average pooling parameters
|
| /third_party/typescript/src/testRunner/unittests/tsserver/ |
| D | versionCache.ts | 7 return lineIndex.absolutePositionOfStartOfLine(line) + (col - 1); 37 lineIndex.load(lines); 51 lineIndex.load([]); 60 lineIndex.load(lines); 109 lineIndex.load(lines); 128 validateEditAtPosition(lines[0].length - 1, lines[1].length, ""); 132 validateEditAtPosition(lineMap[lineMap.length - 2], lines[lines.length - 1].length, ""); 221 lineIndex.load(lines); 227 la[j] = Math.floor(Math.random() * (totalChars - rsa[j])); 236 ela[j] = Math.floor(Math.random() * (etotalChars - ersa[j])); [all …]
|
| /third_party/vk-gl-cts/doc/testspecs/GLES2/ |
| D | performance.txt | 1 ------------------------------------------------------------------------- 3 ----------------------------------------------- 11 http://www.apache.org/licenses/LICENSE-2.0 18 ------------------------------------------------------------------------- 30 - Use of any other features should be kept at minimum. 31 - Architecture-specific optimizations should not affect result unless they 33 - Measurement overhead should be minimized. 36 - In most cases total throughput is more important than latency. 37 - Latency is measured where it matters. 38 - Test cases should behave like other graphics-intensive applications on [all …]
|
| /third_party/vk-gl-cts/doc/testspecs/GLES3/ |
| D | performance.txt | 1 ------------------------------------------------------------------------- 3 ----------------------------------------------- 11 http://www.apache.org/licenses/LICENSE-2.0 18 ------------------------------------------------------------------------- 30 - Use of any other features should be kept at minimum. 31 - Architecture-specific optimizations should not affect result unless they 33 - Measurement overhead should be minimized. 36 - In most cases total throughput is more important than latency. 37 - Latency is measured where it matters. 38 - Test cases should behave like other graphics-intensive applications on [all …]
|
| /third_party/python/Lib/ |
| D | tracemalloc.py | 58 average = self.size / self.count 59 text += ", average=%s" % _format_size(average, False) 105 average = self.size / self.count 106 text += ", average=%s" % _format_size(average, False) 126 stat.size, stat.size - previous.size, 127 stat.count, stat.count - previous.count) 135 stat = StatisticDiff(traceback, 0, -stat.size, 0, -stat.count) 240 frame_slice = self[-limit:] 341 filename = filename[:-1] 434 def load(filename): member in Snapshot [all …]
|
| /third_party/ffmpeg/libavcodec/aarch64/ |
| D | me_cmp_neon.S | 18 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA 33 ld1 {v0.16b}, [x1], x3 // load pix1 34 ld1 {v4.16b}, [x2], x3 // load pix2 35 ld1 {v1.16b}, [x1], x3 // load pix1 36 ld1 {v5.16b}, [x2], x3 // load pix2 39 ld1 {v2.16b}, [x1], x3 // load pix1 40 ld1 {v6.16b}, [x2], x3 // load pix2 47 sub w4, w4, #4 // h -= 4 62 ld1 {v0.16b}, [x1], x3 // load pix1 63 ld1 {v4.16b}, [x2], x3 // load pix2 [all …]
|
| /third_party/mindspore/mindspore-src/source/tests/st/networks/models/fasterrcnn/ |
| D | test_fasterrcnn_overfit.py | 7 # http://www.apache.org/licenses/LICENSE-2.0 39 "--config", 45 "--ckpt", 51 "--test_data", 56 parser.add_argument("--seed", type=int, default=1234, help="runtime seed") 58 … "--ms_mode", type=int, default=0, help="Running in GRAPH_MODE(0) or PYNATIVE_MODE(1) (default=0)" 60 …parser.add_argument("--ms_loss_scaler", type=str, default="static", help="train loss scaler, stati… 61 …parser.add_argument("--ms_loss_scaler_value", type=float, default=256.0, help="static loss scale v… 62 …parser.add_argument("--num_parallel_workers", type=int, default=8, help="num parallel worker for d… 63 …parser.add_argument("--device_target", type=str, default="Ascend", help="device target, Ascend/GPU… [all …]
|
| /third_party/toybox/tests/ |
| D | uptime.test | 3 [ -f testing.sh ] && . testing.sh 7 testing "uptime" "uptime | grep -q 'load average:' && echo t" "t\n" "" "" 8 testing "uptime -s" \ 9 …"uptime -s | grep -q '^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] [0-9][0-9]:[0-9][0-9]:[0-9][0-9]…
|
| /third_party/skia/site/docs/user/sample/ |
| D | viewer.md | 2 --- 6 --- 10 * Observe rendering performance - placing the Viewer in stats mode displays average frame times. 11 * Try different rendering methods - it's possible to cycle among the three rendering methods: raste… 14 Some slides require resources stored outside the program. These resources are stored in the `<skia-… 17 ---------------------------- 21 bin/gn gen out/Release --args='is_debug=false' 22 ninja -C out/Release viewer 24 To load resources in the desktop Viewers, use the `--resourcePath` option: 26 <skia-path>/out/Release/viewer --resourcePath <skia-path>/resources [all …]
|
| /third_party/toybox/toys/other/ |
| D | uptime.c | 1 /* uptime.c - Tell how long the system has been running. 14 usage: uptime [-ps] 17 of users, and the system load averages for the past 1, 5 and 15 minutes. 19 -p Pretty (human readable) uptime 20 -s Since when has the system been up? 41 t -= info.uptime; in uptime_main() 69 xprintf(" %02d:%02d:%02d up ", tm->tm_hour, tm->tm_min, tm->tm_sec); in uptime_main() 76 while ((entry = getutxent())) if (entry->ut_type == USER_PROCESS) users++; in uptime_main() 80 printf(" load average: %.02f, %.02f, %.02f\n", info.loads[0]/65536.0, in uptime_main()
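
The `printf` visible in this excerpt divides the raw `sysinfo()` load values by 65536.0 because the kernel reports them as 16.16 fixed-point numbers. A tiny sketch of that conversion (raw values are made up):

```python
# sysinfo() load averages are scaled by 2**16; dividing by 65536.0 recovers
# the familiar floating-point 1/5/15-minute values.
raw_loads = (34570, 28180, 25690)            # hypothetical sysinfo().loads values

one, five, fifteen = (value / 65536.0 for value in raw_loads)
print(f" load average: {one:.2f}, {five:.2f}, {fifteen:.2f}")
```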
|
| /third_party/skia/third_party/externals/swiftshader/third_party/subzero/docs/ |
| D | ASAN.rst | 4 AddressSanitizer is a powerful compile-time tool used to detect and report 7 <https://www.usenix.org/system/files/conference/atc12/atc12-final39.pdf>`_. 18 Furthermore, pnacl-clang automatically inlines some calls to calloc(), 20 sz-clang.py and sz-clang++.py, that normally just pass their arguments 21 through to pnacl-clang or pnacl-clang++, but add instrumentation to 23 -fsanitize-address. 27 sz-clang.py -fsanitize-address -o hello.nonfinal.pexe hello.c 28 pnacl-finalize --no-strip-syms -o hello.pexe hello.nonfinal.pexe 29 pnacl-sz -fsanitize-address -filetype=obj -o hello.o hello.pexe 31 The resulting object file must be linked with the Subzero-specific [all …]
|