
Searched for "load average" (Results 1 – 25 of 323) sorted by relevance


/third_party/openssl/doc/man3/
OPENSSL_LH_stats.pod
7 OPENSSL_LH_node_stats_bio, OPENSSL_LH_node_usage_stats_bio - LHASH statistics
35 hash table. It prints the 'load' and the 'actual load'. The load is
36 the average number of data items per 'bucket' in the hash table. The
37 'actual load' is the average number of items per 'bucket', but only
38 for buckets which contain entries. So the 'actual load' is the
39 average number of searches that will need to find an item in the hash
40 table, while the 'load' is the average number that will be done to
62 Copyright 2000-2022 The OpenSSL Project Authors. All Rights Reserved.
/third_party/icu/docs/ide4c/vscode/
README.md
1 <!--- © 2020 and later: Unicode, Inc. and others. --->
2 <!--- License & terms of use: http://www.unicode.org/copyright.html --->
6 - Create a `.vscode` folder in icu4c/source
7 - Copy the `tasks.json`, `launch.json` and `c_cpp_properties.json` files into
9 - To test only specific test targets, specify them under `args` in
11 - To adjust the parallelism when building, adjust the `args` in `tasks.json`.
12 - `-l20` tells VSCode to not launch jobs if the system load average is above
13 20 (note that the [system load
14 average](https://en.wikipedia.org/wiki/Load_(computing)) is *not* a CPU
16 - `-j24` limits the number of jobs launched in parallel to 24. The system
[all …]
tasks.json
21 "--debug", // Enable debug mode.
22 "-j24", // Launch no more than 24 jobs in parallel
23 "-l20" // Stop launching jobs if system load average is too high
31 "-j24", // Launch no more than 24 jobs in parallel
32 "-l20" // Stop launching jobs if system load average is too high
/third_party/mesa3d/src/gallium/drivers/r600/
r600_query.c
4 * SPDX-License-Identifier: MIT
44 rscreen->b.fence_reference(&rscreen->b, &query->fence, NULL); in r600_query_sw_destroy()
80 switch(query->b.type) { in r600_query_sw_begin()
85 query->begin_result = rctx->num_draw_calls; in r600_query_sw_begin()
88 query->begin_result = rctx->num_decompress_calls; in r600_query_sw_begin()
91 query->begin_result = rctx->num_mrt_draw_calls; in r600_query_sw_begin()
94 query->begin_result = rctx->num_prim_restart_calls; in r600_query_sw_begin()
97 query->begin_result = rctx->num_spill_draw_calls; in r600_query_sw_begin()
100 query->begin_result = rctx->num_compute_calls; in r600_query_sw_begin()
103 query->begin_result = rctx->num_spill_compute_calls; in r600_query_sw_begin()
[all …]
/third_party/python/Doc/library/
tracemalloc.rst
1 :mod:`tracemalloc` --- Trace memory allocations
11 --------------
18 total size, number and average size of allocated memory blocks
23 variable to ``1``, or by using :option:`-X` ``tracemalloc`` command line
30 :option:`-X` ``tracemalloc=25`` command line option.
34 --------
58 <frozen importlib._bootstrap>:716: size=4855 KiB, count=39328, average=126 B
59 <frozen importlib._bootstrap>:284: size=521 KiB, count=3199, average=167 B
60 /usr/lib/python3.4/collections/__init__.py:368: size=244 KiB, count=2315, average=108 B
61 /usr/lib/python3.4/unittest/case.py:381: size=185 KiB, count=779, average=243 B
[all …]
/third_party/ltp/utils/benchmark/kernbench-0.42/
README
15 for the average of each group of runs and logs them to kernbench.log
20 Ideally it should be run in single user mode on a non-journalled filesystem.
37 kernbench [-n runs] [-o jobs] [-s] [-H] [-O] [-M] [-h] [-v]
41 H : don't perform half load runs (default do)
42 O : don't perform optimal load runs (default do)
43 M : don't perform maximal load runs (default do)
51 Changed -j to at least 4GB ram.
62 detect -j2 and change to -j3. Added syncs. Improved warnings and
65 v0.20 Change to average of runs, add options to choose which runs to perform
/third_party/python/Lib/test/libregrtest/
win_utils.py
11 # Exponential damping factor to compute exponentially weighted moving average
14 # Initialize the load using the arithmetic mean of the first NVALUE values
22 the system load on Windows. A "raw" thread is used here to prevent
27 # Pre-flight test for access to the performance data;
92 # We use an exponentially weighted moving average, imitating the
93 # load calculation on Unix systems.
94 # https://en.wikipedia.org/wiki/Load_(computing)#Unix-style_load_calculation
98 + processor_queue_length * (1.0 - LOAD_FACTOR_1))
117 _wait(self._stopped, -1)
/third_party/lz4/tests/
README.md
5 - `datagen` : Synthetic and parametrable data generator, for tests
6 - `frametest` : Test tool that checks lz4frame integrity on target platform
7 - `fullbench` : Precisely measure speed for each lz4 inner functions
8 - `fuzzer` : Test tool, to check lz4 integrity on target platform
9 - `test-lz4-speed.py` : script for testing lz4 speed difference between commits
10 - `test-lz4-versions.py` : compatibility test between lz4 versions stored on Github
13 #### `test-lz4-versions.py` - script for testing lz4 interoperability between versions
20 #### `test-lz4-speed.py` - script for testing lz4 speed difference between commits
28 If second results are also lower than `lowerLimit` the warning e-mail is sent to recipients from th…
31 - To be sure that speed results are accurate the script should be run on a "stable" target system w…
[all …]
/third_party/musl/libc-test/src/functionalext/supplement/legacy/
getloadavg.c
7 * http://www.apache.org/licenses/LICENSE-2.0
22 …* @tc.desc : Verify the average number of processes in the system running queue in different …
28 double load[3]; in getloadavg_0100() local
30 result = getloadavg(load, -1); in getloadavg_0100()
31 EXPECT_EQ("getloadavg_0100", result, -1); in getloadavg_0100()
32 result = getloadavg(load, INT_MIN); in getloadavg_0100()
33 EXPECT_EQ("getloadavg_0100", result, -1); in getloadavg_0100()
35 result = getloadavg(load, 0); in getloadavg_0100()
38 result = getloadavg(load, 1); in getloadavg_0100()
40 result = getloadavg(load, 2); in getloadavg_0100()
[all …]
/third_party/mindspore/mindspore-src/source/tests/st/networks/
test_mindcv_overfit.py
7 # http://www.apache.org/licenses/LICENSE-2.0
16 # For more details, please refer to MindCV (https://github.com/mindspore-lab/mindcv)
201 … data1 = ms.Tensor(np.load(os.path.join(test_datapath, "image.npy")))[: args.batch_size, :, :, :]
202 data2 = ms.Tensor(np.load(os.path.join(test_datapath, "label.npy")))[: args.batch_size]
204 …data1 = ms.Tensor(np.load(os.path.join(test_datapath, "image_299.npy")))[: args.batch_size, :, :, …
205 data2 = ms.Tensor(np.load(os.path.join(test_datapath, "label_299.npy")))[: args.batch_size]
216 step_time = time() - step_start
229 compile_time = first_step_time - step_time
231 if i == train_steps - 1:
235 print(f"Average step time is: {average_step_time:.2f}ms")
[all …]
/third_party/ffmpeg/libavfilter/dnn/
dnn_backend_native_layer_avgpool.h
18 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
37 * @brief Load Average Pooling Layer.
39 * It assigns the Average Pooling layer with AvgPoolParams
54 * @brief Execute the Average Pooling Layer.
60 * @param parameters average pooling parameters
/third_party/skia/third_party/externals/oboe/src/common/
StabilizedCallback.cpp
8 * http://www.apache.org/licenses/LICENSE-2.0
47 int64_t durationSinceEpochNanos = startTimeNanos - mEpochTimeNanos; in onAudioReady()
53 int64_t idealStartTimeNanos = (mFrameCount * kNanosPerSecond) / oboeStream->getSampleRate(); in onAudioReady()
54 int64_t lateStartNanos = durationSinceEpochNanos - idealStartTimeNanos; in onAudioReady()
63 int64_t numFramesAsNanos = (numFrames * kNanosPerSecond) / oboeStream->getSampleRate(); in onAudioReady()
65 (numFramesAsNanos * kPercentageOfCallbackToUse) - lateStartNanos); in onAudioReady()
67 Trace::beginSection("Actual load"); in onAudioReady()
68 DataCallbackResult result = mCallback->onAudioReady(oboeStream, audioData, numFrames); in onAudioReady()
71 int64_t executionDurationNanos = AudioClock::getNanoseconds() - startTimeNanos; in onAudioReady()
72 int64_t stabilizingLoadDurationNanos = targetDurationNanos - executionDurationNanos; in onAudioReady()
[all …]
/third_party/rust/crates/nix/src/sys/
sysinfo.rs
13 // The fields are c_ulong on 32-bit linux, u64 on 64-bit linux; x32's ulong is u32
20 /// Returns the load average tuple.
22 /// The returned values represent the load average over time intervals of
24 pub fn load_average(&self) -> (f64, f64, f64) { in load_average()
35 pub fn uptime(&self) -> Duration { in uptime()
41 pub fn process_count(&self) -> u16 { in process_count()
46 pub fn swap_total(&self) -> u64 { in swap_total()
51 pub fn swap_free(&self) -> u64 { in swap_free()
56 pub fn ram_total(&self) -> u64 { in ram_total()
65 pub fn ram_unused(&self) -> u64 { in ram_unused()
[all …]
/third_party/mesa3d/src/freedreno/ds/
fd_pps_driver.cc
3 * SPDX-License-Identifier: MIT
131 * https://gpuinspector.dev/docs/gpu-counters/qualcomm in setup_a6xx_counters()
150 return percent(PERF_SP_BUSY_CYCLES / time, max_freq * info->num_sp_cores); in setup_a6xx_counters()
156 return percent(PERF_SP_STALL_CYCLES_TP / time, max_freq * info->num_sp_cores); in setup_a6xx_counters()
162 return percent(PERF_PC_STALL_CYCLES_VFD / time, max_freq * info->num_sp_cores); in setup_a6xx_counters()
183 return percent(PERF_UCHE_STALL_CYCLES_ARBITER / time, max_freq * info->num_sp_cores); in setup_a6xx_counters()
187 counter("Pre-clipped Polygons / Second", Counter::Units::None, [=]() { in setup_a6xx_counters()
202 counter("Average Vertices / Polygon", Counter::Units::None, [=]() { in setup_a6xx_counters()
212 counter("Average Polygon Area", Counter::Units::None, [=]() { in setup_a6xx_counters()
349 counter("% Non-Base Level Textures", Counter::Units::Percent, [=]() { in setup_a6xx_counters()
[all …]
/third_party/skia/src/shaders/gradients/
Sk4fLinearGradient.cpp
4 * Use of this source code is governed by a BSD-style license that can be
38 n -= 4; in ramp()
101 SkASSERT(fIntervals->count() > 0); in LinearGradient4fContext()
102 fCachedInterval = fIntervals->begin(); in LinearGradient4fContext()
107 SkASSERT(in_range(fx, fIntervals->front().fT0, fIntervals->back().fT1)); in findInterval()
111 SkASSERT(fCachedInterval >= fIntervals->begin()); in findInterval()
112 SkASSERT(fCachedInterval < fIntervals->end()); in findInterval()
113 const int search_dir = fDstToPos.getScaleX() >= 0 ? 1 : -1; in findInterval()
114 while (!in_range(fx, fCachedInterval->fT0, fCachedInterval->fT1)) { in findInterval()
116 if (fCachedInterval >= fIntervals->end()) { in findInterval()
[all …]
/third_party/typescript/src/testRunner/unittests/tsserver/
versionCache.ts
9 return lineIndex.absolutePositionOfStartOfLine(line) + (col - 1);
39 lineIndex.load(lines);
53 lineIndex.load([]);
62 lineIndex.load(lines);
111 lineIndex.load(lines);
130 validateEditAtPosition(lines[0].length - 1, lines[1].length, "");
134 validateEditAtPosition(lineMap[lineMap.length - 2], lines[lines.length - 1].length, "");
223 lineIndex.load(lines);
229 la[j] = Math.floor(Math.random() * (totalChars - rsa[j]));
238 ela[j] = Math.floor(Math.random() * (etotalChars - ersa[j]));
[all …]
/third_party/vk-gl-cts/doc/testspecs/GLES3/
performance.txt
1 -------------------------------------------------------------------------
3 -----------------------------------------------
11 http://www.apache.org/licenses/LICENSE-2.0
18 -------------------------------------------------------------------------
30 - Use of any other features should be kept at minimum.
31 - Architecture-specific optimizations should not affect result unless they
33 - Measurement overhead should be minimized.
36 - In most cases total throughput is more important than latency.
37 - Latency is measured where it matters.
38 - Test cases should behave like other graphics-intensive applications on
[all …]
/third_party/vk-gl-cts/doc/testspecs/GLES2/
performance.txt
1 -------------------------------------------------------------------------
3 -----------------------------------------------
11 http://www.apache.org/licenses/LICENSE-2.0
18 -------------------------------------------------------------------------
30 - Use of any other features should be kept at minimum.
31 - Architecture-specific optimizations should not affect result unless they
33 - Measurement overhead should be minimized.
36 - In most cases total throughput is more important than latency.
37 - Latency is measured where it matters.
38 - Test cases should behave like other graphics-intensive applications on
[all …]
/third_party/python/Lib/
tracemalloc.py
58 average = self.size / self.count
59 text += ", average=%s" % _format_size(average, False)
105 average = self.size / self.count
106 text += ", average=%s" % _format_size(average, False)
126 stat.size, stat.size - previous.size,
127 stat.count, stat.count - previous.count)
135 stat = StatisticDiff(traceback, 0, -stat.size, 0, -stat.count)
240 frame_slice = self[-limit:]
341 filename = filename[:-1]
434 def load(filename): member in Snapshot
[all …]
/third_party/ffmpeg/libavcodec/aarch64/
me_cmp_neon.S
18 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
33 ld1 {v0.16b}, [x1], x3 // load pix1
34 ld1 {v4.16b}, [x2], x3 // load pix2
35 ld1 {v1.16b}, [x1], x3 // load pix1
36 ld1 {v5.16b}, [x2], x3 // load pix2
39 ld1 {v2.16b}, [x1], x3 // load pix1
40 ld1 {v6.16b}, [x2], x3 // load pix2
47 sub w4, w4, #4 // h -= 4
62 ld1 {v0.16b}, [x1], x3 // load pix1
63 ld1 {v4.16b}, [x2], x3 // load pix2
[all …]
/third_party/toybox/tests/
uptime.test
3 [ -f testing.sh ] && . testing.sh
7 testing "uptime" "uptime | grep -q 'load average:' && echo t" "t\n" "" ""
8 testing "uptime -s" \
9 …"uptime -s | grep -q '^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] [0-9][0-9]:[0-9][0-9]:[0-9][0-9]…
/third_party/mindspore/mindspore-src/source/tests/st/networks/models/fasterrcnn/
test_fasterrcnn_overfit.py
7 # http://www.apache.org/licenses/LICENSE-2.0
39 "--config",
45 "--ckpt",
51 "--test_data",
56 parser.add_argument("--seed", type=int, default=1234, help="runtime seed")
58 … "--ms_mode", type=int, default=0, help="Running in GRAPH_MODE(0) or PYNATIVE_MODE(1) (default=0)"
60 …parser.add_argument("--ms_loss_scaler", type=str, default="static", help="train loss scaler, stati…
61 …parser.add_argument("--ms_loss_scaler_value", type=float, default=256.0, help="static loss scale v…
62 …parser.add_argument("--num_parallel_workers", type=int, default=8, help="num parallel worker for d…
63 …parser.add_argument("--device_target", type=str, default="Ascend", help="device target, Ascend/GPU…
[all …]
/third_party/skia/m133/third_party/externals/libyuv/source/
row_neon.cc
4 * Use of this source code is governed by a BSD-style license
22 // d8-d15, r4-r11,r14(lr) need to be preserved if used. r13(sp),r15(pc) are
154 : [kUVCoeff] "r"(&yuvconstants->kUVCoeff), // %[kUVCoeff] in I444ToARGBRow_NEON()
155 [kRGBCoeffBias] "r"(&yuvconstants->kRGBCoeffBias) // %[kRGBCoeffBias] in I444ToARGBRow_NEON()
178 : [kUVCoeff] "r"(&yuvconstants->kUVCoeff), // %[kUVCoeff] in I422ToARGBRow_NEON()
179 [kRGBCoeffBias] "r"(&yuvconstants->kRGBCoeffBias) // %[kRGBCoeffBias] in I422ToARGBRow_NEON()
204 : [kUVCoeff] "r"(&yuvconstants->kUVCoeff), // %[kUVCoeff] in I444AlphaToARGBRow_NEON()
205 [kRGBCoeffBias] "r"(&yuvconstants->kRGBCoeffBias) // %[kRGBCoeffBias] in I444AlphaToARGBRow_NEON()
230 : [kUVCoeff] "r"(&yuvconstants->kUVCoeff), // %[kUVCoeff] in I422AlphaToARGBRow_NEON()
231 [kRGBCoeffBias] "r"(&yuvconstants->kRGBCoeffBias) // %[kRGBCoeffBias] in I422AlphaToARGBRow_NEON()
[all …]
/third_party/skia/site/docs/user/sample/
viewer.md
2 ---
6 ---
10 * Observe rendering performance - placing the Viewer in stats mode displays average frame times.
11 * Try different rendering methods - it's possible to cycle among the three rendering methods: raste…
14 Some slides require resources stored outside the program. These resources are stored in the `<skia-
17 ----------------------------
21 bin/gn gen out/Release --args='is_debug=false'
22 ninja -C out/Release viewer
24 To load resources in the desktop Viewers, use the `--resourcePath` option:
26 <skia-path>/out/Release/viewer --resourcePath <skia-path>/resources
[all …]
/third_party/toybox/toys/other/
uptime.c
1 /* uptime.c - Tell how long the system has been running.
14 usage: uptime [-ps]
17 of users, and the system load averages for the past 1, 5 and 15 minutes.
19 -p Pretty (human readable) uptime
20 -s Since when has the system been up?
41 t -= info.uptime; in uptime_main()
67 xprintf(" %02d:%02d:%02d up ", tm->tm_hour, tm->tm_min, tm->tm_sec); in uptime_main()
74 while ((entry = getutxent())) if (entry->ut_type == USER_PROCESS) users++; in uptime_main()
78 printf(" load average: %.02f, %.02f, %.02f\n", info.loads[0]/65536.0, in uptime_main()
