/tools/testing/selftests/powerpc/ptrace/
  ptrace-vsx.h:
     13: int validate_vsx(unsigned long *vsx, unsigned long *load)  [in validate_vsx(), argument]
     18: if (vsx[i] != load[2 * i + 1]) {  [in validate_vsx()]
     20: i, vsx[i], 2 * i + 1, load[2 * i + 1]);  [in validate_vsx()]
     31: int validate_vmx(unsigned long vmx[][2], unsigned long *load)  [in validate_vmx(), argument]
     37: if ((vmx[i][0] != load[64 + 2 * i]) ||  [in validate_vmx()]
     38: (vmx[i][1] != load[65 + 2 * i])) {  [in validate_vmx()]
     41: load[64 + 2 * i]);  [in validate_vmx()]
     44: load[65 + 2 * i]);  [in validate_vmx()]
     51: if ((vmx[i][0] != load[65 + 2 * i]) ||  [in validate_vmx()]
     52: (vmx[i][1] != load[64 + 2 * i])) {  [in validate_vmx()]
     [all …]
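
Pieced together, these fragments show a comparison loop: each 64-bit VSX value is checked against the odd doubleword of the corresponding 128-bit entry in a reference buffer. A minimal sketch of that loop, assuming a VSX_MAX bound and a simple printf on mismatch (both assumptions, not the literal test source):

    #include <stdio.h>

    #define VSX_MAX 32			/* assumed register count */

    /* Compare each VSX value against the odd doubleword of the
     * corresponding 128-bit reference entry; report the first mismatch. */
    int validate_vsx(unsigned long *vsx, unsigned long *load)
    {
    	int i;

    	for (i = 0; i < VSX_MAX; i++) {
    		if (vsx[i] != load[2 * i + 1]) {
    			printf("vsx[%d]: %lx load[%d]: %lx\n",
    			       i, vsx[i], 2 * i + 1, load[2 * i + 1]);
    			return 1;	/* assumed failure convention */
    		}
    	}
    	return 0;
    }

The VMX checks at lines 37-52 follow the same shape, except that each 128-bit value occupies two adjacent doublewords and the two index orders (64+2i/65+2i versus 65+2i/64+2i) presumably handle the two possible endiannesses.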
|
/tools/perf/scripts/python/bin/
  mem-phys-addr-record:
      8: load=`perf list | grep mem_inst_retired.all_loads`
      9: if [ -z "$load" ]; then
     10: load=`perf list | grep mem_uops_retired.all_loads`
     12: if [ -z "$load" ]; then
     17: arg=$(echo $load | tr -d ' ')
|
/tools/power/cpupower/bench/
  README-BENCH:
      9: - Identify average reaction time of a governor to CPU load changes
     34: You can specify load (100% CPU load) and sleep (0% CPU load) times in us which
     38: load=25000
     41: This part of the configuration file will create 25ms load/sleep turns,
     48: Will increase load and sleep time by 25ms 5 times.
     50: 25ms load/sleep time repeated 20 times (cycles).
     51: 50ms load/sleep time repeated 20 times (cycles).
     53: 100ms load/sleep time repeated 20 times (cycles).
     69: 100% CPU load (load) | 0 % CPU load (sleep) | round
     76: In round 1, ondemand should have rather static 50% load and probably
     [all …]
|
  benchmark.c:
     32: unsigned int calculate_timespace(long load, struct config *config)  [in calculate_timespace(), argument]
     41: printf("calibrating load of %lius, please wait...\n", load);  [in calculate_timespace()]
     53: rounds = (unsigned int)(load * estimated / timed);  [in calculate_timespace()]
     88: load_time = config->load;  [in start_benchmark()]
     92: total_time += _round * (config->sleep + config->load);  [in start_benchmark()]
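
Reading these lines together with system.c's prepare_user() below suggests a simple linear schedule: each round's load time is config->load plus config->load_step times the round number, and the total run time is roughly the sum of each round's (sleep + load) turns. A sketch of that arithmetic; the field names rounds, cycles and sleep_step are assumptions where the listing does not show them:

    /* Assumed shape of cpupower bench's config, pieced together from
     * the parse.h and main.c matches in this listing. */
    struct config {
    	long sleep;		/* sleep time per turn, in us */
    	long load;		/* load time per turn, in us */
    	long sleep_step;	/* sleep increase per round, in us (assumed) */
    	long load_step;		/* load increase per round, in us */
    	unsigned int rounds;	/* number of rounds (assumed name) */
    	unsigned int cycles;	/* turns per round (assumed name) */
    };

    /* Per-round load time grows linearly with load_step. */
    static long round_load_time(const struct config *cfg, unsigned int round)
    {
    	return cfg->load + cfg->load_step * (long)round;
    }

    /* Rough wall-clock estimate: every round repeats its load/sleep
     * turn for the configured number of cycles. */
    static long estimated_total_us(const struct config *cfg)
    {
    	long total = 0;
    	unsigned int r;

    	for (r = 0; r < cfg->rounds; r++)
    		total += (long)cfg->cycles *
    			 (cfg->sleep + cfg->sleep_step * (long)r +
    			  round_load_time(cfg, r));
    	return total;
    }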
|
  system.c:
    136: (config->load + config->load_step * round) +  [in prepare_user()]
    137: (config->load + config->load_step * round * 4);  [in prepare_user()]
|
  example.cfg:
      2: load = 50000
|
  main.c:
     97: sscanf(optarg, "%li", &config->load);  [in main()]
    169: config->load,  [in main()]
|
  parse.h:
     11: long load; /* load time in µs */  [member]
|
/tools/testing/selftests/sgx/
  Makefile:
     29: $(OUTPUT)/load.o \
     38: $(OUTPUT)/load.o: load.c
     55: $(OUTPUT)/load.o \
|
/tools/testing/selftests/powerpc/security/
  flush_utils.c:
     21: static inline __u64 load(void *addr)  [function]
     35: load(p + j);  [in syscall_loop()]
     47: load(p + j);  [in syscall_loop_uaccess()]
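
The load() helper here is presumably just a forced read used by syscall_loop() to touch cache lines. A portable sketch consistent with the signature shown; the volatile dereference is an assumption, and since this is powerpc selftest code the real helper may well issue the load via inline assembly instead:

    #include <linux/types.h>	/* __u64 */

    /* Force a read from addr so the compiler cannot optimize it away. */
    static inline __u64 load(void *addr)
    {
    	return *(volatile __u64 *)addr;
    }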
|
/tools/perf/util/
  jitdump.c:
    337: jr->load.pid = bswap_32(jr->load.pid);  [in jit_get_next_entry()]
    338: jr->load.tid = bswap_32(jr->load.tid);  [in jit_get_next_entry()]
    339: jr->load.vma = bswap_64(jr->load.vma);  [in jit_get_next_entry()]
    340: jr->load.code_addr = bswap_64(jr->load.code_addr);  [in jit_get_next_entry()]
    341: jr->load.code_size = bswap_64(jr->load.code_size);  [in jit_get_next_entry()]
    342: jr->load.code_index = bswap_64(jr->load.code_index);  [in jit_get_next_entry()]
    382: return jr->load.pid;  [in jr_entry_pid()]
    389: return jr->load.tid;  [in jr_entry_tid()]
    443: nspid = jr->load.pid;  [in jit_repipe_code_load()]
    446: csize = jr->load.code_size;  [in jit_repipe_code_load()]
    [all …]
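
The pattern at lines 337-342 is the standard cross-endian fix-up: when a jitdump file was produced on a machine with the opposite byte order, every multi-byte field of the record is byte-swapped in place before use. A generic sketch of that idiom; the struct layout here is illustrative, not perf's actual code-load record:

    #include <byteswap.h>
    #include <stdint.h>

    /* Illustrative record; perf's real record carries more fields. */
    struct code_load {
    	uint32_t pid;
    	uint32_t tid;
    	uint64_t vma;
    	uint64_t code_addr;
    	uint64_t code_size;
    	uint64_t code_index;
    };

    /* Swap every field when the file's endianness differs from ours;
     * 32-bit fields use bswap_32(), 64-bit fields bswap_64(). */
    static void code_load_bswap(struct code_load *load)
    {
    	load->pid        = bswap_32(load->pid);
    	load->tid        = bswap_32(load->tid);
    	load->vma        = bswap_64(load->vma);
    	load->code_addr  = bswap_64(load->code_addr);
    	load->code_size  = bswap_64(load->code_size);
    	load->code_index = bswap_64(load->code_index);
    }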
|
/tools/memory-model/Documentation/
  glossary.txt:
      8: based on the value returned by an earlier load, an "address
      9: dependency" extends from that load extending to the later access.
     29: a special operation that includes a load and which orders that
     30: load before later memory references running on that same CPU.
     35: When an acquire load returns the value stored by a release store
     36: to that same variable, (in other words, the acquire load "reads
     38: store "happen before" any operations following that load acquire.
     55: of a value computed from a value returned by an earlier load,
     56: a "control dependency" extends from that load to that store.
     89: on the value returned by an earlier load, a "data dependency"
     [all …]
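
The "address dependency" entry at lines 8-9 is easiest to see in code: the address used by the later access is computed from the value returned by the earlier load, so the two accesses are ordered. A minimal illustration, using a volatile-cast stand-in for the kernel's READ_ONCE() macro:

    /* Stand-in for the kernel's READ_ONCE(), for illustration only. */
    #define READ_ONCE(x)	(*(volatile typeof(x) *)&(x))

    int data;
    int *gp = &data;

    /* Address dependency: the later access's address comes from the
     * value returned by the earlier load, so the two are ordered. */
    int address_dependency_example(void)
    {
    	int *p = READ_ONCE(gp);		/* earlier load */
    	return READ_ONCE(*p);		/* later access, address from p */
    }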
|
  control-dependencies.txt:
     12: Therefore, a load-load control dependency will not preserve ordering
     20: are permitted to predict the result of the load from "b". This prediction
     21: can cause other CPUs to see this load as having happened before the load
     32: (usually) guaranteed for load-store control dependencies, as in the
     42: fuse the load from "a" with other loads. Without the WRITE_ONCE(),
     44: the compiler might convert the store into a load and a check followed
     45: by a store, and this compiler-generated load would not be ordered by
     57: load, it does *not* force the compiler to actually use the loaded value.
     78: WRITE_ONCE(b, 1); /* BUG: No ordering vs. load from a!!! */
     87: Now there is no conditional between the load from "a" and the store to
     [all …]
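
Lines 32, 78 and 87 describe the load-store case and its classic failure mode; spelled out, the contrast looks like this (stand-in macros as in the previous sketch):

    #define READ_ONCE(x)	(*(volatile typeof(x) *)&(x))
    #define WRITE_ONCE(x, v)	(*(volatile typeof(x) *)&(x) = (v))

    int a, b;

    /* Load-store control dependency: the store is ordered after the
     * load because it executes only if the conditional is taken. */
    void ctrl_dep_ok(void)
    {
    	if (READ_ONCE(a))
    		WRITE_ONCE(b, 1);
    }

    /* Identical stores in both branches let the compiler hoist the
     * store out of the conditional, destroying the ordering (the BUG
     * comment at line 78 above). */
    void ctrl_dep_broken(void)
    {
    	if (READ_ONCE(a))
    		WRITE_ONCE(b, 1);
    	else
    		WRITE_ONCE(b, 1);	/* may be merged: no ordering vs. a */
    }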
|
  explanation.txt:
     74: load instructions. The LKMM makes these predictions for code running
     85: Each load instruction must obtain the value written by the most recent
    190: by each load is simply the value written by the most recently executed
    203: P1 must load 0 from buf before P0 stores 1 to it; otherwise r2
    204: would be 1 since a load obtains its value from the most recent
    398: For this code, the LKMM predicts that the load from x will always be
    411: Given this version of the code, the LKMM would predict that the load
    481: a control dependency from the load to the store.
    498: There appears to be a data dependency from the load of x to the store
    505: the value returned by the load from x, which would certainly destroy
    [all …]
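
Lines 203-204 refer to the classic message-passing example built on a buf/flag pair. A user-space rendering of that pattern, where GCC's __atomic builtins stand in for the kernel's WRITE_ONCE(), smp_store_release() and smp_load_acquire():

    int buf, flag;

    void P0(void)
    {
    	__atomic_store_n(&buf, 1, __ATOMIC_RELAXED);	/* WRITE_ONCE(buf, 1) */
    	__atomic_store_n(&flag, 1, __ATOMIC_RELEASE);	/* smp_store_release() */
    }

    void P1(int *r1, int *r2)
    {
    	*r1 = __atomic_load_n(&flag, __ATOMIC_ACQUIRE);	/* smp_load_acquire() */
    	*r2 = __atomic_load_n(&buf, __ATOMIC_RELAXED);	/* READ_ONCE(buf) */
    }

    /* If *r1 == 1, the acquire load read from the release store, so
     * *r2 must also be 1: P1's load of buf happens after P0's store,
     * matching the reasoning quoted above. */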
|
/tools/testing/selftests/bpf/
  test_cpp.cpp:
     42: int load() { return T::load(skel); }  [load() function in Skeleton]
     75: err = skel.load();  [in try_skeleton_template()]
|
  test_bpftool_metadata.sh:
     61: bpftool prog load $BPF_FILE_UNUSED $BPF_DIR/unused
     73: bpftool prog load $BPF_FILE_USED $BPF_DIR/used
|
  generate_udp_fragments.py:
     72: pkt = IP(src=sip,dst=dip) / UDP(sport=sport,dport=dport,chksum=0) / Raw(load=payload)
     75: …c=sip6,dst=dip6) / IPv6ExtHdrFragment(id=0xBEEF) / UDP(sport=sport,dport=dport) / Raw(load=payload)
|
/tools/testing/selftests/bpf/prog_tests/
  ksyms_module.c:
     27: err = bpf_prog_test_run_opts(skel->progs.load.prog_fd, &topts);  [in test_ksyms_module_lskel()]
     54: err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.load), &topts);  [in test_ksyms_module_libbpf()]
|
/tools/perf/Documentation/
  perf-c2c.txt:
     22: On Intel, the tool is based on load latency and precise store facility events
     26: sample load and store operations, therefore hardware and kernel support is
     33: - type of the access (load and store details)
     34: - latency (in cycles) of the load access
    202: - count of Total/Local/Remote load HITMs
    205: - count of Total/Local/Remote load from peer cache or DRAM
    211: - sum of all load accesses
    222: - count of load hits in FB (Fill Buffer), L1 and L2 cache
    225: - count of LLC load accesses, includes LLC hits and LLC HITMs
    228: - count of remote load accesses, includes remote hits and remote HITMs;
    [all …]
|
  perf-mem.txt:
     23: not the pure load (or store latency). Use latency includes any pipeline
     26: On Arm64 this uses SPE to sample load and store operations, therefore hardware
     39: Select the memory operation type: load or store (default: load,store)
    108: - blocked: reason of blocked load access for the data at the time of the sample
|
/tools/perf/scripts/perl/Perf-Trace-Util/lib/Perf/Trace/
  Context.pm:
     23: XSLoader::load('Perf::Trace::Context', $VERSION);
|
/tools/testing/selftests/drivers/net/lib/py/
  __init__.py:
     18: from .load import *
|
/tools/testing/selftests/kexec/
  test_kexec_load.sh:
     32: kexec --load $KERNEL_IMAGE > /dev/null 2>&1
|
/tools/memory-model/litmus-tests/
  LB+poonceonces.litmus:
      6: * Can the counter-intuitive outcome for the load-buffering pattern
|
  LB+poacquireonce+pooncerelease.litmus:
      6: * Does a release-acquire pair suffice for the load-buffering litmus
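
Both files exercise the load-buffering (LB) shape their headers mention: each CPU loads one variable and then stores to the other, and the question is whether both loads can observe the other CPU's store. A C rendering of the shape, with the same stand-in macros as earlier; the litmus files themselves carry the LKMM verdicts:

    #define READ_ONCE(x)	(*(volatile typeof(x) *)&(x))
    #define WRITE_ONCE(x, v)	(*(volatile typeof(x) *)&(x) = (v))

    int x, y;

    void P0(int *r0)
    {
    	*r0 = READ_ONCE(x);	/* load one variable... */
    	WRITE_ONCE(y, 1);	/* ...then store to the other */
    }

    void P1(int *r0)
    {
    	*r0 = READ_ONCE(y);
    	WRITE_ONCE(x, 1);
    }

    /* exists (0:r0 == 1 && 1:r0 == 1)?  With only READ_ONCE()/
     * WRITE_ONCE() (the first test) the LKMM allows this outcome;
     * pairing an acquire load on one CPU with a release store on the
     * other (the second test) forbids it. */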
|