Searched refs:perf (Results 1 – 25 of 480) sorted by relevance

/external/toolchain-utils/crosperf/perf_files/
perf.data.report.0
1 # To display the perf.data header info, please use --header/--header-only options.
16 0.48% 1297 chrome perf-24199.map [.] 0x0000115bb6c35d7a
17 0.47% 1286 chrome perf-24199.map [.] 0x0000115bb7ba9b54
20 0.37% 991 chrome perf-24199.map [.] 0x0000115bb6c35d72
21 0.28% 762 chrome perf-24199.map [.] 0x0000115bb6c35d76
22 0.27% 735 chrome perf-24199.map [.] 0x0000115bb6aa463a
23 0.22% 608 chrome perf-24199.map [.] 0x0000115bb7ba9ebf
24 0.17% 468 chrome perf-24199.map [.] 0x0000115bb6a7afc3
26 0.17% 450 chrome perf-24199.map [.] 0x0000115bb6af7457
27 0.16% 444 chrome perf-24199.map [.] 0x0000115bb7c6edd1
[all …]
/external/v8/tools/turbolizer/
README.md
22 Optionally, profiling data generated by the perf tools in linux can be merged
23 with the .json files using the turbolizer-perf.py file included. The following
24 command is an example of using the perf script:
26 perf script -i perf.data.jitted -s turbolizer-perf.py turbo-main.json
29 which, when uploaded to turbolizer, will display the event counts from perf next
33 Using the python interface in perf script requires python-dev to be installed
34 and perf be recompiled with python support enabled. Once recompiled, the
35 variable PERF_EXEC_PATH must be set to the location of the recompiled perf
49 In order to generate perf data that matches exactly with the turbofan trace, you
52 necessary disassembly for linking with the perf profile.
[all …]
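
A minimal sketch of driving the merge step described above from Python; the command and file names are exactly the ones in the README snippet, the PERF_EXEC_PATH requirement comes from the quoted lines, and the path given for it is a placeholder.

    import os
    import subprocess

    # The README notes that perf must be rebuilt with Python scripting support and
    # that PERF_EXEC_PATH must point at that build; this path is a placeholder.
    env = dict(os.environ, PERF_EXEC_PATH="/path/to/recompiled/perf")

    # Same command as in the README: merge the perf samples into the turbofan trace.
    subprocess.run(
        ["perf", "script", "-i", "perf.data.jitted",
         "-s", "turbolizer-perf.py", "turbo-main.json"],
        env=env, check=True)
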
/external/perf_data_converter/src/quipper/
perf_stat.proto
9 // Stores output generated by the "perf stat" command.
11 // See https://perf.wiki.kernel.org/index.php/Tutorial#Counting_with_perf_stat
16 // All lines printed by "perf stat".
19 // The command line used to run "perf stat".
22 // Represents one line of "perf stat" output.
25 // Time since the start of the "perf stat" command, in milliseconds.
27 // When running "perf stat" and printing the counters at the end, this is
30 // Alternatively, "perf stat" can print its stats at regular intervals until
31 // the end of the run. For example, if "perf stat" runs for one second and
33 // a total of five times. According to "perf stat" usage instructions, the
[all …]
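
The comments above describe three things quipper stores: every line "perf stat" printed, the command line it was run with, and (when interval printing is used) a timestamp per line. A rough Python sketch of collecting output in that shape, assuming perf is installed; this is not quipper's actual parser.

    import shlex
    import subprocess

    def run_perf_stat(cmd, interval_ms=1000):
        """Naive sketch: capture "perf stat" output in roughly the shape the proto describes."""
        argv = ["perf", "stat", "-I", str(interval_ms)] + shlex.split(cmd)
        proc = subprocess.run(argv, capture_output=True, text=True)
        return {
            "command_line": " ".join(argv),     # the command line used to run "perf stat"
            "lines": proc.stderr.splitlines(),  # all lines printed (perf stat writes to stderr)
            "interval_ms": interval_ms,         # interval printing, as the comments above describe
        }

    print(run_perf_stat("sleep 1")["lines"])
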
perf_data.proto
10 // Stores information from a perf session generated via running:
11 // "perf record"
13 // See $kernel/tools/perf/design.txt for more details.
86 // Indicates whether we enable perf events after an exec() function call.
157 // Describes a perf.data file attribute.
162 // List of perf file attribute ids. Each id describes an event.
166 // Protobuf version of the perf_event_type struct found in perf/util/event.h.
183 // This message contains information about a perf sample itself, as opposed to
184 // a perf event captured by a sample.
224 // Info about the perf sample containing this event.
[all …]
/external/tensorflow/tensorflow/python/grappler/
cost_analyzer.cc
105 OpPerformance* perf = op_perf_.mutable_op_performance(i); in PreprocessCosts() local
107 perf->set_compute_time(analytical.compute_time()); in PreprocessCosts()
108 perf->set_memory_time(analytical.memory_time()); in PreprocessCosts()
109 double measured_cost = perf->compute_cost(); in PreprocessCosts()
114 perf->set_compute_efficiency(-INFINITY); in PreprocessCosts()
116 perf->set_compute_efficiency(analytical_compute_cost / measured_cost); in PreprocessCosts()
122 perf->set_memory_efficiency(-INFINITY); in PreprocessCosts()
124 perf->set_memory_efficiency(analytical_memory_cost / measured_cost); in PreprocessCosts()
252 const auto& perf = op_perf_.op_performance(i); in PrintAnalysis() local
253 string op_name = perf.op().op(); in PrintAnalysis()
[all …]
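
The lines above compute compute and memory efficiency as the ratio of the analytical (modelled) cost to the measured cost, falling back to -INFINITY when the ratio cannot be formed. An illustrative Python rendering of that arithmetic, not TensorFlow's API; the guard condition is an assumption, since the branch is not visible in the snippet.

    import math

    def efficiency(analytical_cost, measured_cost):
        # Mirrors the ratio above; the zero/negative guard is assumed.
        if measured_cost <= 0:
            return -math.inf
        return analytical_cost / measured_cost

    # An op with an analytical compute cost of 40us but a measured cost of 100us
    # would be reported as 40% compute-efficient.
    print(efficiency(40.0, 100.0))  # 0.4
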
/external/autotest/server/site_tests/network_WiFi_AssocConfigPerformance/
network_WiFi_AssocConfigPerformance.py
139 discovery = [perf['discovery_time'] for perf in connect_disconnect]
140 assoc = [perf['association_time'] for perf in connect_disconnect]
141 config = [perf['configuration_time'] for perf in connect_disconnect]
160 connect_times = [perf['connect_time'] for perf in suspend_resume]
161 total_times = [perf['total_time'] for perf in suspend_resume]
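
The comprehensions above pull per-iteration timings out of lists of result dicts keyed by 'discovery_time', 'association_time', and so on. A small sketch of summarizing those lists; the summarize helper and the sample numbers are mine, not part of the autotest.

    from statistics import mean, stdev

    def summarize(samples, key):
        """Summarize one timing key (e.g. 'discovery_time') across a list of perf dicts."""
        values = [perf[key] for perf in samples]
        return {"mean": mean(values), "stdev": stdev(values) if len(values) > 1 else 0.0}

    # Shape assumed from the comprehensions above; the numbers are made up.
    connect_disconnect = [
        {"discovery_time": 0.9, "association_time": 0.3, "configuration_time": 1.1},
        {"discovery_time": 1.1, "association_time": 0.4, "configuration_time": 1.0},
    ]
    print(summarize(connect_disconnect, "discovery_time"))
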
/external/skia/infra/pathkit/
Makefile
9 docker build -t perf-karma-chrome-tests -f ./docker/perf-karma-chrome-tests/Dockerfile .
15 CGO_ENABLED=0 GOOS=linux go build -o ./tmp/perf-aggregator -a ./perf/
26 docker tag perf-karma-chrome-tests gcr.io/skia-public/perf-karma-chrome-tests:${CHROME_VERSION}
27 docker push gcr.io/skia-public/perf-karma-chrome-tests:${CHROME_VERSION}
/external/skqp/infra/pathkit/
Makefile
9 docker build -t perf-karma-chrome-tests -f ./docker/perf-karma-chrome-tests/Dockerfile .
15 CGO_ENABLED=0 GOOS=linux go build -o ./tmp/perf-aggregator -a ./perf/
26 docker tag perf-karma-chrome-tests gcr.io/skia-public/perf-karma-chrome-tests:${CHROME_VERSION}
27 docker push gcr.io/skia-public/perf-karma-chrome-tests:${CHROME_VERSION}
/external/icu/icu4c/source/test/perf/dicttrieperf/
dicttrieperf.cpp
84 PackageLookup(const DictionaryTriePerfTest &perf) { in PackageLookup() argument
86 CharString filename(perf.getSourceDir(), errorCode); in PackageLookup()
140 BinarySearchPackageLookup(const DictionaryTriePerfTest &perf) in BinarySearchPackageLookup() argument
141 : PackageLookup(perf) { in BinarySearchPackageLookup()
249 PrefixBinarySearchPackageLookup(const DictionaryTriePerfTest &perf) in PrefixBinarySearchPackageLookup() argument
250 : BinarySearchPackageLookup(perf) {} in PrefixBinarySearchPackageLookup()
276 BytesTriePackageLookup(const DictionaryTriePerfTest &perf) in BytesTriePackageLookup() argument
277 : PackageLookup(perf) { in BytesTriePackageLookup()
332 DictLookup(const DictionaryTriePerfTest &perfTest) : perf(perfTest) {} in DictLookup()
335 return perf.numTextLines; in getOperationsPerIteration()
[all …]
/external/chromium-trace/catapult/devil/devil/android/perf/
perf_control_devicetest.py
14 from devil.android.perf import perf_control
26 perf = perf_control.PerfControl(self._device)
28 perf.SetPerfProfilingMode()
29 cpu_info = perf.GetCpuInfo()
30 self.assertEquals(len(perf._cpu_files), len(cpu_info))
35 perf.SetDefaultPerfMode()
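
Based only on the calls visible in this devicetest, a hedged usage sketch of devil's PerfControl: put the device into its perf-profiling mode around a measurement and always restore the defaults. The device_utils import and the serial number are assumptions; the test obtains its device elsewhere.

    from devil.android import device_utils   # assumed import; only perf_control appears above
    from devil.android.perf import perf_control

    device = device_utils.DeviceUtils('0123456789abcdef')  # placeholder serial
    perf = perf_control.PerfControl(device)
    try:
        perf.SetPerfProfilingMode()     # pin CPU settings for stable profiling, as in the test
        cpu_info = perf.GetCpuInfo()
        print('per-cpu entries:', len(cpu_info))
        # ... run the workload being profiled here ...
    finally:
        perf.SetDefaultPerfMode()       # always restore the device's default CPU settings
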
/external/perf_data_converter/
README.md
3 The `perf_to_profile` binary can be used to turn a perf.data file, which is
4 generated by the linux profiler, perf, into a profile.proto file which can be
46 Profile a command using perf, for example:
48 perf record /bin/ls
51 The example command will generate a profile named perf.data, you
56 perf_to_profile perf.data profile.pb
62 pprof -web perf.data
68 Note that perf data converter and quipper projects do not use GitHub pull
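
The README lines above describe a record / convert / view pipeline. A sketch of the same three commands driven from Python (shell is the more natural fit; subprocess is used only to keep this listing's examples in one language), using exactly the binaries named above.

    import subprocess

    # 1. Profile a command with Linux perf (writes ./perf.data), as in the README.
    subprocess.run(["perf", "record", "/bin/ls"], check=True)

    # 2. Convert the perf.data file into a profile.proto.
    subprocess.run(["perf_to_profile", "perf.data", "profile.pb"], check=True)

    # 3. View it with pprof; the README also shows pprof reading perf.data directly.
    subprocess.run(["pprof", "-web", "profile.pb"], check=True)
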
/external/skia/platform_tools/android/bin/
android_perf
69 $ADB shell /data/local/tmp/simpleperf record -p ${APP_PID} -o /data/local/tmp/perf.data sleep 70
71 $ADB pull /data/local/tmp/perf.data $PERF_TMP_DIR/perf.data
77 adb_pull_if_needed /data/local/tmp/perf.data $PERF_TMP_DIR/perf.data
78 $SKIA_OUT/perfhost_report.py -i $PERF_TMP_DIR/perf.data --symfs=$PERF_TMP_DIR ${runVars[@]}
/external/skqp/platform_tools/android/bin/
android_perf
69 $ADB shell /data/local/tmp/simpleperf record -p ${APP_PID} -o /data/local/tmp/perf.data sleep 70
71 $ADB pull /data/local/tmp/perf.data $PERF_TMP_DIR/perf.data
77 adb_pull_if_needed /data/local/tmp/perf.data $PERF_TMP_DIR/perf.data
78 $SKIA_OUT/perfhost_report.py -i $PERF_TMP_DIR/perf.data --symfs=$PERF_TMP_DIR ${runVars[@]}
/external/autotest/client/site_tests/kernel_PerfEventRename/src/
Makefile
5 perf-rename-test: perf-rename-test.c
6 $(CC) -g perf-rename-test.c -o perf-rename-test -lpthread
/external/jacoco/org.jacoco.core.test/src/org/jacoco/core/test/perf/
PerformanceSuite.java
12 package org.jacoco.core.test.perf;
16 import org.jacoco.core.test.perf.targets.Target01;
17 import org.jacoco.core.test.perf.targets.Target02;
18 import org.jacoco.core.test.perf.targets.Target03;
/external/autotest/client/site_tests/platform_Perf/
control
7 PURPOSE = 'Verify that the perf tool works properly.'
10 Successfully collect a perf data profile and verify that the contents are well
22 Runs 'perf record' and 'perf report'.
/external/v8/tools/testrunner/local/
android.py
42 from devil.android.perf import cache_control # pylint: disable=import-error
43 from devil.android.perf import perf_control # pylint: disable=import-error
192 perf = perf_control.PerfControl(self.device)
193 perf.SetHighPerfMode()
197 perf = perf_control.PerfControl(self.device)
198 perf.SetDefaultPerfMode()
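
The same PerfControl class appears here, but the v8 runner pins the device to its high-performance mode rather than the profiling mode. A hedged sketch of that pattern; the wrapper function is mine, and device acquisition is elided as it is in the snippet.

    from devil.android.perf import perf_control

    def run_with_high_perf(device, benchmark):
        """Sketch: hold the device in high-performance mode for the duration of a run."""
        perf = perf_control.PerfControl(device)
        perf.SetHighPerfMode()            # as in android.py above
        try:
            return benchmark()
        finally:
            perf.SetDefaultPerfMode()     # drop back to the default mode afterwards
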
/external/skqp/site/dev/testing/
skiaperf.md
4 [Skia Perf](https://perf.skia.org) is a Polymer-based web application for
16 powerful is [k-means clustering](https://perf.skia.org/t/). This tool groups
28 …io of playback time in ms to the number of ops for desk\_wowwiki.skp](https://perf.skia.org/#1876):
35 …he data to answer questions like [how many tests were run per commit](https://perf.skia.org/#1878).
39 See Skia Perf for the [full list of functions available](https://perf.skia.org/help).
/external/skia/site/dev/testing/
skiaperf.md
4 [Skia Perf](https://perf.skia.org) is a Polymer-based web application for
16 powerful is [k-means clustering](https://perf.skia.org/t/). This tool groups
28 …io of playback time in ms to the number of ops for desk\_wowwiki.skp](https://perf.skia.org/#1876):
35 …he data to answer questions like [how many tests were run per commit](https://perf.skia.org/#1878).
39 See Skia Perf for the [full list of functions available](https://perf.skia.org/help).
/external/vulkan-validation-layers/layers/
vk_layer_settings.txt
44 # perf - Report using the API in a way that may cause suboptimal performance.
59 lunarg_core_validation.report_flags = error,warn,perf
64 lunarg_object_tracker.report_flags = error,warn,perf
69 lunarg_parameter_validation.report_flags = error,warn,perf
74 google_threading.report_flags = error,warn,perf
79 google_unique_objects.report_flags = error,warn,perf
/external/swiftshader/third_party/llvm-7.0/llvm/docs/
Benchmarking.rst
20 * Use a high resolution timer, e.g. perf under linux.
56 program you are benchmarking. If using perf, leave at least 2 cores
57 so that perf runs in one and your program in another::
74 cset shield --exec -- perf stat -r 10 <cmd>
77 particular perf command runs the ``<cmd>`` 10 times and reports
80 With these in place you can expect perf variations of less than 0.1%.
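
A sketch of invoking the quoted command from Python, assuming cset and perf are installed and the shielded cpuset has already been created as the document describes; the benchmarked command is a placeholder.

    import subprocess

    cmd = ["./my_benchmark"]   # placeholder for the <cmd> in the document

    # Run the benchmark 10 times inside the shielded cpuset and let perf stat
    # report the mean and variance, exactly as the quoted command line does.
    subprocess.run(["cset", "shield", "--exec", "--",
                    "perf", "stat", "-r", "10"] + cmd, check=True)
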
/external/adeb/
TODO
3 - symlink whichever perf is installed to /usr/bin/perf
4 - patch perf with the new futex contention script
/external/autotest/client/site_tests/kernel_PerfEventRename/
control
7 Test to determine if the kernel's perf event architecture can handle
10 A test program which changes its name using prctl runs under perf and runs
13 perf again to produce a report. The report is checked to see that the samples
19 Fails if perf report doesn't contain the correct executable name
/external/tensorflow/tensorflow/core/grappler/costs/
utils.cc
311 OpPerformance* perf = ret.add_op_performance(); in CostGraphToOpPerformanceData() local
312 perf->set_node(node.name()); in CostGraphToOpPerformanceData()
316 *perf->mutable_op() = BuildOpInfoWithoutDevice(node, name_to_node, inputs); in CostGraphToOpPerformanceData()
317 *perf->mutable_op()->mutable_device() = GetDeviceInfo(cost_node->device()); in CostGraphToOpPerformanceData()
319 perf->set_temporary_memory_size(cost_node->temporary_memory_size()); in CostGraphToOpPerformanceData()
322 perf->set_compute_cost(cost_node->compute_cost() * 1000); in CostGraphToOpPerformanceData()
323 perf->set_compute_time(cost_node->compute_time() * 1000); in CostGraphToOpPerformanceData()
324 perf->set_memory_time(cost_node->memory_time() * 1000); in CostGraphToOpPerformanceData()
327 perf->mutable_op_memory()->add_output_memory(output_info.size()); in CostGraphToOpPerformanceData()
330 perf->mutable_op_memory()->set_temp_memory( in CostGraphToOpPerformanceData()
[all …]
/external/autotest/client/site_tests/platform_DebugDaemonPerfDataInFeedbackLogs/
control
7 PURPOSE = "Verify that feedback logs contain perf data"
10 GetFeedbackLogs must contain a perf profile.
21 Exercises the debugd GetFeedbackLogs API and checks for a valid perf profile.
