
Searched refs:accelerator (Results 1 – 25 of 107) sorted by relevance
/external/tensorflow/tensorflow/core/profiler/g3doc/
advise.md
43 top 1 operation type: SoftmaxCrossEntropyWithLogits, cpu: 1.37sec, accelerator: 0us, total: 1.37sec…
44 top 2 operation type: MatMul, cpu: 427.39ms, accelerator: 280.76ms, total: 708.14ms (13.83%)
45 top 3 operation type: ConcatV2, cpu: 357.83ms, accelerator: 31.80ms, total: 389.63ms (7.61%)
46 seq2seq_attention_model.py:360:build_graph:self._add_seq2seq(), cpu: 3.16sec, accelerator: 214.84ms…
47 …seq2seq_attention_model.py:293:_add_seq2seq:decoder_outputs, ..., cpu: 2.46sec, accelerator: 3.25m…
48 …seq2seq_lib.py:181:sampled_sequence_...:average_across_ti..., cpu: 2.46sec, accelerator: 3.24ms, t…
49 …seq2seq_lib.py:147:sequence_loss_by_...:crossent = loss_f..., cpu: 2.46sec, accelerator: 3.06ms, t…
50 …ntion_model.py:289:sampled_loss_func:num_classes=vsize), cpu: 2.46sec, accelerator: 3.06ms, total:…
51 …ntion_model.py:282:sampled_loss_func:labels = tf.resha..., cpu: 164us, accelerator: 0us, total: 16…
52 …seq2seq_lib.py:148:sequence_loss_by_...:log_perp_list.app..., cpu: 1.33ms, accelerator: 120us, tot…
[all …]
profile_time.md
12 in the graph. An operation can be placed on an accelerator or on CPU.
16 When an operation is placed on accelerator, it will first be scheduled
19 accelerator. While some computation (e.g. pre-processing) is still done
20 in CPU. OpKernel::Compute can dispatch computation on accelerator
21 and return, or it can also wait for the accelerator to finish.
25 * <b>accelerator_micros</b>, which is the part of computation time spent on accelerator.
30 Since accelerator, such as GPU, usually runs operation asynchronously, you
32 accelerator.
python_api.md
54 # Note: When run on accelerator (e.g. GPU), an operation might perform some
55 # cpu computation, enqueue the accelerator computation. The accelerator
57 # times: 1) accelerator computation. 2) cpu computation (might wait on
58 # accelerator). 3) the sum of 1 and 2.
options.md
47 Each accelerator usually performs massive parallel processing. The profiler
51 micros: This is the sum of cpu and accelerator times.
52 accelerator_micros: This is the accelerator times.
87 accelerator_micros and cpu_micros. Note: cpu and accelerator can run in parallel.
89 …cros`: Show nodes that spend at least this number of microseconds to run on accelerator (e.g. GPU).
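The tfprof snippets above describe three timing fields per node: `cpu_micros`, `accelerator_micros`, and `micros` (their sum, even though the two parts may overlap in wall time since CPU and accelerator can run in parallel). A minimal sketch of that relationship (plain Python; the `OpTiming` class and its field names are illustrative, not the tfprof API):

```python
from dataclasses import dataclass

@dataclass
class OpTiming:
    """Illustrative per-op timing record mirroring tfprof's fields."""
    name: str
    cpu_micros: int          # time spent on CPU
    accelerator_micros: int  # time spent on the accelerator (e.g. GPU)

    @property
    def micros(self) -> int:
        # Per options.md: `micros` is the sum of cpu and accelerator times.
        # The two parts can overlap in wall time, since CPU and accelerator
        # may run in parallel.
        return self.cpu_micros + self.accelerator_micros

matmul = OpTiming("MatMul", cpu_micros=427_390, accelerator_micros=280_760)
print(matmul.micros)  # 708150 µs, i.e. roughly the 708.14ms total in advise.md
```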
/external/harfbuzz_ng/src/
hb-subset-plan.hh
214 const hb_subset_accelerator_t* accelerator; member
222 hb_lock_t (accelerator ? &accelerator->sanitized_table_cache_lock : nullptr); in source_table()
224 auto *cache = accelerator ? &accelerator->sanitized_table_cache : sanitized_table_cache; in source_table()
hb-subset-plan.cc
468 if (plan->accelerator) in _populate_unicodes_to_retain()
469 unicode_to_gid = &plan->accelerator->unicode_to_gid; in _populate_unicodes_to_retain()
517 if (!plan->accelerator) { in _populate_unicodes_to_retain()
523 unicode_glyphid_map = &plan->accelerator->unicode_to_gid; in _populate_unicodes_to_retain()
524 cmap_unicodes = &plan->accelerator->unicodes; in _populate_unicodes_to_retain()
527 if (plan->accelerator && in _populate_unicodes_to_retain()
531 auto &gid_to_unicodes = plan->accelerator->gid_to_unicodes; in _populate_unicodes_to_retain()
675 if (!plan->accelerator || plan->accelerator->has_seac) in _populate_gids_to_retain()
913 plan->accelerator = (hb_subset_accelerator_t*) accel; in hb_subset_plan_create_or_fail()
hb-subset-cff1.cc
446 hb_map_t *glyph_to_sid_map = (plan->accelerator && plan->accelerator->cff_accelerator) ? in plan_subset_charset()
447 plan->accelerator->cff_accelerator->glyph_to_sid_map : in plan_subset_charset()
451 ((plan->accelerator && plan->accelerator->cff_accelerator) || in plan_subset_charset()
482 if (!(plan->accelerator && plan->accelerator->cff_accelerator) || in plan_subset_charset()
483 !plan->accelerator->cff_accelerator->glyph_to_sid_map.cmpexch (nullptr, glyph_to_sid_map)) in plan_subset_charset()
/external/mesa3d/include/CL/
cl_ext_intel.h
186 cl_accelerator_intel accelerator,
193 cl_accelerator_intel accelerator,
201 cl_accelerator_intel accelerator) CL_EXT_SUFFIX__VERSION_1_2;
204 cl_accelerator_intel accelerator) CL_EXT_SUFFIX__VERSION_1_2;
208 cl_accelerator_intel accelerator) CL_EXT_SUFFIX__VERSION_1_2;
211 cl_accelerator_intel accelerator) CL_EXT_SUFFIX__VERSION_1_2;
/external/tensorflow/tensorflow/core/protobuf/tpu/
topology.proto
11 // No embedding lookup accelerator available on the tpu.
13 // Embedding lookup accelerator V1. The embedding lookup operation can only
17 // Embedding lookup accelerator V2. The embedding lookup operation can be
/external/tensorflow/tensorflow/python/ops/memory_tests/
custom_gradient_memory_test.py
38 for accelerator in ["GPU", "TPU"]:
39 if config.list_physical_devices(accelerator):
40 return accelerator
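The test snippet above probes for an available accelerator by name. The pattern can be sketched standalone with a stubbed device query (the `list_physical_devices` stub here stands in for `tf.config.list_physical_devices`, which is the real call used in the snippet; `pick_accelerator` and the stub's behavior are illustrative):

```python
def list_physical_devices(device_type):
    """Hypothetical stub for tf.config.list_physical_devices: pretend
    only a GPU is present. The real call returns the matching devices."""
    return ["/physical_device:GPU:0"] if device_type == "GPU" else []

def pick_accelerator(default="CPU"):
    # Mirrors custom_gradient_memory_test.py: return the first accelerator
    # type that reports at least one physical device, else fall back.
    for accelerator in ["GPU", "TPU"]:
        if list_physical_devices(accelerator):
            return accelerator
    return default

print(pick_accelerator())  # GPU, given the stub above
```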
/external/llvm/test/DebugInfo/
dwarfdump-accel.test
3 Gather some DIE indexes to verify the accelerator table contents.
55 Check that an empty accelerator section is handled correctly.
/external/tensorflow/tensorflow/lite/tools/benchmark/
benchmark_performance_options.cc
55 const std::string accelerator = in PerfOptionName() local
57 return accelerator.empty() ? "nnapi(w/o accel name)" in PerfOptionName()
58 : "nnapi(" + accelerator + ")"; in PerfOptionName()
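The PerfOptionName snippet builds a human-readable label from an optional NNAPI accelerator name. The same logic as a standalone sketch (plain Python; the function name and the example accelerator name are illustrative):

```python
def perf_option_name(accelerator: str) -> str:
    """Mirror benchmark_performance_options.cc: label the NNAPI perf
    option with the accelerator name when one is given."""
    if not accelerator:
        return "nnapi(w/o accel name)"
    return "nnapi(" + accelerator + ")"

print(perf_option_name(""))             # nnapi(w/o accel name)
print(perf_option_name("example-dsp"))  # nnapi(example-dsp)
```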
/external/ComputeLibrary/include/CL/
cl_ext.h
1135 cl_accelerator_intel accelerator,
1142 cl_accelerator_intel accelerator,
1150 cl_accelerator_intel accelerator) CL_API_SUFFIX__VERSION_1_2;
1153 cl_accelerator_intel accelerator) CL_API_SUFFIX__VERSION_1_2;
1157 cl_accelerator_intel accelerator) CL_API_SUFFIX__VERSION_1_2;
1160 cl_accelerator_intel accelerator) CL_API_SUFFIX__VERSION_1_2;
/external/angle/include/CL/
cl_ext.h
1080 cl_accelerator_intel accelerator,
1087 cl_accelerator_intel accelerator,
1095 cl_accelerator_intel accelerator) CL_API_SUFFIX__VERSION_1_2;
1098 cl_accelerator_intel accelerator) CL_API_SUFFIX__VERSION_1_2;
1102 cl_accelerator_intel accelerator) CL_API_SUFFIX__VERSION_1_2;
1105 cl_accelerator_intel accelerator) CL_API_SUFFIX__VERSION_1_2;
/external/tensorflow/tensorflow/core/profiler/
tfprof_log.proto
58 // Whether or not the TF device tracer fails to return accelerator
59 // information (which could lead to 0 accelerator execution time).
106 // For accelerator, vector size can be larger than 1, multiple kernel fires
/external/tensorflow/tensorflow/core/profiler/protobuf/
hardware_types.proto
11 // CPU only without any hardware accelerator.
/external/tensorflow/tensorflow/lite/experimental/acceleration/
README.md
3 Experimental library and tools for determining whether an accelerator engine
/external/tensorflow/tensorflow/lite/experimental/acceleration/configuration/
configuration.proto
51 // TFLite accelerator to use.
61 // The Coral EdgeTpu Dev Board / USB accelerator.
89 // Which preference to use this accelerator for.
103 // Which instance (NNAPI accelerator) to use. One driver may provide several
128 // Whether to allow use of NNAPI CPU (nnapi-reference accelerator) on Android
129 // 10+ when an accelerator name is not specified. The NNAPI CPU typically
141 // accelerator. This should only be enabled if the target device supports
146 // Whether to allow the NNAPI accelerator to optionally use lower-precision
411 // Coral Dev Board / USB accelerator delegate settings.
/external/tensorflow/tensorflow/lite/g3doc/guide/
roadmap.md
22 TFLite and hardware accelerator compatibility during-training and
56 * Enhance CMake support (e.g., broader accelerator support).
/external/tensorflow/tensorflow/lite/g3doc/android/delegates/
nnapi.md
139 accelerator. The remainder runs on the CPU, which results in split execution.
140 Due to the high cost of CPU/accelerator synchronization, this may result in
174 A graph that can't be processed completely by an accelerator can fall back to
/external/mbedtls/scripts/data_files/driver_templates/
psa_crypto_driver_wrappers.c.jinja
3 * and appropriate accelerator.
321 /* Fell through, meaning no accelerator supports this operation */
663 /* Fell through, meaning no accelerator supports this operation */
792 /* Fell through, meaning no accelerator supports this operation */
1102 /* Fell through, meaning no accelerator supports this operation */
1175 /* Fell through, meaning no accelerator supports this operation */
1573 /* Fell through, meaning no accelerator supports this operation */
1625 /* Fell through, meaning no accelerator supports this operation */
1673 /* Fell through, meaning no accelerator supports this operation */
1722 /* Fell through, meaning no accelerator supports this operation */
[all …]
/external/tensorflow/tensorflow/python/tpu/
tpu.bzl
38 name: Name of test. Will be prefixed by accelerator versions.
/external/python/cpython3/Modules/
Setup
178 …ir)/Modules/expat -DHAVE_EXPAT_CONFIG_H -DUSE_PYEXPAT_CAPI _elementtree.c # elementtree accelerator
179 #_pickle -DPy_BUILD_CORE_MODULE _pickle.c # pickle accelerator
180 #_datetime _datetimemodule.c # datetime accelerator
181 #_zoneinfo _zoneinfo.c -DPy_BUILD_CORE_MODULE # zoneinfo accelerator
186 #_statistics _statisticsmodule.c # statistics accelerator
/external/OpenCL-CTS/dependencies/ocl-headers/CL/
cl_ext.h
1960 cl_accelerator_intel accelerator,
1967 cl_accelerator_intel accelerator,
1975 cl_accelerator_intel accelerator) CL_API_SUFFIX__VERSION_1_2;
1978 cl_accelerator_intel accelerator) CL_API_SUFFIX__VERSION_1_2;
1982 cl_accelerator_intel accelerator) CL_API_SUFFIX__VERSION_1_2;
1985 cl_accelerator_intel accelerator) CL_API_SUFFIX__VERSION_1_2;
/external/armnn/shim/sl/
README.md
15 …ify "-v" to enable verbose logging and "-c CpuAcc" to direct that the Neon(TM) accelerator be used.