
Searched refs:optimizations (Results 1 – 25 of 852) sorted by relevance


/external/tensorflow/tensorflow/core/kernels/data/
optimize_dataset_op.cc
67 std::vector<tstring> optimizations; in MakeDataset() local
70 ctx, ParseVectorArgument<tstring>(ctx, kOptimizations, &optimizations)); in MakeDataset()
91 optimizations = SelectOptimizations( in MakeDataset()
102 if (std::find(optimizations.begin(), optimizations.end(), experiment) != in MakeDataset()
103 optimizations.end()) { in MakeDataset()
119 if (std::find(optimizations.begin(), optimizations.end(), experiment) == in MakeDataset()
120 optimizations.end()) { in MakeDataset()
121 optimizations.push_back(experiment); in MakeDataset()
127 if (optimizations.empty()) { in MakeDataset()
133 auto config_factory = [this, &optimizations]() { in MakeDataset()
[all …]
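The optimize_dataset_op.cc hits above show a dedup-merge pattern: an experiment is appended only if `std::find` does not locate it in the already-selected optimizations. A minimal Python sketch of that pattern (function and parameter names are illustrative, not TensorFlow's API):

```python
def merge_experiments(optimizations, experiments):
    """Append each experiment not already selected, mirroring the
    std::find(...) == optimizations.end() / push_back pattern above."""
    for exp in experiments:
        if exp not in optimizations:  # absent -> append, else skip
            optimizations.append(exp)
    return optimizations
```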
dataset_utils_test.cc
312 std::vector<tstring> optimizations = SelectOptimizations( in TEST_P() local
322 EXPECT_THAT(optimizations, UnorderedElementsAre("exp2", "exp3", "exp4", in TEST_P()
328 EXPECT_THAT(optimizations, in TEST_P()
334 EXPECT_THAT(optimizations, UnorderedElementsAre("exp6", "exp7")); in TEST_P()
361 std::vector<tstring> optimizations = SelectOptimizations( in TEST_P() local
367 EXPECT_THAT(optimizations, UnorderedElementsAre()); in TEST_P()
371 EXPECT_THAT(optimizations, in TEST_P()
375 EXPECT_THAT(optimizations, UnorderedElementsAre("exp4", "exp5")); in TEST_P()
378 EXPECT_THAT(optimizations, UnorderedElementsAre("exp2", "exp4", "exp5")); in TEST_P()
383 EXPECT_THAT(optimizations, UnorderedElementsAre("exp2", "exp3", "exp4")); in TEST_P()
[all …]
/external/libaom/libaom/build/cmake/
aom_config_defaults.cmake
33 set_aom_detect_var(HAVE_NEON 0 "Enables NEON intrinsics optimizations.")
36 set_aom_detect_var(HAVE_DSPR2 0 "Enables DSPR2 optimizations.")
37 set_aom_detect_var(HAVE_MIPS32 0 "Enables MIPS32 optimizations.")
38 set_aom_detect_var(HAVE_MIPS64 0 "Enables MIPS64 optimizations. ")
39 set_aom_detect_var(HAVE_MSA 0 "Enables MSA optimizations.")
42 set_aom_detect_var(HAVE_VSX 0 "Enables VSX optimizations.")
45 set_aom_detect_var(HAVE_AVX 0 "Enables AVX optimizations.")
46 set_aom_detect_var(HAVE_AVX2 0 "Enables AVX2 optimizations.")
47 set_aom_detect_var(HAVE_MMX 0 "Enables MMX optimizations. ")
48 set_aom_detect_var(HAVE_SSE 0 "Enables SSE optimizations.")
[all …]
/external/libpng/
configure.ac
305 AC_ARG_ENABLE([hardware-optimizations],
306 AS_HELP_STRING([[[--enable-hardware-optimizations]]],
307 [Enable hardware optimizations: =no/off, yes/on:]),
313 [Disable ARM_NEON optimizations])
316 [Disable MIPS_MSA optimizations])
319 [Disable POWERPC VSX optimizations])
322 [Disable INTEL_SSE optimizations])
330 [Enable ARM_NEON optimizations])
335 [Enable MIPS_MSA optimizations])
340 [Enable Intel SSE optimizations])
[all …]
config.h.in
69 /* Turn on ARM Neon optimizations at run-time */
75 /* Enable ARM Neon optimizations */
78 /* Enable Intel SSE optimizations */
81 /* Turn on MIPS MSA optimizations at run-time */
87 /* Enable MIPS MSA optimizations */
90 /* Turn on POWERPC VSX optimizations at run-time */
96 /* Enable POWERPC VSX optimizations */
/external/tensorflow/tensorflow/core/api_def/base_api/
api_def_OptimizeDatasetV2.pbtxt
13 A `tf.string` vector `tf.Tensor` identifying user enabled optimizations.
19 A `tf.string` vector `tf.Tensor` identifying user disabled optimizations.
25 A `tf.string` vector `tf.Tensor` identifying optimizations by default.
28 summary: "Creates a dataset by applying related optimizations to `input_dataset`."
30 Creates a dataset by applying related optimizations to `input_dataset`.
api_def_OptimizeDataset.pbtxt
11 name: "optimizations"
13 A `tf.string` vector `tf.Tensor` identifying optimizations to use.
16 summary: "Creates a dataset by applying optimizations to `input_dataset`."
18 Creates a dataset by applying optimizations to `input_dataset`.
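Read together, the OptimizeDatasetV2 descriptions name three string vectors: user-enabled, user-disabled, and default optimizations. How these combine is not stated in the hits; one plausible rule, sketched here purely as an assumption, is "defaults minus disabled, plus enabled":

```python
def effective_optimizations(enabled, disabled, default):
    # Assumed combination rule (not taken from the source): start from
    # the defaults, drop user-disabled entries, add user-enabled ones.
    return sorted((set(default) - set(disabled)) | set(enabled))
```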
/external/llvm-project/llvm/docs/HistoricalNotes/
2001-06-01-GCCOptimizations2.txt
10 If we were to reimplement any of these optimizations, I assume that we
14 Static optimizations, xlation unit at a time:
17 Link time optimizations:
20 Of course, many optimizations could be shared between llvmopt and
24 > BTW, about SGI, "borrowing" SSA-based optimizations from one compiler and
31 optimizations are written in C++ and are actually somewhat
35 > But your larger point is valid that adding SSA based optimizations is
46 optimization" happens right along with other data optimizations (ie, CSE
49 As far as REAL back end optimizations go, it looks something like this:
2001-06-01-GCCOptimizations.txt
7 Take a look at this document (which describes the order of optimizations
31 I've marked optimizations with a [t] to indicate things that I believe to
36 optimizations are done on the tree representation].
38 Given the lack of "strong" optimizations that would take a long time to
41 SSA based optimizations that could be adapted (besides the fact that their
/external/llvm/docs/HistoricalNotes/
2001-06-01-GCCOptimizations2.txt
10 If we were to reimplement any of these optimizations, I assume that we
14 Static optimizations, xlation unit at a time:
17 Link time optimizations:
20 Of course, many optimizations could be shared between llvmopt and
24 > BTW, about SGI, "borrowing" SSA-based optimizations from one compiler and
31 optimizations are written in C++ and are actually somewhat
35 > But your larger point is valid that adding SSA based optimizations is
46 optimization" happens right along with other data optimizations (ie, CSE
49 As far as REAL back end optimizations go, it looks something like this:
2001-06-01-GCCOptimizations.txt
7 Take a look at this document (which describes the order of optimizations
31 I've marked optimizations with a [t] to indicate things that I believe to
36 optimizations are done on the tree representation].
38 Given the lack of "strong" optimizations that would take a long time to
41 SSA based optimizations that could be adapted (besides the fact that their
/external/mesa3d/src/compiler/nir/
nir_opt_algebraic.py
92 optimizations = [ variable
162 optimizations.extend([
188 optimizations.extend([
255 optimizations.extend([
273 optimizations.extend([
289 optimizations.extend([
300 optimizations.extend([
523 optimizations.extend([
541 optimizations.extend([
630 optimizations.extend([
[all …]
/external/tensorflow/tensorflow/lite/tools/make/targets/
rpi_makefile.inc
12 -funsafe-math-optimizations \
19 -funsafe-math-optimizations \
39 -funsafe-math-optimizations \
46 -funsafe-math-optimizations \
bbb_makefile.inc
10 -funsafe-math-optimizations \
17 -funsafe-math-optimizations \
/external/llvm-project/openmp/docs/
index.rst
20 middle-end :ref:`optimizations <llvm_openmp_optimizations>`, up to the
36 2020), has an :doc:`OpenMP-Aware optimization pass <optimizations/OpenMPOpt>`
37 as well as the ability to :doc:`perform "scalar optimizations" across OpenMP region
38 boundaries <optimizations/OpenMPUnawareOptimizations>`.
40 In-depth discussion of the topic can be found :doc:`here <optimizations/Overview>`.
46 optimizations/Overview
57 The OpenMP optimizations in LLVM have been developed with remark support as a
/external/libffi/testsuite/lib/
libffi.exp
485 set optimizations [ list $env(LIBFFI_TEST_OPTIMIZATION) ]
487 set optimizations { "-O0" "-O2" }
493 set optimizations [ list $env(LIBFFI_TEST_OPTIMIZATION) ]
495 set optimizations { "-O0" "-O2" }
502 set optimizations [ list $env(LIBFFI_TEST_OPTIMIZATION) ]
504 set optimizations { "" }
538 foreach opt $optimizations {
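The libffi.exp fragment selects compiler optimization flags from the LIBFFI_TEST_OPTIMIZATION environment variable, falling back to a default list when it is unset. The same selection logic, re-sketched in Python (the function name is illustrative; only the env-var name comes from the hits):

```python
import os

def select_test_optimizations(env=None, default=("-O0", "-O2")):
    # Mirror the Tcl logic: honor LIBFFI_TEST_OPTIMIZATION when set,
    # otherwise run the tests at the default optimization levels.
    env = os.environ if env is None else env
    value = env.get("LIBFFI_TEST_OPTIMIZATION")
    return [value] if value else list(default)
```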
/external/llvm-project/llvm/test/CodeGen/AMDGPU/
atomic_load_add.ll
1 ; RUN: llc -march=amdgcn -amdgpu-atomic-optimizations=false -verify-machineinstrs < %s | FileCheck …
2 ; RUN: llc -march=amdgcn -mcpu=tonga -mattr=-flat-for-global -amdgpu-atomic-optimizations=false -ve…
3 ; RUN: llc -march=amdgcn -mcpu=gfx900 -amdgpu-atomic-optimizations=false -verify-machineinstrs < %s…
4 ; RUN: llc -march=r600 -mcpu=redwood -amdgpu-atomic-optimizations=false < %s | FileCheck -check-pre…
atomic_load_sub.ll
1 ; RUN: llc -march=amdgcn -amdgpu-atomic-optimizations=false -verify-machineinstrs < %s | FileCheck …
2 ; RUN: llc -march=amdgcn -mcpu=tonga -mattr=-flat-for-global -amdgpu-atomic-optimizations=false -ve…
3 ; RUN: llc -march=amdgcn -mcpu=gfx900 -mattr=-flat-for-global -amdgpu-atomic-optimizations=false -v…
4 ; RUN: llc -march=r600 -mcpu=redwood -amdgpu-atomic-optimizations=false < %s | FileCheck -enable-va…
/external/adhd/cras/
configure.ac
151 AC_ARG_ENABLE(sse42, [AS_HELP_STRING([--enable-sse42],[enable SSE42 optimizations])], have_sse42=$e…
156 AC_DEFINE(HAVE_SSE42,1,[Define to enable SSE42 optimizations.])
163 AC_ARG_ENABLE(avx, [AS_HELP_STRING([--enable-avx],[enable AVX optimizations])], have_avx=$enableval…
168 AC_DEFINE(HAVE_AVX,1,[Define to enable AVX optimizations.])
175 AC_ARG_ENABLE(avx2, [AS_HELP_STRING([--enable-avx2],[enable AVX2 optimizations])], have_avx2=$enabl…
180 AC_DEFINE(HAVE_AVX2,1,[Define to enable AVX2 optimizations.])
187 AC_ARG_ENABLE(fma, [AS_HELP_STRING([--enable-fma],[enable FMA optimizations])], have_fma=$enableval…
192 AC_DEFINE(HAVE_FMA,1,[Define to enable FMA optimizations.])
/external/llvm-project/llvm/docs/CommandGuide/
opt.rst
15 takes LLVM source files as input, runs the specified optimizations or analyses
27 optimized output file. The optimizations available via :program:`opt` depend
30 option to determine what optimizations you can use.
79 applying other optimizations. It is essentially the same as `-strip`
108 line options to enable various optimizations or analyses. To see the new
109 complete list of optimizations, use the :option:`-help` and :option:`-load`
/external/llvm/docs/CommandGuide/
opt.rst
13 takes LLVM source files as input, runs the specified optimizations or analyses
25 optimized output file. The optimizations available via :program:`opt` depend
28 option to determine what optimizations you can use.
77 applying other optimizations. It is essentially the same as :option:`-strip`
106 line options to enable various optimizations or analyses. To see the new
107 complete list of optimizations, use the :option:`-help` and :option:`-load`
/external/proguard/src/proguard/ant/
ConfigurationTask.java
61 … configuration.optimizations = extendClassSpecifications(configuration.optimizations, in appendTo()
62 … this.configuration.optimizations); in appendTo()
215 configuration.optimizations = extendFilter(configuration.optimizations, in addConfiguredOptimization()
/external/tensorflow/tensorflow/core/protobuf/
rewriter_config.proto
36 // Enable some aggressive optimizations that use assumptions that TF graphs
67 // Shape optimizations (default is ON)
76 // Arithmetic optimizations (default is ON)
79 // Control dependency optimizations (default is ON).
82 // Loop optimizations (default is ON).
84 // Function optimizations (default is ON).
120 // Disable optimizations that assume compressed tensors. Note that this flag
187 // optimizations to turn on and the order of the optimizations (replacing the
/external/proguard/src/proguard/gui/
OptimizationsDialog.java
189 public void setFilter(String optimizations) in setFilter() argument
191 StringMatcher filter = optimizations != null && optimizations.length() > 0 ? in setFilter()
192 new ListParser(new NameParser()).parse(optimizations) : in setFilter()
/external/llvm-project/polly/docs/
Architecture.rst
10 optimizations have been derived and applied, optimized LLVM-IR is regenerated
25 optimizations. The second phase consists of three conceptual groups that are
34 optimizations are executed as part of the inliner cycle. Even though they
35 perform some optimizations, their primary goal is still the simplification of
43 optimizations in this phase is vectorization, but also target specific loop
53 with the loop optimizations in the inliner cycle. We only discuss the first two
84 However, due to the many optimizations that LLVM runs before Polly the IR that
