
Searched refs:inference (Results 1 – 25 of 212) sorted by relevance


/external/desugar/test/java/com/google/devtools/build/android/desugar/
ByteCodeTypePrinter.java
88 BytecodeTypeInference inference = new BytecodeTypeInference(access, internalName, name, desc); in visitMethod() local
89 mv = new MethodIrTypeDumper(mv, inference, printWriter); in visitMethod()
90 inference.setDelegateMethodVisitor(mv); in visitMethod()
92 return inference; in visitMethod()
109 private final BytecodeTypeInference inference; field in ByteCodeTypePrinter.MethodIrTypeDumper
114 MethodVisitor visitor, BytecodeTypeInference inference, PrintWriter printWriter) { in MethodIrTypeDumper() argument
116 this.inference = inference; in MethodIrTypeDumper()
121 printer.print(" |__STACK: " + inference.getOperandStackAsString() + "\n"); in printTypeOfOperandStack()
122 printer.print(" |__LOCAL: " + inference.getLocalsAsString() + "\n"); in printTypeOfOperandStack()
/external/tensorflow/tensorflow/lite/g3doc/guide/
inference.md
1 # TensorFlow Lite inference
3 The term *inference* refers to the process of executing a TensorFlow Lite model
5 inference with a TensorFlow Lite model, you must run it through an
11 perform an inference using C++, Java, and Python, plus links to other resources
18 TensorFlow Lite inference typically follows the following steps:
31 1. **Running inference**
39 When you receive results from the model inference, you must interpret the
48 TensorFlow inference APIs are provided for most common mobile/embedded platforms
52 use. TensorFlow Lite is designed for fast inference on small devices, so it
58 feed inputs, and retrieve inference outputs.
[all …]
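
The inference.md hits above sketch the usual TensorFlow Lite flow: load a model, allocate tensors, feed inputs, run the interpreter, and read the outputs. A minimal Python sketch of that flow (illustrative only, not taken from the file; model.tflite is a hypothetical model with one input and one output):

    import numpy as np
    import tensorflow as tf

    # Load the model and allocate tensors (hypothetical model path).
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed an input matching the model's expected shape and dtype.
    input_data = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], input_data)

    # Run inference and read the result back out.
    interpreter.invoke()
    output_data = interpreter.get_tensor(output_details[0]["index"])
    print(output_data)
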
index.md
5 inference with low latency and a small binary size.
9 - The [TensorFlow Lite interpreter](inference.md), which runs specially
43 * *[Interpreter](inference.md) tuned for on-device ML*, supporting a set of
48 for accelerated inference.
82 [TensorFlow Lite interpreter](inference.md), with APIs in many languages.
95 TensorFlow Lite plans to provide high performance on-device inference for any
/external/v8/src/compiler/
js-call-reducer.cc
284 void MaybeInsertMapChecks(MapInference* inference, in MaybeInsertMapChecks() argument
290 inference->InsertMapChecks(jsgraph(), &e, Control{control()}, feedback()); in MaybeInsertMapChecks()
675 MapInference* inference, const bool has_stability_dependency,
677 TNode<Object> ReduceArrayPrototypeReduce(MapInference* inference,
683 MapInference* inference, const bool has_stability_dependency,
687 MapInference* inference, const bool has_stability_dependency,
690 TNode<Object> ReduceArrayPrototypeFind(MapInference* inference,
697 MapInference* inference, const bool has_stability_dependency,
1251 MapInference* inference, const bool has_stability_dependency, in ReduceArrayPrototypeForEach() argument
1273 MaybeInsertMapChecks(inference, has_stability_dependency); in ReduceArrayPrototypeForEach()
[all …]
/external/v8/src/torque/
declarable.cc
143 TypeArgumentInference inference(generic_parameters(), in InferSpecializationTypes() local
146 if (!inference.HasFailed()) { in InferSpecializationTypes()
148 FindConstraintViolation(inference.GetResult(), Constraints())) { in InferSpecializationTypes()
149 inference.Fail(*violation); in InferSpecializationTypes()
152 return inference; in InferSpecializationTypes()
/external/tensorflow/tensorflow/lite/c/
README.md
5 for inference.
16 * `c_api.h` - Contains the TensorFlow Lite C API for inference. The
19 * `c_api_experimental.h` - Contains experimental C API methods for inference.
28 A native shared library target that contains the C API for inference has been
/external/tensorflow/tensorflow/tools/android/
README.md
1 # Deprecated android inference interface.
3 WARNING: This directory contains deprecated tf-mobile android inference
/external/tensorflow/tensorflow/lite/g3doc/performance/
gpu_advanced.md
16 parallelism typically results in lower latency. In the best scenario, inference
29 Another benefit that comes with GPU inference is its power efficiency. A GPU
73 // Run inference.
108 // Run inference.
149 // Run inference.
261 // Run inference; the null input argument indicates use of the bound buffer for input.
294 // Run inference; the null output argument indicates use of the bound buffer for output.
306 `Interpreter::ModifyGraphWithDelegate()`. Additionally, the inference output is,
319 // Run inference.
323 Note: Once the default behavior is turned off, copying the inference output from
[all …]
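
The gpu_advanced.md hits describe handing inference to the GPU delegate and then invoking the interpreter as usual. A rough Python sketch under the assumption that a GPU delegate shared library is available (the library name libtensorflowlite_gpu_delegate.so and the model path are hypothetical and platform-dependent):

    import tensorflow as tf

    # Load the GPU delegate (hypothetical library name; varies by platform and build).
    gpu_delegate = tf.lite.experimental.load_delegate("libtensorflowlite_gpu_delegate.so")

    # Pass the delegate to the interpreter so supported ops run on the GPU;
    # unsupported ops fall back to the CPU.
    interpreter = tf.lite.Interpreter(
        model_path="model.tflite",  # hypothetical model file
        experimental_delegates=[gpu_delegate])
    interpreter.allocate_tensors()

    # From here, set inputs and call interpreter.invoke() as in the basic sketch above.
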
gpu.md
11 resulting in lower latency. In the best scenario, inference on the GPU may now
17 Another benefit with GPU inference is its power efficiency. GPUs carry out the
143 // Run inference
169 // Run inference
190 …t detection](https://ai.googleblog.com/2018/07/accelerated-training-and-inference-with.html) [[dow…
231 on-device inference.
/external/tensorflow/tensorflow/lite/g3doc/r1/convert/
cmdline_reference.md
75 * When performing float inference (`--inference_type=FLOAT`) on a
77 the inference code according to the above formula, before proceeding
78 with float inference.
79 * When performing quantized inference
81 the inference code. However, the quantization parameters of all arrays,
84 quantized inference code. `mean_value` must be an integer when
85 performing quantized inference.
115 requiring floating-point inference. For such image models, the uint8 input
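The cmdline_reference.md hits cover the TF 1.x converter flags --inference_type, mean_value, and std_dev_value. A rough sketch of the equivalent TF 1.x Python converter calls, assuming a quantization-aware-trained frozen graph; the file names, tensor names, and (127.5, 127.5) stats are placeholders:

    import tensorflow as tf

    # TF 1.x-style converter (hypothetical frozen graph and tensor names).
    converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file="frozen_graph.pb",
        input_arrays=["input"],
        output_arrays=["output"])

    # Quantized inference: uint8 inputs with per-input (mean_value, std_dev_value),
    # mirroring --inference_type=QUANTIZED_UINT8 plus --mean_values/--std_dev_values.
    converter.inference_type = tf.uint8
    converter.quantized_input_stats = {"input": (127.5, 127.5)}  # placeholder stats

    tflite_model = converter.convert()
    with open("model_quant.tflite", "wb") as f:
        f.write(tflite_model)
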
/external/tensorflow/tensorflow/core/api_def/base_api/
api_def_FusedBatchNorm.pbtxt
24 A 1D Tensor for population mean. Used for inference only;
31 A 1D Tensor for population variance. Used for inference only;
91 or inference.
api_def_FusedBatchNormV2.pbtxt
24 A 1D Tensor for population mean. Used for inference only;
31 A 1D Tensor for population variance. Used for inference only;
97 or inference.
api_def_FusedBatchNormV3.pbtxt
24 A 1D Tensor for population mean. Used for inference only;
31 A 1D Tensor for population variance. Used for inference only;
104 or inference.
/external/tensorflow/tensorflow/lite/g3doc/microcontrollers/
get_started.md
5 then walks through the code for a simple application that runs inference on a
58 a model, converting it for use with TensorFlow Lite, and running inference on a
69 * A C++ 11 application that runs inference using the model, tested to work
71 * A unit test that demonstrates the process of running inference
81 ## How to run inference
85 which demonstrates how to run inference using TensorFlow Lite for
88 The test loads the model and then uses it to run inference several times.
144 The remainder of the code demonstrates how to load the model and run inference.
292 The following code asserts that the value is `kTfLiteOk`, meaning inference was
326 ### Run inference again argument
[all …]
/external/desugar/java/com/google/devtools/build/android/desugar/
TryWithResourcesRewriter.java
202 BytecodeTypeInference inference = null; in visitMethod() local
209 inference = new BytecodeTypeInference(access, internalName, name, desc); in visitMethod()
210 inference.setDelegateMethodVisitor(visitor); in visitMethod()
211 visitor = inference; in visitMethod()
215 new TryWithResourceVisitor(internalName, name + desc, visitor, classLoader, inference); in visitMethod()
/external/tensorflow/tensorflow/compiler/xla/service/
hlo_get_dimension_size_rewriter.cc
79 TF_ASSIGN_OR_RETURN(DynamicDimensionInference inference, in Run()
100 ReplaceGetSize(instruction, &inference)); in Run()
/external/parameter-framework/upstream/doc/requirements/
APIs.md
136 Exports the "Domains" (aka "Settings") which is the inference engine's data.
141 Imports previously-exported data into the inference engine. See [req-deserializable].
145 Exports a given part of the inference engine data. See [Serialization of individual data].
149 Imports a partial inference engine data as previously exported. See section
/external/tensorflow/tensorflow/lite/micro/examples/person_detection/
README.md
128 inference. Since the vision model we are using for person detection is
129 relatively large, it takes a long time to run inference—around 19 seconds at the
133 After 19 seconds or so, the inference result will be translated into another LED
138 flashes, the device will capture another image and begin to run inference. After
141 Remember, image data is captured as a snapshot before each inference, whenever
154 We can also see the results of inference via the Arduino Serial Monitor. To do
172 greyscale, and 18.6 seconds to run inference.
337 When the sample is running, check the LEDs to determine whether the inference is
342 During inference, the blue LED will toggle every time inference is complete. The
/external/tensorflow/tensorflow/lite/micro/examples/person_detection_experimental/
README.md
129 inference. Since the vision model we are using for person detection is
130 relatively large, it takes a long time to run inference—around 19 seconds at the
134 After 19 seconds or so, the inference result will be translated into another LED
139 flashes, the device will capture another image and begin to run inference. After
142 Remember, image data is captured as a snapshot before each inference, whenever
155 We can also see the results of inference via the Arduino Serial Monitor. To do
173 greyscale, and 18.6 seconds to run inference.
338 When the sample is running, check the LEDs to determine whether the inference is
343 During inference, the blue LED will toggle every time inference is complete. The
/external/tensorflow/tensorflow/lite/testing/
tflite_model_test.bzl
43 inference_type: The data type for inference and output.
108 inference_type: The data type for inference and output.
124 fail("Invalid inference type (%s). Expected 'float' or 'quantized'" % inference_type)
/external/tensorflow/tensorflow/lite/delegates/gpu/
README.md
11 resulting in lower latency. In the best scenario, inference on the GPU may now
21 Another benefit that comes with GPU inference is its power efficiency. GPUs
70 // Run inference.
148 optimization for on-device inference.
/external/tensorflow/tensorflow/lite/toco/
toco_flags.proto
27 // Tensorflow's mobile inference model.
61 // inference. For such image models, the uint8 input is quantized, i.e.
97 // to estimate the performance of quantized inference, without caring about
122 // transformations, in order to ensure that quantized inference has the
128 // transformations that are necessary in order to generate inference
132 // at the cost of no longer faithfully matching inference and training
/external/tensorflow/tensorflow/core/protobuf/tpu/
tpu_embedding_configuration.proto
25 // Mode. Should the embedding layer program be run for inference (just forward
39 // Number of TPU hosts used for inference/training.
42 // Number of TensorCore used for inference/training.
/external/apache-commons-math/src/main/java/org/apache/commons/math/stat/inference/
UnknownDistributionChiSquareTest.java
17 package org.apache.commons.math.stat.inference;
OneWayAnova.java
17 package org.apache.commons.math.stat.inference;
