
Searched refs:inference (Results 1 – 25 of 364) sorted by relevance

/external/desugar/test/java/com/google/devtools/build/android/desugar/
ByteCodeTypePrinter.java
88 BytecodeTypeInference inference = new BytecodeTypeInference(access, internalName, name, desc); in visitMethod() local
89 mv = new MethodIrTypeDumper(mv, inference, printWriter); in visitMethod()
90 inference.setDelegateMethodVisitor(mv); in visitMethod()
92 return inference; in visitMethod()
109 private final BytecodeTypeInference inference; field in ByteCodeTypePrinter.MethodIrTypeDumper
114 MethodVisitor visitor, BytecodeTypeInference inference, PrintWriter printWriter) { in MethodIrTypeDumper() argument
116 this.inference = inference; in MethodIrTypeDumper()
121 printer.print(" |__STACK: " + inference.getOperandStackAsString() + "\n"); in printTypeOfOperandStack()
122 printer.print(" |__LOCAL: " + inference.getLocalsAsString() + "\n"); in printTypeOfOperandStack()
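The desugar hits above (and the TryWithResourcesRewriter.java hits at the bottom of this page) show the ASM visitor-chaining pattern in use: a BytecodeTypeInference visitor runs first and models the operand-stack and local-variable types, while a downstream visitor queries it and prints that state. A minimal sketch of the same wiring, using hypothetical TypeStateTracker and TypeStateDumper classes in place of the real ones:

```java
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

// Hypothetical stand-in for BytecodeTypeInference: updates a model of the
// operand stack and locals for each instruction, then forwards the event.
class TypeStateTracker extends MethodVisitor {
  TypeStateTracker(int access, String owner, String name, String desc) {
    super(Opcodes.ASM7, null); // the delegate is attached later
  }

  void setDelegateMethodVisitor(MethodVisitor delegate) {
    this.mv = delegate; // events flow: tracker -> dumper -> original visitor
  }

  String operandStackAsString() {
    return "..."; // placeholder; the real class derives this from its type model
  }
}

// Hypothetical stand-in for MethodIrTypeDumper: prints the tracker's state
// before passing each instruction on to the wrapped visitor.
class TypeStateDumper extends MethodVisitor {
  private final TypeStateTracker tracker;

  TypeStateDumper(MethodVisitor next, TypeStateTracker tracker) {
    super(Opcodes.ASM7, next);
    this.tracker = tracker;
  }

  @Override
  public void visitInsn(int opcode) {
    System.out.println(" |__STACK: " + tracker.operandStackAsString());
    super.visitInsn(opcode);
  }
}
```

The wiring mirrors lines 88–92 above: construct the tracker, wrap the existing MethodVisitor in the dumper, attach the dumper as the tracker's delegate, and return the tracker from visitMethod().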
/external/llvm-project/mlir/docs/
ShapeInference.md
3 Shape inference as discussed here is considered a specific instance of type
4 inference for [ShapedType][ShapedType]. Type constraints are along (at least)
13 Type inference is currently modelled executionally for operation creation using the
16 inference. The return type can often be deduced from the deduced return shape
18 inference for tensor types can be implemented with `InferShapedTypeOpInterface`.
22 The C++ interfaces are the base mechanism whereby shape inference is queried and
25 Initially the shape inference will be declaratively specified using:
43 Shape inference is currently tested alongside type inference by
56 This section details the shape type inference dialect (`shape`). The initial
82 The requirements for the shape inference functions are determined by the
[all …]
/external/tensorflow/tensorflow/lite/delegates/gpu/cl/
serialization.cc
954 const InferenceContext& inference, in Encode() argument
956 std::vector<int32_t> in_ids(inference.input_ids_.size()); in Encode()
958 in_ids[i] = inference.input_ids_[i]; in Encode()
960 std::vector<int32_t> out_ids(inference.output_ids_.size()); in Encode()
962 out_ids[i] = inference.output_ids_[i]; in Encode()
967 auto in_refs_fb = builder->CreateVector(inference.in_refs_); in Encode()
968 auto out_refs_fb = builder->CreateVector(inference.out_refs_); in Encode()
971 for (int i = 0; i < inference.nodes_.size(); ++i) { in Encode()
972 auto node_fb = Encode(inference.nodes_[i], builder); in Encode()
978 auto tensors = inference.tensor_reserver_.GetTensorDescs(); in Encode()
[all …]
serialization.h
32 const InferenceContext& inference, flatbuffers::FlatBufferBuilder* builder);
36 InferenceContext* inference);
/external/tensorflow/tensorflow/lite/g3doc/inference_with_metadata/task_library/
bert_nl_classifier.md
29 ## Run inference in Java
56 ### Step 2: Run inference using the API
62 // Run inference
70 ## Run inference in Swift
83 ### Step 2: Run inference using the API
90 // Run inference
98 ## Run inference in C++
108 // Run inference
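For the Java path referenced in these bert_nl_classifier.md hits, the Task Library's BertNLClassifier wraps tokenization and inference behind a two-call API. A short sketch, in which the model file name and input text are placeholders:

```java
import android.content.Context;
import java.io.IOException;
import java.util.List;
import org.tensorflow.lite.support.label.Category;
import org.tensorflow.lite.task.text.nlclassifier.BertNLClassifier;

final class BertNlClassifierDemo {
  /** Loads a BERT NL classifier from assets and classifies one string. */
  static List<Category> classify(Context context, String text) throws IOException {
    BertNLClassifier classifier =
        BertNLClassifier.createFromFile(context, "bert_nl_classifier.tflite"); // placeholder asset name
    List<Category> results = classifier.classify(text); // run inference
    classifier.close(); // release the underlying native resources
    return results;
  }
}
```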
bert_question_answerer.md
29 ## Run inference in Java
56 ### Step 2: Run inference using the API
62 // Run inference
70 ## Run inference in Swift
83 ### Step 2: Run inference using the API
90 // Run inference
99 ## Run inference in C++
109 // Run inference
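The bert_question_answerer.md hits follow the same layout; in Java the corresponding Task Library call takes a context passage plus a question and returns candidate answers. A sketch with a placeholder model file:

```java
import android.content.Context;
import java.io.IOException;
import java.util.List;
import org.tensorflow.lite.task.text.qa.BertQuestionAnswerer;
import org.tensorflow.lite.task.text.qa.QaAnswer;

final class BertQaDemo {
  /** Runs extractive question answering over a passage of text. */
  static List<QaAnswer> answer(Context appContext, String passage, String question)
      throws IOException {
    BertQuestionAnswerer answerer =
        BertQuestionAnswerer.createFromFile(appContext, "bert_qa.tflite"); // placeholder asset name
    List<QaAnswer> answers = answerer.answer(passage, question); // run inference
    answerer.close();
    return answers;
  }
}
```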
nl_classifier.md
30 ## Run inference in Java
61 ### Step 2: Run inference using the API
68 // Run inference
76 ## Run inference in Swift
89 ### Step 2: Run inference using the API
100 // Run inference
108 ## Run inference in C++
123 // Run inference
overview.md
21 same, shareable processing logic for training and inference.
25 fast inference experience using TensorFlow Lite.
29 easily build your own Android/iOS inference APIs.
/external/tensorflow/tensorflow/lite/g3doc/guide/
inference.md
1 # TensorFlow Lite inference
3 The term *inference* refers to the process of executing a TensorFlow Lite model
5 inference with a TensorFlow Lite model, you must run it through an
11 an inference using C++, Java, and Python, plus links to other resources for each
18 TensorFlow Lite inference typically follows the following steps:
31 1. **Running inference**
39 When you receive results from the model inference, you must interpret the
48 TensorFlow inference APIs are provided for most common mobile/embedded platforms
53 use. TensorFlow Lite is designed for fast inference on small devices, so it
59 inputs, and retrieve inference outputs.
[all …]
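The steps that inference.md lists (load the model, allocate tensors, set inputs, run inference, interpret outputs) map onto the Java Interpreter API roughly as below; the model file and tensor shapes are placeholders:

```java
import java.io.File;
import org.tensorflow.lite.Interpreter;

final class LiteInferenceDemo {
  /** Loads a .tflite model, runs one inference, and returns the raw output. */
  static float[][] run(File modelFile, float[][] input) {
    Interpreter interpreter = new Interpreter(modelFile); // load model, build interpreter
    try {
      float[][] output = new float[1][3]; // placeholder output shape
      interpreter.run(input, output);     // allocates tensors as needed, then invokes the model
      return output;                      // caller interprets these values (e.g. class scores)
    } finally {
      interpreter.close();                // free native resources
    }
  }
}
```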
index.md
5 inference with low latency and a small binary size.
9 - The [TensorFlow Lite interpreter](inference.md), which runs specially
44 * *[Interpreter](inference.md) tuned for on-device ML*, supporting a set of
49 for accelerated inference.
83 [TensorFlow Lite interpreter](inference.md), with APIs in many languages.
96 TensorFlow Lite plans to provide high performance on-device inference for any
/external/tensorflow/tensorflow/tools/android/
README.md
1 # Deprecated android inference interface.
3 WARNING: This directory contains deprecated tf-mobile android inference
/external/tensorflow/tensorflow/lite/c/
README.md
5 for inference.
16 * `c_api.h` - Contains the TensorFlow Lite C API for inference. The
19 * `c_api_experimental.h` - Contains experimental C API methods for inference.
28 A native shared library target that contains the C API for inference has been
/external/tensorflow/tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir/
add.pbtxt
1 # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-…
2 # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-…
3 # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-…
4 # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-…
5 # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-…
legacy-fed-input-without-inputs.pbtxt
1 # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-…
2 # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-…
graph-as-function-control-ret.pbtxt
1 # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-graph-…
2 # RUN: not tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-gr…
3 # RUN: not tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-gr…
/external/llvm-project/clang-tools-extra/clangd/quality/
README.md
44 This header is included by the inference runtime.
80 The implementation of inference runtime is split across:
83 …or `CompletionModelCodegen.py` takes input the `${model}` dir and generates the inference library:
97 …nding to use the CompletionModel for inference can use this to trigger the code generator and gene…
100 ### Generated API for inference
207 In order to use the inference runtime, one can use the `gen_decision_forest` function
/external/tensorflow/tensorflow/lite/g3doc/inference_with_metadata/
overview.md
1 # TensorFlow Lite inference with metadata
6 to automatically generate the inference code for you, such as using the
9 used to configure your custom inference pipeline.
42 ### Build custom inference pipelines with the TensorFlow Lite Support Library
45 that helps to customize model interface and build inference pipelines. It
/external/tensorflow/tensorflow/lite/tools/evaluation/tasks/
README.md
26 …w/tensorflow/tree/master/tensorflow/lite/tools/evaluation/tasks/inference_diff#inference-diff-tool)
28 This binary compares TensorFlow Lite execution in single-threaded CPU inference
29 and user-defined inference.
/external/tensorflow/tensorflow/core/api_def/base_api/
api_def_FusedBatchNorm.pbtxt
24 A 1D Tensor for population mean. Used for inference only;
31 A 1D Tensor for population variance. Used for inference only;
91 or inference.
api_def_FusedBatchNormV3.pbtxt
24 A 1D Tensor for population mean. Used for inference only;
31 A 1D Tensor for population variance. Used for inference only;
104 or inference.
api_def_FusedBatchNormV2.pbtxt
24 A 1D Tensor for population mean. Used for inference only;
31 A 1D Tensor for population variance. Used for inference only;
97 or inference.
/external/tensorflow/tensorflow/lite/g3doc/microcontrollers/
get_started.md
3 This document explains how to train a model and run inference using a
21 2. [Run inference](#run-inference) (in C++ 11): An end-to-end unit test that
22 runs inference on the model using the [C++ library](library.md).
60 ## Run inference
70 unit test which demonstrates how to run inference using TensorFlow Lite for
71 Microcontrollers. It loads the model and runs inference several times.
278 The following code asserts that the value is `kTfLiteOk`, meaning inference was
312 ### 14. Run inference again argument
314 The remainder of the code runs inference several more times. In each instance,
/external/tensorflow/tensorflow/python/ops/numpy_ops/integration_test/benchmarks/
micro_benchmarks.py
87 self._benchmark_and_report(self._get_name(), lambda: model.inference(x))
94 self._get_name(), tf.function(lambda: model.inference(x)))
99 self._benchmark_and_report(self._get_name(), lambda: model.inference(x))
/external/tensorflow/tensorflow/compiler/mlir/tensorflow/tests/
resource_inlining.mlir
1 // RUN: tf-opt -tf-shape-inference -inline='default-pipeline=''' %s | FileCheck %s --dump-input=alw…
5 // resource subtype, and after shape inference, function argument is refined and
/external/desugar/java/com/google/devtools/build/android/desugar/
TryWithResourcesRewriter.java
202 BytecodeTypeInference inference = null; in visitMethod() local
209 inference = new BytecodeTypeInference(access, internalName, name, desc); in visitMethod()
210 inference.setDelegateMethodVisitor(visitor); in visitMethod()
211 visitor = inference; in visitMethod()
215 new TryWithResourceVisitor(internalName, name + desc, visitor, classLoader, inference); in visitMethod()
