/external/desugar/test/java/com/google/devtools/build/android/desugar/ |
D | ByteCodeTypePrinter.java |
     88  BytecodeTypeInference inference = new BytecodeTypeInference(access, internalName, name, desc);  in visitMethod() local
     89  mv = new MethodIrTypeDumper(mv, inference, printWriter);  in visitMethod()
     90  inference.setDelegateMethodVisitor(mv);  in visitMethod()
     92  return inference;  in visitMethod()
    109  private final BytecodeTypeInference inference;  field in ByteCodeTypePrinter.MethodIrTypeDumper
    114  MethodVisitor visitor, BytecodeTypeInference inference, PrintWriter printWriter) {  in MethodIrTypeDumper() argument
    116  this.inference = inference;  in MethodIrTypeDumper()
    121  printer.print(" |__STACK: " + inference.getOperandStackAsString() + "\n");  in printTypeOfOperandStack()
    122  printer.print(" |__LOCAL: " + inference.getLocalsAsString() + "\n");  in printTypeOfOperandStack()
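Note: the matched lines above show desugar's delegating-MethodVisitor pattern, where a type-inference visitor is spliced in front of the visitor that actually emits output. A minimal sketch of that pattern with plain ASM, using a hypothetical `TypeLoggingVisitor` as a stand-in for `BytecodeTypeInference`/`MethodIrTypeDumper` (only the ASM classes are real API):

```java
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

// Hypothetical class visitor that inserts a logging/inference stage into the chain.
class TypeLoggingClassVisitor extends ClassVisitor {
  TypeLoggingClassVisitor(ClassVisitor next) {
    super(Opcodes.ASM9, next);
  }

  @Override
  public MethodVisitor visitMethod(
      int access, String name, String desc, String signature, String[] exceptions) {
    MethodVisitor mv = super.visitMethod(access, name, desc, signature, exceptions);
    // Wrap the downstream visitor so every instruction passes through the
    // logging stage before reaching the original writer, mirroring
    // inference.setDelegateMethodVisitor(mv) in the snippet above.
    return new TypeLoggingVisitor(mv);
  }
}

// Hypothetical method visitor; a real inference pass would model the operand stack here.
class TypeLoggingVisitor extends MethodVisitor {
  TypeLoggingVisitor(MethodVisitor delegate) {
    super(Opcodes.ASM9, delegate);
  }

  @Override
  public void visitInsn(int opcode) {
    // Update the inferred stack/local state, then forward to the delegate.
    super.visitInsn(opcode);
  }
}
```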
|
/external/llvm-project/mlir/docs/ |
D | ShapeInference.md |
      3  Shape inference as discussed here is considered a specific instance of type
      4  inference for [ShapedType][ShapedType]. Type constraints are along (at least)
     13  Type inference is currently modelled executionally for operation creation using the
     16  inference. The return type can often be deduced from the deduced return shape
     18  inference for tensor types can be implemented with `InferShapedTypeOpInterface`.
     22  The C++ interfaces are the base mechanism whereby shape inference is queried and
     25  Initially the shape inference will be declaratively specified using:
     43  Shape inference is currently tested alongside type inference by
     56  This section details the shape type inference dialect (`shape`). The initial
     82  The requirements for the shape inference functions are determined by the
    [all …]
|
/external/tensorflow/tensorflow/lite/delegates/gpu/cl/ |
D | serialization.cc |
    954  const InferenceContext& inference,  in Encode() argument
    956  std::vector<int32_t> in_ids(inference.input_ids_.size());  in Encode()
    958  in_ids[i] = inference.input_ids_[i];  in Encode()
    960  std::vector<int32_t> out_ids(inference.output_ids_.size());  in Encode()
    962  out_ids[i] = inference.output_ids_[i];  in Encode()
    967  auto in_refs_fb = builder->CreateVector(inference.in_refs_);  in Encode()
    968  auto out_refs_fb = builder->CreateVector(inference.out_refs_);  in Encode()
    971  for (int i = 0; i < inference.nodes_.size(); ++i) {  in Encode()
    972  auto node_fb = Encode(inference.nodes_[i], builder);  in Encode()
    978  auto tensors = inference.tensor_reserver_.GetTensorDescs();  in Encode()
    [all …]
|
D | serialization.h |
     32  const InferenceContext& inference, flatbuffers::FlatBufferBuilder* builder);
     36  InferenceContext* inference);
|
/external/tensorflow/tensorflow/lite/g3doc/inference_with_metadata/task_library/ |
D | bert_nl_classifier.md |
     29  ## Run inference in Java
     56  ### Step 2: Run inference using the API
     62  // Run inference
     70  ## Run inference in Swift
     83  ### Step 2: Run inference using the API
     90  // Run inference
     98  ## Run inference in C++
    108  // Run inference
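Note: as a rough illustration of the "Run inference in Java" path matched above, assuming the Task Library text dependency is available and the model ships as an app asset (the `model.tflite` name is a placeholder, not taken from the doc):

```java
import android.content.Context;
import java.io.IOException;
import java.util.List;
import org.tensorflow.lite.support.label.Category;
import org.tensorflow.lite.task.text.nlclassifier.BertNLClassifier;

public final class BertNlClassifierDemo {
  // Loads the bundled model and classifies a single string.
  static List<Category> classify(Context context, String text) throws IOException {
    // "model.tflite" is a placeholder asset path for this sketch.
    BertNLClassifier classifier = BertNLClassifier.createFromFile(context, "model.tflite");
    return classifier.classify(text);
  }
}
```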
|
D | bert_question_answerer.md |
     29  ## Run inference in Java
     56  ### Step 2: Run inference using the API
     62  // Run inference
     70  ## Run inference in Swift
     83  ### Step 2: Run inference using the API
     90  // Run inference
     99  ## Run inference in C++
    109  // Run inference
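Note: a similarly hedged sketch of the Java path for BERT question answering; `BertQuestionAnswerer.createFromFile` and `answer` are Task Library calls, while the asset name and helper are placeholders of mine:

```java
import android.content.Context;
import java.io.IOException;
import java.util.List;
import org.tensorflow.lite.task.text.qa.BertQuestionAnswerer;
import org.tensorflow.lite.task.text.qa.QaAnswer;

public final class BertQaDemo {
  // Answers a question against a passage using a bundled BERT QA model.
  static List<QaAnswer> ask(Context appContext, String passage, String question)
      throws IOException {
    // "model.tflite" is a placeholder asset path for this sketch.
    BertQuestionAnswerer answerer =
        BertQuestionAnswerer.createFromFile(appContext, "model.tflite");
    return answerer.answer(passage, question);
  }
}
```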
|
D | nl_classifier.md |
     30  ## Run inference in Java
     61  ### Step 2: Run inference using the API
     68  // Run inference
     76  ## Run inference in Swift
     89  ### Step 2: Run inference using the API
    100  // Run inference
    108  ## Run inference in C++
    123  // Run inference
|
D | overview.md |
     21  same, shareable processing logic for training and inference.
     25  fast inference experience using TensorFlow Lite.
     29  easily build your own Android/iOS inference APIs.
|
/external/tensorflow/tensorflow/lite/g3doc/guide/ |
D | inference.md |
      1  # TensorFlow Lite inference
      3  The term *inference* refers to the process of executing a TensorFlow Lite model
      5  inference with a TensorFlow Lite model, you must run it through an
     11  an inference using C++, Java, and Python, plus links to other resources for each
     18  TensorFlow Lite inference typically follows the following steps:
     31  1. **Running inference**
     39  When you receive results from the model inference, you must interpret the
     48  TensorFlow inference APIs are provided for most common mobile/embedded platforms
     53  use. TensorFlow Lite is designed for fast inference on small devices, so it
     59  inputs, and retrieve inference outputs.
    [all …]
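Note: the matched guide walks through loading a model, transforming inputs, running inference, and interpreting outputs. A compact sketch of those steps with the TensorFlow Lite Java `Interpreter`; the tensor shapes below are illustrative assumptions, not taken from the guide:

```java
import java.io.File;
import org.tensorflow.lite.Interpreter;

public final class LiteInferenceDemo {
  // Runs one inference on a model that takes a [1, 4] float input and
  // produces a [1, 3] float output; both shapes are assumptions for this sketch.
  static float[][] runOnce(File modelFile, float[][] input) {
    float[][] output = new float[1][3];
    try (Interpreter interpreter = new Interpreter(modelFile)) {
      // Load the model, execute it, and let the results land in `output`.
      interpreter.run(input, output);
    }
    return output;
  }
}
```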
|
D | index.md |
      5  inference with low latency and a small binary size.
      9  - The [TensorFlow Lite interpreter](inference.md), which runs specially
     44  * *[Interpreter](inference.md) tuned for on-device ML*, supporting a set of
     49  for accelerated inference.
     83  [TensorFlow Lite interpreter](inference.md), with APIs in many languages.
     96  TensorFlow Lite plans to provide high performance on-device inference for any
|
/external/tensorflow/tensorflow/tools/android/ |
D | README.md |
      1  # Deprecated android inference interface.
      3  WARNING: This directory contains deprecated tf-mobile android inference
|
/external/tensorflow/tensorflow/lite/c/ |
D | README.md |
      5  for inference.
     16  * `c_api.h` - Contains the TensorFlow Lite C API for inference. The
     19  * `c_api_experimental.h` - Contains experimental C API methods for inference.
     28  A native shared library target that contains the C API for inference has been
|
/external/tensorflow/tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir/ |
D | add.pbtxt |
      1  # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-…
      2  # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-…
      3  # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-…
      4  # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-…
      5  # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-…
|
D | legacy-fed-input-without-inputs.pbtxt |
      1  # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-…
      2  # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-…
|
D | graph-as-function-control-ret.pbtxt |
      1  # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-graph-…
      2  # RUN: not tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-gr…
      3  # RUN: not tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-gr…
|
/external/llvm-project/clang-tools-extra/clangd/quality/ |
D | README.md |
     44  This header is included by the inference runtime.
     80  The implementation of inference runtime is split across:
     83  …or `CompletionModelCodegen.py` takes input the `${model}` dir and generates the inference library:
     97  …nding to use the CompletionModel for inference can use this to trigger the code generator and gene…
    100  ### Generated API for inference
    207  Inorder to use the inference runtime, one can use `gen_decision_forest` function
|
/external/tensorflow/tensorflow/lite/g3doc/inference_with_metadata/ |
D | overview.md |
      1  # TensorFlow Lite inference with metadata
      6  to automatically generate the inference code for you, such as using the
      9  used to configure your custom inference pipeline.
     42  ### Build custom inference pipelines with the TensorFlow Lite Support Library
     45  that helps to customize model interface and build inference pipelines. It
|
/external/tensorflow/tensorflow/lite/tools/evaluation/tasks/ |
D | README.md |
     26  …w/tensorflow/tree/master/tensorflow/lite/tools/evaluation/tasks/inference_diff#inference-diff-tool)
     28  This binary compares TensorFlow Lite execution in single-threaded CPU inference
     29  and user-defined inference.
|
/external/tensorflow/tensorflow/core/api_def/base_api/ |
D | api_def_FusedBatchNorm.pbtxt |
     24  A 1D Tensor for population mean. Used for inference only;
     31  A 1D Tensor for population variance. Used for inference only;
     91  or inference.
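Note: the population mean and variance matched above (and in the V2/V3 entries below) are the statistics batch normalization applies at inference time; as a reminder, the standard per-channel formula is summarized below (my summary, not text from the pbtxt; epsilon is the op's small stabilizing attribute):

```latex
% FusedBatchNorm at inference time, using population statistics per channel:
y = \gamma \cdot \frac{x - \mu_{\text{pop}}}{\sqrt{\sigma^{2}_{\text{pop}} + \varepsilon}} + \beta
```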
|
D | api_def_FusedBatchNormV3.pbtxt |
     24  A 1D Tensor for population mean. Used for inference only;
     31  A 1D Tensor for population variance. Used for inference only;
    104  or inference.
|
D | api_def_FusedBatchNormV2.pbtxt |
     24  A 1D Tensor for population mean. Used for inference only;
     31  A 1D Tensor for population variance. Used for inference only;
     97  or inference.
|
/external/tensorflow/tensorflow/lite/g3doc/microcontrollers/ |
D | get_started.md |
      3  This document explains how to train a model and run inference using a
     21  2. [Run inference](#run-inference) (in C++ 11): An end-to-end unit test that
     22  runs inference on the model using the [C++ library](library.md).
     60  ## Run inference
     70  unit test which demonstrates how to run inference using TensorFlow Lite for
     71  Microcontrollers. It loads the model and runs inference several times.
    278  The following code asserts that the value is `kTfLiteOk`, meaning inference was
    312  ### 14. Run inference again  argument
    314  The remainder of the code runs inference several more times. In each instance,
|
/external/tensorflow/tensorflow/python/ops/numpy_ops/integration_test/benchmarks/ |
D | micro_benchmarks.py |
     87  self._benchmark_and_report(self._get_name(), lambda: model.inference(x))
     94  self._get_name(), tf.function(lambda: model.inference(x)))
     99  self._benchmark_and_report(self._get_name(), lambda: model.inference(x))
|
/external/tensorflow/tensorflow/compiler/mlir/tensorflow/tests/ |
D | resource_inlining.mlir |
      1  // RUN: tf-opt -tf-shape-inference -inline='default-pipeline=''' %s | FileCheck %s --dump-input=alw…
      5  // resource subtype, and after shape inference, function argument is refined and
|
/external/desugar/java/com/google/devtools/build/android/desugar/ |
D | TryWithResourcesRewriter.java |
    202  BytecodeTypeInference inference = null;  in visitMethod() local
    209  inference = new BytecodeTypeInference(access, internalName, name, desc);  in visitMethod()
    210  inference.setDelegateMethodVisitor(visitor);  in visitMethod()
    211  visitor = inference;  in visitMethod()
    215  new TryWithResourceVisitor(internalName, name + desc, visitor, classLoader, inference);  in visitMethod()
|