/packages/modules/NeuralNetworks/tools/api/ |
D | types.spec |
    111: * Supported tensor rank: 4, with "NHWC" (i.e., Num_samples, Height, Width,
    213: * Supported tensor rank: 4, with "NHWC" or "NCHW" data layout.
    220: * Since %{NNAPILevel3}, generic zero-sized input tensor is supported. Zero
    226: * Since %{NNAPILevel3}, zero batches is supported for this tensor.
    292: * A tensor of OEM specific values.
    321: * Types prefaced with %{ANN}TENSOR_* must be used for tensor data (i.e., tensors
    343: /** A tensor of 32 bit floating point values. */
    346: /** A tensor of 32 bit integer values. */
    350: * A tensor of 8 bit unsigned integers that represent real numbers.
    352: * Attached to this tensor are two numbers that can be used to convert the [all …]
|
D | OperationTypes.t |
    49: * Expands a representation of a sparse tensor to a dense tensor.
    51: * To encode a conceptual n-dimensional dense tensor with dims [D0, ..., Dn-1], potentially with
    57: * * 2: How each block dimension in [Dn, ..., Dn+k-1] maps to the original tensor dimension in
    70: * Supported tensor {@link OperandType}:
    84: * http://tensor-compiler.org/kjolstad-oopsla17-tensor-compiler.pdf
    87: * * 0: A 1-D tensor representing the compressed sparse tensor data of a conceptual
    88: * n-dimensional tensor.
    89: * * 1: A 1-D {@link OperandType::TENSOR_INT32} tensor defining the traversal order for reading
    90: * the non-zero blocks. For an n-dimensional tensor with dimensions [D0, D1, …, Dn-1]: if
    96: * * 2: An optional 1-D {@link OperandType::TENSOR_INT32} tensor defining the block map. For a [all …]
|
D | NeuralNetworksTypes.t |
    336: * should typically create one shared memory object that contains every constant tensor
    350: * of the element type byte size, e.g., a tensor with
    591: * A tensor operand type with all dimensions specified is "fully
    593: * known at model construction time), a tensor operand type should be
    597: * If a tensor operand's type is not fully specified, the dimensions
    603: * <p>In the following situations, a tensor operand type must be fully
    611: * model within a compilation. A fully specified tensor operand type
    619: * not have a fully specified tensor operand type.</li>
    624: * A fully specified tensor operand type must either be provided
    630: * A tensor operand type of specified rank but some number of [all …]
|
D | OperandTypes.t | 51 * A tensor of OEM specific values.
|
/packages/modules/NeuralNetworks/runtime/test/specs/V1_3/ |
D | bidirectional_sequence_rnn_1_3.mod.py |
    20: def convert_to_time_major(tensor, tensor_shape): argument
    21: return np.array(tensor).reshape(tensor_shape).transpose(
    30: def reverse_batch_major(tensor, tensor_shape): argument
    31: return np.array(tensor).reshape(tensor_shape)[:, ::-1, :].flatten().tolist()
    33: def split_tensor_in_two(tensor, tensor_shape): argument
    34: tensor = np.array(tensor).reshape(tensor_shape)
    35: left, right = np.split(tensor, 2, axis=len(tensor_shape) - 1)
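The helpers matched above prepare test data for bidirectional RNN specs: converting batch-major activations to time-major layout, reversing the time axis for the backward pass, and splitting a fused forward/backward tensor along its last dimension. A minimal self-contained sketch of what these truncated snippets appear to do (the `[1, 0, 2]` transpose order and the return conventions are assumptions inferred from the fragments, not confirmed by the source):

```python
import numpy as np

def convert_to_time_major(tensor, tensor_shape):
    # [batch, time, feature] -> [time, batch, feature], flattened back to a list
    return np.array(tensor).reshape(tensor_shape).transpose([1, 0, 2]).flatten().tolist()

def reverse_batch_major(tensor, tensor_shape):
    # Reverse the time axis (axis 1) while keeping batch-major layout
    return np.array(tensor).reshape(tensor_shape)[:, ::-1, :].flatten().tolist()

def split_tensor_in_two(tensor, tensor_shape):
    # Split along the last axis, e.g. to separate fused fw/bw halves
    t = np.array(tensor).reshape(tensor_shape)
    left, right = np.split(t, 2, axis=len(tensor_shape) - 1)
    return left.flatten().tolist(), right.flatten().tolist()
```

For example, `convert_to_time_major(list(range(12)), [2, 3, 2])` regroups two batches of three timesteps into three timesteps of two batches.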
|
D | bidirectional_sequence_rnn_state_output.mod.py |
    20: def convert_to_time_major(tensor, tensor_shape): argument
    21: return np.array(tensor).reshape(tensor_shape).transpose([1, 0, 2
    31: def reverse_batch_major(tensor, tensor_shape): argument
    32: return np.array(tensor).reshape(tensor_shape)[:, ::-1, :].flatten().tolist()
    35: def split_tensor_in_two(tensor, tensor_shape): argument
    36: tensor = np.array(tensor).reshape(tensor_shape)
    37: left, right = np.split(tensor, 2, axis=len(tensor_shape) - 1)
|
D | unidirectional_sequence_rnn.mod.py |
    42: def convert_to_time_major(tensor, num_batches, max_time, input_size): argument
    43: return np.array(tensor).reshape([num_batches, max_time, input_size
|
/packages/modules/NeuralNetworks/runtime/test/specs/V1_2/ |
D | bidirectional_sequence_rnn.mod.py |
    20: def convert_to_time_major(tensor, tensor_shape): argument
    21: return np.array(tensor).reshape(tensor_shape).transpose(
    30: def reverse_batch_major(tensor, tensor_shape): argument
    31: return np.array(tensor).reshape(tensor_shape)[:, ::-1, :].flatten().tolist()
    33: def split_tensor_in_two(tensor, tensor_shape): argument
    34: tensor = np.array(tensor).reshape(tensor_shape)
    35: left, right = np.split(tensor, 2, axis=len(tensor_shape) - 1)
|
D | unidirectional_sequence_rnn.mod.py |
    39: def convert_to_time_major(tensor, num_batches, max_time, input_size): argument
    40: return np.array(tensor).reshape([num_batches, max_time,
|
/packages/modules/NeuralNetworks/runtime/operation_converters/ |
D | SubGraphContext.cpp |
    54: int SubGraphContext::addTensorFlatbuffer(TensorFlatbuffer tensor, int32_t operandIdx) { in addTensorFlatbuffer() argument
    55: mTensorVector.push_back(tensor); in addTensorFlatbuffer()
    205: TensorFlatbuffer tensor = tflite::CreateTensorDirect( in createTensorFlatbufferFromOperand() local
    208: addTensorFlatbuffer(tensor, operandIdx); in createTensorFlatbufferFromOperand()
|
D | SubGraphContext.h | 45 int addTensorFlatbuffer(TensorFlatbuffer tensor, int32_t operandIdx = -1);
|
/packages/modules/NeuralNetworks/tools/test_generator/ |
D | spec_visualizer.py |
    148: for tensor in op.ins:
    150: "source": str(tensor),
    153: for tensor in op.outs:
    155: "target": str(tensor),
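The spec_visualizer fragment above appears to build graph edges by treating each operation's input tensors as edge sources and its output tensors as edge targets. A hedged sketch of that pattern; the `Op` class, the `collect_edges` helper, and the exact edge-dict keys are illustrative assumptions, since only four lines of the real loop are visible:

```python
class Op:
    """Minimal stand-in for a visualizer operation node (assumed shape)."""
    def __init__(self, name, ins, outs):
        self.name = name
        self.ins = ins    # input tensor names
        self.outs = outs  # output tensor names

def collect_edges(op):
    # Inputs point into the op; outputs point out of it.
    edges = []
    for tensor in op.ins:
        edges.append({"source": str(tensor), "target": op.name})
    for tensor in op.outs:
        edges.append({"source": op.name, "target": str(tensor)})
    return edges
```

For example, an op with one input and one output yields two edges, one on each side of the operation node.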
|
D | README.md |
    199: … as an internal operand. Will skip if the model does not have any output tensor that is compatible…
    231: …model to model inputs. Will skip if the model does not have any constant tensor, or if the model h…
    233: …t as an internal operand. Will skip if the model does not have any input tensor that is compatible…
|
/packages/modules/NeuralNetworks/common/cpu_operations/ |
D | QuantizedLSTMTest.cpp |
    226: Result setInputTensor(Execution* execution, int tensor, const std::vector<T>& data) { in setInputTensor() argument
    227: return execution->setInput(tensor, data.data(), sizeof(T) * data.size()); in setInputTensor()
    230: Result setOutputTensor(Execution* execution, int tensor, std::vector<T>* data) { in setOutputTensor() argument
    231: return execution->setOutput(tensor, data->data(), sizeof(T) * data->size()); in setOutputTensor()
|
D | QLSTM.cpp |
    36: inline bool hasTensor(IOperationExecutionContext* context, const uint32_t tensor) { in hasTensor() argument
    37: return context->getInputBuffer(tensor) != nullptr; in hasTensor()
    58: for (const int tensor : requiredTensorInputs) { in prepare() local
    59: NN_RET_CHECK(!context->isOmittedInput(tensor)) in prepare()
    60: << "required input " << tensor << " is omitted"; in prepare()
|
D | UnidirectionalSequenceLSTM.cpp |
    40: inline bool hasTensor(IOperationExecutionContext* context, const uint32_t tensor) { in hasTensor() argument
    41: return context->getInputBuffer(tensor) != nullptr; in hasTensor()
|
/packages/modules/NeuralNetworks/extensions/ |
D | README.md |
    61: * A custom tensor type.
    63: * Attached to this tensor is {@link ExampleTensorParams}.
    76: * * 0: A tensor of {@link EXAMPLE_TENSOR}.
|
/packages/modules/OnDevicePersonalization/src/com/android/ondevicepersonalization/services/inference/ |
D | IsolatedModelServiceImpl.java |
    155: Tensor tensor = interpreter.getInputTensor(i); in runTfliteInterpreter() local
    156: int[] shape = tensor.shape(); in runTfliteInterpreter()
|
/packages/inputmethods/LatinIME/dictionaries/ |
D | en_GB_wordlist.combined.gz | 1: dictionary=main:en_gb,locale=en_GB,description=English (UK),date ... |
D | en_US_wordlist.combined.gz | 1: dictionary=main:en_us,locale=en_US,description=English (US),date ... |
D | en_wordlist.combined.gz | 1: dictionary=main:en,locale=en,description=English,date=1414726273, ... |
D | pl_wordlist.combined.gz | 1: dictionary=main:pl,locale=pl,description=Polish,date=1414726264, ... |
D | nl_wordlist.combined.gz | 1: dictionary=main:nl,locale=nl,description=Dutch,date=1414726258, ... |
D | nb_wordlist.combined.gz | 1: dictionary=main:nb,locale=nb,description=Norwegian Bokmål,date=1393228136 ... |
D | pt_BR_wordlist.combined.gz | 1: dictionary=main:pt_br,locale=pt_BR,description=Portuguese (Brazil),date ... |