
Searched full:inference (Results 1 – 25 of 1648) sorted by relevance


/external/apache-commons-math/src/main/java/org/apache/commons/math3/stat/inference/
TestUtils.java
17 package org.apache.commons.math3.stat.inference;
36 * A collection of static methods to create inference test instances or to
37 * perform inference tests.
68 * @see org.apache.commons.math3.stat.inference.TTest#homoscedasticT(double[], double[])
76 …* @see org.apache.commons.math3.stat.inference.TTest#homoscedasticT(org.apache.commons.math3.stat.…
85 …* @see org.apache.commons.math3.stat.inference.TTest#homoscedasticTTest(double[], double[], double)
95 * @see org.apache.commons.math3.stat.inference.TTest#homoscedasticTTest(double[], double[])
103 …* @see org.apache.commons.math3.stat.inference.TTest#homoscedasticTTest(org.apache.commons.math3.s…
112 * @see org.apache.commons.math3.stat.inference.TTest#pairedT(double[], double[])
121 * @see org.apache.commons.math3.stat.inference.TTest#pairedTTest(double[], double[], double)
[all …]
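The hits above point at the TestUtils facade over the commons-math3 TTest class. Below is a minimal sketch of a pooled-variance (homoscedastic) two-sample t-test using those static methods; the sample values are made up purely for illustration.

```java
import org.apache.commons.math3.stat.inference.TestUtils;

public class TTestExample {
    public static void main(String[] args) {
        // Two independent samples (illustrative values only).
        double[] sample1 = {91.0, 87.5, 99.1, 105.3, 92.2, 88.4};
        double[] sample2 = {101.2, 97.8, 108.4, 110.0, 95.6, 104.1};

        // Pooled-variance t statistic and two-sided p-value, mirroring the
        // TTest methods referenced in the Javadoc hits above.
        double t = TestUtils.homoscedasticT(sample1, sample2);
        double p = TestUtils.homoscedasticTTest(sample1, sample2);

        // true if the null hypothesis of equal means is rejected at alpha = 0.05.
        boolean reject = TestUtils.homoscedasticTTest(sample1, sample2, 0.05);

        System.out.printf("t = %.4f, p = %.4f, reject at 0.05: %b%n", t, p, reject);
    }
}
```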
/external/aws-sdk-java-v2/services/elasticinference/src/main/resources/codegen-resources/
service-2.json
5 "endpointPrefix":"api.elastic-inference",
8 "serviceAbbreviation":"Amazon Elastic Inference",
9 "serviceFullName":"Amazon Elastic Inference",
10 "serviceId":"Elastic Inference",
12 "signingName":"elastic-inference",
13 "uid":"elastic-inference-2017-07-25"
29 …ng April 15, 2023, AWS will not onboard new customers to Amazon Elastic Inference (EI), and will h…
42 …ng April 15, 2023, AWS will not onboard new customers to Amazon Elastic Inference (EI), and will h…
57 …ng April 15, 2023, AWS will not onboard new customers to Amazon Elastic Inference (EI), and will h…
72 …lastic Inference Accelerator. </p> <p> February 15, 2023: Starting April 15, 2023, AWS will not on…
[all …]
endpoint-tests.json
7 "url": "https://api.elastic-inference.ap-northeast-1.amazonaws.com"
20 "url": "https://api.elastic-inference.ap-northeast-2.amazonaws.com"
33 "url": "https://api.elastic-inference.eu-west-1.amazonaws.com"
46 "url": "https://api.elastic-inference.us-east-1.amazonaws.com"
59 "url": "https://api.elastic-inference.us-east-2.amazonaws.com"
72 "url": "https://api.elastic-inference.us-west-2.amazonaws.com"
85 "url": "https://api.elastic-inference-fips.us-east-1.api.aws"
98 "url": "https://api.elastic-inference-fips.us-east-1.amazonaws.com"
111 "url": "https://api.elastic-inference.us-east-1.api.aws"
124 … "url": "https://api.elastic-inference-fips.cn-north-1.api.amazonwebservices.com.cn"
[all …]
/external/apache-commons-math/src/main/java/org/apache/commons/math/stat/inference/
TestUtils.java
17 package org.apache.commons.math.stat.inference;
24 * A collection of static methods to create inference test instances or to
25 * perform inference tests.
156 * @see org.apache.commons.math.stat.inference.TTest#homoscedasticT(double[], double[])
164 …* @see org.apache.commons.math.stat.inference.TTest#homoscedasticT(org.apache.commons.math.stat.de…
173 … * @see org.apache.commons.math.stat.inference.TTest#homoscedasticTTest(double[], double[], double)
182 * @see org.apache.commons.math.stat.inference.TTest#homoscedasticTTest(double[], double[])
190 …* @see org.apache.commons.math.stat.inference.TTest#homoscedasticTTest(org.apache.commons.math.sta…
199 * @see org.apache.commons.math.stat.inference.TTest#pairedT(double[], double[])
207 * @see org.apache.commons.math.stat.inference.TTest#pairedTTest(double[], double[], double)
[all …]
/external/armnn/samples/ObjectDetection/include/
ObjectDetectionPipeline.hpp
18 …eric object detection pipeline with 3 steps: data pre-processing, inference execution and inference
27 * @param executor - unique pointer to inference runner
28 * @param decoder - unique pointer to inference results decoder
44 * @brief Executes inference
46 * Calls inference runner provided during instance construction.
48 * @param[in] processed - input inference data. Data type should be aligned with input tensor.
49 * @param[out] result - raw floating point inference results.
51 virtual void Inference(const cv::Mat& processed, common::InferenceResults<float>& result);
54 * @brief Standard inference results post-processing implementation.
56 * Decodes inference results using decoder provided during construction.
[all …]
/external/armnn/samples/KeywordSpotting/include/
KeywordSpottingPipeline.hpp
16 …eric Keyword Spotting pipeline with 3 steps: data pre-processing, inference execution and inference
26 * @param executor - unique pointer to inference runner
27 * @param decoder - unique pointer to inference results decoder
36 * Preprocesses and prepares the data for inference by
45 * @brief Executes inference
47 * Calls inference runner provided during instance construction.
49 …* @param[in] preprocessedData - input inference data. Data type should be aligned with input tenso…
50 * @param[out] result - raw inference results.
52 …void Inference(const std::vector<int8_t>& preprocessedData, common::InferenceResults<int8_t>& resu…
55 * @brief Standard inference results post-processing implementation.
[all …]
/external/armnn/samples/SpeechRecognition/include/
SpeechRecognitionPipeline.hpp
16 …ic Speech Recognition pipeline with 3 steps: data pre-processing, inference execution and inference
26 * @param executor - unique pointer to inference runner
27 * @param decoder - unique pointer to inference results decoder
35 * Preprocesses and prepares the data for inference by
51 * @brief Executes inference
53 * Calls inference runner provided during instance construction.
55 …* @param[in] preprocessedData - input inference data. Data type should be aligned with input tenso…
56 * @param[out] result - raw inference results.
59 … void Inference(const std::vector<T>& preprocessedData, common::InferenceResults<int8_t>& result) in Inference() function in asr::ASRPipeline
66 * @brief Standard inference results post-processing implementation.
[all …]
/external/aws-sdk-java-v2/services/sagemakerruntime/src/main/resources/codegen-resources/
service-2.json
47 Inference requests sent to this API are enqueued for asynchronous processing. The processing of th…
65 inference response as a stream. The inference stream provides the response payload incrementally a…
172 …"documentation":"<p>The desired MIME type of the inference response from the model container.</p>",
178 …"documentation":"<p>Provides additional information about a request for an inference submitted to …
184 …"documentation":"<p>The identifier for the inference request. Amazon SageMaker will generate an id…
186 "locationName":"X-Amzn-SageMaker-Inference-Id"
190 "documentation":"<p>The Amazon S3 URI where the inference request payload is stored.</p>",
213 …"documentation":"<p>Identifier for an inference request. This will be the same as the <code>Infere…
217 … "documentation":"<p>The Amazon S3 URI where the inference response payload is stored.</p>",
223 …"documentation":"<p>The Amazon S3 URI where the inference failure response payload is stored.</p>",
[all …]
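The sagemakerruntime model above covers synchronous, asynchronous, and streaming endpoint invocation. A minimal sketch of the synchronous InvokeEndpoint call with the AWS SDK for Java v2 follows; the endpoint name and JSON payload are placeholders and depend entirely on the deployed model.

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.sagemakerruntime.SageMakerRuntimeClient;
import software.amazon.awssdk.services.sagemakerruntime.model.InvokeEndpointRequest;
import software.amazon.awssdk.services.sagemakerruntime.model.InvokeEndpointResponse;

public class InvokeEndpointExample {
    public static void main(String[] args) {
        // Hypothetical endpoint name and request payload.
        String endpointName = "my-inference-endpoint";
        String payload = "{\"instances\": [[1.0, 2.0, 3.0]]}";

        try (SageMakerRuntimeClient runtime = SageMakerRuntimeClient.create()) {
            InvokeEndpointRequest request = InvokeEndpointRequest.builder()
                    .endpointName(endpointName)
                    .contentType("application/json")
                    .accept("application/json")
                    .body(SdkBytes.fromUtf8String(payload))
                    .build();

            // The model's inference result, in the MIME type requested via accept().
            InvokeEndpointResponse response = runtime.invokeEndpoint(request);
            System.out.println(response.body().asUtf8String());
        }
    }
}
```

The asynchronous and streaming variants described in the hits (InvokeEndpointAsync, the inference stream) follow the same request/response pattern but exchange payloads via S3 URIs or incremental stream events.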
/external/tensorflow/tensorflow/lite/g3doc/guide/
inference.md
1 # TensorFlow Lite inference
3 The term *inference* refers to the process of executing a TensorFlow Lite model
5 inference with a TensorFlow Lite model, you must run it through an
11 an inference using C++, Java, and Python, plus links to other resources for each
18 TensorFlow Lite inference typically follows the following steps:
31 1. **Running inference**
39 When you receive results from the model inference, you must interpret the
48 TensorFlow inference APIs are provided for most common mobile/embedded platforms
53 use. TensorFlow Lite is designed for fast inference on small devices, so it
59 inputs, and retrieve inference outputs.
[all …]
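The inference.md hits outline the usual TensorFlow Lite steps: load the model, transform the input, run inference, interpret the output. A minimal Java sketch using org.tensorflow.lite.Interpreter is shown below; the model file name and tensor shapes are assumptions for illustration.

```java
import java.io.File;
import org.tensorflow.lite.Interpreter;

public class TfLiteInferenceExample {
    public static void main(String[] args) {
        // Hypothetical model with a [1, 4] float input and a [1, 3] float output.
        File modelFile = new File("model.tflite");

        try (Interpreter interpreter = new Interpreter(modelFile)) {
            float[][] input = {{1.0f, 2.0f, 3.0f, 4.0f}};
            float[][] output = new float[1][3];

            // Runs one inference: copies the input in, invokes the model, fills the output.
            interpreter.run(input, output);

            for (float score : output[0]) {
                System.out.println(score);
            }
        }
    }
}
```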
/external/armnn/python/pyarmnn/examples/speech_recognition/
audio_utils.py
10 """Decodes the integer encoded results from inference into a string.
13 model_output: Results from running inference.
27 results: List of top 1 results from inference.
57 …es the current right context, and updates it for each inference. Will get used after last inferenc…
60 is_first_window: Boolean to show if it is the first window we are running inference on
62 output_result: the output from the inference
74 # Since it's the first inference, keep the left context, and inner context, and decode
80 # Store the right context, we will need it after the last inference
/external/armnn/
InstallationViaAptRepository.md
153 libarmnn-cpuref-backend23 - Arm NN is an inference engine for CPUs, GPUs and NPUs
154 libarmnn-cpuref-backend24 - Arm NN is an inference engine for CPUs, GPUs and NPUs
155 libarmnn-dev - Arm NN is an inference engine for CPUs, GPUs and NPUs
156 …libarmnntfliteparser-dev - Arm NN is an inference engine for CPUs, GPUs and NPUs # Note: removal o…
157 libarmnn-tfliteparser23 - Arm NN is an inference engine for CPUs, GPUs and NPUs
158 …libarmnntfliteparser24 - Arm NN is an inference engine for CPUs, GPUs and NPUs # Note: removal of …
159 …libarmnntfliteparser24.5 - Arm NN is an inference engine for CPUs, GPUs and NPUs # Note: removal o…
160 libarmnn23 - Arm NN is an inference engine for CPUs, GPUs and NPUs
161 libarmnn24 - Arm NN is an inference engine for CPUs, GPUs and NPUs
162 libarmnn25 - Arm NN is an inference engine for CPUs, GPUs and NPUs
[all …]
/external/aws-sdk-java-v2/services/lookoutequipment/src/main/resources/codegen-resources/
service-2.json
51 …mentation":"<p> Creates a scheduled inference. Scheduling an inference is setting up a continuous …
107 …"documentation":"<p>Creates a machine learning model for data inference. </p> <p>A machine-learnin…
142 …ataset and associated artifacts. The operation will check to see if any inference scheduler or dat…
159 …"documentation":"<p>Deletes an inference scheduler that has been set up. Prior inference results w…
210 …zon Lookout for Equipment. This will prevent it from being used with an inference scheduler, even …
295 …"documentation":"<p> Specifies information about the inference scheduler being used, including nam…
484 …"documentation":"<p> Lists all inference events that have been found for the specified inference s…
501 …"documentation":"<p> Lists all inference executions that have been performed by the specified infe…
517 …"documentation":"<p>Retrieves a list of all inference schedulers currently available for your acco…
688 "documentation":"<p>Starts an inference scheduler. </p>"
[all …]
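The lookoutequipment hits revolve around inference schedulers (create, delete, list, start). A sketch of starting one with the AWS SDK for Java v2 follows, under the assumption that the generated client exposes startInferenceScheduler with an inferenceSchedulerName field; the scheduler name is a placeholder.

```java
import software.amazon.awssdk.services.lookoutequipment.LookoutEquipmentClient;
import software.amazon.awssdk.services.lookoutequipment.model.StartInferenceSchedulerRequest;
import software.amazon.awssdk.services.lookoutequipment.model.StartInferenceSchedulerResponse;

public class StartSchedulerExample {
    public static void main(String[] args) {
        try (LookoutEquipmentClient client = LookoutEquipmentClient.create()) {
            // Assumed request shape: the scheduler is identified by its name.
            StartInferenceSchedulerRequest request = StartInferenceSchedulerRequest.builder()
                    .inferenceSchedulerName("my-inference-scheduler")
                    .build();

            StartInferenceSchedulerResponse response = client.startInferenceScheduler(request);
            System.out.println(response);
        }
    }
}
```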
/external/tensorflow/tensorflow/core/framework/
full_type_inference_util.h
33 // inference functions.
40 // same can be said about the shape inference function.
42 // Note: Unlike type constructors, which describe op definitions, type inference
46 // Helper for a no-op type inference function that indicates type inference
48 // This is the same as not defining a type inference function at all, but
52 // Helper for a type inference function which has the same type as the i'th
59 // Helper for a type inference function which has the same type as a variadic
74 // Helper for the type inference counterpart of Unary, that is (U ->
116 // Auxiliary constructs to help creation of type inference functions.
117 // TODO(mdan): define these as type inference functions as well.
[all …]
op_def_builder.h
17 // inference function for Op registration.
42 // A type inference function, called for each node during type inference
100 // Forward type inference function. This callable infers the return type of an
103 // Note that the type constructor and forward inference functions need not be
107 // forward inference function.
117 // These type inference functions are intermediate solutions as well: once the
119 // a solver-based type inference, it will replace these functions.
121 // TODO(mdan): Merge with shape inference.
122 // TODO(mdan): Replace with a union-based type inference algorithm.
125 // Reverse type inference function. This callable infers some input types
[all …]
/external/aws-sdk-java-v2/services/bedrockruntime/src/main/resources/codegen-resources/
service-2.json
35 inference using the input provided in the request body. You use InvokeModel to run inference for t…
58inference using the input provided. Return the response in a stream.</p> <p>For more information, …
105 …://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.<…
115 …"documentation":"<p>The desired MIME type of the inference body in the response. The default value…
137 Inference response from the model in the format specified in the content-type header field. To see…
141 "documentation":"<p>The MIME type of the inference result.</p>",
157 Inference input in the format specified by the content-type. To see the format and content of this…
167 …"documentation":"<p>The desired MIME type of the inference body in the response. The default value…
189 Inference response from the model in the format specified by Content-Type. To see the format and c…
193 "documentation":"<p>The MIME type of the inference result.</p>",
[all …]
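The bedrockruntime hits describe InvokeModel and its streaming variant, with content-type, accept, and body shapes. A minimal synchronous InvokeModel sketch with the AWS SDK for Java v2 is below; the model id and JSON body are placeholders, since the request body format is model-specific (see the "Inference parameters" documentation referenced above).

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeClient;
import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelRequest;
import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelResponse;

public class InvokeModelExample {
    public static void main(String[] args) {
        // Placeholder model id and body; the JSON fields depend on the chosen model.
        String modelId = "amazon.titan-text-express-v1";
        String body = "{\"inputText\": \"Summarize what model inference means.\"}";

        try (BedrockRuntimeClient bedrock = BedrockRuntimeClient.create()) {
            InvokeModelRequest request = InvokeModelRequest.builder()
                    .modelId(modelId)
                    .contentType("application/json")
                    .accept("application/json")
                    .body(SdkBytes.fromUtf8String(body))
                    .build();

            InvokeModelResponse response = bedrock.invokeModel(request);
            // Inference response in the MIME type named by the content-type header.
            System.out.println(response.body().asUtf8String());
        }
    }
}
```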
/external/armnn/samples/SpeechRecognition/
Readme.md
16 sample rate. Top level inference API is provided by Arm NN library.
104 3. Executing Inference
106 5. Decoding and Processing Inference Output
163 Using the `Optimize()` function we optimize the graph for inference and load the optimized network …
195 …on pipeline has 3 steps to perform, data pre-processing, run inference and decode inference results
202 …ositioned window of data, sized appropriately for the given model, to pre-process before inference.
212 After all the MFCCs needed for an inference have been extracted from the audio data, we convolve th…
215 #### Executing Inference
219 asrPipeline->Inference<int8_t>(preprocessedData, results);
221 Inference step will call `ArmnnNetworkExecutor::Run` method that will prepare input tensors and exe…
[all …]
/external/armnn/samples/KeywordSpotting/
Readme.md
17 …data from file, and to re-sample to the expected sample rate. Top level inference API is provided …
123 3. Executing Inference
125 5. Decoding and Processing Inference Output
188 Using the `Optimize()` function we optimize the graph for inference and load the optimized network …
224 …ng pipeline has 3 steps to perform: data pre-processing, run inference and decode inference result…
231 …ositioned window of data, sized appropriately for the given model, to pre-process before inference.
241 After all the MFCCs needed for an inference have been extracted from the audio data they are concat…
243 #### Executing Inference
248 kwsPipeline->Inference(preprocessedData, results);
251 Inference step will call `ArmnnNetworkExecutor::Run` method that will prepare input tensors and exe…
[all …]
/external/tensorflow/tensorflow/lite/delegates/xnnpack/
README.md
3 XNNPACK is a highly optimized library of neural network inference operators for
6 library as an inference engine for TensorFlow Lite.
12 for floating-point inference.
85 inference by default.**
129 // Run inference using XNNPACK
178 // and inference.
197 The weights cache has to be finalized before any inference, it will be an error
421 XNNPACK supports half-precision (using IEEE FP16 format) inference for a subset
423 inference when the following conditions are met:
431 * IEEE FP16 inference is supported for every floating-point operator in the
[all …]
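The XNNPACK README hits note that the delegate accelerates floating-point inference and may be enabled by default on some platforms. A sketch of toggling it explicitly from the TensorFlow Lite Java API is below, assuming Interpreter.Options#setUseXNNPACK is available in the TFLite build being used; the model file and tensor shapes are placeholders.

```java
import java.io.File;
import org.tensorflow.lite.Interpreter;

public class XnnpackOptionExample {
    public static void main(String[] args) {
        // Explicitly opt in to the XNNPACK delegate for float inference
        // (assumed to be exposed as setUseXNNPACK in this TFLite version).
        Interpreter.Options options = new Interpreter.Options();
        options.setUseXNNPACK(true);
        options.setNumThreads(2);

        try (Interpreter interpreter = new Interpreter(new File("model.tflite"), options)) {
            float[][] input = {{0.1f, 0.2f, 0.3f, 0.4f}};
            float[][] output = new float[1][3];
            interpreter.run(input, output);
            System.out.println(output[0][0]);
        }
    }
}
```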
/external/desugar/test/java/com/google/devtools/build/android/desugar/
ByteCodeTypePrinter.java
88 BytecodeTypeInference inference = new BytecodeTypeInference(access, internalName, name, desc); in visitMethod() local
89 mv = new MethodIrTypeDumper(mv, inference, printWriter); in visitMethod()
90 inference.setDelegateMethodVisitor(mv); in visitMethod()
91 // Let the type inference runs first. in visitMethod()
92 return inference; in visitMethod()
109 private final BytecodeTypeInference inference; field in ByteCodeTypePrinter.MethodIrTypeDumper
114 MethodVisitor visitor, BytecodeTypeInference inference, PrintWriter printWriter) { in MethodIrTypeDumper() argument
116 this.inference = inference; in MethodIrTypeDumper()
121 printer.print(" |__STACK: " + inference.getOperandStackAsString() + "\n"); in printTypeOfOperandStack()
122 printer.print(" |__LOCAL: " + inference.getLocalsAsString() + "\n"); in printTypeOfOperandStack()
/external/armnn/python/pyarmnn/src/pyarmnn/_utilities/
profiling_helper.py
10 ProfilerData.__doc__ = """Container to hold the profiling inference data, and the profiling data pe…
13 inference_data (dict): holds end-to-end inference performance data. Keys:
15 … 'execution_time' - list of total inference execution times for each inference run.
19 … 'execution_time' - list of total execution times for each inference run.
56 …#Get the inference measurements dict, this will be just one value for key starting with "inference…
72 # This is the total inference time map
/external/javaparser/javaparser-symbol-solver-core/src/main/java/com/github/javaparser/symbolsolver/resolution/typeinference/
TypeInference.java
65 // throw new IllegalArgumentException("Type inference unnecessary as type arguments have… in instantiationInference()
71 …// - Where P1, ..., Pp (p ≥ 1) are the type parameters of m, let α1, ..., αp be inference variable… in instantiationInference()
134 // inference variables in B2 succeeds (§18.4). in instantiationInference()
184 …:=α1, ..., Pp:=αp] defined in §18.5.1 to replace the type parameters of m with inference variables. in invocationTypeInferenceBoundsSetB3()
186 … in §18.5.1. (While it was necessary in §18.5.1 to demonstrate that the inference variables in B2 … in invocationTypeInferenceBoundsSetB3()
197 …// for fresh inference variables β1, ..., βn, the constraint formula ‹G<β1, ..., βn> → T› is r… in invocationTypeInferenceBoundsSetB3()
200 // - Otherwise, if R θ is an inference variable α, and one of the following is true: in invocationTypeInferenceBoundsSetB3()
256 inference variable α can influence an inference variable β if α depends on the resolution of β (§1… in invocationTypeInference()
272 … //Finally, if B4 does not contain the bound false, the inference variables in B4 are resolved. in invocationTypeInference()
274 …//If resolution succeeds with instantiations T1, ..., Tp for inference variables α1, ..., αp, let … in invocationTypeInference()
[all …]
/external/tensorflow/tensorflow/lite/g3doc/inference_with_metadata/task_library/
bert_nl_classifier.md
29 ## Run inference in Java
60 ### Step 2: Run inference using the API
71 // Run inference
79 ## Run inference in Swift
92 ### Step 2: Run inference using the API
99 // Run inference
107 ## Run inference in C++
115 // Run inference with your input, `input_text`.
image_classifier.md
48 ## Run inference in Java
75 // Import the GPU delegate plugin Library for GPU inference
93 // Run inference
101 ## Run inference in iOS
123 Make sure that the `.tflite` model you will be using for inference is present in
150 // Run inference
179 // Run inference
188 ## Run inference in Python
213 # Run inference
222 ## Run inference in C++
[all …]
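The Task Library pages above all follow the same pattern: create a task object from a .tflite model, then call a single inference method. A minimal Android-flavored Java sketch for ImageClassifier follows, assuming the tensorflow-lite-task-vision dependency, a model bundled in assets as "model.tflite", and a bitmap already loaded by the caller.

```java
import android.content.Context;
import android.graphics.Bitmap;
import java.io.IOException;
import java.util.List;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.task.vision.classifier.Classifications;
import org.tensorflow.lite.task.vision.classifier.ImageClassifier;
import org.tensorflow.lite.task.vision.classifier.ImageClassifier.ImageClassifierOptions;

public class TaskLibraryExample {
    // Classifies a bitmap; "model.tflite" is assumed to be bundled in the app's assets.
    static List<Classifications> classify(Context context, Bitmap bitmap) throws IOException {
        ImageClassifierOptions options = ImageClassifierOptions.builder()
                .setMaxResults(3)
                .build();
        ImageClassifier classifier =
                ImageClassifier.createFromFileAndOptions(context, "model.tflite", options);

        // Run inference on the pre-loaded image.
        return classifier.classify(TensorImage.fromBitmap(bitmap));
    }
}
```

The other task pages listed here (bert_nl_classifier, image_segmenter, object_detector) differ only in the task class and the result type returned by the single inference call.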
image_segmenter.md
37 ## Run inference in Java
64 // Import the GPU delegate plugin Library for GPU inference
85 // Run inference
93 ## Run inference in iOS
115 Make sure that the `.tflite` model you will be using for inference is present in
142 // Run inference
171 // Run inference
180 ## Run inference in Python
206 # Run inference
215 ## Run inference in C++
[all …]
object_detector.md
46 ## Run inference in Java
73 // Import the GPU delegate plugin Library for GPU inference
95 // Run inference
103 ## Run inference in iOS
125 Make sure that the `.tflite` model you will be using for inference is present in
152 // Run inference
180 // Run inference
188 ## Run inference in Python
213 # Run inference
222 ## Run inference in C++
[all …]
