| /external/sdv/vsomeip/third_party/boost/function_types/test/ |
| D | Jamfile |
    7   #-------------------------------------------------------------------------------
    12  test-suite function_types :
    14  # Classification
    16  [ compile classification/is_function.cpp ]
    17  [ compile classification/is_function_pointer.cpp ]
    18  [ compile classification/is_function_reference.cpp ]
    19  [ compile classification/is_member_function_pointer.cpp ]
    20  [ compile classification/is_member_object_pointer.cpp ]
    21  [ compile classification/is_callable_builtin.cpp ]
    22  [ compile classification/is_nonmember_callable_builtin.cpp ]
    [all …]
|
| /external/google-cloud-java/java-automl/proto-google-cloud-automl-v1beta1/src/main/java/com/google/cloud/automl/v1beta1/ |
| D | BatchPredictRequestOrBuilder.java |
    8    * https://www.apache.org/licenses/LICENSE-2.0
    144  * Required. Additional domain-specific parameters for the predictions, any string must
    146  * * For Text Classification:
    147  * `score_threshold` - (float) A value from 0.0 to 1.0. When the model
    150  * * For Image Classification:
    151  * `score_threshold` - (float) A value from 0.0 to 1.0. When the model
    155  * `score_threshold` - (float) When Model detects objects on the image,
    158  * `max_bounding_box_count` - (int64) No more than this number of bounding
    161  * * For Video Classification :
    162  * `score_threshold` - (float) A value from 0.0 to 1.0. When the model
    [all …]
|
| D | BatchPredictRequest.java |
    8    * https://www.apache.org/licenses/LICENSE-2.0
    283  * Required. Additional domain-specific parameters for the predictions, any string must
    285  * * For Text Classification:
    286  * `score_threshold` - (float) A value from 0.0 to 1.0. When the model
    289  * * For Image Classification:
    290  * `score_threshold` - (float) A value from 0.0 to 1.0. When the model
    294  * `score_threshold` - (float) When Model detects objects on the image,
    297  * `max_bounding_box_count` - (int64) No more than this number of bounding
    300  * * For Video Classification :
    301  * `score_threshold` - (float) A value from 0.0 to 1.0. When the model
    [all …]
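The `params` field documented in the two v1beta1 excerpts above is a string-to-string map on the request, and the excerpt names `score_threshold` and `max_bounding_box_count` as recognized keys. Below is a minimal Java sketch, assuming the generated v1beta1 builder API; the model resource name, the threshold values, and the omitted input/output configs are illustrative placeholders, not taken from the page.

```java
import com.google.cloud.automl.v1beta1.BatchPredictRequest;

public class BatchPredictParamsSketch {
  public static BatchPredictRequest buildRequest() {
    // Only the param keys quoted in the excerpt above are used here; the
    // values and the model resource name are illustrative placeholders.
    return BatchPredictRequest.newBuilder()
        .setName("projects/PROJECT/locations/us-central1/models/MODEL_ID") // placeholder
        // .setInputConfig(...)   // BatchPredictInputConfig (omitted here)
        // .setOutputConfig(...)  // BatchPredictOutputConfig (omitted here)
        .putParams("score_threshold", "0.5")        // float in [0.0, 1.0], per the docs
        .putParams("max_bounding_box_count", "100") // int64, object detection only
        .build();
  }
}
```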
|
| /external/google-cloud-java/java-automl/proto-google-cloud-automl-v1/src/main/java/com/google/cloud/automl/v1/ |
| D | BatchPredictRequestOrBuilder.java |
    8    * https://www.apache.org/licenses/LICENSE-2.0
    144  * Additional domain-specific parameters for the predictions, any string must
    146  * AutoML Natural Language Classification
    151  * AutoML Vision Classification
    165  * AutoML Video Intelligence Classification
    172  * segment-level classification. AutoML Video Intelligence returns
    177  * : (boolean) Set to true to request shot-level
    178  * classification. AutoML Video Intelligence determines the boundaries
    184  * WARNING: Model evaluation is not done for this classification type,
    189  * classification for a video at one-second intervals. AutoML Video
    [all …]
|
| D | BatchPredictRequest.java |
    8    * https://www.apache.org/licenses/LICENSE-2.0
    281  * Additional domain-specific parameters for the predictions, any string must
    283  * AutoML Natural Language Classification
    288  * AutoML Vision Classification
    302  * AutoML Video Intelligence Classification
    309  * segment-level classification. AutoML Video Intelligence returns
    314  * : (boolean) Set to true to request shot-level
    315  * classification. AutoML Video Intelligence determines the boundaries
    321  * WARNING: Model evaluation is not done for this classification type,
    326  * classification for a video at one-second intervals. AutoML Video
    [all …]
|
| /external/libtextclassifier/native/annotator/translate/ |
| D | translate_test.cc |
    8    * http://www.apache.org/licenses/LICENSE-2.0
    22   #include "utils/test-data-test-utils.h"
    23   #include "lang_id/fb_model/lang-id-from-fb.h"
    24   #include "lang_id/lang-id.h"
    56   options_data_classification->data());  in TestingTranslateAnnotatorOptions()
    59   options_data_none->data());  in TestingTranslateAnnotatorOptions()
    95   ClassificationResult classification;  in TEST_F() local
    98   "en", &classification));  in TEST_F()
    100  EXPECT_THAT(classification,  in TEST_F()
    103  GetEntityData(classification.serialized_entity_data.data());  in TEST_F()
    [all …]
|
| /external/sdv/vsomeip/third_party/boost/algorithm/include/boost/algorithm/string/ |
| D | classification.hpp |
    1   // Boost string_algo library classification.hpp header file ---------------------------//
    3   // Copyright Pavol Droba 2002-2003.
    18  #include <boost/algorithm/string/detail/classification.hpp>
    23  Classification predicates are included in the library to give
    25  They wrap functionality of STL classification functions ( e.g. \c std::isspace() )
    32  // classification functor generator -------------------------------------//
    40  \param Loc A locale used for classification
    53  \param Loc A locale used for classification
    66  \param Loc A locale used for classification
    79  \param Loc A locale used for classification
    [all …]
|
| /external/google-cloud-java/java-automl/proto-google-cloud-automl-v1beta1/src/main/proto/google/cloud/automl/v1beta1/ |
| D | classification.proto |
    7   // http://www.apache.org/licenses/LICENSE-2.0
    27  // Type of the classification problem.
    29  // An un-set value of this enum.
    39  // Contains annotation details specific to classification.
    49  // Contains annotation details specific to video classification.
    51  // Output only. Expresses the type of video classification. Possible values:
    53  // * `segment` - Classification done on a specified by user
    56  // ML model evaluations are done only for this type of classification.
    58  // * `shot`- Shot-level classification.
    64  // WARNING: Model evaluation is not done for this classification type,
    [all …]
|
| /external/googleapis/google/cloud/automl/v1beta1/ |
| D | classification.proto |
    7   // http://www.apache.org/licenses/LICENSE-2.0
    27  // Type of the classification problem.
    29  // An un-set value of this enum.
    39  // Contains annotation details specific to classification.
    49  // Contains annotation details specific to video classification.
    51  // Output only. Expresses the type of video classification. Possible values:
    53  // * `segment` - Classification done on a specified by user
    56  // ML model evaluations are done only for this type of classification.
    58  // * `shot`- Shot-level classification.
    64  // WARNING: Model evaluation is not done for this classification type,
    [all …]
|
| /external/libtextclassifier/native/annotator/duration/ |
| D | duration_test.cc |
    8    * http://www.apache.org/licenses/LICENSE-2.0
    25   #include "annotator/types-test-util.h"
    27   #include "utils/tokenizer-utils.h"
    69   options.sub_token_separator_codepoints.push_back('-');  in CreateOptionsData()
    88   options_data_selection->data());  in TestingDurationAnnotatorOptions()
    91   options_data_no_selection->data());  in TestingDurationAnnotatorOptions()
    94   options_data_all->data());  in TestingDurationAnnotatorOptions()
    109  config->start = 32;  in BuildFeatureProcessor()
    110  config->end = 33;  in BuildFeatureProcessor()
    111  config->role = TokenizationCodepointRange_::Role_WHITESPACE_SEPARATOR;  in BuildFeatureProcessor()
    [all …]
|
| /external/tensorflow/tensorflow/lite/g3doc/examples/video_classification/ |
| D | overview.md |
    1   # Video classification
    3   <img src="../images/video.png" class="attempt-right">
    5   *Video classification* is the machine learning task of identifying what a video
    6   represents. A video classification model is trained on a video dataset that
    11  Video classification and image classification models both use images as inputs
    13  However, a video classification model also processes the spatio-temporal
    18  of a video classification model on Android.
    20  …rage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/push-up-classification.gif"/>
    27  download the starter video classification model and the supporting files. You
    31  <a class="button button-primary" href="https://tfhub.dev/tensorflow/lite-model/movinet/a0/stream/ki…
    [all …]
|
| /external/libtextclassifier/java/src/com/android/textclassifier/ |
| D | ExtrasUtils.java |
    8   * http://www.apache.org/licenses/LICENSE-2.0
    37  private static final String SERIALIZED_ENTITIES_DATA = "serialized-entities-data";
    38  private static final String ENTITIES_EXTRAS = "entities-extras";
    39  private static final String ACTION_INTENT = "action-intent";
    40  private static final String ACTIONS_INTENTS = "actions-intents";
    41  private static final String FOREIGN_LANGUAGE = "foreign-language";
    42  private static final String ENTITY_TYPE = "entity-type";
    44  private static final String MODEL_VERSION = "model-version";
    45  private static final String MODEL_NAME = "model-name";
    46  private static final String TEXT_LANGUAGES = "text-languages";
    [all …]
|
| /external/tflite-support/tensorflow_lite_support/examples/task/vision/desktop/ |
| D | image_classifier_demo.cc |
    7   http://www.apache.org/licenses/LICENSE-2.0
    17  // bazel run -c opt \
    19  // -- \
    20  // --model_path=/path/to/model.tflite \
    21  // --image_path=/path/to/image.jpg
    46  "Maximum number of classification results to display.");
    48  "Classification results with a confidence score below this value are "
    53  "Comma-separated list of class names that acts as a whitelist. If "
    54  "non-empty, classification results whose 'class_name' is not in this list "
    58  "Comma-separated list of class names that acts as a blacklist. If "
    [all …]
|
| /external/libwebsockets/lib/drivers/button/ |
| D | README.md |
    5   one for interrupt to bottom-half event triggering and another that runs at 5ms
    9   each button can apply its own classification regime, to allow for different
    20  internal struct passed per-interrupt to differentiate them and bind them to a
    23  The interrupt is set for notification of the active-going edge, usually if
    24  the button is pulled-up, that's the downgoing edge only. This avoids any
    28  An OS timer is used to schedule a bottom-half handler outside of interrupt
    31  To combat commonly-seen partial charging of the actual and parasitic network
    32  around the button causing drift and oscillation, the bottom-half briefly drives
    38  The bottom-half makes sure a monitoring timer is enabled, by refcount. This
    39  is the engine of the rest of the classification while any button is down. The
    [all …]
|
| /external/libcups/filter/ |
| D | common.c |
    4   * Copyright 2007-2014 by Apple Inc.
    5   * Copyright 1997-2006 by Easy Software Products.
    35  * 'SetCommonOptions()' - Set common filter options for media size, etc.
    38  ppd_file_t *             /* O - PPD file */
    40  int num_options,         /* I - Number of options */  in SetCommonOptions()
    41  cups_option_t *options,  /* I - Options */  in SetCommonOptions()
    42  int change_size)         /* I - Change page size? */  in SetCommonOptions()
    60  PageWidth = pagesize->width;  in SetCommonOptions()
    61  PageLength = pagesize->length;  in SetCommonOptions()
    62  PageTop = pagesize->top;  in SetCommonOptions()
    [all …]
|
| /external/tensorflow/tensorflow/lite/g3doc/ |
| D | _book.yaml |
    2   - name: "Install"
    5   - include: /install/_toc.yaml
    7   - name: "Learn"
    11  - include: /learn/_menu_toc.yaml
    17  - name: "Guide"
    19  - title: "Overview"
    21  - title: "Running ML models"
    24  - heading: "Libraries and tools"
    25  - title: "Task Library"
    27  - title: "Overview"
    [all …]
|
| /external/libtextclassifier/native/annotator/ |
| D | test-utils.h |
    8   * http://www.apache.org/licenses/LICENSE-2.0
    29  const std::string first_result = arg.classification.empty()
    31  : arg.classification[0].collection;
    37  const std::string first_result = arg.classification.empty()
    39  : arg.classification[0].collection;
    56  if (arg.classification.empty()) {
    61  arg.classification[0].duration_ms == duration_ms;
    65  if (arg.classification.empty()) {
    70  arg.classification[0].datetime_parse_result.time_ms_utc ==
    72  arg.classification[0].datetime_parse_result.granularity == granularity
|
| /external/libtextclassifier/native/annotator/grammar/ |
| D | grammar-annotator.cc |
    8   * http://www.apache.org/licenses/LICENSE-2.0
    17  #include "annotator/grammar/grammar-annotator.h"
    19  #include "annotator/feature-processor.h"
    38  capturing_nodes[mapping_node->id] = mapping_node;  in GetCapturingNodes()
    46  const GrammarModel_::RuleClassificationResult* classification) {  in MatchSelectionBoundaries() argument
    47  if (classification->capturing_group() == nullptr) {  in MatchSelectionBoundaries()
    49  return parse_tree->codepoint_span;  in MatchSelectionBoundaries()
    58  for (int i = 0; i < classification->capturing_group()->size(); i++) {  in MatchSelectionBoundaries()
    64  const CapturingGroup* group = classification->capturing_group()->Get(i);  in MatchSelectionBoundaries()
    65  if (group->extend_selection()) {  in MatchSelectionBoundaries()
    [all …]
|
| /external/tensorflow/tensorflow/lite/g3doc/examples/audio_classification/ |
| D | overview.md |
    1   # Audio classification
    3   <img src="../images/audio.png" class="attempt-right">
    6   classification_. An audio classification model is trained to recognize various
    9   TensorFlow Lite provides optimized pre-trained models that you can deploy in
    10  your mobile applications. Learn more about audio classification using TensorFlow
    13  The following image shows the output of the audio classification model on
    28  You can leverage the out-of-box API from
    30  to integrate audio classification models in just a few lines of code. You can
    37  <a class="button button-primary" href="https://github.com/tensorflow/examples/tree/master/lite/exam…
    40  <a class="button button-primary" href="https://github.com/tensorflow/examples/tree/master/lite/exam…
    [all …]
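The overview excerpt above points to the TensorFlow Lite Task Library for integrating audio classification in a few lines of code. A hedged Java sketch of that route follows, assuming the Task Library's `AudioClassifier` API; the asset name and the surrounding Android plumbing are placeholders rather than content from this page.

```java
import android.content.Context;
import android.media.AudioRecord;
import java.io.IOException;
import java.util.List;
import org.tensorflow.lite.support.audio.TensorAudio;
import org.tensorflow.lite.task.audio.classifier.AudioClassifier;
import org.tensorflow.lite.task.audio.classifier.Classifications;

class AudioClassificationSketch {
  // "model.tflite" is a placeholder asset name; RECORD_AUDIO permission is
  // assumed to have been granted before this method is called.
  static List<Classifications> classifyFromMic(Context context) throws IOException {
    AudioClassifier classifier = AudioClassifier.createFromFile(context, "model.tflite");

    // The classifier knows the audio format and tensor shape the model expects.
    TensorAudio audioTensor = classifier.createInputTensorAudio();
    AudioRecord record = classifier.createAudioRecord();
    record.startRecording();

    // Load the most recent samples from the recorder and run inference.
    audioTensor.load(record);
    return classifier.classify(audioTensor);
  }
}
```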
|
| /external/google-cloud-java/java-aiplatform/proto-google-cloud-aiplatform-v1/src/main/java/com/google/cloud/aiplatform/v1/schema/predict/prediction/ |
| D | VideoClassificationPredictionResultOrBuilder.java |
    8    * https://www.apache.org/licenses/LICENSE-2.0
    82   * - segment-classification
    83   * - shot-classification
    84   * - one-sec-interval-classification
    98   * - segment-classification
    99   * - shot-classification
    100  * - one-sec-interval-classification
    117  * 'segment-classification' prediction type, this equals the original
    135  * 'segment-classification' prediction type, this equals the original
    153  * 'segment-classification' prediction type, this equals the original
    [all …]
|
| /external/google-cloud-java/java-aiplatform/proto-google-cloud-aiplatform-v1beta1/src/main/java/com/google/cloud/aiplatform/v1beta1/schema/predict/prediction/ |
| D | VideoClassificationPredictionResultOrBuilder.java |
    8    * https://www.apache.org/licenses/LICENSE-2.0
    82   * - segment-classification
    83   * - shot-classification
    84   * - one-sec-interval-classification
    98   * - segment-classification
    99   * - shot-classification
    100  * - one-sec-interval-classification
    117  * 'segment-classification' prediction type, this equals the original
    135  * 'segment-classification' prediction type, this equals the original
    153  * 'segment-classification' prediction type, this equals the original
    [all …]
|
| /external/google-cloud-java/java-automl/google-cloud-automl/src/main/java/com/google/cloud/automl/v1/ |
| D | PredictionServiceClient.java |
    8    * https://www.apache.org/licenses/LICENSE-2.0
    33   // AUTO-GENERATED DOCUMENTATION AND CLASS.
    37   * <p>On any input that is documented to expect a string parameter in snake_case or dash-case,
    46   * // - It may require correct/in-range values for request initialization.
    47   * // - It may require specifying regional endpoints when creating the service client as shown in
    58   * such as threads. In the example above, try-with-resources is used, which automatically calls
    89   * // - It may require correct/in-range values for request initialization.
    90   * // - It may require specifying regional endpoints when creating the service client as shown in
    105  * // - It may require correct/in-range values for request initialization.
    106  * // - It may require specifying regional endpoints when creating the service client as shown in
    [all …]
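The generated documentation quoted above repeatedly notes that the client should be created in a try-with-resources block so that `close()` is called automatically and background resources such as threads are released. A minimal Java sketch of that pattern, with the request construction left out because the excerpt does not show it:

```java
import com.google.cloud.automl.v1.PredictionServiceClient;

public class PredictionClientSketch {
  public static void main(String[] args) throws Exception {
    // try-with-resources: close() runs automatically when the block exits,
    // releasing the client's background threads and connections.
    try (PredictionServiceClient client = PredictionServiceClient.create()) {
      // Build a PredictRequest or BatchPredictRequest here and call
      // client.predict(...) or client.batchPredictAsync(...); request setup
      // is omitted because the excerpt above does not show it.
    }
  }
}
```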
|
| /external/tensorflow/tensorflow/lite/g3doc/examples/image_classification/ |
| D | overview.md |
    1   # Image classification
    3   <img src="../images/image.png" class="attempt-right">
    6   classification_. An image classification model is trained to recognize various
    9   TensorFlow Lite provides optimized pre-trained models that you can deploy in
    10  your mobile applications. Learn more about image classification using TensorFlow
    11  [here](https://www.tensorflow.org/tutorials/images/classification).
    13  The following image shows the output of the image classification model on
    29  You can leverage the out-of-box API from
    31  to integrate image classification models in just a few lines of code. You can
    41  <a class="button button-primary" href="https://github.com/tensorflow/examples/tree/master/lite/exam…
    [all …]
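As with the audio sketch earlier, the Task Library route mentioned in this overview reduces to a couple of calls. A sketch assuming the Java `ImageClassifier` task API and a placeholder model asset:

```java
import android.content.Context;
import android.graphics.Bitmap;
import java.io.IOException;
import java.util.List;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.task.vision.classifier.Classifications;
import org.tensorflow.lite.task.vision.classifier.ImageClassifier;

class ImageClassificationSketch {
  // "model.tflite" is a placeholder asset name, not taken from the page above.
  static List<Classifications> classify(Context context, Bitmap bitmap) throws IOException {
    ImageClassifier classifier = ImageClassifier.createFromFile(context, "model.tflite");
    // Each Classifications entry holds the ranked categories for one output head.
    return classifier.classify(TensorImage.fromBitmap(bitmap));
  }
}
```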
|
| /external/python/google-api-python-client/docs/dyn/ |
| D | cloudsupport_v2beta.cases.html |
    8   font-weight: inherit;
    9   font-style: inherit;
    10  font-size: 100%;
    11  font-family: inherit;
    12  vertical-align: baseline;
    16  font-size: 13px;
    21  font-size: 26px;
    22  margin-bottom: 1em;
    26  font-size: 24px;
    27  margin-bottom: 1em;
    [all …]
|
| /external/tflite-support/tensorflow_lite_support/java/src/native/task/vision/ |
| D | jni_utils.cc |
    7   http://www.apache.org/licenses/LICENSE-2.0
    33  jobject ConvertToCategory(JNIEnv* env, const Class& classification) {  in ConvertToCategory() argument
    35  jclass category_class = env->FindClass(kCategoryClassName);  in ConvertToCategory()
    36  jmethodID category_create = env->GetStaticMethodID(  in ConvertToCategory()
    42  std::string label_string = classification.has_class_name()  in ConvertToCategory()
    43  ? classification.class_name()  in ConvertToCategory()
    44  : std::to_string(classification.index());  in ConvertToCategory()
    45  jstring label = env->NewStringUTF(label_string.c_str());  in ConvertToCategory()
    46  std::string display_name_string = classification.has_display_name()  in ConvertToCategory()
    47  ? classification.display_name()  in ConvertToCategory()
    [all …]
|