/external/tensorflow/tensorflow/core/api_def/base_api/ |
D | api_def_GenerateVocabRemapping.pbtxt |
    20  new ID that is not found in the old vocabulary.
    45  use the entire old vocabulary.
    48  summary: "Given a path to new and old vocabulary files, returns a remapping Tensor of"
    51  vocabulary that corresponds to row `i` in the new vocabulary (starting at line
    53  in the new vocabulary is not in the old vocabulary. The old vocabulary is
    60  with each line containing a single entity within the vocabulary.
    67  The op also returns a count of how many entries in the new vocabulary
    68  were present in the old vocabulary, which is used to calculate the number of
|
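The snippet above describes the contract of the `GenerateVocabRemapping` op: row `i` of the output holds the old-vocabulary index of entry `i` in the new vocabulary, `-1` marks entries absent from the old vocabulary, and a count of matched entries is returned alongside. A minimal pure-Python sketch of that semantics (the real op is a TensorFlow kernel that reads the two vocabularies from files and supports offsets; the function name and list-based interface here are illustrative assumptions):

```python
def generate_vocab_remapping(new_vocab, old_vocab):
    """Sketch of GenerateVocabRemapping semantics on in-memory lists.

    Returns (remapping, num_present) where remapping[i] is the row of
    new_vocab[i] in old_vocab, or -1 if it is not present, and
    num_present counts how many new entries were found in the old vocab.
    """
    old_index = {entry: i for i, entry in enumerate(old_vocab)}
    remapping = [old_index.get(entry, -1) for entry in new_vocab]
    num_present = sum(1 for row in remapping if row != -1)
    return remapping, num_present
```

Entries that remap to `-1` are the "new IDs not found in the old vocabulary" mentioned at hit line 20, which callers typically initialize fresh rather than copy from an old checkpoint.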
D | api_def_NegTrain.pbtxt | 30 Count of words in the vocabulary.
|
D | api_def_Skipgram.pbtxt | 67 vocabulary.
|
D | api_def_InitializeTableFromTextFile.pbtxt | 13 Filename of a vocabulary text file.
|
D | api_def_InitializeTableFromTextFileV2.pbtxt | 15 Filename of a vocabulary text file.
|
D | api_def_FixedUnigramCandidateSampler.pbtxt | 134 The vocabulary file should be in CSV-like format, with the last field
|
/external/tensorflow/tensorflow/examples/saved_model/integration_tests/ |
D | export_simple_text_embedding.py |
    37   def write_vocabulary_file(vocabulary):  [argument]
    42   for entry in vocabulary:
    54   def __init__(self, vocabulary, emb_dim, oov_buckets):  [argument]
    58   write_vocabulary_file(vocabulary))
    59   self._total_size = len(vocabulary) + oov_buckets
    99   vocabulary = ["cat", "is", "on", "the", "mat"]
    100  module = TextEmbeddingModel(vocabulary=vocabulary, emb_dim=10, oov_buckets=10)
|
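The `export_simple_text_embedding.py` hits show the sizing rule for vocabularies with out-of-vocabulary (OOV) buckets: the embedding table needs `len(vocabulary) + oov_buckets` rows, because unknown tokens are hashed into the extra buckets rather than dropped. A sketch of that lookup scheme, assuming nothing beyond the snippet (the real module uses TensorFlow lookup tables and string-hashing ops; `make_lookup` and the CRC32-based hash are illustrative stand-ins):

```python
import zlib

def make_lookup(vocabulary, oov_buckets):
    """In-vocabulary tokens get ids 0..len(vocabulary)-1; any other token
    is hashed into one of `oov_buckets` extra buckets, so the embedding
    table must have len(vocabulary) + oov_buckets rows in total."""
    index = {token: i for i, token in enumerate(vocabulary)}
    total_size = len(vocabulary) + oov_buckets

    def lookup(token):
        if token in index:
            return index[token]
        # Deterministic hash into the OOV range [len(vocabulary), total_size).
        bucket = zlib.crc32(token.encode("utf-8")) % oov_buckets
        return len(vocabulary) + bucket

    return lookup, total_size
```

With the snippet's example (`vocabulary = ["cat", "is", "on", "the", "mat"]`, `oov_buckets=10`), the table has 15 rows and every token, known or not, maps to a valid row.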
/external/antlr/tool/src/test/java/org/antlr/test/ |
D | TestIntervalSet.java |
    99   IntervalSet vocabulary = IntervalSet.of(1,1000);  [in testNotSingleElement(), local]
    100  vocabulary.add(2000,3000);  [in testNotSingleElement()]
    103  String result = (s.complement(vocabulary)).toString();  [in testNotSingleElement()]
    108  IntervalSet vocabulary = IntervalSet.of(1,1000);  [in testNotSet(), local]
    113  String result = (s.complement(vocabulary)).toString();  [in testNotSet()]
    118  IntervalSet vocabulary = IntervalSet.of(1,1000);  [in testNotEqualSet(), local]
    121  String result = (s.complement(vocabulary)).toString();  [in testNotEqualSet()]
    126  IntervalSet vocabulary = IntervalSet.of(1,2);  [in testNotSetEdgeElement(), local]
    129  String result = (s.complement(vocabulary)).toString();  [in testNotSetEdgeElement()]
    134  IntervalSet vocabulary = IntervalSet.of(1,255);  [in testNotSetFragmentedVocabulary(), local]
    [all …]
|
/external/tensorflow/tensorflow/contrib/learn/python/learn/preprocessing/ |
D | text.py |
    136  vocabulary=None,  [argument]
    151  if vocabulary:
    152  self.vocabulary_ = vocabulary
|
/external/antlr/antlr3-maven-plugin/src/site/apt/examples/ |
D | libraries.apt |
    5   …caused some confusion in regard to the fact that generated vocabulary files (<<<*.tokens>>>) can a…
    10  directive and also require the use of a vocabulary file then you will need to locate
    13  location of your imported grammars and ANTLR will not find any vocabulary files in
|
/external/antlr/tool/src/main/java/org/antlr/misc/ |
D | IntervalSet.java |
    221  public IntervalSet complement(IntSet vocabulary) {  [in complement(), argument]
    222  if ( vocabulary==null ) {  [in complement()]
    225  if ( !(vocabulary instanceof IntervalSet ) ) {  [in complement()]
    227  vocabulary.getClass().getName()+")");  [in complement()]
    229  IntervalSet vocabularyIS = ((IntervalSet)vocabulary);  [in complement()]
|
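The `IntervalSet.complement(vocabulary)` hits above (and the tests in `TestIntervalSet.java`) show ANTLR computing a set complement relative to a vocabulary: the result is every symbol in the vocabulary that is not in the set. A sketch of that semantics in Python, with ANTLR's inclusive integer ranges expanded to plain sets (the real `IntervalSet` keeps the interval encoding throughout and never materializes elements; `expand` and `complement` here are illustrative names):

```python
def expand(intervals):
    """Expand a list of inclusive (lo, hi) integer ranges into a set,
    mirroring how IntervalSet.of(a, b) covers a..b inclusive."""
    elements = set()
    for lo, hi in intervals:
        elements.update(range(lo, hi + 1))
    return elements

def complement(set_intervals, vocab_intervals):
    """Elements of the vocabulary not in the set, as a sorted list."""
    if vocab_intervals is None:
        return None  # the Java code likewise bails out on a null vocabulary
    return sorted(expand(vocab_intervals) - expand(set_intervals))
```

For example, complementing `{50..60}` against the vocabulary `{1..1000}` (as in `testNotSet`) yields `1..49` followed by `61..1000`.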
D | BitSet.java |
    514  public String toString(String separator, List<String> vocabulary) {  [in toString(), argument]
    515  if (vocabulary == null) {  [in toString()]
    524  if (i >= vocabulary.size()) {  [in toString()]
    527  else if (vocabulary.get(i) == null) {  [in toString()]
    531  str += vocabulary.get(i);  [in toString()]
|
/external/tensorflow/tensorflow/examples/tutorials/word2vec/ |
D | word2vec_basic.py |
    74   vocabulary = read_data(filename)
    75   print('Data size', len(vocabulary))
    105  vocabulary, vocabulary_size)
    106  del vocabulary  # Hint to reduce memory.
|
/external/antlr/runtime/JavaScript/src/org/antlr/runtime/ |
D | BitSet.js |
    685  toString2: function(separator, vocabulary) {  [argument]
    693  if (i >= vocabulary.size()) {
    696  else if (!org.antlr.lang.isValue(vocabulary.get(i))) {
    700  str += vocabulary.get(i);
|
/external/tensorflow/tensorflow/contrib/learn/python/learn/preprocessing/tests/ |
D | text_test.py | 73 max_document_length=4, vocabulary=vocab, tokenizer_fn=list)
|
/external/antlr/antlr3-maven-archetype/src/main/resources/archetype-resources/src/main/antlr3/ |
D | TParser.g | 20 // Use the vocabulary generated by the accompanying
|
/external/antlr/tool/src/main/antlr3/org/antlr/grammar/v3/ |
D | AssignTokenTypesWalker.g |
    46   * a) Import token vocabulary if available. Set token types for any new tokens
    236  // check for grammar-level option to import vocabulary
|
/external/tensorflow/tensorflow/core/protobuf/tpu/ |
D | tpu_embedding_configuration.proto | 13 // Size of the vocabulary (i.e., number of rows) in the table.
|
/external/antlr/tool/src/main/resources/org/antlr/tool/templates/messages/languages/ |
D | en.stg |
    45  problem reading token vocabulary file <arg>: <exception>
    63  "problems parsing token vocabulary file <arg> on line <arg2>"
|
/external/antlr/runtime/ActionScript/project/src/org/antlr/runtime/ |
D | Lexer.as | 298 /** Lexers can normally match any char in it's vocabulary after matching
|
/external/antlr/runtime/ObjC/Framework/ |
D | Lexer.m | 413 /** Lexers can normally match any char in it's vocabulary after matching
|
/external/libtextclassifier/annotator/ |
D | model.fbs | 616 // out-of-vocabulary.
|
/external/tensorflow/tensorflow/examples/udacity/ |
D | 6_lstm.ipynb | 261 "Utility functions to map characters to vocabulary IDs and back."
|
/external/tensorflow/ |
D | RELEASE.md | 1513 transform string features to IDs, where the mapping is defined by a vocabulary
|
/external/icu/icu4j/main/shared/data/ |
D | Transliterator_Han_Latin_Definition.txt |
    4634   詞彙 < \(list\-of\)\-vocabulary;
    11797  語彙 < vocabulary;
    26961  詞彙 > \(list\-of\)\-vocabulary;
    29505  語彙 > vocabulary;
|