
Searched refs:sparsity (Results 1 – 18 of 18) sorted by relevance

/external/tensorflow/tensorflow/contrib/model_pruning/python/
pruning_test.py:48 self.sparsity = variables.VariableV1(0.5, name="sparsity")
65 p = pruning.Pruning(spec=self.pruning_hparams, sparsity=self.sparsity)
67 sparsity = p._sparsity.eval()
68 self.assertAlmostEqual(sparsity, 0.5)
72 p = pruning.Pruning(spec=self.pruning_hparams, sparsity=self.sparsity)
74 spec=self.pruning_hparams, sparsity=self.sparsity)
76 sparsity = p._sparsity.eval()
77 self.assertAlmostEqual(sparsity, 0.5)
105 sparsity = variables.VariableV1(0.95, name="sparsity")
106 p = pruning.Pruning(sparsity=sparsity)
[all …]
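
The hits above show the test handing an explicit sparsity variable to the pruning object and then reading it back through p._sparsity.eval(). A minimal sketch of that pattern, assuming the TF 1.x tf.contrib.model_pruning layout these paths point at (the hparam override string is illustrative):

    from tensorflow.contrib.model_pruning.python import pruning
    from tensorflow.python.ops import variables

    # Explicit sparsity variable, as in pruning_test.py:48.
    sparsity = variables.VariableV1(0.5, name="sparsity")
    # Default pruning hyperparameters, overridden with an illustrative spec.
    hparams = pruning.get_pruning_hparams().parse("target_sparsity=0.9")
    p = pruning.Pruning(spec=hparams, sparsity=sparsity)
    # Because an explicit variable was passed, p._sparsity evaluates to 0.5
    # rather than to the schedule derived from the hparams.
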
pruning.py:223 def __init__(self, spec=None, global_step=None, sparsity=None): argument
250 self._sparsity = (sparsity
251 if sparsity is not None else self._setup_sparsity())
319 sparsity = math_ops.add(
325 return sparsity
348 weight_name, sparsity = val.split(':')
349 if float(sparsity) >= 1.0:
351 weight_sparsity_map[weight_name] = float(sparsity)
358 sparsity for name, sparsity in self._weight_sparsity_map.items()
399 sparsity = self._get_sparsity(weights.op.name)
[all …]
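
The hits at pruning.py:348-358 show per-weight sparsities being parsed from name:sparsity pairs and rejected once they reach 1.0. A hedged reconstruction of that parsing step (the function name and error message below are illustrative, not the library's):

    def parse_weight_sparsity_map(entries):
        """Turn ["conv1:0.9", "conv2/kernel:0.8"] into {weight_name: sparsity}."""
        weight_sparsity_map = {}
        for val in entries:
            weight_name, sparsity = val.split(':')
            if float(sparsity) >= 1.0:
                # Pruning 100% (or more) of a weight tensor is rejected.
                raise ValueError('weight sparsity must be below 1.0: %s' % val)
            weight_sparsity_map[weight_name] = float(sparsity)
        return weight_sparsity_map
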
strip_pruning_vars_test.py:69 self.sparsity = variables.Variable(0.5, name="sparsity")
127 p = pruning.Pruning(pruning_hparams, sparsity=self.sparsity)
/external/tensorflow/tensorflow/contrib/model_pruning/
README.md:10 - [Block sparsity](#block-sparsity)
53 … name (or layer name):target sparsity pairs. Eg. [conv1:0.9,conv2/kernel:0.8]. For layers/weights …
59 | initial_sparsity | float | 0.0 | Initial sparsity value |
60 | target_sparsity | float | 0.5 | Target sparsity value |
61 | sparsity_function_begin_step | integer | 0 | The global step at which the gradual sparsity f…
62 …nd_step | integer | 100 | The global step used as the end point for the gradual sparsity function |
63 | sparsity_function_exponent | float | 3.0 | exponent = 1 is linearly varying sparsity between init…
66 The sparsity $$s_t$$ at global step $$t$$ is given by:
76 #### Block Sparsity <a name="block-sparsity"></a>
78 …sparsity. To train models in which the weight tensors have block sparse structure, set *block_heig…
[all …]
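
The formula itself falls outside the snippet above. For reference, and consistent with the hparams table (initial_sparsity $$s_i$$, target_sparsity $$s_f$$, begin step $$t_0$$, end step $$t_e$$, exponent $$e$$, 3.0 by default), the gradual pruning schedule described by Zhu & Gupta (2017), which this library appears to implement, has the form

$$s_t = s_f + (s_i - s_f)\left(1 - \frac{t - t_0}{t_e - t_0}\right)^{e}, \qquad t_0 \le t \le t_e,$$

with $$s_t$$ held at $$s_f$$ once $$t$$ passes $$t_e$$. This is a paraphrase from the hparam definitions; the README's exact notation may differ slightly.
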
/external/webrtc/webrtc/common_audio/
sparse_fir_filter.cc:19 size_t sparsity, in SparseFIRFilter() argument
21 : sparsity_(sparsity), in SparseFIRFilter()
26 RTC_CHECK_GE(sparsity, 1u); in SparseFIRFilter()
sparse_fir_filter.h:34 size_t sparsity,
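
In the WebRTC filter, sparsity is the spacing in samples between the non-zero taps, which is why the constructor checks RTC_CHECK_GE(sparsity, 1u). A rough NumPy stand-in for the C++ class (the tap layout here is an assumption drawn from the constructor arguments, not the WebRTC implementation):

    import numpy as np

    def sparse_fir_filter(x, nonzero_coeffs, sparsity, offset=0):
        """FIR filter whose taps are zero except every `sparsity` samples,
        starting at `offset` (assumed layout)."""
        assert sparsity >= 1
        taps = np.zeros(offset + sparsity * (len(nonzero_coeffs) - 1) + 1)
        taps[offset::sparsity] = nonzero_coeffs
        # Causal output: keep the first len(x) samples of the full convolution.
        return np.convolve(x, taps)[: len(x)]
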
/external/tensorflow/tensorflow/python/kernel_tests/
sparse_add_op_test.py:217 def _s2d_add_vs_sparse_add(sparsity, n, m, num_iters=50): argument
222 sp_t, unused_nnz = _sparsify(sp_vals, thresh=sparsity, index_dtype=np.int32)
246 for sparsity in [0.99, 0.5, 0.01]:
249 s2d_dt, sa_dt = _s2d_add_vs_sparse_add(sparsity, n, m)
250 print("%.2f \t %d \t %d \t %.4f \t %.4f \t %.2f" % (sparsity, n, m,
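
This benchmark times sparse-to-dense conversion followed by a dense add against adding the operands in sparse form. A small TF 2.x sketch of the two code paths it compares (the TF 1.x test itself drives sparse_ops directly; the values here are illustrative):

    import tensorflow as tf

    a = tf.constant([[0.0, 2.0], [0.0, 0.0]])
    b = tf.constant([[1.0, 0.0], [0.0, 3.0]])
    sp_a, sp_b = tf.sparse.from_dense(a), tf.sparse.from_dense(b)

    sparse_sum = tf.sparse.add(sp_a, sp_b)                           # stays sparse
    dense_sum = tf.sparse.to_dense(sp_a) + tf.sparse.to_dense(sp_b)  # densify first
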
/external/tensorflow/tensorflow/contrib/boosted_trees/lib/testutil/
batch_features_testutil.cc:50 const double sparsity = in RandomlyInitializeBatchFeatures() local
52 const double density = 1 - sparsity; in RandomlyInitializeBatchFeatures()
/external/tensorflow/tensorflow/core/kernels/
sparse_matmul_op_test.cc:33 void Sparsify(Tensor* t, float sparsity) { in Sparsify() argument
35 CHECK_LE(sparsity, 1); in Sparsify()
37 if (sparsity == 1) { in Sparsify()
43 if (rnd.Uniform(K) < sparsity * K) { in Sparsify()
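
Sparsify() in this C++ test zeroes each entry with probability sparsity before exercising the sparse matmul kernels. A NumPy equivalent of that helper (illustrative, not the kernel test's code):

    import numpy as np

    def sparsify(t, sparsity, seed=0):
        """Zero out each entry of `t` with probability `sparsity`."""
        assert 0.0 <= sparsity <= 1.0
        if sparsity == 1.0:
            return np.zeros_like(t)
        rng = np.random.default_rng(seed)
        out = t.copy()
        out[rng.random(t.shape) < sparsity] = 0.0
        return out
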
/external/tensorflow/tensorflow/core/api_def/base_api/
api_def_SparseMatMul.pbtxt:12 The gradient computation of this operation will only take advantage of sparsity
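
SparseMatMul is the op behind the a_is_sparse / b_is_sparse hints on tf.matmul; the note quoted above concerns when its gradient can also exploit sparsity. A brief sketch of how a caller opts into the op:

    import tensorflow as tf

    a = tf.constant([[0.0, 0.0, 1.0], [0.0, 2.0, 0.0]])  # mostly zeros
    b = tf.random.normal([3, 4])
    # The *_is_sparse flags are performance hints; with them set, matmul can
    # dispatch to SparseMatMul and skip work on the zero entries of `a`.
    c = tf.matmul(a, b, a_is_sparse=True)
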
/external/eigen/Eigen/
OrderingMethods:54 * \note Some of these methods (like AMD or METIS), need the sparsity pattern
/external/tensorflow/tensorflow/contrib/factorization/g3doc/
wals.md:46 we decompose the norm into two terms, corresponding to the sparsity pattern of
/external/tensorflow/tensorflow/python/ops/
nn_test.py:76 sparsity = nn_impl.zero_fraction(
78 self.assertAllClose(1.0, self.evaluate(sparsity))
82 sparsity = nn_impl.zero_fraction(
84 self.assertAllClose(0.0, self.evaluate(sparsity))
89 sparsity = nn_impl.zero_fraction(value)
93 sess.run(sparsity, {value: [[0., 1.], [0.3, 2.]]}))
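
These nn_test.py hits exercise zero_fraction, which is precisely a sparsity measurement: the fraction of entries that are exactly zero. The public entry point is tf.math.zero_fraction:

    import tensorflow as tf

    value = tf.constant([[0.0, 1.0], [0.3, 2.0]])
    sparsity = tf.math.zero_fraction(value)  # 1 of 4 entries is zero -> 0.25
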
/external/swiftshader/third_party/subzero/docs/
REGALLOC.rst:64 sparsity of the data, resulting in stable performance as function size scales
95 by fitting well into the sparsity optimizations of their data structures.
160 As before, we need to take advantage of sparsity of variable uses across basic
DESIGN.rst:596 algorithm if implemented naively. To improve things based on sparsity, we note
/external/eigen/doc/
SparseLinearSystems.dox:123 In the case where multiple problems with the same sparsity pattern have to be solved, then the "com…
/external/swiftshader/third_party/subzero/
DESIGN.rst:596 algorithm if implemented naively. To improve things based on sparsity, we note
/external/cldr/tools/java/org/unicode/cldr/util/data/transforms/
internal_raw_IPA-old.txt:183176 sparsity %10817