/external/ImageMagick/MagickCore/ |
D | paint.c |
    414  *gradient;  in GradientImage() local
    429  gradient=(&draw_info->gradient);  in GradientImage()
    430  gradient->type=type;  in GradientImage()
    431  gradient->bounding_box.width=image->columns;  in GradientImage()
    432  gradient->bounding_box.height=image->rows;  in GradientImage()
    435  (void) ParseAbsoluteGeometry(artifact,&gradient->bounding_box);  in GradientImage()
    436  gradient->gradient_vector.x2=(double) image->columns-1;  in GradientImage()
    437  gradient->gradient_vector.y2=(double) image->rows-1;  in GradientImage()
    450  gradient->gradient_vector.x1=(double) image->columns-1;  in GradientImage()
    451  gradient->gradient_vector.y1=(double) image->rows-1;  in GradientImage()
    [all …]
|
/external/fmtlib/doc/bootstrap/mixins/ |
D | gradients.less |
    3   #gradient {
    5     // Horizontal gradient, from left to right
    10    …background-image: -webkit-linear-gradient(left, @start-color @start-percent, @end-color @end-perce…
    11    …background-image: -o-linear-gradient(left, @start-color @start-percent, @end-color @end-percent); …
    12    …background-image: linear-gradient(to right, @start-color @start-percent, @end-color @end-percent);…
    14    …filter: e(%("progid:DXImageTransform.Microsoft.gradient(startColorstr='%d', endColorstr='%d', Grad…
    17    // Vertical gradient, from top to bottom
    22    …background-image: -webkit-linear-gradient(top, @start-color @start-percent, @end-color @end-percen…
    23    …background-image: -o-linear-gradient(top, @start-color @start-percent, @end-color @end-percent); …
    24    …background-image: linear-gradient(to bottom, @start-color @start-percent, @end-color @end-percent)…
    [all …]
|
/external/rust/crates/quiche/ |
D | quiche.svg |
    1   …gradient);}.cls-6{fill:#ffdb6f;}.cls-7{fill:#f16975;}.cls-8{fill:url(#linear-gradient-2);}.cls-9{f…
|
/external/tensorflow/tensorflow/core/api_def/base_api/ |
D | api_def_TensorArrayGradV3.pbtxt |
    21  The gradient source string, used to decide which gradient TensorArray
    27  If the given TensorArray gradient already exists, returns a reference to it.
    33  The handle flow_in forces the execution of the gradient lookup to occur
    36  may resize the object. The gradient TensorArray is statically sized based
    39  As a result, the flow is used to ensure that the call to generate the gradient
    42  In the case of dynamically sized TensorArrays, gradient computation should
    49  TensorArray gradient calls use an accumulator TensorArray object. If
    51  gradient nodes may accidentally flow through the same accumulator TensorArray.
    52  This double counts and generally breaks the TensorArray gradient flow.
    54  The solution is to identify which gradient call this particular
    [all …]
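An aside on the mechanics this api_def describes: a minimal TF2 sketch, assuming only the public API. TensorArrayGradV3 and its per-source accumulator are created implicitly when the tape differentiates through the writes, with each write's flow tensor ordering the gradient lookup as the flow_in note above explains.

    import tensorflow as tf

    x = tf.constant([1.0, 2.0, 3.0])
    with tf.GradientTape() as tape:
      tape.watch(x)
      ta = tf.TensorArray(tf.float32, size=3)
      for i in range(3):
        ta = ta.write(i, 2.0 * x[i])  # each write threads the flow tensor onward
      y = tf.reduce_sum(ta.stack())
    print(tape.gradient(y, x))  # [2. 2. 2.]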
|
D | api_def_BlockLSTMGrad.pbtxt |
    104  The current gradient of cs.
    110  The gradient of h vector.
    116  The gradient of x to be back-propped.
    122  The gradient of cs_prev to be back-propped.
    128  The gradient of h_prev to be back-propped.
    134  The gradient for w to be back-propped.
    140  The gradient for wci to be back-propped.
    146  The gradient for wcf to be back-propped.
    152  The gradient for wco to be back-propped.
    158  The gradient for w to be back-propped.
|
D | api_def_BlockLSTMGradV2.pbtxt |
    104  The current gradient of cs.
    110  The gradient of h vector.
    116  The gradient of x to be back-propped.
    122  The gradient of cs_prev to be back-propped.
    128  The gradient of h_prev to be back-propped.
    134  The gradient for w to be back-propped.
    140  The gradient for wci to be back-propped.
    146  The gradient for wcf to be back-propped.
    152  The gradient for wco to be back-propped.
    158  The gradient for w to be back-propped.
|
D | api_def_FusedBatchNormGradV3.pbtxt |
    6   A 4D Tensor for the gradient with respect to y.
    25  mean to be reused in gradient computation. When is_training is
    27  1st and 2nd order gradient computation.
    35  gradient computation. When is_training is False, a 1D Tensor
    37  order gradient computation.
    44  in gradient computation. When is_training is False, a dummy empty Tensor will be
    51  A 4D Tensor for the gradient with respect to x.
    57  A 1D Tensor for the gradient with respect to scale.
    63  A 1D Tensor for the gradient with respect to offset.
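For orientation, a hedged sketch of how the three gradients documented here (dx, dscale, doffset) surface through the public API; tf.compat.v1.nn.fused_batch_norm and NHWC shapes are assumptions chosen for illustration, not this file's own code.

    import tensorflow as tf

    x = tf.random.normal([2, 4, 4, 3])  # NHWC
    scale, offset = tf.ones([3]), tf.zeros([3])
    with tf.GradientTape() as tape:
      tape.watch([x, scale, offset])
      y, batch_mean, batch_var = tf.compat.v1.nn.fused_batch_norm(
          x, scale, offset, is_training=True)
      loss = tf.reduce_sum(y)
    # The registered gradient lowers to FusedBatchNormGradV3: a 4D dx plus
    # 1D dscale and doffset, matching the outputs listed above.
    dx, dscale, doffset = tape.gradient(loss, [x, scale, offset])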
|
D | api_def_FusedBatchNormGrad.pbtxt |
    6   A 4D Tensor for the gradient with respect to y.
    25  mean to be reused in gradient computation. When is_training is
    27  1st and 2nd order gradient computation.
    35  gradient computation. When is_training is False, a 1D Tensor
    37  order gradient computation.
    43  A 4D Tensor for the gradient with respect to x.
    49  A 1D Tensor for the gradient with respect to scale.
    55  A 1D Tensor for the gradient with respect to offset.
|
D | api_def_SparseMatrixSoftmaxGrad.pbtxt |
    10  description: "The gradient of `softmax`."
    13  name: "gradient"
    14  description: "The output gradient."
    16  summary: "Calculates the gradient of the SparseMatrixSoftmax op."
|
D | api_def_FusedBatchNormGradV2.pbtxt |
    6   A 4D Tensor for the gradient with respect to y.
    25  mean to be reused in gradient computation. When is_training is
    27  1st and 2nd order gradient computation.
    35  gradient computation. When is_training is False, a 1D Tensor
    37  order gradient computation.
    43  A 4D Tensor for the gradient with respect to x.
    49  A 1D Tensor for the gradient with respect to scale.
    55  A 1D Tensor for the gradient with respect to offset.
|
D | api_def_SparseAccumulatorApplyGradient.pbtxt |
    12  The local_step value at which the sparse gradient was computed.
    18  Indices of the sparse gradient to be accumulated. Must be a
    25  Values are the non-zero slices of the gradient, and must have
    33  Shape of the sparse gradient to be accumulated.
    50  summary: "Applies a sparse gradient to a given accumulator."
|
D | api_def_StridedSliceGrad.pbtxt |
    3   summary: "Returns the gradient of `StridedSlice`."
    6   `shape`, its gradient will have the same shape (which is passed here
    7   as `shape`). The gradient will be zero in any element that the slice
    11  `dy` is the input gradient to be propagated and `shape` is the
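A small sketch of that contract (TF2 eager assumed): the gradient of a slice has the input's shape and is zero in every element the slice skipped.

    import tensorflow as tf

    x = tf.constant([0., 1., 2., 3., 4.])
    with tf.GradientTape() as tape:
      tape.watch(x)
      y = x[1:4]  # lowered to StridedSlice
    # dy defaults to ones; StridedSliceGrad scatters it back into x's shape.
    print(tape.gradient(y, x))  # [0. 1. 1. 1. 0.]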
|
D | api_def_SparseAddGrad.pbtxt |
    6   1-D with shape `[nnz(sum)]`. The gradient with respect to
    32  1-D with shape `[nnz(A)]`. The gradient with respect to the
    39  1-D with shape `[nnz(B)]`. The gradient with respect to the
    43  summary: "The gradient operator for the SparseAdd op."
    46  as `SparseTensor` objects. This op takes in the upstream gradient w.r.t.
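A hedged sketch of that routing, built on tf.sparse.add (whose registered gradient is this op): the upstream gradient w.r.t. the sum's non-empty values is redistributed to each operand's non-empty values.

    import tensorflow as tf

    a_vals = tf.constant([1.0, 2.0])
    with tf.GradientTape() as tape:
      tape.watch(a_vals)
      a = tf.sparse.SparseTensor([[0, 0], [1, 1]], a_vals, [2, 2])
      b = tf.sparse.SparseTensor([[0, 0]], tf.constant([3.0]), [2, 2])
      s = tf.sparse.add(a, b)
      loss = tf.reduce_sum(s.values)
    # Each of A's nnz entries receives the matching slice of the upstream
    # gradient: here [1., 1.].
    print(tape.gradient(loss, a_vals))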
|
/external/rust/crates/base64/ |
D | icon_CLion.svg |
    3   …<linearGradient id="linear-gradient" x1="40.69" y1="-676.56" x2="83.48" y2="-676.56" gradientTrans…
    14  …<linearGradient id="linear-gradient-2" x1="32.58" y1="-665.27" x2="13.76" y2="-791.59" gradientTra…
    18  …<linearGradient id="linear-gradient-3" x1="116.68" y1="-660.66" x2="-12.09" y2="-796.66" xlink:hre…
    19  …<linearGradient id="linear-gradient-4" x1="73.35" y1="-739.1" x2="122.29" y2="-746.06" xlink:href=…
    23  <polygon points="49.2 51.8 40.6 55.4 48.4 0 77.8 16.2 49.2 51.8" fill="url(#linear-gradient)"/>
    24  <polygon points="44.6 76.8 48.8 0 11.8 23.2 0 94 44.6 76.8" fill="url(#linear-gradient-2)"/>
    25  ….4 109 4.8 77.8 16.2 55 41.4 0 94 41.6 124.4 93.6 77.2 125.4 38.4" fill="url(#linear-gradient-3)"/>
    26  …points="53.8 54.6 46.6 98.4 75.8 121 107.8 128 128 82.4 53.8 54.6" fill="url(#linear-gradient-4)"/>
|
/external/tensorflow/tensorflow/compiler/tf2xla/ |
D | xla_resource.cc |
    81   for (const string& gradient : tensor_array_gradients) {  in XlaResource() local
    82   tensor_array_gradients_[gradient].reset(new XlaResource(  in XlaResource()
    173  std::unique_ptr<XlaResource>& gradient = tensor_array_gradients_[source];  in GetOrCreateTensorArrayGradient() local
    174  if (!gradient) {  in GetOrCreateTensorArrayGradient()
    180  gradient.reset(  in GetOrCreateTensorArrayGradient()
    187  *gradient_out = gradient.get();  in GetOrCreateTensorArrayGradient()
    198  for (const auto& gradient : tensor_array_gradients_) {  in Pack() local
    199  elems.push_back(gradient.second->value_);  in Pack()
    224  XlaResource* gradient;  in SetFromPack() local
    226  GetOrCreateTensorArrayGradient(source, builder, &gradient));  in SetFromPack()
    [all …]
|
/external/tensorflow/tensorflow/python/kernel_tests/ |
D | gradient_correctness_test.py |
    42   grads = tape.gradient([yexp, yexplog], [x])
    53   dx_dx = tape.gradient(x, x)
    61   dx_dx = tape.gradient(x, x)
    72   dy_dx = tape.gradient(y, x)
    83   dy_dx = tape.gradient(y, x)
    94   dy_dk = tape.gradient(y, k)
    104  dm_dk = tape.gradient(m, k)
    114  dm_dk = tape.gradient(m, k)
    125  dn_dk = tape.gradient(n, k)
    136  grad_1 = tape.gradient(k * k, k)
    [all …]
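The recurring pattern in these matches, distilled (a minimal sketch, not the test file's own code): differentiating a tensor with respect to itself yields one, the dx_dx these tests assert.

    import tensorflow as tf

    x = tf.constant(3.0)
    with tf.GradientTape() as tape:
      tape.watch(x)
      y = tf.identity(x)
    print(tape.gradient(y, x))  # 1.0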
|
D | map_ops_test.py |
    198  g = tape.gradient(l, v)
    219  g = tape.gradient(l * 5, v)
    220  g2 = tape.gradient(l2 * 6, v2)
    221  g3 = tape.gradient(l3 * 7, v3)
    238  g = tape.gradient(l2 * 5, v)
    253  g = tape.gradient(l * 5, v)
    259  g2 = tape.gradient(l2 * 6, v)
    261  g3 = tape.gradient(l2 * 7, v2)
    278  g = tape.gradient(l + l2, v)
    282  g2 = tape.gradient(l + l2, v2)
    [all …]
|
/external/lottie/lottie/src/main/java/com/airbnb/lottie/animation/content/ |
D | GradientStrokeContent.java |
    90   LinearGradient gradient = linearGradientCache.get(gradientHash);  in getLinearGradient() local
    91   if (gradient != null) {  in getLinearGradient()
    92   return gradient;  in getLinearGradient()
    103  gradient = new LinearGradient(x0, y0, x1, y1, colors, positions, Shader.TileMode.CLAMP);  in getLinearGradient()
    104  linearGradientCache.put(gradientHash, gradient);  in getLinearGradient()
    105  return gradient;  in getLinearGradient()
    110  RadialGradient gradient = radialGradientCache.get(gradientHash);  in getRadialGradient() local
    111  if (gradient != null) {  in getRadialGradient()
    112  return gradient;  in getRadialGradient()
    124  gradient = new RadialGradient(x0, y0, r, colors, positions, Shader.TileMode.CLAMP);  in getRadialGradient()
    [all …]
|
D | GradientFillContent.java |
    154  LinearGradient gradient = linearGradientCache.get(gradientHash);  in getLinearGradient() local
    155  if (gradient != null) {  in getLinearGradient()
    156  return gradient;  in getLinearGradient()
    163  gradient = new LinearGradient(startPoint.x, startPoint.y, endPoint.x, endPoint.y, colors,  in getLinearGradient()
    165  linearGradientCache.put(gradientHash, gradient);  in getLinearGradient()
    166  return gradient;  in getLinearGradient()
    171  RadialGradient gradient = radialGradientCache.get(gradientHash);  in getRadialGradient() local
    172  if (gradient != null) {  in getRadialGradient()
    173  return gradient;  in getRadialGradient()
    188  gradient = new RadialGradient(x0, y0, r, colors, positions, Shader.TileMode.CLAMP);  in getRadialGradient()
    [all …]
|
/external/fmtlib/doc/bootstrap/ |
D | theme.less |
    38   #gradient > .vertical(@start-color: @btn-color; @end-color: darken(@btn-color, 12%));
    65   // Remove the gradient for the pressed/active state
    97   …#gradient > .vertical(@start-color: @dropdown-link-hover-bg; @end-color: darken(@dropdown-link-hov…
    103  …#gradient > .vertical(@start-color: @dropdown-link-active-bg; @end-color: darken(@dropdown-link-ac…
    114  …#gradient > .vertical(@start-color: lighten(@navbar-default-bg, 10%); @end-color: @navbar-default-…
    115  .reset-filter(); // Remove gradient in IE<10 to fix bug where dropdowns don't get triggered
    122  …#gradient > .vertical(@start-color: darken(@navbar-default-link-active-bg, 5%); @end-color: darken…
    133  …#gradient > .vertical(@start-color: lighten(@navbar-inverse-bg, 10%); @end-color: @navbar-inverse-…
    134  ….reset-filter(); // Remove gradient in IE<10 to fix bug where dropdowns don't get triggered; see h…
    138  …#gradient > .vertical(@start-color: @navbar-inverse-link-active-bg; @end-color: lighten(@navbar-in…
    [all …]
|
/external/tensorflow/tensorflow/python/eager/ |
D | function_gradients_test.py |
    114  tape_dy = tape.gradient(y, x)
    137  return tape.gradient(primal_out, primal)
    160  g, = tape.gradient(primal_out, tape.watched_variables())
    192  x = tape.gradient(x, start)
    207  x, = tape.gradient(x, tape.watched_variables())
    223  g = t.gradient(y, x, doutputs)
    225  gg = tt.gradient(g, doutputs)
    284  self.assertAllEqual(self.evaluate(t.gradient(y, x)), 2.0)
    299  self.assertAllEqual(self.evaluate(t.gradient(y, x)), 4.0)
    430  gradient = grad_fn()
    [all …]
|
D | backprop_test.py |
    156  return t.gradient(loss, [x, x1, x2, x3, x4])
    173  self.assertAllClose(1., tape.gradient(y, x))
    183  self.assertIsNotNone(t.gradient(result, v))
    207  dx, dy = t.gradient([xx, yy], [x, y])
    225  t.gradient(y, [x])
    233  dx, = t.gradient([loss, x], [x], output_gradients=[1.0, 2.0])
    335  self.assertEqual(t.gradient(y, x).numpy(), 1.0)
    342  self.assertEqual(t.gradient(y, x).numpy(), 1.0)
    355  self.assertAllEqual(t.gradient(y, x), [1.0])
    362  self.assertEqual(t.gradient([x, y], x).numpy(), 5.0)
    [all …]
|
/external/tensorflow/tensorflow/core/kernels/ |
D | relu_op_gpu.cu.cc |
    44   __global__ void ReluGradHalfKernel(const Eigen::half* __restrict__ gradient,  in ReluGradHalfKernel() argument
    55   half2 gradient_h2 = reinterpret_cast<const half2*>(gradient)[index];  in ReluGradHalfKernel()
    85   Eigen::half grad_h = gradient[count - 1];  in ReluGradHalfKernel()
    98   const Eigen::half* __restrict__ gradient,  in ReluGradHalfKernelVector() argument
    106  float4 gradient_h8 = reinterpret_cast<const float4*>(gradient)[index];  in ReluGradHalfKernelVector()
    145  Eigen::half grad_h = gradient[half8_count * VectorSizeElements + index];  in ReluGradHalfKernelVector()
    166  typename TTypes<Eigen::half>::ConstTensor gradient,  in operator ()()
    172  auto gradient_ptr = reinterpret_cast<uintptr_t>(gradient.data());  in operator ()()
    177  int32 count = gradient.size();  in operator ()()
    185  gradient.data(), feature.data(), backprop.data(), count));  in operator ()()
    [all …]
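The rule these half-precision kernels vectorize is simply relu'(x) = 1 for x > 0 and 0 otherwise, i.e. backprop = gradient wherever the forward feature was positive. A scalar sketch via TF, shown for the math rather than the CUDA path:

    import tensorflow as tf

    x = tf.constant([-1.0, 0.5, 2.0])
    with tf.GradientTape() as tape:
      tape.watch(x)
      y = tf.nn.relu(x)
    print(tape.gradient(y, x))  # [0. 1. 1.]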
|
/external/tensorflow/tensorflow/python/tpu/ |
D | tpu_embedding_gradient.py |
    147  if any(gradient is None for gradient in table_gradients):
    159  gradient is not None for gradient in table_gradients)
    172  for feature, gradient in zip(tpu_embedding.table_to_features_dict[table],
    174  if gradient is not None:
    175  feature_to_gradient_dict[feature] = gradient
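The dict-building step visible above reduces to zipping a table's features with its gradients and skipping the Nones; a standalone sketch with hypothetical stand-in values:

    features = ["watched", "favorited"]  # hypothetical feature names
    gradients = [0.5, None]              # stand-ins for gradient tensors
    feature_to_gradient_dict = {
        feature: gradient
        for feature, gradient in zip(features, gradients)
        if gradient is not None  # features with no gradient are dropped
    }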
|
/external/tensorflow/tensorflow/python/training/experimental/ |
D | loss_scaling_gradient_tape_test.py |
    96   return g.gradient(y, x)
    116  return g.gradient(y, x, output_gradients=constant_op.constant(2.0))
    141  return g.gradient(y, [x1, x2, x3, x4])
    169  return g.gradient(y, x)
    197  dy_dx = gg.gradient(y, x)
    198  d2y_dx2 = g.gradient(dy_dx, x)
    216  g.gradient(z, x)
    218  g.gradient(y, x)
    235  dz_dx = g.gradient(z, x)
    236  dy_dx = g.gradient(y, x)
    [all …]
|