Searched refs:gradient (Results 1 – 25 of 281) sorted by relevance


/external/libvncserver/webclients/novnc/include/
black.css
15 …background: -moz-linear-gradient(top, #4c4c4c 0%, #2c2c2c 50%, #000000 51%, #131313 100%); /* FF3.…
16 …background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,#4c4c4c), color-stop(50%…
17 …background: -webkit-linear-gradient(top, #4c4c4c 0%,#2c2c2c 50%,#000000 51%,#131313 100%); /* Chro…
18 …background: -o-linear-gradient(top, #4c4c4c 0%,#2c2c2c 50%,#000000 51%,#131313 100%); /* Opera11.1…
19 background: -ms-linear-gradient(top, #4c4c4c 0%,#2c2c2c 50%,#000000 51%,#131313 100%); /* IE10+ */
20 background: linear-gradient(top, #4c4c4c 0%,#2c2c2c 50%,#000000 51%,#131313 100%); /* W3C */
24 …background: -moz-linear-gradient(top, #f04040 0%, #2c2c2c 50%, #000000 51%, #131313 100%); /* FF3.…
25 …background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,#f04040), color-stop(50%…
26 …background: -webkit-linear-gradient(top, #f04040 0%,#2c2c2c 50%,#000000 51%,#131313 100%); /* Chro…
27 …background: -o-linear-gradient(top, #f04040 0%,#2c2c2c 50%,#000000 51%,#131313 100%); /* Opera11.1…
[all …]
base.css
177 background:#eee; /* default background for browsers without gradient support */
243 …background: -moz-linear-gradient(top, #b2bdcd 0%, #899cb3 49%, #7e93af 51%, #6e84a3 100%); /* FF3.…
244 …background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,#b2bdcd), color-stop(49%…
245 …background: -webkit-linear-gradient(top, #b2bdcd 0%,#899cb3 49%,#7e93af 51%,#6e84a3 100%); /* Chro…
246 …background: -o-linear-gradient(top, #b2bdcd 0%,#899cb3 49%,#7e93af 51%,#6e84a3 100%); /* Opera11.1…
247 background: -ms-linear-gradient(top, #b2bdcd 0%,#899cb3 49%,#7e93af 51%,#6e84a3 100%); /* IE10+ */
248 background: linear-gradient(top, #b2bdcd 0%,#899cb3 49%,#7e93af 51%,#6e84a3 100%); /* W3C */
252 …background: -moz-linear-gradient(top, #f04040 0%, #899cb3 49%, #7e93af 51%, #6e84a3 100%); /* FF3.…
253 …background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,#f04040), color-stop(49%…
254 …background: -webkit-linear-gradient(top, #f04040 0%,#899cb3 49%,#7e93af 51%,#6e84a3 100%); /* Chro…
[all …]
blue.css
11 background-image: -webkit-gradient(
18 background-image: -moz-linear-gradient(
26 background-image: -webkit-gradient(
33 background-image: -moz-linear-gradient(
41 background-image: -webkit-gradient(
48 background-image: -moz-linear-gradient(
/external/ImageMagick/MagickCore/
paint.c
419 *gradient; in GradientImage() local
434 gradient=(&draw_info->gradient); in GradientImage()
435 gradient->type=type; in GradientImage()
436 gradient->bounding_box.width=image->columns; in GradientImage()
437 gradient->bounding_box.height=image->rows; in GradientImage()
440 (void) ParseAbsoluteGeometry(artifact,&gradient->bounding_box); in GradientImage()
441 gradient->gradient_vector.x2=(double) image->columns-1.0; in GradientImage()
442 gradient->gradient_vector.y2=(double) image->rows-1.0; in GradientImage()
455 gradient->gradient_vector.x1=(double) image->columns-1.0; in GradientImage()
456 gradient->gradient_vector.y1=(double) image->rows-1.0; in GradientImage()
[all …]
/external/tensorflow/tensorflow/core/api_def/base_api/
api_def_TensorArrayGradV3.pbtxt
21 The gradient source string, used to decide which gradient TensorArray
27 If the given TensorArray gradient already exists, returns a reference to it.
33 The handle flow_in forces the execution of the gradient lookup to occur
36 may resize the object. The gradient TensorArray is statically sized based
39 As a result, the flow is used to ensure that the call to generate the gradient
42 In the case of dynamically sized TensorArrays, gradient computation should
49 TensorArray gradient calls use an accumulator TensorArray object. If
51 gradient nodes may accidentally flow through the same accumulator TensorArray.
52 This double counts and generally breaks the TensorArray gradient flow.
54 The solution is to identify which gradient call this particular
[all …]
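The description above is easier to follow next to concrete code. Below is a minimal sketch, assuming TensorFlow 2.x; the helper name grad_through_tensor_array is invented for this illustration. When the traced graph is differentiated, TensorFlow creates the gradient TensorArray this api_def describes, keyed by a gradient source string.

    import tensorflow as tf

    @tf.function  # trace a graph so the TensorArray ops (and their gradient machinery) are used
    def grad_through_tensor_array(x):
        with tf.GradientTape() as tape:
            tape.watch(x)
            ta = tf.TensorArray(tf.float32, size=3)
            for i in range(3):                        # unrolled writes into the TensorArray
                ta = ta.write(i, x[i] * float(i + 1))
            y = tf.reduce_sum(ta.stack())
        return tape.gradient(y, x)

    print(grad_through_tensor_array(tf.constant([1.0, 2.0, 3.0])))  # expected: [1. 2. 3.]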
api_def_FusedBatchNormGrad.pbtxt
6 A 4D Tensor for the gradient with respect to y.
25 mean to be reused in gradient computation. When is_training is
27 1st and 2nd order gradient computation.
35 gradient computation. When is_training is False, a 1D Tensor
37 order gradient computation.
43 A 4D Tensor for the gradient with respect to x.
49 A 1D Tensor for the gradient with respect to scale.
55 A 1D Tensor for the gradient with respect to offset.
api_def_FusedBatchNormGradV2.pbtxt
6 A 4D Tensor for the gradient with respect to y.
25 mean to be reused in gradient computation. When is_training is
27 1st and 2nd order gradient computation.
35 gradient computation. When is_training is False, a 1D Tensor
37 order gradient computation.
43 A 4D Tensor for the gradient with respect to x.
49 A 1D Tensor for the gradient with respect to scale.
55 A 1D Tensor for the gradient with respect to offset.
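Both FusedBatchNormGrad api_defs above describe the same gradient layout: dx is 4-D like x, while dscale and doffset are 1-D. A hedged sketch of how that backward kernel is typically reached from Python, assuming TensorFlow 2.x:

    import tensorflow as tf

    x = tf.random.normal([2, 4, 4, 3])   # NHWC input
    scale = tf.ones([3])
    offset = tf.zeros([3])

    with tf.GradientTape() as tape:
        tape.watch([x, scale, offset])
        # Differentiating fused_batch_norm invokes the FusedBatchNormGrad* kernels.
        y, batch_mean, batch_var = tf.nn.fused_batch_norm(x, scale, offset, is_training=True)
        loss = tf.reduce_sum(y * y)

    dx, dscale, doffset = tape.gradient(loss, [x, scale, offset])
    # dx is 4-D like x; dscale and doffset are 1-D, matching the outputs listed above.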
api_def_SparseAccumulatorApplyGradient.pbtxt
12 The local_step value at which the sparse gradient was computed.
18 Indices of the sparse gradient to be accumulated. Must be a
25 Values are the non-zero slices of the gradient, and must have
33 Shape of the sparse gradient to be accumulated.
50 summary: "Applies a sparse gradient to a given accumulator."
api_def_StridedSliceGrad.pbtxt
3 summary: "Returns the gradient of `StridedSlice`."
6 `shape`, its gradient will have the same shape (which is passed here
7 as `shape`). The gradient will be zero in any element that the slice
11 `dy` is the input gradient to be propagated and `shape` is the
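In other words, the gradient has the shape of the original input and is zero wherever the slice did not read. A small sketch of that behaviour, assuming TensorFlow 2.x:

    import tensorflow as tf

    x = tf.constant([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0]])
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = tf.strided_slice(x, [0, 1], [2, 3], [1, 1])   # drops column 0
        loss = tf.reduce_sum(y)

    print(tape.gradient(loss, x))
    # [[0. 1. 1.]
    #  [0. 1. 1.]]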
api_def_SparseAddGrad.pbtxt
6 1-D with shape `[nnz(sum)]`. The gradient with respect to
32 1-D with shape `[nnz(A)]`. The gradient with respect to the
39 1-D with shape `[nnz(B)]`. The gradient with respect to the
43 summary: "The gradient operator for the SparseAdd op."
46 as `SparseTensor` objects. This op takes in the upstream gradient w.r.t.
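A hedged sketch of the forward op whose backward pass this describes, assuming TensorFlow 2.x; the gradients come back with respect to the non-empty values of each operand:

    import tensorflow as tf

    a_values = tf.constant([1.0, 2.0])
    b_values = tf.constant([3.0])

    with tf.GradientTape() as tape:
        tape.watch([a_values, b_values])
        a = tf.sparse.SparseTensor([[0, 0], [1, 1]], a_values, [2, 2])
        b = tf.sparse.SparseTensor([[0, 1]], b_values, [2, 2])
        total = tf.sparse.add(a, b)          # SparseAdd; its registered gradient is SparseAddGrad
        loss = tf.reduce_sum(total.values)

    grad_a, grad_b = tape.gradient(loss, [a_values, b_values])
    # grad_a has shape [nnz(A)] and grad_b has shape [nnz(B)], as described above.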
api_def_AccumulatorApplyGradient.pbtxt
12 The local_step value at which the gradient was computed.
16 name: "gradient"
18 A tensor of the gradient to be accumulated.
28 summary: "Applies a gradient to a given accumulator."
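These accumulator ops (and the sparse variant above) are normally reached through the TF1-style ConditionalAccumulator wrapper rather than called directly. A rough sketch, assuming the tf.compat.v1 API is available:

    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()

    # apply_grad / take_grad wrap AccumulatorApplyGradient and AccumulatorTakeGradient;
    # gradients whose local_step is older than the accumulator's step are silently dropped.
    acc = tf.compat.v1.ConditionalAccumulator(dtype=tf.float32, shape=[2])
    apply_op = acc.apply_grad([1.0, 2.0], local_step=0)
    take_op = acc.take_grad(num_required=1)   # average of the gradients applied so far

    with tf.compat.v1.Session() as sess:
        sess.run(apply_op)
        print(sess.run(take_op))              # expected: [1. 2.]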
api_def_PreventGradient.pbtxt
22 summary: "An identity op that triggers an error if a gradient is requested."
26 When building ops to compute gradients, the TensorFlow gradient system
27 will return an error when trying to lookup the gradient of this op,
28 because no gradient must ever be registered for this function. This
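A minimal sketch of what that means in practice, assuming TensorFlow 2.x and its tf.raw_ops surface: the forward pass is an identity, and requesting a gradient fails with the configured message.

    import tensorflow as tf

    x = tf.constant([1.0, 2.0])
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = tf.raw_ops.PreventGradient(input=x, message="deliberately not differentiable")
        loss = tf.reduce_sum(y)

    try:
        tape.gradient(loss, x)
    except Exception as err:   # TensorFlow raises a LookupError carrying the message attribute
        print(type(err).__name__, err)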
api_def_UnbatchGrad.pbtxt
9 original_input: The input to the Unbatch operation this is the gradient of.
10 batch_index: The batch_index given to the Unbatch operation this is the gradient
12 grad: The downstream gradient.
14 batched_grad: The return value, either an empty tensor or the batched gradient.
api_def_DebugGradientRefIdentity.pbtxt
3 summary: "Identity op for gradient debugging."
6 register gradient tensors for gradient debugging.
api_def_DebugGradientIdentity.pbtxt
3 summary: "Identity op for gradient debugging."
6 register gradient tensors for gradient debugging.
/external/libxcam/cl_kernel/
kernel_3d_denoise_slm.cl
69 float4 gradient = (float4)(0.0f, 0.0f, 0.0f, 0.0f);
96 gradient = (float4)(ref_cache[mad24(i, 4 * REF_BLOCK_WIDTH, REF_BLOCK_WIDTH + local_id_x + j)].s2,
100 gradient = (gradient - ref_cache[mad24(i, 4 * REF_BLOCK_WIDTH, local_id_x + j)]) +
101 … (gradient - ref_cache[mad24(i, 4 * REF_BLOCK_WIDTH, REF_BLOCK_WIDTH + local_id_x + j)]) +
102 … (gradient - ref_cache[mad24(i, 4 * REF_BLOCK_WIDTH, 2 * REF_BLOCK_WIDTH + local_id_x + j)]) +
103 … (gradient - ref_cache[mad24(i, 4 * REF_BLOCK_WIDTH, 3 * REF_BLOCK_WIDTH + local_id_x + j)]);
104 gradient.s0 = (gradient.s0 + gradient.s1 + gradient.s2 + gradient.s3) / 15.0f;
105 gain = (gradient.s0 < threshold) ? gain : 2.0f * gain;
142 gradient = (float4)(ref_cache[mad24(i, 4 * REF_BLOCK_WIDTH, REF_BLOCK_WIDTH + local_id_x + j)].s2,
146 gradient = (gradient - ref_cache[mad24(i, 4 * REF_BLOCK_WIDTH, local_id_x + j)]) +
[all …]
/external/tensorflow/tensorflow/compiler/tf2xla/
xla_resource.cc
43 for (const string& gradient : tensor_array_gradients) { in XlaResource() local
44 tensor_array_gradients_[gradient].reset( in XlaResource()
130 std::unique_ptr<XlaResource>& gradient = tensor_array_gradients_[source]; in GetOrCreateTensorArrayGradient() local
131 if (!gradient) { in GetOrCreateTensorArrayGradient()
137 gradient.reset( in GetOrCreateTensorArrayGradient()
143 *gradient_out = gradient.get(); in GetOrCreateTensorArrayGradient()
155 for (const auto& gradient : tensor_array_gradients_) { in Pack() local
156 elems.push_back(gradient.second->value_); in Pack()
181 XlaResource* gradient; in SetFromPack() local
183 GetOrCreateTensorArrayGradient(source, builder, &gradient)); in SetFromPack()
[all …]
/external/swiftshader/src/Renderer/
SetupProcessor.cpp
111 state.gradient[interpolant][component].attribute = Unused; in update()
112 state.gradient[interpolant][component].flat = false; in update()
113 state.gradient[interpolant][component].wrap = false; in update()
154 state.gradient[interpolant][component].attribute = input; in update()
155 state.gradient[interpolant][component].flat = flat; in update()
173 state.gradient[interpolant][component].attribute = T0 + semantic.index; in update()
174 state.gradient[interpolant][component].flat = semantic.flat || (point && !sprite); in update()
177 state.gradient[interpolant][component].attribute = C0 + semantic.index; in update()
178 state.gradient[interpolant][component].flat = semantic.flat || flatShading; in update()
/external/tensorflow/tensorflow/contrib/layers/python/layers/
optimizers.py
274 for gradient, variable in gradients:
275 if isinstance(gradient, ops.IndexedSlices):
276 grad_values = gradient.values
278 grad_values = gradient
416 for gradient in gradients:
417 if gradient is None:
420 if isinstance(gradient, ops.IndexedSlices):
421 gradient_shape = gradient.dense_shape
423 gradient_shape = gradient.get_shape()
425 noisy_gradients.append(gradient + noise)
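The fragment above shows the usual pattern for touching gradients that may be tf.IndexedSlices (sparse gradients from embedding lookups) rather than dense tensors. A hedged sketch of that pattern, assuming TensorFlow 2.x; the helper name below is made up for illustration:

    import tensorflow as tf

    def add_scaled_noise_to_gradients(grads_and_vars, gradient_noise_scale=1.0):
        """Adds Gaussian noise to each gradient, handling IndexedSlices specially."""
        noisy = []
        for gradient, variable in grads_and_vars:
            if gradient is None:
                noisy.append((None, variable))
                continue
            if isinstance(gradient, tf.IndexedSlices):
                noise = tf.random.normal(tf.shape(gradient.values)) * gradient_noise_scale
                gradient = tf.IndexedSlices(gradient.values + noise,
                                            gradient.indices,
                                            gradient.dense_shape)
            else:
                noise = tf.random.normal(tf.shape(gradient)) * gradient_noise_scale
                gradient = gradient + noise
            noisy.append((gradient, variable))
        return noisy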
/external/apache-commons-math/src/main/java/org/apache/commons/math/optimization/fitting/
PolynomialFitter.java
87 public double[] gradient(double x, double[] parameters) { in gradient() method in PolynomialFitter.ParametricPolynomial
88 final double[] gradient = new double[parameters.length]; in gradient() local
91 gradient[i] = xn; in gradient()
94 return gradient; in gradient()
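The gradient() method above fills gradient[i] with x^i, i.e. the partial derivative of c0 + c1*x + ... + cn*x^n with respect to each coefficient, independent of the coefficient values. The same computation in a few lines of Python (illustrative only):

    import numpy as np

    def polynomial_gradient(x, parameters):
        # d/dc_i of sum_i c_i * x**i is x**i for each coefficient c_i.
        return np.array([x ** n for n in range(len(parameters))])

    print(polynomial_gradient(2.0, [5.0, -1.0, 3.0]))   # -> [1. 2. 4.]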
/external/tensorflow/tensorflow/core/kernels/
fake_quant_ops.cc
118 void Operate(OpKernelContext* context, const Tensor& gradient, in Operate() argument
120 OperateNoTemplate(context, gradient, input, output); in Operate()
123 void OperateNoTemplate(OpKernelContext* context, const Tensor& gradient, in OperateNoTemplate() argument
125 OP_REQUIRES(context, input.IsSameSize(gradient), in OperateNoTemplate()
128 functor(context->eigen_device<Device>(), gradient.flat<float>(), in OperateNoTemplate()
230 const Tensor& gradient = context->input(0); in Compute() local
232 OP_REQUIRES(context, input.IsSameSize(gradient), in Compute()
251 functor(context->eigen_device<Device>(), gradient.flat<float>(), in Compute()
367 const Tensor& gradient = context->input(0); in Compute() local
369 OP_REQUIRES(context, input.IsSameSize(gradient), in Compute()
[all …]
/external/tensorflow/tensorflow/cc/gradients/
README.md
10 1. Create the op gradient function in `foo_grad.cc` corresponding to the
14 2. Write the op gradient with the following naming scheme:
30 for the op's inputs and calling `RunTest` (`RunTest` uses a [gradient
32 to verify that the theoretical gradient matches the numeric gradient). For
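The RunTest step referenced above compares the registered (theoretical) gradient against a numerical estimate. A rough Python analogue of that check, assuming TensorFlow 2.x; the C++ gradient checker in this README is the authoritative tool, and check_gradient below is only an illustration:

    import tensorflow as tf

    def check_gradient(f, x, tolerance=1e-3):
        # tf.test.compute_gradient returns (theoretical Jacobians, numerical Jacobians).
        theoretical, numerical = tf.test.compute_gradient(f, [x])
        return all(float(abs(t - n).max()) < tolerance
                   for t, n in zip(theoretical, numerical))

    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    print(check_gradient(tf.tanh, x))   # True when tanh's registered gradient matches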
/external/okhttp/website/static/
bootstrap-combined.min.css
9 …gradient(top,#08c,#0077b3);background-image:-webkit-gradient(linear,0 0,0 100%,from(#08c),to(#0077…
/external/apache-commons-math/src/main/java/org/apache/commons/math/optimization/general/
AbstractScalarDifferentiableOptimizer.java
79 private MultivariateVectorialFunction gradient; field in AbstractScalarDifferentiableOptimizer
156 return gradient.value(evaluationPoint); in computeObjectiveGradient()
189 gradient = f.gradient(); in optimize()
/external/tensorflow/tensorflow/contrib/slim/python/slim/
learning_test.py
67 gradient = constant_op.constant(self._grad_vec, dtype=dtypes.float32)
69 gradients_to_variables = (gradient, variable)
81 gradient = None
84 gradients_to_variables = (gradient, variable)
100 gradient = ops.IndexedSlices(values, indices, dense_shape)
103 gradients_to_variables = (gradient, variable)
126 gradient = constant_op.constant(self._grad_vec, dtype=dtypes.float32)
127 variable = variables_lib.Variable(array_ops.zeros_like(gradient))
128 grad_to_var = (gradient, variable)
134 gradient = constant_op.constant(self._grad_vec, dtype=dtypes.float32)
[all …]
