
Searched full:loss (Results 1 – 25 of 11386) sorted by relevance


/external/tensorflow/tensorflow/python/kernel_tests/nn_ops/
losses_test.py
52 loss = losses.absolute_difference(self._predictions, self._predictions)
54 self.assertAlmostEqual(0.0, self.evaluate(loss), 3)
57 loss = losses.absolute_difference(self._labels, self._predictions)
59 self.assertAlmostEqual(5.5, self.evaluate(loss), 3)
63 loss = losses.absolute_difference(self._labels, self._predictions, weights)
65 self.assertAlmostEqual(5.5 * weights, self.evaluate(loss), 3)
69 loss = losses.absolute_difference(self._labels, self._predictions,
72 self.assertAlmostEqual(5.5 * weights, self.evaluate(loss), 3)
76 loss = losses.absolute_difference(self._labels, self._predictions, weights)
78 self.assertAlmostEqual(5.6, self.evaluate(loss), 3)
[all …]
/external/tensorflow/tensorflow/python/ops/losses/
losses_impl.py
15 """Implementation of Loss operations for use in neural networks."""
36 """Types of loss reduction.
77 losses: `Tensor` whose elements contain individual loss measurements.
89 """Computes the number of elements in the loss function induced by `weights`.
141 """Computes the weighted loss.
148 scope: the scope for the operations performed in computing the loss.
149 loss_collection: the loss will be added to these collections.
150 reduction: Type of reduction to apply to loss.
153 Weighted loss `Tensor` of the same type as `losses`. If `reduction` is
162 When calculating the gradient of a weighted loss contributions from
[all …]
util.py
15 """Utilities for manipulating the loss collections."""
120 """Scales loss values by the given sample weights.
126 losses: Loss tensor.
148 per_example_loss: Per example loss tensor.
175 def add_loss(loss, loss_collection=ops.GraphKeys.LOSSES):
176 """Adds an externally defined loss to the collection of losses.
179 loss: A loss `Tensor`.
180 loss_collection: Optional collection to add the loss to.
183 # ends, holding on to a loss when executing eagerly is indistinguishable from
186 ops.add_to_collection(loss_collection, loss)
[all …]
/external/tensorflow/tensorflow/python/training/experimental/
loss_scale.py
42 """Base class for all TF1 loss scales.
49 Loss scaling is a process that multiplies the loss by a multiplier called the
50 loss scale, and divides each gradient by the same multiplier. The pseudocode
54 loss = ...
55 loss *= loss_scale
56 grads = gradients(loss, vars)
60 Mathematically, loss scaling has no effect, but can help avoid numerical
62 precision training. By multiplying the loss, each intermediate gradient will
65 Instances of this class represent a loss scale. Calling instances of this
66 class returns the loss scale as a scalar float32 tensor, while method
[all …]
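
The docstring above gives the fixed loss-scaling recipe in pseudocode. A minimal runnable sketch of the same arithmetic, assuming TF2's GradientTape (the scale value 128 is illustrative, not taken from the source):

```python
import tensorflow as tf

# Fixed loss scaling, as in the pseudocode above: scale the loss before
# differentiating, then divide the gradients by the same factor.
loss_scale = 128.0  # illustrative constant, not a value from the source
var = tf.Variable(2.0)

with tf.GradientTape() as tape:
    loss = var * var                    # stand-in for a real model loss
    scaled_loss = loss * loss_scale     # loss *= loss_scale

scaled_grads = tape.gradient(scaled_loss, [var])   # grads = gradients(loss, vars)
grads = [g / loss_scale for g in scaled_grads]     # undo the scaling
```

Mathematically the scaling cancels out, but the intermediate (scaled) gradients sit further from float16's underflow threshold, which is the point of the technique.
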
loss_scale_optimizer.py
32 """An optimizer that applies loss scaling.
34 Loss scaling is a process that multiplies the loss by a multiplier called the
35 loss scale, and divides each gradient by the same multiplier. The pseudocode
39 loss = ...
40 loss *= loss_scale
41 grads = gradients(loss, vars)
45 Mathematically, loss scaling has no effect, but can help avoid numerical
47 precision training. By multiplying the loss, each intermediate gradient will
50 The loss scale can either be a fixed constant, chosen by the user, or be
51 dynamically determined. Dynamically determining the loss scale is convenient
[all …]
mixed_precision.py
27 # is a loss scale optimizer class, and wrapper_fn is a function that takes in
37 """Registers a loss scale optimizer wrapper.
40 automatically wraps an optimizer with an optimizer wrapper that performs loss
50 "loss_scale", and returns a loss scale optimizer of type "wrapper_cls"
52 wrapper_cls: A loss scale optimizer class. Defaults to `wrapper_fn`, in
53 which case `wrapper_fn` should be a loss scale optimizer class whose
89 operation and a loss-scale optimizer.
102 expected. If a `NaN` gradient occurs with dynamic loss scaling, the model
104 incremented, and the `LossScaleOptimizer` attempts to decrease the loss
119 model.compile(loss="mse", optimizer=opt)
[all …]
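
For context, a sketch of how a loss-scale optimizer wrapper is used from Keras, assuming the TF2 mixed-precision API; the tiny model is illustrative only:

```python
import tensorflow as tf

# Wrap a regular optimizer so it performs dynamic loss scaling.
opt = tf.keras.optimizers.SGD(0.01)
opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(loss="mse", optimizer=opt)
# On a NaN/Inf gradient the wrapper skips the step and lowers the loss
# scale; after a run of finite steps it raises the scale again.
```
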
/external/tensorflow/tensorflow/python/keras/
losses.py
16 """Built-in loss functions."""
48 @keras_export('keras.losses.Loss')
49 class Loss:
50 """Loss base class.
53 * `call()`: Contains the logic for loss calculation using `y_true`, `y_pred`.
58 class MeanSquaredError(Loss):
82 loss = (tf.reduce_sum(loss_obj(labels, predictions)) *
88 """Initializes `Loss` class.
92 loss. Default value is `AUTO`. `AUTO` indicates that the reduction
121 """Invokes the `Loss` instance.
[all …]
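
A minimal custom loss in the style the docstring describes, assuming tf.keras.losses.Loss: put the per-sample math in call(y_true, y_pred) and let the base class apply the configured reduction.

```python
import tensorflow as tf

class MeanSquaredError(tf.keras.losses.Loss):
    """Per-sample MSE; the base class handles the reduction."""

    def call(self, y_true, y_pred):
        y_true = tf.cast(y_true, y_pred.dtype)
        return tf.reduce_mean(tf.square(y_pred - y_true), axis=-1)

loss_obj = MeanSquaredError()
print(loss_obj([[0.0, 1.0]], [[1.0, 1.0]]).numpy())  # 0.5
```
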
/external/webrtc/modules/rtp_rtcp/test/testFec/
test_packet_masks_metrics.cc
15 * The metrics measure the efficiency (recovery potential or residual loss) of
16 * the FEC code, under various statistical loss models for the packet/symbol
17 * loss events. Various constraints on the behavior of these metrics are
25 * In the case of XOR, the residual loss is determined via the set of packet
26 * masks (generator matrix). In the case of RS, the residual loss is determined
40 * The type of packet/symbol loss models considered in this test are:
41 * (1) Random loss: Bernoulli process, characterized by the average loss rate.
42 * (2) Bursty loss: Markov chain (Gilbert-Elliot model), characterized by two
43 * parameters: average loss rate and average burst length.
62 // Maximum gap size for characterizing the consecutiveness of the loss.
[all …]
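
The two loss models named above are straightforward to simulate. A sketch under the usual two-state reading of Gilbert-Elliot, parameterized by average loss rate and average burst length as in the comments; the function names are assumptions:

```python
import random

def bernoulli_losses(n, loss_rate, seed=1):
    """(1) Random loss: each packet lost i.i.d. with probability loss_rate."""
    rng = random.Random(seed)
    return [rng.random() < loss_rate for _ in range(n)]

def gilbert_elliot_losses(n, loss_rate, avg_burst_len, seed=1):
    """(2) Bursty loss: two-state (good/bad) Markov chain."""
    rng = random.Random(seed)
    p_leave_burst = 1.0 / avg_burst_len                       # bad -> good
    p_enter_burst = loss_rate * p_leave_burst / (1.0 - loss_rate)
    lost, bad = [], False
    for _ in range(n):
        if bad:
            bad = rng.random() >= p_leave_burst   # stay in the burst
        else:
            bad = rng.random() < p_enter_burst    # start a new burst
        lost.append(bad)
    return lost
```

With these transition probabilities the chain's stationary loss probability works out to exactly loss_rate, and the mean run of consecutive losses is avg_burst_len.
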
/external/tensorflow/tensorflow/python/keras/engine/
training_eager_v1.py
34 loss = loss_fn(targets, outputs)
35 return loss
87 """Calculates the loss for a given model.
95 loss values.
100 Returns the model output, total loss, loss value calculated using the
101 specified loss function and masks for each output. The total loss includes
103 to the loss value.
106 # Used to keep track of the total loss value (stateless).
143 with backend.name_scope('loss'):
152 'because it has no loss to optimize.')
[all …]
compile_utils.py
79 Applies a Loss / Metric to all outputs.
85 # each Metric / Loss separate. When there is only one Model output,
116 self._loss_metric = metrics_mod.Mean(name='loss') # Total loss.
121 """Per-output loss metrics."""
131 """One-time setup of loss objects."""
152 """Creates per-output loss metrics, but only for multi-output Models."""
169 """Computes the overall loss.
175 per-sample loss weights. If one Tensor is passed, it is used for all
178 regularization_losses: Additional losses to be added to the total loss.
194 loss_metric_values = [] # Used for loss metric calculation.
[all …]
training_utils_v1.py
118 """Aggregator that calculates loss and metrics info.
138 # Loss.
472 stateful_metric_names = stateful_metric_names[1:] # Exclude `loss`
781 """Does validation on the compatibility of targets and loss functions.
783 This helps prevent users from using loss functions incorrectly. This check
788 loss_fns: list of loss functions.
792 ValueError: if a loss function or target array
801 for y, loss, shape in zip(targets, loss_fns, output_shapes):
802 if y is None or loss is None or tensor_util.is_tf_type(y):
804 if losses.is_categorical_crossentropy(loss):
[all …]
/external/tensorflow/tensorflow/python/keras/mixed_precision/
loss_scale_optimizer.py
15 """Contains the loss scaling optimizer class."""
97 """The state of a dynamic loss scale."""
103 """Creates the dynamic loss scale."""
115 # nonfinite gradient or change in loss scale. The name is 'good_steps' for
121 """Adds a weight to this loss scale.
141 # Set aggregation to NONE, as loss scaling variables should never be
200 """Returns the current loss scale as a float32 `tf.Variable`."""
209 """Returns the current loss scale as a scalar `float32` tensor."""
213 """Updates the value of the loss scale.
217 all-reduced gradient of the loss with respect to a weight.
[all …]
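
A sketch of the dynamic update rule implied by the 'good_steps' counter above: shrink the scale on a nonfinite gradient, grow it after a long run of finite ones. The thresholds and names here are assumptions, not values from the source:

```python
class DynamicLossScale:
    """Toy model of dynamic loss-scale state (names are assumptions)."""

    def __init__(self, initial_scale=2.0 ** 15, growth_interval=2000):
        self.scale = initial_scale
        self.growth_interval = growth_interval
        self.good_steps = 0  # consecutive steps with finite gradients

    def update(self, grads_are_finite):
        if not grads_are_finite:
            self.scale /= 2.0       # nonfinite gradient: shrink, skip step
            self.good_steps = 0
        else:
            self.good_steps += 1
            if self.good_steps >= self.growth_interval:
                self.scale *= 2.0   # long finite run: try a bigger scale
                self.good_steps = 0
```
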
/external/tensorflow/tensorflow/python/keras/utils/
losses_utils.py
16 """Utilities related to loss functions."""
31 """Types of loss reduction.
41 loss function. When non-scalar losses are returned to Keras functions like
42 `fit`/`evaluate`, the unreduced vector loss is passed to the optimizer
43 but the reported loss will be a scalar value.
46 The builtin loss functions wrapped by the loss classes reduce
47 one dimension (`axis=-1`, or `axis` if specified by loss function).
67 loss = tf.reduce_sum(loss_obj(labels, predictions)) *
247 losses: `Tensor` whose elements contain individual loss measurements.
266 """Reduces the individual weighted loss measurements."""
[all …]
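
The identity quoted above (sum of per-sample losses times 1/batch_size) can be checked directly, assuming the Keras Reduction enum:

```python
import tensorflow as tf

labels = tf.constant([[0.0], [1.0], [2.0]])
predictions = tf.constant([[1.0], [1.0], [1.0]])

# Ask for unreduced, per-sample losses, then reduce by hand.
loss_obj = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.NONE)
per_sample = loss_obj(labels, predictions)        # shape [3]: [1., 0., 1.]
loss = tf.reduce_sum(per_sample) * (1.0 / 3)      # sum over batch size
print(loss.numpy())                               # 0.666...
```
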
/external/tensorflow/tensorflow/python/ops/
nn_loss_scaling_utilities_test.py
15 """Tests for loss scaling utilities in tensorflow.ops.nn."""
40 loss = nn_impl.compute_average_loss(per_example_loss, global_batch_size=10)
41 self.assertEqual(self.evaluate(loss), 1.5)
76 loss = nn_impl.compute_average_loss(per_example_loss)
77 self.assertAllClose(self.evaluate(loss), (2.5 + 6.2 + 5.) / 3)
83 loss = distribution.reduce("SUM", per_replica_losses, axis=None)
84 self.assertAllClose(self.evaluate(loss), (2.5 + 6.2 + 5.) / 3)
99 loss = distribution.reduce("SUM", per_replica_losses, axis=None)
100 self.assertAllClose(self.evaluate(loss), (2. + 4. + 6.) * 2. / 3)
107 loss = distribution.reduce("SUM", per_replica_losses, axis=None)
[all …]
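
The public counterpart of the utility under test is tf.nn.compute_average_loss: it divides the summed per-example loss by the global batch size, which matters when each replica sees only part of the batch. A sketch reusing the per-example values from the test:

```python
import tensorflow as tf

per_example_loss = tf.constant([2.5, 6.2, 5.0])

# Outside a distribution strategy the global batch size defaults to the
# local one, so this is (2.5 + 6.2 + 5.0) / 3.
print(tf.nn.compute_average_loss(per_example_loss).numpy())

# With an explicit global batch size the divisor changes: sum / 10.
print(tf.nn.compute_average_loss(per_example_loss,
                                 global_batch_size=10).numpy())
```
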
nn_xent_test.py
57 loss = nn_impl.sigmoid_cross_entropy_with_logits(
59 self.assertEqual("mylogistic", loss.op.name)
66 loss = nn_impl.sigmoid_cross_entropy_with_logits(
69 tf_loss = self.evaluate(loss)
77 loss = nn_impl.sigmoid_cross_entropy_with_logits(
80 tf_loss = self.evaluate(loss)
88 loss = nn_impl.sigmoid_cross_entropy_with_logits(
90 err = gradient_checker.compute_gradient_error(logits, sizes, loss, sizes)
91 print("logistic loss gradient err = ", err)
99 loss = nn_impl.sigmoid_cross_entropy_with_logits(
[all …]
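
sigmoid_cross_entropy_with_logits is documented to compute a numerically stable rearrangement of the logistic loss. A numpy sketch of that formula for logits x and labels z:

```python
import numpy as np

def sigmoid_xent(logits, labels):
    """max(x, 0) - x*z + log(1 + exp(-|x|)), the stable form of the loss."""
    x, z = np.asarray(logits, float), np.asarray(labels, float)
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

# Matches the naive -z*log(sigmoid(x)) - (1-z)*log(1-sigmoid(x)) where that
# form is well conditioned, but never overflows for large |x|.
print(sigmoid_xent([-100.0, 0.0, 100.0], [1.0, 1.0, 0.0]))  # [100., 0.693, 100.]
```
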
/external/webrtc/modules/video_coding/
media_opt_util.h
26 // Number of time periods used for (max) window filter for packet loss
27 // TODO(marpan): set reasonable window size for filtered packet loss,
28 // adjustment should be based on logged/real data of loss stats/correlation.
34 // The type of filter used on the received packet loss reports.
36 kNoFilter, // No filtering on received loss.
100 // Returns the effective packet loss for ER, required by this protection
103 // Return value : Required effective packet loss
135 // Estimation of residual loss after the FEC
150 // Get the effective packet loss
159 // Get the effective packet loss for ER
[all …]
/external/tensorflow/tensorflow/python/saved_model/model_utils/
export_output_test.py
241 loss = {'my_loss': constant_op.constant([0])}
249 outputter = MockSupervisedOutput(loss, predictions, metrics)
250 self.assertEqual(outputter.loss['loss/my_loss'], loss['my_loss'])
260 loss['my_loss'], predictions['output1'], metrics['metrics'])
261 self.assertEqual(outputter.loss, {'loss': loss['my_loss']})
270 self.assertLen(outputter.loss, 1)
277 with self.assertRaisesRegex(ValueError, 'loss output value must'):
281 with self.assertRaisesRegex(ValueError, 'loss output key must'):
287 loss = {('my', 'loss'): constant_op.constant([0])}
296 outputter = MockSupervisedOutput(loss, predictions, metrics)
[all …]
/external/deqp-deps/glslang/Test/
hlsl.promotions.frag
29 float3 Fn_R_F3D(out float3 p) { p = d3; return d3; } // valid, but loss of precision on downconve…
34 int3 Fn_R_I3D(out int3 p) { p = d3; return d3; } // valid, but loss of precision on downconvers…
39 uint3 Fn_R_U3D(out uint3 p) { p = d3; return d3; } // valid, but loss of precision on downconver…
57 float3 r03 = d3; // valid, but loss of precision on downconversion.
62 int3 r13 = d3; // valid, but loss of precision on downconversion.
67 uint3 r23 = d3; // valid, but loss of precision on downconversion.
83 r03 *= d3; // valid, but loss of precision on downconversion.
88 r13 *= d3; // valid, but loss of precision on downconversion.
93 r23 *= d3; // valid, but loss of precision on downconversion.
106 r03 *= ds; // valid, but loss of precision on downconversion.
[all …]
/external/angle/third_party/vulkan-deps/glslang/src/Test/
hlsl.promotions.frag
29 float3 Fn_R_F3D(out float3 p) { p = d3; return d3; } // valid, but loss of precision on downconve…
34 int3 Fn_R_I3D(out int3 p) { p = d3; return d3; } // valid, but loss of precision on downconvers…
39 uint3 Fn_R_U3D(out uint3 p) { p = d3; return d3; } // valid, but loss of precision on downconver…
57 float3 r03 = d3; // valid, but loss of precision on downconversion.
62 int3 r13 = d3; // valid, but loss of precision on downconversion.
67 uint3 r23 = d3; // valid, but loss of precision on downconversion.
83 r03 *= d3; // valid, but loss of precision on downconversion.
88 r13 *= d3; // valid, but loss of precision on downconversion.
93 r23 *= d3; // valid, but loss of precision on downconversion.
106 r03 *= ds; // valid, but loss of precision on downconversion.
[all …]
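
The repeated comment concerns HLSL's implicit double-to-float conversions, but the effect is language-independent; a quick numpy illustration of the precision lost on the same downconversion:

```python
import numpy as np

# Not HLSL, but the effect the test comments describe: converting a
# 64-bit double down to a 32-bit float silently drops low-order bits.
d = np.float64(0.1)   # closest double to 0.1
f = np.float32(d)     # valid, but loss of precision on downconversion
print(f - d)          # ~1.49e-09: the rounding error introduced
```
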
/external/tensorflow/tensorflow/python/keras/saving/utils_v1/
signature_def_utils.py
24 inputs, loss, predictions=None, metrics=None):
26 unexported_constants.SUPERVISED_TRAIN_METHOD_NAME, inputs, loss=loss,
31 inputs, loss, predictions=None, metrics=None):
33 unexported_constants.SUPERVISED_EVAL_METHOD_NAME, inputs, loss=loss,
38 method_name, inputs, loss=None, predictions=None,
44 results in loss, metrics, and the like. Note that this function only requires
50 loss: dict of string to `Tensor` representing computed loss.
67 for output_set in (loss, predictions, metrics):
/external/autotest/client/common_lib/cros/network/
ping_runner.py
115 3 packets transmitted, 3 packets received, 0.0% packet loss
126 20 packets transmitted, 20 packets received, 0.0% packet loss
136 loss = _regex_float_from_string('([0-9]+\.[0-9]+)% packet loss',
138 if None in (sent, received, loss):
143 return PingResult(sent, received, loss,
151 return PingResult(sent, received, loss)
206 10 packets transmitted, 7 received, +3 errors, 30% packet loss,
219 5 packets transmitted, 5 received, 0% packet loss, time 4007ms
223 9 packets transmitted, 9 received, +1 duplicates, 0% packet loss,
230 loss = _regex_float_from_string('([0-9]+(\.[0-9]+)?)% packet loss',
[all …]
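
A sketch of the parsing this helper performs, using the same style of regex the snippet shows against one of the sample summary lines:

```python
import re

SUMMARY = "10 packets transmitted, 7 received, +3 errors, 30% packet loss,"

# Pull the packet-loss percentage out of ping's summary line.
match = re.search(r'([0-9]+(\.[0-9]+)?)% packet loss', SUMMARY)
loss = float(match.group(1)) if match else None
print(loss)  # 30.0
```
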
/external/tensorflow/tensorflow/python/keras/optimizer_v2/
optimizer_v2.py
123 # `loss` is a callable that takes no argument and returns the value
125 loss = lambda: 3 * var1 * var1 + 2 * var2 * var2
126 # In graph mode, returns op that minimizes the loss by updating the listed
128 opt_op = opt.minimize(loss, var_list=[var1, var2])
131 opt.minimize(loss, var_list=[var1, var2])
172 loss = <call_loss_function>
174 grads = tape.gradient(loss, vars)
188 you divide your loss by the global batch size, which is done
190 See the `reduction` argument of your loss which should be set to
244 # `loss` is a callable that takes no argument and returns the value
[all …]
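
The docstring fragment above is nearly complete; a runnable version, assuming a Keras V2 optimizer such as SGD:

```python
import tensorflow as tf

# `loss` is a callable that takes no argument and returns the value
# to minimize; minimize() computes and applies one gradient step.
var1 = tf.Variable(2.0)
var2 = tf.Variable(3.0)
loss = lambda: 3 * var1 * var1 + 2 * var2 * var2

opt = tf.keras.optimizers.SGD(learning_rate=0.1)
opt.minimize(loss, var_list=[var1, var2])
print(var1.numpy(), var2.numpy())  # 0.8, 1.8 (grads were 12 and 12)
```
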
/external/webrtc/modules/audio_coding/neteq/tools/
neteq_quality_test.cc
52 ABSL_FLAG(int, packet_loss_rate, 10, "Percentile of packet loss.");
57 "Random loss mode: 0--no loss, 1--uniform loss, 2--Gilbert Elliot "
58 "loss, 3--fixed loss.");
63 "Burst length in milliseconds, only valid for Gilbert Elliot loss.");
75 "List of loss events time and duration separated by comma: "
126 // ProbTrans00Solver() is to calculate the transition probability from no-loss
127 // state to itself in a modified Gilbert Elliot packet loss model. The result is
128 // to achieve the target packet loss rate `loss_rate`, when a packet is not
130 // no-loss.
216 << "Invalid packet loss percentile, should be between 0 and 100."; in NetEqQualityTest()
[all …]
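
ProbTrans00Solver() is described as recovering the no-loss to no-loss transition probability for a target loss rate. In a plain (unmodified) two-state Gilbert-Elliot chain that value has a closed form, which makes a useful sanity check; NetEq's modified model solves for it numerically, so treat this sketch as an approximation:

```python
def prob_trans_00(loss_rate, avg_burst_len):
    """Closed form for a plain two-state Gilbert-Elliot chain."""
    p_10 = 1.0 / avg_burst_len                    # loss -> no-loss
    p_01 = loss_rate * p_10 / (1.0 - loss_rate)   # no-loss -> loss
    return 1.0 - p_01                             # no-loss -> no-loss

# Stationary loss rate p_01 / (p_01 + p_10) then equals loss_rate.
print(prob_trans_00(loss_rate=0.10, avg_burst_len=3.0))  # ~0.963
```
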
/external/iproute2/tc/
q_netem.c
38 " [ loss random PERCENT [CORRELATION]]\n" \
39 " [ loss state P13 [P31 [P32 [P23 P14]]]\n" \
40 " [ loss gemodel PERCENT [R [1-H [1-K]]]\n" \
220 } else if (matches(*argv, "loss") == 0 ||
222 if (opt.loss > 0 || loss_type != NETEM_LOSS_UNSPEC) {
223 explain1("duplicate loss argument\n");
228 /* Old (deprecated) random loss model syntax */
235 if (get_percent(&opt.loss, *argv)) {
236 explain1("loss percent");
243 explain1("loss correlation");
[all …]
/external/webrtc/modules/rtp_rtcp/source/
packet_loss_stats_unittest.cc
57 // three should count as a multiple loss event and three multiple loss packets.
96 // Add loss packets as the first three and the fifth of every eight packets. The
97 // set of three should be multiple loss and the fifth should be single loss.
109 // Add loss packets as the first three and the fifth of every eight packets such
122 // Add loss packets as the first three and the fifth of every eight packets such
136 // Add loss packets such that there is a multiple loss event that continues
150 // Add loss packets such that there is a multiple loss event that continues
167 // Add loss packets out of order and ensure that they still get counted
168 // correctly as single or multiple loss events.
182 // Add loss packets out of order and ensure that they still get counted
[all …]
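
A sketch of the single- versus multiple-loss bookkeeping these tests describe: consecutive lost sequence numbers form one multiple-loss event, an isolated loss a single-loss event, and sorting first handles out-of-order arrivals. Function and variable names are assumptions:

```python
def count_loss_events(lost_seq_nums):
    """Group lost sequence numbers into runs; count single/multiple events."""
    runs, run = [], []
    for seq in sorted(lost_seq_nums):     # sorting handles out-of-order loss
        if run and seq == run[-1] + 1:
            run.append(seq)               # extends the current burst
        else:
            if run:
                runs.append(run)
            run = [seq]                   # starts a new event
    if run:
        runs.append(run)
    single = sum(1 for r in runs if len(r) == 1)
    multiple = sum(1 for r in runs if len(r) > 1)
    return single, multiple

print(count_loss_events([1, 2, 3, 5]))  # (1, 1): one burst of three, one single
```
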
