
Searched full:backward (Results 1 – 25 of 5158) sorted by relevance


/external/pytorch/torch/csrc/jit/runtime/
symbolic_script.cpp
55 def backward(grad_output):
58 return torch.mean(self, dtype=dtype), backward
67 def backward(grad_output):
71 return torch.mean(self, dim, keepdim, dtype=dtype), backward
78 def backward(grad_output):
82 return result, backward
143 def backward(grad_output):
148 return std_out, backward
155 def backward(grad_output):
160 return std_out, backward
[all …]
/external/pytorch/torch/autograd/
anomaly_mode.py
16 - Running the forward pass with detection enabled will allow the backward
18 backward function.
19 - If ``check_nan`` is ``True``, any backward computation that generate "nan"
35 ... def backward(ctx, gO):
36 ... # Error during the backward pass
37 ... raise RuntimeError("Some error in backward")
44 >>> out.backward()
47 File "/your/pytorch/install/torch/_tensor.py", line 93, in backward
48 torch.autograd.backward(self, gradient, retain_graph, create_graph)
49 File "/your/pytorch/install/torch/autograd/__init__.py", line 90, in backward
[all …]
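
The anomaly_mode.py hits above come from the docstring of torch.autograd.detect_anomaly. A minimal sketch of the usage that docstring describes; the MyFunc name and tensor shapes are illustrative, not taken from the file:

    import torch

    class MyFunc(torch.autograd.Function):
        @staticmethod
        def forward(ctx, inp):
            return inp.clone()

        @staticmethod
        def backward(ctx, gO):
            # Deliberate failure so anomaly detection can point back at the forward call.
            raise RuntimeError("Some error in backward")

    with torch.autograd.detect_anomaly():
        inp = torch.rand(10, 10, requires_grad=True)
        out = MyFunc.apply(inp)
        out.sum().backward()  # traceback now also shows where `out` was created in forward
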
function.py
36 r"""Save given tensors for a future call to :func:`~Function.backward`.
41 All tensors intended to be used in the backward pass should be saved
47 nor outputs of :func:`forward`, are saved for backward, your custom Function
48 may not support double backward.
49 Custom Functions that do not support double backward should decorate their
50 :func:`backward` method with ``@once_differentiable`` so that performing
51 double backward raises an error. If you'd like to support double backward,
52 you can either recompute intermediaries based on the inputs during backward
54 …`double backward tutorial <https://pytorch.org/tutorials/intermediate/custom_function_double_backw…
57 In :func:`backward`, saved tensors can be accessed through the :attr:`saved_tensors`
[all …]
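
The function.py hits describe saving tensors in a custom autograd Function with ctx.save_for_backward and reading them back through ctx.saved_tensors. A minimal sketch of that pattern; the Square op is an illustrative example, not from the file:

    import torch

    class Square(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            # Save inputs needed by the backward pass.
            ctx.save_for_backward(x)
            return x * x

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors        # retrieved exactly as saved
            return grad_output * 2 * x      # d(x^2)/dx = 2x

    x = torch.randn(4, requires_grad=True)
    Square.apply(x).sum().backward()
    print(torch.allclose(x.grad, 2 * x.detach()))  # True
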
graph.py
77 r"""Register a backward hook.
92 See :ref:`backward-hooks-execution` for more information on how when this hook
102 >>> b.sum().backward(retain_graph=True)
107 >>> b.sum().backward(retain_graph=True)
115 r"""Register a backward pre-hook.
129 See :ref:`backward-hooks-execution` for more information on how when this hook
138 >>> b.sum().backward(retain_graph=True)
143 >>> b.sum().backward(retain_graph=True)
215 operation saves a tensor for backward (this includes intermediary results
223 namely when executing :func:`torch.Tensor.backward()` or
[all …]
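
The graph.py hits concern hooks that fire during the backward pass (including node-level hooks and pre-hooks). A small sketch using the closely related Tensor.register_hook API, with retain_graph=True as in the hits; the values are illustrative:

    import torch

    a = torch.tensor([1.0, 2.0], requires_grad=True)
    b = a * 3

    # The hook runs during backward with b's incoming gradient;
    # returning a tensor replaces that gradient.
    handle = b.register_hook(lambda grad: grad * 2)

    b.sum().backward(retain_graph=True)
    print(a.grad)      # tensor([6., 6.]) -- 3 (chain rule) * 2 (hook)

    handle.remove()    # hooks can be removed once no longer needed
    b.sum().backward(retain_graph=True)
    print(a.grad)      # gradients accumulate: tensor([9., 9.])
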
/external/pytorch/docs/source/rpc/
distributed_autograd.rst
41 The main motivation behind distributed autograd is to enable running a backward
51 used to execute the backward pass. For more details see
55 pass to ensure the backward pass is executed appropriately. For this purpose,
61 The input for this function during the backward pass is received from the
66 node to the appropriate ``send`` function during the backward pass.
69 function on a remote node during the backward pass.
83 Each forward and backward pass that uses distributed autograd is assigned a
90 1. Multiple nodes running distributed backward passes might accumulate
92 tensor would have gradients from a variety of distributed backward passes
94 calling :meth:`torch.autograd.backward` multiple times locally. In order to
[all …]
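
The distributed_autograd.rst hits describe per-pass contexts and the send/recv pairs recorded during the forward pass. A heavily trimmed sketch of the user-facing flow, assuming RPC is already initialized on each worker; the worker name and the remote op are placeholders:

    import torch
    import torch.distributed.autograd as dist_autograd
    import torch.distributed.rpc as rpc

    # Assumes rpc.init_rpc(...) has already been called on every participating worker,
    # and that t1/t2 require grad.
    def train_step(t1, t2):
        with dist_autograd.context() as context_id:
            # Forward: the RPC call records matching send/recv functions in the graph.
            loss = rpc.rpc_sync("worker1", torch.add, args=(t1, t2)).sum()
            # Backward: gradients accumulate per context, not in .grad.
            dist_autograd.backward(context_id, [loss])
            # Read gradients back out of this context.
            return dist_autograd.get_gradients(context_id)
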
/external/pytorch/torch/_custom_op/
autograd.py
17 # and register something that is actually a backward formula
25 # As explained in NOTE ["backward", "save_for_backward", and "autograd"],
26 # after the user gives us "backward" and "save_for_backward", we generate
29 if custom_op._has_impl('save_for_backward') or custom_op._has_impl('backward'):
31 'save_for_backward' if custom_op._has_impl('backward')
32 else 'backward'
34 found = 'save_for_backward' if missing == 'backward' else 'backward'
39 f"To use the CustomOp API to register a backward formula, "
40 f"please provide us both a backward function and a "
41 f"'save for backward' function via `impl_backward` and "
[all …]
/external/pytorch/test/dynamo/
test_hooks.py
51 v.backward(torch.tensor([1.0, 2.0, 3.0]))
64 v.backward(torch.tensor([1.0, 2.0, 3.0]))
80 v.backward(torch.tensor([1.0, 2.0, 3.0]))
95 v.backward(torch.tensor([1.0, 2.0, 3.0]))
113 v.backward(torch.tensor([1.0, 2.0, 3.0]))
132 v.backward(torch.tensor([1.0, 2.0, 3.0]))
152 v.backward(torch.tensor([1.0, 2.0, 3.0]))
171 v.backward(torch.tensor([1.0, 2.0, 3.0]))
189 v[0].backward(torch.tensor([1.0, 2.0, 3.0]))
205 v.backward(torch.tensor([1.0, 2.0, 3.0]))
[all …]
test_autograd_function.py
26 def backward(ctx, grad_output): member in CustomFunc1
41 def backward(ctx, grad_output): member in CustomFunc3
89 # Note that forward, setup_context, and backward are @staticmethods
106 def backward(ctx, grad_output): member in LinearFunction
131 def backward(ctx, grad_out1, grad_out2): member in MaterializingGradFunction
146 def backward(ctx, grad_output): member in CustomFuncBwdPrintGraphBreak
162 def backward(ctx, grad_output): member in CustomFuncStrideBwd
180 def backward(ctx, grad_output): member in CustomFuncSaveForBwd
199 def backward(ctx, grad_output): member in ContextSaveAndMark
212 def backward(ctx, grad_output): member in ContextMarkAndSave
[all …]
/external/pytorch/functorch/notebooks/
aot_autograd_optimizations.ipynb
17 …"* AOT Autograd traces the forward and backward graph ahead of time. Presence of forward and backw…
18 …"* AOT Autograd provides simple mechanisms to compile the extracted forward and backward graphs th…
67 "loss.backward()"
76 …racted forward and backward graphs. Internally, AOT uses `__torch_dispatch__` based tracing mechan…
78 …"AOT Autograd then sends these forward and backward graphs to the user supplied compilers. So, let…
119 "# The compiler_fn is called after the forward and backward graphs are extracted.\n",
132 "res.sum().backward()\n",
140 …backward graph. You can see that in addition to the original input of the forward pass, the forwar…
148 …"Now that we understand how to use AOT Autograd to print forward and backward graphs, let us use A…
160 "# Lets compile the forward and backward through ts_compile.\n",
[all …]
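
The notebook hits describe handing AOT Autograd's extracted forward and backward graphs to user-supplied compilers. A minimal sketch along the lines of that notebook, assuming the functorch.compile entry points; the print-only compiler is illustrative:

    import torch
    from functorch.compile import aot_function

    def fn(a, b):
        return (a * b).sum()

    # Called once with the extracted forward graph and once with the backward graph.
    # Returning the GraphModule runs it unchanged.
    def print_compile_fn(fx_module, example_inputs):
        print(fx_module.code)
        return fx_module

    aot_fn = aot_function(fn, fw_compiler=print_compile_fn, bw_compiler=print_compile_fn)

    a = torch.randn(4, requires_grad=True)
    b = torch.randn(4, requires_grad=True)
    aot_fn(a, b).backward()   # compilation of both graphs happens on first use
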
/external/pytorch/test/
test_autograd.py
131 out.sum().backward()
228 result.sum().backward(go, create_graph=True)
246 def backward(ctx, grad_output): member in TestAutograd.test_function.MyFunction
275 def backward(ctx, grad_output): member in TestAutograd.test_once_differentiable.MyFunction
301 def backward(ctx, grad): member in TestAutograd.test_function_returns_input.MyFunction
306 MyFunction.apply(v).backward()
311 MyFunction.apply(v.clone()).backward()
321 def backward(ctx, grad): member in TestAutograd.test_function_returns_undefined_tensor.MyFunction
324 # Test that undefined tensors returned from custom backward function
328 MyFunction.apply(x).backward()
[all …]
/external/pytorch/test/inductor/
test_compiled_autograd.py
131 loss.backward()
154 result.backward()
173 result.backward()
190 def backward(ctx, grad): function
194 sin.register_autograd(backward, setup_context=setup_context)
199 y.backward()
213 result.backward()
230 result.backward()
247 result.backward()
265 result.backward()
[all …]
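
One hit above shows sin.register_autograd(backward, setup_context=setup_context). A hedged sketch of that registration pattern, assuming the torch.library.custom_op API of recent PyTorch; the "mylib::foo_sin" op name and function bodies are made up for illustration:

    import torch

    @torch.library.custom_op("mylib::foo_sin", mutates_args=())
    def foo_sin(x: torch.Tensor) -> torch.Tensor:
        # Op body returns a freshly allocated result.
        return torch.sin(x.detach())

    def setup_context(ctx, inputs, output):
        (x,) = inputs
        ctx.save_for_backward(x)

    def backward(ctx, grad):
        (x,) = ctx.saved_tensors
        return grad * x.cos()

    foo_sin.register_autograd(backward, setup_context=setup_context)

    x = torch.randn(3, requires_grad=True)
    foo_sin(x).sum().backward()
    print(x.grad)   # cos(x), via the registered backward
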
/external/pytorch/test/cpp/api/
autograd.cpp
39 y[0].backward(); in TEST()
47 y[0].backward(); in TEST()
54 backward({res.sum()}, {}); in TEST()
64 backward({res}, {torch::ones({2, 2})}, {}, true); in TEST()
66 backward({res}, {torch::ones({2, 2})}); in TEST()
87 res.backward(torch::ones({2, 2}), false, true); in TEST()
123 x.backward(grad_output, false, true); in TEST()
160 static variable_list backward( in TEST() function
207 out.backward({}, /*keep_graph=*/true); in TEST()
210 out.backward({}, /*keep_graph=*/true); in TEST()
[all …]
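
The C++ tests above exercise the free function backward({...}, {...}) and Tensor::backward with keep_graph/create_graph flags. The Python counterpart is torch.autograd.backward; a small sketch with illustrative values:

    import torch

    x = torch.ones(2, 2, requires_grad=True)
    res = x * 2

    # Free-function form, mirroring `backward({res}, {torch::ones({2, 2})})`:
    torch.autograd.backward([res], [torch.ones(2, 2)], retain_graph=True)
    print(x.grad)   # all 2s

    # Tensor-method form with create_graph=True, mirroring
    # `res.backward(torch::ones({2, 2}), false, true)`:
    res.backward(torch.ones(2, 2), retain_graph=False, create_graph=True)
    print(x.grad)   # gradients accumulate: all 4s
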
/external/pytorch/torch/distributed/fsdp/
_runtime_utils.py
54 BACKWARD = auto() variable in _PrefetchMode
190 "FSDP optimizer in backward only supported with use_orig_params=True!"
260 # Stream for overlapping gradient reduction with the backward pass gradient
355 registering post-backward hooks for these current parameters. This function
383 # Register post-backward hooks to reshard the parameters and reduce-scatter
388 # set the grad to None in the backward pass.
445 and registering pre-backward hooks on the forward outputs.
457 output (Any): Forward pass output; pre-backward hooks are registered on
472 # Register pre-backward hooks to unshard the flat parameters for the
490 # with the intention that they are immediately used for backward
[all …]
/external/pytorch/docs/source/notes/
ddp.rst
20 it with DDP, and then runs one forward pass, one backward pass, and an optimizer
49 # backward pass
50 loss_fn(outputs, labels).backward()
93 later will take care of the gradients synchronization during the backward
101 order is because DDP expects gradients to become ready during the backward
105 be true, and when that happens it could hurt DDP backward speed as the
109 the backward pass when the gradient becomes ready.
113 backward on a subgraph of the model, and DDP finds out which parameters are
114 involved in the backward pass by traversing the autograd graph from the model
116 backward pass, the ``Reducer`` would only wait for unready parameters, but it
[all …]
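
The ddp.rst hits outline the standard DDP loop (forward, backward, optimizer step) and gradient synchronization during backward. A trimmed sketch of that loop, assuming one process per rank has already been spawned and MASTER_ADDR/MASTER_PORT are set; the model and data are placeholders:

    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def train(rank, world_size):
        dist.init_process_group("gloo", rank=rank, world_size=world_size)

        model = nn.Linear(10, 10)
        ddp_model = DDP(model)                     # wraps model, broadcasts initial weights
        loss_fn = nn.MSELoss()
        optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.001)

        outputs = ddp_model(torch.randn(20, 10))   # forward pass
        labels = torch.randn(20, 10)
        # backward pass: DDP all-reduces gradients bucket by bucket as they become ready
        loss_fn(outputs, labels).backward()
        optimizer.step()

        dist.destroy_process_group()
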
amp_examples.rst
49 # Scales loss. Calls backward() on scaled loss to create scaled gradients.
50 # Backward passes under autocast are not recommended.
51 # Backward ops run in the same dtype autocast chose for corresponding forward ops.
52 scaler.scale(loss).backward()
67 All gradients produced by ``scaler.scale(loss).backward()`` are scaled. If you wish to modify or i…
68 the parameters' ``.grad`` attributes between ``backward()`` and ``scaler.step(optimizer)``, you sh…
93 scaler.scale(loss).backward()
131 the next backward pass will add scaled grads to unscaled grads (or grads scaled by a different fact…
149 scaler.scale(loss).backward()
187 loss.backward()
[all …]
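
The AMP hits describe the GradScaler recipe: scale the loss, call backward() on the scaled loss, then step and update through the scaler. A minimal sketch, assuming a CUDA device and placeholder model, optimizer, and data:

    import torch

    model = torch.nn.Linear(8, 8).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    scaler = torch.cuda.amp.GradScaler()

    for inputs, targets in [(torch.randn(4, 8, device="cuda"),
                             torch.randn(4, 8, device="cuda"))]:
        optimizer.zero_grad()
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            output = model(inputs)                  # forward runs in mixed precision
            loss = torch.nn.functional.mse_loss(output, targets)

        scaler.scale(loss).backward()   # backward on the *scaled* loss -> scaled grads
        # optional: scaler.unscale_(optimizer) here to clip or inspect true gradients
        scaler.step(optimizer)          # skips the step if infs/NaNs were found
        scaler.update()                 # adjusts the scale factor for the next iteration
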
/external/cronet/stable/third_party/brotli/enc/
hash_longest_match_quickly_inc.h
34 help create backward references to previous data.
78 system to find accidentally good backward references here and there. */ in FN()
139 /* Find a longest backward match of &data[cur_ix & ring_buffer_mask]
191 size_t backward; in FN() local
196 backward = cur_ix - prev_ix; in FN()
201 if (BROTLI_PREDICT_FALSE(backward == 0 || backward > max_backward)) { in FN()
208 const score_t score = BackwardReferenceScore(len, backward); in FN()
211 out->distance = backward; in FN()
225 size_t backward; in FN() local
227 backward = cur_ix - prev_ix; in FN()
[all …]
/external/brotli/c/enc/
hash_longest_match_quickly_inc.h
34 help create backward references to previous data.
78 system to find accidentally good backward references here and there. */ in FN()
139 /* Find a longest backward match of &data[cur_ix & ring_buffer_mask]
191 size_t backward; in FN() local
196 backward = cur_ix - prev_ix; in FN()
201 if (BROTLI_PREDICT_FALSE(backward == 0 || backward > max_backward)) { in FN()
208 const score_t score = BackwardReferenceScore(len, backward); in FN()
211 out->distance = backward; in FN()
225 size_t backward; in FN() local
227 backward = cur_ix - prev_ix; in FN()
[all …]
hash_longest_match_inc.h
11 help create backward references to previous data.
34 /* Only block_size_ newest backward references are kept,
53 /* Buckets containing block_size_ of backward references. */
147 /* Find a longest backward match of &data[cur_ix] up to the length of
178 const size_t backward = (size_t)distance_cache[i]; in FN() local
179 size_t prev_ix = (size_t)(cur_ix - backward); in FN()
183 if (BROTLI_PREDICT_FALSE(backward > max_backward)) { in FN()
199 a few unnecessary binary logarithms in backward reference score, in FN()
208 out->distance = backward; in FN()
223 const size_t backward = cur_ix - prev_ix; in FN() local
[all …]
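
The Brotli matcher hits above search hash buckets for the longest backward match within max_backward and score candidates by length and distance. A simplified, brute-force Python illustration of the same idea over a sliding window; this is not Brotli's actual algorithm or scoring, just the concept:

    def longest_backward_match(data: bytes, cur_ix: int, max_backward: int, min_len: int = 4):
        """Return (length, backward_distance) of the best match for data[cur_ix:]."""
        best_len, best_backward = 0, 0
        for prev_ix in range(max(0, cur_ix - max_backward), cur_ix):
            backward = cur_ix - prev_ix        # distance of the candidate reference
            length = 0
            # Extend the match while bytes agree (it may run past prev_ix, as in LZ77).
            while (cur_ix + length < len(data)
                   and data[prev_ix + length] == data[cur_ix + length]):
                length += 1
            # Prefer longer matches; break ties with the smaller backward distance.
            if length >= min_len and (length, -backward) > (best_len, -best_backward):
                best_len, best_backward = length, backward
        return best_len, best_backward

    # e.g. longest_backward_match(b"abcdabcdabcd", 8, 64) -> (4, 4)
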
/external/cronet/tot/third_party/brotli/enc/
hash_longest_match_quickly_inc.h
34 help create backward references to previous data.
78 system to find accidentally good backward references here and there. */ in FN()
139 /* Find a longest backward match of &data[cur_ix & ring_buffer_mask]
191 size_t backward; in FN() local
196 backward = cur_ix - prev_ix; in FN()
201 if (BROTLI_PREDICT_FALSE(backward == 0 || backward > max_backward)) { in FN()
208 const score_t score = BackwardReferenceScore(len, backward); in FN()
211 out->distance = backward; in FN()
225 size_t backward; in FN() local
227 backward = cur_ix - prev_ix; in FN()
[all …]
hash_longest_match_inc.h
11 help create backward references to previous data.
34 /* Only block_size_ newest backward references are kept,
53 /* Buckets containing block_size_ of backward references. */
143 /* Find a longest backward match of &data[cur_ix] up to the length of
174 const size_t backward = (size_t)distance_cache[i]; in FN() local
175 size_t prev_ix = (size_t)(cur_ix - backward); in FN()
179 if (BROTLI_PREDICT_FALSE(backward > max_backward)) { in FN()
195 a few unnecessary binary logarithms in backward reference score, in FN()
204 out->distance = backward; in FN()
219 const size_t backward = cur_ix - prev_ix; in FN() local
[all …]
/external/pytorch/torch/testing/_internal/distributed/rpc/
dist_autograd_test.py
173 dist_autograd.backward(context_id, [loss])
187 dist_autograd.backward(context_id, [loss])
202 def backward(ctx, input): member in SimulateBackwardError
204 raise Exception("Simulate error on backward pass") # noqa: TRY002
252 torch.autograd.backward(tensors)
258 dist_autograd.backward(context_id, tensors)
619 dist_autograd.backward(context_id, [loss], retain_graph=True)
630 loss_local.backward()
636 dist_autograd.backward(context_id, [loss])
651 local_ret.backward()
[all …]
/external/brotli/research/
README.md
3 …backward reference distance distributions in LZ77 compression. We developed these tools to be able…
9 This tool generates optimal (match-length-wise) backward references for every position in the input…
17 This tool generates a visualization of the distribution of backward references stored in `*.dist` f…
29 …ages. Input images must be of same size. Useful for comparing different backward references distri…
48 ## Backward distance file format
55 More verbose explanation: for each backward reference there is a position-distance pair, also a cop…
/external/pytorch/torch/_library/
autograd.py
62 def backward(ctx, *grads): function
74 f"Trying to backward through {op} but no autograd "
84 "backward": staticmethod(backward),
112 orig_backward = cls.backward
149 "NYI: calling supports_tensorlist autograd.Function.backward directly. "
174 # Assume that any Nones in the backward are Tensors.
175 # If the forward has an arg that is [1, 2, 3], the backward should
177 # If the forward has an arg that is [tensor, tensor], the backward
184 f"Expected the return from backward to be of the same structure "
185 f"as the inputs. Got: {grad_inputs_spec} (return from backward), "
[all …]
/external/pytorch/test/distributed/pipelining/
test_backward.py
35 # Forward and backward in stage manner
47 ref_loss.backward()
74 # Forward, then backward of loss with respect to inputs
87 ref_loss.backward()
109 # Forward, then backward of loss with respect to inputs
119 # backward of loss with respect to weights
125 ref_loss.backward()
156 # Forward, then backward of loss with respect to inputs
167 # backward of loss with respect to weights
174 ref_loss.backward()
/external/pytorch/benchmarks/fastrnns/
factory.py
20 (options) -> (inputs, params, forward, backward_setup, backward)
27 Then, we pass backward_inputs to backward. If None, then it is assumed to
29 backward: Given `output = backward_setup(*forward(*inputs))`, performs
32 fastrnns.bench times the forward and backward invocations.
37 "ModelDef", ["inputs", "params", "forward", "backward_setup", "backward"]
55 return output.backward(grad_output, **kwargs)
65 backward=simple_backward,
77 backward=simple_backward,
106 backward=simple_backward,
134 backward=simple_backward,
[all …]
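
The factory.py hits describe the ModelDef protocol: a factory returns inputs, params, a forward callable, a backward_setup that turns forward outputs into backward inputs, and a backward callable, which fastrnns.bench then times. An illustrative sketch of that shape; only the field names and the simple_backward pattern come from the hits, the toy model is made up:

    from collections import namedtuple
    import torch

    ModelDef = namedtuple(
        "ModelDef", ["inputs", "params", "forward", "backward_setup", "backward"]
    )

    def simple_backward(output, grad_output, **kwargs):
        # Matches the pattern in the hits: run backward on the forward output.
        return output.backward(grad_output, **kwargs)

    def toy_factory(seq_len=8, hidden=16):
        w = torch.randn(hidden, hidden, requires_grad=True)
        x = torch.randn(seq_len, hidden)

        def forward(x):
            return x @ w

        def backward_setup(output, seed=None):
            # Produce the (output, grad_output) pair consumed by `backward`.
            return output, torch.randn_like(output)

        return ModelDef(inputs=(x,), params=(w,), forward=forward,
                        backward_setup=backward_setup, backward=simple_backward)

    # Usage, mirroring how the benchmark would invoke the pieces:
    model = toy_factory()
    out = model.forward(*model.inputs)
    model.backward(*model.backward_setup(out))
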
