
Searched full:autograd (Results 1 – 25 of 1001) sorted by relevance


/external/pytorch/
pt_template_srcs.bzl
115 …"autograd/generated/ADInplaceOrViewTypeEverything.cpp": ["autograd/generated/ADInplaceOrViewTypeEv…
116 … "autograd/generated/ADInplaceOrViewType_0.cpp": ["autograd/generated/ADInplaceOrViewType_0.cpp"],
117 … "autograd/generated/ADInplaceOrViewType_1.cpp": ["autograd/generated/ADInplaceOrViewType_1.cpp"],
118 "autograd/generated/Functions.cpp": ["autograd/generated/Functions.cpp"],
119 "autograd/generated/Functions.h": ["autograd/generated/Functions.h"],
120 … "autograd/generated/TraceTypeEverything.cpp": ["autograd/generated/TraceTypeEverything.cpp"],
121 "autograd/generated/TraceType_0.cpp": ["autograd/generated/TraceType_0.cpp"],
122 "autograd/generated/TraceType_1.cpp": ["autograd/generated/TraceType_1.cpp"],
123 "autograd/generated/TraceType_2.cpp": ["autograd/generated/TraceType_2.cpp"],
124 "autograd/generated/TraceType_3.cpp": ["autograd/generated/TraceType_3.cpp"],
[all …]
build.bzl
136 name = "generated-autograd-headers",
258 "torch/csrc/autograd/generated/python_functions.h",
259 "torch/csrc/autograd/generated/python_return_types.h",
263 "torch/csrc/autograd/generated/Functions.h",
264 "torch/csrc/autograd/generated/VariableType.h",
265 "torch/csrc/autograd/generated/ViewFuncs.h",
266 "torch/csrc/autograd/generated/variable_factories.h",
280 "torch/csrc/autograd/generated/python_functions_0.cpp",
281 "torch/csrc/autograd/generated/python_functions_1.cpp",
282 "torch/csrc/autograd/generated/python_functions_2.cpp",
[all …]
/external/pytorch/torch/csrc/distributed/autograd/engine/
dist_engine.h
6 #include <torch/csrc/autograd/engine.h>
7 #include <torch/csrc/autograd/function.h>
8 #include <torch/csrc/autograd/functions/basic_ops.h>
9 #include <torch/csrc/distributed/autograd/context/context.h>
13 namespace autograd {
19 // passes. This engine relies heavily on the vanilla autograd engine and tries
21 // distributed aspects of autograd and tries to hook into the autograd engine
24 // Unlike the vanilla autograd engine, the distributed autograd engine
33 // these variables and accumulate all the gradients in the current autograd
34 // context on each node. This method is used to kickoff distributed autograd
[all …]
/external/pytorch/torch/csrc/distributed/autograd/functions/
sendrpc_backward.h
3 #include <torch/csrc/autograd/function.h>
7 namespace autograd {
9 // As part of our distributed autograd implementation, whenever we send an RPC
10 // from one node to another, we add a 'SendRpcBackward' autograd function to the
11 // autograd graph. This is more or less a placeholder function that is used to
12 // kickoff the autograd engine on the current worker on the backward pass. The
13 // edges for this autograd function are the inputs to the RPC method.
16 // autograd engine which eventually runs the rest of the autograd graph.
17 struct TORCH_API SendRpcBackward : public torch::autograd::Node {
19 torch::autograd::variable_list apply(
[all …]
recvrpc_backward.h
3 #include <torch/csrc/autograd/function.h>
4 #include <torch/csrc/distributed/autograd/context/context.h>
5 #include <torch/csrc/distributed/autograd/rpc_messages/autograd_metadata.h>
10 namespace autograd {
15 // As part of our distributed autograd implementation, whenever we receive an
16 // RPC from a node, we add a 'RecvRpcBackward' autograd function to the
17 // autograd graph. This is more or less a placeholder function that is used to
19 // RPC function are the inputs to this autograd function.
20 class TORCH_API RecvRpcBackward : public torch::autograd::Node {
28 torch::autograd::variable_list apply(
[all …]
recvrpc_backward.cpp
3 #include <torch/csrc/distributed/autograd/functions/recvrpc_backward.h>
4 #include <torch/csrc/distributed/autograd/rpc_messages/propagate_gradients_req.h>
9 namespace autograd { namespace
11 using torch::autograd::Variable;
12 using torch::autograd::variable_list;
40 "Autograd context no longer valid! This usually ", in apply()
41 "means the autograd context was cleaned up by a different thread due ", in apply()
44 // Send the gradients over the wire and record the future in the autograd in apply()
63 // need to return anything for any downstream autograd function. in apply()
67 } // namespace autograd
/external/pytorch/torch/testing/_internal/optests/
autograd_registration.py
20 """Check if autograd was registered correctly (for the operator).
22 Operators should have "autograd support" registered directly to an
23 autograd dispatch key.
32 Here are some best practices if you do find your autograd is
35 and you wish the operator to decompose and get autograd support
38 - If you're adding an autograd formula for the operator, the correct
39 thing to do is to register an autograd.Function to
40 DispatchKey::Autograd (preferred) or one of the
41 DispatchKey::Autograd<BACKEND> keys. It is NOT OK to register
42 an autograd.Function to a backend (e.g. CPU/CUDA) key.
[all …]
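The autograd_registration check above verifies that an operator's autograd support is registered to an autograd dispatch key rather than to a backend (CPU/CUDA) key. As a rough illustration of the preferred route, here is a minimal sketch using the torch.library custom-op API; it assumes a recent PyTorch where torch.library.custom_op and torch.library.register_autograd are available, and the operator name mylib::mysin and its formula are made up for illustration.

    import torch

    # Hypothetical operator "mylib::mysin" (assumes torch.library.custom_op is available).
    @torch.library.custom_op("mylib::mysin", mutates_args=())
    def mysin(x: torch.Tensor) -> torch.Tensor:
        return torch.sin(x)

    def setup_context(ctx, inputs, output):
        (x,) = inputs
        ctx.save_for_backward(x)

    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * x.cos()

    # Autograd support goes to the Autograd dispatch key, not to a backend key.
    torch.library.register_autograd("mylib::mysin", backward, setup_context=setup_context)

    x = torch.randn(3, requires_grad=True)
    torch.ops.mylib.mysin(x).sum().backward()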
/external/pytorch/tools/
BUCK.bzl
111 name = "autograd",
112 srcs = glob(["autograd/*.py"]),
115 "autograd/deprecated.yaml",
116 "autograd/derivatives.yaml",
117 "autograd/templates/ADInplaceOrViewType.cpp",
118 "autograd/templates/Functions.cpp",
119 "autograd/templates/Functions.h",
120 "autograd/templates/TraceType.cpp",
121 "autograd/templates/VariableType.cpp",
122 "autograd/templates/VariableType.h",
[all …]
/external/pytorch/docs/source/rpc/
distributed_autograd.rst
3 .. _distributed-autograd-design:
5 Distributed Autograd Design
8 This note will present the detailed design for distributed autograd and walk
10 :ref:`autograd-mechanics` and the :ref:`distributed-rpc-framework` before
41 The main motivation behind distributed autograd is to enable running a backward
47 Autograd recording during the forward pass
50 PyTorch builds the autograd graph during the forward pass and this graph is
52 :ref:`how-autograd-encodes-history`.
54 For distributed autograd, we need to keep track of all RPCs during the forward
56 we attach ``send`` and ``recv`` functions to the autograd graph when we perform
[all …]
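The design note above records send and recv functions on the autograd graph during the forward pass and runs the backward pass inside a distributed autograd context. A minimal usage sketch with the public torch.distributed.autograd API, assuming RPC has already been initialized on every worker and that a peer named "worker1" is reachable (both assumptions, not part of the snippet above):

    import torch
    import torch.distributed.autograd as dist_autograd
    import torch.distributed.rpc as rpc

    # Assumes rpc.init_rpc(...) has already been called on every worker.
    with dist_autograd.context() as context_id:
        t1 = torch.rand(3, 3, requires_grad=True)
        t2 = torch.rand(3, 3, requires_grad=True)

        # The RPC attaches a 'send' function here and a 'recv' function on worker1.
        loss = rpc.rpc_sync("worker1", torch.add, args=(t1, t2)).sum()

        # Kick off the distributed backward pass from the given roots.
        dist_autograd.backward(context_id, [loss])

        # Gradients are accumulated per-context rather than in .grad fields.
        grads = dist_autograd.get_gradients(context_id)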
/external/pytorch/torch/csrc/distributed/autograd/
utils.h
3 #include <torch/csrc/distributed/autograd/context/context.h>
4 #include <torch/csrc/distributed/autograd/rpc_messages/rpc_with_autograd.h>
5 #include <torch/csrc/distributed/autograd/rpc_messages/rpc_with_profiling_req.h>
6 #include <torch/csrc/distributed/autograd/rpc_messages/rpc_with_profiling_resp.h>
10 namespace autograd {
12 // This method is used to attach the 'send' autograd function to the autograd
13 // graph when we use RPC. This method creates a new 'send' autograd function
16 // autograd context. Finally, the RPC message is updated with appropriate
17 // autograd information for the recipient.
23 // This method is used to attach the 'recv' autograd function to the autograd
[all …]
utils.cpp
3 #include <torch/csrc/autograd/functions/utils.h>
4 #include <torch/csrc/autograd/profiler.h>
5 #include <torch/csrc/distributed/autograd/context/container.h>
6 #include <torch/csrc/distributed/autograd/functions/recvrpc_backward.h>
7 #include <torch/csrc/distributed/autograd/functions/sendrpc_backward.h>
8 #include <torch/csrc/distributed/autograd/utils.h>
15 namespace autograd { namespace
17 using torch::distributed::autograd::AutogradMetadata;
18 using torch::distributed::autograd::RpcWithAutograd;
29 // Attach autograd information only for tensors requiring grad. in addSendRpcBackward()
[all …]
init.cpp
1 #include <torch/csrc/autograd/python_cpp_function.h>
2 #include <torch/csrc/distributed/autograd/autograd.h>
11 namespace autograd { namespace
20 THPObjectPtr(PyImport_ImportModule("torch.distributed.autograd")); in dist_autograd_init()
32 "_distributed_autograd", "distributed autograd bindings"); in dist_autograd_init()
54 torch::autograd::functionToPyObject( in dist_autograd_init()
72 torch::autograd::functionToPyObject( in dist_autograd_init()
147 assumes all RPC messages sent in the same distributed autograd context in dist_autograd_init()
148 across workers would be part of the autograd graph during the backward pass. in dist_autograd_init()
150 We use the provided roots to discover the autograd graph and compute in dist_autograd_init()
[all …]
autograd.h
3 #include <torch/csrc/distributed/autograd/context/container.h>
4 #include <torch/csrc/distributed/autograd/engine/dist_engine.h>
8 namespace autograd {
10 using torch::autograd::variable_list;
12 /// C++ API of Distributed Autograd that kicks off the distributed backward pass
15 /// distributed autograd context across workers would be part of the autograd
18 /// We use the provided roots to discover the autograd graph and compute
20 /// autograd computation is done.
24 /// \param context_id The autograd context id for which we should retrieve the
26 /// \param roots Tensors which represent the roots of the autograd computation.
[all …]
/external/pytorch/docs/source/
autograd.rst
4 Automatic differentiation package - torch.autograd
7 .. automodule:: torch.autograd
8 .. currentmodule:: torch.autograd
49 This section contains the higher level API for the autograd that builds on the basic API above
87 :func:`torch.autograd.backward` or :func:`torch.Tensor.backward`
136 Supporting in-place operations in autograd is a hard matter, and we discourage
137 their use in most cases. Autograd's aggressive buffer freeing and reuse makes
157 use autograd with tensors. Autograd automatically supports Tensors with
173 Tensor autograd functions
244 .. automodule:: torch.autograd.gradcheck
[all …]
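The torch.autograd reference above covers the basic reverse-mode entry points (torch.autograd.backward / torch.Tensor.backward and torch.autograd.grad) as well as the gradcheck module. A short sketch of those entry points; tensor shapes and functions are arbitrary, and gradcheck expects double-precision inputs:

    import torch

    x = torch.randn(3, requires_grad=True)
    loss = (x ** 2).sum()

    # Reverse-mode AD: populate .grad fields ...
    loss.backward()
    print(x.grad)  # equals 2 * x

    # ... or compute gradients functionally without touching .grad.
    (gx,) = torch.autograd.grad((x ** 2).sum(), (x,))

    # Numerically check a gradient formula; gradcheck wants double precision.
    inp = torch.randn(4, dtype=torch.double, requires_grad=True)
    torch.autograd.gradcheck(lambda t: (t ** 3).sum(), (inp,))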
/external/pytorch/torch/csrc/distributed/autograd/context/
container.h
6 #include <torch/csrc/distributed/autograd/context/context.h>
10 namespace autograd {
13 // autograd context for each autograd pass and also cleans up data for an
14 // autograd pass once its done.
16 // Each autograd pass is assigned a unique autograd_context_id and all data for
23 // id, which is used to associate send/recv autograd function pairs. The format
37 // Create a new context for a distributed autograd pass.
40 // Clean up resources for a given context_id once the autograd pass is done.
46 // Releases an autograd context if it is present on this node. Also sends RPC
54 // Retrieve the autograd context for a given context_id.
[all …]
context.h
7 #include <torch/csrc/autograd/engine.h>
8 #include <torch/csrc/distributed/autograd/functions/recvrpc_backward.h>
9 #include <torch/csrc/distributed/autograd/functions/sendrpc_backward.h>
14 namespace autograd {
19 // autograd pass on a worker.
26 // Retrieves the autograd context id for this context.
29 // Records a 'send' autograd function for this context with the provided
35 // Records a 'recv' autograd function for this context with the provided
63 const torch::autograd::Variable& variable,
72 // workerIDs are added here when we attach a send function to this autograd
[all …]
/external/pytorch/torch/csrc/autograd/
function_hook.h
8 namespace torch::dynamo::autograd {
11 } // namespace torch::dynamo::autograd
15 namespace torch::autograd {
23 // only implemented for python hooks, registers hook with compiled autograd
24 virtual void compiled_args(torch::dynamo::autograd::CompiledNodeArgs& args) { in compiled_args()
26 std::string("compiled_args nyi, see [Note: Compiled Autograd] ") + in compiled_args()
36 // only implemented for python hooks, registers hook with compiled autograd
37 virtual void compiled_args(torch::dynamo::autograd::CompiledNodeArgs& args) { in compiled_args()
39 std::string("compiled_args nyi, see [Note: Compiled Autograd] ") + in compiled_args()
48 // autograd
[all …]
python_engine.cpp
1 #include <torch/csrc/autograd/python_engine.h>
9 #include <torch/csrc/autograd/edge.h>
10 #include <torch/csrc/autograd/engine.h>
11 #include <torch/csrc/autograd/function.h>
12 #include <torch/csrc/autograd/functions/basic_ops.h>
13 #include <torch/csrc/autograd/python_anomaly_mode.h>
14 #include <torch/csrc/autograd/python_cpp_function.h>
15 #include <torch/csrc/autograd/python_function.h>
16 #include <torch/csrc/autograd/python_saved_variable_hooks.h>
27 using namespace torch::autograd;
[all …]
python_hook.h
3 #include <torch/csrc/autograd/function_hook.h>
7 namespace torch::dynamo::autograd {
9 } // namespace torch::dynamo::autograd
11 namespace torch::autograd {
17 void compiled_args(torch::dynamo::autograd::CompiledNodeArgs& args) override;
26 void compiled_args(torch::dynamo::autograd::CompiledNodeArgs& args) override;
36 void compiled_args(torch::dynamo::autograd::CompiledNodeArgs& args) override;
48 void compiled_args(torch::dynamo::autograd::CompiledNodeArgs& args) override;
51 torch::dynamo::autograd::SwapSavedVariables& saved) override;
55 } // namespace torch::autograd
/external/pytorch/docs/source/notes/
extending.func.rst
1 .. _func-autograd-function:
3 Extending torch.func with autograd.Function
6 .. currentmodule:: torch.autograd
8 So you'd like to use :class:`torch.autograd.Function` with the :mod:`torch.func`
14 have it work with function transforms. That is, the :class:`torch.autograd.Function`'s
19 PyTorch combines both of these concepts into :class:`torch.autograd.Function`.
24 This guide assumes you are familiar with :ref:`extending-autograd`,
25 which explains how to use :class:`torch.autograd.Function`.
27 :class:`torch.autograd.Function` can either have a :meth:`~Function.forward` that accepts a ctx obj…
51 the :class:`torch.autograd.Function` needs a :meth:`~Function.backward` staticmethod.
[all …]
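The extending.func note above describes the torch.func-compatible style of torch.autograd.Function, in which forward() no longer takes a ctx and saving for backward happens in a separate setup_context() staticmethod. A minimal sketch of that structure; MyExp is a made-up example, and while grad/vjp transforms compose with it, vmap needs extra support per the note above:

    import torch

    class MyExp(torch.autograd.Function):
        # torch.func-compatible style: forward() takes no ctx ...
        @staticmethod
        def forward(x):
            return torch.exp(x)

        # ... and saving for backward happens in setup_context().
        @staticmethod
        def setup_context(ctx, inputs, output):
            ctx.save_for_backward(output)

        @staticmethod
        def backward(ctx, grad_output):
            (result,) = ctx.saved_tensors
            return grad_output * result

    x = torch.randn(3, requires_grad=True)
    MyExp.apply(x).sum().backward()

    # The split forward/setup_context also composes with torch.func transforms:
    g = torch.func.grad(lambda t: MyExp.apply(t).sum())(torch.randn(3))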
/external/pytorch/torch/csrc/api/include/torch/nn/modules/
_functions.h
3 #include <torch/csrc/autograd/custom_function.h>
4 #include <torch/csrc/autograd/variable.h>
12 class CrossMapLRN2d : public torch::autograd::Function<CrossMapLRN2d> {
14 static torch::autograd::Variable forward(
15 torch::autograd::AutogradContext* ctx,
16 const torch::autograd::Variable& input,
19 static torch::autograd::variable_list backward(
20 torch::autograd::AutogradContext* ctx,
21 torch::autograd::variable_list grad_output);
/external/pytorch/aten/src/ATen/core/
VariableFallbackKernel.cpp
8 * Since tensors always have the Autograd set, but custom operators
9 * usually don't have a kernel registered for Autograd, the dispatcher
11 * Note that this is not a correct autograd implementation. It will just
13 * If you want a custom operator to work with autograd, you need to use
14 * autograd::Function so that the custom operator implementation knows how to
15 * do autograd.
28 // NOTE [mobile/edge builds and the autograd fallback]
30 // autograd kernels for built-in operators (VariableTypeEverything.cpp).
32 // - we don't care about having a nice autograd fallback that warns if
33 // an operator has incorrect autograd support. If you're running
[all …]
/external/pytorch/tools/test/
test_gen_backend_stubs.py
92 autograd:
178 …# The backend is valid, but doesn't have a valid autograd key. They can't override autograd kernel…
179 …# Only using Vulkan here because it has a valid backend key but not an autograd key- if this chang…
186 autograd:
193 …perator group, currently all operators must either be registered to the backend or autograd kernel.
201 autograd:
206 …autograd key. They cannot be mix and matched. If this is something you need, feel free to create a…
209 …perator group, currently all operators must either be registered to the backend or autograd kernel.
217 autograd:
222 …autograd key. They cannot be mix and matched. If this is something you need, feel free to create a…
[all …]
/external/pytorch/test/profiler/
test_profiler_tree.py
287 autograd::engine::evaluate_function: PowBackward0
300 autograd::engine::evaluate_function: SubBackward0
303 autograd::engine::evaluate_function: AddBackward0
305 autograd::engine::evaluate_function: torch::autograd::AccumulateGrad
306 torch::autograd::AccumulateGrad
310 autograd::engine::evaluate_function: torch::autograd::AccumulateGrad
311 torch::autograd::AccumulateGrad
321 with torch.autograd.profiler.record_function("Top level Annotation"):
322 with torch.autograd.profiler.record_function("First Annotation"):
327 _ = torch.autograd.profiler.record_function(
[all …]
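The profiler-tree test above checks for autograd::engine::evaluate_function events and torch.autograd.profiler.record_function annotations in the collected trace. A small sketch of how such a trace is typically produced; the label text and workload here are made up:

    import torch
    from torch.profiler import profile

    x = torch.randn(8, requires_grad=True)

    with profile() as prof:
        with torch.autograd.profiler.record_function("Top level Annotation"):
            loss = x.pow(2).sub(x).sum()
        loss.backward()  # emits autograd::engine::evaluate_function: PowBackward0, SubBackward0, ...

    print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=20))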
/external/pytorch/test/inductor/
test_compiled_autograd.py
90 with torch.autograd.set_multithreading_enabled(False):
485 # Freeze compiled autograd graph
632 gy, gz = torch.autograd.grad(result, inputs=[y, z])
642 class UnreachableBwd(torch.autograd.Function):
661 gz = torch.autograd.grad(result, inputs=[z])
692 class UnreachableBwd(torch.autograd.Function):
734 torch.compile(lambda: torch.autograd.backward(loss, inputs=[x]))()
739 torch.compile(lambda: torch.autograd.backward(loss, inputs=[y]))()
921 class MySin(torch.autograd.Function):
943 class MyFn(torch.autograd.Function):
[all …]
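The compiled-autograd tests above exercise torch.autograd.grad with an explicit inputs= list, custom autograd.Functions, and torch.compile-wrapped backward calls. A stripped-down sketch of the torch.autograd.grad(..., inputs=[...]) pattern, not the compiled-autograd harness itself; whether the compiled variant captures a single graph depends on the PyTorch version:

    import torch

    y = torch.randn(3, requires_grad=True)
    z = torch.randn(3, requires_grad=True)
    result = (y * z).sum()

    # Gradients only w.r.t. the listed inputs; .grad fields stay untouched.
    gy, gz = torch.autograd.grad(result, inputs=[y, z])

    # The same computation can also be wrapped in torch.compile, as in the tests above:
    compiled = torch.compile(lambda a, b: torch.autograd.grad((a * b).sum(), inputs=[a, b]))
    gy2, gz2 = compiled(y, z)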
