
Searched full:linear_clamp_run (Results 1 – 13 of 13) sorted by relevance

/external/pytorch/aten/src/ATen/native/xnnpack/
RegisterOpContextClass.cpp 79 …m.def(TORCH_SELECTIVE_SCHEMA("prepacked::linear_clamp_run(Tensor X, __torch__.torch.classes.xnnpac… in TORCH_LIBRARY()
88 …m.impl(TORCH_SELECTIVE_NAME("prepacked::linear_clamp_run"), TORCH_FN(internal::linear::linear_clam… in TORCH_LIBRARY_IMPL()
Linear.h 18 Tensor linear_clamp_run(const Tensor& input, const c10::intrusive_ptr<xnnpack::LinearOpContext>& op…
Linear.cpp 183 Tensor linear_clamp_run( in linear_clamp_run() function
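The declarations above show that `linear_clamp_run` takes an input tensor plus a prepacked `xnnpack.LinearOpContext` holding the weight, bias, and output bounds. Semantically the op computes a fully-connected layer and then clamps the result into `[output_min, output_max]`. A minimal pure-Python sketch of that semantics (a hypothetical stand-in, not the XNNPACK implementation; `ctx` here is a plain tuple rather than a real LinearOpContext):

```python
def linear_clamp_run(x, ctx):
    """Sketch of prepacked::linear_clamp_run semantics:
    y = clamp(x @ W.T + b, output_min, output_max).

    `ctx` is a hypothetical (weight, bias, output_min, output_max) tuple
    standing in for the prepacked xnnpack.LinearOpContext; the real op
    dispatches to a prepacked XNNPACK kernel instead of this loop.
    """
    weight, bias, out_min, out_max = ctx
    in_features = len(x[0])
    out_features = len(weight)
    y = []
    for row_in in x:
        row_out = []
        for j in range(out_features):
            # Dot product of the input row with output unit j, plus bias.
            acc = bias[j] + sum(row_in[k] * weight[j][k] for k in range(in_features))
            # Clamp into the bounds baked into the prepacked context.
            row_out.append(min(max(acc, out_min), out_max))
        y.append(row_out)
    return y
```

With an identity weight, zero bias, and bounds `[0, 1]`, an input of `[[2.0, -3.0]]` comes out as `[[1.0, 0.0]]`: both elements are clamped.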
/external/pytorch/test/
test_xnnpack_integration.py 52 output_linearprepacked = torch.ops.prepacked.linear_clamp_run(
73 output_linearprepacked = torch.ops.prepacked.linear_clamp_run(
268 return torch.ops.prepacked.linear_clamp_run(x, self.packed_weight_bias)
673 o = torch.ops.prepacked.linear_clamp_run(
840 "prepacked::linear_clamp_run": 1,
1006 "prepacked::linear_clamp_run": 1,
1024 "prepacked::linear_clamp_run": 1,
1046 "prepacked::linear_clamp_run": 1,
1068 "prepacked::linear_clamp_run": 1,
1090 "prepacked::linear_clamp_run": 1,
[all …]
test_mobile_optimizer.py 116 .check_count("prepacked::linear_clamp_run", 1, exactly=True) \
128 .check_count("prepacked::linear_clamp_run", 1, exactly=True) \
141 .check_not("prepacked::linear_clamp_run") \
/external/pytorch/test/jit/
test_optimize_for_mobile_preserve_debug_info.py 134 "prepacked::linear_clamp_run": "aten::linear",
149 "prepacked::linear_clamp_run": "aten::linear",
226 "prepacked::linear_clamp_run": linear_activation_kind,
/external/pytorch/torch/csrc/jit/passes/
xnnpack_rewrite.cpp 103 %res = prepacked::linear_clamp_run(%input, %packed_weight_bias) in insertPrePackedLinearOp()
178 %res = prepacked::linear_clamp_run(%input, %packed_weight_bias) in fuseHardtanhWithPackedOps()
194 %linear_res = prepacked::linear_clamp_run(%input, %packed_weight_bias) in fuseHardtanhWithPackedOps()
228 %linear_res = prepacked::linear_clamp_run(%input, %packed_weight_bias) in fuseHardtanhWithPackedOps()
270 %res = prepacked::linear_clamp_run(%input, %packed_weight_bias) in fuseReluWithPackedOps()
288 %linear_res = prepacked::linear_clamp_run(%input, %packed_weight_bias) in fuseReluWithPackedOps()
324 %linear_res = prepacked::linear_clamp_run(%input, %packed_weight_bias) in fuseReluWithPackedOps()
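The `fuseHardtanhWithPackedOps` and `fuseReluWithPackedOps` hits above rewrite a `linear_clamp_run` followed by a clamp-style activation (relu clamps to `[0, +inf)`, hardtanh to `[min_val, max_val]`) into a single `linear_clamp_run` whose prepacked bounds absorb the activation. Conceptually this is just an intersection of the two clamp intervals; a hypothetical sketch of that bound folding (the names and tuple layout here are illustrative, not the pass's actual code):

```python
import math

def fuse_clamp_bounds(op_bounds, act_min, act_max):
    """Fold a following clamp-style activation into the output bounds of a
    prepacked linear op, as the JIT rewrite does conceptually.

    op_bounds: (output_min, output_max) already baked into the packed op.
    act_min/act_max: the activation's clamp range, e.g. relu = (0, +inf),
    hardtanh(-1, 1) = (-1, 1).  The fused op clamps to the intersection.
    """
    lo, hi = op_bounds
    return (max(lo, act_min), min(hi, act_max))

# Fusing relu into an unbounded packed linear tightens only the lower bound.
relu_fused = fuse_clamp_bounds((-math.inf, math.inf), 0.0, math.inf)
# Fusing hardtanh(-1, 1) tightens both bounds.
hardtanh_fused = fuse_clamp_bounds((-math.inf, math.inf), -1.0, 1.0)
```

After the rewrite, the standalone activation node is removed and the single fused `linear_clamp_run` applies the tightened bounds, which is why the tests above check the op count with `exactly=True`.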
/external/pytorch/torch/_export/passes/
replace_quantized_ops_with_standard_ops_pass.py 433 elif opname == "linear_clamp_run":
557 …For prepacked::conv2d_clamp_run and prepacked::linear_clamp_run, we directly convert them to aten.…
/external/pytorch/test/mobile/model_test/
model_ops.yaml 398 prepacked::linear_clamp_run: 36
coverage.yaml 648 - prepacked::linear_clamp_run
1017 prepacked::linear_clamp_run: 26
/external/pytorch/torch/csrc/jit/runtime/
symbolic_shape_registry.cpp 64 …{"prepacked::linear_clamp_run(Tensor X, __torch__.torch.classes.xnnpack.LinearOpContext W_prepack)… in conditionally_defined_ops()
/external/pytorch/test/export/
test_converter.py 1420 x = torch.ops.prepacked.linear_clamp_run(x, self.linear_op)
/external/pytorch/torch/csrc/jit/tensorexpr/
lowerings.cpp 42 …{"prepacked::linear_clamp_run(Tensor X, __torch__.torch.classes.xnnpack.LinearOpContext W_prepack)… in nnc_lowerings_lazy_registration()