
Searched refs:BackendConfig (Results 1 – 25 of 28) sorted by relevance


/external/pytorch/torch/ao/quantization/backend_config/
utils.py
10 from .backend_config import BackendConfig, BackendPatternConfig, DTypeConfig
30 backend_config: BackendConfig, argument
38 def get_qat_module_classes(backend_config: BackendConfig) -> Tuple[type, ...]: argument
46 def get_fused_module_classes(backend_config: BackendConfig) -> Tuple[type, ...]: argument
55 backend_config: BackendConfig, argument
64 backend_config: BackendConfig, argument
77 backend_config: BackendConfig, argument
91 backend_config: BackendConfig, argument
101 backend_config: BackendConfig, argument
120 backend_config: BackendConfig, argument
native.py
18 from .backend_config import BackendConfig, DTypeConfig
108 def get_test_only_legacy_native_backend_config() -> BackendConfig:
145 BackendConfig("_native_and_fp16")
169 def get_native_backend_config() -> BackendConfig:
195 BackendConfig("native")
tensorrt.py
12 BackendConfig,
25 def get_tensorrt_backend_config() -> BackendConfig:
79 BackendConfig("tensorrt")
fbgemm.py
16 from .backend_config import BackendConfig, DTypeConfig
85 def get_fbgemm_backend_config() -> BackendConfig:
109 BackendConfig("fbgemm")
x86.py
16 from .backend_config import BackendConfig, DTypeConfig
82 def get_x86_backend_config() -> BackendConfig:
106 BackendConfig("x86")
README.md
1 ## BackendConfig Overview
3 BackendConfig allows PyTorch quantization to work with different backend or kernel libraries. These…
5 BackendConfig configures quantization behavior in terms of operator patterns. For each operator pat…
21 BackendConfig throughout the code base. This allows PyTorch Quantization to work with all first-pa…
25 The operator patterns used in BackendConfig are float modules, functional operators, pytorch operat…
37 …re complex scenarios such as graph patterns. For these use cases, the BackendConfig API offers an …
58 ## BackendConfig Implementation
60 The BackendConfig is comprised of a list of BackendPatternConfigs, each of which define the specifi…
65 BackendConfig,
104 backend_config = BackendConfig("my_backend") \
[all …]
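
The README excerpts above describe a BackendConfig as a named collection of BackendPatternConfig entries, each specifying how one operator pattern is quantized. As a rough illustration of that shape, here is a minimal sketch of a single-pattern config along the lines of the BackendConfig("my_backend") hit shown above; the dtype values and module mappings are illustrative choices, not taken from any of the files listed here.

import torch
import torch.ao.nn.qat as nnqat
import torch.ao.nn.quantized.reference as nnqr
from torch.ao.quantization.backend_config import (
    BackendConfig,
    BackendPatternConfig,
    DTypeConfig,
    ObservationType,
)

# Illustrative int8 dtype support; a real backend lists the dtypes its kernels accept.
weighted_int8_dtype_config = DTypeConfig(
    input_dtype=torch.quint8,
    output_dtype=torch.quint8,
    weight_dtype=torch.qint8,
    bias_dtype=torch.float,
)

# One pattern entry: how torch.nn.Linear is observed, swapped for QAT, and lowered.
linear_config = (
    BackendPatternConfig(torch.nn.Linear)
    .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT)
    .add_dtype_config(weighted_int8_dtype_config)
    .set_root_module(torch.nn.Linear)
    .set_qat_module(nnqat.Linear)
    .set_reference_quantized_module(nnqr.Linear)
)

# A BackendConfig is a named collection of such pattern entries.
backend_config = BackendConfig("my_backend").set_backend_pattern_config(linear_config)
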
qnnpack.py
15 from .backend_config import BackendConfig, DTypeConfig, DTypeWithConstraints
115 def get_qnnpack_backend_config() -> BackendConfig:
154 BackendConfig("qnnpack")
backend_config.py
289 class BackendConfig: class
367 def set_name(self, name: str) -> BackendConfig:
374 def set_backend_pattern_config(self, config: BackendPatternConfig) -> BackendConfig:
388 ) -> BackendConfig:
405 def from_dict(cls, backend_config_dict: Dict[str, Any]) -> BackendConfig:
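
The class listing above exposes builder-style setters (set_name, set_backend_pattern_config) alongside a from_dict constructor. A short sketch of how those fit together; the config name and patterns are arbitrary examples.

import torch
from torch.ao.quantization.backend_config import BackendConfig, BackendPatternConfig

# Builder-style construction, mirroring set_name / set_backend_pattern_config above.
conf = (
    BackendConfig("name1")
    .set_backend_pattern_config(BackendPatternConfig(torch.nn.Linear))
    .set_backend_pattern_config(BackendPatternConfig(torch.nn.Conv2d))
)

# from_dict accepts the same dict shape that to_dict produces, so a round trip
# reconstructs an equivalent config.
restored = BackendConfig.from_dict(conf.to_dict())
assert restored.name == "name1"
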
_qnnpack_pt2e.py
6 BackendConfig,
176 BackendConfig("qnnpack_pytorch_2.0_export")
__init__.py
2 BackendConfig,
executorch.py
20 BackendConfig,
487 def get_executorch_backend_config() -> BackendConfig:
492 BackendConfig("executorch")
onednn.py
27 BackendConfig,
624 def get_onednn_backend_config() -> BackendConfig:
629 BackendConfig("onednn")
/external/pytorch/torch/ao/quantization/
quantize_fx.py
9 from .backend_config import BackendConfig, get_tensorrt_backend_config # noqa: F401
77 backend_config: Union[BackendConfig, Dict[str, Any], None] = None, argument
97 backend_config: Union[BackendConfig, Dict[str, Any], None] = None, argument
168 backend_config: Union[BackendConfig, Dict[str, Any], None] = None, argument
207 backend_config: Union[BackendConfig, Dict[str, Any], None] = None, argument
258 backend_config: Union[BackendConfig, Dict[str, Any], None] = None, argument
408 backend_config: Union[BackendConfig, Dict[str, Any], None] = None, argument
516 backend_config: Union[BackendConfig, Dict[str, Any], None] = None, argument
560 backend_config: Union[BackendConfig, Dict[str, Any], None] = None, argument
627 backend_config: Union[BackendConfig, Dict[str, Any], None] = None, argument
[all …]
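
Every prepare/convert entry point above accepts backend_config either as a BackendConfig or as its dict form. A minimal end-to-end sketch, assuming the stock native backend config and a toy Linear + ReLU model; both are illustrative, not drawn from the files listed here.

import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.backend_config import get_native_backend_config
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 16),)

# The same backend_config is passed to both steps so observation and lowering agree.
backend_config = get_native_backend_config()
prepared = prepare_fx(
    model,
    get_default_qconfig_mapping("fbgemm"),
    example_inputs,
    backend_config=backend_config,
)
prepared(*example_inputs)  # calibration with representative data would go here
quantized = convert_fx(prepared, backend_config=backend_config)
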
/external/pytorch/torch/ao/quantization/fx/
custom_config.py
8 from torch.ao.quantization.backend_config import BackendConfig
43 backend_config: Optional[BackendConfig]
86 backend_config: Optional[BackendConfig], argument
106 backend_config: Optional[BackendConfig], argument
236 def _get_backend_config(obj: Any, dict_key: str) -> Optional[BackendConfig]:
240 if isinstance(obj, BackendConfig) or obj is None:
243 return BackendConfig.from_dict(obj)
fuse.py
6 BackendConfig,
36 backend_config: Union[BackendConfig, Dict[str, Any], None] = None, argument
57 backend_config = BackendConfig.from_dict(backend_config)
lstm_utils.py
13 from torch.ao.quantization.backend_config import BackendConfig
22 backend_config: Optional[BackendConfig] = None, argument
145 backend_config: Optional[BackendConfig] = None, argument
quantize_handler.py
7 BackendConfig,
142 backend_config: BackendConfig, argument
prepare.py
18 BackendConfig,
262 backend_config: BackendConfig, argument
392 backend_config: BackendConfig, argument
432 parent_backend_config: Optional[BackendConfig], argument
434 QConfigMapping, Tuple[Any, ...], PrepareCustomConfig, Optional[BackendConfig]
510 backend_config: BackendConfig, argument
565 backend_config: BackendConfig, argument
809 backend_config: Optional[BackendConfig] = None, argument
971 backend_config: Optional[BackendConfig] = None, argument
1451 backend_config: BackendConfig, argument
[all …]
fuse_handler.py
6 from torch.ao.quantization.backend_config import BackendConfig
125 backend_config: BackendConfig, argument
convert.py
11 BackendConfig,
538 def _run_weight_observers(observed: GraphModule, backend_config: BackendConfig) -> None: argument
667 backend_config: Optional[BackendConfig], argument
731 backend_config: BackendConfig, argument
985 backend_config: Union[BackendConfig, Dict[str, Any], None] = None, argument
1047 backend_config = BackendConfig.from_dict(backend_config)
qconfig_mapping_utils.py
9 from torch.ao.quantization.backend_config import BackendConfig, DTypeConfig
387 qconfig_mapping: QConfigMapping, backend_config: BackendConfig argument
README.md
4 float_model QConfigMapping BackendConfig
36 BackendConfig:::nofs --> prepare_fx
50 …ike fusion, inserting stubs are fully automated and controlled by QConfigMapping and BackendConfig.
67 …tion for each step, and then talk about the corresponding settings in BackendConfig. We’ll follow …
290 BackendConfig(nniqat.LinearReLU)
386 BackendConfig(nniqat.LinearReLU)
449 … requirements for each pattern. For more detail, please refer to the [BackendConfig README](/torch…
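
The FX README hits above describe fusion and QAT module swaps being driven by pattern entries such as the one for nniqat.LinearReLU. A hedged sketch of what such entries can look like; the fuser function and module mappings here are illustrative and not copied from that README.

import torch
import torch.ao.nn.intrinsic as nni
import torch.ao.nn.intrinsic.qat as nniqat
import torch.ao.nn.quantized.reference as nnqr
from torch.ao.quantization.backend_config import BackendPatternConfig, ObservationType

def fuse_linear_relu(is_qat, linear, relu):
    # Minimal fuser for illustration; production configs reuse helpers from
    # torch.ao.quantization.fuser_method_mappings.
    return nni.LinearReLU(linear, relu)

# Fusion entry: a (Linear, ReLU) pair is rewritten into the fused nni.LinearReLU module.
fusion_config = (
    BackendPatternConfig((torch.nn.Linear, torch.nn.ReLU))
    .set_fuser_method(fuse_linear_relu)
    .set_fused_module(nni.LinearReLU)
)

# QAT entry: set_qat_module swaps the fused module for nniqat.LinearReLU during QAT prepare.
fused_config = (
    BackendPatternConfig(nni.LinearReLU)
    .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT)
    .set_root_module(torch.nn.Linear)
    .set_qat_module(nniqat.LinearReLU)
    .set_reference_quantized_module(nnqr.Linear)
)
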
/external/pytorch/test/quantization/core/
test_backend_config.py
10 BackendConfig,
276 conf = BackendConfig("name1")
282 conf = BackendConfig("name1")
305 conf = BackendConfig.from_dict(conf_dict)
320 … conf = BackendConfig("name1").set_backend_pattern_config(op1).set_backend_pattern_config(op2)
/external/pytorch/torch/ao/ns/
_numeric_suite_fx.py
112 from torch.ao.quantization.backend_config import BackendConfig
854 backend_config: BackendConfig, argument
963 backend_config: BackendConfig, argument
1044 backend_config: BackendConfig, argument
/external/pytorch/docs/source/
quantization-support.rst
89 This module contains BackendConfig, a config object that defines how quantization is supported
100 BackendConfig
