/external/pytorch/torch/ao/quantization/backend_config/

utils.py
    10: from .backend_config import BackendConfig, BackendPatternConfig, DTypeConfig
    30: backend_config: BackendConfig,  (argument)
    38: def get_qat_module_classes(backend_config: BackendConfig) -> Tuple[type, ...]:
    46: def get_fused_module_classes(backend_config: BackendConfig) -> Tuple[type, ...]:
    55, 64, 77, 91, 101, 120: backend_config: BackendConfig,  (argument)

native.py
    18: from .backend_config import BackendConfig, DTypeConfig
    108: def get_test_only_legacy_native_backend_config() -> BackendConfig:
    145: BackendConfig("_native_and_fp16")
    169: def get_native_backend_config() -> BackendConfig:
    195: BackendConfig("native")

tensorrt.py
    12: BackendConfig,
    25: def get_tensorrt_backend_config() -> BackendConfig:
    79: BackendConfig("tensorrt")

fbgemm.py
    16: from .backend_config import BackendConfig, DTypeConfig
    85: def get_fbgemm_backend_config() -> BackendConfig:
    109: BackendConfig("fbgemm")

x86.py
    16: from .backend_config import BackendConfig, DTypeConfig
    82: def get_x86_backend_config() -> BackendConfig:
    106: BackendConfig("x86")
README.md
    1: ## BackendConfig Overview
    3: BackendConfig allows PyTorch quantization to work with different backend or kernel libraries. These…
    5: BackendConfig configures quantization behavior in terms of operator patterns. For each operator pat…
    21: …BackendConfig throughout the code base. This allows PyTorch Quantization to work with all first-pa…
    25: The operator patterns used in BackendConfig are float modules, functional operators, pytorch operat…
    37: …re complex scenarios such as graph patterns. For these use cases, the BackendConfig API offers an …
    58: ## BackendConfig Implementation
    60: The BackendConfig is comprised of a list of BackendPatternConfigs, each of which define the specifi…
    65: BackendConfig,
    104: backend_config = BackendConfig("my_backend") \
    [all …]
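The README excerpt at its line 104 (`backend_config = BackendConfig("my_backend") \`) points at the builder-style API. A hedged sketch of that chain for a single `torch.nn.Linear` pattern; the `"my_backend"` name and the specific dtype/module choices are illustrative and not taken from the README:

```python
# Illustrative builder chain; "my_backend" is not a shipped config.
import torch
from torch.ao.quantization.backend_config import (
    BackendConfig,
    BackendPatternConfig,
    DTypeConfig,
    ObservationType,
)

weighted_int8_dtype_config = DTypeConfig(
    input_dtype=torch.quint8,
    output_dtype=torch.quint8,
    weight_dtype=torch.qint8,
    bias_dtype=torch.float,
)

linear_config = (
    BackendPatternConfig(torch.nn.Linear)
    .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT)
    .add_dtype_config(weighted_int8_dtype_config)
    .set_root_module(torch.nn.Linear)
    .set_qat_module(torch.ao.nn.qat.Linear)
    .set_reference_quantized_module(torch.ao.nn.quantized.reference.Linear)
)

backend_config = BackendConfig("my_backend").set_backend_pattern_config(linear_config)
```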
qnnpack.py
    15: from .backend_config import BackendConfig, DTypeConfig, DTypeWithConstraints
    115: def get_qnnpack_backend_config() -> BackendConfig:
    154: BackendConfig("qnnpack")

backend_config.py
    289: class BackendConfig:
    367: def set_name(self, name: str) -> BackendConfig:
    374: def set_backend_pattern_config(self, config: BackendPatternConfig) -> BackendConfig:
    388: ) -> BackendConfig:
    405: def from_dict(cls, backend_config_dict: Dict[str, Any]) -> BackendConfig:

_qnnpack_pt2e.py
    6: BackendConfig,
    176: BackendConfig("qnnpack_pytorch_2.0_export")

__init__.py
    2: BackendConfig,

executorch.py
    20: BackendConfig,
    487: def get_executorch_backend_config() -> BackendConfig:
    492: BackendConfig("executorch")

onednn.py
    27: BackendConfig,
    624: def get_onednn_backend_config() -> BackendConfig:
    629: BackendConfig("onednn")
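backend_config.py also defines a dict representation (`from_dict` at its line 405, alongside `set_name` and `set_backend_pattern_config`). A small sketch of the dict round trip, assuming the matching `to_dict()` emits the same `{"name", "configs"}` layout that `from_dict` expects:

```python
# Sketch of the dict round trip implied by from_dict (backend_config.py:405);
# assumes to_dict() produces the layout from_dict can parse back.
import torch
from torch.ao.quantization.backend_config import BackendConfig, get_native_backend_config

native_dict = get_native_backend_config().to_dict()  # plain-dict form ("name" + "configs")
restored = BackendConfig.from_dict(native_dict)
print(restored.name, len(restored.configs))
```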
/external/pytorch/torch/ao/quantization/

quantize_fx.py
    9: from .backend_config import BackendConfig, get_tensorrt_backend_config  # noqa: F401
    77, 97, 168, 207, 258, 408, 516, 560, 627: backend_config: Union[BackendConfig, Dict[str, Any], None] = None,  (argument)
    [all …]
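quantize_fx.py is the main consumer: each entry point takes an optional `backend_config` and falls back to the native config when it is `None`. A hedged end-to-end sketch; the toy model and the single calibration call are placeholders:

```python
# Hedged sketch of the prepare_fx / convert_fx flow with an explicit BackendConfig.
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.backend_config import get_native_backend_config
from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 16),)
backend_config = get_native_backend_config()

prepared = prepare_fx(
    model,
    get_default_qconfig_mapping(),
    example_inputs,
    backend_config=backend_config,  # same default the library picks when None
)
prepared(*example_inputs)  # calibration
quantized = convert_fx(prepared, backend_config=backend_config)
```

Passing the same `backend_config` to both calls keeps prepare and convert consistent, which is also what happens internally when the argument is left as `None`.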
/external/pytorch/torch/ao/quantization/fx/

custom_config.py
    8: from torch.ao.quantization.backend_config import BackendConfig
    43: backend_config: Optional[BackendConfig]
    86, 106: backend_config: Optional[BackendConfig],  (argument)
    236: def _get_backend_config(obj: Any, dict_key: str) -> Optional[BackendConfig]:
    240: if isinstance(obj, BackendConfig) or obj is None:
    243: return BackendConfig.from_dict(obj)

fuse.py
    6: BackendConfig,
    36: backend_config: Union[BackendConfig, Dict[str, Any], None] = None,  (argument)
    57: backend_config = BackendConfig.from_dict(backend_config)

lstm_utils.py
    13: from torch.ao.quantization.backend_config import BackendConfig
    22, 145: backend_config: Optional[BackendConfig] = None,  (argument)

quantize_handler.py
    7: BackendConfig,
    142: backend_config: BackendConfig,  (argument)

prepare.py
    18: BackendConfig,
    262, 392: backend_config: BackendConfig,  (argument)
    432: parent_backend_config: Optional[BackendConfig],  (argument)
    434: QConfigMapping, Tuple[Any, ...], PrepareCustomConfig, Optional[BackendConfig]
    510, 565: backend_config: BackendConfig,  (argument)
    809, 971: backend_config: Optional[BackendConfig] = None,  (argument)
    1451: backend_config: BackendConfig,  (argument)
    [all …]

fuse_handler.py
    6: from torch.ao.quantization.backend_config import BackendConfig
    125: backend_config: BackendConfig,  (argument)

convert.py
    11: BackendConfig,
    538: def _run_weight_observers(observed: GraphModule, backend_config: BackendConfig) -> None:
    667: backend_config: Optional[BackendConfig],  (argument)
    731: backend_config: BackendConfig,  (argument)
    985: backend_config: Union[BackendConfig, Dict[str, Any], None] = None,  (argument)
    1047: backend_config = BackendConfig.from_dict(backend_config)

qconfig_mapping_utils.py
    9: from torch.ao.quantization.backend_config import BackendConfig, DTypeConfig
    387: qconfig_mapping: QConfigMapping, backend_config: BackendConfig  (argument)

README.md
    4: float_model QConfigMapping BackendConfig
    36: BackendConfig:::nofs --> prepare_fx
    50: …ike fusion, inserting stubs are fully automated and controlled by QConfigMapping and BackendConfig.
    67: …tion for each step, and then talk about the corresponding settings in BackendConfig. We’ll follow …
    290: BackendConfig(nniqat.LinearReLU)
    386: BackendConfig(nniqat.LinearReLU)
    449: … requirements for each pattern. For more detail, please refer to the [BackendConfig README](/torch…
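The fx/ passes above (fuse.py, prepare.py, convert.py) are internal and normally reached through the quantize_fx entry points; the BackendConfig-driven fusion in fuse.py can be exercised via the public `fuse_fx`. A sketch, assuming the stock Conv-BN-ReLU fusion in the native config and a throwaway model:

```python
# Sketch of the BackendConfig-driven fusion implemented in fx/fuse.py,
# reached through the public fuse_fx entry point.
import torch
from torch.ao.quantization.backend_config import get_native_backend_config
from torch.ao.quantization.quantize_fx import fuse_fx

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 3, kernel_size=1),
    torch.nn.BatchNorm2d(3),
    torch.nn.ReLU(),
).eval()

fused = fuse_fx(model, backend_config=get_native_backend_config())
print(fused.graph)  # Conv2d/BatchNorm2d/ReLU collapsed into one fused module call
```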
/external/pytorch/test/quantization/core/

test_backend_config.py
    10: BackendConfig,
    276, 282: conf = BackendConfig("name1")
    305: conf = BackendConfig.from_dict(conf_dict)
    320: … conf = BackendConfig("name1").set_backend_pattern_config(op1).set_backend_pattern_config(op2)
/external/pytorch/torch/ao/ns/

_numeric_suite_fx.py
    112: from torch.ao.quantization.backend_config import BackendConfig
    854, 963, 1044: backend_config: BackendConfig,  (argument)
/external/pytorch/docs/source/

quantization-support.rst
    89: This module contains BackendConfig, a config object that defines how quantization is supported
    100: BackendConfig