
Searched +full:- +full:- +full:prune (Results 1 – 25 of 809) sorted by relevance


/external/pytorch/test/nn/
test_pruning.py
8 import torch.nn.utils.prune as prune namespace
22 # torch/nn/utils/prune.py
31 respect to the size of the tensor to prune. That's left to
36 prune._validate_pruning_amount_init(amount="I'm a string")
40 prune._validate_pruning_amount_init(amount=1.1)
42 prune._validate_pruning_amount_init(amount=20.0)
46 prune._validate_pruning_amount_init(amount=-10)
49 prune._validate_pruning_amount_init(amount=0.34)
50 prune._validate_pruning_amount_init(amount=1500)
51 prune._validate_pruning_amount_init(amount=0)
[all …]
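The amounts exercised above encode the validation rule for `amount`: a float is read as a fraction of units and must lie in [0, 1], while an integer is an absolute count and must be non-negative, so the string, 1.1, 20.0, and -10 are rejected while 0.34, 1500, and 0 pass. A minimal re-implementation sketch of that rule, inferred from these tests (an assumption, not the PyTorch source itself):

```python
import numbers

def validate_pruning_amount(amount):
    # Sketch of the rule the tests above probe (assumed, not PyTorch's code).
    if not isinstance(amount, numbers.Real):
        raise TypeError(f"amount must be a number, got {type(amount)}")
    if isinstance(amount, numbers.Integral):
        if amount < 0:  # e.g. -10
            raise ValueError("an integer amount is an absolute count, must be >= 0")
    elif not 0.0 <= amount <= 1.0:  # e.g. 1.1, 20.0
        raise ValueError("a float amount is a fraction, must be in [0, 1]")
```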
/external/pytorch/torch/nn/utils/
prune.py
1 # mypy: allow-untyped-defs
28 module (nn.Module): module containing the tensor to prune
44 parameter to prune.
60 module (nn.Module): module containing the tensor to prune
79 Adds the forward pre-hook that enables pruning on the fly and
84 module (nn.Module): module containing the tensor to prune
133 method = old_method # rename old_method --> method
142 method = container # rename container --> method
204 def prune(self, t, default_mask=None, importance_scores=None): member in BasePruningMethod
210 t (torch.Tensor): tensor to prune (of same dimensions as
[all …]
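In practice this module is driven through its functional entry points, which attach the mask and the forward pre-hook described above. A minimal usage sketch (standard `torch.nn.utils.prune` API; the layer shape and amount are arbitrary):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(16, 4)

# Attach L1-magnitude pruning: weight is recomputed as
# weight_orig * weight_mask by the forward pre-hook on every call.
prune.l1_unstructured(layer, name="weight", amount=0.3)
assert hasattr(layer, "weight_mask")

# Bake the mask into the parameter and drop the hook and mask buffer.
prune.remove(layer, "weight")
```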
/external/tensorflow/tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir/
target.pbtxt
1 # RUN: tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-contro…
2 -mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-prune-unused-n…
3 -mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-prune-unused-n…
169 # CHECK-LABEL: func @main
170 # CHECK-SAME: control_outputs = "AssignAdd"
171 # CHECK-SAME: inputs = ""
172 # CHECK-SAME: outputs = ""
180 # PRUNE-LABEL: func @main
181 # PRUNE-SAME: control_outputs = "AssignAdd"
182 # PRUNE-SAME: inputs = ""
[all …]
multi-output-feeds.pbtxt
1 -mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-input-arrays=z…
2 -mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-prune-unused-n…
3 -mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-prune-unused-n…
271 # CHECK-LABEL: func @main
272 # CHECK-SAME: (%[[ARG_0:.*]]: tensor<f32>, %[[ARG_1:.*]]: tensor<f32>) -> (tensor<f32>, tensor<f32…
273 # CHECK-SAME: control_outputs = ""
274 # CHECK-SAME: inputs = "z:1,z:2"
275 # CHECK-SAME: outputs = "z:2,z:1,a:0"
284 # PRUNE-LABEL: func @main
285 # PRUNE-SAME: (%[[ARG_0:.*]]: tensor<f32>, %[[ARG_1:.*]]: tensor<f32>) -> (tensor<f32>, tensor<f32…
[all …]
prune_unused_nodes.pbtxt
1 -mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false %s -tf-prune-unused-n…
3 # Verify that an unused Node (here named "Prune") isn't converted when we
5 # CHECK-LABEL: func @main
6 # CHECK-NOT: Prune
7 # CHECK-NOT: unused_input
10 name: "Prune"
/external/pytorch/torch/ao/pruning/_experimental/pruner/
README.md
18 We can prune the lowest absolute value elements in W in order to preserve as much information as po…
27 Unfortunately, zeroing out parameters does not offer a speed-up to the model out of the box. We nee…
54 1. Define which layers in the model you want to structurally prune.
79 The above [example](#weight-resizing) of two linear layers would match against a `(nn.Linear, nn.Li…
83 - linear -> linear
84 - linear -> activation -> linear
85 - conv2d -> conv2d
86 - conv2d -> activation -> conv2d
87 - conv2d -> activation -> pool -> conv2d
88 - conv2d -> pool -> activation -> conv2d
[all …]
saliency_pruner.py
1 # mypy: allow-untyped-defs
7 Prune rows based on the saliency (L1 norm) of each row.
9 This pruner works on N-Dimensional weight tensors.
20 # use negative weights so we can use topk (we prune out the smallest)
25 saliency = -weights.norm(dim=tuple(range(1, weights.dim())), p=1)
29 prune = saliency.topk(num_to_pick).indices
31 # Set the mask to be false for the rows we want to prune
32 mask.data[prune] = False
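Assembled into a self-contained form, the fragments above compute the row mask like this (a sketch; the surrounding pruner class and its mask bookkeeping are omitted):

```python
import torch

def saliency_row_mask(weights: torch.Tensor, num_to_prune: int) -> torch.Tensor:
    # L1 norm of each row over all trailing dims, negated so that topk
    # picks the *smallest*-saliency rows, i.e. the ones to prune.
    saliency = -weights.norm(dim=tuple(range(1, weights.dim())), p=1)
    mask = torch.ones(weights.shape[0], dtype=torch.bool)
    mask[saliency.topk(num_to_prune).indices] = False  # False = pruned row
    return mask
```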
/external/libaom/av1/encoder/
speed_features.h
142 SUBPEL_TREE_PRUNED = 1, // Prunes 1/2-pel searches
143 SUBPEL_TREE_PRUNED_MORE = 2, // Prunes 1/2-pel searches more aggressively
150 // Try the full image filter search with non-dual filter only.
202 // similar, but applies much more aggressive pruning to get better speed-up
220 // Turns off multi-winner mode. So we will do txfm search on either all modes
238 PRUNE_NEARMV_LEVEL1 = 1, // Prune nearmv for qindex (0-85)
239 PRUNE_NEARMV_LEVEL2 = 2, // Prune nearmv for qindex (0-170)
240 PRUNE_NEARMV_LEVEL3 = 3, // Prune nearmv more aggressively for qindex (0-170)
262 // 1 - 1024: Probability threshold used for conditionally forcing tx type,
268 // Prune less likely chosen transforms for each intra mode. The speed
[all …]
/external/toolchain-utils/binary_search_tool/
MAINTENANCE
2 # Use of this source code is governed by a BSD-style license that can be
10 * chromeos-toolchain@
92 3. The weird options for the --verify, --verbose, --file_args, etc. arguments:
96 functionality for a boolean argument (using --prune as an example):
97 * --prune (prune set to True)
98 * <not given> (prune set to False)
99 * --prune=True (prune set to True)
100 * --prune=False (prune set to False)
104 last two? Imagine if the Android bisector set --prune=True as a default
106 the user to override prune and set it to False. So the user needs the
[all …]
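All four spellings can be reproduced with `argparse` by combining `nargs='?'`, a `const`, a `default`, and a string-to-bool converter. A sketch of the pattern the notes above describe (the converter is illustrative, not the tool's actual code):

```python
import argparse

def str2bool(value: str) -> bool:
    if value.lower() in ("true", "1", "yes"):
        return True
    if value.lower() in ("false", "0", "no"):
        return False
    raise argparse.ArgumentTypeError(f"expected a boolean, got {value!r}")

parser = argparse.ArgumentParser()
parser.add_argument(
    "--prune",
    nargs="?",      # bare --prune takes no value...
    const=True,     # ...and means True
    default=False,  # flag absent means False
    type=str2bool,  # --prune=True / --prune=False parse explicitly
)
assert parser.parse_args([]).prune is False
assert parser.parse_args(["--prune"]).prune is True
assert parser.parse_args(["--prune=False"]).prune is False
```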
README.pass_bisect.md
14 `-opt-bisect-limit` and `print-debug-counter` that only exist in LLVM.
18 All the required arguments in the object-file-level bisection tool are still
21 1. `--pass_bisect`: enables pass level bisection
22 2. `--ir_diff`: enables output of IR differences
24 Please refer to `--help` or the examples below for details about how to use
29 *TODO* - Future work: Currently this only works for Android.
45 --pass_bisect='android/generate_cmd.sh'
46 --prune=False
47 --ir_diff
48 --verbose
[all …]
common.py
1 # -*- coding: utf-8 -*-
3 # Use of this source code is governed by a BSD-style license that can be
48 ['-n', '--iterations'] : {
79 can be safely and easily populated. Each call to this method will have a 1-1
83 *args: The names for the argument (-V, --verbose, etc.)
155 "-n",
156 "--iterations",
163 "-i",
164 "--get_initial_items",
168 "the --verbose option must be used",
[all …]
/external/javassist/src/main/javassist/scopedpool/
ScopedClassPoolRepositoryImpl.java
2 * Javassist, a Java-bytecode translator toolkit.
3 * Copyright (C) 1999- Shigeru Chiba. All Rights Reserved.
39 /** Whether to prune */
40 private boolean prune = true; field in ScopedClassPoolRepositoryImpl
42 /** Whether to prune when added to the classpool's cache */
75 * Returns the value of the prune attribute.
77 * @return the prune.
81 return prune; in isPrune()
85 * Set the prune attribute.
87 * @param prune a new value.
[all …]
ScopedClassPoolRepository.java
2 * Javassist, a Java-bytecode translator toolkit.
3 * Copyright (C) 1999- Shigeru Chiba. All Rights Reserved.
43 * @return the prune.
48 * Sets the prune flag.
50 * @param prune a new value.
52 void setPrune(boolean prune); in setPrune() argument
/external/clang/test/Modules/
prune.m
12 // RUN: rm -rf %t
14 -DIMPORT_DEPENDS_ON_MODULE -fmodules-ignore-macro=DIMPORT_DEPENDS_ON_MODULE -fmodules -fimplicit-
15 -DIMPORT_DEPENDS_ON_MODULE -fmodules-ignore-macro=DIMPORT_DEPENDS_ON_MODULE -fmodules -fimplicit-
17 // RUN: ls -R %t | grep ^Module.*pcm
18 // RUN: ls -R %t | grep DependsOnModule.*pcm
20 // Set the timestamp back more than two days. We should try to prune,
22 // RUN: touch -m -a -t 201101010000 %t/modules.timestamp
23 …cc1 -fmodules -fimplicit-module-maps -F %S/Inputs -fmodules-cache-path=%t -fmodules -fmodules-prun…
25 // RUN: ls -R %t | grep ^Module.*pcm
26 // RUN: ls -R %t | grep DependsOnModule.*pcm
[all …]
/external/pytorch/docs/source/
nn.rst
2 :class: hidden-section
31 ----------------------------------
64 ----------------------------------
87 ----------------------------------
116 --------------
139 Non-linear Activations (weighted sum, nonlinearity)
140 ---------------------------------------------------
173 Non-linear Activations (other)
174 ------------------------------
188 ----------------------------------
[all …]
/external/tcpdump/tests/
pim-packet-assortment.out
1 1 2019-07-05 17:10:44.789433 IP 10.0.0.2 > 224.0.0.13: PIMv2, Bootstrap, length 14
2 2 2019-07-05 17:10:59.798983 IP 10.0.0.2 > 224.0.0.13: PIMv2, Bootstrap, length 14
3 3 2019-07-05 17:11:14.807715 IP 10.0.0.2 > 224.0.0.13: PIMv2, Bootstrap, length 14
4 4 2019-07-05 17:11:14.823339 IP 10.0.0.2 > 224.0.0.13: PIMv2, Bootstrap, length 14
5 5 2019-07-05 17:11:14.838646 IP 10.0.0.2 > 224.0.0.13: PIMv2, Bootstrap, length 26
6 6 2019-07-05 17:11:14.854392 IP 10.0.0.2 > 224.0.0.13: PIMv2, Bootstrap, length 58
7 7 2019-07-05 17:11:14.870050 IP 10.0.0.2 > 10.0.0.1: PIMv2, Bootstrap, length 14
8 8 2019-07-05 17:11:29.877641 IP 10.0.0.1 > 224.0.0.13: PIMv2, Bootstrap, length 14
9 9 2019-07-05 17:11:29.882313 IP 10.0.0.1 > 224.0.0.13: PIMv2, Bootstrap, length 14
10 10 2019-07-05 17:11:29.886825 IP 10.0.0.1 > 224.0.0.13: PIMv2, Bootstrap, length 26
[all …]
/external/toolchain-utils/binary_search_tool/test/
binary_search_tool_test.py
2 # -*- coding: utf-8 -*-
4 # Use of this source code is governed by a BSD-style license that can be
30 gen_obj.Main(["--obj_num", str(obj_num), "--bad_obj_num", str(bad_obj_num)])
47 with open("./is_setup", "w", encoding="utf-8"):
79 prune=True,
97 "tail -n1"
120 """Generate [100-1000] object files, and 1-5% of which are bad ones."""
123 with open("./is_setup", "w", encoding="utf-8"):
157 prune=True,
165 "--get_initial_items",
[all …]
/external/llvm/utils/lit/
MANIFEST.in
2 recursive-include tests *
3 recursive-include examples *
4 global-exclude *pyc
5 global-exclude *~
6 prune tests/Output
7 prune tests/*/Output
8 prune tests/*/*/Output
9 prune tests/*/*/*/Output
/external/executorch/examples/models/llama/source_transformation/
prune_vocab.py
4 # This source code is licensed under the BSD-style license found in the
18 ) -> torch.nn.Module:
19 """Prune the model output linear layer while keeping the tokens in the token map.
21 Note: Pruning is performed in-place.
24 model: The model to prune.
27 output_layer_name: name of the output layer to prune
78 ) -> torch.nn.Module:
79 """Prune the model input embedding layer while keeping the tokens in the token map.
81 Note: Pruning is performed in-place.
84 model: The model to prune.
[all …]
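The operation the docstrings describe reduces to selecting rows of the layer's weight (and bias) by token id. A hedged sketch of that core step (`token_ids` and the standalone helper are illustrative assumptions; the real code resolves the layer from `output_layer_name` and mutates the model in place):

```python
import torch
import torch.nn as nn

def prune_output_rows(layer: nn.Linear, token_ids: list) -> nn.Linear:
    # Keep only the rows for tokens in the token map, shrinking the
    # vocabulary dimension of the output projection.
    pruned = nn.Linear(layer.in_features, len(token_ids),
                       bias=layer.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(layer.weight[token_ids])
        if layer.bias is not None:
            pruned.bias.copy_(layer.bias[token_ids])
    return pruned
```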
/external/perfetto/src/trace_redaction/
prune_package_list_unittest.cc
9 * http://www.apache.org/licenses/LICENSE-2.0
41 auto* package = list->add_packages(); in AddPackage()
42 package->set_uid(uid); in AddPackage()
43 package->set_name(std::string(name)); in AddPackage()
49 packet->set_trusted_uid(9999); in CreateTestPacket()
50 packet->set_trusted_packet_sequence_id(2); in CreateTestPacket()
51 packet->set_previous_packet_dropped(true); in CreateTestPacket()
53 auto* packages = packet->mutable_packages_list(); in CreateTestPacket()
60 return packet->SerializeAsString(); in CreateTestPacket()
69 // cmdline: "-O/data/vendor/wifi/wpa/sockets"
[all …]
/external/angle/src/compiler/translator/tree_ops/
PruneNoOps.cpp
3 // Use of this source code is governed by a BSD-style license that can be
6 // PruneNoOps.cpp: The PruneNoOps function prunes no-op statements.
21 if (value->getType() == EbtYuvCscStandardEXT) in GetSwitchConstantAsUInt()
23 asUInt.setUConst(value->getYuvCscStandardEXTConst()); in GetSwitchConstantAsUInt()
40 TIntermConstantUnion *expr = node->getInit()->getAsConstantUnion(); in IsNoOpSwitch()
46 const uint32_t exprValue = GetSwitchConstantAsUInt(expr->getConstantValue()); in IsNoOpSwitch()
49 const TIntermSequence &statements = *node->getStatementList()->getSequence(); in IsNoOpSwitch()
53 TIntermCase *caseLabel = statement->getAsCaseNode(); in IsNoOpSwitch()
59 // Default matches everything, consider it not a no-op. in IsNoOpSwitch()
60 if (!caseLabel->hasCondition()) in IsNoOpSwitch()
[all …]
/external/pytorch/.github/actions/teardown-xpu/
action.yml
8 - name: Teardown XPU
12 # Prune all stopped containers.
14 nprune=$(ps -ef | grep -c "docker container prune")
15 if [[ $nprune -eq 1 ]]; then
16 docker container prune -f
18 - name: Runner diskspace health check
19 uses: ./.github/actions/diskspace-cleanup
/external/llvm/lib/Fuzzer/test/
fuzzer-prunecorpus.test
1 RUN: rm -rf %t/PruneCorpus
2 RUN: mkdir -p %t/PruneCorpus
5 RUN: LLVMFuzzer-EmptyTest %t/PruneCorpus -prune_corpus=1 -runs=0 2>&1 | FileCheck %s --check-prefix…
6 RUN: LLVMFuzzer-EmptyTest %t/PruneCorpus -prune_corpus=0 -runs=0 2>&1 | FileCheck %s --check-prefix…
7 RUN: rm -rf %t/PruneCorpus
9 PRUNE: READ units: 2
10 PRUNE: INITED{{.*}}units: 1
/external/scapy/scapy/contrib/
pim.py
1 # SPDX-License-Identifier: GPL-2.0-or-later
9 - https://tools.ietf.org/html/rfc4601
10 - https://www.iana.org/assignments/pim-parameters/pim-parameters.xhtml
28 2: "Register-Stop",
29 3: "Join/Prune",
33 7: "Graft-Ack",
34 8: "Candidate-RP-Advertisement"
129 name = "PIMv2 Hello Options : LAN Prune Delay Value"
138 name = "PIMv2 Hello Options : LAN Prune Delay"
169 name = "PIMv2 Hello Options : State-Refresh Value"
[all …]
/external/llvm/include/llvm/Support/
CachePruning.h
1 //=- CachePruning.h - Helper to manage the pruning of a cache dir -*- C++ -*-=//
8 //===----------------------------------------------------------------------===//
13 //===----------------------------------------------------------------------===//
23 /// to prune.
26 /// Prepare to prune \p Path.
31 /// prune. A value of 0 forces the scan to occur.
39 /// the expiration-based pruning.
49 /// 0 disables the size-based pruning.
57 bool prune();
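The comments sketch a two-stage policy: an interval that gates how often the directory is scanned, expiration-based removal of old entries, then size-based removal down to a budget. A rough Python analogue of that policy (an illustration of the idea only, not LLVM's implementation):

```python
import os
import time

def prune_cache_dir(path: str, expiration_s: float, max_bytes: int) -> None:
    now = time.time()
    kept = []
    for name in os.listdir(path):
        full = os.path.join(path, name)
        mtime, size = os.path.getmtime(full), os.path.getsize(full)
        # Expiration-based pruning: drop entries older than the window.
        if expiration_s and now - mtime > expiration_s:
            os.remove(full)
        else:
            kept.append((mtime, size, full))

    # Size-based pruning: evict oldest entries until under budget
    # (a budget of 0 disables this stage, per the comment above).
    if max_bytes:
        total = sum(size for _, size, _ in kept)
        for _, size, full in sorted(kept):  # oldest first
            if total <= max_bytes:
                break
            os.remove(full)
            total -= size
```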
