
Searched full:ml (Results 1 – 25 of 2537) sorted by relevance


/external/python/setuptools/setuptools/tests/
test_manifest.py 244 ml = make_local_path
250 files = default_files | set([ml('app/c.rst')])
255 ml = make_local_path
258 ml('app/a.txt'), ml('app/b.txt'),
259 ml('app/static/app.js'), ml('app/static/app.js.map'),
260 ml('app/static/app.css'), ml('app/static/app.css.map')])
265 ml = make_local_path
268 ml('app/static/app.js'), ml('app/static/app.js.map'),
269 ml('app/static/app.css'), ml('app/static/app.css.map')])
274 ml = make_local_path
[all …]
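The `make_local_path` helper these setuptools manifest tests alias to `ml` normalizes `/`-separated fixture paths to the host OS separator, so expected file sets compare equal on both POSIX and Windows. A minimal sketch under that reading (the real helper lives in the test module; this version is inferred from its usage above):

```python
import os

def make_local_path(path):
    """Convert a '/'-separated fixture path to the host OS convention."""
    return path.replace('/', os.sep)

# Build an expected-files set the way the tests above do.
files = {make_local_path('app/a.txt'), make_local_path('app/b.txt')}
```

On POSIX this is a no-op; on Windows the separators become backslashes, matching what the manifest machinery produces.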
/external/capstone/bindings/ocaml/
Makefile 22 test_basic.cmx: test_basic.ml
25 test_detail.cmx: test_detail.ml
28 test_x86.cmx: test_x86.ml
31 test_arm.cmx: test_arm.ml
34 test_arm64.cmx: test_arm64.ml
37 test_mips.cmx: test_mips.ml
40 test_ppc.cmx: test_ppc.ml
43 test_sparc.cmx: test_sparc.ml
46 test_systemz.cmx: test_systemz.ml
49 test_xcore.cmx: test_xcore.ml
[all …]
/external/selinux/python/sepolgen/tests/
test_matching.py 48 ml = matching.MatchList()
49 ml.threshold = 100
53 ml.append(a)
54 self.assertEqual(len(ml), 1)
58 ml.append(a)
59 self.assertEqual(len(ml), 2)
60 self.assertEqual(len(ml.bastards), 1)
62 ml.allow_info_dir_change = False
66 ml.append(a)
67 self.assertEqual(len(ml), 3)
[all …]
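The test above exercises sepolgen's `matching.MatchList`: appended matches whose score exceeds `threshold` are additionally tracked in a `bastards` list while still counting toward `len()`. A simplified sketch consistent with the assertions in the snippet (the real class stores match objects rather than bare scores; field names follow the test):

```python
class MatchList:
    """Simplified stand-in for sepolgen's matching.MatchList."""
    def __init__(self):
        self.threshold = 150           # illustrative default; the test sets 100
        self.allow_info_dir_change = True
        self._matches = []
        self.bastards = []             # over-threshold entries, tracked separately

    def append(self, score):
        self._matches.append(score)
        if score > self.threshold:
            self.bastards.append(score)

    def __len__(self):
        # len() counts everything appended, including bastards,
        # matching len(ml) == 2 with len(ml.bastards) == 1 above.
        return len(self._matches)
```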
/external/tensorflow/tensorflow/lite/g3doc/performance/
coreml_delegate.md 1 # TensorFlow Lite Core ML delegate
3 The TensorFlow Lite Core ML delegate enables running TensorFlow Lite models on
4 [Core ML framework](https://developer.apple.com/documentation/coreml), which
10 Note: Core ML delegate supports Core ML version 2 and later.
14 * iOS 12 and later. In the older iOS versions, Core ML delegate will
16 * By default, Core ML delegate will only be enabled on devices with A12 SoC
18 If you want to use Core ML delegate also on the older devices, please see
23 The Core ML delegate currently supports float (FP32 and FP16) models.
25 ## Trying the Core ML delegate on your own model
27 The Core ML delegate is already included in nightly release of TensorFlow lite
[all …]
/external/compiler-rt/lib/tsan/tests/rtl/
tsan_test_util.h 75 void Read(const MemLoc &ml, int size, bool expect_race = false) {
76 Access(ml.loc(), false, size, expect_race);
78 void Write(const MemLoc &ml, int size, bool expect_race = false) {
79 Access(ml.loc(), true, size, expect_race);
81 void Read1(const MemLoc &ml, bool expect_race = false) {
82 Read(ml, 1, expect_race); }
83 void Read2(const MemLoc &ml, bool expect_race = false) {
84 Read(ml, 2, expect_race); }
85 void Read4(const MemLoc &ml, bool expect_race = false) {
86 Read(ml, 4, expect_race); }
[all …]
/external/tensorflow/tensorflow/lite/objc/apis/
TFLCoreMLDelegate.h 23 * This enum specifies for which devices the Core ML delegate will be enabled.
32 /** Custom configuration options for a Core ML delegate. */
36 * Indicates which devices the Core ML delegate should be enabled for. The default value is
43 * Target Core ML version for the model conversion. When it's not set, Core ML version will be set
49 * The maximum number of Core ML delegate partitions created. Each graph corresponds to one
56 * The minimum number of nodes per partition to be delegated by the Core ML delegate. The default
63 /** A delegate that uses the Core ML framework for performing TensorFlow Lite graph operations. */
67 * Initializes a new Core ML delegate with default options.
69 * @return A Core ML delegate initialized with default options. `nil` when the delegate creation
70 * fails. For example, trying to initialize a Core ML delegate on an unsupported device.
[all …]
/external/toybox/toys/lsb/
umount.c 110 struct mtab_list *mlsave = 0, *mlrev = 0, *ml; in umount_main() local
130 for (ml = mlrev; ml; ml = ml->prev) in umount_main()
131 if (mountlist_istype(ml, typestr)) do_umount(ml->dir, ml->device, flags); in umount_main()
140 for (ml = abs ? mlrev : 0; ml; ml = ml->prev) { in umount_main()
141 if (!strcmp(ml->dir, abs)) break; in umount_main()
142 if (!strcmp(ml->device, abs)) { in umount_main()
144 abs = ml->dir; in umount_main()
149 do_umount(abs ? abs : *optargs, ml ? ml->device : 0, flags); in umount_main()
150 if (ml && abs != ml->dir) free(abs); in umount_main()
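The umount.c excerpt walks the mtab list in reverse so the most recent mount is checked first, and when the argument names a device rather than a directory it substitutes that device's mount point. A rough Python rendering of that lookup (the `Mount` type and function name are illustrative, not toybox API):

```python
from collections import namedtuple

Mount = namedtuple('Mount', 'dir device')

def resolve_umount_target(mounts, target):
    """Walk mounts newest-first; if `target` is a device, return the
    directory of its most recent mount, mirroring umount.c's loop.
    `mounts` is ordered oldest-first, as in /proc/mounts."""
    for m in reversed(mounts):
        if m.dir == target:
            return m.dir           # already a mount point
        if m.device == target:
            return m.dir           # device name -> its mount directory
    return target                  # unknown; umount will report the error
```

Scanning newest-first matters when one device is mounted in several places: the last mount is the one unmounted.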
/external/tensorflow/tensorflow/lite/delegates/coreml/
README.md 1 # TensorFlow Lite Core ML Delegate
3 TensorFlow Lite Core ML Delegate enables running TensorFlow Lite models on
4 [Core ML framework](https://developer.apple.com/documentation/coreml),
11 * iOS 12 and later. In the older iOS versions, Core ML delegate will
16 ## Update code to use Core ML delegate
20 Initialize TensorFlow Lite interpreter with Core ML delegate.
44 // Add following section to use Core ML delegate.
63 Following ops are supported by the Core ML delegate.
66 * Only certain shapes are broadcastable. In Core ML tensor layout,
86 * Only certain shapes are broadcastable. In Core ML tensor layout,
[all …]
coreml_delegate.h 24 // Create Core ML delegate only on devices with Apple Neural Engine.
27 // Always create Core ML delegate
34 // Specifies target Core ML version for model conversion.
35 Core ML 3 comes with a lot more ops, but some ops (e.g. reshape) are not
41 // This sets the maximum number of Core ML delegates created.
46 // Core ML delegate. Defaults to 2.
/external/tensorflow/tensorflow/compiler/xla/mlir_hlo/stablehlo/
README.md 3 StableHLO is an operation set that expresses ML computations. It has been
9 StableHLO is a portability layer between ML frameworks and ML compilers.
10 We are aiming for adoption by a wide variety of ML frameworks including
11 TensorFlow, JAX and PyTorch, as well as ML compilers including XLA and IREE.
22 portability layer between ML frameworks and ML compilers. Let's work together
32 * Workstream #3: Support for ML frameworks (TensorFlow, JAX, PyTorch) and
33 ML compilers (XLA and IREE) - ETA: H2 2022.
/external/python/cpython2/Objects/
methodobject.c 17 PyCFunction_NewEx(PyMethodDef *ml, PyObject *self, PyObject *module) in PyCFunction_NewEx() argument
31 op->m_ml = ml; in PyCFunction_NewEx()
322 PyMethodDef *ml; in listmethodchain() local
328 for (ml = c->methods; ml->ml_name != NULL; ml++) in listmethodchain()
336 for (ml = c->methods; ml->ml_name != NULL; ml++) { in listmethodchain()
337 PyList_SetItem(v, i, PyString_FromString(ml->ml_name)); in listmethodchain()
368 PyMethodDef *ml = chain->methods; in Py_FindMethodInChain() local
369 for (; ml->ml_name != NULL; ml++) { in Py_FindMethodInChain()
370 if (name[0] == ml->ml_name[0] && in Py_FindMethodInChain()
371 strcmp(name+1, ml->ml_name+1) == 0) in Py_FindMethodInChain()
[all …]
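`Py_FindMethodInChain` in the excerpt scans NULL-terminated `PyMethodDef` tables, comparing the first character before falling back to a full `strcmp` as a cheap pre-filter. A Python sketch of the same scan, with method tables modelled as lists of `(name, func)` pairs:

```python
def find_method_in_chain(name, chain):
    """Scan each method table in `chain` for `name`, mirroring CPython 2's
    Py_FindMethodInChain. The first-character check before comparing the
    rest mirrors `name[0] == ml->ml_name[0] && strcmp(name+1, ...) == 0`."""
    for methods in chain:
        for ml_name, ml_meth in methods:
            if name[:1] == ml_name[:1] and name[1:] == ml_name[1:]:
                return ml_meth
    return None
```

The first-character pre-filter is a classic micro-optimization: most lookups fail, and rejecting on one byte avoids a full string comparison.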
/external/kmod/libkmod/python/kmod/
module.pyx 80 cdef _list.ModList ml = _list.ModList()
82 err = _libkmod_h.kmod_module_get_info(self.module, &ml.list)
87 for item in ml:
95 _libkmod_h.kmod_module_info_free_list(ml.list)
96 ml.list = NULL
101 cdef _list.ModList ml = _list.ModList()
103 err = _libkmod_h.kmod_module_get_versions(self.module, &ml.list)
107 for item in ml:
114 _libkmod_h.kmod_module_versions_free_list(ml.list)
115 ml.list = NULL
/external/tensorflow/tensorflow/lite/swift/Sources/
CoreMLDelegate.swift 17 /// A delegate that uses the `Core ML` framework for performing TensorFlow Lite graph operations.
28 /// Core ML delegate could not be created because `Options.enabledDevices` was set to
51 /// A type indicating which devices the Core ML delegate should be enabled for.
72 /// A type indicating which devices the Core ML delegate should be enabled for. The default
76 /// Target Core ML version for the model conversion. When it's not set, Core ML version will
79 /// The maximum number of Core ML delegate partitions created. Each graph corresponds to one
83 /// The minimum number of nodes per partition to be delegated by the Core ML delegate. The
/external/libffi/
msvcc.sh 41 # GCC-compatible wrapper for cl.exe and ml.exe. Arguments are given in GCC
42 # format and translated into something sensible for cl or ml.
51 ml="ml"
79 ml="ml64" # "$MSVC/x86_amd64/ml64"
84 ml='armasm'
89 ml='armasm64'
303 if [ $ml = "armasm" ]; then
307 if [ $ml = "armasm64" ]; then
317 if [ $ml = "armasm" ]; then
319 elif [ $ml = "armasm64" ]; then
[all …]
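libffi's msvcc.sh picks the assembler from the target architecture: `ml` for x86, `ml64` for x64, and `armasm`/`armasm64` for ARM, with the later branches handling the ARM assemblers specially because their flags differ from ml's. The dispatch amounts to a small table; sketched in Python for clarity (the key names here are assumptions, not the script's actual variable values):

```python
def pick_assembler(arch):
    """Map a target architecture to the MSVC-family assembler,
    following the branches in msvcc.sh."""
    table = {
        'x86':   'ml',        # 32-bit Macro Assembler
        'x64':   'ml64',      # 64-bit Macro Assembler
        'arm':   'armasm',    # ARM assembler; takes different flags than ml
        'arm64': 'armasm64',
    }
    return table[arch]
```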
/external/cronet/third_party/icu/source/data/unit/
en_AU.txt 190 dnam{"ML"}
191 one{"{0}ML"}
192 other{"{0}ML"}
195 dnam{"mL"}
196 one{"{0}mL"}
197 other{"{0}mL"}
388 dnam{"ML"}
389 one{"{0} ML"}
390 other{"{0} ML"}
393 dnam{"mL"}
[all …]
/external/icu/icu4c/source/data/unit/
en_AU.txt 190 dnam{"ML"}
191 one{"{0}ML"}
192 other{"{0}ML"}
195 dnam{"mL"}
196 one{"{0}mL"}
197 other{"{0}mL"}
388 dnam{"ML"}
389 one{"{0} ML"}
390 other{"{0} ML"}
393 dnam{"mL"}
[all …]
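The en_AU resource data above distinguishes megalitre (`ML`) from millilitre (`mL`) and gives per-plural-category patterns with a `{0}` placeholder for the formatted number. Applying such a pattern is essentially a substitution; a simplified sketch (real CLDR plural selection uses locale plural rules, not a bare `count == 1` test):

```python
def format_unit(count, patterns):
    """Pick the plural category's pattern and substitute the count,
    roughly as ICU does with the unit data above."""
    key = 'one' if count == 1 else 'other'
    return patterns[key].replace('{0}', str(count))

# Short-form millilitre patterns, as in the en_AU data above.
millilitre_short = {'dnam': 'mL', 'one': '{0}mL', 'other': '{0}mL'}
```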
/external/tensorflow/tensorflow/lite/g3doc/inference_with_metadata/task_library/
overview.md 4 task-specific libraries for app developers to create ML experiences with TFLite.
13 * **Clean and well-defined APIs usable by non-ML-experts** \
16 develop ML with TFLite on mobile devices.
85 * [Core ML delegate](https://www.tensorflow.org/lite/performance/coreml_delegate):
259 ### Example usage of Core ML Delegate in C++
262 [Image Classifier Core ML Delegate Test](https://github.com/tensorflow/tflite-support/blob/master/t…
264 Step 1. Depend on the Core ML delegate plugin in your bazel build target, such
269 "//tensorflow_lite_support/acceleration/configuration:coreml_plugin", # for Core ML Delegate
273 Step 2. Configure Core ML Delegate in the task options. For example, you can set
274 up Core ML Delegate in `ImageClassifier` as follows:
[all …]
/external/tensorflow/tensorflow/lite/g3doc/examples/
_index.yaml 17 learning (ML) model format. You can use pre-trained models with TensorFlow Lite, modify
66 <h2 class="tfo-landing-page-heading no-link">Using models for quick tasks: ML Kit</h2>
70 <a href="https://developers.google.com/ml-kit">ML Kit</a> before starting
72 directly from mobile apps to complete common ML tasks such as barcode scanning and
73 on-device translation. Using this method can help you get results fast. However, ML Kit
75 see the <a href="https://developers.google.com/ml-kit">ML Kit</a> developer
111 Learn how to pick a pre-trained ML model to use with TensorFlow Lite.
/external/lz4/lib/
lz4hc.c 577 int ml0, ml, ml2, ml3; in LZ4HC_compress_hashChain() local
593 ml = LZ4HC_InsertAndFindBestMatch(ctx, ip, matchlimit, &ref, maxNbAttempts, patternAnalysis, dict); in LZ4HC_compress_hashChain()
594 if (ml<MINMATCH) { ip++; continue; } in LZ4HC_compress_hashChain()
597 start0 = ip; ref0 = ref; ml0 = ml; in LZ4HC_compress_hashChain()
600 if (ip+ml <= mflimit) { in LZ4HC_compress_hashChain()
602 ip + ml - 2, ip + 0, matchlimit, ml, &ref2, &start2, in LZ4HC_compress_hashChain()
605 ml2 = ml; in LZ4HC_compress_hashChain()
608 if (ml2 == ml) { /* No better match => encode ML1 */ in LZ4HC_compress_hashChain()
610 … if (LZ4HC_encodeSequence(UPDATABLE(ip, op, anchor), ml, ref, limit, oend)) goto _dest_overflow; in LZ4HC_compress_hashChain()
616 ip = start0; ref = ref0; ml = ml0; /* restore initial ML1 */ in LZ4HC_compress_hashChain()
[all …]
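In the LZ4HC excerpt, `ml` holds the current best match length: `LZ4HC_InsertAndFindBestMatch` returns a match of at least `MINMATCH`, then the code probes for a longer overlapping match (`ml2`) before committing to encode. The core operation is a longest-earlier-match search; a naive Python sketch of just that search (the real code walks hash chains instead of scanning every candidate position):

```python
def find_best_match(data, pos, min_match=4):
    """Find the longest earlier occurrence of the bytes starting at `pos`.
    Returns (length, ref_offset), or (0, None) if no match reaches
    min_match bytes (LZ4's MINMATCH)."""
    best_len, best_ref = 0, None
    for ref in range(pos):
        length = 0
        # Matches may overlap pos, as in LZ77-family codecs.
        while pos + length < len(data) and data[ref + length] == data[pos + length]:
            length += 1
        if length >= min_match and length > best_len:
            best_len, best_ref = length, ref
    return best_len, best_ref
```

This quadratic scan is only for illustration; the hash chain in lz4hc.c limits candidates to positions sharing a hash of the first bytes, bounded by `maxNbAttempts`.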
/external/armnn/samples/ImageClassification/
README.md 33 from the Arm ML-Zoo).
50 the Arm ML-Zoo as well as scripts to download the labels for the model.
55 git clone https://github.com/arm-software/ml-zoo.git
57 cd ml-zoo/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8
63 …cp ml-zoo/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8/mobilenet_v2_1.0_224_quant…
64 cp ml-zoo/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8/labelmappings.txt .
99 ├── mobilenet_v2_1.0_224_quantized_1_default_1.tflite # tflite model from ml-zoo
/external/cronet/build/toolchain/win/
toolchain.gni 249 ml = "${cl_prefix}${_clang_bin_path}/clang-cl${_exe} --target=arm64-windows"
252 ml = string_replace(ml, "/", "\\")
254 ml += " -c -o{{output}}"
259 ml = "armasm64.exe"
264 ml = "$prefix/llvm-ml${_exe}"
266 ml += " -m64"
268 ml += " -m32"
272 ml = "ml64.exe"
274 ml = "ml.exe"
280 ml += " /nologo /Fo{{output}}"
[all …]
/external/armnn/
README.md 18 …* is the **most performant** machine learning (ML) inference engine for Android and Linux, acceler…
19 on **Arm Cortex-A CPUs and Arm Mali GPUs**. This ML inference engine is an open source SDK which br…
22 Arm NN outperforms generic ML libraries due to **Arm architecture-specific optimizations** (e.g. SV…
33 **The Arm NN TF Lite Delegate provides the widest ML operator support in Arm NN** and is an easy wa…
34 your ML model. To start using the TF Lite Delegate, first download the **[Pre-Built Binaries](#pre-…
43 integrated into your Android ML application. Using the AAR allows you to benefit from the **vast op…
45 accelerate an ML Image Segmentation app in 5 minutes using this AAR file. To download the Arm NN AA…
49 (albeit with less ML operator support than the TF Lite Delegate). There is an installation guide av…
57 …h is the ability to **exactly choose which components to build, targeted for your ML project**.<br>
73 The Arm NN SDK supports ML models in **TensorFlow Lite** (TF Lite) and **ONNX** formats.
[all …]
CONTRIBUTING.md 7 - All code reviews are performed on [Linaro ML Platform Gerrit](https://review.mlplatform.org)
8 - GitHub account credentials are required for creating an account on ML Platform
10 - git clone https://review.mlplatform.org/ml/armnn
19 - Patch will appear on ML Platform Gerrit [here](https://review.mlplatform.org/q/is:open+project:ml
50 …s hosted on the [mlplatform.org git repository](https://git.mlplatform.org/ml/armnn.git/) hosted b…
/external/clang/test/CXX/expr/expr.prim/expr.prim.lambda/
p5.cpp 27 auto ml = [=]() mutable{}; // expected-note{{method is not marked const}} \ in test_quals() local
29 const decltype(ml) mlc = ml; in test_quals()
30 ml(); in test_quals()
35 volatile decltype(ml) mlv = ml; in test_quals()
/external/icu/android_icu4j/src/main/tests/android/icu/dev/test/format/
plurals.txt 1441 ml 0.0 വ്യക്തികൾ
1442 ml 0.00 വ്യക്തികൾ
1443 ml 0.000 വ്യക്തികൾ
1444 ml 0.001 വ്യക്തികൾ
1445 ml 0.002 വ്യക്തികൾ
1446 ml 0.01 വ്യക്തികൾ
1447 ml 0.010 വ്യക്തികൾ
1448 ml 0.011 വ്യക്തികൾ
1449 ml 0.02 വ്യക്തികൾ
1450 ml 0.1 വ്യക്തികൾ
[all …]
