| /external/tensorflow/tensorflow/lite/g3doc/examples/build/ |
| D | index.md | 31 * **Limited compute capabilities** - Compared to fully equipped servers with 32 multiple CPUs, high memory capacity, and specialized processors such as GPUs 36 * **Size of models** - The overall complexity of a model, including data 37 pre-processing logic and the number of layers in the model, increases the 38 in-memory size of a model. A large model may run unacceptably slowly or simply 40 * **Size of data** - The size of input data that can be effectively processed 44 off-device storage and access solutions. 45 * **Supported TensorFlow operations** - TensorFlow Lite runtime environments 51 For more information on building effective, compatible, high performance models 53 [Performance best practices](../../performance/best_practices). [all …]
|
| /external/jemalloc_new/ |
| D | TUNING.md | 1 This document summarizes the common approaches for performance fine-tuning with 8 by a few percent, or make favorable trade-offs. 11 ## Notable runtime options for performance tuning 42 operating system, and therefore provides a fairly straightforward trade-off 47 Suggested: tune the values based on the desired trade-offs. 52 However, a high arena count may also increase overall memory fragmentation, 53 since arenas manage memory independently. When a high degree of parallelism 71 * High resource consumption application, prioritizing CPU utilization: 77 * High resource consumption application, prioritizing memory usage: 90 * Extremely conservative -- minimize memory usage at all costs, only suitable when [all …]
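
The TUNING.md hits above mention decay times, arena counts, and background threads. As a minimal illustration (not taken from TUNING.md itself), the sketch below sets a handful of documented jemalloc options through the `malloc_conf` symbol; the specific values are placeholders, not recommendations.

```cpp
// Minimal sketch: baking a jemalloc option string into the binary via the
// malloc_conf symbol (jemalloc also honors the MALLOC_CONF environment
// variable). The values below are illustrative, not tuning advice.
extern "C" const char* malloc_conf =
    "background_thread:true,"   // purge dirty pages from background threads
    "metadata_thp:auto,"        // allow transparent huge pages for metadata
    "dirty_decay_ms:30000,"     // keep dirty pages for ~30 s before purging
    "narenas:4";                // cap arena count to limit fragmentation

int main() {
    // Any allocation below now runs under the configuration above.
    void* p = ::operator new(1 << 20);
    ::operator delete(p);
    return 0;
}
```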
|
| /external/mesa3d/docs/drivers/openswr/ |
| D | faq.rst | 5 -------------------------------- 10 * Architecture - given our focus on scientific visualization, our 16 * Historical - Intel had developed a high performance software 19 with Mesa to provide a high quality API layer while at the same 20 time benefiting from the excellent performance the software 24 ------------------------ 26 SWR is a tile based immediate mode renderer with a sort-free threading 36 Our pipeline uses just-in-time compiled code for the fetch shader that 46 What's the performance? 47 ----------------------- [all …]
|
| /external/tensorflow/tensorflow/core/profiler/protobuf/ |
| D | overview_page.proto | 39 // Remark text in the performance summary section. 47 // Percentage of device computation that is 16-bit. 49 // Percentage of device computation that is 32-bit. 57 // Percentage of TF-op execution time on the host (excluding the idle time) 60 // Percentage of TF-op execution time on the device (excluding the idle time) 63 // Percentage of TF-op execution time on the device (excluding the idle time) 70 // Overview result for a performance tip to users. 77 // Indicates if kernel launch is a performance bottleneck. Possible values: 78 // "no", "moderate", "high". 80 // A statement that recommends if we need to further investigate kernel-launch [all …]
|
| /external/tensorflow/tensorflow/compiler/mlir/g3doc/ |
| D | _index.yaml | 4 infrastructure for high-performance ML models in TensorFlow. 6 custom_css_path: /site-assets/css/style.css 8 - heading: MLIR unifies the infrastructure for high-performance ML models in TensorFlow. 10 - description: > 12 intermediate representation (IR) that unifies the infrastructure required to execute high 13 performance machine learning models in TensorFlow and similar ML frameworks. This project 19 - code_block: | 23 %x = call @thingToCall(%arg0) : (i32) -> i32 31 - classname: devsite-landing-row-cards 33 - heading: "Multi-Level Intermediate Representation for Compiler Infrastructure" [all …]
|
| /external/walt/docs/ |
| D | AudioLatency.md | 9 [High Performance Audio website](http://googlesamples.github.io/android-audio-high-performance/). 21 …e](http://googlesamples.github.io/android-audio-high-performance/guides/audio-output-latency.html#… 40 | :--- | :--- | :--- | ---: | ---: | 49 …onization accuracy is about 1 ms hence the relative error for recording latency can be fairly high. 52 Superpowered Inc. maintains an open source app for measuring round trip audio latency -
|
| /external/autotest/client/cros/ |
| D | perf.py | 2 # Use of this source code is governed by a BSD-style license that can be 13 Provides methods for setting the performance mode of a device. 16 it into a consistent, high performance state during initialization. 26 # Do all performance testing. 71 For now we declare performance results as valid if 72 - we did not have an error before. 73 - the monitoring thread never saw temperatures above a throttle threshold 112 Warning: this risks abnormal behavior if the machine runs under high load.
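
The perf.py hits describe putting a device into a consistent, high-performance state before measuring. One common underlying mechanism on Linux is pinning the cpufreq governor; the sketch below illustrates that general approach as an assumption, not a transcription of what perf.py actually does.

```cpp
// Illustrative sketch (assumption): pin every CPU to the "performance"
// cpufreq governor, one common way to get a consistent high-performance
// state before benchmarking. Requires root; not taken from perf.py.
#include <filesystem>
#include <fstream>
#include <iostream>

int main() {
    namespace fs = std::filesystem;
    for (const auto& cpu : fs::directory_iterator("/sys/devices/system/cpu")) {
        fs::path governor = cpu.path() / "cpufreq" / "scaling_governor";
        if (!fs::exists(governor)) continue;   // skip non-cpuN entries
        std::ofstream out(governor);
        if (out) out << "performance\n";       // switch this core's governor
        else std::cerr << "could not set " << governor << '\n';
    }
    return 0;
}
```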
|
| /external/oboe/docs/ |
| D | FullGuide.md | 2 Oboe is a C++ library which makes it easy to build high-performance audio apps on Android. Apps com… 6 …ams*, represented by the class `AudioStream`. The read/write calls can be blocking or non-blocking. 19 (a built-in mic or Bluetooth headset) with the *Android device* (the phone or watch) that is runnin… 53 | :------------ | :---------- | :---- | 54 | I16 | int16_t | common 16-bit samples, [Q0.15 format](https://source.android.com/devices/audio/da… 55 | Float | float | -1.0 to +1.0 | 59 AudioFormat dataFormat = stream->getDataFormat(); 146 general, the API should be chosen by Oboe to allow for best performance and [all …]
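
The FullGuide.md hits cover `AudioStream`, blocking vs. non-blocking read/write calls, and the Float sample format. Below is a minimal sketch of that flow using Oboe's builder API; the builder options and the blocking `write()` call are based on Oboe's public C++ API and should be read as a sketch rather than a complete app.

```cpp
// Sketch: open a low-latency stereo float output stream with Oboe and push a
// buffer of silence through the blocking write() call. Error handling is
// abbreviated; option choices are illustrative.
#include <oboe/Oboe.h>
#include <memory>
#include <vector>

int main() {
    oboe::AudioStreamBuilder builder;
    builder.setDirection(oboe::Direction::Output)
           ->setFormat(oboe::AudioFormat::Float)          // -1.0 to +1.0 samples
           ->setChannelCount(2)
           ->setPerformanceMode(oboe::PerformanceMode::LowLatency);

    std::shared_ptr<oboe::AudioStream> stream;
    if (builder.openStream(stream) != oboe::Result::OK) return 1;

    stream->requestStart();
    std::vector<float> silence(2 * 192, 0.0f);             // 192 stereo frames
    stream->write(silence.data(), 192, /*timeoutNanoseconds=*/1'000'000'000);
    stream->close();
    return 0;
}
```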
|
| /external/tensorflow/tensorflow/lite/g3doc/performance/ |
| D | best_practices.md | 1 # Performance best practices 6 Lite model performance. 11 and size. If your task requires high accuracy, then you may need a large and 24 [TensorFlow Hub](https://tfhub.dev/s?deployment-format=lite) lists several other 36 has a built-in profiler that shows per operator profiling statistics. This can 37 help in understanding performance bottlenecks and which operators dominate the 66 TensorFlow Lite supports multi-threaded kernels for many operators. You can 74 Multi-threaded execution, however, comes at the cost of increased performance [all …]
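
The best_practices.md hits mention multi-threaded kernels and the built-in per-operator profiler. Here is a hedged sketch of enabling multiple threads on a `tflite::Interpreter`; the model path and thread count are placeholders.

```cpp
// Sketch: load a TFLite model and enable multi-threaded kernels via the C++
// interpreter API. "model.tflite" and the thread count are placeholders.
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

#include <memory>

int main() {
    auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
    if (!model) return 1;

    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if (!interpreter) return 1;

    interpreter->SetNumThreads(4);                       // multi-threaded kernels where supported
    if (interpreter->AllocateTensors() != kTfLiteOk) return 1;
    if (interpreter->Invoke() != kTfLiteOk) return 1;    // run one inference
    return 0;
}
```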
|
| /external/perfetto/docs/ |
| D | tracing-101.md | 2 *This page provides a bird's-eye view of performance analysis. 6 ### Performance subsection 7 Performance analysis is concerned with making software run *better*. 15 Much of the difficulty in improving performance comes from 16 identifying the root cause of performance issues. Modern software systems are 17 complicated, having a lot of components and a web of cross-interactions. 21 **Tracing** and **profiling** are two such widely-used techniques for 22 performance analysis. **Perfetto** is an open-source suite of tools, combining 31 They often include low-level kernel events like scheduler context switches, 33 performance bug is not needed as the trace provides all necessary context. [all …]
|
| /external/icu/icu4j/perf-tests/src/com/ibm/icu/dev/test/perf/ |
| D | CollationPerformanceTest.java | 5 * Copyright (C) 2002-2007, International Business Machines Corporation and * 29 + "-help Display this message.\n" 30 + "-file file_name utf-16 format file of names.\n" 31 + "-locale name ICU locale to use. Default is en_US\n" 32 + "-rules file_name Collation rules file (overrides locale)\n" 33 …//+ "-langid 0x1234 Windows Language ID number. Default to value for -locale option\n" 35 //+ "-win Run test using Windows native services. (ICU is default)\n" 36 //+ "-unix Run test using Unix strxfrm, strcoll services.\n" 37 …//+ "-uselen Use API with string lengths. Default is null-terminated strings\n" 38 + "-usekeys Run tests using sortkeys rather than strcoll\n" [all …]
|
| /external/cpuinfo/src/arm/ |
| D | uarch.c | 42 * Core information is ambiguous: some sources specify Cortex-A12, others - Cortex-A17. in cpuinfo_arm_decode_vendor_uarch() 43 * Assume it is Cortex-A12. in cpuinfo_arm_decode_vendor_uarch() 91 case 0xD0E: /* Cortex-A76AE */ in cpuinfo_arm_decode_vendor_uarch() 99 case 0xD41: /* Cortex-A78 */ in cpuinfo_arm_decode_vendor_uarch() 102 case 0xD44: /* Cortex-X1 */ in cpuinfo_arm_decode_vendor_uarch() 105 case 0xD46: /* Cortex-A510 */ in cpuinfo_arm_decode_vendor_uarch() 108 case 0xD47: /* Cortex-A710 */ in cpuinfo_arm_decode_vendor_uarch() 111 case 0xD48: /* Cortex-X2 */ in cpuinfo_arm_decode_vendor_uarch() 186 case 0xD40: /* Kirin 980 Big/Medium cores -> Cortex-A76 */ in cpuinfo_arm_decode_vendor_uarch() 243 /* Unlike Scorpion, Cortex-A5 comes with VFPv4 */ in cpuinfo_arm_decode_vendor_uarch() [all …]
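
The uarch.c hits switch on ARM part numbers such as 0xD44 (Cortex-X1) and 0xD47 (Cortex-A710). For orientation, here is a small sketch of extracting the MIDR part-number field and mapping a few of the values visible above; `uarch_name()` is a hypothetical helper, not the cpuinfo API.

```cpp
// Sketch: the CPU part field lives in MIDR bits [15:4]; map a few of the part
// numbers shown in the excerpt to core names. uarch_name() is hypothetical.
#include <cstdint>

const char* uarch_name(uint32_t midr) {
    const uint32_t part = (midr >> 4) & 0xFFF;  // MIDR part-number field
    switch (part) {
        case 0xD41: return "Cortex-A78";
        case 0xD44: return "Cortex-X1";
        case 0xD46: return "Cortex-A510";
        case 0xD47: return "Cortex-A710";
        case 0xD48: return "Cortex-X2";
        default:    return "unknown";
    }
}
```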
|
| /external/selinux/secilc/docs/ |
| D | cil_introduction.md | 4 …nguage that sits between one or more high level policy languages (such as the current module langu… 6 …high-level languages that can both consume and produce language constructs with more features than… 8 * Eases the creation of high-level languages, encouraging the creation of more domain specific poli… 10 … analysis of the output of multiple high-level languages using a single analysis tool set without … 13 ------------------ 17 …- provide rich semantics needed for cross-language interaction but not for convenience. If a featu… 19 …- provide clear, simple syntax that is easy to parse and to generate by high-level compilers, anal… 21 …- the ultimate goal of CIL is the generation of the policy that will be enforced by the kernel. Th… 23 * The only good binary file format is a non-existent one - CIL is meant for a source policy oriente… 25 * Enable backwards compatibility but don't be a slave to it - source, but not binary, compatibility… [all …]
|
| /external/rust/crates/glam/ |
| D | ARCHITECTURE.md | 3 This document describes the high-level architecture of `glam`. While `glam` is 11 * Good out of the box performance using SIMD when available 15 * High quality [rustdoc] generated document 17 [standard library]: https://doc.rust-lang.org/std/index.html 18 [API guidelines]: https://rust-lang.github.io/api-guidelines 19 [rustdoc]: https://doc.rust-lang.org/rustdoc/index.html 24 `x86_64` architectures gave better performance than using Rust's built in `f32` 29 …ing with SIMD]: https://bitshifter.github.io/2018/06/04/simd-path-tracing/#converting-vec3-to-sse2. 67 ## High level structure
|
| /external/perfetto/src/trace_processor/stdlib/common/ |
| D | cpus.sql | 1 -- 2 -- Copyright 2023 The Android Open Source Project 3 -- 4 -- Licensed under the Apache License, Version 2.0 (the "License"); 5 -- you may not use this file except in compliance with the License. 6 -- You may obtain a copy of the License at 7 -- 8 -- https://www.apache.org/licenses/LICENSE-2.0 9 -- 10 -- Unless required by applicable law or agreed to in writing, software [all …]
|
| /external/tensorflow/tensorflow/lite/delegates/gpu/ |
| D | README.md | 7 GPUs are designed to have high throughput for massively parallelizable 8 workloads. Thus, they are well-suited for deep neural nets, which consist of a 12 run fast enough and now become suitable for real-time applications if it was not 15 GPUs do their computation with 16-bit or 32-bit floating point numbers and do 16 not require quantization for optimal performance, unlike CPUs. If 26 TFLite on GPU supports the following ops in 16-bit and 32-bit float precision: 32 * `DEPTHWISE_CONV_2D v1-2` 46 * `RESIZE_BILINEAR v1-3` 56 [the documentation](https://www.tensorflow.org/lite/performance/gpu). 81 WriteToInputTensor(interpreter->typed_input_tensor<float>(0)); [all …]
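
The README.md hits end with `WriteToInputTensor(interpreter->typed_input_tensor<float>(0))`. Below is a hedged sketch of the surrounding flow: create the GPU delegate (V2 C API), attach it to the interpreter, and invoke. The header path and delegate functions reflect the TFLite GPU delegate's public API; `model.tflite` is a placeholder.

```cpp
// Sketch: run a TFLite interpreter with the GPU delegate, then fill the first
// input tensor as in the README excerpt. "model.tflite" is a placeholder.
#include "tensorflow/lite/delegates/gpu/delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

#include <memory>

int main() {
    auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if (!interpreter) return 1;

    // Attach the GPU delegate before running inference.
    TfLiteDelegate* delegate = TfLiteGpuDelegateV2Create(/*options=*/nullptr);
    if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) return 1;

    float* input = interpreter->typed_input_tensor<float>(0);  // as in the excerpt
    input[0] = 0.0f;                                           // fill with real data in practice
    if (interpreter->Invoke() != kTfLiteOk) return 1;

    TfLiteGpuDelegateV2Delete(delegate);   // clean up once inference is done
    return 0;
}
```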
|
| /external/tensorflow/tensorflow/lite/g3doc/examples/segmentation/ |
| D | overview.md | 9 The model will create a mask over the target objects with high accuracy. 11 <img src="images/segmentation.gif" class="attempt-right" /> 22 You can leverage the out-of-box API from 34 <a class="button button-primary" href="https://github.com/tensorflow/examples/tree/master/lite/exam… 37 <a class="button button-primary" href="https://github.com/tensorflow/examples/tree/master/lite/exam… 45 <a class="button button-primary" href="https://tfhub.dev/tensorflow/lite-model/deeplabv3/1/metadata… 50 _DeepLab_ is a state-of-art deep learning model for semantic image segmentation, 66 …ects at multiple scales with filters at multiple sampling rates and effective fields-of-views.</li> 67 …-level feature [5, 6] to capture longer range information. We also include batch normalization [7]… 68 …in this encoder-decoder structure one can arbitrarily control the resolution of extracted encoder … [all …]
|
| /external/oss-fuzz/projects/opensips/ |
| D | patch.diff | 1 diff --git a/Makefile.conf.template b/Makefile.conf.template 3 --- a/Makefile.conf.template 5 @@ -69,17 +69,19 @@ exclude_modules?= aaa_diameter aaa_radius auth_jwt b2b_logic_xml cachedb_cassand 9 -DEFS+= -DPKG_MALLOC #Use a faster malloc 10 -DEFS+= -DSHM_MMAP #Use mmap instead of SYSV shared memory 11 -DEFS+= -DUSE_MCAST #Compile in support for IP Multicast 12 +#DEFS+= -DPKG_MALLOC #Use a faster malloc 13 +#DEFS+= -DSHM_MMAP #Use mmap instead of SYSV shared memory 14 +#DEFS+= -DUSE_MCAST #Compile in support for IP Multicast 15 DEFS+= -DDISABLE_NAGLE #Disable the TCP NAgle Algorithm ( lower delay ) [all …]
|
| /external/rust/crates/libz-sys/ |
| D | Cargo.toml.orig | 2 name = "libz-sys" 6 license = "MIT OR Apache-2.0" 7 repository = "https://github.com/rust-lang/libz-sys" 8 description = "Low-level bindings to the system libz library (also known as zlib)." 9 categories = ["compression", "external-ffi-bindings"] 10 keywords = ["zlib", "zlib-ng"] 16 "/Cargo-zng.toml", 17 "/cargo-zng", 28 # eliminating some high-level functions like gz*, compress* and 33 [build-dependencies] [all …]
|
| /external/skia/src/core/ |
| D | SkSharedMutex.h | 4 * Use of this source code is governed by a BSD-style license that can be 22 // There are two shared lock implementations: one debug, the other high performance. They implement 24 // This is a shared lock implementation similar to pthreads rwlocks. The high performance 26 // http://preshing.com/20150316/semaphores-are-surprisingly-versatile/
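
The SkSharedMutex.h comment describes a shared (reader/writer) lock similar to pthreads rwlocks. The sketch below shows the usage pattern such a lock enables, written against `std::shared_mutex` so it is self-contained rather than against the SkSharedMutex API itself.

```cpp
// Sketch of the reader/writer pattern a shared lock supports: many concurrent
// readers, one exclusive writer. Uses std::shared_mutex for self-containment;
// this is not the SkSharedMutex interface itself.
#include <map>
#include <shared_mutex>
#include <string>

class Cache {
public:
    std::string lookup(const std::string& key) const {
        std::shared_lock<std::shared_mutex> lock(mutex_);   // many readers at once
        auto it = entries_.find(key);
        return it == entries_.end() ? std::string() : it->second;
    }
    void insert(const std::string& key, const std::string& value) {
        std::unique_lock<std::shared_mutex> lock(mutex_);   // one writer, excludes readers
        entries_[key] = value;
    }
private:
    mutable std::shared_mutex mutex_;
    std::map<std::string, std::string> entries_;
};
```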
|
| /external/fec/ |
| D | simd-viterbi.3 | 1 .TH SIMD-VITERBI 3 10 chainback_viterbi615, delete_viterbi615 -\ IA32 SIMD-assisted Viterbi decoders 53 These functions implement high performance Viterbi decoders for four 78 non-IA32, non-PPC machine, a portable C version is executed 115 decoder instance, a pointer to a user-supplied buffer into which the 120 may be unreliable. The decoded data is written in big-endian order, 121 i.e., the first bit in the frame is written into the high order bit of 140 with a total of 2012 symbols - the last 12 encoded symbols 154 are those of the NASA-JPL convention \fIwithout\fR symbol inversion. 155 The NASA-JPL convention normally inverts the first symbol. [all …]
|
| /external/rust/crates/aho-corasick/src/packed/teddy/ |
| D | runtime.rs | 8 // variants: Slim 128-bit Teddy (8 buckets), Slim 256-bit Teddy (8 buckets) 9 // and Fat 256-bit Teddy (16 buckets). For each variant, there are three 13 // while at <= len(haystack) - CHUNK_SIZE: 23 // In the code below, a "candidate" corresponds to a single vector with 8-bit 24 // lanes. Each lane is itself an 8-bit bitset, where the ith bit is set in the 31 // 4-byte vector would look like this: 63 /// like Aho-Corasick, it does find occurrences for a small set of patterns 64 /// much more quickly than Aho-Corasick. 86 /// Note that users of the aho-corasick crate cannot get this wrong. Only 101 /// All matches are consistent with the match semantics (leftmost-first or [all …]
|
| /external/cronet/base/allocator/partition_allocator/partition_alloc_base/time/ |
| D | time_win.cc | 2 // Use of this source code is governed by a BSD-style license that can be 13 // QueryPerformanceCounter is the logical choice for a high-precision timer. 16 // It's 3-4x slower than timeGetTime() on desktops, but can be 10x slower 30 // will only increase the system-wide timer if we're not running on battery 56 // From MSDN, FILETIME "Contains a 64-bit value representing the number of 57 // 100-nanosecond intervals since January 1, 1601 (UTC)." 60 // 100-nanoseconds to microseconds. This only works on little-endian in FileTimeToMicroseconds() 70 PA_DCHECK(CanConvertToFileTime(us)) << "Out-of-range: Cannot convert " << us in MicrosecondsToFileTime() 73 // Multiply by 10 to convert microseconds to 100-nanoseconds. Bit_cast will in MicrosecondsToFileTime() 74 // handle alignment problems. This only works on little-endian machines. in MicrosecondsToFileTime() [all …]
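
The time_win.cc comments explain that FILETIME holds 100-nanosecond intervals since January 1, 1601, so converting to and from microseconds is a factor-of-10 scaling. A self-contained sketch of that arithmetic follows; it is not the cronet implementation, which uses bit_cast to sidestep alignment issues.

```cpp
// Sketch of the FILETIME <-> microseconds conversion the comments describe:
// FILETIME counts 100-ns intervals since 1601-01-01 (UTC), so dividing the
// assembled 64-bit value by 10 yields microseconds (and multiplying by 10
// goes the other way). FileTime here is a stand-in for the Win32 struct.
#include <cstdint>

struct FileTime {
    uint32_t dwLowDateTime;
    uint32_t dwHighDateTime;
};

uint64_t FileTimeToMicroseconds(const FileTime& ft) {
    const uint64_t hundred_ns =
        (static_cast<uint64_t>(ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
    return hundred_ns / 10;   // 10 x 100 ns = 1 microsecond
}

FileTime MicrosecondsToFileTime(uint64_t us) {
    const uint64_t hundred_ns = us * 10;
    return FileTime{static_cast<uint32_t>(hundred_ns & 0xFFFFFFFFu),
                    static_cast<uint32_t>(hundred_ns >> 32)};
}
```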
|
| /external/python/cpython2/Doc/library/ |
| D | hotshot.rst | 2 :mod:`hotshot` --- High performance logging profiler 6 :synopsis: High performance logging profiler, mostly written in C. 15 in C, it should result in a much smaller performance impact than the existing 21 the expense of long data post-processing times. For common usage it is 46 .. _hotshot-objects: 49 --------------- 71 Profile an :keyword:`exec`\ -compatible string in the script environment. The 86 Evaluate an :keyword:`exec`\ -compatible string in a specific environment. The 101 ------------------ 125 .. _hotshot-example: [all …]
|
| /external/bcc/man/man8/ |
| D | readahead.8 | 1 .TH readahead 8 "2020-08-20" "USER COMMANDS" 3 readahead \- Show performance of read-ahead cache 5 .B readahead [-d DURATION] 7 The tool shows the performance of read-ahead caching on the system under a given load to investigat… 24 \-h 27 \-d DURATION 28 Trace the read-ahead caching system for DURATION seconds 31 Trace for 30 seconds and show histogram of page age (ms) in read-ahead cache along with unused pag… 33 .B readahead -d 30 35 The kernel functions instrumented by this program could be high-frequency depending on the profile … [all …]
|