/external/ImageMagick/coders/ |
D | exr.c |
    167  compression;  in ReadEXRImage() local
    214  image->compression=NoCompression;  in ReadEXRImage()
    215  compression=ImfHeaderCompression(hdr_info);  in ReadEXRImage()
    216  if (compression == IMF_RLE_COMPRESSION)  in ReadEXRImage()
    217  image->compression=RLECompression;  in ReadEXRImage()
    218  if (compression == IMF_ZIPS_COMPRESSION)  in ReadEXRImage()
    219  image->compression=ZipSCompression;  in ReadEXRImage()
    220  if (compression == IMF_ZIP_COMPRESSION)  in ReadEXRImage()
    221  image->compression=ZipCompression;  in ReadEXRImage()
    222  if (compression == IMF_PIZ_COMPRESSION)  in ReadEXRImage()
    [all …]
|
D | ps2.c |
    383  compression;  in WritePS2Image() local
    465  compression=image->compression;  in WritePS2Image()
    466  if (image_info->compression != UndefinedCompression)  in WritePS2Image()
    467  compression=image_info->compression;  in WritePS2Image()
    468  switch (compression)  in WritePS2Image()
    473  compression=RLECompression;  in WritePS2Image()
    619  switch (compression)  in WritePS2Image()
    697  compression == NoCompression ? "ASCII" : "Binary");  in WritePS2Image()
    726  if ((compression == FaxCompression) || (compression == Group4Compression) ||  in WritePS2Image()
    735  ((compression != FaxCompression) &&  in WritePS2Image()
    [all …]
|
D | ps3.c |
    435  Image *image,const CompressionType compression,ExceptionInfo *exception)  in WritePS3MaskImage() argument
    478  "%%%%BeginData:%13ld %s Bytes\n",0L,compression == NoCompression ?  in WritePS3MaskImage()
    487  switch (compression)  in WritePS3MaskImage()
    542  switch (compression)  in WritePS3MaskImage()
    562  if ((compression == FaxCompression) ||  in WritePS3MaskImage()
    614  compression == NoCompression ? "ASCII" : "BINARY");  in WritePS3MaskImage()
    821  compression;  in WritePS3Image() local
    894  compression=image->compression;  in WritePS3Image()
    895  if (image_info->compression != UndefinedCompression)  in WritePS3Image()
    896  compression=image_info->compression;  in WritePS3Image()
    [all …]
|
/external/grpc-grpc/doc/ |
D | compression_cookbook.md |
    5   This document describes compression as implemented by the gRPC C core. See [the
    6   full compression specification](compression.md) for details.
    10  Wrapped languages developers, for the purposes of supporting compression by
    15  1. Be able to set compression at [channel](#per-channel-settings),
    20  spec](https://github.com/grpc/grpc/blob/master/doc/compression.md#test-cases).
    31  still not symmetric between clients and servers (e.g. the [use of compression
    32  levels](https://github.com/grpc/grpc/blob/master/doc/compression.md#compression-levels-and-algorith…
    47  document](https://github.com/grpc/grpc/blob/master/doc/compression.md#compression-levels-and-algori…
    48  compression _levels_ are the primary mechanism for compression selection _at the
    52  As of this writing (Q2 2016), clients can only specify compression _algorithms_.
    [all …]
|
D | compression.md |
    10  compression supported by gRPC acts _at the individual message level_, taking
    14  The implementation supports different compression algorithms. A _default
    15  compression level_, to be used in the absence of message-specific settings, MAY
    18  The ability to control compression settings per call and to enable/disable
    19  compression on a per message basis MAY be used to prevent CRIME/BEAST attacks.
    20  It also allows for asymmetric compression communication, whereby a response MAY
    26  appropriate API method. There are two scenarios where compression MAY be
    29  + At channel creation time, which sets the channel default compression and
    30  therefore the compression that SHALL be used in the absence of per-RPC
    31  compression configuration.
    [all …]
|
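The compression.md excerpt above describes compression that acts at the individual message level, with a per-message enable/disable switch. As a minimal sketch of that pattern, using Python's stdlib `zlib` as a stand-in codec (gRPC negotiates the actual algorithm on the wire; the helper names here are hypothetical, not gRPC API):

```python
import zlib

def compress_message(payload: bytes, enabled: bool = True) -> tuple:
    """Hypothetical helper: compress one message, returning (wire_bytes,
    compressed_flag) in the spirit of a per-message compressed bit."""
    if not enabled:
        return payload, False
    compressed = zlib.compress(payload, 6)
    if len(compressed) >= len(payload):
        # Per-message opt-out: ship raw bytes when compression does not pay.
        return payload, False
    return compressed, True

def decompress_message(payload: bytes, compressed_flag: bool) -> bytes:
    return zlib.decompress(payload) if compressed_flag else payload

msg = b"status=OK;" * 200
wire, flag = compress_message(msg)
assert flag and len(wire) < len(msg)
assert decompress_message(wire, flag) == msg
```

Because the decision is taken per message, a stream can mix compressed and uncompressed messages, which is what makes the asymmetric client/server configurations described above possible.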
/external/tensorflow/tensorflow/core/data/ |
D | snapshot_utils_test.cc |
    85   SnapshotRoundTrip(io::compression::kNone, 1);  in TEST()
    86   SnapshotRoundTrip(io::compression::kGzip, 1);  in TEST()
    87   SnapshotRoundTrip(io::compression::kSnappy, 1);  in TEST()
    89   SnapshotRoundTrip(io::compression::kNone, 2);  in TEST()
    90   SnapshotRoundTrip(io::compression::kGzip, 2);  in TEST()
    91   SnapshotRoundTrip(io::compression::kSnappy, 2);  in TEST()
    125  SnapshotReaderBenchmarkLoop(state, io::compression::kNone, 1);  in SnapshotCustomReaderNoneBenchmark()
    129  SnapshotReaderBenchmarkLoop(state, io::compression::kGzip, 1);  in SnapshotCustomReaderGzipBenchmark()
    133  SnapshotReaderBenchmarkLoop(state, io::compression::kSnappy, 1);  in SnapshotCustomReaderSnappyBenchmark()
    137  SnapshotReaderBenchmarkLoop(state, io::compression::kNone, 2);  in SnapshotTFRecordReaderNoneBenchmark()
    [all …]
|
/external/skia/tests/ |
D | CompressedBackendAllocationTest.cpp |
    41   SkImage::CompressionType compression =  in create_image() local
    44   SkAlphaType at = SkCompressionTypeIsOpaque(compression) ? kOpaque_SkAlphaType  in create_image()
    147  SkImage::CompressionType compression,  in test_compressed_color_init() argument
    161  check_compressed_mipmaps(dContext, img, compression, expectedColors, mipMapped,  in test_compressed_color_init()
    163  check_readback(dContext, img, compression, color, reporter, "solid readback");  in test_compressed_color_init()
    177  check_compressed_mipmaps(dContext, img, compression, expectedNewColors, mipMapped, reporter,  in test_compressed_color_init()
    179  check_readback(dContext, std::move(img), compression, newColor, reporter, "solid readback");  in test_compressed_color_init()
    185  static std::unique_ptr<const char[]> make_compressed_data(SkImage::CompressionType compression,  in make_compressed_data() argument
    197  size_t dataSize = SkCompressedDataSize(compression, dimensions, &mipMapOffsets,  in make_compressed_data()
    204  GrFillInCompressedData(compression, dimensions,  in make_compressed_data()
    [all …]
|
/external/tensorflow/tensorflow/python/data/experimental/ops/ |
D | io.py |
    49   compression=None,  argument
    121  save_dataset = _SaveDataset(dataset, path, shard_func, compression)
    143  compression=compression,
    151  def __init__(self, dataset, path, shard_func, compression):  argument
    160  compression=compression,
    203  def __init__(self, path, element_spec=None, compression=None,  argument
    223  self._compression = compression
    234  compression=compression,
    248  def load(path, element_spec=None, compression=None, reader_func=None):  argument
    310  compression=compression,
|
D | data_service_ops.py |
    376  compression="AUTO",  argument
    435  if compression not in valid_compressions:
    438  compression, valid_compressions))
    439  if compression == COMPRESSION_AUTO and data_transfer_protocol is not None:
    440  compression = COMPRESSION_NONE
    442  dataset_id = _register_dataset(service, dataset, compression=compression)
    454  compression=compression,
    468  compression="AUTO",  argument
    699  compression=compression,
    703  def _register_dataset(service, dataset, compression):  argument
    [all …]
|
D | snapshot.py |
    39   compression=None,  argument
    53   self._compression = compression if compression is not None else ""
    84   compression=self._compression,
    106  compression=None,  argument
    177  compression=compression,
    196  def snapshot(path, compression="AUTO", reader_func=None, shard_func=None):  argument
    277  compression=compression,
|
/external/skia/src/gpu/mock/ |
D | GrMockCaps.h |
    44   SkImage::CompressionType compression = format.asMockCompressionType();  in isFormatSRGB() local
    45   if (compression != SkImage::CompressionType::kNone) {  in isFormatSRGB()
    54   SkImage::CompressionType compression = format.asMockCompressionType();  in isFormatTexturable() local
    55   if (compression != SkImage::CompressionType::kNone) {  in isFormatTexturable()
    56   return fOptions.fCompressedOptions[(int)compression].fTexturable;  in isFormatTexturable()
    90   SkImage::CompressionType compression = format.asMockCompressionType();  in getRenderTargetSampleCount() local
    91   if (compression != SkImage::CompressionType::kNone) {  in getRenderTargetSampleCount()
    111  SkImage::CompressionType compression = format.asMockCompressionType();  in maxRenderTargetSampleCount() local
    112  if (compression != SkImage::CompressionType::kNone) {  in maxRenderTargetSampleCount()
    164  SkImage::CompressionType compression = format.asMockCompressionType();  in onAreColorTypeAndFormatCompatible() local
    [all …]
|
/external/zstd/programs/ |
D | zstd.1.md |
    18   `zstd` is a fast lossless compression algorithm and data compression tool,
    21   `zstd` offers highly configurable compression speed,
    23   and strong modes nearing lzma compression ratios.
    92   Benchmark file(s) using compression level #
    104  `#` compression level \[1-19] (default: 3)
    106  unlocks high compression levels 20+ (maximum 22), using a lot more memory.
    109  switch to ultra-fast compression levels.
    111  The higher the value, the faster the compression speed,
    112  at the cost of some compression ratio.
    113  This setting overwrites compression level if one was set previously.
    [all …]
|
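The zstd.1.md excerpt above is about the level dial: a bigger `#` trades speed for ratio, while `--fast` trades ratio for speed. zstd itself is not in the Python stdlib, so as a self-contained illustration of the same tradeoff, here is a sketch using `zlib`, whose levels 1-9 stand in for zstd's 1-19:

```python
import zlib

# Mildly redundant input: varied records sharing repeated structure.
data = b"".join(b"record-%d: status=ok value=%d\n" % (i % 100, i * i)
                for i in range(2000))

fast = zlib.compress(data, 1)   # analogous to a low level: quick, larger output
dense = zlib.compress(data, 9)  # analogous to a high level: slower, smaller output

assert len(dense) <= len(fast) < len(data)  # higher level does not lose ratio here
assert zlib.decompress(dense) == data       # lossless at every level
```

The decompression side is level-agnostic in both codecs, which is why a single decompressor handles archives produced at any level.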
D | README.md |
    72   - __ZSTD_NOCOMPRESS__ : `zstd` cli will be compiled without support for compression.
    111  which can be loaded before compression and decompression.
    113  Using a dictionary, the compression ratio achievable on small data improves dramatically.
    114  These compression gains are achieved while simultaneously providing faster compression and decompre…
    117  Dictionary gains are mostly effective in the first few KB. Then, the compression algorithm
    128  CLI includes in-memory compression benchmark module for zstd.
    134  The benchmark measures ratio, compressed size, compression and decompression speed.
    135  One can select compression levels starting from `-b` and ending with `-e`.
    148  -# : # compression level (1-19, default: 3)
    150  -D DICT: use DICT as Dictionary for compression or decompression
    [all …]
|
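Line 113 of the README above notes that a dictionary dramatically improves the ratio achievable on small data. zstd's dictionary API is not in the Python stdlib, but `zlib`'s `zdict` parameter demonstrates the same idea: a tiny payload that shares structure with a preset dictionary compresses far better than it can on its own. The JSON-ish payload and dictionary below are made up for illustration:

```python
import zlib

# Preset "dictionary": bytes likely to recur across the small payloads.
zdict = b'{"status": "ok", "user_id": , "timestamp": }'
payload = b'{"status": "ok", "user_id": 42, "timestamp": 1700000000}'

plain = zlib.compress(payload, 9)

co = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, zdict=zdict)
with_dict = co.compress(payload) + co.flush()

# The compressor can back-reference the dictionary, so the shared
# structure costs almost nothing in the output.
assert len(with_dict) < len(plain)

do = zlib.decompressobj(zdict=zdict)
assert do.decompress(with_dict) == payload
```

As with `zstd -D DICT`, both sides must hold the identical dictionary, which is why it is loaded before compression and decompression alike.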
/external/lzma/DOC/ |
D | lzma-sdk.txt |
    6    use 7z / LZMA / LZMA2 / XZ compression.
    8    LZMA is an improved version of famous LZ77 compression algorithm.
    9    It was improved in way of maximum increasing of compression ratio,
    13   LZMA2 is a LZMA based compression method. LZMA2 provides better
    14   multithreading support for compression than LZMA and some other improvements.
    16   7z is a file format for data compression and file archiving.
    17   7z is a main file format for 7-Zip compression program (www.7-zip.org).
    18   7z format supports different compression methods: LZMA, LZMA2 and others.
    21   XZ is a file format for data compression that uses LZMA2 compression.
    23   improved compression ratio, splitting to blocks and streams,
    [all …]
|
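The lzma-sdk.txt excerpt above relates LZMA, LZMA2, and the XZ container. Python's stdlib `lzma` module wraps liblzma, so both formats can be exercised directly; a small round-trip sketch:

```python
import lzma

data = b"LZMA SDK sample payload " * 64

# .xz container: uses the LZMA2 filter by default, as described above.
xz_blob = lzma.compress(data, format=lzma.FORMAT_XZ, preset=6)
assert xz_blob[:6] == b"\xfd7zXZ\x00"   # xz magic bytes
assert lzma.decompress(xz_blob) == data

# Legacy .lzma ("alone") format: the original single-stream LZMA.
alone_blob = lzma.compress(data, format=lzma.FORMAT_ALONE)
assert lzma.decompress(alone_blob, format=lzma.FORMAT_ALONE) == data
```

The 7z archive format itself is out of scope for `lzma` (it is a container with its own metadata), but the underlying streams are the same codecs.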
/external/lz4/programs/ |
D | lz4.1.md |
    21   `lz4` is an extremely fast lossless compression algorithm,
    22   based on **byte-aligned LZ77** family of compression scheme.
    23   `lz4` offers compression speeds of 400 MB/s per core, linearly scalable with
    36   * `lz4 file.lz4` will default to decompression (use `-z` to force compression)
    39   during compression or decompression of a single file
    61   on successful compression or decompression, using `--rm` command.
    100  `-z` can also be used to force compression of an already compressed
    114  Benchmark mode, using `#` compression level.
    124  Higher values trade compression speed for compression ratio.
    126  Recommended values are 1 for fast compression (default),
    [all …]
|
D | README.md |
    25   CLI includes in-memory compression benchmark module for lz4.
    30   The benchmark measures ratio, compressed size, compression and decompression speed.
    31   One can select compression levels starting from `-b` and ending with `-e`.
    45   -1 : Fast compression (default)
    46   -9 : High compression
    48   -z : force compression
    52   --rm : remove source file(s) after successful de/compression
    63   -l : compress using Legacy format (Linux kernel compression)
    66   -BD : Block dependency (improve compression ratio)
    72   --fast[=#]: switch to ultra fast compression level (default: 1)
    [all …]
|
/external/libwebsockets/lib/roles/http/compression/ |
D | README.md |
    1    HTTP compression
    4    This directory contains generic compression transforms that can be applied to
    7    The compression transforms expose an "ops" type struct and a compressor name
    9    ./private-lib-roles-http-compression.h.
    11   Because the compression transform depends on being able to send on its output
    13   `wsi->buflist_comp` that represents pre-compression transform data
    14   ("input data" from the perspective of the compression transform) that was
|
/external/squashfs-tools/RELEASE-READMEs/ |
D | README-4.3 |
    8    there are substantial improvements to stability, new compression options
    22   2. GZIP compressor now supports compression options, allowing different
    23   compression levels to be used.
    25   3. Rewritten LZO compressor with compression options, allowing different
    26   LZO algorithms and different compression levels to be used.
    38   7. The -stat option in Unsquashfs now displays the compression options
    40   the compression algorithm used.
    65   New compression options and compressors are now supported.
    70   -Xcompression-level <compression-level>
    71   <compression-level> should be 1 .. 9 (default 9)
    [all …]
|
/external/tensorflow/tensorflow/python/data/experimental/kernel_tests/ |
D | io_test.py |
    58   combinations.combine(compression=[None, "GZIP"])))
    59   def testBasic(self, compression):  argument
    61   io.save(dataset, self._test_dir, compression=compression)
    63   self._test_dir, dataset.element_spec, compression=compression)
    98   combinations.combine(compression=[None, "GZIP"])))
    99   def testSaveInsideFunction(self, compression):  argument
    105  io.save(dataset, self._test_dir, compression=compression)
    109  self._test_dir, dataset.element_spec, compression=compression)
    158  dataset=dataset, path=self._save_dir, shard_func=None, compression=None)
|
/external/zstd/contrib/premake/ |
D | zstd.lua |
    4    function project_zstd(dir, compression, decompression, deprecated, dictbuilder, legacy)
    5    if compression == nil then compression = true end
    12   if not compression then
    32   if compression then
|
/external/zstd/examples/ |
D | README.md |
    4    - [Simple compression](simple_compression.c) :
    10   Only compatible with simple compression.
    14   - [Multiple simple compression](multiple_simple_compression.c) :
    24   - [Streaming compression](streaming_compression.c) :
    28   - [Multiple Streaming compression](multiple_streaming_compression.c) :
    35   Compatible with both simple and streaming compression.
    39   - [Dictionary compression](dictionary_compression.c) :
|
/external/zstd/ |
D | CHANGELOG |
    2    perf: rebalanced compression levels, to better match the intended speed/level curve, by @senhuang42
    7    perf: faster mid-level compression speed in presence of highly repetitive patterns, by @senhuang42
    8    perf: minor compression ratio improvements for small data at high levels, by @cyan4973
    10   perf: faster compression speed on incompressible data, by @bindhvo
    35   perf: Significant speed improvements for middle compression levels (#2494, @senhuang42 @terrelln)
    36   perf: Block splitter to improve compression ratio, enabled by default for high compression levels (…
    38   perf: Reduced stack usage during compression and decompression entropy stage (#2522 #2524, @terrell…
    43   bug: Ensure `ZSTD_estimateCCtxSize*()` monotonically increases with compression level (#2538, @senh…
    46   bug: Fix superblock compression divide by zero bug (#2592, @senhuang42)
    105  perf: stronger --long mode at high compression levels, by @senhuang42
    [all …]
|
D | README.md |
    3    __Zstandard__, or `zstd` as short version, is a fast lossless compression algorithm,
    4    targeting real-time compression scenarios at zlib-level and better compression ratios.
    34   For reference, several fast compression algorithms were tested and compared
    39   on the [Silesia compression corpus].
    42   [Silesia compression corpus]: http://sun.aei.polsl.pl/~sdeor/index.php?page=silesia
    62   The negative compression levels, specified with `--fast=#`,
    63   offer faster compression and decompression speed
    64   at the cost of compression ratio (compared to level 1).
    66   Zstd can also offer stronger compression ratios at the cost of compression speed.
    69   a property shared by most LZ compression algorithms, such as [zlib] or lzma.
    [all …]
|
/external/python/cpython3/Lib/test/ |
D | test_zipfile.py |
    61   def make_test_archive(self, f, compression, compresslevel=None):  argument
    62   kwargs = {'compression': compression, 'compresslevel': compresslevel}
    72   def zip_test(self, f, compression, compresslevel=None):  argument
    73   self.make_test_archive(f, compression, compresslevel)
    76   with zipfile.ZipFile(f, "r", compression) as zipfp:
    128  self.zip_test(f, self.compression)
    130  def zip_open_test(self, f, compression):  argument
    131  self.make_test_archive(f, compression)
    134  with zipfile.ZipFile(f, "r", compression) as zipfp:
    156  self.zip_open_test(f, self.compression)
    [all …]
|
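The test_zipfile.py matches above exercise `zipfile`'s `compression` and `compresslevel` arguments the same way application code would; a condensed version of the round trip those tests perform:

```python
import io
import zipfile

buf = io.BytesIO()
# compression selects the codec; compresslevel tunes it (0-9 for ZIP_DEFLATED).
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED,
                     compresslevel=9) as zf:
    zf.writestr("hello.txt", "hello " * 100)

with zipfile.ZipFile(buf, "r") as zf:
    assert zf.read("hello.txt") == b"hello " * 100
    assert zf.getinfo("hello.txt").compress_type == zipfile.ZIP_DEFLATED
```

`compresslevel` was added in Python 3.7; on a per-member basis it can also be overridden via `ZipFile.writestr`'s own `compress_type`/`compresslevel` parameters.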
/external/brotli/c/tools/ |
D | brotli.md |
    13   `brotli` is a generic-purpose lossless compression algorithm that compresses
    15   coding and 2-nd order context modeling, with a compression ratio comparable to
    16   the best currently available general-purpose compression methods. It is similar
    17   in speed with deflate but offers more dense compression.
    33   * default mode is compression;
    59   compression level (0-9); bigger values cause denser, but slower compression
    77   compression level (0-11); bigger values cause denser, but slower compression
    92   use best compression level (default); same as "`-q 11`"
|