zstd(1) -- zstd, zstdmt, unzstd, zstdcat - Compress or decompress .zst files
============================================================================

SYNOPSIS
--------

`zstd` [<OPTIONS>] [-|<INPUT-FILE>] [-o <OUTPUT-FILE>]

`zstdmt` is equivalent to `zstd -T0`

`unzstd` is equivalent to `zstd -d`

`zstdcat` is equivalent to `zstd -dcf`

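As a quick illustration of the synopsis, a minimal round trip (the file name is hypothetical):

```shell
# Compress a file, then stream it back out; `zstd -dcf` is what the
# `zstdcat` convenience name expands to.
printf 'hello zstd\n' > file.txt   # hypothetical input file
zstd -q file.txt                   # writes file.txt.zst, keeps file.txt
zstd -dcf file.txt.zst             # prints: hello zstd
```
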
DESCRIPTION
-----------
`zstd` is a fast lossless compression algorithm and data compression tool,
with command line syntax similar to `gzip`(1) and `xz`(1).
It is based on the **LZ77** family, with further FSE & huff0 entropy stages.
`zstd` offers highly configurable compression speed,
from fast modes at > 200 MB/s per core,
to strong modes with excellent compression ratios.
It also features a very fast decoder, with speeds > 500 MB/s per core,
which remains roughly stable at all compression settings.

`zstd` command line syntax is generally similar to gzip,
but features the following few differences:

  - Source files are preserved by default.
    It's possible to remove them automatically by using the `--rm` command.
  - When compressing a single file, `zstd` displays progress notifications
    and a result summary by default.
    Use `-q` to turn them off.
  - `zstd` displays a short help page when the command line is invalid.
    Use `-q` to turn it off.
  - `zstd` does not accept input from the console,
    though it does accept `stdin` when it's not the console.
  - `zstd` does not store the input's filename or attributes, only its contents.

`zstd` processes each _file_ according to the selected operation mode.
If no _files_ are given or _file_ is `-`, `zstd` reads from standard input
and writes the processed data to standard output.
`zstd` will refuse to write compressed data to standard output
if it is a terminal: it will display an error message and skip the file.
Similarly, `zstd` will refuse to read compressed data from standard input
if it is a terminal.

Unless `--stdout` or `-o` is specified, _files_ are written to a new file
whose name is derived from the source _file_ name:

* When compressing, the suffix `.zst` is appended to the source filename to
  get the target filename.
* When decompressing, the `.zst` suffix is removed from the source filename to
  get the target filename.

### Concatenation with .zst Files
It is possible to concatenate multiple `.zst` files. `zstd` will decompress
such an agglomerated file as if it were a single `.zst` file.

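This behavior can be sketched with two small files (file names hypothetical):

```shell
# Two independently compressed frames, concatenated, decompress as one stream.
printf 'part one\n' > a.txt
printf 'part two\n' > b.txt
zstd -q a.txt b.txt                  # creates a.txt.zst and b.txt.zst
cat a.txt.zst b.txt.zst > both.zst   # plain byte-wise concatenation
zstd -dcf both.zst                   # prints both parts, in order
```
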
OPTIONS
-------

### Integer Suffixes and Special Values

In most places where an integer argument is expected,
an optional suffix is supported to easily indicate large integers.
There must be no space between the integer and the suffix.

* `KiB`:
    Multiply the integer by 1,024 (2\^10).
    `Ki`, `K`, and `KB` are accepted as synonyms for `KiB`.
* `MiB`:
    Multiply the integer by 1,048,576 (2\^20).
    `Mi`, `M`, and `MB` are accepted as synonyms for `MiB`.

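The suffix arithmetic can be sketched in plain shell (no `zstd` invocation needed):

```shell
# "64KiB" is parsed as 64*1024 bytes and "2MiB" as 2*1048576 bytes,
# e.g. in arguments such as --memory=64KiB or -B2MiB.
echo $((64 * 1024))       # 65536
echo $((2 * 1048576))     # 2097152
```
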
### Operation Mode

If multiple operation mode options are given,
the last one takes effect.

* `-z`, `--compress`:
    Compress.
    This is the default operation mode when no operation mode option is specified
    and no other operation mode is implied from the command name
    (for example, `unzstd` implies `--decompress`).
* `-d`, `--decompress`, `--uncompress`:
    Decompress.
* `-t`, `--test`:
    Test the integrity of compressed _files_.
    This option is equivalent to `--decompress --stdout > /dev/null`:
    decompressed data is discarded and checksummed for errors.
    No files are created or removed.
* `-b#`:
    Benchmark file(s) using compression level _#_.
    See _BENCHMARK_ below for a description of this operation.
* `--train FILES`:
    Use _FILES_ as a training set to create a dictionary.
    The training set should contain a lot of small files (> 100).
    See _DICTIONARY BUILDER_ below for a description of this operation.
* `-l`, `--list`:
    Display information related to a zstd compressed file, such as size, ratio, and checksum.
    Some of these fields may not be available.
    This command's output can be augmented with the `-v` modifier.

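A short sketch of the non-default modes (the file name is hypothetical):

```shell
printf 'payload\n' > f.txt
zstd -q f.txt                 # default mode: compress, creates f.txt.zst
zstd -q -t f.txt.zst          # --test: verify integrity, no files created
zstd -l f.txt.zst             # --list: show size, ratio, checksum info
```
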
### Operation Modifiers

* `-#`:
    selects `#` compression level \[1-19\] (default: 3).
    Higher compression levels *generally* produce a higher compression ratio at the expense of speed and memory.
    A rough rule of thumb is that compression speed is expected to be divided by 2 every 2 levels.
    Technically, each level is mapped to a set of advanced parameters (that can also be modified individually, see below).
    Because the compressor's behavior highly depends on the content to compress, there's no guarantee of a smooth progression from one level to another.
* `--ultra`:
    unlocks high compression levels 20+ (maximum 22), using a lot more memory.
    Note that decompression will also require more memory when using these levels.
* `--fast[=#]`:
    switch to ultra-fast compression levels.
    If `=#` is not present, it defaults to `1`.
    The higher the value, the faster the compression speed,
    at the cost of some compression ratio.
    This setting overwrites the compression level if one was set previously.
    Similarly, if a compression level is set after `--fast`, it overrides it.
* `-T#`, `--threads=#`:
    Compress using `#` working threads (default: 1).
    If `#` is 0, attempt to detect and use the number of physical CPU cores.
    In all cases, the number of threads is capped to `ZSTDMT_NBWORKERS_MAX`,
    which is either 64 in 32-bit mode, or 256 for 64-bit environments.
    This modifier does nothing if `zstd` is compiled without multithread support.
* `--single-thread`:
    Use a single thread for both I/O and compression.
    As compression is serialized with I/O, this can be slightly slower.
    Single-thread mode features significantly lower memory usage,
    which can be useful for systems with a limited amount of memory, such as 32-bit systems.

    Note 1: this mode is the only one available when multithread support is disabled.

    Note 2: this mode is different from `-T1`, which spawns 1 compression thread in parallel with I/O.
    The final compressed result is also slightly different from `-T1`.
* `--auto-threads={physical,logical}` (default: `physical`):
    When using a default amount of threads via `-T0`, choose the default based on the number
    of detected physical or logical cores.
* `--adapt[=min=#,max=#]`:
    `zstd` will dynamically adapt the compression level to perceived I/O conditions.
    Compression level adaptation can be observed live by using the `-v` command.
    Adaptation can be constrained between supplied `min` and `max` levels.
    The feature works when combined with multi-threading and `--long` mode.
    It does not work with `--single-thread`.
    It sets the window size to 8 MiB by default (this can be changed manually, see `wlog`).
    Due to the chaotic nature of dynamic adaptation, the compressed result is not reproducible.

    _Note_: at the time of this writing, `--adapt` can remain stuck at low speed
    when combined with multiple worker threads (>=2).
* `--long[=#]`:
    enables long distance matching with `#` `windowLog`; if `#` is not
    present, it defaults to `27`.
    This increases the window size (`windowLog`) and memory usage for both the
    compressor and decompressor.
    This setting is designed to improve the compression ratio for files with
    long matches at a large distance.

    Note: If `windowLog` is set to larger than 27, `--long=windowLog` or
    `--memory=windowSize` needs to be passed to the decompressor.
* `--max`:
    set advanced parameters to maximum compression.
    Warning: this setting is very slow and uses a lot of resources.
    It's inappropriate for 32-bit mode and therefore disabled in that mode.
* `-D DICT`:
    use `DICT` as the dictionary to compress or decompress FILE(s).
* `--patch-from FILE`:
    Specify the file to be used as a reference point for zstd's diff engine.
    This is effectively dictionary compression with some convenient parameter
    selection, namely that _windowSize_ > _srcSize_.

    Note: cannot use both this and `-D` together.

    Note: `--long` mode will be automatically activated if _chainLog_ < _fileLog_
        (_fileLog_ being the _windowLog_ required to cover the whole file). You
        can also manually force it.

    Note: up to level 15, you can use `--patch-from` in `--single-thread` mode
        to improve compression ratio marginally at the cost of speed. Using
        `--single-thread` above level 15 will lead to lower compression
        ratios.

    Note: for level 19, you can get an increased compression ratio at the cost
        of speed by specifying `--zstd=targetLength=` to be something large
        (i.e. 4096), and by setting a large `--zstd=chainLog=`.
* `--rsyncable`:
    `zstd` will periodically synchronize the compression state to make the
    compressed file more rsync-friendly.
    There is a negligible impact to compression ratio,
    and a potential impact to compression speed, perceptible at higher speeds,
    for example when combining `--rsyncable` with many parallel worker threads.
    This feature does not work with `--single-thread`. You probably don't want
    to use it with long range mode, since it will decrease the effectiveness of
    the synchronization points, but your mileage may vary.
* `-C`, `--[no-]check`:
    add an integrity check computed from uncompressed data (default: enabled).
* `--[no-]content-size`:
    enable / disable whether the original size of the file is placed in
    the header of the compressed file. The default option is
    `--content-size` (meaning that the original size will be placed in the header).
* `--no-dictID`:
    do not store the dictionary ID within the frame header (dictionary compression).
    The decoder will have to rely on implicit knowledge about which dictionary to use;
    it won't be able to check if it's correct.
* `-M#`, `--memory=#`:
    Set a memory usage limit. By default, `zstd` uses 128 MiB for decompression
    as the maximum amount of memory the decompressor is allowed to use, but you can
    override this manually if need be in either direction (i.e. you can increase or
    decrease it).

    This is also used during compression with `--patch-from=`. In this case,
    this parameter overrides the maximum size allowed for a dictionary (128 MiB).

    Additionally, this can be used to limit memory for dictionary training. This parameter
    overrides the default limit of 2 GiB. zstd will load training samples up to the memory limit
    and ignore the rest.
* `--stream-size=#`:
    Sets the pledged source size of input coming from a stream. This value must be exact, as it
    will be included in the produced frame header. Incorrect stream sizes will cause an error.
    This information will be used to better optimize compression parameters, resulting in
    better and potentially faster compression, especially for smaller source sizes.
* `--size-hint=#`:
    When handling input from a stream, `zstd` must guess how large the source size
    will be when optimizing compression parameters. If the stream size is relatively
    small, this guess may be a poor one, resulting in a higher compression ratio than
    expected. This feature allows for controlling the guess when needed.
    Exact guesses result in better compression ratios. Overestimates result in slightly
    degraded compression ratios, while underestimates may result in significant degradation.
* `--target-compressed-block-size=#`:
    Attempt to produce compressed blocks of approximately this size.
    This will split larger blocks in order to approach this target.
    This feature is notably useful for improved latency, when the receiver can leverage receiving early incomplete data.
    This parameter defines a loose target: compressed blocks will target this size "on average", but individual blocks can still be larger or smaller.
    Enabling this feature can decrease compression speed by up to ~10% at level 1.
    Higher levels will see a smaller relative speed regression, becoming invisible at higher settings.
* `-f`, `--force`:
    disable input and output checks. Allows overwriting existing files, input
    from the console, output to stdout, operating on links, block devices, etc.
    During decompression, and when the output destination is stdout, pass through
    unrecognized formats as-is.
* `-c`, `--stdout`:
    write to standard output (even if it is the console); keep original files (disables `--rm`).
* `-o FILE`:
    save the result into `FILE`.
    Note that this operation is in conflict with `-c`.
    If both operations are present on the command line, the last expressed one wins.
* `--[no-]sparse`:
    enable / disable sparse FS support,
    to make files with many zeroes smaller on disk.
    Creating sparse files may save disk space and speed up decompression by
    reducing the amount of disk I/O.
    Default: enabled when output is into a file,
    and disabled when output is stdout.
    This setting overrides the default and can force sparse mode over stdout.
* `--[no-]pass-through`:
    enable / disable passing through uncompressed files as-is. During
    decompression when pass-through is enabled, unrecognized formats will be
    copied as-is from the input to the output. By default, pass-through will
    occur when the output destination is stdout and the force (`-f`) option is
    set.
* `--rm`:
    remove source file(s) after successful compression or decompression.
    This command is silently ignored if output is `stdout`.
    If used in combination with `-o`,
    it triggers a confirmation prompt (which can be silenced with `-f`), as this is a destructive operation.
* `-k`, `--keep`:
    keep source file(s) after successful compression or decompression.
    This is the default behavior.
* `-r`:
    operate recursively on directories.
    It selects all files in the named directory and all its subdirectories.
    This can be useful both to reduce command line typing,
    and to circumvent shell expansion limitations,
    when there are a lot of files and naming them would exceed the maximum size of a command line.
* `--filelist FILE`:
    read a list of files to process as content from `FILE`.
    The format is compatible with `ls` output, with one file per line.
* `--output-dir-flat DIR`:
    resulting files are stored into the target `DIR` directory,
    instead of the same directory as the origin file.
    Be aware that this command can introduce name collision issues,
    if multiple files, from different directories, end up having the same name.
    Collision resolution ensures that the first file with a given name will be present in `DIR`,
    while in combination with `-f`, the last file will be present instead.
* `--output-dir-mirror DIR`:
    similar to `--output-dir-flat`,
    the output files are stored underneath the target `DIR` directory,
    but this option will replicate the input directory hierarchy into the output `DIR`.

    If the input directory contains "..", the files in this directory will be ignored.
    If the input directory is an absolute directory (i.e. "/var/tmp/abc"),
    it will be stored into "output-dir/var/tmp/abc".
    If there are multiple input files or directories,
    name collision resolution will follow the same rules as `--output-dir-flat`.
* `--format=FORMAT`:
    compress and decompress in other formats. If compiled with
    support, zstd can compress to or decompress from other compression algorithm
    formats. Possibly available options are `zstd`, `gzip`, `xz`, `lzma`, and `lz4`.
    If no such format is provided, `zstd` is the default.
* `-h`/`-H`, `--help`:
    display help/long help and exit.
* `-V`, `--version`:
    display the version number and immediately exit.
    Note that, since it exits, flags specified after `-V` are effectively ignored.
    Advanced: `-vV` also displays supported formats.
    `-vvV` also displays POSIX support.
    `-qV` will only display the version number, suitable for machine reading.
* `-v`, `--verbose`:
    verbose mode, display more information.
* `-q`, `--quiet`:
    suppress warnings, interactivity, and notifications.
    Specify twice to suppress errors too.
* `--no-progress`:
    do not display the progress bar, but keep all other messages.
* `--show-default-cparams`:
    shows the default compression parameters that will be used for a particular input file, based on the provided compression level and the input size.
    If the provided file is not a regular file (e.g. a pipe), this flag will output the parameters used for inputs of unknown size.
* `--exclude-compressed`:
    only compress files that are not already compressed.
* `--`:
    All arguments after `--` are treated as files.

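For example, combining a few of the modifiers above: a `--long=28` round trip, where the option is repeated on the decompression side as the note above requires once the frame's window exceeds 2\^27 (file names hypothetical):

```shell
head -c 262144 /dev/urandom > big.bin          # hypothetical sample input
zstd -q --long=28 big.bin -o big.bin.zst       # window log raised above 27
zstd -q -d --long=28 big.bin.zst -o round.bin  # or: --memory=256MiB
cmp big.bin round.bin && echo identical
```

(For this small input the frame's actual window stays below the 128 MiB default, so repeating `--long` is harmless here; it becomes mandatory for large inputs.)
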

### gzip Operation Modifiers
When invoked via a `gzip` symlink, `zstd` will support further
options that intend to mimic the `gzip` behavior:

* `-n`, `--no-name`:
    do not store the original filename and timestamps when compressing
    a file. This is the default behavior and hence a no-op.
* `--best`:
    alias to the option `-9`.

### Environment Variables
Employing environment variables to set parameters has security implications.
Therefore, this avenue is intentionally limited.
Only `ZSTD_CLEVEL` and `ZSTD_NBTHREADS` are currently supported.
They set the default compression level and number of threads to use during compression, respectively.

`ZSTD_CLEVEL` can be used to set the level between 1 and 19 (the "normal" range).
If the value of `ZSTD_CLEVEL` is not a valid integer, it will be ignored with a warning message.
`ZSTD_CLEVEL` just replaces the default compression level (`3`).

`ZSTD_NBTHREADS` can be used to set the number of threads `zstd` will attempt to use during compression.
If the value of `ZSTD_NBTHREADS` is not a valid unsigned integer, it will be ignored with a warning message.
`ZSTD_NBTHREADS` has a default value of `max(1, min(4, nbCores/4))`, and is capped at `ZSTDMT_NBWORKERS_MAX==200`.
`zstd` must be compiled with multithread support for this variable to have any effect.

They can both be overridden by corresponding command line arguments:
`-#` for compression level and `-T#` for number of compression threads.

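A sketch of the precedence rule (the sample file is hypothetical):

```shell
printf 'sample data\n' > s.txt
ZSTD_CLEVEL=19 zstd -q -c s.txt > a.zst     # compressed at level 19
ZSTD_CLEVEL=19 zstd -q -c -5 s.txt > b.zst  # -5 on the command line wins
zstd -dcf a.zst                             # prints: sample data
```
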

ADVANCED COMPRESSION OPTIONS
----------------------------
`zstd` provides 22 predefined regular compression levels plus the fast levels.
A compression level is translated internally into multiple advanced parameters that control the behavior of the compressor
(one can observe the result of this translation with `--show-default-cparams`).
These advanced parameters can be overridden using advanced compression options.

### --zstd[=options]:
The _options_ are provided as a comma-separated list.
You may specify only the options you want to change and the rest will be
taken from the selected or default compression level.
The list of available _options_:

- `strategy`=_strat_, `strat`=_strat_:
    Specify a strategy used by the match finder.

    There are 9 strategies numbered from 1 to 9, from fastest to strongest:
    1=`ZSTD_fast`, 2=`ZSTD_dfast`, 3=`ZSTD_greedy`,
    4=`ZSTD_lazy`, 5=`ZSTD_lazy2`, 6=`ZSTD_btlazy2`,
    7=`ZSTD_btopt`, 8=`ZSTD_btultra`, 9=`ZSTD_btultra2`.

- `windowLog`=_wlog_, `wlog`=_wlog_:
    Specify the maximum number of bits for a match distance.

    A higher number of bits increases the chance to find a match, which usually
    improves the compression ratio.
    It also increases memory requirements for the compressor and decompressor.
    The minimum _wlog_ is 10 (1 KiB) and the maximum is 30 (1 GiB) on 32-bit
    platforms and 31 (2 GiB) on 64-bit platforms.

    Note: If `windowLog` is set to larger than 27, `--long=windowLog` or
    `--memory=windowSize` needs to be passed to the decompressor.

- `hashLog`=_hlog_, `hlog`=_hlog_:
    Specify the maximum number of bits for a hash table.

    Bigger hash tables cause fewer collisions, which usually makes compression
    faster, but requires more memory during compression.

    The minimum _hlog_ is 6 (64 entries / 256 B) and the maximum is 30 (1B entries / 4 GiB).

- `chainLog`=_clog_, `clog`=_clog_:
    Specify the maximum number of bits for the secondary search structure,
    whose form depends on the selected `strategy`.

    A higher number of bits increases the chance to find a match, which usually
    improves the compression ratio.
    It also slows down compression speed and increases memory requirements for
    compression.
    This option is ignored for the `ZSTD_fast` `strategy`, which only has the primary hash table.

    The minimum _clog_ is 6 (64 entries / 256 B) and the maximum is 29 (512M entries / 2 GiB) on 32-bit platforms
    and 30 (1B entries / 4 GiB) on 64-bit platforms.

- `searchLog`=_slog_, `slog`=_slog_:
    Specify the maximum number of searches in a hash chain or a binary tree
    using a logarithmic scale.

    More searches increase the chance to find a match, which usually increases
    the compression ratio but decreases compression speed.

    The minimum _slog_ is 1 and the maximum is `windowLog` - 1.

- `minMatch`=_mml_, `mml`=_mml_:
    Specify the minimum searched length of a match in a hash table.

    Larger search lengths usually decrease compression ratio but improve
    decompression speed.

    The minimum _mml_ is 3 and the maximum is 7.

- `targetLength`=_tlen_, `tlen`=_tlen_:
    The impact of this field varies depending on the selected strategy.

    For `ZSTD_btopt`, `ZSTD_btultra` and `ZSTD_btultra2`, it specifies
    the minimum match length that causes the match finder to stop searching.
    A larger `targetLength` usually improves compression ratio
    but decreases compression speed.

    For `ZSTD_fast`, it triggers ultra-fast mode when > 0.
    The value represents the amount of data skipped between match sampling.
    Impact is reversed: a larger `targetLength` increases compression speed
    but decreases compression ratio.

    For all other strategies, this field has no impact.

    The minimum _tlen_ is 0 and the maximum is 128 KiB.

- `overlapLog`=_ovlog_, `ovlog`=_ovlog_:
    Determine `overlapSize`, the amount of data reloaded from the previous job.
    This parameter is only available when multithreading is enabled.
    Reloading more data improves compression ratio, but decreases speed.

    The minimum _ovlog_ is 0, and the maximum is 9.
    1 means "no overlap", hence completely independent jobs.
    9 means "full overlap", meaning up to `windowSize` is reloaded from the previous job.
    Reducing _ovlog_ by 1 reduces the reloaded amount by a factor of 2.
    For example, 8 means "windowSize/2", and 6 means "windowSize/8".
    Value 0 is special and means "default": _ovlog_ is automatically determined by `zstd`.
    In that case, _ovlog_ will range from 6 to 9, depending on the selected _strat_.

- `ldmHashRateLog`=_lhrlog_, `lhrlog`=_lhrlog_:
    Specify the frequency of inserting entries into the long distance matching
    hash table.

    This option is ignored unless long distance matching is enabled.

    Larger values will improve compression speed. Deviating far from the
    default value will likely result in a decrease in compression ratio.

    The default value varies between 4 and 7, depending on `strategy`.

- `ldmHashLog`=_lhlog_, `lhlog`=_lhlog_:
    Specify the maximum size for a hash table used for long distance matching.

    This option is ignored unless long distance matching is enabled.

    Bigger hash tables usually improve compression ratio at the expense of more
    memory during compression and a decrease in compression speed.

    The minimum _lhlog_ is 6 and the maximum is 30 (default: `windowLog - ldmHashRateLog`).

- `ldmMinMatch`=_lmml_, `lmml`=_lmml_:
    Specify the minimum searched length of a match for long distance matching.

    This option is ignored unless long distance matching is enabled.

    Larger or very small values usually decrease compression ratio.

    The minimum _lmml_ is 4 and the maximum is 4096 (default: 32 to 64, depending on `strategy`).

- `ldmBucketSizeLog`=_lblog_, `lblog`=_lblog_:
    Specify the size of each bucket for the hash table used for long distance
    matching.

    This option is ignored unless long distance matching is enabled.

    Larger bucket sizes improve collision resolution but decrease compression
    speed.

    The minimum _lblog_ is 1 and the maximum is 8 (default: 4 to 8, depending on `strategy`).


### Example
The following parameters set advanced compression options to something
similar to predefined level 19 for files bigger than 256 KB:

`--zstd=wlog=23,clog=23,hlog=22,slog=6,mml=3,tlen=48,strat=6`

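Used on an actual command line, the override above might look like this (the input file name is hypothetical):

```shell
printf 'example data\n' > in.txt
# Per-parameter overrides replace the corresponding values of the preset level.
zstd -q --zstd=wlog=23,clog=23,hlog=22,slog=6,mml=3,tlen=48,strat=6 in.txt -o in.zst
zstd -dcf in.zst    # prints: example data
```
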
### -B#:
Specify the size of each compression job.
This parameter is only available when multi-threading is enabled.
Each compression job is run in parallel, so this value indirectly impacts the number of active threads.
The default job size varies depending on compression level (generally `4 * windowSize`).
`-B#` makes it possible to manually select a custom size.
Note that the job size must respect a minimum value which is enforced transparently.
This minimum is either 512 KB, or `overlapSize`, whichever is largest.
Different job sizes will lead to non-identical compressed frames.


DICTIONARY BUILDER
------------------
`zstd` offers _dictionary_ compression,
which greatly improves efficiency on small files and messages.
It's possible to train `zstd` with a set of samples,
the result of which is saved into a file called a `dictionary`.
Then, during compression and decompression, reference the same dictionary
using the command `-D dictionaryFileName`.
Compression of small files similar to the sample set will be greatly improved.

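A complete train-then-use sketch (all file and directory names are hypothetical; `--maxdict` is kept small here so the tiny generated sample set trains cleanly):

```shell
mkdir -p samples
# Generate many small, structurally similar samples to train on.
for i in $(seq 1 300); do
    seq 1 50 | sed "s/^/user=$i status=active line=/" > "samples/s$i"
done
zstd -q --train samples/* --maxdict=4096 -o dict
zstd -q -D dict samples/s1 -o s1.zst     # compress with the dictionary
zstd -q -D dict -d s1.zst -o s1.out      # the same dictionary is needed to decompress
cmp samples/s1 s1.out && echo ok
```
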
* `--train FILEs`:
    Use FILEs as a training set to create a dictionary.
    The training set should ideally contain a lot of samples (> 100),
    and weigh typically 100x the target dictionary size
    (for example, ~10 MB for a 100 KB dictionary).
    `--train` can be combined with `-r` to indicate a directory rather than listing all the files,
    which can be useful to circumvent shell expansion limits.

    Since dictionary compression is mostly effective for small files,
    the expectation is that the training set will only contain small files.
    In the case where some samples happen to be large,
    only the first 128 KiB of these samples will be used for training.

    `--train` supports multithreading if `zstd` is compiled with threading support (default).
    Additional advanced parameters can be specified with `--train-fastcover`.
    The legacy dictionary builder can be accessed with `--train-legacy`.
    The slower cover dictionary builder can be accessed with `--train-cover`.
    The default `--train` is equivalent to `--train-fastcover=d=8,steps=4`.

* `-o FILE`:
    Dictionary saved into `FILE` (default name: dictionary).
* `--maxdict=#`:
    Limit dictionary to specified size (default: 112640 bytes).
    As usual, quantities are expressed in bytes by default,
    and it's possible to employ suffixes (like `KB` or `MB`)
    to specify larger values.
* `-#`:
    Use `#` compression level during training (optional).
    Will generate statistics more tuned for the selected compression level,
    resulting in a _small_ compression ratio improvement for this level.
* `-B#`:
    Split input files into blocks of size # (default: no split).
* `-M#`, `--memory=#`:
    Limit the amount of sample data loaded for training (default: 2 GB).
    Note that the default (2 GB) is also the maximum.
    This parameter can be useful in situations where the training set size
    is not well controlled and could be potentially very large.
    Since the speed of the training process is directly correlated to
    the size of the training sample set,
    a smaller sample set leads to faster training.

    In situations where the training set is larger than maximum memory,
    the CLI will randomly select samples among the available ones,
    up to the maximum allowed memory budget.
    This is meant to improve dictionary relevance
    by mitigating the potential impact of clustering,
    such as selecting only files from the beginning of a list
    sorted by modification date, or sorted by alphabetical order.
    The randomization process is deterministic, so
    training of the same list of files with the same parameters
    will lead to the creation of the same dictionary.

* `--dictID=#`:
    A dictionary ID is a locally unique ID.
    The decoder will use this value to verify it is using the right dictionary.
    By default, zstd will create a 4-byte random number ID.
    It's possible to provide an explicit number ID instead.
    It's up to the dictionary manager not to assign the same ID to
    2 different dictionaries.
    Note that short numbers have an advantage:
    an ID < 256 will only need 1 byte in the compressed frame header,
    and an ID < 65536 will only need 2 bytes.
    This compares favorably to the default of 4 bytes.

    Note that RFC8878 reserves IDs less than 32768 and greater than or equal to 2\^31, so they should not be used in public.

* `--train-cover[=k=#,d=#,steps=#,split=#,shrink[=#]]`:
    Select parameters for the default dictionary builder algorithm, named cover.
    If _d_ is not specified, then it tries _d_ = 6 and _d_ = 8.
    If _k_ is not specified, then it tries _steps_ values in the range [50, 2000].
    If _steps_ is not specified, then the default value of 40 is used.
    If _split_ is not specified or _split_ <= 0, then the default value of 100 is used.
    Requires that _d_ <= _k_.
    If the _shrink_ flag is not used, then the default value for _shrinkDict_ of 0 is used.
    If _shrink_ is not specified, then the default value for _shrinkDictMaxRegression_ of 1 is used.

    Selects segments of size _k_ with the highest score to put in the dictionary.
    The score of a segment is computed by the sum of the frequencies of all the
    subsegments of size _d_.
    Generally _d_ should be in the range [6, 8], occasionally up to 16, but the
    algorithm will run faster with _d_ <= 8.
    Good values for _k_ vary widely based on the input data, but a safe range is
    [2 * _d_, 2000].
    If _split_ is 100, all input samples are used for both training and testing
    to find the optimal _d_ and _k_ to build the dictionary.
    Supports multithreading if `zstd` is compiled with threading support.
    With _shrink_ enabled, it takes a truncated dictionary of minimum size and doubles
    its size until the compression ratio of the truncated dictionary is at most
    _shrinkDictMaxRegression_% worse than the compression ratio of the largest dictionary.

    Examples:

    `zstd --train-cover FILEs`

    `zstd --train-cover=k=50,d=8 FILEs`

    `zstd --train-cover=d=8,steps=500 FILEs`

    `zstd --train-cover=k=50 FILEs`

    `zstd --train-cover=k=50,split=60 FILEs`

    `zstd --train-cover=shrink FILEs`

    `zstd --train-cover=shrink=2 FILEs`

* `--train-fastcover[=k=#,d=#,f=#,steps=#,split=#,accel=#]`:
    Same as cover but with extra parameters _f_ and _accel_, and a different default value of _split_.
    If _split_ is not specified, then it tries _split_ = 75.
    If _f_ is not specified, then it tries _f_ = 20.
    Requires that 0 < _f_ < 32.
    If _accel_ is not specified, then it tries _accel_ = 1.
    Requires that 0 < _accel_ <= 10.
    Requires that _d_ = 6 or _d_ = 8.

    _f_ is the log2 of the size of the array that keeps track of the frequency of subsegments of size _d_.
    A subsegment is hashed to an index in the range [0, 2^_f_ - 1].
    It is possible for two different subsegments to be hashed to the same index, in which case they are counted as the same subsegment when computing frequencies.
    Using a higher _f_ reduces collisions but takes longer.

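    As a quick sketch of the size/accuracy trade-off implied by _f_ (pure shell arithmetic, not an invocation of `zstd`):

    ```shell
    # The frequency array holds 2^f buckets; the default f=20
    # therefore tracks subsegment counts in about one million buckets.
    f=20
    buckets=$(( 1 << f ))
    echo "$buckets"   # 1048576
    ```
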
    Examples:

    `zstd --train-fastcover FILEs`

    `zstd --train-fastcover=d=8,f=15,accel=2 FILEs`

* `--train-legacy[=selectivity=#]`:
    Use the legacy dictionary builder algorithm with the given dictionary
    _selectivity_ (default: 9).
    The smaller the _selectivity_ value, the denser the dictionary,
    improving its efficiency but reducing its achievable maximum size.
    `--train-legacy=s=#` is also accepted.

    Examples:

    `zstd --train-legacy FILEs`

    `zstd --train-legacy=selectivity=8 FILEs`


BENCHMARK
---------
The `zstd` CLI provides a benchmarking mode that can be used to easily find suitable compression parameters, or alternatively to benchmark a computer's performance.
`zstd -b [FILE(s)]` will benchmark `zstd` for both compression and decompression using the default compression level.
Note that results are very dependent on the content being compressed.

It's possible to pass multiple files to the benchmark, and even a directory with `-r DIRECTORY`.
When no `FILE` is provided, the benchmark will use a procedurally generated `lorem ipsum` text.

Benchmarking will employ `max(1, min(4, nbCores/4))` worker threads by default, in order to match the behavior of the normal CLI I/O.
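
That default can be reproduced with a small shell sketch (the `getconf` call is one of several ways to count cores, and an assumption here):

```shell
# Default benchmark worker count: max(1, min(4, nbCores/4))
nbCores=$(getconf _NPROCESSORS_ONLN)          # logical core count
workers=$(( nbCores / 4 ))
if [ "$workers" -gt 4 ]; then workers=4; fi   # cap at 4 workers
if [ "$workers" -lt 1 ]; then workers=1; fi   # always at least 1
echo "default benchmark workers: $workers"
```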

* `-b#`:
    benchmark file(s) using compression level #
* `-e#`:
    benchmark file(s) using multiple compression levels, from `-b#` to `-e#` (inclusive)
* `-d`:
    benchmark decompression speed only (requires zstd-compressed input)
* `-i#`:
    minimum evaluation time, in seconds (default: 3s), benchmark mode only
* `-B#`, `--block-size=#`:
    cut file(s) into independent chunks of size # (default: no chunking)
* `-S`:
    output one benchmark result per input file (default: consolidated result)
* `-D dictionary`:
    benchmark using the specified dictionary
* `--priority=rt`:
    set process priority to real-time (Windows)

Beyond compression levels, benchmarking is also compatible with other parameters, such as the number of threads (`-T#`), advanced compression parameters (`--zstd=###`), dictionary compression (`-D dictionary`), or disabling checksum verification, for example.
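
For instance (the file name `FILE` is a placeholder; `-i0` shortens each measurement for a quick, less accurate estimate):

```shell
# Compare levels 1 through 3 with 2 worker threads on FILE,
# keeping each measurement short via -i0 for a quick estimate.
zstd -b1 -e3 -T2 -i0 FILE
```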

**Output Format:** CompressionLevel#Filename: InputSize -> OutputSize (CompressionRatio), CompressionSpeed, DecompressionSpeed

**Methodology:** For speed measurement, the entire input is compressed/decompressed in-memory. A run lasts at least 1 second, so when files are small, they are compressed/decompressed several times per run, in order to improve measurement accuracy.


SEE ALSO
--------
`zstdgrep`(1), `zstdless`(1), `gzip`(1), `xz`(1)

The <zstandard> format is specified in Y. Collet, "Zstandard Compression and the 'application/zstd' Media Type", https://www.ietf.org/rfc/rfc8878.txt, Internet RFC 8878 (February 2021).

BUGS
----
Report bugs at: https://github.com/facebook/zstd/issues

AUTHOR
------
Yann Collet