1\input texinfo @c -*- texinfo -*-
2@documentencoding UTF-8
3
4@settitle ffmpeg Documentation
5@titlepage
6@center @titlefont{ffmpeg Documentation}
7@end titlepage
8
9@top
10
11@contents
12
13@chapter Synopsis
14
15ffmpeg [@var{global_options}] @{[@var{input_file_options}] -i @file{input_url}@} ... @{[@var{output_file_options}] @file{output_url}@} ...
16
17@chapter Description
18@c man begin DESCRIPTION
19
20@command{ffmpeg} is a very fast video and audio converter that can also grab from
21a live audio/video source. It can also convert between arbitrary sample
22rates and resize video on the fly with a high quality polyphase filter.
23
24@command{ffmpeg} reads from an arbitrary number of input "files" (which can be regular
25files, pipes, network streams, grabbing devices, etc.), specified by the
26@code{-i} option, and writes to an arbitrary number of output "files", which are
27specified by a plain output url. Anything found on the command line which
28cannot be interpreted as an option is considered to be an output url.
29
30Each input or output url can, in principle, contain any number of streams of
31different types (video/audio/subtitle/attachment/data). The allowed number and/or
32types of streams may be limited by the container format. Selecting which
33streams from which inputs will go into which output is either done automatically
34or with the @code{-map} option (see the Stream selection chapter).
35
36To refer to input files in options, you must use their indices (0-based). E.g.
37the first input file is @code{0}, the second is @code{1}, etc. Similarly, streams
38within a file are referred to by their indices. E.g. @code{2:3} refers to the
39fourth stream in the third input file. Also see the Stream specifiers chapter.
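
For instance, a minimal sketch of index-based mapping (the file names here are
placeholders): copy the second stream of the first input and the first stream of
the second input into a single output:
@example
ffmpeg -i first.mkv -i second.mkv -map 0:1 -map 1:0 -c copy out.mkv
@end example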
40
41As a general rule, options are applied to the next specified
42file. Therefore, order is important, and you can have the same
43option on the command line multiple times. Each occurrence is
44then applied to the next input or output file.
Exceptions to this rule are the global options (e.g. verbosity level),
46which should be specified first.
47
48Do not mix input and output files -- first specify all input files, then all
49output files. Also do not mix options which belong to different files. All
50options apply ONLY to the next input or output file and are reset between files.
51
52@itemize
53@item
54To set the video bitrate of the output file to 64 kbit/s:
55@example
56ffmpeg -i input.avi -b:v 64k -bufsize 64k output.avi
57@end example
58
59@item
60To force the frame rate of the output file to 24 fps:
61@example
62ffmpeg -i input.avi -r 24 output.avi
63@end example
64
65@item
66To force the frame rate of the input file (valid for raw formats only)
67to 1 fps and the frame rate of the output file to 24 fps:
68@example
69ffmpeg -r 1 -i input.m2v -r 24 output.avi
70@end example
71@end itemize
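
Because options are reset between files, the same option can be given a different
value for each output. A minimal sketch (the bitrates and file names are only
illustrative) that encodes the same input twice, at two different video bitrates:
@example
ffmpeg -i input.avi -b:v 64k low.avi -b:v 256k high.avi
@end example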
72
73The format option may be needed for raw input files.
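
For example, a sketch of reading headerless raw video, where the format, pixel
format, frame size and frame rate all have to be supplied on the command line
(the parameter values here are only illustrative):
@example
ffmpeg -f rawvideo -pixel_format yuv420p -video_size 640x480 -framerate 25 -i input.yuv output.mp4
@end example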
74
75@c man end DESCRIPTION
76
77@chapter Detailed description
78@c man begin DETAILED DESCRIPTION
79
80The transcoding process in @command{ffmpeg} for each output can be described by
81the following diagram:
82
83@verbatim
84 _______              ______________
85|       |            |              |
86| input |  demuxer   | encoded data |   decoder
87| file  | ---------> | packets      | -----+
88|_______|            |______________|      |
89                                           v
90                                       _________
91                                      |         |
92                                      | decoded |
93                                      | frames  |
94                                      |_________|
95 ________             ______________       |
96|        |           |              |      |
97| output | <-------- | encoded data | <----+
98| file   |   muxer   | packets      |   encoder
99|________|           |______________|
100
101
102@end verbatim
103
104@command{ffmpeg} calls the libavformat library (containing demuxers) to read
105input files and get packets containing encoded data from them. When there are
106multiple input files, @command{ffmpeg} tries to keep them synchronized by
tracking the lowest timestamp on any active input stream.
108
109Encoded packets are then passed to the decoder (unless streamcopy is selected
110for the stream, see further for a description). The decoder produces
111uncompressed frames (raw video/PCM audio/...) which can be processed further by
112filtering (see next section). After filtering, the frames are passed to the
113encoder, which encodes them and outputs encoded packets. Finally those are
114passed to the muxer, which writes the encoded packets to the output file.
115
116@section Filtering
117Before encoding, @command{ffmpeg} can process raw audio and video frames using
118filters from the libavfilter library. Several chained filters form a filter
119graph. @command{ffmpeg} distinguishes between two types of filtergraphs:
120simple and complex.
121
122@subsection Simple filtergraphs
123Simple filtergraphs are those that have exactly one input and output, both of
124the same type. In the above diagram they can be represented by simply inserting
125an additional step between decoding and encoding:
126
127@verbatim
128 _________                        ______________
129|         |                      |              |
130| decoded |                      | encoded data |
131| frames  |\                   _ | packets      |
132|_________| \                  /||______________|
133             \   __________   /
134  simple     _\||          | /  encoder
135  filtergraph   | filtered |/
136                | frames   |
137                |__________|
138
139@end verbatim
140
141Simple filtergraphs are configured with the per-stream @option{-filter} option
142(with @option{-vf} and @option{-af} aliases for video and audio respectively).
143A simple filtergraph for video can look for example like this:
144
145@verbatim
146 _______        _____________        _______        ________
147|       |      |             |      |       |      |        |
148| input | ---> | deinterlace | ---> | scale | ---> | output |
149|_______|      |_____________|      |_______|      |________|
150
151@end verbatim
152
Note that some filters change frame properties but not frame contents. E.g. the
@code{fps} filter changes the number of frames, but does not
touch the frame contents. Another example is the @code{setpts} filter, which
only sets timestamps and otherwise passes the frames unchanged.
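
As an illustration, a minimal sketch of the deinterlace-then-scale chain from the
diagram above, using the @code{yadif} and @code{scale} filters (the file names are
placeholders):
@example
ffmpeg -i input.mp4 -vf "yadif,scale=1280:720" output.mp4
@end example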
157
158@subsection Complex filtergraphs
159Complex filtergraphs are those which cannot be described as simply a linear
160processing chain applied to one stream. This is the case, for example, when the graph has
161more than one input and/or output, or when output stream type is different from
162input. They can be represented with the following diagram:
163
164@verbatim
165 _________
166|         |
167| input 0 |\                    __________
168|_________| \                  |          |
169             \   _________    /| output 0 |
170              \ |         |  / |__________|
171 _________     \| complex | /
172|         |     |         |/
173| input 1 |---->| filter  |\
174|_________|     |         | \   __________
175               /| graph   |  \ |          |
176              / |         |   \| output 1 |
177 _________   /  |_________|    |__________|
178|         | /
179| input 2 |/
180|_________|
181
182@end verbatim
183
184Complex filtergraphs are configured with the @option{-filter_complex} option.
185Note that this option is global, since a complex filtergraph, by its nature,
186cannot be unambiguously associated with a single stream or file.
187
188The @option{-lavfi} option is equivalent to @option{-filter_complex}.
189
190A trivial example of a complex filtergraph is the @code{overlay} filter, which
191has two video inputs and one video output, containing one video overlaid on top
192of the other. Its audio counterpart is the @code{amix} filter.
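
For instance, a minimal sketch that overlays a second video input on top of the
first (the input names are hypothetical):
@example
ffmpeg -i main.mp4 -i logo.png -filter_complex "[0:v][1:v]overlay=10:10" -c:a copy out.mp4
@end example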
193
194@section Stream copy
195Stream copy is a mode selected by supplying the @code{copy} parameter to the
196@option{-codec} option. It makes @command{ffmpeg} omit the decoding and encoding
197step for the specified stream, so it does only demuxing and muxing. It is useful
198for changing the container format or modifying container-level metadata. The
199diagram above will, in this case, simplify to this:
200
201@verbatim
202 _______              ______________            ________
203|       |            |              |          |        |
204| input |  demuxer   | encoded data |  muxer   | output |
205| file  | ---------> | packets      | -------> | file   |
206|_______|            |______________|          |________|
207
208@end verbatim
209
210Since there is no decoding or encoding, it is very fast and there is no quality
loss. However, it might not work in some cases, for example when the output
container format does not support the codec of the copied stream. Applying
filters is obviously also impossible, since filters work on uncompressed data.
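
For example, a sketch of remuxing from Matroska to MP4 without re-encoding (this
only works if the MP4 muxer accepts the codecs of the copied streams):
@example
ffmpeg -i input.mkv -c copy output.mp4
@end example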
213
214@c man end DETAILED DESCRIPTION
215
216@chapter Stream selection
217@c man begin STREAM SELECTION
218
219@command{ffmpeg} provides the @code{-map} option for manual control of stream selection in each
220output file. Users can skip @code{-map} and let ffmpeg perform automatic stream selection as
221described below. The @code{-vn / -an / -sn / -dn} options can be used to skip inclusion of
222video, audio, subtitle and data streams respectively, whether manually mapped or automatically
223selected, except for those streams which are outputs of complex filtergraphs.
224
225@section Description
226The sub-sections that follow describe the various rules that are involved in stream selection.
227The examples that follow next show how these rules are applied in practice.
228
229While every effort is made to accurately reflect the behavior of the program, FFmpeg is under
230continuous development and the code may have changed since the time of this writing.
231
232@subsection Automatic stream selection
233
234In the absence of any map options for a particular output file, ffmpeg inspects the output
235format to check which type of streams can be included in it, viz. video, audio and/or
236subtitles. For each acceptable stream type, ffmpeg will pick one stream, when available,
237from among all the inputs.
238
239It will select that stream based upon the following criteria:
240@itemize
241@item
242for video, it is the stream with the highest resolution,
243@item
244for audio, it is the stream with the most channels,
245@item
for subtitles, it is the first subtitle stream found, with a caveat:
the output format's default subtitle encoder can be either text-based or image-based,
and only a subtitle stream of the same type will be chosen.
249@end itemize
250
251In the case where several streams of the same type rate equally, the stream with the lowest
252index is chosen.
253
254Data or attachment streams are not automatically selected and can only be included
255using @code{-map}.
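
For example, a sketch that explicitly maps the video, audio and data streams of a
(hypothetical) MPEG-TS input, since the data streams would otherwise be dropped:
@example
ffmpeg -i input.ts -map 0:v -map 0:a -map 0:d -c copy output.ts
@end example
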
256@subsection Manual stream selection
257
258When @code{-map} is used, only user-mapped streams are included in that output file,
259with one possible exception for filtergraph outputs described below.
260
261@subsection Complex filtergraphs
262
263If there are any complex filtergraph output streams with unlabeled pads, they will be added
264to the first output file. This will lead to a fatal error if the stream type is not supported
265by the output format. In the absence of the map option, the inclusion of these streams leads
266to the automatic stream selection of their types being skipped. If map options are present,
267these filtergraph streams are included in addition to the mapped streams.
268
Complex filtergraph output streams with labeled pads must be mapped exactly once.
270
271@subsection Stream handling
272
273Stream handling is independent of stream selection, with an exception for subtitles described
274below. Stream handling is set via the @code{-codec} option addressed to streams within a
275specific @emph{output} file. In particular, codec options are applied by ffmpeg after the
276stream selection process and thus do not influence the latter. If no @code{-codec} option is
277specified for a stream type, ffmpeg will select the default encoder registered by the output
278file muxer.
279
280An exception exists for subtitles. If a subtitle encoder is specified for an output file, the
281first subtitle stream found of any type, text or image, will be included. ffmpeg does not validate
282if the specified encoder can convert the selected stream or if the converted stream is acceptable
283within the output format. This applies generally as well: when the user sets an encoder manually,
284the stream selection process cannot check if the encoded stream can be muxed into the output file.
285If it cannot, ffmpeg will abort and @emph{all} output files will fail to be processed.
286
287@section Examples
288
289The following examples illustrate the behavior, quirks and limitations of ffmpeg's stream
290selection methods.
291
292They assume the following three input files.
293
294@verbatim
295
296input file 'A.avi'
297      stream 0: video 640x360
298      stream 1: audio 2 channels
299
300input file 'B.mp4'
301      stream 0: video 1920x1080
302      stream 1: audio 2 channels
303      stream 2: subtitles (text)
304      stream 3: audio 5.1 channels
305      stream 4: subtitles (text)
306
307input file 'C.mkv'
308      stream 0: video 1280x720
309      stream 1: audio 2 channels
310      stream 2: subtitles (image)
311@end verbatim
312
313@subsubheading Example: automatic stream selection
314@example
315ffmpeg -i A.avi -i B.mp4 out1.mkv out2.wav -map 1:a -c:a copy out3.mov
316@end example
317There are three output files specified, and for the first two, no @code{-map} options
318are set, so ffmpeg will select streams for these two files automatically.
319
320@file{out1.mkv} is a Matroska container file and accepts video, audio and subtitle streams,
321so ffmpeg will try to select one of each type.@*
322For video, it will select @code{stream 0} from @file{B.mp4}, which has the highest
323resolution among all the input video streams.@*
324For audio, it will select @code{stream 3} from @file{B.mp4}, since it has the greatest
325number of channels.@*
326For subtitles, it will select @code{stream 2} from @file{B.mp4}, which is the first subtitle
327stream from among @file{A.avi} and @file{B.mp4}.
328
329@file{out2.wav} accepts only audio streams, so only @code{stream 3} from @file{B.mp4} is
330selected.
331
332For @file{out3.mov}, since a @code{-map} option is set, no automatic stream selection will
333occur. The @code{-map 1:a} option will select all audio streams from the second input
334@file{B.mp4}. No other streams will be included in this output file.
335
336For the first two outputs, all included streams will be transcoded. The encoders chosen will
337be the default ones registered by each output format, which may not match the codec of the
338selected input streams.
339
For the third output, the codec option for audio streams has been set
to @code{copy}, so no decoding, filtering or encoding operations will occur, nor @emph{can} they occur.
Packets of the selected streams are simply conveyed from the input file and muxed into the output
file.
344
345@subsubheading Example: automatic subtitles selection
346@example
347ffmpeg -i C.mkv out1.mkv -c:s dvdsub -an out2.mkv
348@end example
349Although @file{out1.mkv} is a Matroska container file which accepts subtitle streams, only a
350video and audio stream shall be selected. The subtitle stream of @file{C.mkv} is image-based
351and the default subtitle encoder of the Matroska muxer is text-based, so a transcode operation
352for the subtitles is expected to fail and hence the stream isn't selected. However, in
353@file{out2.mkv}, a subtitle encoder is specified in the command and so, the subtitle stream is
354selected, in addition to the video stream. The presence of @code{-an} disables audio stream
355selection for @file{out2.mkv}.
356
357@subsubheading Example: unlabeled filtergraph outputs
358@example
359ffmpeg -i A.avi -i C.mkv -i B.mp4 -filter_complex "overlay" out1.mp4 out2.srt
360@end example
A filtergraph is set up here using the @code{-filter_complex} option and consists of a single
362video filter. The @code{overlay} filter requires exactly two video inputs, but none are
363specified, so the first two available video streams are used, those of @file{A.avi} and
364@file{C.mkv}. The output pad of the filter has no label and so is sent to the first output file
365@file{out1.mp4}. Due to this, automatic selection of the video stream is skipped, which would
have selected the stream in @file{B.mp4}. The audio stream with the most channels, viz. @code{stream 3}
in @file{B.mp4}, is chosen automatically. No subtitle stream is chosen however, since the MP4
368format has no default subtitle encoder registered, and the user hasn't specified a subtitle encoder.
369
370The 2nd output file, @file{out2.srt}, only accepts text-based subtitle streams. So, even though
371the first subtitle stream available belongs to @file{C.mkv}, it is image-based and hence skipped.
372The selected stream, @code{stream 2} in @file{B.mp4}, is the first text-based subtitle stream.
373
374@subsubheading Example: labeled filtergraph outputs
375@example
376ffmpeg -i A.avi -i B.mp4 -i C.mkv -filter_complex "[1:v]hue=s=0[outv];overlay;aresample" \
377       -map '[outv]' -an        out1.mp4 \
378                                out2.mkv \
379       -map '[outv]' -map 1:a:0 out3.mkv
380@end example
381
382The above command will fail, as the output pad labelled @code{[outv]} has been mapped twice.
383None of the output files shall be processed.
384
385@example
386ffmpeg -i A.avi -i B.mp4 -i C.mkv -filter_complex "[1:v]hue=s=0[outv];overlay;aresample" \
387       -an        out1.mp4 \
388                  out2.mkv \
389       -map 1:a:0 out3.mkv
390@end example
391
The command above will also fail, as the hue filter output has a label, @code{[outv]},
but hasn't been mapped anywhere.
394
The command should be modified as follows:
396@example
397ffmpeg -i A.avi -i B.mp4 -i C.mkv -filter_complex "[1:v]hue=s=0,split=2[outv1][outv2];overlay;aresample" \
398        -map '[outv1]' -an        out1.mp4 \
399                                  out2.mkv \
400        -map '[outv2]' -map 1:a:0 out3.mkv
401@end example
The video stream from @file{B.mp4} is sent to the hue filter, whose output is cloned once using
the split filter, and both outputs are labelled. Then one copy each is mapped to the first and third
output files.
405
406The overlay filter, requiring two video inputs, uses the first two unused video streams. Those
407are the streams from @file{A.avi} and @file{C.mkv}. The overlay output isn't labelled, so it is
408sent to the first output file @file{out1.mp4}, regardless of the presence of the @code{-map} option.
409
410The aresample filter is sent the first unused audio stream, that of @file{A.avi}. Since this filter
411output is also unlabelled, it too is mapped to the first output file. The presence of @code{-an}
412only suppresses automatic or manual stream selection of audio streams, not outputs sent from
413filtergraphs. Both these mapped streams shall be ordered before the mapped stream in @file{out1.mp4}.
414
415The video, audio and subtitle streams mapped to @code{out2.mkv} are entirely determined by
416automatic stream selection.
417
418@file{out3.mkv} consists of the cloned video output from the hue filter and the first audio
419stream from @file{B.mp4}.
420@*
421
422@c man end STREAM SELECTION
423
424@chapter Options
425@c man begin OPTIONS
426
427@include fftools-common-opts.texi
428
429@section Main options
430
431@table @option
432
433@item -f @var{fmt} (@emph{input/output})
Force input or output file format. The format is normally auto-detected for input
435files and guessed from the file extension for output files, so this option is not
436needed in most cases.
437
438@item -i @var{url} (@emph{input})
439input file url
440
441@item -y (@emph{global})
442Overwrite output files without asking.
443
444@item -n (@emph{global})
445Do not overwrite output files, and exit immediately if a specified
446output file already exists.
447
448@item -stream_loop @var{number} (@emph{input})
Set the number of times the input stream shall be looped. Loop 0 means no loop,
450loop -1 means infinite loop.
451
452@item -c[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
453@itemx -codec[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
454Select an encoder (when used before an output file) or a decoder (when used
455before an input file) for one or more streams. @var{codec} is the name of a
456decoder/encoder or a special value @code{copy} (output only) to indicate that
457the stream is not to be re-encoded.
458
459For example
460@example
461ffmpeg -i INPUT -map 0 -c:v libx264 -c:a copy OUTPUT
462@end example
463encodes all video streams with libx264 and copies all audio streams.
464
465For each stream, the last matching @code{c} option is applied, so
466@example
467ffmpeg -i INPUT -map 0 -c copy -c:v:1 libx264 -c:a:137 libvorbis OUTPUT
468@end example
469will copy all the streams except the second video, which will be encoded with
470libx264, and the 138th audio, which will be encoded with libvorbis.
471
472@item -t @var{duration} (@emph{input/output})
473When used as an input option (before @code{-i}), limit the @var{duration} of
474data read from the input file.
475
476When used as an output option (before an output url), stop writing the
477output after its duration reaches @var{duration}.
478
479@var{duration} must be a time duration specification,
480see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
481
482-to and -t are mutually exclusive and -t has priority.
483
484@item -to @var{position} (@emph{input/output})
485Stop writing the output or reading the input at @var{position}.
486@var{position} must be a time duration specification,
487see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
488
489-to and -t are mutually exclusive and -t has priority.
490
491@item -fs @var{limit_size} (@emph{output})
Set the file size limit, expressed in bytes. No further chunk of bytes is written
after the limit is exceeded. The size of the output file may therefore be slightly more than the
requested file size.
495
496@item -ss @var{position} (@emph{input/output})
497When used as an input option (before @code{-i}), seeks in this input file to
498@var{position}. Note that in most formats it is not possible to seek exactly,
499so @command{ffmpeg} will seek to the closest seek point before @var{position}.
500When transcoding and @option{-accurate_seek} is enabled (the default), this
501extra segment between the seek point and @var{position} will be decoded and
502discarded. When doing stream copy or when @option{-noaccurate_seek} is used, it
503will be preserved.
504
505When used as an output option (before an output url), decodes but discards
506input until the timestamps reach @var{position}.
507
508@var{position} must be a time duration specification,
509see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
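
A minimal sketch (placeholder file names): cut 30 seconds starting at 1 minute 30
while copying the streams; with stream copy the cut actually starts at the nearest
seek point before the requested position:
@example
ffmpeg -ss 00:01:30 -i input.mp4 -t 30 -c copy clip.mp4
@end example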
510
511@item -sseof @var{position} (@emph{input})
512
Like the @code{-ss} option but relative to the "end of file". That is, negative
values are earlier in the file, 0 is at EOF.
515
516@item -itsoffset @var{offset} (@emph{input})
517Set the input time offset.
518
519@var{offset} must be a time duration specification,
520see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
521
522The offset is added to the timestamps of the input files. Specifying
523a positive offset means that the corresponding streams are delayed by
524the time duration specified in @var{offset}.
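
For example, a sketch that delays a separate (hypothetical) audio input by half a
second relative to the video:
@example
ffmpeg -i video.mp4 -itsoffset 0.5 -i audio.wav -map 0:v -map 1:a -c:v copy out.mkv
@end example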
525
526@item -itsscale @var{scale} (@emph{input,per-stream})
527Rescale input timestamps. @var{scale} should be a floating point number.
528
529@item -timestamp @var{date} (@emph{output})
530Set the recording timestamp in the container.
531
532@var{date} must be a date specification,
533see @ref{date syntax,,the Date section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
534
535@item -metadata[:metadata_specifier] @var{key}=@var{value} (@emph{output,per-metadata})
536Set a metadata key/value pair.
537
538An optional @var{metadata_specifier} may be given to set metadata
539on streams, chapters or programs. See @code{-map_metadata}
540documentation for details.
541
542This option overrides metadata set with @code{-map_metadata}. It is
543also possible to delete metadata by using an empty value.
544
545For example, for setting the title in the output file:
546@example
547ffmpeg -i in.avi -metadata title="my title" out.flv
548@end example
549
550To set the language of the first audio stream:
551@example
552ffmpeg -i INPUT -metadata:s:a:0 language=eng OUTPUT
553@end example
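
As noted above, an empty value deletes a tag; a minimal sketch that drops an
existing title while copying the streams:
@example
ffmpeg -i in.mp4 -metadata title= -c copy out.mp4
@end example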
554
555@item -disposition[:stream_specifier] @var{value} (@emph{output,per-stream})
556Sets the disposition for a stream.
557
558This option overrides the disposition copied from the input stream. It is also
559possible to delete the disposition by setting it to 0.
560
561The following dispositions are recognized:
562@table @option
563@item default
564@item dub
565@item original
566@item comment
567@item lyrics
568@item karaoke
569@item forced
570@item hearing_impaired
571@item visual_impaired
572@item clean_effects
573@item attached_pic
574@item captions
575@item descriptions
576@item dependent
577@item metadata
578@end table
579
580For example, to make the second audio stream the default stream:
581@example
582ffmpeg -i in.mkv -c copy -disposition:a:1 default out.mkv
583@end example
584
585To make the second subtitle stream the default stream and remove the default
586disposition from the first subtitle stream:
587@example
588ffmpeg -i in.mkv -c copy -disposition:s:0 0 -disposition:s:1 default out.mkv
589@end example
590
591To add an embedded cover/thumbnail:
592@example
593ffmpeg -i in.mp4 -i IMAGE -map 0 -map 1 -c copy -c:v:1 png -disposition:v:1 attached_pic out.mp4
594@end example
595
Not all muxers support embedded thumbnails, and those that do only support a few formats, like JPEG or PNG.
597
598@item -program [title=@var{title}:][program_num=@var{program_num}:]st=@var{stream}[:st=@var{stream}...] (@emph{output})
599
600Creates a program with the specified @var{title}, @var{program_num} and adds the specified
601@var{stream}(s) to it.
602
603@item -target @var{type} (@emph{output})
604Specify target file type (@code{vcd}, @code{svcd}, @code{dvd}, @code{dv},
605@code{dv50}). @var{type} may be prefixed with @code{pal-}, @code{ntsc-} or
606@code{film-} to use the corresponding standard. All the format options
607(bitrate, codecs, buffer sizes) are then set automatically. You can just type:
608
609@example
610ffmpeg -i myfile.avi -target vcd /tmp/vcd.mpg
611@end example
612
613Nevertheless you can specify additional options as long as you know
614they do not conflict with the standard, as in:
615
616@example
617ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
618@end example
619
620The parameters set for each target are as follows.
621
622@strong{VCD}
623@example
624@var{pal}:
625-f vcd -muxrate 1411200 -muxpreload 0.44 -packetsize 2324
626-s 352x288 -r 25
-codec:v mpeg1video -g 15 -b:v 1150k -maxrate:v 1150k -minrate:v 1150k -bufsize:v 327680
628-ar 44100 -ac 2
629-codec:a mp2 -b:a 224k
630
631@var{ntsc}:
632-f vcd -muxrate 1411200 -muxpreload 0.44 -packetsize 2324
633-s 352x240 -r 30000/1001
-codec:v mpeg1video -g 18 -b:v 1150k -maxrate:v 1150k -minrate:v 1150k -bufsize:v 327680
635-ar 44100 -ac 2
636-codec:a mp2 -b:a 224k
637
638@var{film}:
639-f vcd -muxrate 1411200 -muxpreload 0.44 -packetsize 2324
640-s 352x240 -r 24000/1001
-codec:v mpeg1video -g 18 -b:v 1150k -maxrate:v 1150k -minrate:v 1150k -bufsize:v 327680
642-ar 44100 -ac 2
643-codec:a mp2 -b:a 224k
644@end example
645
646@strong{SVCD}
647@example
648@var{pal}:
649-f svcd -packetsize 2324
650-s 480x576 -pix_fmt yuv420p -r 25
651-codec:v mpeg2video -g 15 -b:v 2040k -maxrate:v 2516k -minrate:v 0 -bufsize:v 1835008 -scan_offset 1
652-ar 44100
653-codec:a mp2 -b:a 224k
654
655@var{ntsc}:
656-f svcd -packetsize 2324
657-s 480x480 -pix_fmt yuv420p -r 30000/1001
658-codec:v mpeg2video -g 18 -b:v 2040k -maxrate:v 2516k -minrate:v 0 -bufsize:v 1835008 -scan_offset 1
659-ar 44100
660-codec:a mp2 -b:a 224k
661
662@var{film}:
663-f svcd -packetsize 2324
664-s 480x480 -pix_fmt yuv420p -r 24000/1001
665-codec:v mpeg2video -g 18 -b:v 2040k -maxrate:v 2516k -minrate:v 0 -bufsize:v 1835008 -scan_offset 1
666-ar 44100
667-codec:a mp2 -b:a 224k
668@end example
669
670@strong{DVD}
671@example
672@var{pal}:
673-f dvd -muxrate 10080k -packetsize 2048
674-s 720x576 -pix_fmt yuv420p -r 25
675-codec:v mpeg2video -g 15 -b:v 6000k -maxrate:v 9000k -minrate:v 0 -bufsize:v 1835008
676-ar 48000
677-codec:a ac3 -b:a 448k
678
679@var{ntsc}:
680-f dvd -muxrate 10080k -packetsize 2048
681-s 720x480 -pix_fmt yuv420p -r 30000/1001
682-codec:v mpeg2video -g 18 -b:v 6000k -maxrate:v 9000k -minrate:v 0 -bufsize:v 1835008
683-ar 48000
684-codec:a ac3 -b:a 448k
685
686@var{film}:
687-f dvd -muxrate 10080k -packetsize 2048
688-s 720x480 -pix_fmt yuv420p -r 24000/1001
689-codec:v mpeg2video -g 18 -b:v 6000k -maxrate:v 9000k -minrate:v 0 -bufsize:v 1835008
690-ar 48000
691-codec:a ac3 -b:a 448k
692@end example
693
694@strong{DV}
695@example
696@var{pal}:
697-f dv
698-s 720x576 -pix_fmt yuv420p -r 25
699-ar 48000 -ac 2
700
701@var{ntsc}:
702-f dv
703-s 720x480 -pix_fmt yuv411p -r 30000/1001
704-ar 48000 -ac 2
705
706@var{film}:
707-f dv
708-s 720x480 -pix_fmt yuv411p -r 24000/1001
709-ar 48000 -ac 2
710@end example
711The @code{dv50} target is identical to the @code{dv} target except that the pixel format set is @code{yuv422p} for all three standards.
712
713Any user-set value for a parameter above will override the target preset value. In that case, the output may
714not comply with the target standard.
715
716@item -dn (@emph{input/output})
717As an input option, blocks all data streams of a file from being filtered or
being automatically selected or mapped for any output. See the @code{-discard}
719option to disable streams individually.
720
721As an output option, disables data recording i.e. automatic selection or
722mapping of any data stream. For full manual control see the @code{-map}
723option.
724
725@item -dframes @var{number} (@emph{output})
726Set the number of data frames to output. This is an obsolete alias for
727@code{-frames:d}, which you should use instead.
728
729@item -frames[:@var{stream_specifier}] @var{framecount} (@emph{output,per-stream})
730Stop writing to the stream after @var{framecount} frames.
731
732@item -q[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
733@itemx -qscale[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
734Use fixed quality scale (VBR). The meaning of @var{q}/@var{qscale} is
735codec-dependent.
If @var{qscale} is used without a @var{stream_specifier} then it applies only
to the video stream. This is to maintain compatibility with previous behavior,
and because applying the same codec-specific value to two different codecs
(audio and video) is generally not what is intended when no stream specifier is
used.
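
A minimal sketch, assuming the AVI muxer's default @code{mpeg4} video encoder, for
which lower @var{q} values mean higher quality (roughly 2-31):
@example
ffmpeg -i input.mp4 -q:v 3 output.avi
@end example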
741
742@anchor{filter_option}
743@item -filter[:@var{stream_specifier}] @var{filtergraph} (@emph{output,per-stream})
744Create the filtergraph specified by @var{filtergraph} and use it to
745filter the stream.
746
747@var{filtergraph} is a description of the filtergraph to apply to
748the stream, and must have a single input and a single output of the
same type as the stream. In the filtergraph, the input is associated
750to the label @code{in}, and the output to the label @code{out}. See
751the ffmpeg-filters manual for more information about the filtergraph
752syntax.
753
754See the @ref{filter_complex_option,,-filter_complex option} if you
755want to create filtergraphs with multiple inputs and/or outputs.
756
757@item -filter_script[:@var{stream_specifier}] @var{filename} (@emph{output,per-stream})
This option is similar to @option{-filter}; the only difference is that its
759argument is the name of the file from which a filtergraph description is to be
760read.
761
762@item -filter_threads @var{nb_threads} (@emph{global})
763Defines how many threads are used to process a filter pipeline. Each pipeline
764will produce a thread pool with this many threads available for parallel processing.
765The default is the number of available CPUs.
766
767@item -pre[:@var{stream_specifier}] @var{preset_name} (@emph{output,per-stream})
768Specify the preset for matching stream(s).
769
770@item -stats (@emph{global})
Print encoding progress/statistics. It is on by default; to explicitly
disable it you need to specify @code{-nostats}.
773
774@item -stats_period @var{time} (@emph{global})
Set the period at which encoding progress/statistics are updated. Default is 0.5 seconds.
776
777@item -progress @var{url} (@emph{global})
778Send program-friendly progress information to @var{url}.
779
780Progress information is written periodically and at the end of
781the encoding process. It is made of "@var{key}=@var{value}" lines. @var{key}
782consists of only alphanumeric characters. The last key of a sequence of
783progress information is always "progress".
784
785The update period is set using @code{-stats_period}.
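
For example, a sketch that writes the @var{key}=@var{value} progress lines to
standard output so that another program can parse them:
@example
ffmpeg -nostats -progress pipe:1 -i input.mp4 output.mkv
@end example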
786
787@anchor{stdin option}
788@item -stdin
789Enable interaction on standard input. On by default unless standard input is
790used as an input. To explicitly disable interaction you need to specify
791@code{-nostdin}.
792
793Disabling interaction on standard input is useful, for example, if
794ffmpeg is in the background process group. Roughly the same result can
795be achieved with @code{ffmpeg ... < /dev/null} but it requires a
796shell.
797
798@item -debug_ts (@emph{global})
799Print timestamp information. It is off by default. This option is
800mostly useful for testing and debugging purposes, and the output
801format may change from one version to another, so it should not be
802employed by portable scripts.
803
804See also the option @code{-fdebug ts}.
805
806@item -attach @var{filename} (@emph{output})
Add an attachment to the output file. This is supported by a few formats
like Matroska, e.g. for fonts used in rendering subtitles. Attachments
809are implemented as a specific type of stream, so this option will add
810a new stream to the file. It is then possible to use per-stream options
811on this stream in the usual way. Attachment streams created with this
812option will be created after all the other streams (i.e. those created
813with @code{-map} or automatic mappings).
814
815Note that for Matroska you also have to set the mimetype metadata tag:
816@example
817ffmpeg -i INPUT -attach DejaVuSans.ttf -metadata:s:2 mimetype=application/x-truetype-font out.mkv
818@end example
819(assuming that the attachment stream will be third in the output file).
820
821@item -dump_attachment[:@var{stream_specifier}] @var{filename} (@emph{input,per-stream})
822Extract the matching attachment stream into a file named @var{filename}. If
823@var{filename} is empty, then the value of the @code{filename} metadata tag
824will be used.
825
826E.g. to extract the first attachment to a file named 'out.ttf':
827@example
828ffmpeg -dump_attachment:t:0 out.ttf -i INPUT
829@end example
830To extract all attachments to files determined by the @code{filename} tag:
831@example
832ffmpeg -dump_attachment:t "" -i INPUT
833@end example
834
835Technical note -- attachments are implemented as codec extradata, so this
836option can actually be used to extract extradata from any stream, not just
837attachments.
838@end table
839
840@section Video Options
841
842@table @option
843@item -vframes @var{number} (@emph{output})
844Set the number of video frames to output. This is an obsolete alias for
845@code{-frames:v}, which you should use instead.
846@item -r[:@var{stream_specifier}] @var{fps} (@emph{input/output,per-stream})
847Set frame rate (Hz value, fraction or abbreviation).
848
849As an input option, ignore any timestamps stored in the file and instead
850generate timestamps assuming constant frame rate @var{fps}.
851This is not the same as the @option{-framerate} option used for some input formats
852like image2 or v4l2 (it used to be the same in older versions of FFmpeg).
853If in doubt use @option{-framerate} instead of the input option @option{-r}.
854
855As an output option, duplicate or drop input frames to achieve constant output
856frame rate @var{fps}.
857
858@item -fpsmax[:@var{stream_specifier}] @var{fps} (@emph{output,per-stream})
859Set maximum frame rate (Hz value, fraction or abbreviation).
860
Clamps the output frame rate when it is auto-set and is higher than this value.
Useful in batch processing or when the input frame rate is wrongly detected as very high.
863It cannot be set together with @code{-r}. It is ignored during streamcopy.
864
865@item -s[:@var{stream_specifier}] @var{size} (@emph{input/output,per-stream})
866Set frame size.
867
868As an input option, this is a shortcut for the @option{video_size} private
869option, recognized by some demuxers for which the frame size is either not
870stored in the file or is configurable -- e.g. raw video or video grabbers.
871
872As an output option, this inserts the @code{scale} video filter to the
873@emph{end} of the corresponding filtergraph. Please use the @code{scale} filter
874directly to insert it at the beginning or some other place.
875
876The format is @samp{wxh} (default - same as source).
877
878@item -aspect[:@var{stream_specifier}] @var{aspect} (@emph{output,per-stream})
879Set the video display aspect ratio specified by @var{aspect}.
880
881@var{aspect} can be a floating point number string, or a string of the
882form @var{num}:@var{den}, where @var{num} and @var{den} are the
883numerator and denominator of the aspect ratio. For example "4:3",
884"16:9", "1.3333", and "1.7777" are valid argument values.
885
886If used together with @option{-vcodec copy}, it will affect the aspect ratio
887stored at container level, but not the aspect ratio stored in encoded
888frames, if it exists.
889
890@item -vn (@emph{input/output})
891As an input option, blocks all video streams of a file from being filtered or
being automatically selected or mapped for any output. See the @code{-discard}
893option to disable streams individually.
894
895As an output option, disables video recording i.e. automatic selection or
896mapping of any video stream. For full manual control see the @code{-map}
897option.
898
899@item -vcodec @var{codec} (@emph{output})
900Set the video codec. This is an alias for @code{-codec:v}.
901
902@item -pass[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
903Select the pass number (1 or 2). It is used to do two-pass
904video encoding. The statistics of the video are recorded in the first
905pass into a log file (see also the option -passlogfile),
906and in the second pass that log file is used to generate the video
907at the exact requested bitrate.
908On pass 1, you may just deactivate audio and set output to null,
909examples for Windows and Unix:
910@example
911ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y NUL
912ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y /dev/null
913@end example
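
A complete two-pass sketch for Unix-like shells, assuming the @code{libx264} and
native @code{aac} encoders are available (the bitrate and file names are only
illustrative; on Windows replace /dev/null with NUL as above):
@example
ffmpeg -y -i input.mov -c:v libx264 -b:v 2M -pass 1 -an -f null /dev/null &&
ffmpeg -i input.mov -c:v libx264 -b:v 2M -pass 2 -c:a aac output.mp4
@end example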
914
915@item -passlogfile[:@var{stream_specifier}] @var{prefix} (@emph{output,per-stream})
916Set two-pass log file name prefix to @var{prefix}, the default file name
917prefix is ``ffmpeg2pass''. The complete file name will be
918@file{PREFIX-N.log}, where N is a number specific to the output
stream.
920
921@item -vf @var{filtergraph} (@emph{output})
922Create the filtergraph specified by @var{filtergraph} and use it to
923filter the stream.
924
925This is an alias for @code{-filter:v}, see the @ref{filter_option,,-filter option}.
926
927@item -autorotate
928Automatically rotate the video according to file metadata. Enabled by
929default, use @option{-noautorotate} to disable it.
930
931@item -autoscale
Automatically scale the video according to the resolution of the first frame.
Enabled by default, use @option{-noautoscale} to disable it. When autoscale is
disabled, all output frames of the filter graph might not be in the same resolution,
which may be inadequate for some encoders/muxers. Therefore, it is not recommended
to disable it unless you really know what you are doing.
937Disable autoscale at your own risk.
938@end table
939
940@section Advanced Video options
941
942@table @option
943@item -pix_fmt[:@var{stream_specifier}] @var{format} (@emph{input/output,per-stream})
944Set pixel format. Use @code{-pix_fmts} to show all the supported
945pixel formats.
If the selected pixel format can not be used, ffmpeg will print a
warning and select the best pixel format supported by the encoder.
948If @var{pix_fmt} is prefixed by a @code{+}, ffmpeg will exit with an error
949if the requested pixel format can not be selected, and automatic conversions
950inside filtergraphs are disabled.
951If @var{pix_fmt} is a single @code{+}, ffmpeg selects the same pixel format
952as the input (or graph output) and automatic conversions are disabled.
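
For instance, a minimal sketch that requests @code{yuv420p} and fails instead of
silently substituting another format if the encoder cannot use it:
@example
ffmpeg -i input.mp4 -pix_fmt +yuv420p -c:v libx264 output.mp4
@end example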
953
954@item -sws_flags @var{flags} (@emph{input/output})
955Set SwScaler flags.
956
957@item -rc_override[:@var{stream_specifier}] @var{override} (@emph{output,per-stream})
Rate control override for specific intervals, formatted as a slash-separated
list of "int,int,int" triples. The first two values are the beginning and
end frame numbers, the last one is the quantizer to use if positive, or the quality
factor if negative.
962
963@item -ilme
964Force interlacing support in encoder (MPEG-2 and MPEG-4 only).
965Use this option if your input file is interlaced and you want
966to keep the interlaced format for minimum losses.
967The alternative is to deinterlace the input stream by use of a filter
968such as @code{yadif} or @code{bwdif}, but deinterlacing introduces losses.
969@item -psnr
970Calculate PSNR of compressed frames.
971@item -vstats
972Dump video coding statistics to @file{vstats_HHMMSS.log}.
973@item -vstats_file @var{file}
974Dump video coding statistics to @var{file}.
@item -vstats_version @var{version}
976Specifies which version of the vstats format to use. Default is 2.
977
978version = 1 :
979
980@code{frame= %5d q= %2.1f PSNR= %6.2f f_size= %6d s_size= %8.0fkB time= %0.3f br= %7.1fkbits/s avg_br= %7.1fkbits/s}
981
982version > 1:
983
984@code{out= %2d st= %2d frame= %5d q= %2.1f PSNR= %6.2f f_size= %6d s_size= %8.0fkB time= %0.3f br= %7.1fkbits/s avg_br= %7.1fkbits/s}
985@item -top[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
986top=1/bottom=0/auto=-1 field first
987@item -dc @var{precision}
988Intra_dc_precision.
989@item -vtag @var{fourcc/tag} (@emph{output})
990Force video tag/fourcc. This is an alias for @code{-tag:v}.
991@item -qphist (@emph{global})
Show QP histogram.
@item -vbsf @var{bitstream_filter}
Deprecated, see -bsf
995
996@item -force_key_frames[:@var{stream_specifier}] @var{time}[,@var{time}...] (@emph{output,per-stream})
997@item -force_key_frames[:@var{stream_specifier}] expr:@var{expr} (@emph{output,per-stream})
998@item -force_key_frames[:@var{stream_specifier}] source (@emph{output,per-stream})
999
1000@var{force_key_frames} can take arguments of the following form:
1001
1002@table @option
1003
1004@item @var{time}[,@var{time}...]
1005If the argument consists of timestamps, ffmpeg will round the specified times to the nearest
1006output timestamp as per the encoder time base and force a keyframe at the first frame having
1007timestamp equal or greater than the computed timestamp. Note that if the encoder time base is too
1008coarse, then the keyframes may be forced on frames with timestamps lower than the specified time.
1009The default encoder time base is the inverse of the output framerate but may be set otherwise
1010via @code{-enc_time_base}.
1011
1012If one of the times is "@code{chapters}[@var{delta}]", it is expanded into
1013the time of the beginning of all chapters in the file, shifted by
1014@var{delta}, expressed as a time in seconds.
1015This option can be useful to ensure that a seek point is present at a
1016chapter mark or any other designated place in the output file.
1017
1018For example, to insert a key frame at 5 minutes, plus key frames 0.1 second
1019before the beginning of every chapter:
1020@example
1021-force_key_frames 0:05:00,chapters-0.1
1022@end example
1023
1024@item expr:@var{expr}
1025If the argument is prefixed with @code{expr:}, the string @var{expr}
1026is interpreted like an expression and is evaluated for each frame. A
1027key frame is forced in case the evaluation is non-zero.
1028
1029The expression in @var{expr} can contain the following constants:
1030@table @option
1031@item n
the number of the currently processed frame, starting from 0
1033@item n_forced
1034the number of forced frames
1035@item prev_forced_n
1036the number of the previous forced frame, it is @code{NAN} when no
1037keyframe was forced yet
1038@item prev_forced_t
1039the time of the previous forced frame, it is @code{NAN} when no
1040keyframe was forced yet
1041@item t
the time of the currently processed frame
1043@end table
1044
For example, to force a key frame every 5 seconds, you can specify:
1046@example
1047-force_key_frames expr:gte(t,n_forced*5)
1048@end example
1049
1050To force a key frame 5 seconds after the time of the last forced one,
1051starting from second 13:
1052@example
1053-force_key_frames expr:if(isnan(prev_forced_t),gte(t,13),gte(t,prev_forced_t+5))
1054@end example
1055
1056@item source
1057If the argument is @code{source}, ffmpeg will force a key frame if
1058the current frame being encoded is marked as a key frame in its source.
1059
1060@end table
1061
1062Note that forcing too many keyframes is very harmful for the lookahead
1063algorithms of certain encoders: using fixed-GOP options or similar
1064would be more efficient.
1065
1066@item -copyinkf[:@var{stream_specifier}] (@emph{output,per-stream})
1067When doing stream copy, copy also non-key frames found at the
1068beginning.
1069
1070@item -init_hw_device @var{type}[=@var{name}][:@var{device}[,@var{key=value}...]]
1071Initialise a new hardware device of type @var{type} called @var{name}, using the
1072given device parameters.
1073If no name is specified it will receive a default name of the form "@var{type}%d".
1074
1075The meaning of @var{device} and the following arguments depends on the
1076device type:
1077@table @option
1078
1079@item cuda
1080@var{device} is the number of the CUDA device.
1081
1082@item dxva2
1083@var{device} is the number of the Direct3D 9 display adapter.
1084
1085@item vaapi
1086@var{device} is either an X11 display name or a DRM render node.
1087If not specified, it will attempt to open the default X11 display (@emph{$DISPLAY})
1088and then the first DRM render node (@emph{/dev/dri/renderD128}).
1089
1090@item vdpau
1091@var{device} is an X11 display name.
1092If not specified, it will attempt to open the default X11 display (@emph{$DISPLAY}).
1093
1094@item qsv
1095@var{device} selects a value in @samp{MFX_IMPL_*}. Allowed values are:
1096@table @option
1097@item auto
1098@item sw
1099@item hw
1100@item auto_any
1101@item hw_any
1102@item hw2
1103@item hw3
1104@item hw4
1105@end table
1106If not specified, @samp{auto_any} is used.
1107(Note that it may be easier to achieve the desired result for QSV by creating the
1108platform-appropriate subdevice (@samp{dxva2} or @samp{vaapi}) and then deriving a
1109QSV device from that.)
1110
1111@item opencl
1112@var{device} selects the platform and device as @emph{platform_index.device_index}.
1113
1114The set of devices can also be filtered using the key-value pairs to find only
1115devices matching particular platform or device strings.
1116
1117The strings usable as filters are:
1118@table @option
1119@item platform_profile
1120@item platform_version
1121@item platform_name
1122@item platform_vendor
1123@item platform_extensions
1124@item device_name
1125@item device_vendor
1126@item driver_version
1127@item device_version
1128@item device_profile
1129@item device_extensions
1130@item device_type
1131@end table
1132
1133The indices and filters must together uniquely select a device.
1134
1135Examples:
1136@table @emph
1137@item -init_hw_device opencl:0.1
1138Choose the second device on the first platform.
1139
1140@item -init_hw_device opencl:,device_name=Foo9000
1141Choose the device with a name containing the string @emph{Foo9000}.
1142
1143@item -init_hw_device opencl:1,device_type=gpu,device_extensions=cl_khr_fp16
1144Choose the GPU device on the second platform supporting the @emph{cl_khr_fp16}
1145extension.
1146@end table
1147
1148@item vulkan
1149If @var{device} is an integer, it selects the device by its index in a
1150system-dependent list of devices.  If @var{device} is any other string, it
1151selects the first device with a name containing that string as a substring.
1152
1153The following options are recognized:
1154@table @option
1155@item debug
1156If set to 1, enables the validation layer, if installed.
1157@item linear_images
1158If set to 1, images allocated by the hwcontext will be linear and locally mappable.
1159@item instance_extensions
A plus-separated list of additional instance extensions to enable.
@item device_extensions
A plus-separated list of additional device extensions to enable.
1163@end table
1164
1165Examples:
1166@table @emph
1167@item -init_hw_device vulkan:1
1168Choose the second device on the system.
1169
1170@item -init_hw_device vulkan:RADV
1171Choose the first device with a name containing the string @emph{RADV}.
1172
1173@item -init_hw_device vulkan:0,instance_extensions=VK_KHR_wayland_surface+VK_KHR_xcb_surface
1174Choose the first device and enable the Wayland and XCB instance extensions.
1175@end table
1176
1177@end table
1178
1179@item -init_hw_device @var{type}[=@var{name}]@@@var{source}
1180Initialise a new hardware device of type @var{type} called @var{name},
1181deriving it from the existing device with the name @var{source}.
1182
1183@item -init_hw_device list
1184List all hardware device types supported in this build of ffmpeg.
1185
1186@item -filter_hw_device @var{name}
1187Pass the hardware device called @var{name} to all filters in any filter graph.
1188This can be used to set the device to upload to with the @code{hwupload} filter,
1189or the device to map to with the @code{hwmap} filter.  Other filters may also
1190make use of this parameter when they require a hardware device.  Note that this
1191is typically only required when the input is not already in hardware frames -
1192when it is, filters will derive the device they require from the context of the
1193frames they receive as input.
1194
1195This is a global setting, so all filters will receive the same device.
1196
1197@item -hwaccel[:@var{stream_specifier}] @var{hwaccel} (@emph{input,per-stream})
1198Use hardware acceleration to decode the matching stream(s). The allowed values
1199of @var{hwaccel} are:
1200@table @option
1201@item none
1202Do not use any hardware acceleration (the default).
1203
1204@item auto
1205Automatically select the hardware acceleration method.
1206
1207@item vdpau
1208Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
1209
1210@item dxva2
1211Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
1212
1213@item vaapi
1214Use VAAPI (Video Acceleration API) hardware acceleration.
1215
1216@item qsv
1217Use the Intel QuickSync Video acceleration for video transcoding.
1218
1219Unlike most other values, this option does not enable accelerated decoding (that
1220is used automatically whenever a qsv decoder is selected), but accelerated
1221transcoding, without copying the frames into the system memory.
1222
1223For it to work, both the decoder and the encoder must support QSV acceleration
1224and no filters must be used.
1225@end table
1226
1227This option has no effect if the selected hwaccel is not available or not
1228supported by the chosen decoder.
1229
1230Note that most acceleration methods are intended for playback and will not be
1231faster than software decoding on modern CPUs. Additionally, @command{ffmpeg}
1232will usually need to copy the decoded frames from the GPU memory into the system
1233memory, resulting in further performance loss. This option is thus mainly
1234useful for testing.
1235
1236@item -hwaccel_device[:@var{stream_specifier}] @var{hwaccel_device} (@emph{input,per-stream})
1237Select a device to use for hardware acceleration.
1238
1239This option only makes sense when the @option{-hwaccel} option is also specified.
1240It can either refer to an existing device created with @option{-init_hw_device}
1241by name, or it can create a new device as if
1242@samp{-init_hw_device} @var{type}:@var{hwaccel_device}
1243were called immediately before.
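
For example, a sketch of VAAPI-accelerated decoding on a Linux system, assuming a
render node at @file{/dev/dri/renderD128} (the device path and input are placeholders):
@example
ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 -hwaccel vaapi -hwaccel_device va -i input.mp4 -f null -
@end example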
1244
1245@item -hwaccels
1246List all hardware acceleration methods supported in this build of ffmpeg.
1247
1248@end table
1249
1250@section Audio Options
1251
1252@table @option
1253@item -aframes @var{number} (@emph{output})
1254Set the number of audio frames to output. This is an obsolete alias for
1255@code{-frames:a}, which you should use instead.
1256@item -ar[:@var{stream_specifier}] @var{freq} (@emph{input/output,per-stream})
1257Set the audio sampling frequency. For output streams it is set by
1258default to the frequency of the corresponding input stream. For input
1259streams this option only makes sense for audio grabbing devices and raw
1260demuxers and is mapped to the corresponding demuxer options.
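
For example, a sketch of reading headerless 16-bit little-endian PCM, where the
sample rate and channel count must be given as input options (the values are only
illustrative):
@example
ffmpeg -f s16le -ar 44100 -ac 2 -i input.pcm output.wav
@end example
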
1261@item -aq @var{q} (@emph{output})
1262Set the audio quality (codec-specific, VBR). This is an alias for -q:a.
1263@item -ac[:@var{stream_specifier}] @var{channels} (@emph{input/output,per-stream})
1264Set the number of audio channels. For output streams it is set by
1265default to the number of input audio channels. For input streams
1266this option only makes sense for audio grabbing devices and raw demuxers
1267and is mapped to the corresponding demuxer options.
1268@item -an (@emph{input/output})
1269As an input option, blocks all audio streams of a file from being filtered or
being automatically selected or mapped for any output. See the @code{-discard}
1271option to disable streams individually.
1272
1273As an output option, disables audio recording i.e. automatic selection or
1274mapping of any audio stream. For full manual control see the @code{-map}
1275option.
1276@item -acodec @var{codec} (@emph{input/output})
1277Set the audio codec. This is an alias for @code{-codec:a}.
1278@item -sample_fmt[:@var{stream_specifier}] @var{sample_fmt} (@emph{output,per-stream})
1279Set the audio sample format. Use @code{-sample_fmts} to get a list
1280of supported sample formats.
1281
1282@item -af @var{filtergraph} (@emph{output})
1283Create the filtergraph specified by @var{filtergraph} and use it to
1284filter the stream.
1285
1286This is an alias for @code{-filter:a}, see the @ref{filter_option,,-filter option}.
1287@end table
1288
1289@section Advanced Audio options
1290
1291@table @option
1292@item -atag @var{fourcc/tag} (@emph{output})
1293Force audio tag/fourcc. This is an alias for @code{-tag:a}.
1294@item -absf @var{bitstream_filter}
1295Deprecated, see -bsf
1296@item -guess_layout_max @var{channels} (@emph{input,per-stream})
1297If some input channel layout is not known, try to guess only if it
1298corresponds to at most the specified number of channels. For example, 2
tells @command{ffmpeg} to recognize 1 channel as mono and 2 channels as
1300stereo but not 6 channels as 5.1. The default is to always try to guess. Use
13010 to disable all guessing.
1302@end table
1303
1304@section Subtitle options
1305
1306@table @option
1307@item -scodec @var{codec} (@emph{input/output})
1308Set the subtitle codec. This is an alias for @code{-codec:s}.
1309@item -sn (@emph{input/output})
1310As an input option, blocks all subtitle streams of a file from being filtered or
being automatically selected or mapped for any output. See the @code{-discard}
1312option to disable streams individually.
1313
1314As an output option, disables subtitle recording i.e. automatic selection or
1315mapping of any subtitle stream. For full manual control see the @code{-map}
1316option.
1317@item -sbsf @var{bitstream_filter}
Deprecated, see @code{-bsf}.
1319@end table
1320
1321@section Advanced Subtitle options
1322
1323@table @option
1324
1325@item -fix_sub_duration
Fix subtitle durations. For each subtitle, wait for the next packet in the
same stream and adjust the duration of the first to avoid overlap. This is
necessary with some subtitle codecs, especially DVB subtitles, because the
duration in the original packet is only a rough estimate and the end is
actually marked by an empty subtitle frame. Failing to use this option when
necessary can result in exaggerated durations or muxing failures due to
non-monotonic timestamps.
1333
1334Note that this option will delay the output of all data until the next
1335subtitle packet is decoded: it may increase memory consumption and latency a
1336lot.
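
For example, a sketch that re-encodes a DVB-T recording while fixing the
subtitle durations (filenames are illustrative; it assumes the libx264 and
dvbsub encoders are available in your build):
@example
ffmpeg -fix_sub_duration -i input.ts -c:v libx264 -c:a copy -c:s dvbsub output.ts
@end example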
1337
1338@item -canvas_size @var{size}
1339Set the size of the canvas used to render subtitles.
1340
1341@end table
1342
1343@section Advanced options
1344
1345@table @option
1346@item -map [-]@var{input_file_id}[:@var{stream_specifier}][?][,@var{sync_file_id}[:@var{stream_specifier}]] | @var{[linklabel]} (@emph{output})
1347
1348Designate one or more input streams as a source for the output file. Each input
1349stream is identified by the input file index @var{input_file_id} and
1350the input stream index @var{input_stream_id} within the input
1351file. Both indices start at 0. If specified,
1352@var{sync_file_id}:@var{stream_specifier} sets which input stream
1353is used as a presentation sync reference.
1354
1355The first @code{-map} option on the command line specifies the
1356source for output stream 0, the second @code{-map} option specifies
1357the source for output stream 1, etc.
1358
1359A @code{-} character before the stream identifier creates a "negative" mapping.
1360It disables matching streams from already created mappings.
1361
1362A trailing @code{?} after the stream index will allow the map to be
1363optional: if the map matches no streams the map will be ignored instead
of failing. Note that the map will still fail if an invalid input file index
is used, such as if the map refers to a non-existent input.
1366
1367An alternative @var{[linklabel]} form will map outputs from complex filter
1368graphs (see the @option{-filter_complex} option) to the output file.
1369@var{linklabel} must correspond to a defined output link label in the graph.
1370
For example, to map ALL streams from the first input file to the output:
1372@example
1373ffmpeg -i INPUT -map 0 output
1374@end example
1375
1376For example, if you have two audio streams in the first input file,
1377these streams are identified by "0:0" and "0:1". You can use
1378@code{-map} to select which streams to place in an output file. For
1379example:
1380@example
1381ffmpeg -i INPUT -map 0:1 out.wav
1382@end example
1383will map the input stream in @file{INPUT} identified by "0:1" to
1384the (single) output stream in @file{out.wav}.
1385
1386For example, to select the stream with index 2 from input file
1387@file{a.mov} (specified by the identifier "0:2"), and stream with
1388index 6 from input @file{b.mov} (specified by the identifier "1:6"),
1389and copy them to the output file @file{out.mov}:
1390@example
1391ffmpeg -i a.mov -i b.mov -c copy -map 0:2 -map 1:6 out.mov
1392@end example
1393
1394To select all video and the third audio stream from an input file:
1395@example
1396ffmpeg -i INPUT -map 0:v -map 0:a:2 OUTPUT
1397@end example
1398
To map all the streams except the second audio stream, use negative mappings:
1400@example
1401ffmpeg -i INPUT -map 0 -map -0:a:1 OUTPUT
1402@end example
1403
To map the video and audio streams from the first input, using the
trailing @code{?} to ignore the audio mapping if no audio streams exist in
the first input:
1407@example
1408ffmpeg -i INPUT -map 0:v -map 0:a? OUTPUT
1409@end example
1410
1411To pick the English audio stream:
1412@example
1413ffmpeg -i INPUT -map 0:m:language:eng OUTPUT
1414@end example
1415
1416Note that using this option disables the default mappings for this output file.
1417
1418@item -ignore_unknown
1419Ignore input streams with unknown type instead of failing if copying
1420such streams is attempted.
1421
1422@item -copy_unknown
1423Allow input streams with unknown type to be copied instead of failing if copying
1424such streams is attempted.
1425
1426@item -map_channel [@var{input_file_id}.@var{stream_specifier}.@var{channel_id}|-1][?][:@var{output_file_id}.@var{stream_specifier}]
1427Map an audio channel from a given input to an output. If
1428@var{output_file_id}.@var{stream_specifier} is not set, the audio channel will
1429be mapped on all the audio streams.
1430
1431Using "-1" instead of
1432@var{input_file_id}.@var{stream_specifier}.@var{channel_id} will map a muted
1433channel.
1434
1435A trailing @code{?} will allow the map_channel to be
1436optional: if the map_channel matches no channel the map_channel will be ignored instead
1437of failing.
1438
1439For example, assuming @var{INPUT} is a stereo audio file, you can switch the
1440two audio channels with the following command:
1441@example
1442ffmpeg -i INPUT -map_channel 0.0.1 -map_channel 0.0.0 OUTPUT
1443@end example
1444
1445If you want to mute the first channel and keep the second:
1446@example
1447ffmpeg -i INPUT -map_channel -1 -map_channel 0.0.1 OUTPUT
1448@end example
1449
The order of the @code{-map_channel} options specifies the order of the channels in
the output stream. The output channel layout is guessed from the number of
channels mapped (mono if one @code{-map_channel}, stereo if two, etc.). Using @code{-ac}
in combination with @code{-map_channel} causes the channel gain levels to be updated if
input and output channel layouts don't match (for instance two @code{-map_channel}
options and @code{-ac 6}).
1456
1457You can also extract each channel of an input to specific outputs; the following
1458command extracts two channels of the @var{INPUT} audio stream (file 0, stream 0)
1459to the respective @var{OUTPUT_CH0} and @var{OUTPUT_CH1} outputs:
1460@example
1461ffmpeg -i INPUT -map_channel 0.0.0 OUTPUT_CH0 -map_channel 0.0.1 OUTPUT_CH1
1462@end example
1463
1464The following example splits the channels of a stereo input into two separate
1465streams, which are put into the same output file:
1466@example
1467ffmpeg -i stereo.wav -map 0:0 -map 0:0 -map_channel 0.0.0:0.0 -map_channel 0.0.1:0.1 -y out.ogg
1468@end example
1469
1470Note that currently each output stream can only contain channels from a single
1471input stream; you can't for example use "-map_channel" to pick multiple input
1472audio channels contained in different streams (from the same or different files)
1473and merge them into a single output stream. It is therefore not currently
1474possible, for example, to turn two separate mono streams into a single stereo
1475stream. However splitting a stereo stream into two single channel mono streams
1476is possible.
1477
1478If you need this feature, a possible workaround is to use the @emph{amerge}
filter. For example, if you need to merge a media file (here @file{input.mkv}) with 2
mono audio streams into a single stereo audio stream (and keep the
video stream), you can use the following command:
1482@example
1483ffmpeg -i input.mkv -filter_complex "[0:1] [0:2] amerge" -c:a pcm_s16le -c:v copy output.mkv
1484@end example
1485
To map the first two audio channels from the first input, using the
trailing @code{?} to ignore the audio channel mapping if the first input is
mono instead of stereo:
1489@example
1490ffmpeg -i INPUT -map_channel 0.0.0 -map_channel 0.0.1? OUTPUT
1491@end example
1492
1493@item -map_metadata[:@var{metadata_spec_out}] @var{infile}[:@var{metadata_spec_in}] (@emph{output,per-metadata})
1494Set metadata information of the next output file from @var{infile}. Note that
1495those are file indices (zero-based), not filenames.
The optional @var{metadata_spec_in/out} parameters specify which metadata to copy.
1497A metadata specifier can have the following forms:
1498@table @option
1499@item @var{g}
1500global metadata, i.e. metadata that applies to the whole file
1501
1502@item @var{s}[:@var{stream_spec}]
1503per-stream metadata. @var{stream_spec} is a stream specifier as described
1504in the @ref{Stream specifiers} chapter. In an input metadata specifier, the first
1505matching stream is copied from. In an output metadata specifier, all matching
1506streams are copied to.
1507
1508@item @var{c}:@var{chapter_index}
1509per-chapter metadata. @var{chapter_index} is the zero-based chapter index.
1510
1511@item @var{p}:@var{program_index}
1512per-program metadata. @var{program_index} is the zero-based program index.
1513@end table
If the metadata specifier is omitted, it defaults to global.
1515
1516By default, global metadata is copied from the first input file,
per-stream and per-chapter metadata are copied along with streams/chapters. These
1518default mappings are disabled by creating any mapping of the relevant type. A negative
1519file index can be used to create a dummy mapping that just disables automatic copying.
1520
For example, to copy metadata from the first stream of the input file to the global metadata
1522of the output file:
1523@example
1524ffmpeg -i in.ogg -map_metadata 0:s:0 out.mp3
1525@end example
1526
1527To do the reverse, i.e. copy global metadata to all audio streams:
1528@example
1529ffmpeg -i in.mkv -map_metadata:s:a 0:g out.mkv
1530@end example
1531Note that simple @code{0} would work as well in this example, since global
1532metadata is assumed by default.
1533
1534@item -map_chapters @var{input_file_index} (@emph{output})
1535Copy chapters from input file with index @var{input_file_index} to the next
1536output file. If no chapter mapping is specified, then chapters are copied from
1537the first input file with at least one chapter. Use a negative file index to
1538disable any chapter copying.
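
For example, a sketch that takes audio and video from the first input and the
chapter markers from the second (filenames are illustrative):
@example
ffmpeg -i video.mp4 -i chapters.mkv -map 0 -map_chapters 1 -c copy output.mkv
@end example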
1539
1540@item -benchmark (@emph{global})
1541Show benchmarking information at the end of an encode.
1542Shows real, system and user time used and maximum memory consumption.
Maximum memory consumption is not supported on all systems;
it will usually display as 0 if not supported.
1545@item -benchmark_all (@emph{global})
1546Show benchmarking information during the encode.
1547Shows real, system and user time used in various steps (audio/video encode/decode).
1548@item -timelimit @var{duration} (@emph{global})
1549Exit after ffmpeg has been running for @var{duration} seconds in CPU user time.
1550@item -dump (@emph{global})
1551Dump each input packet to stderr.
1552@item -hex (@emph{global})
1553When dumping packets, also dump the payload.
1554@item -re (@emph{input})
1555Read input at native frame rate. Mainly used to simulate a grab device,
1556or live input stream (e.g. when reading from a file). Should not be used
1557with actual grab devices or live input streams (where it can cause packet
1558loss).
1559By default @command{ffmpeg} attempts to read the input(s) as fast as possible.
1560This option will slow down the reading of the input(s) to the native frame rate
1561of the input(s). It is useful for real-time output (e.g. live streaming).
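
For example, a sketch that streams a local file to an RTMP server in real time
(the URL is illustrative):
@example
ffmpeg -re -i input.mp4 -c copy -f flv rtmp://example.com/live/stream
@end example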
1562@item -vsync @var{parameter}
1563Video sync method.
For compatibility reasons, old values can be specified as numbers.
Newly added values must always be specified as strings.
1566
1567@table @option
1568@item 0, passthrough
1569Each frame is passed with its timestamp from the demuxer to the muxer.
1570@item 1, cfr
1571Frames will be duplicated and dropped to achieve exactly the requested
1572constant frame rate.
1573@item 2, vfr
1574Frames are passed through with their timestamp or dropped so as to
1575prevent 2 frames from having the same timestamp.
1576@item drop
1577As passthrough but destroys all timestamps, making the muxer generate
1578fresh timestamps based on frame-rate.
1579@item -1, auto
1580Chooses between 1 and 2 depending on muxer capabilities. This is the
1581default method.
1582@end table
1583
1584Note that the timestamps may be further modified by the muxer, after this.
1585For example, in the case that the format option @option{avoid_negative_ts}
1586is enabled.
1587
1588With -map you can select from which stream the timestamps should be
1589taken. You can leave either video or audio unchanged and sync the
1590remaining stream(s) to the unchanged one.
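
For example, a sketch that produces constant-frame-rate output at 25 fps,
duplicating or dropping frames as needed (filenames are illustrative; it
assumes the libx264 encoder is available):
@example
ffmpeg -i input.mkv -vsync cfr -r 25 -c:v libx264 -c:a aac output.mp4
@end example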
1591
1592@item -frame_drop_threshold @var{parameter}
Frame drop threshold, which specifies how far behind video frames can
be before they are dropped. The value is in frame-rate units, so 1.0 is one frame.
The default is -1.1. One possible use case is to avoid frame drops in case
of noisy timestamps, or to increase frame drop precision in case of exact
timestamps.
1598
1599@item -async @var{samples_per_second}
Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps;
the parameter is the maximum number of samples per second by which the audio is changed.
@code{-async 1} is a special case where only the start of the audio stream is corrected
without any later correction.
1604
1605Note that the timestamps may be further modified by the muxer, after this.
1606For example, in the case that the format option @option{avoid_negative_ts}
1607is enabled.
1608
1609This option has been deprecated. Use the @code{aresample} audio filter instead.
1610
1611@item -adrift_threshold @var{time}
1612Set the minimum difference between timestamps and audio data (in seconds) to trigger
adding/dropping samples to make it match the timestamps. This option is effectively
a threshold to select between hard (add/drop) and soft (squeeze/stretch) compensation.
1615@code{-async} must be set to a positive value.
1616
1617@item -apad @var{parameters} (@emph{output,per-stream})
1618Pad the output audio stream(s). This is the same as applying @code{-af apad}.
The argument is a string of filter parameters with the same syntax as the @code{apad} filter.
1620@code{-shortest} must be set for this output for the option to take effect.
1621
1622@item -copyts
1623Do not process input timestamps, but keep their values without trying
1624to sanitize them. In particular, do not remove the initial start time
1625offset value.
1626
1627Note that, depending on the @option{vsync} option or on specific muxer
1628processing (e.g. in case the format option @option{avoid_negative_ts}
1629is enabled) the output timestamps may mismatch with the input
1630timestamps even when this option is selected.
1631
1632@item -start_at_zero
1633When used with @option{copyts}, shift input timestamps so they start at zero.
1634
1635This means that using e.g. @code{-ss 50} will make output timestamps start at
163650 seconds, regardless of what timestamp the input file started at.
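
For example, a sketch that cuts from the 50 second mark while preserving the
(shifted) input timestamps (filenames are illustrative):
@example
ffmpeg -ss 50 -i input.mp4 -copyts -start_at_zero -c copy output.mp4
@end example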
1637
1638@item -copytb @var{mode}
1639Specify how to set the encoder timebase when stream copying.  @var{mode} is an
1640integer numeric value, and can assume one of the following values:
1641
1642@table @option
1643@item 1
1644Use the demuxer timebase.
1645
1646The time base is copied to the output encoder from the corresponding input
demuxer. This is sometimes required to avoid non-monotonically increasing
1648timestamps when copying video streams with variable frame rate.
1649
1650@item 0
1651Use the decoder timebase.
1652
1653The time base is copied to the output encoder from the corresponding input
1654decoder.
1655
1656@item -1
1657Try to make the choice automatically, in order to generate a sane output.
1658@end table
1659
1660Default value is -1.
1661
1662@item -enc_time_base[:@var{stream_specifier}] @var{timebase} (@emph{output,per-stream})
1663Set the encoder timebase. @var{timebase} is a floating point number,
1664and can assume one of the following values:
1665
1666@table @option
1667@item 0
1668Assign a default value according to the media type.
1669
1670For video - use 1/framerate, for audio - use 1/samplerate.
1671
1672@item -1
1673Use the input stream timebase when possible.
1674
1675If an input stream is not available, the default timebase will be used.
1676
1677@item >0
1678Use the provided number as the timebase.
1679
1680This field can be provided as a ratio of two integers (e.g. 1:24, 1:48000)
or as a floating point number (e.g. 0.04166, 2.0833e-5).
1682@end table
1683
1684Default value is 0.
1685
1686@item -bitexact (@emph{input/output})
Enable bitexact mode for (de)muxer and (de/en)coder.
1688@item -shortest (@emph{output})
1689Finish encoding when the shortest input stream ends.
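
For example, a sketch that muxes a video with a separate audio track and stops
when the shorter of the two ends (filenames are illustrative):
@example
ffmpeg -i video.mp4 -i audio.wav -map 0:v -map 1:a -c:v copy -shortest output.mkv
@end example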
1690@item -dts_delta_threshold
1691Timestamp discontinuity delta threshold.
1692@item -dts_error_threshold @var{seconds}
Timestamp error delta threshold. This threshold is used to discard crazy/damaged
timestamps; the default is 30 hours, which is arbitrarily picked and quite
conservative.
1696@item -muxdelay @var{seconds} (@emph{output})
1697Set the maximum demux-decode delay.
1698@item -muxpreload @var{seconds} (@emph{output})
1699Set the initial demux-decode delay.
1700@item -streamid @var{output-stream-index}:@var{new-value} (@emph{output})
1701Assign a new stream-id value to an output stream. This option should be
1702specified prior to the output filename to which it applies.
1703For the situation where multiple output files exist, a streamid
1704may be reassigned to a different value.
1705
1706For example, to set the stream 0 PID to 33 and the stream 1 PID to 36 for
1707an output mpegts file:
1708@example
1709ffmpeg -i inurl -streamid 0:33 -streamid 1:36 out.ts
1710@end example
1711
1712@item -bsf[:@var{stream_specifier}] @var{bitstream_filters} (@emph{output,per-stream})
1713Set bitstream filters for matching streams. @var{bitstream_filters} is
1714a comma-separated list of bitstream filters. Use the @code{-bsfs} option
1715to get the list of bitstream filters.
1716@example
1717ffmpeg -i h264.mp4 -c:v copy -bsf:v h264_mp4toannexb -an out.h264
1718@end example
1719@example
1720ffmpeg -i file.mov -an -vn -bsf:s mov2textsub -c:s copy -f rawvideo sub.txt
1721@end example
1722
1723@item -tag[:@var{stream_specifier}] @var{codec_tag} (@emph{input/output,per-stream})
1724Force a tag/fourcc for matching streams.
1725
1726@item -timecode @var{hh}:@var{mm}:@var{ss}SEP@var{ff}
Specify the timecode for writing. @var{SEP} is ':' for non-drop timecode and ';'
(or '.') for drop timecode.
1729@example
1730ffmpeg -i input.mpg -timecode 01:02:03.04 -r 30000/1001 -s ntsc output.mpg
1731@end example
1732
1733@anchor{filter_complex_option}
1734@item -filter_complex @var{filtergraph} (@emph{global})
Define a complex filtergraph, i.e. one with an arbitrary number of inputs and/or
1736outputs. For simple graphs -- those with one input and one output of the same
1737type -- see the @option{-filter} options. @var{filtergraph} is a description of
1738the filtergraph, as described in the ``Filtergraph syntax'' section of the
1739ffmpeg-filters manual.
1740
1741Input link labels must refer to input streams using the
1742@code{[file_index:stream_specifier]} syntax (i.e. the same as @option{-map}
1743uses). If @var{stream_specifier} matches multiple streams, the first one will be
1744used. An unlabeled input will be connected to the first unused input stream of
1745the matching type.
1746
1747Output link labels are referred to with @option{-map}. Unlabeled outputs are
1748added to the first output file.
1749
1750Note that with this option it is possible to use only lavfi sources without
1751normal input files.
1752
1753For example, to overlay an image over video
1754@example
1755ffmpeg -i video.mkv -i image.png -filter_complex '[0:v][1:v]overlay[out]' -map
1756'[out]' out.mkv
1757@end example
1758Here @code{[0:v]} refers to the first video stream in the first input file,
1759which is linked to the first (main) input of the overlay filter. Similarly the
1760first video stream in the second input is linked to the second (overlay) input
1761of overlay.
1762
1763Assuming there is only one video stream in each input file, we can omit input
1764labels, so the above is equivalent to
1765@example
1766ffmpeg -i video.mkv -i image.png -filter_complex 'overlay[out]' -map
1767'[out]' out.mkv
1768@end example
1769
1770Furthermore we can omit the output label and the single output from the filter
1771graph will be added to the output file automatically, so we can simply write
1772@example
1773ffmpeg -i video.mkv -i image.png -filter_complex 'overlay' out.mkv
1774@end example
1775
1776As a special exception, you can use a bitmap subtitle stream as input: it
1777will be converted into a video with the same size as the largest video in
1778the file, or 720x576 if no video is present. Note that this is an
1779experimental and temporary solution. It will be removed once libavfilter has
1780proper support for subtitles.
1781
1782For example, to hardcode subtitles on top of a DVB-T recording stored in
1783MPEG-TS format, delaying the subtitles by 1 second:
1784@example
1785ffmpeg -i input.ts -filter_complex \
1786  '[#0x2ef] setpts=PTS+1/TB [sub] ; [#0x2d0] [sub] overlay' \
1787  -sn -map '#0x2dc' output.mkv
1788@end example
1789(0x2d0, 0x2dc and 0x2ef are the MPEG-TS PIDs of respectively the video,
1790audio and subtitles streams; 0:0, 0:3 and 0:7 would have worked too)
1791
1792To generate 5 seconds of pure red video using lavfi @code{color} source:
1793@example
1794ffmpeg -filter_complex 'color=c=red' -t 5 out.mkv
1795@end example
1796
1797@item -filter_complex_threads @var{nb_threads} (@emph{global})
1798Defines how many threads are used to process a filter_complex graph.
1799Similar to filter_threads but used for @code{-filter_complex} graphs only.
1800The default is the number of available CPUs.
1801
1802@item -lavfi @var{filtergraph} (@emph{global})
Define a complex filtergraph, i.e. one with an arbitrary number of inputs and/or
1804outputs. Equivalent to @option{-filter_complex}.
1805
1806@item -filter_complex_script @var{filename} (@emph{global})
1807This option is similar to @option{-filter_complex}, the only difference is that
1808its argument is the name of the file from which a complex filtergraph
1809description is to be read.
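
For example, if a file @file{graph.txt} (the name is illustrative) contains
@example
[0:v][1:v]overlay[out]
@end example
it can be used in place of an inline graph:
@example
ffmpeg -i video.mkv -i image.png -filter_complex_script graph.txt -map '[out]' out.mkv
@end example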
1810
1811@item -accurate_seek (@emph{input})
1812This option enables or disables accurate seeking in input files with the
1813@option{-ss} option. It is enabled by default, so seeking is accurate when
1814transcoding. Use @option{-noaccurate_seek} to disable it, which may be useful
1815e.g. when copying some streams and transcoding the others.
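
For example, a sketch that seeks to roughly one minute in and stream-copies
from the nearest preceding seek point (filenames are illustrative):
@example
ffmpeg -ss 00:01:00 -noaccurate_seek -i input.mkv -c copy output.mkv
@end example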
1816
1817@item -seek_timestamp (@emph{input})
1818This option enables or disables seeking by timestamp in input files with the
1819@option{-ss} option. It is disabled by default. If enabled, the argument
1820to the @option{-ss} option is considered an actual timestamp, and is not
1821offset by the start time of the file. This matters only for files which do
1822not start from timestamp 0, such as transport streams.
1823
1824@item -thread_queue_size @var{size} (@emph{input})
1825This option sets the maximum number of queued packets when reading from the
1826file or device. With low latency / high rate live streams, packets may be
1827discarded if they are not read in a timely manner; setting this value can
1828force ffmpeg to use a separate input thread and read packets as soon as they
arrive. By default ffmpeg only does this if multiple inputs are specified.
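
For example, a sketch that enlarges the queue for two live capture inputs on
Linux (device names are illustrative):
@example
ffmpeg -f alsa -thread_queue_size 1024 -i hw:0 \
       -f v4l2 -thread_queue_size 1024 -i /dev/video0 output.mkv
@end example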
1830
1831@item -sdp_file @var{file} (@emph{global})
Print SDP information for an output stream to @var{file}.
This allows dumping SDP information when at least one output isn't an
RTP stream. (Requires at least one of the output formats to be RTP.)
1835
1836@item -discard (@emph{input})
1837Allows discarding specific streams or frames from streams.
Any input stream can be fully discarded, using the value @code{all}, whereas
selective discarding of frames from a stream occurs at the demuxer
and is not supported by all demuxers; see the example after the list of
accepted values below.
1841
1842@table @option
1843@item none
1844Discard no frame.
1845
1846@item default
1847Default, which discards no frames.
1848
1849@item noref
1850Discard all non-reference frames.
1851
1852@item bidir
1853Discard all bidirectional frames.
1854
1855@item nokey
Discard all frames except keyframes.
1857
1858@item all
1859Discard all frames.
1860@end table
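
For example, a sketch that keeps only keyframes from the input and re-encodes
them (filenames are illustrative; whether frames are actually dropped depends
on the demuxer, and it assumes the libx264 encoder is available):
@example
ffmpeg -discard nokey -i input.mp4 -c:v libx264 keyframes.mp4
@end example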
1861
1862@item -abort_on @var{flags} (@emph{global})
1863Stop and abort on various conditions. The following flags are available:
1864
1865@table @option
1866@item empty_output
1867No packets were passed to the muxer, the output is empty.
1868@item empty_output_stream
1869No packets were passed to the muxer in some of the output streams.
1870@end table
1871
1872@item -max_error_rate (@emph{global})
Set the fraction of decoding frame failures across all inputs which, when
crossed, makes ffmpeg return exit code 69. Crossing this threshold does not
terminate processing. The range is a floating-point number between 0 and 1.
The default is 2/3.
1876
1877@item -xerror (@emph{global})
Stop and exit on error.
1879
1880@item -max_muxing_queue_size @var{packets} (@emph{output,per-stream})
1881When transcoding audio and/or video streams, ffmpeg will not begin writing into
1882the output until it has one packet for each such stream. While waiting for that
1883to happen, packets for other streams are buffered. This option sets the size of
1884this buffer, in packets, for the matching output stream.
1885
1886The default value of this option should be high enough for most uses, so only
1887touch this option if you are sure that you need it.
1888
1889@item -muxing_queue_data_threshold @var{bytes} (@emph{output,per-stream})
1890This is a minimum threshold until which the muxing queue size is not taken into
1891account. Defaults to 50 megabytes per stream, and is based on the overall size
1892of packets passed to the muxer.
1893
1894@item -auto_conversion_filters (@emph{global})
Enable automatically inserting format conversion filters in all filter
graphs, including those defined by @option{-vf}, @option{-af},
@option{-filter_complex} and @option{-lavfi}. This is on by default; to
explicitly disable it you need to specify @code{-noauto_conversion_filters}.
When disabled, if filter format negotiation requires a conversion, the
initialization of the filters will fail; conversions can still be performed
by inserting the relevant conversion filter (scale, aresample) in the graph.
1903
1904@end table
1905
1906@section Preset files
1907A preset file contains a sequence of @var{option}=@var{value} pairs,
1908one for each line, specifying a sequence of options which would be
1909awkward to specify on the command line. Lines starting with the hash
1910('#') character are ignored and are used to provide comments. Check
1911the @file{presets} directory in the FFmpeg source tree for examples.
1912
1913There are two types of preset files: ffpreset and avpreset files.
1914
1915@subsection ffpreset files
1916ffpreset files are specified with the @code{vpre}, @code{apre},
1917@code{spre}, and @code{fpre} options. The @code{fpre} option takes the
1918filename of the preset instead of a preset name as input and can be
1919used for any kind of codec. For the @code{vpre}, @code{apre}, and
1920@code{spre} options, the options specified in a preset file are
1921applied to the currently selected codec of the same type as the preset
1922option.
1923
1924The argument passed to the @code{vpre}, @code{apre}, and @code{spre}
1925preset options identifies the preset file to use according to the
1926following rules:
1927
1928First ffmpeg searches for a file named @var{arg}.ffpreset in the
1929directories @file{$FFMPEG_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
1930the datadir defined at configuration time (usually @file{PREFIX/share/ffmpeg})
or in an @file{ffpresets} folder alongside the executable on win32,
1932in that order. For example, if the argument is @code{libvpx-1080p}, it will
1933search for the file @file{libvpx-1080p.ffpreset}.
1934
1935If no such file is found, then ffmpeg will search for a file named
1936@var{codec_name}-@var{arg}.ffpreset in the above-mentioned
1937directories, where @var{codec_name} is the name of the codec to which
1938the preset file options will be applied. For example, if you select
1939the video codec with @code{-vcodec libvpx} and use @code{-vpre 1080p},
1940then it will search for the file @file{libvpx-1080p.ffpreset}.
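
For example, a hypothetical preset for libx264 could be stored as
@file{libx264-myprofile.ffpreset} (the name and values are illustrative):
@example
crf=23
g=250
@end example
and applied with:
@example
ffmpeg -i input.mkv -vcodec libx264 -vpre myprofile output.mp4
@end example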
1941
1942@subsection avpreset files
avpreset files are specified with the @code{pre} option. They work similarly to
ffpreset files, but they only allow encoder-specific options. Therefore, an
@var{option}=@var{value} pair specifying an encoder cannot be used.
1946
1947When the @code{pre} option is specified, ffmpeg will look for files with the
1948suffix .avpreset in the directories @file{$AVCONV_DATADIR} (if set), and
1949@file{$HOME/.avconv}, and in the datadir defined at configuration time (usually
1950@file{PREFIX/share/ffmpeg}), in that order.
1951
1952First ffmpeg searches for a file named @var{codec_name}-@var{arg}.avpreset in
1953the above-mentioned directories, where @var{codec_name} is the name of the codec
1954to which the preset file options will be applied. For example, if you select the
1955video codec with @code{-vcodec libvpx} and use @code{-pre 1080p}, then it will
1956search for the file @file{libvpx-1080p.avpreset}.
1957
1958If no such file is found, then ffmpeg will search for a file named
1959@var{arg}.avpreset in the same directories.
1960
1961@c man end OPTIONS
1962
1963@chapter Examples
1964@c man begin EXAMPLES
1965
1966@section Video and Audio grabbing
1967
1968If you specify the input format and device then ffmpeg can grab video
1969and audio directly.
1970
1971@example
1972ffmpeg -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg
1973@end example
1974
1975Or with an ALSA audio source (mono input, card id 1) instead of OSS:
1976@example
1977ffmpeg -f alsa -ac 1 -i hw:1 -f video4linux2 -i /dev/video0 /tmp/out.mpg
1978@end example
1979
1980Note that you must activate the right video source and channel before
1981launching ffmpeg with any TV viewer such as
1982@uref{http://linux.bytesex.org/xawtv/, xawtv} by Gerd Knorr. You also
1983have to set the audio recording levels correctly with a
1984standard mixer.
1985
1986@section X11 grabbing
1987
1988Grab the X11 display with ffmpeg via
1989
1990@example
1991ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0 /tmp/out.mpg
1992@end example
1993
0.0 is the display.screen number of your X11 server, the same as
the DISPLAY environment variable.
1996
1997@example
1998ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0+10,20 /tmp/out.mpg
1999@end example
2000
0.0 is the display.screen number of your X11 server, the same as the DISPLAY environment
variable. 10 is the x-offset and 20 the y-offset for the grabbing.
2003
2004@section Video and Audio file format conversion
2005
2006Any supported file format and protocol can serve as input to ffmpeg:
2007
2008Examples:
2009@itemize
2010@item
2011You can use YUV files as input:
2012
2013@example
2014ffmpeg -i /tmp/test%d.Y /tmp/out.mpg
2015@end example
2016
2017It will use the files:
2018@example
2019/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
2020/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...
2021@end example
2022
2023The Y files use twice the resolution of the U and V files. They are
raw files, without a header. They can be generated by all decent video
2025decoders. You must specify the size of the image with the @option{-s} option
2026if ffmpeg cannot guess it.
2027
2028@item
2029You can input from a raw YUV420P file:
2030
2031@example
2032ffmpeg -i /tmp/test.yuv /tmp/out.avi
2033@end example
2034
2035test.yuv is a file containing raw YUV planar data. Each frame is composed
2036of the Y plane followed by the U and V planes at half vertical and
2037horizontal resolution.
2038
2039@item
2040You can output to a raw YUV420P file:
2041
2042@example
2043ffmpeg -i mydivx.avi hugefile.yuv
2044@end example
2045
2046@item
2047You can set several input files and output files:
2048
2049@example
2050ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg
2051@end example
2052
2053Converts the audio file a.wav and the raw YUV video file a.yuv
2054to MPEG file a.mpg.
2055
2056@item
2057You can also do audio and video conversions at the same time:
2058
2059@example
2060ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2
2061@end example
2062
2063Converts a.wav to MPEG audio at 22050 Hz sample rate.
2064
2065@item
2066You can encode to several formats at the same time and define a
2067mapping from input stream to output streams:
2068
2069@example
2070ffmpeg -i /tmp/a.wav -map 0:a -b:a 64k /tmp/a.mp2 -map 0:a -b:a 128k /tmp/b.mp2
2071@end example
2072
Converts a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s. '-map
2074file:index' specifies which input stream is used for each output
2075stream, in the order of the definition of output streams.
2076
2077@item
2078You can transcode decrypted VOBs:
2079
2080@example
2081ffmpeg -i snatch_1.vob -f avi -c:v mpeg4 -b:v 800k -g 300 -bf 2 -c:a libmp3lame -b:a 128k snatch.avi
2082@end example
2083
2084This is a typical DVD ripping example; the input is a VOB file, the
2085output an AVI file with MPEG-4 video and MP3 audio. Note that in this
2086command we use B-frames so the MPEG-4 stream is DivX5 compatible, and
GOP size is 300, which means one intra frame every 10 seconds for 29.97 fps
2088input video. Furthermore, the audio stream is MP3-encoded so you need
2089to enable LAME support by passing @code{--enable-libmp3lame} to configure.
2090The mapping is particularly useful for DVD transcoding
2091to get the desired audio language.
2092
2093NOTE: To see the supported input formats, use @code{ffmpeg -demuxers}.
2094
2095@item
2096You can extract images from a video, or create a video from many images:
2097
2098For extracting images from a video:
2099@example
2100ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
2101@end example
2102
2103This will extract one video frame per second from the video and will
2104output them in files named @file{foo-001.jpeg}, @file{foo-002.jpeg},
2105etc. Images will be rescaled to fit the new WxH values.
2106
2107If you want to extract just a limited number of frames, you can use the
2108above command in combination with the @code{-frames:v} or @code{-t} option,
2109or in combination with -ss to start extracting from a certain point in time.
2110
2111For creating a video from many images:
2112@example
2113ffmpeg -f image2 -framerate 12 -i foo-%03d.jpeg -s WxH foo.avi
2114@end example
2115
2116The syntax @code{foo-%03d.jpeg} specifies to use a decimal number
2117composed of three digits padded with zeroes to express the sequence
2118number. It is the same syntax supported by the C printf function, but
2119only formats accepting a normal integer are suitable.
2120
2121When importing an image sequence, -i also supports expanding
2122shell-like wildcard patterns (globbing) internally, by selecting the
2123image2-specific @code{-pattern_type glob} option.
2124
2125For example, for creating a video from filenames matching the glob pattern
2126@code{foo-*.jpeg}:
2127@example
2128ffmpeg -f image2 -pattern_type glob -framerate 12 -i 'foo-*.jpeg' -s WxH foo.avi
2129@end example
2130
2131@item
2132You can put many streams of the same type in the output:
2133
2134@example
2135ffmpeg -i test1.avi -i test2.avi -map 1:1 -map 1:0 -map 0:1 -map 0:0 -c copy -y test12.nut
2136@end example
2137
2138The resulting output file @file{test12.nut} will contain the first four streams
2139from the input files in reverse order.
2140
2141@item
2142To force CBR video output:
2143@example
2144ffmpeg -i myfile.avi -b 4000k -minrate 4000k -maxrate 4000k -bufsize 1835k out.m2v
2145@end example
2146
2147@item
2148The four options lmin, lmax, mblmin and mblmax use 'lambda' units,
2149but you may use the QP2LAMBDA constant to easily convert from 'q' units:
2150@example
2151ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
2152@end example
2153
2154@end itemize
2155@c man end EXAMPLES
2156
2157@include config.texi
2158@ifset config-all
2159@ifset config-avutil
2160@include utils.texi
2161@end ifset
2162@ifset config-avcodec
2163@include codecs.texi
2164@include bitstream_filters.texi
2165@end ifset
2166@ifset config-avformat
2167@include formats.texi
2168@include protocols.texi
2169@end ifset
2170@ifset config-avdevice
2171@include devices.texi
2172@end ifset
2173@ifset config-swresample
2174@include resampler.texi
2175@end ifset
2176@ifset config-swscale
2177@include scaler.texi
2178@end ifset
2179@ifset config-avfilter
2180@include filters.texi
2181@end ifset
2182@include general_contents.texi
2183@end ifset
2184
2185@chapter See Also
2186
2187@ifhtml
2188@ifset config-all
2189@url{ffmpeg.html,ffmpeg}
2190@end ifset
2191@ifset config-not-all
2192@url{ffmpeg-all.html,ffmpeg-all},
2193@end ifset
2194@url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
2195@url{ffmpeg-utils.html,ffmpeg-utils},
2196@url{ffmpeg-scaler.html,ffmpeg-scaler},
2197@url{ffmpeg-resampler.html,ffmpeg-resampler},
2198@url{ffmpeg-codecs.html,ffmpeg-codecs},
2199@url{ffmpeg-bitstream-filters.html,ffmpeg-bitstream-filters},
2200@url{ffmpeg-formats.html,ffmpeg-formats},
2201@url{ffmpeg-devices.html,ffmpeg-devices},
2202@url{ffmpeg-protocols.html,ffmpeg-protocols},
2203@url{ffmpeg-filters.html,ffmpeg-filters}
2204@end ifhtml
2205
2206@ifnothtml
2207@ifset config-all
2208ffmpeg(1),
2209@end ifset
2210@ifset config-not-all
2211ffmpeg-all(1),
2212@end ifset
2213ffplay(1), ffprobe(1),
2214ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1),
2215ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1),
2216ffmpeg-devices(1), ffmpeg-protocols(1), ffmpeg-filters(1)
2217@end ifnothtml
2218
2219@include authors.texi
2220
2221@ignore
2222
2223@setfilename ffmpeg
2224@settitle ffmpeg video converter
2225
2226@end ignore
2227
2228@bye
2229