@chapter Muxers
@c man begin MUXERS

Muxers are configured elements in FFmpeg which allow writing
multimedia streams to a particular type of file.

When you configure your FFmpeg build, all the supported muxers
are enabled by default. You can list all available muxers using the
configure option @code{--list-muxers}.

You can disable all the muxers with the configure option
@code{--disable-muxers} and selectively enable / disable single muxers
with the options @code{--enable-muxer=@var{MUXER}} /
@code{--disable-muxer=@var{MUXER}}.

The option @code{-formats} of the ff* tools will display the list of
enabled muxers.

A description of some of the currently available muxers follows.

@anchor{crc}
@section crc

CRC (Cyclic Redundancy Check) testing format.

This muxer computes and prints the Adler-32 CRC of all the input audio
and video frames. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.

The output of the muxer consists of a single line of the form:
CRC=0x@var{CRC}, where @var{CRC} is a hexadecimal number 0-padded to
8 digits containing the CRC for all the decoded input frames.

For example to compute the CRC of the input, and store it in the file
@file{out.crc}:
@example
ffmpeg -i INPUT -f crc out.crc
@end example

You can print the CRC to stdout with the command:
@example
ffmpeg -i INPUT -f crc -
@end example

You can select the output format of each frame with @command{ffmpeg} by
specifying the audio and video codec and format. For example to
compute the CRC of the input audio converted to PCM unsigned 8-bit
and the input video converted to MPEG-2 video, use the command:
@example
ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
@end example

See also the @ref{framecrc} muxer.

@anchor{framecrc}
@section framecrc

Per-packet CRC (Cyclic Redundancy Check) testing format.

This muxer computes and prints the Adler-32 CRC for each audio
and video packet. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.

The output of the muxer consists of a line for each audio and video
packet of the form:
@example
@var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, 0x@var{CRC}
@end example

@var{CRC} is a hexadecimal number 0-padded to 8 digits containing the
CRC of the packet.

For example to compute the CRC of the audio and video frames in
@file{INPUT}, converted to raw audio and video packets, and store it
in the file @file{out.crc}:
@example
ffmpeg -i INPUT -f framecrc out.crc
@end example

To print the information to stdout, use the command:
@example
ffmpeg -i INPUT -f framecrc -
@end example

With @command{ffmpeg}, you can select the output format to which the
audio and video frames are encoded before computing the CRC for each
packet by specifying the audio and video codec. For example, to
compute the CRC of each decoded input audio frame converted to PCM
unsigned 8-bit and of each decoded input video frame converted to
MPEG-2 video, use the command:
@example
ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
@end example

See also the @ref{crc} muxer.

@anchor{framemd5}
@section framemd5

Per-packet MD5 testing format.

This muxer computes and prints the MD5 hash for each audio
and video packet. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
hash.

The output of the muxer consists of a line for each audio and video
packet of the form:
@example
@var{stream_index}, @var{packet_dts}, @var{packet_pts}, @var{packet_duration}, @var{packet_size}, @var{MD5}
@end example

@var{MD5} is a hexadecimal number representing the computed MD5 hash
for the packet.

For example to compute the MD5 of the audio and video frames in
@file{INPUT}, converted to raw audio and video packets, and store it
in the file @file{out.md5}:
@example
ffmpeg -i INPUT -f framemd5 out.md5
@end example

To print the information to stdout, use the command:
@example
ffmpeg -i INPUT -f framemd5 -
@end example

See also the @ref{md5} muxer.

@anchor{hls}
@section hls

Apple HTTP Live Streaming muxer that segments MPEG-TS according to
the HTTP Live Streaming specification.

It creates a playlist file and numbered segment files. The output
filename specifies the playlist filename; the segment filenames
receive the same basename as the playlist, a sequential number and
a .ts extension.

@example
ffmpeg -i in.nut out.m3u8
@end example

@table @option
@item -hls_time @var{seconds}
Set the segment length in seconds.
@item -hls_list_size @var{size}
Set the maximum number of playlist entries.
@item -hls_wrap @var{wrap}
Set the number after which the segment index wraps.
@item -start_number @var{number}
Start the sequence from @var{number}.
@end table
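
For example, to produce 10-second segments and keep only the six most
recent entries in the playlist, the options above can be combined as in
the following sketch (the file names are placeholders):
@example
ffmpeg -i in.nut -hls_time 10 -hls_list_size 6 out.m3u8
@end example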

@anchor{ico}
@section ico

ICO file muxer.

Microsoft's icon file format (ICO) has some strict limitations that should be noted:

@itemize
@item
Size cannot exceed 256 pixels in any dimension

@item
Only BMP and PNG images can be stored

@item
If a BMP image is used, it must be one of the following pixel formats:
@example
BMP Bit Depth           FFmpeg Pixel Format
1bit                    pal8
4bit                    pal8
8bit                    pal8
16bit                   rgb555le
24bit                   bgr24
32bit                   bgra
@end example

@item
If a BMP image is used, it must use the BITMAPINFOHEADER DIB header

@item
If a PNG image is used, it must use the rgba pixel format
@end itemize
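
For example, the following sketch wraps a PNG image into an icon,
forcing the rgba pixel format required for PNG icons; it assumes the
input already fits within the 256-pixel limit, and the file names are
placeholders:
@example
ffmpeg -i in.png -c:v png -pix_fmt rgba -f ico out.ico
@end example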

@anchor{image2}
@section image2

Image file muxer.

The image file muxer writes video frames to image files.

The output filenames are specified by a pattern, which can be used to
produce sequentially numbered series of files.
The pattern may contain the string "%d" or "%0@var{N}d", which
specifies the position of the characters representing a sequential
number in the filenames. If the form "%0@var{N}d" is used, the string
representing the number in each filename is 0-padded to @var{N}
digits. The literal character '%' can be specified in the pattern with
the string "%%".

If the pattern contains "%d" or "%0@var{N}d", the first filename of
the file list specified will contain the number 1, all the following
numbers will be sequential.

The pattern may contain a suffix which is used to automatically
determine the format of the image files to write.

For example the pattern "img-%03d.bmp" will specify a sequence of
filenames of the form @file{img-001.bmp}, @file{img-002.bmp}, ...,
@file{img-010.bmp}, etc.
The pattern "img%%-%d.jpg" will specify a sequence of filenames of the
form @file{img%-1.jpg}, @file{img%-2.jpg}, ..., @file{img%-10.jpg},
etc.

The following example shows how to use @command{ffmpeg} for creating a
sequence of files @file{img-001.jpeg}, @file{img-002.jpeg}, ...,
taking one image every second from the input video:
@example
ffmpeg -i in.avi -vsync 1 -r 1 -f image2 'img-%03d.jpeg'
@end example

Note that with @command{ffmpeg}, if the format is not specified with the
@code{-f} option and the output filename specifies an image file
format, the image2 muxer is automatically selected, so the previous
command can be written as:
@example
ffmpeg -i in.avi -vsync 1 -r 1 'img-%03d.jpeg'
@end example

Note also that the pattern does not necessarily have to contain "%d" or
"%0@var{N}d"; for example, to create a single image file
@file{img.jpeg} from the input video you can employ the command:
@example
ffmpeg -i in.avi -f image2 -frames:v 1 img.jpeg
@end example

@table @option
@item start_number @var{number}
Start the sequence from @var{number}. Default value is 1. Must be a
positive number.

@item -update @var{number}
If @var{number} is nonzero, the filename will always be interpreted as just a
filename, not a pattern, and this file will be continuously overwritten with new
images.

@end table
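
For example, the following sketch keeps a single @file{preview.jpeg}
that is continuously overwritten with the most recent frame, written
roughly once per second (the file names and the one-frame-per-second
rate are illustrative assumptions):
@example
ffmpeg -i in.avi -r 1 -update 1 preview.jpeg
@end example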

The image muxer supports the .Y.U.V image file format. This format is
special in that each image frame consists of three files, one for
each of the YUV420P components. To read or write this image file format,
specify the name of the '.Y' file. The muxer will automatically open the
'.U' and '.V' files as required.

@anchor{md5}
@section md5

MD5 testing format.

This muxer computes and prints the MD5 hash of all the input audio
and video frames. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
hash.

The output of the muxer consists of a single line of the form:
MD5=@var{MD5}, where @var{MD5} is a hexadecimal number representing
the computed MD5 hash.

For example to compute the MD5 hash of the input converted to raw
audio and video, and store it in the file @file{out.md5}:
@example
ffmpeg -i INPUT -f md5 out.md5
@end example

You can print the MD5 to stdout with the command:
@example
ffmpeg -i INPUT -f md5 -
@end example

See also the @ref{framemd5} muxer.

@section MOV/MP4/ISMV

The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4
file has all the metadata about all packets stored in one location
(written at the end of the file, it can be moved to the start for
better playback by adding @var{faststart} to the @var{movflags}, or
using the @command{qt-faststart} tool). A fragmented
file consists of a number of fragments, where packets and metadata
about these packets are stored together. Writing a fragmented
file has the advantage that the file is decodable even if the
writing is interrupted (while a normal MOV/MP4 is undecodable if
it is not properly finished), and it requires less memory when writing
very long files (since writing normal MOV/MP4 files stores info about
every single packet in memory until the file is closed). The downside
is that it is less compatible with other applications.

Fragmentation is enabled by setting one of the AVOptions that define
how to cut the file into fragments:

@table @option
@item -moov_size @var{bytes}
Reserves space for the moov atom at the beginning of the file instead of placing the
moov atom at the end. If the space reserved is insufficient, muxing will fail.
@item -movflags frag_keyframe
Start a new fragment at each video keyframe.
@item -frag_duration @var{duration}
Create fragments that are @var{duration} microseconds long.
@item -frag_size @var{size}
Create fragments that contain up to @var{size} bytes of payload data.
@item -movflags frag_custom
Allow the caller to manually choose when to cut fragments, by
calling @code{av_write_frame(ctx, NULL)} to write a fragment with
the packets written so far. (This is only useful with other
applications integrating libavformat, not from @command{ffmpeg}.)
@item -min_frag_duration @var{duration}
Don't create fragments that are shorter than @var{duration} microseconds.
@end table

If more than one condition is specified, fragments are cut when
one of the specified conditions is fulfilled. The exception to this is
@code{-min_frag_duration}, which has to be fulfilled for any of the other
conditions to apply.
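
For example, the following sketch writes a fragmented MP4, cutting a new
fragment at each video keyframe and whenever a fragment reaches two
seconds, whichever condition is fulfilled first (the file names are
placeholders):
@example
ffmpeg -i in.nut -movflags frag_keyframe -frag_duration 2000000 out.mp4
@end example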

Additionally, the way the output file is written can be adjusted
through a few other options:

@table @option
@item -movflags empty_moov
Write an initial moov atom directly at the start of the file, without
describing any samples in it. Generally, an mdat/moov pair is written
at the start of the file, as a normal MOV/MP4 file, containing only
a short portion of the file. With this option set, there is no initial
mdat atom, and the moov atom only describes the tracks but has
a zero duration.

Files written with this option set do not work in QuickTime.
This option is implicitly set when writing ismv (Smooth Streaming) files.
@item -movflags separate_moof
Write a separate moof (movie fragment) atom for each track. Normally,
packets for all tracks are written in a moof atom (which is slightly
more efficient), but with this option set, the muxer writes one moof/mdat
pair for each track, making it easier to separate tracks.

This option is implicitly set when writing ismv (Smooth Streaming) files.
@item -movflags faststart
Run a second pass moving the moov atom to the beginning of the file. This
operation can take a while, and will not work in various situations such
as fragmented output, thus it is not enabled by default. See the example
after this table.
@item -movflags rtphint
Add RTP hinting tracks to the output file.
@end table
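
For example, to remux an existing file and move the moov atom to the
beginning in a second pass, a sketch along the following lines can be
used (the file names are placeholders):
@example
ffmpeg -i in.mp4 -c copy -movflags faststart out.mp4
@end example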

Smooth Streaming content can be pushed in real time to a publishing
point on IIS with this muxer. Example:
@example
ffmpeg -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
@end example

@section mpegts

MPEG transport stream muxer.

This muxer implements ISO 13818-1 and part of ETSI EN 300 468.

The muxer options are:

@table @option
@item -mpegts_original_network_id @var{number}
Set the original_network_id (default 0x0001). This is the unique identifier
of a network in DVB. Its main use is in the unique identification of a
service through the path Original_Network_ID, Transport_Stream_ID.
@item -mpegts_transport_stream_id @var{number}
Set the transport_stream_id (default 0x0001). This identifies a
transponder in DVB.
@item -mpegts_service_id @var{number}
Set the service_id (default 0x0001), also known as the program in DVB.
@item -mpegts_pmt_start_pid @var{number}
Set the first PID for PMT (default 0x1000, max 0x1f00).
@item -mpegts_start_pid @var{number}
Set the first PID for data packets (default 0x0100, max 0x0f00).
@end table

The recognized metadata settings in this muxer are @code{service_provider}
and @code{service_name}. If they are not set, the default for
@code{service_provider} is "FFmpeg" and the default for
@code{service_name} is "Service01".

@example
ffmpeg -i file.mpg -c copy \
     -mpegts_original_network_id 0x1122 \
     -mpegts_transport_stream_id 0x3344 \
     -mpegts_service_id 0x5566 \
     -mpegts_pmt_start_pid 0x1500 \
     -mpegts_start_pid 0x150 \
     -metadata service_provider="Some provider" \
     -metadata service_name="Some Channel" \
     -y out.ts
@end example

@section null

Null muxer.

This muxer does not generate any output file; it is mainly useful for
testing or benchmarking purposes.

For example, to benchmark decoding with @command{ffmpeg} you can use the
command:
@example
ffmpeg -benchmark -i INPUT -f null out.null
@end example

Note that the above command does not read or write the @file{out.null}
file, but specifying the output file is required by the @command{ffmpeg}
syntax.

Alternatively you can write the command as:
@example
ffmpeg -benchmark -i INPUT -f null -
@end example

@section matroska

Matroska container muxer.

This muxer implements the Matroska and WebM container specs.

The recognized metadata settings in this muxer are:

@table @option

@item title=@var{title name}
Name provided to a single track
@end table

@table @option

@item language=@var{language name}
Specifies the language of the track in the Matroska languages form
@end table

@table @option

@item stereo_mode=@var{mode}
Stereo 3D video layout of two views in a single video track
@table @option
@item mono
Video is not stereo
@item left_right
Both views are arranged side by side, Left-eye view is on the left
@item bottom_top
Both views are arranged in top-bottom orientation, Left-eye view is at bottom
@item top_bottom
Both views are arranged in top-bottom orientation, Left-eye view is on top
@item checkerboard_rl
Each view is arranged in a checkerboard interleaved pattern, Left-eye view being first
@item checkerboard_lr
Each view is arranged in a checkerboard interleaved pattern, Right-eye view being first
@item row_interleaved_rl
Each view is constituted by a row-based interleaving, Right-eye view is first row
@item row_interleaved_lr
Each view is constituted by a row-based interleaving, Left-eye view is first row
@item col_interleaved_rl
Both views are arranged in a column-based interleaving manner, Right-eye view is first column
@item col_interleaved_lr
Both views are arranged in a column-based interleaving manner, Left-eye view is first column
@item anaglyph_cyan_red
All frames are in anaglyph format viewable through red-cyan filters
@item right_left
Both views are arranged side by side, Right-eye view is on the left
@item anaglyph_green_magenta
All frames are in anaglyph format viewable through green-magenta filters
@item block_lr
Both eyes laced in one Block, Left-eye view is first
@item block_rl
Both eyes laced in one Block, Right-eye view is first
@end table
@end table

For example, a 3D WebM clip can be created using the following command line:
@example
ffmpeg -i sample_left_right_clip.mpg -an -c:v libvpx -metadata stereo_mode=left_right -y stereo_clip.webm
@end example

This muxer supports the following options:

@table @option

@item reserve_index_space
By default, this muxer writes the index for seeking (called cues in Matroska
terms) at the end of the file, because it cannot know in advance how much space
to leave for the index at the beginning of the file. However, for some use cases
-- e.g. streaming where seeking is possible but slow -- it is useful to put the
index at the beginning of the file.

If this option is set to a non-zero value, the muxer will reserve a given amount
of space in the file header and then try to write the cues there when the muxing
finishes. If the available space does not suffice, muxing will fail. A safe size
for most use cases should be about 50kB per hour of video.

Note that cues are only written if the output is seekable and this option will
have no effect if it is not. See the example after this table.

@end table
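
As a rough sketch based on the 50kB-per-hour guideline above, the
following command remuxes a roughly two-hour file while reserving space
for the cues in the header (the input name and the exact size are
placeholder assumptions):
@example
ffmpeg -i in.mkv -c copy -reserve_index_space 100000 out.mkv
@end example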

@section segment, stream_segment, ssegment

Basic stream segmenter.

The segmenter muxer outputs streams to a number of separate files of nearly
fixed duration. The output filename pattern can be set in a fashion similar to
@ref{image2}.

@code{stream_segment} is a variant of the muxer used to write to
streaming output formats, i.e. which do not require global headers,
and is recommended for outputting e.g. to MPEG transport stream segments.
@code{ssegment} is a shorter alias for @code{stream_segment}.

Every segment starts with a keyframe of the selected reference stream,
which is set through the @option{reference_stream} option.

Note that if you want accurate splitting for a video file, you need to
make the input key frames correspond to the exact splitting times
expected by the segmenter, or the segment muxer will start the new
segment with the key frame found next after the specified start
time.

The segment muxer works best with a single constant frame rate video.

Optionally it can generate a list of the created segments, by setting
the option @var{segment_list}. The list type is specified by the
@var{segment_list_type} option.

The segment muxer supports the following options:

@table @option
@item reference_stream @var{specifier}
Set the reference stream, as specified by the string @var{specifier}.
If @var{specifier} is set to @code{auto}, the reference is chosen
automatically. Otherwise it must be a stream specifier (see the ``Stream
specifiers'' chapter in the ffmpeg manual) which specifies the
reference stream. The default value is ``auto''.

@item segment_format @var{format}
Override the inner container format; by default it is guessed by the filename
extension.

@item segment_list @var{name}
Also generate a list file named @var{name}. If not specified, no
list file is generated.

@item segment_list_flags @var{flags}
Set flags affecting the segment list generation.

It currently supports the following flags:
@table @var
@item cache
Allow caching (only affects M3U8 list files).

@item live
Allow live-friendly file generation.
@end table

Default value is @code{cache}.

@item segment_list_size @var{size}
Update the list file so that it contains at most the last @var{size}
segments. If 0 the list file will contain all the segments. Default
value is 0.

@item segment_list_type @var{type}
Specify the format for the segment list file.

The following values are recognized:
@table @option
@item flat
Generate a flat list for the created segments, one segment per line.

@item csv, ext
Generate a list for the created segments, one segment per line,
each line matching the format (comma-separated values):
@example
@var{segment_filename},@var{segment_start_time},@var{segment_end_time}
@end example

@var{segment_filename} is the name of the output file generated by the
muxer according to the provided pattern. CSV escaping (according to
RFC4180) is applied if required.

@var{segment_start_time} and @var{segment_end_time} specify
the segment start and end time expressed in seconds.

A list file with the suffix @code{".csv"} or @code{".ext"} will
auto-select this format.

@code{ext} is deprecated in favor of @code{csv}.

@item ffconcat
Generate an ffconcat file for the created segments. The resulting file
can be read using the FFmpeg @ref{concat} demuxer.

A list file with the suffix @code{".ffcat"} or @code{".ffconcat"} will
auto-select this format.

@item m3u8
Generate an extended M3U8 file, version 3, compliant with
@url{http://tools.ietf.org/id/draft-pantos-http-live-streaming}.

A list file with the suffix @code{".m3u8"} will auto-select this format.
@end table

If not specified the type is guessed from the list file name suffix.

@item segment_time @var{time}
Set segment duration to @var{time}; the value must be a duration
specification. Default value is "2". See also the
@option{segment_times} option.

Note that splitting may not be accurate, unless you force the
reference stream key-frames at the given time. See the introductory
notice and the examples below.

@item segment_time_delta @var{delta}
Specify the accuracy used when selecting the start time for a
segment, expressed as a duration specification. Default value is "0".

When delta is specified, a key-frame will start a new segment if its
PTS satisfies the relation:
@example
PTS >= start_time - time_delta
@end example

This option is useful when splitting video content, which is always
split at GOP boundaries, in case a key frame is found just before the
specified split time.

In particular, it may be used in combination with the @command{ffmpeg} option
@var{force_key_frames}. The key frame times specified by
@var{force_key_frames} may not be set accurately because of rounding
issues, with the consequence that a key frame time may end up set just
before the specified time. For constant frame rate videos, a value of
1/(2*@var{frame_rate}) should address the worst case mismatch between
the specified time and the time set by @var{force_key_frames}.

@item segment_times @var{times}
Specify a list of split points. @var{times} contains a list of comma
separated duration specifications, in increasing order. See also
the @option{segment_time} option.

@item segment_frames @var{frames}
Specify a list of split video frame numbers. @var{frames} contains a
list of comma separated integer numbers, in increasing order.

This option specifies to start a new segment whenever a reference
stream key frame is found and the sequential number (starting from 0)
of the frame is greater than or equal to the next value in the list.

@item segment_wrap @var{limit}
Wrap around the segment index once it reaches @var{limit}.

@item segment_start_number @var{number}
Set the sequence number of the first segment. Defaults to @code{0}.

@item reset_timestamps @var{1|0}
Reset timestamps at the beginning of each segment, so that each segment
will start with near-zero timestamps. It is meant to ease the playback
of the generated segments. May not work with some combinations of
muxers/codecs. It is set to @code{0} by default.
@end table

@subsection Examples

@itemize
@item
To remux the content of file @file{in.mkv} to a list of segments
@file{out000.nut}, @file{out001.nut}, etc., and write the list of
generated segments to @file{out.list}:
@example
ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.list out%03d.nut
@end example

@item
As in the example above, but segment the input file according to the split
points specified by the @var{segment_times} option:
@example
ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 out%03d.nut
@end example

@item
As in the example above, but use the @command{ffmpeg} @var{force_key_frames}
option to force key frames in the input at the specified locations, together
with the segment option @var{segment_time_delta} to account for
possible rounding when the key frame times are set.
@example
ffmpeg -i in.mkv -force_key_frames 1,2,3,5,8,13,21 -codec:v mpeg4 -codec:a pcm_s16le -map 0 \
-f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 -segment_time_delta 0.05 out%03d.nut
@end example
In order to force key frames on the input file, transcoding is
required.

@item
Segment the input file by splitting it at the frame numbers specified
with the @var{segment_frames} option:
@example
ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_frames 100,200,300,500,800 out%03d.nut
@end example

@item
To convert @file{in.mkv} to TS segments using the @code{libx264}
and @code{libfaac} encoders:
@example
ffmpeg -i in.mkv -map 0 -codec:v libx264 -codec:a libfaac -f ssegment -segment_list out.list out%03d.ts
@end example

@item
Segment the input file, and create an M3U8 live playlist (which can be
used as a live HLS source):
@example
ffmpeg -re -i in.mkv -codec copy -map 0 -f segment -segment_list playlist.m3u8 \
-segment_list_flags +live -segment_time 10 out%03d.mkv
@end example
@end itemize

@section mp3

The MP3 muxer writes a raw MP3 stream with an ID3v2 header at the beginning and
optionally an ID3v1 tag at the end. ID3v2.3 and ID3v2.4 are supported; the
@code{id3v2_version} option controls which one is used. The legacy ID3v1 tag is
not written by default, but may be enabled with the @code{write_id3v1} option.

For seekable output the muxer also writes a Xing frame at the beginning, which
contains the number of frames in the file. It is useful for computing the
duration of VBR files.

The muxer supports writing ID3v2 attached pictures (APIC frames). The pictures
are supplied to the muxer in the form of a video stream with a single packet. There
can be any number of those streams, each of which will correspond to a single APIC frame.
The stream metadata tags @var{title} and @var{comment} map to APIC
@var{description} and @var{picture type} respectively. See
@url{http://id3.org/id3v2.4.0-frames} for allowed picture types.

Note that the APIC frames must be written at the beginning, so the muxer will
buffer the audio frames until it gets all the pictures. It is therefore advised
to provide the pictures as soon as possible to avoid excessive buffering.

Examples:

Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
@example
ffmpeg -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
@end example

To attach a picture to an mp3 file, select both the audio and the picture stream
with @code{map}:
@example
ffmpeg -i input.mp3 -i cover.png -c copy -map 0 -map 1 \
-metadata:s:v title="Album cover" -metadata:s:v comment="Cover (Front)" out.mp3
@end example

@section ogg

Ogg container muxer.

@table @option
@item -page_duration @var{duration}
Preferred page duration, in microseconds. The muxer will attempt to create
pages that are approximately @var{duration} microseconds long. This allows the
user to compromise between seek granularity and container overhead. The default
is 1 second. A value of 0 will fill all segments, making pages as large as
possible. A value of 1 will effectively use 1 packet-per-page in most
situations, giving a small seek granularity at the cost of additional container
overhead.
@end table
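
For example, to favor finer seeking by targeting pages of roughly 100
milliseconds, a sketch along the following lines can be used (the file
names and the choice of the libvorbis encoder are placeholder
assumptions):
@example
ffmpeg -i INPUT -c:a libvorbis -page_duration 100000 out.ogg
@end example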

@section tee

The tee muxer can be used to write the same data to several files or any
other kind of muxer. It can be used, for example, to both stream a video to
the network and save it to disk at the same time.

It is different from specifying several outputs to the @command{ffmpeg}
command-line tool because the audio and video data will be encoded only once
with the tee muxer; encoding can be a very expensive process. It is not
useful when using the libavformat API directly because it is then possible
to feed the same packets to several muxers directly.

The slave outputs are specified in the file name given to the muxer,
separated by '|'. If any of the slave names contains the '|' separator,
leading or trailing spaces or any special character, it must be
escaped (see the ``Quoting and escaping'' section in the ffmpeg-utils
manual).

Options can be specified for each slave by prepending them as a list of
@var{key}=@var{value} pairs separated by ':', between square brackets. If
the option values contain a special character or the ':' separator, they
must be escaped; note that this is a second level escaping.

Example: encode something and both archive it in a Matroska file and stream it
as MPEG-TS over UDP (the streams need to be explicitly mapped):

@example
ffmpeg -i ... -c:v libx264 -c:a mp2 -f tee -map 0:v -map 0:a \
  "archive-20121107.mkv|[f=mpegts]udp://10.0.1.255:1234/"
@end example
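
As another minimal sketch, the same encoded streams can be written to
two local files, one Matroska and one MPEG-TS, in a single run (the
file names are placeholders):
@example
ffmpeg -i INPUT -c:v libx264 -c:a mp2 -f tee -map 0:v -map 0:a "archive.mkv|[f=mpegts]out.ts"
@end example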

Note: some codecs may need different options depending on the output format;
the auto-detection of this cannot work with the tee muxer. The main example
is the @option{global_header} flag.

@c man end MUXERS