\input texinfo @c -*- texinfo -*-
@settitle FFmpeg Documentation
@titlepage
@sp 7
@center @titlefont{FFmpeg Documentation}
@sp 3
@end titlepage
@chapter Introduction
FFmpeg is a very fast video and audio converter. It can also grab from
a live audio/video source.
The command line interface is designed to be intuitive, in the sense
that FFmpeg tries to figure out all parameters that can possibly be
derived automatically. You usually only have to specify the target
bitrate you want.
FFmpeg can also convert from any sample rate to any other, and resize
video on the fly with a high quality polyphase filter.
@chapter Quick Start
@c man begin EXAMPLES
@section Video and Audio grabbing
FFmpeg can grab video and audio from devices given that you specify the input
format and device.
@example
ffmpeg -f audio_device -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg
@end example
Note that you must activate the right video source and channel before
launching FFmpeg. You can do this with any TV viewer, such as xawtv
(@url{http://bytesex.org/xawtv/}) by Gerd Knorr. You also
have to set the audio recording levels correctly with a
standard mixer.
@section X11 grabbing
FFmpeg can grab the X11 display.
@example
ffmpeg -f x11grab -s cif -i :0.0 /tmp/out.mpg
@end example
0.0 is the display.screen number of your X11 server, the same as
the DISPLAY environment variable.
@example
ffmpeg -f x11grab -s cif -i :0.0+10,20 /tmp/out.mpg
@end example
As above, 0.0 is the display.screen number of your X11 server.
10 is the x-offset and 20 the y-offset for the grabbing.
@section Video and Audio file format conversion
* FFmpeg can use any supported file format and protocol as input.
Examples:
* You can use YUV files as input:
@example
ffmpeg -i /tmp/test%d.Y /tmp/out.mpg
@end example
It will use the files:
@example
/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...
@end example
The Y files use twice the resolution of the U and V files. They are
raw files, without header. They can be generated by all decent video
decoders. You must specify the size of the image with the @option{-s} option
if FFmpeg cannot guess it.
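If FFmpeg cannot guess the size, a hedged invocation might give it
explicitly (the 352x288 size and file names here are only placeholders):
@example
ffmpeg -s 352x288 -i /tmp/test%d.Y /tmp/out.mpg
@end example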
* You can input from a raw YUV420P file:
@example
ffmpeg -i /tmp/test.yuv /tmp/out.avi
@end example
test.yuv is a file containing raw YUV planar data. Each frame is composed
of the Y plane followed by the U and V planes at half vertical and
horizontal resolution.
* You can output to a raw YUV420P file:
@example
ffmpeg -i mydivx.avi hugefile.yuv
@end example
* You can set several input files and output files:
@example
ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg
@end example
Converts the audio file a.wav and the raw YUV video file a.yuv
to MPEG file a.mpg.
* You can also do audio and video conversions at the same time:
@example
ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2
@end example
Converts a.wav to MPEG audio at 22050Hz sample rate.
* You can encode to several formats at the same time and define a
mapping from input stream to output streams:
@example
ffmpeg -i /tmp/a.wav -ab 64k /tmp/a.mp2 -ab 128k /tmp/b.mp2 -map 0:0 -map 0:0
@end example
Converts a.wav to a.mp2 at 64 kbits and to b.mp2 at 128 kbits. '-map
file:index' specifies which input stream is used for each output
stream, in the order of the definition of output streams.
* You can transcode decrypted VOBs
@example
ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800k -g 300 -bf 2 -acodec libmp3lame -ab 128k snatch.avi
@end example
This is a typical DVD ripping example; the input is a VOB file, the
output an AVI file with MPEG-4 video and MP3 audio. Note that in this
command we use B-frames so the MPEG-4 stream is DivX5 compatible, and
GOP size is 300 which means one intra frame every 10 seconds for 29.97fps
input video. Furthermore, the audio stream is MP3-encoded so you need
to enable LAME support by passing @code{--enable-libmp3lame} to configure.
The mapping is particularly useful for DVD transcoding
to get the desired audio language.
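As a rough sketch (the stream indices below are placeholders and depend on
the actual VOB; check them first with @code{ffmpeg -i snatch_1.vob}),
selecting a specific audio stream could look like:
@example
ffmpeg -i snatch_1.vob -map 0:0 -map 0:2 -f avi -vcodec mpeg4 -acodec libmp3lame snatch.avi
@end example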
NOTE: To see the supported input formats, use @code{ffmpeg -formats}.
@c man end
@chapter Invocation
@section Syntax
The generic syntax is:
@example
@c man begin SYNOPSIS
ffmpeg [[infile options][@option{-i} @var{infile}]]... @{[outfile options] @var{outfile}@}...
@c man end
@end example
@c man begin DESCRIPTION
As a general rule, options are applied to the next specified
file. Therefore, order is important, and you can have the same
option on the command line multiple times. Each occurrence is
then applied to the next input or output file.
* To set the video bitrate of the output file to 64kbit/s:
@example
ffmpeg -i input.avi -b 64k output.avi
@end example
* To force the frame rate of the input and output file to 24 fps:
@example
ffmpeg -r 24 -i input.avi output.avi
@end example
* To force the frame rate of the output file to 24 fps:
@example
ffmpeg -i input.avi -r 24 output.avi
@end example
* To force the frame rate of input file to 1 fps and the output file to 24 fps:
@example
ffmpeg -r 1 -i input.avi -r 24 output.avi
@end example
The format option may be needed for raw input files.
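For example, a raw YUV file has no header to probe, so a hedged invocation
would force the format and describe the data explicitly (the size, rate and
pixel format here are placeholders):
@example
ffmpeg -f rawvideo -pix_fmt yuv420p -s 352x288 -r 25 -i input.yuv output.avi
@end example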
By default, FFmpeg tries to convert as losslessly as possible: it
uses the same audio and video parameters for the outputs as the ones
specified for the inputs.
@c man end
@c man begin OPTIONS
@section Main options
@table @option
@item -L
Show license.
@item -h
Show help.
@item -version
Show version.
@item -formats
Show available formats, codecs, protocols, ...
@item -f fmt
Force format.
@item -i filename
input filename
@item -y
Overwrite output files.
@item -t duration
Set the recording time in seconds.
@code{hh:mm:ss[.xxx]} syntax is also supported.
@item -fs limit_size
Set the file size limit.
@item -ss position
Seek to given time position in seconds.
@code{hh:mm:ss[.xxx]} syntax is also supported.
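For instance, one way to extract roughly 30 seconds starting one minute into
the input (file names are placeholders):
@example
ffmpeg -i input.avi -ss 00:01:00 -t 30 output.avi
@end example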
@item -itsoffset offset
Set the input time offset in seconds.
@code{[-]hh:mm:ss[.xxx]} syntax is also supported.
This option affects all the input files that follow it.
The offset is added to the timestamps of the input files.
Specifying a positive offset means that the corresponding
streams are delayed by 'offset' seconds.
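A common use, sketched here with placeholder stream indices, is delaying the
audio of a file relative to its video by reading the same file twice and
taking the video from the first input and the audio from the delayed second
input:
@example
ffmpeg -i input.avi -itsoffset 00:00:01.0 -i input.avi -map 0:0 -map 1:1 output.avi
@end example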
@item -title string
Set the title.
@item -timestamp time
Set the timestamp.
@item -author string
Set the author.
@item -copyright string
Set the copyright.
@item -comment string
Set the comment.
@item -album string
Set the album.
@item -track number
Set the track.
@item -year number
Set the year.
@item -v verbose
Control amount of logging.
@item -target type
Specify target file type ("vcd", "svcd", "dvd", "dv", "dv50", "pal-vcd",
"ntsc-svcd", ... ). All the format options (bitrate, codecs,
buffer sizes) are then set automatically. You can just type:
@example
ffmpeg -i myfile.avi -target vcd /tmp/vcd.mpg
@end example
Nevertheless you can specify additional options as long as you know
they do not conflict with the standard, as in:
@example
ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
@end example
@item -dframes number
Set the number of data frames to record.
@item -scodec codec
Force subtitle codec ('copy' to copy stream).
@item -newsubtitle
Add a new subtitle stream to the current output stream.
@item -slang code
Set the ISO 639 language code (3 letters) of the current subtitle stream.
@end table
@section Video Options
@table @option
@item -b bitrate
Set the video bitrate in bit/s (default = 200 kb/s).
@item -vframes number
Set the number of video frames to record.
@item -r fps
Set frame rate (Hz value, fraction or abbreviation), (default = 25).
@item -s size
Set frame size. The format is @samp{wxh} (ffserver default = 160x128, ffmpeg default = same as source).
The following abbreviations are recognized:
@table @samp
@item sqcif
128x96
@item qcif
176x144
@item cif
352x288
@item 4cif
704x576
@item qqvga
160x120
@item qvga
320x240
@item vga
640x480
@item svga
800x600
@item xga
1024x768
@item uxga
1600x1200
@item qxga
2048x1536
@item sxga
1280x1024
@item qsxga
2560x2048
@item hsxga
5120x4096
@item wvga
852x480
@item wxga
1366x768
@item wsxga
1600x1024
@item wuxga
1920x1200
@item woxga
2560x1600
@item wqsxga
3200x2048
@item wquxga
3840x2400
@item whsxga
6400x4096
@item whuxga
7680x4800
@item cga
320x200
@item ega
640x350
@item hd480
852x480
@item hd720
1280x720
@item hd1080
1920x1080
@end table
@item -aspect aspect
Set aspect ratio (4:3, 16:9 or 1.3333, 1.7777).
@item -croptop size
Set top crop band size (in pixels).
@item -cropbottom size
Set bottom crop band size (in pixels).
@item -cropleft size
Set left crop band size (in pixels).
@item -cropright size
Set right crop band size (in pixels).
@item -padtop size
Set top pad band size (in pixels).
@item -padbottom size
Set bottom pad band size (in pixels).
@item -padleft size
Set left pad band size (in pixels).
@item -padright size
Set right pad band size (in pixels).
@item -padcolor (hex color)
Set color of padded bands. The value for padcolor is expressed
as a six digit hexadecimal number where the first two digits
represent red, the middle two digits green and last two digits
blue (default = 000000 (black)).
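As an illustrative sketch (the sizes are placeholders), cropping 8 pixels
from the top and bottom and padding the result back with black bands of the
same size:
@example
ffmpeg -i input.avi -croptop 8 -cropbottom 8 -padtop 8 -padbottom 8 -padcolor 000000 output.avi
@end example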
@item -vn
Disable video recording.
@item -bt tolerance
Set video bitrate tolerance (in bit/s).
@item -maxrate bitrate
Set max video bitrate (in bit/s).
@item -minrate bitrate
Set min video bitrate (in bit/s).
@item -bufsize size
Set video buffer verifier buffer size (in bits).
@item -vcodec codec
Force video codec to @var{codec}. Use the @code{copy} special value to
specify that the raw codec data must be copied as is.
@item -sameq
Use same video quality as source (implies VBR).
@item -pass n
Select the pass number (1 or 2). It is useful for two-pass
encoding. The statistics of the video are recorded in the first
pass and the video is generated at the exact requested bitrate
in the second pass.
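A typical two-pass run might look like the following sketch (the bitrate and
file names are placeholders; the first pass only gathers statistics, so its
output is discarded):
@example
ffmpeg -y -i input.avi -b 1000k -pass 1 -an -f avi /dev/null
ffmpeg -i input.avi -b 1000k -pass 2 output.avi
@end example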
@item -passlogfile file
Set two pass logfile name to @var{file}.
@item -newvideo
Add a new video stream to the current output stream.
@end table
@section Advanced Video Options
@table @option
@item -pix_fmt format
Set pixel format. Use 'list' as parameter to show all the supported
pixel formats.
@item -sws_flags flags
Set SwScaler flags (only available when compiled with SwScaler support).
@item -g gop_size
Set the group of pictures size.
@item -intra
Use only intra frames.
@item -vdt n
Discard threshold.
@item -qscale q
Use fixed video quantizer scale (VBR).
@item -qmin q
minimum video quantizer scale (VBR)
@item -qmax q
maximum video quantizer scale (VBR)
@item -qdiff q
maximum difference between the quantizer scales (VBR)
@item -qblur blur
video quantizer scale blur (VBR)
@item -qcomp compression
video quantizer scale compression (VBR)
@item -lmin lambda
minimum video lagrange factor (VBR)
@item -lmax lambda
max video lagrange factor (VBR)
@item -mblmin lambda
minimum macroblock quantizer scale (VBR)
@item -mblmax lambda
maximum macroblock quantizer scale (VBR)
These four options (lmin, lmax, mblmin, mblmax) use 'lambda' units,
but you may use the QP2LAMBDA constant to easily convert from 'q' units:
@example
ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
@end example
@item -rc_init_cplx complexity
initial complexity for single pass encoding
@item -b_qfactor factor
qp factor between P- and B-frames
@item -i_qfactor factor
qp factor between P- and I-frames
@item -b_qoffset offset
qp offset between P- and B-frames
@item -i_qoffset offset
qp offset between P- and I-frames
@item -rc_eq equation
Set rate control equation (@pxref{FFmpeg formula
evaluator}) (default = @code{tex^qComp}).
@item -rc_override override
rate control override for specific intervals
@item -me_method method
Set motion estimation method to @var{method}.
Available methods are (from lowest to highest quality):
@table @samp
@item zero
Try just the (0, 0) vector.
@item phods
@item log
@item x1
@item hex
@item umh
@item epzs
(default method)
@item full
exhaustive search (slow and marginally better than epzs)
@end table
@item -dct_algo algo
Set DCT algorithm to @var{algo}. Available values are:
@table @samp
@item 0
FF_DCT_AUTO (default)
@item 1
FF_DCT_FASTINT
@item 2
FF_DCT_INT
@item 3
FF_DCT_MMX
@item 4
FF_DCT_MLIB
@item 5
FF_DCT_ALTIVEC
@end table
@item -idct_algo algo
Set IDCT algorithm to @var{algo}. Available values are:
@table @samp
@item 0
FF_IDCT_AUTO (default)
@item 1
FF_IDCT_INT
@item 2
FF_IDCT_SIMPLE
@item 3
FF_IDCT_SIMPLEMMX
@item 4
FF_IDCT_LIBMPEG2MMX
@item 5
FF_IDCT_PS2
@item 6
FF_IDCT_MLIB
@item 7
FF_IDCT_ARM
@item 8
FF_IDCT_ALTIVEC
@item 9
FF_IDCT_SH4
@item 10
FF_IDCT_SIMPLEARM
@end table
@item -er n
Set error resilience to @var{n}.
@table @samp
@item 1
FF_ER_CAREFUL (default)
@item 2
FF_ER_COMPLIANT
@item 3
FF_ER_AGGRESSIVE
@item 4
FF_ER_VERY_AGGRESSIVE
@end table
@item -ec bit_mask
Set error concealment to @var{bit_mask}. @var{bit_mask} is a bit mask of
the following values:
@table @samp
@item 1
FF_EC_GUESS_MVS (default = enabled)
@item 2
FF_EC_DEBLOCK (default = enabled)
@end table
@item -bf frames
Use 'frames' B-frames (supported for MPEG-1, MPEG-2 and MPEG-4).
@item -mbd mode
macroblock decision
@table @samp
@item 0
FF_MB_DECISION_SIMPLE: Use mb_cmp (cannot change it yet in FFmpeg).
@item 1
FF_MB_DECISION_BITS: Choose the one which needs the fewest bits.
@item 2
FF_MB_DECISION_RD: rate distortion
@end table
@item -4mv
Use four motion vectors per macroblock (MPEG-4 only).
@item -part
Use data partitioning (MPEG-4 only).
@item -bug param
Work around encoder bugs that are not auto-detected.
@item -strict strictness
How strictly to follow the standards.
@item -aic
Enable advanced intra coding (H.263+).
@item -umv
Enable unlimited motion vectors (H.263+).
@item -deinterlace
Deinterlace pictures.
@item -ilme
Force interlacing support in encoder (MPEG-2 and MPEG-4 only).
Use this option if your input file is interlaced and you want
to keep the interlaced format for minimum losses.
The alternative is to deinterlace the input stream with
@option{-deinterlace}, but deinterlacing introduces losses.
@item -psnr
Calculate PSNR of compressed frames.
@item -vstats
Dump video coding statistics to @file{vstats_HHMMSS.log}.
@item -vstats_file file
Dump video coding statistics to @var{file}.
@item -vhook module
Insert video processing @var{module}. @var{module} contains the module
name and its parameters separated by spaces.
@item -top n
Set the field order: top field first = 1, bottom field first = 0, auto = -1.
@item -dc precision
Set intra DC precision.
@item -vtag fourcc/tag
Force video tag/fourcc.
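For example, a hedged sketch forcing the common 'DIVX' fourcc on an MPEG-4
stream (the tag value is only an illustration):
@example
ffmpeg -i input.avi -vcodec mpeg4 -vtag DIVX output.avi
@end example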
@item -qphist
Show QP histogram.
@item -vbsf bitstream filter
Bitstream filters available are "dump_extra", "remove_extra", "noise".
@end table
@section Audio Options
@table @option
@item -aframes number
Set the number of audio frames to record.
@item -ar freq
Set the audio sampling frequency (default = 44100 Hz).
@item -ab bitrate
Set the audio bitrate in bit/s (default = 64k).
@item -ac channels
Set the number of audio channels (default = 1).
@item -an
Disable audio recording.
@item -acodec codec
Force audio codec to @var{codec}. Use the @code{copy} special value to
specify that the raw codec data must be copied as is.
@item -newaudio
Add a new audio track to the output file. If you want to specify parameters,
do so before @code{-newaudio} (@code{-acodec}, @code{-ab}, etc..).
Mapping is done automatically if the number of output streams is equal to
the number of input streams; otherwise the first stream that matches is
picked. You can override the mapping using @code{-map} as usual.
Example:
@example
ffmpeg -i file.mpg -vcodec copy -acodec ac3 -ab 384k test.mpg -acodec mp2 -ab 192k -newaudio
@end example
@item -alang code
Set the ISO 639 language code (3 letters) of the current audio stream.
@end table
@section Advanced Audio options:
@table @option
@item -atag fourcc/tag
Force audio tag/fourcc.
@item -absf bitstream filter
Bitstream filters available are "dump_extra", "remove_extra", "noise", "mp3comp", "mp3decomp".
@end table
@section Subtitle options:
@table @option
@item -scodec codec
Force subtitle codec ('copy' to copy stream).
@item -newsubtitle
Add a new subtitle stream to the current output stream.
@item -slang code
Set the ISO 639 language code (3 letters) of the current subtitle stream.
@end table
@section Audio/Video grab options
@table @option
@item -vc channel
Set video grab channel (DV1394 only).
@item -tvstd standard
Set television standard (NTSC, PAL, SECAM).
@item -isync
Synchronize read on input.
@end table
@section Advanced options
@table @option
@item -map input stream id[:input stream id]
Set stream mapping from input streams to output streams.
Just enumerate the input streams in the order you want them in the output.
The optional second input stream id sets the (input) stream to sync against.
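For instance, a sketch taking the video from the first input file and the
audio from the second (the stream indices are placeholders):
@example
ffmpeg -i video.avi -i audio.wav -map 0:0 -map 1:0 output.avi
@end example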
@item -map_meta_data outfile:infile
Set meta data information of outfile from infile.
@item -debug
Print specific debug info.
@item -benchmark
Add timings for benchmarking.
@item -dump
Dump each input packet.
@item -hex
When dumping packets, also dump the payload.
@item -bitexact
Only use bit exact algorithms (for codec testing).
@item -ps size
Set packet size in bits.
@item -re
Read input at native frame rate. Mainly used to simulate a grab device.
@item -loop_input
Loop over the input stream. Currently it works only for image
streams. This option is used for automatic FFserver testing.
@item -loop_output number_of_times
Repeatedly loop output for formats that support looping such as animated GIF
(0 will loop the output infinitely).
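For example, a hedged sketch producing an endlessly looping animated GIF
(file names are placeholders):
@example
ffmpeg -i input.avi -loop_output 0 output.gif
@end example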
@item -threads count
Thread count.
@item -vsync parameter
Video sync method. Video will be stretched/squeezed to match the timestamps;
this is done by duplicating and dropping frames. With -map you can select from
which stream the timestamps should be taken. You can leave either video or
audio unchanged and sync the remaining stream(s) to the unchanged one.
@item -async samples_per_second
Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps;
the parameter is the maximum samples per second by which the audio is changed.
-async 1 is a special case where only the start of the audio stream is corrected
without any later correction.
@item -copyts
Copy timestamps from input to output.
@item -shortest
Finish encoding when the shortest input stream ends.
@item -dts_delta_threshold
Timestamp discontinuity delta threshold.
@item -muxdelay seconds
Set the maximum demux-decode delay.
@item -muxpreload seconds
Set the initial demux-decode delay.
@end table
@node FFmpeg formula evaluator
@section FFmpeg formula evaluator
When evaluating a rate control string, FFmpeg uses an internal formula
evaluator.
The following binary operators are available: @code{+}, @code{-},
@code{*}, @code{/}, @code{^}.
The following unary operators are available: @code{+}, @code{-},
@code{(...)}.
The following functions are available:
@table @var
@item sinh(x)
@item cosh(x)
@item tanh(x)
@item sin(x)
@item cos(x)
@item tan(x)
@item exp(x)
@item log(x)
@item squish(x)
@item gauss(x)
@item abs(x)
@item max(x, y)
@item min(x, y)
@item gt(x, y)
@item lt(x, y)
@item eq(x, y)
@item bits2qp(bits)
@item qp2bits(qp)
@end table
The following constants are available:
@table @var
@item PI
@item E
@item iTex
@item pTex
@item tex
@item mv
@item fCode
@item iCount
@item mcVar
@item var
@item isI
@item isP
@item isB
@item avgQP
@item qComp
@item avgIITex
@item avgPITex
@item avgPPTex
@item avgBPTex
@item avgTex
@end table
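As a purely illustrative sketch of how such an expression is passed, the
default equation could be given explicitly via @option{-rc_eq} (the rest of
the command line is a placeholder):
@example
ffmpeg -i input.avi -b 500k -rc_eq 'tex^qComp' output.mpg
@end example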
@c man end
@ignore
@setfilename ffmpeg
@settitle FFmpeg video converter
@c man begin SEEALSO
ffserver(1), ffplay(1) and the HTML documentation of @file{ffmpeg}.
@c man end
@c man begin AUTHOR
Fabrice Bellard
@c man end
@end ignore
@section Protocols
The filename can be @file{-} to read from standard input or to write
to standard output.
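For example, hedged sketches of reading from a pipe and writing to one (the
format is given explicitly because it cannot be guessed from a file name):
@example
cat input.mpg | ffmpeg -f mpeg -i - output.avi
ffmpeg -i input.avi -f mpeg - > output.mpg
@end example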
FFmpeg also handles many protocols specified with a URL syntax.
Use 'ffmpeg -formats' to see a list of the supported protocols.
The protocol @code{http:} is currently used only to communicate with
FFserver (see the FFserver documentation). Once FFmpeg becomes a
video player, it will also be used for streaming :-)
@chapter Tips
@itemize
@item For streaming at very low bitrates, use a low frame rate
and a small GOP size. This is especially true for RealVideo where
the Linux player does not seem to be very fast, so it can miss
frames. An example is:
@example
ffmpeg -g 3 -r 3 -t 10 -b 50k -s qcif -f rv10 /tmp/b.rm
@end example
@item The parameter 'q' which is displayed while encoding is the current
quantizer. The value 1 indicates that a very good quality could
be achieved. The value 31 indicates the worst quality. If q=31 appears
too often, it means that the encoder cannot compress enough to meet
your bitrate. You must either increase the bitrate, decrease the
frame rate or decrease the frame size.
@item If your computer is not fast enough, you can speed up the
compression at the expense of the compression ratio. You can use
'-me zero' to speed up motion estimation, and '-intra' to disable
motion estimation completely (you have only I-frames, which means it
is about as good as JPEG compression).
@item To have very low audio bitrates, reduce the sampling frequency
(down to 22050 Hz for MPEG audio, 22050 or 11025 Hz for AC3).
@item To have a constant quality (but a variable bitrate), use the option
'-qscale n' where 'n' is between 1 (excellent quality) and 31 (worst
quality).
@item When converting video files, you can use the '-sameq' option which
uses the same quality factor in the encoder as in the decoder.
It allows almost lossless encoding.
@end itemize
@chapter External Libraries
FFmpeg can be hooked up with a number of external libraries to add support
for more formats. None of them are used by default, their use has to be
explicitly requested by passing the appropriate flags to @file{./configure}.
@section AMR
AMR comes in two different flavors, WB and NB. FFmpeg can make use of the
AMR WB (floating-point mode) and the AMR NB (floating-point mode) reference
decoders and encoders.
Go to @url{http://www.penguin.cz/~utx/amr} and follow the instructions for
installing the libraries. Then pass @code{--enable-libamr-nb} and/or
@code{--enable-libamr-wb} to configure to enable the libraries.
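For example, to enable both at configure time:
@example
./configure --enable-libamr-nb --enable-libamr-wb
@end example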
@chapter Supported File Formats and Codecs
You can use the @code{-formats} option to have an exhaustive list.
@section File Formats
FFmpeg supports the following file formats through the @code{libavformat}
library:
@multitable @columnfractions .4 .1 .1 .4
@item Supported File Format @tab Encoding @tab Decoding @tab Comments
@item MPEG audio @tab X @tab X
@item MPEG-1 systems @tab X @tab X
@tab muxed audio and video
@item MPEG-2 PS @tab X @tab X
@tab also known as @code{VOB} file
@item MPEG-2 TS @tab @tab X
@tab also known as DVB Transport Stream
@item ASF@tab X @tab X
@item AVI@tab X @tab X
@item WAV@tab X @tab X
@item Macromedia Flash@tab X @tab X
@tab Only embedded audio is decoded.
@item FLV @tab X @tab X
@tab Macromedia Flash video files
@item Real Audio and Video @tab X @tab X
@item Raw AC3 @tab X @tab X
@item Raw MJPEG @tab X @tab X
@item Raw MPEG video @tab X @tab X
@item Raw PCM8/16 bits, mulaw/Alaw@tab X @tab X
@item Raw CRI ADX audio @tab X @tab X
@item Raw Shorten audio @tab @tab X
@item SUN AU format @tab X @tab X
@item NUT @tab X @tab X @tab NUT Open Container Format
@item QuickTime @tab X @tab X
@item MPEG-4 @tab X @tab X
@tab MPEG-4 is a variant of QuickTime.
@item Raw MPEG4 video @tab X @tab X
@item DV @tab X @tab X
@item 4xm @tab @tab X
@tab 4X Technologies format, used in some games.
@item Playstation STR @tab @tab X
@item Id RoQ @tab X @tab X
@tab Used in Quake III, Jedi Knight 2, other computer games.
@item Interplay MVE @tab @tab X
@tab Format used in various Interplay computer games.
@item WC3 Movie @tab @tab X
@tab Multimedia format used in Origin's Wing Commander III computer game.
@item Sega FILM/CPK @tab @tab X
@tab Used in many Sega Saturn console games.
@item Westwood Studios VQA/AUD @tab @tab X
@tab Multimedia formats used in Westwood Studios games.
@item Id Cinematic (.cin) @tab @tab X
@tab Used in Quake II.
@item FLIC format @tab @tab X
@tab .fli/.flc files
@item Sierra VMD @tab @tab X
@tab Used in Sierra CD-ROM games.
@item Sierra Online @tab @tab X
@tab .sol files used in Sierra Online games.
@item Matroska @tab @tab X
@item Electronic Arts Multimedia @tab @tab X
@tab Used in various EA games; files have extensions like WVE and UV2.
@item Nullsoft Video (NSV) format @tab @tab X
@item ADTS AAC audio @tab X @tab X
@item Creative VOC @tab X @tab X @tab Created for the Sound Blaster Pro.
@item American Laser Games MM @tab @tab X
@tab Multimedia format used in games like Mad Dog McCree
@item AVS @tab @tab X
@tab Multimedia format used by the Creature Shock game.
@item Smacker @tab @tab X
@tab Multimedia format used by many games.
@item GXF @tab X @tab X
@tab General eXchange Format SMPTE 360M, used by Thomson Grass Valley playout servers.
@item CIN @tab @tab X
@tab Multimedia format used by Delphine Software games.
@item MXF @tab @tab X
@tab Material eXchange Format SMPTE 377M, used by D-Cinema, broadcast industry.
@item SEQ @tab @tab X
@tab Tiertex .seq files used in the DOS CDROM version of the game Flashback.
@item DXA @tab @tab X
@tab This format is used in non-Windows version of Feeble Files game and
different game cutscenes repacked for use with ScummVM.
@item THP @tab @tab X
@tab Used on the Nintendo GameCube.
@item C93 @tab @tab X
@tab Used in the game Cyberia from Interplay.
@item Bethsoft VID @tab @tab X
@tab Used in some games from Bethesda Softworks.
@item CRYO APC @tab @tab X
@tab Audio format used in some games by CRYO Interactive Entertainment.
@end multitable
@code{X} means that encoding (resp. decoding) is supported.
@section Image Formats
FFmpeg can read and write images for each frame of a video sequence. The
following image formats are supported:
@multitable @columnfractions .4 .1 .1 .4
@item Supported Image Format @tab Encoding @tab Decoding @tab Comments
@item PGM, PPM @tab X @tab X
@item PAM @tab X @tab X @tab PAM is a PNM extension with alpha support.
@item PGMYUV @tab X @tab X @tab PGM with U and V components in YUV 4:2:0
@item JPEG @tab X @tab X @tab Progressive JPEG is not supported.
@item .Y.U.V @tab X @tab X @tab one raw file per component
@item animated GIF @tab X @tab X @tab Only uncompressed GIFs are generated.
@item PNG @tab X @tab X @tab 2 bit and 4 bit/pixel not supported yet.
@item Targa @tab @tab X @tab Targa (.TGA) image format.
@item TIFF @tab X @tab X @tab YUV, JPEG and some extension is not supported yet.
@item SGI @tab X @tab X @tab SGI RGB image format
@item PTX @tab @tab X @tab V.Flash PTX format
@end multitable
@code{X} means that encoding (resp. decoding) is supported.
@section Video Codecs
@multitable @columnfractions .4 .1 .1 .4
@item Supported Codec @tab Encoding @tab Decoding @tab Comments
@item MPEG-1 video @tab X @tab X
@item MPEG-2 video @tab X @tab X
@item MPEG-4 @tab X @tab X
@item MSMPEG4 V1 @tab X @tab X
@item MSMPEG4 V2 @tab X @tab X
@item MSMPEG4 V3 @tab X @tab X
@item WMV7 @tab X @tab X
@item WMV8 @tab X @tab X @tab not completely working
@item WMV9 @tab @tab X @tab not completely working
@item VC1 @tab @tab X
@item H.261 @tab X @tab X
@item H.263(+) @tab X @tab X @tab also known as RealVideo 1.0
@item H.264 @tab @tab X
@item RealVideo 1.0 @tab X @tab X
@item RealVideo 2.0 @tab X @tab X
@item MJPEG @tab X @tab X
@item lossless MJPEG @tab X @tab X
@item JPEG-LS @tab X @tab X @tab fourcc: MJLS, lossless and near-lossless is supported
@item Apple MJPEG-B @tab @tab X
@item Sunplus MJPEG @tab @tab X @tab fourcc: SP5X
@item DV @tab X @tab X
@item HuffYUV @tab X @tab X
@item FFmpeg Video 1 @tab X @tab X @tab experimental lossless codec (fourcc: FFV1)
@item FFmpeg Snow @tab X @tab X @tab experimental wavelet codec (fourcc: SNOW)
@item Asus v1 @tab X @tab X @tab fourcc: ASV1
@item Asus v2 @tab X @tab X @tab fourcc: ASV2
@item Creative YUV @tab @tab X @tab fourcc: CYUV
@item Sorenson Video 1 @tab X @tab X @tab fourcc: SVQ1
@item Sorenson Video 3 @tab @tab X @tab fourcc: SVQ3
@item On2 VP3 @tab @tab X @tab still experimental
@item On2 VP5 @tab @tab X @tab fourcc: VP50
@item On2 VP6 @tab @tab X @tab fourcc: VP60,VP61,VP62
@item Theora @tab X @tab X @tab still experimental
@item Intel Indeo 3 @tab @tab X
@item FLV @tab X @tab X @tab Sorenson H.263 used in Flash
@item Flash Screen Video @tab X @tab X @tab fourcc: FSV1
@item ATI VCR1 @tab @tab X @tab fourcc: VCR1
@item ATI VCR2 @tab @tab X @tab fourcc: VCR2
@item Cirrus Logic AccuPak @tab @tab X @tab fourcc: CLJR
@item 4X Video @tab @tab X @tab Used in certain computer games.
@item Sony Playstation MDEC @tab @tab X
@item Id RoQ @tab X @tab X @tab Used in Quake III, Jedi Knight 2, other computer games.
@item Xan/WC3 @tab @tab X @tab Used in Wing Commander III .MVE files.
@item Interplay Video @tab @tab X @tab Used in Interplay .MVE files.
@item Apple Animation @tab X @tab X @tab fourcc: 'rle '
@item Apple Graphics @tab @tab X @tab fourcc: 'smc '
@item Apple Video @tab @tab X @tab fourcc: rpza
@item Apple QuickDraw @tab @tab X @tab fourcc: qdrw
@item Cinepak @tab @tab X
@item Microsoft RLE @tab @tab X
@item Microsoft Video-1 @tab @tab X
@item Westwood VQA @tab @tab X
@item Id Cinematic Video @tab @tab X @tab Used in Quake II.
@item Planar RGB @tab @tab X @tab fourcc: 8BPS
@item FLIC video @tab @tab X
@item Duck TrueMotion v1 @tab @tab X @tab fourcc: DUCK
@item Duck TrueMotion v2 @tab @tab X @tab fourcc: TM20
@item VMD Video @tab @tab X @tab Used in Sierra VMD files.
@item MSZH @tab @tab X @tab Part of LCL
@item ZLIB @tab X @tab X @tab Part of LCL, encoder experimental
@item TechSmith Camtasia @tab @tab X @tab fourcc: TSCC
@item IBM Ultimotion @tab @tab X @tab fourcc: ULTI
@item Miro VideoXL @tab @tab X @tab fourcc: VIXL
@item QPEG @tab @tab X @tab fourccs: QPEG, Q1.0, Q1.1
@item LOCO @tab @tab X @tab
@item Winnov WNV1 @tab @tab X @tab
@item Autodesk Animator Studio Codec @tab @tab X @tab fourcc: AASC
@item Fraps FPS1 @tab @tab X @tab
@item CamStudio @tab @tab X @tab fourcc: CSCD
@item American Laser Games Video @tab @tab X @tab Used in games like Mad Dog McCree
@item ZMBV @tab X @tab X @tab Encoder works only on PAL8
@item AVS Video @tab @tab X @tab Video encoding used by the Creature Shock game.
@item Smacker Video @tab @tab X @tab Video encoding used in Smacker.
@item RTjpeg @tab @tab X @tab Video encoding used in NuppelVideo files.
@item KMVC @tab @tab X @tab Codec used in Worms games.
@item VMware Video @tab @tab X @tab Codec used in videos captured by VMware.
@item Cin Video @tab @tab X @tab Codec used in Delphine Software games.
@item Tiertex Seq Video @tab @tab X @tab Codec used in DOS CDROM FlashBack game.
@item DXA Video @tab @tab X @tab Codec originally used in Feeble Files game.
@item AVID DNxHD @tab @tab X @tab aka SMPTE VC3
@item C93 Video @tab @tab X @tab Codec used in Cyberia game.
@item THP @tab @tab X @tab Used on the Nintendo GameCube.
@item Bethsoft VID @tab @tab X @tab Used in some games from Bethesda Softworks.
@item Renderware TXD @tab @tab X @tab Texture dictionaries used by the Renderware Engine.
@end multitable
@code{X} means that encoding (resp. decoding) is supported.
@section Audio Codecs
@multitable @columnfractions .4 .1 .1 .1 .7
@item Supported Codec @tab Encoding @tab Decoding @tab Comments
@item MPEG audio layer 2 @tab IX @tab IX
@item MPEG audio layer 1/3 @tab IX @tab IX
@tab MP3 encoding is supported through the external library LAME.
@item AC3 @tab IX @tab IX
@tab liba52 is used internally for decoding.
@item Vorbis @tab X @tab X
@item WMA V1/V2 @tab X @tab X
@item AAC @tab X @tab X
@tab Supported through the external library libfaac/libfaad.
@item Microsoft ADPCM @tab X @tab X
@item MS IMA ADPCM @tab X @tab X
@item QT IMA ADPCM @tab @tab X
@item 4X IMA ADPCM @tab @tab X
@item G.726 ADPCM @tab X @tab X
@item Duck DK3 IMA ADPCM @tab @tab X
@tab Used in some Sega Saturn console games.
@item Duck DK4 IMA ADPCM @tab @tab X
@tab Used in some Sega Saturn console games.
@item Westwood Studios IMA ADPCM @tab @tab X
@tab Used in Westwood Studios games like Command and Conquer.
@item SMJPEG IMA ADPCM @tab @tab X
@tab Used in certain Loki game ports.
@item CD-ROM XA ADPCM @tab @tab X
@item CRI ADX ADPCM @tab X @tab X
@tab Used in Sega Dreamcast games.
@item Electronic Arts ADPCM @tab @tab X
@tab Used in various EA titles.
@item Creative ADPCM @tab @tab X
@tab 16 -> 4, 8 -> 4, 8 -> 3, 8 -> 2
@item THP ADPCM @tab @tab X
@tab Used on the Nintendo GameCube.
@item RA144 @tab @tab X
@tab Real 14400 bit/s codec
@item RA288 @tab @tab X
@tab Real 28800 bit/s codec
@item RADnet @tab X @tab IX
@tab Real low bitrate AC3 codec, liba52 is used for decoding.
@item AMR-NB @tab X @tab X
@tab Supported through an external library.
@item AMR-WB @tab X @tab X
@tab Supported through an external library.
@item DV audio @tab @tab X
@item Id RoQ DPCM @tab X @tab X
@tab Used in Quake III, Jedi Knight 2, other computer games.
@item Interplay MVE DPCM @tab @tab X
@tab Used in various Interplay computer games.
@item Xan DPCM @tab @tab X
@tab Used in Origin's Wing Commander IV AVI files.
@item Sierra Online DPCM @tab @tab X
@tab Used in Sierra Online game audio files.
@item Apple MACE 3 @tab @tab X
@item Apple MACE 6 @tab @tab X
@item FLAC lossless audio @tab X @tab X
@item Shorten lossless audio @tab @tab X
@item Apple lossless audio @tab @tab X
@tab QuickTime fourcc 'alac'
@item FFmpeg Sonic @tab X @tab X
@tab experimental lossy/lossless codec
@item Qdesign QDM2 @tab @tab X
@tab there are still some distortions
@item Real COOK @tab @tab X
@tab All versions except 5.1 are supported
@item DSP Group TrueSpeech @tab @tab X
@item True Audio (TTA) @tab @tab X
@item Smacker Audio @tab @tab X
@item WavPack Audio @tab @tab X
@item Cin Audio @tab @tab X
@tab Codec used in Delphine Software games.
@item Intel Music Coder @tab @tab X
@item Musepack @tab @tab X
@tab Only SV7 is supported
@item DTS Coherent Audio @tab @tab X
@item ATRAC 3 @tab @tab X
@end multitable
@code{X} means that encoding (resp. decoding) is supported.
@code{I} means that an integer-only version is available, too (ensures high
performance on systems without hardware floating point support).
@chapter Platform Specific information
@section BSD
BSD make will not build FFmpeg; you need to install and use GNU Make
(@file{gmake}).
@section Windows
To get help and instructions for using FFmpeg under Windows, check out
the FFmpeg Windows Help Forum at
@url{http://arrozcru.no-ip.org/ffmpeg/}.
@subsection Native Windows compilation
@itemize
@item Install the current versions of MSYS and MinGW from
@url{http://www.mingw.org/}. You can find detailed installation
instructions in the download section and the FAQ.
NOTE: Use at least bash 3.1. Older versions are known to fail on the
configure script.
@item If you want to test FFplay, also download
the MinGW development library of SDL 1.2.x
(@file{SDL-devel-1.2.x-mingw32.tar.gz}) from
@url{http://www.libsdl.org}. Unpack it in a temporary directory, and
unpack the archive @file{i386-mingw32msvc.tar.gz} in the MinGW tool
directory. Edit the @file{sdl-config} script so that it gives the
correct SDL directory when invoked.
@item If you want to use vhooks, you must have a POSIX compliant libdl in your
MinGW system. Get dlfcn-win32 from @url{http://code.google.com/p/dlfcn-win32}.
@item Extract the current version of FFmpeg.
@item Start the MSYS shell (file @file{msys.bat}).
@item Change to the FFmpeg directory and follow
the instructions of how to compile FFmpeg (file
@file{INSTALL}). Usually, launching @file{./configure} and @file{make}
suffices. If you have problems using SDL, verify that
@file{sdl-config} can be launched from the MSYS command line.
@item You can install FFmpeg in @file{Program Files/FFmpeg} by typing
@file{make install}. Do not forget to copy @file{SDL.dll} to the place
you launch @file{ffplay} from.
@end itemize
Notes:
@itemize
@item The target @file{make wininstaller} can be used to create a
Nullsoft based Windows installer for FFmpeg and FFplay. @file{SDL.dll}
must be copied to the FFmpeg directory in order to build the
installer.
@item By using @code{./configure --enable-shared} when configuring FFmpeg,
you can build @file{avcodec.dll} and @file{avformat.dll}. With
@code{make install} you install the FFmpeg DLLs and the associated
headers in @file{Program Files/FFmpeg}.
@item Visual C++ compatibility: If you used @code{./configure --enable-shared}
when configuring FFmpeg, FFmpeg tries to use the Microsoft Visual
C++ @code{lib} tool to build @code{avcodec.lib} and
@code{avformat.lib}. With these libraries you can link your Visual C++
code directly with the FFmpeg DLLs (see below).
@end itemize
@subsection Visual C++ compatibility
FFmpeg will not compile under Visual C++ -- and it has too many
dependencies on the GCC compiler to make a port viable. However,
if you want to use the FFmpeg libraries in your own applications,
you can still compile those applications using Visual C++. An
important restriction to this is that you have to use the
dynamically linked versions of the FFmpeg libraries (i.e. the
DLLs), and you have to make sure that Visual-C++-compatible
import libraries are created during the FFmpeg build process.
This description of how to use the FFmpeg libraries with Visual C++ is
based on Visual C++ 2005 Express Edition Beta 2. If you have a different
version, you might have to modify the procedures slightly.
Here are the step-by-step instructions for building the FFmpeg libraries
so they can be used with Visual C++:
@enumerate
@item Install Visual C++ (if you have not done so already).
@item Install MinGW and MSYS as described above.
@item Add a call to @file{vcvars32.bat} (which sets up the environment
variables for the Visual C++ tools) as the first line of
@file{msys.bat}. The standard location for @file{vcvars32.bat} is
@file{C:\Program Files\Microsoft Visual Studio 8\VC\bin\vcvars32.bat},
and the standard location for @file{msys.bat} is
@file{C:\msys\1.0\msys.bat}. If this corresponds to your setup, add the
following line as the first line of @file{msys.bat}:
@code{call "C:\Program Files\Microsoft Visual Studio 8\VC\bin\vcvars32.bat"}
@item Start the MSYS shell (file @file{msys.bat}) and type @code{link.exe}.
If you get a help message with the command line options of @code{link.exe},
this means your environment variables are set up correctly, the
Microsoft linker is on the path and will be used by FFmpeg to
create Visual-C++-compatible import libraries.
@item Extract the current version of FFmpeg and change to the FFmpeg directory.
@item Type the command
@code{./configure --enable-shared --disable-static --enable-memalign-hack}
to configure and, if that did not produce any errors,
type @code{make} to build FFmpeg.
@item The subdirectories @file{libavformat}, @file{libavcodec}, and
@file{libavutil} should now contain the files @file{avformat.dll},
@file{avformat.lib}, @file{avcodec.dll}, @file{avcodec.lib},
@file{avutil.dll}, and @file{avutil.lib}, respectively. Copy the three
DLLs to your System32 directory (typically @file{C:\Windows\System32}).
@end enumerate
And here is how to use these libraries with Visual C++:
@enumerate
@item Create a new console application ("File / New / Project") and then
select "Win32 Console Application". On the appropriate page of the
Application Wizard, uncheck the "Precompiled headers" option.
@item Write the source code for your application, or, for testing, just
copy the code from an existing sample application into the source file
that Visual C++ has already created for you. (Note that your source
file has to have a @code{.cpp} extension; otherwise, Visual C++ will not
compile the FFmpeg headers correctly because in C mode, it does not
recognize the @code{inline} keyword.) For example, you can copy
@file{output_example.c} from the FFmpeg distribution (but you will
have to make minor modifications so the code will compile under
C++, see below).
@item Open the "Project / Properties" dialog box. In the "Configuration"
combo box, select "All Configurations" so that the changes you make will
affect both debug and release builds. In the tree view on the left hand
side, select "C/C++ / General", then edit the "Additional Include
Directories" setting to contain the complete paths to the
@file{libavformat}, @file{libavcodec}, and @file{libavutil}
subdirectories of your FFmpeg directory. Note that the directories have
to be separated using semicolons. Now select "Linker / General" from the
tree view and edit the "Additional Library Directories" setting to
contain the same three directories.
@item Still in the "Project / Properties" dialog box, select "Linker / Input"
from the tree view, then add the files @file{avformat.lib},
@file{avcodec.lib}, and @file{avutil.lib} to the end of the "Additional
Dependencies". Note that the names of the libraries have to be separated
using spaces.
@item Now, select "C/C++ / Code Generation" from the tree view. Select
"Debug" in the "Configuration" combo box. Make sure that "Runtime
Library" is set to "Multi-threaded Debug DLL". Then, select "Release" in
the "Configuration" combo box and make sure that "Runtime Library" is
set to "Multi-threaded DLL".
@item Click "OK" to close the "Project / Properties" dialog box and build
the application. Hopefully, it should compile and run cleanly. If you
used @file{output_example.c} as your sample application, you will get a
few compiler errors, but they are easy to fix. The first type of error
occurs because Visual C++ does not allow an @code{int} to be converted to
an @code{enum} without a cast. To solve the problem, insert the required
casts (this error occurs once for a @code{CodecID} and once for a
@code{CodecType}). The second type of error occurs because C++ requires
the return value of @code{malloc} to be cast to the exact type of the
pointer it is being assigned to. Visual C++ will complain that, for
example, @code{(void *)} is being assigned to @code{(uint8_t *)} without
an explicit cast. So insert an explicit cast in these places to silence
the compiler. The third type of error occurs because the @code{snprintf}
library function is called @code{_snprintf} under Visual C++. So just
add an underscore to fix the problem. With these changes,
@file{output_example.c} should compile under Visual C++, and the
resulting executable should produce valid video files.
@end enumerate
@subsection Cross compilation for Windows with Linux
You must use the MinGW cross compilation tools available at
@url{http://www.mingw.org/}.
Then configure FFmpeg with the following options:
@example
./configure --target-os=mingw32 --cross-prefix=i386-mingw32msvc-
@end example
(you can change the cross-prefix according to the prefix chosen for the
MinGW tools).
Then you can easily test FFmpeg with Wine
(@url{http://www.winehq.com/}).
@subsection Compilation under Cygwin
Cygwin works very much like Unix.
Just install your Cygwin with all the "Base" packages, plus the
following "Devel" ones:
@example
binutils, gcc-core, make, subversion
@end example
Do not install binutils-20060709-1 (they are buggy on shared builds);
use binutils-20050610-1 instead.
Then run
@example
./configure --enable-static --disable-shared
@end example
to make a static build or
@example
./configure --enable-shared --disable-static
@end example
to build shared libraries.
If you want to build FFmpeg with additional libraries, download Cygwin
"Devel" packages for Ogg and Vorbis from any Cygwin packages repository
and/or SDL, xvid, faac, faad2 packages from Cygwin Ports,
(@url{http://cygwinports.dotsrc.org/}).
@subsection Crosscompilation for Windows under Cygwin
With Cygwin you can create Windows binaries that do not need the cygwin1.dll.
Just install your Cygwin as explained before, plus these additional
"Devel" packages:
@example
gcc-mingw-core, mingw-runtime, mingw-zlib
@end example
and add some special flags to your configure invocation.
For a static build run
@example
./configure --target-os=mingw32 --enable-memalign-hack --enable-static --disable-shared --extra-cflags=-mno-cygwin --extra-libs=-mno-cygwin
@end example
and for a build with shared libraries
@example
./configure --target-os=mingw32 --enable-memalign-hack --enable-shared --disable-static --extra-cflags=-mno-cygwin --extra-libs=-mno-cygwin
@end example
@section BeOS
The configure script should guess the configuration itself.
Networking support is currently not finished.
errno issues fixed by Andrew Bachmann.
Old stuff:
François Revol - revol at free dot fr - April 2002
The configure script should guess the configuration itself,
however I still did not test building on the net_server version of BeOS.
FFserver is broken (needs poll() implementation).
There are still issues with errno codes, which are negative in BeOS and
which FFmpeg negates when returning them. This ends up turning errors into
valid results, which then leads to crashes.
(To be fixed)
@chapter Developers Guide
@section API
@itemize @bullet
@item libavcodec is the library containing the codecs (both encoding and
decoding). Look at @file{libavcodec/apiexample.c} to see how to use it.
@item libavformat is the library containing the file format handling (mux and
demux code for several formats). Look at @file{ffplay.c} to use it in a
player. See @file{output_example.c} to use it to generate audio or video
streams.
@end itemize
@section Integrating libavcodec or libavformat in your program
You can integrate all the source code of the libraries to link them
statically to avoid any version problem. All you need is to provide a
'config.mak' and a 'config.h' in the parent directory. See the defines
generated by ./configure to understand what is needed.
You can use libavcodec or libavformat in your commercial program, but
@emph{any patch you make must be published}. The best way to proceed is
to send your patches to the FFmpeg mailing list.
@node Coding Rules
@section Coding Rules
FFmpeg is programmed in the ISO C90 language with a few additional
features from ISO C99, namely:
@itemize @bullet
@item
the @samp{inline} keyword;
@item
@samp{//} comments;
@item
designated struct initializers (@samp{struct s x = @{ .i = 17 @};})
@item
compound literals (@samp{x = (struct s) @{ 17, 23 @};})
@end itemize
These features are supported by all compilers we care about, so we will not
accept patches to remove their use unless they absolutely do not impair
clarity and performance.
All code must compile with GCC 2.95 and GCC 3.3. Currently, FFmpeg also
compiles with several other compilers, such as the Compaq ccc compiler
or Sun Studio 9, and we would like to keep it that way unless it would
be exceedingly involved. To ensure compatibility, please do not use any
additional C99 features or GCC extensions. Especially watch out for:
@itemize @bullet
@item
mixing statements and declarations;
@item
@samp{long long} (use @samp{int64_t} instead);
@item
@samp{__attribute__} not protected by @samp{#ifdef __GNUC__} or similar;
@item
GCC statement expressions (@samp{(x = (@{ int y = 4; y; @}))}).
@end itemize
Indent size is 4.
The presentation is the one specified by 'indent -i4 -kr -nut'.
The TAB character is forbidden outside of Makefiles as is any
form of trailing whitespace. Commits containing either will be
rejected by the Subversion repository.
The main priority in FFmpeg is simplicity and small code size in order to
minimize the bug count.
Comments: Use the JavaDoc/Doxygen
format (see examples below) so that code documentation
can be generated automatically. All nontrivial functions should have a comment
above them explaining what the function does, even if it is just one sentence.
All structures and their member variables should be documented, too.
@example
/**
* @@file mpeg.c
* MPEG codec.
* @@author ...
*/
/**
* Summary sentence.
* more text ...
* ...
*/
typedef struct Foobar@{
int var1; /**< var1 description */
int var2; ///< var2 description
/** var3 description */
int var3;
@} Foobar;
/**
* Summary sentence.
* more text ...
* ...
* @@param my_parameter description of my_parameter
* @@return return value description
*/
int myfunc(int my_parameter)
...
@end example
fprintf and printf are forbidden in libavformat and libavcodec,
please use av_log() instead.
Casts should be used only when necessary. Unneeded parentheses
should also be avoided if they don't make the code easier to understand.
@section Development Policy
@enumerate
@item
Contributions should be licensed under the LGPL 2.1, including an
"or any later version" clause, or the MIT license. GPL 2 including
an "or any later version" clause is also acceptable, but LGPL is
preferred.
@item
You must not commit code which breaks FFmpeg! (Meaning unfinished but
enabled code which breaks compilation or compiles but does not work or
breaks the regression tests)
You can commit unfinished stuff (for testing etc), but it must be disabled
(#ifdef etc) by default so it does not interfere with other developers'
work.
@item
You do not have to over-test things. If it works for you, and you think it
should work for others, then commit. If your code has problems
(portability, triggers compiler bugs, unusual environment etc) they will be
reported and eventually fixed.
@item
Do not commit unrelated changes together, split them into self-contained
pieces. Also do not forget that if part B depends on part A, but A does not
depend on B, then A can and should be committed first and separate from B.
Keeping changes well split into self-contained parts makes reviewing and
understanding them on the commit log mailing list easier. This also helps
in case of debugging later on.
Also if you have doubts about splitting or not splitting, do not hesitate to
ask/discuss it on the developer mailing list.
@item
Do not change behavior of the program (renaming options etc) without
first discussing it on the ffmpeg-devel mailing list. Do not remove
functionality from the code. Just improve!
Note: Redundant code can be removed.
@item
Do not commit changes to the build system (Makefiles, configure script)
which change behavior, defaults etc, without asking first. The same
applies to compiler warning fixes, trivial looking fixes and to code
maintained by other developers. We usually have a reason for doing things
the way we do. Send your changes as patches to the ffmpeg-devel mailing
list, and if the code maintainers say OK, you may commit. This does not
apply to files you wrote and/or maintain.
@item
We refuse source indentation and other cosmetic changes if they are mixed
with functional changes; such commits will be rejected and removed. Every
developer has his own indentation style; you should not change it. Of course
if you (re)write something, you can use your own style, even though we would
prefer if the indentation throughout FFmpeg was consistent (Many projects
force a given indentation style - we do not.). If you really need to make
indentation changes (try to avoid this), separate them strictly from real
changes.
NOTE: If you had to put an if()@{ .. @} around a large (> 5 lines) chunk of
code, then either do NOT change the indentation of the inner part within it
(do not move it to the right), or do so in a separate commit.
@item
Always fill out the commit log message. Describe in a few lines what you
changed and why. You can refer to mailing list postings if you fix a
particular bug. Comments such as "fixed!" or "Changed it." are unacceptable.
@item
If you apply a patch by someone else, include the name and email address in
the log message. Since the ffmpeg-cvslog mailing list is publicly
archived you should add some SPAM protection to the email address. Send an
answer to ffmpeg-devel (or wherever you got the patch from) saying that
you applied the patch.
@item
When applying patches that have been discussed (at length) on the mailing
list, reference the thread in the log message.
@item
Do NOT commit to code actively maintained by others without permission.
Send a patch to ffmpeg-devel instead. If no one answers within a reasonable
timeframe (12h for build failures and security fixes, 3 days for small changes,
1 week for big patches) then commit your patch if you think it is OK.
Also note, the maintainer can simply ask for more time to review!
@item
Subscribe to the ffmpeg-cvslog mailing list. The diffs of all commits
are sent there and reviewed by all the other developers. Bugs and possible
improvements or general questions regarding commits are discussed there. We
expect you to react if problems with your code are uncovered.
@item
Update the documentation if you change behavior or add features. If you are
unsure how best to do this, send a patch to ffmpeg-devel, the documentation
maintainer(s) will review and commit your stuff.
@item
Try to keep important discussions and requests (also) on the public
developer mailing list, so that all developers can benefit from them.
@item
Never write to unallocated memory, never write over the end of arrays,
always check values read from some untrusted source before using them
as array index or other risky things.
@item
Remember to check if you need to bump versions for the specific libav
parts (libavutil, libavcodec, libavformat) you are changing. You need
to change the version integer and the version string.
Incrementing the first component means no backward compatibility to
previous versions (e.g. removal of a function from the public API).
Incrementing the second component means backward compatible change
(e.g. addition of a function to the public API).
Incrementing the third component means a noteworthy binary compatible
change (e.g. encoder bug fix that matters for the decoder).
@item
If you add a new codec, remember to update the changelog, add it to
the supported codecs table in the documentation and bump the second
component of the @file{libavcodec} version number appropriately. If
it has a fourcc, add it to @file{libavformat/avienc.c}, even if it
is only a decoder.
@item
Do not change code to hide warnings without ensuring that the underlying
logic is correct and thus the warning was inappropriate.
@item
If you add a new file, give it a proper license header. Do not copy and
paste it from a random place, use an existing file as template.
@end enumerate
We think our rules are not too hard. If you have comments, contact us.
Note, these rules are mostly borrowed from the MPlayer project.
@section Submitting patches
First, read the Coding Rules (@pxref{Coding Rules}) above if you have not done so yet.
When you submit your patch, try to send a unified diff (diff '-up'
option). We cannot read other diffs :-)
Also please do not submit a patch which contains several unrelated changes.
Split it into separate, self-contained pieces. This does not mean splitting
file by file. Instead, make the patch as small as possible while still
keeping it as a logical unit that contains an individual change, even
if it spans multiple files. This makes reviewing your patches much easier
for us and greatly increases your chances of getting your patch applied.
Run the regression tests before submitting a patch so that you can
verify that there are no big problems.
Patches should be posted as base64 encoded attachments (or any other
encoding which ensures that the patch will not be trashed during
transmission) to the ffmpeg-devel mailing list, see
@url{http://lists.mplayerhq.hu/mailman/listinfo/ffmpeg-devel}
It also helps quite a bit if you tell us what the patch does (for example
'replaces lrint by lrintf'), and why (for example '*BSD isn't C99 compliant
and has no lrint()').
Also, if you send several patches, please send each patch as a separate mail;
do not attach several unrelated patches to the same mail.
@section Patch submission checklist
@enumerate
@item
Do the regression tests pass with the patch applied?
@item
Is the patch a unified diff?
@item
Is the patch against latest FFmpeg SVN?
@item
Are you subscribed to the ffmpeg-devel mailing list?
(The list is subscribers-only due to spam.)
@item
Have you checked that the changes are minimal, so that the same cannot be
achieved with a smaller patch and/or simpler final code?
@item
If the change is to speed critical code, did you benchmark it?
@item
If you did any benchmarks, did you provide them in the mail?
@item
Have you checked that the patch does not introduce buffer overflows or
other security issues?
@item
Is the patch created from the root of the source tree, so it can be
applied with @code{patch -p0}?
@item
Does the patch not mix functional and cosmetic changes?
@item
Did you add tabs or trailing whitespace to the code? Both are forbidden.
@item
Is the patch attached to the email you send?
@item
Is the mime type of the patch correct? It should be text/x-diff or
text/x-patch or at least text/plain and not application/octet-stream.
@item
If the patch fixes a bug, did you provide a verbose analysis of the bug?
@item
If the patch fixes a bug, did you provide enough information, including
a sample, so the bug can be reproduced and the fix can be verified?
Please note: do not attach samples >100k to mails; rather, provide a
URL. You can upload to ftp://upload.mplayerhq.hu
@item
Did you provide a verbose summary about what the patch changes?
@item
Did you provide a verbose explanation of why it changes things the way it does?
@item
Did you provide a verbose summary of the user visible advantages and
disadvantages if the patch is applied?
@item
Did you provide an example so we can verify the new feature added by the
patch easily?
@item
If you added a new file, did you insert a license header? It should be
taken from FFmpeg, not randomly copied and pasted from somewhere else.
@item
You should maintain alphabetical order in alphabetically ordered lists as
long as doing so does not break API/ABI compatibility.
@item
Lines with similar content should be aligned vertically when doing so
improves readability.
@item
Did you provide a suggestion for a clear commit log message?
@end enumerate
@section Patch review process
All patches posted to ffmpeg-devel will be reviewed, unless they contain a
clear note that the patch is not for SVN.
Reviews and comments will be posted as replies to the patch on the
mailing list. The patch submitter then has to take care of every comment;
this can be done by resubmitting a changed patch or by discussion. Resubmitted
patches will themselves be reviewed like any other patch. If at some point
a patch passes review with no comments, then it is approved; for
simple and small patches this can happen immediately, while large patches
will generally have to be changed and reviewed many times before they are approved.
After a patch is approved it will be committed to the repository.
We will review all submitted patches, but sometimes we are quite busy so
especially for large patches this can take several weeks.
When resubmitting patches, please do not make any significant changes
not related to the comments received during review. Such patches will
be rejected. Instead, submit significant changes or new features as
separate patches.
@section Regression tests
Before submitting a patch (or committing to the repository), you should at least
test that you did not break anything.
The regression tests build a synthetic video stream and a synthetic
audio stream. These are then encoded and decoded with all codecs or
formats. The CRC (or MD5) of each generated file is recorded in a
result file. A 'diff' is launched to compare the reference results and
the result file.
The regression tests then go on to test the FFserver code with a
limited set of streams. It is important that this step runs correctly
as well.
Run 'make test' to test all the codecs and formats.
Run 'make fulltest' to test all the codecs, formats and FFserver.
[Of course, some patches may change the results of the regression tests. In
this case, the reference results of the regression tests shall be modified
accordingly].
@bye