Invalid buffer ids are defined by VA_INVALID_ID. Use that throughout the
vaapi_*.c support files now that we have private data initialized and
managed by libavcodec. Previously, the only requirement for the public
vaapi_context struct was to be zero-initialized.
This fixes support for third-party VA drivers that strictly conform to
the API, whereby an invalid buffer id is VA_INVALID_ID and the first
valid buffer id can actually be zero.
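A minimal sketch of the pattern; the FFVAContext field names and helper
names are assumptions, while VA_INVALID_ID and vaDestroyBuffer() come
from the VA API:

    #include <va/va.h>

    typedef struct FFVAContext {
        VABufferID pic_param_buf_id;   /* hypothetical field names */
        VABufferID iq_matrix_buf_id;
    } FFVAContext;

    static void vaapi_reset_ids(FFVAContext *vactx)
    {
        /* Reset to VA_INVALID_ID rather than 0: id 0 can be valid. */
        vactx->pic_param_buf_id = VA_INVALID_ID;
        vactx->iq_matrix_buf_id = VA_INVALID_ID;
    }

    static void vaapi_destroy_buffer(VADisplay display, VABufferID *buf_id)
    {
        /* Compare against VA_INVALID_ID instead of testing non-zero:
         * a strictly conforming driver may hand out buffer id 0. */
        if (*buf_id != VA_INVALID_ID) {
            vaDestroyBuffer(display, *buf_id);
            *buf_id = VA_INVALID_ID;
        }
    }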
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Move the libavcodec-managed objects from the public struct vaapi_context
to a new, privately owned FFVAContext. This cleans up and streamlines
the public structure, and also prepares for new codec support, which
will require new internal data to be added there.
The AVCodecContext.hwaccel_context field, which holds the public
vaapi_context, shall no longer be accessed from within the vaapi_*.c
codec support files.
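A minimal sketch of the intended layering; the accessor name and the use
of AVCodecInternal.hwaccel_priv_data are assumptions:

    /* libavcodec/vaapi_internal.h (sketch) */
    typedef struct FFVAContext {
        VAContextID context_id;      /* moved out of struct vaapi_context */
        VABufferID  pic_param_buf_id;
    } FFVAContext;

    /* Codec files fetch the private context instead of dereferencing
     * AVCodecContext.hwaccel_context directly. */
    static inline FFVAContext *ff_vaapi_get_context(AVCodecContext *avctx)
    {
        return avctx->internal->hwaccel_priv_data;
    }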
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Deprecate the older VA pixel formats (MOCO, IDCT), as they are now very
unlikely to ever be useful again. Only keep the plain AV_PIX_FMT_VAAPI
format, which is aliased to the older VLD variant.
This is an API change.
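A sketch of the resulting enum layout in pixfmt.h; the FF_API_VAAPI
guard name is an assumption here:

    #if FF_API_VAAPI
        /** @deprecated */
        AV_PIX_FMT_VAAPI_MOCO,
        /** @deprecated */
        AV_PIX_FMT_VAAPI_IDCT,
        /** @deprecated aliased by AV_PIX_FMT_VAAPI */
        AV_PIX_FMT_VAAPI_VLD,
        AV_PIX_FMT_VAAPI = AV_PIX_FMT_VAAPI_VLD,
    #else
        AV_PIX_FMT_VAAPI,
    #endif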
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Currently, when forcing an I frame via the API, or via the ffmpeg CLI
using -force_key_frames, we still let x264 decide what sort of keyframe
to use. In some cases it is useful to be able to force an IDR frame,
e.g. for cutting streams.
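A minimal sketch of the decision in the libx264 wrapper; the forced_idr
option field is an assumption, while X264_TYPE_IDR and
X264_TYPE_KEYFRAME are real x264 frame types:

    /* When the caller forces an I frame, optionally promote it to a
     * full IDR instead of letting x264 pick the keyframe type. */
    if (frame->pict_type == AV_PICTURE_TYPE_I)
        x4->pic.i_type = x4->forced_idr ? X264_TYPE_IDR
                                        : X264_TYPE_KEYFRAME;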
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
* commit '7bf9647264308d2df74b2b50669f2d02a7ecc90b':
vp7: bound checking in vp7_decode_frame_header
Only partially merged, see 46f72ea507
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit 'f34b152eb7b7e8d2aee57c710a072cf74173fbe1':
libfdk-aacdec: Clean up properly if the init fails
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit '87de6ddb7b7674e329d5c96677bd8685bc7f7855':
libfdk-aacdec: Bump the max number of channels to 8
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
The h264 decoder reports 4.1 as its maximum level, but it will decode
level 5.1 4K video just fine. In practice, the published level limits in
vdpau do not communicate anything that's actually useful.
Previously, most of the error paths leaked. Also add
FF_CODEC_CAP_INIT_THREADSAFE while adding caps_internal; this decoder
wrapper doesn't have any static data that needs to be initialized.
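A sketch of the relevant declaration, assuming the
FF_CODEC_CAP_INIT_CLEANUP flag is what plugs the leaks (with it,
libavcodec invokes the close callback when init fails, so every error
path in init can simply return an error code):

    AVCodec ff_libfdk_aac_decoder = {
        .name          = "libfdk_aac",
        .init          = fdk_aac_decode_init,
        .close         = fdk_aac_decode_close,
        /* init touches no static data, hence INIT_THREADSAFE */
        .caps_internal = FF_CODEC_CAP_INIT_THREADSAFE |
                         FF_CODEC_CAP_INIT_CLEANUP,
    };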
Signed-off-by: Martin Storsjö <martin@martin.st>
For ADTS streams, the output format (number of channels, frame size)
can change at any point (with the latest version of fdk-aac, the decoder
seems to change format after a handful of frames, not outputting the
right format immediately, in cases that worked fine with the earlier
version of the library).
Previously, the decoder decoded straight into the output frame once the
number of channels and frame size were known. This obviously does not
work if the number of channels or the frame size changes.
The alternative would be to allocate the AVFrame with the maximum number
of channels and frame size, and adjust them after decoding into it, but
that could confuse users of e.g. the get_buffer callback.
This solution should be more robust.
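A minimal sketch of the approach, with the decoder_buffer fields as
assumptions (aacDecoder_DecodeFrame() and aacDecoder_GetStreamInfo() are
the real fdk-aac calls):

    /* Decode into a buffer sized for the worst case, then allocate the
     * frame from the format the decoder actually reported. */
    ret = aacDecoder_DecodeFrame(s->handle, (INT_PCM *)s->decoder_buffer,
                                 s->decoder_buffer_size / sizeof(INT_PCM), 0);
    if (ret != AAC_DEC_OK)
        return AVERROR_UNKNOWN;

    info = aacDecoder_GetStreamInfo(s->handle);
    avctx->channels   = info->numChannels;  /* may differ per packet */
    frame->nb_samples = info->frameSize;
    if ((ret = ff_get_buffer(avctx, frame, 0)) < 0)
        return ret;
    memcpy(frame->data[0], s->decoder_buffer,
           frame->nb_samples * info->numChannels * sizeof(INT_PCM));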
CC: libav-stable@libav.org
Signed-off-by: Martin Storsjö <martin@martin.st>
In the latest version of fdk-aac, the decoder can output up to 8
channels; take this into account when preallocating buffers that
need to fit the output from any packet.
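A sketch of the corresponding sizing, assuming a preallocated
decoder_buffer as above:

    #define DECODER_MAX_CHANNELS  8
    #define DECODER_BUFFSIZE      (2048 * sizeof(INT_PCM))

    s->decoder_buffer_size = DECODER_BUFFSIZE * DECODER_MAX_CHANNELS;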
CC: libav-stable@libav.org
Signed-off-by: Martin Storsjö <martin@martin.st>
The value of wrap_flag in the Text Wrap Box specifies whether the text
is to be wrapped or not. If it is, use 'end of line wrap' among the wrap
styles supported by ASS, i.e. fill as much text in a line as possible,
then break to the next line.
The 3GPP spec has no provision for smart wrapping.
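A sketch of the mapping, with a hypothetical handler for the Text Wrap
Box; ASS WrapStyle 1 is end-of-line wrapping and WrapStyle 2 disables
wrapping, while smart wrapping (WrapStyle 0) has no 3GPP counterpart:

    static int decode_twrp(const uint8_t *tsmb, MovTextContext *m)
    {
        uint8_t wrap_flag = *tsmb;
        /* Emitted later into [Script Info] as "WrapStyle: N". */
        m->wrap_style = wrap_flag ? 1 : 2;
        return 0;
    }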
Signed-off-by: Niklesh <niklesh.lalwani@iitb.ac.in>
As suggested, posting the combined patch with the FATE changes.
The patch sets the default ASS style from the default style information
present in the movtext header.
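A sketch of the result, with hypothetical field and helper names for the
values parsed out of the tx3g sample description (a full ASS Style line
carries more fields; abbreviated here):

    /* Emit the parsed defaults as the Default style of the ASS header;
     * to_ass_color() is a hypothetical ARGB -> &HAABBGGRR converter. */
    av_bprintf(&header,
               "Style: Default,%s,%d,&H%08x,&H%08x,%d,%d,%d\r\n",
               m->font_name, m->font_size,
               to_ass_color(m->font_color), to_ass_color(m->back_color),
               m->bold, m->italic, m->underline);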
Signed-off-by: Niklesh <niklesh.lalwani@iitb.ac.in>
A parser should never be called with a mismatching codec.
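A sketch of the invariant as a debug assertion; its placement in
av_parser_parse2() and the loop over AVCodecParser.codec_ids are
assumptions:

    int i, matches = 0;
    for (i = 0; i < FF_ARRAY_ELEMS(s->parser->codec_ids); i++)
        matches |= s->parser->codec_ids[i] == avctx->codec_id;
    av_assert1(matches);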
Found-by: Ganesh Ajjanagadde <gajjanag@mit.edu>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The table is used in libavutil/eval.c.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
Don't try to do a blocking wait for MMAL output if we haven't even sent
a single real packet, but only flush packets. Obviously we can't expect
to get anything back.
Additionally, don't send a flush packet to MMAL in the same case. It
appears the MMAL decoder will sometimes hang in mmal_vc_port_disable()
(called from ffmmal_close_decoder()), waiting for a reply from the GPU
which never arrives. Either MMAL disallows sending flush packets without
preceding real data, or it's a MMAL bug.
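A sketch of the guard, with the context field and flag names as
assumptions:

    /* Skip both the flush packet and the blocking wait when no real
     * data was ever sent to the decoder; nothing can come back. */
    if (is_flush_packet && !ctx->packets_sent) {
        *got_frame = 0;
        return 0;
    }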