Personally, I need the decoder to error out if get_format() returns
no usable pixel format. This did not work because the error code was
not propagated up the call chain: the variable declaration removed in
this patch shadowed the variable whose value is returned at the end
of the function, so failures of decode_nal_unit() were silently
ignored at this point.
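A minimal standalone sketch of how such a shadowing declaration
swallows the error code (illustrative names only, not the actual hevc
code):

    #include <stdio.h>

    /* Stand-in for the real decode_nal_unit(); always fails here. */
    static int decode_nal_unit(void)
    {
        return -1;
    }

    /* Illustrative only: the inner declaration shadows the outer ret,
     * so the failure never reaches the caller. */
    static int decode_frame(void)
    {
        int ret = 0;

        {
            int ret = decode_nal_unit();  /* BUG: redeclares and shadows ret */
            if (ret < 0)
                fprintf(stderr, "decode_nal_unit() failed\n");
            /* the negative value is discarded when this block ends */
        }

        return ret;  /* always 0: the error was never propagated up */
    }

    int main(void)
    {
        printf("decode_frame() returned %d\n", decode_frame());  /* prints 0 */
        return 0;
    }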
Reviewed-by: Andreas Cadhalpun <andreas.cadhalpun@googlemail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Some clients incorrectly set bits_per_coded_sample to 24, while the
actual value is stored in one of the codec headers.
To work around this, delay the check until decode_frame().
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This lets client programs request nv12 surfaces instead of the
current default (uyvy), since those are more efficient to decode to.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Use the multi-library interface to load, at runtime, x265 libraries
built for alternative bit depths (e.g. 8-bit and 16-bit).
If the linked library does not itself support the requested pixel
format, it will try to load a library that does.
Fall back to requesting the native library (passing 0 to
x265_api_get()) if no library supporting the requested bit depth is
available.
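A minimal sketch of that selection logic, assuming only the public
x265_api_get() from x265.h; the wrapper function name is made up:

    #include <stddef.h>
    #include <x265.h>

    /* Made-up wrapper name; only x265_api_get() itself is the real API. */
    static const x265_api *load_x265_for_depth(int bit_depth)
    {
        /* Ask for a build of x265 that handles the requested bit depth;
         * the linked library may load a sibling library behind the scenes. */
        const x265_api *api = x265_api_get(bit_depth);

        /* Fall back to the natively linked build (0 = "whatever bit depth
         * this library was built for") if no matching library was found. */
        if (!api)
            api = x265_api_get(0);

        return api;  /* may still be NULL if nothing usable is installed */
    }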
Signed-off-by: Gopu Govindaswamy <gopu@multicorewareinc.com>
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
The header had an incorrect version description.
Bug-Id: 808
Signed-off-by: Shiina Hideaki <shiina@yndrd.com>
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
The previous version only checked the extended sync word for 14-bit
streams and did not work properly across buffer boundaries.
Use the 64-bit parser state to make extended sync word detection work
across buffer boundaries, and check the extended sync word for 16-bit
LE and BE core streams to reduce the probability of detecting alias
syncs.
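A generic sketch of the carried-state technique (sync word layout and
values here are placeholders, not taken from dca_parser.c):

    #include <stdint.h>
    #include <stddef.h>

    #define CORE_SYNC     0x7FFE8001u  /* placeholder 32-bit core sync word        */
    #define EXT_SYNC_BITS 0x3Fu        /* placeholder bits expected after the sync */

    typedef struct SyncState {
        uint64_t state;                /* last 8 bytes seen, newest in the low byte */
    } SyncState;

    /* Feed one buffer; returns the offset one past the byte that completed
     * a verified sync, or -1 if none was found in this buffer. */
    static ptrdiff_t find_sync(SyncState *s, const uint8_t *buf, size_t size)
    {
        for (size_t i = 0; i < size; i++) {
            s->state = (s->state << 8) | buf[i];

            /* The candidate sync word sits in bits 32..63 of the state; the
             * bits that follow it are already in bits 0..31, even when they
             * arrived in a previous call with a different buffer. */
            if ((uint32_t)(s->state >> 32) == CORE_SYNC &&
                ((s->state >> 26) & 0x3F) == EXT_SYNC_BITS)
                return (ptrdiff_t)(i + 1);
        }
        return -1;
    }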
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
In compression mode, rice_limit = 0 leads to show_bits(gb, k) being
called in decode_scalar() with k = 0.
Request a sample instead, in case such a stream is valid and should
be accepted.
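A standalone sketch of the kind of guard this describes (names are
approximate; the real decoder flags the stream as unsupported and
asks for a sample):

    #include <stdio.h>

    /* Approximate, standalone sketch: refuse a zero rice_limit when the
     * stream is actually compressed, so decode_scalar() is never asked
     * to read zero bits later on. */
    static int check_rice_limit(unsigned rice_limit, int is_compressed)
    {
        if (is_compressed && rice_limit == 0) {
            /* Such a stream could still be valid, so treat it as
             * "unsupported, please provide a sample" rather than corrupt. */
            fprintf(stderr, "compression with rice_limit 0 is not supported\n");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        printf("%d\n", check_rice_limit(0, 1));  /* -1: rejected */
        printf("%d\n", check_rice_limit(4, 1));  /*  0: accepted */
        return 0;
    }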
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
CC: libav-stable@libav.org
decode_array_0000() assumed that the minimal block size is 64, which
is not the case.
CC: libav-stable@libav.org
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
QuickDraw packs data as a series of opcodes that the application is
supposed to handle, but it does not define any order in which they
might appear.
Since it is unfeasible to support *all* opcodes defined by the spec,
only handle well-known blocks containing video data and ignore any
unknown or unsupported ones.
Move palette loading and RLE decoding to separate functions so they
can be reused for other blocks, and drop the format initialization in
init() since the decoder can support more formats than pal8.
Validate width and height.
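A generic sketch of the resulting opcode dispatch shape (opcode
values and helper names are placeholders, not the actual decoder
code):

    #include <stdint.h>
    #include <stddef.h>

    enum {
        OP_PALETTE  = 0x0001,  /* placeholder: block carrying a palette      */
        OP_RLE_DATA = 0x0098,  /* placeholder: block carrying RLE pixel data */
        OP_END      = 0x00FF,  /* placeholder: end-of-picture marker         */
    };

    static int decode_opcodes(const uint8_t *buf, size_t size)
    {
        size_t pos = 0;

        while (pos + 2 <= size) {
            unsigned opcode = (buf[pos] << 8) | buf[pos + 1];
            pos += 2;

            switch (opcode) {
            case OP_PALETTE:
                /* parse_palette(buf + pos, ...); advance pos accordingly */
                break;
            case OP_RLE_DATA:
                /* decode_rle(buf + pos, ...); advance pos accordingly */
                break;
            case OP_END:
                return 0;
            default:
                /* Unknown or unsupported opcode: the spec defines no order
                 * for the blocks, so simply move on to the next one. */
                break;
            }
        }
        return 0;
    }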
For max_order = 0 the clipping range is invalid (amin = 2, amax = 1).
CC: libav-stable@libav.org
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Also change the type of begin, end and smp to ptrdiff_t to make the
comparison well-defined.
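A small standalone illustration of one way such a comparison goes
wrong with an unsigned index type (hypothetical values, not the
decoder's data):

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        unsigned  u_begin = 0;
        ptrdiff_t s_begin = 0;

        /* Unsigned: 0 - 1 wraps to a huge positive value, so a "still in
         * range" comparison against an end index gives the wrong answer. */
        printf("unsigned:  begin - 1 < 16 ? %d\n", (u_begin - 1) < 16);  /* 0 */

        /* Signed ptrdiff_t: -1 compares as expected. */
        printf("ptrdiff_t: begin - 1 < 16 ? %d\n", (s_begin - 1) < 16);  /* 1 */
        return 0;
    }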
CC: libav-stable@libav.org
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
The minimum of the ath(x, ATH_ADD) function depends on ATH_ADD.
This patch uses a first-order approximation to determine it.
For ATH_ADD = 4 this results in the value at 3407.06812 (-5.24241638)
instead of the one at 3410 (-5.24237967).
CC: libav-stable@libav.org
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
avctx->bits_per_raw_sample is used in get_sbits_long, which only
supports up to 32 bits.
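A hedged, standalone sketch of the corresponding range check (the
helper name is made up):

    #include <stdio.h>

    /* Made-up helper name; the idea is simply that values above 32 cannot
     * be read with a single get_sbits_long() call, so reject them early. */
    static int validate_bits_per_raw_sample(int bits_per_raw_sample)
    {
        if (bits_per_raw_sample > 32) {
            fprintf(stderr, "invalid bits_per_raw_sample: %d\n",
                    bits_per_raw_sample);
            return -1;
        }
        return 0;
    }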
CC: libav-stable@libav.org
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
It does not make sense to copy is_avc without copying this as well. This
patch should not change anything for now, but will have an effect in
later commits.
That function currently does two things -- reinitializing the DSP
contexts and setting low_delay based on the SPS values.
The former more appropriately belongs in h264_slice_header_init(), while
the latter only really makes sense in decode_slice_header().
The third call to ff_h264_set_parameter_from_sps(), done immediately
after parsing a new SPS, appears to serve no useful purpose, so it is
just dropped.
Also, drop the now-unneeded H264Context.cur_chroma_format_idc.
Currently, the DPB is initialized in alloc_tables() and freed in
free_tables(), but those functions manage frame-size-dependent
variables, so DPB management does not logically belong there.
Since we want the init/uninit to happen exactly once per context
lifetime, init_context()/free_context() are the proper place for this
code.