Imported Upstream version 2.6.1

Andreas Cadhalpun 2015-03-16 23:51:01 +01:00
parent 59f25c2fc8
commit 070a7b81ab
41 changed files with 263 additions and 131 deletions

View File

@ -1,6 +1,31 @@
Entries are sorted chronologically from oldest to youngest within each release,
releases are sorted from youngest to oldest.
version 2.6.1:
- avformat/mov: Disallow ".." in dref unless use_absolute_path is set
- avfilter/palettegen: make sure at least one frame was sent to the filter
- avformat/mov: Check for string truncation in mov_open_dref()
- ac3_fixed: fix out-of-bound read
- mips/asmdefs: use _ABI64 as defined by gcc
- hevc: delay ff_thread_finish_setup for hwaccel
- avcodec/012v: Check dimensions more completely
- asfenc: fix leaking asf->index_ptr on error
- roqvideoenc: set enc->avctx in roq_encode_init
- avcodec/options_table: remove extradata_size from the AVOptions table
- ffmdec: limit the backward seek to the last resync position
- Add dependencies to configure file for vf_fftfilt
- ffmdec: make sure the time base is valid
- ffmdec: fix infinite loop at EOF
- ffmdec: initialize f_cprv, f_stvi and f_stau
- arm: Suppress tags about used cpu arch and extensions
- mxfdec: Fix the error handling for when strftime fails
- avcodec/opusdec: Fix delayed sample value
- avcodec/opusdec: Clear out pointers per packet
- avcodec/utils: Align YUV411 by as much as the other YUV variants
- lavc/hevcdsp: Fix compilation for arm with --disable-neon.
- vp9: fix segmentation map retention with threading enabled.
- Revert "avutil/opencl: is_compiled flag not being cleared in av_opencl_uninit"
version 2.6:
- nvenc encoder
- 10bit spp filter
@ -35,7 +60,6 @@ version 2.6:
- Fix stsd atom corruption in DNxHD QuickTimes
- Canopus HQX decoder
- RTP depacketization of T.140 text (RFC 4103)
- VP9 RTP payload format (draft 0) experimental depacketizer
- Port MIPS optimizations to 64-bit

View File

@ -1 +1 @@
2.6
2.6.1

View File

@ -21,32 +21,31 @@
10-bit support in spp, but maybe it's more important to mention the addition
of colorlevels (yet another color handling filter), tblend (allowing you
to for example run a diff between successive frames of a video stream), or
eventually the dcshift audio filter.
the dcshift audio filter.
There is also two other important filters landing in libavfilter: palettegen
and paletteuse, submitted by the Stupeflix company. These filters will be
very useful in case you are looking for creating high quality GIF, a format
that still bravely fights annihilation in 2015.
There are also two other important filters landing in libavfilter: palettegen
and paletteuse. Both submitted by the Stupeflix company. These filters will
be very useful in case you are looking for creating high quality GIFs, a
format that still bravely fights annihilation in 2015.
There are many other features, but let's follow-up on one big cleanup
There are many other new features, but let's follow-up on one big cleanup
achievement: the libmpcodecs (MPlayer filters) wrapper is finally dead. The
last remaining filters (softpulldown/repeatfields, eq*, and various
postprocessing filters) were ported by Arwa Arif (OPW student) and Paul B
Mahol.
Concerning API changes, not much things to mention. Though, the introduction
of devices inputs and outputs listing by Lukasz Marek is a notable addition
(try ffmpeg -sources or ffmpeg -sinks for an example of the usage). As
usual, see doc/APIchanges for more information.
Concerning API changes, there are not many things to mention. Though, the
introduction of device inputs and outputs listing by Lukasz Marek is a
notable addition (try ffmpeg -sources or ffmpeg -sinks for an example of
the usage). As usual, see doc/APIchanges for more information.
Now let's talk about optimizations. Ronald S. Bultje made the VP9 decoder
usable on x86 32-bit systems and pre-ssse3 CPUs like Phenom (even dual core
Athlons can play 1080p 30fps VP9 content now), so we now secretly hope for
Google and Mozilla to use ffvp9 instead of libvpx.
But VP9 is not the center of attention anymore, and HEVC/H.265 is also
getting many improvements, which includes optimizations, both in C and x86
ASM, mainly from James Almer, Christophe Gisquet and Pierre-Edouard Lepere.
Google and Mozilla to use ffvp9 instead of libvpx. But VP9 is not the
center of attention anymore, and HEVC/H.265 is also getting many
improvements, which include C and x86 ASM optimizations, mainly from James
Almer, Christophe Gisquet and Pierre-Edouard Lepere.
Even though we had many x86 contributions, it is not the only architecture
getting some love, with Seppo Tomperi adding ARM NEON optimizations to the
@ -61,6 +60,6 @@
complete Git history on http://source.ffmpeg.org.
We hope you will like this release as much as we enjoyed working on it, and
as usual, if you have any question about it, or any FFmpeg related topic,
as usual, if you have any questions about it, or any FFmpeg related topic,
feel free to join us on the #ffmpeg IRC channel (on irc.freenode.net) or ask
on the mailing-lists.

View File

@ -1 +1 @@
2.6
2.6.1

configure vendored
View File

@ -1769,6 +1769,7 @@ SYSTEM_FUNCS="
TOOLCHAIN_FEATURES="
as_dn_directive
as_func
as_object_arch
asm_mod_q
attribute_may_alias
attribute_packed
@ -2595,6 +2596,8 @@ deshake_filter_select="pixelutils"
drawtext_filter_deps="libfreetype"
ebur128_filter_deps="gpl"
eq_filter_deps="gpl"
fftfilt_filter_deps="avcodec"
fftfilt_filter_select="rdft"
flite_filter_deps="libflite"
frei0r_filter_deps="frei0r dlopen"
frei0r_src_filter_deps="frei0r dlopen"
@ -4560,6 +4563,11 @@ EOF
check_as <<EOF && enable as_dn_directive
ra .dn d0.i16
.unreq ra
EOF
# llvm's integrated assembler supports .object_arch from llvm 3.5
[ "$objformat" = elf ] && check_as <<EOF && enable as_object_arch
.object_arch armv4
EOF
[ $target_os != win32 ] && enabled_all armv6t2 shared !pic && enable_weak_pic
@ -5451,6 +5459,7 @@ enabled asyncts_filter && prepend avfilter_deps "avresample"
enabled atempo_filter && prepend avfilter_deps "avcodec"
enabled ebur128_filter && enabled swresample && prepend avfilter_deps "swresample"
enabled elbg_filter && prepend avfilter_deps "avcodec"
enabled fftfilt_filter && prepend avfilter_deps "avcodec"
enabled mcdeint_filter && prepend avfilter_deps "avcodec"
enabled movie_filter && prepend avfilter_deps "avformat avcodec"
enabled pan_filter && prepend avfilter_deps "swresample"

View File

@ -31,7 +31,7 @@ PROJECT_NAME = FFmpeg
# This could be handy for archiving the generated documentation or
# if some version control system is used.
PROJECT_NUMBER = 2.6
PROJECT_NUMBER = 2.6.1
# With the PROJECT_LOGO tag one can specify a logo or icon that is included
# in the documentation. The maximum height of the logo should not exceed 55

View File

@ -349,7 +349,7 @@ FFmpeg has a @url{http://ffmpeg.org/ffmpeg-protocols.html#concat,
@code{concat}} protocol designed specifically for that, with examples in the
documentation.
A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow to concatenate
A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow one to concatenate
video by merely concatenating the files containing them.
Hence you may concatenate your multimedia files by first transcoding them to

View File

@ -72,7 +72,7 @@ the HTTP server (configured through the @option{HTTPPort} option), and
configuration file.
Each feed is associated to a file which is stored on disk. This stored
file is used to allow to send pre-recorded data to a player as fast as
file is used to send pre-recorded data to a player as fast as
possible when new content is added in real-time to the stream.
A "live-stream" or "stream" is a resource published by

View File

@ -3486,7 +3486,7 @@ Set number overlapping pixels for each block. Since the filter can be slow, you
may want to reduce this value, at the cost of a less effective filter and the
risk of various artefacts.
If the overlapping value doesn't allow to process the whole input width or
If the overlapping value doesn't permit processing the whole input width or
height, a warning will be displayed and according borders won't be denoised.
Default value is @var{blocksize}-1, which is the best possible setting.

View File

@ -23,7 +23,7 @@ Reduce buffering.
@item probesize @var{integer} (@emph{input})
Set probing size in bytes, i.e. the size of the data to analyze to get
stream information. A higher value will allow to detect more
stream information. A higher value will enable detecting more
information in case it is dispersed into the stream, but will increase
latency. Must be an integer not lesser than 32. It is 5000000 by default.
@ -67,7 +67,7 @@ Default is 0.
@item analyzeduration @var{integer} (@emph{input})
Specify how many microseconds are analyzed to probe the input. A
higher value will allow to detect more accurate information, but will
higher value will enable detecting more accurate information, but will
increase latency. It defaults to 5,000,000 microseconds = 5 seconds.
@item cryptokey @var{hexadecimal string} (@emph{input})
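Both probesize and analyzeduration are ordinary AVFormatContext options, so besides the command line they can be set programmatically through the options dictionary passed to avformat_open_input(). A minimal sketch (the helper name open_with_deep_probe is hypothetical):

#include "libavformat/avformat.h"
#include "libavutil/dict.h"

/* Hypothetical helper: open an input with larger probing limits than the
 * 5000000 defaults, trading startup latency for more complete stream info. */
static int open_with_deep_probe(AVFormatContext **fmt_ctx, const char *url)
{
    AVDictionary *opts = NULL;
    int ret;
    av_dict_set(&opts, "probesize",       "10000000", 0); /* bytes */
    av_dict_set(&opts, "analyzeduration", "10000000", 0); /* microseconds */
    ret = avformat_open_input(fmt_ctx, url, NULL, &opts);
    av_dict_free(&opts);
    return ret;
}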

View File

@ -1,7 +1,7 @@
@chapter Input Devices
@c man begin INPUT DEVICES
Input devices are configured elements in FFmpeg which allow to access
Input devices are configured elements in FFmpeg which enable accessing
the data coming from a multimedia device attached to your system.
When you configure your FFmpeg build, all the supported input devices

View File

@ -844,7 +844,7 @@ Return 1.0 if @var{x} is +/-INFINITY, 0.0 otherwise.
Return 1.0 if @var{x} is NAN, 0.0 otherwise.
@item ld(var)
Allow to load the value of the internal variable with number
Load the value of the internal variable with number
@var{var}, which was previously stored with st(@var{var}, @var{expr}).
The function returns the loaded value.
@ -912,7 +912,7 @@ Compute the square root of @var{expr}. This is equivalent to
Compute expression @code{1/(1 + exp(4*x))}.
@item st(var, expr)
Allow to store the value of the expression @var{expr} in an internal
Store the value of the expression @var{expr} in an internal
variable. @var{var} specifies the number of the variable where to
store the value, and it is a value ranging from 0 to 9. The function
returns the value stored in the internal variable.
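For reference, st() and ld() can be exercised through libavutil's public expression evaluator; a minimal standalone sketch (illustration only, not part of this commit):

#include <stdio.h>
#include "libavutil/eval.h"

int main(void)
{
    double result;
    /* store 3 in internal variable 0, then load it back and double it */
    int ret = av_expr_parse_and_eval(&result, "st(0, 3); ld(0)*2",
                                     NULL, NULL, NULL, NULL, NULL, NULL,
                                     NULL, 0, NULL);
    if (ret >= 0)
        printf("%f\n", result); /* prints 6.000000 */
    return ret < 0;
}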

View File

@ -38,15 +38,15 @@ static av_cold int zero12v_decode_init(AVCodecContext *avctx)
static int zero12v_decode_frame(AVCodecContext *avctx, void *data,
int *got_frame, AVPacket *avpkt)
{
int line = 0, ret;
int line, ret;
const int width = avctx->width;
AVFrame *pic = data;
uint16_t *y, *u, *v;
const uint8_t *line_end, *src = avpkt->data;
int stride = avctx->width * 8 / 3;
if (width == 1) {
av_log(avctx, AV_LOG_ERROR, "Width 1 not supported.\n");
if (width <= 1 || avctx->height <= 0) {
av_log(avctx, AV_LOG_ERROR, "Dimensions %dx%d not supported.\n", width, avctx->height);
return AVERROR_INVALIDDATA;
}
@ -67,45 +67,45 @@ static int zero12v_decode_frame(AVCodecContext *avctx, void *data,
pic->pict_type = AV_PICTURE_TYPE_I;
pic->key_frame = 1;
y = (uint16_t *)pic->data[0];
u = (uint16_t *)pic->data[1];
v = (uint16_t *)pic->data[2];
line_end = avpkt->data + stride;
for (line = 0; line < avctx->height; line++) {
uint16_t y_temp[6] = {0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000};
uint16_t u_temp[3] = {0x8000, 0x8000, 0x8000};
uint16_t v_temp[3] = {0x8000, 0x8000, 0x8000};
int x;
y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
while (line++ < avctx->height) {
while (1) {
uint32_t t = AV_RL32(src);
for (x = 0; x < width; x += 6) {
uint32_t t;
if (width - x < 6 || line_end - src < 16) {
y = y_temp;
u = u_temp;
v = v_temp;
}
if (line_end - src < 4)
break;
t = AV_RL32(src);
src += 4;
*u++ = t << 6 & 0xFFC0;
*y++ = t >> 4 & 0xFFC0;
*v++ = t >> 14 & 0xFFC0;
if (src >= line_end - 1) {
*y = 0x80;
src++;
line_end += stride;
y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
if (line_end - src < 4)
break;
}
t = AV_RL32(src);
src += 4;
*y++ = t << 6 & 0xFFC0;
*u++ = t >> 4 & 0xFFC0;
*y++ = t >> 14 & 0xFFC0;
if (src >= line_end - 2) {
if (!(width & 1)) {
*y = 0x80;
src += 2;
}
line_end += stride;
y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
if (line_end - src < 4)
break;
}
t = AV_RL32(src);
src += 4;
@ -113,15 +113,8 @@ static int zero12v_decode_frame(AVCodecContext *avctx, void *data,
*y++ = t >> 4 & 0xFFC0;
*u++ = t >> 14 & 0xFFC0;
if (src >= line_end - 1) {
*y = 0x80;
src++;
line_end += stride;
y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
if (line_end - src < 4)
break;
}
t = AV_RL32(src);
src += 4;
@ -129,18 +122,21 @@ static int zero12v_decode_frame(AVCodecContext *avctx, void *data,
*v++ = t >> 4 & 0xFFC0;
*y++ = t >> 14 & 0xFFC0;
if (src >= line_end - 2) {
if (width & 1) {
*y = 0x80;
src += 2;
}
line_end += stride;
y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
if (width - x < 6)
break;
}
}
if (x < width) {
y = x + (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
u = x/2 + (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
v = x/2 + (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
memcpy(y, y_temp, sizeof(*y) * (width - x));
memcpy(u, u_temp, sizeof(*u) * (width - x + 1) / 2);
memcpy(v, v_temp, sizeof(*v) * (width - x + 1) / 2);
}
line_end += stride;
src = line_end - stride;
}
*got_frame = 1;

View File

@ -217,7 +217,7 @@ OBJS-$(CONFIG_DVVIDEO_DECODER) += dvdec.o dv.o dvdata.o
OBJS-$(CONFIG_DVVIDEO_ENCODER) += dvenc.o dv.o dvdata.o
OBJS-$(CONFIG_DXA_DECODER) += dxa.o
OBJS-$(CONFIG_DXTORY_DECODER) += dxtory.o
OBJS-$(CONFIG_EAC3_DECODER) += eac3dec.o eac3_data.o
OBJS-$(CONFIG_EAC3_DECODER) += eac3_data.o
OBJS-$(CONFIG_EAC3_ENCODER) += eac3enc.o eac3_data.o
OBJS-$(CONFIG_EACMV_DECODER) += eacmv.o
OBJS-$(CONFIG_EAMAD_DECODER) += eamad.o eaidct.o mpeg12.o \

View File

@ -872,7 +872,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
start_subband += start_subband - 7;
end_subband = get_bits(gbc, 3) + 5;
#if USE_FIXED
s->spx_dst_end_freq = end_freq_inv_tab[end_subband];
s->spx_dst_end_freq = end_freq_inv_tab[end_subband-5];
#endif
if (end_subband > 7)
end_subband += end_subband - 7;
@ -939,7 +939,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
nblend = 0;
sblend = 0x800000;
} else if (nratio > 0x7fffff) {
nblend = 0x800000;
nblend = 14529495; // sqrt(3) in FP.23
sblend = 0;
} else {
nblend = fixed_sqrt(nratio, 23);
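Two details of this fixed-point hunk can be checked by hand. end_subband is read as get_bits(gbc, 3) + 5, so it ranges from 5 to 12, and indexing end_freq_inv_tab with it directly reads past the end of the table (presumably 8 entries); subtracting 5 maps it back to 0..7. The new nblend constant is sqrt(3) in Q23; a quick standalone check (illustration only, not part of the patch):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* sqrt(3) scaled to 23 fractional bits */
    printf("%ld\n", lrint(sqrt(3.0) * (1 << 23))); /* prints 14529495 */
    return 0;
}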

View File

@ -243,19 +243,19 @@ typedef struct AC3DecodeContext {
* Parse the E-AC-3 frame header.
* This parses both the bit stream info and audio frame header.
*/
int ff_eac3_parse_header(AC3DecodeContext *s);
static int ff_eac3_parse_header(AC3DecodeContext *s);
/**
* Decode mantissas in a single channel for the entire frame.
* This is used when AHT mode is enabled.
*/
void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch);
static void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch);
/**
* Apply spectral extension to each channel by copying lower frequency
* coefficients to higher frequency bins and applying side information to
* approximate the original high frequency signal.
*/
void ff_eac3_apply_spectral_extension(AC3DecodeContext *s);
static void ff_eac3_apply_spectral_extension(AC3DecodeContext *s);
#endif /* AVCODEC_AC3DEC_H */

View File

@ -164,6 +164,7 @@ static void ac3_downmix_c_fixed16(int16_t **samples, int16_t (*matrix)[2],
}
}
#include "eac3dec.c"
#include "ac3dec.c"
static const AVOption options[] = {

View File

@ -28,6 +28,7 @@
* Upmix delay samples from stereo to original channel layout.
*/
#include "ac3dec.h"
#include "eac3dec.c"
#include "ac3dec.c"
static const AVOption options[] = {

View File

@ -37,6 +37,7 @@ OBJS-$(CONFIG_DCA_DECODER) += arm/dcadsp_init_arm.o
OBJS-$(CONFIG_FLAC_DECODER) += arm/flacdsp_init_arm.o \
arm/flacdsp_arm.o
OBJS-$(CONFIG_FLAC_ENCODER) += arm/flacdsp_init_arm.o
OBJS-$(CONFIG_HEVC_DECODER) += arm/hevcdsp_init_arm.o
OBJS-$(CONFIG_MLP_DECODER) += arm/mlpdsp_init_arm.o
OBJS-$(CONFIG_VC1_DECODER) += arm/vc1dsp_init_arm.o
OBJS-$(CONFIG_VORBIS_DECODER) += arm/vorbisdsp_init_arm.o

View File

@ -0,0 +1,26 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_ARM_HEVCDSP_ARM_H
#define AVCODEC_ARM_HEVCDSP_ARM_H
#include "libavcodec/hevcdsp.h"
void ff_hevcdsp_init_neon(HEVCDSPContext *c, const int bit_depth);
#endif /* AVCODEC_ARM_HEVCDSP_ARM_H */

View File

@ -0,0 +1,32 @@
/*
* Copyright (c) 2014 Seppo Tomperi <seppo.tomperi@vtt.fi>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/attributes.h"
#include "libavutil/arm/cpu.h"
#include "libavcodec/hevcdsp.h"
#include "hevcdsp_arm.h"
av_cold void ff_hevcdsp_init_arm(HEVCDSPContext *c, const int bit_depth)
{
int cpu_flags = av_get_cpu_flags();
if (have_neon(cpu_flags))
ff_hevcdsp_init_neon(c, bit_depth);
}

View File

@ -21,6 +21,7 @@
#include "libavutil/attributes.h"
#include "libavutil/arm/cpu.h"
#include "libavcodec/hevcdsp.h"
#include "hevcdsp_arm.h"
void ff_hevc_v_loop_filter_luma_neon(uint8_t *_pix, ptrdiff_t _stride, int _beta, int *_tc, uint8_t *_no_p, uint8_t *_no_q);
void ff_hevc_h_loop_filter_luma_neon(uint8_t *_pix, ptrdiff_t _stride, int _beta, int *_tc, uint8_t *_no_p, uint8_t *_no_q);
@ -141,9 +142,8 @@ void ff_hevc_put_qpel_bi_neon_wrapper(uint8_t *dst, ptrdiff_t dststride, uint8_t
put_hevc_qpel_uw_neon[my][mx](dst, dststride, src, srcstride, width, height, src2, MAX_PB_SIZE);
}
static av_cold void hevcdsp_init_neon(HEVCDSPContext *c, const int bit_depth)
av_cold void ff_hevcdsp_init_neon(HEVCDSPContext *c, const int bit_depth)
{
#if HAVE_NEON
if (bit_depth == 8) {
int x;
c->hevc_v_loop_filter_luma = ff_hevc_v_loop_filter_luma_neon;
@ -221,13 +221,4 @@ static av_cold void hevcdsp_init_neon(HEVCDSPContext *c, const int bit_depth)
c->put_hevc_qpel_uni[8][0][0] = ff_hevc_put_qpel_uw_pixels_w48_neon_8;
c->put_hevc_qpel_uni[9][0][0] = ff_hevc_put_qpel_uw_pixels_w64_neon_8;
}
#endif // HAVE_NEON
}
void ff_hevcdsp_init_arm(HEVCDSPContext *c, const int bit_depth)
{
int cpu_flags = av_get_cpu_flags();
if (have_neon(cpu_flags))
hevcdsp_init_neon(c, bit_depth);
}

View File

@ -63,7 +63,7 @@ typedef enum {
#define EAC3_SR_CODE_REDUCED 3
void ff_eac3_apply_spectral_extension(AC3DecodeContext *s)
static void ff_eac3_apply_spectral_extension(AC3DecodeContext *s)
{
int bin, bnd, ch, i;
uint8_t wrapflag[SPX_MAX_BANDS]={1,0,}, num_copy_sections, copy_sizes[SPX_MAX_BANDS];
@ -101,7 +101,7 @@ void ff_eac3_apply_spectral_extension(AC3DecodeContext *s)
for (i = 0; i < num_copy_sections; i++) {
memcpy(&s->transform_coeffs[ch][bin],
&s->transform_coeffs[ch][s->spx_dst_start_freq],
copy_sizes[i]*sizeof(float));
copy_sizes[i]*sizeof(INTFLOAT));
bin += copy_sizes[i];
}
@ -124,7 +124,7 @@ void ff_eac3_apply_spectral_extension(AC3DecodeContext *s)
bin = s->spx_src_start_freq - 2;
for (bnd = 0; bnd < s->num_spx_bands; bnd++) {
if (wrapflag[bnd]) {
float *coeffs = &s->transform_coeffs[ch][bin];
INTFLOAT *coeffs = &s->transform_coeffs[ch][bin];
coeffs[0] *= atten_tab[0];
coeffs[1] *= atten_tab[1];
coeffs[2] *= atten_tab[2];
@ -142,6 +142,11 @@ void ff_eac3_apply_spectral_extension(AC3DecodeContext *s)
for (bnd = 0; bnd < s->num_spx_bands; bnd++) {
float nscale = s->spx_noise_blend[ch][bnd] * rms_energy[bnd] * (1.0f / INT32_MIN);
float sscale = s->spx_signal_blend[ch][bnd];
#if USE_FIXED
// spx_noise_blend and spx_signal_blend are both FP.23
nscale *= 1.0 / (1<<23);
sscale *= 1.0 / (1<<23);
#endif
for (i = 0; i < s->spx_band_sizes[bnd]; i++) {
float noise = nscale * (int32_t)av_lfg_get(&s->dith_state);
s->transform_coeffs[ch][bin] *= sscale;
@ -195,7 +200,7 @@ static void idct6(int pre_mant[6])
pre_mant[5] = even0 - odd0;
}
void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch)
static void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch)
{
int bin, blk, gs;
int end_bap, gaq_mode;
@ -288,7 +293,7 @@ void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch)
}
}
int ff_eac3_parse_header(AC3DecodeContext *s)
static int ff_eac3_parse_header(AC3DecodeContext *s)
{
int i, blk, ch;
int ac3_exponent_strategy, parse_aht_info, parse_spx_atten_data;

View File

@ -2600,7 +2600,8 @@ static int hevc_frame_start(HEVCContext *s)
if (ret < 0)
goto fail;
ff_thread_finish_setup(s->avctx);
if (!s->avctx->hwaccel)
ff_thread_finish_setup(s->avctx);
return 0;

View File

@ -103,7 +103,6 @@ static const AVOption avcodec_options[] = {
{"hex", "hex motion estimation", 0, AV_OPT_TYPE_CONST, {.i64 = ME_HEX }, INT_MIN, INT_MAX, V|E, "me_method" },
{"umh", "umh motion estimation", 0, AV_OPT_TYPE_CONST, {.i64 = ME_UMH }, INT_MIN, INT_MAX, V|E, "me_method" },
{"iter", "iter motion estimation", 0, AV_OPT_TYPE_CONST, {.i64 = ME_ITER }, INT_MIN, INT_MAX, V|E, "me_method" },
{"extradata_size", NULL, OFFSET(extradata_size), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX},
{"time_base", NULL, OFFSET(time_base), AV_OPT_TYPE_RATIONAL, {.dbl = 0}, INT_MIN, INT_MAX},
{"g", "set the group of picture (GOP) size", OFFSET(gop_size), AV_OPT_TYPE_INT, {.i64 = 12 }, INT_MIN, INT_MAX, V|E},
{"ar", "set audio sampling rate (in Hz)", OFFSET(sample_rate), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, 0, INT_MAX, A|D|E},

View File

@ -449,6 +449,14 @@ static int opus_decode_packet(AVCodecContext *avctx, void *data,
int coded_samples = 0;
int decoded_samples = 0;
int i, ret;
int delayed_samples = 0;
for (i = 0; i < c->nb_streams; i++) {
OpusStreamContext *s = &c->streams[i];
s->out[0] =
s->out[1] = NULL;
delayed_samples = FFMAX(delayed_samples, s->delayed_samples);
}
/* decode the header of the first sub-packet to find out the sample count */
if (buf) {
@ -462,7 +470,7 @@ static int opus_decode_packet(AVCodecContext *avctx, void *data,
c->streams[0].silk_samplerate = get_silk_samplerate(pkt->config);
}
frame->nb_samples = coded_samples + c->streams[0].delayed_samples;
frame->nb_samples = coded_samples + delayed_samples;
/* no input or buffered data => nothing to do */
if (!frame->nb_samples) {

View File

@ -999,6 +999,8 @@ static av_cold int roq_encode_init(AVCodecContext *avctx)
av_lfg_init(&enc->randctx, 1);
enc->avctx = avctx;
enc->framesSinceKeyframe = 0;
if ((avctx->width & 0xf) || (avctx->height & 0xf)) {
av_log(avctx, AV_LOG_ERROR, "Dimensions must be divisible by 16\n");

View File

@ -839,13 +839,6 @@ static int tiff_decode_tag(TiffContext *s, AVFrame *frame)
s->bpp = -1;
}
}
if (s->bpp > 64U) {
av_log(s->avctx, AV_LOG_ERROR,
"This format is not supported (bpp=%d, %d components)\n",
s->bpp, count);
s->bpp = 0;
return AVERROR_INVALIDDATA;
}
break;
case TIFF_SAMPLES_PER_PIXEL:
if (count != 1) {
@ -1158,6 +1151,13 @@ static int tiff_decode_tag(TiffContext *s, AVFrame *frame)
}
}
end:
if (s->bpp > 64U) {
av_log(s->avctx, AV_LOG_ERROR,
"This format is not supported (bpp=%d, %d components)\n",
s->bpp, count);
s->bpp = 0;
return AVERROR_INVALIDDATA;
}
bytestream2_seek(&s->gb, start, SEEK_SET);
return 0;
}

View File

@ -374,7 +374,7 @@ void avcodec_align_dimensions2(AVCodecContext *s, int *width, int *height,
case AV_PIX_FMT_YUVJ411P:
case AV_PIX_FMT_UYYVYY411:
w_align = 32;
h_align = 8;
h_align = 16 * 2;
break;
case AV_PIX_FMT_YUV410P:
if (s->codec_id == AV_CODEC_ID_SVQ1) {

View File

@ -279,7 +279,8 @@ static int vp9_alloc_frame(AVCodecContext *ctx, VP9Frame *f)
// retain segmentation map if it doesn't update
if (s->segmentation.enabled && !s->segmentation.update_map &&
!s->intraonly && !s->keyframe && !s->errorres) {
!s->intraonly && !s->keyframe && !s->errorres &&
ctx->active_thread_type != FF_THREAD_FRAME) {
memcpy(f->segmentation_map, s->frames[LAST_FRAME].segmentation_map, sz);
}
@ -1351,9 +1352,18 @@ static void decode_mode(AVCodecContext *ctx)
if (!s->last_uses_2pass)
ff_thread_await_progress(&s->frames[LAST_FRAME].tf, row >> 3, 0);
for (y = 0; y < h4; y++)
for (y = 0; y < h4; y++) {
int idx_base = (y + row) * 8 * s->sb_cols + col;
for (x = 0; x < w4; x++)
pred = FFMIN(pred, refsegmap[(y + row) * 8 * s->sb_cols + x + col]);
pred = FFMIN(pred, refsegmap[idx_base + x]);
if (!s->segmentation.update_map && ctx->active_thread_type == FF_THREAD_FRAME) {
// FIXME maybe retain reference to previous frame as
// segmap reference instead of copying the whole map
// into a new buffer
memcpy(&s->frames[CUR_FRAME].segmentation_map[idx_base],
&refsegmap[idx_base], w4);
}
}
av_assert1(pred < 8);
b->seg_id = pred;
} else {

View File

@ -124,7 +124,7 @@ static int query_formats(AVFilterContext *ctx)
const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
if (!(desc->flags & (AV_PIX_FMT_FLAG_HWACCEL | AV_PIX_FMT_FLAG_BITSTREAM | AV_PIX_FMT_FLAG_PAL)) &&
(desc->flags & AV_PIX_FMT_FLAG_PLANAR || desc->nb_components == 1) &&
(!(desc->flags & AV_PIX_FMT_FLAG_BE) == !HAVE_BIGENDIAN) || desc->comp[0].depth_minus1 == 7)
(!(desc->flags & AV_PIX_FMT_FLAG_BE) == !HAVE_BIGENDIAN || desc->comp[0].depth_minus1 == 7))
ff_add_format(&formats, fmt);
}
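The moved closing parenthesis matters because && binds tighter than ||: in the old condition the depth_minus1 == 7 test escaped the conjunction, so any format with 8-bit components passed regardless of the other checks. A minimal standalone illustration of the grouping (hypothetical values, not FFmpeg code):

#include <stdio.h>

int main(void)
{
    int a = 0, b = 1, c = 1;
    printf("%d\n", a && b || c);   /* parsed as (a && b) || c -> 1 */
    printf("%d\n", a && (b || c)); /* the intended grouping   -> 0 */
    return 0;
}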

View File

@ -504,7 +504,7 @@ static int request_frame(AVFilterLink *outlink)
int r;
r = ff_request_frame(inlink);
if (r == AVERROR_EOF && !s->palette_pushed) {
if (r == AVERROR_EOF && !s->palette_pushed && s->nb_refs) {
r = ff_filter_frame(outlink, get_palette_frame(ctx));
s->palette_pushed = 1;
return r;

View File

@ -660,6 +660,7 @@ static int asf_write_header(AVFormatContext *s)
* It is needed to use asf as a streamable format. */
if (asf_write_header1(s, 0, DATA_HEADER_SIZE) < 0) {
//av_free(asf);
av_freep(&asf->index_ptr);
return -1;
}

View File

@ -36,6 +36,7 @@
#include "riff.h"
#include "libavcodec/bytestream.h"
#include "libavcodec/exif.h"
#include "libavformat/isom.h"
typedef struct AVIStream {
int64_t frame_offset; /* current frame (video) or byte (audio) counter
@ -773,6 +774,12 @@ static int avi_read_header(AVFormatContext *s)
st->codec->codec_tag = tag1;
st->codec->codec_id = ff_codec_get_id(ff_codec_bmp_tags,
tag1);
if (!st->codec->codec_id) {
st->codec->codec_id = ff_codec_get_id(ff_codec_movvideo_tags,
tag1);
if (st->codec->codec_id)
av_log(s, AV_LOG_WARNING, "mov tag found in avi\n");
}
/* This is needed to get the pict type which is necessary
* for generating correct pts. */
st->need_parsing = AVSTREAM_PARSE_HEADERS;

View File

@ -82,6 +82,7 @@ static int ffm_read_data(AVFormatContext *s,
FFMContext *ffm = s->priv_data;
AVIOContext *pb = s->pb;
int len, fill_size, size1, frame_offset, id;
int64_t last_pos = -1;
size1 = size;
while (size > 0) {
@ -101,9 +102,11 @@ static int ffm_read_data(AVFormatContext *s,
avio_seek(pb, tell, SEEK_SET);
}
id = avio_rb16(pb); /* PACKET_ID */
if (id != PACKET_ID)
if (id != PACKET_ID) {
if (ffm_resync(s, id) < 0)
return -1;
last_pos = avio_tell(pb);
}
fill_size = avio_rb16(pb);
ffm->dts = avio_rb64(pb);
frame_offset = avio_rb16(pb);
@ -117,7 +120,9 @@ static int ffm_read_data(AVFormatContext *s,
if (!frame_offset) {
/* This packet has no frame headers in it */
if (avio_tell(pb) >= ffm->packet_size * 3LL) {
avio_seek(pb, -ffm->packet_size * 2LL, SEEK_CUR);
int64_t seekback = FFMIN(ffm->packet_size * 2LL, avio_tell(pb) - last_pos);
seekback = FFMAX(seekback, 0);
avio_seek(pb, -seekback, SEEK_CUR);
goto retry_read;
}
/* This is bad, we cannot find a valid frame header */
@ -261,7 +266,7 @@ static int ffm2_read_header(AVFormatContext *s)
AVIOContext *pb = s->pb;
AVCodecContext *codec;
int ret;
int f_main = 0, f_cprv, f_stvi, f_stau;
int f_main = 0, f_cprv = -1, f_stvi = -1, f_stau = -1;
AVCodec *enc;
char *buffer;
@ -331,6 +336,12 @@ static int ffm2_read_header(AVFormatContext *s)
}
codec->time_base.num = avio_rb32(pb);
codec->time_base.den = avio_rb32(pb);
if (codec->time_base.num <= 0 || codec->time_base.den <= 0) {
av_log(s, AV_LOG_ERROR, "Invalid time base %d/%d\n",
codec->time_base.num, codec->time_base.den);
ret = AVERROR_INVALIDDATA;
goto fail;
}
codec->width = avio_rb16(pb);
codec->height = avio_rb16(pb);
codec->gop_size = avio_rb16(pb);
@ -434,7 +445,7 @@ static int ffm2_read_header(AVFormatContext *s)
}
/* get until end of block reached */
while ((avio_tell(pb) % ffm->packet_size) != 0)
while ((avio_tell(pb) % ffm->packet_size) != 0 && !pb->eof_reached)
avio_r8(pb);
/* init packet demux */
@ -503,6 +514,11 @@ static int ffm_read_header(AVFormatContext *s)
case AVMEDIA_TYPE_VIDEO:
codec->time_base.num = avio_rb32(pb);
codec->time_base.den = avio_rb32(pb);
if (codec->time_base.num <= 0 || codec->time_base.den <= 0) {
av_log(s, AV_LOG_ERROR, "Invalid time base %d/%d\n",
codec->time_base.num, codec->time_base.den);
goto fail;
}
codec->width = avio_rb16(pb);
codec->height = avio_rb16(pb);
codec->gop_size = avio_rb16(pb);
@ -561,7 +577,7 @@ static int ffm_read_header(AVFormatContext *s)
}
/* get until end of block reached */
while ((avio_tell(pb) % ffm->packet_size) != 0)
while ((avio_tell(pb) % ffm->packet_size) != 0 && !pb->eof_reached)
avio_r8(pb);
/* init packet demux */

View File

@ -2600,7 +2600,7 @@ static int mov_open_dref(AVIOContext **pb, const char *src, MOVDref *ref,
/* try relative path, we do not try the absolute because it can leak information about our
system to an attacker */
if (ref->nlvl_to > 0 && ref->nlvl_from > 0) {
char filename[1024];
char filename[1025];
const char *src_path;
int i, l;
@ -2626,10 +2626,15 @@ static int mov_open_dref(AVIOContext **pb, const char *src, MOVDref *ref,
filename[src_path - src] = 0;
for (i = 1; i < ref->nlvl_from; i++)
av_strlcat(filename, "../", 1024);
av_strlcat(filename, "../", sizeof(filename));
av_strlcat(filename, ref->path + l + 1, 1024);
av_strlcat(filename, ref->path + l + 1, sizeof(filename));
if (!use_absolute_path)
if(strstr(ref->path + l + 1, "..") || ref->nlvl_from > 1)
return AVERROR(ENOENT);
if (strlen(filename) + 1 == sizeof(filename))
return AVERROR(ENOENT);
if (!avio_open2(pb, filename, AVIO_FLAG_READ, int_cb, NULL))
return 0;
}
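Because filename is now 1025 bytes and both av_strlcat() calls are bounded by sizeof(filename), a completely filled buffer is treated as a truncated dref path and rejected with ENOENT instead of opening the wrong file. The same condition can also be caught from av_strlcat()'s return value, which is the total length of the string it tried to create; a small sketch (the helper name append_component is hypothetical):

#include "libavutil/avstring.h"

/* Hypothetical helper: append a path component and report truncation.
 * A return value >= dst_size from av_strlcat() means the result did not fit. */
static int append_component(char *dst, size_t dst_size, const char *component)
{
    if (av_strlcat(dst, component, dst_size) >= dst_size)
        return -1;
    return 0;
}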

View File

@ -2041,7 +2041,7 @@ static int mxf_timestamp_to_str(uint64_t timestamp, char **str)
if (!*str)
return AVERROR(ENOMEM);
if (!strftime(*str, 32, "%Y-%m-%d %H:%M:%S", &time))
str[0] = '\0';
(*str)[0] = '\0';
return 0;
}
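The one-character fix is a pointer-level mixup: str has type char **, so the old str[0] = '\0' assigned a null pointer to *str (leaking the buffer just allocated for strftime), while (*str)[0] = '\0' writes into that buffer and leaves an empty string. A minimal standalone sketch of the difference (not the mxfdec code):

#include <stdlib.h>
#include <string.h>

static void make_empty(char **str)
{
    /* correct: write a terminator into the buffer *str points to */
    (*str)[0] = '\0';
    /* the old bug, str[0] = '\0', would instead overwrite the pointer
     * itself with NULL and leak the allocation */
}

int main(void)
{
    char *s = malloc(32);
    if (!s)
        return 1;
    strcpy(s, "2015-03-16 23:51:01");
    make_empty(&s);   /* s is now "" but still a valid allocation */
    free(s);
    return 0;
}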

View File

@ -362,9 +362,6 @@ const AVCodecTag ff_codec_bmp_tags[] = {
{ AV_CODEC_ID_G2M, MKTAG('G', '2', 'M', '4') },
{ AV_CODEC_ID_G2M, MKTAG('G', '2', 'M', '5') },
{ AV_CODEC_ID_FIC, MKTAG('F', 'I', 'C', 'V') },
{ AV_CODEC_ID_PRORES, MKTAG('A', 'P', 'C', 'N') },
{ AV_CODEC_ID_PRORES, MKTAG('A', 'P', 'C', 'H') },
{ AV_CODEC_ID_QTRLE, MKTAG('r', 'l', 'e', ' ') },
{ AV_CODEC_ID_HQX, MKTAG('C', 'H', 'Q', 'X') },
{ AV_CODEC_ID_NONE, 0 }
};

View File

@ -49,11 +49,17 @@
#elif HAVE_ARMV5TE
.arch armv5te
#endif
#if HAVE_AS_OBJECT_ARCH
ELF .object_arch armv4
#endif
#if HAVE_NEON
.fpu neon
ELF .eabi_attribute 10, 0 @ suppress Tag_FP_arch
ELF .eabi_attribute 12, 0 @ suppress Tag_Advanced_SIMD_arch
#elif HAVE_VFP
.fpu vfp
ELF .eabi_attribute 10, 0 @ suppress Tag_FP_arch
#endif
.syntax unified

View File

@ -24,12 +24,10 @@
* assembly (rather than from within .s files).
*/
#ifndef AVCODEC_MIPS_ASMDEFS_H
#define AVCODEC_MIPS_ASMDEFS_H
#ifndef AVUTIL_MIPS_ASMDEFS_H
#define AVUTIL_MIPS_ASMDEFS_H
#include <sgidefs.h>
#if _MIPS_SIM == _ABI64
#if defined(_ABI64) && _MIPS_SIM == _ABI64
# define PTRSIZE " 8 "
# define PTRLOG " 3 "
# define PTR_ADDU "daddu "

View File

@ -611,9 +611,6 @@ void av_opencl_uninit(void)
}
opencl_ctx.context = NULL;
}
for (i = 0; i < opencl_ctx.kernel_code_count; i++) {
opencl_ctx.kernel_code[i].is_compiled = 0;
}
free_device_list(&opencl_ctx.device_list);
end:
if (opencl_ctx.init_count <= 0)