In order to indicate that the frames in a BlockGroup are not keyframes,
one has to add a ReferenceBlock element containing the timestamp of a
referenced Block that has already been written. This timestamp has to be
relative to the timestamp of the Block it is attached to. Yet the
Matroska muxer used the relative timestamp of the preceding Block of the
track, i.e. the timestamp of the preceding Block relative to the
timestamp of the Cluster containing that Block (which need not be the
Cluster containing the current Block). This has been fixed.
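A minimal sketch of the intended computation (illustrative only, not the
actual matroskaenc code; the helper and parameter names are hypothetical):
/* The ReferenceBlock value is the timestamp of the referenced Block
 * relative to the Block it is attached to; both timestamps are absolute
 * track timestamps, not Cluster-relative ones. */
static int64_t reference_block_value(int64_t referenced_block_ts,
                                     int64_t current_block_ts)
{
    return referenced_block_ts - current_block_ts;
}
/* The old code effectively wrote referenced_block_ts minus the timestamp
 * of the Cluster containing the referenced Block instead. */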
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
libavformat/img2.h: Add a new field export_path_metadata to
VideoDemuxData so that the extra metadata is only made available
upon explicit user request, for security reasons.
libavformat/img2dec.c: Modify the image2 demuxer to make available
two special metadata entries called lavf.image2dec.source_path
and lavf.image2dec.source_basename, which represent, respectively,
the complete path to the source image of the current frame and
its basename (i.e. the file name of the current frame).
These can then be used by filters like drawtext and others. The
metadata fields will only be available when explicitly enabled
with the image2 option -export_path_metadata 1.
doc/demuxers.texi: Documented the new metadata fields available
for image2 and how to use them.
doc/filters.texi: Added an example on how to use the new metadata
fields with the drawtext filter, in order to draw the input file path
onto each output frame.
Usage example:
ffmpeg -f image2 -export_path_metadata 1 -pattern_type glob -framerate 18 \
  -i '/path/to/input/files/*.jpg' \
  -filter_complex drawtext="fontsize=40:fontcolor=white:fontfile=FreeSans.ttf:borderw=2:bordercolor=black:text='%{metadata\:lavf.image2dec.source_basename\:NA}':x=5:y=50" \
  output.avi
Fixes #2874.
Signed-off-by: Alexandre Heitor Schmidt <alexandre.schmidt@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
This will likely also fix CID 1452574 and 1452565, false positives
resulting from Coverity thinking that av_dict_set() automatically
frees its key and value parameters (even without the
AV_DICT_DONT_STRDUP_* flags).
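For reference, the actual ownership semantics of av_dict_set() look like
this (minimal sketch; the wrapper function is just for illustration):
#include <libavutil/dict.h>
#include <libavutil/mem.h>
static void dict_ownership_example(void)
{
    AVDictionary *dict = NULL;
    char *val = av_strdup("value");
    /* Without AV_DICT_DONT_STRDUP_* flags, av_dict_set() copies key and
     * value; the caller keeps ownership of val and frees it itself. */
    av_dict_set(&dict, "key", val, 0);
    av_freep(&val);
    /* Only with AV_DICT_DONT_STRDUP_VAL does the dictionary take
     * ownership of the allocated value. */
    av_dict_set(&dict, "key2", av_strdup("other"), AV_DICT_DONT_STRDUP_VAL);
    av_dict_free(&dict);
}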
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The tests for concat use this option, which is scheduled for removal and
no longer does anything. So remove it; otherwise, these tests would fail
at the next major version bump.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Keep all the existing data fields as they are (there's lots and
lots of nontrivial calculation and heuristics based on them in
their current form), but derive the duration as the difference
between the pts of the first packet and the maximum pts+duration
(which is not necessarily that of the last packet); use this duration
in any box where the actual presentation duration is supposed to be stored.
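A minimal sketch of the derivation described above (illustrative only,
not the actual muxer code; the helper name is hypothetical):
#include <libavcodec/avcodec.h>
/* Track the maximum pts + duration over all packets of the track. */
static int64_t update_end_pts(int64_t end_pts, const AVPacket *pkt)
{
    if (pkt->pts != AV_NOPTS_VALUE && pkt->pts + pkt->duration > end_pts)
        end_pts = pkt->pts + pkt->duration;
    return end_pts;
}
/* presentation duration = end_pts - pts of the first packet */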
Fixes: 8420
Signed-off-by: Martin Storsjö <martin@martin.st>
If the size of the input packet is zero, av_grow_packet() used to call
av_new_packet() which would initialize the packet and (in particular)
reset the pos field. This behaviour (which was never documented and
arguably always contradicted the documented behaviour) was changed in
2fe04630. This means that it is unnecessary to save and restore the
packet's position in append_packet_chunked().
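In other words, a pattern like the following (illustrative sketch with a
hypothetical helper) has become redundant:
#include <libavcodec/avcodec.h>
static int grow_preserving_pos(AVPacket *pkt, int grow_by)
{
    int64_t pos = pkt->pos;
    int ret = av_grow_packet(pkt, grow_by);
    if (ret < 0)
        return ret;
    pkt->pos = pos; /* redundant: av_grow_packet() no longer resets pos */
    return 0;
}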
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This fixes ticket #7997 as well as the vsynth*-prores_# FATE-tests
(where * ranges over { 1, 2, 3, _lena } and # over { , _int, _444,
_444_int }).
(Given that prev_dc is in the range -0xC000..0x3FFF, no overflow can
happen upon multiplication with 2.)
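(Concretely: 2 * (-0xC000) = -0x18000 = -98304 and 2 * 0x3FFF = 0x7FFE =
32766; after the usual integer promotions the multiplication is carried
out in at least 32-bit int, so both results are representable.)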
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
After 06ec9c4746 we check for these
functions in configure (a check which succeeds on Cygwin), but cmdutils.c
only includes windows.h if _WIN32 is defined (which it is not on Cygwin).
Retain the old intent from before 06ec9c4746, namely that these functions
are only used when _WIN32 is defined, while only using them if configure
has agreed that they do exist.
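The resulting guard is presumably of this shape (illustrative sketch;
HAVE_SOME_W32_FUNC stands in for whatever HAVE_* macro configure actually
defines for these functions):
#if defined(_WIN32) && HAVE_SOME_W32_FUNC
#include <windows.h>
/* ... Win32-only code path using the checked function ... */
#endif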
Signed-off-by: Martin Storsjö <martin@martin.st>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Reviewed-by: Peter Ross <pross@xvid.org>
Signed-off-by: James Almer <jamrial@gmail.com>
Fixes #8314.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Reviewed-by: Peter Ross <pross@xvid.org>
Signed-off-by: James Almer <jamrial@gmail.com>
The DASH muxer uses submuxers, and when such a submuxer has been allocated,
it is initially stored only in a temporary variable. It therefore leaks
if an error happens between the allocation and the point where it is stored
permanently. This commit fixes this.
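A minimal sketch of the pattern being fixed (illustrative only; the type,
helper and field names are hypothetical rather than the actual dashenc code):
static int open_submuxer(OutputStream *os)
{
    AVFormatContext *ctx = avformat_alloc_context();
    int ret;
    if (!ctx)
        return AVERROR(ENOMEM);
    ret = configure_submuxer(ctx);   /* hypothetical setup that may fail */
    if (ret < 0) {
        avformat_free_context(ctx);  /* without this, ctx would leak */
        return ret;
    }
    os->ctx = ctx;                   /* stored permanently only here */
    return 0;
}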
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Reviewed-by: "Jeyapal, Karthick" <kjeyapal@akamai.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This commit improves the returned error codes by forwarding them instead
of hardcoding them. In some instances, the hardcoded error codes made no
sense at all: the normal error code for failure of av_new_packet() is
AVERROR(ENOMEM), yet there were instances where AVERROR(EIO) was returned.
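The kind of change meant here looks roughly like this (illustrative sketch):
ret = av_new_packet(pkt, size);
if (ret < 0)
    return ret;                      /* forward AVERROR(ENOMEM) */
/* previously: if (av_new_packet(pkt, size)) return AVERROR(EIO); */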
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
by freeing it a bit earlier.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Converting the explicit avio_flush() calls helps us to buffer more data and
to avoid flushing the IO context too often, which reduces IO throughput for
non-streamed file output.
The user can control the FLUSH_POINT flushing behaviour using the
-flush_packets option; the default typically means to flush unless a
non-streamed file output is used. So this change should have no adverse
effect on streaming, even if it is assumed that after an avio_flush() the
output buffer is clean, so that small seekbacks within the output buffer
work even when the IO context is not seekable.
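At each converted call site, the change presumably looks roughly like this
(illustrative sketch, where s stands for the muxer's AVFormatContext):
/* before: unconditionally flush the IO buffer */
avio_flush(s->pb);
/* after: only mark a point where the IO layer may flush, depending on
 * the -flush_packets setting and whether the output is streamed */
avio_write_marker(s->pb, AV_NOPTS_VALUE, AVIO_DATA_MARKER_FLUSH_POINT);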
Signed-off-by: Marton Balint <cus@passwd.hu>
These instances are simply redundant, or are present because avio_flush()
used to be required before doing a seekback. That is no longer the case;
the aviobuf code does the flush automatically on seek.
This only affects code which is either disabled for streaming IO contexts or
does no seekbacks after the flush, so this change should have no adverse effect
on streaming.
Signed-off-by: Marton Balint <cus@passwd.hu>
Removing explicit avio_flush() calls helps us to buffer more data and to
avoid flushing the IO context too often, which reduces IO throughput for
non-streamed file output.
The user can control the flushing behaviour at the end of every packet using
the -flush_packets option; the default typically means to flush unless a
non-streamed file output is used.
Therefore this change should have no adverse effect on streaming, even if it
is assumed that a new packet starts with a clean buffer, so that small
seekbacks within the output buffer work even when the IO context is not
seekable.
Signed-off-by: Marton Balint <cus@passwd.hu>
To make it consistent with other muxers.
The user can still control the generic flushing behaviour after write_header
(the same way as after packets) using the -flush_packets option; the default
typically means to flush unless a non-streamed file output is used.
Therefore this change should have no adverse effect on streaming, even if it
is assumed that the first packet has a clean buffer, so that small seekbacks
within the output buffer work even when the IO context is not seekable.
Signed-off-by: Marton Balint <cus@passwd.hu>
The following is a Python script that halves the pixel values of a
grayscale image. It demonstrates how to set up and execute a DNN model
with Python + TensorFlow. It also generates the .pb file which will be
used by ffmpeg.
import tensorflow as tf
import numpy as np
from skimage import color
from skimage import io

# Read the input image and convert it to single-channel grayscale.
in_img = io.imread('input.jpg')
in_img = color.rgb2gray(in_img)
io.imsave('ori_gray.jpg', np.squeeze(in_img))

# Add batch and channel dimensions: shape becomes [1, height, width, 1].
in_data = np.expand_dims(in_img, axis=0)
in_data = np.expand_dims(in_data, axis=3)

# A 1x1 convolution with a single weight of 0.5 halves every pixel value.
filter_data = np.array([0.5]).reshape(1, 1, 1, 1).astype(np.float32)
filter = tf.Variable(filter_data)
x = tf.placeholder(tf.float32, shape=[1, None, None, 1], name='dnn_in')
y = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding='VALID', name='dnn_out')

# Freeze the variables into constants and write the graph as a .pb file.
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'halve_gray_float.pb', as_text=False)
print("halve_gray_float.pb generated, please use "
      "path_to_ffmpeg/tools/python/convert.py to generate halve_gray_float.model\n")

# Run the model once in TensorFlow to produce a reference result.
output = sess.run(y, feed_dict={x: in_data})
output = output * 255.0
output = output.astype(np.uint8)
io.imsave("out.jpg", np.squeeze(output))
To do the same thing with ffmpeg:
- generate halve_gray_float.pb with the above script
- generate halve_gray_float.model with tools/python/convert.py
- try the following commands
./ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.model:input=dnn_in:output=dnn_out:dnn_backend=native out.native.png
./ffmpeg -i input.jpg -vf format=grayf32,dnn_processing=model=halve_gray_float.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow out.tf.png
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Do not request the AVFrame's format in vf_dnn_processing with 'fmt';
instead, add another filter (format) in front for the format conversion.
Command examples:
./ffmpeg -i input.jpg -vf format=bgr24,dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:dnn_backend=native -y out.native.png
./ffmpeg -i input.jpg -vf format=rgb24,dnn_processing=model=halve_first_channel.model:input=dnn_in:output=dnn_out:dnn_backend=native -y out.native.png
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
Fixes: signed integer overflow: 2147483647 + 1 cannot be represented in type 'int'
Fixes: 19788/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_VMDAUDIO_fuzzer-5743379690553344
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The documentation of both avio_open() and avio_open2() states
that on failure, the pointer to an AVIOContext given to these functions
(via a pointer to a pointer to an AVIOContext) will be set to NULL. Yet
this did not happen when ffurl_open_whitelist() failed or when allocating
the internal buffer failed. This commit fixes this.
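For reference, the documented contract that callers can rely on (minimal
sketch; the wrapper function is just for illustration):
#include <libavformat/avio.h>
static int open_output(const char *url)
{
    AVIOContext *pb = NULL;
    int ret = avio_open(&pb, url, AVIO_FLAG_WRITE);
    if (ret < 0) {
        /* Per the documentation, pb has been set to NULL here,
         * so no cleanup of a half-opened context is needed. */
        return ret;
    }
    avio_closep(&pb);
    return 0;
}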
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
"VAProcFilterParameterBufferHDRToneMapping" was defined in libva 2.4.1, which will lead to
a build failure of the tonemap_vaapi filter with libva 2.3.0 under the
current check. This patch fixes this build error.
Signed-off-by: Xinpeng Sun <xinpeng.sun@intel.com>