avcodec/cuvid: Restore initialization of pixel format in init()

I moved this into the handle_video_sequence callback because that's
the earliest time you can make an accurate decision as to what the
format should be.

However, transcoding requires that the decision between the
accelerated AV_PIX_FMT_CUDA and a normal software pix format be made
at init() time. Enough information is available at that point to make
the decision, and everything still works with the actual underlying
format only being discovered later, in the sequence callback.
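
For context, ff_get_format() resolves that choice by invoking the
user-supplied AVCodecContext.get_format callback with the offered
list. Below is a minimal sketch of the caller's side of the
negotiation; pick_cuda_format is a hypothetical name and is not part
of this patch:

/*
 * Hypothetical get_format callback a transcoding application might
 * install. The patch below offers { AV_PIX_FMT_CUDA, AV_PIX_FMT_NV12,
 * AV_PIX_FMT_NONE }; returning AV_PIX_FMT_CUDA here, at init() time,
 * selects the accelerated path, anything else the software path.
 */
#include <libavcodec/avcodec.h>

static enum AVPixelFormat pick_cuda_format(AVCodecContext *avctx,
                                           const enum AVPixelFormat *fmts)
{
    for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++) {
        if (*p == AV_PIX_FMT_CUDA)
            return *p;  /* take the accelerated format when offered */
    }
    return fmts[0];     /* fall back to the first software format */
}

/* Installed before avcodec_open2(), so the decision is made during
 * the decoder's init():
 *
 *     avctx->get_format = pick_cuda_format;
 */
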
Philip Langdale 2016-11-23 12:10:32 -08:00
parent b96a6e2024
commit dd10e7253a

@@ -664,6 +664,21 @@ static av_cold int cuvid_decode_init(AVCodecContext *avctx)
     const AVBitStreamFilter *bsf;
     int ret = 0;
 
+    enum AVPixelFormat pix_fmts[3] = { AV_PIX_FMT_CUDA,
+                                       AV_PIX_FMT_NV12,
+                                       AV_PIX_FMT_NONE };
+
+    // Accelerated transcoding scenarios with 'ffmpeg' require that the
+    // pix_fmt be set to AV_PIX_FMT_CUDA early. The sw_pix_fmt, and the
+    // pix_fmt for non-accelerated transcoding, do not need to be correct
+    // but need to be set to something. We arbitrarily pick NV12.
+    ret = ff_get_format(avctx, pix_fmts);
+    if (ret < 0) {
+        av_log(avctx, AV_LOG_ERROR, "ff_get_format failed: %d\n", ret);
+        return ret;
+    }
+    avctx->pix_fmt = ret;
+
     ret = cuvid_load_functions(&ctx->cvdl);
     if (ret < 0) {
         av_log(avctx, AV_LOG_ERROR, "Failed loading nvcuvid.\n");