FFmpeg/libswscale/aarch64
Sebastian Pop c3a17ffff6 swscale/aarch64: use multiply accumulate and shift-right narrow
This patch rewrites the innermost loop of ff_yuv2planeX_8_neon to avoid zips and
horizontal adds by using multiply-accumulate instructions. It also uses ld1r to
load one element and replicate it across all lanes of a vector, and it improves
the clipping code by replacing the separate shift-right instructions with
shift-right-narrow instructions that shift and narrow in a single step.

I see an 8% improvement on an m6g instance with Neoverse-N1 cores:
$ ffmpeg -nostats -f lavfi -i testsrc2=4k:d=2 -vf bench=start,scale=1024x1024,bench=stop -f null -
before: t:0.014015 avg:0.014096 max:0.015018 min:0.013971
after:  t:0.012985 avg:0.013013 max:0.013996 min:0.012818

Tested with `make check` on aarch64-linux.

Signed-off-by: Sebastian Pop <spop@amazon.com>
Reviewed-by: Clément Bœsch <u@pkh.me>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2020-01-04 20:59:31 +01:00
hscale.S swscale/aarch64: use multiply accumulate and increase vector factor to 4 2019-12-17 23:41:47 +01:00
Makefile sws/aarch64: add ff_yuv2planeX_8_neon 2016-04-11 16:27:19 +02:00
output.S swscale/aarch64: use multiply accumulate and shift-right narrow 2020-01-04 20:59:31 +01:00
swscale_unscaled.c sws/aarch64: add {nv12,nv21,yuv420p,yuv422p}_to_{argb,rgba,abgr,rgba}_neon 2016-03-01 17:53:33 +01:00
swscale.c sws/aarch64: add ff_yuv2planeX_8_neon 2016-04-11 16:27:19 +02:00
yuv2rgb_neon.S sws/aarch64/yuv2rgb: honor iOS calling convention 2016-04-08 17:58:43 +02:00