I've seen a large drop in HEVC encoding throughput when migrating from libva 1.7.3 / i965 1.7.3 SKL / VA-API 0.39.4 (the defaults in Debian 9 Stretch) to the more recent libva 2.4.0 / i965 2.3.0 SKL / VA-API 1.5.
I could previously encode 9 simultaneous 1080p30 streams; now I can encode only 3.
(I also see higher quality output at lower bitrates, which is a very good thing!)
Is the drop in encoding throughput due to quality improvements in the drivers? If so, are there options I can change to explore the trade-off between quality and throughput?
None of the options exposed through ffmpeg's hevc_vaapi encoder seem to have a large impact on throughput.
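For reference, this is roughly how I've been probing the quality/speed knobs (the device path and file names are placeholders for whatever your setup uses; `-compression_level` is the option that the vaapi encoders map to the driver's quality/target-usage level, where higher values generally mean faster/lower quality):

```shell
# List the options this particular build of the encoder exposes:
ffmpeg -h encoder=hevc_vaapi

# Example encode trading quality for speed via compression_level:
ffmpeg -vaapi_device /dev/dri/renderD128 \
       -hwaccel vaapi -hwaccel_output_format vaapi \
       -i input.mp4 \
       -c:v hevc_vaapi -compression_level 7 -b:v 4M \
       output.mp4
```

Even at the fastest compression_level I don't see anything close to the old throughput.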
Or is there a chance that something is configured incorrectly?
I have tried encoding with both i965 and iHD drivers.
I have tried using ffmpeg with both libmfx / hevc_qsv and vaapi / hevc_vaapi.
I have tried running on Debian 9 and Ubuntu 19.04.
I have tried running on SKL (E3-1505M) and CFL (E-2176M).
In all cases I see roughly the same maximum throughput.
The only other 'symptom' I have been able to find is that with the old libva / VA-API, intel_gpu_top reported "render space: 0/16384" while the newer version reports "render space: 0/4096".
Any insight into this would be appreciated!
According to the libva documentation:

    Surfaces are bound to a context if passing them as an argument when
    creating the context.
Looking at the intel-vaapi-driver code, the surfaces passed at creation time are simply stored in the context object.
The surface processed by a vaBeginPicture()/vaRenderPicture()/vaEndPicture() sequence is specified in the vaBeginPicture() call.
It looks like a surface can be processed using a context by being
specified in vaBeginPicture(), even if it is not bound to the context.
My questions are:
What is the advantage of binding surfaces to a context?
In what circumstances do we need to associate the context with surfaces?
In which scenarios is passing surfaces to vaCreateContext() required, and in which is it not?