According to the libva documentation, surfaces are bound to a context if
they are passed as an argument when the context is created.
Looking at the intel-vaapi-driver code, the surfaces passed at creation
are just stored in the context.
The surface processed by
vaBeginPicture()-vaRenderPicture()-vaEndPicture() is the one specified
in vaBeginPicture(). So it looks like a surface can be processed with a
context by being specified in vaBeginPicture(), even if it was not bound
to the context at creation time.
My questions are:
1. What is the advantage of binding?
2. In what circumstances do we need to associate surfaces with a context?
3. In which scenarios is passing surfaces to vaCreateContext() required,
and in which is it not?
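For reference, the two paths being compared look roughly like this in C. This is only a sketch: it assumes an already-initialized VADisplay `dpy` and a VAConfigID `config_id` from vaCreateConfig(), and it omits all error checking, so it is not runnable as-is without a VAAPI device.

```c
#include <va/va.h>

/* Sketch: assumes an initialized VADisplay dpy and a VAConfigID
 * config_id from vaCreateConfig(); error checking omitted. */
void two_ways(VADisplay dpy, VAConfigID config_id)
{
    VASurfaceID surfaces[4];
    VAContextID ctx_bound, ctx_unbound;

    vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, 1280, 720,
                     surfaces, 4, NULL, 0);

    /* Path 1: surfaces passed to vaCreateContext() -> "bound". */
    vaCreateContext(dpy, config_id, 1280, 720, VA_PROGRESSIVE,
                    surfaces, 4, &ctx_bound);

    /* Path 2: no surfaces passed at creation time... */
    vaCreateContext(dpy, config_id, 1280, 720, VA_PROGRESSIVE,
                    NULL, 0, &ctx_unbound);

    /* ...yet the target surface is still named per frame here, which
     * is what makes binding at creation look optional: */
    vaBeginPicture(dpy, ctx_unbound, surfaces[0]);
    /* vaRenderPicture(...); */
    vaEndPicture(dpy, ctx_unbound);
}
```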
I need to detect which chroma subsampling formats are supported by the
VAAPI driver for JPEG decoding. Currently, I call
vaQueryConfigAttributes() and read the value of the relevant attribute.
Is this the expected way to do it, or do I have to do something else?
I have a few questions regarding GTT and PPGTT related to the H.264
encoder. We are using libdrm 2.4.74, intel-vaapi-driver, and libva to
encode YUV input.
1. By default, does intel-vaapi-driver use the GTT or the PPGTT address
space? We are using a Baytrail-series GPU, which is 7th generation.
2. Will it affect the encoded output if I pass i915.enable_ppgtt=0 on
the Linux kernel command line?
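As a side note, before and after changing the command line you can check what the running kernel actually picked up. This assumes the i915 module exposes enable_ppgtt under sysfs, which it did on kernels of this era (the parameter was removed in later kernels), so the snippet handles its absence gracefully.

```shell
# Report the current i915 enable_ppgtt setting, if the kernel exposes it.
param=/sys/module/i915/parameters/enable_ppgtt
if [ -r "$param" ]; then
    msg="enable_ppgtt=$(cat "$param")"
else
    msg="enable_ppgtt not exposed (i915 not loaded, or a newer kernel)"
fi
echo "$msg"
```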