According to the libva documentation, surfaces are bound to a context when
they are passed as an argument to vaCreateContext(). Looking at the
intel-vaapi-driver code, however, the surfaces are just stored in the
context, while the surface actually processed by the
vaBeginPicture()-vaRenderPicture()-vaEndPicture() sequence is the one
specified in vaBeginPicture(). So it looks like a surface can be processed
using a context by being specified in vaBeginPicture(), even if it is not
bound to that context.
My questions are below.
What is the advantage of binding?
In what circumstances do we need to associate the context with surfaces?
In which scenarios is passing surfaces to vaCreateContext() required,
and in which is it not?
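To make the two call sites concrete, here is a self-contained sketch of the sequence in question. The libva entry points are replaced with local stubs that only mirror the real signatures from va.h (handle typedefs simplified), so it compiles and runs without a driver; it illustrates where surfaces are passed, and is not a working decode loop.

```c
#include <stddef.h>

/* Simplified stand-ins for the libva handle types in va.h. */
typedef void *VADisplay;
typedef unsigned int VAConfigID;
typedef unsigned int VAContextID;
typedef unsigned int VASurfaceID;
typedef int VAStatus;
#define VA_STATUS_SUCCESS 0

/* Stub with the same shape as the real vaCreateContext(): the surfaces
 * in render_targets are the ones "bound" to the new context. */
static VAStatus vaCreateContext(VADisplay dpy, VAConfigID config_id,
                                int width, int height, int flag,
                                VASurfaceID *render_targets,
                                int num_render_targets,
                                VAContextID *context)
{
    (void)dpy; (void)config_id; (void)width; (void)height; (void)flag;
    (void)render_targets; (void)num_render_targets;
    *context = 1;
    return VA_STATUS_SUCCESS;
}

/* Stub mirroring vaBeginPicture(): the surface actually rendered to is
 * the one named here, per frame; as observed in intel-vaapi-driver it
 * does not have to be one of the surfaces bound at context creation. */
static VAStatus vaBeginPicture(VADisplay dpy, VAContextID context,
                               VASurfaceID render_target)
{
    (void)dpy; (void)context; (void)render_target;
    return VA_STATUS_SUCCESS;
}

/* The usage pattern under discussion. */
static VAStatus demo(void)
{
    VADisplay dpy = NULL;
    VAConfigID config = 0;
    VAContextID ctx;
    VASurfaceID bound[2] = {10, 11}; /* bound at creation time */
    VASurfaceID unbound = 12;        /* never passed to vaCreateContext */

    VAStatus s = vaCreateContext(dpy, config, 1920, 1080, 0,
                                 bound, 2, &ctx);
    if (s != VA_STATUS_SUCCESS)
        return s;
    /* Accepted by intel-vaapi-driver even though `unbound` is unbound: */
    return vaBeginPicture(dpy, ctx, unbound);
}
```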
For a long time now there has been a known, working implementation of H264 decoding on the X4500HD chipset. Please include it in the mainline driver so that it finally becomes usable for "normal" people, who are generally not able to compile their drivers themselves.
Also, the many hours of work spent implementing this decoding support should not go to waste.
I took to the ##intel-media IRC channel to find out why
VAEncMiscParameterMaxSliceSize has no effect with the VAAPI HEVC encoder.
Unfortunately, jkqxz let me know that the parameter is simply not implemented.
The VAEncMiscParameterMaxSliceSize parameter is documented in va.h, so I
assume there is a plan to implement this feature at some point. I'm hoping
it could be done sooner rather than later.
Limiting HEVC slice size greatly improves error resiliency when streaming
via RTP over a lossy network. The VAAPI HEVC encoder is my encoder of
choice for a number of reasons; however, without the ability to limit slice
size, I will have to rely on NVENC, which does provide that ability.
The documentation in va.h reads:

/**
 * \brief Maximum slice size.
 *
 * Allow a maximum slice size to be specified (in bits). The encoder
 * will attempt to make sure that individual slices do not exceed this
 * size, or signal the application if a slice exceeds it (see the
 * "status" flags in VACodedBufferSegment).
 */
typedef struct _VAEncMiscParameterMaxSliceSize
{
    unsigned int max_slice_size;
    /** \brief Reserved bytes for future use, must be zero */
    uint32_t va_reserved[VA_PADDING_LOW];
} VAEncMiscParameterMaxSliceSize;
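For what it's worth, the parameter would be submitted like any other encoder misc parameter: a VAEncMiscParameterBuffer header of type VAEncMiscParameterTypeMaxSliceSize followed by the payload, uploaded with vaCreateBuffer() as a VAEncMiscParameterBufferType buffer and submitted via vaRenderPicture(). The sketch below only demonstrates the in-memory layout; the struct definitions are hand-copied approximations of va.h so it compiles standalone (a real program would include <va/va.h>), and of course the parameter has no effect until the driver implements it.

```c
#include <stdlib.h>

/* Hand-copied approximations of the va.h types involved. */
typedef int VAEncMiscParameterType;
#define VAEncMiscParameterTypeMaxSliceSize 2  /* value per va.h */

typedef struct _VAEncMiscParameterBuffer {
    VAEncMiscParameterType type;
    unsigned int data[];           /* payload follows the header */
} VAEncMiscParameterBuffer;

typedef struct _VAEncMiscParameterMaxSliceSize {
    unsigned int max_slice_size;   /* in bits */
    unsigned int va_reserved[4];   /* VA_PADDING_LOW in va.h; must be zero */
} VAEncMiscParameterMaxSliceSize;

/* Build the blob an application would pass to vaCreateBuffer(dpy, ctx,
 * VAEncMiscParameterBufferType, size, 1, blob, &buf) and then submit
 * through vaRenderPicture(). */
static VAEncMiscParameterBuffer *make_max_slice_size(unsigned int bits)
{
    size_t size = sizeof(VAEncMiscParameterBuffer)
                + sizeof(VAEncMiscParameterMaxSliceSize);
    VAEncMiscParameterBuffer *misc = calloc(1, size); /* zeroes reserved */
    if (!misc)
        return NULL;
    misc->type = VAEncMiscParameterTypeMaxSliceSize;
    VAEncMiscParameterMaxSliceSize *p =
        (VAEncMiscParameterMaxSliceSize *)misc->data;
    p->max_slice_size = bits;  /* e.g. keep each slice under one RTP MTU */
    return misc;
}
```

For RTP use, a value of 8 * 1200 bits would aim to keep each slice within a typical path MTU after packetization overhead.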