According to the libva documentation, surfaces are bound to a context
when they are passed as an argument to vaCreateContext().
Looking at the intel-vaapi-driver code, the surfaces are just stored in
the context object.
The surface processed by the
vaBeginPicture()-vaRenderPicture()-vaEndPicture() sequence is the one
specified in vaBeginPicture().
It looks like a surface can be processed using a context by being
specified in vaBeginPicture(), even if it is not bound to the context.
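To make the question concrete, here is a minimal sketch of the two patterns. It assumes a valid VADisplay and VAConfigID already exist (the names `va_display`, `va_config_id`, and the helper `decode_one_surface` are mine, not from the original code):

```c
#include <va/va.h>

/* Sketch: surfaces[0] is bound to the context at creation time, yet
 * vaBeginPicture() targets surfaces[1], which was NOT bound. In my
 * tests this still appears to work, hence the question. */
VAStatus decode_one_surface(VADisplay va_display, VAConfigID va_config_id,
                            unsigned int width, unsigned int height)
{
    VASurfaceID surfaces[2];
    VAStatus status;

    /* Create two surfaces up front. */
    status = vaCreateSurfaces(va_display, VA_RT_FORMAT_YUV420,
                              width, height, surfaces, 2, NULL, 0);
    if (status != VA_STATUS_SUCCESS)
        return status;

    /* Bind only surfaces[0] to the context... */
    VAContextID context;
    status = vaCreateContext(va_display, va_config_id, width, height,
                             VA_PROGRESSIVE, &surfaces[0], 1, &context);
    if (status != VA_STATUS_SUCCESS)
        return status;

    /* ...but start a picture on surfaces[1], which is not in the
     * context's render_targets list. */
    status = vaBeginPicture(va_display, context, surfaces[1]);
    /* vaRenderPicture()/vaEndPicture() would follow here. */
    return status;
}
```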
My questions are:
What is the advantage of binding?
In what circumstances do we need to associate the context with surfaces?
In which scenarios is passing surfaces to vaCreateContext() required,
and in which is it not?
I use vaDeriveImage() to compute the memory usage of a VASurface;
there is no other libva function that exposes obj_surface->size.
The image obtained by vaDeriveImage() is then destroyed with
vaDestroyImage().
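For reference, this is roughly how I measure the size. It is only a sketch, assuming a valid `va_display` and `va_surface_id`; the helper name `surface_mem_usage` is mine:

```c
#include <stddef.h>
#include <va/va.h>

/* Sketch: derive an image from the surface, read its data_size,
 * then destroy the image again. */
VAStatus surface_mem_usage(VADisplay va_display, VASurfaceID va_surface_id,
                           size_t *out_size)
{
    VAImage image;
    VAStatus status = vaDeriveImage(va_display, va_surface_id, &image);
    if (status != VA_STATUS_SUCCESS)
        return status;

    /* image.data_size is the size of the buffer backing the surface,
     * which (in intel-vaapi-driver, as far as I can tell) mirrors the
     * driver's internal obj_surface->size. */
    *out_size = image.data_size;

    return vaDestroyImage(va_display, image.image_id);
}
```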
However, JPEG decoding does not work correctly with this approach.
More precisely, the output of the HW decoder differs significantly
from that of the software decoder, although no libva function returns
an error. I also confirmed that video decoding produces good results
with no problems. The only difference between JPEG decoding and video
decoding is the contents of the buffers passed to vaRenderPicture().
According to the libva documentation, vaDeriveImage() and
vaDestroyImage() do not have any side effects. It therefore seems
weird that decoding works incorrectly only when these functions are
inserted before vaEndPicture(), like below:
> VA_SUCCESS_OR_RETURN(va_res, "vaRenderPicture for slices failed", false);
> + VAImage image;
> + va_res = vaDeriveImage(va_display_, va_surface_id, &image);
> + va_res = vaDestroyImage(va_display_, image.image_id);
> // Instruct HW codec to start processing committed buffers.
> // Does not block and the job is not finished after this returns.
> va_res = vaEndPicture(va_display_, va_context_id);
> VA_SUCCESS_OR_RETURN(va_res, "vaEndPicture failed", false);