According to the libva documentation, surfaces are bound to a context if
they are passed as an argument when creating the context.
Looking at the intel-vaapi-driver code, the surfaces passed at creation
appear to be simply stored in the context. The surface processed by
vaBeginPicture()-vaRenderPicture()-vaEndPicture() is the one specified
in vaBeginPicture().
It looks like a surface can be processed using a context by being
specified in vaBeginPicture(), even if it is not bound to that context.
Here, my questions are below.
What is the advantage of binding?
In what circumstances do we need to associate the context with surfaces?
In which scenarios is passing surfaces to vaCreateContext() required,
and in which is it not?
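For reference, the two places a surface can meet a context look roughly like this (a pseudocode sketch of the VA-API call sequence, not complete code; error handling and surface attributes omitted):

```
/* Option A: bind surfaces to the context at creation time */
vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, w, h, surfaces, num, NULL, 0);
vaCreateContext(dpy, config, w, h, VA_PROGRESSIVE,
                surfaces, num,   /* render_targets: the "bound" surfaces */
                &ctx);

/* Option B: pass no surfaces at creation... */
vaCreateContext(dpy, config, w, h, VA_PROGRESSIVE, NULL, 0, &ctx);

/* ...and name the target surface per frame instead */
vaBeginPicture(dpy, ctx, surfaces[i]);   /* surface chosen here */
vaRenderPicture(dpy, ctx, buffers, num_buffers);
vaEndPicture(dpy, ctx);
```

The question is essentially whether option B is always equivalent to option A, or whether some drivers rely on the render_targets list.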
Hi, I would like to know how to negotiate the YUV format of raw data
(input buffers) for video encoding.
My code base (Chromium) always assumes that Intel chips are able to
encode NV12 buffers, but I would guess that Intel chips can encode
buffers in other YUV formats as well.
I searched for example code that negotiates the YUV format in VA-API
video encoding, but I haven't found any yet.
Is there any example code for negotiating the YUV format in video
encoding?
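For what it's worth, the only negotiation I'm aware of at the libva level is querying the VAConfigAttribRTFormat attribute for the chosen profile and encode entrypoint; a pseudocode sketch (error handling omitted, and I may be missing a Chromium-level mechanism on top of this):

```
VAConfigAttrib attrib = { .type = VAConfigAttribRTFormat };
vaGetConfigAttributes(dpy, VAProfileH264Main, VAEntrypointEncSlice,
                      &attrib, 1);

if (attrib.value & VA_RT_FORMAT_YUV420)
    /* 4:2:0 surfaces (e.g. NV12) are supported */ ;
if (attrib.value & VA_RT_FORMAT_YUV422)
    /* 4:2:2 surfaces are supported */ ;
```

The returned bitmask describes surface chroma classes rather than exact fourccs, so a further mapping step would still be needed.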
On 10/10/18 23:22, Steven Toth wrote:
> Thanks for sharing Mark.
> I do have other ffmpeg pipelines that work very nicely - but are a mix
> of S/W and GPU for various stages. I neglected to mention these for
> the sake of brevity. I'm looking for a solution that completely
> constrains the pipeline to GPU only.
>> $ ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 -i in.mp4 -an -vf 'fps=30,scale_vaapi=1280:720' -c:v h264_vaapi -b:v 2M out.mp4
> Does fps=30 in your example demonstrate that something in the GPU
> between the decoder and the scaler is in fact dropping decoded frames?
> Or are those surfaces actually being returned to ffmpeg's internal
> graph only to be dropped (or forwarded back to the scaler)?
The fps filter doesn't care about the frame data at all, so it works on any type of frame. That includes those which are stored in hardware surfaces and opaque to the CPU - VAAPI, D3D, OpenCL, etc.
> If surface handles are being returned to ffmpeg's graph, to be dropped
> or forwarded, then this is perfect.
Yes, that is what is happening.
Hey, a question on the VPPs capabilities.
Can the VPP do frame dropping? For example, every other frame, or one
frame every N?
Some context: imagine I have a previously compressed 1280x720p60 H.264
video stream from some arbitrary network encoder. I'd like to decode
and re-encode it entirely on the GPU using VAAPI (Intel): decompress,
drop every other frame, scale the remaining frames down, and
re-compress as (for example) p30 at a lower bitrate.
A quick look at the VPP implementation suggests 'no'. Am I wrong?
Steven Toth - Kernel Labs