
Specify when requestFrame() captures the canvas content #28

Open
foolip opened this issue Feb 18, 2016 · 7 comments

@foolip
Member

foolip commented Feb 18, 2016

http://w3c.github.io/mediacapture-fromelement/#dfn-requestframe

The requestFrame() method allows applications to manually request that a frame from the canvas be captured and rendered into the track. In cases where applications progressively render to a canvas, this allows applications to avoid capturing a partially rendered frame.

This doesn't say what invoking requestFrame() should actually do. Does requesting mean that what is currently on the canvas gets encoded as a new frame, or is it a request to capture the canvas content at some later time, and if so at what time?

@foolip
Member Author

foolip commented Feb 18, 2016

Another detail in need of spec'ing here is the behavior when the frameRate argument was omitted or non-zero. In Blink, requestFrame() seems to be a no-op unless frameRate is specified and zero.
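The two configurations being compared can be sketched as follows. This is an illustrative sketch, not normative: `requestFrame()` is shown on the captured track, and the `capturesAutomatically` helper is just a hypothetical summary of the capture policy described above.

```javascript
// Pure summary of the policy described above (hypothetical helper): with a
// frameRate of 0 nothing is captured automatically and the application must
// drive capture itself; with frameRate omitted or positive, capture is automatic.
function capturesAutomatically(frameRate) {
  return frameRate === undefined || frameRate > 0;
}

// Browser-only sketch of the two cases.
if (typeof document !== 'undefined') {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');

  const autoStream = canvas.captureStream();     // frameRate omitted: auto capture
  const manualStream = canvas.captureStream(0);  // no automatic capture
  const [track] = manualStream.getVideoTracks();

  ctx.fillRect(0, 0, canvas.width, canvas.height);
  track.requestFrame(); // the behavior this issue asks the spec to define
}
```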

@Pehrsons
Contributor

Pehrsons commented Mar 2, 2016

The text for captureStream() defines this. requestFrame() should reflect that too.

http://w3c.github.io/mediacapture-fromelement/#methods-1

Each track that captures a canvas has an internal frameCaptureRequested property that is set to true when a new frame is requested from the canvas.

I think for simplicity requestFrame() should work for all values of frameRate.

@foolip
Member Author

foolip commented Mar 10, 2016

Yep, spelling out that it sets frameCaptureRequested sounds right; it's currently implicit.

@uysalere uysalere self-assigned this Feb 7, 2017
@EliasHasle

EliasHasle commented Oct 25, 2018

A use case of captureStream is recording video of graphics rendered to a canvas. With graphics rendering, the FPS is naturally variable. For realtime graphics a variable output FPS is acceptable, but for graphics rendered at fixed time steps with variable processing time per frame, such as a movie (which could use raytracing or other expensive techniques), it should be possible to render frames one by one and specify a fixed framerate. I actually expected this behavior from captureStream(0), but found no way to control the output framerate. Maybe requestFrame could take a dt or timestamp parameter to override the realtime default? Or maybe there could be a way to apply constraints?

Syncing with an audio stream would then be another challenge.

I understand that realtime streams may be prioritized, but then maybe the output framerate control belongs in MediaRecorder.

Edit: This answer seems to provide a kind of solution for non-realtime rendering at a variable rate: https://stackoverflow.com/a/41275515/4071801
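The frame-by-frame workflow described above can be sketched as follows. This is a sketch under assumptions: `renderExpensiveFrame` is a hypothetical slow renderer, `frameTimestampsMs` is a hypothetical helper showing the fixed-rate timing the pipeline does not currently provide, and it assumes `requestFrame()` captures the finished frame, which is exactly what this issue asks the spec to guarantee.

```javascript
// Hypothetical helper: fixed-rate presentation timestamps (in ms) for an
// offline render at a given fps -- the timing a realtime MediaRecorder
// does not currently let the application impose.
function frameTimestampsMs(frameCount, fps) {
  return Array.from({ length: frameCount }, (_, i) => (i * 1000) / fps);
}

// Browser-only sketch of rendering frame by frame into a 0-fps stream.
if (typeof document !== 'undefined') {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');
  const stream = canvas.captureStream(0);   // capture only on requestFrame()
  const [track] = stream.getVideoTracks();
  const recorder = new MediaRecorder(stream);

  async function renderMovie(frameCount) {
    recorder.start();
    for (let i = 0; i < frameCount; i++) {
      await renderExpensiveFrame(ctx, i);   // hypothetical expensive renderer
      track.requestFrame();                 // push one finished frame
    }
    recorder.stop();  // frames are still timestamped in wall-clock time
  }
}
```

Note the remaining gap: even with manual capture, the recorder timestamps frames in real time, which is what the linked answer works around.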

@guest271314

it should be possible to render frames one by one and specify a fixed framerate.

@EliasHasle What issue are you having achieving the requirement?

@EliasHasle

Thank you. The issue is that MediaRecorder seems to record in real time with a variable framerate when the stream's framerate is 0. This is probably by design. Trying to hack around it by pausing and resuming the recorder is tricky and possibly depends on timely triggering of events, etc. However, I found that ccapture.js solves my use case well. It captures frame by frame as images and joins them into a fixed-framerate video afterwards. :-)

@Kaiido

Kaiido commented Nov 20, 2021

It seems that implementations currently read

The requestFrame() method allows applications to manually request that a frame from the canvas be captured and rendered into the track.

as saying "Set this's [[frameCaptureRequested]] to true".

Given that to actually "render" a new frame to the track, captureStream says that we need both [[frameCaptureRequested]] and an undefined "canvas is painted" event, this method actually fails its goal:

Applications that depend on tight control over the rendering of content to the media stream can use this method to control when frames from the canvas are captured.

Both Firefox and Chrome do render half-baked frames when we do use captureStream(0) + requestFrame(), Firefox because they wait for the next painting frame of the event loop before actually capturing the canvas frame, Chrome ... for more obscure reasons(?).

To avoid this, requestFrame() should explicitly capture the canvas frame like any other API does (drawImage(), toDataURL(), createImageBitmap(), etc.), that is to capture it synchronously and push it to the track directly.
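The difference between the two readings can be illustrated with a small sketch. The `capturedFrame` function is a purely illustrative toy model (not part of any API), and the browser portion reproduces the hazard described above under the assumption that deferred capture reads the canvas at the next paint.

```javascript
// Toy model of the two possible semantics. drawOps is the sequence of canvas
// states in paint order; requestIndex is when requestFrame() was called.
// Synchronous capture records the state at the call; deferred capture records
// whatever the canvas holds at the next paint (here, the final state).
function capturedFrame(drawOps, requestIndex, deferred) {
  return deferred ? drawOps[drawOps.length - 1] : drawOps[requestIndex];
}

// Browser-only reproduction of the hazard: draw red, request a frame, then
// draw blue before the next painting step of the event loop.
if (typeof document !== 'undefined') {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');
  const [track] = canvas.captureStream(0).getVideoTracks();

  ctx.fillStyle = 'red';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  track.requestFrame();  // intent: capture the all-red frame now

  ctx.fillStyle = 'blue';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  // With deferred capture, the track may receive the blue frame instead.
}
```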
