
cuda bug: falling back to software decoding doesn't play video from the beginning #3914

Closed
pavelxdd opened this issue Dec 17, 2016 · 7 comments


@pavelxdd
Contributor

pavelxdd commented Dec 17, 2016

mpv version and platform

mpv git-2b8b174 (C) 2000-2016 mpv/MPlayer/mplayer2 projects
 built on Fri Dec 16 19:04:39 MSK 2016
ffmpeg library versions:
   libavutil       55.43.100
   libavcodec      57.68.100
   libavformat     57.60.100
   libswscale      4.3.101
   libavfilter     6.68.100
   libswresample   2.4.100
ffmpeg version: git-2016-12-16-a5cf600

Reproduction steps

mpv --no-config 10bit-eldorado.mkv --hwdec=cuda --opengl-backend=dxinterop

Expected behavior

CUDA fails to decode 10-bit H.264, mpv falls back to software decoding, and playback starts from 00:00.

Actual behavior

Playback starts from 00:07. It happens only when trying hwdec=cuda, and doesn't happen with other hardware decoders.

Log file

http://sprunge.us/gARO

Sample files

Any 10-bit H.264 file (assuming your GPU can't decode it), e.g.
https://ps-auxw.de/10bit-h264-sample/10bit-eldorado.mkv

@philipl do you know what could be the problem here?

@ghost

ghost commented Dec 17, 2016

It's not a cuda bug. FFmpeg has no API for signaling that hardware decoding is not supported, so mpv just keeps feeding packets until too many frames fail in a row, and then switches back to swdec as a consequence.
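This heuristic can be sketched in plain C (a hedged illustration, not mpv's actual code; `vd_state`, `handle_decode_result`, and the threshold of 3 are all made up for the example):

```c
#include <assert.h>
#include <stdbool.h>

/* Hedged sketch of the fallback heuristic described above: with no
 * explicit "not supported" error from the decoder, the player can
 * only count consecutive failed frames and switch to software
 * decoding once a threshold is exceeded. All names and the threshold
 * value are illustrative, not mpv internals. */

#define MAX_CONSECUTIVE_ERRORS 3

struct vd_state {
    int consecutive_errors;
    bool using_hwdec;
};

/* Returns true when the player should restart with software decoding. */
static bool handle_decode_result(struct vd_state *st, bool frame_ok)
{
    if (frame_ok) {
        st->consecutive_errors = 0;
        return false;
    }
    if (st->using_hwdec &&
        ++st->consecutive_errors >= MAX_CONSECUTIVE_ERRORS) {
        st->using_hwdec = false;  /* fall back to swdec */
        return true;
    }
    return false;
}

/* Demo: how many consecutive failures it takes to trigger fallback. */
static int demo_failures_until_fallback(void)
{
    struct vd_state st = { .consecutive_errors = 0, .using_hwdec = true };
    int failures = 0;
    while (!handle_decode_result(&st, false))
        failures++;
    return failures + 1;
}
```

The packets consumed during those failed attempts are simply gone by the time the fallback triggers, which would explain playback resuming several seconds in rather than at 00:00.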

@pavelxdd
Contributor Author

Could mpv, for example, remember the current position before probing hwdec, and seek back to it when hwdec fails?

@ghost

ghost commented Dec 17, 2016

@philipl this is made somewhat harder by the fact that I get the following return values from the API:

  • send_packet(first packet) -> ok
  • receive_frame() -> EAGAIN
  • send_packet(second packet) -> error

Is this necessarily so? If the error came back on the first send_packet, then --vd-lavc-software-fallback=1 could be used to force a fallback without discarding packets. I might look into an alternative solution, though.
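The sequence above can be mocked in plain C to make the problem concrete (nothing here links against FFmpeg; `mock_send_packet`, `mock_receive_frame`, and the error value are stand-ins): because the first send succeeds, that packet has already been consumed by the time the error arrives.

```c
#include <assert.h>
#include <errno.h>

/* Self-contained mock reproducing the return-value sequence reported
 * above. The two functions merely stand in for the FFmpeg decode
 * calls avcodec_send_packet()/avcodec_receive_frame(), and the
 * external-error code is an illustrative placeholder. */

#define AVERROR(e) (-(e))
#define MOCK_AVERROR_EXTERNAL (-542398533)  /* placeholder value */

static int packets_sent;

static int mock_send_packet(void)
{
    /* The first packet is accepted; the asynchronous decode failure
     * only surfaces when the second packet is sent. */
    return ++packets_sent == 1 ? 0 : MOCK_AVERROR_EXTERNAL;
}

static int mock_receive_frame(void)
{
    /* No frame is ready after only one packet. */
    return AVERROR(EAGAIN);
}
```

A fallback triggered on the second send therefore restarts decoding with at least one packet already lost, unless the player buffered it.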

@ghost

ghost commented Dec 17, 2016

@pavelxdd it's not that simple, because frames usually depend on other frames. But for the start of playback, one could just put all packets into a queue until we get a positive/negative response from the decoder, which is what I might look into.
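The queueing idea can be sketched as follows (a hedged illustration of the approach under discussion, not mpv's implementation; plain ints stand in for AVPacket and all names are invented):

```c
#include <assert.h>

/* Sketch of the queueing idea: buffer every packet fed to the
 * probing hardware decoder until it delivers a verdict, then either
 * drop the queue (success) or replay it into the software decoder
 * (failure) so no frames are lost. Plain ints stand in for AVPacket;
 * all names are illustrative. */

#define QUEUE_MAX 64

struct pkt_queue {
    int pkts[QUEUE_MAX];
    int len;
};

static void queue_push(struct pkt_queue *q, int pkt)
{
    if (q->len < QUEUE_MAX)
        q->pkts[q->len++] = pkt;
}

/* On hwdec failure, hand every buffered packet to the fallback
 * decoder so playback restarts from the very first frame. */
static int replay_into_swdec(const struct pkt_queue *q, int *out, int cap)
{
    int n = 0;
    for (int i = 0; i < q->len && n < cap; i++)
        out[n++] = q->pkts[i];
    return n;
}

/* Demo: queue five packets, then replay them all after a failure. */
static int demo_replay_count(void)
{
    struct pkt_queue q = { .len = 0 };
    for (int i = 0; i < 5; i++)
        queue_push(&q, i);
    int out[QUEUE_MAX];
    return replay_into_swdec(&q, out, QUEUE_MAX);
}
```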

@philipl
Member

philipl commented Dec 17, 2016

Yeah, it's not possible to fail in send_packet. We get notified of failure asynchronously, in a callback from cuda that fires after send_packet has returned to the caller, so receive_frame is the first opportunity to pass that error back. We could try to force a synchronisation in the first send_packet, but that would be ugly, and you'd still hit this problem with other async decoders at some point, so a more general solution is going to be better in the end.

@ghost

ghost commented Dec 17, 2016

Noted.

@philipl
Member

philipl commented Dec 17, 2016

I did some additional investigation: the behaviour depends on what format you're decoding and what exactly isn't supported.

10-bit vp9 fails immediately, and you get an error back from send_packet. I cannot test 10-bit hevc failing properly (it works on my hardware), but I tried to emulate a failure, and I did see the first send_packet return success.

The difference is that the callback appears to fire immediately in the vp9 case (so we have the error in hand when send_packet returns), while it is delayed in the hevc case.

@ghost ghost closed this as completed in c000b37 Jan 10, 2017