
WEBM: Init segment with HTTP PUT destination hangs #1196

Closed
petzeb opened this issue Apr 5, 2023 · 1 comment · Fixed by #1201 or #1312
Labels
status: archived Archived and locked; will not be updated

Comments

@petzeb
Contributor

petzeb commented Apr 5, 2023

System info

Operating System: Ubuntu Focal
Shaka Packager Version: v2.6.1

Issue and steps to reproduce the problem

Packager Command:

packager 'input=opus_96.opus.webm,stream=0,init_segment=http://localhost:35559/http/init.webm,segment_template=http://localhost:35559/http/$Number%05d$.webm'

Extra steps to reproduce the problem?
(1) Spin up a local HTTP server that supports chunked transfer encoding
(2) Any supported input file will probably do, but I have only tested with Opus in WebM so far.

What is the expected result?

Expected Shaka Packager to produce the init segment and media segments and upload them to the local HTTP server.

What happens instead?

The local HTTP server does receive a couple of PUT requests for the init segment path, but they contain 0 bytes of data.
Shaka Packager then appears to hang; with more verbose logging enabled, it can be observed that there is an apparently infinite loop of internal write attempts to the init segment. See attached log.

What happens is that when the HttpFile::Write call is made, the upload_cache_ is already closed, so it returns 0 bytes written; the write is retried over and over but never succeeds. I think the reason the upload_cache_ is closed is that the Flush happening in ThreadedIoFile::Seek causes the HttpFile::Flush method to close the upload cache. Somewhere, someone needs to Reopen the upload cache to break out of the infinite retry loop.

@petzeb
Contributor Author

petzeb commented Apr 5, 2023

joeyparrish pushed a commit that referenced this issue Jul 5, 2023
Closing the upstream on flush effectively terminates the ongoing
curl connection. This means we would need to re-establish the
connection in order to resume writing, which is not what we want. In the
spirit of the documentation of File::Flush

```c++
/// Flush the file so that recently written data will survive an 
/// application crash (but not necessarily an OS crash). For 
/// instance, in LocalFile the data is flushed into the OS but not 
/// necessarily to disk.
```

We will instead wait for the curl thread to finish consuming whatever
might be in the upload cache, but leave the connection open for
subsequent writes.

Fixes #1196
@github-actions github-actions bot added the status: archived Archived and locked; will not be updated label Sep 3, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 3, 2023