
HADOOP-19221. S3A: Unable to recover from failure of multipart block upload attempt (#6938) #7044

Conversation

steveloughran
Contributor

This is a major change which handles the failure mode seen when uploading large files from the memory heap/buffer (or via the staging committer) and the remote S3 store returns a 500 response from an upload of a block in a multipart upload.

The SDK's own streaming code seems unable to fully replay the upload; it attempts to, but then blocks, and the S3 store returns a 400 response:

"Your socket connection to the server was not read from or written to
 within the timeout period. Idle connections will be closed.
 (Service: S3, Status Code: 400...)"

There is an option to control whether or not the S3A client itself attempts to retry on a 50x error other than 503 throttling events (which are processed independently, as before).

Option: fs.s3a.retry.http.5xx.errors
Default: true
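
For example, to turn the retries off in code (a minimal sketch; the bucket name is illustrative):

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class DisableS3a5xxRetries {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hand non-503 5xx responses straight back to the caller rather
    // than retrying them inside the S3A client.
    conf.setBoolean("fs.s3a.retry.http.5xx.errors", false);
    try (FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf)) {
      System.out.println("Connected to " + fs.getUri());
    }
  }
}
```

The same setting can of course go in core-site.xml for a cluster-wide default.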

500 errors are very rare from standard AWS S3, which has a five-nines SLA. They may be more common against S3 Express, which has lower guarantees.

Third party stores have unknown guarantees, and the exception may indicate a bad server configuration. Consider setting fs.s3a.retry.http.5xx.errors to false when working with such stores.

Significant code changes:

There is now a custom set of implementations of
software.amazon.awssdk.http.ContentStreamProvider in the class org.apache.hadoop.fs.s3a.impl.UploadContentProviders.

These:

  • Restart on failures.
  • Do not copy buffers/byte buffers into new private byte arrays, so they avoid exacerbating memory problems (see the sketch after this list).
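
The essence of the replay pattern is that the provider hands the SDK a fresh stream over the same backing data on every attempt. A minimal sketch of the idea, not the actual Hadoop class:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

import software.amazon.awssdk.http.ContentStreamProvider;

/**
 * Sketch of a replayable provider: each newStream() call opens a fresh
 * stream over the same shared array, so a failed part upload can be
 * restarted without copying the block into a new private byte array.
 */
public final class ReplayableByteArrayProvider implements ContentStreamProvider {
  private final byte[] buffer;
  private final int offset;
  private final int length;

  public ReplayableByteArrayProvider(byte[] buffer, int offset, int length) {
    this.buffer = buffer;   // shared, never copied
    this.offset = offset;
    this.length = length;
  }

  @Override
  public InputStream newStream() {
    // A new stream positioned at the start of the block on every (re)try.
    return new ByteArrayInputStream(buffer, offset, length);
  }
}
```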

There are new IOStatistics for specific HTTP error codes; these are collected even when all recovery is performed within the SDK.
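
The new counters surface through the existing IOStatistics API; a sketch, assuming a Hadoop release where FSDataOutputStream implements IOStatisticsSource:

```java
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.statistics.IOStatisticsLogging;

public final class StreamStatsLogger {
  /**
   * Pretty-print whatever IOStatistics the wrapped stream collected,
   * including the new per-status-code error counters.
   */
  public static void logStreamStatistics(FSDataOutputStream out) {
    System.out.println(
        IOStatisticsLogging.ioStatisticsToPrettyString(out.getIOStatistics()));
  }
}
```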

S3ABlockOutputStream has major changes, including handling of Thread.interrupt() on the main thread, which now triggers and briefly awaits cancellation of any ongoing uploads.

If the writing thread is interrupted in close(), it is mapped to an InterruptedIOException. Applications like Hive and Spark must catch these after cancelling a worker thread.
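
A sketch of the pattern such applications need, with illustrative names:

```java
import java.io.IOException;
import java.io.InterruptedIOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class CancellableS3aWriter {
  /**
   * Start a worker writing to S3A; the caller may later invoke
   * worker.interrupt() to cancel the upload.
   */
  public static Thread startWriter(FileSystem fs, Path dest, byte[] data) {
    Thread worker = new Thread(() -> {
      try (FSDataOutputStream out = fs.create(dest)) {
        out.write(data);
      } catch (InterruptedIOException e) {
        // Interrupted in write()/close(): the upload was cancelled and
        // any in-flight block uploads abandoned. Expected after cancel.
      } catch (IOException e) {
        // A genuine I/O failure.
      }
    }, "s3a-writer");
    worker.start();
    return worker;
  }
}
```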

Contributed by Steve Loughran

How was this patch tested?

in progress

For code changes:

  • Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
  • Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
  • If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?

@hadoop-yetus

🎊 +1 overall

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:--------|
| +0 🆗 | reexec | 7m 19s |  | Docker mode activated. |
|  |  |  |  | _ Prechecks _ |
| +1 💚 | dupname | 0m 0s |  | No case conflicting files found. |
| +0 🆗 | codespell | 0m 1s |  | codespell was not available. |
| +0 🆗 | detsecrets | 0m 1s |  | detect-secrets was not available. |
| +0 🆗 | markdownlint | 0m 1s |  | markdownlint was not available. |
| +1 💚 | @author | 0m 0s |  | The patch does not contain any @author tags. |
| +1 💚 | test4tests | 0m 0s |  | The patch appears to include 21 new or modified test files. |
|  |  |  |  | _ branch-3.4 Compile Tests _ |
| +0 🆗 | mvndep | 3m 59s |  | Maven dependency ordering for branch |
| +1 💚 | mvninstall | 28m 49s |  | branch-3.4 passed |
| +1 💚 | compile | 9m 4s |  | branch-3.4 passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 |
| +1 💚 | compile | 8m 19s |  | branch-3.4 passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
| +1 💚 | checkstyle | 2m 3s |  | branch-3.4 passed |
| +1 💚 | mvnsite | 1m 35s |  | branch-3.4 passed |
| +1 💚 | javadoc | 1m 12s |  | branch-3.4 passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 |
| +1 💚 | javadoc | 1m 11s |  | branch-3.4 passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
| +1 💚 | spotbugs | 2m 19s |  | branch-3.4 passed |
| +1 💚 | shadedclient | 20m 38s |  | branch has no errors when building and testing our client artifacts. |
|  |  |  |  | _ Patch Compile Tests _ |
| +0 🆗 | mvndep | 0m 23s |  | Maven dependency ordering for patch |
| +1 💚 | mvninstall | 0m 50s |  | the patch passed |
| +1 💚 | compile | 10m 16s |  | the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 |
| +1 💚 | javac | 10m 16s |  | the patch passed |
| +1 💚 | compile | 9m 54s |  | the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
| +1 💚 | javac | 9m 54s |  | the patch passed |
| +1 💚 | blanks | 0m 0s |  | The patch has no blanks issues. |
| +1 💚 | checkstyle | 2m 7s |  | root: The patch generated 0 new + 26 unchanged - 1 fixed = 26 total (was 27) |
| +1 💚 | mvnsite | 1m 35s |  | the patch passed |
| +1 💚 | javadoc | 1m 9s |  | the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 |
| +1 💚 | javadoc | 1m 11s |  | the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
| +1 💚 | spotbugs | 2m 31s |  | the patch passed |
| +1 💚 | shadedclient | 20m 39s |  | patch has no errors when building and testing our client artifacts. |
|  |  |  |  | _ Other Tests _ |
| +1 💚 | unit | 16m 59s |  | hadoop-common in the patch passed. |
| +1 💚 | unit | 2m 13s |  | hadoop-aws in the patch passed. |
| +1 💚 | asflicense | 0m 43s |  | The patch does not generate ASF License warnings. |
|  |  | 160m 7s |  |  |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.47 ServerAPI=1.47 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7044/1/artifact/out/Dockerfile |
| GITHUB PR | #7044 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint |
| uname | Linux 228a70297df0 5.15.0-116-generic #126-Ubuntu SMP Mon Jul 1 10:14:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.4 / 76e462f |
| Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7044/1/testReport/ |
| Max. process+thread count | 2387 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7044/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

@steveloughran steveloughran merged commit 97abdb4 into apache:branch-3.4 Sep 16, 2024
3 checks passed
steveloughran added a commit to steveloughran/hadoop that referenced this pull request Oct 2, 2024
…upload attempt (apache#6938) (apache#7044)
steveloughran added a commit that referenced this pull request Oct 3, 2024
…upload attempt (#6938) (#7044) (#7094)