
chore(deps): Update aws-config crate #20663

Open

jszwedko wants to merge 4 commits into master from jszwedko/update-aws-config

Conversation

jszwedko (Member)

Also adds the aws-runtime crate as some types were moved there.

Closes: #20662
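
In Cargo.toml terms the change is roughly the following (versions are placeholders here, not the exact pins in this PR):

```toml
[dependencies]
aws-config = { version = "x.y", default-features = false }
# New: pulled in because some types formerly reachable through aws-config
# now live in aws-runtime.
aws-runtime = { version = "x.y", default-features = false }
```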

jszwedko added the no-changelog label Jun 13, 2024
jszwedko enabled auto-merge June 13, 2024 16:20
datadog-vectordotdev (bot)

Datadog Report

Branch report: jszwedko/update-aws-config
Commit report: cc7a433
Test service: vector

✅ 0 Failed, 7 Passed, 0 Skipped, 25.44s Total Time

balonik (Contributor) commented Jul 11, 2024

@jszwedko could you please check the failing checks so this can get merged?

jszwedko (Member, Author)

> @jszwedko could you please check the failing checks so this can get merged?

Apologies for the delay. I had looked a bit before; the AWS integration tests are failing, but I wasn't able to sort it out 😓 I'll try to take another pass soon.

balonik (Contributor) commented Aug 17, 2024

@jszwedko maybe the issue is with the integration test itself? I'm just guessing here, since I don't know what the mockwatchlogs image does, but it seems to be used to emulate AWS CloudWatch, presumably because back in the day (that container is 5 years old) localstack didn't support CloudWatch Logs.
Using localstack for all CloudWatch tests (the metrics tests already use localstack) could do the trick; see the sketch below.
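
Something along these lines is what I have in mind (untested sketch, not the repo's actual compose file; the service name and wiring are guesses):

```yaml
# Hypothetical compose service standing in for mockwatchlogs.
services:
  localstack:
    image: localstack/localstack
    environment:
      # emulate both CloudWatch Logs and CloudWatch metrics in one container
      - SERVICES=logs,cloudwatch
    ports:
      - "4566:4566" # localstack's single edge port
```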

jszwedko requested a review from a team as a code owner August 19, 2024 22:44
datadog-vectordotdev (bot) commented Aug 19, 2024

Datadog Report

Branch report: jszwedko/update-aws-config
Commit report: da60d06
Test service: vector

✅ 0 Failed, 7 Passed, 0 Skipped, 25.47s Total Time

jszwedko (Member, Author)

> @jszwedko maybe the issue is with the integration test itself? I'm just guessing here, since I don't know what the mockwatchlogs image does, but it seems to be used to emulate AWS CloudWatch, presumably because back in the day (that container is 5 years old) localstack didn't support CloudWatch Logs. Using localstack for all CloudWatch tests (the metrics tests already use localstack) could do the trick.

Good observation! I opened a PR to switch from mockwatchlogs to localstack here: #21114. The integration tests there are currently failing, but if we can fix that, it might unblock this. I'll try to look at it again soon, but I'm also happy to see someone else take a look if they have time.

balonik (Contributor) commented Sep 6, 2024

@jszwedko now that we've fixed the integration tests in another PR, could you please rebase this branch and see if it passes now?

jszwedko (Member, Author) commented Sep 6, 2024

> @jszwedko now that we've fixed the integration tests in another PR, could you please rebase this branch and see if it passes now?

👍 thanks for the bump on this. I merged in master 🤞

jszwedko (Member, Author) commented Sep 6, 2024

Looks like we are still seeing:

```
2024-09-06T20:47:02.214518Z  WARN request{request_id=1}: vector::sinks::util::retries: Retrying after error. error=CloudwatchError::DescribeLogStreams: dispatch failure internal_log_rate_limit=true
```

I'll try to dig into this later unless someone else gets to it first. It may just be a matter of a missing feature flag.
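
By "missing feature flag" I mean something like the following in Cargo.toml (the feature names here are a guess from memory and would need checking against the aws-config docs):

```toml
# Guessing at the shape of the fix: make sure an HTTPS connector feature is
# actually enabled when default features are turned off.
aws-config = { version = "x.y", default-features = false, features = [
    "behavior-version-latest",
    "rustls", # without some TLS feature, requests can fail at dispatch time
] }
```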

chlunde commented Sep 9, 2024

> Looks like we are still seeing:
>
> ```
> 2024-09-06T20:47:02.214518Z  WARN request{request_id=1}: vector::sinks::util::retries: Retrying after error. error=CloudwatchError::DescribeLogStreams: dispatch failure internal_log_rate_limit=true
> ```
>
> I'll try to dig into this later unless someone else gets to it first. It may just be a matter of a missing feature flag.

@jszwedko I wonder if the logging should be improved here; there's typically also a message in the AWS error response that tells you what's actually wrong, and Vector never logs it. That makes it harder to diagnose any issue with the AWS config, such as missing permissions. I haven't had the bandwidth to report a proper issue on this.
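
Roughly what I have in mind, as a sketch (not Vector's actual call site; `DisplayErrorContext` is the smithy helper that formats the full error source chain):

```rust
use aws_smithy_types::error::display::DisplayErrorContext;

// Hypothetical retry-logging call site: `err` is the error returned by a
// CloudWatch Logs call such as `client.describe_log_streams().send().await`.
fn log_retry_error(err: &dyn std::error::Error) {
    // DisplayErrorContext walks the whole `source()` chain, so the service
    // message (e.g. AccessDeniedException details) gets printed instead of
    // just the top-level "dispatch failure" label.
    tracing::warn!(message = "Retrying after error.", error = %DisplayErrorContext(err));
}
```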

jszwedko (Member, Author)

> @jszwedko I wonder if the logging should be improved here; there's typically also a message in the AWS error response that tells you what's actually wrong, and Vector never logs it. That makes it harder to diagnose any issue with the AWS config, such as missing permissions. I haven't had the bandwidth to report a proper issue on this.

I believe "dispatch failure" is an error returned by the SDK before it even makes the request, so there wouldn't be a response from AWS to log, but I could be wrong.
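
For what it's worth, the `SdkError` variants make that distinction visible; a rough sketch (the variants are the real enum, the handling is illustrative):

```rust
use aws_sdk_cloudwatchlogs::error::SdkError;
use aws_sdk_cloudwatchlogs::operation::describe_log_streams::DescribeLogStreamsError;

// Illustrative only: classify where a DescribeLogStreams failure happened.
fn classify<R>(err: &SdkError<DescribeLogStreamsError, R>) -> &'static str {
    match err {
        // The request never left the client (connector/TLS/DNS problems),
        // so there is no AWS response body to report.
        SdkError::DispatchFailure(_) => "failed before reaching AWS",
        // AWS answered with an error; the service message is in the payload.
        SdkError::ServiceError(_) => "AWS returned an error response",
        SdkError::TimeoutError(_) => "request timed out",
        SdkError::ConstructionFailure(_) => "request could not be built",
        SdkError::ResponseError(_) => "response received but not understood",
        // SdkError is non-exhaustive, so a catch-all arm is required.
        _ => "unknown failure class",
    }
}
```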

milas commented Sep 24, 2024

FYI `dispatch failure internal_log_rate_limit=true` doesn't appear to be new / introduced by this PR:

jszwedko (Member, Author)

> FYI `dispatch failure internal_log_rate_limit=true` doesn't appear to be new / introduced by this PR:

It is new in that the integration tests previously didn't hit it, but it does seem like it can potentially be hit in other circumstances too (thanks for the links!).

Labels
no-changelog: Changes in this PR do not need user-facing explanations in the release changelog
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[AWS] support AWS Pod Identity authentication - AWS SDK bump
5 participants