
Performance-Tuning enhancement proposal. #1540

Merged
1 commit merged into openshift:master on Mar 22, 2024

Conversation

alanconway (Contributor)

Covers delivery reliability (log loss) and sundry output tuning (reconnect, compression, batching)
I'm happy that this proposal is usable and outlines a feasible implementation.
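
As a rough illustration of the per-output tuning the summary above mentions (reconnect, compression, batching), here is a minimal Go sketch. All type and field names (OutputTuningSpec, Compression, MaxWriteBytes, MinRetryDuration, MaxRetryDuration) are assumptions for this example, not the API the enhancement defines.

```go
package main

import (
	"fmt"
	"time"
)

// OutputTuningSpec groups optional tuning knobs for a single log output.
// Hypothetical sketch only: the names are illustrative, not the proposal's API.
type OutputTuningSpec struct {
	Compression      string        // payload compression, e.g. "none" or "gzip"
	MaxWriteBytes    int64         // upper bound on a single batched write
	MinRetryDuration time.Duration // initial reconnect back-off
	MaxRetryDuration time.Duration // cap on the reconnect back-off
}

func main() {
	// A throughput-oriented example for a remote output.
	t := OutputTuningSpec{
		Compression:      "gzip",
		MaxWriteBytes:    10 * 1024 * 1024,
		MinRetryDuration: time.Second,
		MaxRetryDuration: 30 * time.Second,
	}
	fmt.Printf("tuning: %+v\n", t)
}
```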

jcantrill (Contributor) left a comment

While I don't wish to add a field for every Vector tuning option, I don't think we can capture all tuning in the way described here. I believe we are really going to need to allow a user to explicitly define parameters like blocking behavior or max events. I don't see any option other than explicit tuning, and we have customers who are already experiencing this shortfall.
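
A minimal sketch of what such explicit knobs might look like, assuming hypothetical names (MaxEvents, WhenFull, Block, DropNewest) that are not taken from the proposal:

```go
package main

import "fmt"

// WhenFullPolicy is a hypothetical, explicit statement of what the collector
// does when its buffer fills: apply back-pressure or drop events.
type WhenFullPolicy string

const (
	Block      WhenFullPolicy = "Block"      // never drop; block the source
	DropNewest WhenFullPolicy = "DropNewest" // never block; drop incoming events
)

// ExplicitTuning exposes the behavior directly instead of deriving it from a
// higher-level setting. Field names are assumptions for illustration only.
type ExplicitTuning struct {
	MaxEvents int            // maximum events per batch sent to the output
	WhenFull  WhenFullPolicy // blocking vs. dropping behavior when full
}

func main() {
	fmt.Printf("%+v\n", ExplicitTuning{MaxEvents: 1000, WhenFull: Block})
}
```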

9 resolved review threads on enhancements/cluster-logging/performance-tuning.md (now outdated)
alanconway (Contributor, Author)

@jcantrill Great feedback, thanks. I'll have to do some rewriting to address it; see the next push.

alanconway (Contributor, Author) left a comment

@jcantrill @periklis please check if I've addressed your concerns.

5 resolved review threads on enhancements/cluster-logging/performance-tuning.md (now outdated)
periklis (Contributor)

LGTM

dhellmann (Contributor)

#1555 is changing the enhancement template in a way that will cause the header check in the linter job to fail for existing PRs. If this PR is merged within the development period for 4.16, you may override the linter if the only failures are caused by issues with the headers (please make sure the markdown formatting is correct). If this PR is not merged before 4.16 development closes, please update the enhancement to conform to the new template.

alanconway force-pushed the performance-tuning branch 2 times, most recently from e8e62c1 to a530ea4, on February 21, 2024 at 22:37
alanconway (Contributor, Author)

Hopefully the final draft: updated to be compatible with the v2 API proposal and @jcantrill's PRs, using AtLeastOnce/AtMostOnce instead of fast/safe. With the new terms everything is easier to explain, and I've redirected the focus to reliability rather than performance, which addresses some of @cahartma's concerns. I've also added a "fallback" position to force use of persistence, as suggested by @kattz-kawa.
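
To make the new terms concrete, here is a hedged Go sketch of how AtLeastOnce/AtMostOnce could map to buffering behavior, including a fallback switch that forces persistence. The function and the returned policy strings are assumptions for illustration, not the mapping specified in the enhancement.

```go
package main

import "fmt"

// DeliveryMode names the reliability intent for an output.
type DeliveryMode string

const (
	// AtLeastOnce favors reliability: block and buffer rather than drop logs
	// when the output is slow or down.
	AtLeastOnce DeliveryMode = "AtLeastOnce"
	// AtMostOnce favors availability: keep collecting and drop logs rather
	// than apply back-pressure to the source.
	AtMostOnce DeliveryMode = "AtMostOnce"
)

// bufferPolicy derives a hypothetical buffer configuration from the delivery
// mode. forcePersistence stands in for the "fallback" switch mentioned above.
func bufferPolicy(mode DeliveryMode, forcePersistence bool) (whenFull, bufferType string) {
	switch mode {
	case AtLeastOnce:
		return "block", "disk"
	default: // AtMostOnce
		bufferType = "memory"
		if forcePersistence {
			bufferType = "disk"
		}
		return "drop-newest", bufferType
	}
}

func main() {
	wf, bt := bufferPolicy(AtLeastOnce, false)
	fmt.Println(wf, bt) // block disk
}
```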

jcantrill (Contributor) left a comment

/hold

openshift-ci bot added the do-not-merge/hold label (indicates that a PR should not merge because someone has issued a /hold command) on Feb 22, 2024
alanconway (Contributor, Author)

/retest

alanconway (Contributor, Author)

/approve

openshift-ci bot commented Mar 22, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alanconway

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci bot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Mar 22, 2024
jcantrill (Contributor)

/hold cancel
/lgtm

openshift-ci bot removed the do-not-merge/hold label on Mar 22, 2024
openshift-ci bot added the lgtm label (indicates that a PR is ready to be merged) on Mar 22, 2024
openshift-ci bot commented Mar 22, 2024

@alanconway: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

openshift-merge-bot merged commit 21ea555 into openshift:master on Mar 22, 2024
2 checks passed