
[occm] Support loadbalancer.openstack.org/flavor-name instead of only loadbalancer.openstack.org/flavor-id #2600

Open
Sinscerly opened this issue May 23, 2024 · 9 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@Sinscerly

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

What happened:

Reading the documentation for cloud-provider-openstack, I found it is possible to specify a loadbalancer flavor by ID. It would be handy and simpler if a flavor could also be selected by name (when no flavor-id is specified).

What you expected to happen:

That a loadbalancer flavor is selected by name when the loadbalancer.openstack.org/flavor-name annotation is used.
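For illustration, a Service using the proposed annotation might look like the fragment below. The flavor-name key is the proposal, not an existing annotation, and the flavor name and UUID placeholder are made up:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Existing behaviour: select the Octavia flavor by UUID.
    # loadbalancer.openstack.org/flavor-id: "<flavor-uuid>"
    # Proposed: select the flavor by its (unique) name instead.
    loadbalancer.openstack.org/flavor-name: "standard"
spec:
  type: LoadBalancer
  ports:
    - port: 80
```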

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label May 23, 2024
@dulek
Contributor

dulek commented May 23, 2024

I think Octavia allows multiple flavors to share the same name. We could probably just fail when more than one flavor has the requested name, but that increases the complexity of debugging a bit.

Anyway I'm not totally opposed to this.

@Sinscerly
Author

Hi @dulek

It is indeed possible to have flavors with the same name, but since an OpenStack admin creates them, I would guess the chance of duplicates is not that high. Failing the creation of the loadbalancer would then be a good idea, or falling back to the default by not setting the flavor at all (i.e. not using the name when no matching flavor can be found).

@zetaab
Member

zetaab commented Jun 8, 2024

At least in our OpenStack, these flavors are not visible to normal users. How would we solve that?

@Sinscerly
Author

At least in our OpenStack, these flavors are not visible to normal users. How would we solve that?

Do you currently provide users with an ID? If so, a name would make it easier for them.

@zetaab
Member

zetaab commented Jun 9, 2024

Yes, we provide them with an ID. I do not see how a name could make it easier from the API perspective: https://docs.openstack.org/api-ref/load-balancer/v2/#create-a-load-balancer takes flavor_id, NOT flavor_name. So the name would somehow have to be converted to an ID first, and at least we do not have an API which provides that mapping.

@Sinscerly
Author

Yes, we provide them with an ID. I do not see how a name could make it easier from the API perspective: https://docs.openstack.org/api-ref/load-balancer/v2/#create-a-load-balancer takes flavor_id, NOT flavor_name. So the name would somehow have to be converted to an ID first, and at least we do not have an API which provides that mapping.

Is it possible in your environment to list the loadbalancer flavors (through this endpoint: https://docs.openstack.org/api-ref/load-balancer/v2/#list-flavors)? In our environment this is supported, so that would be my best guess for making it work. Using names would make things easier and more understandable for the people defining the loadbalancer.
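As a rough sketch of the approach (in Go, since that is the project's language): after fetching the flavor list from the list-flavors endpoint, resolving a name to an ID could fail on both unknown and duplicate names. The flavor struct and all IDs/names below are illustrative stand-ins, not the actual cloud-provider-openstack or gophercloud types:

```go
package main

import "fmt"

// flavor mirrors the two fields this sketch needs from Octavia's
// GET /v2.0/lbaas/flavors response; the real client type would come
// from the OpenStack SDK, so treat this struct as a stand-in.
type flavor struct {
	ID   string
	Name string
}

// resolveFlavorID turns a flavor name into a flavor ID, failing when the
// name is unknown or shared by several flavors (the "fail on duplicates"
// behaviour discussed earlier in this thread).
func resolveFlavorID(flavors []flavor, name string) (string, error) {
	var matches []flavor
	for _, f := range flavors {
		if f.Name == name {
			matches = append(matches, f)
		}
	}
	switch len(matches) {
	case 0:
		return "", fmt.Errorf("no loadbalancer flavor named %q", name)
	case 1:
		return matches[0].ID, nil
	default:
		return "", fmt.Errorf("flavor name %q is ambiguous (%d matches)", name, len(matches))
	}
}

func main() {
	// A made-up flavor list, as the list-flavors endpoint might return it.
	listed := []flavor{
		{ID: "id-1", Name: "small"},
		{ID: "id-2", Name: "large"},
	}
	id, err := resolveFlavorID(listed, "large")
	fmt.Println(id, err)
}
```

The resolved ID would then be passed to the create-a-load-balancer call as flavor_id, exactly as the flavor-id annotation does today.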

@zetaab
Member

zetaab commented Jun 10, 2024

Right, I did not know about that endpoint. It seems to work for me as well, so it should be open to normal OpenStack users too.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 8, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 8, 2024