
OPNET-268: Configure-ovs Alternative #1440

Merged

Conversation

cybertron
Member

There are a number of limitations of the configure-ovs script. This is a proposed design to address them.

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Jul 13, 2023
@openshift-ci-robot

openshift-ci-robot commented Jul 13, 2023

@cybertron: This pull request references OPNET-268 which is a valid jira issue.

In response to this:

There are a number of limitations of the configure-ovs script. This is a proposed design to address them.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot requested review from dcbw and dougbtv July 13, 2023 21:38
### Goals

Once this design is implemented, deployers of OpenShift will be able to
explicitly configure br-ex to their exact specifications and will be able to


Will an explicit configuration of br-ex be mandatory moving forward or optional? If mandatory, we need to evaluate upgrade paths.


OK, I see below under the 'Proposal' section that this will be mandatory.

Member Author

It will be mandatory if you're using this new mechanism, but configure-ovs will still exist. I think manual configuration of br-ex is probably too big a hurdle for some deployers so I think we need to keep that easy path. I'm open to being convinced otherwise, but we'll need to discuss that with the OVNK folks.

Contributor

Well, we may need to document examples of doing it manually so users who do not exactly like the configure-ovs version of br-ex do not have to start from scratch.

Member Author

Yep, we're definitely going to want to document what a br-ex config using this mechanism should look like, probably including some more advanced options too.

Comment on lines +83 to +98
per-host interface configuration. Configuration for an OpenVSwitch bridge
named br-ex will be mandatory. The format for the configuration will be


Thinking about an upgrade path, it might be simpler than I had in mind. Deployers upgrading their clusters won't need to explicitly configure br-ex, as its configuration already exists on each node. The exception is when scaling out or reinstalling nodes post-upgrade; in that case deployers must create a valid br-ex configuration. If that is so, we will have to highlight it in the test plan and documentation.

### Workflow Description

When the user is populating install-config they will provide the necessary
configuration in the networking section (TODO: Is this the right place?).


networkConfig YAML section in install-config.yaml for IPI and agent-config.yaml for agent-based installer.

Member Author

Do you mean a new networkConfig section in install-config? We can't use the existing one for this functionality because we don't have OVS in the ramdisk.


You say we cannot use the existing one (referring to networkConfig, right?), yet the example below with br-ex is in the networkConfig block. I'm confused.

Member Author

Yeah, I reused the name but it's in a different section. I don't know if that's more or less confusing than coming up with a new name for a very similar feature.

Comment on lines 158 to 162
The name field will need to be something that uniquely identifies a given host.
I'm unsure if name is necessarily the best way to do that, but it is one valid
option. I reused the networkConfig name from the baremetal feature for
consistency. Perhaps something different would be preferable to avoid confusion
though?


The name field uniquely identifies a given host. Giving an example of master-0 node in this enhancement proposal seems reasonable.

Member Author

The issue is that there's no guarantee the name specified in install-config matches the eventual hostname of the node. At least on baremetal, hostnames are controlled by the DHCP server which is outside our control. It's probably fair to say the names need to match if you're going to do this, but I'm open to other options that would reduce the opportunity for misconfiguration too.


Good point on hostname change because of DHCP.

Contributor

Why is that different from what we have now? I understand we are already using the per-node NMState config that generated the NM connection files.

Member Author

That's baremetal-specific, we're trying to make this more platform-agnostic. In baremetal we can know the hostnames ahead of time, but I don't know if that's true for all platforms. I suspect there may be some randomness to the hostnames in non-baremetal cases.

Comment on lines 164 to 185
Note that although the configuration can be provided on a per-host basis, it is
not mandatory to do so. YAML anchors can be used to duplicate a single
configuration across multiple nodes. For example:

networking:
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: 192.0.2.0/24
  [...]
  hostConfig:
  - name: master-0
    networkConfig: &BOND
      interfaces:
      - name: br-ex
        ...etc...
  - name: master-1
    networkConfig: *BOND
  - name: master-2
    networkConfig: *BOND

This reduces duplication in install-config and the chance of errors due to
unintended differences in node config.


This reads as extraneous to this enhancement.

Member Author

I pre-emptively included this because it was a point of discussion on the baremetal networkConfig design. In that case we agreed to do common config across nodes as a followup, but since it turns out YAML can already handle that for us I thought I'd mention it in case anyone was wondering about that here too.

This could be implemented almost entirely with machine-configs, with only a simple
service on the host to apply the provided NMState files. This is the approach
the [prototype implementation](https://docs.google.com/document/d/1Zp1J2HbVvu-v4mHc9M594JLqt6UwdKoix8xkeSzVL98/edit#heading=h.fba4j9nvp0nl)
used since per-host configuration was not available any other way. However,


Other downsides:

  • Machine-configs only apply at a much later install stage, no?
  • Additional node reboot when machine-config created at day-2
  • No NMstate syntax/config validation, could render nodes inaccessible

Member Author

  • Machine-configs are applied by ignition, the same as this.
  • That's a good point. If you scaled out a machine config pool on day 2 you'd have to reboot all of the nodes in that pool because you'd have to add the new node configs to all of them. That's a pretty big limitation.
  • True, although in this case I envision the machine-configs writing NMState YAML instead of raw nmconnection files, so you'd still have NMState validating them before it applies them and rolling back if they break things. You couldn't catch syntax errors in install-config though because the installer would just see opaque machine-configs, not their contents.

Actually, the scaleout case is something we need to consider. At install time we can handle mapping things in the installer, but on day 2 we'll need a manual process, much like baremetal has. Unfortunately I don't think we can use kubernetes-nmstate for scaleout either because it won't be running soon enough to apply the bridge config. :-/

Member Author

Note that scaleout works fine with my latest machine-config prototype. You just update the MC with the new node configs and it gets rolled out reboot-less so it will be there when the new nodes start. I discuss how this works below now.


### Risks and Mitigations

Because this gives the deployer a great deal of control over the bridge


On the positive side, should the deployer apply a configuration that renders the node inaccessible, the trio (NetworkManager, nmstate and kubernetes-nmstate) are expected to recover by rolling back to the previous network config state.

Member Author

Good point. I'll mention that as a mitigation to the risk.

Comment on lines +240 to +326
* There has been resistance to using the NMState API in OpenShift in the past
because it is not compliant with the OpenShift API Guidelines. There is work
underway in NMState to address this, but is that mandatory for this to be
implemented?


Could you please share some references to both statements?

Member Author

I'll add a link here, but for my own reference it first came up in this thread: #1267 (comment)

There is an initial PR to smooth integration with OCP here: nmstate/nmstate#2338 It doesn't actually change any of the API yet, but I believe it was a first step in that direction.

Comment on lines +252 to +336
* If changes are made to the bridge configuration on day 2, will OVNKubernetes
handle those correctly without a reboot? If not, how do we orchestrate a
reboot?


OVN-Kubernetes will not handle the replacement of the gateway interface (enp2s0 in the example above). This is an existing limitation where OVN-Kubernetes installs an OpenFlow rule in br-ex that states the gateway interface is the OVS port ID of enp2s0 (port 1) and where attaching a new interface, say enp3s0, its OVS port ID will be different hence no egress traffic forwarding.

Monitoring br-ex port attachments and recalculating the OpenFlow rule may help. This is outside the scope of this enhancement, though.

Member Author

Ack, thanks. I'm just going to include your comment in the enhancement.

Contributor

I recall once trying this and even a noop change on br-ex through nmstate operator caused undesired changes in the underlying configuration. I am not 100% sure if I am recalling incorrectly or if this might have been fixed since then.

Member Author

I'm wondering if that's because configure-ovs uses different profile names from NMState. It's possible we ended up with multiple profiles doing the same thing.

Maybe I just need to test this. I can deploy my prototype and change the MTU or something like that and see what happens.

Contributor

At the very least I recall nmstate changing the names of things, like ovs ports and such. Yeah, I am curious about testing this again.



### Non-Goals

Complete replacement of configure-ovs. This mechanism is intended for more


What are the cases where configure-ovs.sh would still be used?

Member Author

Probably most single-nic deployments where the logic for creating br-ex is simple. I don't anticipate this new mechanism being widely used in AWS, Azure, or GCP, for instance. In fact, I'm not sure how it will work in non-on-prem envs.


Agreed. Can we add that as an example to this doc?

Member Author

Sure, done.

@cybertron
Member Author

/cc @jcaamano @trozet

@openshift-ci openshift-ci bot requested review from jcaamano and trozet July 14, 2023 16:30
@cybertron
Member Author

New revision with updates based on the discussion so far. I also left some more thoughts on the variation where this is mostly implemented with machine-configs. I'm actually liking that more and more as I think about it because it uses existing configuration mechanisms so we aren't going to have to invent something completely new.

@cybertron
Member Author

I've pushed an update based on the implementation I did in https://github.com/cybertron/machine-config-operator/tree/configure-ovs-alt

It handles both initial deployment and scaling in a sane fashion and required minimal code changes. The main concern is whether we can use hostname-based mapping of configs on non-baremetal platforms since I think at least some of them add a random identifier into the node names. If we can't use hostnames, does anyone have suggestions on what we could use?

enabled: false
ipv6:
enabled: false
- name: br-ex
Member

I don't know if it is relevant for this design, but in case the configure-ovs script today also needs to configure ovn-bridge-mappings you may find this useful. We have recently introduced a nice API for it in nmstate: https://bugzilla.redhat.com/show_bug.cgi?id=2209690

With this, you could configure the mapping easily from this CR:

hostConfig:
  ovn:
    bridge-mappings:
      - localnet: FOO
        bridge: br-ex

Member Author

Yeah, that's definitely a nice fit for this. It would be a pain to introduce an OpenShift-specific API to do that, but with NMState we get it for "free". :-)

### Goals

Once this design is implemented, deployers of OpenShift will be able to
explicitly configure br-ex to their exact specifications and will be able to
Contributor

Well, we may need to document examples of doing it manually so users who do not exactly like the configure-ovs version of br-ex do not have to start from scratch.

Comment on lines 91 to 95
NMState YAML, which will be provided to each node via Ignition. Around the
same point in the boot process that configure-ovs would run, we will run
NMState to apply the provided configuration. This will only be done on first
boot because subsequent changes will be made using the Kubernetes-NMState
operator.
Contributor

Can you add that configure-ovs will be skipped if NMState configuration is detected?

Also a question: is the configured NMState going to be "converted" automatically into an NNCP for kubernetes-nmstate so users can modify it, or will they have to create the NNCP from scratch?

Member Author

Yep, will do.

Right now it would need to be done manually. There isn't really a way to convert it at install time because kubernetes-nmstate won't be installed yet. I suppose there may be some way to auto-create some manifests that could be applied later though.
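
To illustrate that manual step (purely a sketch; the policy name and node selector here are hypothetical, not part of this proposal), the user would paste their day-1 NMState config into a NodeNetworkConfigurationPolicy once kubernetes-nmstate is installed:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br-ex-master-0        # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/hostname: master-0
  desiredState:
    # Paste the same interfaces/dns-resolver/routes content that was
    # provided for this node on day 1, then edit it as needed.
    interfaces:
    - name: br-ex
      type: ovs-bridge
      state: up
      bridge:
        port:
        - name: enp2s0
        - name: br-ex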

[...]
hostConfig:
- name: master-0
  networkConfig:
Contributor

nmpolicy is already implemented in nmstatectl, so we can "generalize" the NMState config if there is a property to identify the interface.

What I mean is that in some scenarios we may be able to have a single NMState config per cluster, so maybe we can extend the networking part and do it per cluster (it will still be configured per node on the backend by MCO, though).

Member Author

I suspect we would use the results of nodeip-configuration to generate the policy. I'm not sure exactly how that would work, but it's definitely something I'd like to look into. It's not my primary goal right now because I want to handle the advanced cases where configure-ovs struggles, but I do see nmpolicy as a potential way to completely replace configure-ovs.

On a more general note, a per-cluster config (maybe with the possibility of overriding on a per-node basis) may be a solution for platforms where we don't have a way to map per-node configs ahead of time (like if the hostname is not predictable). I don't know how important this feature will be in those cases, but I have heard some discussion of needing to override the MTU at deploy time, so this might provide a solution to that. I think with my current machine-config prototype it would be pretty easy to extend to include a per-cluster config, so maybe I'll give that a try and update the design if it works well.
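
For reference, here is a rough sketch of the kind of nmpolicy capture that could generalize a single config across nodes, loosely adapted from the upstream nmpolicy examples; nothing here is committed to by this enhancement:

capture:
  default-gw: routes.running.destination=="0.0.0.0/0"
  base-iface: interfaces.name==capture.default-gw.routes.running.0.next-hop-interface
desiredState:
  interfaces:
  - name: br-ex
    type: ovs-bridge
    state: up
    bridge:
      port:
      # The physical interface is resolved per node from the captured default route.
      - name: "{{ capture.base-iface.interfaces.0.name }}"
      - name: br-ex
  - name: br-ex
    type: ovs-interface
    state: up
    ipv4:
      enabled: true
      dhcp: true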

Comment on lines 158 to 162
The name field will need to be something that uniquely identifies a given host.
I'm unsure if name is necessarily the best way to do that, but it is one valid
option. I reused the networkConfig name from the baremetal feature for
consistency. Perhaps something different would be preferable to avoid confusion
though?
Contributor

Why is that different from what we have now? I understand we are already using the per-node NMState config that generated the NM connection files.

Comment on lines 179 to 188
hostConfig:
- name: master-0
  networkConfig: &BOND
    interfaces:
    - name: br-ex
      ...etc...
- name: master-1
  networkConfig: *BOND
- name: master-2
  networkConfig: *BOND
Contributor

Maybe it's better to add something new to the install-config syntax to configure the network cluster-wide for these scenarios (also taking into account the nmpolicy comment above).

clusterConfig:
  networkConfig:
    interfaces:
    - name: br-ex

Also, with this it would be possible to generate an NNCP from it, so you can see at day 2 what was done by the boot process, although we will have to be super careful if we introduce nmpolicies, since captures have to take values from the cache files.

Member Author

Agreed. I'm seriously tempted to drop the install-config part of this entirely for the initial implementation since we can do it all with machine-configs, but even if we do that I think a cluster-wide config makes a lot of sense.

I think day 2 would be handled similar to how it is today, which is the user would need to manually copy their config into an NNCP. I want to eliminate that step, but I think we'll need to get KNMState into the core payload first.

Comment on lines 202 to 203
This could be implemented almost entirely with machine-configs, with only a simple
service on the host to apply the provided NMState files. This is the approach
Contributor

The systemd service is already in place as part of the nmstate installation; we just need to put the YAML files in /etc/nmstate/.

Member Author

It is, but I wonder if that does what we need. I only want this to apply once, and only the configuration that applies to the specific host. If we implemented this with machine-configs, all of the configs for all of the nodes will be present in the directory. I also only want this to be applied at first boot, and then if changes are needed they should be made by kubernetes-nmstate. We could allow updating via machine-config, but then we throw out all of the benefits that kubernetes-nmstate gives us.

Contributor
@qinqon Sep 19, 2023

Stuff under /etc/nmstate is applied only once; nmstate then moves the file so it is not applied again at reboot. Also, I understand that the machine-config will dump there just the one file for that node.

If we are missing features for /etc/nmstate we can ask for them.

Member Author

Unfortunately there's no way to tell MCO to only deploy the config specific to the node. We have to write all of the configs to all of the nodes and have logic on the node itself to pick the right one. I've added some discussion of how we might integrate with the NMState service to do this in the Variation section.

Comment on lines +267 to +299
configuration for OVNK, they will have the ability to configure it incorrectly,
which may cause problems either on initial deployment or at any point after.
We will want to ensure that must-gather is collecting enough information about
the bridge configuration for us to determine whether it is compliant with our
guidelines, and we must have clear instructions on what is required for br-ex.
Contributor

To reduce this risk we should document nmstate examples to create br-ex manually for different scenarios (single-nic, multi-nic, bond + bridge + vlans, etc...)

We have to be super careful with MTUs though.

Member Author

Yep, probably similar to what we have for kubernetes-nmstate today. We'll include examples of some common configs.
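
As a taste of what such documentation could contain, here is a rough bond-under-br-ex sketch; the interface names, bond mode, and MTU are placeholders, and this has not been validated against the final implementation:

interfaces:
- name: bond0
  type: bond
  state: up
  link-aggregation:
    mode: active-backup
    port:
    - enp2s0
    - enp3s0
- name: br-ex
  type: ovs-bridge
  state: up
  copy-mac-from: bond0
  bridge:
    port:
    - name: bond0
    - name: br-ex
- name: br-ex
  type: ovs-interface
  state: up
  mtu: 1500
  ipv4:
    enabled: true
    dhcp: true
  ipv6:
    enabled: false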

Comment on lines +278 to +309
OpenShift has historically avoided per-host configuration for cluster nodes.
This is a large departure from that philosophy, but that is unavoidable.
Certain aspects of networking (notably static IPs) are by nature host-specific
and cannot be handled any other way.
Contributor

Well, except for the static IP stuff, it's possible that most of the config will be common across nodes. Maybe we can introduce a clusterConfig section in ignition to configure networking when it's the same on all the nodes, but keep hostConfig so that it is the fallback when there is no clusterConfig (let's not merge them).

Comment on lines 282 to 319

Using a machine-config to do initial deployment and a
NodeNetworkConfigurationPolicy to make day 2 changes will require a manual sync
of the NMState configuration. Unfortunately this is unavoidable because the
Kubernetes-NMState operator is not part of the core payload so it (and its
CRDs) cannot be used for initial deployment.

Contributor

But we can change k-nmstate operator to read stuff under /etc/nmstate and convert it to NNCPs.

Member Author
@cybertron Sep 18, 2023

That's an interesting idea. We couldn't do that with the baremetal feature because only the nmconnection files made it to the host, but in this case we'll have the full NMState config available. It would be very interesting to add a feature to discover the existing config for the operator.

reboot?

Reply from @cgoncalves:
> OVN-Kubernetes will not handle the replacement of the gateway interface (enp2s0 in the example above). This is an existing limitation where OVN-Kubernetes installs an OpenFlow rule in br-ex that states the gateway interface is the OVS port ID of enp2s0 (port 1) and where attaching a new interface, say enp3s0, its OVS port ID will be different hence no egress traffic forwarding.
Contributor

Just note that we can configure ovs_db per interface in nmstate; I don't know if installing an OpenFlow rule is possible with that.

Member Author
@cybertron left a comment

Thanks for all the comments! I will update this and the prototype accordingly.

### Goals

Once this design is implemented, deployers of OpenShift will be able to
explicitly configure br-ex to their exact specifications and will be able to
Member Author

Yep, we're definitely going to want to document what a br-ex config using this mechanism should look like, probably including some more advanced options too.

Comment on lines 91 to 95
NMState YAML, which will be provided to each node via Ignition. Around the
same point in the boot process that configure-ovs would run, we will run
NMState to apply the provided configuration. This will only be done on first
boot because subsequent changes will be made using the Kubernetes-NMState
operator.
Member Author

Yep, will do.

Right now it would need to be done manually. There isn't really a way to convert it at install time because kubernetes-nmstate won't be installed yet. I suppose there may be some way to auto-create some manifests that could be applied later though.

[...]
hostConfig:
- name: master-0
  networkConfig:
Member Author

I suspect we would use the results of nodeip-configuration to generate the policy. I'm not sure exactly how that would work, but it's definitely something I'd like to look into. It's not my primary goal right now because I want to handle the advanced cases where configure-ovs struggles, but I do see nmpolicy as a potential way to completely replace configure-ovs.

On a more general note, a per-cluster config (maybe with the possibility of overriding on a per-node basis) may be a solution for platforms where we don't have a way to map per-node configs ahead of time (like if the hostname is not predictable). I don't know how important this feature will be in those cases, but I have heard some discussion of needing to override the MTU at deploy time, so this might provide a solution to that. I think with my current machine-config prototype it would be pretty easy to extend to include a per-cluster config, so maybe I'll give that a try and update the design if it works well.

enabled: false
ipv6:
enabled: false
- name: br-ex
Member Author

Yeah, that's definitely a nice fit for this. It would be a pain to introduce an OpenShift-specific API to do that, but with NMState we get it for "free". :-)

Comment on lines 158 to 162
The name field will need to be something that uniquely identifies a given host.
I'm unsure if name is necessarily the best way to do that, but it is one valid
option. I reused the networkConfig name from the baremetal feature for
consistency. Perhaps something different would be preferable to avoid confusion
though?
Member Author

That's baremetal-specific, we're trying to make this more platform-agnostic. In baremetal we can know the hostnames ahead of time, but I don't know if that's true for all platforms. I suspect there may be some randomness to the hostnames in non-baremetal cases.

This reduces duplication in install-config and the chance of errors due to
unintended differences in node config.

When this mechanism is in use, a flag will be set to indicate to configure-ovs
Member Author

Oops, did I not reply to this before?

The answer is yes. configure-ovs will still run in the default case, but if you've provided custom br-ex config it will prevent configure-ovs from running.

Comment on lines 202 to 203
This could be implemented almost entirely with machine-configs, with only a simple
service on the host to apply the provided NMState files. This is the approach
Member Author

It is, but I wonder if that does what we need. I only want this to apply once, and only the configuration that applies to the specific host. If we implemented this with machine-configs, all of the configs for all of the nodes will be present in the directory. I also only want this to be applied at first boot, and then if changes are needed they should be made by kubernetes-nmstate. We could allow updating via machine-config, but then we throw out all of the benefits that kubernetes-nmstate gives us.

This could be implemented almost entirely with machine-configs, with only a simple
service on the host to apply the provided NMState files. This is the approach
the [prototype implementation](https://docs.google.com/document/d/1Zp1J2HbVvu-v4mHc9M594JLqt6UwdKoix8xkeSzVL98/edit#heading=h.fba4j9nvp0nl)
used since per-host configuration was not available any other way. However,
Member Author

Note that scaleout works fine with my latest machine-config prototype. You just update the MC with the new node configs and it gets rolled out reboot-less so it will be there when the new nodes start. I discuss how this works below now.

Comment on lines +267 to +299
configuration for OVNK, they will have the ability to configure it incorrectly,
which may cause problems either on initial deployment or at any point after.
We will want to ensure that must-gather is collecting enough information about
the bridge configuration for us to determine whether it is compliant with our
guidelines, and we must have clear instructions on what is required for br-ex.
Member Author

Yep, probably similar to what we have for kubernetes-nmstate today. We'll include examples of some common configs.

Comment on lines 282 to 319

Using a machine-config to do initial deployment and a
NodeNetworkConfigurationPolicy to make day 2 changes will require a manual sync
of the NMState configuration. Unfortunately this is unavoidable because the
Kubernetes-NMState operator is not part of the core payload so it (and its
CRDs) cannot be used for initial deployment.

Member Author
@cybertron Sep 18, 2023

That's an interesting idea. We couldn't do that with the baremetal feature because only the nmconnection files made it to the host, but in this case we'll have the full NMState config available. It would be very interesting to add a feature to discover the existing config for the operator.

@cybertron
Member Author

I've updated the document to reflect the fact that I would like to implement the machine-config method as an MVP, and then work on improved integration and more features as followups. I think that will let us get a useful feature into the hands of users more quickly, and we can address any usability shortcomings based on real-world feedback.

Comment on lines +322 to +342
That doesn't have to block this feature being implemented, but it's something
we should look into as a followup with the OVNK team.
Contributor

There are some features that depend on day-2 configuration changes, like MTU migration. configure-ovs generates ephemeral configuration on each boot so it picks up any day-2 config changes to its source config. I think it needs to be considered within the scope of the enhancement.

Member Author

In the MTU case, is configure-ovs responsible for changing the MTU or does it just pick up the changed MTU of the underlying interface? I was thinking that you had to manually modify the interface as part of the process, which you would just do with Kubernetes-NMState here.

It would be good if we can come up with a list of day 2 features that might require bridge changes so we know what use cases to target.

Contributor

It just picks up the changed MTU of the underlying interface that a user is responsible for configuring correctly.

> which you would just do with Kubernetes-NMState here

If you mean changing br-ex and children interfaces, then we need to make sure this works with Kubernetes-NMState. That's what I meant that needs to be considered within the scope of the enhancement.

I am not aware of any other features that depend directly on changing network configuration on day 2 (perhaps this could also help when enabling dualstack), but I do know some other features depend on the MTU migration itself, like enabling IPsec or the sdn<->ovn migration.

Comment on lines +252 to +336
* If changes are made to the bridge configuration on day 2, will OVNKubernetes
handle those correctly without a reboot? If not, how do we orchestrate a
reboot?
Contributor

I recall once trying this and even a noop change on br-ex through nmstate operator caused undesired changes in the underlying configuration. I am not 100% sure if I am recalling incorrectly or if this might have been fixed since then.

path: /etc/nmstate/cluster.yaml

The only difference is there will be a single file applied to each node of the
Contributor

Is it worth mentioning how we would distinguish a first boot from subsequent boots?

Contributor
@jcaamano Oct 4, 2023

I read above that the nmstate service moves the files away after they are applied. So we wouldn't need to worry about this if we used that so maybe worth mentioning within that option.

Member Author

I was looking at that as an implementation detail. My current implementation just writes a file after we process the configuration once (which we'll still need to do even using the default NMState service because we need to make sure it only applies the configuration for the current node).

The file is convenient though because configure-ovs can look for it to determine whether we're using this new method.

Contributor

Understood. I guess I was being overzealous in crosschecking if that would be something a user might inadvertently revert but I guess there is no need to worry about that.

Contributor

Also occurred to me that if we use the nmstate service and that moves the files around, MCO might complain.

Comment on lines 221 to 229
The other is to use the NMState service already included in the OS image to
apply the configurations. One issue with this is that it will apply everything
in the /etc/nmstate directory rather than just the host-specific file.
To get around that, we could still implement a custom service, but instead of
applying the configurations, it would only be responsible for copying the
correct file to /etc/nmstate so the regular service can apply it. This has
the benefit of not reinventing the NMState service and still gives us full
control over what gets applied to the host, and also leaves us a way to signal
to ovs-configuration that we've already handled the bridge setup.
Contributor

In this case, would we be concerned with any configuration already present in /etc/nmstate, and its order with respect to the copied config if relevant?

Member Author

We generally don't support mixing network configuration mechanisms. Still, it would probably be safest to verify there is no conflict with files already in /etc/nmstate to avoid overwriting a file we shouldn't. In that case we should probably just fail the service.
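
A very rough sketch of how such a copy service could be delivered via machine-config follows (the unit name, the /etc/nmstate/openshift staging directory, the stamp file, and the %H-based file naming are all assumptions rather than a committed design); it fails instead of overwriting a file that is already present:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 10-nmstate-copy-host-config
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - name: nmstate-copy-host-config.service
        enabled: true
        contents: |
          [Unit]
          Description=Stage this host's NMState config for the system nmstate service
          Before=nmstate.service
          # Hypothetical stamp file so this only runs on first boot.
          ConditionPathExists=!/var/lib/nmstate-openshift-configured

          [Service]
          Type=oneshot
          # Fail rather than overwrite a file that already exists in /etc/nmstate.
          ExecStart=/bin/sh -c 'test ! -e /etc/nmstate/%H.yml'
          ExecStart=/bin/sh -c 'cp /etc/nmstate/openshift/%H.yml /etc/nmstate/%H.yml'
          ExecStart=/usr/bin/touch /var/lib/nmstate-openshift-configured

          [Install]
          WantedBy=multi-user.target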

named br-ex will be mandatory. The format for the configuration will be
NMState YAML, which will be provided to each node via Ignition. Around the
same point in the boot process that configure-ovs would run, we will run
NMState to apply the provided configuration. This will only be done on first
Contributor

Would nmstate generate persistent NetworkManager configuration in /etc/NetworkManager/system-connections? If so, how does this persistent configuration interact with day-2 config changes from the nmstate operator? Is it overwritten?

Member Author

It would. To modify the config on day 2 you would copy the entire day 1 config into an NNCP and make your changes from that. That way the generated config from NMState should be identical.

There's some more discussion below about possibly automating the sync of day 1 config to day 2. Right now we have a similar situation with the baremetal day 1 feature and just require a manual copy, but for this we're in better shape because we have access to the original NMState config.

In a perfect world I'd love to have kubernetes-nmstate as part of the core payload and we'd configure everything day 1 or 2 with NNCPs, but based on past discussion that's going to be a much bigger hill to climb so I'd prefer not to in this first version.

Contributor

So let's say that you copy the entire day 1 config into an NNCP and make changes to that NNCP on day 2. Would that result in the nmstate operator writing back updated config to /etc/NetworkManager/system-connections? Or would the node, upon reboot, use the original day 1 configuration in /etc/NetworkManager/system-connections, and then once the nmstate operator is running again, would the configuration be reconciled?

Comment on lines 60 to 61
* As an OpenShift developer, I want to simplify the configure-ovs.sh script
by providing a better way to do advanced bridge configuration.
Contributor

I guess we cannot remove any currently supported use case from configure-ovs if we want to keep backward compatibility. So I guess this is more about a new way of supporting advanced use cases without complicating configure-ovs more than it already is.

Member Author

Good point. Until/unless we replace configure-ovs and provide a migration path we probably can't actually remove anything.

@cybertron
Member Author

Unfortunately it looks like applying an NNCP to br-ex, even if there are no changes from the day 1 configuration, still breaks connectivity. That's a pretty significant problem for this design, which means one of two things will need to happen:

  1. We change the scope so that this only applies to day1 and you are still not allowed to change br-ex using the operator.
  2. We find a way to make updates with the operator work. Unless there's some quick fix (which I doubt), this option likely means missing 4.15.

@jcaamano
Contributor

> Unfortunately it looks like applying an NNCP to br-ex, even if there are no changes from the day 1 configuration, still breaks connectivity. That's a pretty significant problem for this design, which means one of two things will need to happen:
>
>   1. We change the scope so that this only applies to day1 and you are still not allowed to change br-ex using the operator.
>   2. We find a way to make updates with the operator work. Unless there's some quick fix (which I doubt), this option likely means missing 4.15.

I was just going to update here that I tried this today and I did not get very far. :/ I would like to touch base with @qinqon on this next week.

>   1. We change the scope so that this only applies to day1 and you are still not allowed to change br-ex using the operator.

You would still be allowed to make day-2 configuration changes on the resulting NM configuration that should live in /etc/NetworkManager/system-connections, right? That is probably not much more difficult than handling the source nmstate manifests.

@qinqon
Contributor

qinqon commented Oct 23, 2023

> Unfortunately it looks like applying an NNCP to br-ex, even if there are no changes from the day 1 configuration, still breaks connectivity. That's a pretty significant problem for this design, which means one of two things will need to happen:
>
>   1. We change the scope so that this only applies to day1 and you are still not allowed to change br-ex using the operator.
>   2. We find a way to make updates with the operator work. Unless there's some quick fix (which I doubt), this option likely means missing 4.15.

@cybertron can you dump here the original nmstate for br-ex and what change did you try? I think the nmstate team has fixed some issues related to ovs/ovn, like https://docs.rs/nmstate/latest/nmstate/struct.OvsBridgeConfig.html#structfield.allow_extra_patch_ports

@cybertron
Member Author

Here's the NMState YAML I used. This is the same both on day 1 and on day 2 in the NNCP.

- name: enp2s0
  type: ethernet
  state: up
  ipv4:
    enabled: false
  ipv6:
    enabled: false
- name: br-ex
  type: ovs-bridge
  state: up
  copy-mac-from: enp2s0
  ipv4:
    enabled: false
    dhcp: false
  ipv6:
    enabled: false
    dhcp: false
  bridge:
    port:
    - name: enp2s0
    - name: br-ex
- name: br-ex
  type: ovs-interface
  state: up
  ipv4:
    enabled: true
    address:
    - ip: "192.168.111.110"
      prefix-length: 24
  ipv6:
    enabled: false
    dhcp: false
dns-resolver:
  config:
    server:
    - 192.168.111.1
routes:
  config:
  - destination: 0.0.0.0/0
    next-hop-address: 192.168.111.1
    next-hop-interface: br-ex

@jcaamano
Contributor

We tested day-2 configuration changes with Kubernetes NMState and got them more or less working with this fix:
openshift/machine-config-operator#3998
More potential fixes might be needed in Kubernetes NMstate for smoother operation.

There is still the open question of how to quantify the possible disruption of making day-2 configuration changes with Kubernetes NMstate on the default network when compared to the procedure that could be done today which implies a rolling reboot.

I think that the only day-2 change documented today is the MTU migration procedure:
https://docs.openshift.com/container-platform/4.13/networking/changing-cluster-network-mtu.html

So we could try it out. Overall I think we need to cover that procedure for users using the mechanism described in this enhancement.
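
For example, under this model the per-node MTU step of that procedure might look roughly like the following NNCP; the interface names and the 9000 value are placeholders, and as noted above this still needs to be validated with Kubernetes-NMState:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br-ex-mtu
spec:
  desiredState:
    interfaces:
    - name: enp2s0        # the physical uplink attached to br-ex
      type: ethernet
      state: up
      mtu: 9000
    - name: br-ex         # the ovs-interface carrying the node IP
      type: ovs-interface
      state: up
      mtu: 9000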

@cgoncalves

FYI, we have 7 test cases for day-2 operations on the br-ex attached default gateway interface (kernel bond) and enslaved VLAN/SR-IOV VF interfaces: https://docs.google.com/document/d/1QSbSziss64vLnGiPhFbH2jL8gJY-26VdhSny0t_MNPE/

This is part of a 4.14.z epic: https://issues.redhat.com/browse/CNF-7461

@openshift-bot

Inactive enhancement proposals go stale after 28d of inactivity.

See https://github.com/openshift/enhancements#life-cycle for details.

Mark the proposal as fresh by commenting /remove-lifecycle stale.
Stale proposals rot after an additional 7d of inactivity and eventually close.
Exclude this proposal from closing by commenting /lifecycle frozen.

If this proposal is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 13, 2023
@openshift-bot

Stale enhancement proposals rot after 7d of inactivity.

See https://github.com/openshift/enhancements#life-cycle for details.

Mark the proposal as fresh by commenting /remove-lifecycle rotten.
Rotten proposals close after an additional 7d of inactivity.
Exclude this proposal from closing by commenting /lifecycle frozen.

If this proposal is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 20, 2023
@openshift-bot

Rotten enhancement proposals close after 7d of inactivity.

See https://github.com/openshift/enhancements#life-cycle for details.

Reopen the proposal by commenting /reopen.
Mark the proposal as fresh by commenting /remove-lifecycle rotten.
Exclude this proposal from closing again by commenting /lifecycle frozen.

/close

@openshift-ci openshift-ci bot closed this Dec 28, 2023
Contributor

openshift-ci bot commented Dec 28, 2023

@openshift-bot: Closed this PR.

In response to this:

Rotten enhancement proposals close after 7d of inactivity.

See https://github.com/openshift/enhancements#life-cycle for details.

Reopen the proposal by commenting /reopen.
Mark the proposal as fresh by commenting /remove-lifecycle rotten.
Exclude this proposal from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@dhellmann
Contributor

(automated message) This pull request is closed with lifecycle/rotten. The associated Jira ticket, OPNET-265, has status "In Progress". Should the PR be reopened, updated, and merged? If not, removing the lifecycle/rotten label will tell this bot to ignore it in the future.

@openshift-ci openshift-ci bot added the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Apr 4, 2024
@cybertron
Member Author

/remove-lifecycle rotten

I do still need to update this to reflect the new template though.

@openshift-ci openshift-ci bot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Apr 4, 2024
@cybertron
Member Author

Rebased and updated to pass lint. I filled out the TBDs for reviewers and approvers, but I'm happy to change those if someone else would be more appropriate.

Comment on lines +252 to +336
* If changes are made to the bridge configuration on day 2, will OVNKubernetes
handle those correctly without a reboot? If not, how do we orchestrate a
reboot?


reboot?

Reply from @cgoncalves:
> OVN-Kubernetes will not handle the replacement of the gateway interface (enp2s0 in the example above). This is an existing limitation where OVN-Kubernetes installs an OpenFlow rule in br-ex that states the gateway interface is the OVS port ID of enp2s0 (port 1) and where attaching a new interface, say enp3s0, its OVS port ID will be different hence no egress traffic forwarding.


Suggested change
> OVN-Kubernetes will not handle the replacement of the gateway interface (enp2s0 in the example above). This is an existing limitation where OVN-Kubernetes installs an OpenFlow rule in br-ex that states the gateway interface is the OVS port ID of enp2s0 (port 1) and where attaching a new interface, say enp3s0, its OVS port ID will be different hence no egress traffic forwarding.
> OVN-Kubernetes will not handle the replacement of the gateway interface (e.g. enp2s0). This is an existing limitation where OVN-Kubernetes installs an OpenFlow rule in br-ex that states the gateway interface is the OVS port ID of enp2s0 (port 1) and where attaching a new interface, say enp3s0, its OVS port ID will be different hence no egress traffic forwarding.

Comment on lines +165 to +166
system NMState service. If there is already a file in /etc/nmstate with the
same name, the service will fail in order to avoid overwriting important
Contributor

I wonder if we should check just for a file of the same name or just any file:

  • Do we consider the method described here to be the only supported mechanism to deploy nmstate config on the host?
  • What if there was some configuration already there from a system on a previous version that was upgraded to this version and user wants to start using this mechanism?

What I mean here is that if there are other files we can't really be sure whether there's any conflict. So I feel this should be a completely exclusive mechanism, and it should be flagged as such in the docs.

Member Author

Agreed, and this is already our recommendation for kubernetes-nmstate because it is problematic to have multiple separate configurations attempting to configure the same host. I will add to this section to be explicit that when using the new mechanism for configuration we expect that to be the only network configuration used.

When this mechanism is in use, a flag will be set to indicate to configure-ovs
that it should not attempt to configure br-ex.

Day 2 changes will be handled by Kubernetes-NMState. For the initial
Contributor

Two questions here:

  • Do we prevent in any way processing day-2 changes on the MachineConfigs described in this procedure? If not, would that be an alternative mechanism to deploy day-2 changes if user is willing to put up with a reboot? I guess the answer is yes, to allow for the scale out scenario.
  • Do we prevent day-2 changes with Kubernetes-NMState for deployments that are still relying on configure-ovs?

Member Author

  • A reboot won't happen on day 2 changes to these files because we are adding the /etc/nmstate/openshift path to the reboot-less update list in MCO. We won't otherwise prevent changes to these files, but changes to existing nodes will have no effect because these configs are only applied on first boot.
  • We have a warning in the kubernetes-nmstate docs that modification of br-ex and the underlying interface is not allowed. We'll need to modify that to allow changes if this new feature is used instead of configure ovs.

changes:

- DNS updates
- MTU updates
Contributor

Member Author

Thanks for the heads up. I will make sure this is handled in the implementation patch.

Added a section on how migration will work, and clarified a number
of points based on feedback from reviewers.
@cybertron
Member Author

I think this should address all of the comments so far. Let me know what you think.

I'll also be updating the implementation patch today based on what we've discussed here and in the sync meeting.

@cgoncalves

LGTM. Thank you!

@jcaamano
Contributor

/lgtm

@qinqon
Contributor

qinqon commented Apr 18, 2024

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Apr 18, 2024
Contributor
@yuqi-zhang left a comment

Functionally I think the proposal makes sense. Most of the MCO's concerns are around having a proper API workflow around this, which as I understand is in long-term design considerations. Just leaving some notes based on our discussions on slack so we keep those in mind

NMState YAML, which will be provided to each node via Ignition. Around the
same point in the boot process that configure-ovs would run, we will run
NMState to apply the provided configuration. This will only be done on first
boot because subsequent changes will be made using the Kubernetes-NMState
Contributor

As a note, "firstboot only" configuration is a pattern in the MCO (for e.g. disks) so it's fine to have it this way, but technically we'd like any machineconfig based changes to take effect, if possible


During initial deployment, the user will provide the installer with a
machine-config manifest containing base64-encoded NMState YAML for each node
in the cluster. This manifest may look something like:
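
(For illustration only, a minimal sketch of such a manifest, reusing the /etc/nmstate/openshift staging path and per-hostname file names discussed elsewhere in this thread; the exact schema is up to the implementation:)

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 10-br-ex-master
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/nmstate/openshift/master-0.yml
        mode: 0644
        overwrite: true
        contents:
          # base64 of the NMState YAML defining br-ex for this node
          source: data:text/plain;charset=utf-8;base64,<base64-encoded NMState YAML for master-0>
      - path: /etc/nmstate/openshift/master-1.yml
        mode: 0644
        overwrite: true
        contents:
          source: data:text/plain;charset=utf-8;base64,<base64-encoded NMState YAML for master-1>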
Contributor

this pattern should work but is somewhat exploiting the MCO to do a "per-machine" config, which is a pattern we don't have atm. We would like the ability to do so eventually so we can just keep that in mind


We have settled on an approach and are not currently pursuing other options.

### API Extensions
Contributor

Discussed on slack, but it feels like we should have a higher level API for this instead of using MachineConfigs as the configuration point


Since we don't want to trigger a reboot of the entire cluster before each
scaleout operation, we will add the NMState configuration directory to the
list of paths for which the machine-config-operator will not trigger a reboot.
Contributor

Just a note that this isn't presently directly supported in the new API mentioned below, but we can add an exception for now

@knobunc
Contributor

knobunc commented Apr 18, 2024

/approve

Contributor

openshift-ci bot commented Apr 18, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: knobunc

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 18, 2024
Contributor

openshift-ci bot commented Apr 18, 2024

@cybertron: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-merge-bot openshift-merge-bot bot merged commit 6da502e into openshift:master Apr 18, 2024
2 checks passed