
Remove vsphere credentials from csi node daemonset #96

Merged
merged 1 commit into kubernetes-sigs:master on Dec 9, 2019

Conversation

@RaunakShah (Contributor) commented Nov 11, 2019

What this PR does / why we need it:
Changes the default deployment so that csi-vsphere.conf is no longer mounted as a volume into the node daemonset; the mount is only needed for topology-aware clusters.
Also fixes an issue where the node daemonset would fail if this volume was not mounted.

Which issue this PR fixes: this PR addresses #95
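For reference, the new node-startup behavior amounts to the following fallback. This is a minimal sketch in Go, not the driver's actual code: getVCFromEnv and the VSPHERE_VCENTER variable name are illustrative stand-ins for the "reading config params from env" step visible in the logs below.

package main

import (
	"log"
	"os"
)

// getVCFromEnv is a hypothetical stand-in for the driver's step of
// reading vCenter config params from environment variables.
func getVCFromEnv() (string, bool) {
	vc, ok := os.LookupEnv("VSPHERE_VCENTER") // illustrative variable name
	return vc, ok && vc != ""
}

// isTopologyAware sketches the decision made at node startup: if
// csi-vsphere.conf is not mounted and no vCenter is configured in the
// environment, assume a non-topology-aware cluster instead of failing
// (the old behavior). The real driver additionally requires Zone and
// Region in the config file before treating the cluster as topology aware.
func isTopologyAware(cfgPath string) bool {
	if _, err := os.Stat(cfgPath); err == nil {
		return true // config file mounted: treat as topology-aware
	}
	if _, ok := getVCFromEnv(); ok {
		return true
	}
	log.Println("Config file not provided to node daemonset. Assuming non-topology aware cluster")
	return false
}

func main() {
	_ = isTopologyAware("/etc/cloud/csi-vsphere.conf")
}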

Testing:
For non-topology-aware setups:

Logs showing node daemonset can successfully register with kubelet

{"log":"I1205 20:21:52.304096       1 service.go:88] configured: csi.vsphere.vmware.com with map[mode:node]\n","stream":"stderr","time":"2019-12-05T20:21:52.307895822Z"}
{"log":"time=\"2019-12-05T20:21:52Z\" level=info msg=\"identity service registered\"\n","stream":"stderr","time":"2019-12-05T20:21:52.308266734Z"}
{"log":"time=\"2019-12-05T20:21:52Z\" level=info msg=\"node service registered\"\n","stream":"stderr","time":"2019-12-05T20:21:52.308275623Z"}
{"log":"time=\"2019-12-05T20:21:52Z\" level=info msg=serving endpoint=\"unix:///csi/csi.sock\"\n","stream":"stderr","time":"2019-12-05T20:21:52.308279094Z"}
{"log":"I1205 20:21:54.591428       1 config.go:261] GetCnsconfig called with cfgPath: /etc/cloud/csi-vsphere.conf\n","stream":"stderr","time":"2019-12-05T20:21:54.591716655Z"}
{"log":"I1205 20:21:54.591838       1 config.go:265] Could not stat /etc/cloud/csi-vsphere.conf, reading config params from env\n","stream":"stderr","time":"2019-12-05T20:21:54.592128802Z"}
{"log":"E1205 20:21:54.592305       1 config.go:202] No Virtual Center hosts defined\n","stream":"stderr","time":"2019-12-05T20:21:54.592819448Z"}
{"log":"E1205 20:21:54.592936       1 config.go:269] Failed to get config params from env. Err: No Virtual Center hosts defined\n","stream":"stderr","time":"2019-12-05T20:21:54.594325039Z"}
{"log":"I1205 20:21:54.593568       1 node.go:367] Config file not provided to node daemonset. Assuming non-topology aware cluster\n","stream":"stderr","time":"2019-12-05T20:21:54.594349877Z"}

Create a PVC and Pod:

root@k8-master-642:~# kubectl get pvc
NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS               AGE
example-vanilla-block-pvc   Bound    pvc-924d23cd-163b-11ea-ad42-00505691fe6a   1Mi        RWO            example-vanilla-block-sc   42h
root@k8-master-642:~# kubectl get pod
NAME                        READY   STATUS    RESTARTS   AGE
example-vanilla-block-pod   1/1     Running   0          2m59s

For topology-aware setups:

Logs showing node daemonset can successfully register with kubelet

{"log":"I1206 00:57:06.057890       1 service.go:88] configured: csi.vsphere.vmware.com with map[mode:node]\n","stream":"stderr","time":"2019-12-06T00:57:06.075632954Z"}
{"log":"time=\"2019-12-06T00:57:06Z\" level=info msg=\"identity service registered\"\n","stream":"stderr","time":"2019-12-06T00:57:06.075952709Z"}
{"log":"time=\"2019-12-06T00:57:06Z\" level=info msg=\"node service registered\"\n","stream":"stderr","time":"2019-12-06T00:57:06.075968483Z"}
{"log":"time=\"2019-12-06T00:57:06Z\" level=info msg=serving endpoint=\"unix:///csi/csi.sock\"\n","stream":"stderr","time":"2019-12-06T00:57:06.075972504Z"}
{"log":"I1206 00:57:08.113064       1 config.go:261] GetCnsconfig called with cfgPath: /etc/cloud/csi-vsphere.conf\n","stream":"stderr","time":"2019-12-06T00:57:08.121567715Z"}
{"log":"I1206 00:57:08.114773       1 config.go:206] Initializing vc server 10.160.122.172\n","stream":"stderr","time":"2019-12-06T00:57:08.121621019Z"}
{"log":"I1206 00:57:08.114914       1 node.go:379] Config file provided with Zone and Region. Assuming topology aware cluster.\n","stream":"stderr","time":"2019-12-06T00:57:08.121628155Z”}

CSINode objects contain zone and region topology keys

root@k8s-master:~# kubectl describe csinodes
Name:         k8s-node1
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  storage.k8s.io/v1beta1
Kind:         CSINode
Metadata:
  Creation Timestamp:  2019-12-06T00:57:09Z
  Owner References:
    API Version:     v1
    Kind:            Node
    Name:            k8s-node1
    UID:             b2c6642d-a5a7-43f4-bcd6-9017cbb751cf
  Resource Version:  15588
  Self Link:         /apis/storage.k8s.io/v1beta1/csinodes/k8s-node1
  UID:               ef0a8c72-d613-4d1b-96d6-0fe8ce497daa
Spec:
  Drivers:
    Name:     csi.vsphere.vmware.com
    Node ID:  k8s-node1
    Topology Keys:
      failure-domain.beta.kubernetes.io/region
      failure-domain.beta.kubernetes.io/zone
Events:  <none>
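The topologyKeys above come from the node plugin's NodeGetInfo response: a topology-aware driver advertises the node's zone and region as AccessibleTopology segments, and kubelet records the segment keys on the CSINode object. Below is a minimal sketch using the CSI spec's Go bindings; the function name and the region/zone values are illustrative assumptions, not taken from this PR.

package main

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeGetInfoResponse sketches how a topology-aware node plugin reports
// its placement; kubelet copies the segment keys below into the CSINode
// object's topologyKeys, which is what the describe output above shows.
func nodeGetInfoResponse(nodeID, region, zone string) *csi.NodeGetInfoResponse {
	return &csi.NodeGetInfoResponse{
		NodeId: nodeID,
		AccessibleTopology: &csi.Topology{
			Segments: map[string]string{
				"failure-domain.beta.kubernetes.io/region": region,
				"failure-domain.beta.kubernetes.io/zone":   zone,
			},
		},
	}
}

func main() {
	_ = nodeGetInfoResponse("k8s-node1", "region-1", "zone-a")
}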

Create PVC and Pod:

root@k8s-master:~# kubectl get pvc
NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS               AGE
example-vanilla-block-pvc   Bound    pvc-5d61d5ba-9fe6-409a-be32-c773c4c77089   5Gi        RWO            example-vanilla-block-sc   16s
root@k8s-master:~# kubectl get pod
NAME                        READY   STATUS    RESTARTS   AGE
example-vanilla-block-pod   1/1     Running   0          21s

Release note:

The default deployment no longer mounts `csi-vsphere.conf` as a volume into the node daemonset.

@k8s-ci-robot added the cncf-cla: yes label on Nov 11, 2019
@k8s-ci-robot (Contributor) commented:

Hi @RaunakShah. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-ok-to-test and size/S labels on Nov 11, 2019
@RaunakShah (Contributor, Author) commented:

/assign @divyenpatel

@divyenpatel (Member) left a comment:

/lgtm

@k8s-ci-robot added the lgtm label on Nov 23, 2019
@divyenpatel (Member) commented:

/hold

@k8s-ci-robot added the do-not-merge/hold label on Nov 23, 2019
@divyenpatel (Member) commented:

@RaunakShah the manifest points to older images which do not have this fix. Do we plan to cut a new release on GitHub for this change?

@manojvs157 @SandeepPissay when can we cut a new release on GitHub for this fix? This issue was reported by @misterikkit as well.

@misterikkit commented:

Based on #86, it seems like there is a chicken-and-egg problem where the images can't be tagged and released until after the fix is merged. So there would need to be a follow-up PR to update the manifests?

@SandeepPissay (Contributor) commented:

@RaunakShah could you please describe the testing you have performed to validate that this does not break existing functionality? It would be nice if you could add it to the PR description. Thanks!

@manojvs157 commented:

> @RaunakShah the manifest points to older images which do not have this fix. Do we plan to cut a new release on GitHub for this change?
>
> @manojvs157 @SandeepPissay when can we cut a new release on GitHub for this fix? This issue was reported by @misterikkit as well.

Another forward-compatibility issue was also observed. Maybe let's collect all must-fix issues and then cut a release.

@RaunakShah did you run this fix through the topology-aware testbed?

@RaunakShah (Contributor, Author) commented:

@divyenpatel wow, that completely went over my head!

I agree that we can cut a release after merging this and other known issues/enhancements like #72.
What if I remove the diff in the YAMLs and leave the change in the source code? We can cut a release when we're ready.

@RaunakShah (Contributor, Author) commented:

@SandeepPissay @manojvs157 let me post the logs (for topology-aware and non-topology-aware setups); I somehow missed adding them to the description.

@divyenpatel (Member) commented:

> What if I remove the diff in the YAMLs and leave the change in the source code?

@RaunakShah that sounds like a good idea. Let's check in the needed code without updating the manifest. We will bump up the images in the manifest and remove the mounted config volume in a follow-up PR.

@k8s-ci-robot removed the lgtm label on Dec 5, 2019
@RaunakShah (Contributor, Author) commented:

@divyenpatel can you take a look?

@divyenpatel (Member) commented:

/ok-to-test

@k8s-ci-robot added the ok-to-test label and removed the needs-ok-to-test label on Dec 6, 2019
@divyenpatel (Member) commented:

/lgtm

@k8s-ci-robot added the lgtm label on Dec 6, 2019
@divyenpatel (Member) commented:

/hold cancel

@k8s-ci-robot removed the do-not-merge/hold label on Dec 6, 2019
@dougm (Member) left a comment:

/lgtm

@divyenpatel (Member) commented:

/approve

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: divyenpatel, RaunakShah

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label on Dec 9, 2019
@k8s-ci-robot merged commit 5b197c4 into kubernetes-sigs:master on Dec 9, 2019
flofourcade pushed a commit to flofourcade/vsphere-csi-driver that referenced this pull request Jan 28, 2020
flofourcade added a commit to flofourcade/vsphere-csi-driver that referenced this pull request Jan 28, 2020
add wrongly deleted lines from PR kubernetes-sigs#96