
Fix support for volume snapshots by setting snapshot id on volume creation. #129

Merged 2 commits into digitalocean:master on Mar 20, 2019

Conversation

oliviabarrick (Contributor)

This bug affects 1.0 as well; however, my cluster is on Kubernetes v1.12.2, so I've targeted 0.4.0.

@oliviabarrick force-pushed the release-0.4.0 branch 2 times, most recently from 63c8ac4 to 4e9c09b on March 1, 2019 08:18
```go
if contentSource != nil {
	snapshot := contentSource.GetSnapshot()
	if snapshot != nil {
		ll.Info("using snapshot as volume source")
```
Contributor:

Could you add the snapshot ID as a K/V pair to the log? For example:

```go
ll.WithField("snapshot_id", snapshot.GetId()).Info("using snapshot as volume source")
```
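
For context, the fix under review boils down to carrying the snapshot ID from the CSI content source into the DigitalOcean volume create request. Below is a minimal sketch of that idea against the godo client; the helper `createVolumeFromSnapshot` and its parameters are hypothetical, as the actual PR inlines this logic in the driver's `CreateVolume`:

```go
package driver

import (
	"context"
	"fmt"

	"github.com/digitalocean/godo"
)

// createVolumeFromSnapshot sketches the core of the fix: godo's
// VolumeCreateRequest has a SnapshotID field, and setting it tells
// DigitalOcean to populate the new volume from that snapshot rather
// than creating an empty one. Leaving it unset is what made restores
// produce blank volumes.
func createVolumeFromSnapshot(ctx context.Context, client *godo.Client, name, region, snapshotID string, sizeGB int64) (*godo.Volume, error) {
	req := &godo.VolumeCreateRequest{
		Name:          name,
		Region:        region,
		SizeGigaBytes: sizeGB,
	}
	if snapshotID != "" {
		// The essence of this PR: pass the snapshot ID through.
		req.SnapshotID = snapshotID
	}

	vol, _, err := client.Storage.CreateVolume(ctx, req)
	if err != nil {
		return nil, fmt.Errorf("creating volume from snapshot %q: %v", snapshotID, err)
	}
	return vol, nil
}
```

In the driver itself, the snapshot ID comes from `contentSource.GetSnapshot().GetId()`, as in the excerpt above.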

```go
Command: []string{
	"sh", "-c",
	"echo testcanary > /data/canary",
},
```
Contributor:

Why do we have these init containers? I couldn't figure out why we need them.

oliviabarrick (Contributor, Author):

Added a comment, but basically: waitForPod considers the pod successful once it enters the Running state. Regular containers enter the Running state once the health check passes, so by using an init container, the pod never enters the Running state if the assertion fails.
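
To make that gating concrete, here is a minimal sketch of such an assertion init container using the Kubernetes corev1 types; the function name, image, and mount details are illustrative assumptions, not the PR's exact test code:

```go
package e2e

import (
	corev1 "k8s.io/api/core/v1"
)

// canaryCheckInitContainer returns an init container that asserts the
// restored volume still holds the canary written before the snapshot
// was taken. If grep fails, the init container exits non-zero and the
// pod never reaches the Running phase, so a waitForPod-style check
// that waits for Running reliably reports the failure.
func canaryCheckInitContainer() corev1.Container {
	return corev1.Container{
		Name:  "canary-check",
		Image: "busybox",
		Command: []string{
			"sh", "-c",
			"grep -q testcanary /data/canary",
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "data", MountPath: "/data"},
		},
	}
}
```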

fatih (Contributor) commented Mar 4, 2019

Thanks @justinbarrick, this looks good, and thanks for the fix. I added a couple of fixes. Also, please open the PR against master rather than against the release branches.

Master is our development branch, and we update the release branches from master.

@oliviabarrick changed the base branch from release-0.4.0 to master on March 5, 2019 03:41
oliviabarrick (Contributor, Author):

Addressed comments and rebased onto master.

oliviabarrick (Contributor, Author):

@fatih Rebased. Is there anything else you need on this?

oliviabarrick (Contributor, Author):

@fatih I’d love to get this one merged. Anything I can do to help it over the finish line?

As it stands, restoring a volume from a snapshot with the DigitalOcean CSI driver is completely broken, forcing me to operate a fork.

fatih (Contributor) commented Mar 20, 2019

Sorry @justinbarrick, I was derailed by other work and hadn't had a chance to look at this again. Thanks for your patience and contribution.

@fatih fatih merged commit 4cf11bf into digitalocean:master Mar 20, 2019
jcodybaker pushed a commit that referenced this pull request Apr 25, 2019: Fix support for volume snapshots by setting snapshot id on volume creation.
jcodybaker pushed a commit that referenced this pull request Apr 25, 2019: Fix support for volume snapshots by setting snapshot id on volume creation.
jcodybaker pushed a commit that referenced this pull request Apr 26, 2019: Fix support for volume snapshots by setting snapshot id on volume creation.
jcodybaker added a commit that referenced this pull request Apr 26, 2019
* Cherry-pick: Add tagging support for Volumes via the new `--do-tag` flag
  [[GH-130]](#130)
* Cherry-pick: Fix support for volume snapshots by setting snapshot id on volume creation
  [[GH-129]](#129)
* Cherry-pick: Goreportcard fixes (typos, exported variables, etc.)
  [[GH-121]](#121)
* Cherry-pick: Rename the cluster role bindings for the `node-driver-registrar` to be
  consistent with the other role bindings.
  [[GH-118]](#118)
* Cherry-pick: Remove the `--token` flag for the `csi-do-node` driver. Drivers running on
  the node don't need the token anymore.
  [[GH-118]](#118)
* Cherry-pick: Don't check the volume limits on the worker nodes (worker nodes are not able
  to talk to the DigitalOcean API)
  [[GH-142]](#142)
* Cherry-pick: Update `godo` (DigitalOcean API package) version to v1.13.0
  [[GH-143]](#143)
* Cherry-pick: Fix race in snapshot integration test.
  [[GH-146]](#146)
* Cherry-pick: Add tagging support for Volume snapshots via the new `--do-tag` flag
  [[GH-145]](#145)
jcodybaker added a commit that referenced this pull request Apr 26, 2019, with the same changelog as above.