
[JENKINS-70132] Remove anonymous volumes when removing the container #286

Merged
merged 4 commits into jenkinsci:master from felipecrs:rm-volumes on Dec 1, 2022

Conversation

@felipecrs (Contributor) commented Nov 10, 2022

For example, the following Jenkinsfile:

pipeline {
  agent {
    docker {
      image 'docker:dind'
    }
  }
}

will create a container based on docker:dind, an image that declares a VOLUME; Docker creates such a volume automatically (this is called an anonymous volume).

However, because of the way Jenkins removes the container after the build finishes, the anonymous volume is left dangling on the agent.

Over time this can cause issues: these volumes can consume a large amount of disk space, and Jenkins currently has no mechanism to delete them other than doing so manually.

This change aligns the plugin with the behavior of docker run --rm, which deletes not only the container itself but also its anonymous volumes.
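For illustration, here is a rough command-line sketch of the difference (the container name is made up, and this assumes a local Docker daemon):

# Before: only the container was removed, so the anonymous VOLUME from docker:dind was left behind
$ docker create --name demo docker:dind
$ docker rm -f demo
$ docker volume ls -qf dangling=true

# After: passing -v removes the anonymous volumes together with the container, like docker run --rm does
$ docker create --name demo docker:dind
$ docker rm -fv demo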

Fixes JENKINS-70132

@felipecrs (Contributor, Author)

For more context, adding --rm to the args parameter ensures these anonymous volumes are cleaned up, but it has an unwanted side effect that makes the build fail:

java.io.IOException: Failed to rm container '8556d3358aa2dce6927dc847d44af2b2071f42324c080ee65745ff3f6b00133d'.
	at org.jenkinsci.plugins.docker.workflow.client.DockerClient.rm(DockerClient.java:201)
	at org.jenkinsci.plugins.docker.workflow.client.DockerClient.stop(DockerClient.java:187)
	at org.jenkinsci.plugins.docker.workflow.WithContainerStep.destroy(WithContainerStep.java:110)
	at org.jenkinsci.plugins.docker.workflow.WithContainerStep.access$400(WithContainerStep.java:77)
	at org.jenkinsci.plugins.docker.workflow.WithContainerStep$Callback.finished(WithContainerStep.java:402)
	at org.jenkinsci.plugins.workflow.steps.BodyExecutionCallback$TailCall.onSuccess(BodyExecutionCallback.java:118)
	at org.jenkinsci.plugins.workflow.cps.CpsBodyExecution$SuccessAdapter.receive(CpsBodyExecution.java:377)
	at com.cloudbees.groovy.cps.Outcome.resumeFrom(Outcome.java:73)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:166)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:136)
	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:275)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:185)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:403)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
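This presumably happens because --rm tells the Docker daemon to delete the container on its own as soon as it stops, so the plugin's subsequent docker rm no longer finds it. A rough reproduction of the same sequence outside Jenkins (image and command are arbitrary; this assumes the daemon removes the container before the final rm runs):

$ cid=$(docker run -d --rm alpine sleep 600)
$ docker stop "$cid"     # --rm makes the daemon delete the container once it stops
$ docker rm -f "$cid"    # roughly what the plugin runs next; the container is already gone, so it errors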

@felipecrs changed the title from "Remove anonymous volumes when removing the container" to "[JENKINS-70132] Remove anonymous volumes when removing the container" on Nov 22, 2022
@felipecrs (Contributor, Author)

@jglick @rsandell sorry for the unsolicited ping; I found your names through #280, which is somewhat related to running Docker inside a Docker agent.

Would you mind taking a look at this PR? I have been running a modified version of this plugin with this change for two weeks already, and it has been working without any issue.

I'm asking because I see several other PRs that have been open for years, but I believe this one fixes a critical issue.

@jglick (Member) commented Nov 22, 2022

Sounds right. Ideally this would have test coverage.

I am not maintaining this plugin and my standing advice is to uninstall it. #280 was special in that it was fixing something that worked “before”.

@jglick (Member) commented Nov 22, 2022

(You would need to check whether this CLI flag was a recent addition.)

@felipecrs (Contributor, Author)

(You would need to check whether this CLI flag was a recent addition.)

I'll dig up that information. One moment.

@felipecrs (Contributor, Author)

I tested all the way back to 1.2.0, which dates from 2014-08-20. I did not test any further back, but I think we can consider it safe.

$ docker run --rm docker:1.2.0 docker rm --help
Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]

Remove one or more containers

  -f, --force=false      Force the removal of a running container (uses SIGKILL)
  -l, --link=false       Remove the specified link and not the underlying container
  -v, --volumes=false    Remove the volumes associated with the container

@felipecrs (Contributor, Author)

Something to note, though, is that from 19.03.5 to 19.03.6 the description of the flag was changed from

Remove the volumes associated with the container

To

Remove anonymous volumes associated with the container

I'm checking whether there was some change in the implementation as well, but given that the description changed in a patch-level release, I expect no functional change was introduced.
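For reference, the same kind of check used earlier against docker:1.2.0 can show the two descriptions side by side (assuming these image tags are still pullable):

$ docker run --rm docker:19.03.5 docker rm --help | grep volumes
$ docker run --rm docker:19.03.6 docker rm --help | grep volumes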

@felipecrs (Contributor, Author) commented Nov 22, 2022

Found it; it's just a description update:

Therefore I think we are good to go with this change, i.e. there are no concerns related to the Docker CLI version.

@felipecrs (Contributor, Author)

my standing advice is to uninstall it

I'm a bit curious about your suggestion. Is there any replacement for this plugin? I mean, for using a Docker agent in declarative pipelines.

@jglick (Member) commented Nov 22, 2022

Is there any replacement for this plugin?

#105 etc.

@felipecrs (Contributor, Author)

I see. Thank you.

I'm working to add a test for this.

@jglick added the bug label on Nov 22, 2022
@felipecrs marked this pull request as draft on November 22, 2022 20:13
@felipecrs marked this pull request as ready for review on November 22, 2022 20:37
@jglick merged commit 75d68c0 into jenkinsci:master on Dec 1, 2022
@felipecrs deleted the rm-volumes branch on December 2, 2022 12:04
@felipecrs (Contributor, Author)

@jglick thanks a lot for merging this!

@jglick (Member) commented Dec 2, 2022

You are welcome, but if you care about this plugin, please consider https://www.jenkins.io/doc/developer/plugin-governance/adopt-a-plugin/ since there is a considerable backlog of unevaluated PRs, many of which lack test coverage or otherwise do not look completely safe to release without a maintainer actively checking for reports of regressions and committing to following up.

@felipecrs (Contributor, Author)

Yes, that's fair. I'll evaluate it with care.
