
"docker-compose up" not rebuilding container that has an underlying updated image #4337

Closed
twelve17 opened this issue Jan 16, 2017 · 27 comments

@twelve17

twelve17 commented Jan 16, 2017

This one is a bit lengthy to explain, so I have created a test project to reproduce this behavior with.

As I understand it from the docker-compose up documentation, the container should be rebuilt if an image has changed. However, it seems here that docker-compose up is not noticing that a particular image changed, and thus is re-using an existing container which has pointers to an older image. At least, that is the closest I can get to a theory. More information follows the version and info output below.

docker version:

Client:
 Version:      1.12.5
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   7392c3b
 Built:        Fri Dec 16 02:42:17 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.5
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   7392c3b
 Built:        Fri Dec 16 02:42:17 2016
 OS/Arch:      linux/amd64

docker info:

docker info
Containers: 5
 Running: 0
 Paused: 0
 Stopped: 5
Images: 32
Server Version: 1.12.5
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 85
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-59-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.682 GiB
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
 127.0.0.0/8

I am including the contents of the README.md here to help explain the problem in detail.

Use Case

  • One simple nodejs project with a Dockerfile.
  • One local NPM dependency used by the above project (copied to container via Dockerfile). The project refers to the dependency via a local path.
  • The nodejs project has one web route (/) that prints the version of the local npm dependency from its package.json. This is used to verify the results of the test case procedure.
  • The docker-compose.yml file uses this volume technique to overlay the host machine's source tree on top of the container's source tree and then overlaying the node_modules from the container on top of the first volume. This allows changing of sources on the host machine while at the same time using the node_modules that were built for the container's platform.
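
For reference, a minimal sketch of the docker-compose.yml volume configuration this technique implies (service name, port, and paths are illustrative, inferred from the container shown later in this issue, and may not match the repo exactly):

version: '2'
services:
  service1:
    build: .
    ports:
      - "8000:8000"
    volumes:
      # host source tree overlays the container's source tree
      - .:/home/app/service1
      # anonymous volume keeps the image's node_modules on top of the bind mount
      - /home/app/service1/node_modules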

Steps to Reproduce

  1. Clone this repo.
  2. Clean up any previous containers and images related to this repo's project via docker rm and docker rmi.
  3. Check out the test2_run1 tag. This state represents the project using version 1.0.0 of the local NPM dependency.
  4. Do a docker-compose build. All steps should run without any cache usage if step 2 was followed correctly. Note the version of the local NPM dependency during the npm install command, e.g. +-- my-npm@1.0.0.
  5. Do a docker-compose up. Browse to http://localhost:8000. The page should report version 1.0.0.
  6. Stop the running containers. (Ctrl-C on the terminal from which the up command was issued.)
  7. Check out the test2_run2 tag. This introduces a small change to the NPM's index.js file, and a version bump in its package.json to 1.0.1.
  8. Do a docker-compose build. Only the instructions up to COPY ./my-npm ... should use a cache. (E.g., the docker output prints ---> Using cache for that instruction.) All subsequent steps should be run by docker. This is because the changes introduced in step 7 to the NPM package should have invalidated the cache for the COPY ./my-npm ... command, and, as a result, subsequent steps too. Confirm that during the npm install command, the new version of the NPM is printed in the summary tree output, e.g. +-- my-npm@1.0.1.
  9. Do a docker-compose up. Browse to http://localhost:8000. The page should report version 1.0.1.
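
Condensed into commands, the two runs look roughly like this (assuming the repo is cloned and old containers/images have been removed):

git checkout test2_run1
docker-compose build
docker-compose up        # page at http://localhost:8000 reports 1.0.0
# Ctrl-C to stop, then:
git checkout test2_run2
docker-compose build     # npm install runs again and installs my-npm@1.0.1
docker-compose up        # expected to report 1.0.1 at http://localhost:8000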

Expected behavior: Page in step 9 should report 1.0.1. That is, a change in the local npm should be reflected in the container via docker-compose up.

Actual behavior: Page in step 9 reports 1.0.0.

Note that docker itself is re-building images as expected. The observed issue is not that docker is re-using a cached image, as the output shows it re-running NPM install and showing the new version of the local NPM dependency. The issue is that docker-compose is not seeing that the underlying images that comprise the dctest_service1 container have been updated.

In fact, running bash in the container allows us to see that the container has the updated my-npm module files, but the node_modules version is stale:

  # docker exec -it dctest_service1_1 bash
  app@6bf2671b75c6:~/service1$ grep version  my-npm/package.json  node_modules/my-npm/package.json
  my-npm/package.json:  "version": "1.0.1",
  node_modules/my-npm/package.json:  "version": "1.0.0"
  app@6bf2671b75c6:~/service1$

Workaround: Use docker rm to remove the dctest_service1 container. Then re-run docker-compose up, which will re-create the container using the existing images. Notable in this step is that no underlying images are re-built. In re-creating the container, docker-compose seems to figure out to use the newer volume that has the updated node_modules.
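
In command form, the workaround is roughly the following (container name as shown in the docker exec example above; adjust to your project):

# after stopping the stack (Ctrl-C or docker-compose stop)
docker rm dctest_service1_1
docker-compose up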

See the output directory for the output printed during the first run (steps 4 and 5) and the second run (steps 8 and 9).

@twelve17 twelve17 changed the title docker-compose up not rebuilding container that has an underlying updated image "docker-compose up" not rebuilding container that has an underlying updated image Jan 16, 2017
@shin-

shin- commented Jan 17, 2017

Thank you for the detailed reproduction steps. I'll look into this further soon, but I just wanted to note that a simpler workaround would be to run docker-compose up with the --force-recreate flag.

@gornostal

I'm seeing this issue too.
Have you been able to find the root cause?

I'm wondering if there is another workaround. --force-recreate is not a great option in my case.

@rhyek

rhyek commented Jul 26, 2017

I am having a similar issue where I'm adding new dependencies to my node application during development, which changes package.json, but when running docker-compose up or docker-compose up --build, the npm install step of my app service's Dockerfile runs but does not install the new packages (or maybe it does, but according to the OP the resulting image for that step isn't being used? That seems incredibly odd; not sure if I understood that correctly). I have to rmi all images very frequently during development, which is very annoying and time-consuming.

Using --force-recreate for some reason uses a cached image for npm install so this seems worse?

@iangneal

I started working on this issue and was able to reproduce the bug. I have a fix, but I'm not sure if it's in the desired direction.

This issue seems to be caused by merge_volume_bindings when the container is recreated. Compose always merges the volumes from the previous container without checking their age (and in this case they are out of date) -- I don't understand why this is the default behavior. My fix is simply to remove the call to merge_volume_bindings within _get_container_create_options, which resolves this bug.

My question: Is there a way to actually verify the age of a volume given a VolumeSpec and/or get the most up-to-date volume? Or would this fix be acceptable as-is? I'm confused about what the expected behavior of merge_volume_bindings is, but to me it always seems to preserve the old volumes even if they are out of date.

@shin-, let me know if you have any input.

@shin- shin- added this to the 1.19.0 milestone Dec 12, 2017
@matekb

matekb commented Jan 3, 2018

I have the same problem. Is there any plan for when a fix will be released?

@cadavre

cadavre commented Jan 8, 2018

+1 from me. @Dahca, could you explain how to apply the hack-fix with merge_volume_bindings?

@matekb

matekb commented Jan 9, 2018

I think this should be a high-priority issue. For me, the only thing that worked was to manually delete all docker images and containers and then rebuild everything. This is of course not a practical workflow, and it more or less makes docker-compose useless with node projects.

@shin-

shin- commented Jan 9, 2018

[image]

@StefanoGuerrini

StefanoGuerrini commented Jan 9, 2018

I have the same issue.
At the moment I have worked around this with:

docker-compose -f docker-compose.yml rm service1
docker-compose -f docker-compose.yml build service1
docker-compose -f docker-compose.yml up service1

It works.

@shin-

shin- commented Jan 10, 2018

Hi everyone,

I've spent some time looking more closely at this issue today. The behavior I observed is the expected one: anonymous volumes are preserved when containers are recreated, which is a documented feature.

There are a few considerations:

  1. Is this volume needed? If you want that data to be replaced anytime you recreate the service, you might want to remove that entry in your Compose file entirely.
  2. Do you only need to refresh that data sometimes? Run docker-compose rm -v <service> to get rid of anonymous volumes, then re-up the container.
  3. Is there a need here for more granular control over the application's volumes, something similar to docker volume ls|rm but at the project level?
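
(Using the service name from the repro project, point 2 would look something like the following; stop the service first so the container can be removed.)

docker-compose rm -v service1
docker-compose up service1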

If you're able to share use-cases, that would be super helpful to me to figure out where the friction is regarding this issue. Thanks!

@twelve17
Author

twelve17 commented Jan 10, 2018

@shin- It's been a while since I posted this issue, but from what I recall, it doesn't seem like what is happening is the same as the documented feature you are describing. If anonymous volumes are preserved when containers are recreated, then would that not be the case whether or not one uses the --force-recreate flag on docker-compose up, as you originally suggested as the workaround?

Is the use case repo I posted originally not suitable to help?

@shin-

shin- commented Jan 10, 2018

@twelve17 No, it's consistent - the way it works is that Compose retrieves the existing container if it exists, gets the list of associated volumes, and re-attaches the anonymous volumes to the new container. If you remove the container and run up again, there's no previous container holding the volume reference, and thus the volume won't be re-attached.
EDIT: sorry, I just saw your edit. Yes, my original message was made before I had a good grasp on the issue. --force-recreate would not help in that scenario as far as I can tell.
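
One way to see this is to list the container's mounts before and after recreating it -- the anonymous volume name stays the same (container name taken from the repro project):

docker inspect -f '{{ range .Mounts }}{{ println .Name .Destination }}{{ end }}' dctest_service1_1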

The repo you posted helped me reproduce the problem (thanks!). What I'm interested in now is why you felt like you needed to do what you did (create an anonymous volume for node_modules) in the first place. If I can understand the reasoning I can orient people towards a more suitable solution if one exists, or figure out what feature is currently missing.

Hope that makes sense!

@twelve17
Author

twelve17 commented Jan 10, 2018

@shin- Ah, thanks for the clarification. The technique I used was based on this post (see the "Installing Dependencies" and "The node_modules Volume Trick" sections), which explains the reasoning. Hope that helps!

@shin-

shin- commented Jan 10, 2018

@twelve17 Thanks for the link! I see, so it's a bit of a hack to exclude a specific subfolder from a host bind. Maybe that's something we could add as part of the configuration, e.g.

volumes:
  - type: bind
    source: .
    target: /app
    excludes:
      - /app/node_modules

and implement these exclusions as disposable anonymous volumes. Would that make sense?

@shin-

shin- commented Jan 17, 2018

Hey folks, I have an update about this issue after talking with some of the Docker engine maintainers. It turns out that having nested volumes like in this case is a really bad idea and relies mostly on a race condition working in your favor. As a result, the idea I mentioned above isn't in consideration anymore (even if it seems to work consistently now, there's no guarantee it will going forward, so codifying it at the Compose level would be a terrible idea).

That said, a CLI flag to opt out of preserving anonymous volumes (as mentioned in #5400) is something we can consider, and it would cover a wider range of uses.

@shin-

shin- commented Jan 23, 2018

As a follow-up to #4337 (comment), I just submitted #5596 . Let me know if that helps!

@Jokero

Jokero commented Feb 13, 2018

I also followed http://jdlm.info/articles/2016/03/06/lessons-building-node-app-docker.html and ran into the problem that the anonymous volume is reused and the npm dependency is not updated.

But it's really easy to solve this without any PRs or other hacky solutions. With the solution from the PR, you must remember to specify the --renew-anon-volumes flag (shorthand -V) every time. You could define an alias or put the command in an npm script, but that's too complicated 😄 . Just mount what you need and you will not have any problem.

So instead of this:

services:
  web-client:
    build:
      context: .
    command: ["npm", "start"]
    ports:
      - 8080:8080
    volumes:
      - .:/root/app
      - /root/app/node_modules

use

services:
  web-client:
    build:
      context: .
    command: ["npm", "start"]
    ports:
      - 8080:8080
    volumes: # just specify what you need in container
      - ./config:/root/app/config
      - ./src:/root/app/src
      - ./package.json:/root/app/package.json
      - ./webpack.config.js:/root/app/webpack.config.js

@newhouse

There's also a "good feeling" solution on SO that I found and just implemented, and it is working well so far: https://stackoverflow.com/a/35317425/1427426

Your build process can create a separate directory in the image to contain the node_modules you want installed:

# Dockerfile
FROM node:8.9.4

# Need to install packages outside of the mounted volumes so they don't
# get overwritten.
ENV NODE_PACKAGES_HOME=/home/node_install

# Let's put all of our node modules in there.
RUN mkdir -p $NODE_PACKAGES_HOME

# Put the package.json and package-lock.json files into that directory. Maybe don't need package-lock.json?
COPY ./package.json ./package-lock.json $NODE_PACKAGES_HOME/

# Switch to the node modules directory and install dem tings.
WORKDIR $NODE_PACKAGES_HOME

# Run npm install
RUN npm install

# etc...

Then, in your docker-compose.yml you can do something like this:

services:
  app:

    environment:
      # Tell node to look for node modules in the directory you installed them during the build.
      - NODE_PATH=/home/node_install/node_modules

    volumes:
      # Go ahead and load all of your local app files, including node_modules. Node won't look at them.
      - .:/home/app

    # etc...

@Jokero

Jokero commented Feb 13, 2018

@newhouse Your solution will work, but it applies to nodejs projects only. If you have a client-side JS application with a module bundler (for example, webpack), you need to set your NODE_PATH value in the webpack config to make it possible to load dependencies. That's why I don't recommend this approach.

@newhouse

@Jokero Yes, this definitely is a NodeJS solution with no consideration for how it might work/help/not-work with webpack.

tlabna added a commit to tlabna/testdriven-app that referenced this issue Apr 5, 2018
- Add react-bootstrap and react-router-bootstrap modules
- Create NavBar.test.js with initial NavBar tests
- Create NavBar.js component
- Update App component to add NavBar
- Update index.html with bootstrap version for react-bootstrap
- Update UsersList component with bootstrap v3 classname
- NOTE. docker-compose-dev.yml has been updated
  - WHY? When adding npm dependencies to react app, the container node_modules directory is not updated.
  - SOLUTION: mount the node_modules directory to volume
  - (docker/compose#4337 (comment))
@mvallebr

I am also having the same problem with docker-compose version 1.17.1
Has a fix been released?

@scottwilson312

Curious as to why this is closed, I'm still experiencing the issue on 1.22.

@shin-

shin- commented Jan 8, 2019

@scottwilson312 #4337 (comment)

Use the -V flag
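
For example, with the flag added by #5596:

docker-compose up -d --build -V    # --renew-anon-volumes: recreate anonymous volumes instead of reusing them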

@scottwilson312

@shin- thank you, much appreciated

@Salehjarad

Salehjarad commented Nov 2, 2019

I managed to get around this by adding npm install to the package.json start script.
I'm using pm2 as a runtime service.

And I don't have to use --no-cache with docker-compose build
or
docker-compose up -d --force-recreate

...
  "scripts": {
    "start": "npm install && pm2-runtime index.js --name appone"
  }
...

In Dockerfile

CMD ["npm", "start"]

....

And I automate this with a bash script to avoid dangling images or unnecessary volumes.
Note: docker volume prune will delete all unused volumes, so only use this if you don't have any volumes you need to keep.

#!/bin/sh

docker-compose down &&
docker-compose rm &&
docker-compose build &&
docker-compose up -d
sleep 1
docker rmi $(docker images -f "dangling=true" -q)
echo y | docker volume prune
printf "\n... HAPPY CODING ...\n\e[0m"

@rgomezp

rgomezp commented Jun 22, 2020

#!/bin/sh

docker-compose down &&
docker-compose rm &&
docker-compose build &&
docker-compose up -d
sleep 1
docker rmi $(docker images -f "dangling=true" -q)
echo y | docker volume prune
printf "\n... HAPPY CODING ...\n\e[0m"

This worked for me.

Thanks mate

@kid1412621

It seems docker compose (v2) acts differently: without the --force-recreate flag, rebuilding works just fine.
