Add e2e test for docker build #709

Merged (1 commit) · Nov 23, 2017
83 changes: 83 additions & 0 deletions e2e/image/build_test.go
@@ -0,0 +1,83 @@
package image

import (
	"fmt"
	"strings"
	"testing"

	"github.com/docker/cli/e2e/internal/fixtures"
	"github.com/gotestyourself/gotestyourself/fs"
	"github.com/gotestyourself/gotestyourself/icmd"
	"github.com/pkg/errors"
)

func TestBuildFromContextDirectoryWithTag(t *testing.T) {
	dir := fs.NewDir(t, "test-build-context-dir",
		fs.WithFile("run", "echo running", fs.WithMode(0755)),
		fs.WithDir("data", fs.WithFile("one", "1111")),
		fs.WithFile("Dockerfile", fmt.Sprintf(`
	FROM %s
	COPY run /usr/bin/run
	RUN run
	COPY data /data
	`, fixtures.AlpineImage)))
	defer dir.Remove()

	result := icmd.RunCmd(
		icmd.Command("docker", "build", "-t", "myimage", "."),
		withWorkingDir(dir))

	result.Assert(t, icmd.Expected{Err: icmd.None})
	assertBuildOutput(t, result.Stdout(), map[int]lineCompare{
		0: prefix("Sending build context to Docker daemon"),
Member:

Maybe add a comment here saying that this is for catching regressions, not a stability guarantee (it even seems to check only specific lines). Also, I'm not sure whether these tests are supposed to work with all supported versions of the daemon. If they are, then we need to be more flexible here.

Contributor Author:

If we need to support multiple daemon versions we can provide a different map[int]lineCompare for each supported version, but I don't think we should be less strict about the assertion. I don't think we run these against multiple versions just yet.

The reason it doesn't check every line is that the other lines are just ---> RANDOMID. We could add checks for those too, but I'm not sure we should consider those important parts of the build output.

Why do you think it shouldn't carry stability guarantees?

Member:

> Why do you think it shouldn't be for stability guarantees?

For example, this output changed in v17.10, and moby/moby#35549 is open at the moment. It has probably changed many other times I don't remember.

Member:

A better check here would be whether the image exists and has the right properties, or whether an image built with iidfile is actually available.

Contributor Author:

I see what you mean. I think it's OK if the output changes for a good reason, but it's bad if it changes accidentally, so we should still be as strict as possible about the output. We could add the ---> RANDOMID checks in a follow-up.

> A better check in here would be to check if the image exists and has right properties. Or if image built with iidfile is actually available.

No, that is the wrong thing to check. It is not the responsibility of the docker/cli e2e test suite to test the daemon. It should only test the interaction between the two components and the behaviour of the cli (most of the cli behaviour should already be covered by unit tests). Checking whether the image exists is the responsibility of the engine API tests. The engine reported that it created an image (the last line in the output), which is all the CLI should care about.

Contributor Author:

From a scripting perspective the exit code may be enough, but interactive users should expect some consistency in the output. This test verifies the exit code as well (result.Assert()).

> It is easy to have a bug in cli that causes the build+inspect flow to get broken while the API works fine

Do you have an example of how that could happen? If both build and inspect are tested separately to behave correctly with a given input, how could the CLI break when they are used together?

> Btw, this is why I pointed to use iidfile for this, as it is another cli feature.

Testing that the iidfile contains an ID would be fine, but checking that the image is available would be incorrect as part of the build test.

Contributor Author:

> I don't think this needs to be removed. Even in my first post, I only asked for a comment.

Yes, understood. I just wanted to make sure I understood your comment correctly, so I asked a few questions to clarify.

Member:

> Do you have an example of how that could happen?

For example, if the cli decides to implement or guarantee some of the --rm / --force-rm functionality on the client side.

> Testing the iidfile contains an ID would be fine, but checking that image is available would be incorrect as part of the build test.

In the API and old integration-cli tests we do not limit ourselves to a requirement that a single test may only make requests against a single endpoint. It is important that the commands work correctly together, not only that they work as an isolated subset. Having this limitation just means some important cases are not covered. In many cases it doesn't make sense either; for example, to effectively test the docker attach command you need to use it together with create/start/wait/pause/rm/restart/stop/kill etc.

Contributor Author:

> we are not limiting ourselves to a requirement that a single test may only do requests against a single endpoint.

I should clarify. I'm not saying it's incorrect because it hits a second endpoint; I'm saying it's incorrect because it tests behaviour that is outside the scope of this repository. The scope of this repo is the CLI behaviour: read user input, perform the operation (often one or more API requests), and output to the user. That is what should be tested. The engine already reported that it created the image (by sending the message "Successfully built ..."); doing an inspect after that would test the internal state of the engine, which is why it's incorrect.

There are plenty of tests in the e2e suite which use multiple endpoints (and commands) to set up state, but there is only ever a single command being tested at once.

> for example to effectively test docker attach command you need to use it together with create/start/wait/pause/rm/restart/stop/kill etc.

To effectively test docker attach you need the engine to be in a specific state that accurately reflects the normal runtime state. One safe way of setting up that state is to use create/start/wait, that's true. But it needs to be clear that those other commands are only being used as part of the setup; they are not the commands being tested.

Contributor Author:

In the engine API integration suite I think it would be appropriate to inspect after build, because in that case the engine state is relevant to the thing being tested. So that functionality should already be covered.

		1:  equals("Step 1/4 : FROM\tregistry:5000/alpine:3.6"),
		3:  equals("Step 2/4 : COPY\trun /usr/bin/run"),
		5:  equals("Step 3/4 : RUN\t\trun"),
		7:  equals("running"),
		9:  equals("Step 4/4 : COPY\tdata /data"),
		11: prefix("Removing intermediate container "),
		12: prefix("Successfully built "),
		13: equals("Successfully tagged myimage:latest"),
	})
}

func withWorkingDir(dir *fs.Dir) func(*icmd.Cmd) {
	return func(cmd *icmd.Cmd) {
		cmd.Dir = dir.Path()
	}
}

func assertBuildOutput(t *testing.T, actual string, expectedLines map[int]lineCompare) {
	for i, line := range strings.Split(actual, "\n") {
		cmp, ok := expectedLines[i]
		if !ok {
			continue
		}
		if err := cmp(line); err != nil {
			t.Errorf("line %d: %s", i, err)
		}
	}
	if t.Failed() {
		t.Log(actual)
	}
}

type lineCompare func(string) error

func prefix(expected string) func(string) error {
	return func(actual string) error {
		if strings.HasPrefix(actual, expected) {
			return nil
		}
		return errors.Errorf("expected %s to start with %s", actual, expected)
	}
}

func equals(expected string) func(string) error {
	return func(actual string) error {
		if expected == actual {
			return nil
		}
		return errors.Errorf("got %s, expected %s", actual, expected)
	}
}