Maybe add a comment here that this is for catching regressions, not for providing stability guarantees (it even seems to only check specific lines). Also, I'm not sure whether these tests are supposed to work with all supported versions of the daemon. If they are, then we need to be more flexible here.
If we need to support multiple daemon versions we can provide a different `map[int]lineCompare` for each supported version, but I don't think we should be less strict about the assertion. I don't think we run these against multiple versions just yet.

The reason it doesn't check every line is that the other lines are just `---> RANDOMID`. We could add checks for those too, but I'm not sure we should consider those important parts of the build output. Why do you think it shouldn't be for stability guarantees?
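To make the multi-version idea concrete, here is a minimal sketch of what such per-version tables might look like. The `lineCompare` type name comes from this thread, but the `prefix` helper, the version keys, and the line indices are assumptions for illustration, not the actual test code:

```go
package e2e

import (
	"fmt"
	"strings"
)

// lineCompare asserts something about a single line of build output.
type lineCompare func(string) error

// prefix returns a lineCompare that checks the line starts with p.
func prefix(p string) lineCompare {
	return func(line string) error {
		if !strings.HasPrefix(line, p) {
			return fmt.Errorf("expected %q to start with %q", line, p)
		}
		return nil
	}
}

// Hypothetical per-daemon-version tables: the outer key is the daemon
// version, the inner key is the index of the output line to check.
var buildOutputChecks = map[string]map[int]lineCompare{
	"17.09": {
		0: prefix("Sending build context to Docker daemon"),
		7: prefix("Successfully built"),
	},
	"17.10": {
		0: prefix("Sending build context to Docker daemon"),
		7: prefix("Successfully built"),
	},
}
```

The test would then select the table matching the daemon version it runs against and apply each check to the corresponding output line, staying strict per version without hard-coding a single format.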
For example, this output changed in v17.10, and moby/moby#35549 is open at the moment. It has probably changed many other times that I don't remember.
A better check here would be to verify that the image exists and has the right properties, or that an image built with `--iidfile` is actually available.
I see what you mean. I think it's OK if the output changes when we have a good reason, but it's bad if it changes accidentally, so we should still be as strict as possible about the output. We could add the `---> RANDOMID` checks in a follow-up.

No, that is the wrong thing to check. It is not the responsibility of the docker/cli e2e test suite to test the daemon. It should only test the interaction between the two components and the behaviour of the CLI (most of the CLI behaviour should already be covered by unit tests). Checking whether the image exists is the responsibility of the engine API tests. The engine reported that it created an image (the last line in the output), which is all the CLI should care about.
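As a rough illustration of treating that final status line as the only success signal, a hedged sketch; the helper name and the assertion library are assumptions:

```go
package e2e

import (
	"strings"
	"testing"

	"gotest.tools/v3/assert"
)

// Sketch: the engine's final status line is the only build-success
// signal this suite needs to care about.
func assertReportsBuiltImage(t *testing.T, stdout string) {
	lines := strings.Split(strings.TrimSpace(stdout), "\n")
	last := lines[len(lines)-1]
	assert.Assert(t, strings.HasPrefix(last, "Successfully built "),
		"expected final line to report the built image, got %q", last)
}
```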
From a scripting perspective the exit code may be enough, but interactive users should expect some consistency in output. This test does verify the exit code as well (via `result.Assert()`).

Do you have an example of how that could happen? If both build and inspect are tested separately to behave correctly with a given input, how could the CLI break when they are used together?

Testing that the iidfile contains an ID would be fine, but checking that the image is available would be incorrect as part of the build test. A sketch of the iidfile variant is below.
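Something like this, perhaps; a minimal sketch assuming the `gotest.tools` helpers, with the test name and context path as placeholders:

```go
package e2e

import (
	"os"
	"path/filepath"
	"regexp"
	"strings"
	"testing"

	"gotest.tools/v3/assert"
	"gotest.tools/v3/icmd"
)

// Sketch: build with --iidfile and assert the file contains a
// well-formed image ID, without inspecting the image itself.
func TestBuildWritesImageIDFile(t *testing.T) {
	iidfile := filepath.Join(t.TempDir(), "iid")

	result := icmd.RunCommand("docker", "build",
		"--iidfile", iidfile, "path/to/context") // placeholder context
	result.Assert(t, icmd.Success)

	raw, err := os.ReadFile(iidfile)
	assert.NilError(t, err)
	// The engine writes the ID as "sha256:<64 hex chars>".
	id := strings.TrimSpace(string(raw))
	assert.Assert(t, regexp.MustCompile(`^sha256:[0-9a-f]{64}$`).MatchString(id))
}
```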
Yes, understood. I just wanted to make sure I understood your comment correctly, so I wanted to ask a few clarifying questions.
For example, if the CLI decides to implement or guarantee some of the `--rm` / `--force-rm` functionality on the client side.

In the API and old integration-cli tests we do not limit ourselves to a requirement that a single test may only make requests against a single endpoint. It is important that the commands work correctly together, not only that they work as an isolated subset. Having this limitation just means that some important cases are not covered. In a lot of cases it doesn't make sense either; for example, to effectively test the `docker attach` command you need to use it together with `create`/`start`/`wait`/`pause`/`rm`/`restart`/`stop`/`kill`, etc.
I should clarify: I'm not saying it's incorrect because it hits a second endpoint. I'm saying it's incorrect because it tests behaviour that is outside the scope of this repository. The scope of this repo is the CLI behaviour: read user input, perform the operation (often one or more API requests), and output to the user. That is what should be tested. The engine already reported that it created the image (by sending the message "Successfully built ..."); doing an inspect after that would be testing the internal state of the engine, which is why it's incorrect.

There are plenty of tests in the `e2e` suite which do make use of multiple endpoints (and commands) to set up state, but there is only ever a single command being tested at once.

To effectively test `docker attach` you need the engine to be in a specific state that accurately reflects the normal runtime state. One safe way of setting up that state is to use `create`/`start`/`wait`, that's true. But it needs to be clear that those other commands are only being used as part of setup; they are not the commands being tested. A sketch of that setup-versus-test split follows.
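A hedged sketch of that split, again assuming the `gotest.tools` `icmd` helpers; the container name, image, and expected output are placeholders:

```go
package e2e

import (
	"strings"
	"testing"

	"gotest.tools/v3/icmd"
)

// Sketch: create/start only put the engine into a known state; the
// single command under test is `docker attach`.
func TestAttachStreamsContainerOutput(t *testing.T) {
	// Setup, not under test: an interactive container running cat.
	icmd.RunCommand("docker", "create", "--name", "attach-test", "-i",
		"busybox", "cat").Assert(t, icmd.Success)
	defer icmd.RunCommand("docker", "rm", "-f", "attach-test")
	icmd.RunCommand("docker", "start", "attach-test").Assert(t, icmd.Success)

	// The command under test: attach should echo what we write to stdin.
	result := icmd.RunCmd(
		icmd.Command("docker", "attach", "attach-test"),
		icmd.WithStdin(strings.NewReader("hello\n")))
	result.Assert(t, icmd.Expected{Out: "hello"})
}
```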
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
In the engine API integration suite I think it would be appropriate to inspect after a build, because in that case the engine state is relevant to the thing being tested. So that functionality should already be covered there.