feat(createtestfromscenario.js): add feature to hide pending scenarios from test run #474
First, thank you for the work on this project! I am a huge BDD advocate and practitioner, and to be able to combine Cypress and BDD is a huge win for me. This is my first attempted contribution, but I wouldn't be surprised if I think of new features I'd like, in which case I'm always happy to try to make it work myself.
## What
This change allows you to control whether pending scenarios are hidden. If you set the env var to hide pending scenarios, then any scenario that should not be run will not get a mocha test case. In other words, only tests that will actually be run will appear in the Cypress test run.
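For illustration, the option could be toggled through Cypress's `env` block. The variable name `hidePendingScenarios` below is hypothetical, used only to sketch the idea; the PR defines the actual name:

```json
{
  "env": {
    "hidePendingScenarios": true
  }
}
```

Cypress also accepts the same setting on the command line via `--env`, so no config change is strictly required.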
## Why
If you have a lot of scenarios or features that are skipped, registering them with Cypress can create a lot of noise in the test output, and can even slow down a test run significantly. This allows you to hide those "pending" scenarios from Cypress, if you would like.
In the personal case that drove me to add this change, I have a lot of feature files and scenarios that are tagged with a tag that causes them to be skipped. I use feature files to define the overall behavior that needs to be tested, and a lot of those tests are unfortunately still manual. This means I have a lot of scenarios that are skipped, but they may be implemented in due time, so it is important to me to keep all automated and manual test feature files organized together.
When these tests are run in CI, however, we run all of the `.features` files. This picks up all of the features, even if all of their scenarios are to be skipped by Cypress. I think that is fine behavior, but the issue is that skipping all of those tests takes a while in the test run, because I have `before` and `beforeEach` hooks. The code there is pretty simple, but the hooks still run for every skipped feature/scenario. Perhaps that is a bug in and of itself, but I think that is a higher-level architectural discussion (which we can have if you'd like!). The approach here felt like the simplest one given how things work now, but I am open to suggestions on a preferred solution to my problem. If skipping the tests took basically no time, that would solve most of my issue, but I also like the ability to hide the skipped tests from the output altogether, so the CI output is very clean and concise.

## Notes on my implementation
As mentioned, I tried to find the simplest solution to the problem I am having. I also looked into the `cypress-tags.js` code and thought that a solution might be to add logic there to handle `.features` files, but `cypress-tags.js` really only determines which files to tell Cypress to run, whereas the logic that converts a `.features` file into one large feature file runs within the plugin code, so that seemed like a major change to make work. Putting the change right into the place where the test is created allows the option to be used regardless of how you trigger Cypress, and it is a really simple bit of logic.
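To make the shape of that change concrete, here is a minimal sketch of the guard, not the actual plugin code. The function and env var names (`createTestFromScenario`, `hidePendingScenarios`) are illustrative assumptions; the real signature in `createtestfromscenario.js` differs:

```javascript
// Sketch: decide whether a scenario should produce a mocha test at all.
// `shouldRun` is whatever tag/filter logic the plugin already computes;
// `env` stands in for Cypress.env(). Both names are illustrative.
function createTestFromScenario(scenario, shouldRun, env) {
  const hidePending = Boolean(env.hidePendingScenarios);

  if (!shouldRun && hidePending) {
    // With the option on, skip registering a test entirely, so the
    // pending scenario never appears in the Cypress run output and
    // no before/beforeEach hooks fire for it.
    return null;
  }

  // Otherwise register the test as before; scenarios that should not
  // run become pending ("skipped") entries in the output.
  return { name: scenario.name, pending: !shouldRun };
}

const scenario = { name: 'manual-only check' };
console.log(createTestFromScenario(scenario, false, { hidePendingScenarios: true })); // null
console.log(createTestFromScenario(scenario, false, {})); // pending test entry
```

The key point is that the check happens before test registration, so everything downstream of mocha (hooks, reporters, run time) simply never sees the hidden scenario.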
I briefly looked into adding tests for this new feature. There doesn't seem to be any test coverage in the scenario-creation files, so currently I don't have any test coverage, and I'm not immediately sure how I'd approach verifying which tests are or are not created. It would probably make the most sense to test this at a higher level, perhaps by actually calling `cypress run` and inspecting the test output, but I am not sure if you want to go down that path. I don't mind spending a little time to help with that if you would like, but it seems out of scope for this change.

I chose an environment variable as the option since it is easily available to the code that needs it, and I didn't see any precedent for passing options through from the command line. If you would prefer a command-line option, let me know and I can try to make that work.
I have tested this change in my test environment with the environment variable unset, set to `true`, and set to `false`. Leaving it unset or set to `false` preserves the current behavior, while setting it to a truthy value makes it as if the pending scenarios didn't exist in the feature files.