I am pretty new to unit testing, and I am trying to understand whether it is best practice to run Jest alongside webpack and, if so, how to set that up.
For context, I am used to using ESLint in the following way. I use eslint-webpack-plugin and configure it so that webpack outputs an error and/or fails the build if there is a linting error. I use this setup for both the development build (using webpack-dev-server) and the production build so that I can be made aware of and address linting issues as they arise. I also use lint-staged and husky to set up a pre-commit hook that runs ESLint before commits for a similar reason.
So, my inclination when learning Jest was to use a similar setup, where tests are run as part of the webpack compilation process and errors are obvious and intrusive so that I can address them as they arise. I tried following the tutorials for Babel and webpack on the Jest site, but I cannot get webpack to throw any errors, and to be honest I'm not sure it's running Jest at all. I looked to see how create-react-app and create-next-app have Jest set up. They both include an npm script for testing, but it seems users are supposed to run that script manually, separately from the dev/build processes, or as part of a CI workflow.
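To make the parallel with my ESLint setup concrete, this is roughly the shape of pre-commit hook I would like Jest to slot into. It is only a hedged sketch: the flags and the staged-file listing are illustrative, not a setup I have confirmed works.

#!/usr/bin/env sh
# Hedged sketch of a .husky/pre-commit hook: lint as before, then run only
# the Jest tests related to the staged files so that failures block the commit.
npx lint-staged
npx jest --bail --passWithNoTests --findRelatedTests \
  $(git diff --cached --name-only --diff-filter=ACM)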
Any advice appreciated!
Related
I am learning AWS CodePipeline and CodeBuild. I have a project (an Angular app) that I am building and deploying, but I also have a testing script that I want to run at a different stage.
I am testing using Karma and Jasmine.
The stages I want are:
Source
Test // This is the stage I want to add
Build
Deploy
The question is: how do I do that? I have a buildspec.yml file in the CodeBuild project, but it is oriented towards building the project rather than testing it, so do I create a new CodeBuild project just to test it?
It would be amazing if I could use another buildspec.yml just for testing.
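From what I have read so far, it sounds like each CodeBuild project can be pointed at its own buildspec file (there seems to be a buildspec name/override setting for this), so a second project used only by the Test stage might be the way to go. A hedged sketch with made-up names; I have not verified the exact CLI syntax:

# Hypothetical: a separate CodeBuild project for the Test stage that reads
# buildspec-test.yml instead of the default buildspec.yml.
aws codebuild update-project \
  --name my-angular-test-project \
  --source type=CODEPIPELINE,buildspec=buildspec-test.yml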
If anyone else is running testing pipelines, please help me out.
I set up a new Flask Python server and I created a Dockerfile with all my code. I've written some unit tests and I'm executing them locally. When should I execute them if I want to implement CI/CD?
I also need to write integration tests (to check that I'm querying the database correctly, that the endpoints are exposed correctly, and so on). When should I execute those in a CI/CD pipeline?
I was thinking of executing them during the docker build, i.e. putting the test execution in the Dockerfile. Is that correct?
Unit tests: Outside of Docker, before you run your docker build. Within your CI pipeline, after checking out the source code and running any setup steps like installing package dependencies.
Integration tests: Launched from outside of Docker; depending on how complex your setup is, either late in your CI pipeline or as part of your CD pipeline.
This assumes a true "unit test" that has no external dependencies; it depends only on the application/library code, and where it needs things like databases, it either mocks out those dependencies or uses something like an embedded SQLite. (Some frameworks are especially bad at this workflow and make it impossible to start up the application at all if the database isn't available. But Rails doesn't run on Python.)
Running unit tests in a Dockerfile will last until it's midnight, you have a production outage, and either your quick fix that will bring the site back up happens to break one obscure unit test, or you can't wait the 5-minute cycle time to run the whole unit-test suite. Since there shouldn't be dependencies on the Docker-or-not environment in your unit tests, I'd just run them outside Docker.
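A minimal sketch of that ordering, assuming a Python project with pytest and a requirements.txt (the names and paths are illustrative):

# Run in the CI job, after checking out the source code.
pip install -r requirements.txt      # install package dependencies
pytest tests/unit                    # unit tests run outside Docker; a failure stops the pipeline here
docker build -t myapp:latest .       # the image is only built once the unit tests pass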
Often you can stand up enough infrastructure to be able to run your application "for real" with a couple of docker run commands or a simple Docker Compose setup. In that case, it makes sense to run an integration test towards the end of your CI pipeline. With a more complex setup (maybe one involving Kubernetes) you might need to actually deploy into a test environment, and if you have separate CI and CD tools, this would turn into "test deploy", "integration test", "pre-production deploy".
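A hedged sketch of that simpler case (service names, ports, and paths are illustrative):

# Stand up the application and its database "for real", then test from outside.
docker-compose up -d                  # or a couple of docker run commands
pytest tests/integration              # integration tests hit the running containers
curl -f http://localhost:5000/health  # e.g. a simple smoke check against the Flask app
docker-compose down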
As a developer I find having tools not-in-Docker vastly easier to manage than tools that only run in Docker. (I don't subscribe to the "any binary other than /usr/bin/docker is bad" philosophy.) I'd rather just run pytest or curl than remember the 4-line docker run invocation to do some specific task.
The documentation for the ember-cli-code-coverage project on GitHub does not clearly state how exactly to configure and run coverage reports.
The documentation hints that, after installing the addon, you just need to set an environment variable named COVERAGE to true. I interpret that to mean an environment variable in config/environment.js. After running the CLI command ember test I expect to find something saved in a coverage folder at the root of the project, but nothing appears to be generated. My tests run okay without any errors, and with all passing tests.
There are a few statements on Stack Overflow (here, here, and here) that suggest the package works okay. Searching for clear examples or how-to articles appears to be a dead end at the moment.
I'm trying to get this working using versions:
Ember.js 2.6.0
ember-cli-code-coverage 0.2.2
Windows 10
You need to set the environment variable in the command line environment, not the Ember environment. Run COVERAGE=true ember test.
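Since you mention Windows 10: the inline VAR=value prefix is Unix shell syntax, so on Windows you would set the variable separately, roughly like this (or use something like the cross-env package to keep it shell-agnostic):

REM cmd.exe
set COVERAGE=true
ember test

# PowerShell
$env:COVERAGE="true"; ember test

# Shell-agnostic, via the cross-env package (must be installed first)
cross-env COVERAGE=true ember test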
Side-note: this does seem like a weird choice, requiring a command line environment variable instead of making it configurable in other ways the way ember-cli-blanket does.
I have Jenkins set up to run tests before anything gets pushed into our QA environment. Recently I added Python coverage (coverage.py) to check the code coverage of the tests.
The issue I have is that I now see in the output that tests are failing, but the build still pushes through.
I am running the following in a bash script:
coverage run manage.py test --settings=my.settings.jenkins --noinput
When I was running the tests normally, without coverage, the build would fail if a test failed; this is no longer the case.
The project is a Django project on Python 3; any help would be greatly appreciated.
I am hitting the same issue. My Jenkins build always passes even if the tests run under coverage.py have failures (I use the same command as you).
I have come up with a workaround, but would ideally like to know if you figured it out.
My workaround uses the Text Finder plugin: I search the console output for a specific string and fail the build if it is found... hacky, I know!
https://wiki.jenkins-ci.org/display/JENKINS/Text-finder+Plugin
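For what it's worth, a less hacky approach might be to propagate the test exit status explicitly from the shell script, assuming later commands (coverage report, coverage xml, and so on) are what mask the failure from Jenkins. A hedged sketch:

#!/bin/bash
coverage run manage.py test --settings=my.settings.jenkins --noinput
status=$?        # remember whether the test run itself failed
coverage report  # or coverage xml / coverage html for publishing
exit $status     # make the Jenkins shell step fail when the tests failed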
Question: What is the best solution for executing a 'mvn deploy' such that the deploy part is only run after all unit tests succeed and no processing steps are duplicated?
I was hoping the simple answer was: Execute maven command 'x' (or use a flag) such that the deploy can be run without invoking the prior goals in the default lifecycle.
Sadly this does not appear to have a simple answer. I have included the details on the path I have followed so far below.
We have the following three requirements:
Execute the maven deploy goal to deploy all multi-module artifacts to a remote repository.
Only deploy if ALL unit tests across all projects pass.
Do not repeat any processing.
We started with simply "mvn clean deploy", but we noticed a couple of issues:
The build would stop before completing all unit tests :: so we added the --fail-at-end flag.
The deploy goal would execute against any modules that were successful.
This results in a "corrupted" state where the remote repository may have only a partial deployment (if there were modules with failures later in the build).
We looked at 3 different solutions:
Staging the artifacts prior to deploying :: this was determined to be too heavy for a fully automated process.
Use a profile to override the default lifecycle such that 'mvn deploy -Pci-deploy' would run without invoking any prior goals :: this worked and was fast, but is obviously an unconventional approach.
Simply running 'mvn clean package' and then, only if that succeeds, executing 'mvn deploy' (sketched below) :: this appears to work and seems to take only a minor hit when the goals are invoked again (though some of them are smart enough not to reprocess an unchanged workspace).
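For reference, option 3 boils down to something like the following; the second command skips compiling and running the tests a second time:

# Only reach the deploy if every module built and its unit tests passed.
mvn clean package --fail-at-end \
  && mvn deploy -Dmaven.test.skip=true   # don't re-compile or re-run tests on the second pass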
I pose this question to the community with the background details I have provided to determine if there is a better approach or a strong opinion regarding (potentially) making one of the following requests:
A new deploy goal that can run separately from all other lifecycle goals, with the expectation that all prior steps have already been run and that it will execute the deploy identically to "mvn deploy".
A flag in the deploy goal that would effectively disable the previous goals.
Or, a little more out of the box and definitely against the current convention:
A flag that would tell Maven to run the [unit] test goal for all modules prior to proceeding.
Notes:
We are using Jenkins, but for the purposes of this question the CI environment is not the complication.
I tried the 'mvn deploy:deploy' goal, but it had a number of unclear errors.
I have not considered integration tests as part of the requirements.
Update 8/20/2013
I tested the deferred-deploy-plugin and determined that the tool worked as expected, but it took way too long.
For our code base:
mvn clean deploy: for all goals executed in 2:44
mvn clean install 'deferred-deploy-plugin': for all goals executed in 15 min
mvn clean package; mvn deploy -Pci-deploy (a custom build profile that disables the earlier goals) executed:
for all goals (including deploy): 4:30
deploy only: 1:45
mvn clean package; mvn deploy -Dmaven.test.skip=true (on the same workspace) executed:
for all goals (including deploy): 4:40
deploy only: 1:54
The clean package followed by a deploy that skips the tests runs faster than the deferred deploy and accomplishes our goal of delaying the deploy until after the tests succeed.
There appears to be a minor time hit when the deploy lifecycle executes and exits each of the preceding goals (process, compile, test, package, etc.). However, the only alternative is to hack a non-standard execution, which only saves about 10 seconds.
There's a new answer now. Since version 2.8 of the maven-deploy-plugin there's a way to do this "natively"; see the JIRA issue for details.
Basically, you need to force at least v2.8 of the plugin:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-deploy-plugin</artifactId>
  <version>2.8</version>
  <configuration>
    <!-- defer the actual upload until every module in the reactor has built successfully -->
    <deployAtEnd>true</deployAtEnd>
  </configuration>
</plugin>
and set the new deployAtEnd parameter to true, as above. More info here. This setting usually goes along with installAtEnd of the maven-install-plugin.
As an alternative, I also found this
http://code.google.com/p/maven-deferred-deploy-plugin/
A maven plugin that iterates through all projects in a reactor and executes a deploy on each project individually. Can be used to produce a near-atomic build for a reactor by deferring artifact deployment until the install phase has completed.
Sounds a lot like what you were asking for. I still think my other answer is easier to implement since you use Jenkins: just check a checkbox.
Two things.
I don't see disabling all the previous phases as an option. The lifecycle is a basic feature of Maven; you would be altering the standard lifecycle, so I highly doubt anyone would implement something in a plugin to allow this.
Since you said you use Jenkins: there is a setting in Jenkins specifically for the case of deploying at the end, to guarantee that the repository is not left in a corrupt/intermediate state.
In "Post-build actions"
Deploy artifacts to a Maven repository. In comparison with the standard mvn deploy, this feature allows you to deploy artifacts after the entire build is confirmed to be successful.
This prevents a typical problem in Maven, where some modules are deployed before a critical failure is discovered later down the road, rendering the repository state inconsistent.
Note that regardless of this configuration, you can always manually come back to Jenkins and deploy any of the past artifacts to any repository of your choice, after the fact.
To use this feature you shouldn't deactivate the automatic artifact archiving.
I have never used this, so I can't confirm whether it works; I just know it's there for this particular use case.