I'd like to use the 'ava' tool for both unit and integration testing, but I can't figure out the best way to separate those tests. Unit tests should run before the code is deployed into the test environment, and integration tests need to run after the code has been deployed to the test server.
My challenge is that 'ava' reads its configuration from the 'ava' section of package.json. I'm not sure how to tell it to use different sets of test sources depending on which stage of deployment it is in.
You can also use an ava.config.js file. For now, you could use environment variables to switch the config. Keep an eye on https://github.com/avajs/ava/issues/1857 though, which tracks adding a CLI flag so you can select a different config file.
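For instance, a minimal sketch of an ava.config.js that switches the test globs on an environment variable (the TEST_STAGE name and the test/unit and test/integration directories are my assumptions, not AVA conventions):

```js
// ava.config.js — pick the test files based on the deployment stage.
// TEST_STAGE and the directory layout are assumptions for this sketch.
const stage = process.env.TEST_STAGE || 'unit';

export default {
  files:
    stage === 'integration'
      ? ['test/integration/**/*.js']
      : ['test/unit/**/*.js'],
};
```

Then `npx ava` runs the unit tests by default, and `TEST_STAGE=integration npx ava` runs the integration suite after deployment.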
I am attempting to set up Java code coverage for a fairly complex app that:

- combines multiple large modules, only one of which I need to check coverage on
- uses a combination of Ant and Maven for builds
- cannot be run except as an installed application on a server, with configuration
- has automated tests (the ones to be analyzed for coverage) that are not part of the application build and that exercise the application server through API calls from a remote client
The examples given in the JaCoCo documentation and in the online sources I have found assume the app under test is not previously installed and that the tests are unit/integration tests run as part of the build. The documentation does not cover the details of how the JaCoCo instrumentation is done, or when a call to a particular line of code is recorded. If I use Ant or Maven to instrument a particular module, use that module to build the full app, install it on a server, and configure it, will my remote tests then generate the .exec file?
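For concreteness, the closest approach I have pieced together so far is attaching the JaCoCo runtime agent to the server JVM and dumping coverage remotely after the tests run; a sketch, where all paths, the host name, the port, and the package filter are guesses on my part:

```sh
# Attach the JaCoCo agent to the installed application's JVM so that
# coverage is collected on the fly (no build-time instrumentation).
JAVA_OPTS="$JAVA_OPTS -javaagent:/opt/jacoco/lib/jacocoagent.jar=output=tcpserver,address=*,port=6300,includes=com.mycompany.mymodule.*"

# After the remote test suite finishes, pull the execution data
# from the still-running JVM:
java -jar /opt/jacoco/lib/jacococli.jar dump \
  --address app-server.example.com --port 6300 --destfile jacoco.exec

# Render a report against the one module's classes and sources:
java -jar /opt/jacoco/lib/jacococli.jar report jacoco.exec \
  --classfiles mymodule/target/classes \
  --sourcefiles mymodule/src/main/java \
  --html coverage-report
```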
Any advice on how to achieve the end goal (knowing how much of our code is covered by the tests) is greatly appreciated, including better search terms than "jacoco for installed app" which as you can imagine is ... not very useful. My google-fu is humbled.
I'm just starting to mess about with continuous integration, so I wanted to set up Jenkins as well as SonarQube. While reading manuals/docs and tutorials, I got a little confused.
For both systems there are descriptions of how to set up unit test runners. So where should unit tests ideally be run: in Jenkins, in SonarQube, or in both? Where does this belong in theory/best practice?
We have configured Jenkins to launch the unit tests, and the results are “forwarded” to Sonar to be interpreted as a post-build action.
The best practice would be to run the unit tests in Jenkins. This ensures the unit test cases are executed before we build/deploy.
SonarQube is normally used to ensure the quality of the code: it points out bad code based on guidelines/rules. It also reports on unit test coverage, lines of code, etc.
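In declarative pipeline form, that kind of setup might look like the following sketch (the stage names, the Maven commands, and the 'sonar' server id configured in Jenkins are assumptions):

```groovy
// Jenkinsfile sketch: run the unit tests in Jenkins, then hand the
// results and coverage data to SonarQube for analysis.
pipeline {
    agent any
    stages {
        stage('Unit tests') {
            steps {
                sh 'mvn clean verify'   // tests are executed here, in Jenkins
            }
        }
        stage('SonarQube analysis') {
            steps {
                withSonarQubeEnv('sonar') {  // forwards the reports to Sonar
                    sh 'mvn sonar:sonar'
                }
            }
        }
    }
}
```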
Usually it's done in Jenkins, as you want to actually test your code before building the module.
I have successfully begun to write SSDT unit tests for my recent stored procedure changes. One thing I've noticed is that I wind up creating similar test data for many of the tests. This suggests to me that I should create a set of test data during deployment, as a post-deploy step. This data would then be available to all subsequent tests, and there would be less need for lengthy pre-test scripts. Data which is unique to a given unit test would remain in pre-test scripts.
The problem is that the post-deploy script would run not only during deployment for unit tests, but also during deployment to a real environment. Is there a way to make the post-deploy step (or parts of it) run only during the deployment for an SSDT unit test?
I have seen that the test settings in the app.config include the database project configuration to deploy. But I don't see how to cause different configurations to use different SQLCMD variables.
I also see that we can set different SQLCMD variables in the publish profiles, but I don't see how the unit test settings in app.config can reference different publish profiles.
You could use an IF statement checking @@SERVERNAME and only run your unit-testing seed code on the unit test server(s), with the same kind of check for the other environments.
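A sketch of what that guard could look like in the post-deploy script (the server name and the seed data are invented for illustration):

```sql
-- Post-deploy sketch: seed the shared test data only when deploying
-- to the unit test server. Server name and table are assumptions.
IF @@SERVERNAME = N'UNITTEST-SQL01'
BEGIN
    INSERT INTO dbo.Customer (CustomerId, Name)
    VALUES (1, N'Test Customer One'),
           (2, N'Test Customer Two');
END
```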
Alternatively, you could make use of the build number in your TFS build definition. If the build number contains, for example, the substring 'test', you could execute the test code; otherwise not. Then make sure to set an appropriate build number in all your builds.
I have written a REST server (in Java, using RESTEasy) with a unit test suite written in Scala. The test suite uses the mock server provided by RESTEasy and runs with every Maven build.
I would like to create a second functional test suite that calls an actual tomcat server and exercises each REST service. I do not want this new suite to run with every build, but only on demand, perhaps controlled with a command line argument to Maven.
Is it possible to create multiple independent test suites in a Maven project and disable some from automatic running, or do I need to create a separate Maven project for this functional suite? How can I segregate the different functional suite code if these tests are in the same project with the unit tests (different directories)? How do I run a selected suite with command line arguments?
I've never used it myself, but I am aware of Maven integration tests being run by the Maven Failsafe plugin.
Where the Surefire plugin by default includes tests named **/Test*.java, **/*Test.java, and **/*TestCase.java, the Failsafe plugin runs the **/IT*.java, **/*IT.java, and **/*ITCase.java tests.
The two plugins have different intentions, which seems to match part of what you need. It might be worth a look.
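A sketch of wiring Failsafe into the pom.xml so that the *IT.java suite runs only during the integration-test phase (the plugin version is an assumption):

```xml
<!-- pom.xml excerpt: Failsafe picks up **/*IT.java during the
     integration-test phase, while Surefire keeps running the
     **/*Test.java unit tests on every build. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>3.2.5</version>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With that in place, a plain `mvn test` still runs only the unit tests, while `mvn verify` also runs the functional suite.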
Another approach would be to use Maven profiles and specify different Surefire includes for each profile.
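For example, a profile that points Surefire at a different set of tests, run on demand with `mvn test -Pfunctional` (the profile id and the naming pattern are assumptions):

```xml
<!-- pom.xml excerpt: a 'functional' profile that overrides the
     Surefire includes; the names here are invented. -->
<profiles>
  <profile>
    <id>functional</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <configuration>
            <includes>
              <include>**/*FunctionalTest.java</include>
            </includes>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```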
In a J2EE web application, how do people manage resources so that they are visible to both the web context and to unit/integration tests?
I find that you often end up having your source/resource folders configured a certain way during development (i.e., what Maven expects), so your unit tests run in your IDE. But once the web app is built and packaged into a WAR file (i.e., when your continuous integration server has done a build), your unit tests won't run anymore because the resources are located elsewhere.
Do you end up keeping resources in two different places and manually keeping them in sync?
We tried running unit tests in the container but gave up on that years ago. It's much better (for us at least) to make each unit test cover a single class and nothing else, mocking out the dependencies on other classes (see JMock or its many competitors). A good basic rule is that if it touches the database, network, or the filesystem, it isn't a unit test. (It may be useful for something else, but it isn't a unit test. See these unit testing rules for more on this.)
Unit tests written this way can be run anywhere, and they are blazingly fast (we have thousands and run them in under 60 seconds on medium-spec hardware).
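As a sketch of that style, here is what such a test might look like with JMock (the PriceFeed and OrderService types are invented for illustration):

```java
import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical collaborators, defined inline to keep the sketch self-contained.
interface PriceFeed {
    int priceOf(String sku);
}

class OrderService {
    private final PriceFeed prices;
    OrderService(PriceFeed prices) { this.prices = prices; }
    int totalFor(String sku, int quantity) {
        return prices.priceOf(sku) * quantity;
    }
}

public class OrderServiceTest {
    private final Mockery context = new Mockery();

    @Test
    public void totalsAnOrderWithoutTouchingAnyRealDependency() {
        // The pricing feed is mocked, so no database, network, or
        // filesystem is involved.
        final PriceFeed prices = context.mock(PriceFeed.class);

        context.checking(new Expectations() {{
            oneOf(prices).priceOf("widget"); will(returnValue(5));
        }});

        assertEquals(10, new OrderService(prices).totalFor("widget", 2));
        context.assertIsSatisfied();
    }
}
```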
You may also want to run integration tests that check a subsystem or the whole application. We find that subsystem tests can also use mocking at their borders - for instance, we fake an external pricing feed - and that end-to-end tests work best with tools like Selenium or WebDriver, which let you deploy the whole application on a server and then hit it with a browser just like users do.
(By the way, our method of unit testing makes us mockists, rather than classicists, in Martin Fowler's taxonomy.)
Normally this is the reason for multi-module builds. The external services live in a separate build unit from the web application, so you build, package, and run your integration tests when you build that module.
Another module can contain your domain model and its unit tests, which are also run at build time.
It is quite common for a module that results in a WAR to have no Java code in it at all, only web-related artifacts. Although not necessary, this is often done because code that is in a WAR module cannot be included in another module.
The last special case is the module containing web tests. This module may often need test-scoped artifacts from the other modules (because it tests the application from the outside, but may need data from the inside). This can be solved by also packaging test resources in JAR files, creating a separate set of "test" JAR files for each module.
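In Maven, a sketch of that test-jar arrangement (the coordinates are invented):

```xml
<!-- In the module whose test code/resources should be shared:
     package them as an additional *-tests.jar artifact. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>test-jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>

<!-- In the web-tests module: depend on that test jar. -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>domain</artifactId>
  <version>1.0-SNAPSHOT</version>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>
```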
Multi-module builds are the norm for Maven projects and are also easy to set up in other build systems like Ant.
I wouldn't package test resources or tests in a WAR file, nor run unit tests from the WAR. Why are you trying to do so?