Jenkins and SonarQube - where to run unit tests

I'm just starting to mess about with continuous integration, so I wanted to set up Jenkins as well as SonarQube. While reading manuals/docs and tutorials I got a little bit confused.
For both systems there are descriptions of how to set up unit test runners. So where should unit tests ideally be run - in Jenkins, in SonarQube, or in both systems? Where do they belong in theory/best practice?

We have configured Jenkins to launch the unit tests; the results are then “forwarded” to Sonar as a post-build action to be interpreted there.

The best practice would be to run the unit tests in Jenkins. This ensures the unit test cases are executed before we build/deploy.
SonarQube is normally used to ensure the quality of the code: it points out bad code based on the configured guidelines/rules. It also reports on unit test coverage, lines of code, etc.

Usually it's done in Jenkins as you want to actually test your code before building the module.
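As a rough illustration of the advice above: the first answer describes a freestyle job with a post-build action, but the same idea in a declarative pipeline might look like the sketch below (the Maven commands and the 'MySonarQube' server name are assumptions, not taken from the answers):

```groovy
pipeline {
    agent any
    stages {
        stage('Build & Unit Tests') {
            steps {
                // Unit tests run here, in Jenkins; the build fails if they fail.
                sh 'mvn clean verify'
            }
        }
        stage('SonarQube Analysis') {
            steps {
                // Forward the test and coverage results to SonarQube for interpretation.
                withSonarQubeEnv('MySonarQube') {
                    sh 'mvn sonar:sonar'
                }
            }
        }
    }
}
```

Either way, the important part is that the tests themselves execute in Jenkins, and SonarQube only consumes the reports.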

Related

Testing Spark: how to create a clean environment for each test

When testing my Apache Spark application, I want to do some integration tests. For that reason I create a local Spark application (with Hive support enabled), in which the tests are executed.
How can I achieve that after each test the Derby metastore is cleared, so that the next test has a clean environment again?
What I don't want to do is restart the Spark application after each test.
Are there any best practices to achieve what I want?
I think that introducing some application-level logic for integration testing kind of breaks the concept of integration testing.
From my point of view, the correct approach is to restart the application for each test.
Anyway, I believe another option is to start/stop the SparkContext for each test. That should clean up any relevant state.
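As a sketch of that start/stop-per-test option (the question doesn't say which language or test framework is used; this assumes Python with pytest, and the fixture and test names are made up):

```python
import pytest
from pyspark.sql import SparkSession


@pytest.fixture
def spark():
    # Start a fresh local session (with Hive support) for each test...
    session = (SparkSession.builder
               .master("local[2]")
               .enableHiveSupport()
               .getOrCreate())
    yield session
    # ...and stop it afterwards so the next test gets a new SparkContext.
    session.stop()


def test_row_count(spark):
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
    assert df.count() == 2
```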
UPDATE - answer to comments
Maybe it's possible to do a cleanup by deleting tables/files?
I would ask a more general question - what do you want to test with your test?
In software development there are unit tests and integration tests, and nothing in between. If you want to do something that is neither an integration test nor a unit test, then you're doing something wrong. Specifically, with your test you are trying to test something that is already tested.
For the difference and general idea of unit and integration tests you can read here.
I suggest you rethink your testing and, depending on what you want to test, write either an integration or a unit test. For example:
To test application logic - unit test
To test that your application works in its environment - integration test. But here you shouldn't test WHAT is stored in Hive, only THAT the storing happened, because WHAT is stored should already be covered by a unit test.
So, the conclusion:
I believe you need integration tests to achieve your goals, and the best way to do that is to restart your application for each integration test. Because:
In real life your application will be started and stopped
In addition to your Spark state, you need to make sure that all the objects in your code are correctly deleted/reused. Singletons, persistent objects, configurations... - it all may interfere with your tests.
Finally, regarding the code that will perform the integration tests - where is the guarantee that it will not break production logic at some point?

Ava: separate integration and unit tests

I'd like to use the 'ava' tool for both unit and integration testing, but I can't figure out the best way to separate those tests. Unit tests should run before the code is deployed to the test environment, and integration tests need to run after the code has been deployed to the test server.
My challenge is that 'ava' reads its configuration from the 'ava' section of package.json. I'm not sure how to tell it to use different sets of test sources depending on which stage of deployment it is in.
You can also use an ava.config.js file. For now, you could use environment variables to switch the config. Keep an eye on https://github.com/avajs/ava/issues/1857 though, which will add a CLI flag so you can select a different config file.
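For example, a minimal ava.config.js along those lines could look like the sketch below (the TEST_GROUP variable name and the directory layout are assumptions):

```js
// ava.config.js - switch the set of test files via an environment variable.
const integration = process.env.TEST_GROUP === 'integration';

export default {
  files: integration
    ? ['test/integration/**/*.js'] // run after deployment to the test server
    : ['test/unit/**/*.js'],       // run before deployment
};
```

You would then run ava as usual for unit tests, and with TEST_GROUP=integration set after deployment.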

What is the difference between test builds in TeamCity and test builds in Octopus?

I've been searching for this but found no answers.
I've created a unit test project in my solution.
Since we're planning to automate the testing, we don't know where to put it.
Will TeamCity run the tests or just build the test project? If it is going to be tested there, does that mean it is OK not to put it in Octopus?
You should run your tests in TeamCity, and fail the build if the tests fail.
Only if the tests pass should you allow the build artefacts to be sent to Octopus, which will then take care of deploying the software.
Generally, tests should be run on the build server, but you may want to run integration tests from Octopus after the deployment has happened. An example of post-deploy testing would be something like Selenium smoke tests to ensure the deployment was successful and the application is running as expected (e.g. a website on IIS).
Generally you want tests to fail as early as possible (e.g. in UAT instead of production, in CI instead of Test/UAT etc..)

How to Deploy Data in an SSDT Unit Test but Only for the Unit Test

I have successfully begun to write SSDT unit tests for my recent stored procedure changes. One thing I've noticed is that I wind up creating similar test data for many of the tests. This suggests to me that I should create a set of test data during deployment, as a post-deploy step. This data would then be available to all subsequent tests, and there would be less need for lengthy creation of pre-test scripts. Data which is unique to a given unit test would remain in pre-test scripts.
The problem is that the post-deploy script would run not only during deployment for unit tests, but also during deployment to a real environment. Is there a way to make the post-deploy step (or parts of it) run only during the deployment for an SSDT unit test?
I have seen that the test settings in the app.config include the database project configuration to deploy. But I don't see how to cause different configurations to use different SQLCMD variables.
I also see that we can set different SQLCMD variables in the publish profiles, but I don't see how the unit test settings in app.config can reference different publish profiles.
You could use an IF statement checking @@SERVERNAME and only run your unit test seed code on the unit test server(s), with the same kind of check for the other environments.
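A rough sketch of that guard in the post-deployment script (the server name and the seed data are made up; you could equally key this off a SQLCMD variable instead of @@SERVERNAME):

```sql
-- Post-deployment script: only seed the shared test data on the unit test server.
IF @@SERVERNAME = N'UNITTEST-SQL01'   -- assumed server name
BEGIN
    INSERT INTO dbo.Customer (CustomerId, CustomerName)
    VALUES (1, N'Test Customer A'),
           (2, N'Test Customer B');
END
```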
Alternatively, you could make use of the build number in your TFS build definition. If the build number contains, for example, the substring 'test', you could execute the test code, otherwise not. Then you make sure to set an appropriate build number in all your builds.

What kind of tests should run on the CI server?

Currently, our unit tests are committed along with the application code and executed by a build bot job on each commit. Likewise, code coverage is calculated. However, both UTs and coverage are - or can be - conducted by the developers before they commit new features to the repository, so the CI process does not seem to add any value.
What kind of tests do you suggest should be executed on the CI server which are not executed by the developer's prior to commit?
so the CI process does not seem to add any value
No?
What happens if a developer doesn't run the tests before committing?
What happens if a developer commits a change which conflicts with a change committed by another developer?
What happens if the tests pass on the developer's machine but not on other machines?
What happens if another developer has changed a test?
...
CI isn't "continuous testing", it's "continuous integration". The need to run the tests as part of the integration build is to validate that the changes committed can successfully integrate with what's already there. Whether or not the tests passed on the developer's local potentially-non-integrated workstation is immaterial.
The unit tests (any reasonably fast automated tests, really) should be executed by the CI server to validate the current state of the build. That state may be different than what is on one individual developer's workstation.
They may be the same tests which were just run by the developer moments prior. But the context in which the tests run is physically different and the need to run them is semantically different. Unless you're being charged by the clock cycle, there's no reason to omit running tests.
David raises very good points. One thing that he didn't mention though is that automated testing in a continuously integrated environment can be even more powerful when it goes beyond unit tests. The CI process allows you to run integration and system level tests that would be too expensive to run on a dev box. For example, you may have unit tests for your persistence layer that run against an in-memory database. Your CI server however can run these same automated tests against a snapshot of your production database.
I completely agree with the previous posts, as unit testing is mostly done by devs, so which tests should be executed as part of the CI process is pretty much opinion based, depending on the goals of the team/project.
What is also important is that the CI server gives you a separate testing environment, so your test effort and execution can run independently. Your tests are executed in a clone of the production environment.
In my experience I've used the CI server mostly for system, integration, functional and regression testing, and UAT.