How to trigger unit tests on ClearCase check-in

The testing framework I'm writing includes templates for users to write their own tests. These tests have two modes: one for setting up their related files and one for verifying those files. When users write a test, they must run it in setup mode once to generate those files, but I want to make sure they don't check in tests that are still in setup mode.
I can assert a test failure while in setup mode, but how can I trigger the unit tests at checkin and block the checkin if any of the tests fail?
Is there a better way to prevent users from checking in files in a specific configuration?

You can write a trigger to run the unit tests at checkin and block the checkin if the tests fail. I would run the tests against test scripts that are already in ClearCase; otherwise someone could create their own local version of the test that is trivially easy to pass (just return true).
Another method would be to allow checkin and only fire the trigger upon delivery. That way the user can checkpoint their code, but when they deliver it for others to use, the unit tests would need to pass before the deliver can complete.
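For reference, the checkin-time version of such a trigger could be created roughly like this (a sketch only: the trigger name and script path are placeholders, and the script is assumed to run the test suite and exit non-zero on failure, which is what makes ClearCase cancel the checkin):

    cleartool mktrtype -element -all -preop checkin \
        -exec "/usr/local/triggers/run_unit_tests.sh" \
        -c "Run unit tests before allowing checkin" \
        PRE_CHECKIN_UNIT_TESTS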

Related

Is there a way to do macro-level unit testing in kdb+?

I have seen a few different unit testing approaches online using qunit and k4unit, but I can only get them testing single functions. I was hoping that I could run a unit test that covers the daily checks I execute each day, such as "have the night jobs run correctly?", "are the dashboards on the WebUI up?", "did the deploy script run with no errors?". Is there built-in kdb+ functionality for these kinds of tests, or a clean way to adapt the qunit or k4unit tests? Or will it require a script written from scratch?
Thanks
I don't think a unit test is what you're looking for here. Some kind of reporting mechanism for jobs would be more appropriate. Within your existing jobs you could generate some kind of alert to indicate the job's success/failure. This qmail library may be useful for that.
I'm not sure what kind of system you're using, but AquaQ Analytics' TorQ system has a reporter process which can (amongst other things) email alerts for specific processes.
(Disclaimer: I'm an employee of AquaQ Analytics)

Testing Spark: how to create a clean environment for each test

When testing my Apache Spark application, I want to do some integration tests. For that reason I create a local Spark application (with Hive support enabled) in which the tests are executed.
How can I ensure that the Derby metastore is cleared after each test, so that the next test starts with a clean environment again?
What I don't want to do is restart the Spark application after each test.
Are there any best practices to achieve what I want?
I think that introducing application-level logic just for integration testing somewhat breaks the concept of integration testing.
From my point of view, the correct approach is to restart the application for each test.
That said, I believe another option is to start/stop the SparkContext for each test. That should clean up any relevant state.
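As a rough illustration (a Java/JUnit 5 sketch; the builder options are assumptions you would adapt, and with enableHiveSupport() you would additionally point the warehouse and Derby metastore at per-test temp directories):

    import org.apache.spark.sql.SparkSession;
    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;

    class SparkCleanEnvironmentTest {

        private SparkSession spark;

        @BeforeEach
        void startSpark() {
            // Fresh local session for this test.
            spark = SparkSession.builder()
                    .master("local[2]")
                    .appName("integration-test")
                    .getOrCreate();
        }

        @AfterEach
        void stopSpark() {
            // Stop the session and clear the cached sessions so that
            // getOrCreate() in the next test builds a new one.
            spark.stop();
            SparkSession.clearActiveSession();
            SparkSession.clearDefaultSession();
        }

        @Test
        void someScenario() {
            Assertions.assertEquals(10, spark.range(10).count());
        }
    }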
UPDATE - answer to comments
Maybe it's possible to do a cleanup by deleting tables/files?
I would ask a more general question: what do you want to test with your test?
In software development there are unit tests and integration tests, and nothing in between. If you want to do something that is neither an integration test nor a unit test, then you're doing something wrong. Specifically, with your test you are trying to test something that is already tested.
For the difference and general idea of unit and integration tests you can read here.
I suggest you rethink your testing and, depending on what you want to test, write either an integration or a unit test. For example:
To test application logic - unit test
To test that your application works in its environment - integration test. But here you shouldn't test WHAT is stored in Hive, only THAT the storage happened, because WHAT is stored should already be covered by a unit test.
So, the conclusion:
I believe you need integration tests to achieve your goals, and the best way to do that is to restart your application for each integration test. Because:
In real life your application will be started and stopped.
In addition to your Spark state, you need to make sure that all the objects in your code are correctly deleted/reused. Singletons, persistent objects, configurations - all of them may interfere with your tests.
Finally, as for the code that performs the cleanup for integration tests - where is the guarantee that it will not break production logic at some point?

Ava: separate integration and unit tests

I'd like to use the 'ava' tool for both unit and integration testing, but I can't figure out the best way to separate those tests. Unit tests should run before the code is deployed to the test environment, and integration tests need to run after the code has been deployed to the test server.
My challenge is that 'ava' reads its configuration from the 'ava' section of package.json. I am not sure how to tell it to use different sets of test sources depending on which stage of deployment it is in.
You can also use an ava.config.js file. For now, you could use environment variables to switch the config. Keep an eye on https://github.com/avajs/ava/issues/1857, though, which will add a CLI flag so you can select a different config file.
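In the meantime, a minimal ava.config.js along those lines could look like this (the TEST_STAGE variable name and the directory globs are assumptions to adapt to your project layout):

    // ava.config.js - switch the test file globs based on an environment variable.
    export default {
      files:
        process.env.TEST_STAGE === 'integration'
          ? ['test/integration/**/*.js']
          : ['test/unit/**/*.js'],
    };

You would then run the unit suite with a plain npx ava before deployment, and the integration suite with TEST_STAGE=integration npx ava after the code has been deployed.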

Do Spring Boot tests with MockMvc fit better in Surefire (mvn test) or Failsafe (mvn verify)?

I am assessing the pros and cons of each approach.
To begin with, I am not sure whether a MockMvc test can be considered a true integration test, since it mocks internal dependencies.
Even if I used an actual instance with real requests for my tests, I would still be mocking my external dependencies, and I am not quite sure the aim of a true integration/verify test is to test the environment as if it were real.
Besides, putting these controller tests in verify makes my pipeline longer and slower, since a failure only interrupts it after an unnecessary package phase and the like.
What do you think is a proper scheme for using these tools in a build process?
One of the ideas I have is to use two profiles:
- Profile 'test' would execute all IT tests with mocked external dependencies in the test phase
- Profile 'integration' would execute all IT tests with the real prod config in verify
But the tests themselves would be the same.
Speaking from personal experience, we've been in the same dilemma. We ended up using both types of test:
- unit tests managed by the Surefire plugin
- integration tests managed by the Failsafe plugin
Both ran during the build (but at different phases, of course).
Now, regarding the controller tests:
I believe unit tests should be blazing fast: tens or hundreds of them should run within a second or so. They also should not have external dependencies and should run entirely in memory (no sockets, networking, databases, etc.).
These tests should be run by the programmer at any time during development, maybe five times a minute, just to make sure a small refactoring doesn't break something, for example.
On the other hand, controller tests spin up the whole Spring context, which by definition is not that fast. As for external dependencies, depending on the MockMvc configuration you can even end up running some kind of internal server to serve the requests, so it's far (IMO) from being a unit test.
That's why we decided to run those with the Failsafe plugin and treat them as integration tests.
Of course, Spring contexts, if used properly, can be cached by Spring between tests, but that only helps integration tests run faster; it doesn't make this kind of test a unit test.
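A minimal sketch of this kind of test (assuming Spring Boot's test starter and the Failsafe default of picking up classes whose names end in IT; the class name and endpoint are made up for illustration):

    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.test.web.servlet.MockMvc;

    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

    // The *IT suffix means maven-failsafe-plugin (mvn verify) picks this class up,
    // while maven-surefire-plugin (mvn test) ignores it by default.
    @SpringBootTest
    @AutoConfigureMockMvc
    class CustomerControllerIT {

        @Autowired
        private MockMvc mockMvc;

        @Test
        void getCustomersReturnsOk() throws Exception {
            // /customers is a placeholder endpoint.
            mockMvc.perform(get("/customers"))
                   .andExpect(status().isOk());
        }
    }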

How to Deploy Data in an SSDT Unit Test but Only for the Unit Test

I have successfully begun to write SSDT unit tests for my recent stored procedure changes. One thing I've noticed is that I wind up creating similar test data for many of the tests. This suggests to me that I should create a set of test data during deployment, as a post-deploy step. This data would then be available to all subsequent tests, and there would be less need for lengthy creation of pre-test scripts. Data which is unique to a given unit test would remain in pre-test scripts.
The problem is that the post-deploy script would run not only during deployment for unit tests, but also during deployment to a real environment. Is there a way to make the post-deploy step (or parts of it) run only during the deployment for an SSDT unit test?
I have seen that the test settings in the app.config include the database project configuration to deploy. But I don't see how to cause different configurations to use different SQLCMD variables.
I also see that we can set different SQLCMD variables in the publish profiles, but I don't see how the unit test settings in app.config can reference different publish profiles.
You could use an IF statement checking @@SERVERNAME and only run your unit-test data setup on the unit test server(s), with the same kind of check for the other environments.
Alternatively, you could make use of the build number in your TFS build definition. If the build number contains, for example, the substring 'test', you could execute the test code; otherwise not. Then make sure to set an appropriate build number in all your builds.
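A rough T-SQL sketch of the first idea, for the post-deploy script (the server name, table, and values are placeholders for your environment):

    -- Seed shared test data only when deploying to the unit-test server.
    IF @@SERVERNAME = N'UNITTEST-SQL01'
    BEGIN
        INSERT INTO dbo.Customer (CustomerId, CustomerName)
        VALUES (1, N'Unit test customer');
    END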