I have a lot of test suites and tests, and their execution time is very long.
I have an idea about adaptive testing: modify a unit testing framework (JUnit, for example) so that it runs the tests that take less time at the beginning and the ones that take a long time at the end.
Also, I'm thinking of defining an annotation like "@RunFirst" to tell the framework to run that test at the beginning, so the developer can exercise the functionality they are currently working on first, which saves a lot of time in getting an answer.
My questions are:
Is there any programmatic way to order the execution of tests? (I already checked this page, but it doesn't look like an appealing solution to me.)
Can we access statistics for each test, like how long each one takes?
Can we get the result of each test as soon as it is executed and show it to the user, or do we have to wait until all the tests have run?
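For questions 2 and 3, the closest thing I have found so far is JUnit 4's RunListener (and, for ordering, JUnit 5's @TestMethodOrder/@Order), so something along these lines is what I have in mind; TimingListener and MyTests are just illustrative names:

    import org.junit.runner.Description;
    import org.junit.runner.JUnitCore;
    import org.junit.runner.notification.Failure;
    import org.junit.runner.notification.RunListener;

    // Listens to a JUnit 4 run, printing each result as soon as the test
    // finishes and measuring how long each test method took.
    public class TimingListener extends RunListener {
        private long startNanos;

        @Override
        public void testStarted(Description description) {
            startNanos = System.nanoTime();
        }

        @Override
        public void testFinished(Description description) {
            long millis = (System.nanoTime() - startNanos) / 1_000_000;
            System.out.println(description.getDisplayName() + " took " + millis + " ms");
        }

        @Override
        public void testFailure(Failure failure) {
            System.out.println(failure.getDescription().getDisplayName()
                    + " FAILED: " + failure.getMessage());
        }

        public static void main(String[] args) {
            JUnitCore core = new JUnitCore();
            core.addListener(new TimingListener());
            core.run(MyTests.class); // MyTests stands in for one of my own test classes
        }
    }

The timings collected this way could then feed the ordering, e.g. via @Order values or a custom MethodOrderer in JUnit 5, but I'm not sure this is the right approach.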
run the tests that take less time at the beginning
If you have test cases that take a long time, they are almost certainly not really unit tests, but rather integration tests. I would instead suggest moving those test cases to a separate "integration tests" directory and running all the integration tests after the unit tests.
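If you are on JUnit 4, one way to express that split in code is the (experimental) Categories runner; the class names below are only illustrative:

    import org.junit.Test;
    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Category;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // IntegrationTest.java - a marker interface used purely as a category label.
    public interface IntegrationTest {}

    // SlowDatabaseTest.java - a slow test marked as an integration test.
    @Category(IntegrationTest.class)
    public class SlowDatabaseTest {
        @Test
        public void savesAndReloadsARecord() { /* talks to a real database */ }
    }

    // FastCalculatorTest.java - a genuine unit test, no category needed.
    public class FastCalculatorTest {
        @Test
        public void addsTwoNumbers() { /* pure in-memory logic */ }
    }

    // UnitTestSuite.java - runs everything except the integration tests; a
    // mirror suite with IncludeCategory(IntegrationTest.class) runs the slow
    // ones afterwards.
    @RunWith(Categories.class)
    @Categories.ExcludeCategory(IntegrationTest.class)
    @Suite.SuiteClasses({ SlowDatabaseTest.class, FastCalculatorTest.class })
    public class UnitTestSuite {}

The same idea exists in JUnit 5 as @Tag plus build-tool filtering, and in Maven as the Surefire (unit) versus Failsafe (integration) split discussed in the questions linked below.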
Edit
See the following related questions:
How-to organize integration tests and unit tests
Maven - separate integration tests from unit tests
Do you separate your unit tests from your integration tests?
Related
A newbie in this art... but so far, from my reading, I understand that there are broadly 3 categories: unit tests, acceptance/integration tests (not the same) and end-to-end tests.
The thing is, of these 3, it appears that only unit tests are meant to run lightning-fast. It seems perfectly reasonable to be running ALL the unit tests for the entire project, all the time during development. But the same, it seems, can't be said of the other types.
It seems to me, therefore, that you'd want to be running a single acceptance test (or maybe a group of related ones) at each test run, while running all the unit tests for the whole project.
As for the latest end-to-end test that is in the "red" state, given that these can be even slower than acceptance tests, mightn't you want to run that only intermittently? And the entire end-to-end collection maybe only when you're doing something else, or at night or something?
I'm using Gradle, and I'm aware you can create a special test task to only run, for example, all the unit tests under a tests\unittests directory... but, if my thinking is valid, is there a habitual way of skipping, or selecting, particular acceptance tests, other than by constantly editing the code - which can get pretty tiresome?
For example, by somehow tagging particular acceptance or end-to-end tests as a certain "category", or maybe by arranging these tests in a hierarchical folder structure?
I have not used Gradle, but in Python I regularly use both approaches you described:
- tagging specific classes of functional tests (a subset is usually tagged as "smoke" tests, to be run on each deploy)
- representing tests in hierarchies:
  - small/unit
  - integration
  - functional (smoke tests are usually a tagged subset of the functional tests)
  - ui
  - e2e
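From what I understand, the JVM/Gradle equivalent of the tagging approach is JUnit 5's @Tag, which the build can filter on, so selecting a subset becomes a build switch rather than a code edit. A sketch (the tag names just mirror the hierarchy above):

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    // Tags mark each test's category; the build then picks which tags to run.
    class CheckoutAcceptanceTest {

        @Tag("smoke")
        @Test
        void customerCanCompleteACheckout() {
            // fast happy-path acceptance check, run on every deploy
        }

        @Tag("e2e")
        @Test
        void fullPurchaseFlowThroughTheUi() {
            // slow browser-driven scenario, run intermittently
        }
    }

In the Gradle test task this pairs with something like useJUnitPlatform { includeTags 'smoke' } (or excludeTags 'e2e'), which avoids the constant code editing you mention.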
it appears that only unit tests are meant to run lightning-fast. It seems perfectly reasonable to be running ALL the unit tests for the entire project,
This is the goal: all unit tests are encouraged to be I/O-free so they run lightning-fast on every single commit. This process is usually codified with CI build jobs that trigger on every commit to a repo.
But the same, it seems, can't be said of the other types.
It really depends on what an acceptable build time is, and on the size of your projects. I have found that most projects don't actually have that many integrations, and if they do have an excessive number of integrations, it is usually a good indication that the service should be rethought. For every integration, how many tests are necessary to protect against difficult-to-reproduce error cases and to make sure there are checks that will break on interface changes? In my experience, not many. I have recently started to use docker-compose for integration tests, which allows many tests (20-30) to be executed very quickly for every commit.
docker-compose also allows for a clean e2e environment to be brought up to have acceptance/functional tests executed against it.
It is also my experience that the higher-level tests are executed less frequently, but they should be executed as frequently as they can be. For example, I work with an API that has 300 functional tests covering every method on every endpoint. Because they don't interact with a UI and only use HTTP, they take about a minute to execute. They are executed on every deploy to an environment and at regular intervals.
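To make the "only use HTTP" part concrete, a functional test at that level can be as small as a request plus a couple of assertions; a sketch using Java's built-in HttpClient (the host and endpoint are made up):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    // A functional test that exercises a deployed API over HTTP only:
    // no UI, no direct database access, so hundreds of these stay fast.
    class UsersEndpointFunctionalTest {

        private final HttpClient client = HttpClient.newHttpClient();

        @Test
        void listUsersReturnsOkAndJson() throws Exception {
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://test-env.example.com/api/users")) // hypothetical environment
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            assertEquals(200, response.statusCode());
            assertTrue(response.headers().firstValue("Content-Type").orElse("").contains("json"));
        }
    }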
Let's assume we are not doing TDD (for which unit tests are obviously part and parcel), and have integration tests for all the use cases.
The integration tests assume a certain input and validate that the output is as expected.
My thinking is that adding a unit test for a method that is traversed in an integration test, using the same data as would exist in the method in the integration test, would not expose any additional bugs.
That would lead to the conclusion that, provided you have sufficient integration tests, you do not then need to unit test the same code.
So, can someone give a concrete example where a unit test could expose a bug in the above scenario?
Integration tests can be seen as a form of Acceptance Testing. They ensure that the software is doing what it is supposed to be doing.
Unit tests, on the other hand, aren't particularly useful for customers. A customer is not concerned that the InitializeServerConnection method is failing, but they are concerned that they're unable to send internal messages to their co-workers as a result.
So what good are unit tests for? They are a development tool, full stop. A unit test verifies that a cog in the machine is working properly. And if it is not, it is very easy to see it failing.
Arialdo Martini offers a great explanation:
Oversimplifying, a software system can be seen as a network of cooperating modules. Since they cooperate, some of them depend on others.
[...]
With integration and end-to-end tests you would be able to find all the broken features.
Yet, this is not of any help in guessing where the bug is. The same system, with the same bug, would instead produce failures only in the unit tests of the module that actually broke, pointing you straight at the culprit.
So, even though a unit test doesn't add any business value, it does add value in the form of reducing the amount of time spent manually testing, debugging, and sifting through code looking for the root cause of an issue.
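To tie this back to the question's request for a concrete example (the names are made up): suppose a small formatting "cog" drops the sender from internal messages. The end-to-end "send a message to a co-worker" test only tells you the feature is broken; the unit test below fails on the exact module that broke.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Hypothetical small "cog": formats an internal message for display.
    class MessageFormatter {
        String format(String sender, String body) {
            // Bug: the sender is accidentally omitted.
            return ": " + body;
        }
    }

    class MessageFormatterTest {

        // An integration test sending a message end-to-end would also fail,
        // but only this unit test points straight at the broken module.
        @Test
        void includesTheSenderInTheFormattedMessage() {
            MessageFormatter formatter = new MessageFormatter();
            assertEquals("alice: hello", formatter.format("alice", "hello"));
        }
    }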
Q1: When is it ideal to run unit tests? Should they be run before each time I go to debug the app? Should they be run before I commit changes to SVN? I think if an app only has a couple of unit tests, they should be run each time the app is about to be debugged. But let's say we have hundreds of unit tests that can take a while to complete; I'm not sure whether that is ideal. In that case I think it would be better to just run them before committing or deploying.
Q2: In my app I'm using a repository pattern with a service layer. I've done some research on how to test a service when the service is calling a repository and the repository is querying the database. So in order for it to be a true unit test and not an integration test, I have to find a way to test without touching the database. I found people are using Moq to mock their repository. Here's where I have a problem: to me it seems that if I mock a repository, I'm changing the behavior of how the method is supposed to work, which makes it seem like a pointless unit test. It doesn't seem you are actually testing your code. Am I completely wrong about this? Thanks for any advice.
Let me take a shot.
A1: When you refactor existing code, you should execute the corresponding unit tests (not all of them) and see if anything is broken by your changes. For new functionality you should implement new unit tests in parallel using TDD. You should not have to execute all the unit tests on your own; use or rely on continuous integration for that.
A2: I used to have the same opinion as you. But now I am convinced that unit testing the service layer is required. Whatever can be covered by unit testing should be covered. At this point the core of your services might just be delegation to repositories, but services evolve: they take up responsibility for parameter validation, authorization, logging, transactions, batch-support APIs, etc. Then it is not only data access but many more things. If I were in your place, I would go for unit testing of services by mocking repositories. Sometimes services provide convenient methods on top of the repository.
Hope it might be of some help to you.
A1. When making changes to your code, the more often you run the unit tests, the faster you will get feedback on whether the behavior they were written to assert has been affected, so the more often the better! Unit tests should be very fast, and running several hundred should only take a couple of minutes at most. It might also be worth looking into Infinitest (if you're working with Java; I expect an alternative exists for .NET etc.). It is an Eclipse plugin that automatically runs your unit tests when Eclipse builds your project, and it is clever enough to run only the tests that have been affected since the last run, e.g. if you update a test, or if you update some "application" code covered by unit tests, only those specific tests will be executed.
A2. Unit tests will cover many different scenarios that call your services and DAOs many times. Using "real" services makes it difficult to guarantee the results of each call (and setting up the data for each test can be painful), and it can also be slow. When unit testing it's usually better to mock these services, and to test them independently with integration tests.
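Your Moq concern from Q2 usually disappears once the service has any logic of its own; the mock only replaces the repository, not the behaviour you are testing. A sketch of the idea in Java with Mockito (UserRepository/UserService are made-up names; Moq works the same way in .NET):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    // Hypothetical collaborators, for illustration only.
    interface UserRepository {
        String findNameById(long id);
    }

    class UserService {
        private final UserRepository repository;

        UserService(UserRepository repository) { this.repository = repository; }

        String displayNameFor(long id) {
            if (id <= 0) {
                throw new IllegalArgumentException("id must be positive");
            }
            String name = repository.findNameById(id);
            return name == null ? "<unknown>" : name.trim();
        }
    }

    class UserServiceTest {

        @Test
        void trimsTheNameReturnedByTheRepository() {
            UserRepository repository = mock(UserRepository.class);
            when(repository.findNameById(42L)).thenReturn("  Alice  ");

            UserService service = new UserService(repository);

            // The test exercises the service's own behaviour (trimming),
            // not the repository or the database behind it.
            assertEquals("Alice", service.displayNameFor(42L));
        }

        @Test
        void rejectsNonPositiveIds() {
            UserService service = new UserService(mock(UserRepository.class));
            assertThrows(IllegalArgumentException.class, () -> service.displayNameFor(0L));
        }
    }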
I am new to testing in general and am working on a Grails application.
I want to write a test that says "when this action is called, the correct view is returned". I don't know how to go about deciding if I should make something like this a unit test or an integration test. Either test would show me what I want - how do I decide?
One problem with integration tests is their speed. For me, integration tests take 15+ seconds to start up. In that time, certain things do slip out of focus.
I prefer to go with unit tests that start in no more than 2 seconds and can be run several times in those 15 seconds. Especially with mockDomain(). Especially with Grails 2.0 implementing criteria and named queries in unit tests.
One more argument for unit tests is they force you to decouple your code. Integration tests always tempt you to just rely on some other component existing and initialized.
From Grails Docs section 9.1
Unit tests are tests at the "unit" level. In other words, you are testing individual methods or blocks of code without consideration for the surrounding infrastructure. In Grails you need to be particularly aware of the difference between unit and integration tests, because in unit tests Grails does not inject any of the dynamic methods present during integration tests and at runtime.
From Grails Docs section 9.2
Integration tests differ from unit tests in that you have full access to the Grails environment within the test. Grails will use an in-memory HSQLDB database for integration tests and clear out all the data from the database in between each test.
What this means is that a unit test is completely isolated from the Grails environment whereas an integration test is not. According to Scott Davis, author of this article, it is acceptable to write only integration tests...
Unit vs. integration tests
As I mentioned earlier, Grails supports two basic types of tests: unit and integration. There's no syntactical difference between the two — both are written as a GroovyTestCase using the same assertions. The difference is the semantics. A unit test is meant to test the class in isolation, whereas the integration test allows you to test the class in a full, running environment.
Quite frankly, if you want to write all of your Grails tests as integration tests, that's just fine with me. All of the Grails create-* commands generate corresponding integration tests, so most folks simply use what is already there. As you'll see in just a moment, most of the things you want to test require the full environment to be up and running anyway, so integration tests are a pretty good default. If you have noncore Grails classes that you'd like to test, unit tests are perfectly fine.
First, go through this chapter of the Grails guide: http://grails.org/doc/latest/guide/9.%20Testing.html
It talks about testing controllers and the ability to get the controller response, like so:
controller.response.contentAsString
Now, deciding which test to use is more of an art than a science. I prefer unit tests because they are faster to run :)
It's a really interesting and challenging question to answer, but the truth is it really depends on what exactly you are testing.
Take the following test: "saving a book to the database". The hints are in the description. We are saying we need a book and we need a database, so in this case a unit test won't do because we need the integrated database.
My advice is to write the full test description down and break it down like I did above. It will give you the hints you need to decide.
This is made easier with Spock, where you can use strings for test names.
I am wondering if you guys have any good reading on what to classify as unit testing / acceptance testing / integration testing. I have the following scenario, and we're having a bit of a debate at work about whether it should be covered by unit tests:
In our data access layer, some statements use SQL such as "select * from people where id IN ('x', 'y')", where the IN list is dynamically generated according to the input. Recently we found out that our Oracle database has a limit of 1000 values within an IN list.
I personally think this is not a unit testing scenario. In unit tests we test whether the SQL works against the database and whether the logic is right. However, stress testing should be done at a higher level.
If we are to test with thousands of records in unit tests, we need to fill the database with a large number of records each time, which might be inefficient.
Any advice?
Regarding your particular example, you should in fact consider writing two tests for it:
The first one is a unit test, which will check that your function can accept the maximum number of inputs required by the requirements (see the sketch below). If nothing is specified there, ask the analyst for clarification. Dynamically generated requests such as this are a pain to debug afterwards.
The second one is a stress test. It should not be performed on this particular part of your code, but rather on the integrated part that will use it. If you start stress testing individual unit blocks, you'll end up doing premature optimisation, because you'll lose the big picture and start making assumptions about how things will work together rather than observing how they really do.
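For the first of those two tests, the check does not need a database at all if the IN-list chunking lives in its own function; a sketch (splitIntoInClauses is a made-up helper):

    import java.util.ArrayList;
    import java.util.List;

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class InClauseBuilder {

        // Splits a long list of ids into chunks that respect Oracle's
        // 1000-item limit for IN (...) lists.
        static List<List<String>> splitIntoInClauses(List<String> ids, int maxPerClause) {
            List<List<String>> chunks = new ArrayList<>();
            for (int i = 0; i < ids.size(); i += maxPerClause) {
                chunks.add(ids.subList(i, Math.min(i + maxPerClause, ids.size())));
            }
            return chunks;
        }
    }

    class InClauseBuilderTest {

        // Pure unit test: verifies the limit is respected without ever
        // filling a real database with thousands of records.
        @Test
        void neverProducesAChunkLargerThanTheOracleLimit() {
            List<String> ids = new ArrayList<>();
            for (int i = 0; i < 2500; i++) {
                ids.add("id-" + i);
            }

            List<List<String>> chunks = InClauseBuilder.splitIntoInClauses(ids, 1000);

            assertEquals(3, chunks.size());
            assertTrue(chunks.stream().allMatch(chunk -> chunk.size() <= 1000));
            assertEquals(2500, chunks.stream().mapToInt(List::size).sum());
        }
    }

The stress/integration side, running a real chunked query against Oracle, can then stay in the slower suite.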
I believe that stress tests should be implemented as part of unit testing. Generally, your unit tests should contain:
Accuracy tests
Failure tests
Stress tests
If you don't want to run stress tests every time you run the other tests, you can consider grouping the stress tests in a separate test fixture.
Unit tests should test, and specify, the functionality of the unit under test. In this case you are testing the database, not the unit, so I don't think this is really a unit test.
A unit should be independent of the database it is using; if you are testing the way a unit interacts with a particular database, then it seems like an integration test to me.