I have a big mess: 100 tests in one class, all run by clicking "Test Project (...)". They run in a random order, and I would like them to run in a specific order, from beginning to end, in the same order that I wrote them. In Eclipse this isn't a problem because Eclipse just works that way; how do I do the same in NetBeans?
Any help will be appreciated.
Edit (due to answers): the test order is only needed to keep the log clear. The tests themselves are independent.
If your tests need to run in a specific order, something is wrong with your design.
Two tests that need to run one after the other are really one test. Consider this before searching for a solution.
Check this: https://blogs.oracle.com/mindless/entry/controlling_the_order_of_junit
Having tests depend on other tests is, 99.9% of the time, a very bad idea. Unit tests should be independent of each other; otherwise you can end up with a cascade of errors, or (even worse) one test failing because of something another test did earlier.
If you still want to go through this pain, you'll need to use a different unit testing framework (such as TestNG - see dependsOnMethods) which supports test dependencies.
JUnit doesn't support this feature because many see it as a bad practice (for very good reasons).
The next JUnit release will support ordering of test methods. The standard Maven Surefire Plugin supports ordering of test methods already.
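For example, Surefire's runOrder parameter can be set straight from the command line (the accepted values and exact behaviour depend on your Surefire version, so treat this as a sketch):
mvn test -Dsurefire.runOrder=alphabetical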
NetBeans has good integration with Ant build files. You could write a specific Ant target that executes each test in order.
Is there a way to execute only those tests that are affected by recent changes in Go? We have a large unit test suite and it is starting to take a while to finish. We are thinking of running, in a first pass, only those tests that are affected by the code changes.
Python has something like this: https://github.com/tarpas/pytest-testmon
Is there a way to do this in Go?
No, there is no way to do it in Go. All you can do is split your code into packages and test one package at a time:
go test some/thing
instead of all of them:
go test ./...
go test in Go 1.10 and newer does this automatically at the package level; any packages with no changes will return cached test results, while packages with changes will be re-tested.
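For example, repeating a full run reuses cached results for untouched packages, and the -count flag forces everything to run again when you need that:
go test ./...            # unchanged packages report their previous result as "(cached)"
go test -count=1 ./...   # bypass the cache and run every test again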
If a single package's tests are still taking too long, that points to a problem with your tests; good tests in Go generally execute extremely quickly, which means you probably need to review the tests themselves, and do some combination of the following:
Isolate integration tests using build tags. Tests that hit external resources tend to be slower, so making them optional will help speed up runs where you just want unit test results (a minimal sketch follows this list).
Make use of short tests so that you have the option of a quick pass you can run more frequently (also shown in the sketch after this list).
Review your unit tests - do you have unnecessary tests or test cases? Are your tests unnecessarily complex? Are you reading golden files that could be kept in constants instead? Are you deserializing static JSON into objects when you could create the object programmatically?
Optimize your unit tests. Tests are still code and poor-performing code can be optimized for performance. There are many cases in unit tests where we're happy to opt for convenience over performance in ways we wouldn't with production code, but if performance is a problem, that choice must be reconsidered.
Review your test execution - are you using uncacheable parameters to go test that are preventing it from caching results? Are you engaging the race detector, profiler, or code coverage reporting out of habit in cases where it's unnecessary?
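Here is a minimal sketch of the first two suggestions, assuming a hypothetical package mypkg; the test names and the external resource are placeholders:

// integration_test.go: only compiled when the "integration" build tag is set,
// e.g.  go test -tags=integration ./...   (Go 1.17+ constraint syntax)
//go:build integration

package mypkg

import "testing"

// TestAgainstRealDatabase is a hypothetical integration test that talks to an
// external resource, so it is excluded from ordinary "go test ./..." runs.
func TestAgainstRealDatabase(t *testing.T) {
    // ... exercise the real external resource here ...
}

// slow_test.go: always compiled, but the expensive part is skipped when
// tests are run with  go test -short ./...
package mypkg

import "testing"

// TestSlowPath is a hypothetical expensive test.
func TestSlowPath(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping slow test in -short mode")
    }
    // ... expensive setup and assertions here ...
}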
Nabaz may be what you are looking for.
The example from their README.md is
export CMDLINE="go test"
export PKGS="./..." # IMPORTANT make sure packages are written SEPARATELY
nabaz test --cmdline $CMDLINE --pkgs $PKGS .
You cannot rerun tests only for the last edited files. But there are a few ways of optimizing running tests.
Firstly, you have to split your project into logically separated packages. This way, a single change will (in most cases) only require rerunning the tests in that package.
Secondly, you can run the tests only for the package you're changing by typing
go test mypkg
or... you can use build tags. The last way of optimizing is to use the short-test functionality.
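You can also narrow a run further with the -run flag, which selects test functions by regular expression (the test name here is only a placeholder):
go test -run TestCheckout ./mypkg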
I want to follow TDD, but the command grails test-app CUT needs almost a minute to run because of "Resolving dependencies..." and "Resolving new plugins. Please wait...".
Each of those two stages takes about 20 seconds to complete while the tests only take up some seconds.
(I am unsure if this has any effect on the performance, but I am using dependency resolution via BuildConfig.groovy - and want to stick with it.)
How can I have grails only execute the tests and maybe skip the process of resolving?
How else could I speed up the process? (Note that grails interactive is unable to influence the speed of resolving.)
I had a similar issue and solved it by not using *-SNAPSHOT versions of any plugins. I downgraded to the latest non-SNAPSHOT release and cut "resolving dependencies" from 10 seconds to 1 second.
Ideas:
Try removing (or, to be safe, moving) the directory ~/.ivy2/cache. The next time you do a 'run-app', all the dependencies will be downloaded again from scratch. After doing this I got my 'Resolving Dependencies...' time down by about 5 seconds.
There are some more tips on how to fully clean your directories here. A full clean may help if you have some inconsistent files, etc.
Try turning the logging on in BuildConfig.groovy by setting log to "info" in the grails.project.dependency.resolution section. This can give you a better idea of which dependencies are taking the longest.
Make sure your .ivy2 directory is on your local machine. See here for more info.
In Grails 2 there's a new variant of the old (now deprecated) 'interactive' command. In order to start it, one must start grails without any arguments (i.e. grails <ENTER>).
Running test-app from there seems to skip dependency resolution, which ultimately makes tests run much faster now (~40 seconds less in the case mentioned).
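For example (the prompt may look slightly different depending on your Grails version):
grails
grails> test-app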
You should write your unit tests in a way that lets you run them directly from the IDE. I like looking at the green bar. For example in STS/Eclipse, just do "Run As->JUnit Test". If the test requires Grails to be running, it's not a unit test anymore (it's an integration test).
I am going to have to back up FlareCoder on this. Too many Grails developers get lazy using Grails-specific unit tests or, worse, make everything an integration test. This is fine if your project is relatively small and your team does not mind Grails starting up every time, but it does kind of fly in the face of true TDD.
Once you understand the full power of Groovy outside of Grails, you should try to write unit tests without depending on Grails. The true spirit of a unit test is not requiring a framework. Groovy on its own has many ways to stub/mock classes that don't require a long startup time. Then your unit tests can run individually, and as a whole, very fast. I do TDD this way in IntelliJ IDEA at the method level, and it is very fast.
It is NOT true that mocking in Grails requires Grails mocking ALL the time. Sometimes it is harder than other times to achieve this, but remember, Grails is simply an abstraction of many cool technologies, tied together with some Groovy metaprogramming, that allows quick development. If things aren't running like you expect, dig in and understand them so you can remove anything Grails is doing that you don't need.
I have never done ordered tests, as I am of the belief that it's not good practice.
Where I work I am told to do them, so let's cast aside what's good or bad practice.
I am new to msTests so could you help me here.
I have 10 tests that have to run in a particular order or some of them will fail.
I have created a Basic test class and added all the 10 tests.
I have created an ordered test and moved them over in the order I want to execute them. All fine.
I run the tests, but MSTest runs them twice. Once as the ordered test, where they all succeed, but it also runs the same tests again in no particular order.
Am I missing the obvious? If I have a set of tests that are ordered, shouldn't those be removed as normal tests and only run as the ordered test?
How can I make a set of tests only run as ordered tests?
Any suggestions?
I too struggled with this one, but then I found the following documentation on MSDN:
Ordered Test Overview
Apparently you don't get a list of the tests in the right order in the Test View.
Instead the ordered test appears as a single test.
To me this was not very good news, as my tests will be run twice when I choose "Run All Tests In Solution" (and fail the second time when run in the wrong order), but at least I got an explanation of why it behaves this way.
In VSTS, whenever you create an ordered test, it actually creates a separate file for that test. So, when executing, you need to run that ordered test file only. It includes all the tests in a particular order, and during execution they run according to that order only.
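For example, from the command line you can point MSTest at just the ordered test container (the file name is whatever Visual Studio generated for your ordered test):
MSTest.exe /testcontainer:MyOrderedTests.orderedtest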
This is a popular question (though I agree, it's very bad practise). Check out this SO question:
How Does MSTEST/Visual Studio 2008 Team Test Decide Test Method Execution Order?
I've not done this myself, so cannot guarantee that any of the answers in the above question worked, but it's worth a shot.
This may be an old topic to answer, but this question does come up on the first page when searching on Google. I think what you are looking for is a Playlist. Create a new test playlist and then add only the tests you want to run.
Well, Maven is not too good when it comes to speed.
But I want something that is more acceptable.
Imagine I wrote a test org.fun.AbcTestCase
In that test case, I include some JUnit / TestNG tests.
Now I want to run only this test case, say org.fun.AbcTestCase, from the command line.
How can I do that?
I know it's easy to do this within Eclipse or IDEA. However, I am learning Scala and IDE support is currently terrible, especially when it comes to running unit tests.
Here is why I find it difficult:
The project involves many dependencies. When I test my project as a Maven goal, Surefire takes care of that. Mimicking that with reasonable manual effort is important.
The test process needs to be fast enough, ideally with incremental compilation (recompiling the whole bunch of Scala code is a terrible nightmare).
Use the test parameter in the surefire:test mojo
mvn test -Dtest=MyTest
will run only the test MyTest.class, recompiling only if necessary (if changes are found).
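In your case that would be the following; note that selecting a single method with the Class#method form needs a reasonably recent Surefire version, and the method name here is only an illustration:
mvn test -Dtest=AbcTestCase
mvn test -Dtest=AbcTestCase#someSpecificTest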
If you are free to switch (as I imagine you might be if you have a toy project you're using to learn Scala) you might consider using SBT instead of Maven. Its IDE integration is only rudimentary, but it is quite handy for running tests (it can watch the source tree and re-run tests when you save, or even just the tests that failed during the last run.) Check out the website at http://simple-build-tool.googlecode.com/ .
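As a rough illustration using current sbt command names (the original simple-build-tool used different command names, so treat these as approximations):
sbt ~test        # watch sources and re-run tests on every save
sbt testQuick    # re-run only tests that failed or whose dependencies changed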
I am using the Boost 1.34.1 unit test framework. (I know the version is ancient, but right now updating or switching frameworks is not an option for technical reasons.)
I have a single test module (#define BOOST_TEST_MODULE UnitTests) that consists of three test suites (BOOST_AUTO_TEST_SUITE( Suite1 );) which in turn consist of several BOOST_AUTO_TEST_CASE()s.
My question:
Is it possible to run only a subset of the test module, i.e. limit the test run to only one test suite, or even only one test case?
Reasoning:
I integrated the unit tests into our automake framework, so that the whole module is run on make check. I wouldn't want to split it up into multiple modules, because our application generates lots of output and it is nice to see the test summary at the bottom ("X of Y tests failed") instead of spread across several thousand lines of output.
But a full test run is also time-consuming, and the output of the test you're looking for gets drowned out just the same; thus, it would be nice if I could somehow limit the scope of the tests being run.
The Boost documentation left me pretty confused and none the wiser; anyone around who might have a suggestion? (Some trickery allowing to split up the test module while still receiving a usable test summary would also be welcome.)
Take a look at the --run_test parameter - it should provide what you're after.
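For example, assuming your test executable is called unit_tests (substitute your real binary, suite, and case names, and note that the filter syntax may be more limited in a release as old as 1.34.1):
./unit_tests --run_test=Suite1
./unit_tests --run_test=Suite1/SomeTestCase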