I'm writing a bunch of unit tests with an HTTP client in them, for a custom Flutter package.
I noticed that when I run the tests with flutter test, the first two unit tests will start at approximately the same time.
This is not something I want, because the unit tests are supposed to write some data, and at the start of every unit test the data is reset so that every test starts off with the same data.
But since two tests run at the same time, they both access the same file and either corrupt it or fail to get access to it with FileSystemException: lock failed.
Is there any way to force the tests to run one by one, instead of multiple at once?
I tried putting them in separate files, but that did not work.
Thanks
By default, the flutter test command executes tests concurrently, but you can control the concurrency with its -j, --concurrency=<jobs> option.
As per the Flutter help document:
-j, --concurrency=<jobs> defines the number of concurrent test processes to run. This will be ignored when running integration tests. (defaults to "14")
Execute the command below to run all tests one by one:
flutter test --concurrency=1
Execute the command below to run all tests one by one with coverage:
flutter test --coverage --concurrency=1
If you have several tests that are related to one another, combine them using the group function provided by the test package.
Please check https://flutter.dev/docs/cookbook/testing/unit/introduction#5-combine-multiple-tests-in-a-group
I want to execute all tests from my application. Currently I do it with the command:
go test ./app/...
Unfortunately it takes quite a long time, even though individual tests run quite fast. I think the problem is that go needs to compile every package (with its dependencies) before it runs the tests.
I tried using the -i flag; it helps a bit, but I'm still not satisfied with the testing time.
go test -i ./app/...
go test ./app/...
Do you have any better ideas for how to efficiently test multiple packages?
This is the nature of go test: it builds a special test binary with additional code to execute (this is how it tracks code coverage).
If it isn't fast enough, you have two options:
1) Use bash tooling to compile a list of packages (e.g. using ls), and then execute each of them individually in parallel. There are many ways to do this in bash.
The problem with this approach is that the output will be interleaved, making failures difficult to track down.
2) Call t.Parallel() in each of your tests to allow the test runtime to execute them in parallel. Since Go 1.5, go test runs with GOMAXPROCS set to the number of cores on your CPU, which allows tests to run concurrently. Tests are still run sequentially by default, though; you have to call t.Parallel() in each test, telling the runtime it is OK to execute that test in parallel.
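For illustration, a minimal sketch of that pattern (the test names and the simulated work are made up):

package app

import (
	"testing"
	"time"
)

func TestFast(t *testing.T) {
	t.Parallel() // opt in: this test may run alongside other parallel tests in the package
	if 1+1 != 2 {
		t.Fatal("arithmetic is broken")
	}
}

func TestSlow(t *testing.T) {
	t.Parallel() // runs concurrently with TestFast instead of after it
	time.Sleep(100 * time.Millisecond) // stand-in for slow work
}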
The problem with this approach is that it assumes you followed best practices and used SoC/decoupling, have no global state that would mutate in the middle of another test, few or no mutex locks, no race condition issues (use -race), etc.
--
Opinion: Personally, I set up my IDE to run gofmt and go test -cover -short on every save. That way, my code is always formatted and my tests are run, only within the package I am in, telling me if something failed. The -cover flag works with my IDE to show me which lines of code have been tested versus not tested. The -short flag allows me to write tests that I know will take a while to run; within those tests I can check testing.Short() to see if I should t.Skip() them. There should be packages available for your favorite IDE to set this up (I did it in Sublime, Vim and now Atom).
That way, I have instant feedback within the package I'm editing.
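A minimal sketch of that -short pattern (the test name and the slow work are placeholders):

package app

import (
	"testing"
	"time"
)

func TestExpensiveRecalc(t *testing.T) {
	if testing.Short() { // true when go test -short is used
		t.Skip("skipping expensive test in -short mode")
	}
	time.Sleep(2 * time.Second) // stand-in for the genuinely slow part
}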
Before I commit the code, I then run all tests across all packages. Or I can just have the CI server do it.
Alternatively, you can make use of the -short flag and build tags (e.g. go test -tags integration) to refactor your tests, separating your unit tests from your integration tests. This is how I write my tests:
Tests that are fast and can run in parallel <- I make these tests run by default with go test and go test -short.
Slow tests or tests that require external components require additional input to run; e.g. go test -tags integration is required to run them. This pattern does not run the integration tests with a normal go test; you have to specify the additional tag. I don't run the integration tests across the board either; that's what my CI servers are for. (A sketch of the build-tag pattern follows below.)
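A minimal sketch of the build-tag pattern (file and test names are made up; older toolchains use a // +build integration comment instead of the //go:build line):

//go:build integration

// integration_test.go: compiled and run only with go test -tags integration
package app

import "testing"

func TestAgainstRealBackend(t *testing.T) {
	t.Log("would exercise the external component here")
}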
If you follow a consistent naming scheme for your tests, you can easily reduce the number of tests you execute by using the -run flag.
Quoting from go help testflag:
-run regexp
Run only those tests and examples matching the regular expression.
So let's say you have 3 packages: foo, bar and qux. Tests of those packages are all named like TestFoo.., TestBar.. and TestQux.. respectively.
You could do go test -run '^Test(Foo|Bar)' ./... to run only the tests from the foo and bar packages. (Note there is no trailing *: a pattern like '^Test(Foo|Bar)*' would match every test, because the group may match zero times.)
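For example, with that naming scheme in place (hypothetical packages and test bodies), the command above picks up the first two tests and skips the third:

// foo/foo_test.go
package foo

import "testing"

func TestFooParse(t *testing.T) { t.Log("selected") }

// bar/bar_test.go
package bar

import "testing"

func TestBarStore(t *testing.T) { t.Log("selected") }

// qux/qux_test.go
package qux

import "testing"

func TestQuxRender(t *testing.T) { t.Log("not selected") }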
We've customized a product which includes its own PHPUnit test suite. In Jenkins, I have two jobs set up: the first runs our own test suite that covers our customizations, and the second runs the existing core unit tests.
The core unit tests were not designed to be run on a customized version, so failures are expected. Out of the ~5000 tests, 81 fail. What I'd like to set up in Jenkins is to have the build marked as a failure only if the number of failed tests changes from the previous build.
I've looked at the Performance plugin but the documentation seems sparse and I'm trying to find something that matches our use case.
Any suggestions?
You should have a look at the xUnit plugin: https://wiki.jenkins-ci.org/display/JENKINS/xUnit+Plugin
It handles a thresholding mechanism (I specified this requirement for the xUnit plugin when my team developed it).
Hope this helps.
But you want to associate the failure with a specific change... Hmm, that may be more complex. I'd have to ask whether such a thing should be developed.
We have a large Hybris project here, and running all the tests takes much too long (hours, yes, a large consulting company created that crap). My goal is to reduce all the Spring-based integration tests and replace them with real unit tests.
But running the tests with the Hybris ant build for one extension (ant alltests -Dtestclasses.extensions=myext) starts a server with the junit tenant, even if there are only non-Spring unit tests in that extension. I also tried ant unittests, but that one does not even execute my tests.
Is there any way to run only the tests annotated with @UnitTest, without any server start, in an ant run?
PS: I have the Hybris 5.1 and 5.3 commerce suites.
You should use ant unittests and not ant alltests:
ant unittests -Dtestclasses.extensions=myext
Note
Running simple unit tests exclusively is not so easy whenever someone uses Registry.getApplicationContext() somewhere in the code under test!
In fact, Registry.getApplicationContext() starts a Hybris instance. If that happens to you, you need to eliminate that particular call with a better class design and/or mocks.
This is good information. However, in my opinion, even running the unit tests for a single extension is still too much. Unit tests are supposed to be FAST! I should be able to run a single unit test method from within my IDE if I choose to. The whole concept of "red-green testing" is lost if I have to wait for a bunch of irrelevant unit tests to run every time I want to test my refactored code.
Because these tests rely on a runtime environment, there are NO unit tests in Hybris. There are only integration tests because they all rely on a running Hybris system to be executed.
I would like to give some details on how to run unit tests from within the IDE:
Install IntelliJ
Install the Hybris plugin (https://plugins.jetbrains.com/plugin/7525-hybris-integration)
Import the project
Run the unit test as any normal developer would
Enjoy :)
When using WebStorm as a test runner, every unit test is run. Is there a way to specify running only one test? Even running only one test file would be better than the current behavior of running all of them at once. Is there a way to do this?
I'm using Mocha.
Not currently possible; please vote for WEB-10067.
You can double up the i on it or the d on describe (iit, ddescribe) and the runner will run only that test/suite. If you prefix them with x, it will exclude them.
There is a plugin called ddescribe that gives you a GUI for this.
You can use the --grep <pattern> command-line option in the Extra Mocha options box on the Mocha "Run/Debug Configurations" screen. For example, my Extra Mocha options line says:
--timeout 5000 --grep findRow
All of your test *.js files, and the files they require, still get loaded, but the only tests that get run are the ones that match that pattern. So if the parts you don't want to execute are tests, this helps you a lot. If the slow parts of your process automatically get executed when your other modules get loaded with require, this won't solve that problem. You also need to go into the configuration options and change the pattern every time you want to run a different set of tests, but this is quick enough that it definitely saves me time versus letting all my passing tests run every time I want to debug one failing test.
You can run the tests within a scope when you have a Mocha run configuration by using .only on either the describe or the it clauses.
I had some problems getting it to work consistently. When it went crazy and kept running all my tests, ignoring the .only or .skip, I added to the Extra Mocha options the path to one of the files containing unit tests, just like in the example for the Node setup, and suddenly the .only feature started working again, regardless of which file the tests were in.
I have a lot of test suites and tests, and their execution time is very long.
I have an idea about adaptive testing: modify a unit test framework (JUnit, for example) to run the tests that take less time at the beginning and those that take a long time at the end.
I'm also thinking of defining an annotation like @RunFirst to tell the test framework to run that test at the beginning, so the developer can test the functionality they are working on first, which saves a lot of time in getting the answer.
My questions are:
Is there any programmatic way to order the execution of tests? (I already checked this page, but it doesn't look like an appealing solution to me)
Can we access the statistics of each test, like how long each one takes?
Can we get the result of each test right after it is executed and show it to the user, or do we have to wait until all the tests are executed?
"to run those tests which take less time at the beginning"
If you are really interested in doing this, then you have some test cases that take a long time. Those are almost certainly not really unit tests, but rather integration tests. I would instead suggest moving those test cases to a separate "integration tests" directory, and running all the integration tests after the unit tests.
Edit
See the following related questions:
How-to organize integration tests and unit tests
Maven - separate integration tests from unit tests
Do you separate your unit tests from your integration tests?