Ember acceptance tests fail when running all at once - ember.js

I have problems with acceptance tests (ember 0.10.0). The thing is, tests run successfully if I run them one by one (passing the test ID in the URL), but when I try to run them all at once, they fail, I think because of some async problems (such as trying to click on an element that has not been rendered yet). Has anybody faced this? Here's the gist with an example of one of my tests.
P.S. I tried upgrading qunit, ember-qunit, and ember-cli-qunit, but the problem still exists.
UPD 1
Here's the screenshot: https://pp.vk.me/c627830/v627830110/e718/tAwcDMJ0J4g.jpg
UPD 2
I simplified the tests as much as I could, and now they pass about 50 percent of the time. I mean, I run all the tests and they are marked as passing; I run them all again and they fail. That blows my mind.

Common reasons for failing are:
Some resource that is used by more than one test isn't reset properly between tests. Typical shared resources are: databases, files, environment settings, locks. This is the most probable cause.
Some asynchronous work gets different timing and doesn't complete in time, and you use a timer instead of a more reliable way to wait for completion.
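For the Ember case specifically, here is a minimal sketch of both points as they would look in a QUnit acceptance test (the route, selectors, and the start-app helper path are assumptions based on a typical ember-cli setup, not the asker's actual code): each test boots a fresh application, and assertions wait on andThen() rather than a timer.

// Acceptance test sketch: boot a fresh application for every test and let the
// async helpers settle before asserting, instead of relying on timers.
import Ember from 'ember';
import { module, test } from 'qunit';
import startApp from '../helpers/start-app';      // helper generated by ember-cli (path may differ)

module('Acceptance: posts', {
  beforeEach: function () {
    this.application = startApp();                // fresh app, no leaked state from earlier tests
  },
  afterEach: function () {
    Ember.run(this.application, 'destroy');       // tear the app down so nothing leaks forward
  }
});

test('clicking New shows the form', function (assert) {
  visit('/posts');                                // async helpers queue their work...
  click('.new-post');

  andThen(function () {                           // ...and andThen runs once it has all settled
    assert.equal(find('.post-form').length, 1);
  });
});

Destroying the application in afterEach also addresses the first failure class above: whatever state a test creates is thrown away before the next test runs.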

Related

Jasmine + Karma tests fail at a specific test count

Ran into a really odd issue yesterday while running my Jasmine tests (which usually run headless, but can be debugged in Chrome). A test that usually passes seems to fail when I reach a specific total test count (678), but succeeds again the moment I have more than that. I reduced the number of tests so that I was only running that one test suite, and could reproduce the same problem at 177 tests, which I did by taking a very simple non-failing test and duplicating it a bunch more times.
I'm not seeing any other issues (e.g. a page reload error), and even stranger is that the test that supposedly fails doesn't match the line number Jasmine spits out as the offending line (which actually belongs to the following test). When I manually step through these, it's obvious that the spy IS called, and I believe I'm handling the async stuff correctly, as the code involves promises.
I know that isn't super specific, but I'm curious if anyone has run into this before, and has ideas for how to proceed in debugging this?
I came to the same conclusion that Sulthan did in his comment above, but it turned out to be a problem with where I was calling my expects relative to where I was calling done() in some tests that involved async calls/promises. It seems the number of tests being run created timing issues that finally exposed these problems.
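In other words, done() was being called before the promise had settled. A minimal Jasmine sketch of the difference (saveRecord is a hypothetical promise-returning function, not from the original code):

// Buggy: done() fires immediately, so the expect() inside then() can run
// after Jasmine has already moved on to the next spec.
it('saves the record (buggy)', function (done) {
  saveRecord().then(function (result) {
    expect(result.ok).toBe(true);   // may execute too late
  });
  done();                           // signals completion before the promise resolves
});

// Fixed: assert inside then() and only call done() once the promise has resolved.
it('saves the record (fixed)', function (done) {
  saveRecord().then(function (result) {
    expect(result.ok).toBe(true);
    done();
  });
});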

Collect and run all junit tests in parallel with each test class in its own JVM (parallelization by class, not by method)

Problem
I have a bunch of JUnit tests (many with custom runners such as PowerMockRunner or JUnitParamsRunner), all under some root package tests (they are in various subpackages of tests at various depths).
I'd like to collect all the tests under package tests and run each test class in a different JVM, in parallel. Ideally, the parallelization would be configurable, but a default of number_of_cores is totally fine as well. Note that I do not want to run each method in its own JVM, but each class.
Background
I'm using PowerMock combined with JUnitParams via the annotations @RunWith(PowerMockRunner.class) and @PowerMockRunnerDelegate(JUnitParamsRunner.class) for many of my tests. I have ~9000 unit tests which complete in an "ok" amount of time, but I have an 8-core CPU and the system is heavily underutilized with the default single-test-at-a-time runner. As I run the tests quite often, the extra time adds up and I really want to run the test classes in parallel.
Note that, unfortunately, in a good number of the tests I need to mock static methods which is part of the reason I'm using PowerMock.
What I've Tried
Having to mock static methods makes it impossible to use something like com.googlecode.junittoolbox.ParallelSuite (which was my initial solution) since it runs everything in the same JVM and the static mocking gets all interleaved and messed up. Or so it seems to me at least based on the errors I get.
I don't know the JUnit stack at all, but after poking around, it appears that another option might be to write and inject my own RunnerBuilder -- but I'm not sure I can even spawn another JVM process from within a RunnerBuilder, and it seems unlikely. I think the proper solution would be some kind of harness that lives as a Gradle task.
I also JUST discovered some Android Studio (IntelliJ) test options, but the only available fork option is method, which is not what I want. I am currently exploring this solution, so perhaps I will figure it out, but I thought I'd ask the community in parallel since I haven't had much luck yet.
UPDATE: I was finally able to get Android Studio (IntelliJ) to collect all my tests using Test Kind: All in directory (for some reason the package option did not search recursively) and picking fork mode Class. However, this still runs each test class it finds sequentially, and there are no options that I can see for parallelization. This is so close to what I want but not quite... :(
Instead of using IntelliJ's (Android Studio's) built-in JUnit run configurations, I noticed that Android Studio has a bunch of pre-built Gradle tasks, some of which refer to testing. Those, however, exhibited the same sequential execution problem. I then found Run parallel test task using gradle and added the following to my root build.gradle file:
subprojects {
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors()
    }
}
This works great; my CPU is now pegged at 100% for most of the run (once the number of outstanding test classes drops below the number of available processors, utilization obviously goes down).
The downside to this solution is that it does not integrate with Android Studio's (IntelliJ's) pretty JUnit runner UI. So while the Gradle task is progressing, I cannot really see the rate of test completion, etc. At the end of the task execution, it just spits out the total runtime and a link to a generated HTML report. This is a minor point and I can totally live with it, but it would be nice if I could figure out how to improve the solution to use the JUnit runner UI.
Maybe this was not possible when the question was posted, but now you can do it easily in Android Studio.
I am using the Gradle build tools: 'com.android.tools.build:gradle:2.2.3'
And I added the following to my root build.gradle file:
allprojects {
    // ...
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors()
    }
}
Now I have multiple Gradle Test Executor runners for my tests. The more cores your machine has, the more executors you get!
Thanks for sharing your original answer!
It may sound counterintuitive, but running a lower number of forks may actually be faster than running on all available cores.
For me this setup is 30 s faster (1:50 instead of 2:20) for the same tests, compared to using all available processors (8-core CPU, 16 threads):
subprojects {
    tasks.withType(Test) {
        maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
    }
}

Testify is seemingly running test suites concurrently?

Basically I created a new test file in a particular package with some bare-bones test structure - no actual tests... just an empty struct type that embeds suite.Suite, and a function that takes in a *testing.T object and calls suite.Run() on said struct. This immediately caused all our other tests to start failing nondeterministically.
The failures were associated with database unique-key integrity violations on inserts and deletes into a single Postgres DB. This leads me to believe that the tests were being run concurrently without calling our setup methods to prepare the environment properly between tests.
Needless to say, the moment I moved this test file to another package, everything magically worked!
Has anyone else run into this problem before and can possibly provide some insights?
What I've found from my use is that go test runs a single package's test cases sequentially (unless t.Parallel() is called), but if you supply multiple packages (go test ./foo ./bar ./baz), each package's tests are run in parallel with the other packages'. This definitely caused similar headaches with database testing for me.
As it turns out, this is a problem rooted in how go test works, and has nothing to do with testify. Our tests were being run with ./... This causes the underlying go test tool to run the tests in each package in parallel, as justinas pointed out. After digging around more on StackOverflow (here and here) and reading through testify's active issue on this problem, it seems that the best immediate solution is to use the -p=1 flag to limit the number of packages run in parallel.
However, it is still unexplained why the tests consistently passed prior to adding these new packages. A hunch is that perhaps the packages/test files were sorted and run in such an order that concurrency wasn't an issue before the new packages/files were added.

VS2012 - Disable parallel test runs

I've got some unit tests (c++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means it should never matter whether the tests execute synchronously or asynchronously.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and ensure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services, such as HTTP requests, file streams, etc.
When a project becomes complex, the setup takes a fair number of lines and the code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names for NUnit, MSTest, and xUnit.net can be found on the xUnit.net CodePlex page.
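For example, here is a Jasmine-style sketch (ConfigStore and its methods are hypothetical) of how the hooks keep every test starting from a clean state:

describe('ConfigStore', function () {
  var store;

  // Setup / beforeEach: runs before every test, so each one gets a fresh instance.
  beforeEach(function () {
    store = new ConfigStore();
  });

  // TearDown / afterEach: runs after every test, so state cannot leak into the next one.
  afterEach(function () {
    store.reset();
    store = null;
  });

  it('starts empty', function () {
    expect(store.keys().length).toBe(0);
  });
});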
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this:
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test that the read method returns the desired result (object, string, or however you've designed it)
create a mock config of the kind you're expecting from the read method and test that the verify method accepts it
at this point, you should create multiple mock configs covering all possible scenarios, check that each one is handled, and fix the code accordingly. This is also called code coverage.
create a mock object of an accepted config and use the setter to update the user's config, then use the getter to check that it was set correctly
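A rough sketch of what those tests could look like (readConfig, isValidConfig, and UserSettings are hypothetical names standing in for the real application code):

describe('config handling', function () {
  it('reads the config file into an object', function () {
    var config = readConfig('fixtures/config.json');   // hypothetical reader
    expect(typeof config).toBe('object');
  });

  it('accepts a valid mock config and rejects an invalid one', function () {
    expect(isValidConfig({ theme: 'dark', fontSize: 12 })).toBe(true);
    expect(isValidConfig({ fontSize: 'huge' })).toBe(false);
  });

  it('updates the user settings from an accepted config', function () {
    var settings = new UserSettings();
    settings.set({ theme: 'dark' });
    expect(settings.get().theme).toBe('dark');
  });
});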
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work perfectly. Additional tests, for example End-to-End (E2E) tests, aren't strictly necessary; I use them only to ensure that the whole application flow works and to catch errors easily (e.g. HTTP connection errors).

dart unittest suite never ending on drone.io

I have continuous integration on drone.io for my Dart projects. Normally there aren't any issues with this, except for actual bugs in my code, but my latest tests are all passing and the test suite reports that it completed successfully, yet the drone.io test runner never exits; it just keeps running until it times out and reports the build as failed. Has anyone else seen anything similar to this, or know how to fix it? Here is the build; if you kick off a new build from the big-refactor-and-enhancement branch, that is where it shows this odd behaviour.
After a quick look at your code, I would bet that the server launched under the covers is not shut down. You should add a close() method to it and call it in _tearDown().