Basically I created a new test file in a particular package with some bare-bones test structure - no actual tests... just an empty struct type that embeds suite.Suite, and a function that takes a *testing.T and calls suite.Run() on said struct. This immediately caused all our other tests to start failing non-deterministically.
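For context, the file was essentially just this skeleton (package and type names are placeholders):

    package somepkg

    import (
        "testing"

        "github.com/stretchr/testify/suite"
    )

    // ExampleSuite is the bare-bones suite described above: no test
    // methods yet, just an embedded suite.Suite.
    type ExampleSuite struct {
        suite.Suite
    }

    // TestExampleSuite is the entry point go test discovers; it hands
    // the suite over to testify.
    func TestExampleSuite(t *testing.T) {
        suite.Run(t, new(ExampleSuite))
    }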
The failures were database unique-key integrity violations on inserts and deletes against a single Postgres DB. This leads me to believe that the tests were being run concurrently, without our setup methods being called to prepare the environment properly between tests.
Needless to say, the moment I move this test file to another package, everything magically works!
Has anyone else run into this problem before and can possibly provide some insights?
What I've found from my own use is that "go test" runs a single package's test cases sequentially (unless t.Parallel() is called), but if you supply multiple packages (go test ./foo ./bar ./baz), each package's tests run in parallel with the other packages'. This has definitely caused similar headaches with database testing for me.
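To illustrate the within-package behaviour (a minimal sketch, not from the original post):

    package foo

    import (
        "testing"
        "time"
    )

    // Within one package these run one after the other by default.
    // Uncommenting the t.Parallel() calls would let them overlap.
    func TestFirst(t *testing.T) {
        // t.Parallel()
        time.Sleep(100 * time.Millisecond)
    }

    func TestSecond(t *testing.T) {
        // t.Parallel()
        time.Sleep(100 * time.Millisecond)
    }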
As it turns out, this is a problem rooted in how go test works and has nothing to do with testify. Our tests were being run against ./..., which causes the underlying go test tool to run each package's tests in parallel, as justinas pointed out. After digging around more on StackOverflow (here and here) and reading through testify's active issue on this problem, it seems that the best immediate solution is to use the -p=1 flag to limit the number of packages run in parallel.
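With our invocation, that amounts to:

    go test -p=1 ./...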
However, it is still unexplained why the tests consistently passed prior to adding these new packages. A hunch is that the packages/test files happened to be sorted and run in such a way that concurrency wasn't an issue before the new packages/files were added.
Related
We're currently building an internal apparatus to run unit tests on a large C++ codebase, using Catch2 as the framework and our in-house VS test adapter (using ITestDiscoverer and ITestExecutor) to adapt them to our code practices. However, we've encountered issues with unit tests not always being discovered after a build.
There are a couple of things we're doing out of the norm that may be contributing. While we're using VS2019 for coding, we use FASTBuild and Sharpmake to build our solutions (which can contain countless projects). When we realised that VS would try to build the tests again using MSBuild before running them (even after a full rebuild), we disabled that behaviour in the VS options. Everything else seems to be running as expected, except that sometimes tests aren't picked up.
After doing some digging (namely, outputting a verification message to VS's Tests Output the moment our TestDiscoverer is entered), it seems that a test discovery pass isn't always invoked when we would expect it, sometimes even after a full solution rebuild. Beyond the usual expectation that building a project with new changes (or rebuilding outright) would cause a pass to start, the methodology VS uses to decide when to invoke the installed test adapters seems to be fairly black-box in terms of what exact parameters/conditions trigger it.
An alternative seems to be to allow the user to manually trigger a test discovery pass via some mechanism that could be wrapped in a VSPackage. However, initial looks through the VSSDK API for anything that would do the job have come up short.
Using the VSSDK, is there any way to invoke a test discovery pass independently of VS's normal logic for detecting whether a pass is required?
You would want to use the ITestContainerDiscoverer.TestContainersUpdated event. The platform should then call into your container discoverer to get the latest set of containers (ITestContainerDiscoverer.TestContainers). As long as the containers returned from the discoverer are different (based on ITestContainer.CompareTo()), the platform should trigger a discovery for the changed containers. This blog post has been quite helpful in the past: https://matthewmanela.com/blog/anatomy-of-the-chutzpah-test-adapter-for-vs-2012-rc/
I have a huge number of test cases running during the TFS build process.
Is there a way to rerun, on my local machine, all the test cases that fail on TFS? Maybe via configuration or an extension?
My problem is that it takes quite a while to run all the tests again, so I would like to run just those that fail.
The second problem is that the TFS build sometimes fails tests which work locally, so I'd like to figure out which ones I really broke.
I've never seen anything like this. I do think it would be possible to write a VS extension that pulls the test results from TFS, creates a test list file with all the failed tests, and then loads that in VS to rerun only the failed tests.
I wrote a simple extension and it wasn't that bad - http://dotnetcatch.com/2014/09/08/parameterizationpreview-visual-studio-extension/
I've tried exactly that. However, rerunning the tests locally didn't change anything: the tests still passed locally (even after more than 1000 tries - for that part I just put the tests in a for loop) but failed sporadically on TFS.
Check your log on TFS - or post it here - the log should tell you what failed in the tests, and maybe the failing tests should be reconsidered or refactored. Just because they pass locally doesn't mean they are right, if that makes sense. So my suggestion would be: check the log, rewrite the tests, and try again.
I have problems with acceptance tests (Ember 0.10.0). The thing is, the tests run successfully if I run them one by one (passing the test ID in the URL), but when I try to run them all at once, they fail because of some async problems, I think (such as trying to click on an element which has not been rendered yet). Has anybody faced this? Here's a gist with an example of one of my tests.
P.S. I tried upgrading the versions of qunit, ember-qunit, and ember-cli-qunit, but the problem still exists.
Update 1
Here's the screenshot: https://pp.vk.me/c627830/v627830110/e718/tAwcDMJ0J4g.jpg
Update 2
I simplified the tests as much as I could and now they pass about 50 percent of the time. I mean, I run all the tests and they are marked as passing; then I run them all again and they fail. That blows my mind.
Common reasons for failing are:
Some resource that is used by more than one test isn't reset properly between tests. Typical shared resources are: databases, files, environment settings, locks. This is the most probable cause.
Some asynchronous work gets different timing and doesn't complete in time, and you use a timer instead of more reliable ways to wait for completion (sketched below).
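A minimal Go-flavoured sketch of that second point, with hypothetical names:

    package worker

    import (
        "testing"
        "time"
    )

    // startWork stands in for some asynchronous operation; it signals
    // completion by closing the returned channel.
    func startWork() <-chan struct{} {
        done := make(chan struct{})
        go func() {
            // ... do the real work ...
            close(done)
        }()
        return done
    }

    func TestAsyncWork(t *testing.T) {
        done := startWork()

        // Flaky: time.Sleep(50 * time.Millisecond) and hope the work finished.
        // Reliable: block until the work signals completion, with a safety timeout.
        select {
        case <-done:
            // proceed to assertions
        case <-time.After(5 * time.Second):
            t.Fatal("work did not complete in time")
        }
    }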
I've got some unit tests (C++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means that it should never be a concern whether the tests execute synchronously or asynchronously.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and ensure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services, such as HTTP requests, file streams, etc.
You can read more here.
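As an illustration only (in Go, with made-up names, collapsed into one file for brevity), a hand-rolled mock replacing a slow dependency might look like this:

    package forecast

    import "testing"

    // Fetcher abstracts a slow external dependency (in production this
    // would be an HTTP call or a file read).
    type Fetcher interface {
        Fetch(city string) (string, error)
    }

    // Report builds a message from whatever the Fetcher returns.
    func Report(f Fetcher, city string) (string, error) {
        data, err := f.Fetch(city)
        if err != nil {
            return "", err
        }
        return "Forecast for " + city + ": " + data, nil
    }

    // fakeFetcher is the mock: it returns a canned value instead of
    // hitting the network, so the test is fast and deterministic.
    type fakeFetcher struct{ result string }

    func (f fakeFetcher) Fetch(string) (string, error) { return f.result, nil }

    func TestReport(t *testing.T) {
        got, err := Report(fakeFetcher{result: "sunny"}, "Oslo")
        if err != nil {
            t.Fatal(err)
        }
        if got != "Forecast for Oslo: sunny" {
            t.Errorf("unexpected message: %q", got)
        }
    }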
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names for NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
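For example, in Go's testify (the library from the first question in this thread), the per-test hooks are SetupTest and TearDownTest - a sketch, with placeholder names:

    package somedb

    import (
        "testing"

        "github.com/stretchr/testify/suite"
    )

    type DBSuite struct {
        suite.Suite
        // shared resources (DB handles, fixtures, ...) would live here
    }

    // SetupTest runs before every test method in the suite.
    func (s *DBSuite) SetupTest() {
        // insert fixture data, open connections, etc.
    }

    // TearDownTest runs after every test method, even if it failed.
    func (s *DBSuite) TearDownTest() {
        // delete fixture data so the next test starts from a clean slate
    }

    func (s *DBSuite) TestSomething() {
        // uses the state prepared by SetupTest
    }

    func TestDBSuite(t *testing.T) {
        suite.Run(t, new(DBSuite))
    }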
A simple example application:
it should read a config file
it should verify that the config file is valid
it should update the user's config
The way I would go about building and testing this (a rough Go sketch follows the list):
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test that the read method returns the desired result (an object, a string, or however you've designed it)
create a mock config of the shape you expect from the read method and test that the verify method accepts it
at this point, you should create multiple mock configs that exercise all the scenarios you can think of, check that the code handles each of them, and fix it accordingly. This is also what drives your code coverage.
create a mock object for an accepted config, use the setter to update the user's config, then use the getter to check that it was set correctly
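A rough sketch of the read/verify steps above in Go (the Config shape, the JSON format and the validation rule are invented for illustration):

    package config

    import (
        "encoding/json"
        "errors"
        "io"
        "strings"
        "testing"
    )

    // Config is the user's configuration.
    type Config struct {
        Theme   string `json:"theme"`
        Retries int    `json:"retries"`
    }

    // Read parses a config from any reader (a file in production, a string in tests).
    func Read(r io.Reader) (Config, error) {
        var c Config
        if err := json.NewDecoder(r).Decode(&c); err != nil {
            return Config{}, err
        }
        return c, nil
    }

    // Validate checks that the parsed config makes sense.
    func Validate(c Config) error {
        if c.Retries < 0 {
            return errors.New("retries must be non-negative")
        }
        return nil
    }

    // Each case below is a "mock config": no real file is touched.
    func TestReadAndValidate(t *testing.T) {
        cases := []struct {
            name    string
            input   string
            wantErr bool
        }{
            {"valid", `{"theme":"dark","retries":3}`, false},
            {"negative retries", `{"theme":"dark","retries":-1}`, true},
            {"not JSON", `theme=dark`, true},
        }
        for _, tc := range cases {
            t.Run(tc.name, func(t *testing.T) {
                c, err := Read(strings.NewReader(tc.input))
                if err == nil {
                    err = Validate(c)
                }
                if (err != nil) != tc.wantErr {
                    t.Errorf("got err=%v, wantErr=%v", err, tc.wantErr)
                }
            })
        }
    }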
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work correctly. Additional tests, for example end-to-end (E2E) tests, aren't necessarily needed; I use them only to ensure that the whole application flow works and to catch errors easily (e.g. an HTTP connection error).
We are in a situation where the database used as our test environment must be kept clean. This means that every test has a cleanup method which is run after each execution and deletes from the DB all the data that was needed for the test.
We use SpecFlow, and keeping the DB clean is achievable with this tool as long as the test execution is not halted. But while developing the test cases it happens that the execution is halted, so the generated data in the DB is not cleaned up.
The question came up: what happens when I press "Halt execution" in VS 2013? How does VS stop the execution? What method is called? Is it possible to customize it?
SpecFlow uses the MSTest framework here, and there is no option to change that.
I don't know how practical this is going to be for you, but as I see it you have a couple of options:
Run the cleanup code at the start and end of the test
Create a new database for every test
The first is the simplest and will ensure that when you stop execution in VS it won't impact the next test run (of the same test) as any remaining data will be cleaned up when the test runs.
The second is more complicated to set up (and slower when it runs) but means that you can run your tests in parallel (so is good if you use tools like NCrunch), and they won't interfere with each other.
What I have done in the past is make the DB layer switchable so you can run the tests against in-memory data most of the time, and then switch to the DB once in a while to check that the actual reading and writing isn't broken.
This isn't too onerous if you use EF6 and can swap the IDbSet<T> for some other implementation backed by an in-memory IQueryable<>.