We are in a situation where the database used as our test environment must be kept clean. This means every test has a cleanup method that runs after each execution and deletes from the database all the data the test needed.
We use SpecFlow, and keeping the database clean is achievable with this tool as long as the test execution is not halted. However, while developing test cases it happens that the execution is halted, so the data generated in the database is not cleaned up.
The question came up: what happens when I press "Halt execution" in VS 2013? How does VS stop the execution? What method will be called? Is it possible to customize it?
SpecFlow uses the MSTest framework here, and there is no option to change that.
I don't know how practical this is going to be for you, but as I see it you have a couple of options:
Run the cleanup code at the start and end of the test
Create a new database for every test
The first is the simplest and ensures that stopping execution in VS won't impact the next run (of the same test), as any remaining data is cleaned up when the test starts.
The second is more complicated to set up (and slower to run), but it means you can run your tests in parallel (good if you use tools like NCrunch), and they won't interfere with each other.
What I have done in the past is make the DB layer switchable, so you can run the tests against in-memory data most of the time and then switch to the real DB once in a while to check that the actual reading and writing isn't broken.
This isn't too onerous if you use EF6: you can swap the IDbSet<T> for some other implementation backed by an in-memory IQueryable<T>.
I want to get an Impacted test result in MSTest but am not getting the expected result. I have followed all the instructions written here: https://learn.microsoft.com/en-us/azure/devops/pipelines/test/test-impact-analysis?view=azure-devops
These are the VSTS log files; here you can see all the configuration done for Impact Analysis.
This is the test result image, where I cannot see Impacted results.
My main branch is "Build Development" and the child branch is "Mstest_UT". We have rebased it, but I still did not get the impacted result as expected.
After doing some research I learned that an Impacted test result only appears if all test cases pass, so I made sure of that too, but still did not get such a result.
[TestMethod]
public void GetAboutTideContent_Passing_Valid_Data()
{
    iAboutTideEditorRepository.Setup(x => x.GetAboutTideContent(It.IsAny<ApplicationUser>())).Returns(new AboutTideEditor() { });
    ResponseData<AboutTideEditor> actual = aboutTideService.GetAboutTideContent(It.IsAny<ApplicationUser>());
    Assert.AreEqual(ProcessStatusEnum.Success, actual.Status);
}
I am writing a mock test in MSTest.
I am expecting an Impacted test result.
From what I understand from the link you provided, you should use this type of test from the start of your project ("growth and maturation of the test" hints at some kind of learning behavior in the software). If you enable the test halfway through, the program might already be locked into performing particular tests in a certain way (MS tooling sometimes remains a "black box"). If that is the case, you should override/reset it and run from the start, without the program or user having selected (detailed) tests. This of course might set you back several hours of testing, but weigh that against the time spent and lost searching for what goes wrong; that keeps consuming time, and it is essential to minimize it. Also check the graph provided on the linked page; it is very informative about the order of actions (e.g. step 6).
In your first "black screen" there is a difference in the parallel setup (also consider the bullets below). The black screen states that some DLL files are not found in the "test assembly". If it is possible to produce a test log, you might want to check that too, to see what typos might have occurred.
From the page:
At present, TIA is not supported for:
Multi-machine topology (where the test is exercising an app deployed to a different machine)
Data driven tests
Test Adapter-specific parallel test execution
.NET Core
UWP
In short: reset the whole test and run it "fresh" to see if the errors persist.
This is somewhat tangential, but for clarity: I created a TimeMonitor event listener that, at the end of each test, compares the elapsed time with a policy and fails the test if it takes longer.
It works great with one exception: from time to time the system gets into a weird state and some of the tests take longer because of that. Note that my bar for unit tests is 15 ms, so this is not hard to hit.
I had this before, and the way I solved it was to keep a record and wait until the same test exceeded the limit several times before failing it. This has several flaws; the major one is the need to persist the data.
I think it will work better if I simply do two (or more) passes. In the first pass I collect the tests that exceeded their time, and in passes 2-N I repeat them to confirm or reject the problem.
My question is: how? What do I need to do (if possible) to programmatically collect a subset of tests and rerun them? Do I need to remove tests from testing::UnitTest::GetInstance(), or should I create another UnitTest?
A reference to something similar, such as retrying failed tests, would be great.
I know the following does not directly answer your question, but I believe a suggestion of a different approach is justified. I would suggest doing the test execution time analysis from a separate process, to simplify things and avoid changing the program that runs the tests. This way you can be certain that you have not influenced the execution time of your tests by inserting additional code that keeps track of tests whose execution time exceeds the threshold you have defined. Also, you won't need to modify the state of UnitTest objects and other details of the googletest implementation, which are harder to understand and potentially dangerous to touch.
The output of the executable that runs your test suite already provides the execution time for each test. Write a script that runs the test suite executable once and parses that output to determine which tests take too long to execute (this is easily done in a higher-level language such as Python). Then, if the script has found suspicious tests, it re-runs the test suite executable 2-N times, passing the --gtest_filter command line parameter. For example:
tests.exe --gtest_filter=*test1*:*test2*:...:*testN*
This way, only the suspicious tests will be re-run, and you will be able to determine whether any of them is indeed problematic.
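The script described above could be sketched in Python roughly like this; the threshold comes from the 15 ms budget in the question, the sample output string is canned for illustration, and the binary path would be yours:

```python
import re
import subprocess

THRESHOLD_MS = 15  # budget from the question

# a default gtest run prints e.g. "[       OK ] MySuite.MyTest (23 ms)"
OK_LINE = re.compile(r"\[\s+OK\s+\]\s+(\S+)\s+\((\d+)\s+ms\)")

def slow_tests(gtest_output, threshold_ms=THRESHOLD_MS):
    """Names of tests whose reported time exceeds the threshold."""
    return [name for name, ms in OK_LINE.findall(gtest_output)
            if int(ms) > threshold_ms]

def rerun(test_exe, names):
    """Re-run only the given tests via --gtest_filter, return their output."""
    filter_arg = "--gtest_filter=" + ":".join(names)
    return subprocess.run([test_exe, filter_arg],
                          capture_output=True, text=True).stdout

# first pass: parse the full run's output (here a canned sample)
sample = ("[       OK ] MySuite.Fast (3 ms)\n"
          "[       OK ] MySuite.Slow (42 ms)\n")
suspects = slow_tests(sample)
print(suspects)  # ['MySuite.Slow']
# passes 2-N would call rerun(test_exe, suspects), re-parse with
# slow_tests(), and keep only the tests that stay over budget
```

Only tests that exceed the budget on every confirmation pass would then be reported as genuinely problematic.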
If you do not want to use the values provided by googletest, you can modify your TimeMonitor to output the test execution times and parse those instead. However, it may be best to remove it entirely, so you are 100% sure you are not influencing the execution time of the tests.
Hope this helps!
The solution is actually simple (once you know it). Disclaimer: not tested with every possible corner case.
In (roughly) C++ rather than pseudo code:

// TimeMonitor is the listener from above; at first it only observes
// and builds a --gtest_filter string from the tests that ran too long
testing::InitGoogleTest(&argc, argv);
auto* time_monitor = new TimeMonitor;  // reporting disabled: just observe
testing::UnitTest::GetInstance()->listeners().Append(time_monitor);
int result = RUN_ALL_TESTS();
if (result == 0 && time_monitor->has_too_long_tests()) {
    time_monitor->activate_error_reporting();
    ::testing::GTEST_FLAG(filter) = time_monitor->the_filter();
    result = RUN_ALL_TESTS();  // second pass runs only the slow tests
}
Basically, I created a new test file in a particular package with some bare-bones test structure: no actual tests, just an empty struct type that embeds suite.Suite, and a function that takes a *testing.T and calls suite.Run() on that struct. This immediately caused all our other tests to start failing nondeterministically.
The failures were all database unique-key integrity violations on inserts and deletes against a single Postgres DB. This leads me to believe that the tests were being run concurrently without our setup methods being called to prepare the environment properly between tests.
Needless to say, the moment I move this test file to another package, everything magically works!
Has anyone else run into this problem before and can possibly provide some insights?
What I've found from my own use is that go test runs a single package's test cases sequentially (unless t.Parallel() is called), but if you supply multiple packages (go test ./foo ./bar ./baz), each package's tests run in parallel with the other packages'. This definitely caused similar headaches with database testing for me.
As it turns out, this is a problem rooted in how go test works, and has nothing to do with testify. Our tests were being run on ./... This causes the underlying go test tool to run each package's tests in parallel, as justinas pointed out. After digging around more on StackOverflow (here and here) and reading through testify's active issue on this problem, it seems that the best immediate solution is the -p=1 flag (go test -p=1 ./...), which limits the number of packages run in parallel.
However, it is still unexplained why the tests consistently passed before these new packages were added. A hunch: perhaps the packages/test files happened to be sorted and run in an order where concurrency wasn't an issue until the new packages/files arrived.
I'm trying to implement a unit testing platform (an automated unit test runner) in a way that lets tests be debugged, which involves clearing as many resources as possible between test runs, for example require.cache.
The problem I've been running into is that FSWatcher instances, if any are created by the unit tests and their associated code, are being duplicated for each test run creating an obvious memory leak, and printing big red warnings in the console. Is there a way to locate them from within the process to close them?
http://nodemanual.org/latest/nodejs_ref_guide/fs.FSWatcher.html
You can call close() on an FSWatcher (the object returned by fs.watch()), so if you keep references to the watchers your tests create, you can close them between runs.
I'm looking for a tool that can run a unit test, which is a normal Unix binary, in many concurrent instances. I also need the tool to gather any core dumps and stop on failure. The ability to allow some failures would be a bonus.
The idea is to stress test a multi-threaded application with a large number of test processes running concurrently. A single unit test crashes very seldom, so I want to run many of them at the same time to maximize my chances of catching the bug.
Extra credit if the tool can be daemonized to constantly run a set of binaries, with the ability to control it from outside.
UPDATE:
I ended up implementing a test driver in Python (it runs multiple tests concurrently, restarting a test whenever it completes successfully). The test driver can be signaled to stop by creating a stamp file. The driver is in turn invoked by a buildbot builder and stopped when a new revision is published. This approach seems to work reasonably well.
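A driver along those lines can be sketched in Python; the worker count, stamp-file path, and the failing demo command below are placeholders, not the original implementation:

```python
import os
import subprocess
import sys
import tempfile
import threading

def worker(cmd, stamp_file):
    # keep restarting the test until it fails or the stamp file appears
    while not os.path.exists(stamp_file):
        if subprocess.run(cmd).returncode != 0:
            # a failure (or crash): drop the stamp so every worker stops
            open(stamp_file, "w").close()
            return

def stress(cmd, stamp_file, workers=8):
    """Run `cmd` in `workers` concurrent loops until one run fails
    or the stamp file is created externally."""
    threads = [threading.Thread(target=worker, args=(cmd, stamp_file))
               for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return os.path.exists(stamp_file)  # True if some run failed

# demo: a "test" that exits non-zero on its first run stops the driver
stamp = os.path.join(tempfile.mkdtemp(), "stop.stamp")
failed = stress([sys.executable, "-c", "raise SystemExit(1)"], stamp,
                workers=4)
print(failed)  # True
```

Creating the stamp file by hand (e.g. from a buildbot step) stops all workers after their current run, which matches the signaling scheme described above.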
Valgrind, maybe?
http://valgrind.org/