How Can I Increase the Timeout for Visual Studio Tests?

I'm working on a pretty sizable suite of tests for some code I'm writing (in Visual Studio 2012). For the most part, running the unit tests is no big deal. But I'm also including a lot of integration tests which have more external infrastructure dependencies. The number of tests, combined with resetting the infrastructure dependencies between tests, has resulted in a rather lengthy test run for the full suite (around 45 minutes at the moment).
Running the tests is no big deal. Unit tests will be run on check-in, integration tests nightly. However, I'm running into an issue when trying to analyze code coverage for all of the tests. No code coverage results are created, and the output window says the following:
This request operation sent to net.pipe://megara/vstest.discoveryengine/14108 did not receive a reply within the configured timeout (00:30:00). The time allotted to this operation may have been a portion of a longer timeout. This may be because the service is still processing the operation or because the service was unable to send a reply message. Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client.
I'm not sure where it's directing me here. I don't use IContextChannel for anything; all of the test running is built into Visual Studio. So I don't really know where or how I can increase any kind of timeout. Does anybody know where I should look?

Try changing the time-out values in your solution .testsettings file.
If you don't have one, you can add it to the solution by right-clicking the solution -> Add New Item -> TestSettings. In there you can set time-outs for individual tests (the default is 30 minutes) or for an entire test run.
It's not clear if this is the root cause or not, but it is worth ruling out.
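For reference, the relevant part of a .testsettings file looks roughly like this (a minimal sketch; the element names follow the Visual Studio test settings schema, and the millisecond values here are just examples):

<?xml version="1.0" encoding="UTF-8"?>
<TestSettings name="LongRunning" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Execution>
    <!-- Both values are in milliseconds: runTimeout caps the entire run,
         testTimeout caps each individual test. -->
    <Timeouts runTimeout="10800000" testTimeout="5400000" />
  </Execution>
</TestSettings>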

Old topic, but perhaps some new info: I didn't have any luck trying to set the timeout in the .testsettings file with Visual Studio 2015. No matter what I set it to in test settings, my tests would stop after 30 minutes.
There is now a [Timeout(milliseconds)] attribute which can be applied to individual test methods. This is even better than testsettings, since you can fine-tune individual tests to make sure they aren't taking longer than expected.
Unfortunately, while a .testsettings file was in use, I could not get this attribute to have any effect for values above 30 minutes, even when the .testsettings file itself defined a higher timeout. Values lower than 30 minutes were honored, but higher values would still stop at 30 minutes regardless of what the testsettings said.
After I removed the .testsettings file, the timeout attributes seem to be working as expected - the test will run up to whatever timeout I set it to.
If you have trouble getting the timeout attribute to work, try removing .testsettings.
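For illustration, a minimal sketch of the attribute in use (the class and test names are hypothetical; TestTimeout.Infinite disables the timeout entirely):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LongRunningIntegrationTests
{
    // 45 minutes, expressed in milliseconds.
    [TestMethod]
    [Timeout(45 * 60 * 1000)]
    public void FullInfrastructureRoundTrip()
    {
        // ... long-running integration test body ...
    }

    // No timeout at all.
    [TestMethod]
    [Timeout(TestTimeout.Infinite)]
    public void UnboundedMigrationTest()
    {
        // ...
    }
}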

Related

Timeout when starting a Service in Windows

We're currently facing some issues trying to start a Windows service: an executable produced by compiling C++ code against the .NET Framework (Windows\Microsoft.NET\Framework\v2.0.50727).
We are able to compile, start and execute the exact same service in our DEV environment, which consists of Windows 7 installed in VirtualBox with Visual Studio 2005 (it's old software, I know...).
When we do it in our Test environment, we get a timeout error when trying to start the service (1053: The Service Did Not Respond to the Start or Control Request in a Timely Fashion). The server is running Windows Server 2008 R2 Standard. We already tried changing the timeout, without success. We also compiled the source code using the same Visual Studio 2005 software, and although we were able to compile it successfully, we are still getting the same timeout message.
We are currently trying to understand what's objectively causing the different behaviour. The most obvious difference is the Windows version, naturally, but since the compilation output appears identical, there are no errors during the compilation process in either environment, and both output logs are identical as well, we are running low on ideas for identifying and validating objective differences. Our latest approach is to use Dependency Walker (dependencywalker.com) to try to identify any issue regarding DLLs (we also checked, using the Windows command for it, whether any DLL or system file in the environment is corrupt, and none is). We're also checking some of the suggestions made in the following post: Error 1053 the service did not respond to the start or control request in a timely fashion.
Has anyone faced a similar issue? If so, can you suggest any approach for identifying why the service isn't starting, other than the ones mentioned here?
Thanks in advance.
We ended up extending the Windows service startup timeout to 10 minutes, followed by the necessary restart, and we managed to start the service normally and almost immediately, which makes us think that the timeout was never really the issue here. Although we haven't been able to pinpoint the exact cause, we are still inclined to think it was a Windows/Server related issue.
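For anyone else landing here: the startup timeout behind error 1053 is usually raised through the ServicesPipeTimeout registry value (in milliseconds), followed by a reboot. A sketch of that change, assuming the standard location:

reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout /t REG_DWORD /d 600000 /f
:: 600000 ms = 10 minutes; the new value only takes effect after a restart.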

What are the settings to be set to get Impacted Test results in Azure DevOps for MSTest

I want to get an Impacted test result in MSTest but am not getting the expected result. I have followed all the instructions written here - https://learn.microsoft.com/en-us/azure/devops/pipelines/test/test-impact-analysis?view=azure-devops
These are the VSTS log files; here you can see all the configuration done for Impact Analysis.
This is the test result image, where I cannot see the Impacted results.
My main branch is "Build Development" and the child branch is "Mstest_UT". We have rebased it, but I still did not get the impacted results as expected.
After doing some research, I learned that you only get Impacted test results if all test cases pass, so I made sure of that too, but still did not get such a result.
[TestMethod]
public void GetAboutTideContent_Passing_Valid_Data()
{
    // Arrange: stub the repository so it returns an empty AboutTideEditor
    iAboutTideEditorRepository
        .Setup(x => x.GetAboutTideContent(It.IsAny<ApplicationUser>()))
        .Returns(new AboutTideEditor() { });

    // Act
    ResponseData<AboutTideEditor> actual =
        aboutTideService.GetAboutTideContent(It.IsAny<ApplicationUser>());

    // Assert
    Assert.AreEqual(ProcessStatusEnum.Success, actual.Status);
}
I am writing a mock test in MSTest.
I am expecting Impacted test results.
From what I understand from the link you provided, you should use this type of testing from the start of your project (the "growth and maturation of the test" wording hints at some kind of learning ability in the software). If you're introducing it halfway through, the program might already be locked into performing particular tests in a certain way (MS tooling sometimes remains a "black box"). If that is the case, you should override/reset it and run from the start, without the program or the user having selected (detailed) tests. This of course might set you back several hours of testing, but weigh that against spending and losing even more time searching for what goes wrong; that cost keeps accumulating, so it is essential to minimize it. Also check the graph provided on the linked page; it is very informative about the order of actions (e.g. step 6).
In your first screenshot (the "black screen") there is a difference in the parallel setup (consider also the bullets below). The screenshot states that some DLL files were not found in the "test assembly". If it is possible to produce a test log, you might want to check that too, to see what typos might have occurred.
From the page:
At present, TIA is not supported for:
Multi-machine topology (where the test is exercising an app deployed to a different machine)
Data driven tests
Test Adapter-specific parallel test execution
.NET Core
UWP
In short: reset the whole test and run "fresh" to see if the errors persist.
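For completeness, TIA is switched on via the VSTest task in the pipeline; a minimal sketch of the relevant YAML, assuming the VSTest@2 task and a conventional test assembly pattern:

- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*Tests.dll
      !**\obj\**
    runOnlyImpactedTests: true       # enables Test Impact Analysis
    runAllTestsAfterXBuilds: 50      # periodic full run as a safety net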

Run Functional Tests step in vNext TFS 15 is awfully slow

Running functional tests in TFS 15 with vNext, in comparison to the old system with MTM and test environments, is awfully slow. It takes around 10 minutes after the initial test start before the first tests begin, and while running, the tests take longer than normal.
The distribution of tests is also slightly "unhappy": tests are distributed once, at the beginning of the test run, so having one machine sit finished while the other still has 5 long test runs left doesn't make sense. The bucket-size system was a far more intelligent approach.
Is there a way to improve this? We have updated to RC2 and we are not happy with the test outcome. It feels like the test task is a bottleneck.
OK, so the case is that the tests are distributed at the start of the test run, and the test runner itself now works differently, unlike the bucket system, which distributed the tests one after another.
Also, a test only shows whether it failed or finished AFTER the whole set is done, so if 200 tests get distributed, you will only see the outcome when all 200 are done.
Kinda awkward...

Ember acceptance tests fail when running all at once

I have problems with acceptance tests (Ember 0.10.0). The thing is, the tests run successfully if I run them one by one (passing the test ID in the URL), but when I try to run them all at once, they fail because of some async problems, I think (such as trying to click on an element which has not been rendered yet). Has anybody faced that? Here's the gist with an example of one of my tests.
P.S. I tried to upgrade the versions of qunit, ember-qunit, and ember-cli-qunit, but the problem still exists.
UPD 1
Here's the screenshot: https://pp.vk.me/c627830/v627830110/e718/tAwcDMJ0J4g.jpg
UPD 2
I simplified the tests as much as I could and now they pass about 50 percent of the time. I mean, I run all the tests and they are marked as passing; I run them all again and they fail. That blows my mind.
Common reasons for failing are:
Some resource that is used by more than one test isn't reset properly between tests. Typical shared resources are: databases, files, environment settings, locks. This is the most probable cause.
Some asynchronous work gets different timing and doesn't complete in time, and you use a timer instead of a more reliable way to wait for completion.
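To illustrate that second point in this thread's main language (C#), here is a minimal sketch of deadline-based polling as a substitute for a fixed sleep; all names are hypothetical:

using System;
using System.Diagnostics;
using System.Threading;

static class TestWait
{
    // Polls until the condition holds or the deadline passes,
    // instead of guessing a fixed delay with Thread.Sleep.
    public static void Until(Func<bool> condition, TimeSpan timeout)
    {
        var stopwatch = Stopwatch.StartNew();
        while (!condition())
        {
            if (stopwatch.Elapsed > timeout)
                throw new TimeoutException("Condition not met within " + timeout);
            Thread.Sleep(50); // short poll interval keeps the wait responsive
        }
    }
}

// Usage in a test:
//   TestWait.Until(() => page.IsRendered, TimeSpan.FromSeconds(10));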

MSTest: inconsistently failing tests after changes when the project is under source control?

I have noticed that if I have a set of regression tests and decide to change a property on one of my objects (a DTO) from int to decimal, for example - I make all the other changes and the tests pass like normal. But if the project is under source control (VSS specifically), this small change will cause something strange to happen...
Similar to this question
Testing in Visual Studio Succeeds Individually, Fails in a Set
But a little different. I can make this change and try to run my tests, and any test that has an assert around this new data type will fail; but if I then click "debug checked tests" and it runs through the previously failed tests - they pass. No changes to the test code, etc.
Does anyone know why this might be happening? I hate to work outside of source control, but if my tests are not reliable... why have them at all in this case... and I live for testing code :P
Given the age of the question, I doubt it's still an issue for you, but I wonder: do you have bin or obj folders under source control, or an assembly that is in them?
If they are, then when you compile the app (before MSTest runs) the source-controlled assemblies are going to be read-only and won't get overwritten by the compiler, and thus your tests will run against out-of-date binaries.