C++ Visual Studio 2015 Google Test Framework through ReSharper Breaking - c++

I have been using Google Tests that someone else put together for me, and they have worked great...until today. Now when I try to run a test I get a very mysterious error and I am dead in the water. I even had the IT guys recreate my profile, which did have some issues, but I still get the same error with little to go on.
Any ideas? Anything. Dead in the water here.

If, instead of running an individual unit test as I was doing, you simply press F5 on the test project (something I had never actually done, since I always ran tests through the unit test framework), you can actually find the place where your program pukes.
My program was puking somewhere that had absolutely nothing to do with my unit test, so under the hood the framework clearly runs and initializes a lot of stuff that has nothing to do with my tests - in this case it was initializing an input stream from a file it could not find, well outside of what I was doing. Once I tracked that down, we were good again.
It was very counterintuitive that something so orthogonal to what I was doing caused this. ReSharper said they would try to give better error messages in the future.


Managed Debugging Assistant 'DisconnectedContext'

After setting up a unit test in VS2015 that created some COM objects using Unity, I started getting the following error:
Managed Debugging Assistant 'DisconnectedContext' has detected a problem in 'C:\PROGRAM FILES (X86)\MICROSOFT VISUAL STUDIO 14.0\COMMON7\IDE\COMMONEXTENSIONS\MICROSOFT\TESTWINDOW\te.processhost.managed.exe'.
I had a quick look to see if anyone else had the same problem, and a lot of the solutions were either to fire off the test in its own thread or to change the target architecture to x64. Neither of these felt quite right to me, as they are more like workarounds than fixes.
So after a little thought I realised the problem was that the COM objects are not being given enough time by the test framework to clear down. So I came up with the following solution, which worked.
To fix the problem I added the following code to the tear down / test clean up method of the unit test:
_unity.Dispose();
GC.Collect();
GC.WaitForPendingFinalizers();
The first line is only needed if you are using Unity; the main part of the fix is the last two lines. They force a garbage collection and then tell the current thread to wait until it has completed, allowing the COM objects to be cleared down properly.
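As a rough sketch of where this code sits (assuming MSTest's [TestCleanup] attribute and a _unity container field; adapt to whichever framework you use), the whole clean-up method ends up looking something like this:
[TestCleanup]
public void CleanUp()
{
    _unity.Dispose();               // only needed if the COM objects come out of a Unity container
    GC.Collect();                   // force a full collection so the COM wrappers become eligible for cleanup
    GC.WaitForPendingFinalizers();  // block until the finalizers have actually released the COM objects
}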

dart unitest suite never ending on drone.io

I have continuous integration on drone.io for my Dart projects. Normally there aren't any issues with this, aside from actual bugs in my code, but now my latest tests are all passing and the test suite reports that it completed successfully, yet the drone.io test runner never exits; it just keeps running until it times out and reports the build as failed. Has anyone else seen anything similar to this, or know how to fix it? Here is the build; if you kick off a new build from the big-refactor-and-enhancement branch, that is where it shows this odd behaviour.
After a quick look at your code, I would bet that the server launched under the covers is not shut down. You should add a close() method on it and call it in _tearDown().

Why does Run All cause a crash in VS2012 unit testing but running one by one doesn't?

We just "upgraded" from Visual Studio 2008 to Visual Studio 2012. We updated our unit tests and now they pass when running them individually but when I try to Run All, I got the following error:
The active Test Run was aborted because the execution process exited unexpectedly. To investigate further, enable local crash dumps either
at the machine level or for process vstest.executionengine.appcontainer.x86.exe. Go to more details: http://go.microsoft.com/fwlink/?linkid=232477
So I went to the link and followed the instructions to add the registry key to enable local crash dumps. The error message then changed to:
The active Test Run was aborted because the execution process exited unexpectedly. Check the execution process logs for more information.
If the logs are not enabled, then enable the logs and try again.
Apparently it noticed the changes I made in the registry to enable crash dumps. However, when I looked in %LOCALAPPDATA%\CrashDumps, no files were being created.
If I run one test at a time (or even a few tests at a time), I can get them all to pass. The problem is only with Run All.
Has anyone else encountered similar problems? If so, how did you solve them?
Essentially the same question was asked on MSDN, but the answer was something like "click the link to the crash dump". That answer doesn't help me because I don't see any link to the crash dump and I am unable to get the crash dump to be generated.
This question on StackOverflow is also similar, and ended up resulting in a bug being logged on Microsoft Connect (which looks to be deferred for some reason), but my problem might be different because my code has nothing to do with "async tasks" (I don't think).
EDIT: The problem went away, seemingly on its own, but the problem was likely an exception that wasn't being caught in the unit test code, as some of the answers below suggest. However, I'm still confused as to why the problem only appeared with Run All, and not when running smaller groups of tests or Debug All.
I had the same problem: the tests failed for apparently no reason. Later I found that a buggy method was causing a StackOverflowException. When I fixed my bug, the VS bug disappeared.
Maybe it works most of the time because you don't run the faulty code.
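A minimal sketch of how that plays out (names are invented): a StackOverflowException cannot be caught by managed code, so instead of one red test the whole execution engine process dies.
[TestMethod]
public void Parse_ReturnsZero_ForEmptyInput()
{
    Assert.AreEqual(0, Parse(""));   // never returns - the recursion below blows the stack
}

private static int Parse(string input)
{
    // Bug: the method calls itself with the same argument instead of doing any work,
    // so every run ends in a StackOverflowException that takes the test runner down with it.
    return Parse(input);
}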
The best workaround I have so far is to debug all. This is done via TEST -> Debug -> All Tests. It's obviously slower but it doesn't crash.
This can happen with certain errors, such as a stack overflow. Presumably this crashes the test runner, so it can't continue once it hits a test that causes the problem.
The solution, therefore, is to run all tests in debug (from the Test -> Debug menu) and Visual Studio will show errors like these.
For anyone else who may need this in future: my test runner was crashing when a console-specific call (Environment.Exit(-1);) was executed via the unit test. Even running in debug mode would just crash - I could not get a useful error message.
So my scenario is different from the main question's in that a) debug didn't work at all, and b) Run All vs running individually made no difference. That is because my error always arose, whereas the stack overflows of the original question did not.
The bottom line: the test runner will crash if it hits something it doesn't like, so you need to manually isolate the test and work out what the Bad Thing™ is.
For someone else looking for this: I had some code that was calling System.Environment.Exit(123), and I was unaware of this. So check for any code that terminates the process.
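One hedged way to protect the test runner from this (the interface and class names below are invented for illustration) is to hide the process exit behind an abstraction, so the real console app still exits while the test double only records the request:
public interface IProcessExiter
{
    void Exit(int exitCode);
}

public class RealProcessExiter : IProcessExiter
{
    public void Exit(int exitCode) { System.Environment.Exit(exitCode); }  // real behaviour in the console app
}

public class FakeProcessExiter : IProcessExiter
{
    public int? LastExitCode { get; private set; }
    public void Exit(int exitCode) { LastExitCode = exitCode; }            // remembers the call instead of killing the test process
}
The code under test takes an IProcessExiter; the unit tests hand it the fake and can even assert on LastExitCode.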
I've just had the same problem. It turned out it was my code - there was an infinite loop of WCF service calls. In your case it might be something else, so my proposal is to either remember (logs in your version control system?) or figure out (by excluding different tests from the run, e.g. by bisection) which place in the code leads to this behavior. And voilà! That place is both the cause of the problem and a bug in the code.
UPDATE
As for the questions in your EDIT: it could happen that running smaller groups of tests doesn't reproduce the issue. In that case, given that those groups together covered all the tests, one can assume that some tests interfere with each other. Maybe some static data or fields in a test class?
As for running tests in debug mode - I'm not surprised. The Visual Studio test runner behaves differently in "Run" mode vs "Debug" mode.
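A tiny illustration of the static-data interference mentioned above (hypothetical names): both tests are green when run on their own, but the second one fails whenever it runs after the first in the same process.
public static class PriceCache
{
    public static decimal? LastPrice;        // shared by every test in the run
}

[TestMethod]
public void Lookup_ReusesCachedPrice()
{
    PriceCache.LastPrice = 10m;
    Assert.AreEqual(10m, PriceCache.LastPrice);
}

[TestMethod]
public void Lookup_StartsWithEmptyCache()
{
    Assert.IsNull(PriceCache.LastPrice);     // green in isolation, red after the test above has run
}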
I had a similar problem, except that it wasn't a StackOverflowException. It was caused by my project under test using Entity Framework while the NUnit project had no references to the EntityFramework and EntityFramework.SqlServer assemblies. Adding references to the Entity Framework assemblies fixed it.
Just had the same problem. Closing and reopening Visual Studio fixed it for me.

MSTest: inconsistent failing tests after changes when the project is under source control?

I have noticed that if I have a set of regression tests and decide to change a property on one of my objects (a DTO) from int to decimal, for example - I make all the other changes and the tests pass like normal. But if the project is under source control (VSS specifically), this small change causes something strange to happen...
Similar to this question
Testing in Visual Studio Succeeds Individually, Fails in a Set
But a little different. I can make this change and run my tests, and any test that has an assert around this new data type will fail; but if I then click "debug checked tests" and it runs through the previously failed tests, they pass. No changes to the test code, etc.
Does anyone know why this might be happening? I hate to work outside of source control but if my tests are not reliable ... why have them at all in this case ... and I live for testing code :P
Given the age of the question, I doubt it's still an issue for you, but I wonder if you have bin or obj folders under source control, or an assembly that is in them?
If you do, then when you compile the app (before MSTest runs) the source-controlled assemblies will be read-only and won't get overwritten by the compiler, so your tests will run against out-of-date binaries.

Should you display what's happening in the unit test as it runs?

As I am coding my unit tests, I tend to find that I insert the following lines:
Console.WriteLine("Starting InteropApplication, with runInBackground set to true...");
try
{
InteropApplication application = new InteropApplication(true);
application.Start();
Console.WriteLine("Application started correctly");
}
catch(Exception e)
{
Assert.Fail(string.Format("InteropApplication failed to start: {0}", e.ToString()));
}
//test code continues ...
All of my tests are pretty much the same: they display information about why they failed, or they display information about what they are doing. I haven't been given any formal guidance on how unit tests should be coded. Should they display information about what they are doing? Or should the tests be silent, not display anything about what they are doing, and only report failure messages?
NOTE: The language is C#, but I don't care about a language specific answer.
I'm not sure why you would do that - if your unit test is named well, you already know what it's doing. If it fails, you know what test failed (and what assert failed). If it didn't fail you know that it succeeded.
This is subjective, but to me it seems like completely redundant information that just adds noise.
I personally would recommend that you output only errors and a summary of the number of tests run and how many passed. This is a completely subjective view though. Display what suits your needs.
I recommend against it - I think that the unit testing should work on the Unix tools philosophy - don't say anything when things are going well.
I find that constructing tests to give meaningful information when they fail is best - that way you get nice short output when things work and it's easy to see what went wrong when there are problems - errors aren't lost to scroll blindness.
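For example (the names here are made up), the context can live in the assertion message itself rather than in progress output, so you only ever see it when something actually breaks:
Assert.AreEqual(expectedRowCount, importedRows.Count,
    "Import of " + fileName + " dropped rows - check the header parsing");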
I would actually suggest against it (though not militantly). It couples the user interface of your tests to the test implementation (what if the tests are run through a GUI viewer?). As an alternative I would suggest one of the following:
I'm not familiar with NUnit, but PyUnit allows you to add a description of the test and when tests are run with the verbose option the description is printed. I would look into the NUnit documentation to see if this is something you can do.
Extend the TestCase class you're inheriting from to add a function you can call that logs what the test is trying to do. That way different implementations can handle the messages in different ways; a sketch follows.
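A minimal sketch of that second option in C#/NUnit (the base class and its Log method are invented; swap the Trace call for whatever sink suits you):
public abstract class LoggingTestBase
{
    // Single point that decides what "logging a test step" means;
    // fixtures call Log(...) and never touch the console directly.
    protected virtual void Log(string message)
    {
        System.Diagnostics.Trace.WriteLine(message);
    }
}

[TestFixture]
public class InteropApplicationTests : LoggingTestBase
{
    [Test]
    public void StartsWithRunInBackground()
    {
        Log("Starting InteropApplication with runInBackground = true");
        new InteropApplication(true).Start();
    }
}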
I'd say you should output whatever suits your needs, but showing too much can dilute the output from the test runner.
BTW, your example code hardly looks like a unit test; it's more of an integration/system test.
I like to buffer the verbose log (the last 20 lines or so), but I don't display it until an error occurs. When the error happens, it's nice to have some context.
OTOH, unit tests should be small pieces of unrelated code with specific input and output requirements. In most cases, displaying the input that caused the error (i.e. the wrong output) is enough to trace the problem to its roots.
This might be a bit too language-specific, but when I'm writing NUnit tests I tend to do this, only I use the System.Diagnostics.Trace library instead of the console; that way the information is only shown if I decide to watch the tracing.
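Roughly, that is the question's snippet with the Console calls swapped for Trace (everything else unchanged), so the messages only show up in the Output window under the debugger or when a trace listener is attached:
System.Diagnostics.Trace.WriteLine("Starting InteropApplication, with runInBackground set to true...");
InteropApplication application = new InteropApplication(true);
application.Start();
System.Diagnostics.Trace.WriteLine("Application started correctly");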
You don't need to; if the tests run silently, that means there was no error. There is usually no reason for tests to give any output other than when a test fails. If a test passes, the test runner indicates it, i.e. the test is "green". If you run the test (together with many other tests that produce console output) through a test runner in an IDE, you'll be spamming the console log with messages nobody will care about.
The test you've written is not a unit test, but looks more like an integration/system test because you seem to be running an application as a whole. A unit test will test a public method in a class, preferably keeping the class as isolated as possible.
Using console I/O kind of defies the whole purpose of a unit testing framework; you might as well code the whole test manually. If you are using a unit testing framework, your tests should be very malleable and tied to as few things as possible.
Displaying information can be useful; if you're trying to find out why a test failed, it helps to be able to see more than just a stack trace and to see what happened before the program reached the point where it failed.
However, in the "normal" case where everything succeeds, these messages are unnecessary clutter that distracts from what you're really trying to do - i.e. looking at an overview of which tests succeeded and failed.
I'd suggest redirecting your debugging messages to a log file. You can do this either by having all your logging code call a special "log print" function, or, if you're writing a console program, by redirecting stdout to a different file (I know for a fact that you can do this on both Unix and Windows). This way you get the high-level overview, but the details are there if you need them.
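As a sketch of the stdout variant in C# (the file name and RunChattyTestBody are placeholders; needs the System and System.IO namespaces), you can point Console.Out at a file for the noisy section and restore it afterwards:
TextWriter original = Console.Out;
using (var log = new StreamWriter("unit-test-details.log"))
{
    Console.SetOut(log);              // everything Console.WriteLine prints now lands in the file
    try
    {
        RunChattyTestBody();          // placeholder for the code that produces the verbose output
    }
    finally
    {
        Console.SetOut(original);     // restore the runner's normal output
    }
}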
I would avoid putting extra try/catch statements in unit tests. First of all, an unhandled exception in a unit test will already cause the test to fail; that is the default behavior of NUnit. Essentially, the test harness already wraps each call to your test functions with that code. Also, by using just e.ToString() to display what happened, I believe you are losing a lot of information. By default, I believe NUnit will display not just the exception type but also the call stack, which I don't believe you're seeing with your method.
Secondly, there are times when it's necessary. For instance, you can use the [ExpectedException] attribute to say that an exception is expected. Just be sure that when you test non-exception-related asserts (for instance asserting a list count > 0, etc.) you put a good description in as the message argument of the assert. That is useful.
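A small sketch of the [ExpectedException] usage (NUnit 2.x style; the exception type here is just an example):
[Test]
[ExpectedException(typeof(InvalidOperationException))]
public void Start_WithoutConfiguration_Throws()
{
    // Passes only if Start() throws InvalidOperationException; any other outcome fails the test.
    new InteropApplication(true).Start();
}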
Everything else is generally not needed. If your unit tests are so large that you start putting in WriteLines saying what "step" of the test you're on, that is generally a sign that your test should be broken out into multiple smaller tests. In other words, you're not writing a unit test but rather an integration test.
Have you looked at the xUnit style of unit test frameworks?
See Ron Jeffries site for a rather large list.
One of the principles of these frameworks is that they produce little or no output during the test run, really only an indicator of success at the end. In the case of failures it's possible to get more descriptive output about the reason for the failure.
The reason for this mode is that while everything is OK you don't want to be bothered by extra output, and certainly if there is a failure you don't want to miss it because of the noise of other output.
Well, you should only know when a test failed and why it failed. It's no use to know what's going on, unless, for example, you have a loop and you want to know exactly where in the loop the test died.
I think you're making far more work for yourself. The tests either pass or fail; failure should hopefully be the exception to the rule, and you should let the unit test runner handle and report the exception. What you're doing is adding cruft; the exception logged by the test runner will tell you the same thing.
The only time I would display what's happening is if there was some aspect of it that would be easier to test non-automatically. For example, if you've got code that takes a little while to run, and might get stuck in an infinite loop, you might want to print out a message every so often to indicate that it is still making progress.
Always make sure failure messages clearly stand out from other output, however.
You could have written the test method like this. It's up to your code-nose which style of test you prefer. I prefer not writing extra try-catches and Console.WriteLines.
public void TestApplicationStart()
{
    InteropApplication application = new InteropApplication(true);
    application.Start();
}
Test frameworks that I have worked with would interpret any unhandled (and unexpected) exception as a failed test.
Think about the time you took to gold-plate this test and how many more meaningful tests you could have written with that time.