After setting up a unit test in VS2015 that created some COM objects using Unity, I started getting the following error:
Managed Debugging Assistant 'DisconnectedContext' has detected a problem in 'C:\PROGRAM FILES (X86)\MICROSOFT VISUAL STUDIO 14.0\COMMON7\IDE\COMMONEXTENSIONS\MICROSOFT\TESTWINDOW\te.processhost.managed.exe'.
I had a quick look to see if anyone else had the same problem, and a lot of the suggested solutions were either to fire off the test in its own thread or to change the target architecture to x64. Neither of these felt quite right to me, as they are more like workarounds than fixes.
After a little thought I realised the problem is that the COM objects are not being given enough time by the test framework to clear down. So I came up with the following solution, which worked.
To fix the problem I added the following code to the tear down / test clean up method of the unit test:
_unity.Dispose();
GC.Collect();
GC.WaitForPendingFinalizers();
The first line is only needed if you are using Unity; the main part of the fix is the last two lines. They force a garbage collection and then tell the current thread to wait until it has completed, allowing the COM objects to be cleared down properly.
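Put together, a minimal sketch of the clean-up in an MSTest class might look like this (the container field name and registration details are assumptions based on the snippet above, and the Unity namespace may differ between container versions):
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Unity; // may be Microsoft.Practices.Unity in older container versions

[TestClass]
public class ComObjectTests
{
    private IUnityContainer _unity;

    [TestInitialize]
    public void Setup()
    {
        // Hypothetical setup: register whatever COM-backed services the tests need.
        _unity = new UnityContainer();
    }

    [TestCleanup]
    public void Cleanup()
    {
        // Only needed if the container owns the COM wrappers.
        _unity.Dispose();

        // Force a collection and block until finalizers have run, so the
        // runtime-callable wrappers release their COM objects before the
        // test host tears the context down.
        GC.Collect();
        GC.WaitForPendingFinalizers();
    }
}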
Related
My Android app has some slow-running functionality, and a unit test captures it perfectly. The unit test execution shows that it runs far more slowly than it should.
Next to the test method, Android Studio keeps offering me a menu option to "Profile" instead of run. I select that option, but nothing different from a normal run seems to happen. I expected Android Studio to open a window with the timing of all the method calls after the test completes.
I've searched Google and the Android site. Everything I find talks about profiling in Android Studio in general.
How do I profile an Android unit test? (What does that profile option really do?)
I had the same issue and I decided to investigate a solution because I was thinking that it couldn't be too hard. Boy was I wrong.
My original answer, which was never posted, contained some awkward fiddling with Thread.sleep, manual timings, and hitting the right button at the right time. It was replaced by a more elegant solution using the Debug API from within the code.
Using Android Studio 3.1.3 these were my steps:
I had to copy my actual unit test into androidTest, because I found no way to profile inside Android Studio without an emulator. (This makes sense for performance tests, but I was interested in algorithmic complexity rather than time consumption; I wanted to ensure that even in complex scenarios my methods behave in a predictable fashion.)
To avoid fiddling with Thread.sleep and log output indicating a start/stop, you can use combinations of Debug.startMethodTracing("File") or Debug.startMethodTracingSampling() and Debug.stopMethodTracing() (see https://developer.android.com/studio/profile/generate-trace-logs). My code now looks like this:
@Test
public void Test_Something() throws Exception
{
    Debug.startMethodTracing("Predict");
    // DO YOUR CODE
    Debug.stopMethodTracing();
}
When I now execute the profile run, I can obtain the .trace file generated at the location on the device mentioned in the link above.
(Again, read the linked page, because you will need the WRITE_EXTERNAL_STORAGE permission; my app already had it, so it wasn't much of a hassle in my case.)
Double-clicking the trace opens it in Android Studio. Contrary to what the link above states, I am currently unable to import such a trace into the profiler, because either 3.1.3 lacks this function or I am unable to locate it.
Edit: After upgrading to Android Studio 3.2 I can indeed load and save sessions and display them in the profiler. This has improved a lot. An interesting fact: when I opened the trace in Android Studio 3.1.3 I saw the hit count for methods (how often each method was called) but not their clock times. In the profiler, on the other hand, I have not yet been able to find the call counts, but I do have access to wall clock times. It would be great if someone had a hint on how to display both.
I have been using Google Test unit tests that someone else put together for me, and they have worked great...until today. Now when I try to run a test I get a very mysterious problem and I am dead in the water. I even had the IT guys recreate my profile, which did have some issues - still the same error, with little to go on:
Any ideas? Anything. Dead in the water here.
If you don't run an individual unit test as I was doing, but simply press F5 (something I had never actually done, as I always ran tests through the unit test runner), you can find the place where your program pukes.
My program was puking somewhere that had absolutely nothing to do with my unit test. Under the hood, the framework clearly needs to run and initialize a lot of stuff unrelated to my tests - in this case it initialized an input stream from a file it could not find, well outside of what I was doing. Once I tracked that down, we were good again.
It was very counterintuitive that something so orthogonal to what I was doing caused this. ReSharper said they would try to give better error messages in the future.
I have a simple C++ program (command line, using the Boost libraries) that I developed under Visual Studio Community 2013. I want to deploy it on other Windows computers, so I am testing InstallShield LE in Visual Studio to do so (I am new to InstallShield). I added an InstallShield project to the current solution and managed to create a setup.exe.
When I test it on another computer, the setup seems OK, but when I try the application I get a weird error:
MyProgramm.exe --help
Sends the correct result (but it is not really interesting).
MyProgramm.exe -i InputDirectory -o OutputDirectory
Fails, with Windows displaying this message:
A problem caused the program to stop working correctly. Windows will close the program and notify you if a solution is available.
What did I miss?
I built the Release configuration only. How can I be sure that I have checked all the merge modules or InstallShield prerequisites?
You will have to identify what is going wrong. Typically the symptom you describe indicates that an exception caused the process to terminate. One common source of such exceptions is misuse of an invalid pointer.
But why does it work on one computer and not another? Depending on the code it could be random incidental things. But as long as this repeats every time, it's more likely to be environmental. This could mean a missing data file, a missing registry key, a missing service, or a missing .dll dependency.
Because you can run the program at least one way, you know it's not a static dependency. If it were, you'd get a message about an inability to load some file or one of its dependencies. But instead in some execution paths you see a crash. So if it's a dependency, it's what InstallShield calls a dynamic dependency. I'm not personally a big fan of it (I'd much rather be told exactly what might be required), but there is a dynamic dependency scanning wizard that can help identify such files and include them into the project.
That will only help if the problem stems from something like this:
HMODULE hMod = ::LoadLibrary(TEXT("SomeFunky.dll"));
SOMEPROC proc = (SOMEPROC)::GetProcAddress(hMod, "SomeFunkyProc");
int result = proc(some, args);
Or maybe from a COM-related variant that looks something like this:
CComPtr<ISomeFace> spSomeFace;
HRESULT hr = spSomeFace.CoCreateInstance(CLSID_SomeFace);
hr = spSomeFace->SomeMethod(some, args);
The common problem here is that neither of these blocks of code verifies that what it's about to call is safe to call. In the first case, proc (or even hMod) could be null; in the second, spSomeFace might not have successfully created an instance. While the code can (and should) prevent these scenarios from crashing, fixing the crash will not get your application to actually do what it's supposed to, and you'll still have to fix the reason the procedure, DLL, or instance could not be initialized as desired.
It's also possible that you're missing a data file or registry key that at some point is being used in an incorrect fashion. For example, the code may assume a data file exists, build a pointer from data it reads, and fail to work correctly because the file wasn't available and thus the buffer it read into was never actually initialized.
So in short, if it's not a dependency scenario that the dynamic dependency scanner can assist with, you may have to debug the code in question. You could try tools like Process Monitor and look for errors involving your application shortly before the crash. If you have source and symbols, you could run the program under WinDbg to figure out exactly what is crashing, and then try to figure out why it does so in one environment but not another. But from just the information you've provided, nobody can tell you the answer.
I recently upgraded to Visual Studio 2013, and found myself in the unusual position of suddenly needing to make use of a new aspect of VS that I've never worked with before. The profiler!
Long story short - I'm working with a simple GUI framework I've designed, which recently had gesture support added. To my horror, I found that what worked more or less fine in one project bogged down my main app quite horribly. I have a fairly good idea of what's causing it, but I'd still like confirmation - and since I will likely be working quite a bit more on the framework I'm building, it certainly doesn't hurt to have some profiling tools in place to remove eventual bottlenecks.
I ran the Visual Studio performance wizard and was surprised to see (in the 'Call Tree' view) that the output consists of essentially nothing but calls to my TTD.exe (main application) and a bunch of calls to ntdll.dll, as well as a few other DLLs I'm using.
That's fine and dandy - but I was expecting a much more granular report, as in which of my functions were being used X percent of the time, and the like. Not a single function is mentioned anywhere...
Googling a bit, I found this particular link:
http://blogs.msdn.com/b/scarroll/archive/2005/04/13/407984.aspx
but I highly doubt that I need to use an additional server just to serve up my - possibly missing - symbols?
I'm a bit at a loss as to where to begin. Perhaps the issue is that I'm using Cinder and it does a bunch of stuff behind the scenes when starting up the app? To clarify - I'm not running my app from a standard main function. Cinder essentially provides a base framework invoked through a macro, and then my app takes over via a number of setup(), draw() and update() calls. I'd just expect to see these littered all about.
But no... O_o
Has anyone encountered anything similar?
Regards,
Gazoo
You need to link your executable and DLLs with debug symbols.
In Debug builds this is on by default but in Release builds it's off by default.
Project properties->Linker->Debugging->Generate Debug Info = Yes (/DEBUG)
We just "upgraded" from Visual Studio 2008 to Visual Studio 2012. We updated our unit tests and now they pass when running them individually but when I try to Run All, I got the following error:
The active Test Run was aborted because the execution process exited unexpectedly. To investigate further, enable local crash dumps either
at the machine level or for process vstest.executionengine.appcontainer.x86.exe. Go to more details: http://go.microsoft.com/fwlink/?linkid=232477
So I went to the link and followed the instructions to add the registry key to enable local crash dumps. The error message then changed to:
The active Test Run was aborted because the execution process exited unexpectedly. Check the execution process logs for more information.
If the logs are not enabled, then enable the logs and try again.
Apparently it noticed the changes I made in the registry to enable crash dumps. However, when I looked in %LOCALAPPDATA%\CrashDumps, no files were being created.
If I run one test at a time (or even a few tests at a time), I can get them all to pass. The problem is only with Run All.
Has anyone else encountered similar problems? If so, how did you solve them?
Essentially the same question was asked on MSDN, but the answer was something like "click the link to the crash dump". That answer doesn't help me because I don't see any link to the crash dump and I am unable to get the crash dump to be generated.
This question on StackOverflow is also similar, and ended up resulting in a bug being logged on Microsoft Connect (which looks to be deferred for some reason), but my problem might be different because my code has nothing to do with "async tasks" (I don't think).
EDIT: The problem went away, seemingly on its own, but the problem was likely an exception that wasn't being caught in the unit test code, as some of the answers below suggest. However, I'm still confused as to why the problem only appeared with Run All, and not when running smaller groups of tests or Debug All.
I had the same problem: the tests failed for apparently no reason. Later I found that a buggy method was causing a StackOverflowException. When I fixed my bug, the VS bug disappeared.
Maybe it works most of the time because you don't run the faulty code.
The best workaround I have so far is to debug all. This is done via TEST -> Debug -> All Tests. It's obviously slower but it doesn't crash.
This can happen with certain errors, such as a stack overflow. Presumably this crashes the test runner, so it can't continue when it hits a test that causes the problem.
The solution, therefore, is to run all tests in debug (from the Test -> Debug menu) and Visual Studio will show errors like these.
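To make the failure mode concrete, here is a minimal sketch (MSTest, with a hypothetical runaway recursive method) of the kind of test that takes the whole execution engine down instead of simply failing:
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FaultyTests
{
    // Hypothetical buggy method: unbounded recursion.
    private static int Depth(int n)
    {
        return 1 + Depth(n + 1);
    }

    [TestMethod]
    public void Test_TriggersStackOverflow()
    {
        // A StackOverflowException cannot be caught by managed code; it
        // terminates the vstest execution engine process, which is why
        // "Run All" aborts instead of reporting one failed test.
        Assert.AreEqual(0, Depth(0));
    }
}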
For anyone else who may need this in future: my test runner was crashing when a console-specific command (Environment.Exit(-1);) was executed via the unit test. Even running in debug mode would just crash - I could not get a useful error message.
So my scenario differs from the main question's in that (a) debug didn't work at all and (b) Run All vs. running individually made no difference. That is because my error scenario always arose, whereas the stack overflows of the original question did not.
The bottom line: test runner is bad and will crash if it finds something it doesn't like. You need to manually isolate and work out what the Bad Thing™ is.
For someone else looking for this: I had some code that was calling System.Environment.Exit(123), and I was unaware of this. So check for any code that terminates the process.
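For illustration, a sketch of how such a call can hide in code under test (the BatchJob type and exit code here are made up):
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical production type that bails out of the process on fatal errors.
public static class BatchJob
{
    public static void Run(bool inputIsValid)
    {
        if (!inputIsValid)
        {
            // Does not throw - it terminates the whole process,
            // taking the test execution engine down with it.
            Environment.Exit(-1);
        }
    }
}

[TestClass]
public class BatchJobTests
{
    [TestMethod]
    public void Run_WithInvalidInput()
    {
        // The runner dies here; no failure is ever reported for this test.
        BatchJob.Run(inputIsValid: false);
    }
}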
I've just had the same problem. It turned out it was my code - there was an infinite loop of WCF service calls. In your case this might be something else. So my proposal is to either remember (logs in your version control system?) or figure out (by excluding different tests from the run, e.g. with a bisection method) which place in the code leads to this behavior. And voilà! That's the cause of the problem and, at the same time, the bug in your code.
UPDATE
As for the questions in your EDIT: it could happen that running smaller groups of tests didn't reproduce the issue. In that case, given those groups together included all the tests, one can assume that some tests interfere with each other. Maybe some static data or fields in a test class, as in the sketch below?
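A minimal sketch of that kind of interference, with made-up test names, could look like this:
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SharedStateTests
{
    // Static field shared by every test executed in the same process.
    private static readonly List<int> Cache = new List<int>();

    [TestMethod]
    public void TestA_PopulatesCache()
    {
        Cache.Add(1);
        Assert.AreEqual(1, Cache.Count);
    }

    [TestMethod]
    public void TestB_AssumesEmptyCache()
    {
        // Passes when run alone, but fails when TestA ran first in the same run.
        Assert.AreEqual(0, Cache.Count);
    }
}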
As for running tests in debug mode - I'm not surprised. The Visual Studio test runner behaves differently in "Run" mode vs. "Debug" mode.
I had a similar problem, except that it wasn't a stack overflow exception. It was caused by my project under test using Entity Framework while the NUnit project did not include references to the EntityFramework and EntityFramework.SqlServer assemblies. Adding the references to the Entity Framework assemblies fixed it.
Just had the same problem. Closing and reopening Visual Studio fixed it for me.