I created a program that sends a stream of text to the COM port at 9600 baud. In debug mode the program sends all the data and then the COM port is closed. But if I create an installer project and then install it on the same machine, it doesn't send the last characters. It closes the port before all the data has been transmitted. So my question is: is the debug exe slower, or is it somehow slowed down by the IDE (Visual Studio)?
Also, adding a Sleep(100); between the last transmit command and the line that closes the port made the problem disappear.
Your observations are consistent with an incorrectly written program: a bug that is currently only detectable in a release build.
Release builds tend to run faster, as they are compiled at a higher optimisation level and with various debugging facilities turned off. They are designed for maximum production performance at the expense of development-time debuggability. Creating an installation package evidently produced a release build (which makes sense).
This increased performance in turn affects the timing of your program. If you were accidentally relying on a long delay before the port was closed, giving your program enough time to transmit all its data only by chance, then when execution speeds up the bug becomes observable: there is no longer enough time for the data to get through. Adding a Sleep simulates the slower execution of the debug build, which all but confirms a timing bug.
This is good news! You have strong evidence of where the bug is and the form it takes. Now all you have to do is fix it!
I have used the File > New C++ app wizard in Visual Studio 2015 to create a simple 'Hello World' Windows app. When linked with the default stack size, it launches instantly.
When linked with a stack reserve size of 1073741824 (1 GB), the app hangs for tens of minutes before becoming responsive.
Increasing the stack size gradually from the default will also increase the delay gradually.
In another executable I have used EditBin.exe to change the stack from the 1 MB default to the 1 GB value above, and it also hangs in the same way before becoming responsive later on. Changing it back makes everything run smoothly again.
I think I have tracked it down to the first Win32 API call. It does not seem to be the particular call itself but the fact that a new DLL is being loaded and inheriting the stack size from the executable. CPU and memory stay idle during this operation. I think it is compounded by the exe being 32-bit and running on 64-bit Windows.
This is a repro for a large application which uses a very big stack so I'd like to figure out what could be happening in general when that first API call is executed. Is memory being zeroed out?
When debugging in Visual Studio, the hang happens here but I think this is misleading.
// Main message loop:
while (GetMessage(&msg, nullptr, 0, 0))
{
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}
Update:
Stack size: Not my code; the original is a Fortran app which does a lot of calculations for particle physics, and that size was set originally. I can't change it without understanding what is going on first.
Release mode: All compiled for release mode, 32 bit.
Memory usage: I don't actually see any memory assigned or CPU activity, though. From what I understand, the stack reserve is just address space held in reserve; it's not actually committed at app start. The app uses the same memory at both stack sizes on startup.
Single threaded: Just the example app out of File New.
Loading DLL: While debugging the original Fortran code (of which this is a repro), the issue happened when the DLL was called. Checked with a WinHelp call and a window-creation call. I assumed the DLL was being loaded into the process at this point.
Update2:
I understand I need to fix the code; this is part of investigating it. It also happens with this C++ code. What I am after specifically is: what is happening when the Win32 API is called and the UI becomes unresponsive? That might provide a clue and a workaround until the 40-to-50-year-old Fortran code is reworked and upgraded to 64-bit.
I've experienced this with every version of Visual Studio starting from 2012 (2012, 2013, 2015 Preview), on multiple computers and multiple projects, but I haven't figured out how to fix it:
Whenever I'm debugging a 64-bit(?) C++ console program, after a few minutes and seemingly completely randomly (when I'm not clicking or typing anything), the console window for the program spontaneously closes and I can no longer debug or step through the program with Visual Studio. When I press Stop and attempt to restart debugging, I usually get ERROR_NETWORK_UNREACHABLE:
// MessageId: ERROR_NETWORK_UNREACHABLE
// MessageText:
// The network location cannot be reached. For information about network troubleshooting, see Windows Help.
#define ERROR_NETWORK_UNREACHABLE 1231L
If I try to attach to the process manually I get the error:
Unable to attach to the process.
The only fix I've found for this is to restart Visual Studio. I can't find any other way to fix it, and I've tried running Process Monitor but haven't found anything.
What causes this problem and how can I fix it?
(?) Upon further checking it seems that this only happens in 64-bit mode, but I'm not 100% sure.
Ok, this is just so wrong
I also have issues with this bug, and in my case it occurred every other debug session. That meant debug -> stop -> debug -> bug -> restart Visual Studio -> back to the start (repeat every minute during the whole day).
Needless to say, I was driven to find a solution. So yesterday I tried Procmon, spent hours looking at API Monitor differences, looked at plugins, netstat, etc., etc. And found nothing. I gave up.
Today
Until today.
To track down a stupid bug in my program today, I launched Application Verifier. For my application, I ran the 'Basics' tests and clicked Save. After a few hours this led me to the bug in my program, which was something like this (extremely simplified version):
void* dst = _aligned_malloc(4096, 32);  // allocates 4096 bytes
memcpy(dst, src, 8192);                 // copies 8192 bytes: heap buffer overflow
Obviously this is a bug and obviously it needed fixing. I noticed the error after putting a breakpoint on the memcpy line, which was not executed.
After a stop and 'debug' again I was surprised to find that I could actually debug the program for the second time. And now, several hours later, this annoying bug here hasn't re-emerged.
So what appears to be going on
So... apparently data from my program is bleeding through into the data or execution space of the debugger, which in turn appears to generate the bug.
I see you thinking: No, this shouldn't happen... you're right; but apparently it does.
So how do you fix it? Basically, fixing your program (more specifically: its heap corruption issues) seems to make the VS debugger bug go away. Using appverif.exe (it's in Debugging Tools for Windows) will give you a head start.
Why this works
Since VS2012, VC++ uses a different way to manage the heap. Hans Passant explains it here: Does msvcrt use a different heap for allocations since (vs2012/2010/2013).
What basically happens is that heap corruption will break your debugger. The AppVerifier basic settings will ensure that a breakpoint is triggered just before the application does something to corrupt the heap.
So what happens now is that before the process corrupts the heap, a breakpoint triggers instead, which usually means you terminate the process. The net effect is that the heap is still intact when you terminate your program, which means your debugger keeps functioning.
"Test"
Before using appverifier -- bug triggered every 2 minutes
While using appverifier -- VS debugger has been stable for 5 days (and counting)
This is an environmental problem of course. Always hard to troubleshoot, SysInternals' utilities like Process Monitor and Process Explorer are your primary weapons of choice. Some non-intuitive ways that a network error can be generated while debugging:
Starting with VS2012, the C runtime library had a pretty drastic modification that can cause very hard to diagnose misbehavior if your program corrupts the heap, much like #atlaste describes. Since time immemorial, the CRT had always created its own heap; the underlying call was HeapCreate(). No more: it now uses GetProcessHeap(). This is very convenient, much easier now to deal with DLLs that were built with /MT. But it has a pretty sharp edge: you can now easily corrupt memory owned by Microsoft code. Not strongly indicated if you can't reattach a 64-bit program; you'd have to kill msvsmon.exe to clear up the corruption.
The Microsoft Symbol Server supplies PDBs for Microsoft executables. They normally have their source and line-number info stripped, but not all of them do; notably not the CRT, for example. Those PDBs were built on a build server owned by DevDiv in Redmond that had the source code on the F: drive. A few are around that were built from the E: drive; Patterns & Practices uses that (unlikely in a C++ program). Your debugger will go look there to try to find source code. That usually ends well, it gives up quickly, but not if your machine uses those drive letters as well. Diagnose by clearing the symbol cache and disabling the symbol server with Tools + Options, Debugging, Symbols.
The winapi suffers from two nasty viral infections it inherited from another OS that add global state to any process: the PATH environment variable and the default working directory. Use Control Panel + System + Advanced + Environment to have a look at PATH; copy/paste the content of the intentionally small textboxes into a text editor. Make sure it is squeaky clean; some paralysis at the usual mess is normal, btw. Take no prisoners. Trouble with the default directory is much harder to troubleshoot. Both should pop out when you use Process Monitor.
No slamdunk explanations, it is a tough problem, but dark corners you can look in.
I have the same problem. I thought it was related to 64-bit console apps, where it is very easily triggered by almost any debug session. But it also happens with 64-bit Windows apps, and now I am seeing it on 32-bit Windows apps too. I am running Windows 8.1 Pro on a single desktop with the latest version of VS 2013 and no remote debugging. My (added) extensions are Visual Assist, Advanced Installer, ClangFormat, Code Alignment, Code Compare, Duplicate Selection, Productivity Power Tools 2013, and VisualSVN.
I discovered that the "Visual Studio 2013\Settings\CurrentSettings.vssettings" file gets corrupted. You can delete this file and recreate it by restarting VS or you can try to edit the XML. I then keep a copy of a good settings file that I use to replace when it gets corrupted again.
In my case, the corrupted line begins with
</ToolsOptionsSubCategory><ToolsOptionsSubCategory name="XAML" RegisteredName="XAML"
... and it is extremely long (I think this is why it is prone to corruption).
I just disabled in the Menu
Tools > Options
Debugging > Edit and Continue
Native-only options > Enable native Edit and Continue
and now it no longer gives that error which was preventing the debuggee application from starting.
I also had the same problem with VS2015. It was so frustrating that a simple Hello World program gave this error when I ran the debugger a second time. I tried uninstalling and reinstalling, and that didn't work.
Finally, the solution mentioned in https://social.msdn.microsoft.com/Forums/vstudio/en-US/8dce0952-234f-4c18-a71a-1d614b44f256/visual-studios-2012-cannot-findlaunch-project-exe?forum=vsdebug
worked: reset all Visual Studio settings using Tools -> Import and Export Settings. Now the issue no longer occurs.
For a project, I've created a C++ program that performs a greedy algorithm on a certain set of data. I have about 100 data sets (stored in individual files). I've tested each one of these files manually and my program gave me a result each time.
Now I wanted to "batch process" these 100 data sets, because I may have more to do in the near future. So I've created another C++ application that basically loops and calls my other program using the system( cmd ); command.
Now, that driver program works fine, but my other program, which was previously tested, crashes during this "batch processing". Even weirder, it doesn't crash every time, or even on the same data sets.
I've been at it for the past ~6 hours and I can't find what could be wrong. A friend suggested that maybe my computer calls the other program (with system()) too quickly, so it doesn't have time to free the proper memory space, but I find that hard to believe.
Thanks!
EDIT:
I'm running on Windows 7 Professional using Visual Studio 2012
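Whatever the root cause turns out to be, the batch driver should at least capture each child's exit status so a crash in one run cannot pass silently. A minimal, hedged sketch of such a loop (the executable name and file-naming scheme are made up for illustration):

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

// Runs one child process; returns true when it reports success (exit code 0).
bool RunDataSet(const std::string& cmd)
{
    int rc = std::system(cmd.c_str());
    if (rc != 0)
        std::fprintf(stderr, "command \"%s\" failed (code %d)\n",
                     cmd.c_str(), rc);
    return rc == 0;
}

// Batch driver: returns the number of failed data sets.
int RunAllDataSets(int count)
{
    int failures = 0;
    for (int i = 1; i <= count; ++i)
    {
        // Hypothetical naming scheme; substitute your real program/files.
        if (!RunDataSet("greedy.exe data" + std::to_string(i) + ".txt"))
            ++failures;
    }
    return failures;
}
```

A non-zero status from system() would also tell you whether the child is genuinely crashing (e.g. dying on an access violation) rather than quietly producing wrong output.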
I have an application written in C++ with Visual Studio 2010, and it fires a lot of debug output through OutputDebugStringW (about 50 calls per second, which is obviously a lot).
If I open and close DebugView 3 or 4 times, my application becomes unstable and either crashes or behaves erratically. I've tried the same with another application firing the same amount of debug prints (also written in C++ with VS 2010) and experienced the same behavior; same thing if I try on another computer. Both computers run 32-bit Windows 7.
The length of those prints is capped at 512 characters, so I don't think there is a buffer overrun (OutputDebugStringW seems limited to 4 KB strings).
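If you want to enforce that cap mechanically rather than by convention, a trivial helper (a sketch; the 512-character limit is the one quoted above) can truncate each message before it ever reaches OutputDebugStringW:

```cpp
#include <string>

// Hedged sketch: cap a message's length before handing it to
// OutputDebugStringW (the Win32 call itself is omitted here so the
// helper stays portable), enforcing the 512-character budget above.
std::wstring TruncateForDebugOutput(const std::wstring& msg,
                                    std::size_t maxChars = 512)
{
    return msg.size() <= maxChars ? msg : msg.substr(0, maxChars);
}
```

Routing every debug print through one such helper also gives a single place to add a rate limit later, if the 50-calls-per-second volume itself turns out to matter.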
I've tried with Hoo Win Tail (which is a software similar to DebugView) and the problem doesn't occur.
Does anybody already experienced this problem?
Best regards,
Jet
I would assume you have a (subtle) race condition in your application that only exposes itself when your program runs at different "speeds".
DebugView makes your app run slower and so introduces different timing. The fact that other tools which also capture debug output do not expose this behavior in your app could be because they introduce somewhat different (faster/slower) timing.
You could try DebugView++ (https://github.com/djeedjay/DebugViewPP/); it introduces almost no delay into the traced application.