What could possibly "break the debugger" in Visual Studio (maybe std::string?) - c++

Consider setting a breakpoint on the following line, and stepping into it using the Visual Studio debugger (in a fully-cleaned and rebuilt debug build):
Poco::URI testUri( "http://somewhere.com/test/path" );
It turns out this step-in will take you to this function:
URI::URI(const char* uri):
    _port(0)
{
    parse(std::string(uri));
}
And it turns out that when you take a few steps more and pause on the final line after the parse() call, all is well in the newly constructed URI object, specifically:
it has been parsed correctly;
one can expand the this pointer to see correctly assigned member variables (e.g. its _host, _path and _scheme members are set to "somewhere.com", "/test/path" and "http" respectively);
the this pointer at this stage points to a legitimate memory location (e.g. 0x002AEE20), and at that location one can see what I am confident is the URI object (a set of std::string variables, and one int as it happens).
However, after a single step more, one returns to the original code line, and suddenly:
expanding the testUri object in the "Autos" or "Watch" debugger windows leads to std::string members that cannot be read (there are "errors reading characters of string"), and yet...
the memory where the constructed object resides remains unchanged, and...
the address of testUri is confirmed to point to the unchanged memory
How can this be? Is the VS debugger broken? What broke it?
This is the latest in a sequence of weird issues trying to get the POCO Libraries up and going in a multi-threaded MFC project. I have no idea if MFC or the multi-threading should have any impact on Poco, but I have experienced a week of weirdness -- often with std::string objects involved -- and I'd like to get to the bottom of it. All suggestions for tracing what is occurring are greatly appreciated. I'm running VS2015 Community if that makes a difference.

As mentioned in the comments, trying to mix different builds (i.e. both release and debug) within the same project can cause issues like this.
In this case, however, it was the mixing of different compilers -- most of the project was built under what seems to be VS2010 conditions, while the Poco libraries were built under VS2015 conditions.
I'm not 100% sure of the conditions under which the wider project was being compiled before, since it was recently upgraded from VS2010 to VS2015, and in the process, the Platform Toolset setting did not show up in the .vcxproj file. I have now (re-)introduced a Platform Toolset for each build configuration and set it to v100, and also rebuilt Poco with the build_vs100.cmd script. Everything seems to work as expected now.
The way I tracked this down was to observe that the application was being compiled with /MDd (multi-threaded debug DLL code generation), yet the linker was attempting to link to the "d" versions of the Poco libraries, not the "mdd" versions. When the compilers were brought into line, the linker correctly linked the "mdd" versions as one would expect.
Since all library linking in Poco is intended to be automatic (see the #pragma directives in PocoFoundation.h), the incorrect library selection was due to changed preprocessor definitions (POCO_STATIC was not being defined). I did not bother to check why this was.
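For reference, the auto-linking idiom looks roughly like the following. This is a simplified sketch, not the literal contents of PocoFoundation.h (the real header covers more CRT variants), but it shows why a missing POCO_STATIC definition or a /MD vs /MDd mismatch silently selects a different library name:
// Simplified sketch of the automatic-linking pattern used by headers
// such as PocoFoundation.h (library names follow Poco's suffix scheme).
#if defined(_DEBUG)
    #if defined(POCO_STATIC)
        #pragma comment(lib, "PocoFoundationmdd.lib")   // static lib, /MDd
    #else
        #pragma comment(lib, "PocoFoundationd.lib")     // import lib, debug
    #endif
#else
    #if defined(POCO_STATIC)
        #pragma comment(lib, "PocoFoundationmd.lib")    // static lib, /MD
    #else
        #pragma comment(lib, "PocoFoundation.lib")      // import lib, release
    #endif
#endif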

Related

Windows C++ MFC migration: AfxGetThread Assertion. Why does win32u.dll load before mfc140d.dll in some cases?

I have customer code written for Visual Studio 6.0 MFC which has a simple GUI and launches an EXE with arguments. This code was ported from VS6.0 to VS2019 about 2 years ago and works in a production environment on several systems. We now have a new system where the code fails to function... and I'm starting to dig.
The code is throwing an exception in appcore.cpp at line 196.
It is crashing at AfxGetThread() now that I have been able to get VS2019 to find "appcore.cpp".
This is new information. I will be searching on AfxGetThread next, so this question is likely to be a duplicate now.
One difference I have detected is the order in which the Visual Studio 2019 debugger loads symbols. I can't say for certain that this is an indication of the actual DLL load order at runtime, but it appears to be. The screenshot below is the SYMBOL load order where a difference is detected between the working and non-working instances of the application.
In the image below we have a TortoiseSVN diff of two ASCII files: on the left, the DLL symbol load order when the application works; on the right, the DLL symbol load order when the application fails to work. Line 7 is the divergence, where in the failing case the library win32u.dll is pulled in before mfc140d.dll.
The customer code uses some Apache log4cxx libraries which I need to investigate, but at this point in the load sequence I'm not 100% sure whether differences in those libraries or the *.h files used at build time could influence the DLL load order.
So this is the puzzle I'm looking at.
I will include links to relevant Stack Overflow questions that are similar to this one, found while searching for an answer.
Possibly Useful Links:
https://learn.microsoft.com/en-us/cpp/porting/porting-guide-mfc-scribble?view=msvc-170
The DLLs are searched for in a defined order across different locations: see Standard Search Order for Desktop Applications.
Most likely the DLLs on the failing machine are missing or they are in the wrong location, so Windows grabs something else.
Make sure all the dependencies are installed in the correct folders.
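If you want to confirm at runtime which copy of a DLL the process actually loaded, a minimal sketch along these lines can help (the module name passed in the usage comment is just an example):
#include <windows.h>
#include <cstdio>

// Print the full path of an already-loaded module, or note that it isn't loaded.
void LogModulePath(const wchar_t* moduleName)
{
    HMODULE h = GetModuleHandleW(moduleName);   // NULL if the DLL is not loaded yet
    if (h == NULL)
    {
        printf("%ls: not loaded\n", moduleName);
        return;
    }
    wchar_t path[MAX_PATH] = {};
    if (GetModuleFileNameW(h, path, MAX_PATH))
        printf("%ls -> %ls\n", moduleName, path);
}

// e.g. LogModulePath(L"mfc140d.dll");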
In this case, the crash in appcore.cpp was due to the code having two CWinApp-derived objects, and the crash occurs at construction time.
The first hurdle is to get Visual Studio 2019 to find appcore.cpp and be able to step into this code. I browsed under C:\Program Files (x86)\Microsoft Visual Studio and searched for "appcore.cpp". This provided the trail of breadcrumbs to give Visual Studio 2019 the correct path when it asked for the file.
In my case the path is:
C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30037\atlmfc\src\appcore.cpp
The second hurdle was to put a breakpoint at the ASSERT, because the FIRST time the program reaches it, the ASSERT is OK. At least in my case, the first construction of a CWinApp object succeeded. So the offending code which was unexpectedly constructing a CWinApp-derived object ran first, and it was the second pass through the ASSERT that failed.
By placing a breakpoint at the ASSERT, you can re-launch and look at the stack trace of the successful object to determine if it's expected or not.
In my case a *.h fix was required to get ancient Visual Studio 6.0 MFC code to build and link successfully. I don't have the exact details, but it essentially comes down to having a *.h specify the proper WINVER minimum Windows version before afx.h is included. For the code that failed, an incorrect *.h was included that contained the fix PLUS objects derived from CWinApp.
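To make the failure mode concrete, here is a hypothetical sketch of the kind of header that bites (all names are invented for illustration): a header that is only supposed to pin the minimum Windows version also defines a global CWinApp-derived object, so every build that includes it constructs a second application object and trips the ASSERT in CWinApp's constructor in appcore.cpp.
// hypothetical_fix.h -- meant only to set the minimum Windows version for MFC
#pragma once
#ifndef WINVER
#define WINVER 0x0601           // minimum target OS (value is illustrative)
#endif
#include <afxwin.h>

// The accidental extra payload: a second application class...
class CStrayApp : public CWinApp
{
public:
    virtual BOOL InitInstance() { return TRUE; }
};

// ...and a global instance of it. MFC allows exactly one CWinApp object per
// module; constructing a second one fails the assertion in appcore.cpp.
CStrayApp theStrayApp;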
As near as I can tell, the reason this worked on some sites and not others was due to a regression in the code that was built on that particular system.
The other "strange" behavior was that on the system with the bad *.h include the DLL symbol load order was different. I was able to replicate this bad behavior on more than one system running the "bad" Exe code. Then I did the same on working *.exe code. The strange thing was that the "bad" exe had reasonably correct sources in the workspace. So there were source control issues leading to this belief that it worked on one node and not others.
The runtime behavior I was able to catch matched obsolete code committed to source control.

VS2008 C++ breakpoint becomes permanently inactive after access violation error (no executable code associated with line)

I am reproducing the following behavior in VS2008 (native C++):
attach to an executable that consumes a custom dll (for which I have the source)
debug the code from the dynamic lib
encounter an access violation error (probably caused by the code in the executable - closed source)
break on access violation error with the attached debugger
After this, no matter how many times I reattach, rebuild, or restart the application or the computer, any breakpoint I set in the .dll source code becomes inactive ("No executable code associated with this line" is the alleged cause, according to VS).
I suspect this is an issue with VS2008, as I did the same on a different machine and now I have two machines where debugging is no longer possible.
Are there any recorded solutions of this issue? What can be done to overcome it?
What I have done:
deleting everything (the entire solution, pdbs, binaries, etc.) and starting with the code from scratch (cloning the latest version from the repository)
restarting the machine
changing the machine (it worked once, until the error occurred, then the other computer exhibited the same behavior)
What I cannot do:
change compiler/VS version
debug the executable (sadly no source code available and lack of assembly skills)
The root of the issue was more subtle. Although the project was intended to be native C++, I found that in the configuration I was testing, the entire project was being built with CLR support.
When attaching to the application for the first time on any machine, in native debugging mode, the breakpoints would trigger. However, after encountering the native access violation error, those breakpoints became permanently inactive. After deciding to check what happens if I let the debugger attach in Auto mode, I discovered that the breakpoints became active, and from there found out that all code had been compiled with the /clr flag except for the entry point in the consumed dll, which had no CLR support.
The question here is why VS2008 behaves like this and does not directly disable breakpoints whenever one attempts to debug a managed context using native debug settings.
TL;DR: check if your C++ project is built with CLR support and attach either as native or managed, depending on your needs. Alternatively, if only some of your files require C++/CLI, enable the /clr flag only for those. That is often the better choice, since C++/CLI clashes with certain native facilities (e.g. no std::mutex support, issues linking against native static libs -- see Linking unmanaged C++ DLL with managed C++ class library DLL, etc.).
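One cheap way to catch this class of problem early is a compile-time guard in a translation unit that must stay native; a minimal sketch, relying on the __cplusplus_cli macro MSVC predefines under /clr:
// Fails the build if this file is unexpectedly compiled as C++/CLI.
#ifdef __cplusplus_cli
#error "This translation unit is being compiled with /clr; check the per-file C/C++ settings."
#endif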

The required DLLs in a visual studio c++ project

I've done some searching and seen questions similar in nature to mine, but none that quite hit the nail on the head of the issue I'm having.
I'm making a C++ game in Visual Studio (with the Allegro 5 library) and encountering difficulty running it on other computers. I'm well aware of the 'MSVCR##.dll is missing from this computer' issue, but what I'm wondering is why I'm unable to run my Release build because I'm missing the MSVCR##'D'.dll on a certain computer, when I was under the impression that the 'D' suffixed .dll was exclusively required for running the debugger. I've checked in my configuration manager for release build settings and I have 'Generate Debug Info' set to No, which I thought was the only thing I needed to do. My question I guess is whether or not there are any other settings I need to configure to make sure my Release build isn't looking for the MSVCR##D.dll. Thanks in advance anyone who has any info!
You're a bit confused about the use of the *D libraries. They're indeed used for debug builds, but debug builds differ in multiple ways from release builds. For starters, debug builds by default come with a *.PDB file that contains all the function names (This is your "Generate Debug Info" option). A debugger looks into the .PDB file to find a readable name for a crash site.
Another debug option is to not inline code - this keeps your named functions intact. Inlining may put that single function inside three other functions, which complicates debugging a bit.
Finally the Debug CRT includes functions that perform extra error checking against bad arguments. Many functions exhibit Undefined Behavior when passed a null pointer, for instance. The Debug libraries will catch quite a few of those, whereas the Release versions assume you pass valid pointers only.
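As a quick sanity check of which CRT flavor your own translation units target, here is a minimal sketch using MSVC's predefined macros (note this only reports your compile settings; it says nothing about what a prebuilt third-party DLL was linked against):
// _DEBUG is defined for /MTd and /MDd; _DLL is defined for /MD and /MDd.
#if defined(_DLL) && defined(_DEBUG)
#pragma message("Using the debug DLL runtime (/MDd) - the *D runtime DLLs will be required")
#elif defined(_DLL)
#pragma message("Using the release DLL runtime (/MD)")
#elif defined(_DEBUG)
#pragma message("Using the static debug runtime (/MTd)")
#else
#pragma message("Using the static release runtime (/MT)")
#endif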
Now DLLs can reference each other; there's a whole dependency graph. That's why the Dependency Walker tool exists: it figures out which DLLs require which other DLLs, and this will tell you why you need the *D version.
Thank you very much for all your inputs, I was able to learn a fair bit from this. It turns out the issue was (of course) entirely my fault, as when setting up the Allegro 5 dependencies in the project settings (under General->Linker) I was accidentally including a dependency for the debug version of the Allegro monolith-md.dll as well as the non-debug version in my Release build, and that .dll was in turn referencing the *D version of the MSVCR .dll. The issue has been resolved by removing that dependency from the Release build of my game.
Install Dependency Walker on that machine, load the exe, and check if any of the dependent DLLs are missing.

D3DX11EffectsD.lib not showing up after build (vs2010)

I am starting to learn DX11 and running into trouble with the effects framework. I know it was released as source and I have to build it, but the output from the build is not what I expected.
According to the research I've done on this question, the output from the build should be
D3DX11EffectsD.lib for debug
D3DX11Effects.lib for release
However, when I look into the 'Effects11\Debug' directory after building the project, I only see a file Effects11.lib (well, an Effects11 Object Library file which I assume is a .lib; I'm new to C++), and the exact same file in 'Effects11\Release'. What's going on here? I've never used VS2010 for C++ before now, but I think I am building the solution correctly.
Is this a matter of renaming the files, or have I done something wrong without realizing it? There really isn't much documentation on building and linking libraries in VS2010 that I could find. Anybody have any suggestions?
Thanks
If you compiled exactly what you got off the web, then I think it would be just a naming convention problem.
You should try compiling it into your end application and see if it yells about debugging symbols missing.
You can also go into the build settings and check what the build targets are for debug and release (it has been a while since I have used Visual Studio for anything other than C#, so I don't know exactly where that menu option is; right-clicking on the project should yield some useful results. I generally do my C++ work on Linux). If it turns out that the names are the same for both, but the build targets (i.e. the output folder and a few other options, like including debugging symbols) are different, then you should be fine and it is just a naming problem.
Also, if the two files are exactly the same size, it is likely that they are the same file, since the debug build should be at least a bit larger than the release one.
If it turns out that they are the same file, try re-downloading or re-extracting the source and just compiling the project again without any changes and see if that gets any results.
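If it really is just a naming difference, one workable approach is to let each configuration of your application pull in whichever .lib the matching Effects11 configuration produced; a rough sketch (the relative paths are assumptions about where your Effects11 build wrote its output):
// Pick the matching Effects11 build per configuration of the consuming app.
#ifdef _DEBUG
#pragma comment(lib, "../Effects11/Debug/Effects11.lib")
#else
#pragma comment(lib, "../Effects11/Release/Effects11.lib")
#endif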

Crash when running application due to existence of unexecuted code in source file - c++

I'm working on a pretty tricky problem that I've been on for literally a week now. I've hit a very hard wall and my forehead hurts from banging it so I'm hoping someone can help me out.
I am using Visual Studio 2005 for this project - I have 2008 installed but was running into similar issues when I tried it.
We have an application currently working compiled against OpenCV 1.1, and I'm trying to update it to 2.2. When we switch over to statically linking the new libs, the application crashes, but only in release mode. Dynamic linking and debug builds both work fine.
The crash is in std::vector when calling push_back.
I then came up with a sample test application which runs some basic OpenCV code and works fine, then took that exact same code and added it to our application. That code fails.
I then gutted the application so it didn't instantiate anything except the main GUI and one class which called that code, and it still crashed. However, if I ran that code directly in the main GUI, it worked fine.
I then started commenting out huge amounts of the application (in components that should never be instantiated) and eventually I worked my way down down down until...
I have a class that has a method
void Foo()
{
    std::vector<int> blah;
    blah.begin();
}
If this method is defined in the header, the test code works, but if this code is defined in the cpp file, it crashes. Also, if I use std::vector<double> instead of int, it also works.
I then tried to play with the compiler options and if I have optimizations turned off (/Od) and Inline Function Expansion set to Only __inline (/Ob1) it works even with the code being in the cpp file.
Of course, if we go back to the ungutted application and change those compiler options by themselves, it crashes.
If anyone has any insights on this, please let me know.
Thanks,
Liron
ARGH! Solution figured out.
In our solution we had defined _SECURE_SCL = 0, but in the 3rd party libs we had built it was left undefined (which means = 1). Setting _SECURE_SCL to 0 supposedly reduces runtime overhead drastically, but it has to be the same across all linked libs, otherwise the STL objects have different sizes and layouts across library boundaries.
http://msdn.microsoft.com/en-us/library/aa985896%28v=vs.80%29.aspx
That was a fun week.
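For anyone hitting the same thing, a minimal sketch of the kind of shared header that keeps the setting consistent (this assumes a VS2005/2008-era toolset, where the macro is _SECURE_SCL; it must be defined before any standard header is included, and its value must match the one used to build every library you link):
// common_config.h -- include this first in every project / translation unit
#pragma once
#ifndef _SECURE_SCL
#define _SECURE_SCL 0   // must match the setting used when building all linked libs
#endif
#include <vector>       // standard headers come after the macro is fixed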
The STL classes, like vector<>, have a layout mismatch between the release and the debug builds, caused by iterator debugging support. Your problem behaves exactly like the kind of trouble you get into when you link a debug build of a .lib or DLL in the release build of your application and exchange an STL object between them. Heap corruption and access violation exceptions are the result.
Triple check your build settings and ensure that you only ever link the release build of the .libs in your Release build and the debug build of the .libs in your Debug build.
could you try:
void Foo()
{
    std::vector<int> blah;
    blah.reserve(5);
    blah.begin();
}