Trouble tracking down a potential memory overwrite. Windows weirdness - c++

This is driving me nuts. I am using some 3rd-party code in a Windows .lib that, in debug mode, is causing an error similar to the following:
Run-Time Check Failure #2 - Stack around the variable 'foo' was corrupted.
The error is thrown when the object either goes out of scope or is deleted. Simply allocating one of these objects and then deleting it will throw the error. I therefore think the problem is in one of the many constructors or destructors, but despite stepping through every line of code I cannot find the problem.
However, this only happens when creating one of these objects in a static library. If I create one in my EXE application, the error does not appear. The 3rd-party code itself lives in a static lib. For example, this fails:
**3RDPARTY.LIB**
class Foo : public Base
{
...
};
**MY.LIB**
void Test()
{
Foo* foo = new Foo;
delete foo; // CRASH!
}
**MY.EXE**
void Func()
{
Test();
}
But this will work:
**3RDPARTY.LIB**
class Foo : public Base
{
...
};
**MY.EXE**
void Func()
{
Foo* foo = new Foo;
delete foo; // NO ERROR
}
So, cutting out the 'middle' .lib file makes the problem go away, and it is this weirdness that is driving me mad. The EXE and the 2 libs all use the same CRT library. There are no linker errors. The 3rd-party code uses inheritance and there are 5 base classes. I've commented out as much code as I can whilst still getting it to build and I just can't see what's up.
So if anyone knows why code in a .lib would act differently to the same code in a .exe, I would love to hear it. Ditto any tips for tracking down memory overwrites! I am using Visual Studio 2008.

One possibility is that it's a calling convention mismatch - make sure that your libraries and executables are all set to use the same default calling convention (usually __cdecl). To set that, open up your project properties and go to Configuration Properties > C/C++ > Advanced and look at the Calling Convention option. If you call a function with the wrong calling convention, you'll completely mess up the stack.
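If you can't change the project-wide option (for example, because the 3rd-party .lib was built with a fixed setting), a hedged alternative is to spell the convention out on every declaration that crosses the library boundary, so a mismatched project default can't silently change it. This is only a sketch; MYLIB_CALL and the function names are made up for illustration:
// Shared header, seen by both the .lib and the .exe
#define MYLIB_CALL __cdecl

class Foo;                               // the 3rd-party type
Foo* MYLIB_CALL CreateFoo();             // every module now agrees on the convention
void MYLIB_CALL DestroyFoo(Foo* foo);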

OK, I tracked the problem down and it's a cracker, if anyone's interested. Basically, my .LIB, which exhibited the problem, had defined _WIN32_WINNT as 0x0500 (Windows 2000 and greater), but my EXE and the 3rd-party LIB had it defined as 0x0600 (Vista). Now, one of the headers included by the 3rd-party lib is sspi.h, which defines a structure called SecurityFunctionTable that includes the following snippet:
#if OSVER(NTDDI_VERSION) > NTDDI_WIN2K
// Fields below this are available in OSes after w2k
SET_CONTEXT_ATTRIBUTES_FN_W SetContextAttributesW;
#endif // greater than 2K
To cut a long story short, this meant a mismatch in object sizes between the LIBs, and this was causing the Run-Time Check Failure.
Class!
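For anyone hitting the same thing: the general fix is to define the Windows version macros in exactly one shared header and include it first in every project, so all modules see the same structure layouts. A rough sketch, assuming you want the whole solution to target Vista like the EXE did (the header name is made up):
// targetver_common.h (hypothetical) - include this before <windows.h> everywhere
#ifdef _WIN32_WINNT
#error "_WIN32_WINNT is already defined; include targetver_common.h first."
#endif
#define _WIN32_WINNT 0x0600   // Vista; sdkddkver.h derives NTDDI_VERSION from this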

Is your .lib file linked against the library's .lib? I assume from your example that you are including the header with the declaration of the destructor; without it, deleting an object of such a type still compiles but can result in undefined behaviour (in a bizarre manner contrary to the general rule that something must be defined before it is used). If the .lib files aren't linked together, it's possible that a custom operator delete or destructor is having some weird linking issues, and while that shouldn't happen, you never can quite tell if it won't.

Without seeing more code, it's hard to give you a firm answer. However, for tracking down memory overwrites, I recommend using WinDbg (free from Microsoft, search for "Debugging Tools for Windows").
When you have it attached to your process, you can have it set breakpoints for memory access (read, write, or execute). It's really powerful overall, but it should especially help you with this.
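For example, once you know the address of the variable that's getting stomped on, a hardware write breakpoint such as ba w4 <address> breaks the instant anything writes those 4 bytes, which usually lands you right on the offending code. Visual Studio 2008's native data breakpoints (Debug > New Breakpoint > New Data Breakpoint) do much the same job if you would rather stay in the IDE.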

The error is thrown when either the object goes out of scope or is deleted.
Whenever I've run into this it had to do with the compiled library using a different version of the C++ runtime than the rest of the application.

C++ Passing std::string by reference to function in dll

I have a problem with passing a std::string by reference to a function in a DLL.
This is function call:
CAFC AFCArchive;
std::string sSSS = std::string("data\\gtasa.afc");
AFCER_PRINT_RET(AFCArchive.OpenArchive(sSSS.c_str()));
//AFCER_PRINT_RET(AFCArchive.OpenArchive(sSSS));
//AFCER_PRINT_RET(AFCArchive.OpenArchive("data\\gtasa.afc"));
This is function header:
#define AFCLIBDLL_API __declspec(dllimport)
AFCLIBDLL_API EAFCErrors CAFC::OpenArchive(std::string const &_sFileName);
I tried stepping through the call in the debugger and looking at the value of _sFileName inside the function.
Inside the function, _sFileName holds a garbage value (for example, t4gs..\n\t).
I tried to detect heap corruption, but no error is reported.
The DLL is compiled with debug settings, and the .exe program is compiled in debug too.
What's wrong? Help!
P.S. I used Visual Studio 2013. WinApp.
EDIT
I have changed the function's header to this code:
AFCLIBDLL_API EAFCErrors CAFC::CreateArchive(char const *const _pArchiveName)
{
std::string _sArchiveName(_pArchiveName);
...
I really don't know how to fix this bug...
About the heap: it is allocated in the virtual memory of our process, right? In that case, that virtual memory is common to both the EXE and the DLL.
The issue has little to do with STL, and everything to do with passing objects across application boundaries.
1) The DLL and the EXE must be compiled with the same project settings. You must do this so that the struct alignment and packing are the same, the members and member functions do not have different behavior, and even more subtle, the low-level implementation of a reference and reference parameters is exactly the same.
2) The DLL and the EXE must use the same runtime heap. To do this, you must use the DLL version of the runtime library.
You would have encountered the same problem if you created a class that does similar things (in terms of memory management) as std::string.
Probably the reason for the memory corruption is that the object in question (std::string in this case) allocates and manages dynamically allocated memory. If the application uses one heap, and the DLL uses another heap, how is that going to work if you instantiated the std::string in say, the DLL, but the application is resizing the string (meaning a memory allocation could occur)?
C++ classes like std::string can be used across module boundaries, but doing so places significant constraints on the modules. Simply put, both modules must use the same instance of the runtime.
So, for instance, if you compile one module with VS2013, then you must do so for the other module. What's more, you must link to the dynamic runtime rather than statically linking the runtime. The latter results in distinct runtime instances in each module.
And it looks like you are exporting member functions. That also requires a common shared runtime. And you should use __declspec(dllexport) on the entire class rather than individual members.
If you control both modules, then it is easy enough to meet these requirements. If you wish to let other parties produce one or other of the modules, then you are imposing a significant constraint on those other parties. If that is a problem, then consider using more portable interop. For example, instead of std::string use const char*.
Now, it's possible that you are already using a single shared instance of the dynamic runtime. In which case the error will be more prosaic. Perhaps the calling conventions do not match. Given the sparse level of detail in your question, it's hard to say anything with certainty.
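As a sketch of what that more portable interop can look like (the names below are illustrative, not the question's real CAFC API): only plain C types cross the DLL boundary, so neither module can end up resizing or freeing the other's string.
// Exported from the DLL - no std::string object ever crosses the boundary
extern "C" __declspec(dllexport) int AFC_OpenArchive(void* archive, const char* fileName);

// Caller side: keep the std::string local and pass only the raw characters
// std::string sPath("data\\gtasa.afc");
// int err = AFC_OpenArchive(handle, sPath.c_str());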
I encountered a similar problem.
I resolved it by synchronizing the Configuration Properties -> C/C++ settings.
If you want debug mode:
Set _DEBUG definition in Preprocessor Definitions in both projects.
Set /MDd in Code Generation -> Runtime Library in both projects.
If you want release mode:
Remove _DEBUG definition in Preprocessor Definitions in both projects.
Set /MD in Code Generation -> Runtime Library in both projects.
By 'both projects' I mean the exe and the dll project.
This works for me, especially when I don't want to change any of the dll's settings but only adjust my project to match them.

Inline class constructor to avoid vc memory crash

A C++ class constructor can be inlined or not be inlined. However, I found a strange situation where only inlining the class constructor avoids a Visual Studio memory crash. The example is as follows:
dll.h
#include <memory>   // needed for std::auto_ptr below
class _declspec(dllexport) Image
{
public:
Image();
virtual ~Image();
};
class _declspec(dllexport) Testimage:public Image
{
public:
Testimage();
virtual ~Testimage();
};
typedef std::auto_ptr<Testimage> TestimagePtr;
dll.cpp
#include "dll.h"
#include <assert.h>
Image::~Image()
{
std::cout<<"Image is being deleted."<<std::endl;
}
Image::Image()
{
}
Testimage::Testimage()
{
}
Testimage::~Testimage()
{
std::cout<<"Geoimage is being deleted."<<std::endl;
}
The dll library is compiled as a dynamic library, and it is statically linked to the C++ runtime library (Multi-threaded Debug (/MTd)). The executable program that runs the library is as follows:
#include "dll.h"

int main()
{
TestimagePtr my_img(new Testimage());
return 0;
}
The executable program will invoke the dll library, and it also statically links the runtime library. The problem I have is that when running the executable program an error dialog appears (the screenshot is not reproduced here).
However, when the class constructor in dll is inlined as the following codes show:
class _declspec(dllexport) Image
{
public:
Image();
virtual ~Image();
};
class _declspec(dllexport) Testimage:public Image
{
public:
Testimage()
{
}
virtual ~Testimage();
};
The crash will disappear. Could someone explain the reason behind? Thanks! By the way, I am using VC2010.
EDIT: The following situation also triggers the same crash.
Situation 1
int main()
{
//TestimagePtr my_img(new Testimage());
Testimage *p_img;
p_img = new Testimage();
delete p_img;
return 0;
}
it is statically linked to the C++ runtime library (Multi-threaded Debug (/MTd))
This is a very problematic scenario in versions of Visual Studio prior to VS2012. The issue is that you have more than one version of the CRT loaded in your process. One used by your EXE, another used by the DLL. This can cause many subtle problems, and not so subtle problems like this crash.
The CRT has global state, stuff like errno and strtok() cannot work properly when that global state is updated by one copy of the CRT and read back by another copy. Relevant to your crash, a hidden global state variable is the heap that the CRT uses to allocate memory from. Functions like malloc() and ::operator new use that heap.
This goes wrong when objects are allocated by one copy of the CRT and released by another. The pointer that's passed to free() or ::operator delete belongs to the wrong heap. What happens next depends on your operating system: a silent memory leak in XP. In Vista and up, your program runs with the debug version of the memory manager enabled, which triggers a breakpoint when you have a debugger attached to your process to tell you that there's a problem with the pointer. The dialog in your screenshot is the result. It isn't otherwise very clear to me how inlining the constructor could make a difference; the fundamental issue, however, is that your code invokes undefined behavior, which has a knack for producing random outcomes.
There are two approaches available to solve this problem. The first one is the simple one, just build both your EXE and your DLL project with the /MD compile option instead. This selects the DLL version of the CRT. It is now shared by both modules and you'll only have a single copy of the CRT in your process. So there is no longer a problem with having one module allocating and another module releasing memory, the same heap is used.
This will work fine to solve your problem but can still become an issue later. A DLL tends to live a life of its own and may some day be used by another EXE that was built with a different version of the CRT. The CRT will now again not be shared since they'll use different versions of the DLL, invoking the exact same failure mode you are seeing today.
The only way to guarantee that this cannot happen is to design your DLL interface carefully. And ensure that there will never be a case where the DLL allocates memory that the client code needs to release. That requires giving up on a lot of C++ goodies. You for example can never write a function that returns a C++ object, like std::string. And you can never allow an exception to cross the module boundary. You are basically down to a C-style interface. Note how COM addresses this problem by using interface-based programming techniques and a class factory plus reference counting to solve the memory management problem.
VS2012 has a counter-measure against this problem: it has a CRT version that allocates from the default process heap. This solves this particular problem, but it is not otherwise a workaround for the global-state issue with other runtime functions. And it adds some new problems of its own; a DLL compiled with /MT that gets unloaded without releasing all of its allocations now causes an unpluggable leak, for example.
This is an ugly problem in C++, the language fundamentally misses an ABI specification that addresses problems like this. The notion of modules is entirely missing from the language specification. Being worked on today but not yet completed. Not simple to do, it is solved in other languages like Java and the .NET languages by specifying a virtual machine, providing a runtime environment where memory management is centralized. Not the kind of runtime environment that excites C++ programmers.
I tried to reproduce your problem in VC2010 and it doesn't crash. It works whether the constructor is inline or not. Your problem is probably not in the code you show here.
Your project is hard to open as it seems to have its file paths set as absolute, probably because it was generated with CMake (so the files are not found by the compiler).
The problem I see in your code is that you declare the exported classes with _declspec(dllexport) written directly.
You should have a #define for this, and its value should be _declspec(dllimport) when the header is read during the exe's compilation. Maybe the problem comes from that.
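The usual shape of that macro is roughly the following; DLL_EXPORTS and DLL_API are conventional placeholder names, not something defined by the question's project:
// Define DLL_EXPORTS in the DLL project settings only; the consuming EXE does not define it
#ifdef DLL_EXPORTS
#define DLL_API __declspec(dllexport)   // building the DLL: export the classes
#else
#define DLL_API __declspec(dllimport)   // using the DLL: import them
#endif

class DLL_API Image
{
public:
    Image();
    virtual ~Image();
};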

Do you really need a main() in C++?

From what I can tell you can kick off all the action in a constructor when you create a global object. So do you really need a main() function in C++ or is it just legacy?
I can understand that it could be considered bad practice to do so. I'm just asking out of curiosity.
If you want to run your program on a hosted C++ implementation, you need a main function. That's just how things are defined. You can leave it empty if you want of course. On the technical side of things, the linker wants to resolve the main symbol that's used in the runtime library (which has no clue of your special intentions to omit it - it just still emits a call to it). If the Standard specified that main is optional, then of course implementations could come up with solutions, but that would need to happen in a parallel universe.
If you go with the "execution starts in the constructor of my global object" approach, beware that you set yourself up for many problems related to the order of construction of namespace-scope objects defined in different translation units (so what is the entry point? The answer is: you will have multiple entry points, and which entry point is executed first is unspecified!). In C++03 you aren't even guaranteed that cout is properly constructed (in C++0x you have a guarantee that it is, before any code tries to use it, as long as there is a preceding include of <iostream>).
You don't have those problems and don't need to work around them (which can be very tricky) if you properly start executing things in ::main.
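A small sketch of the trap, assuming two hypothetical translation units in the same program:
// a.cpp - a namespace-scope object with a non-trivial constructor
#include <string>
std::string g_banner("built at startup");

// b.cpp - another global whose constructor uses it
#include <string>
extern std::string g_banner;
struct App
{
    App() { g_banner += " ...maybe"; }   // may run before g_banner has been constructed
};
static App g_app;                        // construction order across a.cpp and b.cpp is unspecified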
As mentioned in the comments, there are however several systems that hide main from the user by having them supply the name of a class which is instantiated within main. This works similarly to the following example:
class MyApp {
public:
    MyApp(std::vector<std::string> const& argv);
    int run() {
        /* code comes here */
        return 0;
    }
};
IMPLEMENT_APP(MyApp);
IMPLEMENT_APP(MyApp);
To the user of this system, it's completely hidden that there is a main function, but that macro would actually define such a main function as follows
#define IMPLEMENT_APP(AppClass) \
int main(int argc, char **argv) { \
AppClass m(std::vector<std::string>(argv, argv + argc)); \
return m.run(); \
}
This doesn't have the problem of unspecified order of construction mentioned above. The benefit of them is that they work with different forms of higher level entry points. For example, Windows GUI programs start up in a WinMain function - IMPLEMENT_APP could then define such a function instead on that platform.
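As a sketch of that last point, the same macro could target WinMain on Windows, leaning on MSVC's __argc/__argv globals to recover the arguments (real frameworks such as wxWidgets do this in a more involved way):
#include <windows.h>
#include <stdlib.h>    // __argc, __argv (MSVC-specific globals)
#include <string>
#include <vector>

#define IMPLEMENT_APP(AppClass)                                        \
int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)                   \
{                                                                      \
    AppClass m(std::vector<std::string>(__argv, __argv + __argc));     \
    return m.run();                                                    \
}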
Yes! You can do away with main.
Disclaimer: You asked if it were possible, not if it should be done. This is a totally un-supported, bad idea. I've done this myself, for reasons that I won't get into, but I am not recommending it. My purpose wasn't getting rid of main, but it can do that as well.
The basic steps are as follows:
Find crt0.c in your compiler's CRT source directory.
Add crt0.c to your project (a copy, not the original).
Find and remove the call to main from crt0.c.
Getting it to compile and link can be difficult; How difficult depends on which compiler and which compiler version.
Added
I just did it with Visual Studio 2008, so here are the exact steps you have to take to get it to work with that compiler.
Create a new C++ Win32 Console Application (click next and check Empty Project).
Add new item.. C++ File, but name it crt0.c (not .cpp).
Copy contents of C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\crt\src\crt0.c and paste into crt0.c.
Find mainret = _tmain(__argc, _targv, _tenviron); and comment it out.
Right-click on crt0.c and select Properties.
Set C/C++ -> General -> Additional Include Directories = "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\crt\src".
Set C/C++ -> Preprocessor -> Preprocessor Definitions = _CRTBLD.
Click OK.
Right-click on the project name and select Properties.
Set C/C++ -> Code Generation -> Runtime Library = Multi-threaded Debug (/MTd) (*).
Click OK.
Add new item.. C++ File, name it whatever (app.cpp for this example).
Paste the code below into app.cpp and run it.
(*) You can't use the runtime DLL, you have to statically link to the runtime library.
#include <iostream>
class App
{
public: App()
{
std::cout << "Hello, World! I have no main!" << std::endl;
}
};
static App theApp;
Added
I removed the superfluous exit call and the blurb about lifetime as I think we're all capable of understanding the consequences of removing main.
Ultra Necro
I just came across this answer and read both it and John Dibling's objections below. It was apparent that I didn't explain what the above procedure does and why that does indeed remove main from the program entirely.
John asserts that "there is always a main" in the CRT. Those words are not strictly correct, but the spirit of the statement is: main is not a function provided by the CRT, you must add it yourself, and the call to that function is in the CRT-provided entry point function.
The entry point of every C/C++ program is a function in a module named 'crt0'. I'm not sure if this is a convention or part of the language specification, but every C/C++ compiler I've come across (which is a lot) uses it. This function basically does three things:
Initialize the CRT
Call main
Tear down
In the example above, the call is _tmain but that is some macro magic to allow for the various forms that 'main' can have, some of which are VS specific in this case.
What the above procedure does is it removes the module 'crt0' from the CRT and replaces it with a new one. This is why you can't use the Runtime DLL, there is already a function in that DLL with the same entry point name as the one we are adding (2). When you statically link, the CRT is a collection of .lib files, and the linker allows you to override .lib modules entirely. In this case a module with only one function.
Our new program contains the stock CRT, minus its CRT0 module, but with a CRT0 module of our own creation. In there we remove the call to main. So there is no main anywhere!
(2) You might think you could use the runtime DLL by renaming the entry point function in your crt0.c file, and changing the entry point in the linker settings. However, the compiler is unaware of the entry point change and the DLL contains an external reference to a 'main' function which you're not providing, so it would not compile.
Generally speaking, an application needs an entry point, and main is that entry point. The fact that initialization of globals might happen before main is pretty much irrelevant. If you're writing a console or GUI app you have to have a main for it to link, and it's only good practice to have that routine be responsible for the main execution of the app rather than use other features for bizarre unintended purposes.
Well, from the perspective of the C++ standard, yes, it's still required. But I suspect your question is of a different nature than that.
I think doing it the way you're thinking about would cause too many problems though.
For example, in many environments the return value from main is given as the status result from running the program as a whole. And that would be really hard to replicate from a constructor. Some bit of code could still call exit of course, but that seems like using a goto and would skip destruction of anything on the stack. You could try to fix things up by having a special exception you threw instead in order to generate an exit code other than 0.
But then you still run into the problem of the order of execution of global constructors not being defined. That means that in any particular constructor for a global object you won't be able to make any assumptions about whether or not any other global object yet exists.
You could try to solve the constructor order problem by just saying each constructor gets its own thread, and if you want to access any other global objects you have to wait on a condition variable until they say they're constructed. That's just asking for deadlocks though, and those deadlocks would be really hard to debug. You'd also have the issue of which thread exiting with the special 'return value from the program' exception would constitute the real return value of the program as a whole.
I think those two issues are killers if you want to get rid of main.
And I can't think of a language that doesn't have some basic equivalent to main. In Java, for example, there is an externally supplied class name whose static main function is called. In Python, there's the __main__ module. In Perl, there's the script you specify on the command line.
If you have more than one global object being constructed, there is no guarantee as to which constructor will run first.
If you are building static or dynamic library code then you don't need to define main yourself, but you will still wind up running in some program that has it.
If you are coding for windows, do not do this.
Running your app entirely from within the constructor of a global object may work just fine for quite a while, but sooner or later you will make a call to the wrong function and end up with a program that terminates without warning.
Global object constructors run during the startup of the C runtime.
The C runtime startup code runs during the DllMain of the C runtime DLL.
During DllMain, you are holding the DLL loader lock.
Trying to load another DLL while already holding the DLL loader lock results in a swift death for your process.
Compiling your entire app into a single executable won't save you - many Win32 calls have the potential to quietly load system DLLs.
There are implementations where global objects are not possible, or where non-trivial constructors are not possible for such objects (especially in the mobile and embedded realms).

Access violation calling C++ dll

I created a C++ DLL (using MinGW) from code I wrote on Linux (gcc), but somehow I am having difficulties using it in VC++. The DLL basically exposes just one class; I created a pure virtual interface for it, and also a factory function which creates the object (the only export), which looks like this:
extern "C" __declspec(dllexport) DeviceDriverApi* GetX5Driver();
I added extern "C" to prevent name mangling; dllexport is replaced by dllimport in the code where I actually use the DLL, and DeviceDriverApi is the pure virtual interface.
Now I wrote simple code in VC++ which just calls the factory function and then tries to delete the pointer. It compiles without any problems, but when I run it I get an access violation. If I try to call any method of the object I get an access violation again.
When I compile the same code in MinGW (gcc) and use the same library, it runs without any problems. So there must be some difference (hehe, I guess many differences actually :)) between how the VC++ code and the gcc code use the library.
Any ideas what?
Cheers,
Tom
Edit:
The code is:
DeviceDriverApi* x5Driver = GetX5Driver();
if (x5Driver->isConnected())
Console::WriteLine(L"Hello World");
delete x5Driver;
It's crashing when I try to call the method and when I try to delete the pointer as well. The object is created correctly though (the first line). There are some debug outputs when the object is created and I can see them before I get the access violation error.
You're using one compiler (mingw) for the DLL, and another (VC++) for the calling code.
You're calling a 'C' function, but returning a pointer to a C++ Object.
That will never work, because vtable layouts are almost guaranteed to be incompatible. And the DLL and the app are probably using different memory managers, so you're doing new() with one and delete() with the other. Again, it just won't work.
For this to work the two compilers need to both support a standard ABI (Application Binary Interface). I don't think such a thing exists for Windows.
The best option is to expose all your DLL object's methods and properties via C functions (including one to delete the object). You can then re-wrap them in a C++ object on the calling end.
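A sketch of what that flattened surface might look like (the X5_* names are made up for illustration, they are not the question's real exports):
// DLL side: an opaque handle plus plain C functions, so no vtable or class layout
// ever has to match between MinGW and VC++, and the DLL frees what it allocated
extern "C" __declspec(dllexport) void* X5_Create(void);
extern "C" __declspec(dllexport) int   X5_IsConnected(void* handle);   // returns 0 or 1
extern "C" __declspec(dllexport) void  X5_Destroy(void* handle);
On the VC++ side you can then wrap those three calls back up in a small C++ class of your own.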
The two different compilers may be using different calling conventions. Try putting _cdecl before the function name in both the client code and the DLL code and recompiling both.
More info on calling conventions here: http://en.wikipedia.org/wiki/X86_calling_conventions
EDIT: The question was updated with more detail and it looks likely the problem is what Adrien Plisson describes at the end of his answer. You're creating an object in one module and freeing it in another, which is wrong.
(1) I suspect a calling convention problem as well, though the simple suggestion by Leo doesn't seem to have helped.
Is isConnected virtual? It is possible that MinGW and VC++ use different implementations for a VTable, in which case, well, tough luck.
Try to see how far you get with the debugger: does it crash at the call, or the return? Do you arrive at invalid code? (If you know to read assembly, that usually helps a lot with these problems.)
Alternatively, add trace statements to the various methods, to see how far you get.
(2) For a public DLL interface, never free memory in the caller that was allocated by a callee (or vice versa). The DLL likely runs with a completely different heap, so the pointer is not known.
If you want to rely on that behavior, you need to make sure:
Caller and callee (i.e. DLL and main program, in your case) are compiled with the same version of the same compiler
for all supported compilers, you have configured the compile options to ensure caller and callee use the same shared runtime library state.
So the best way is to change your API to:
extern "C" __declspec(dllexport) DeviceDriverApi* GetX5Driver();
extern "C" __declspec(dllexport) void FreeDeviceDriver(DeviceDriverApi* driver);
and, at the call site, wrap the returned pointer in some way (e.g. in a boost::intrusive_ptr).
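For example, staying with the factory/free pair above, the caller can tie the two together with a smart pointer so FreeDeviceDriver is always called exactly once. This is only a sketch; std::shared_ptr's custom-deleter constructor is shown, and boost::shared_ptr (or std::tr1::shared_ptr on older compilers) works the same way:
#include <memory>
// The deleter routes destruction back into the DLL that allocated the object
std::shared_ptr<DeviceDriverApi> driver(GetX5Driver(), FreeDeviceDriver);
// if (driver->isConnected()) { ... }   // used like a normal pointer
// FreeDeviceDriver runs automatically when the last shared_ptr goes away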
Try looking at the libraries imported by both your DLL and your client executable (you can use a dependency viewer, dumpbin, or any other tool you like), and verify that both the DLL and the client code are using the same C++ runtime.
If that is not the case, you can indeed run into issues, since the way memory is managed may differ between the two, leading to a crash when a pointer allocated by one runtime is freed by another.
If this is really your problem, try not destroying the pointer in your client executable; instead, declare and export a function in your DLL which will take care of destroying the pointer.

Virtual methods chaos, how can I find what causes that?

A colleague of mine had a problem with some C++ code today. He was debugging the weird behaviour of an object's virtual method. Whenever the method executed (under debug, Visual Studio 2005), everything went wrong, and the debugger wouldn't step into that method, but into the object's destructor! Also, the virtual table of the object only listed its destructor, no other methods.
I haven't seen this behaviour before, and a runtime error was printed, saying something about the ESP register. I wish I could give you the right error message, but I don't remember it correctly now.
Anyway, have any of you guys ever encountered that? What could cause such behaviour? How would that be fixed? We tried to rebuild the project many times, restarted the IDE, nothing helped. We also used the _CrtCheckMemory function before that method call to make sure the memory was in a good state, and it returned true (which means OK). I have no more ideas. Do you?
I've seen that before. Generally it occurs because I'm using a class from a Release-built .LIB file while I'm in Debug mode. Someone else has probably seen a better example, and I'd yield my answer to theirs.
Maybe you use C-style casts where a static_cast<> is required? This may result in the kind of error you report, whenever multiple inheritance is involved, e.g.:
class Base1 {};
class Base2 {};
class Derived : public Base1, public Base2 {};
Derived *d = new Derived;
Base2* b2_1 = (Base2*)d; // wrong!
Base2* b2_2 = static_cast<Base2*>(d); // correct
assert( b2_1 == b2_2 ); // assertion may fail, because b2_1 != b2_2
Note, this may not always be the case; it depends on the compiler and on the declarations of all the classes involved. In particular, the C-style cast typically only goes wrong where the compiler cannot see the full class definitions (e.g. the classes are only forward-declared in that translation unit), because the cast then degenerates into a reinterpret_cast and no pointer adjustment is done, whereas a static_cast would refuse to compile there.
OR: A completely different part of your code is going wild and is overwriting memory. Try to isolate the error and check if it still occurs. CrtCheckMemory will find only a few cases where you overwrite memory (e.g. when you write into specially marked heap management locations).
If you ever call a function with the wrong number of parameters, this can easily end up trashing your stack and producing undefined behaviour. I seem to recall that certain errors when using MFC could easily cause this, for example if you use the dispatch macros to point a message at a method that doesn't have the right number or type of parameters (I seem to recall that those function pointers aren't strongly checked for type). It's been probably a decade since I last encountered that particular problem, so my memory is hazy.
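A tiny sketch of that failure mode, shown only to illustrate how a signature mismatch trashes the stack (this is deliberately undefined behaviour and the names are made up; don't ship anything like it):
#include <cstdio>

// The handler really takes two ints, but it gets called through a pointer type
// that claims it takes none. With a callee-cleans-the-stack convention such as
// __stdcall, the call leaves ESP pointing at the wrong place.
int __stdcall RealHandler(int a, int b) { return a + b; }

typedef int (__stdcall *BadHandler)(void);

int main()
{
    BadHandler h = reinterpret_cast<BadHandler>(&RealHandler);
    h();   // undefined behaviour: caller and callee disagree about the stack
    std::printf("if you get here at all, the stack is already suspect\n");
    return 0;
}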
The value of ESP was not properly saved across a function call.
This sort of behaviour is usually indicative of the calling code having been compiled with a different definition of a class or function than the code that created the particular class in question.
Is it possible that there is an different version of a component dll that is being loaded instead of the freshly built one? This can happen if you copy things as part of a post-build step or if the process is run from a different directory or changes its dll search path before doing a LoadLibrary or equivalent.
I've encountered it most often in complex projects where a class definition is changed to add, remove or change the signature of a virtual function, and then an incremental build is done and not all the code that needs to be recompiled is actually recompiled. Theoretically, it could happen if some part of the program is overwriting the vptr or vtables of some polymorphic objects, but I've always found that a bad partial build is a much more likely cause.
This may be 'user error': a developer deliberately tells the compiler to build only one project when others should be rebuilt, or there are multiple solutions, or multiple projects in a solution, where the dependencies are not correctly set up.
Very occasionally, Visual Studio can slip up and not get the generated dependencies correct even when the projects in a solution are correctly linked. This happens less often than Visual Studio is blamed for it.
Expunging all intermediate build files and rebuilding everything from source usually fixes the problem. Obviously for very large projects this can be a severe penalty.
Since it's guess-fest anyway, here's one from me:
Your stack is messed up and _CrtCheckMemory doesn't check for that. As to why the stack is corrupted:
good old stack overflow
calling convention mismatches, which was already mentioned (I don't know, like passing a callback in the wrong calling convention to a WinAPI function; what static or dynamic libraries are you linking with?)
a line like printf("%d");