I have a C# program, mainly for the UI, which uses a C++ DLL for the logic.
Recently I switched the platform toolset from VS 2012 (v110) to VS 2017 (v141); since then I receive an exception (without description) every time it tries to release a std:: collection.
Example:
{
std::string str = "";
}
I tried the same with std::map, std::stack and std::list; all of these throw an exception in the file xmemory0, inside the function _Deallocate. This does not happen when I create a simple C++ console application, so I guess it has something to do with C++ being used within a C# application.
Using a custom allocator for std::list seemed to work, but I would like to know why upgrading the compiler leads to such a problem.
Finally I found the cause: between 2012 and 2017 a new (sized) operator delete was added. So for allocating the string it used our overridden operator new, but for deallocating it used the standard delete operator.
(Do not ask why we override those operators; not my decision...)
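For reference, a minimal sketch of one way to keep allocation and deallocation paired; the global overloads and the malloc/free bodies here are purely illustrative stand-ins for whatever the real operators do. The key point is that once a custom operator new exists, the C++14 sized operator delete that newer toolsets can call must release memory the same way:
#include <cstdlib>
#include <new>
// Illustrative global overloads: every operator delete the runtime may call
// has to free memory the same way the custom operator new obtained it.
void* operator new(std::size_t size)
{
    if (void* p = std::malloc(size ? size : 1))
        return p;
    throw std::bad_alloc();
}
void operator delete(void* p) noexcept
{
    std::free(p);
}
// C++14 sized deallocation: VS2015 and later can emit calls to this overload.
void operator delete(void* p, std::size_t) noexcept
{
    std::free(p);
}
Alternatively, the /Zc:sizedDealloc- compiler option disables sized deallocation entirely.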
Related
While working on a C++/CLI project to wrap a native C++ DLL, I've come across a native function that takes in a std::string. Something like the following:
class NativeApi
{
public:
ErrorCode readFile(std::string filename = "path.csv");
};
Inside my managed wrapper implementation I allocate a new instance of the native class and call this function:
ref class ManagedApi
{
private:
NativeApi *api;
public:
ManagedApi(): api(new NativeApi()) { }
void Read()
{
api->readFile("apath.csv"); // or call with no argument to use the default value
}
};
When I run this, I get the MDA PinvokeStackImbalance complaining that this call has unbalanced the stack. I was surprised, since the only other time I ever got this MDA was from C# when calling conventions didn't match. I never saw this happen with C++/CLI, where presumably all the matching is done automatically by the compiler.
Has anyone ever seen this before? Googling came up empty. I've looked at the DLL signature and it looks something like:
?readFile@NativeApi@@QAE?AW4ErrorCode@@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z
This tells me that the function is there, and takes as a sole argument a basic_string, which should match the standard std::string typedef.
No idea what could possibly have gone wrong. I can make other calls to the native API that do not involve strings perfectly fine.
It is likely that there's a difference between the definition of std::string that you're using and the one that was used to compile the native C++ DLL. Even if the definitions are the same, the native DLL probably isn't using the same version of the C runtime as you are, so when your DLL allocates memory for the std::string, the native DLL will try to call delete on it (when the string is destroyed at the end of the readFile method), and that call to delete will go to a different heap than the one used to allocate the object!
If you want to make this work, you'll have to use the exact same version of the compiler as was used on the native DLL. Note that you'll be limited to the Release build of your project, as you don't have a native DLL that was compiled with the debug runtime.
The proper fix to this problem is to use raw types when calling across DLL boundaries (in this case, wchar_t*). If you can request a change to the native DLL, I would do that. If only raw types are used, then there's no issue with using different runtimes, and everything works the way it should.
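As a sketch of what the raw-type approach could look like for this particular API (const char* for a narrow std::string); readFileRaw is a hypothetical name, and its definition has to live inside the native DLL so the std::string is built and destroyed with the DLL's own CRT:
class NativeApi
{
public:
    ErrorCode readFile(std::string filename = "path.csv"); // existing API
    ErrorCode readFileRaw(const char* filename);           // hypothetical raw-type entry point
};
// In the native DLL's .cpp, compiled with the DLL's toolset:
ErrorCode NativeApi::readFileRaw(const char* filename)
{
    return readFile(std::string(filename)); // the std::string never crosses the module boundary
}
The managed wrapper would then call api->readFileRaw("apath.csv") and never construct a std::string of its own.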
I have a C++ client to a C++/CLI DLL, which initializes a series of C# dlls.
This used to work. The code that is failing has not changed. The code that has changed is not called before the exception is thrown. My compile environment has changed, but recompiling on a machine with an environment similar to my old one still failed. (EDIT: as we see in the answer this is not entirely true, I was only recompiling the library in the old environment, not the library and client together. The client projects had been upgraded and couldn't easily go back.)
Someone besides me recompiled the library, and we started getting memory management issues ("The pointer passed in as a String must not be in the bottom 64K of the process's address space"). I recompiled it, and all worked well with no code changes. (Alarm #1) Recently it was recompiled again, and the memory management issues with strings reappeared; this time they're not going away. The new error is: Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
I'm pretty sure the problem is not located where I see the exception; the code didn't change between the successful and failing builds, but we should review it to be complete. Ignore the names of things; I don't have much control over the design of what it's doing with these strings. And sorry for the confusion, but note that _bridge and bridge are different things. Many lines of code are missing because this question is already too long.
Defined in library:
struct Config
{
std::string aye;
std::string bee;
std::string sea;
};
extern "C" __declspec(dllexport) BridgeBase_I* __stdcall Bridge_GetConfiguredDefaultsImplementationPointer(
const std::vector<Config> & newConfigs, /**< new configurations to apply **/
std::string configFolderPath, /**< folder to write config files in **/
std::string defaultConfigFolderPath, /**< folder to find default config files in **/
std::string & status /**< output status of config parse **/
);
In client function:
GatewayWrapper::Config bridge;
std::string configPath("./config");
std::string defaultPath("./config/default");
GatewayWrapper::Config gwtransport;
bridge.aye = "bridged.dll";
bridge.bee = "1.0";
bridge.sea = "";
configs.push_back(bridge);
_bridge = GatewayWrapper::Bridge_GetConfiguredDefaultsImplementationPointer(configs, configPath, defaultPath, status);
Note that the call to the library that is crashing is in the same scope as the vector declaration, struct declaration, string assignments and vector push_back.
There are no threading calls in this section of code, but there are other threads running doing other things. There is no pointer math here, there are no heap allocations in the area except perhaps inside the standard library.
I can run the code up to the Bridge_GetConfiguredDefaultsImplementationPointer call in the debugger, and the contents of the configs vector look correct in the debugger.
Back in the library, in the first sub-function, where the debugger doesn't shine, I've broken the failing statement down into several console prints.
System::String^ temp;
List<CConfig^>^ configs = gcnew List<CConfig^>((INT32)newConfigs.size());
for (int i = 0; i < newConfigs.size(); i++)
{
std::cout << newConfigs[i].aye << std::flush; // prints
std::cout << newConfigs[i].aye.c_str() << std::flush; // prints
temp = gcnew System::String(newConfigs[i].aye.c_str());
System::Console::WriteLine(temp); // prints
std::cout << "Testing string creation" << std::endl; // prints
std::cout << newConfigs[i].bee << std::flush; // crashes here
}
I get the same exception on access of bee if I move the newConfigs[i].bee out above the assignment of temp or comment out the list declaration/assignment.
Just for reference that the std::string in a struct in a vector should have arrived at its destination OK:
Is std::vector copying the objects with a push_back?
std::string in struct - Copy/assignment issues?
http://www.cplusplus.com/reference/vector/vector/operator=/
Assign one struct to another in C
Why this exception is not caught by my try/catch
https://stackoverflow.com/a/918891/2091951
Generic AccessViolationException related questions
How to handle AccessViolationException
Programs randomly getting System.AccessViolationException
https://connect.microsoft.com/VisualStudio/feedback/details/819552/visual-studio-debugger-throws-accessviolationexception
finding the cause of System.AccessViolationException
https://msdn.microsoft.com/en-us/library/ms164911.aspx
Catching access violation exceptions?
AccessViolationException when using C++ DLL from C#
Suggestions in the above questions:
Change to .NET 3.5, change target platform - these solutions could have serious issues with a large multi-project solution.
HandleProcessCorruptedStateExceptions - does not work in C++; this attribute is for C#, and catching this error could be a very bad idea anyway
Change legacyCorruptedStateExceptionsPolicy - this is about catching the error, not preventing it
Install .NET 4.5.2 - can't, already have 4.6.1. Installing 4.6.2 did not help. Recompiling on a different machine that didn't have 4.5 or 4.6 installed did not help. (Although this used to compile and run on my machine before installing Visual Studio 2013, which strongly suggests the .NET library is an issue?)
VSDebug_DisableManagedReturnValue - I only see this mentioned in relation to a specific crash in the debugger, and the help from Microsoft says that other AccessViolationException issues are likely unrelated. (http://connect.microsoft.com/VisualStudio/feedbackdetail/view/819552/visual-studio-debugger-throws-accessviolationexception)
Change Comodo Firewall settings - I don't use this software
Change all the code to managed memory - Not an option. The overall design of calling C# from C++ through C++/CLI is resistant to change. I was specifically asked to design it this way to leverage existing C# code from existing C++ code.
Make sure memory is allocated - memory should be allocated on the stack in the C++ client. I've attempted to make the vector a non-reference parameter, to force a vector copy into explicitly library-controlled memory; it did not help.
"Access violations in unmanaged code that bubble up to managed code are always wrapped in an AccessViolationException." - Fact, not a solution.
but it was the mismatch, not the specific version, that was the problem
Yes, that's black letter law in VS. You unfortunately just missed the counter-measures that were built into VS2012 to turn this mistake into a diagnosable linker error. Previously (and in VS2010), the CRT would allocate its own heap with HeapAlloc(). Now (in VS2013), it uses the default process heap, the one returned by GetProcessHeap().
That is in itself enough to trigger an AVE when you run your app on Vista or higher: allocating memory from one heap and releasing it from another triggers an AVE at runtime, or a debugger break when you debug with the Debug Heap enabled.
This is not where it ends, another significant issue is that the std::string object layout is not the same between the versions. Something you can discover with a little test program:
#include <string>
#include <iostream>
int main()
{
std::cout << sizeof(std::string) << std::endl;
return 0;
}
VS2010 Debug : 32
VS2010 Release : 28
VS2013 Debug : 28
VS2013 Release : 24
I have a vague memory of Stephan T. Lavavej mentioning the std::string object size reduction, very much presented as a feature, but I can't find it again. The extra 4 bytes in the Debug build are caused by the iterator debugging feature; it can be disabled with _HAS_ITERATOR_DEBUGGING=0 in the Preprocessor Definitions. Not a feature you'd quickly want to throw away, but it makes mixing Debug and Release builds of the EXE and its DLLs quite lethal.
Needless to say, the different object sizes seriously bytes when the Config object is created in a DLL built with one version of the standard C++ library and used in another. Many mishaps, the most basic one is that the code will simply read the Config::bee member from the wrong offset. An AVE is (almost) guaranteed. Lots more misery when code allocates the small flavor of the Config object but writes the large flavor of std::string, that randomly corrupts the heap or the stack frame.
Don't mix.
I believe 2013 introduced a lot of changes in the internal data formats of STL containers, as part of a push to reduce memory usage and improve perf. I know vector became smaller, and string is basically a glorified vector<char>.
Microsoft acknowledges the incompatibility:
"To enable new optimizations and debugging checks, the Visual Studio
implementation of the C++ Standard Library intentionally breaks binary
compatibility from one version to the next. Therefore, when the C++
Standard Library is used, object files and static libraries that are
compiled by using different versions can't be mixed in one binary (EXE
or DLL), and C++ Standard Library objects can't be passed between
binaries that are compiled by using different versions."
If you're going to pass std::* objects between executables and/or DLLs, you absolutely must ensure that they are using the same version of the compiler. It would be well-advised to have your client and its DLLs negotiate in some way at startup, comparing any available versions (e.g. compiler version + flags, boost version, directx version, etc.) so that you catch errors like this quickly. Think of it as a cross-module assert.
If you want to confirm that this is the issue, you could pick a few of the data structures you're passing back and forth and check their sizes in the client vs. the DLLs. I suspect your Config class above would register differently in one of the fail cases.
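A minimal sketch of such a cross-module check, assuming one exported function can be added to the library; Bridge_GetBuildInfo is a made-up name, and _MSC_VER plus sizeof(GatewayWrapper::Config) are just two cheap values to compare. Only raw types cross the boundary, so the check itself is safe:
// Library side (hypothetical):
extern "C" __declspec(dllexport) void __stdcall Bridge_GetBuildInfo(unsigned int* compilerVersion, size_t* configSize)
{
    *compilerVersion = _MSC_VER;                  // toolset the DLL was built with
    *configSize = sizeof(GatewayWrapper::Config); // layout of a type shared across the boundary
}
// Client side, at startup (hypothetical):
unsigned int dllCompiler = 0;
size_t dllConfigSize = 0;
Bridge_GetBuildInfo(&dllCompiler, &dllConfigSize);
if (dllCompiler != _MSC_VER || dllConfigSize != sizeof(GatewayWrapper::Config))
{
    // fail fast and loudly instead of corrupting memory later
}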
I'd also like to mention that it is probably a bad idea in the first place to use smart containers in DLL calls. Unless you can guarantee that the app and DLL won't try to free or reallocate the internal buffers of the other's containers, you could easily run into heap corruption issues, since the app and DLL each have their own internal C++ heap. I think that behavior is considered undefined at best. Even passing const& arguments could still result in reallocation in rare cases, since const doesn't prevent a compiler from diddling with mutable internals.
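Applied to the function in this question, a raw-type boundary might look roughly like the sketch below; Bridge_GetConfiguredDefaultsRaw, ConfigRaw and the status buffer are hypothetical. The point is that only C types cross the boundary and each module keeps its std::string/std::vector objects to itself:
// Hypothetical flat boundary; the DLL rebuilds its own std::vector<Config> internally.
struct ConfigRaw
{
    const char* aye;
    const char* bee;
    const char* sea;
};
extern "C" __declspec(dllexport) BridgeBase_I* __stdcall Bridge_GetConfiguredDefaultsRaw(
    const ConfigRaw* newConfigs, size_t configCount, /* new configurations to apply */
    const char* configFolderPath,                    /* folder to write config files in */
    const char* defaultConfigFolderPath,             /* folder to find default config files in */
    char* statusBuffer, size_t statusBufferSize      /* status copied out rather than returned as std::string& */
);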
You seem to have memory corruption. Microsoft Application Verifier is invaluable in finding corruption. Use it to find your bug:
Install it to your dev machine.
Add your exe to it.
Only select Basics\Heaps.
Press Save. It doesn't matter whether you keep Application Verifier open.
Run your program a few times.
If it crashes, debug it; this time the crash will point to your problem, not just some random location in your program.
PS: It's a great idea to have Application Verifier enabled at all times for your development project.
There is a large legacy project I have to maintain, which I recently upgraded from Visual Studio 2008 to Visual Studio 2012. As it is a COM server and an OCX control, creating all the typelib stuff etc. resulted in some problems that I managed to solve. However, when I run the Release build now, I frequently get crashes.
I followed some advice I found here on SO and was able to track the crash down to the following piece of code:
int Phx2Preview::ClearOvlElementList() {
for (int i = 0; i < (int)m_vOvlElements.size(); i++) {
P_SAFE_DELETE(m_vOvlElements[i].pPolyOrig); // <- code crashes here
P_SAFE_DELETE(m_vOvlElements[i].pPolyDispl);
}
m_vOvlElements.clear();
m_vRefElemList.clear();
m_pRefElemSelected = NULL;
return PHXE_NO_ERROR;
}
P_SAFE_DELETE is a macro that checks whether the pointer is null and, if it is not, deletes it and sets it to null. The actual vector elements are created like this:
if (v1) {
tNew.pPolyOrig = new CInPolygon();
tNew.pPolyDispl = new CInPolygon();
tNew.pPolyOrig->FromSafeArray(v1);
tNew.pPolyOrig->Rotate(NULLPOINT, m_nTurnAngle*__pi/180.);
tNew.eType = (overlayET)type;
tNew.nImagenr = nImageNr;
m_vOvlElements.push_back(tNew);
}
Now, the thing is that CInPolygon is a class from an external library which is built with Visual C++ 7.1. P_SAFE_DELETE is also defined in a header from that library. From here I know that mixing different runtime versions is bad, and this question makes me suspect that this mixing may be responsible for the crash.
My question is: why does it happen? After all, since both new and delete are called from the same place, no actual objects are passed between the different CRTs. Also, when the OCX is compiled using Visual Studio 2008, no problems occur. Is this due to pure luck? I guess the basic issue exists in that setting, too. And, well, what can I do to solve the problem? Switch back to VS2008?
Edit:
As asked: The destructor of CInPolygon is just
CInPolygon::~CInPolygon(void) {
m_vPoints.clear();
}
Here m_vPoints is a std::vector<...> defined in the class. Maybe I should mention that CInPolygon inherits from this:
interface IRoi {
virtual ~IRoi() {
return;
}
public:
// other stuff
};
(Didn't even know that interface was a valid keyword in plain C++...) Could it be that the fact that the base class destructor is defined in the header is causing the problem? After all, that header is also known to the host program...
tNew.pPolyOrig = new CInPolygon();
Yes, this is guaranteed to fail. Quite apart from having different allocators in your program, your host program cannot possibly compute the size of the CInPolygon object correctly. It uses an entirely different implementation of std::vector, which was significantly rewritten in VS2012 to take advantage of C++11. Inevitably, the code in the library using the old version of vector will corrupt the heap.
You must rebuild the library as well, using the exact same version of the compiler with the exact same settings.
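If an immediate rebuild is not possible, one way to at least fail loudly is a toolset guard in the library's public header; a rough sketch, assuming the library really was built with Visual C++ 7.1 (_MSC_VER 1310):
// Hypothetical guard placed in the library's public header.
#if defined(_MSC_VER) && _MSC_VER != 1310
#error "This library was built with Visual C++ 7.1; rebuild it with the current toolset before mixing."
#endif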
Unfortunately the documentation I have is either (a) original product documentation without any errata (MS VC++ 6.0 help files) or (b) later MSDN help that applies to later MFC versions.
In particular:
[Q1] Is the operator += known to be buggy in VC++6 MFC CString? This code from VC++6 had to be fixed before it would even compile in a modern MFC app:
CString szTemp;
unsigned char m_chReceive[MY_BUF_SIZE];
// compiles and seems to run but may be buggy in VC++6, won't compile in modern MFC
szTemp += m_chReceive;
// the above won't compile in modern MFC versions, but this "&+cast" does:
szTemp.Append( (const char *)&m_chReceive[0]);
[Q2] Is it safe and robust to return a CString as the result of a function, in this manner, or does this cause memory corruption?
CString MyClass::MyMethod(void)
{
CString Stuff;
// set a value for Stuff here.
return Stuff; // return a stack-allocated CString
}
I have code that uses the above two things all over the place, and it also seems to exhibit random runtime memory corruption. Those two things are red flags to me. Am I right in suspecting that CString was intended by the authors of MFC in Visual C++ 6.0 to be a nice simple thing that you could use like an int or a char type, return from a function, and have copy constructors and memory management all just work?
Obvious stuff: Yes of course I will get all my code off VC++ 6.0 when I can, but I first need to patch a production system which is crashing, and then I can begin the huge task of moving this legacy codebase forward.
According to the documentation for VC 6.0:
CString objects can grow as a result of concatenation operations.
CString objects follow “value semantics.”
The Microsoft documentation seems to indicate that CString is similar in purpose to std::string, in that it grows automatically when needed and can safely be passed around as an argument or return value of a function.
I have a C++ library that talks to a C++ server, and I am creating a vector of my custom class objects. But my C++/CLI console app (which interacts with the native C++) throws a memory access violation when I try to return the vector of my custom class objects.
Code sample:
In my native C++ class:
std::vector<A> GetStuff(int x)
{
// ... do stuff
std::vector<A> vec;
A a;
vec.push_back(a);
// ... push more A objects
return vec;
}
In my C++/CLI class:
void doStuff()
{
std::vector<A> vec;
vec = m_nativeCpp->GetStuff(4); // m_nativeCpp is a dynamically allocated native class from the nativecpp DLL; the app throws an access violation here!
}
exact error message
An unhandled exception of type 'System.AccessViolationException' occurred in CLIConsole.exe -- which is my console cpp/CLI project
Additional information: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
I'll assume that the native code is in a separately compiled unit, like a .dll. The first thing to worry about is the native code using a different allocator (new/delete); you'll get that when it is compiled with /MT or linked against another version of the CRT.
The next thing to worry about is STL iterator debugging. You should make sure both modules were compiled with the same setting for _HAS_ITERATOR_DEBUGGING. They won't be the same if the native code was built with an old version of the CRT or is a Release mode build.
Take a look at this support article. I think what's happening is that your vector on the C++/CLI side tries to access the internal vector data from the DLL and fails because of different static variables. I guess the only good solution is to pass a simple array across the DLL boundary; &vector[0] gives you one.
But there might also be some magic happening in the A class's copy constructor. If it is missing and the class has pointers as members, you could easily get the same error.
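A rough sketch of that caller-provided-buffer approach, assuming the DLL's interface can change; GetStuffRaw is a made-up name, NativeCpp stands in for whatever class m_nativeCpp points to, and A has to be safely copyable across modules:
// Inside the native DLL (hypothetical); nothing from std:: crosses the boundary.
extern "C" __declspec(dllexport) int __stdcall GetStuffRaw(NativeCpp* obj, int x, A* buffer, int capacity)
{
    std::vector<A> vec = obj->GetStuff(x); // build the result with the DLL's own CRT
    int count = (int)vec.size();
    if (count > capacity)
        return -count;                     // caller can retry with a bigger buffer
    for (int i = 0; i < count; i++)
        buffer[i] = vec[i];                // copy plain objects into the caller's buffer
    return count;
}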
I'm not sure, but this may work: instead of returning a vector, create the vector on the heap and return a pointer to it.
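A sketch of that idea; note, consistent with the other answers in this thread, that the DLL must also be the one to delete the vector, and this only helps if both modules are built with the same toolset, since the caller still reads the vector's internals. GetStuffPtr/FreeStuff are hypothetical names, and GetStuff is treated as a free function here for brevity:
// Inside the native DLL (hypothetical):
std::vector<A>* GetStuffPtr(int x)
{
    return new std::vector<A>(GetStuff(x)); // allocated with the DLL's CRT
}
void FreeStuff(std::vector<A>* p)
{
    delete p;                               // freed by the same CRT that allocated it
}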