Program crashes only in Release mode outside debugger - c++

I have a fairly large program (>10k lines of C++ code). It works perfectly in debug mode, and in release mode when launched from within Visual Studio, but the release binary usually crashes when launched manually from the command line (not always!!!).
The line with delete causes the crash:
bool Save(const short* data, unsigned int width, unsigned int height,
          const wstring* implicit_path, const wstring* name = NULL,
          bool enable_overlay = false)
{
    char* buf = new char[17];
    delete [] buf;
}
EDIT: Upon request expanded the example.
The "len" is 16 in my test case. It doesn't matter whether I do anything with buf or not; it crashes on the delete.
EDIT: The application works fine without the delete [] line, but I suppose it leaks memory then (since the block is never deallocated). The buf is never used after the delete line. It also seems it does not crash with any type other than char. Now I am really confused.
The crash message is very unspecific (the typical Windows "xyz.exe has stopped working"). When I click the "Debug the program" option, it enters VS, where the error is reported as "Access violation writing location xxxxxxxx". It is unable to locate the place of the error, though: "No symbols were loaded for any stack frame".
I guess it is some pretty serious case of heap corruption, but how to debug this? What should I look for?
Thanks for help.

Have you checked for memory errors elsewhere?
Usually weird delete behavior is caused by the heap getting corrupted at one point; then, much later, another heap operation makes the corruption apparent.
The difference between debug and release can be caused by the way Windows allocates the heap in each context. For example, in debug the heap can be very sparse, so the corruption doesn't affect anything right away.

The biggest difference between being launched under the debugger and launched on its own is that when an application is launched from the debugger, Windows provides a "debug heap" that is filled with the 0xBAADF00D pattern; note that this is not the debug heap provided by the CRT, which is instead filled with the 0xCD pattern (IIRC).
Here is one of the few mentions that Microsoft makes about this feature, and here you can find some links about it.
Also mentioned in that link: starting a program and then attaching to it with a debugger does NOT cause the "special debug heap" to be used.

You probably have a memory overwrite somewhere and the delete[] is simply the first time it causes a problem. But the overwrite itself can be located in a totally different part of your program. The difficulty is finding the overwrite.
Add the following function
#include <malloc.h>
#include <stdio.h>

#define CHKHEAP() (check_heap(__FILE__, __LINE__))

void check_heap(const char *file, int line)
{
    static const char *lastOkFile = "here";
    static int lastOkLine = 0;
    static int heapOK = 1;

    if (!heapOK)
        return;

    if (_heapchk() == _HEAPOK)
    {
        lastOkFile = file;
        lastOkLine = line;
        return;
    }

    heapOK = 0;
    printf("Heap corruption detected at %s (%d)\n", file, line);
    printf("Last OK at %s (%d)\n", lastOkFile, lastOkLine);
}
Now call CHKHEAP() frequently throughout your program and run again. It should show you the source file and line where the heap becomes corrupted and where it was OK for the last time.

There are many possible causes of crashes. It's always difficult to locate them, especially when they differ from debug to release mode.
On the other hand, since you are using C++, you could avoid the problem entirely by using a std::string instead of a manually allocated buffer: there is a reason RAII exists ;)

It sounds like you have an uninitialised variable somewhere in the code.
In debug mode, all memory is initialised to standard fill values, so you get consistent behavior.
In release mode, memory is not initialised unless you explicitly do so.
Run your compiler with warnings set at the highest level possible.
Then make sure your code compiles with no warnings.

These two are the first two lines in their function.
If you really mean that the way I interpret it, then the first line is declaring a local variable buf in one function, but the delete is deleting some different buf declared outside the second function.
Maybe you should show the two functions.

Have you tried simply isolating this with the same build file but code based just on what you've put above? Something like:
int main(int argc, char* argv[])
{
    const int len( 16 );
    char* buf = new char[len + 1];
    delete [] buf;
}
The code you've given is absolutely fine and, on its own, should run with no problems in either debug or optimised builds. So if the problem isn't down to specifics of your code, it must be down to specifics of the project (i.e. compilation / linkage).
Have you tried creating a brand new project and placing the 10K+ lines of C++ into it? Might not take too long to prove the point. Especially if the existing project has either been imported in or heavily altered.

I was having the same issue, and I figured out that my program was only crashing when I went to delete[] char pointers with a string length of 1.
void DeleteCharArray(char* array) {
    if (strlen(array) > 1) { delete [] array; }
    else { delete array; }
}
This fixed the issue, but it is still error prone; it could be modified to be more robust.
Anyhow, I suspect the reason this happens is that to C++, char* str = new char[1]; and char* str = new char; are the same thing, which means that when you try to delete such a pointer with delete[] (which is meant for arrays only), the results are unexpected, and often fatal.

One type of problem where I observed this symptom was a multi-process program that crashed when run from the shell but ran flawlessly under valgrind or gdb. I discovered (much to my embarrassment) that I had a few stray processes of the same program still running on the system, causing an mq_send() call to return with an error. The problem was that those stray processes had also been assigned the message queue handle by the kernel, so the mq_send() in my newly spawned process(es) failed, but nondeterministically (depending on kernel scheduling circumstances).
Like I said, trivial, but until you find it out, you'll tear your hair out!
I learnt from this hard lesson, and my Makefile these days has all the appropriate commands to create a new build, and cleanup the old environment (including tearing down old message queues and shared memory and semaphores and such). This way, I don't forget to do something and have to get heartburn over a seemingly difficult (but clearly trivially solvable) problem. Here is a cut-and-paste from my latest project:
[Makefile]
all:
...
...
obj:
...
clean:
...
prep:
#echo "\n!! ATTENTION !!!\n\n"
#echo "First: Create and mount mqueues onto /dev/mqueue (Change for non ubuntu)"
rm -rf /run/shm/*Pool /run/shm/sem.*;
rm -rf /dev/mqueue/Test;
rm -rf /dev/mqueue/*Task;
killall multiProcessProject || true;

Related

General way of solving Error: Stack around the variable 'x' was corrupted

I have a program which gives me the following error in VS2010, in debug:
Error: Stack around the variable 'x' was corrupted
This gives me the function where a stack overflow likely occurs, but I can't visually see where the problem is.
Is there a general way to debug this error with VS2010? Would it be possible to identify which write operation is overwriting the incorrect stack memory?
thanks
Is there a general way to debug this error with VS2010?
No, there isn't. What you have done is to somehow invoke undefined behavior. The reason these behaviors are undefined is that the general case is very hard to detect/diagnose. Sometimes it is provably impossible to do so.
There are however, a somewhat smallish number of things that typically cause your problem:
Improper handling of memory:
Deleting something twice,
Using the wrong type of deletion (free for something allocated with new, etc.),
Accessing something after its memory has been freed.
Returning a pointer or reference to a local.
Reading or writing past the end of an array.
This can be caused by several issues that are generally hard to see:
double deletes
delete a variable allocated with new[] or delete[] a variable allocated with new
delete something allocated with malloc
delete an automatic storage variable
returning a local by reference
If it's not immediately clear, I'd get my hands on a memory debugger (I can think of Rational Purify for Windows).
This message can also be due to an array bounds violation. Make sure that your function (and every function it calls, especially member functions for stack-based objects) is obeying the bounds of any arrays that may be used.
Actually, what you see is quite informative: you should check for any activity near the location of the variable x that might cause this error.
Below is how you can reproduce such exception:
#include <cstring>

int main() {
    char buffer1[10];
    char buffer2[20];
    memset(buffer1, 0, sizeof(buffer1) + 1);
    return 0;
}
will generate (VS2010):
Run-Time Check Failure #2 - Stack around the variable 'buffer1' was corrupted.
obviously memset has written 1 char more than it should. VS with the /GS option (which you have enabled) can detect such buffer overflows; for more on that read here: http://msdn.microsoft.com/en-us/library/Aa290051.
You can, for example, use the debugger and step through your code, watching the contents of your variables each time to see how they change. You can also try your luck with data breakpoints: you set a breakpoint for when some memory location changes, and the debugger stops at that moment, possibly showing you the call stack where the problem is located. But this actually might not work with the /GS flag.
For detecting heap overflows you can use gflags tool.
I was puzzled by this error for hours. I knew the possible causes, and they are already mentioned in the previous answers, but I don't allocate memory, don't access array elements out of bounds, don't return pointers to local variables...
Then finally found the source of the problem:
*x++;
The intent was to increment the pointed-to value. But due to operator precedence, ++ binds first: it moves the x pointer forward, and the * then does nothing. Writing through *x afterwards corrupts the stack canary if the parameter came from the stack, making VS complain.
Changing it to (*x)++ solves the problem.
Hope this helps.
Here is what I do in this situation:
Set a breakpoint at a location where you can see the (correct) value of the variable in question, but before the error happens. You will need the memory address of the variable whose stack is being corrupted. Sometimes I have to add a line of code in order for the debugger to give me the address easily (int *x = &y)
At this point you can set a memory breakpoint (Debug->New Breakpoint->New Data Breakpoint)
Hit Play and the debugger should stop when the memory is written to. Look up the stack (mine usually breaks in some assembly code) to see what's being called.
I usually look at the variable declared before the complaining variable, which often helps me find the problem. But this can sometimes be very complex, with no clue, as you have seen. You could enable Debug menu >> Exceptions and tick "Win32 exceptions" to catch all exceptions. This will still not catch this particular exception, but it could catch something else that indirectly points to the problem.
In my case it was caused by a library I was using. It turned out the header file I was including in my project didn't quite match the actual header file in that library (by one line).
There is a different error which is also related:
0xC015000F: The activation context being deactivated is not the most
recently activated one.
When I got tired of getting the mysterious stack corrupted message on my computer with no debugging information, I tried my project on another computer and it was giving me the above message instead. With the new exception I was able to work my way out.
I encountered this when I made a pointer array of 13 items and then tried to set the 14th item. Changing the array to 14 items solved the problem. Hope this helps some people ^_^
One relatively common source of the "Stack around the variable 'x' was corrupted" problem is an incorrect cast, which can be hard to spot. Here is an example of a function where the problem occurs, along with the fix. In the function assignValue I want to assign a value to a variable located at the memory address passed as an argument:
#include <cstdint>
#include <cassert>
#include <memory>
#include <algorithm>

using namespace std;

template<typename T>
void assignValue(uint64_t address, T value)
{
    int8_t* begin_object = reinterpret_cast<int8_t*>(std::addressof(value));

    // wrongly cast to (int*): produces the error (sizeof(int) == 4)
    //std::copy(begin_object, begin_object + sizeof(T), (int*)address);

    // correct cast to (int8_t*): assignment byte by byte (sizeof(int8_t) == 1)
    std::copy(begin_object, begin_object + sizeof(T), (int8_t*)address);
}

int main()
{
    int x = 1;
    int x2 = 22;
    assignValue<int>((uint64_t)&x, x2);
    assert(x == x2);
}

How to debug double deletes in C++?

I'm maintaining a legacy application written in C++. It crashes every now and then, and Valgrind tells me it's a double delete of some object.
What are the best ways to find the bug that is causing a double delete in an application you don't fully understand and which is too large to be rewritten?
Please share your best tips and tricks!
Here are some general suggestions that have helped me in that situation:
Turn your logging level up to full debug, if you are using a logger. Look for suspicious stuff in the output. If your app doesn't log pointer allocations and deletes of the object/class under suspicion, it's time to insert some cout << "class Foo constructed, ptr= " << this << endl; statements in your code (and corresponding delete/destructor prints).
Run valgrind with --db-attach=yes. I've found this very handy, if a bit tedious. Valgrind will show you a stack trace every time it detects a significant memory error or event and then ask you if you want to debug it. You may find yourself repeatedly pressing 'n' many many times if your app is large, but keep looking for the line of code where the object in question is first (and secondly) deleted.
Just scour the code. Look for construction/deletion of the object in question. Sadly, sometimes it winds up being in a 3rd party library :-(.
Update: Just found this out recently: apparently GCC 4.8 and later (if you can use GCC on your system) have a new built-in feature for detecting memory errors, the "address sanitizer". It is also available in the LLVM compiler system.
Yep. What @OliCharlesworth said. There's no surefire way of testing a pointer to see if it points to allocated memory, since the pointer really is just the memory location itself.
The biggest problem your question implies is the lack of reproducibility. With that in mind, you're stuck with changing simple delete statements into delete foo; foo = NULL;.
Even then the best case scenario is "it seems to occur less" until you've really stamped it down.
I'd also ask what evidence Valgrind gives to suggest it's a double-delete problem. There might be a better clue lingering in there.
It's one of the simpler truly nasty problems.
This may or may not work for you.
Long ago I was working on a 1M+ line program that was 15 years old at the time. I faced the exact same problem: a double delete with a huge data set. With such data, any out-of-the-box "memory profiler" would be a no-go.
Things that were on my side:
It was very reproducible: we had a macro language, and running the same script exactly the same way reproduced it every time.
Sometime during the history of the project, someone decided that "#define malloc my_malloc" and "#define free my_free" had some use. These didn't do much more than call the built-in malloc() and free(), but the project already compiled and worked this way.
Now the trick/idea:
void* my_malloc(int size)
{
    static int allocation_num = 0;     // it was single threaded
    char* p = (char*)builtin_malloc(size + 16);
    *(int*)p = ++allocation_num;       // header: which allocation this is
    p[sizeof(int)] = 0;                // header: not freed yet
    return p + 16;                     // (check for NULL in real code)
}

void my_free(void* ptr)
{
    char* p = (char*)ptr - 16;         // step back to the header
    if (p[sizeof(int)])
    {
        // this is a double free: note the allocation number stored at *(int*)p,
        // then rerun the app with this in my_malloc:
        // if (allocation_num == XXX) debug_break();
    }
    p[sizeof(int)] = 1;                // mark as freed
    //builtin_free(p);                 // do not actually free until the problem is figured out
}
With new/delete it might be trickier, but still with LD_PRELOAD you might be able to replace malloc/free without even recompiling your app.
You are probably upgrading from a version that treated delete differently than the new version does.
Probably the previous version, when delete was called, did a check like if (X != NULL) { delete X; X = NULL; }, and the new version just performs the delete action.
you might need to go through and check for pointer assignments, and tracking references of object names from construction to deletion.
I've found backtrace() on Linux useful for this (you have to compile with -rdynamic). It lets you find out where that double free is coming from: put a try/catch block around all memory operations (new/delete), then in the catch block print out your stack trace.
This way you can narrow down the suspects much faster than running valgrind.
I wrapped backtrace in a handy little class so that I can just say:
try {
    ...
} catch (...) {
    StackTrace trace;
    std::cerr << "Double free!!!\n" << trace << std::endl;
    throw;
}
On Windows, assuming the app is built with MSVC++, you can take advantage of the extensive heap debugging tools built into the debug version of the standard library.
Also on Windows, you can use Application Verifier. If I recall correctly, it has a mode that forces each allocation onto a separate page, with protected guard pages in between. It's very effective at finding buffer overruns, but I suspect it would also be useful for a double-free situation.
Another thing you could do (on any platform) would be to make a copy of the sources that are transformed (perhaps with macros) so that every instance of:
delete foo;
is replaced with:
{ delete foo; foo = nullptr; }
(The braces help in many cases, though it's not perfect.) That will turn many instances of double-free into a null pointer reference, making it much easier to detect. It doesn't catch everything; you might have a copy of a stale pointer, but it can help squash a lot of the common use-after-delete scenarios.

Memory Error in C++

I have a high memory requirement in my code and this statement is repeated a lot of times:
Node** x;
x = new Node*[11];
It fails at this allocation. I found this line by printing output to the console!
I am building my code on Visual Studio. It works fine in Debug mode (both in VS2005 and VS2008)
However it throws the error in VS2005 Release mode.
A direct exe generated with cl Program.cpp works if cl is from VS2010, but fails when it's from VS2005.
Any clues?
PS: Linux gives me a Bus Error(core dumped) for the same
Thanks
UPDATE:
And I guess it can be due to an alignment issue, as I understand it. I just changed 11 to 12 (or any even number) and it works!!! I don't know why. It doesn't work with odd numbers!
Update 2 : http://www.devx.com/tips/Tip/13265 ?
I think you've done something somewhere else which corrupted the program heap: for example, writing past the end of an allocated chunk of memory, or writing to a chunk of memory after it's been freed.
I recommend that the easiest way to diagnose the problem would be to run the software using a kind of debugger that's intended to detect this kind of problem, for example valgrind.
I have a high memory requirement in my code
Are you actually running out of memory?
x = new Node*[11];
Are you deleting x like so:
delete [] x; // the correct way
or:
delete x; // incorrect
Or there could simply be something else corrupting the heap, though I would have expected that running in debug mode would make it more obvious, not less so. But with heap corruption there are rarely any guarantees that it'll fail in a nice, easy-to-debug way.
There is nothing wrong with this code.
Node **x;
x = new Node*[11];
You are allocating 11 pointers to class Node and storing it as a double-pointer in variable x. This is fine.
The fact that your program is crashing here is probably due to some memory error that is occurring elsewhere in your program. Perhaps you're writing past array bounds somewhere. If you load this array using a for loop, double-check your indexing.
If you have access to a memory profiler, I'd recommend using it. These bugs can be difficult to track down in large programs.
A valid C++98 implementation will throw an exception (std::bad_alloc) if allocation fails, not just crash. I'd agree with previous answers and suggest running your program in valgrind as this reeks of memory corruption. Valgrind should be available in your Linux distribution of choice.

How can I get the size of a memory block allocated using malloc()? [duplicate]

This question already has answers here:
Closed 13 years ago.
Possible Duplicates:
How can I get the size of an array from a pointer in C?
Is there any way to determine the size of a C++ array programmatically? And if not, why?
I get a pointer to a chunk of allocated memory out of a C style function.
Now, it would be really interesting for debugging purposes to know how
big the allocated memory block that this pointer points to is.
Is there anything more elegant than provoking an exception by blindly running over its boundaries?
Thanks in advance,
Andreas
EDIT:
I use VC++2005 on Windows, and GCC 4.3 on Linux
EDIT2:
I have _msize under VC++2005
Unfortunately it results in an exception in debug mode....
EDIT3:
Well. I have tried the way I described above with the exception, and it works.
At least while I am debugging and making sure that I run over the buffer
boundaries immediately after the library call returns. Works like a charm.
It just isn't elegant and in no way usable in production code.
It's not standard, but if your library has a msize() function, that will give you the size.
A common solution is to wrap malloc with your own function that logs each request along with the size and resulting memory range, in the release build you can switch back to the 'real' malloc.
If you don't mind sleazy violence for the sake of debugging, you can #define macros to hook calls to malloc and free and prefix each block with a size_t header holding its size.
To the tune of
void *malloc_hook(size_t size) {
    size += sizeof(size_t);
    void *ptr = malloc(size);
    *(size_t *) ptr = size;
    return ((size_t *) ptr) + 1;
}

void free_hook(void *ptr) {
    ptr = (void *) (((size_t *) ptr) - 1);
    free(ptr);
}

size_t report_size(void *ptr) {
    return *(((size_t *) ptr) - 1);
}
then
#define malloc(x) malloc_hook(x)
and so on
The C runtime library does not provide such a function. Furthermore, deliberately provoking an exception will not tell you how big the block is either.
Usually the way this problem is solved in C is to maintain a separate variable which keeps track of the size of the allocated block. Of course, this is sometimes inconvenient but there's generally no other way to know.
Your C runtime library may provide some heap debug functions that can query allocated blocks (after all, free() needs to know how big the block is), but any of this sort of thing will be nonportable.
With gcc and the GNU linker, you can easily wrap malloc
#include <stdlib.h>
#include <stdio.h>

void* __real_malloc(size_t sz);

void* __wrap_malloc(size_t sz)
{
    void *ptr;
    ptr = __real_malloc(sz);
    fprintf(stderr, "malloc of size %zu yields pointer %p\n", sz, ptr);
    /* if you wish to save the pointer and the size to a data structure,
       then remember to add wrap code for calloc, realloc and free */
    return ptr;
}

int main()
{
    char *x;
    x = malloc(103);
    return 0;
}
and compile with
gcc a.c -o a -Wall -Werror -Wl,--wrap=malloc
(Of course, this will also work with C++ code compiled with g++, and with the new operator (through its mangled name) if you wish.)
In effect, the statically/dynamically loaded library will also use your __wrap_malloc.
No, and you can't rely on an exception when overrunning its boundaries, unless it's in your implementation's documentation. It's part of the stuff you really don't need to know about to write programs. Dig into your compiler's documentation or source code if you really want to know.
There is no standard C function to do this. Depending on your platform, there may be a non-portable method - what OS and C library are you using?
Note that provoking an exception is unreliable - there may be other allocations immediately after the chunk you have, and so you might not get an exception until long after you exceed the limits of your current chunk.
Memory checkers like Valgrind's memcheck and Google's TCMalloc (the heap checker part) keep track of this sort of thing.
You can use TCMalloc to dump a heap profile that shows where things got allocated, or you can just have it check to make sure your heap is the same at two points in program execution using SameHeap().
Partial solution: on Windows you can use the PageHeap to catch a memory access outside the allocated block.
PageHeap is an alternate memory manager present in the Windows kernel (in the NT varieties but nobody should be using any other version nowadays). It takes every allocation in a process and returns a memory block that has its end aligned with the end of a memory page, then it makes the following page unaccessible (no read, no write access). If the program tries to read or write past the end of the block, you'll get an access violation you can catch with your favorite debugger.
How to get it: Download and install the package Debugging Tools for Windows from Microsoft: http://www.microsoft.com/whdc/devtools/debugging/default.mspx
then launch the GFlags utility, go to the 3rd tab and enter the name of your executable, then Hit the key. Check the PageHeap checkbox, click OK and you're good to go.
The last thing: when you're done with debugging, don't ever forget to launch GFlags again, and disable PageHeap for the application. GFlags enters this setting into the Registry (under HKLM\Software\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\), so it is persistent, even across reboots.
Also, be aware that using PageHeap can increase the memory needs of your application tremendously.
The way to do what you want is to BE the allocator. If you filter all requests, and then record them for debugging purposes, then you can find out what you want when the memory is free'd.
Additionally, you can check at the end of the program to see if all allocated blocks were freed, and if not, list them. An ambitious library of this sort could even take FUNCTION and LINE parameters via a macro to let you know exactly where you are leaking memory.
Finally, Microsoft's MSVCRT provides a debug heap that has many useful tools you can use in your debug build to find memory problems: http://msdn.microsoft.com/en-us/library/bebs9zyz.aspx
On Linux, you can use valgrind to find many errors. http://valgrind.org/

Dealing with an object corrupting the heap

In my application I am creating an object pretty much like this :
connect() {
    mVHTGlove = new vhtGlove(params);
}
and once I am about to close the application I call this one:
disconnect() {
    if (mVHTGlove)
        delete mVHTGlove;
}
This call always triggers a breakpoint with the following message :
Windows has triggered a breakpoint in
DesignerDynD.exe.
This may be due to a corruption of the
heap, which indicates a bug in
DesignerDynD.exe or any of the DLLs it
has loaded.
This may also be due to the user
pressing F12 while DesignerDynD.exe
has focus.
The output window may have more
diagnostic information.
I cannot modify the vhtGlove class to fix the heap corruption, as it is an external library provided only in the form of header files, lib files and DLLs.
Is there any way to use this class in a clean way ?
EDIT: I tried to strip things down to a bare minimum; however, I get the same results... here is the ENTIRE code.
#include "vhandtk/vhtCyberGlove.h"
#include "vhandtk/vhtIOConn.h"
#include "vhandtk/vhtBaseException.h"

using namespace std;

int main(int argc, char* argv[])
{
    vhtCyberGlove* testGlove = NULL;
    vhtIOConn gloveAddress("cyberglove", "localhost", "12345", "com1", "115200");

    try
    {
        testGlove = new vhtCyberGlove(&gloveAddress, false);

        if (testGlove->connect())
            cout << "Glove connected successfully" << endl;
        else
        {
            throw vhtBaseException("testGlove()->connect() returned false.");
        }

        if (testGlove->disconnect())
        {
            cout << "Glove disconnected successfully" << endl;
        }
        else
        {
            throw vhtBaseException("testGlove()->disconnect() returned false.");
        }
    }
    catch (vhtBaseException *e)
    {
        cout << "Error with gloves: " << e << endl;
        system("pause");
        exit(0);
    }

    delete testGlove;
    return 0;
}
Still crashes on deletion of the glove.
EDIT #2 :: If I just allocate and delete an instance of vhtCyberGlove it also crashes.
int main(int argc, char* argv[])
{
    vhtCyberGlove* testGlove = NULL;
    vhtIOConn gloveAddress("cyberglove", "localhost", "12345", "com1", "115200");

    testGlove = new vhtCyberGlove(&gloveAddress, false);
    delete testGlove; // << crash!

    return 0;
}
Any ideas?
thanks!
JC
One possibility is that mVHTGlove isn't being initialized to 0. If disconnect were then called without connect ever having been called, you'd be attempting to deallocate a garbage pointer. Boom.
Another possibility is that you are actually corrupting the stack a bit before that point, but that is where the corruption actually causes the crash. A good way to check that would be to comment out as much code as you can and still get the program to run, then see if you still get the corruption. If you don't, slowly bring back in bits of code until you see it come back.
Some further thoughts (after your edits).
You might check and see if the API doesn't have its own calls for memory management, rather than expecting you to "new" and "delete" objects manually. The reason I say this is that I've seen some DLLs have issues that looked a lot like this when some memory was managed inside the DLL and some outside.
The heap corruption error is reported when the vhtGlove is deleted. However, it may just as well be your own code that causes the corruption. This often happens as a result of overwriting a buffer allocated on the heap, perhaps from a call to malloc. Or you are perhaps deleting the same object twice. You can avoid this by using a smart pointer like std::auto_ptr to store the pointer to the object.
One thing you might try to track down the source of the corruption is to look at the memory location pointed to by mVHTGlove using Visual Studio's "Memory" window when the heap corruption is detected. See if anything in that memory looks obviously like something that overran a buffer. For example, if you see a string used elsewhere in the program, go review the code that manipulates that string: it might be overrunning its buffer.
Given that vhtCyberGlove's implementation is in another DLL, I would look for a heap mismatch. In VS, for example, this would happen if the DLL is linked against the Release CRT while your EXE is linked against the Debug CRT. When this is the case, each module uses a different heap, and as soon as you try to free memory using the wrong heap, you'll crash.
In your case, it is possible that vhtCyberGlove holds some objects that were allocated inside the other DLL, but when you delete the vhtCyberGlove instance those objects are deleted directly, i.e. using your heap rather than the DLL's. And when trying to free a pointer that points into another heap, you're effectively corrupting yours.
If this is indeed the case, without having more details I can offer two fixes:
Make sure your EXE uses the same heap as the DLL. This will probably lock you into Release mode, so it's not the best way to go.
Get the provider of vhtCyberGlove to manage its memory usage properly...
You are passing the address of a local vhtIOConn to the constructor. Is it possible that the object is assuming ownership of this pointer and trying to delete it in the destructor?