A bizarre "Project.exe has triggered a breakpoint."? - C++

Please don't just tell me to Google this, because I have been doing that for the last 48 hours.
Here is my problem: I created a long program which executes many iterations, but after a few iterations this error appears:
Critical error detected c0000374
Project.exe has triggered a breakpoint.
The program '[4452] Project.exe' has exited with code 0 (0x0).
Visual Studio 2012 then opens newaop.cpp, which contains a few lines:
// newaop -- operator new[](size_t) REPLACEABLE
#include <new>
void *__CRTDECL operator new[](size_t count) _THROW1(std::bad_alloc)
{ // try to allocate count bytes for an array
return (operator new(count));
}
/*
* Copyright (c) 1992-2007 by P.J. Plauger. ALL RIGHTS RESERVED.
* Consult your license regarding permissions and restrictions.
V5.03:0009 */
pointing to the return line...
I have searched a lot, but nothing works; the strange part is that my program works for a few iterations before failing.
I tried to locate the instructions generating this error (with couts) and I found this loop:
for (int i = OriginalCadre.X.x + 1; i < OriginalCadre.X.x + OriginalCadre.height; i++) {
    for (int j = OriginalCadre.X.y + 1; j < OriginalCadre.X.y + OriginalCadre.width; j++) {
        QuantityColor[Pattern_init[i][j]]++;
    }
}
This loop works for a few iterations at the beginning, which is bizarre!

Critical error detected c0000374 is a sign of heap corruption, which means you might be doing bad things with memory, e.g. writing past the end of a buffer, or writing to a buffer after it has been freed back to the heap.
I don't see any tell-tale signs in that small loop, but most likely you are writing past the end of QuantityColor or something similar.
Debugging heap corruption errors
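One quick way to test that theory, as a rough sketch (it assumes QuantityColor is, or can be turned into, a std::vector<int> sized for every value that can appear in Pattern_init; the question does not show its declaration): use bounds-checked access so a bad index fails loudly instead of silently trashing the heap.
// Sketch only: QuantityColor as std::vector<int> is an assumption.
for (int i = OriginalCadre.X.x + 1; i < OriginalCadre.X.x + OriginalCadre.height; i++) {
    for (int j = OriginalCadre.X.y + 1; j < OriginalCadre.X.y + OriginalCadre.width; j++) {
        const int color = Pattern_init[i][j];
        QuantityColor.at(color)++;   // .at() throws std::out_of_range instead of writing past the buffer
    }
}
The same suspicion applies to i and j themselves: if OriginalCadre.X.x + OriginalCadre.height (or the width counterpart) can exceed the dimensions of Pattern_init, the read Pattern_init[i][j] is already out of bounds before QuantityColor is ever touched.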

Related

Stack Smashing in GCC vs Clang (Possibly due to canaries)

I am trying to understand possible sources for "stack smashing" errors that occur with GCC but not with Clang.
Specifically, when I compile a piece of code with just debug symbols
set(CMAKE_CXX_FLAGS_DEBUG "-g")
and use the GCC C++ compiler (GNU 5.4.0), the application crashes with
*** stack smashing detected ***: ./testprogram terminated
Aborted (core dumped)
However, when I use Clang 3.8.0, the program completes without error.
My first thought was that perhaps the canaries of GCC are catching a buffer overrun that Clang isn't. So I added the additional debug flag
set(CMAKE_CXX_FLAGS_DEBUG "-g -fstack-protector-all")
But Clang still compiles a program that runs without errors. To me this suggests that the issue likely is not a buffer overrun (as you commonly see with stack smashing errors), but an allocation issue.
In any case, when I add in the ASAN flags:
set(CMAKE_CXX_FLAGS_DEBUG "-g -fsanitize=address")
Both compilers yield a program that crashes with an identical error. Specifically,
GCC 5.4.0:
==1143==ERROR: AddressSanitizer failed to allocate 0xdfff0001000 (15392894357504) bytes at address 2008fff7000 (errno: 12)
==1143==ReserveShadowMemoryRange failed while trying to map 0xdfff0001000 bytes. Perhaps you're using ulimit -v
Aborted (core dumped)
Clang 3.8.0:
==1387==ERROR: AddressSanitizer failed to allocate 0xdfff0001000 (15392894357504) bytes at address 2008fff7000 (errno: 12)
==1387==ReserveShadowMemoryRange failed while trying to map 0xdfff0001000 bytes. Perhaps you're using ulimit -v
Aborted (core dumped)
Can somebody give me some hints on the likely source of this error? I am having an awfully hard time tracking down the line where this is occurring, as it is in a very large code base.
EDIT
The issue is unresolved, but it is isolated to the following function:
void get_sparsity(Data & data) {
    T x[n_vars] = {};
    T g[n_constraints] = {};

    for (Index j = 0; j < n_vars; j++) {
        const T x_j = x[j];
        x[j] = NAN;
        eval_g(n_vars, x, TRUE, n_constraints, g, &data);
        x[j] = x_j;

        std::vector<Index> nonzero_entries;
        for (Index i = 0; i < n_constraints; i++) {
            if (isnan(g[i])) {
                data.flattened_nonzero_rows.push_back(i);
                data.flattened_nonzero_cols.push_back(j);
                nonzero_entries.push_back(i);
            }
        }
        data.nonzeros.push_back(nonzero_entries);
    }
    int internal_debug_point = 5;
}
which is called like this:
get_sparsity(data);
int external_debug_point= 6;
When I put a breakpoint on the last line of the get_sparsity function, internal_debug_point = 5, it reaches that line without issue. However, when exiting the function, and before it hits the external breakpoint at external_debug_point = 6, it crashes with the error
received signal SIGABRT, Aborted.
0x00007ffffe315428 in __GI_raise (sig=sig#entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
My guess is that GCC is only checking the canaries when exiting that function, and hence the error is actually occurring inside the function. Does that sound reasonable? If so, then is there a way to get GCC or clang to do more frequent canary checks?
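For what it's worth regarding the canary question: -fstack-protector only verifies the canary in a function's epilogue, so a clobber inside the body is not reported until the function returns. A tiny standalone sketch (not from the original code) that demonstrates this:
// Sketch: compile with g++ -g -O0 -fstack-protector-all canary_demo.cpp
void clobber_stack()
{
    char buf[8];
    for (int i = 0; i < 32; ++i)   // writes well past the end of buf, over the canary
        buf[i] = 0;
    // nothing aborts here; the canary is only checked in the epilogue
}                                  // "*** stack smashing detected ***" fires on return

int main()
{
    clobber_stack();
    return 0;
}
So the corruption most likely happens inside get_sparsity (for example, eval_g writing past the end of g), and GCC simply reports it on exit. As far as I know, neither compiler offers more frequent canary checks; AddressSanitizer, which instruments every memory access, is the usual tool for catching the write at the moment it happens.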
I suspect ASan is running out of memory.
I don't think the ASan errors mean your program is trying to allocate that memory; it means ASan is trying to allocate it for itself (it says "shadow memory", which is what ASan uses to keep track of the memory your program allocates).
If the number of iterations (and size of array) n_vars is large, then the function will use extra memory for a new std::vector in every loop, forcing ASan to track more and more memory.
You could try moving the local vector out of the loop (which will likely increase the performance of the function anyway):
std::vector<Index> nonzero_entries;
for (Index j = 0; j < n_vars; j++) {
    // ...
    for (Index i = 0; i < n_constraints; i++) {
        if (isnan(g[i])) {
            data.flattened_nonzero_rows.push_back(i);
            data.flattened_nonzero_cols.push_back(j);
            nonzero_entries.push_back(i);
        }
    }
    data.nonzeros.push_back(nonzero_entries);
    nonzero_entries.clear();
}
This will reuse the same memory for nonzero_entries instead of allocating and deallocating memory for a new vector every iteration.
Trying to figure out the source of the stack problems was getting nowhere, so I tried a different approach. Through debugging, I narrowed down the function get_sparsity above as the culprit. The debugger wasn't giving me any hints about exactly WHERE the problem was occurring, only that it was somewhere inside that function. With that information, I switched the only two stack variables in that function, x and g, to heap variables so that Valgrind could help me find the error (sgcheck was coming up empty). Specifically, I modified the above code to
void get_sparsity(Data & data) {
    std::vector<T> x(n_vars, 0);
    std::vector<T> g(n_constraints, 0);

    /* However, for our purposes, it is easier to make an std::vector of Eigen
     * vectors, where the ith entry of "nonzero_entries" contains a vector of
     * indices in g for which g(indices) are nonzero when perturbing x(i).
     * If that sounds complicated, just look at the code and compare to
     * the code where we use the sparsity structure.
     */
    for (Index j = 0; j < n_vars; j++) {
        const T x_j = x[j];
        x[j] = NAN;
        Bool val = eval_g(n_vars, x.data(), TRUE, n_constraints, g.data(), &data);
        x[j] = x_j;

        std::vector<Index> nonzero_entries;
        for (Index i = 0; i < n_constraints; i++) {
            if (isnan(g[i])) {
                data.flattened_nonzero_rows.push_back(i);
                data.flattened_nonzero_cols.push_back(j);
                nonzero_entries.push_back(i);
            }
        }
        data.nonzeros.push_back(nonzero_entries);
    }
    int bob = 5;
    return;
}
and then ran it under Valgrind to find the offending line. Now that I know where the problem is occurring, I can fix it.

CString use coupled with HeapWalk and HeapLock/HeapUnlock deadlocks in the kernel

My goal is to lock the virtual memory allocated for my process heaps (to prevent the possibility of it being swapped out to disk).
I use the following code:
//pseudo-code, error checks are omitted for brevity
struct MEM_PAGE_TO_LOCK{
const BYTE* pBaseAddr; //Base address of the page
size_t szcbBlockSz; //Size of the block in bytes
MEM_PAGE_TO_LOCK()
: pBaseAddr(NULL)
, szcbBlockSz(0)
{
}
};
void WorkerThread(LPVOID pVoid)
{
//Called repeatedly from a worker thread
HANDLE hHeaps[256] = {0}; //Assume large array for the sake of this example
UINT nNumberHeaps = ::GetProcessHeaps(256, hHeaps);
if(nNumberHeaps > 256)
nNumberHeaps = 256;
std::vector<MEM_PAGE_TO_LOCK> arrPages;
for(UINT i = 0; i < nNumberHeaps; i++)
{
lockUnlockHeapAndWalkIt(hHeaps[i], arrPages);
}
//Now lock collected virtual memory
for(size_t p = 0; p < arrPages.size(); p++)
{
::VirtualLock((void*)arrPages[p].pBaseAddr, arrPages[p].szcbBlockSz);
}
}
void lockUnlockHeapAndWalkIt(HANDLE hHeap, std::vector<MEM_PAGE_TO_LOCK>& arrPages)
{
if(::HeapLock(hHeap))
{
__try
{
walkHeapAndCollectVMPages(hHeap, arrPages);
}
__finally
{
::HeapUnlock(hHeap);
}
}
}
void walkHeapAndCollectVMPages(HANDLE hHeap, std::vector<MEM_PAGE_TO_LOCK>& arrPages)
{
PROCESS_HEAP_ENTRY phe = {0};
MEM_PAGE_TO_LOCK mptl;
SYSTEM_INFO si = {0};
::GetSystemInfo(&si);
for(;;)
{
//Get next heap block
if(!::HeapWalk(hHeap, &phe))
{
if(::GetLastError() != ERROR_NO_MORE_ITEMS)
{
//Some other error
ASSERT(NULL);
}
break;
}
//We need to skip heap regions & uncommitted areas
//We're interested only in allocated blocks
if((phe.wFlags & (PROCESS_HEAP_REGION |
PROCESS_HEAP_UNCOMMITTED_RANGE | PROCESS_HEAP_ENTRY_BUSY)) == PROCESS_HEAP_ENTRY_BUSY)
{
if(phe.cbData &&
phe.lpData)
{
//Get address aligned at the page size boundary
size_t nRmndr = (size_t)phe.lpData % si.dwPageSize;
BYTE* pBegin = (BYTE*)((size_t)phe.lpData - nRmndr);
//Get segment size, also page aligned (round it up though)
BYTE* pLast = (BYTE*)phe.lpData + phe.cbData;
nRmndr = (size_t)pLast % si.dwPageSize;
if(nRmndr)
pLast += si.dwPageSize - nRmndr;
size_t szcbSz = pLast - pBegin;
//Do we have such a block already, or an adjacent one?
std::vector<MEM_PAGE_TO_LOCK>::iterator itr = arrPages.begin();
for(; itr != arrPages.end(); ++itr)
{
const BYTE* pLPtr = itr->pBaseAddr + itr->szcbBlockSz;
//See if they intersect or are adjacent
if(pLPtr >= pBegin &&
itr->pBaseAddr <= pLast)
{
//Intersected with another memory block
//Get the larger of the two
if(pBegin < itr->pBaseAddr)
itr->pBaseAddr = pBegin;
itr->szcbBlockSz = pLPtr > pLast ? pLPtr - itr->pBaseAddr : pLast - itr->pBaseAddr;
break;
}
}
if(itr == arrPages.end())
{
//Add new page
mptl.pBaseAddr = pBegin;
mptl.szcbBlockSz = szcbSz;
arrPages.push_back(mptl);
}
}
}
}
}
This method works, except that occasionally the following happens: the app hangs, UI and all, and even if I run it under the Visual Studio debugger and then try to Break All, it shows an error message that no user-mode threads are running:
The process appears to be deadlocked (or is not running any user-mode
code). All threads have been stopped.
I tried it several times. The second time the app hung, I used Task Manager to create a dump file, after which I loaded the .dmp file into Visual Studio and analyzed it. The debugger showed that the deadlock happened somewhere in the kernel, and reviewing the call stack points to this code:
CString str;
str.Format(L"Some formatting value=%d, %s", value, etc);
Experimenting further, if I remove the HeapLock and HeapUnlock calls from the code above, it no longer seems to hang, but then HeapWalk sometimes raises an unhandled exception (access violation).
So, any suggestions on how to resolve this?
The problem is that you're using the C runtime's memory management, and more specifically the CRT's debug heap, while holding the operating system's heap lock.
The call stack you've posted includes _free_dbg, which always claims the CRT debug heap lock before taking any other action, so we know the thread holds the CRT debug heap lock. We can also see that the CRT was inside an operating system call made by _CrtIsValidHeapPointer when the deadlock occurred; the only such call is to HeapValidate and HEAP_NO_SERIALIZE is not specified.
So the thread whose call stack has been posted is holding the CRT debug heap lock and attempting to claim the operating system's heap lock.
The worker thread, on the other hand, holds the operating system's heap lock and makes calls that attempt to claim the CRT debug heap lock.
QED. Classic deadlock situation.
In a debug build, you will need to refrain from using any C or C++ library functions that might allocate or free memory while you are holding the corresponding operating system heap lock.
Even in a release build, you would still need to avoid any library functions that might allocate or release memory while holding a lock, which might be a problem if, for example, a hypothetical future implementation of std::vector was changed to make it thread-safe.
I recommend that you avoid the issue entirely, which is probably best done by creating a dedicated heap for your worker thread and taking all necessary memory allocations out of that heap. It would probably be best to exclude this heap from processing; the documentation for HeapWalk does not explicitly say that you should not modify the heap during enumeration, but it seems risky.
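As a rough illustration of that suggestion (a sketch only: the private-heap handle, the fixed page-array bound, and the skip-own-heap check are all assumptions, not code from the question):
// Sketch: give the worker its own heap so no CRT/process-heap allocation
// happens while another heap is locked, and skip that heap during the walk.
HANDLE g_hWorkerHeap = ::HeapCreate(0, 0, 0);   // growable private heap, created once at startup

void WorkerThread(LPVOID)
{
    HANDLE hHeaps[256] = {0};
    UINT nNumberHeaps = ::GetProcessHeaps(256, hHeaps);
    if(nNumberHeaps > 256)
        nNumberHeaps = 256;

    // Bookkeeping storage comes from the private heap, not from the CRT.
    const size_t maxPages = 4096;   // assumed upper bound on recorded pages
    MEM_PAGE_TO_LOCK* arrPages = (MEM_PAGE_TO_LOCK*)::HeapAlloc(
        g_hWorkerHeap, HEAP_ZERO_MEMORY, maxPages * sizeof(MEM_PAGE_TO_LOCK));
    size_t nPages = 0;

    for(UINT i = 0; i < nNumberHeaps; i++)
    {
        if(hHeaps[i] == g_hWorkerHeap)
            continue;   // never lock or walk our own bookkeeping heap
        // ... HeapLock / HeapWalk / HeapUnlock as in the original code,
        //     appending entries into arrPages[nPages++] instead of a std::vector ...
    }

    // ... VirtualLock the collected pages, then release the bookkeeping buffer ...
    ::HeapFree(g_hWorkerHeap, 0, arrPages);
}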

malloc: *** error for object 0x10003b3c4: pointer being freed was not allocated *** set a breakpoint in malloc_error_break to debug

I'm new to programming in C++, please be patient :) My problem is that the model (a DGVM) runs until the end, but the last message I receive is "malloc: *** error for object 0x10003b3c4: pointer being freed was not allocated *** set a breakpoint in malloc_error_break to debug". The debugger points to this:
clTreePop::~clTreePop() {free(Trees);}
The debugger points to free(Trees) and gives the message "EXC_BAD_INSTRUCTION (code=EXC_i386_INVOP, subcode=0x0)". What am I doing wrong? Thanks.
The part of the code that may be important for this question:
void clTreePop::addFirstTree(double init_mass)
{
    clTree Tree(init_mass, -1., pop_size_, count_trees_);
    Trees = (clTree *) malloc(sizeof(clTree));
    Trees[0] = Tree;

    pop_size_++;
    new_born_++;
    count_trees_++;

    root_biomass_ += Tree.getBr();
    stem_biomass_ += Tree.getBS();
    leaf_biomass_ += Tree.getBl();
    canopy_area_ += Tree.getCanopyArea();
    gc_weighted_ += Tree.getGc();
    max_height_ += MyMax(max_height_, Tree.getHeight());
    basal_area_ += Tree.getStemArea();
    return;
}
First of all, in C++ you don't need to use malloc; allocation can be done in different, better, or at least easier ways. malloc is an old, low-level C (not C++) mechanism. Try using
clTree *Trees = new clTree;
The code you copied does not show the situation fully, although what I can see is that instead of
Trees = (clTree *) malloc(sizeof(clTree));
you should use:
clTree *Trees = (clTree *) malloc(sizeof(clTree));
This way you declare a pointer to which you then attach the structure you allocated.
The error "EXC_BAD_INSTRUCTION (code=EXC_i386_INVOP, subcode=0x0)" suggests some kind of incompatibility between your code and your computer's architecture (processor, system, etc.). I don't know the details, but I think it is caused by the mistake listed above.
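A minimal sketch of how that advice might look applied to the member in question (clTreePop, Trees and the clTree constructor arguments come from the question; the default constructor and everything else about the class are assumptions):
// Hedged sketch, not the original code: assumes Trees is a clTree* member of clTreePop.
clTreePop::clTreePop() : Trees(nullptr) {}   // assumed constructor, so the destructor is safe even if no tree was added

void clTreePop::addFirstTree(double init_mass)
{
    Trees = new clTree(init_mass, -1., pop_size_, count_trees_);  // allocated and constructed in one step
    pop_size_++;
    // ... update the remaining bookkeeping exactly as in the original code ...
}

clTreePop::~clTreePop()
{
    delete Trees;   // pairs with new; calling free() on a pointer that never came from
                    // malloc is what produces "pointer being freed was not allocated"
}
If the population is meant to hold more than one tree, a std::vector<clTree> member would avoid manual memory management entirely, but that goes beyond the answer above.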

What could cause a mutex to misbehave?

I've been busy the last couple of months debugging a rare crash caused somewhere within a very large proprietary C++ image processing library, compiled with GCC 4.7.2 for an ARM Cortex-A9 Linux target. Since a common symptom was glibc complaining about heap corruption, the first step was to employ a heap corruption checker to catch oob memory writes. I used the technique described in https://stackoverflow.com/a/17850402/3779334 to divert all calls to free/malloc to my own function, padding every allocated chunk of memory with some amount of known data to catch out-of-bounds writes - but found nothing, even when padding with as much as 1 KB before and after every single allocated block (there are hundreds of thousands of allocated blocks due to intensive use of STL containers, so I can't enlarge the padding further, plus I assume any write more than 1KB out of bounds would eventually trigger a segfault anyway). This bounds checker has found other problems in the past so I don't doubt its functionality.
(Before anyone says 'Valgrind', yes, I have tried that too with no results either.)
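For reference, a stripped-down sketch of that guard-byte technique (assumptions: a single-threaded illustration with fixed 1 KB pads; the real checker described here also links blocks into a list and is mutex protected):
#include <cstdlib>
#include <cstring>
#include <cassert>

static const size_t PAD = 1024;            // bytes of known data on each side
static const unsigned char GUARD = 0xAB;   // the known fill pattern

void* checked_malloc(size_t size) {
    unsigned char* raw = (unsigned char*)std::malloc(size + 2 * PAD + sizeof(size_t));
    std::memcpy(raw, &size, sizeof(size_t));                      // remember the user size
    std::memset(raw + sizeof(size_t), GUARD, PAD);                // front guard
    std::memset(raw + sizeof(size_t) + PAD + size, GUARD, PAD);   // back guard
    return raw + sizeof(size_t) + PAD;                            // hand out the middle
}

void checked_free(void* p) {
    unsigned char* user = (unsigned char*)p;
    unsigned char* raw = user - PAD - sizeof(size_t);
    size_t size;
    std::memcpy(&size, raw, sizeof(size_t));
    for (size_t i = 0; i < PAD; ++i) {                 // verify both guards before freeing
        assert(raw[sizeof(size_t) + i] == GUARD && "out-of-bounds write before block");
        assert(user[size + i] == GUARD && "out-of-bounds write after block");
    }
    std::free(raw);
}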
Now, my memory bounds checker also has a feature where it prepends every allocated block with a data struct. These structs are all linked in one long linked list, to allow me to occasionally go over all allocations and test memory integrity. For some reason, even though all manipulations of this list are mutex protected, the list was getting corrupted. When investigating the issue, it began to seem like the mutex itself was occasionally failing to do its job. Here is the pseudocode:
pthread_mutex_t alloc_mutex;
static bool boolmutex;  // set to false during init. volatile has no effect.

void malloc_wrapper() {
    // ...
    pthread_mutex_lock(&alloc_mutex);
    if (boolmutex) {
        printf("mutex misbehaving\n");
        __THROW_ERROR__;  // this happens!
    }
    boolmutex = true;

    // manipulate linked list here

    boolmutex = false;
    pthread_mutex_unlock(&alloc_mutex);
    // ...
}
The code commented with "this happens!" is occasionally reached, even though this seems impossible. My first theory was that the mutex data structure was being overwritten. I placed the mutex within a struct, with large arrays before and after it, but when this problem occurred the arrays were untouched so nothing seems to be overwritten.
So.. What kind of corruption could possibly cause this to happen, and how would I find and fix the cause?
A few more notes. The test program uses 3-4 threads for processing. Running with less threads seems to make the corruptions less common, but not disappear. The test runs for about 20 seconds each time and completes successfully in the vast majority of cases (I can have 10 units repeating the test, with the first failure occurring after 5 minutes to several hours). When the problem occurs it is quite late in the test (say, 15 seconds in), so this isn't a bad initialization issue. The memory bounds checker never catches actual out of bounds writes but glibc still occasionally fails with a corrupted heap error (Can such an error be caused by something other than an oob write?). Each failure generates a core dump with plenty of trace information; there is no pattern I can see in these dumps, no particular section of code that shows up more than others. This problem seems very specific to a particular family of algorithms and does not happen in other algorithms, so I'm quite certain this isn't a sporadic hardware or memory error. I have done many more tests to check for oob heap accesses which I don't want to list to keep this post from getting any longer.
Thanks in advance for any help!
Thanks to all commenters. I tried nearly all suggestions with no results, and finally decided to write a simple memory allocation stress test: one that runs a thread on each of the CPU cores (my unit is a Freescale i.MX6 quad-core SoC), each allocating and freeing memory in random order at high speed. The test crashed with a glibc memory corruption error within minutes, or a few hours at most.
Updating the kernel from 3.0.35 to 3.0.101 solved the problem; both the stress test and the image processing algorithm now run overnight without failing. The problem does not reproduce on Intel machines with the same kernel version, so the problem is specific either to ARM in general or perhaps to some patch Freescale included with the specific BSP version that included kernel 3.0.35.
For those curious, attached is the stress test source code. Set NUM_THREADS to the number of CPU cores and build with:
<cross-compiler-prefix>g++ -O3 test_heap.cpp -lpthread -o test_heap
I hope this information helps someone. Cheers :)
// Multithreaded heap stress test. By Itay Chamiel 20151012.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <assert.h>
#include <pthread.h>
#include <sys/time.h>
#define NUM_THREADS 4 // set to number of CPU cores
#define ALIVE_INDICATOR NUM_THREADS
// Each thread constantly allocates and frees memory. In each iteration of the infinite loop, decide at random whether to
// allocate or free a block of memory. A list of 500-1000 allocated blocks is maintained by each thread. When memory is allocated
// it is added to this list; when freeing, a random block is selected from this list, freed and removed from the list.
void* thr(void* arg) {
int* alive_flag = (int*)arg;
int thread_id = *alive_flag; // this is a number between 0 and (NUM_THREADS-1) given by main()
int cnt = 0;
timeval t_pre, t_post;
gettimeofday(&t_pre, NULL);
const int ALLOCATE=1, FREE=0;
const unsigned int MINSIZE=500, MAXSIZE=1000;
const int MAX_ALLOC=10000;
char* membufs[MAXSIZE];
unsigned int membufs_size = 0;
int num_allocs = 0, num_frees = 0;
while(1)
{
int action;
// Decide whether to allocate or free a memory block.
// if we have less than MINSIZE buffers, allocate.
if (membufs_size < MINSIZE) action = ALLOCATE;
// if we have MAXSIZE, free.
else if (membufs_size >= MAXSIZE) action = FREE;
// else, decide randomly.
else {
action = ((rand() & 0x1)? ALLOCATE : FREE);
}
if (action == ALLOCATE) {
// choose size to allocate, from 1 to MAX_ALLOC bytes
size_t size = (rand() % MAX_ALLOC) + 1;
// allocate and fill memory
char* buf = (char*)malloc(size);
memset(buf, 0x77, size);
// add buffer to list
membufs[membufs_size] = buf;
membufs_size++;
assert(membufs_size <= MAXSIZE);
num_allocs++;
}
else { // action == FREE
// choose a random buffer to free
size_t pos = rand() % membufs_size;
assert (pos < membufs_size);
// free and remove from list by replacing entry with last member
free(membufs[pos]);
membufs[pos] = membufs[membufs_size-1];
membufs_size--;
assert(membufs_size >= 0);
num_frees++;
}
// once in 10 seconds print a status update
gettimeofday(&t_post, NULL);
if (t_post.tv_sec - t_pre.tv_sec >= 10) {
printf("Thread %d [%d] - %d allocs %d frees. Alloced blocks %u.\n", thread_id, cnt++, num_allocs, num_frees, membufs_size);
gettimeofday(&t_pre, NULL);
}
// indicate alive to main thread
*alive_flag = ALIVE_INDICATOR;
}
return NULL;
}
int main()
{
int alive_flag[NUM_THREADS];
printf("Memory allocation stress test running on %d threads.\n", NUM_THREADS);
// start a thread for each core
for (int i=0; i<NUM_THREADS; i++) {
alive_flag[i] = i; // tell each thread its ID.
pthread_t th;
int ret = pthread_create(&th, NULL, thr, &alive_flag[i]);
assert(ret == 0);
}
while(1) {
sleep(10);
// check that all threads are alive
bool ok = true;
for (int i=0; i<NUM_THREADS; i++) {
if (alive_flag[i] != ALIVE_INDICATOR)
{
printf("Thread %d is not responding\n", i);
ok = false;
}
}
assert(ok);
for (int i=0; i<NUM_THREADS; i++)
alive_flag[i] = 0;
}
return 0;
}

HeapWalk not working as expected in Release mode

So I used this example of the HeapWalk function to implement it in my app. I played around with it a bit and saw that when I added
HANDLE d = HeapAlloc(hHeap, 0, sizeof(int));
int* f = new(d) int;
after creating the heap, some new output would be logged:
Allocated block Data portion begins at: 0X037307E0
Size: 4 bytes
Overhead: 28 bytes
Region index: 0
Seeing this, I thought I could check entry.wFlags for PROCESS_HEAP_ENTRY_BUSY to keep track of how much allocated memory I'm using on the heap. So I have:
HeapLock(heap);

int totalUsedSpace = 0, totalSize = 0, largestFreeSpace = 0, largestCounter = 0;

PROCESS_HEAP_ENTRY entry;
entry.lpData = NULL;
while (HeapWalk(heap, &entry) != FALSE)
{
    int entrySize = entry.cbData + entry.cbOverhead;
    if ((entry.wFlags & PROCESS_HEAP_ENTRY_BUSY) != 0)
    {
        // We have allocated memory in this block
        totalUsedSpace += entrySize;
        largestCounter = 0;
    }
    else
    {
        // We do not have allocated memory in this block
        largestCounter += entrySize;
        if (largestCounter > largestFreeSpace)
        {
            // Save this value as we've found a bigger space
            largestFreeSpace = largestCounter;
        }
    }

    // Keep a track of the total size of this heap
    totalSize += entrySize;
}

HeapUnlock(heap);
HeapUnlock(heap);
And this appears to work when built in debug mode (totalSize and totalUsedSpace are different values). However, when I run it in Release mode totalUsedSpace is always 0.
I stepped through it with the debugger in Release mode; for each heap it loops three times, and I get the following flags in entry.wFlags from calling HeapWalk:
1 (PROCESS_HEAP_REGION)
0
2 (PROCESS_HEAP_UNCOMMITTED_RANGE)
It then exits the while loop and GetLastError() returns ERROR_NO_MORE_ITEMS as expected.
From here I found that a flag value of 0 is "the committed block which is free, i.e. not being allocated or not being used as control structure."
Does anyone know why it does not work as intended when built in Release mode? I don't have much experience with how memory is handled by the computer, so I'm not sure where the error might be coming from. Searching on Google didn't come up with anything, so hopefully someone here knows.
UPDATE: I'm still looking into this myself. If I monitor the app using vmmap I can see that the process has 9 heaps, but calling GetProcessHeaps returns 22 heaps. Also, none of the heap handles it returns matches the return value of GetProcessHeap() or _get_heap_handle(). It seems like GetProcessHeaps is not behaving as expected. Here is the code to get the list of heaps:
// Count how many heaps there are and allocate enough space for them
DWORD numHeaps = GetProcessHeaps(0, NULL);
HANDLE* handles = new HANDLE[numHeaps];
// Get a handle to known heaps for us to compare against
HANDLE defaultHeap = GetProcessHeap();
HANDLE crtHeap = (HANDLE)_get_heap_handle();
// Get a list of handles to all the heaps
DWORD retVal = GetProcessHeaps(numHeaps, handles);
And retVal is the same value as numHeaps, which indicates that there was no error.
Application Verifier had been set up previously to do full page-heap verification of my executable and was interfering with the heaps returned by GetProcessHeaps. I'd forgotten it was set up, as it had been configured for a different issue several days earlier and then closed without clearing the tests. It wasn't happening in the debug build because the application builds to a different file name for debug builds.
We managed to detect this by adding a breakpoint and looking at the call stack of the thread. We could see that the Application Verifier DLL had been injected, and that let us know where to look.