SecVerifyTransformCreate memory leak? - c++

Take following code:
// init
CFDataRef signature = CFDataCreate(...);
CFDataRef pubKeyData = CFDataCreate(...);
CFArrayRef array = NULL;
OSStatus res = SecItemImport(pubKeyData, ..., &array);
SecKeyRef pubKey = (SecKeyRef) CFArrayGetValueAtIndex(array, 0);
// everything goes wrong here
SecTransformRef verifier = SecVerifyTransformCreate(pubKey, signature, NULL);
// release
CFRelease(signature);
CFRelease(pubKeyData);
CFRelease(array);    // array returned by SecItemImport follows the Create rule
CFRelease(verifier);
In short: I'm importing a public key and a signature from a file and creating a verifier for that signature. On the lines of code that follow, I'm able to successfully validate the signature.
What concerns me is a memory leak that occurs when calling SecVerifyTransformCreate. If I comment out that line, the leak is gone.
I've read all about the Create and Get rules and I think I've got the releases figured out.

After some extensive testing, this is what I've found out:
As mentioned in the question, on the lines following the code above I execute the verifier to check whether the signature is correct; the important call is:
CFTypeRef result = SecTransformExecute(verifier, NULL);
If I don't include this line of code, there is a 320 B leak (per call) observable in Xcode's Instruments tool.
I suppose that SecVerifyTransformCreate allocates some piece of memory and expects you to call SecTransformExecute, which then releases it. If you don't, there is a leak. IMO that's wrong behavior.
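For reference, a sketch of the flow I ended up with (error handling omitted; the data being verified is assumed to be in a CFDataRef called data, and I'm assuming the result of SecTransformExecute needs its own CFRelease): everything returned by a Create/Copy function gets released, including the execute result.
#include <CoreFoundation/CoreFoundation.h>
#include <Security/SecTransform.h>
#include <Security/SecSignVerifyTransform.h>

Boolean VerifySignature(CFDataRef data, CFDataRef signature, SecKeyRef pubKey)
{
    Boolean ok = false;
    SecTransformRef verifier = SecVerifyTransformCreate(pubKey, signature, NULL);
    if (verifier != NULL) {
        // Feed the signed data into the transform, then execute it.
        SecTransformSetAttribute(verifier, kSecTransformInputAttributeName, data, NULL);
        CFTypeRef result = SecTransformExecute(verifier, NULL);
        if (result != NULL) {
            ok = (result == kCFBooleanTrue);
            CFRelease(result);      // release the execute result as well
        }
        CFRelease(verifier);
    }
    return ok;
}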
As Frank mentioned, memory usage as reported by the OS grows even without an observable leak, but it does not grow without bound (it stopped at around 40 MB in my case). That's correct behavior.
Kudos to Frank for elaboration.

Related

Node Add-on Nan::NewBuffer Causes Memory Leak

I have a C++ Node add-on that uses the Nan library. I have a function that needs to return a buffer. The simplest version of it is as follows (code edited as per the comments):
NAN_METHOD(Test) {
    char *retVal = (char *)malloc(100 * sizeof(char));
    info.GetReturnValue().Set(Nan::NewBuffer(retVal, 100 * sizeof(char)).ToLocalChecked());
}
According to the documentation, Nan::NewBuffer assumes ownership of the memory, so there is no need to free it manually. However, when I run Node code that uses this function, my memory usage skyrockets, even when I force the garbage collector to run via global.gc(). The Node code to reproduce the problem is extremely simple:
const addon = require("addon");
for (let i = 0; i < 100000000; i++) {
    if (i % 1000000 === 0) {
        console.log(i);
        try {
            global.gc();
        } catch (e) {
            console.log("error garbage collecting");
            process.exit();
        }
    }
    const buf = addon.Test();
}
Any help would be appreciated.
After much experimentation and research, I discovered a post which basically states that the promise to free the memory passed into Nan::NewBuffer is just a lie. Using Nan::CopyBuffer instead of Nan::NewBuffer solves the problem at the cost of a memcpy. So essentially, the answer is that Nan::NewBuffer is broken and you shouldn't use it. Use Nan::CopyBuffer instead.
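A minimal sketch of that fix (same hypothetical 100-byte buffer as above): Nan::CopyBuffer copies the bytes into a V8-owned buffer, so the malloc'd block stays ours and can be freed immediately.
#include <nan.h>
#include <cstdlib>

NAN_METHOD(Test) {
    char *retVal = (char *)malloc(100 * sizeof(char));
    // CopyBuffer memcpy's the data into a buffer owned by V8, so we remain
    // responsible for retVal and free it right away.
    info.GetReturnValue().Set(
        Nan::CopyBuffer(retVal, 100 * sizeof(char)).ToLocalChecked());
    free(retVal);
}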
In IT, we tend to call a lie in the documentation a bug.
This question has been the subject of a Node.js issue - https://github.com/nodejs/node/issues/40936
The answer is that the deallocation of the underlying buffer happens in the C++ equivalent of a JS setImmediate handler, and if your code is completely synchronous - which is your case - the buffers won't be freed until the program ends.
You correctly found that Nan::CopyBuffer does not suffer from this problem.
Alas, this cannot be easily fixed, as Node.js does not know how this buffer was allocated and cannot call its callback from the garbage collector context.
If you are using Nan::NewBuffer, I also suggest looking at this issue: https://github.com/nodejs/node/issues/40926, in which another closely related pitfall is discussed.

PROCESS_MEMORY_COUNTERS_EX creates unreliable PrivateUsage field, why?

Using the following code on VS 2012, native C++ development:
SIZE_T CppUnitTests_MemoryValidation::TakeMemoryUsageSnapshot() {
    PROCESS_MEMORY_COUNTERS_EX processMemoryCounter;
    GetProcessMemoryInfo(GetCurrentProcess(),
        (PROCESS_MEMORY_COUNTERS*)&processMemoryCounter,
        sizeof(processMemoryCounter));
    return processMemoryCounter.PrivateUsage;
}
I call this method before and after each CPPUnitTest and calculate the difference of the PrivateUsage field. Normally this difference should be zero, assuming my memory allocation doesn't leak.
Only simple things happen inside my test class. Even without any memory allocation, just creating an instance of my test class and releasing it again, sometimes (not in every test iteration) the difference gets above zero, so this scheme seems to be non-deterministic.
Is there somebody with more insight than me who could either explain how to tackle this or tell me what is wrong with my assumptions?
In short, your assumptions are not correct. There can be a lot of other things going on in your process that perform memory allocation (the Event Tracing thread, and any others created by third-party add-ons on your system) so it is not surprising to see memory use go up occasionally.
Following Hans Passant's debug allocator link, I noticed some more information about Microsoft's memory-leak-detection instrumentation, in particular the _CrtMemCheckpoint function(s).
The link I followed was http://msdn.microsoft.com/en-us/library/5tz9b54s(v=vs.90).aspx
When taking my memory snapshots with this function and checking for a difference using _CrtMemDifference, the scheme works reliably and deterministically.
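For completeness, a rough sketch of that snapshot scheme (debug builds only; the function names here are illustrative), using _CrtMemCheckpoint and _CrtMemDifference in place of GetProcessMemoryInfo:
#include <crtdbg.h>

static _CrtMemState snapshotBefore;

void CppUnitTests_TakeSnapshot()
{
    _CrtMemCheckpoint(&snapshotBefore);
}

bool CppUnitTests_LeakedSinceSnapshot()
{
    _CrtMemState snapshotAfter, diff;
    _CrtMemCheckpoint(&snapshotAfter);
    // _CrtMemDifference returns TRUE when the two states differ significantly;
    // _CrtMemDumpStatistics prints the delta to the debug output window.
    if (_CrtMemDifference(&diff, &snapshotBefore, &snapshotAfter)) {
        _CrtMemDumpStatistics(&diff);
        return true;
    }
    return false;
}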

Why is this code not causing a memory leak?

I wanted to simulate a memory leak in my application. I wrote the following code and tried to observe the leak in perfmon.
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

int main()
{
    int *i;
    while (1)
    {
        i = (int *)malloc(1000);
        if (i == NULL)
        {
            printf("Memory Not Allocated\n");
        }
        else
        {
            // just to avoid lazy allocation
            *i = 100;
        }
        Sleep(1000);
    }
}
When I watch the used memory in Task Manager, it fluctuates between 52K and 136K but never goes beyond that. That is, sometimes it shows 52K and sometimes 136K; I do not understand how this code goes up to 136K, comes back down to 52K, and never grows further.
I tried to use perfmon as well, but I'm not sure exactly which counters to watch.
Please suggest how to simulate memory leak and how to detect it.
While an OS may defer actual allocation of dynamically allocated memory until it is used, the compiler's optimizer may eliminate allocations that are only written to and never read from. Because your writes have no well-defined observable behaviour (you never read the value back), the compiler may well optimize them away. I would suggest examining the generated assembly to see what the compiler is actually producing; really, that ought to be one of the first steps in answering questions like "why doesn't this code behave like I think it should?".
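As a small illustration of that point, one way to rule the optimizer out is to read the value back and use it, so the write (and therefore the allocation) has observable behaviour:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *i = (int *)malloc(1000);
    if (i != NULL) {
        *i = 100;
        printf("%d\n", *i);   // reading the value back makes the store observable
    }
    return 0;
}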
Strictly speaking, a memory leak is somewhat context dependent: something in your program keeps allocating memory over time and not freeing it when it should have been freed.
Your code produces a "leak" on each pass through the while loop, because the program loses track of the previously allocated pointer at that point. That is only visible by inspection in this case, however; from the code posted, it looks more like what you are actually trying to do, albeit very slowly, is create a memory-stress situation.
To find a leak without inspection you need to run a tool like valgrind (Unix/Linux/OS X) or, in Visual Studio, enable allocation tracing with the DEBUG_NEW macro and view the output in the debugger.
If you really want to stress memory in a hurry, allocate 1024 x 1024 x 1024 bytes at a time...
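If the goal is simply to watch memory grow in Task Manager or perfmon, a sketch along these lines (Windows-specific, sizes chosen for illustration) leaks a visible amount per iteration and touches every page so the OS actually commits the memory:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <windows.h>

int main(void)
{
    for (;;) {
        char *p = (char *)malloc(1024 * 1024);   // 1 MiB per second, never freed
        if (p == NULL) {
            printf("Memory Not Allocated\n");
            break;
        }
        memset(p, 0xAB, 1024 * 1024);            // touch the pages so they are committed
        Sleep(1000);
    }
    return 0;
}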

vector::size and Segmentation fault

Why could this code throw a segmentation fault?
listeners = new vector<Listener*>();
... /* other code */
if (listeners != NULL) {
    int i = listeners->size();
}
Just because the pointer isn't NULL doesn't mean it points to a valid vector<Listener*> object.
Run your program through valgrind to detect memory corruption issues, and make sure that you run your code through your debugger, too.
If you still have problems, post a test that reproduces the issue (rather than little snippets of code that do not).
Easier than using valgrind: move the listeners->size() call to right after the allocation and see whether it still segfaults. If it doesn't, move it a few lines lower and try again, and repeat. When it segfaults, you have found the lines that cause it. Perhaps something was done to the pointer along the way, and this is one way to find that piece of code.
Look at the bisection method.
It may not always work; it's more of a heuristic.
Declaring vector<Listener*> listeners; by value (rather than through a pointer) might save you some problems, or at least make the reason for the breakage more evident.
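To illustrate that last suggestion (class and member names here are made up): holding the vector by value removes the uninitialized/dangling-pointer scenario entirely.
#include <cstddef>
#include <vector>

class Listener;

class Dispatcher {
public:
    std::vector<Listener*> listeners;   // owned by value: no new/delete, no NULL check

    std::size_t Count() const {
        return listeners.size();        // always safe to call on a constructed object
    }
};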

Debugging HeapReAlloc() failure using GetExceptionCode()

I have a HeapReAlloc() call failing with ACCESS_VIOLATION, but I'm unsure how to implement a further check using GetExceptionCode(), as it requires try/catch or some form of exception handling - can someone give me an example of how to use it to narrow down this failure?
You are troubleshooting the wrong problem. HeapReAlloc() is bombing because the heap is corrupted. That happened a while ago: some statement in your program overflowed a heap block, wrote to memory that was already freed, something like that. MSVC has a debug memory allocator to help you troubleshoot these kinds of problems; look in the MSDN library for <crtdbg.h>.
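For reference, a minimal sketch of turning on the CRT debug heap checks mentioned above (debug builds only); _CRTDBG_CHECK_ALWAYS_DF makes the CRT validate the heap on every allocation, which helps catch the corrupting statement close to where it happens:
#include <crtdbg.h>

int main()
{
    // Enable debug-heap bookkeeping, per-allocation heap validation and a
    // leak report at exit. Only effective in a debug build (_DEBUG defined).
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_CHECK_ALWAYS_DF | _CRTDBG_LEAK_CHECK_DF);

    // ... run the code that later makes HeapReAlloc fail ...
    return 0;
}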
Make sure the hHeap and lpMem parameters of HeapReAlloc are valid.
You should take the following possible root causes into account:
What value is passed for dwFlags.
How hHeap was obtained: via HeapCreate or GetProcessHeap.
What parameters were provided to HeapCreate/GetProcessHeap.
In addition to HeapValidate(hHeap, 0, lpMem), you should also validate the entire heap by calling
HeapValidate(hHeap, 0, NULL)
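To answer the original question about GetExceptionCode(): on Windows this is done with structured exception handling (__try/__except), not C++ try/catch. A rough sketch, with an illustrative wrapper name, that logs the exception code raised inside HeapReAlloc before you dig further:
#include <windows.h>
#include <stdio.h>

// Hypothetical wrapper: runs HeapReAlloc under SEH so the exception code
// (e.g. 0xC0000005 for an access violation) can be logged.
LPVOID TryHeapReAlloc(HANDLE hHeap, DWORD dwFlags, LPVOID lpMem, SIZE_T size)
{
    LPVOID p = NULL;
    __try {
        p = HeapReAlloc(hHeap, dwFlags, lpMem, size);
    }
    __except (EXCEPTION_EXECUTE_HANDLER) {
        // GetExceptionCode() may only be called in the filter or the handler block.
        printf("HeapReAlloc raised exception 0x%08lX\n", (unsigned long)GetExceptionCode());
    }
    return p;
}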