LEADTOOLS "Not enough memory available" - C++

I have to fix a bug where the LEADTOOLS function L_LoadBitmap() returns ERROR_NO_MEMORY; more information about it can be found here.
The application I am working on has to be able to handle images regardless of their size or count.
Here is where the function is called:
HENHMETAFILE hemf = 0;
BITMAPHANDLE bmh = {0};

hemf = LoadMetaFile( (LPCTSTR)strPath, hDC );
if ( !hemf )
{
    memset( &bmh, 0, sizeof(BITMAPHANDLE) );

    L_INT nResult = 0;
    nResult = L_LoadBitmap( const_cast<LPTSTR>( (LPCTSTR)strPath ), &bmh, 0, ORDER_BGR );
    if ( nResult != SUCCESS )
    {
        MFDebugString( DL_FORMULAR, __FUNCTION__ "( %s ): Can't load background file via L_LoadBitmap (%d)\n", (LPCTSTR)strPath, nResult );
        return nullptr;
    }
}

pOrigBg = std::make_shared<CBackgroundImage>(strPath, hemf, bmh);
m_ImageCache[strKey.GetString()] = pOrigBg;
return pOrigBg;
Here pOrigBg is a std::shared_ptr<CBackgroundImage> object that gets constructed this way:
NxFE::CBackgroundImage::CBackgroundImage(LPCTSTR strPath, HENHMETAFILE emf, const BITMAPHANDLE& bmp)
    : m_Filename(strPath), m_Metafile(emf), m_pLeadBitmap(new BITMAPHANDLE(bmp)),
      m_pGdiplusBitmap(NxClass::Win32::GDIPlus::CreateBitmapFromFile((LPCSTR) m_Filename))
{
}
As you can see, pOrigBg holds a std::unique_ptr to a BITMAPHANDLE and one to a Gdiplus::Bitmap.
At first I thought that removing the construction of m_pGdiplusBitmap might help, but it doesn't.
Is there any way to deallocate or reduce the usage of graphics memory? Or at least some tool for inspecting graphics memory usage? (I'm using Microsoft Visual Studio 2017.)

As you found out, functions in LEADTOOLS that allocate pixel data must be followed by a call to L_FreeBitmap() when you no longer need the bitmap in memory. This is actually stated in the help topic you referenced in your original question: “Since the function allocates storage to hold the image, it is up to you to free this storage by calling L_FreeBitmap.”
Placement of the L_FreeBitmap call can be crucial in avoiding memory leaks. Since pixel data is typically the largest memory object in the bitmap handle, failing to free it correctly could cause huge leaks.
Also, if your code is allocating the BITMAPHANDLE structure itself using the “new” operator, you need to delete it once done with it. Even though the structure itself is typically much smaller in size than the pixel data, you should never allow any type of memory leak in your application.
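Since the bitmap can be released on several code paths (including early returns), one way to make the placement automatic is a small RAII wrapper. This is just a sketch, not LEADTOOLS code; the ScopedBitmap name is made up, and it relies only on L_FreeBitmap and the Flags.Allocated check used elsewhere in this thread:

#include <cstring>  // memset; the LEADTOOLS headers are assumed to be included

struct ScopedBitmap
{
    BITMAPHANDLE bmh;

    ScopedBitmap() { memset(&bmh, 0, sizeof(BITMAPHANDLE)); }

    ~ScopedBitmap()
    {
        // Free the pixel data only if a load function actually allocated it.
        if (bmh.Flags.Allocated)
            L_FreeBitmap(&bmh);
    }

    // Non-copyable: two owners would double-free the pixel data.
    ScopedBitmap(const ScopedBitmap&) = delete;
    ScopedBitmap& operator=(const ScopedBitmap&) = delete;
};

Holding the BITMAPHANDLE by value inside such a wrapper also sidesteps the separate new/delete of the structure itself.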
If you run into any problem related to LEADTOOLS functions, feel free to email the details to our support address support@leadtools.com. Email support is free for all versions of the toolkit, whether Release (purchased) or free evaluation.

OK, this statement worked; I just had to put it in a different place:
if (bmh.Flags.Allocated)
    L_FreeBitmap(&bmh);
I still have problems with the Gdiplus::Bitmap and with loading images with the .bmp extension, but that's a separate issue.
Also, in VS2017 you can go to Debug -> Performance Profiler... (Alt+F2) for tools to inspect CPU / GPU / memory usage.

Related

Using new to allocate memory for unsigned char array fails

I'm trying to load a TGA file in C++ code that I got from a Google search, but the part that allocates memory fails. The beginning of my "LoadTarga" method includes these variables:
int imageSize;
unsigned char* targaImage;
Later on in the method the imageSize variable gets set to 262144 and I use that number to set the size of the array:
// Calculate the size of the 32 bit image data.
imageSize = width * height * 4;
// Allocate memory for the targa image data.
targaImage = new unsigned char[imageSize];
if (!targaImage)
{
    MessageBox(hwnd, L"LoadTarga - failed to allocate memory for the targa image data", L"Error", MB_OK);
    return false;
}
The problem is that the body of the if statement executes, and I have no idea why the memory allocation failed. As far as I know it should work - the code compiles and runs up to this point, and I haven't seen anything on Google yet that shows a proper alternative.
What should I change in my code to make it allocate memory correctly?
Important Update:
Rob L's comments and suggestions were very useful (though I didn't get to try _heapchk, since I solved the issue first).
Trying each of fritzone's ideas let the program run past the "if (!targaImage)" point without trouble. The code that sets targaImage and the if statement that checks whether memory was allocated correctly has been replaced with this:
try
{
    targaImage = new unsigned char[imageSize];
}
catch (std::bad_alloc& ba)
{
    std::cerr << "bad_alloc caught: " << ba.what() << '\n';
    return false;
}
However I got a new problem with the very next bit of code:
count = (unsigned int)fread(targaImage, 1, imageSize, filePtr);
if (count != imageSize)
{
    MessageBox(hwnd, L"LoadTarga - failed to read in the targa image data", L"Error", MB_OK);
    return false;
}
count was giving me a value of 250394, which is different from imageSize's value of 262144. I couldn't figure out why, and a bit of searching (though I must admit, not much searching) on how fread works didn't yield an answer.
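For anyone hitting the same thing: a short read from fread can be told apart with feof/ferror, and a classic Windows cause is opening the file in text mode. A minimal sketch, not the tutorial's code (ReadTarga is a made-up name):

#include <cstdio>
#include <iostream>

bool ReadTarga(unsigned char* targaImage, int imageSize, const char* path)
{
    // Open in binary mode: "rb", not "r". In text mode on Windows, CRLF
    // translation (and an embedded 0x1A byte) make fread return fewer
    // bytes than the file really contains.
    FILE* filePtr = std::fopen(path, "rb");
    if (!filePtr)
        return false;

    std::size_t count = std::fread(targaImage, 1, (std::size_t)imageSize, filePtr);
    if (count != (std::size_t)imageSize)
    {
        if (std::feof(filePtr))
            std::cerr << "short read: unexpected end of file\n";
        else if (std::ferror(filePtr))
            std::cerr << "short read: read error\n";
        std::fclose(filePtr);
        return false;
    }

    std::fclose(filePtr);
    return true;
}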
I decided to cancel my search and try the answer code on the tutorial site here: http://www.rastertek.com/dx11s2tut05.html (scroll to the bottom of the page where it says "Source Code and Data Files" and download the zip). However, creating a new project and putting in the source files and image file didn't work, as I got a new error. At this point I thought maybe the way I converted the image file to TGA might have been incorrect.
So rather than spend a whole lot of time debugging the answer code, I put the image file from the answer into my own project. I noted that the size of mine was MUCH smaller than the answer's (245 KB compared to 1025 KB), so maybe if I used the answer code's image my code would run fine. Turns out I was right! Now the image is stretched sideways for some reason, but my original query appears to have been solved.
Thanks Rob L and fritzone for your help!
You are NOT using the form of new that returns a null pointer on error, so it makes no sense to check the return value. Instead you should catch a std::bad_alloc. The null-pointer-returning form of new has this syntax: new (std::nothrow) unsigned char[imageSize];
Please see: http://www.cplusplus.com/reference/new/operator%20new[]/
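To make the two styles concrete, here is a minimal self-contained sketch (the size is arbitrary):

#include <cstddef>
#include <iostream>
#include <new>

int main()
{
    const std::size_t imageSize = 262144;

    // Form 1: plain new[] reports failure by throwing std::bad_alloc;
    // its return value is never null, so "if (!p)" can't fire.
    try
    {
        unsigned char* p = new unsigned char[imageSize];
        delete[] p;
    }
    catch (const std::bad_alloc& ba)
    {
        std::cerr << "bad_alloc caught: " << ba.what() << '\n';
    }

    // Form 2: nothrow new[] returns nullptr on failure, so the
    // null check from the question is meaningful here.
    unsigned char* q = new (std::nothrow) unsigned char[imageSize];
    if (!q)
        std::cerr << "allocation failed\n";
    delete[] q; // deleting a null pointer is a harmless no-op
}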
Nothing in your sample looks wrong. It is pretty unlikely that a modern Windows system will run out of memory allocating 256 KB just once. Perhaps your allocator is being called in a loop and allocating more than you think, or the value of imageSize is wrong. Look in the debugger.
Another possibility is that your heap is corrupt. Calling _heapchk() can help diagnose that.
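If you want to run that check from code, a minimal sketch assuming the MSVC CRT (where _heapchk lives in <malloc.h>):

#include <malloc.h>
#include <cstdio>

bool heap_is_ok()
{
    // _heapchk walks the CRT heap and reports corruption, which earlier
    // buffer overruns would typically cause.
    int status = _heapchk();
    if (status == _HEAPOK)
        return true;
    std::printf("heap check failed: status %d\n", status);
    return false;
}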
Check the "memory peak working set" in windows tasks manager and ensure how much memory you are really trying to allocate.

Accessing buffer using C++ AMP

Could somebody please help me understand exactly which step is not working here?
I am trying to use C++ AMP to do parallel-for loops; however, despite going through my process with no trouble or errors, I can't get my final data.
I want to pull out my data by mapping it:
m_pDeviceContext->Map(pBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &MappedResource);
{
    blah
}
But I've worked on this for days on end without an inch of progress.
Here is everything I do with C++ AMP:
Constructor: I initialise my variables because I have to
: m_AcceleratorView(concurrency::direct3d::create_accelerator_view(reinterpret_cast<IUnknown *>(_pDevice)))
, m_myArray(_uiNumElement, m_AcceleratorView)
I copy my initial data into the C++ AMP array:
concurrency::copy(Data.begin(), m_myArray);
I do stuff to the data
concurrency::parallel_for_each(...) restrict(amp)
{
blah
}
All of this seems fine, I run into no errors.
However the next step I want to do is pull the data from the buffer, which doesn't seem to work:
ID3D11Buffer* pBuffer = reinterpret_cast<ID3D11Buffer *>(concurrency::direct3d::get_buffer(m_myArray));
When I map this data (deviceContext->Map) the data inside is 0x00000000
What step am I forgetting that will allow me to read this data? Even when I try to set the CPU read/write access type I get an error, and I didn't even see any of my references do it that way either:
m_Accelerator.set_default_cpu_access_type(concurrency::access_type::access_type_read_write);
This creates an error to say "accelerator does not support zero copy"
Can anyone please help me and tell me why I can't read my buffer, and how to fix it?
The following code should work for this. You should also check that the DX device and the C++ AMP accelerator are associated with the same hardware.
HRESULT hr = S_OK;
array<int, 1> arr(1024);
CComPtr<ID3D11Buffer> buffer;
IUnknown* unkBuf = get_buffer(arr);
hr = unkBuf->QueryInterface(__uuidof(ID3D11Buffer), reinterpret_cast<LPVOID*>(&buffer));
This question has an answer that shows you how to do the opposite.
Reading Buffer Data using C++ AMP
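If you don't actually need the Direct3D buffer on the CPU side, the simpler route is to copy the array back to host memory with concurrency::copy, mirroring the copy you did on the way in. A minimal sketch, not from the answer (names follow the question):

#include <amp.h>
#include <vector>

void read_back(concurrency::array<int, 1>& m_myArray)
{
    std::vector<int> Data(m_myArray.extent.size());

    // Blocks until the device-to-host copy completes; no Map/Unmap needed.
    concurrency::copy(m_myArray, Data.begin());

    // Data now holds the results produced by parallel_for_each.
}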

DLL causes program to crash only when memory is allocated

I am writing a small DLL which, once injected into my target process, will find an hwnd and write the window's text to a file. I have it set up like this:
hWnd = FindWindow(L"tSkMainForm", NULL);
chat = FindWindowEx(hWnd, NULL, L"TConversationForm", NULL);

ofstream myfile("X:\\Handles.txt", ios::out | ios::app);
if (myfile.is_open())
{
    int len;
    len = SendMessage(chat, WM_GETTEXTLENGTH, 0, 0) + 1; // + 1 is for the null term.
    char* buffer = new char[len];
    SendMessageW(chat, WM_GETTEXT, (WPARAM)len, (LPARAM)buffer);
    myfile.write(buffer, len); /* << buffer << endl; */
    myfile.close();
    delete[] buffer;
}
It works for a seemingly random amount of time, then the application (Skype) crashes. It only crashes when I allocate memory. I have tried using malloc with:
char* buffer = (char*)malloc(len); // I even tried removing the (char*) before malloc
// Do the rest of the stuff here
free((void*)buffer);
But that crashes too.
My DLL calls CreateThread, adds an extra menu item via AppendMenu, and handles the messages for it, all perfectly. It just seems that allocating memory doesn't want to work, but only at random times. I am not sure, but I think Skype is overwriting my memory, or I am overwriting Skype's memory (how would I ensure that the two don't overwrite each other then?)
Also, I know an API exists for Skype, but I want to do it this way. I would use the Skype API if I wanted to write a serious program.
Thanks.
Of course it crashes. "Injecting a DLL in another process" is something you shouldn't be doing in the first place, and certainly not if you cannot figure this out.
Your problem is that your DLL makes assumptions about the environment it's running in. In particular, you assume there's a C++ heap (or a C heap, for malloc) and that it has precisely the right state for your program. That's just not the case. Normal C++ rules do not apply to injected DLLs; your DLL must be able to stand on its own legs.
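For illustration only (the answer's point stands), a sketch that avoids two of the traps in the posted code: WM_GETTEXT sent with SendMessageW fills the buffer with wide characters, so the buffer must be sized in wchar_t rather than char (the question's code writes len wide characters into a len-byte char buffer), and allocating from the Win32 process heap avoids leaning on the host's CRT state. DumpChatText is a made-up name:

#include <windows.h>

bool DumpChatText(HWND chat)
{
    // WM_GETTEXTLENGTH/WM_GETTEXT via the ...W functions deal in wide
    // characters, so size the buffer in wchar_t, not bytes.
    int len = (int)SendMessageW(chat, WM_GETTEXTLENGTH, 0, 0) + 1;

    // Allocate from the Win32 process heap instead of the CRT heap, so the
    // injected DLL does not depend on the host process's CRT state.
    wchar_t* buffer = (wchar_t*)HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY,
                                          len * sizeof(wchar_t));
    if (!buffer)
        return false;

    SendMessageW(chat, WM_GETTEXT, (WPARAM)len, (LPARAM)buffer);
    // ... write buffer out with Win32 file APIs (e.g. WriteFile) ...
    HeapFree(GetProcessHeap(), 0, buffer);
    return true;
}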

ERROR_INSUFFICIENT_BUFFER returned from GetAdaptersAddresses

Using the following code, more or less copy-pasted from the MSDN example for GetAdaptersAddresses, I get the return value 122, which means ERROR_INSUFFICIENT_BUFFER (according to this system error code list).
ULONG outBufLen = 150000; // Tried for different (large) values here...
PIP_ADAPTER_ADDRESSES pAddresses = (IP_ADAPTER_ADDRESSES *) malloc(outBufLen);
DWORD dwRetVal = GetAdaptersAddresses(AF_INET, 0, NULL, pAddresses, &outBufLen);
// ....
free(pAddresses);
The documentation of GetAdaptersAddresses does not list ERROR_INSUFFICIENT_BUFFER as one of the expected return values. (It lists ERROR_BUFFER_OVERFLOW, which should adjust outBufLen to the needed value, but that remains unchanged.)
Using GetAdaptersInfo instead leads to the same symptoms.
This error does not occur on my development machine, but it does on one virtual and one real clean Windows 7 x86 SP1 installation (with the VC++ redistributables added).
As a C++ newbie, am I doing something wrong? What could cause this error, and how do I fix it? =)
First of all, you can, as others suggested, make two calls: one to find out the required buffer size, and then the query itself. Especially since you are seeing the error, your first step should be to ask the API what size it expects.
Second, you need to know that this API is not quite safe in 32-bit processes that consume large amounts of memory, where buffers can span into the upper 2 GB of the address space. The API might start acting in a weird way, due either to its own bug or to a bug in an underlying layer. See details on MS Connect here: GetAdaptersAddresses API incorrectly returns no adapters for a process with high memory consumption.
The fact that the error code is not "one of the expected return values" suggests that the error comes from an underlying layer and this API just passes it up on internal failure. As a clue, disabling some network adapter on the system might make the error go away.
Visual Studio deployed a library named "IPHLPAPI.dll" together with my project, which caused the problem. Deleting this file solved it.
Why this was the case is subject to further research =)
First, a buffer is a block of memory.
So "insufficient" could mean that you haven't given it enough memory somehow, or it could be a block of memory you don't have access to; maybe the address doesn't even exist.
Look at this:
ERROR_INSUFFICIENT_BUFFER
122 (0x7A)
The data area passed to a system call is too small.
This really sounds like the buffer doesn't have enough allocated memory, or something similar. Maybe outBufLen has to be a specific length, such as the exact size of the memory block; sometimes an API doesn't check the 'name' but compares the sizes of the individual fields. This idea came from the High Level Shader Language.
So I would look a bit more at:
ULONG outBufLen = 150000; // Tried for different (large) values here...
PIP_ADAPTER_ADDRESSES pAddresses = (IP_ADAPTER_ADDRESSES *) malloc(outBufLen);
Good luck!
To know the exact buffer size required, you can just pass NULL for pAddresses, and size will be set to the required value. You may want to rewrite your code slightly to make that work:
DWORD rv, size = 0;
PIP_ADAPTER_ADDRESSES adapter_addresses;

rv = GetAdaptersAddresses(AF_INET, 0, NULL, NULL, &size);
if (rv != ERROR_BUFFER_OVERFLOW)
    return false; // ERROR

adapter_addresses = (PIP_ADAPTER_ADDRESSES)malloc(size);

rv = GetAdaptersAddresses(AF_INET, 0, NULL, adapter_addresses, &size);
if (rv != ERROR_SUCCESS) {
    free(adapter_addresses);
    return false; // ERROR
}
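Not part of the answer, but for completeness: a sketch of what typically follows, walking the linked list the second call fills in and then releasing the buffer (list_adapters is a made-up name):

#include <winsock2.h>
#include <iphlpapi.h>
#include <cstdio>
#include <cstdlib>
#pragma comment(lib, "iphlpapi.lib")

void list_adapters(PIP_ADAPTER_ADDRESSES adapter_addresses)
{
    // GetAdaptersAddresses fills in a singly linked list.
    for (PIP_ADAPTER_ADDRESSES a = adapter_addresses; a != NULL; a = a->Next)
        std::printf("adapter: %ws\n", a->FriendlyName); // %ws is an MSVC extension

    free(adapter_addresses); // the buffer came from malloc above
}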

Remote thread is failing on call to LoadLibrary with error 87

I am trying to create a remote thread that will load a DLL I wrote and run a function from it.
The DLL is working fine (checked), but for some reason the remote thread fails and the process in which it was created stops responding.
I used OllyDbg to try to see what is going wrong, and I noticed two things...
My strings (DLL name and function name) are passed to the remote thread correctly.
The thread fails on LoadLibrary with last-error code 87, "ERROR_INVALID_PARAMETER".
My best guess is that somehow the remote thread can't find LoadLibrary (is this because linking is done with respect to my process? Just a guess...).
What am I doing wrong?
This is the code to the remote function:
static DWORD WINAPI SetRemoteHook (DATA *data)
{
    HINSTANCE dll;
    HHOOK WINAPI hook;
    HOOK_PROC hookAdress;

    dll = LoadLibrary(data->dll);
    hookAdress = (HOOK_PROC) GetProcAddress(dll, data->func);
    if (hookAdress != NULL)
    {
        (hookAdress)();
    }
    return 1;
}
Edit:
This is the part in which I allocate the memory in the remote process:
typedef struct
{
    char* dll;
    char* func;
} DATA;

char* dllName = "C:\\Windows\\System32\\cptnhook.dll";
char* funcName = "SetHook";
char* targetPrgm = "mspaint.exe";
DATA lData;

lData.dll = (char*) VirtualAllocEx( explorer, 0, sizeof(char)*strlen(dllName), MEM_COMMIT, PAGE_READWRITE );
lData.func = (char*) VirtualAllocEx( explorer, 0, sizeof(char)*strlen(funcName), MEM_COMMIT, PAGE_READWRITE );
WriteProcessMemory( explorer, lData.func, funcName, sizeof(char)*strlen(funcName), &v );
WriteProcessMemory( explorer, lData.dll, dllName, sizeof(char)*strlen(dllName), &v );
rDataP = (DATA*) VirtualAllocEx( explorer, 0, sizeof(DATA), MEM_COMMIT, PAGE_READWRITE );
WriteProcessMemory( explorer, rDataP, &lData, sizeof(DATA), NULL );
Edit:
It looks like the problem is that the remote thread is calling a "garbage" address instead of LoadLibrary's base address. Is there a possibility that Visual Studio linked the remote process's LoadLibrary address wrong?
Edit:
When I try to run the same exact code as a local thread (I pass a handle to the current process to CreateRemoteThread), the entire thing works just fine. What can cause this?
Should I add the code of the calling function? It seems to be doing its job, as the code is being executed in the remote thread with the correct parameters...
The code is compiled under VS2010.
data is a simple struct with char*'s to the names (explicitly writing the strings in code would lead to pointers into my original process).
What am I doing wrong?
Failing with ERROR_INVALID_PARAMETER indicates that there is a problem with the parameters passed.
So one should look at data->dll, which represents the only parameter in question.
It is initialised here:
lData.dll = VirtualAllocEx(explorer, 0, sizeof(char) * (strlen(dllName) + 1), MEM_COMMIT, PAGE_READWRITE);
So let's add a check of whether the allocation of the memory whose reference is to be stored into lData.dll really succeeded.
if (!lData.dll) {
    // do some error logging/handling/whatsoever
}
Having done so, you might have detected that the call as implemented failed because (verbatim from MSDN for VirtualAllocEx()):
The function fails if you attempt to commit a page that has not been
reserved. The resulting error code is ERROR_INVALID_ADDRESS.
So you might like to modify the fourth parameter of the call in question as recommended (again verbatim from MSDN):
To reserve and commit pages in one step, call VirtualAllocEx with
MEM_COMMIT | MEM_RESERVE.
PS: Repeat this exercise for the call to allocate lData.func. ;-)
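Pulling those points together, here is a sketch of a corrected remote-string allocation (not the asker's code; AllocRemoteString is a made-up helper, and explorer is assumed to be a handle opened with PROCESS_VM_OPERATION | PROCESS_VM_WRITE):

#include <windows.h>
#include <cstring>

char* AllocRemoteString(HANDLE explorer, const char* s)
{
    SIZE_T size = strlen(s) + 1; // + 1 keeps the null terminator
    char* remote = (char*)VirtualAllocEx(explorer, 0, size,
                                         MEM_COMMIT | MEM_RESERVE, // reserve and commit in one step
                                         PAGE_READWRITE);
    if (!remote)
        return NULL; // check the allocation before using it

    SIZE_T written = 0;
    if (!WriteProcessMemory(explorer, remote, s, size, &written) || written != size)
    {
        VirtualFreeEx(explorer, remote, 0, MEM_RELEASE);
        return NULL;
    }
    return remote;
}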
It's possible that LoadLibrary is actually aliasing LoadLibraryW (depending on project settings), which is the Unicode version. Whenever you use the Windows API with "char" strings instead of "TCHAR", you should explicitly use ANSI version names. This will prevent debugging hassles when the code is written, and also in the future for you or somebody else in case the project ever flips to Unicode.
So, in addition to fixing that horrible unterminated string problem, make sure to use:
LoadLibraryA(data->dll);