Using new to allocate memory for unsigned char array fails - c++

I'm trying to load a .tga file in C++ using code that I got from a Google search, but the part that allocates memory fails. The beginning of my "LoadTarga" method includes these variables:
int imageSize;
unsigned char* targaImage;
Later on in the method the imageSize variable gets set to 262144 and I use that number to set the size of the array:
// Calculate the size of the 32 bit image data.
imageSize = width * height * 4;
// Allocate memory for the targa image data.
targaImage = new unsigned char[imageSize];
if (!targaImage)
{
MessageBox(hwnd, L"LoadTarga - failed to allocate memory for the targa image data", L"Error", MB_OK);
return false;
}
The problem is that the body of the if statement executes and I have no idea why the memory allocation failed. As far as I know it should work: the code compiles and runs up to this point, and I haven't seen anything on Google yet that shows a proper alternative.
What should I change in my code to make it allocate memory correctly?
Important Update:
Rob L's comments and suggestions were very useful (though I didn't try _heapchk, since I solved the issue before getting to it).
Trying each of fritzone's ideas meant the program ran past the "if (!targaImage)" point without trouble. The code that sets targaImage, and the if statement that checks whether memory was allocated correctly, has been replaced with this:
try
{
targaImage = new unsigned char[imageSize];
}
catch (std::bad_alloc& ba)
{
std::cerr << "bad_alloc caught: " << ba.what() << '\n';
return false;
}
However I got a new problem with the very next bit of code:
count = (unsigned int)fread(targaImage, 1, imageSize, filePtr);
if (count != imageSize)
{
MessageBox(hwnd, L"LoadTarga - failed to read in the targa image data", L"Error", MB_OK);
return false;
}
count was giving me a value of 250394, which is different from imageSize's value of 262144. I couldn't figure out why, and a bit of searching (though I must admit, not much searching) on how fread works didn't yield any info.
I decided to cancel my search and try the answer code on the tutorial site here: http://www.rastertek.com/dx11s2tut05.html (scroll to the bottom of the page where it says "Source Code and Data Files" and download the zip). However, creating a new project and putting in the source files and image file didn't work, as I got a new error. At this point I thought maybe the way I converted the image file to tga might have been incorrect.
So rather than spend a whole lot of time debugging the answer code, I put the image file from the answer into my own project. I noted that the size of mine was MUCH smaller than the answer's (245 KB compared to 1025 KB), so maybe if I used the answer code's image my code would run fine. Turns out I was right! Now the image is stretched sideways for some reason, but my original query appears to have been solved.
Thanks Rob L and fritzone for your help!

You are NOT using the form of new which returns a null pointer on failure, so it makes no sense to check the return value. Instead, you should be prepared to catch std::bad_alloc. The null-pointer-returning form of new has this syntax: new (std::nothrow) unsigned char[imageSize];
Please see: http://www.cplusplus.com/reference/new/operator%20new[]/
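For illustration, here is a minimal sketch of the nothrow form, reusing the variable names from the question; with the plain (throwing) form of new you would instead wrap the allocation in try/catch for std::bad_alloc, as the updated question code does.
#include <new> // for std::nothrow
// Sketch: nothrow new returns a null pointer on failure, so the existing
// if (!targaImage) check becomes meaningful.
targaImage = new (std::nothrow) unsigned char[imageSize];
if (!targaImage)
{
MessageBox(hwnd, L"LoadTarga - failed to allocate memory for the targa image data", L"Error", MB_OK);
return false;
}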

Nothing in your sample looks wrong. It is pretty unlikely that a modern Windows system will run out of memory allocating 256K just once. Perhaps your allocator is being called in a loop and allocating more than you think, or the value of imageSize is wrong. Look in the debugger.
Another possibility is that your heap is corrupt. Calling _heapchk() can help diagnose that.
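As a rough sketch, calling the MSVC-specific CRT check somewhere near the failing allocation looks like this (assuming the usual headers are available):
#include <malloc.h> // _heapchk and the _HEAP* status codes
#include <stdio.h>
// Sketch: report whether the CRT heap is already damaged at this point.
int heapStatus = _heapchk();
if (heapStatus == _HEAPOK)
printf("heap looks fine\n");
else
printf("heap problem: _heapchk returned %d\n", heapStatus);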

Check the "memory peak working set" in windows tasks manager and ensure how much memory you are really trying to allocate.

Related

Sending Binary File Data via Google Protobuf

I have my protobuf message set up fine, it seems; all other fields transmit correctly across the network and do not truncate. I only have one problem: when I read the binary data of a picture or file and then send it through Google protobuf as a bytes type field, on the other side it only contains the first 4 elements of the array. If the picture is, say, 200 KB, on the other end it comes out as 1 KB (basically only contains a header or identifier). This problem is kinda complex, so I will try to give a rundown. Sorry if I make this impossible to understand. I may be going about this completely the wrong way.
Example below contains conceptual work, and was written in class. It very well could contain small errors. The code compiles at home, and if it is a typo let me know and I can fix it.
FILE* file;
FILE* ofile;
file = fopen("red.png", "rb");
fseek(file, 0, SEEK_END);
long fSize = ftell(file);
rewind(file);
BYTE* ret = new BYTE[fSize];
fread(ret, 1, fSize, file);
fclose(file);
char dataStream[1024] //yes it is large enough
myPacket.set_file(ret);
//set other fields here
myPacket.SerializeToArray(dataStream,sizeof(dataStream));
//send through sockets below, works for all but file field.
I can include more when I get back home to my main work computer; sorry, I was just hoping I could let this stew while at class. If this is not enough info, feel free to give me the smackdown; it's alright, I'm just looking for advice. I also know that certain image formats can be read in certain ways, but I was able to copy a PNG and rewrite it through binary locally, just not over protobuf.
Thanks for reading my pseudo book guys, I am finally trying to leap into improving my knowledge.
Edit: fixed a quickly typed pointer error, (&ret) changed to (ret). Also, should the size then be sizeof(myPacket) instead?
You have written this:
char dataStream[1024] //yes it is large enough
But how could a 1024-byte buffer be large enough if you want to store 200,000 bytes in it?
Better allocate a bigger buffer on the heap, e.g.:
std::vector<char> dataStream(500000);
myPacket.SerializeToArray(&dataStream[0], dataStream.size());
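If you prefer to size the buffer exactly rather than guess, the message can report its own serialized size. Treat this as a sketch: the method is ByteSizeLong() in newer protobuf releases, while older versions expose ByteSize() instead.
// Sketch: size the buffer from the message itself.
std::vector<char> dataStream(myPacket.ByteSizeLong());
myPacket.SerializeToArray(&dataStream[0], dataStream.size());
// Send dataStream.size() bytes over the socket, not sizeof(dataStream).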

Can't analyse PE files more than certain size

Currently, my code is able to get the entropy and file offset of PE files that are less than 3 MB, tested with notepad.exe. However, I receive errors whenever I try to analyse a bigger file.
I am not sure how I should solve this problem. But my lecturer told me to create another similar function. Really appreciate if someone can help me on this.
Error shown in CLI:
Call to ReadFile() failed.
Error Code: 998
Error portion:
dwFileSize = GetFileSize(hFile, NULL);
if (dwFileSize != INVALID_FILE_SIZE)
{
bFile = (byte*)malloc(dwFileSize);
Your error code decodes to "Invalid access to memory location". You're not checking the return value of malloc, and even if you were, you need to loop on ReadFile to read the whole thing in.
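A rough sketch of what that loop could look like, keeping the variable names from the question, using the Windows BYTE type in place of the question's byte, and assuming hFile is already open, <windows.h> and <stdio.h> are included, and the surrounding function returns a BOOL:
BYTE* bFile = (BYTE*)malloc(dwFileSize);
if (bFile == NULL)
{
printf("malloc failed\n");
return FALSE;
}
DWORD dwTotalRead = 0;
while (dwTotalRead < dwFileSize)
{
DWORD dwRead = 0;
if (!ReadFile(hFile, bFile + dwTotalRead, dwFileSize - dwTotalRead, &dwRead, NULL) || dwRead == 0)
{
printf("ReadFile failed, error %lu\n", GetLastError());
free(bFile); // don't leak on the error path
return FALSE;
}
dwTotalRead += dwRead;
}
// ... analyse bFile here ...
free(bFile); // release the buffer when done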
You ran out of memory. You certainly need to redesign your algorithm.
And as Hans Passant pointed out, you have a memory leak because you never free the file's memory when you are done with it. C++ isn't garbage collected.

C++ read text line-by-line, speed/efficiency savings needed

I have a series of large text files (tens to hundreds of thousands of lines) that I want to parse line by line. The idea is to check whether the line contains a specific word/character/phrase and, for now, to record the line to a secondary file if it does.
The code I've used so far is:
ifstream infile1("c:/test/test.txt");
while (getline(infile1, line)) {
if (line.empty()) continue;
if (line.find("mystring") != std::string::npos) {
outfile1 << line << '\n';
}
}
The end goal is to be writing those lines to a database. My thinking was to write them to the file first and then to import the file.
The problem I'm facing is the time taken to complete the task. I'm looking to minimize the time as far as possible, so any suggestions as to time savings on the read/write scenario above would be most welcome. Apologies if anything is obvious, I've only just started moving into C++.
Thanks
EDIT
I should say that I'm using VS2015
EDIT 2
So this was my own dumb fault: when switching to Release and changing the architecture type I saw noticeable speed increases. Thanks to everyone for pointing me in that direction. I'm also looking at the mmap stuff and that's proving useful too. Thanks guys!
When you use ifstream to read and process really big files, you have to increase the default buffer size that is used (normally 512 bytes).
The best buffer size depends on your needs, but as a hint you can use the partition block size of the file(s) you're reading/writing. To find that information you can use a lot of tools, or even code.
Example in Windows:
fsutil fsinfo ntfsinfo c:
Now you have to give the ifstream a new buffer, like this:
size_t newBufferSize = 4 * 1024; // 4K
char* newBuffer = new char[newBufferSize];
ifstream infile1;
infile1.rdbuf()->pubsetbuf(newBuffer, newBufferSize); // must be done before open()
infile1.open("c:/test/test.txt");
while (getline(infile1, line)) {
    /* ... */
}
delete[] newBuffer; // array form of delete to match new[]
Do the same with the output stream (a sketch follows below), and don't forget to set the new buffer before opening the file or it may not work.
You can play with values to find the very best size for you.
You'll note the difference.
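For the output side, a minimal sketch of the same idea; the output path here is just a placeholder:
size_t outBufferSize = 4 * 1024;
char* outBuffer = new char[outBufferSize];
ofstream outfile1;
outfile1.rdbuf()->pubsetbuf(outBuffer, outBufferSize); // set before open()
outfile1.open("c:/test/out.txt");
// ... write the matching lines ...
outfile1.close();
delete[] outBuffer;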
C-style I/O functions are much faster than fstream.
You may use fgets/fputs to read/write each text line.
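A minimal sketch of the same filtering loop with C stdio; the input path and search string come from the question, the output path is a placeholder, and lines longer than the buffer would need extra handling:
#include <cstdio>
#include <cstring>
// Sketch: copy lines containing "mystring" using C stdio.
FILE* in = fopen("c:/test/test.txt", "r");
FILE* out = fopen("c:/test/out.txt", "w");
char buf[4096];
while (fgets(buf, sizeof(buf), in))
{
if (strstr(buf, "mystring"))
fputs(buf, out); // fgets keeps the '\n', so fputs is enough
}
fclose(in);
fclose(out);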

Accessing buffer using C++-AMP

Could somebody please help me understand exactly the step that is not working here?
I am trying to use C++-AMP to do parallel-for loops, however despite having no trouble or errors going through my process, I can't get my final data.
I want to pull out my data by means of mapping it
m_pDeviceContext->Map(pBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &MappedResource);
{
blah
}
But I've worked on this for days on end without even a single inch of progress.
Here is everything I do with C++-AMP:
Constructor: I initialise my variables because I have to
: m_AcceleratorView(concurrency::direct3d::create_accelerator_view(reinterpret_cast<IUnknown *>(_pDevice)))
, m_myArray(_uiNumElement, m_AcceleratorView)
I copy my initial data into the C++-AMP array
concurrency::copy(Data.begin(), m_myArray);
I do stuff to the data
concurrency::parallel_for_each(...) restrict(amp)
{
blah
}
All of this seems fine, I run into no errors.
However the next step I want to do is pull the data from the buffer, which doesn't seem to work:
ID3D11Buffer* pBuffer = reinterpret_cast<ID3D11Buffer *>(concurrency::direct3d::get_buffer(m_myArray));
When I map this data (deviceContext->Map) the data inside is 0x00000000
What step am I forgetting that will allow me to read this data? Even when I try to set the CPU read/write access type I get an error, and I didn't even see any of my references do it that way either:
m_Accelerator.set_default_cpu_access_type(concurrency::access_type::access_type_read_write);
This creates an error to say "accelerator does not support zero copy"
Can anyone please help me and tell me why I can't read my buffer, and how to fix it?
The following code should work for this. You should also check that the DX device you are using and the C++ AMP accelerator are associated with the same hardware.
HRESULT hr = S_OK;
array<int, 1> arr(1024);
CComPtr<ID3D11Buffer> buffer;
IUnknown* unkBuf = get_buffer(arr);
hr = unkBuf->QueryInterface(__uuidof(ID3D11Buffer), reinterpret_cast<LPVOID*>(&buffer));
This question has an answer that shows you how to do the opposite.
Reading Buffer Data using C++ AMP
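As an aside, if you do not strictly need to go through the DX buffer at all, a concurrency::array can also be copied straight back to host memory. A minimal sketch, assuming arr is the array your parallel_for_each wrote into and the sizes match:
#include <amp.h>
#include <vector>
// Sketch: read the results back to the CPU without Map/ID3D11Buffer.
std::vector<int> results(1024);
concurrency::copy(arr, results.begin());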

Deleting registry keys - error in MSDN sample

This MSDN article is supposed to demonstrate how to delete a registry key which has subkeys, but the code is flawed.
The line that says
StringCchCopy (lpEnd, MAX_PATH*2, szName);
causes an exception, because it copies beyond the end of the buffer that lpEnd points into. I tried correcting the solution by replacing that line with the following:
size_t subKeyLen = lstrlen(lpSubKey);
size_t bufLen = subKeyLen + lstrlen(szName)+1;
LPTSTR buf = new WCHAR[bufLen];
StringCchCopy(buf,bufLen,lpSubKey);
StringCchCopy(buf+subKeyLen,lstrlen(szName)+1,szName);
buf[bufLen-1]='\0';
I'm unable to step through the code, as the target platform and dev platform are different, but from the logging I've put in the code it looks like it just freezes up but doesn't throw an exception.
It's frustrating that MSDN articles are wrong...you'd think they would be checked.
Any ideas on how to correct this?
Thanks.
If you don't mind having Shlwapi.dll as an additional dependency, it may be easier for you just to use SHDeleteKey. If you're only targeting Vista+, RegDeleteTree (which lives in Advapi32.dll) is another alternative.
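For reference, a minimal sketch of both calls; the key path below is purely an example:
#include <windows.h>
#include <shlwapi.h> // SHDeleteKey; link with Shlwapi.lib
// Example key path only; substitute your own.
LONG status = SHDeleteKeyW(HKEY_CURRENT_USER, L"Software\\MyApp\\ObsoleteKey");
// On Vista and later, RegDeleteTree (Advapi32) does the same job:
// LONG status = RegDeleteTreeW(HKEY_CURRENT_USER, L"Software\\MyApp\\ObsoleteKey");
if (status != ERROR_SUCCESS)
{
// handle the error
}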
That change by itself would not be sufficient. The line of code following it:
if (!RegDelnodeRecurse(hKeyRoot, lpSubKey)) {
break;
would also need to change. lpSubKey would need to be replaced with buf since that now contains the full key.
And it probably goes without saying, but be sure to free (delete[]) buf as part of the cleanup.
However, for correctness, it seems as if it would be better just to fix the original line of code by changing it to pass the correct length (which should be okay since I believe the maximum key length in the registry is 255):
StringCchCopy (lpEnd, MAX_PATH*2 - lstrlen(lpSubKey), szName);