Deleting registry keys - error in MSDN sample - c++

This MSDN article is supposed to demonstrate how to delete a registry key which has subkeys, but the code is flawed.
The line that says
StringCchCopy (lpEnd, MAX_PATH*2, szName);
causes an exception because it copies past the end of the buffer that lpEnd points into. I tried correcting the sample by replacing that line with the following:
size_t subKeyLen = lstrlen(lpSubKey);
size_t bufLen = subKeyLen + lstrlen(szName)+1;
LPTSTR buf = new WCHAR[bufLen];
StringCchCopy(buf,bufLen,lpSubKey);
StringCchCopy(buf+subKeyLen,lstrlen(szName)+1,szName);
buf[bufLen-1]='\0';
I'm unable to step through the code because the target platform and the dev platform are different, but from the logging I've added it looks like the code just freezes up rather than throwing an exception.
It's frustrating that MSDN articles are wrong...you'd think they would be checked.
Any ideas on how to correct this?
Thanks.

If you don't mind having Shlwapi.dll as an additional dependency, it may be easier for you just to use SHDeleteKey. If you're only targeting Vista+, RegDeleteTree (which lives in Advapi32.dll) is another alternative.
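For reference, a minimal sketch of both calls (the key path under HKEY_CURRENT_USER is only a made-up example, and in practice you would pick one of the two):
#include <windows.h>
#include <shlwapi.h>
#pragma comment(lib, "shlwapi.lib")
#pragma comment(lib, "advapi32.lib")

int main()
{
    // Shlwapi route: deletes the key and all of its subkeys, works on pre-Vista systems too.
    LSTATUS rc1 = SHDeleteKey(HKEY_CURRENT_USER, TEXT("Software\\MyApp\\ObsoleteKey"));

    // Vista+ route: same effect, no dependency beyond Advapi32.
    LSTATUS rc2 = RegDeleteTree(HKEY_CURRENT_USER, TEXT("Software\\MyApp\\ObsoleteKey"));

    return (rc1 == ERROR_SUCCESS || rc2 == ERROR_SUCCESS) ? 0 : 1;
}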

That change by itself would not be sufficient. The line of code following it:
if (!RegDelnodeRecurse(hKeyRoot, lpSubKey)) {
break;
would also need to change. lpSubKey would need to be replaced with buf since that now contains the full key.
And it probably goes without saying, but be sure to free (delete) buf as part of the cleanup.
However, for correctness, it seems as if it would be better just to fix the original line of code by changing it to pass the correct length (which should be okay since I believe the maximum key length in the registry is 255):
StringCchCopy (lpEnd, MAX_PATH*2 - lstrlen(lpSubKey), szName);
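To make the arithmetic concrete, here is a small compilable sketch of that append step; the helper function is mine and not part of the MSDN sample, but the names and the MAX_PATH*2 buffer size follow it:
#include <windows.h>
#include <strsafe.h>

// Append a child key name to the key-path buffer without overrunning it.
bool AppendChildName(TCHAR (&szKeyPath)[MAX_PATH * 2], const TCHAR* szName)
{
    LPTSTR lpEnd = szKeyPath + lstrlen(szKeyPath);            // end of the existing path
    size_t cchRemaining = MAX_PATH * 2 - lstrlen(szKeyPath);  // capacity left after it
    return SUCCEEDED(StringCchCopy(lpEnd, cchRemaining, szName));
}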

Related

Get raw buffer for in-memory dataset in GDAL C++ API

I have generated a GeoTiff dataset in-memory using GDALTranslate() with a /vsimem/ filepath. I need access to the buffer for the actual GeoTiff file to put it in a stream for an external API. My understanding is that this should be possible with VSIGetMemFileBuffer(), however I can't seem to get this to return anything other than nullptr.
My code is essentially as follows:
//^^ GDALDataset* srcDataset created somewhere up here ^^
//psOptions struct has "-b 4" and "-of GTiff" settings.
const char* filep = "/vsimem/foo.tif";
GDALDataset* gtiffData = GDALTranslate(filep, srcDataset, psOptions, nullptr);
vsi_l_offset size = 0;
GByte* buf = VSIGetMemFileBuffer(filep, &size, true); //<-- returns nullptr
gtiffData seems to be a real dataset on inspection; it has all the appropriate properties (number of bands, raster size, etc.). When I provide a real filesystem location to GDALTranslate() rather than the /vsimem/ path and load the result in QGIS, it renders correctly too.
Looking at the source for VSIGetMemFileBuffer(), it should really only return nullptr if the file can't be found. This suggests I'm using it incorrectly. Does anyone know what the correct usage is?
Bonus points: Is there a better way to do this (stream the file out)?
Thanks!
I don't know anything about the C++ API, but in Python the snippet below is what I sometimes use to get the contents of an in-memory file. In my case it's mainly VRTs, but it shouldn't be any different for other formats.
As said, though, I don't know whether the VSI API translates 1-to-1 to C++.
from osgeo import gdal
filep = "/vsimem/foo.tif"
# get the file size
stat = gdal.VSIStatL(filep, gdal.VSI_STAT_SIZE_FLAG)
# open file
vsifile = gdal.VSIFOpenL(filep, 'r')
# read entire contents
vsimem_content = gdal.VSIFReadL(1, stat.size, vsifile)
# close the in-memory file handle when done
gdal.VSIFCloseL(vsifile)
In the case of a VRT the content would be text, shown with something like print(vsimem_content.decode()). For a tiff it would of course be binary data.
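For what it's worth, the same VSI calls are declared for C/C++ in cpl_vsi.h, so a rough equivalent of the snippet above might look like this sketch (ReadVsimemFile is just an illustrative helper name, not part of GDAL):
#include "cpl_vsi.h"
#include <vector>

// Read the whole contents of an in-memory (/vsimem/) file into a byte vector.
std::vector<GByte> ReadVsimemFile(const char* path)
{
    VSIStatBufL st;
    if (VSIStatL(path, &st) != 0)
        return {};                             // file not found in /vsimem/

    VSILFILE* fp = VSIFOpenL(path, "rb");
    if (fp == nullptr)
        return {};

    std::vector<GByte> buf(static_cast<size_t>(st.st_size));
    size_t nRead = VSIFReadL(buf.data(), 1, buf.size(), fp);
    VSIFCloseL(fp);

    buf.resize(nRead);                         // shrink on a short read
    return buf;
}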
I came back to this after putting in a workaround, and upon swapping things back over it seems to work fine. @mmomtchev suggested looking at the CPL_DEBUG output, which showed nothing unusual (and was silent during the actual VSIGetMemFileBuffer call).
In particular, for other reasons I had to put a GDALWarp call in between calling GDALTranslate and accessing the buffer, and it seems that this is what makes the difference. My guess is that GDALWarp is calling VSIFOpenL internally - although I can't find this in the source - and this does some kind of initialisation for VSIGetMemFileBuffer. Something to try for anyone else who encounters this.

Can't analyse PE files more than certain size

Currently, my code is able to get the entropy and file offset of PE files that are less than 3 MB, tested with notepad.exe. However, I receive errors whenever I try to analyse a bigger file.
I am not sure how to solve this problem, but my lecturer told me to create another similar function. I'd really appreciate it if someone could help me with this.
Error shown in CLI:
Call to ReadFile() failed.
Error Code: 998
Error portion:
dwFileSize = GetFileSize(hFile, NULL);
if (dwFileSize != INVALID_FILE_SIZE)
{
bFile = (byte*)malloc(dwFileSize);
Your error code decodes to "Invalid access to memory location". You're not checking the return value of malloc, and even if you were, you would still need to loop on ReadFile to read the whole thing in.
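A minimal sketch of that pattern, assuming hFile and dwFileSize come from the surrounding code and that this sits in a BOOL-returning function like the original appears to:
BYTE* bFile = (BYTE*)malloc(dwFileSize);
if (bFile == NULL)
{
    printf("malloc of %lu bytes failed.\n", dwFileSize);
    return FALSE;
}

DWORD totalRead = 0;
while (totalRead < dwFileSize)
{
    DWORD bytesRead = 0;
    // Read whatever is left; ReadFile may return fewer bytes than requested.
    if (!ReadFile(hFile, bFile + totalRead, dwFileSize - totalRead, &bytesRead, NULL) || bytesRead == 0)
    {
        printf("Call to ReadFile() failed. Error Code: %lu\n", GetLastError());
        free(bFile);
        return FALSE;
    }
    totalRead += bytesRead;
}

// ... analyse the buffer here ...

free(bFile);   // release the buffer when done (see the note about garbage collection below)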
You ran out of memory. You certainly need to redesign your algorithm.
And as Hans Passant pointed out, you have a memory leak because you never free the file's memory when you are done with it. C++ isn't garbage collected.

Using new to allocate memory for unsigned char array fails

I'm trying to load a .tga file in C++ code that I found through a Google search, but the part that allocates memory fails. The beginning of my "LoadTarga" method includes these variables:
int imageSize;
unsigned char* targaImage;
Later on in the method the imageSize variable gets set to 262144 and I use that number to set the size of the array:
// Calculate the size of the 32 bit image data.
imageSize = width * height * 4;
// Allocate memory for the targa image data.
targaImage = new unsigned char[imageSize];
if (!targaImage)
{
MessageBox(hwnd, L"LoadTarga - failed to allocate memory for the targa image data", L"Error", MB_OK);
return false;
}
The problem is that the body of the if statement executes and I have no idea why the memory allocation failed. As far as I know it should work: the code compiles and runs up to this point, and I haven't seen anything in Google yet that shows a proper alternative.
What should I change in my code to make it allocate memory correctly?
Important Update:
Rob L's comments and suggestions were very useful (though I didn't try _heapchk since I solved the issue before I tried using it)
Trying each of fritzone's ideas meant the program ran past the "if (!targaImage)" point without trouble. The code that sets targaImage, together with the if statement that checks whether memory was allocated correctly, has been replaced with this:
try
{
targaImage = new unsigned char[imageSize];
}
catch (std::bad_alloc& ba)
{
std::cerr << "bad_alloc caught: " << ba.what() << '\n';
return false;
}
However I got a new problem with the very next bit of code:
count = (unsigned int)fread(targaImage, 1, imageSize, filePtr);
if (count != imageSize)
{
MessageBox(hwnd, L"LoadTarga - failed to read in the targa image data", L"Error", MB_OK);
return false;
}
Count was giving me a value of 250394, which is different from imageSize's value of 262144. I couldn't figure out why, and a bit of searching (though I must admit, not much searching) on how "fread" works didn't yield any info.
I decided to cancel my search and try the answer code from the tutorial site here: http://www.rastertek.com/dx11s2tut05.html (scroll to the bottom of the page where it says "Source Code and Data Files" and download the zip). However, creating a new project and putting in the source files and image file didn't work, as I got a new error. At this point I thought maybe the way I had converted the image file to .tga might have been incorrect.
So rather than spend a whole lot of time debugging the answer code, I put the image file from the answer into my own project. I noted that the size of mine was MUCH smaller than the answer's (245 KB compared to 1025 KB), so maybe if I used the answer code's image my code would run fine. Turns out I was right! Now the image is stretched sideways for some reason, but my original query appears to have been solved.
Thanks Rob L and fritzone for your help!
You are NOT using the form of new which returns a null pointer on error, so checking the return value makes no sense. Instead, you should catch std::bad_alloc. The null-pointer-returning form of new has this syntax: new (std::nothrow) unsigned char[imageSize];
Please see: http://www.cplusplus.com/reference/new/operator%20new[]/
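A short sketch of the nothrow form, wrapped in a hypothetical helper so it stands alone (imageSize is the computed byte count from the question):
#include <new>
#include <cstddef>

unsigned char* AllocateImage(std::size_t imageSize)
{
    // With the nothrow form, a failed allocation returns nullptr instead of throwing,
    // so the "if (!targaImage)" style check becomes meaningful.
    unsigned char* targaImage = new (std::nothrow) unsigned char[imageSize];
    if (!targaImage)
    {
        // handle the allocation failure here (log it, return nullptr, etc.)
    }
    return targaImage;
}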
Nothing in your sample looks wrong. It is pretty unlikely that a modern Windows system will run out of memory allocating 256 KB just once. Perhaps your allocator is being called in a loop and allocating more than you think, or the value of imageSize is wrong. Look in the debugger.
Another possibility is that your heap is corrupt. Calling _heapchk() can help diagnose that.
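A quick sketch of that diagnostic; _heapchk() lives in the MSVC CRT and is declared in <malloc.h>, and the printed messages are just illustrative:
#include <malloc.h>
#include <cstdio>

void CheckHeap()
{
    switch (_heapchk())
    {
    case _HEAPOK:       std::printf("Heap appears to be consistent.\n"); break;
    case _HEAPEMPTY:    std::printf("Heap has not been initialised.\n"); break;
    case _HEAPBADBEGIN: std::printf("Initial heap header is bad.\n");    break;
    case _HEAPBADNODE:  std::printf("Heap contains a bad node.\n");      break;
    default:            std::printf("Heap check reported another problem.\n"); break;
    }
}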
Check the "memory peak working set" in Windows Task Manager to see how much memory you are really trying to allocate.

C++ CopyFile() does not work on C:\

Hello guys, I have a simple program which copies itself. It works great when I copy to the D: drive, but when I try to copy it to the C: drive nothing happens.
This is code :
int main()
{
char result[MAX_PATH];  // buffer filled by GetModuleFileName
string appDir = "";
appDir = std::string( result, GetModuleFileName( NULL, result, MAX_PATH ) );
CopyFile(appDir.c_str(), "C:\\SelfCopyingApp.exe", 1);
system("PAUSE");
return 0;
}
Does anyone have an idea?
Thanks...
By default, the root of the system drive has locked-down permissions which prevent anyone who is not an administrator from copying things there. Generally, one should not be messing with the root of the drive. If you need to do something like an installer, then you should:
Ask for Admin rights
Install yourself in the correct location, namely %PROGRAMFILES%\CompanyName\ProductName
Messing with the root of the drive is asking for trouble; that's not where programs go.
Other notes on this code not related to your question:
system("pause") is wrong. Use std::cin.get() if you really want a portable way to get that behavior.
You should probably be using Unicode.
If GetModuleFileName fails you're going to be copying some random garbage to that location, not yourself. You need to check the return codes and GetLastError codes of every Win32 function.
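A minimal sketch of that kind of checking, reusing the destination path from the question so the access-denied failure surfaces properly:
#include <windows.h>
#include <iostream>

int main()
{
    wchar_t path[MAX_PATH];
    DWORD len = GetModuleFileNameW(NULL, path, MAX_PATH);
    if (len == 0 || len == MAX_PATH)   // 0 = failure; MAX_PATH may mean the path was truncated
    {
        std::wcerr << L"GetModuleFileName failed, error " << GetLastError() << L'\n';
        return 1;
    }

    // Same destination as in the question; expect error 5 (ERROR_ACCESS_DENIED)
    // when writing to the root of the system drive without admin rights.
    if (!CopyFileW(path, L"C:\\SelfCopyingApp.exe", TRUE))
    {
        std::wcerr << L"CopyFile failed, error " << GetLastError() << L'\n';
        return 1;
    }

    std::cin.get();   // portable replacement for system("PAUSE")
    return 0;
}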

Deleting a bitmap resource in Visual C++

I am trying to delete a bitmap resource through code and am having trouble doing it. I've been through several hours of head-banging with Google. Here is the code:
int result;
HANDLE h;
h = BeginUpdateResource(L"C:\\Users\\Steve\\Desktop\\stub.exe", FALSE);
result = UpdateResource(h, RT_BITMAP, MAKEINTRESOURCE(IDB_BITMAP2), MAKELANGID(LANG_NEUTRAL, SUBLANG_NEUTRAL), NULL, 0);
EndUpdateResource(h, FALSE);
When I debug, variable "result" ends up being NULL which means that the update didn't go through. Is there something incredibly basic that I'm missing?
OK, I figured out what went wrong. First off, I used the "GetLastError()" function to help whittle down the possibilities. It gave me error 0x57, which means ERROR_INVALID_PARAMETER.
When I experimented more, it turned out that in the language parameter I had passed "MAKELANGID(LANG_NEUTRAL, SUBLANG_NEUTRAL)" instead of "MAKELANGID(LANG_ENGLISH, SUBLANG_ENGLISH_US)". Apparently, resources care about the language you are using, even for something like a bitmap which doesn't use language.
Once I put in the LANG_ENGLISH parameters, it worked. You can find out what language your resources use by inspecting the text of the .rc file the resource is built from.
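For reference, a sketch of the corrected call described above, assuming a Unicode build like the snippet in the question, with the same stub.exe path and IDB_BITMAP2 identifier:
#include <windows.h>
#include "resource.h"   // defines IDB_BITMAP2

int main()
{
    HANDLE h = BeginUpdateResource(L"C:\\Users\\Steve\\Desktop\\stub.exe", FALSE);
    if (h == NULL)
        return 1;

    // lpData = NULL and cb = 0 delete the resource; the language must match
    // the one the resource was built with (here, US English).
    BOOL ok = UpdateResource(h, RT_BITMAP, MAKEINTRESOURCE(IDB_BITMAP2),
                             MAKELANGID(LANG_ENGLISH, SUBLANG_ENGLISH_US),
                             NULL, 0);

    // Commit the change only if the update succeeded (TRUE discards the changes).
    EndUpdateResource(h, !ok);
    return ok ? 0 : 1;
}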