Win32 Can't get data from shared memory - c++

I am able to create a shared memory object, as well as open it using the guide from MSDN.
The first process creates it and keeps it open.
The second process inputs a string.
Then the first process attempts to recover that string and display it, but I can't seem to get anything. The buffer is always empty, although the writing part seems to be set up correctly.
I write a string to memory like this:
int MemoryMapper::Write(const std::string& data) {
m_pBuffer = (LPCTSTR)MapViewOfFile(m_OpenHandle, FILE_MAP_ALL_ACCESS, 0, 0, m_BufferSize);
if (m_pBuffer == NULL)
{
std::cerr << m_DebugErrorTitle << "Write(): " << MM_ERROR_MAPPING_FAILED << " {" << GetLastError() << "}" << std::endl;
Close();
return 0;
}
const char* cdata = _CharFromString(data);
int size = (lstrlen(cdata) * sizeof(const char*));
CopyMemory((PVOID)m_pBuffer, cdata, size);
m_WrittenSize += size;
if (m_Debug > 1) { std::cout << m_DebugTitle << "Wrote " << size << " bytes." << std::endl; }
return size;
}
Then I read it like so:
int MemoryMapper::Read(std::string& data) {
m_pBuffer = (LPCTSTR) MapViewOfFile(m_OpenHandle, FILE_MAP_ALL_ACCESS, 0, 0, m_BufferSize);
if (m_pBuffer == NULL)
{
std::cerr << m_DebugErrorTitle << "Read(" << m_MemoryName << "): " << MM_ERROR_MAPPING_FAILED << " {" << GetLastError() << "}" << std::endl;
Close();
return 0;
}
MessageBox(NULL, m_pBuffer, TEXT("TEST MESSAGE"), MB_OK);
int size = (lstrlen(m_pBuffer) * sizeof(const char*));
UnmapViewOfFile(m_pBuffer);
return size;
}
m_pBuffer is a LPCTSTR and m_BufferSize is 1024.
The name specified for the object is the same on both ends. I've already made sure the creation and opening/closing part works.
The second process writes '8312.000000,8312.000000', a total of 92 bytes according to the code.
The reader's buffer is empty.
What am I doing wrong?
I've tried various data types, char, const char, string, tchar - same result.

8312.000000,8312.000000 is 23 characters in length.
std::string::c_str() returns a null-terminated const char* pointer. lstrlen() returns the number of characters up to but not including the null terminator.
Write() is multiplying the string length by sizeof(const char*), which is 4 in a 32-bit process (8 in a 64-bit process). Write() is exceeding the bounds of data and attempting to copy 23 * 4 = 92 bytes into m_pBuffer. cdata is guaranteed to point at a buffer containing 24 bytes max (23 characters + 1 null terminator), so Write() is reaching into surrounding memory. That is undefined behavior, and anything could happen. In your case, you probably just ended up copying extra garbage into m_pBuffer. Write() could have easily crashed instead.
In fact, if data had more than 256 characters, Write() WOULD crash, because it would be trying to copy 257+ * 4 = 1028 or more bytes into m_pBuffer - more than the 1024 bytes that MapViewOfFile() mapped access for.
You should be multiplying the string length by sizeof(std::string::value_type) instead, which is sizeof(char), which is always 1 (so you could just omit the multiplication).
Read() has the same sizeof() mistake, but it is also making the assumption that m_pBuffer is always null-terminated when calling lstrlen() and MessageBox(), but Write() does not guarantee that a null terminator is always present.
With that said, try something more like this instead:
int MemoryMapper::Write(const std::string& data)
{
// include the null terminator if there is room...
DWORD size = static_cast<DWORD>(std::min<std::size_t>(data.size() + 1, m_BufferSize)); // std::min needs <algorithm>
char *pBuffer = (char*) MapViewOfFile(m_OpenHandle, FILE_MAP_WRITE, 0, 0, size);
if (!pBuffer)
{
DWORD errCode = GetLastError();
std::cerr << m_DebugErrorTitle << "Write(): " << MM_ERROR_MAPPING_FAILED << " {" << errCode << "}" << std::endl;
Close();
return 0;
}
CopyMemory(pBuffer, data.c_str(), size);
UnmapViewOfFile(pBuffer);
m_WrittenSize += size;
if (m_Debug > 1) {
std::cout << m_DebugTitle << "Wrote " << size << " bytes." << std::endl;
}
return size;
}
int MemoryMapper::Read(std::string& data)
{
char *pBuffer = (char*) MapViewOfFile(m_OpenHandle, FILE_MAP_READ, 0, 0, m_BufferSize);
if (!pBuffer)
{
DWORD errCode = GetLastError();
std::cerr << m_DebugErrorTitle << "Read(" << m_MemoryName << "): " << MM_ERROR_MAPPING_FAILED << " {" << errCode << "}" << std::endl;
Close();
return 0;
}
// check for a null terminator, but don't exceed the buffer...
char *terminator = std::find(pBuffer, pBuffer + m_BufferSize, '\0');
std::size_t len = std::distance(pBuffer, terminator);
data.assign(pBuffer, len);
UnmapViewOfFile(pBuffer);
MessageBoxA(NULL, data.c_str(), "TEST MESSAGE", MB_OK);
// include the null terminator if it was read...
return static_cast<int>(std::min<std::size_t>(len + 1, m_BufferSize));
}


Issue with memory allocation in beginner OpenCL code

I am trying to run a beginner-level OpenCL test using an Intel CPU and integrated Iris graphics. I am compiling the code using the standard g++ with -framework OpenCL as a compile switch. I've tried sanitizing the code by running it under gdb and referring to a few guides online. But I'm still seeing an error, and I suspect it is related to memory allocation. I have pasted my entire code below; please help if you see anything glaringly wrong.
Apologies for the verbose comments. Let me know if I have some wrong assumptions there as well :)
#include <iostream>
#include <OpenCL/opencl.h>
#include <cassert>
// the kernel that we want to execute on the device.
// here, you are doing an addition of elements in an array.
const char* kernelAdd =
{
"__kernel void add (global int* data)\n"
"{\n"
" int work_item_id = get_global_id(0);\n"
" data[work_item_id] *= 2;\n"
"}\n"
};
int main (int argc, char* argv[])
{
cl_int ret_val;
// getting the platform ID that can be used - here we are getting only one
cl_platform_id platformID;
cl_uint numPlatforms;
if((clGetPlatformIDs(1, &platformID, &numPlatforms)))
std::cout << "clGetPlatformIDs failed!" << std::endl;
// getting OpenCL device ID for our GPU - here too, we are getting only one
cl_device_id deviceID;
cl_uint numDevices;
if((clGetDeviceIDs(platformID, CL_DEVICE_TYPE_GPU, 1, &deviceID, &numDevices)))
std::cout << "clGetDeviceIDs failed!" << std::endl;
// printing out some device info. here we have chosen CL_DEVICE_NAME.
// you can choose any others by referring
// https://www.khronos.org/registry/OpenCL/sdk/1.0/docs/man/xhtml/clGetDeviceInfo.html
typedef char typeInfo;
size_t sizeInfo = 16*sizeof(typeInfo);
typeInfo* deviceInfo = new typeInfo(sizeInfo);
if((clGetDeviceInfo(deviceID, CL_DEVICE_NAME, sizeInfo, (void*) deviceInfo, NULL)))
std::cout << "clGetDeviceInfo failed!" << std::endl;
std::cout << "CL_DEVICE_NAME = " << deviceInfo << ", platform ID = ";
std::cout << platformID << ", deviceID = " << deviceID << std::endl;
// set up a context for our device
cl_context_properties contextProp[3] = {CL_CONTEXT_PLATFORM, (cl_context_properties) platformID, 0};
cl_context context = clCreateContext(contextProp, 1, &deviceID, NULL, NULL, &ret_val);
if (ret_val)
std::cout << "clCreateContext failed!" << std::endl;
// set up a queue for our device
cl_command_queue queue = clCreateCommandQueue(context, deviceID, (cl_command_queue_properties) NULL, &ret_val);
if (ret_val)
std::cout << "clCreateCommandQueue failed!" << std::endl;
// creating our data set that we want to compute on
int N = 1 << 4;
size_t data_size = sizeof(int) * N;
int* input_data = new int(N);
int* output_data = new int(N);
for (int i = 0; i < data_size; i++)
{
input_data[i] = rand() % 1000;
}
// create a buffer to where you will eventually enqueue the program for the device
cl_mem buffer = clCreateBuffer(context, CL_MEM_READ_WRITE, data_size, NULL, &ret_val);
if (ret_val)
std::cout << "clCreateBuffer failed!" << std::endl;
// copying our data set to the buffer
if((clEnqueueWriteBuffer(queue, buffer, CL_TRUE, 0, data_size, input_data, 0, NULL, NULL)))
std::cout << "clEnqueueWriteBuffer failed!" << std::endl;
// we compile the device program with our source above and create a kernel for it.
// also, we are allowed to create a device program with a binary that we can point to.
cl_program program = clCreateProgramWithSource(context, 1, (const char**) &kernelAdd, NULL, &ret_val);
if (ret_val)
std::cout << "clCreateProgramWithSource failed!" << std::endl;
if((clBuildProgram(program, 1, &deviceID, NULL, NULL, NULL)))
std::cout << "clBuildProgram failed!" << std::endl;
cl_kernel kernel = clCreateKernel(program, "add", &ret_val);
if (ret_val)
std::cout << "clCreateKernel failed! ret_val = " << ret_val << std::endl;
// configure options to find the arguments to the kernel
if((clSetKernelArg(kernel, 0, sizeof(buffer), &buffer)))
std::cout << "clSetKernelArg failed!" << std::endl;
// the total number of work items that we want to use
const size_t global_dimensions[3] = {data_size, 0, 0};
if((clEnqueueNDRangeKernel(queue, kernel, 1, NULL, global_dimensions, NULL, 0, NULL, NULL)))
std::cout << "clEnqueueNDRangeKernel failed!" << std::endl;
// read back output into another buffer
ret_val = clEnqueueReadBuffer(queue, buffer, CL_TRUE, 0, data_size, output_data, 0, NULL, NULL);
if(ret_val)
std::cout << "clEnqueueReadBuffer failed! ret_val = " << ret_val << std::endl;
std::cout << "Kernel completed" << std::endl;
// Release kernel, program, and memory objects
if(clReleaseMemObject(buffer))
std::cout << "clReleaseMemObject failed!" << std::endl;
if(clReleaseKernel(kernel))
std::cout << "clReleaseKernel failed!" << std::endl;
if(clReleaseProgram(program))
std::cout << "clReleaseProgram failed!" << std::endl;
if(clReleaseCommandQueue(queue))
std::cout << "clReleaseCommandQueue failed!" << std::endl;
if(clReleaseContext(context))
std::cout << "clReleaseContext failed!" << std::endl;
for (int i = 0; i < data_size; i++)
{
assert(output_data[i] == input_data[i]/2);
}
return 0;
}
The output is as follows:
CL_DEVICE_NAME = Iris, platform ID = 0x7fff0000, deviceID = 0x1024500
objc[1034]: Method cache corrupted. This may be a message to an invalid object, or a memory error somewhere else.
objc[1034]: receiver 0x7fefb8712a90, SEL 0x7fff7ce87c58, isa 0x7fff99268208, cache 0x7fff99268218, buckets 0x7fefb87043c0, mask 0x3, occupied 0x1
objc[1034]: receiver 48 bytes, buckets 64 bytes
objc[1034]: selector 'dealloc'
objc[1034]: isa 'OS_xpc_array'
objc[1034]: Method cache corrupted. This may be a message to an invalid object, or a memory error somewhere else.
make: *** [all] Abort trap: 6
Quite a common mistake
int* input_data = new int(N);
should be
int* input_data = new int[N];
Your version allocates a single int and initialises it to N. To allocate N integers you need square brackets. (Note that output_data is allocated the same way and needs the same fix, as does deviceInfo with new typeInfo(sizeInfo).)

Windows registry returning incorrect value C++

The below code can correctly read Registry values from various different keys, however whenever I try to read a value from a key under Winlogon it will either come up as "not found" or it will return a completely wrong value. The code is run as admin, and compiled with Visual Studio 2017.
HKEY registryHandle = NULL;
int registryResult = NULL;
DWORD dataType;
TCHAR dataBuffer[1024] = {};
DWORD bufferSize = sizeof(dataBuffer);
registryResult = RegOpenKeyEx(HKEY_LOCAL_MACHINE, L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon", 0, KEY_QUERY_VALUE, &registryHandle);
if (registryResult != ERROR_SUCCESS) {
std::cout << "Error: " << registryResult << std::endl;
return false;
}
registryResult = RegQueryValueEx(registryHandle, L"LastUsedUsername", NULL, NULL, (LPBYTE)dataBuffer, &bufferSize);
if (registryResult != ERROR_SUCCESS) {
std::cout << "Error2: " << registryResult << std::endl;
return false;
}
std::cout << "Data Size: " << bufferSize << std::endl;
for (int i = 0; i < 256; i++) {
if (dataBuffer[i] == NULL) { break; }
std::cout << (char)dataBuffer[i];
}
std::cin.get();
RegCloseKey(registryHandle);
Registry value that I'm trying to read:
Below refers to Remy's suggested solution.
RegQueryValueEx Returns a buffer size of 4 with an output of 18754 17236 0 52428
You are clearly calling the Unicode version of the Registry functions, so you should be using WCHAR instead of TCHAR for your data buffer.
And you should not be truncating the characters to char at all. Use std::wcout instead of std::cout for printing Unicode strings. And use the returned bufferSize to know how many WCHARs were actually output. Your printing loop ignores bufferSize completely, so you may actually be printing random garbage that RegQueryValueEx() never intended you to use (that is why the lpcbData parameter is an in/out parameter - so you know how many bytes are actually valid).
You are also leaking the opened HKEY handle if RegQueryValueEx() fails.
Try something more like this instead:
HKEY registryHandle;
int registryResult;
registryResult = RegOpenKeyExW(HKEY_LOCAL_MACHINE, L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon", 0, KEY_QUERY_VALUE, &registryHandle);
if (registryResult != ERROR_SUCCESS) {
std::cout << "Error: " << registryResult << std::endl;
return false;
}
WCHAR dataBuffer[1024];
DWORD bufferSize = sizeof(dataBuffer);
// TODO: consider using RegGetValueW() instead, which is safer
// when it comes to reading string values from the Registry...
registryResult = RegQueryValueExW(registryHandle, L"LastUsedUsername", NULL, NULL, (LPBYTE)dataBuffer, &bufferSize);
RegCloseKey(registryHandle);
if (registryResult != ERROR_SUCCESS) {
std::cout << "Error2: " << registryResult << std::endl;
return false;
}
DWORD len = bufferSize / sizeof(WCHAR);
if ((len > 0) && (dataBuffer[len-1] == L'\0')) {
--len;
}
std::cout << "Data Byte Size: " << bufferSize << std::endl;
std::cout << "Data Character Length: " << len << std::endl;
std::wcout.write(dataBuffer, len);
std::cin.get();
return true;
That being said, on my machine, there is no LastUsedUsername value in the Winlogon key you are accessing, so getting a "not found" error is a very likely possibility. But you definitely need to handle that case as well.

Access violation reading location when using ReadFile

I've been struggling for many hours with the following problem: I am trying to read a file using the CreateFile and ReadFile functions.
Here is the code:
char* Utils::ReadFromFile(wchar_t* path) {
HANDLE hFile = CreateFile(
path, // long pointer word string file path (16 bit UNICODE char pointer)
GENERIC_READ, // access to file
0, // share mode ( 0 - prevents others from opening/readin/etc)
NULL, // security attributes
OPEN_EXISTING, // action to take on file -- returns ERROR_FILE_NOT_FOUND
FILE_ATTRIBUTE_READONLY, // readonly and offset possibility
NULL // when opening an existing file, this parameter is ignored
);
if (hFile == INVALID_HANDLE_VALUE) {
std::cout << "File opening failed" << endl;
std::cout << "Details: \n" << Utils::GetLastErrorMessage() << endl;
CloseHandle(hFile);
hFile = NULL;
return nullptr;
}
LARGE_INTEGER largeInteger;
GetFileSizeEx(hFile, &largeInteger);
LONGLONG fileSize = largeInteger.QuadPart;
if (fileSize == 0) {
std::cout << "Error when reading file size" << endl;
std::cout << "Details: \n" << Utils::GetLastErrorMessage() << endl;
CloseHandle(hFile);
hFile = NULL;
return nullptr;
}
cout << "File size: " << fileSize << endl;
char* bytesRead;
bytesRead = new char(fileSize);
int currentOffset = 0;
int attempts = 0;
int nBytesToBeRead = BYTES_TO_READ;
//DWORD nBytesRead = 0;
OVERLAPPED overlap{};
errno_t status;
while (currentOffset < fileSize) {
overlap.Offset = currentOffset;
if (fileSize - currentOffset < nBytesToBeRead)
nBytesToBeRead = fileSize - currentOffset;
status = ReadFile(
hFile, // file handler
bytesRead + currentOffset, // byted read from file
nBytesToBeRead, // number of bytes to read
NULL, // number of bytes read
&overlap // overlap parameter
);
if (status == 0) {
std::cout << "Error when reading file at offset: " << currentOffset << endl;
std::cout << "Details: \n" << Utils::GetLastErrorMessage() << endl;
attempts++;
std::cout << "Attempt: " << attempts << endl;
if (attempts == 3) {
cout << "The operation could not be performed. Closing..." << endl;
CloseHandle(hFile);
hFile = NULL;
return nullptr;
}
continue;
}
else {
cout << "Read from offset: " << currentOffset;// << " -- " << overlap.InternalHigh << endl;
currentOffset += nBytesToBeRead;
if (currentOffset == fileSize) {
cout << "File reading completed" << endl;
break;
}
}
}
CloseHandle(hFile);
return bytesRead;
}
When running this method I get some weird results:
One time it worked perfectly
Very often I get Access violation reading location for currentOffset variable and overlap.InternalHigh ( I commented last one), with last method from CallStack being
msvcp140d.dll!std::locale::locale(const std::locale & _Right) Line 326 C++
Sometimes the function runs perfectly, but I get access violation reading location when trying to exit main function with last method from CallStack being
ucrtbased.dll!_CrtIsValidHeapPointer(const void * block) Line 1385 C++
I read the Windows documentation thoroughly regarding the functions I use and checked the Internet for any solution I could find, but without any result. I don't understand this behaviour - getting different errors when running the code multiple times - and therefore I can't get to a solution for this problem.
Note: The reason I am reading the file in repeated calls is not relevant. I tried reading with a single call and the result is the same.
Thank you in advance
You are allocating a single char for bytesRead, not an array of fileSize chars:
char* bytesRead;
bytesRead = new char(fileSize); // allocate a char and initialize it with fileSize value
bytesRead = new char[fileSize]; // allocate an array of fileSize chars

Converting (parsing) google protocol buffer streams from socket

I am using the following code to parse a message that was serialized with a CodedOutputStream and sent over the socket:
if ( socket->read(databuffer, size) != -1)
{
google::protobuf::io::ArrayInputStream array_input(databuffer,size);
google::protobuf::io::CodedInputStream coded_input(&array_input);
data_model::terminal_data* tData = new data_model::terminal_data();
if (!tData->ParseFromCodedStream(&coded_input))
{
return;
}
else
std::cout << tData->symbol_name() << std::endl;
}
Here is how I serialized it:
data_model::terminal_data tData;
tData.set_type(1);
tData.set_client_id("C109");
tData.set_expiry("20140915");
tData.set_quantity(3500);
tData.set_strat_id("056");
tData.set_symbol_name("BANKNIFTY");
tData.set_time("145406340");
tData.set_trade_id(16109234);
int total_size = tData.ByteSize() + sizeof(int);
char *buffer = new char[total_size];
memset(buffer, '\0', total_size);
google::protobuf::io::ArrayOutputStream aos(buffer,total_size);
google::protobuf::io::CodedOutputStream *coded_output = new google::protobuf::io::CodedOutputStream(&aos);
google::protobuf::uint32 s = tData.ByteSize();
coded_output->WriteVarint32(s);
tData.SerializeToCodedStream(coded_output);
int sent_bytes = 0;
if ( (sent_bytes = send(liveConnections.at(i), buffer, total_size, MSG_NOSIGNAL)) == -1 )
liveConnections.erase(liveConnections.begin() + i);
else
std::cout << "sent " << sent_bytes << " bytes to " << i << std::endl;
delete coded_output;
delete buffer;
When I try to parse, it gives the following error at runtime:
[libprotobuf ERROR google/protobuf/message_lite.cc:123] Can't parse message of type "data_model.terminal_data" because it is missing required fields: type
But as you can see (in the second code snippet) I have set the type field. What is the problem ?
You're ignoring the count returned by read(), other than checking it for -1. You need to use it instead of size when constructing array_input.

C++ WIN32 Creating An Array of Integers and Booleans in Shared Memory

I'm trying to create an array of int and an array of bools in shared memory. So far I have the following code, which runs without errors and 'apparently' creates the memory, however I'm not sure that I can use an LPCTSTR to access the data like an array. Can someone please explain the best way to go about this, as I find MSDN quite lacking and painful.
void createSharedMemory()
{
const char slotsName[]="Slots";
const char flagsName[]="Flags";
const LONG BufferSize = sizeof(int);
const LONG Buffers = 10;
const LONG FlagSize = sizeof(bool);
HANDLE hSlots = CreateFileMapping((HANDLE)0xFFFFFFFF, NULL, PAGE_READWRITE, 0, BufferSize * Buffers, SLOTSNAME);
assert(hSlots != NULL);
HANDLE hFlags = CreateFileMapping((HANDLE)0xFFFFFFFF, NULL, PAGE_READWRITE, 0, FlagSize * Buffers, flagsName);
assert(hSlots != NULL);
std::cout << "Created shared memory!" << std::endl;
}
int main(int argc, char* argv[])
{
createSharedMemory();
HANDLE hSlots;
LPCTSTR pSlots;
hSlots = OpenFileMapping(FILE_MAP_ALL_ACCESS, FALSE, SLOTSNAME);
if(hSlots == NULL)
{
std::cout << "Could not open slots file mapping object:" << GetLastError() << std::endl;
getchar();
return 0;
}
pSlots = (LPTSTR) MapViewOfFile(hSlots, FILE_MAP_ALL_ACCESS, 0, 0, 10 * sizeof(int));
if(pSlots == NULL)
{
std::cout << "Could not map view of slots file:" << GetLastError() << std::endl;
CloseHandle(hSlots);
getchar();
return 0;
}
std::cout << "Mapped slots correctly!" << std::endl;
HANDLE hFlags;
LPCTSTR pFlags;
hFlags = OpenFileMapping(FILE_MAP_ALL_ACCESS, FALSE, FLAGSNAME);
if(hFlags == NULL)
{
std::cout << "Could not open flags file mapping object:" << GetLastError() << std::endl;
getchar();
return 0;
}
pFlags = (LPTSTR) MapViewOfFile(hFlags, FILE_MAP_ALL_ACCESS, 0, 0, 10 * sizeof(bool));
if(pFlags == NULL)
{
std::cout << "Could not map view of flags file:" << GetLastError() << std::endl;
CloseHandle(hFlags);
getchar();
return 0;
}
std::cout << "Mapped flags correctly!" << std::endl;
//Access the data here
getchar();
UnmapViewOfFile(pSlots);
CloseHandle(hSlots);
UnmapViewOfFile(pFlags);
CloseHandle(hFlags);
return 0;
}
MapViewOfFile() maps the shared memory into your process's address space. From then on (until it is unmapped) you can treat it just like a local chunk of memory that you allocated (or declared on the stack).
The shared memory mapping behind hSlots is 10 * sizeof(int) bytes in size, and if you are really storing ints in this memory then the easiest thing to do is to declare pSlots as an int*:
int* pSlots = reinterpret_cast<int*>( MapViewOfFile(hSlots, FILE_MAP_ALL_ACCESS, 0, 0, 10 * sizeof(int)) );
if (pSlots)
{
// pSlots can now be used as an array
for (int i = 0; i < 10; i++)
{
pSlots[i] = i; // etc etc
}
}