Reading COM ports with Qt through the Win32 API - C++

I'm trying to get a list of COM ports that are currently in use to present them in my GUI.
I'm using the following code:
TCHAR szComPort[8];
HANDLE hCom = NULL;
char str[20];
for (int i = 1; i <= 255; ++i)
{
    if (i < 10)
        wsprintf(szComPort, ("COM%d"), i);
    else
        wsprintf(szComPort, ("\\\\.\\COM%d"), i);
    hCom = CreateFile(szComPort,
                      GENERIC_READ|GENERIC_WRITE,
                      0,
                      NULL,
                      OPEN_EXISTING,
                      0,
                      NULL);
    if (INVALID_HANDLE_VALUE != hCom)
    {
        sprintf_s(str, "COM%d", i);
        ui->COMLIST->addItem(str);
    }
    CloseHandle(hCom);
}
This works fine on my laptop, but for some reason it crashes Qt on my PC for COM ports 10 and higher (meaning if I change i <= 255 to i <= 9 it works fine).
Any ideas?
Thank you!

You have 8 TCHARs in the szComPort buffer, but you are writing 10 characters (including the null terminator) for \\.\COM10 through \\.\COM99, and 11 characters for \\.\COM100 and above. Make the buffer at least 11 units.
Edit: The usual practice is to make the buffer simply large enough, with enough slop that you don't have to count characters. I'd just look at the string, figure it has about 10 characters, so even with the formatted value it won't reach 30, and declare the buffer as 32 items. The stack has plenty of room for the few extra bytes, and since you are not even initializing them there is no performance penalty, and less risk.
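For reference, a corrected version of the loop might look like this (an untested sketch: the buffer is widened to 32 TCHARs, TEXT() wraps the format strings for TCHAR builds, and CloseHandle is moved inside the if so it is only called on handles that were actually opened):
TCHAR szComPort[32]; // generous size; no need to count characters
char str[20];
for (int i = 1; i <= 255; ++i)
{
    if (i < 10)
        wsprintf(szComPort, TEXT("COM%d"), i);
    else
        wsprintf(szComPort, TEXT("\\\\.\\COM%d"), i);
    HANDLE hCom = CreateFile(szComPort,
                             GENERIC_READ|GENERIC_WRITE,
                             0,
                             NULL,
                             OPEN_EXISTING,
                             0,
                             NULL);
    if (INVALID_HANDLE_VALUE != hCom)
    {
        sprintf_s(str, "COM%d", i);
        ui->COMLIST->addItem(str);
        CloseHandle(hCom); // only close handles that were actually opened
    }
}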

Related

Reading Memory from Another Process in C++ | Copilot's Solution

As the title suggests, I am trying to read memory from another process in C++ in order to check whether the values from the other process reach a certain level. Since I don't know anything about this, I decided to consult GitHub Copilot for help. Normally I would search the docs, but GitHub seems to disagree: since I have access to GitHub Copilot, and the front-page advertisement clearly encourages users to trust Copilot's programming ability, I chose to let Copilot write this function.
So I gave it a prompt in the form of a comment: //A function that can grab an address from the memory of another process and store it as a double value
What it gave me seemed pretty good, but I will never take a function that Copilot makes and blindly use it unless I know for sure it will work (I don't trust that everything Copilot makes is free of issues, especially when dealing with pointers and such). I wanted to see if someone with experience in C++ memory handling could tell me whether this function will work and why it would or wouldn't, as I know nothing about getting memory from another process.
There are three main reasons why I am not just searching the docs anyway despite GitHub's statement:
Since this is a complicated and real-world use case, this will really test Copilot's programming ability and it will give me insight into how much I can trust Copilot in the future for stuff I don't know how to do (Obviously I wouldn't let this get out of hand, but it would be good to know I can trust Copilot a little more than I do right now).
Searching the docs anyway, despite the statement GitHub made on its website, is quite the opposite of what Copilot is supposed to help users with, and while I understand that it's a public beta and not complete yet, it should at least be good enough for real-world use cases rather than simple coding cases. An answer from someone experienced will really show whether it is good enough for real-world coding.
The docs only tell me what a function does and what to put as its parameters; they don't tell me how to use it. If I really wanted to know how to use it, I would have to search the web, and searching the web will most likely get me complicated examples that don't pertain to my issue and deter me from what I am actually trying to accomplish. Not only that, but it is the opposite of what Copilot is supposed to help users with, as stated in reason #2.
Here is the code that Copilot generated for me:
DWORD GetAddress(DWORD dwProcessId, LPCWSTR szModuleName, const char* szProcName)
{
    HANDLE hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, dwProcessId);
    if (hProcess == NULL)
        return 0;
    MODULEINFO modinfo;
    GetModuleInformation(hProcess, GetModuleHandle(szModuleName), &modinfo, sizeof(MODULEINFO));
    DWORD dwAddress = (DWORD)modinfo.lpBaseOfDll;
    DWORD dwSize = (DWORD)modinfo.SizeOfImage;
    MEMORY_BASIC_INFORMATION mbi;
    while (dwSize > 0)
    {
        VirtualQueryEx(hProcess, (LPVOID)dwAddress, &mbi, sizeof(mbi));
        if (mbi.State == MEM_COMMIT && !(mbi.Protect & PAGE_GUARD) && mbi.Protect & PAGE_EXECUTE_READWRITE)
        {
            DWORD dwOldProtect;
            VirtualProtectEx(hProcess, (LPVOID)dwAddress, mbi.RegionSize, PAGE_EXECUTE_READWRITE, &dwOldProtect);
            char* szBuffer = new char[mbi.RegionSize];
            ReadProcessMemory(hProcess, (LPVOID)dwAddress, szBuffer, mbi.RegionSize, NULL);
            for (DWORD dwIndex = 0; dwIndex < mbi.RegionSize - 4; dwIndex++)
            {
                if (szBuffer[dwIndex] == '\x55' && szBuffer[dwIndex + 1] == '\x8B' && szBuffer[dwIndex + 2] == 'E' && szBuffer[dwIndex + 3] == 'A')
                {
                    DWORD dwAddress2 = dwAddress + dwIndex + 7;
                    DWORD dwAddress3 = dwAddress2 + *(DWORD*)(dwAddress2);
                    if (strcmp((char*)dwAddress3, szProcName) == 0)
                    {
                        delete[] szBuffer;
                        CloseHandle(hProcess);
                        return dwAddress2 + 4;
                    }
                }
            }
            delete[] szBuffer;
            VirtualProtectEx(hProcess, (LPVOID)dwAddress, mbi.RegionSize, dwOldProtect, &dwOldProtect);
        }
        dwAddress += mbi.RegionSize;
        dwSize -= mbi.RegionSize;
    }
    CloseHandle(hProcess);
    return 0;
}
You may point out an immediately noticeable error: the function returns DWORD rather than double, which is what I asked Copilot to return. I saw that error, but from examples that I have seen (yes, I have done at least some searching), returning DWORD works as well. I may have misread those examples; if so, correct me.
The function returns a DWORD, but that is only the raw value that is stored in the memory, not actual double-typed data. If you cast it to double, you get back your original data.
You can't search in memory with anything other than a byte pointer on 64-bit systems: http://msdn.microsoft.com/en-us/library/aa746449%28v=vs.85%29.aspx
There are different ways to search for a string in memory, depending on what you are looking for: http://www.catatonicsoft.com/blog/need-to-read-and-write-strings-and-data-in-a-processs-memory/
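For comparison, a much smaller sketch of just the reading step (this is not Copilot's code; "address" is a hypothetical, already-known location, and finding it is the hard part the generated function is trying to solve):
#include <windows.h>

// Read 8 bytes at a known address in another process directly into a double.
double ReadRemoteDouble(DWORD pid, LPCVOID address, bool* ok)
{
    double value = 0.0;
    *ok = false;
    HANDLE hProcess = OpenProcess(PROCESS_VM_READ, FALSE, pid);
    if (hProcess == NULL)
        return value;
    SIZE_T bytesRead = 0;
    if (ReadProcessMemory(hProcess, address, &value, sizeof(value), &bytesRead) && bytesRead == sizeof(value))
        *ok = true;
    CloseHandle(hProcess);
    return value;
}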
Memory Scraper
This program uses several techniques to obtain information from processes and memory as it runs so that it can be added to the evidence file when Cofactor terminates the target process (by default) or when you terminate the program manually (with CTRL+C).
This code was mostly cobbled together from various examples at http://www.codeproject.com and https://forums.hak5.org, with some heavy modifications made to get the output in a useful format.
The process memory usage is checked continuously and added to the log file when it changes. This is done by getting a pointer to the process' memory region, then checking all of its pages as they are referenced. When they are changed, the contents of that page will be read and added to the log file as evidence. If a process goes in and out of sleep mode or is stopped for some other reason, this program will detect that and add it to the log file accordingly.
The current DLLs loaded by processes are recorded every 5 seconds so that if a DLL gets loaded after Cofactor terminates its target, it will still be included in the log file as evidence. A process is also checked every 5 seconds for new threads being spawned so that child processes are also included in our evidence files.
We could extend this program by having it check for two modules:
1) A module containing functions that correspond to debug breakpoints (which would detect whether a debugger was attached).
2) A module containing crash signatures - integers that would trigger an alert if they were found written to memory in any of our processes (like stack smashing protections might provide).
In order to do this without complicating things too much, I'd probably use CreateRemoteThread with an address within each module to continue execution from that thread into your own code where you can check for the breakpoint or crash signature and act accordingly.
Conclusion
If you need to debug a process and can't get it to stop for any reason, this program will still be able to grab the process memory at any time so that you can search for whatever you need.
You'll have to do some extra work in order to use the log file that is created, like parsing it with a parser of your choice and searching it (using regexes or something) which I assume is outside of the scope of what Copilot is designed to do.
If you end up using this program, please let me know! I'm curious to see how many people find this program useful.
Interesting Techniques I Learned From Other Programs
Finding DLLs Loaded by a Process
The C++ code below uses the Windows API GetModuleFileNameEx() to get the full path of each DLL loaded by a process, then strips the directory part to print just the filename rather than the whole path.
// Code Example: Finding DLLs Loaded by a Process
// (link with psapi.lib)
#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#include <string.h>

// Print the filenames of all modules loaded in the process with the given PID.
void PrintProcessModules(DWORD pid)
{
    HANDLE hProcess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pid);
    if (hProcess == NULL) { return; }
    HMODULE hMods[1024];
    DWORD cbNeeded;
    if (EnumProcessModules(hProcess, hMods, sizeof(hMods), &cbNeeded)) {
        for (DWORD i = 0; i < cbNeeded / sizeof(HMODULE); i++) {
            char szModName[MAX_PATH];
            if (!GetModuleFileNameExA(hProcess, hMods[i], szModName, sizeof(szModName))) { continue; }
            // Strip the directory: keep only the text after the last backslash
            const char* name = strrchr(szModName, '\\');
            printf("%s\n", name != NULL ? name + 1 : szModName);
        }
    }
    CloseHandle(hProcess);
}

int main()
{
    DWORD pid;
    if (scanf("%lu", &pid) != 1 || pid == 0) { return 0; }
    PrintProcessModules(pid);
    return 0;
}

Designing a fast "rolling window" file reader

I'm writing an algorithm in C++ that scans a file with a "sliding window," meaning it will scan bytes 0 to n, do something, then scan bytes 1 to n+1, do something, and so forth, until the end is reached.
My first algorithm was to read the first n bytes, do something, dump one byte, read a new byte, and repeat. This was very slow because to "ReadFile" from HDD one byte at a time was inefficient. (About 100kB/s)
My second algorithm involves reading a chunk of the file (perhaps n*1000 bytes, meaning the whole file if it's not too large) into a buffer and reading individual bytes off the buffer. Now I get about 10MB/s (decent SSD + Core i5, 1.6GHz laptop).
My question: Do you have suggestions for even faster models?
edit: My big buffer (relative to the window size) is implemented as follows (a sketch appears after the list):
- for a rolling window of 5kB, the buffer is initialized to 5MB
- read the first 5MB of the file into the buffer
- the window pointer starts at the beginning of the buffer
- upon shifting, the window pointer is incremented
- when the window pointer nears the end of the 5MB buffer, (say at 4.99MB), copy the remaining 0.01MB to the beginning of the buffer, reset the window pointer to the beginning, and read an additional 4.99MB into the buffer.
- repeat
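For illustration, a minimal sketch of that refill scheme (WINDOW, BUFFER and DoSomething are placeholder names, not the actual implementation):
#include <cstdio>
#include <cstring>

const size_t WINDOW = 5 * 1024;          // 5 kB sliding window
const size_t BUFFER = 5 * 1024 * 1024;   // 5 MB read buffer

void ScanFile(FILE* f, void (*DoSomething)(const char*, size_t))
{
    char* buf = new char[BUFFER];
    size_t valid = fread(buf, 1, BUFFER, f);   // initial fill
    size_t pos = 0;                            // window start within buf
    while (pos + WINDOW <= valid)
    {
        DoSomething(buf + pos, WINDOW);        // process bytes [pos, pos+WINDOW)
        ++pos;
        if (pos + WINDOW > valid && !feof(f))  // nearing the end of the buffer:
        {
            size_t tail = valid - pos;         // copy the remainder down...
            memmove(buf, buf + pos, tail);
            valid = tail + fread(buf + tail, 1, BUFFER - tail, f); // ...and refill
            pos = 0;
        }
    }
    delete[] buf;
}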
edit 2 - the actual implementation (removed)
Thank you all for the many insightful responses. It was hard to select a "best answer"; they were all excellent and helped with my coding.
I use a sliding window in one of my apps (actually, several layers of sliding windows working on top of each other, but that is outside the scope of this discussion). The window uses a memory-mapped file view via CreateFileMapping() and MapViewOfFile(), and I have an abstraction layer on top of that. I ask the abstraction layer for any range of bytes I need, and it ensures that the file mapping and file view are adjusted accordingly so those bytes are in memory. Every time a new range of bytes is requested, the file view is adjusted only if needed.
The file view is positioned and sized on page boundaries that are even multiples of the system granularity as reported by GetSystemInfo(). Just because a scan reaches the end of a given byte range does not necessarily mean it has reached the end of a page boundary yet, so the next scan may not need to alter the file view at all, the next bytes are already in memory. If the first requested byte of a range exceeds the right-hand boundary of a mapped page, the left edge of the file view is adjusted to the left-hand boundary of the requested page and any pages to the left are unmapped. If the last requested byte in the range exceeds the right-hand boundary of the right-most mapped page, a new page is mapped and added to the file view.
It sounds more complex than it really is to implement once you get into the coding of it:
Creating a View Within a File
It sounds like you are scanning bytes in fixed-sized blocks, so this approach is very fast and very efficient for that. Based on this technique, I can sequentially scan multi-gigabyte files from start to end fairly quickly, usually a minute or less on my slowest machine. If your files are smaller than the system granularity, or even just a few megabytes, you will hardly notice any time elapsed at all (unless your scans themselves are slow).
Update: here is a simplified variation of what I use:
class FileView
{
private:
    DWORD m_AllocGran;
    DWORD m_PageSize;
    HANDLE m_File;
    unsigned __int64 m_FileSize;
    HANDLE m_Map;
    unsigned __int64 m_MapSize;
    LPBYTE m_View;
    unsigned __int64 m_ViewOffset;
    DWORD m_ViewSize;

    void CloseMap()
    {
        CloseView();
        if (m_Map != NULL)
        {
            CloseHandle(m_Map);
            m_Map = NULL;
        }
        m_MapSize = 0;
    }

    void CloseView()
    {
        if (m_View != NULL)
        {
            UnmapViewOfFile(m_View);
            m_View = NULL;
        }
        m_ViewOffset = 0;
        m_ViewSize = 0;
    }

    bool EnsureMap(unsigned __int64 Size)
    {
        // do not exceed EOF or else the file on disk will grow!
        Size = min(Size, m_FileSize);
        if ((m_Map == NULL) ||
            (m_MapSize != Size))
        {
            // a new map is needed...
            CloseMap();
            ULARGE_INTEGER ul;
            ul.QuadPart = Size;
            m_Map = CreateFileMapping(m_File, NULL, PAGE_READONLY, ul.HighPart, ul.LowPart, NULL);
            if (m_Map == NULL)
                return false;
            m_MapSize = Size;
        }
        return true;
    }

    bool EnsureView(unsigned __int64 Offset, DWORD Size)
    {
        if ((m_View == NULL) ||
            (Offset < m_ViewOffset) ||
            ((Offset + Size) > (m_ViewOffset + m_ViewSize)))
        {
            // the requested range is not already in view...
            // round down the offset to the nearest allocation boundary
            unsigned __int64 ulNewOffset = ((Offset / m_AllocGran) * m_AllocGran);
            // round up the size to the next page boundary
            DWORD dwNewSize = ((((Offset - ulNewOffset) + Size) + (m_PageSize-1)) & ~(m_PageSize-1));
            // if the new view will exceed EOF, truncate it
            unsigned __int64 ulOffsetInFile = (ulNewOffset + dwNewSize);
            if (ulOffsetInFile > m_FileSize)
                dwNewSize -= (DWORD)(ulOffsetInFile - m_FileSize);
            if ((m_View == NULL) ||
                (m_ViewOffset != ulNewOffset) ||
                (m_ViewSize != dwNewSize))
            {
                // a new view is needed...
                CloseView();
                // make sure the memory map is large enough to contain the entire view
                if (!EnsureMap(ulNewOffset + dwNewSize))
                    return false;
                ULARGE_INTEGER ul;
                ul.QuadPart = ulNewOffset;
                m_View = (LPBYTE) MapViewOfFile(m_Map, FILE_MAP_READ, ul.HighPart, ul.LowPart, dwNewSize);
                if (m_View == NULL)
                    return false;
                m_ViewOffset = ulNewOffset;
                m_ViewSize = dwNewSize;
            }
        }
        return true;
    }

public:
    FileView() :
        m_AllocGran(0),
        m_PageSize(0),
        m_File(INVALID_HANDLE_VALUE),
        m_FileSize(0),
        m_Map(NULL),
        m_MapSize(0),
        m_View(NULL),
        m_ViewOffset(0),
        m_ViewSize(0)
    {
        // map views need to be positioned on even multiples
        // of the system allocation granularity. let's size
        // them on even multiples of the system page size...
        SYSTEM_INFO si = {0};
        GetSystemInfo(&si); // returns void, it cannot fail
        m_AllocGran = si.dwAllocationGranularity;
        m_PageSize = si.dwPageSize;
    }

    ~FileView()
    {
        CloseFile();
    }

    unsigned __int64 FileSize() const
    {
        return m_FileSize;
    }

    bool OpenFile(LPCTSTR FileName)
    {
        CloseFile();
        if ((m_AllocGran == 0) || (m_PageSize == 0))
            return false;
        HANDLE hFile = CreateFile(FileName, GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
        if (hFile == INVALID_HANDLE_VALUE)
            return false;
        ULARGE_INTEGER ul;
        ul.LowPart = GetFileSize(hFile, &ul.HighPart);
        if ((ul.LowPart == INVALID_FILE_SIZE) && (GetLastError() != 0))
        {
            CloseHandle(hFile);
            return false;
        }
        m_File = hFile;
        m_FileSize = ul.QuadPart;
        return true;
    }

    void CloseFile()
    {
        CloseMap();
        if (m_File != INVALID_HANDLE_VALUE)
        {
            CloseHandle(m_File);
            m_File = INVALID_HANDLE_VALUE;
        }
        m_FileSize = 0;
    }

    bool AccessBytes(unsigned __int64 Offset, DWORD Size, LPBYTE *Bytes, DWORD *Available)
    {
        if (Bytes) *Bytes = NULL;
        if (Available) *Available = 0;
        if ((m_FileSize != 0) && (Offset < m_FileSize))
        {
            // make sure the requested range is in view
            if (!EnsureView(Offset, Size))
                return false;
            // near EOF, the available bytes may be less than requested
            DWORD dwOffsetInView = (DWORD)(Offset - m_ViewOffset);
            if (Bytes) *Bytes = &m_View[dwOffsetInView];
            if (Available) *Available = min(m_ViewSize - dwOffsetInView, Size);
        }
        return true;
    }
};
Usage:
FileView fv;
if (fv.OpenFile(TEXT("C:\\path\\file.ext")))
{
    LPBYTE data;
    DWORD len;
    unsigned __int64 offset = 0, filesize = fv.FileSize();
    while (offset < filesize)
    {
        if (!fv.AccessBytes(offset, some size here, &data, &len))
            break; // error
        if (len == 0)
            break; // unexpected EOF
        // use data up to len bytes as needed...
        offset += len;
    }
    fv.CloseFile();
}
This code is designed to allow random jumping anywhere in the file at any data size. Since you are reading bytes sequentially, some of the logic can be simplified as needed.
Your new algorithm only pays 0.1% of the I/O inefficiencies... not worth worrying about.
To get further throughput improvement, you should take a closer look at the "do something" step. See whether you can reuse part of the result from an overlapping window. Check cache behavior. Check if there's a better algorithm for the same computation.
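For instance (an illustrative sketch, not part of the original answer): if the per-window work includes a sum or an Adler/Rabin-style checksum, it can be updated incrementally in O(1) per shift instead of being recomputed over the whole window:
#include <cstddef>

// XORs together the rolling sums of every window position, as a stand-in
// for whatever per-window "do something" actually consumes the sum.
unsigned RollingSums(const unsigned char* buf, size_t len, size_t window)
{
    unsigned sum = 0;
    for (size_t i = 0; i < window; ++i)   // first window: full computation
        sum += buf[i];
    unsigned result = sum;
    for (size_t pos = 1; pos + window <= len; ++pos)
    {
        sum += buf[pos + window - 1];     // byte entering the window
        sum -= buf[pos - 1];              // byte leaving the window
        result ^= sum;
    }
    return result;
}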
You have the basic I/O technique down. The easiest improvement you can make now is to pick a good buffer size. With some experimentation, you'll find that read performance increases quickly with buffer size until you hit about 16k, then performance begins to level out.
Your next task is probably to profile your code, and see where it is spending its time. When dealing with performance, it is always best to measure rather than guess. You don't mention what OS you're using, so I won't make any profiler recommendations.
You can also try to reduce the amount of copying/moving of data between your buffer and your workspace. Less copying is generally better. If you can process your data in-place instead of moving it to a new location, that's a win. (I see from your edits you're already doing this.)
Finally, if you're processing many gigabytes of archived information, you should consider keeping your data compressed. It will come as a surprise to many people that it is faster to read compressed data and then decompress it than it is to just read the decompressed data. My favorite algorithm for this purpose is LZO, which doesn't compress as well as some other algorithms but decompresses impressively fast. This kind of setup is only worth the engineering effort if:
- Your job is I/O bound.
- You are reading many GB of data.
- You're running the program frequently, so it saves you a lot of time to make it run faster.
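A minimal sketch of the read-then-decompress idea, assuming the miniLZO library and a (hypothetical) file format where each block is prefixed with its compressed size:
#include "minilzo.h"   // miniLZO; call lzo_init() once at program startup
#include <cstdio>

// Read one size-prefixed compressed block from f and decompress it into out.
bool ReadCompressedBlock(FILE* f, unsigned char* out, lzo_uint outCapacity, lzo_uint* outLen)
{
    unsigned int compressedLen = 0;    // assumed block header: the compressed size
    if (fread(&compressedLen, sizeof(compressedLen), 1, f) != 1)
        return false;
    unsigned char* tmp = new unsigned char[compressedLen];
    bool ok = (fread(tmp, 1, compressedLen, f) == compressedLen);
    *outLen = outCapacity;             // in: capacity of out; out: decompressed size
    ok = ok && (lzo1x_decompress_safe(tmp, compressedLen, out, outLen, NULL) == LZO_E_OK);
    delete[] tmp;
    return ok;
}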

Mapping large files using MapViewOfFile

I have a very large file and I need to read it in small pieces and then process each piece. I'm using the MapViewOfFile function to map a piece into memory, but after reading the first part I can't read the second; it throws an exception when I try to map it.
char *tmp_buffer = new char[bufferSize];
LPCWSTR input = L"input";
OFSTRUCT tOfStr;
tOfStr.cBytes = sizeof tOfStr;
HANDLE inputFile = (HANDLE)OpenFile(inputFileName, &tOfStr, OF_READ);
HANDLE fileMap = CreateFileMapping(inputFile, NULL, PAGE_READONLY, 0, 0, input);
while (offset < fileSize)
{
    long k = 0;
    bool cutted = false;
    offset -= tempBufferSize;
    if (fileSize - offset <= bufferSize)
    {
        bufferSize = fileSize - offset;
    }
    char *buffer = new char[bufferSize + tempBufferSize];
    for (int i = 0; i < tempBufferSize; i++)
    {
        buffer[i] = tempBuffer[i];
    }
    char *tmp_buffer = new char[bufferSize];
    LPCWSTR input = L"input";
    HANDLE inputFile;
    OFSTRUCT tOfStr;
    tOfStr.cBytes = sizeof tOfStr;
    long long offsetHigh = ((offset >> 32) & 0xFFFFFFFF);
    long long offsetLow = (offset & 0xFFFFFFFF);
    tmp_buffer = (char *)MapViewOfFile(fileMap, FILE_MAP_READ, (int)offsetHigh, (int)offsetLow, bufferSize);
    memcpy(&buffer[tempBufferSize], &tmp_buffer[0], bufferSize);
    UnmapViewOfFile(tmp_buffer);
    offset += bufferSize;
    offsetHigh = ((offset >> 32) & 0xFFFFFFFF);
    offsetLow = (offset & 0xFFFFFFFF);
    if (offset < fileSize)
    {
        char *next;
        next = (char *)MapViewOfFile(fileMap, FILE_MAP_READ, (int)offsetHigh, (int)offsetLow, 1);
        if (next[0] >= '0' && next[0] <= '9')
        {
            cutted = true;
        }
        UnmapViewOfFile(next);
    }
    ostringstream path_stream;
    path_stream << tempPath << splitNum;
    ProcessChunk(buffer, path_stream.str(), cutted, bufferSize);
    delete buffer;
    cout << (splitNum + 1) << " file(s) sorted" << endl;
    splitNum++;
}
One possibility is that you're not using an offset that's a multiple of the allocation granularity. From MSDN:
The combination of the high and low offsets must specify an offset within the file mapping. They must also match the memory allocation granularity of the system. That is, the offset must be a multiple of the allocation granularity. To obtain the memory allocation granularity of the system, use the GetSystemInfo function, which fills in the members of a SYSTEM_INFO structure.
If you try to map at something other than a multiple of the allocation granularity, the mapping will fail and GetLastError will return ERROR_MAPPED_ALIGNMENT.
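For illustration, one way to handle an arbitrary offset (a sketch reusing the question's fileMap, offset and bufferSize): round the offset down to a granularity boundary and compensate in the returned pointer:
SYSTEM_INFO si;
GetSystemInfo(&si);
// round the requested offset down to an allocation-granularity boundary
unsigned long long aligned = (offset / si.dwAllocationGranularity) * si.dwAllocationGranularity;
DWORD delta = (DWORD)(offset - aligned);
char *view = (char *)MapViewOfFile(fileMap, FILE_MAP_READ,
                                   (DWORD)(aligned >> 32), (DWORD)(aligned & 0xFFFFFFFF),
                                   bufferSize + delta);   // map enough to cover the lead-in
char *data = view + delta;   // points at the byte that was actually requested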
Other than that, there are many problems in the code sample that make it very difficult to see what you're trying to do and where it's going wrong. At a minimum, you need to solve the memory leaks. You seem to be allocating and then leaking completely unnecessary buffers. Giving them better names can make it clear what they are actually used for.
Then I suggest putting a breakpoint on the calls to MapViewOfFile, and then checking all of the parameter values you're passing in to make sure they look right. As a start, on the second call, you'd expect offsetHigh to be 0 and offsetLow to be bufferSize.
A few suspicious things off the bat:
HANDLE inputFile = (HANDLE)OpenFile(inputFileName, &tOfStr, OF_READ);
Every cast should make you suspicious. Sometimes they are necessary, but make sure you understand why. At this point you should ask yourself why every other file API you're using requires a HANDLE and this function returns an HFILE. If you check OpenFile documentation, you'll see, "This function has limited capabilities and is not recommended. For new application development, use the CreateFile function." I know that sounds confusing because you want to open an existing file, but CreateFile can do exactly that, and it returns the right type.
long long offsetHigh = ((offset >> 32) & 0xFFFFFFFF);
What type is offset? You probably want to make sure it's an unsigned long long or equivalent. When bitshifting, especially to the right, you almost always want an unsigned type to avoid sign-extension. You also have to make sure that it's a type that has more bits than the amount you're shifting by--shifting a 32-bit value by 32 (or more) bits is actually undefined in C and C++, which allows the compilers to do certain types of optimizations.
long long offsetLow = (offset & 0xFFFFFFFF);
In both of these statements, you have to be careful about the 0xFFFFFFFF value. Since you didn't cast it or give it a suffix, it can be hard to predict whether the compiler will treat it as an int or unsigned int. In this case, it'll be an unsigned int, but that won't be obvious to many people. In fact, I got this wrong when I first wrote this answer. [This paragraph corrected 16-MAY-2017] With bitwise operations, you almost always want to make sure you're using unsigned values.
tmp_buffer = (char *)MapViewOfFile(fileMap, FILE_MAP_READ, (int)offsetHigh, (int)offsetLow, bufferSize);
You're casting offsetHigh and offsetLow to ints, which are signed values. The API actually wants DWORDs, which are unsigned values. Rather than casting in the call, I would declare offsetHigh and offsetLow as DWORDs and do the casting in the initialization, like this:
DWORD offsetHigh = static_cast<DWORD>((offset >> 32) & 0xFFFFFFFFul);
DWORD offsetLow = static_cast<DWORD>( offset & 0xFFFFFFFFul);
tmp_buffer = reinterpret_cast<char *>(MapViewOfFile(fileMap, FILE_MAP_READ, offsetHigh, offsetLow, bufferSize));
Those fixes may or may not resolve your problem. It's hard to tell what's going on from the incomplete code sample.
Here's a working sample you can compare to:
// Calls ProcessChunk with each chunk of the file.
void ReadInChunks(const WCHAR *pszFileName) {
    // Offsets must be a multiple of the system's allocation granularity. We
    // guarantee this by making our view size equal to the allocation granularity.
    SYSTEM_INFO sysinfo = {0};
    ::GetSystemInfo(&sysinfo);
    DWORD cbView = sysinfo.dwAllocationGranularity;

    HANDLE hfile = ::CreateFileW(pszFileName, GENERIC_READ, FILE_SHARE_READ,
                                 NULL, OPEN_EXISTING, 0, NULL);
    if (hfile != INVALID_HANDLE_VALUE) {
        LARGE_INTEGER file_size = {0};
        ::GetFileSizeEx(hfile, &file_size);
        const unsigned long long cbFile =
            static_cast<unsigned long long>(file_size.QuadPart);

        HANDLE hmap = ::CreateFileMappingW(hfile, NULL, PAGE_READONLY, 0, 0, NULL);
        if (hmap != NULL) {
            for (unsigned long long offset = 0; offset < cbFile; offset += cbView) {
                DWORD high = static_cast<DWORD>((offset >> 32) & 0xFFFFFFFFul);
                DWORD low  = static_cast<DWORD>( offset        & 0xFFFFFFFFul);
                // The last view may be shorter.
                if (offset + cbView > cbFile) {
                    cbView = static_cast<DWORD>(cbFile - offset);
                }
                const char *pView = static_cast<const char *>(
                    ::MapViewOfFile(hmap, FILE_MAP_READ, high, low, cbView));
                if (pView != NULL) {
                    ProcessChunk(pView, cbView);
                    ::UnmapViewOfFile(pView);
                }
            }
            ::CloseHandle(hmap);
        }
        ::CloseHandle(hfile);
    }
}
You have a memory leak in your code:
char *tmp_buffer = new char[bufferSize];
[ ... ]
while (offset < fileSize)
{
[ ... ]
char *tmp_buffer = new char[bufferSize];
[ ... ]
tmp_buffer = (char *)MapViewOfFile(fileMap, FILE_MAP_READ, (int)offsetHigh, (int)offsetLow, bufferSize);
[ ... ]
}
You never delete what you allocate via new char[] in each iteration. If your file is large enough / you do enough iterations of this loop, the memory allocation will eventually fail - that's when you'll see a throw() from the allocator.
Win32 API calls like MapViewOfFile() are not C++ and never throw; they return error codes (MapViewOfFile() returns NULL on failure). Therefore, if you see exceptions, something is wrong in your C++ code. Likely the above.
I also had some trouble with memory-mapped files.
Basically, I just wanted to share memory (1 MB) between 2 apps on the same PC.
- Both apps were written in Delphi
- Using Windows 8 Pro
At first, one application (the first one launched) could read and write the memory-mapped file, but the second one could only read it (error 5: Access Denied).
Finally, after a lot of testing, it suddenly worked when both applications were using CreateFileMapping. I even tried to create my own security descriptor; nothing helped.
Before that, my applications were first calling OpenFileMapping and then CreateFileMapping if the first call failed.
Another thing that misled me is that the handles, although visibly referencing the same memory-mapped file, were different in the two applications.
One last thing: after this correction my application seemed to work all right, but after a while I got ERROR_NOT_ENOUGH_MEMORY when calling MapViewOfFile.
It was just a beginner's mistake on my part: I was not always calling UnmapViewOfFile.

Strange behaviour of memory mapped file, some observations and some questions

Please look at this code below.
#include <windows.h>

void Write(char *pBuffer)
{
    // pBuffer -= 4*sizeof(int);
    for (int i = 0; i < 20; i++)
        *(pBuffer + sizeof(int)*i) = i+1;
}

void main()
{
    HANDLE hFile = ::CreateFile("file", GENERIC_READ|GENERIC_WRITE, 0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (INVALID_HANDLE_VALUE == hFile)
    {
        ::MessageBox(NULL, "", "Error", 0);
        return;
    }
    HANDLE hMMF = ::CreateFileMapping(hFile, NULL, PAGE_READWRITE, 0, 32, NULL);
    char *pBuffer = (char*)::MapViewOfFile(hMMF, FILE_MAP_WRITE, 0, 0, 0);
    Write(pBuffer);
    ::FlushViewOfFile(pBuffer, 100);
    ::UnmapViewOfFile(pBuffer);
}
I have allocated only 32 bytes, yet when I attempt to write past the allocated size I don't get any error at all. Is this by design, or is it a bug in the Windows code? However, if you include the commented-out part, it gives an error, as expected.
I ask this because I am thinking of using this "feature" to my advantage. Can I? FYI, I have Win XP ver 2002 SP3, but I suspect this to be "fixed" in newer Windows versions, which might break my code, IDK. Any useful link explaining some internals of this would really help.
Thanks
This isn't any different than writing past the end of a buffer that's allocated on the heap. The operating system can only slap your fingers if you write to virtual memory that isn't mapped. Mapping is page-based; one page is 4096 bytes. You'll have to write past this page to get the kaboom. Change your for-loop to end at (4096+4)/4 to repro it.
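A repro sketch of that suggestion (assuming a 4096-byte page; a drop-in replacement for the question's Write() function):
void Write(char *pBuffer)
{
    // Only 32 bytes were requested, but protection is per 4096-byte page:
    // every write inside the first page succeeds; the write at offset 4096 faults.
    for (int i = 0; i < (4096 + 4) / 4; i++)
        *(pBuffer + sizeof(int)*i) = i + 1;
}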
The virtual memory manager has to map memory by the page, so the extent will in effect be rounded up to the nearest 4kB (or whatever your system page size is).
I don't think it's documented whether writes into the same page as mapped data, but beyond the end of the mapping, will be committed back to the file. So don't rely on that behavior, it could easily change between Windows versions.

Files reading differently in linux? C++

I'm a fairly new programmer, but I consider my google-fu quite competent and I've spent several hours searching.
I've got a simple SDL application that reads from a binary file (2 bytes as a magic number, then 5 bytes per "tile"). It then displays each tile in the buffer; the bytes determine the x, y, id, passability and such.
So it's just level loading really.
It runs fine on any Windows computer (tested Windows Server 2008, 7 64-bit and 7 32-bit), but when I compile it on Linux it displays random tiles in random positions.
I'd be tempted to say it's reading from the wrong portion of RAM, but I implemented the magic number so it'd return an error if the first 2 bytes were off.
I'd love to figure this out myself, but it's bugging me to hell now and I can't progress much further with it unless I can program on the move (my laptop runs Linux).
I'm using g++ on Linux, mingw32-g++ on Windows.
bool loadlevel(int level_number)
{
    int length;
    std::string filename;
    filename = "Levels/level";
    filename += level_number + 48;
    filename += ".lvl";

    std::ifstream level;
    level.open(filename.c_str(), std::ios::in|std::ios::binary);
    level.seekg(0, std::ios::end);
    length = level.tellg();
    level.seekg(0, std::ios::beg);
    char buffer[length];
    level.read(buffer, length);

    if (buffer[0] == 0x49 && buffer[1] == 0x14)
    {
        char tile_buffer[BYTES_PER_TILE];
        int buffer_place = 1;
        while (buffer_place < length)
        {
            for (int i = 1; i <= BYTES_PER_TILE; i++)
            {
                tile_buffer[i] = buffer[buffer_place+1];
                buffer_place++;
            }
            apply_surface(tile_buffer[1], tile_buffer[2], tiles, screen, &clip[tile_buffer[3]]);
        }
    }
    else
    {
        // File is invalid
        return false;
    }
    level.close();
    return true;
}
Thanks in advance!
Your array handling is incorrect.
Array indexing in C/C++ begins from 0.
You have defined 'tile_buffer' to be an array sized 'BYTES_PER_TILE'.
If BYTES_PER_TILE was 5, your array would have elements tile_buffer[0] to tile_buffer[4].
In your inner for-loop you loop from 1 to 5 so a buffer overflow will occur.
I don't know if this is the cause of your problem but it certainly won't help matters.
This is probably not an answer, but the 1-based array handling and the unneeded copying make my head hurt.
Why not just do something along these lines?
if ((length >= 2 + BYTES_PER_TILE) && (buf[0] == CONST1) && (buf[1] == CONST2)) {
    for (char *tile = &buf[2]; tile <= &buf[length - BYTES_PER_TILE]; tile += BYTES_PER_TILE) {
        apply_surface(tile[0], tile[1], tiles, screen, &clip[tile[2]]);
    }
}
}