Reading data off of a cluster - C++

I need help reading data off of the last cluster of a file using CreateFile() and then using ReadFile(). First I'm stuck with a zero result for my ReadFile() because I think I have incorrect permissions set up in CreateFile().
/********** CreateFile for volume **********/
HANDLE hDevice = INVALID_HANDLE_VALUE;
hDevice = CreateFile(L"\\\\.\\C:",
    0,
    FILE_SHARE_READ |
    FILE_SHARE_WRITE,
    NULL,
    OPEN_EXISTING,
    0,
    NULL);
if (hDevice == INVALID_HANDLE_VALUE)
{
    wcout << "error at hDevice at CreateFile " << endl;
    system("pause");
}
/******* Read file from the volume *********/
DWORD nRead;
TCHAR buff[4096];
if (BOOL fileFromVol = ReadFile(
        hDevice,
        buff,
        4096,
        &nRead,
        NULL
    ) == 0) {
    cout << "Error with fileFromVol" << "\n\n";
    system("pause");
}
Next, I have all the cluster information and file information I need (file size, last cluster location of the file, # of clusters on disk, cluster size, etc.). How do I set the pointer on the volume to start at a specified cluster location so I can read/write data from it?

The main problem is that you specify 0 for dwDesiredAccess. In order to read the data you should specify FILE_READ_DATA.
On top of that, I seriously question the use of TCHAR. That's appropriate for text when you need to support Windows 9x. Quite apart from not needing to support Windows 9x anymore, this data is not text. Your buffer should be of type unsigned char.
Obviously you need the buffer to be a multiple of the cluster size. You've hard coded 4096, but the real code should surely query the cluster size.
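For instance, GetDiskFreeSpace reports the volume's sector and cluster geometry; a minimal sketch (the C: drive is assumed here):

DWORD sectorsPerCluster, bytesPerSector, freeClusters, totalClusters;
if (GetDiskFreeSpace(L"C:\\", &sectorsPerCluster, &bytesPerSector,
                     &freeClusters, &totalClusters))
{
    DWORD clusterSize = sectorsPerCluster * bytesPerSector; // e.g. 8 * 512 = 4096
    // size the read buffer from clusterSize instead of hard coding 4096
}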
When either of these API calls fails, it indicates a failure reason in the last-error value, which you can obtain by calling GetLastError. When your ReadFile fails, GetLastError will return ERROR_ACCESS_DENIED.
You can seek in the volume by calling SetFilePointerEx. Again, you will need to seek to multiples of the cluster size.
LARGE_INTEGER dist;
dist.QuadPart = ClusterNum * ClusterSize;
BOOL res = SetFilePointerEx(hFile, dist, nullptr, FILE_BEGIN);
if (!res)
    // handle error
If you are reading sequentially then there's no need to set the file pointer. The call to ReadFile will advance it automatically.
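Putting the pieces together, a minimal corrected sketch; clusterNum is a hypothetical value standing in for your own cluster bookkeeping, and 4096 stands in for the queried cluster size:

HANDLE hDevice = CreateFile(L"\\\\.\\C:",
    FILE_READ_DATA,                       // read access instead of 0
    FILE_SHARE_READ | FILE_SHARE_WRITE,
    NULL, OPEN_EXISTING, 0, NULL);
if (hDevice == INVALID_HANDLE_VALUE)
{
    wcout << L"CreateFile failed, error " << GetLastError() << endl;
    return 1;
}
unsigned char buff[4096];                 // one cluster of raw bytes, not TCHARs
LONGLONG clusterNum = 12345;              // hypothetical target cluster
LARGE_INTEGER dist;
dist.QuadPart = clusterNum * 4096LL;      // byte offset of the target cluster
DWORD nRead = 0;
if (!SetFilePointerEx(hDevice, dist, nullptr, FILE_BEGIN) ||
    !ReadFile(hDevice, buff, sizeof(buff), &nRead, NULL))
{
    wcout << L"I/O failed, error " << GetLastError() << endl;
}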

When doing random-access I/O, just don't mess with the file pointer stored in the file handle at all. Instead, use an OVERLAPPED structure and specify the location for each and every I/O operation.
This works even for synchronous I/O (if the file is opened without FILE_FLAG_OVERLAPPED).
Of course, as David mentioned you will get ERROR_ACCESS_DENIED if you perform operations using a file handle opened without sufficient access.
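For example, to read one cluster at an absolute byte offset without touching the handle's file pointer, a sketch (hDevice and buff as above; the cluster values are hypothetical and should be queried or computed):

OVERLAPPED ov = {};                   // zero-initialized; no event needed for synchronous I/O
ULONGLONG clusterSize = 4096;         // hypothetical; query the real value
ULONGLONG clusterNum = 12345;         // hypothetical target cluster
ULONGLONG offset = clusterNum * clusterSize;
ov.Offset     = (DWORD)(offset & 0xFFFFFFFF);
ov.OffsetHigh = (DWORD)(offset >> 32);
DWORD nRead = 0;
if (!ReadFile(hDevice, buff, (DWORD)clusterSize, &nRead, &ov))
{
    // with a synchronous handle, ReadFile returns only after the I/O completes
    wcout << L"ReadFile failed, error " << GetLastError() << endl;
}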

Related

Read/Write to unformatted SD Card on Windows

I am trying to read/write to an SD card that is unformatted, and I am having issues. I am using the Windows API to open a handle to the SD card and read/write to it; however, I get various errors depending on my approach.
Below is my attempt to access the SD card by volume label:
HANDLE sdCardHandle = CreateFile("\\\\.\\E:", GENERIC_READ | GENERIC_WRITE,
    FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
if (sdCardHandle == INVALID_HANDLE_VALUE)
{
    CloseHandle(sdCardHandle);
    return;
}
// I have also tried using VirtualAlloc() to get a sector aligned buffer
uint8_t buffer[512] = { 0 };
DWORD bytesWritten = 0;
if (WriteFile(sdCardHandle, buffer, 512, &bytesWritten, NULL) != TRUE)
{
    DWORD lastError = GetLastError();
    CloseHandle(sdCardHandle);
    return;
}
However, the WriteFile fails and the last error is 87, which is ERROR_INVALID_PARAMETER. I have also tried locking the volume and unmounting it before writing, but that failed as well.
The next attempt was to try and write to the physical drive instead by running the following in administrator mode:
HANDLE sdCardHandle = CreateFile("\\\\.\\PhysicalDrive1", GENERIC_READ | GENERIC_WRITE,
    FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
if (sdCardHandle == INVALID_HANDLE_VALUE)
{
    CloseHandle(sdCardHandle);
    return;
}
// I have also tried using VirtualAlloc() to get a sector aligned buffer
uint8_t buffer[512] = { 0 };
DWORD bytesWritten = 0;
if (WriteFile(sdCardHandle, buffer, 512, &bytesWritten, NULL) != TRUE)
{
    DWORD lastError = GetLastError();
    CloseHandle(sdCardHandle);
    return;
}
This also fails, but returns error 23, which is a bad-CRC error. I have also tried unmounting and locking the volume first, but nothing changed. If there is anything else I need to do or try, please let me know.
Thank you everyone for all of your help and suggestions. It turns out I was doing the operation correctly the entire time; the SD card reader was causing the error. I believe BitDefender might not have been allowing the read/write operations to go out to the physical disk. I instead used a USB adapter that presents the SD card as a USB drive, and my read/write works! Hopefully this helps anyone having a similar issue.
From the CreateFile documentation:
Volume handles can be opened as noncached at the discretion of the particular file system, even when the noncached option is not specified in CreateFile. You should assume that all Microsoft file systems open volume handles as noncached. The restrictions on noncached I/O for files also apply to volumes.
So we need to assume that FILE_FLAG_NO_BUFFERING (FILE_NO_INTERMEDIATE_BUFFERING) is in effect:
Specifying this flag places the following restrictions on the caller's parameters to other ZwXxxFile routines.
- Any optional ByteOffset passed to NtReadFile or NtWriteFile must be a multiple of the sector size.
- The Length passed to NtReadFile or NtWriteFile must be an integral of the sector size. Note that specifying a read operation to a buffer whose length is exactly the sector size might result in a lesser number of significant bytes being transferred to that buffer if the end of the file was reached during the transfer.
- Buffers must be aligned in accordance with the alignment requirement of the underlying device. To obtain this information, call NtCreateFile to get a handle for the file object that represents the physical device, and pass that handle to NtQueryInformationFile. For a list of the system's FILE_XXX_ALIGNMENT values, see DEVICE_OBJECT.
Note that the information here - Alignment and File Access Requirements - is wrong:
File access buffer addresses for read and write operations should be physical sector-aligned, which means aligned on addresses in memory that are integer multiples of the volume's physical sector size. Depending on the disk, this requirement may not be enforced.
This is false: buffer addresses for read and write operations need not be physical-sector-aligned. They must be aligned in accordance with the alignment requirement of the underlying device, which is an entirely different thing.
We can get this alignment from FILE_ALIGNMENT_INFO (Windows 8+) or from FILE_ALIGNMENT_INFORMATION via NtQueryInformationFile with FileAlignmentInformation.
In your current code you hard-code the buffer size to 512, but the device's sector size can be larger.
// I have also tried using VirtualAlloc() to get a sector aligned buffer
As I said, you do not need a sector-aligned buffer (the usual device alignment requirement is 2-4 bytes), but you do need a buffer whose size is a multiple of the sector size. So before reading data, you first need to query the sector size and the device's required alignment:
HANDLE sdCardHandle = CreateFile(L"\\\\.\\PhysicalDrive1", GENERIC_READ | GENERIC_WRITE,
    FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
if (sdCardHandle != INVALID_HANDLE_VALUE)
{
    FILE_ALIGNMENT_INFO fai;
    if (GetFileInformationByHandleEx(sdCardHandle, FileAlignmentInfo, &fai, sizeof(fai)))
    {
        ULONG BytesReturned;
        STORAGE_ACCESS_ALIGNMENT_DESCRIPTOR saad;
        STORAGE_PROPERTY_QUERY spq = { StorageAccessAlignmentProperty, PropertyStandardQuery };
        if (DeviceIoControl(sdCardHandle, IOCTL_STORAGE_QUERY_PROPERTY, &spq, sizeof(spq),
                            &saad, sizeof(saad), &BytesReturned, 0))
        {
            // over-allocate so the pointer can be rounded up to the required alignment
            if (PBYTE pb = new BYTE[saad.BytesPerPhysicalSector + fai.AlignmentRequirement])
            {
                PBYTE buf = (PBYTE)(((ULONG_PTR)pb + fai.AlignmentRequirement) & ~(ULONG_PTR)fai.AlignmentRequirement);
                if (ReadFile(sdCardHandle, buf, saad.BytesPerPhysicalSector, &BytesReturned, 0))
                {
                    __nop(); // success - one physical sector is now in buf
                }
                else
                {
                    GetLastError(); // RtlGetLastNtStatus();
                }
                delete[] pb;
            }
        }
    }
    CloseHandle(sdCardHandle);
}
Also, as a separate note: when you use OPEN_EXISTING, any file attributes are ignored (they are used only when you create a new file). As a result, passing FILE_ATTRIBUTE_NORMAL is pointless - not an error, it is simply ignored.

[WIN API] Why does sharing the same HANDLE between WriteFile (sync) and ReadFile (sync) cause a ReadFile error?

I've searched MSDN but did not find any information about sharing the same HANDLE between both WriteFile and ReadFile. NOTE: I did not use the CREATE_ALWAYS flag, so there's no chance of the file being replaced with an empty file.
The reason I tried to use the same HANDLE was performance concerns. My code basically downloads some data (writes it to a file), reads it immediately, then deletes it.
In my understanding, a file HANDLE is just an address in memory that also serves as an entry point for doing I/O.
This is how the error occurs:
CreateFile(OK) --> WriteFile(OK) --> GetFileSize(OK) --> ReadFile(Failed) --> CloseHandle(OK)
Since WriteFile was called synchronously, there should be no problem with this ReadFile action; even the GetFileSize call after WriteFile returns the correct value (the new, modified file size)! But in fact, ReadFile reads the value from before the modification (lpNumberOfBytesRead is always the old value). A thought just came to my mind: caching!
Then I tried to learn more about Windows file caching, which I had no knowledge of. I even tried the FILE_FLAG_NO_BUFFERING flag and the FlushFileBuffers function, but no luck. Of course I know I can call CloseHandle and CreateFile again between WriteFile and ReadFile; I just wonder if there's some way to achieve this without calling CreateFile again.
Above is the minimum statement of my question; below is the demo code I made for this concept:
int main()
{
    HANDLE hFile = CreateFile(L"C://temp//TEST.txt", GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH, NULL);
    // step one: write 12345 to file
    std::string test = "12345";
    char* pszOutBuffer;
    pszOutBuffer = (char*)malloc(strlen(test.c_str()) + 1);   // create buffer for 12345 plus a null terminator
    ZeroMemory(pszOutBuffer, strlen(test.c_str()) + 1);       // fill the buffer with 0
    memcpy(pszOutBuffer, test.c_str(), strlen(test.c_str())); // copy 12345 to buffer
    DWORD wmWritten;
    WriteFile(hFile, pszOutBuffer, strlen(test.c_str()), &wmWritten, NULL); // write 12345 to file
    // according to MSDN this flushes the buffer
    FlushFileBuffers(hFile);
    std::cout << "bytes written to file(num):" << wmWritten << std::endl; // output is 5 as expected; 5 bytes have been written to the file
    // step two: GetFileSize and read file
    // get file size of C://temp//TEST.txt
    DWORD dwFileSize = 0;
    dwFileSize = GetFileSize(hFile, NULL);
    if (dwFileSize == INVALID_FILE_SIZE)
    {
        return -1; // unable to get file size
    }
    std::cout << "GetFileSize result is:" << dwFileSize << std::endl; // output is 5 as expected
    char* bufFstream;
    bufFstream = (char*)malloc(sizeof(char) * (dwFileSize + 1)); // create buffer of file size plus a null terminator
    memset(bufFstream, 0, sizeof(char) * (dwFileSize + 1));
    std::cout << "created a buffer for ReadFile with size:" << dwFileSize + 1 << std::endl; // output is 6 as expected
    if (bufFstream == NULL) {
        return -1; // ERROR_MEMORY
    }
    DWORD nRead = 0;
    bool bBufResult = ReadFile(hFile, bufFstream, dwFileSize, &nRead, NULL); // dwFileSize is 5 here
    if (!bBufResult) {
        free(bufFstream);
        return -1; // copy file into buffer failed
    }
    std::cout << "nRead is:" << nRead << std::endl; // !!!got nRead 0 here!!! why?
    CloseHandle(hFile);
    free(pszOutBuffer);
    free(bufFstream);
    return 0;
}
Then the output is:
bytes written to file(num):5
GetFileSize result is:5
created a buffer for ReadFile with size:6
nRead is:0
nRead should be 5, not 0.
Win32 files have a single file pointer, used for both reads and writes; after the WriteFile it is at the end of the file, so a read from there finds nothing left to read. To read what you just wrote you have to reposition the file pointer at the start of the file, using the SetFilePointer function.
Also, the FlushFileBuffers isn't needed - the operating system ensures that reads and writes on the file handle see the same state, regardless of the status of the buffers.
After the first write, the file cursor points at the end of the file. There is nothing to read there. You can rewind it back to the beginning using SetFilePointer:
::DWORD const result(::SetFilePointer(hFile, 0, nullptr, FILE_BEGIN));
if (INVALID_SET_FILE_POINTER == result)
{
    ::DWORD const last_error(::GetLastError());
    if (NO_ERROR != last_error)
    {
        // TODO do error handling...
    }
}
When you try to read the file, from what position are you trying to read?
The FILE_OBJECT maintains a "current" position (the CurrentByteOffset member), which is used as the default position when you read or write the file (for synchronous handles only - those opened without FILE_FLAG_OVERLAPPED!). This position is updated (moved forward by n bytes) after every read or write of n bytes.
The best solution is to always pass an explicit file offset to ReadFile (or WriteFile). This offset goes in the last parameter, lpOverlapped - see the Offset and OffsetHigh members: the read operation starts at the offset that is specified in the OVERLAPPED structure.
Using this is more efficient and simpler than calling the separate API SetFilePointer, which adjusts the CurrentByteOffset member in the FILE_OBJECT (and which does not work for asynchronous file handles, i.e. those created with the FILE_FLAG_OVERLAPPED flag).
Despite very common confusion, OVERLAPPED is not only for asynchronous I/O - it is simply an additional parameter to ReadFile (or WriteFile) and can always be used, with any file handle.
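Applied to the demo above, a sketch of the read-back with an explicit offset of zero (the handle is synchronous, so ReadFile completes before returning):

OVERLAPPED ov = {}; // Offset and OffsetHigh are 0: read from the start of the file
DWORD nRead = 0;
if (ReadFile(hFile, bufFstream, dwFileSize, &nRead, &ov))
{
    // nRead is now 5, regardless of where WriteFile left the current position
}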

Searching for structures in a continuous, unstructured file stream

I am trying to figure out a (hopefully easy) way to read a large, unstructured file without bumping into the edge of a buffer. An example is helpful here.
Imagine you are trying to do some data recovery of a 16GB flash drive and have saved a dump of the drive to a 16GB file. You want to scan through the image, looking for certain items of interest. If the file were smaller, you could read the entire thing into a memory buffer and do a simple scan through it. However, because it is too big to read in all at once, you need to read it in chunks (let's say 1MB each). The problem is that an item of interest may not be perfectly aligned so as to fall within a single 1MB buffer. In other words, it may end up straddling the edge of the buffer, so that it starts at the end of the buffer during one read and ends in the next one (or even further).
At one time in the past, I dealt with this by using two buffers and copying the second one into the first to create a sort of sliding window. However, I imagine this is a common enough scenario that better, existing solutions are out there. I looked into memory-mapped files, thinking that they would let me read the file by simply incrementing an array index/pointer, but I ended up in exactly the same situation as before due to the limit on the map-view size. I tried looking for practical examples of using MapViewOfFile with offsets, but all I could find were contrived examples that skipped that part.
How is this situation normally handled?
If you are running in a 64 bit environment, I would just use memory mapped files. There is no (reasonable) memory limit for a process. You can read the file in, even jump around, and the OS will swap memory to and from disk.
Here's some basic information:
http://msdn.microsoft.com/en-us/library/ms810613.aspx
And an example of a file viewer here:
http://www.catch22.net/tuts/memory-techniques-part-1
This code works on a 2.8GB file in x64, but fails in Win32 because it cannot allocate more than 2GB per process. It is very fast since it touches only the first and last byte in the pBuf array. Modifying the method to traverse the buffer and count the number of zero bytes works as expected. You can watch the memory footprint go up as it runs, but that memory is only virtually allocated.
#include "stdafx.h"
#include <string>
#include <Windows.h>
TCHAR szName[] = TEXT( pathToFile );
int _tmain(int argc, _TCHAR* argv[])
{
HANDLE hMapFile;
char* pBuf;
HANDLE file = CreateFile( szName, GENERIC_READ, FILE_SHARE_READ, 0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
if ( file == NULL )
{
_tprintf(TEXT("Could not open file object (%d).\n"),
GetLastError());
return 1;
}
unsigned int length = GetFileSize(file, 0);
printf( "Length = %u\n", length );
hMapFile = CreateFileMapping( file, 0, PAGE_READONLY, 0, 0, 0 );
if (hMapFile == NULL)
{
_tprintf(TEXT("Could not create file mapping object (%d).\n"), GetLastError());
return 1;
}
pBuf = (char*) MapViewOfFile(hMapFile, FILE_MAP_READ, 0,0, length);
if (pBuf == NULL)
{
_tprintf(TEXT("Could not map view of file (%d).\n"), GetLastError());
CloseHandle(hMapFile);
return 1;
}
printf("First Byte: 0x%02x\n", pBuf[0] );
printf("Last Byte: 0x%02x\n", pBuf[length-1] );
UnmapViewOfFile(pBuf);
CloseHandle(hMapFile);
return 0;
}
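To address the original sliding-window concern: you can also map a smaller window at an arbitrary offset, as long as the offset is a multiple of the system allocation granularity (typically 64KB). A sketch, assuming hMapFile was created as above (the offset value is hypothetical, and the window must not extend past the end of the file):

SYSTEM_INFO si;
GetSystemInfo(&si);                                 // si.dwAllocationGranularity is usually 65536
ULONGLONG desired = 0x12345678ULL;                  // hypothetical byte offset of interest
ULONGLONG base = desired - (desired % si.dwAllocationGranularity);
SIZE_T viewSize = 2 * 1024 * 1024;                  // 2MB window; size it to overlap your largest record
char* view = (char*)MapViewOfFile(hMapFile, FILE_MAP_READ,
    (DWORD)(base >> 32), (DWORD)(base & 0xFFFFFFFF), viewSize);
if (view)
{
    char* p = view + (desired - base);              // the byte you actually wanted
    // ... scan the window, then slide by remapping at the next aligned base ...
    UnmapViewOfFile(view);
}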

Using WriteFile to fill up a cluster

I want to use WriteFile to fill up the end of every file until it reaches the end of its last cluster. Then I want to delete what I wrote and repeat the process (attempting to get rid of data that might have been there).
I have 2 issues:
WriteFile gives me an error: ERROR_INVALID_PARAMETER
Depending on the type of file, WriteFile() gives me different results
For the first issue, I realized that the nNumberOfBytesToWrite parameter of WriteFile() has to be a multiple of the bytes per sector (in my case, 512 bytes). Is this a limitation of the function, or am I doing something wrong?
For my second issue, I'm using two dummy files (.txt and .html) on an external hard drive to write random data to. In the case of the .txt file, the data is written to the end of the file, which is what I need. However, the .html file just writes to the beginning of the file and replaces any data that was already there.
Here are some code snippets relevant to my issue:
hFile = CreateFile(result,
    GENERIC_READ | GENERIC_WRITE | FILE_READ_ATTRIBUTES,
    FILE_SHARE_READ | FILE_SHARE_WRITE,
    0,
    OPEN_EXISTING,
    FILE_FLAG_NO_BUFFERING,
    0);
if (hFile == INVALID_HANDLE_VALUE) {
    cout << "File does not exist" << endl;
    CloseHandle(hFile);
}
DWORD dwBytesWritten;
char* wfileBuff = new char[512];
memset(wfileBuff, '0', 512);
returnz = SetFilePointer(hFile, 0, NULL, FILE_END);
if (returnz == 0) {
    cout << "Error: " << GetLastError() << endl;
}
LockFile(hFile, returnz, 0, 512, 0);
returnz = WriteFile(hFile, wfileBuff, 512, &dwBytesWritten, NULL);
if (returnz == 0) {
    cout << "Error: " << GetLastError() << endl;
}
UnlockFile(hFile, returnz, 0, 512, 0);
cout << dwBytesWritten << endl << endl;
I am using static numbers at the moment just to test out the functions. Is there any way I can always write to the end of the file, no matter what type of file it is? I also tried SetFilePointer(hFile, 0, (fileSize - slackSpace + 1), FILE_BEGIN); but that didn't work.
You need to heed the information in the documentation concerning FILE_FLAG_NO_BUFFERING. Specifically this section:
As previously discussed, an application must meet certain requirements when working with files opened with FILE_FLAG_NO_BUFFERING. The following specifics apply:
- File access sizes, including the optional file offset in the OVERLAPPED structure, if specified, must be for a number of bytes that is an integer multiple of the volume sector size. For example, if the sector size is 512 bytes, an application can request reads and writes of 512, 1,024, 1,536, or 2,048 bytes, but not of 335, 981, or 7,171 bytes.
- File access buffer addresses for read and write operations should be physical sector-aligned, which means aligned on addresses in memory that are integer multiples of the volume's physical sector size. Depending on the disk, this requirement may not be enforced.
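In practice that means seeking to a sector-aligned offset and writing a sector-sized chunk from a suitably aligned buffer. A rough sketch of overwriting only the slack space at the end of the file (bytesPerSector is an assumption here - query it, e.g. via GetDiskFreeSpace; whether SetEndOfFile restores a non-aligned length on a FILE_FLAG_NO_BUFFERING handle is something to verify):

DWORD bytesPerSector = 512; // assumption: query this instead of hard-coding it
// VirtualAlloc returns page-aligned memory, which satisfies sector alignment
char* buf = (char*)VirtualAlloc(NULL, bytesPerSector, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
LARGE_INTEGER size, pos;
GetFileSizeEx(hFile, &size);
pos.QuadPart = size.QuadPart - (size.QuadPart % bytesPerSector); // last sector boundary <= EOF
SetFilePointerEx(hFile, pos, nullptr, FILE_BEGIN);
DWORD n = 0;
ReadFile(hFile, buf, bytesPerSector, &n, NULL);          // capture the real data in the tail sector
size_t tail = (size_t)(size.QuadPart - pos.QuadPart);    // valid bytes in that sector
memset(buf + tail, '0', bytesPerSector - tail);          // overwrite only the slack space
SetFilePointerEx(hFile, pos, nullptr, FILE_BEGIN);
WriteFile(hFile, buf, bytesPerSector, &n, NULL);         // sector-sized and sector-aligned: no error 87
SetFilePointerEx(hFile, size, nullptr, FILE_BEGIN);
SetEndOfFile(hFile);                                     // restore the original length
VirtualFree(buf, 0, MEM_RELEASE);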

Faster method for exporting embedded data

For various reasons, I'm using the method described here: http://geekswithblogs.net/TechTwaddle/archive/2009/10/16/how-to-embed-an-exe-inside-another-exe-as-a.aspx
It starts off from the first byte of the embedded file and goes through 4,234,925 bytes one by one! It takes approximately 40 seconds to finish.
Are there any other methods for copying an embedded file to the hard disk? (I may be wrong here, but I think the embedded file is read from memory.)
Thanks.
Once you know the location and size of the embedded exe, you can do it in one write.
LPBYTE pbExtract; // the pointer to the data to extract
UINT cbExtract;   // the size of the data to extract
HANDLE hf;
hf = CreateFile("filename.exe",        // file name
                GENERIC_WRITE,         // open for writing
                0,                     // no share
                NULL,                  // no security
                CREATE_ALWAYS,         // overwrite existing
                FILE_ATTRIBUTE_NORMAL, // normal file
                NULL);                 // no template
if (INVALID_HANDLE_VALUE != hf)
{
    DWORD cbWrote;
    WriteFile(hf, pbExtract, cbExtract, &cbWrote, NULL);
    CloseHandle(hf);
}
As the man says, write more of the file (or the whole thing) per WriteFile call. A WriteFile call per byte is going to be ridiculously slow, yes.
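If the payload is embedded as a resource, as in the linked article, you can obtain the pointer and size in one shot and feed them straight into that single WriteFile call. A sketch; the resource name and type here are hypothetical:

// assumption: the payload was embedded as an RCDATA resource named "EMBEDDED_EXE"
HRSRC hRes = FindResource(NULL, TEXT("EMBEDDED_EXE"), RT_RCDATA);
if (hRes != NULL)
{
    HGLOBAL hData = LoadResource(NULL, hRes);       // maps the resource into memory
    LPBYTE pbExtract = (LPBYTE)LockResource(hData); // pointer to the embedded bytes
    UINT cbExtract = SizeofResource(NULL, hRes);    // size of the embedded data
    // hand pbExtract/cbExtract to the single WriteFile call shown above
}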