I am creating a program to populate a disk with a dummy file system.
Currently, I am writing files of variable sizes using WriteFile.
WriteFile(hFile, FileData, i * 1024, &dwWrote, NULL);
err = GetLastError();
err returns 1784, which translates to ERROR_INVALID_USER_BUFFER:
The supplied user buffer is not valid for the requested operation.
So for the first 24 files, the write operation works. For file #25 on, the write operation fails.
The files are still created but the WriteFile function does not populate the files.
Any ideas on how to get past ERROR_INVALID_USER_BUFFER?
Every reference I can find to the error is limited to crashing programs and I cannot figure out how it relates to the issue I am experiencing.
EDIT:
FileData = (char *) malloc(sizeof(char) * (size_t)k * 1024);
memset(FileData, 245, sizeof(char) * (size_t)k * 1024);
FileData is allocated to the size of the maximum anticipated buffer.
i is the loop variable that iterates until it reaches the maximum size (k).
My guess is that FileData is not large enough for you to write i * 1024 bytes from it. Is i the loop control variable for your list of files? If so, you need the write buffer FileData to grow 1K at a time as you loop through your files.
This is an unusual construct. Are you sure the logic is correct here? Post more code (specifically, all usage of FileData and i) for better accuracy in the answers.
Note that you should not be unconditionally checking GetLastError here - you need to check WriteFile's return code before you can rely on that value being meaningful. Otherwise you could be picking up an error from some unrelated part of your code - whatever failed last.
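A minimal sketch of that check, reusing hFile, FileData and i from the question (the printf reporting is just illustrative):
DWORD dwWrote = 0;
if (!WriteFile(hFile, FileData, i * 1024, &dwWrote, NULL))
{
    // Only on failure is GetLastError() guaranteed to describe this call.
    DWORD err = GetLastError();
    printf("WriteFile failed on file %d: error %lu\n", i, err);
}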
I got Error 1784, and it was because I opened the file without specifying the record size and then did block reads on the file.
Reset( FileHandle );
Should be
Reset( FileHandle, 1 );
I am writing a program to reformat a DNS log file for insertion to a database. There is a possibility that the line currently being written to in the log file is incomplete. If it is, I would like to discard it.
I started off believing that the eof function might be a good fit for my application, but I noticed a lot of programmers advising against using the eof function. I have also noticed that the feof function seems to be quite similar.
Any suggestions/explanations that you guys could provide about the side effects of these functions would be most appreciated, as would any suggestions for more appropriate methods!
Edit: I am currently using the istream::peek function in order to skip over the last line, regardless of whether it is complete or not. While acceptable, a solution that determines whether the last line is complete would be preferred.
The specific comparison I'm using is: logFile.peek() != EOF
I would consider using
int fseek ( FILE * stream, long int offset, int origin );
with SEEK_END
and then
long int ftell ( FILE * stream );
to determine the number of bytes in the file, and therefore where it ends. I have found this to be more reliable in detecting the end of the file (in bytes).
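For example, a minimal sketch of that size check (the file name and the missing error handling are placeholders):
FILE *stream = fopen("dns.log", "rb"); /* hypothetical log file name */
fseek(stream, 0, SEEK_END);            /* jump to the end of the file */
long fileSize = ftell(stream);         /* offset of the end == size in bytes */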
Could you detect an End of Record/Line (EOR) marker (CRLF perhaps) in the last two or three bytes of the file? (Three bytes might be needed for CRLF^Z, depending on the file type.) This would verify that you have a complete last row:
fseek(stream, -2, SEEK_END);
char tail[2];
fread(tail, 1, 2, stream); /* check tail for "\r\n" */
If you try to open the file with an exclusive lock, you can detect (by the failure to open) that the file is in use, and try again in a second... (or whenever)
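On Windows, for instance, one way to attempt such an exclusive open is a sketch like this (the path and retry interval are placeholders):
HANDLE h = CreateFileA("dns.log", GENERIC_READ,
                       0, /* dwShareMode = 0: deny all sharing */
                       NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
if (h == INVALID_HANDLE_VALUE && GetLastError() == ERROR_SHARING_VIOLATION)
{
    Sleep(1000); /* the writer still has the file open - try again later */
}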
If you need to capture the file contents as the file is being written, it's much easier if you eliminate as many layers of indirection and buffering between your logic and the actual bytes of data in the file.
Do not use C++ IO streams of any type - you have no real control over them. Don't use FILE *-based functions such as fopen() and fread() - those are buffered, and even if you disable buffering there are layers of code between your code and the data that once again you can't control and don't know what's happening.
In a POSIX environment, you can use low-level C-style open() and read()/pread() calls. And use fstat() to know when the file contents have changed - you'll see the st_size member of the struct stat argument change after a call to fstat().
You'd open the file like this:
int logFileFD = open( "/some/file/name.log", O_RDONLY );
Inside a loop, you could do something like this (error checking and actual data processing omitted):
size_t lastSize = 0;
while ( !done )
{
    struct stat statBuf;
    fstat( logFileFD, &statBuf );
    if ( statBuf.st_size == lastSize )
    {
        sleep( 1 ); // or however long you want
        continue;   // go to next loop iteration
    }
    // process new data - might need to keep some of the old data
    // around to handle lines that cross boundaries
    processNewContents( logFileFD, lastSize, statBuf.st_size );
    lastSize = statBuf.st_size; // remember how much has been processed
}
processNewContents() could look something like this:
void processNewContents( int fd, size_t start, size_t end )
{
    static char oldData[ BUFSIZE ];
    static char newData[ BUFSIZE ];
    // assumes the amount of new data will fit in newData...
    ssize_t bytesRead = pread( fd, newData, end - start, start ); // pread takes count, then offset
    // process the data that was read here
    return;
}
You may also find that you need to add some code to close() then re-open() the file in case your application doesn't seem to be "seeing" data written to the file. I've seen that happen on some systems - the application somehow sees a cached copy of the file size somewhere while an ls run in another context gets the more accurate, updated size. If, for example, you know your log file is written to every 10-15 seconds, if you go 30 seconds without seeing any change to the file you know to try reopening the file.
You can also track the inode number in the struct stat results to catch log file rotation.
In a non-POSIX environment, you can replace the open(), fstat() and pread() calls with their low-level OS equivalents; Windows provides most of what you'd need. On Windows, lseek() followed by read() would replace pread().
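For instance, a sketch of a pread() stand-in built on the Windows CRT's low-level I/O (the helper name is hypothetical, and note that unlike the real pread() it moves the file position):
#include <io.h> /* _lseek, _read */

int pread_replacement( int fd, void *buf, unsigned count, long offset )
{
    _lseek( fd, offset, SEEK_SET ); /* position at the requested offset */
    return _read( fd, buf, count ); /* then read from there */
}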
I have an issue with WinInet's InternetReadFile (C++).
In some rare cases the function fails and GetLastError returns the mentioned error 0x8007007a (which according to ErrorLookup corresponds to "The data area passed to a system call is too small").
I have a few questions regarding this:
1. Why does this happen in some rare cases, while in other cases it works fine (I'm talking, of course, about always downloading the same ~15MB zip file)?
2. Is this really related to the buffer size passed to the API call? I am using a const buffer size of 1024 bytes for this call. Should I use a bigger buffer size? If so, how can I know what the "right" buffer size is?
3. What can I do to recover at run time if I do get this error?
Adding a code snippet (note that this will not work as is because some init code is necessary):
#define HTTP_RESPONSE_BUFFER_SIZE 1024

std::vector<char> responseBuffer;
DWORD dwResponseBytesRead = 0;
do
{
    const size_t oldBufferSize = responseBuffer.size();
    responseBuffer.resize(oldBufferSize + HTTP_RESPONSE_BUFFER_SIZE);

    // Now we read again to the last place we stopped
    // writing in the previous iteration.
    dwResponseBytesRead = 0;
    BOOL bInternetReadFile = ::InternetReadFile(
        hOpenRequest,                           // hFile. Retrieved from a previous call to ::HttpOpenRequest
        (LPVOID)&responseBuffer[oldBufferSize], // lpBuffer.
        HTTP_RESPONSE_BUFFER_SIZE,              // dwNumberOfBytesToRead.
        &dwResponseBytesRead);                  // lpdwNumberOfBytesRead.

    if (!bInternetReadFile)
    {
        // Do clean up and exit.
        DWORD dwErr = ::GetLastError();         // This, in some cases, will return: 0x7a
        HRESULT hr = HRESULT_FROM_WIN32(dwErr); // This, in some cases, will return: 0x8007007a
        return;
    }

    // Adjust the buffer according to the actual number of bytes read.
    responseBuffer.resize(oldBufferSize + dwResponseBytesRead);
}
while (dwResponseBytesRead != 0);
It is a documented error for InternetReadFile:
WinINet attempts to write the HTML to the lpBuffer buffer a line at a time. If the application's buffer is too small to fit at least one line of generated HTML, the error code ERROR_INSUFFICIENT_BUFFER is returned as an indication to the application that it needs a larger buffer.
So you are supposed to handle this error by increasing the buffer size. Just double the size, repeatedly if necessary.
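A minimal sketch of that recovery, reusing the names from the snippet above (the doubling policy is an assumption):
DWORD chunkSize = HTTP_RESPONSE_BUFFER_SIZE;
BOOL ok = ::InternetReadFile(hOpenRequest, &responseBuffer[oldBufferSize],
                             chunkSize, &dwResponseBytesRead);
while (!ok && ::GetLastError() == ERROR_INSUFFICIENT_BUFFER)
{
    chunkSize *= 2; // grow the buffer and retry the same read
    responseBuffer.resize(oldBufferSize + chunkSize);
    ok = ::InternetReadFile(hOpenRequest, &responseBuffer[oldBufferSize],
                            chunkSize, &dwResponseBytesRead);
}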
There are some discrepancies in the question. For one, it isn't clear that you are reading an HTML file; 15MB seems excessive. Another is that this error should be reliably repeatable. But most troubling is the error code value: it is wrapped in an HRESULT, the kind of error code a COM component would return. You should be getting a plain Windows error code back from GetLastError(), just 0x7a and not 0x8007007a.
Do make sure that your error checking is correct. Only ever call GetLastError() when InternetReadFile() returned FALSE. If that checks out (always post a snippet please) then do consider that this error is actually generated upstream, perhaps the firewall or flaky anti-malware.
Using Visual C++, I am trying to read an image from a stream. I do this by storing the stream in a buffer. I know at what location in the buffer the image is (it's the first file in the stream, and I know the size of the image, so I read and store the image in the buffer up to the size of that file; I am sure about this). The first time I read the image there is no problem; it works correctly. The code is as follows:
void ReadFromStream(IStream *pStream)
{   // this pStream stream contains the file contents
    ULONG cbRead;
    int size = 5348928;
    char *buffer = new char[size + 1];
    HRESULT hr = pStream->Read(buffer, size, &cbRead); // here we store the stream in buffer. Now all the data is in buffer.
    buffer[cbRead] = '\0'; // '\0', not L'\0' - buffer holds narrow chars
    int location = 512;
    char FileContents[107643];
    memcpy(FileContents, &buffer[location], SizeOfFile); // here I have the contents of the image in FileContents; I am sure about its location. For the first call to ReadFromStream() this works fine.
    delete[] buffer; // free the buffer so it doesn't leak on every call
}
But I have to read the image a second time during the same run of the program. When I call ReadFromStream() a second time (with the same stream value - while debugging I can see the stream value is the same), the buffer shows contents from a location far away from where the image is stored (the stream has the image file as its first file, but on the second call the buffer points at the data of another file). So the question is: how does the buffer end up at this unexpected file?
Why does the buffer show data located far from the starting index? (On the second call to ReadFromStream() it should also show the image file as the starting file; why does it show a file far away from the image file?) My guess is that some memory is allocated somewhere and must be deleted, but where and how I don't know. Am I right?
Maybe it's because, on the second call to ReadFromStream(), the buffer already has some memory allocated; I mean, for the second call the buffer points to an address that doesn't start from zero (but I think it should).
Streams are like normal files in that they're sequential in nature and once you've read data, the "read cursor" is advanced and another call to Read() will read more data, and so on.
To seek backwards to re-read the same data again, use IStream::Seek(). For example, to go back to the start of the stream:
LARGE_INTEGER li = { 0 };
HRESULT hr = pStream->Seek(li, STREAM_SEEK_SET, NULL);
Not all streams support seeking so you should always check the return code for error.
Hello, I am trying to simulate two programs that send and receive files over the network in C++, something like a client and a server. To begin with, I have to split a file into pages of 4096 bytes and send them to the other program so it can recreate the file. I send and receive files through the network using write and read. So in the client program I must create a function that receives the packages and puts them into a file, and I cannot figure out a way to do that. For example, if a file has 2 pages, I must create another file using those 2 pages. Also, I cannot know whether they arrive in order, so I must create the file and put each page in the right position.
/*consider the connections are ok and the file's name is at char* name*/
int file = open(name, O_CREAT | O_WRONLY, 0666);
char buffer[4096];
int pagenumber;
for(int i = 0; i < page_number; i++){
    read(socket, &pagenumber, sizeof(int));
    read(socket, buffer, sizeof(buffer));
    write(file(pagenumber*4096), buffer, 4096);
}
This code works for pagenumber=0 but for pagenumber=1 nothing happens! Can you help me? Thanks in advance!
To write at a certain position in the file you must use lseek
off_t lseek(int fd, off_t offset, int whence);
It takes the descriptor, the offset, and for the final parameter one of these constants:
SEEK_SET The offset is set to offset bytes.
SEEK_CUR The offset is set to its current location plus offset bytes.
SEEK_END The offset is set to the size of the file plus offset bytes.
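Applied to the question's loop, a minimal sketch would be:
lseek(file, (off_t)pagenumber * 4096, SEEK_SET); /* jump to this page's slot */
write(file, buffer, 4096);                       /* write the page there */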
If you know how big the file is going to be, you can use ftruncate for it.
int ftruncate(int fd, off_t length);
Anyway, even if you create a huge file, since most filesystems on Linux support sparse files the actual space used on disk will only be the sum of the blocks that have been written.
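For example, if the total page count is known up front (page_number is taken from the question's code), a one-line sketch:
ftruncate(file, (off_t)page_number * 4096); /* reserve room for every page; unwritten blocks stay sparse */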
The first argument to write() is a file descriptor, which you obtained with open(). So it should be
int file = open(...);
...
write(file,buffer,4096);
not
write(file(pagenumber*4096),buffer,4096);
Regarding the question of how to write at a specific position: you can prepare the file beforehand with write, and then use lseek() to position the file where you want to write. For a description of lseek, see its man page.
Mario, first of all, let's not rely on garbage in 'pagenumber' to continue the loop (which is what happens when the loop boundary condition is checked here for the first time). Now, if you receive page number '0' and then the page following it, pagenumber will be initialized to 0 and your loop will exit. Also, please check the number of bytes written and read in the write and read system calls respectively.
try pwrite
int file = open(name, O_CREAT | O_WRONLY, 0666);
char buffer[4096];
int pagenumber;
for(int i = 0; i < page_number; i++){
    read(socket, &pagenumber, sizeof(int));
    read(socket, buffer, sizeof(buffer));
    pwrite(file, buffer, 4096, (off_t)pagenumber * 4096); // write each page at its own offset
}
Could you please tell me how to set a file's maximum size to 100 bytes, so that if I try to write any data beyond 100 bytes, NtWriteFile() throws the error ERROR_NOT_ENOUGH_MEMORY?
Could you please suggest a Windows API to set a fixed size (say, 100 bytes) for a file?
In our product, NtWriteFile() fails with the error ERROR_NOT_ENOUGH_MEMORY. I am trying to understand in what scenarios we would get that error.
Use SetFilePointer, then SetEndOfFile. Don't forget to CloseHandle after SetEndOfFile!
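A minimal sketch of that sequence (hFile is assumed to come from an earlier CreateFile call):
SetFilePointer(hFile, 100, NULL, FILE_BEGIN); // move the file pointer to byte 100
SetEndOfFile(hFile);                          // truncate/extend the file to that size
CloseHandle(hFile);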
On Windows this would look like this:
CreateFile
WriteFile
That's it.
You can create a buffer of 100 chars and write the buffer to a file when you are finished with it.
#include <stdio.h>
#include <memory.h>
int main(int argc, char* argv[])
{
    char buffer[100];

    // assign values
    memset(buffer, 'A', 100);
    buffer[5] = '#';

    FILE * pFile;
    pFile = fopen("buffer.100", "wb");
    fwrite(buffer, 1, sizeof(buffer), pFile);
    fclose(pFile);
    return 0;
}
Do you want to create a file that is 100 bytes, or do you want to limit the size of the file to 100 bytes, meaning any further writes on the file will fail once the file exceeds 100 bytes?
If it's the first one you're after, then you can use CreateFile and WriteFile and limit the buffer size to 100 bytes.
If it's the second one, then I don't think it's possible...
Your best approach, architecturally speaking, is as follows:
Identify how you're reading from and writing to the file in question.
Create an interface that provides these same facilities via a single manager class. The API would be as near as possible to what you already have.
Add code, encapsulated in your manager class, to track the size of the data as you write, probably also buffering the 100 bytes that you want in your file. The code can then detect and handle overflow situations in whatever way you decide (i.e. throwing exceptions if that's what you want).
I'm sorry, but you don't need a buffer for this. If you create a memory-mapped file object that is greater than the file's size, the system automatically expands the file to that size. But for just 100 bytes you can simply create a 100-byte dummy char array and write it to the file.
See:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366542(v=vs.85).aspx
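A minimal sketch of that mapping trick (the file name is a placeholder; error handling omitted):
HANDLE hFile = CreateFileA("dummy.bin", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
// Asking for a 100-byte mapping on a smaller file makes the system expand the file to 100 bytes.
HANDLE hMap = CreateFileMappingA(hFile, NULL, PAGE_READWRITE, 0, 100, NULL);
CloseHandle(hMap);
CloseHandle(hFile); // the file on disk is now 100 bytes long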