How to perform fwrite in a loop in C++

Okay, here's my code. It seems to fail and doesn't write even once, and I fail to see what's actually wrong; could anyone point me in the right direction?
My intention is to perform this write every 10 seconds in an infinite loop:
FILE* pFile;
pFile = fopen("Hdd:\\LOGFile.txt", "w");
while (true) {
    DWORD TitleID = XamGetCurrentTitleId();
    std::ostringstream titleMessageSS;
    titleMessageSS << "Here's the current title we're on : " << TitleID << "\r";
    std::string titleMessage = titleMessageSS.str(); // get the string from the stream
    DWORD dwBytesToWrite = (DWORD)titleMessage.size();
    DWORD dwBytesWritten = 0;
    fwrite(titleMessage.c_str(), 1, dwBytesToWrite, pFile);
    Sleep(10000);
}
fclose(pFile);
return NULL;

You should probably call fflush(pFile); before your call to Sleep; better yet, use a C++ std::ostream (and the std::flush manipulator).
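For illustration, a minimal sketch of that fix, assuming the XamGetCurrentTitleId and Sleep calls from the question are available:
FILE* pFile = fopen("Hdd:\\LOGFile.txt", "w");
while (true) {
    DWORD TitleID = XamGetCurrentTitleId();
    std::ostringstream titleMessageSS;
    titleMessageSS << "Here's the current title we're on : " << TitleID << "\r";
    std::string titleMessage = titleMessageSS.str();
    fwrite(titleMessage.c_str(), 1, titleMessage.size(), pFile);
    fflush(pFile); // push the buffered data out to the OS before sleeping
    Sleep(10000);
}
With the std::ofstream equivalent, you would end each insertion with << std::flush instead.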

How do you know that it doesn't write? Is the file empty after the program stops?
What I suspect is that you're looking at the output file before the program has stopped. The stream types (both C++ and C) are normally buffered; fwrite copies the data to a local buffer, and only transfers it to the OS (system function write under Unix) when the buffer is full, when you explicitly flush it (see fflush), or when the file is closed. This is, for example, why we have to check for output errors after fclose().
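For instance, a minimal sketch of that last check:
if (fclose(pFile) != 0) {
    perror("fclose"); // the final buffered write may have failed here
}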

Related

[WIN API] Why does sharing the same HANDLE between WriteFile (sync) and ReadFile (sync) cause a ReadFile error?

I've searched MSDN but did not find any information about sharing the same HANDLE with both WriteFile and ReadFile. NOTE: I did not use the CREATE_ALWAYS flag, so there's no chance of the file being replaced with an empty file.
The reason I tried to use the same HANDLE was performance concerns. My code basically downloads some data (writes it to a file), reads it immediately, then deletes it.
In my opinion, a file HANDLE is just an address in memory that also serves as an entrance for doing I/O.
This is how the error occurs:
CreateFile(OK) --> WriteFile(OK) --> GetFileSize(OK) --> ReadFile(Failed) --> CloseHandle(OK)
Since WriteFile was called synchronously, there should be no problem with this ReadFile call; even GetFileSize after WriteFile returns the correct value (the new, modified file size)! But in fact ReadFile reads the pre-modification state (lpNumberOfBytesRead is always the old value). A thought just came to my mind: caching!
Then I tried to learn more about Windows file caching, which I have no knowledge of. I even tried the FILE_FLAG_NO_BUFFERING flag and the FlushFileBuffers function, but no luck. Of course I know I could do CloseHandle and CreateFile again between WriteFile and ReadFile; I just wonder if there's some way to achieve this without calling CreateFile again?
Above is the minimum about my question; below is the demo code I made for this concept:
#include <windows.h>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <string>

int main()
{
    HANDLE hFile = CreateFile(L"C://temp//TEST.txt", GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH, NULL);
    // step one: write 12345 to the file
    std::string test = "12345";
    char* pszOutBuffer;
    pszOutBuffer = (char*)malloc(strlen(test.c_str()) + 1); // create a buffer for 12345 plus a null terminator
    ZeroMemory(pszOutBuffer, strlen(test.c_str()) + 1); // fill the buffer (terminator included) with 0
    memcpy(pszOutBuffer, test.c_str(), strlen(test.c_str())); // copy 12345 into the buffer
    DWORD wmWritten;
    WriteFile(hFile, pszOutBuffer, strlen(test.c_str()), &wmWritten, NULL); // write 12345 to the file
    // according to MSDN this flushes the buffer
    FlushFileBuffers(hFile);
    std::cout << "bytes written to file(num):" << wmWritten << std::endl; // got output 5 here as expected; 5 bytes have been written to the file
    // step two: GetFileSize and ReadFile
    // get the file size of C://temp//TEST.txt
    DWORD dwFileSize = 0;
    dwFileSize = GetFileSize(hFile, NULL);
    if (dwFileSize == INVALID_FILE_SIZE)
    {
        return -1; // unable to get file size
    }
    std::cout << "GetFileSize result is:" << dwFileSize << std::endl; // got output 5 here as expected
    char* bufFstream;
    bufFstream = (char*)malloc(sizeof(char) * (dwFileSize + 1)); // create a buffer of file size plus a null terminator
    memset(bufFstream, 0, sizeof(char) * (dwFileSize + 1));
    std::cout << "created a buffer for ReadFile with size:" << dwFileSize + 1 << std::endl; // got output 6 as expected here
    if (bufFstream == NULL) {
        return -1; // ERROR_MEMORY
    }
    DWORD nRead = 0;
    bool bBufResult = ReadFile(hFile, bufFstream, dwFileSize, &nRead, NULL); // dwFileSize is 5 here
    if (!bBufResult) {
        free(bufFstream);
        return -1; // copying the file into the buffer failed
    }
    std::cout << "nRead is:" << nRead << std::endl; // !!!got nRead 0 here!!! why?
    CloseHandle(hFile);
    free(pszOutBuffer);
    free(bufFstream);
    return 0;
}
Then the output is:
bytes written to file(num):5
GetFileSize result is:5
created a buffer for ReadFile with size:6
nRead is:0
nRead should be 5 not 0.
Win32 files have a single file pointer, both for read and write; after the WriteFile it is at the end of the file, so if you try to read from it, the read will fail. To read what you just wrote, you have to reposition the file pointer at the start of the file, using the SetFilePointer function.
Also, the FlushFileBuffers call isn't needed - the operating system ensures that reads and writes on the file handle see the same state, regardless of the status of the buffers.
After the first write, the file cursor points at the end of the file, so there is nothing left to read. You can rewind it back to the beginning using SetFilePointer:
::DWORD const result(::SetFilePointer(hFile, 0, nullptr, FILE_BEGIN));
if(INVALID_SET_FILE_POINTER == result)
{
    ::DWORD const last_error(::GetLastError());
    if(NO_ERROR != last_error)
    {
        // TODO do error handling...
    }
}
When you try to read the file, from what position are you reading? The FILE_OBJECT maintains a "current" position (the CurrentByteOffset member), which is used as the default position (for synchronous files only - those opened without FILE_FLAG_OVERLAPPED!) when you read or write the file, and this position is advanced by n bytes after every read or write of n bytes.
The best solution is to always use an explicit file offset in ReadFile (or WriteFile). This offset goes in the last parameter, OVERLAPPED lpOverlapped - look at the Offset[High] members - the read operation starts at the offset that is specified in the OVERLAPPED structure.
Using this is more efficient and simpler than the special API call SetFilePointer, which adjusts the CurrentByteOffset member in the FILE_OBJECT (and does not work for asynchronous file handles, created with the FILE_FLAG_OVERLAPPED flag).
Despite very common confusion, OVERLAPPED is not used for asynchronous I/O only - it is simply an additional parameter to ReadFile (or WriteFile) and can always be used, with any file handle.
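As an illustration, here is a hedged sketch of that explicit-offset approach, reusing hFile, bufFstream, and dwFileSize from the question's code:
OVERLAPPED ov = {};              // zero-initialized: Offset, OffsetHigh and hEvent are all 0
ov.Offset = 0;                   // explicit position: read from the start of the file
DWORD nRead = 0;
if (!ReadFile(hFile, bufFstream, dwFileSize, &nRead, &ov)) {
    // for a synchronous handle, ReadFile succeeds or fails directly;
    // no GetOverlappedResult call is needed
}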

C++ get the size of a file while it's being written to

I have a recording application that is reading data from a network stream and writing it to file. It works very well, but I would like to display the file size as the data is being written. Every second the gui thread updates the status bar to update the displayed time of recording. At this point I would also like to display the current file size.
I originally consulted this question and have tried both the stat method:
struct stat stat_buf;
int rc = stat(recFilename.c_str(), &stat_buf);
std::cout << recFilename << " " << stat_buf.st_size << "\n";
(no error checking for simplicity) and the fseek method:
FILE *p_file = NULL;
p_file = fopen(recFilename.c_str(),"rb");
fseek(p_file,0,SEEK_END);
int size = ftell(p_file);
fclose(p_file);
but either way, I get 0 for the file size. When I go back and look at the file I write to, the data is there and the size is correct. The recording is happening on a separate thread.
I know that bytes are being written because I can print the size of the data as it is written in conjunction with the output of the methods shown above.
The filename plus the 0 is what I print out from the GUI thread; 'Bytes written x' comes from the recording thread.
You can read all about C++ file manipulations here http://www.cplusplus.com/doc/tutorial/files/
This is an example of how I would do it.
#include <fstream>

std::ifstream::pos_type filesize(const char* file)
{
    std::ifstream in(file, std::ifstream::ate | std::ifstream::binary);
    return in.tellg();
}
Hope this helps.
As a desperate alternative, you can use ftell in the write-data thread, or a variable to track the amount of data that has been written. But coming back to the real problem: you must be making a mistake somewhere - maybe fopen never opens the file, or something like that.
I'll copy some test code to show that this works, at least in a single-threaded app:
int _tmain(int argc, _TCHAR* argv[])
{
    FILE* mFile;
    FILE* mFile2;
    mFile = fopen("hi.txt", "a+");
    // fseek(mFile, 0, SEEK_END);
    // ## this is to make sure that fputs and fwrite work the same
    // fputs("fopen example", mFile);
    fwrite("fopen ex", 1, 9, mFile);
    fseek(mFile, 0, SEEK_END);
    std::cout << ftell(mFile) << ":";
    mFile2 = fopen("hi.txt", "rb");
    fseek(mFile2, 0, SEEK_END);
    std::cout << ftell(mFile2) << std::endl;
    fclose(mFile2);
    fclose(mFile);
    getchar();
    return 0;
}
Just use the freopen function before calling stat. It seems freopen refreshes the file length.
I realize this post is rather old at this point, but in response to @TylerD007: while that works, it is incredibly expensive if all you're trying to do is get the number of bytes written.
In C++17 and later, you can simply use the <filesystem> header and call
auto fileSize {std::filesystem::file_size(filePath)};
and now the variable fileSize holds the actual size of the file.
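A minimal self-contained sketch, assuming a C++17 compiler (the file name here is just a placeholder):
#include <filesystem>
#include <iostream>

int main() {
    std::error_code ec;
    auto fileSize = std::filesystem::file_size("recording.dat", ec);
    if (ec) {
        std::cerr << "file_size failed: " << ec.message() << '\n';
        return 1;
    }
    std::cout << "size: " << fileSize << " bytes\n";
}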

Size error on read file

RESOLVED
I'm trying to make a simple file loader.
I aim to get the text from a shader file (plain text file) into a char* that I will compile later.
I've tried this function:
char* load_shader(char* pURL)
{
    FILE* shaderFile;
    char* pShader;
    // File opening
    fopen_s(&shaderFile, pURL, "r");
    if (shaderFile == NULL)
        return "FILE_ER";
    // File size
    fseek(shaderFile, 0, SEEK_END);
    int lSize = ftell(shaderFile);
    rewind(shaderFile);
    // Allocating size to store the content
    pShader = (char*)malloc(sizeof(char) * lSize);
    if (pShader == NULL)
    {
        fputs("Memory error", stderr);
        return "MEM_ER";
    }
    // copy the file into the buffer:
    int result = fread(pShader, sizeof(char), lSize, shaderFile);
    if (result != lSize)
    {
        // size of file 106/113
        cout << "size of file " << result << "/" << lSize << endl;
        fputs("Reading error", stderr);
        return "READ_ER";
    }
    // Terminate
    fclose(shaderFile);
    return 0;
}
But as you can see in the code, I have a strange size difference at the end of the process, which makes my function crash.
I must say I'm quite a beginner in C, so I might have missed some subtleties regarding memory allocation, types, pointers...
How can I solve this size issue?
*EDIT 1:
First, I shouldn't return 0 at the end but pShader; that seemed to be what crashed the program.
Then, I changed the type of result to size_t, and added an end character to pShader, adding pShader[result] = '\0'; after its declaration, so I can display it correctly.
Finally, as @JamesKanze suggested, I turned fopen_s into fopen, as the former was not useful in my case.
First, for this sort of raw access, you're probably better off using the system level functions: CreateFile or open, ReadFile or read and CloseHandle or close, with GetFileSize or stat to get the size. Using FILE* or std::filebuf will only introduce an additional level of buffering and processing, for no gain in your case.
As to what you are seeing: there is no guarantee that an ftell will return anything exploitable as a numeric value; it could very well be just a magic cookie. On most current systems, it is a byte offset into the physical file, but on any non-Unix system, the offset into the physical file will not map directly to the logical file you are reading unless you open the file in binary mode. If you use "rb" to open the file, you'll probably see the same values. (Theoretically, you could get extra 0's at the end of the file, but practically, the OS's where that happened are either extinct, or only used on legacy mainframes.)
EDIT:
Since the answer stating this has been deleted: you should loop on the fread until it returns 0 (setting errno to 0 before each call, and checking it after the return to see whether the function returned because of an error or because it reached the end of file). Having said this: if you're on one of the usual Windows or Unix systems, and the file is local to the machine, and not too big, fread will read it all in one go. The difference in size you are seeing (given the numerical values you posted) is almost certainly due to the fact that the two byte Windows line endings are being mapped to a single '\n' character. To avoid this, you must open in binary mode; alternatively, if you really are dealing with text (and want this mapping), you can just ignore the extra bytes in your buffer, setting the '\0' terminator after the last byte actually read.
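Putting that advice together, here is a hedged sketch of the read portion of load_shader, opened in binary mode and looping on fread until it returns 0 (names follow the question's code):
FILE* shaderFile = fopen(pURL, "rb");          // "rb": binary mode, no CRLF -> '\n' mapping
if (shaderFile == NULL)
    return NULL;

fseek(shaderFile, 0, SEEK_END);
long lSize = ftell(shaderFile);
rewind(shaderFile);

char* pShader = (char*)malloc(lSize + 1);      // +1 for a terminating '\0'
if (pShader == NULL) {
    fclose(shaderFile);
    return NULL;
}

size_t total = 0;
while (total < (size_t)lSize) {
    size_t got = fread(pShader + total, 1, lSize - total, shaderFile);
    if (got == 0)                              // EOF or error; check ferror(shaderFile) to tell which
        break;
    total += got;
}
pShader[total] = '\0';                         // terminate after the last byte actually read
fclose(shaderFile);
return pShader;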

C++ ifstream::read slow due to memcpy

Recently I decided to optimize some file reading I was doing, because as everyone says, reading a large chunk of data to a buffer and then working with it is faster than using lots of small reads. And my code certainly is much faster now, but after doing some profiling it appears memcpy is taking up a lot of time.
The gist of my code is...
ifstream file("some huge file");
char buffer[0x1000000];
for (yada yada) {
int size = some arbitrary size usually around a megabyte;
file.read(buffer, size);
//Do stuff with buffer
}
I'm using Visual Studio 11 and after profiling my code it says ifstream::read() eventually calls xsgetn() which copies from the internal buffer to my buffer. This operation takes up over 80% of the time! In second place comes uflow() which takes up 10% of the time.
Is there any way I can get around this copying? Can I somehow tell the ifstream to buffer the size I need directly into my buffer? Does the C-style FILE* also use such an internal buffer?
UPDATE: Due to people telling me to use cstdio... I have done a benchmark.
EDIT: Unfortunately the old code was full of fail (it wasn't even reading the entire file!). You can see it here: http://pastebin.com/4dGEQ6S7
Here's my new benchmark:
const int MAX = 0x10000;
char buf[MAX];
string fpath = "largefile";

int main() {
    {
        clock_t start = clock();
        ifstream file(fpath, ios::binary);
        while (!file.eof()) {
            file.read(buf, MAX);
        }
        clock_t end = clock();
        cout << end - start << endl;
    }
    {
        clock_t start = clock();
        FILE* file = fopen(fpath.c_str(), "rb");
        setvbuf(file, NULL, _IOFBF, 1024);
        while (!feof(file)) {
            fread(buf, 0x1, MAX, file);
        }
        fclose(file);
        clock_t end = clock();
        cout << end - start << endl;
    }
    {
        clock_t start = clock();
        HANDLE file = CreateFile(fpath.c_str(), GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_ALWAYS, NULL, NULL);
        while (true) {
            DWORD used;
            ReadFile(file, buf, MAX, &used, NULL);
            if (used < MAX) break;
        }
        CloseHandle(file);
        clock_t end = clock();
        cout << end - start << endl;
    }
    system("PAUSE");
}
Times are:
185
80
78
Well... it looks like using the C-style fread is faster than ifstream::read. Also, using the Windows ReadFile gives only a slight advantage, which is negligible (I looked at the code and fread basically is a wrapper around ReadFile). Looks like I'll be switching to fread after all.
Man, it is confusing to write a benchmark which actually tests this stuff correctly.
CONCLUSION: Using <cstdio> is faster than <fstream>. The reason fstream is slower is that C++ streams have their own internal buffer. This results in extra copying whenever you read/write, and this copying accounts for the entire extra time taken by fstream. Even more shocking is that the extra time taken is longer than the time taken to actually read the file.
Can I somehow tell the ifstream to buffer the size I need directly into my buffer?
Yes, this is what pubsetbuf() is for.
But if you're that concerned with copying while reading a file, consider memory mapping as well; Boost has a portable implementation.
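For illustration, a minimal sketch of pubsetbuf; note that whether the supplied buffer is actually honored is implementation-defined, and on common implementations the call only takes effect if made before the file is opened:
#include <fstream>

static char myBuffer[0x1000000];    // the large buffer from the question

int main() {
    std::ifstream file;
    file.rdbuf()->pubsetbuf(myBuffer, sizeof myBuffer); // must precede open() here
    file.open("some huge file", std::ios::binary);
    // file.read(...) now works through myBuffer on implementations that honor it
}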
If you want to speed up file I/O I suggest you to use the good ol' <cstdio> because it can outperform the C++ one by a large margin.
It has been proven several times that the fastest way of reading data is mmap() on Linux systems. I don't know about Windows. However, it will certainly do without this buffering.
fopen(), fread(), and fwrite() (FILE*) are somewhat higher-level and may introduce a buffer, while open(), read(), and write() are low-level functions, and the only buffering you may get there comes from the OS kernel.
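For reference, a hedged sketch of the mmap() approach on POSIX systems (error handling omitted for brevity):
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int fd = open("some huge file", O_RDONLY);
struct stat sb;
fstat(fd, &sb);
const char* data = (const char*)mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
// ... work directly with data[0 .. sb.st_size - 1]; no user-space copy is made ...
munmap((void*)data, sb.st_size);
close(fd);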

How do I read the results of a system() call in C++?

I'm using the following code to try to read the results of a df command in Linux using popen.
#include <iostream> // file and std I/O functions

int main(int argc, char** argv) {
    FILE* fp;
    char* buffer;
    long bufSize;
    size_t ret_code;
    fp = popen("df", "r");
    if (fp == NULL) { // head off errors reading the results
        std::cerr << "Could not execute command: df" << std::endl;
        exit(1);
    }
    // get the size of the results
    fseek(fp, 0, SEEK_END);
    bufSize = ftell(fp);
    rewind(fp);
    // allocate the memory to contain the results
    buffer = (char*)malloc(sizeof(char) * bufSize);
    if (buffer == NULL) {
        std::cerr << "Memory error." << std::endl;
        exit(2);
    }
    // read the results into the buffer
    ret_code = fread(buffer, 1, sizeof(buffer), fp);
    if (ret_code != bufSize) {
        std::cerr << "Error reading output." << std::endl;
        exit(3);
    }
    // print the results
    std::cout << buffer << std::endl;
    // clean up
    pclose(fp);
    free(buffer);
    return (EXIT_SUCCESS);
}
This code is giving me a "Memory error" with an exit status of '2', so I can see where it's failing, I just don't understand why.
I put this together from example code that I found on Ubuntu Forums and C++ Reference, so I'm not married to it. If anyone can suggest a better way to read the results of a system() call, I'm open to new ideas.
EDIT to the original: Okay, bufSize is coming up negative, and now I understand why. You can't randomly access a pipe, as I naively tried to do.
I can't be the first person to try to do this. Can someone give (or point me to) an example of how to read the results of a system() call into a variable in C++?
You're making this all too hard. popen(3) returns a regular old FILE * for a standard pipe file, which is to say, newline terminated records. You can read it with very high efficiency by using fgets(3) like so in C:
#include <stdio.h>

char bfr[BUFSIZ];
FILE* fp;
// ...
if ((fp = popen("/bin/df", "r")) == NULL) {
    // error processing and return
}
// ...
while (fgets(bfr, BUFSIZ, fp) != NULL) {
    // process a line
}
In C++ it's even easier --
#include <cstdio>
#include <fstream>
#include <iostream>
#include <string>

FILE* fp;
if ((fp = popen("/bin/df", "r")) == NULL) {
    // error processing and exit
}
std::ifstream ins(fileno(fp)); // note: constructing an ifstream from a file descriptor is a nonstandard extension
std::string s;
while (getline(ins, s)) {
    // do something
}
There's some more error handling there, but that's the idea. The point is that you treat the FILE * from popen just like any FILE *, and read it line by line.
Why would std::malloc() fail?
The obvious reason is "because std::ftell() returned a negative signed number, which was then treated as a huge unsigned number".
According to the documentation, std::ftell() returns -1 on failure. One obvious reason it would fail is that you cannot seek in a pipe or FIFO.
There is no escape; you cannot know the length of the command output without reading it, and you can only read it once. You have to read it in chunks, either growing your buffer as needed or parsing on the fly.
But, of course, you can simply avoid the whole issue by directly using the system call df probably uses to get its information: statvfs().
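For illustration, a minimal sketch of statvfs(); the fields used here are the standard POSIX ones:
#include <sys/statvfs.h>
#include <iostream>

int main() {
    struct statvfs vfs;
    if (statvfs("/", &vfs) != 0) {
        return 1;                  // errno holds the reason
    }
    unsigned long long total = (unsigned long long)vfs.f_blocks * vfs.f_frsize;
    unsigned long long avail = (unsigned long long)vfs.f_bavail * vfs.f_frsize;
    std::cout << "total: " << total << " avail: " << avail << '\n';
}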
(A note on terminology: "system call" in Unix and Linux generally refers to calling a kernel function from user-space code. Referring to it as "the results of a system() call" or "the results of a system(3) call" would be clearer, but it would probably be better to just say "capturing the output of a process.")
Anyway, you can read a process's output just like you can read any other file. Specifically:
You can start the process using pipe(), fork(), and exec(). This gives you a file descriptor, then you can use a loop to read() from the file descriptor into a buffer and close() the file descriptor once you're done. This is the lowest level option and gives you the most control.
You can start the process using popen(), as you're doing. This gives you a file stream. In a loop, you can read from the stream into a temporary variable or buffer using fread(), fgets(), or fgetc(), as Zarawesome's answer demonstrates, then process that buffer or append it to a C++ string.
You can start the process using popen(), then use the nonstandard __gnu_cxx::stdio_filebuf to wrap that, then create an std::istream from the stdio_filebuf and treat it like any other C++ stream. This is the most C++-like approach. Here's part 1 and part 2 of an example of this approach.
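As a hedged sketch of that third option (nonstandard: the header and class are libstdc++-specific):
#include <cstdio>
#include <iostream>
#include <string>
#include <ext/stdio_filebuf.h>      // libstdc++-specific header

int main() {
    FILE* fp = popen("df", "r");
    if (fp == NULL) return 1;

    __gnu_cxx::stdio_filebuf<char> filebuf(fp, std::ios::in);
    std::istream in(&filebuf);

    std::string line;
    while (std::getline(in, line)) {
        std::cout << line << '\n';  // process one line of df's output
    }
    pclose(fp);
}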
I'm not sure you can fseek/ftell pipe streams like this.
Have you checked the value of bufSize? One reason malloc might be failing is an insanely sized buffer.
Thanks to everyone who took the time to answer. A co-worker pointed me to the ostringstream class. Here's some example code that does essentially what I was attempting to do in the original question.
#include <iostream> // cout
#include <sstream>  // ostringstream

int main(int argc, char** argv) {
    FILE* stream = popen("df", "r");
    std::ostringstream output;
    while (!feof(stream) && !ferror(stream))
    {
        char buf[128];
        int bytesRead = fread(buf, 1, 128, stream);
        output.write(buf, bytesRead);
    }
    std::string result = output.str();
    std::cout << "<RESULT>" << std::endl << result << "</RESULT>" << std::endl;
    return (0);
}
To answer the question in the update:
char buffer[1024];
char* line = NULL;
while ((line = fgets(buffer, sizeof buffer, fp)) != NULL) {
    // parse one line of df's output here.
}
Would this be enough?
First thing to check is the value of bufSize - if that happens to be <= 0, chances are that malloc returns NULL, as you're trying to allocate a buffer of size 0 at that point.
Another workaround would be to ask malloc to provide a buffer of size (bufSize + n) with n >= 1, which should work around this particular problem.
That aside, the code you posted is pure C, not C++, so including <iostream> is overdoing it a little.
Check your bufSize. ftell can return -1 on error, and this can lead to malloc failing to allocate, leaving buffer with a NULL value.
The reason ftell fails here is the popen: you can't seek in pipes.
Pipes are not random access. They're sequential, which means that once you read a byte, the pipe is not going to send it to you again. Which means, obviously, you can't rewind it.
If you just want to output the data back to the user, you can just do something like:
// your file opening code
int c; // use int, not char, so that EOF can be distinguished from a valid byte
while ((c = getc(fp)) != EOF)
{
    std::cout << static_cast<char>(c);
}
This will pull bytes out of the df pipe, one by one, and pump them straight into the output.
Now if you want to access the df output as a whole, you can either pipe it into a file and read that file, or concatenate the output into a construct such as a C++ string.