Working of fwrite in C++

I am trying to simulate race conditions when writing to a file. This is what I am doing:
Open a.txt in append mode in process1.
Write "hello world" in process1.
Print the ftell in process1, which is 11.
Put process1 to sleep.
Open a.txt again in append mode in process2.
Write "hello world" in process2 (this correctly appends to the end of the file).
Print the ftell in process2, which is 22 (correct).
Write "bye world" in process2 (this correctly appends to the end of the file).
process2 quits.
process1 resumes and prints its ftell value, which is still 11.
Write "bye world" from process1 --- I assume that since process1's ftell is 11, this should overwrite part of the file.
However, process1's write goes to the end of the file, and there is no contention between the processes.
I am opening the file with fopen("./a.txt", "a+").
Can anyone tell me why this happens, and how I can simulate a race condition when writing to the file?
The code of process1:
#include <iostream>
#include <fstream>
#include <string>
#include <stdio.h>
#include <unistd.h>   // for sleep()
using namespace std;
int main()
{
    FILE *f1 = fopen("./a.txt", "a+");
    cout << "opened file1" << endl;
    string data("hello world");
    fwrite(data.c_str(), sizeof(char), data.size(), f1);
    fflush(f1);
    cout << "file1 tell " << ftell(f1) << endl;
    cout << "wrote file1" << endl;
    sleep(3);                               // give process2 time to run and append
    string data1("bye world");
    cout << "wrote file1 end" << endl;
    cout << "file1 2nd tell " << ftell(f1) << endl;
    fwrite(data1.c_str(), sizeof(char), data1.size(), f1);
    cout << "file1 2nd tell " << ftell(f1) << endl;
    fflush(f1);
    fclose(f1);
    return 0;
}
In process2, I have commented out the sleep statement.
I am using the following script to run:
./process1 &
sleep 2
./process2 &
Thanks for your time.

The writer code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define BLOCKSIZE 1000000
int main(int argc, char **argv)
{
    FILE *f = fopen("a.txt", "a+");
    char *block = malloc(BLOCKSIZE);
    if (argc < 2)
    {
        fprintf(stderr, "need argument\n");
        return 1;                     /* bail out instead of falling through */
    }
    memset(block, argv[1][0], BLOCKSIZE);
    for (int i = 0; i < 3000; i++)
    {
        fwrite(block, sizeof(char), BLOCKSIZE, f);
    }
    fclose(f);
    return 0;
}
The reader function:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define BLOCKSIZE 1000000
int main(int argc, char **argv)
{
    FILE *f = fopen("a.txt", "r");
    int c;
    int oldc = 0;   /* character of the current run */
    int rl = 0;     /* length of the current run */
    while ((c = fgetc(f)) != EOF)
    {
        if (c != oldc)
        {
            if (rl)
            {
                printf("Got %d of %c\n", rl, oldc);   /* report the run that just ended */
            }
            oldc = c;
            rl = 0;
        }
        rl++;
    }
    fclose(f);
    return 0;
}
I ran ./writefile A & ./writefile B then ./readfile
I got this:
Got 1000999424 of A
Got 999424 of B
Got 999424 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
As you can see, there are nice long runs of A and B, but they are not exactly 1000000 characters long, which is the size I wrote them in. The whole file, after a trial run with a smaller size in the first run, is just short of 7 GB.
For reference: Fedora Core 16, with my own compiled 3.7rc5 kernel, gcc 4.6.3, x86-64, and ext4 on top of lvm, AMD PhenomII quad core processor, 16GB of RAM

Writing in append mode is an atomic operation. This is why it doesn't break.
Now... how to break it?
Try memory mapping the file and writing into the mapped memory from both processes. I'm pretty sure this will break it.
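A rough sketch of that idea (my own guess at what such a test could look like, not code from the question): both processes map the same region of a.txt with mmap and store into it directly, so the kernel's append-mode serialization no longer applies and the two writers can clobber each other. The file name, length, and message are placeholders.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>

int main()
{
    int fd = open("./a.txt", O_RDWR);
    if (fd < 0) return 1;
    const size_t len = 32;
    if (ftruncate(fd, len) != 0) return 1;      // set the file length so the mapping is fully backed
    void *m = mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (m == MAP_FAILED) return 1;
    char *p = static_cast<char *>(m);
    // Each process stamps its own text over the same bytes; with a sleep in one
    // of them, the later writer silently overwrites the earlier one.
    memcpy(p, "written via mmap by this process", 32);
    msync(p, len, MS_SYNC);
    munmap(m, len);
    close(fd);
    return 0;
}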

I'm pretty sure you can't RELY on this behaviour, but it may well work reliably on some systems. Writing to the same file from two different processes is likely to cause problems sooner or later, if you "try hard enough". And sod's law says that that's exactly when your boss is checking if the software works, when your customer takes delivery of the system you've sold, or when you are finalizing your report that took ages to produce, or some other important time.

The behavior you're trying to break (or observe) depends on which OS you are working on, as writing to a file is a system call.
Regarding the first file descriptor not overwriting what the second process wrote: the fact that you opened the file in append mode in both processes may have caused the file position to be updated to the end of the file just before the actual write.
Have you tried doing the same with the lower-level open and write functions? That might be interesting as well; a sketch follows.
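Here is a hedged sketch of what that experiment might look like (my own illustration, not tested code from the answer): open the file with plain O_WRONLY instead of O_APPEND, so each process keeps its own private file offset. After the sleep, the first process writes at its stale offset and overwrites what the second process appended, which is the race you were expecting.
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

int main()
{
    // note: no O_APPEND, so this process keeps its own private file offset
    int fd = open("./a.txt", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) return 1;
    const char *msg1 = "hello world";
    write(fd, msg1, strlen(msg1));
    sleep(3);                        // meanwhile the other process appends its data
    const char *msg2 = "bye world";
    write(fd, msg2, strlen(msg2));   // resumes at offset 11 and clobbers the other process's bytes
    close(fd);
    return 0;
}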
EDIT: The C++ Reference doc explains the fopen append option here:
"append/update: Open a file for update (both for input and output) with all output operations writing data at the end of the file. Repositioning operations (fseek, fsetpos, rewind) affects the next input operations, but output operations move the position back to the end of file."
This explains the behavior you observed.

Related

Why are std::fstreams so slow?

I was working on a simple parser and, when profiling, I observed that the bottleneck is in... file read! I extracted a very simple test to compare the performance of fstreams and FILE* when reading a big blob of data:
#include <stdio.h>
#include <chrono>
#include <cstring>      // memset
#include <fstream>
#include <functional>
#include <iostream>
#include <string>
void measure(const std::string& test, std::function<void()> function)
{
auto start_time = std::chrono::high_resolution_clock::now();
function();
auto duration = std::chrono::duration_cast<std::chrono::nanoseconds>(std::chrono::high_resolution_clock::now() - start_time);
std::cout<<test<<" "<<static_cast<double>(duration.count()) * 0.000001<<" ms"<<std::endl;
}
#define BUFFER_SIZE (1024 * 1024 * 1024)
int main(int argc, const char * argv[])
{
auto buffer = new char[BUFFER_SIZE];
memset(buffer, 123, BUFFER_SIZE);
measure("FILE* write", [buffer]()
{
FILE* file = fopen("test_file_write", "wb");
fwrite(buffer, 1, BUFFER_SIZE, file);
fclose(file);
});
measure("FILE* read", [buffer]()
{
FILE* file = fopen("test_file_read", "rb");
fread(buffer, 1, BUFFER_SIZE, file);
fclose(file);
});
measure("fstream write", [buffer]()
{
std::ofstream stream("test_stream_write", std::ios::binary);
stream.write(buffer, BUFFER_SIZE);
});
measure("fstream read", [buffer]()
{
std::ifstream stream("test_stream_read", std::ios::binary);
stream.read(buffer, BUFFER_SIZE);
});
delete[] buffer;
}
The results of running this code on my machine are:
FILE* write 1388.59 ms
FILE* read 1292.51 ms
fstream write 3105.38 ms
fstream read 3319.82 ms
fstream write/read are about 2 times slower than FILE* write/read! And this is while just reading a big blob of data, without any parsing or other features of fstreams. I'm running the code on Mac OS, Intel i7 2.6 GHz, 16 GB 1600 MHz RAM, SSD drive. Please note that when running the same code again, the time for FILE* read is very low (about 200 ms), probably because the file gets cached... This is why the files opened for reading are not created by this code.
Why when reading just a blob of binary data using fstream is so slow compared to FILE*?
EDIT 1: I updated the code and the times. Sorry for the delay!
EDIT 2: I added command line and new results (very similar to previous ones!)
$ clang++ main.cpp -std=c++11 -stdlib=libc++ -O3
$ ./a.out
FILE* write 1417.9 ms
FILE* read 1292.59 ms
fstream write 3214.02 ms
fstream read 3052.56 ms
Following the results for the second run:
$ ./a.out
FILE* write 1428.98 ms
FILE* read 196.902 ms
fstream write 3343.69 ms
fstream read 2285.93 ms
It looks like the file gets cached when reading, for both FILE* and stream, as the time decreases by the same amount for both of them.
EDIT 3: I reduced the code to this:
FILE* file = fopen("test_file_write", "wb");
fwrite(buffer, 1, BUFFER_SIZE, file);
fclose(file);
std::ofstream stream("test_stream_write", std::ios::binary);
stream.write(buffer, BUFFER_SIZE);
And started the profiler. It seems like the stream spends lots of time in the xsputn function, while the actual write calls have the same duration (as they should, since it's the same function...).
Running Time Self Symbol Name
3266.0ms 66.9% 0,0 std::__1::basic_ostream<char, std::__1::char_traits<char> >::write(char const*, long)
3265.0ms 66.9% 2145,0 std::__1::basic_streambuf<char, std::__1::char_traits<char> >::xsputn(char const*, long)
1120.0ms 22.9% 7,0 std::__1::basic_filebuf<char, std::__1::char_traits<char> >::overflow(int)
1112.0ms 22.7% 2,0 fwrite
1127.0ms 23.0% 0,0 fwrite
EDIT 4: For some reason this question was marked as a duplicate. I wanted to point out that I don't use printf at all; I use only std::cout to write the time. The files used in the read part are the output from the write part, copied with a different name to avoid caching.
It would seem that, on Linux, for this large set of data, the implementation of fwrite is much more efficient, since it uses write rather than writev.
I'm not sure WHY writev is so much slower than write, but that appears to be where the difference is. And I see absolutely no real reason as to why the fstream needs to use that construct in this case.
This can easily be seen by using strace ./a.out (where a.out is the program testing this).
Output:
Fstream:
clock_gettime(CLOCK_REALTIME, {1411978373, 114560081}) = 0
open("test", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
writev(3, [{NULL, 0}, {"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1073741824}], 2) = 1073741824
close(3) = 0
clock_gettime(CLOCK_REALTIME, {1411978386, 376353883}) = 0
write(1, "fstream write 13261.8 ms\n", 25fstream write 13261.8 ms) = 25
FILE*:
clock_gettime(CLOCK_REALTIME, {1411978386, 930326134}) = 0
open("test", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
write(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1073741824) = 1073741824
clock_gettime(CLOCK_REALTIME, {1411978388, 584197782}) = 0
write(1, "FILE* write 1653.87 ms\n", 23FILE* write 1653.87 ms) = 23
I don't have them fancy SSD drives, so my machine will be a bit slower on that - or something else is slower in my case.
As pointed out by Jan Hudec, I'm misinterpreting the results. I just wrote this:
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/uio.h>
#include <unistd.h>
#include <iostream>
#include <cstdlib>
#include <cstring>
#include <functional>
#include <chrono>
void measure(const std::string& test, std::function<void()> function)
{
auto start_time = std::chrono::high_resolution_clock::now();
function();
auto duration = std::chrono::duration_cast<std::chrono::nanoseconds>(std::chrono::high_resolution_clock::now() - start_time);
std::cout<<test<<" "<<static_cast<double>(duration.count()) * 0.000001<<" ms"<<std::endl;
}
#define BUFFER_SIZE (1024 * 1024 * 1024)
int main()
{
auto buffer = new char[BUFFER_SIZE];
memset(buffer, 0, BUFFER_SIZE);
measure("writev", [buffer]()
{
int fd = open("test", O_CREAT|O_WRONLY);
struct iovec vec[] =
{
{ NULL, 0 },
{ (void *)buffer, BUFFER_SIZE }
};
writev(fd, vec, sizeof(vec)/sizeof(vec[0]));
close(fd);
});
measure("write", [buffer]()
{
int fd = open("test", O_CREAT|O_WRONLY);
write(fd, buffer, BUFFER_SIZE);
close(fd);
});
}
It is the actual fstream implementation that does something daft - probably copying all the data in small chunks somewhere, somehow, or something like that. I will try to investigate further.
And the result is pretty much identical for both cases, and faster than both fstream and FILE* variants in the question.
Edit:
It would seem like, on my machine, right now, if you add fclose(file) after the write, it takes approximately the same amount of time for both fstream and FILE* - on my system, around 13 seconds to write 1GB - with old style spinning disk type drives, not SSD.
I can however write MUCH faster using this code:
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/uio.h>
#include <unistd.h>
#include <iostream>
#include <cstdlib>
#include <cstring>
#include <functional>
#include <chrono>
void measure(const std::string& test, std::function<void()> function)
{
auto start_time = std::chrono::high_resolution_clock::now();
function();
auto duration = std::chrono::duration_cast<std::chrono::nanoseconds>(std::chrono::high_resolution_clock::now() - start_time);
std::cout<<test<<" "<<static_cast<double>(duration.count()) * 0.000001<<" ms"<<std::endl;
}
#define BUFFER_SIZE (1024 * 1024 * 1024)
int main()
{
auto buffer = new char[BUFFER_SIZE];
memset(buffer, 0, BUFFER_SIZE);
measure("writev", [buffer]()
{
int fd = open("test", O_CREAT|O_WRONLY, 0660);
struct iovec vec[] =
{
{ NULL, 0 },
{ (void *)buffer, BUFFER_SIZE }
};
writev(fd, vec, sizeof(vec)/sizeof(vec[0]));
close(fd);
});
measure("write", [buffer]()
{
int fd = open("test", O_CREAT|O_WRONLY, 0660);
write(fd, buffer, BUFFER_SIZE);
close(fd);
});
}
gives times of about 650-900 ms.
I can also edit the original program to give a time of approximately 1000ms for fwrite - simply remove the fclose.
I also added this method:
measure("fstream write (new)", [buffer]()
{
std::ofstream* stream = new std::ofstream("test", std::ios::binary);
stream->write(buffer, BUFFER_SIZE);
// Intentionally no delete.
});
and then it takes about 1000 ms here too.
So, my conclusion is that, somehow, sometimes, closing the file makes it flush to disk. In other cases, it doesn't. I still don't understand why...
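A hedged aside on that last point: one way to take the flush-on-close variability out of the measurement is to fsync() the descriptor before closing, so the timing always includes the transfer to the device. A minimal sketch, reusing the buffer from the listings above (the function name is just illustrative):
#include <fcntl.h>
#include <unistd.h>

void write_and_sync(const char *buffer, size_t buffer_size)
{
    int fd = open("test", O_CREAT | O_WRONLY, 0660);
    if (fd < 0) return;
    write(fd, buffer, buffer_size);
    fsync(fd);    // block until the data has actually reached the device
    close(fd);
}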
TL;DR: Try adding this to your code before doing the writing:
const size_t bufsize = 256*1024;
char buf[bufsize];
mystream.rdbuf()->pubsetbuf(buf, bufsize);
When working with large files with fstream, make sure to use a stream buffer.
Counterintuitively, disabling stream buffering dramatically reduces performance. At least the MSVC implementation copies 1 char at a time to the filebuf when no buffer was set (see streambuf::xsputn()), which can make your application CPU-bound, which will result in lower I/O rates.
NB: You can find a complete sample application here.
A side note for those interested.
The main keywords are Windows Server 2016 / CloseHandle.
In our app we discovered a nasty bug on Windows Server 2016.
Our standard code under every other Windows version takes (ms):
time CreateFile/SetFilePointer 1 WriteFile 0 CloseHandle 0
On Windows Server 2016 we got:
time CreateFile/SetFilePointer 1 WriteFile 0 CloseHandle 275
And the time grows with the size of the file, which is absurd.
After a lot of investigation (we first found that CloseHandle was the culprit) we discovered that under Windows Server 2016 Microsoft attached a hook to the close function that triggers Windows Defender to scan the whole file and does not return until the scan is done (in other words, scanning is synchronous, which is pure madness).
When we added an exclusion in Defender for our file, everything worked fine.
I think this is bad design; no antivirus should block normal file activity inside a program's space in order to scan files. (Microsoft can do it because they have the power to do so.)
Contrary to other answers, a big issue with large file reads comes from buffering by the C standard library. Try using low-level read/write calls in large chunks (1024 KB) and see the performance jump.
File buffering by the C library is useful for reading or writing small chunks of data (smaller than the disk block size).
On Windows I got almost a 3x performance boost by dropping file buffering when reading and writing raw video streams.
I also opened the file using native OS (Win32) API calls and told the OS not to cache the file, as that involves yet another copy.
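As a rough illustration of the chunked low-level approach (a sketch assuming plain POSIX read(2) is available; the file name and chunk size are placeholders):
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <vector>

int main()
{
    const size_t chunk = 1024 * 1024;          // 1024 KB per read(2) call
    std::vector<char> buf(chunk);
    int fd = open("bigfile.bin", O_RDONLY);    // placeholder file name
    if (fd < 0) { perror("open"); return 1; }
    ssize_t n;
    size_t total = 0;
    while ((n = read(fd, buf.data(), buf.size())) > 0)
        total += static_cast<size_t>(n);
    close(fd);
    std::printf("read %zu bytes\n", total);
    return 0;
}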
The stream is somehow broken on the Mac - an old implementation or setup.
An old setup could cause the FILE to be written in the exe directory and the stream in the user directory; this shouldn't make any difference unless you have two disks or some other different setting.
On my lousy Vista I get
Normal buffer+Uncached:
C++ 201103
FILE* write 4756 ms
FILE* read 5007 ms
fstream write 5526 ms
fstream read 5728 ms
Normal buffer+Cached:
C++ 201103
FILE* write 4747 ms
FILE* read 454 ms
fstream write 5490 ms
fstream read 396 ms
Large Buffer+cached:
C++ 201103
5th run:
FILE* write 4760 ms
FILE* read 446 ms
fstream write 5278 ms
fstream read 369 ms
This shows that the FILE write is faster than the fstream, but slower in read than fstream ... but all numbers are within ~10% of each other.
Try adding some more buffering to your stream to see if that helps.
const int MySize = 1024*1024;
char MrBuf[MySize];
stream.rdbuf()->pubsetbuf(MrBuf, MySize);
The equivalent for FILE is
const int MySize = 1024*1024;
if (!setvbuf ( file , NULL , _IOFBF , MySize ))
DieInDisgrace();

Does posix_fallocate work with files opened in append mode?

I'm trying to preallocate disk space for file operations. However, I've encountered a weird issue where posix_fallocate only allocates one byte when I call it to allocate disk space for files opened in append mode, and the file contents are also unexpected. Has anyone seen this issue? My test code is:
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <cerrno>
int main(int argc, char **argv)
{
FILE *fp = fopen("append.txt", "w");
for (int i = 0; i < 5; ++i)
fprintf(fp, "## Test loop %d\n", i);
fclose(fp);
sleep(1);
int fid = open("append.txt", O_WRONLY | O_APPEND);
struct stat status;
fstat(fid, &status);
printf("INFO: sizeof 'append.txt' is %ld Bytes.\n", status.st_size);
int ret = posix_fallocate(fid, (off_t)status.st_size, 1024);
if (ret) {
switch (ret) {
case EBADF:
fprintf(stderr, "ERROR: %d is not a valid file descriptor, or is not opened for writing.\n", fid);
break;
case EFBIG:
fprintf(stderr, "ERROR: exceed the maximum file size.\n");
break;
case ENOSPC:
fprintf(stderr, "ERROR: There is not enough space left on the device\n");
break;
default:
break;
}
}
fstat(fid, &status);
printf("INFO: sizeof 'append.txt' is %ld Bytes.\n", status.st_size);
const char *hello = "hello world\n";
write(fid, hello, 12);
close(fid);
return 0;
}
And the expected result should be,
## Test loop 0
## Test loop 1
## Test loop 2
## Test loop 3
## Test loop 4
hello world
However, the result of the above program is,
## Test loop 0
## Test loop 1
## Test loop 2
## Test loop 3
## Test loop 4
^#hello world
So, what's "^#"?
And the message shows,
INFO: sizeof 'append.txt' is 75 Bytes.
INFO: sizeof 'append.txt' is 76 Bytes.
Any clues?
Thanks
Quick Answer
Yes, posix_fallocate does work with files opened in append mode, IF your filesystem supports the fallocate system call. If your filesystem does not support it, the glibc emulation adds a single 0 byte to the end in append mode.
More Information
This was a strange one and really puzzled me. I found the answer by using the strace program which shows what system calls are being made.
Check this out:
fallocate(3, 0, 74, 1000) = -1 EOPNOTSUPP (Operation not
supported)
fstat(3, {st_mode=S_IFREG|0664, st_size=75, ...}) = 0
fstatfs(3, {f_type=0xf15f, f_bsize=4096, f_blocks=56777565,
f_bfree=30435527, f_bavail=27551380, f_files=14426112,
f_ffree=13172614, f_fsid={1863489073, -1456395543}, f_namelen=143,
f_frsize=4096}) = 0
pwrite(3, "\0", 1, 1073) = 1
It looks like the GNU C Library is trying to help you here. The fallocate system call is apparently not implemented on your filesystem, so GLibC is emulating it by using pwrite to write a 0 byte out at the end of the requested allocation, thus extending the file.
This works fine in normal write mode. But in APPEND mode the write is always done at the end of the file so the pwrite writes one 0 byte at the end.
Not what was intended. Might be a GNU C Library bug.
It looks like ext4 does support fallocate. And if I write the file into /tmp it works. It fails in my home directory because I am using an encrypted home directory in Ubuntu with the ecryptfs filesystem
Per POSIX:
If the offset+len is beyond the current file size, then posix_fallocate() shall adjust the file size to offset+len. Otherwise, the file size shall not be changed.
So it doesn't make sense to use posix_fallocate with append mode, since it will extend the size of the file (filled with null bytes) and subsequent writes will take place after those null bytes, in space that's not yet reserved.
As for why it's only extending the file by one byte, are you sure that's correct? Have you measured? That sounds like a bug in the implementation.
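To illustrate the alternative this answer implies, here is a hedged sketch (my own illustration, not the original poster's code): skip O_APPEND, preallocate past the current end, and write at the old end explicitly with pwrite, so the data does not land after the newly added null bytes.
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstring>

int main()
{
    int fd = open("append.txt", O_WRONLY);        // note: no O_APPEND
    if (fd < 0) return 1;
    struct stat st;
    fstat(fd, &st);
    posix_fallocate(fd, st.st_size, 1024);        // reserve 1024 bytes after the current end
    const char *hello = "hello world\n";
    pwrite(fd, hello, strlen(hello), st.st_size); // write at the old end, not after the nulls
    close(fd);
    return 0;
}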

Xcode Error: “EXC_BAD_ACCESS”

I am attempting to compile and run a test C program in Xcode. This program reads 5 symbols from a text file and closes it. The program builds successfully, but when I try to run the program I get the error: GDB: Program received signal: "EXC_BAD_ACCESS" around fclose(in).
#include <iostream>
#include <cstdio>     // fopen, fread, fclose, printf
#include <climits>    // PATH_MAX
#include <unistd.h>
int main (int argc, const char * argv[])
{
bool b;
char inpath[PATH_MAX];
printf("Enter input file path :\r\n");
std::cin >> inpath;
FILE *in = fopen(inpath, "r+w");
char buf[5];
fread(&buf,sizeof(buf),5,in);
printf(buf);
fclose(in);
return 0;
}
What could be a cause of this?
Ah! sizeof(buf) will return 5, so you're asking for 25 bytes in a 5-byte buffer. This overwrites auto storage and clobbers in.
And, of course, note that printf(buf) will be attempting to print a buffer with no terminating null, so it will print garbage beyond the end of what was read.
The line
fread(&buf,sizeof(buf),5,in);
is wrong: read the man page of fread carefully (and remember that sizeof(buf) is the size of the whole buf array).
The line
printf(buf);
is wrong. Behavior is undefined if, for instance, buf contains %d.
You definitely should learn to use the debugger (and enable all warnings with your compiler).
fread(&buf,sizeof(buf),5,in);
this asks fread to read 5 elements of sizeof(buf) bytes each, which is not what you want.
The second and third parameters tell fread the size of each element you want to read and the number of elements.
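For reference, a corrected version of that call might look like the sketch below (a minimal example, assuming the intent is simply to read and print 5 bytes; the file name is a placeholder):
#include <cstdio>

int main()
{
    FILE *in = fopen("input.txt", "r");    // placeholder path
    if (!in) return 1;
    char buf[6];                           // one spare byte for the terminating null
    size_t n = fread(buf, 1, 5, in);       // element size 1, count 5 => at most 5 bytes
    buf[n] = '\0';                         // terminate before printing
    printf("%s", buf);                     // never pass raw input as the format string
    fclose(in);
    return 0;
}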

C/C++ best way to send a number of bytes to stdout

Profiling my program shows that the print function is taking a lot of time. How can I send "raw" byte output directly to stdout instead of using fwrite, and make it faster? (I need to send all 9 bytes in print() to stdout at the same time.)
void print(){
unsigned char temp[9];
temp[0] = matrix[0][0];
temp[1] = matrix[0][1];
temp[2] = matrix[0][2];
temp[3] = matrix[1][0];
temp[4] = matrix[1][1];
temp[5] = matrix[1][2];
temp[6] = matrix[2][0];
temp[7] = matrix[2][1];
temp[8] = matrix[2][2];
fwrite(temp,1,9,stdout);
}
matrix is defined globally as unsigned char matrix[3][3];
IO is not an inexpensive operation. It is, in fact, a blocking operation, meaning that the OS can preempt your process when you call write to allow more CPU-bound processes to run, before the IO device you're writing to completes the operation.
The only lower-level function you can use (if you're developing on a *nix machine) is the raw write function, but even then your performance will not be much faster than it is now. Simply put: IO is expensive.
The top rated answer claims that IO is slow.
Here's a quick benchmark with a sufficiently large buffer to take the OS out of the critical performance path, but only if you're willing to receive your output in giant blurps. If latency to first byte is your problem, you need to run in "dribs" mode.
Write 10 million records from a nine byte array
Mint 12 AMD64 on 3GHz CoreDuo under gcc 4.6.1
340ms to /dev/null
710ms to 90MB output file
15254ms to 90MB output file in "dribs" mode
FreeBSD 9 AMD64 on 2.4GHz CoreDuo under clang 3.0
450ms to /dev/null
550ms to 90MB output file on ZFS triple mirror
1150ms to 90MB output file on FFS system drive
22154ms to 90MB output file in "dribs" mode
There's nothing slow about IO if you can afford to buffer properly.
#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include <string.h>
int main (int argc, char* argv[])
{
int dribs = argc > 1 && 0==strcmp (argv[1], "dribs");
int err;
int i;
enum { BigBuf = 4*1024*1024 };
char* outbuf = malloc (BigBuf);
assert (outbuf != NULL);
err = setvbuf (stdout, outbuf, _IOFBF, BigBuf); // full buffering of stdout
assert (err == 0);
enum { ArraySize = 9 };
char temp[ArraySize];
enum { Count = 10*1000*1000 };
for (i = 0; i < Count; ++i) {
fwrite (temp, 1, ArraySize, stdout);
if (dribs) fflush (stdout);
}
fflush (stdout); // seems to be needed after setting own buffer
fclose (stdout);
if (outbuf) { free (outbuf); outbuf = NULL; }
}
The rawest form of output you can do is probably the write system call, like this
write (1, matrix, 9);
1 is the file descriptor for standard out (0 is standard in, and 2 is standard error). Your standard out will only write as fast as the one reading it at the other end (i.e. the terminal, or the program you're piping into), which might be rather slow.
I'm not 100% sure, but you could try setting non-blocking IO on fd 1 (using fcntl) and hope the OS will buffer it for you until it can be consumed by the other end. It's been a while, but I think it works like this
fcntl (1, F_SETFL, O_NONBLOCK);
YMMV though. Please correct me if I'm wrong on the syntax, as I said, it's been a while.
Perhaps your problem is not that fwrite() is slow, but that it is buffered.
Try calling fflush(stdout) after the fwrite().
This all really depends on your definition of slow in this context.
All printing is fairly slow, although iostreams are really slow for printing.
Your best bet would be to use printf, something along the lines of:
printf("%c%c%c%c%c%c%c%c%c\n", matrix[0][0], matrix[0][1], matrix[0][2], matrix[1][0],
matrix[1][1], matrix[1][2], matrix[2][0], matrix[2][1], matrix[2][2]);
As everyone has pointed out, IO in a tight inner loop is expensive. I have normally ended up doing a conditional cout of the matrix, based on some criteria, when I need to debug it.
If your app is a console app, try redirecting its output to a file; it will be a lot faster than console refreshes, e.g. app.exe > matrixDump.txt
What's wrong with:
fwrite(matrix,1,9,stdout);
Both the one- and two-dimensional arrays take up the same memory.
Try running the program twice, once with output and once without. You will notice that, overall, the one without the IO is fastest. Also, you could fork the process (or create a thread), with one writing to a file (stdout) and one doing the computations.
So first, don't print on every entry. Basically, what I am saying is do not do this:
for (int i = 0; i < 100; i++){
    printf("Your stuff");
}
Instead, allocate a buffer (either on the stack or on the heap), store your information there, and then write the whole buffer to stdout in one call, like this:
char *buffer = malloc(100);
for (int i = 0; i < 100; i++){
    buffer[i] = 1; // your byte value goes here
}
// once you are done, print it to the console with
write(1, buffer, 100);
free(buffer);
but in your case, just use write(1, temp, 9);
I am pretty sure you can increase output performance by increasing the buffer size, so that you make fewer fwrite calls. write might be faster, but I am not sure. Just try this:
❯ yes | dd of=/dev/null count=1000000
1000000+0 records in
1000000+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 2.18338 s, 234 MB/s
vs
> yes | dd of=/dev/null count=100000 bs=50KB iflag=fullblock
100000+0 records in
100000+0 records out
5000000000 bytes (5.0 GB, 4.7 GiB) copied, 2.63986 s, 1.9 GB/s
The same applies to your code. Some tests during the last few days show that good buffer sizes are probably between 1 << 12 (= 4096) and 1 << 16 (= 65536) bytes.
You can simply:
std::cout << temp;
printf is more C-style.
Yet, IO operations are costly, so use them wisely.

How do I read the results of a system() call in C++?

I'm using the following code to try to read the results of a df command in Linux using popen.
#include <iostream>  // std I/O
#include <cstdio>    // popen, pclose, fread, fseek, ftell
#include <cstdlib>   // malloc, free, exit
int main(int argc, char** argv) {
FILE* fp;
char * buffer;
long bufSize;
size_t ret_code;
fp = popen("df", "r");
if(fp == NULL) { // head off errors reading the results
std::cerr << "Could not execute command: df" << std::endl;
exit(1);
}
// get the size of the results
fseek(fp, 0, SEEK_END);
bufSize = ftell(fp);
rewind(fp);
// allocate the memory to contain the results
buffer = (char*)malloc( sizeof(char) * bufSize );
if(buffer == NULL) {
std::cerr << "Memory error." << std::endl;
exit(2);
}
// read the results into the buffer
ret_code = fread(buffer, 1, sizeof(buffer), fp);
if(ret_code != bufSize) {
std::cerr << "Error reading output." << std::endl;
exit(3);
}
// print the results
std::cout << buffer << std::endl;
// clean up
pclose(fp);
free(buffer);
return (EXIT_SUCCESS);
}
This code is giving me a "Memory error" with an exit status of '2', so I can see where it's failing, I just don't understand why.
I put this together from example code that I found on Ubuntu Forums and C++ Reference, so I'm not married to it. If anyone can suggest a better way to read the results of a system() call, I'm open to new ideas.
EDIT to the original: Okay, bufSize is coming up negative, and now I understand why. You can't randomly access a pipe, as I naively tried to do.
I can't be the first person to try to do this. Can someone give (or point me to) an example of how to read the results of a system() call into a variable in C++?
You're making this all too hard. popen(3) returns a regular old FILE * for a standard pipe file, which is to say, newline terminated records. You can read it with very high efficiency by using fgets(3) like so in C:
#include <stdio.h>
char bfr[BUFSIZ] ;
FILE * fp;
// ...
if((fp=popen("/bin/df", "r")) ==NULL) {
// error processing and return
}
// ...
while(fgets(bfr,BUFSIZ,fp) != NULL){
// process a line
}
In C++ it's even easier --
#include <cstdio>
#include <fstream>
#include <iostream>
#include <string>
FILE * fp;
if((fp = popen("/bin/df","r")) == NULL) {
    // error processing and exit
}
// Note: constructing an ifstream from a file descriptor is a non-standard
// extension (it is not part of portable C++).
std::ifstream ins(fileno(fp)); // ifstream ctor using a file descriptor
std::string s;
while (std::getline(ins, s)){
    // do something with the line in s
}
There's some more error handling there, but that's the idea. The point is that you treat the FILE * from popen just like any FILE *, and read it line by line.
Why would std::malloc() fail?
The obvious reason is "because std::ftell() returned a negative signed number, which was then treated as a huge unsigned number".
According to the documentation, std::ftell() returns -1 on failure. One obvious reason it would fail is that you cannot seek in a pipe or FIFO.
There is no escape; you cannot know the length of the command output without reading it, and you can only read it once. You have to read it in chunks, either growing your buffer as needed or parsing on the fly.
But, of course, you can simply avoid the whole issue by directly using the system call df probably uses to get its information: statvfs().
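A minimal sketch of that suggestion, assuming you only need the figures for a single mount point (the path "/" is just an example):
#include <sys/statvfs.h>
#include <cstdio>

int main()
{
    struct statvfs vfs;
    if (statvfs("/", &vfs) != 0) {
        perror("statvfs");
        return 1;
    }
    unsigned long long total = (unsigned long long)vfs.f_blocks * vfs.f_frsize;
    unsigned long long avail = (unsigned long long)vfs.f_bavail * vfs.f_frsize;
    printf("total: %llu bytes, available to non-root: %llu bytes\n", total, avail);
    return 0;
}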
(A note on terminology: "system call" in Unix and Linux generally refers to calling a kernel function from user-space code. Referring to it as "the results of a system() call" or "the results of a system(3) call" would be clearer, but it would probably be better to just say "capturing the output of a process.")
Anyway, you can read a process's output just like you can read any other file. Specifically:
You can start the process using pipe(), fork(), and exec(). This gives you a file descriptor, then you can use a loop to read() from the file descriptor into a buffer and close() the file descriptor once you're done. This is the lowest level option and gives you the most control.
You can start the process using popen(), as you're doing. This gives you a file stream. In a loop, you can read from the stream into a temporary variable or buffer using fread(), fgets(), or fgetc(), as Zarawesome's answer demonstrates, then process that buffer or append it to a C++ string.
You can start the process using popen(), then use the nonstandard __gnu_cxx::stdio_filebuf to wrap that, then create an std::istream from the stdio_filebuf and treat it like any other C++ stream. This is the most C++-like approach. Here's part 1 and part 2 of an example of this approach.
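A minimal sketch of that third approach (this relies on __gnu_cxx::stdio_filebuf, a libstdc++ extension, so it is GCC-specific and not portable):
#include <cstdio>
#include <ext/stdio_filebuf.h>   // libstdc++ extension
#include <iostream>
#include <string>

int main()
{
    FILE *fp = popen("df", "r");
    if (!fp) return 1;
    // wrap the C stream in a filebuf, then hang a std::istream off it
    __gnu_cxx::stdio_filebuf<char> filebuf(fp, std::ios::in);
    std::istream in(&filebuf);
    std::string line;
    while (std::getline(in, line))
        std::cout << line << '\n';
    pclose(fp);
    return 0;
}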
I'm not sure you can fseek/ftell pipe streams like this.
Have you checked the value of bufSize ? One reason malloc be failing is for insanely sized buffers.
Thanks to everyone who took the time to answer. A co-worker pointed me to the ostringstream class. Here's some example code that does essentially what I was attempting to do in the original question.
#include <iostream> // cout
#include <sstream>  // ostringstream
#include <cstdio>   // popen, pclose, fread, feof, ferror
int main(int argc, char** argv) {
FILE* stream = popen( "df", "r" );
std::ostringstream output;
while( !feof( stream ) && !ferror( stream ))
{
char buf[128];
int bytesRead = fread( buf, 1, 128, stream );
output.write( buf, bytesRead );
}
pclose( stream );
std::string result = output.str();
std::cout << "<RESULT>" << std::endl << result << "</RESULT>" << std::endl;
return (0);
}
To answer the question in the update:
char buffer[1024];
char * line = NULL;
while ((line = fgets(buffer, sizeof buffer, fp)) != NULL) {
// parse one line of df's output here.
}
Would this be enough?
First thing to check is the value of bufSize - if that happens to be <= 0, chances are that malloc returns a NULL as you're trying to allocate a buffer of size 0 at that point.
Another workaround would be to ask malloc to provide you with a buffer of the size (bufSize + n) with n >= 1, which should work around this particular problem.
That aside, the code you posted is pure C, not C++, so including <iostream> is overdoing it a little.
Check your bufSize. ftell can return -1 on error, and this can lead to malloc not allocating, leaving buffer with a NULL value.
The reason ftell fails is the popen: you can't seek in pipes.
Pipes are not random access. They're sequential, which means that once you read a byte, the pipe is not going to send it to you again. Which means, obviously, you can't rewind it.
If you just want to output the data back to the user, you can just do something like:
// your file opening code
int c;
while ((c = getc(fp)) != EOF)
{
std::cout << static_cast<char>(c);
}
This will pull bytes out of the df pipe, one by one, and pump them straight into the output.
Now if you want to access the df output as a whole, you can either pipe it into a file and read that file, or concatenate the output into a construct such as a C++ string.