why read() is slower than getc() - c++

Why is the read() syscall slower than the getc() function?
for (;;) {
    chr++;
    amr = read(file1, &wc1, 1);
    amr2 = read(file2, &wc2, 1);
    if (wc1 == wc2) {
        if (wc1 == '\n')
            line++;
        if (amr == 0) {
            if (eflg)
                return (1);
            return (0);
        }
        continue;
    }
is slower than
for (;;) {
    chr++;
    c1 = getc(file1);
    c2 = getc(file2);
    if (c1 == c2) {
        if (c1 == '\n')
            line++;
        if (c1 == EOF) {
            if (eflg)
                return (1);
            return (0);
        }
        continue;
    }
getc() itself uses the read() system call internally, so why is it slower?

read() involves a context switch into the kernel, which is relatively slow. When you use it directly and read one byte at a time, you make many context switches. But when you use getc(), it calls read() once for 4 or 8 kB and then returns characters from that buffer without further context switches until it exhausts the buffer.
If you use read() with a larger buffer, it can even be faster than getc(), because the generic buffering of the standard C library has some overhead.
(Edit) Note that disks can only be read in whole blocks, which are 512 bytes on all generally used storage media, so there has to be some buffering in the kernel anyway. And because memory is allocated in pages of 4096 bytes, most systems (Linux certainly does this) read at least that much with each request to the physical storage. But the context switch is also expensive, so the additional layer of buffering in userland still saves a lot of time. This buffering is used in all libc I/O, which includes everything using FILE* (the buffer is part of the FILE structure), so fread() will be faster than read() for small reads.
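For illustration only, here is a minimal sketch (not from the original answer; the 64 KiB size and the count_lines name are made up) of doing a line count with read() and a large buffer instead of one byte per call:

#include <unistd.h>

// Count newlines on an already-open file descriptor using one read()
// syscall per 64 KiB instead of one per byte.
long count_lines(int fd)
{
    static char buf[64 * 1024];
    long lines = 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        for (ssize_t i = 0; i < n; i++)
            if (buf[i] == '\n')
                lines++;
    return n < 0 ? -1 : lines;   // -1 on read error
}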

Answer #1: It isn't slower.
Answer #2: It depends.
You don't want to make a syscall (which includes a context switch on memory-protected systems) for every byte you read from the file. Instead, on the first byte-sized access, you read an appropriate amount of data (say 4k) into memory and serve the first byte to the caller. On subsequent byte-sized reads, you don't have to call the kernel or actually access the file at all; you simply hand out the next byte from the buffer until you have to read another 4k block.
This is what the standard C library's calls (fread(), fgetc(), fgets() etc. etc.) do by default. You can check BUFSIZ to get the default buffer size. You can change the buffer size, or disable buffering altogether, via the setvbuf() call.
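As a small sketch of those knobs (the file name, the open_buffered name, and the 1 MiB size are made up), you might do something like:

#include <stdio.h>

// Give one particular stream a bigger buffer than the default BUFSIZ.
// setvbuf() must be called before the first read or write on the stream.
static char bigbuf[1 << 20];                        /* 1 MiB, arbitrary */

FILE *open_buffered(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (f)
        setvbuf(f, bigbuf, _IOFBF, sizeof bigbuf);
    /* or disable buffering entirely: setvbuf(f, NULL, _IONBF, 0); */
    return f;
}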
read() is not part of the standard C library, it is a POSIX syscall. Basically, it is the backend for the standard C library calls on POSIX systems. (On a Windows system, fgetc() would call the Win32 API instead.) As such, read() doesn't buffer, and calling it for byte-sized chunks is inefficient as hell. If you call read(), you usually do so because you want to do the buffering yourself.
Generally speaking, don't mix POSIX and standard library I/O calls. Use the POSIX API for low-level access, use the standard library for portable convenience (and good default performance).

Does my C++ code handle 100GB+ file copying? [closed]

I need a cross-platform portable function that is able to copy a 100GB+ binary file to a new destination. My first solution was this:
void copy(const string &src, const string &dst)
{
    FILE *f;
    char *buf;
    long len;
    f = fopen(src.c_str(), "rb");
    fseek(f, 0, SEEK_END);
    len = ftell(f);
    rewind(f);
    buf = (char *) malloc((len+1) * sizeof(char));
    fread(buf, len, 1, f);
    fclose(f);
    f = fopen(dst.c_str(), "a");
    fwrite(buf, len, 1, f);
    fclose(f);
}
Unfortunately, the program was very slow. I suspect the buffer had to keep 100GB+ in memory. I'm tempted to try the new code (taken from Copy a file in a sane, safe and efficient way):
std::ifstream src_(src, std::ios::binary);
std::ofstream dst_ = std::ofstream(dst, std::ios::binary);
dst_ << src_.rdbuf();
src_.close();
dst_.close();
My question is about this line:
dst_ << src_.rdbuf();
What does the C++ standard say about it? Does this compile to a byte-by-byte transfer or to a whole-buffer transfer (like my first example)?
I'm curious whether the << compiles to something useful for me. Maybe I don't have to invest my time in something else, and can just let the compiler do the job inside the operator? If the operator translates to looping for me, why should I do it myself?
PS: std::filesystem::copy is impossible as the code has to work for C++11.
The crux of your question is what happens when you do this:
dst_ << src_.rdbuf();
Clearly this is two function calls: one to istream::rdbuf(), which simply returns a pointer to a streambuf, followed by one to ostream::operator<<(streambuf*), which is documented as follows:
After constructing and checking the sentry object, checks if sb is a null pointer. If it is, executes setstate(badbit) and exits. Otherwise, extracts characters from the input sequence controlled by sb and inserts them into *this until one of the following conditions are met: [...]
Reading this, the answer to your question is that copying a file in this way will not require buffering the entire file contents in memory--rather it will read a character at a time (perhaps with some chunked buffering, but that's an optimization that shouldn't change our analysis).
Here is one implementation: https://gcc.gnu.org/onlinedocs/libstdc++/libstdc++-api-4.6/a01075_source.html (__copy_streambufs). Essentially it is a loop calling sgetc() and sputc() repeatedly until EOF is reached. The memory required is small and constant.
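A simplified sketch of such a loop (an illustration in terms of the public streambuf interface, not the actual libstdc++ source):

#include <streambuf>
#include <string>      // std::char_traits

// Copy characters from *in to *out until EOF or an insertion failure.
// Memory use is small and constant; on failure the offending character
// is left unread in the input sequence.
std::streamsize copy_streambuf(std::streambuf* in, std::streambuf* out)
{
    typedef std::char_traits<char> T;
    std::streamsize copied = 0;
    for (int c = in->sgetc(); c != T::eof(); c = in->snextc())
    {
        if (out->sputc(T::to_char_type(c)) == T::eof())
            break;          // do not consume the character that failed
        ++copied;
    }
    return copied;
}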
The C++ standard (I checked C++98, so this should be extremely compatible) says in [lib.ostream.inserters]:
basic_ostream<charT,traits>& operator<<
(basic_streambuf<charT,traits> *sb);
Effects: If sb is null calls setstate(badbit) (which may throw ios_base::failure).
Gets characters from sb and inserts them in *this. Characters are read from sb and inserted until any of the following occurs:
end-of-file occurs on the input sequence;
inserting in the output sequence fails (in which case the character to be inserted is not extracted);
an exception occurs while getting a character from sb.
If the function inserts no characters, it calls setstate(failbit) (which may throw ios_base::failure (27.4.4.3)). If an exception was thrown while extracting a character, the function sets failbit in error state, and if failbit is on in exceptions() the caught exception is rethrown.
Returns: *this.
This description says << on rdbuf works on a character-by-character basis. In particular, if inserting of a character fails, that exact character remains unread in the input sequence. This implies that an implementation cannot just extract the whole contents into a single huge buffer upfront.
So yes, there's a loop somewhere in the internals of the standard library that does a byte-by-byte (well, charT really) transfer.
However, this does not mean that the whole thing is completely unbuffered. This is simply about what operator<< does internally. Your ostream object will still accumulate data internally until its buffer is full, then call write (or whatever low-level function your OS uses).
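If you want to experiment with the size of that internal buffer, a possible sketch is the following (the 1 MiB size and the function name are made up, and the standard leaves the exact effect of pubsetbuf() implementation-defined, so treat it as a tuning experiment rather than a guarantee):

#include <fstream>
#include <vector>

// Install a larger buffer on an ofstream before opening it.
void open_with_big_buffer(std::ofstream& dst_, const char* path)
{
    static std::vector<char> obuf(1 << 20);   // 1 MiB, arbitrary; must outlive the stream
    dst_.rdbuf()->pubsetbuf(obuf.data(),
                            static_cast<std::streamsize>(obuf.size()));
    dst_.open(path, std::ios::binary);        // set the buffer before open()
}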
Unfortunately, the program was very slow.
Your first solution is wrong for a very simple reason: it reads the entire source file into memory, then writes it out in one go.
Files were invented (perhaps in the 1960s) to handle data that doesn't fit in memory (and has to live in some "slower" storage, at that time hard disks or drums, or perhaps even tapes). And they have always been copied in "chunks".
The current (Unix-like) definition of a file (as a sequence of bytes that is open-ed, read, write-n, close-d) is more recent than the 1960s, probably the late 1970s or early 1980s. It comes with the notion of streams (standardized in C with <stdio.h> and in C++ with std::fstream).
So your program has to work (like every file copying program today) for files much bigger than the available memory. You need a loop that reads into some buffer, writes it out, and repeats.
The size of the buffer is very important. If it is too small, you'll make too many IO operations (e.g. system calls). If it is too big, IO might be inefficient or even not work.
In practice, the buffer should today be much less than your RAM, typically several megabytes.
Your code is more C-like than C++-like because it uses fopen. Here is a possible solution in C with <stdio.h>; if you code in genuine C++, adapt it to <fstream> (a sketch of such an adaptation follows the C version below):
#include <stdio.h>
#include <stdlib.h>

void copyfile(const char *destpath, const char *srcpath) {
    // experiment with various buffer sizes
#define MYBUFFERSIZE (4*1024*1024) /* four megabytes */
    char *buf = malloc(MYBUFFERSIZE);
    if (!buf) { perror("malloc buf"); exit(EXIT_FAILURE); };
    FILE *filsrc = fopen(srcpath, "rb");   /* "b" matters for binary data on Windows */
    if (!filsrc) { perror(srcpath); exit(EXIT_FAILURE); };
    FILE *fildest = fopen(destpath, "wb");
    if (!fildest) { perror(destpath); exit(EXIT_FAILURE); };
    for (;;) {
        size_t rdsiz = fread(buf, 1, MYBUFFERSIZE, filsrc);
        if (rdsiz == 0) {                  /* end of file or input error */
            if (ferror(filsrc)) { perror("fread"); exit(EXIT_FAILURE); };
            break;
        }
        size_t wrsiz = fwrite(buf, rdsiz, 1, fildest);
        if (wrsiz != 1) { perror("fwrite"); exit(EXIT_FAILURE); };
    }
    if (fclose(filsrc)) { perror("fclose source"); exit(EXIT_FAILURE); };
    if (fclose(fildest)) { perror("fclose dest"); exit(EXIT_FAILURE); };
    free(buf);
}
For simplicity, I am reading the buffer in byte components and writing it as a whole. A better solution is to handle partial writes.
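For reference, here is a possible <fstream> adaptation of the same loop (a sketch only: the same arbitrary 4 MiB buffer, minimal error handling, and it does not handle partial writes any better than the C version):

#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <vector>

void copyfile_cxx(const char* destpath, const char* srcpath)
{
    std::vector<char> buf(4 * 1024 * 1024);            // experiment with the size
    std::ifstream src(srcpath, std::ios::binary);
    std::ofstream dst(destpath, std::ios::binary);
    if (!src || !dst) { std::fprintf(stderr, "cannot open files\n"); std::exit(EXIT_FAILURE); }

    while (src) {
        src.read(buf.data(), static_cast<std::streamsize>(buf.size()));
        std::streamsize got = src.gcount();            // bytes actually read
        if (got > 0 && !dst.write(buf.data(), got)) {
            std::fprintf(stderr, "write failed\n");
            std::exit(EXIT_FAILURE);
        }
    }
    if (src.bad()) { std::fprintf(stderr, "read failed\n"); std::exit(EXIT_FAILURE); }
}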
Apparently dst_ << src_.rdbuf(); does some loop internally (I have to admit I never used it and did not understand that at first; thanks to melpomene for correcting me). But the actual buffer size matters a great deal. The two other answers (by John Swinck and by melpomene) focus on that rdbuf() thing. My answer focuses on explaining why copying can be slow when you do it as in your first solution, why you need to loop, and why the buffer size matters so much.
If you really care about performance, you need to understand implementation details and operating system specific things. So read Operating systems: three easy pieces. Then understand how, on your particular operating system, the various buffering is done (there are several layers of buffers involved: your program buffers, the standard stream buffers, the kernel buffers, the page cache). Don't expect your C++ standard library to buffer in an optimal fashion.
Don't even dream of coding in standard C++ (without operating system specific stuff) an optimal or very fast copying function. If performance matters, you need to dive in OS specific details.
On Linux, you might use time(1), oprofile(1), perf(1) to measure your program's performance. You could use strace(1) to understand the various system calls involved (see syscalls(2) for a list). You might even code (in a Linux specific way) using directly the open(2), read(2), write(2), close(2) and perhaps readahead(2), mmap(2), posix_fadvise(2), madvise(2), sendfile(2) system calls.
Finally, large file copying is limited by disk IO (which is the bottleneck), so even after spending days optimizing OS-specific code, you won't win much. The hardware is the limitation. You should probably write the code that is most readable for you (it might be that dst_ << src_.rdbuf(); thing, which does the looping for you) or use some library providing file copy. You might win a tiny amount of performance by tuning the various buffer sizes.
If the operator translates to looping for me, why should I do it myself?
Because you have no explicit guarantee on the actual buffering done (at the various levels). As I explained, buffering matters for performance. Perhaps the actual performance is not that critical for you, and the ordinary settings of your system and standard library (and their default buffer sizes) might be enough.
PS. Your question contains at least 3 different (but related) questions. I don't find it clear (so I downvoted it), because I did not understand which one is the most relevant: performance? robustness? the meaning of dst_ << src_.rdbuf();? why the first solution is slow? how to copy large files quickly?

Efficient way to read multiple lines of a file at one time?

I am now trying to handle a large file (several GB), so I am thinking of using multiple threads. The file contains multiple lines of data like:
data1 attr1.1 attr1.2 attr1.3
data2 attr2.1 attr2.2 attr2.3
data3 attr3.1 attr3.2 attr3.3
I am thinking of using one thread to read multiple lines into buffer1 first, and another thread to handle the data in buffer1 line by line, while the reading thread starts reading the file into buffer2. The handling thread then continues when buffer2 is ready, and the reading thread reads into buffer1 again.
I have finished the handler part using fread for a small file (several KB), but I am not sure how to make the buffer contain only complete lines instead of splitting a line at the end of the buffer, like this:
data1 attr1.1 attr1.2 attr1.3
data2 attr2.1 att
Also, I see that fgets or ifstream getline can read a file line by line, but would that be very costly since it does many IOs?
I am struggling to figure out what the best way to do this is. Is there an efficient way to read multiple lines at one time? Any advice is appreciated.
C stdio and C++ iostream functions use buffered I/O. Small reads only have function-call and locking overhead, not read(2) system call overhead.
Without knowing the line length ahead of time, fgets has to either use a buffer or read one byte at a time. Luckily, the C/C++ I/O semantics allow it to use buffering, so every mainstream implementation does. (According to the docs, mixing stdio and I/O on the underlying file descriptors gives undefined results. This is what allows buffering.)
You're right that it would be a problem if every fgets required a system call.
You might find it useful for one thread to read lines and put the lines into some kind of data structure that's useful for the processing thread.
If you don't have to do much processing on each line, doing the I/O in the same thread as the processing will keep everything in the L1 cache of that CPU, though. Otherwise data will end up in L1 of the I/O thread, and then have to make it to L1 of the core running the processing thread.
Depending on what you want to do with your data, you can minimize copying by memory-mapping the file in place. Or read with fread, or skip the stdio layer entirely and just use POSIX open / read, if you don't need your code to be as portable. Scanning a buffer for newlines might have less overhead than what the stdio functions do.
You can handle the leftover line at the end of the buffer by copying it to the front of the buffer, and calling the next fread with a reduced buffer size. (Or, make your buffer ~1k bigger than the size of your fread calls, so you can always read multiples of the memory and filesystem page size (typically 4kiB), unless the trailing part of the line is > 1k.)
Or use a circular buffer, but reading from a circular buffer means checking for wraparound every time you touch it.
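A minimal sketch of the first approach (the buffer size and process_line() are made up; it assumes no single line is longer than the buffer):

#include <cstdio>
#include <cstring>

void process_line(char* line);   // hypothetical per-line work, defined elsewhere

void read_by_lines(std::FILE* f)
{
    enum { SZ = 64 * 1024 };
    static char buf[SZ + 1];
    size_t kept = 0;             // bytes of a partial line carried over
    size_t got;
    while ((got = std::fread(buf + kept, 1, SZ - kept, f)) > 0) {
        size_t len = kept + got;
        buf[len] = '\0';
        char* start = buf;
        char* nl;
        while ((nl = std::strchr(start, '\n')) != NULL) {
            *nl = '\0';
            process_line(start);                   // a complete line
            start = nl + 1;
        }
        kept = len - (size_t)(start - buf);        // leftover partial line
        std::memmove(buf, start, kept);            // move it to the front
    }
    if (kept > 0) { buf[kept] = '\0'; process_line(buf); }   // final line without '\n'
}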
It all depends on what processing you want to do afterwards: do you need to keep a copy of the lines? Do you intend to process the input as std::strings? etc.
Here are some general remarks that could help you further:
istream::getline() and fgets() are buffered operations. So I/O is already reduced and you could assume the performance is already correct.
std::getline() is also buffered. Nevertheless, if you don't need to process std::strings the function would cost you a considerable number of memory allocations/deallocations, which might impact performance.
Block operations like read() or fread() can achieve economies of scale if you can afford large buffers. This can be especially efficient if you use the data in a throw-away fashion (because you can avoid copying the data and work directly in the buffer), but at the cost of extra complexity.
But all these considerations should not make us forget that performance is very much affected by the library implementation that you use.
I've done a little informal benchmark reading a million lines in the format you've shown:
* With MSVC2015 on my PC, read() is twice as fast as fgets(), and almost 4 times faster than reading into a std::string.
* With GCC on CodingGround, compiling with -O3, fgets() and both getline() variants are approximately the same, and read() is slower.
Here is the full code if you want to play around.
Here is the code that shows how to move the buffer around.
int nr = 0;        // number of bytes read
bool last = false; // last (incomplete) read
while (!last)
{
    // here nr contains the number of bytes kept from the incomplete line
    last = !ifs.read(buffer + nr, szb - nr);
    nr = nr + ifs.gcount();
    char *s, *p = buffer, *pe = p + nr;
    do {   // process complete lines in the buffer
        for (s = p; p != pe && *p != '\n'; p++)
            ;
        if (p != pe || (p == pe && last)) {
            if (p != pe)
                *p++ = '\0';
            lines++;              // TO DO: here s is a null-terminated line to process
            sln += strlen(s);     // (dummy operation for the example)
        }
    } while (p != pe);            // until the end of the buffer is reached
    std::copy(s, pe, buffer);     // copy the last (incomplete) line to the beginning of the buffer
    nr = pe - s;                  // and prepare the info for the next iteration
}

What's the difference between read() and getc()

I have two code segments:
while ((n = read(0, buf, BUFFSIZE)) > 0)
    if (write(1, buf, n) != n)
        err_sys("write error");

while ((c = getc(stdin)) != EOF)
    if (putc(c, stdout) == EOF)
        err_sys("write error");
Some things I've read on the internet confuse me. I know that standard I/O does buffering automatically, but I pass a buf to read(), so read() is also doing buffering, right? And it seems that getc() reads data char by char; how much data will the buffer hold before sending it all out?
Thanks
While both functions can be used to read from a file, they are very different. First of all, on many systems read is a lower-level function, and may even be a system call directly into the OS. The read function also isn't standard C or C++; it's part of e.g. POSIX. It can also read arbitrarily sized blocks, not just one byte at a time. There's no buffering (except maybe at the OS/kernel level), and it doesn't differentiate between "binary" and "text" data. And on POSIX systems, where read is a system call, it can be used to read from all kinds of devices, not only files.
The getc function is a higher-level function. It usually uses buffered input (so input is read in blocks into a buffer, sometimes by using read, and the getc function gets its characters from that buffer). It also returns only a single character at a time. It's part of the C and C++ specifications as part of the standard library. Also, there may be conversions between the data read and the data returned by the function, depending on whether the file was opened in text or binary mode.
Another difference is that read is also always a function, while getc might be a preprocessor macro.
Comparing read and getc doesn't really make much sense, more sense would be comparing read with fread.
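To make that comparison concrete, here is a hedged sketch of the two chunked variants side by side (the 64 KiB size and function names are made up); both do roughly one I/O request per chunk, fread() just adds the stdio layer on top:

#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

enum { CHUNK = 64 * 1024 };
static char chunkbuf[CHUNK];

long total_with_read(const char* path)         // POSIX read(): no stdio buffering
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    long total = 0;
    ssize_t n;
    while ((n = read(fd, chunkbuf, CHUNK)) > 0)
        total += n;
    close(fd);
    return n < 0 ? -1 : total;
}

long total_with_fread(const char* path)        // stdio fread(): buffered FILE* on top
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return -1;
    long total = 0;
    size_t n;
    while ((n = std::fread(chunkbuf, 1, CHUNK, f)) > 0)
        total += (long)n;
    std::fclose(f);
    return total;
}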

fclose() function slow

I tried to create a roughly 4 GB file using the C++ fopen, fwrite, fflush and fclose functions on a Linux machine, but I observed that the fclose() function takes a very long time to close the file, around 40-50 seconds. I checked different forums to find the reason for this slowness and changed the code as suggested there, using setvbuf() to make the I/O unbuffered like write(), but still could not resolve the issue.
totalBytes = 4294967296;   // 4GB file
bufferSize = 2000;
while (size <= totalBytes)
{
    len = fwrite(buffer, 1, bufferSize, fp);
    if (len != bufferSize) {
        cout << "ERROR (Internal): in calling ACE_OS::fwrite() " << endl;
        ret = -1;
    }
    size = size + len;
}
...
...
...
fflush(fp);
fclose(fp);
Any solution to the above problem would be very helpful.
thanks,
Ramesh
The operating system defers the actual writing to the disk and may not write the data out during any given write operation, or even at fflush().
I looked at the man page of fflush() and saw the following note:
Note that fflush() only flushes the user space buffers provided by the C library. To ensure that the data is physically stored on disk the kernel buffers must be flushed too, for example, with sync(2) or fsync(2).
(there's a similar note for fclose() as well, although behaviour on your Linux system seems different)
It will take a long time to write that much data to the disk, and there's no way around that fact.
fopen/fwrite/fclose are C standard wrappers around the low level open/write/close. All fflush is doing is making sure all the 'write' calls have been made for something buffered. There is no "synchronization point" at the fflush. The operating system is flushing the write buffer before it allows 'close' to return.
Yeah, the time taken by fclose() is part of the time taken by the OS to write your data to the disk.
Look at fsync() for achieving what you probably wanted from fflush(). If you want to display some progress and the time taken by fclose() is making it inaccurate, you could do an fsync() every 100 MB written, or something like that.
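A hedged sketch of that idea (the function name, the 100 MB threshold, and the surrounding variables are assumptions, not the original code):

#include <cstdio>
#include <unistd.h>     // fsync(), fileno() (POSIX)

// Flush the stdio buffer and the kernel buffers roughly every 100 MB,
// so progress reporting stays accurate and fclose() has little left to do.
bool write_with_progress(std::FILE* fp, const char* buffer,
                         size_t bufferSize, long long totalBytes)
{
    long long written = 0, sinceSync = 0;
    const long long SYNC_EVERY = 100LL * 1024 * 1024;   // arbitrary threshold
    while (written < totalBytes) {
        size_t len = std::fwrite(buffer, 1, bufferSize, fp);
        if (len != bufferSize)
            return false;                                // write error
        written   += (long long)len;
        sinceSync += (long long)len;
        if (sinceSync >= SYNC_EVERY) {
            std::fflush(fp);            // stdio buffer -> kernel
            fsync(fileno(fp));          // kernel buffers -> disk
            sinceSync = 0;
            /* report progress here */
        }
    }
    return true;
}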

Consistency of two C FILE* streams on a single file

I need to implement a simple "spill to disk" layer for large volume of data coming off a network socket. I was hoping to have two C FILE* streams, one used by a background thread writing to the file, one used by a front end thread reading it.
The two streams are so one thread can be writing at one offset, while the other is reading elsewhere - without taking a lock and blocking the other thread.
There will be a paging mechanism so the reads/writes are at random access locations - not necessarily sequential.
One more caveat, this needs to work on Windows and Linux.
The question: after the fwrite to the first stream has returned, is that written data guaranteed to be immediately visible to an fread on the second stream?
If not, what other options might I consider?
So the POSIX pread/pwrite functions turned out to be what I needed. Here's a version for Win32:
#include <windows.h>
#include <io.h>        // _get_osfhandle
#include <string.h>    // memset

size_t pread64(int fd, void* buf, size_t nbytes, __int64 offset)
{
    OVERLAPPED ovl;
    memset(&ovl, 0, sizeof(ovl));
    *((__int64*)&ovl.Offset) = offset;   // fills Offset and OffsetHigh in one go
    DWORD nBytesRead;
    if (!ReadFile((HANDLE)_get_osfhandle(fd), buf, nbytes, &nBytesRead, &ovl))
        return -1;
    return nBytesRead;
}

size_t pwrite64(int fd, void* buf, size_t nbytes, __int64 offset)
{
    OVERLAPPED ovl;
    memset(&ovl, 0, sizeof(ovl));
    *((__int64*)&ovl.Offset) = offset;
    DWORD nBytesWritten;
    if (!WriteFile((HANDLE)_get_osfhandle(fd), buf, nbytes, &nBytesWritten, &ovl))
        return -1;
    return nBytesWritten;
}
(And thank you everyone for input on this - much appreciated).
This sounds like a great fit for memory-mapped I/O. It's guaranteed to be coherent, very fast, and keeping track of multiple pointers is straightforward.
You'll need different functions to set up the memory mapping on different OSes, but the actual I/O is completely portable (plain pointer dereferences). A minimal Linux-side sketch is shown after the list below.
Linux: open, mmap
Windows: CreateFileMapping, MapViewOfFile
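For example, a minimal Linux-side sketch (the function name is made up, file size handling and error reporting are simplified; MAP_SHARED is what makes writes visible to other mappings of the same file):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map an existing file for reading and writing; after this, both threads
// can access any offset through plain pointers.
char* map_whole_file(const char* path, size_t* out_len)
{
    int fd = open(path, O_RDWR);
    if (fd < 0) return NULL;
    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return NULL; }
    void* p = mmap(NULL, (size_t)st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);                                // the mapping stays valid
    if (p == MAP_FAILED) return NULL;
    *out_len = (size_t)st.st_size;
    return (char*)p;                          // read/write via pointer arithmetic
}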
This definitely will not give you the semantics you want. If you disabled buffering, it might be reasonable to expect it to work, but I still don't think there are any guarantees. Stdio/FILE is really not the right tool for specialized IO needs like this.
The POSIX way to do what you want is with file descriptors and the pread/pwrite functions. I suspect there's a Windows way (or you could emulate them based on some other underlying Windows primitive) but I don't know it.
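Usage on the POSIX side is roughly this sketch (names and sizes are made up); each call carries its own offset, so the two threads never fight over a shared file position:

#include <unistd.h>

// Writer thread: put a page at its own offset.
bool spill_page(int fd, const char* page, size_t len, off_t where)
{
    return pwrite(fd, page, len, where) == (ssize_t)len;
}

// Reader thread: fetch a (possibly different) page; returns bytes read or -1.
ssize_t fetch_page(int fd, char* page, size_t len, off_t where)
{
    return pread(fd, page, len, where);
}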
Also Ben's suggestion of using memory-mapped IO is a very good one, assuming the file fits in your address space.