I am writing a concurrent, persistent message queue in C++, which requires concurrent read access to a file without using memory-mapped I/O. The short story is that several threads will need to read from different offsets of the file.
Originally I had a file object with typical read/write methods, and threads would acquire a mutex to call them. However, somewhere I did not acquire the mutex properly, so one thread moved the file offset during another thread's read/write, and that thread started reading/writing at an incorrect part of the file.
So, the paranoid solution is to have one open file handle per thread. Now I've got a lot of file handles to the same file, which I'm assuming can't be great.
I'd like to use something like pread, which lets callers pass the offset directly to the read/write call.
However, that function is only available on Linux, and I need equivalent implementations on Windows, AIX, Solaris and HP-UX. Any suggestions?
On Windows, the ReadFile() function can do it; see the lpOverlapped parameter and the documentation on asynchronous I/O.
With NIO, java.nio.channels.FileChannel has a read(ByteBuffer dst, long position) method, which internally uses pread.
Oh wait, your question is about C++, not Java. Well, I just looked at the JDK source code to see how it does it for Windows, but unfortunately on Windows it isn't atomic: it simply seeks, then reads, then seeks back.
For Unix platforms, the punchline is that pread is standard for any XSI-supporting (X/Open System Interface, apparently) operating system: http://www.opengroup.org/onlinepubs/009695399/functions/pread.html
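For illustration, here is a minimal sketch of calling pread on one of those XSI systems. The file name and offset are made up, and on older systems you may need to define _XOPEN_SOURCE before the includes:

#define _XOPEN_SOURCE 500  // exposes pread on older XSI systems
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("queue.dat", O_RDONLY);  // hypothetical file
    if (fd < 0) { std::perror("open"); return 1; }
    char buf[4096];
    // Each thread can call pread on the same fd with its own offset;
    // the shared file position is never touched.
    ssize_t n = pread(fd, buf, sizeof(buf), 8192 /* per-thread offset */);
    if (n < 0) std::perror("pread");
    else std::printf("read %zd bytes\n", n);
    close(fd);
    return 0;
}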
Based on another answer, the closest I could come up with is this. However, there is a bug: ReadFile will change the file offset, while pread is guaranteed not to change it. There's no real way to fix this here, because other code can do normal read() and write() concurrently with no lock. Has anybody found a call that will not change the offset?
#include <windows.h>
#include <io.h>        // _get_osfhandle
#include <algorithm>   // std::min
#include <cstdint>     // uint64_t
#include <cstring>     // memset

unsigned int FakePRead(int fd, void *to, std::size_t size, uint64_t off) {
  // size_t might be 64-bit. DWORD is always 32.
  const std::size_t kMax = static_cast<std::size_t>(1UL << 31);
  DWORD reading = static_cast<DWORD>(std::min<std::size_t>(kMax, size));
  DWORD ret;
  OVERLAPPED overlapped;
  memset(&overlapped, 0, sizeof(OVERLAPPED));
  overlapped.Offset = static_cast<DWORD>(off);
  overlapped.OffsetHigh = static_cast<DWORD>(off >> 32);
  if (!ReadFile((HANDLE)_get_osfhandle(fd), to, reading, &ret, &overlapped)) {
    // TODO: set errno to something?
    // -1 wraps to UINT_MAX here since the return type is unsigned.
    return -1;
  }
  // The read was limited to 1 << 31 above, so the cast is safe.
  return static_cast<unsigned int>(ret);
}
Related
On Windows, the WriteFile() function has a parameter called lpOverlapped which lets you specify an offset at which to write to the file.
I was wondering, is there is an fwrite() cross-platform equivalent of that?
I see that if the file is opened in "rb+" mode, I might be able to use fseek() to write at a particular offset. My question is: will this approach be equivalent to the overlapped WriteFile(), and will it produce the same behaviour on all platforms?
Background
The reason I need this is because I am writing blocked compressed data streams to a file, and I want to be able to load a specific block from the file and be able to decompress it. So, basically if I keep track of where the block begins in a file, I can load the block and decompress it in a more efficient manner. I know that there are probably better ways to do this, but I need this solution for some backwards compatibility.
Assuming you are okay with using POSIX functions and not just things from the C or C++ standard libraries, the solution is pwrite (aka: positioned write).
ssize_t rc = pwrite(file_handle, data_ptr, data_size, destination_offset);
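For the blocked-compression use case in the question, a hedged sketch of how this might be used: record where each block starts as you write it, so any block can be fetched later by offset. The names and types here are illustrative, not from the original post.

#include <fcntl.h>
#include <unistd.h>
#include <vector>

// Write one compressed block at a known offset and remember where it went.
bool writeBlock(int fd, std::vector<off_t>& index,
                const void* block, size_t len, off_t at)
{
    ssize_t rc = pwrite(fd, block, len, at);  // file position is untouched
    if (rc != (ssize_t)len)
        return false;
    index.push_back(at);  // later: pread(fd, buf, len, index[i])
    return true;
}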
I think you are confusing "overlapped" with "overwrite"/"offset". I haven't studied the specifics of why Microsoft's overlapped writes take an offset parameter (though I think it makes sense, as described below). In general, when Microsoft talks about "overlapped" I/O, they are talking about how to synchronize events such as starting a write, receiving notification that the write completed, and starting another write that might or might not overlap with a previous one. In that last case, "overlap" means what you would expect: overlapping ranges within the contents of the file. Microsoft, however, means that the write overlaps in time with your thread's execution. Note that this gets very complicated if more than one thread can write to the same file.
If possible, and surely if you want portable code, you want to avoid all this nonsense and just do the simplest write possible in each context, which means avoid Microsoft optimizations like "overlapped IO" unless you really need performance. (And if you need absolutely optimal performance, you might want to cache the file yourself and manage the overlaps, then write it once from start to finish.)
While pwrite is probably the best solution, there is an alternative that sticks with stdio functions. Unfortunately, to make it thread-safe you need non-standard extensions that take direct control of the FILE*'s internal lock, and the names aren't portable: POSIX defines one pair (flockfile/funlockfile) and Windows defines another (_lock_file/_unlock_file).
That said, you could use these semi-portable constructs to use stdio functions to ensure no buffering conflicts (pwrite to fileno(some_FILE_star) could cause problems if the FILE* buffer overlaps the pwrite location, since pwrite won't fix up the buffer):
// Error checking omitted; you should actually check returns in real code.
// Needs <stdio.h>; flockfile/funlockfile are POSIX.
size_t pfwrite(const void *ptr, size_t size, size_t n,
               size_t offset, FILE *stream) {
    // Take FILE*'s lock and hold it for the entire transaction
    flockfile(stream);                          // _lock_file on Windows
    // Record position
    long origpos = ftell(stream);
    // Seek to desired offset and write
    fseek(stream, (long)offset, SEEK_SET);      // Possibly offset * size, not just offset?
    size_t written = fwrite(ptr, size, n, stream);
    // Seek back to original position
    fseek(stream, origpos, SEEK_SET);
    // Release FILE*'s lock now that the transaction is complete
    funlockfile(stream);                        // _unlock_file on Windows
    return written;
}
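For example, a hypothetical use of the sketch above: rewriting a 4-byte header field in place without disturbing the stream's current position (the file name and magic value are made up):

FILE *f = fopen("blocks.bin", "rb+");
if (f) {
    const unsigned int magic = 0xB10CDA7A;  /* hypothetical header field */
    /* Rewrite the header at offset 0; pfwrite restores the stream's
       position afterwards, so appends continue unaffected. */
    pfwrite(&magic, sizeof(magic), 1, 0, f);
    fclose(f);
}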
From the Linux documentation, POLLOUT means "Normal data may be written without blocking." Well, but this explanation is ambiguous.
How much data is it possible to write without blocking after poll reports this event? 1 byte? 2 bytes? A gigabyte?
After a POLLOUT event on a blocking socket, how can I check how much data I can send to the socket without blocking?
The poll system call only tells you that something happened on the file descriptor (physical device); it doesn't tell you how much space is available to read or write. To know exactly how many bytes were transferred, you must call read() or write() and check its return value, which is the number of bytes actually read or written.
Thus, poll() is mainly used by applications that must handle multiple input or output streams without getting stuck on any one of them. Plain blocking read() or write() can't do this, since they can't monitor multiple descriptors at the same time within one thread.
BTW, for a device driver, the underlying implementation of poll usually looks like this (code from LDD3):
static unsigned int scull_p_poll(struct file *filp, poll_table *wait)
{
    struct scull_pipe *dev = filp->private_data;
    unsigned int mask = 0;

    down(&dev->sem);
    poll_wait(filp, &dev->inq, wait);
    poll_wait(filp, &dev->outq, wait);
    if (dev->rp != dev->wp)
        mask |= POLLIN | POLLRDNORM;    /* readable */
    if (spacefree(dev))
        mask |= POLLOUT | POLLWRNORM;   /* writable */
    up(&dev->sem);
    return mask;
}
If poll() sets the POLLOUT flag then at least one byte may be written without blocking. You may then find that a write() operation performs only a partial write, so indicated by returning a short count. You must always be prepared for partial reads and writes when multiplexing I/O via poll() and/or select().
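As a concrete illustration, here is a hedged sketch of a write loop that copes with partial writes. It assumes 'sock' is a connected, non-blocking socket descriptor; the names are made up:

#include <poll.h>
#include <unistd.h>
#include <cerrno>

// Drain 'len' bytes through a non-blocking socket, retrying on
// partial writes. Returns false on a hard error.
bool send_all(int sock, const char *data, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        struct pollfd pfd = { sock, POLLOUT, 0 };
        if (poll(&pfd, 1, -1) < 0)
            return false;                       // wait until writable
        if (pfd.revents & POLLOUT) {
            // POLLOUT guarantees only that *some* write will succeed;
            // write() may still accept fewer bytes than requested.
            ssize_t n = write(sock, data + sent, len - sent);
            if (n > 0)
                sent += (size_t)n;
            else if (n < 0 && errno != EINTR && errno != EAGAIN)
                return false;
        }
    }
    return true;
}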
I am writing an application that produces and logs a lot of data in the form of ASCII and binary output (not mixed; one or the other depending on the log). The application is single-threaded (which should make things easier) and I want to write my data to disk in the order that it was generated. I need to implement a write(char* data) method that takes a null-terminated character array and writes it to disk. Ideally, I want the function to buffer the data and return before the data is actually written to disk; I figure there must be some way for Windows to set up a thread and do this in the background. The only thing I care about is that the data appears in the log file in the order it was written. What is the best way to do this? Someone else implemented the current write method and it looks like:
void writeData(const char* data, int size)
{
    if (fp != 0)
        fwrite(data, 1, size, fp);
}
fp is the file pointer.
The C <cstdio> reference for fwrite:
http://www.cplusplus.com/reference/cstdio/fwrite/
In a multi-threaded program, you may need something like a log queue. In a single-threaded program, the order is guaranteed: fwrite is already buffered, and successive calls append in the order they are made.
If you are talking Windows-only, then you pretty much have two options: overlapped I/O through the WinAPI, or setting up a separate thread in your program to handle file I/O (which can potentially be cross-platform by using pthreads).
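For instance, a minimal sketch of the second option, assuming a simple FIFO queue is enough to preserve write order (all names here are illustrative, not from the original post):

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// A background writer: write() enqueues and returns immediately; the
// worker thread drains the queue in FIFO order, so the file matches
// the order in which the data was generated.
class AsyncLog {
public:
    explicit AsyncLog(FILE *fp)
        : fp_(fp), done_(false), worker_([this] { run(); }) {}
    ~AsyncLog() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }
    void write(const char *data, size_t size) {  // returns before disk I/O
        { std::lock_guard<std::mutex> lk(m_); q_.emplace(data, size); }
        cv_.notify_one();
    }
private:
    void run() {
        std::unique_lock<std::mutex> lk(m_);
        for (;;) {
            cv_.wait(lk, [this] { return done_ || !q_.empty(); });
            while (!q_.empty()) {
                std::string chunk = std::move(q_.front());
                q_.pop();
                lk.unlock();                     // write without the lock
                fwrite(chunk.data(), 1, chunk.size(), fp_);
                lk.lock();
            }
            if (done_) return;
        }
    }
    FILE *fp_;
    bool done_;
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> q_;
    std::thread worker_;
};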
I tried to create an approximately 4 GB file using the C fopen, fwrite, fflush and fclose functions on a Linux machine, but I observed that fclose() takes a very long time to close the file, around 40-50 seconds. I checked different forums for the reason for this slowness and changed the code as suggested there, using setvbuf() to make the stream unbuffered like the write() function, but I still could not resolve the issue.
totalBytes = 4294967296ULL;  // 4 GB file
bufferSize = 2000;
while (size <= totalBytes)
{
    len = fwrite(buffer, 1, bufferSize, fp);
    if (len != bufferSize) {
        cout << "ERROR (Internal): in calling ACE_OS::fwrite() " << endl;
        ret = -1;
    }
    size = size + len;
}
...
...
...
fflush(fp);
fclose(fp);
Any solution to the above problem would be very helpful.
thanks,
Ramesh
The operating system defers the actual writing to disk; the data may not reach the disk at any particular write operation, or even at fflush().
I looked at the man page of fflush() and saw the following note:
Note that fflush() only flushes the user space buffers provided by the C library. To ensure that the data is physically stored on disk the kernel buffers must be flushed too, for example, with sync(2) or fsync(2).
(there's a similar note for fclose() as well, although behaviour on your Linux system seems different)
It will take a long time to write that much data to the disk, and there's no way around that fact.
fopen/fwrite/fclose are C standard wrappers around the low-level open/write/close. All fflush does is make sure every buffered chunk has been handed to 'write'; there is no synchronization point at fflush. The operating system is flushing its write buffer before it allows 'close' to return.
Yeah, the time taken by fclose() is part of the time taken by the OS to write your data to the disk.
Look at fsync for achieving what you probably wanted with fflush. If you want to display some progress and the time taken by fclose() is making it inaccurate, you could do an fsync() every 100 MB written, or something like that.
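A hedged sketch of that idea; the buffer and threshold names are illustrative:

#include <cstdio>
#include <unistd.h>   // fsync, fileno (POSIX)

// Force data to disk every ~100 MB so progress tracks real disk writes
// and fclose() has little left to flush.
void write_with_progress(FILE *fp, const char *buf, size_t bufSize,
                         unsigned long long totalBytes) {
    const unsigned long long kSyncEvery = 100ULL << 20;  // 100 MB
    unsigned long long written = 0, sinceSync = 0;
    while (written < totalBytes) {
        unsigned long long left = totalBytes - written;
        size_t chunk = left < bufSize ? (size_t)left : bufSize;
        size_t n = fwrite(buf, 1, chunk, fp);
        if (n == 0) break;        // write error; a real version would report it
        written += n;
        sinceSync += n;
        if (sinceSync >= kSyncEvery) {
            fflush(fp);           // push C-library buffers to the kernel
            fsync(fileno(fp));    // push kernel buffers to the disk
            sinceSync = 0;
            // report progress here: 'written' bytes are now on disk
        }
    }
}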
I need to implement a simple "spill to disk" layer for large volume of data coming off a network socket. I was hoping to have two C FILE* streams, one used by a background thread writing to the file, one used by a front end thread reading it.
The two streams are so one thread can be writing at one offset, while the other is reading elsewhere - without taking a lock and blocking the other thread.
There will be a paging mechanism so the reads/writes are at random access locations - not necessarily sequential.
One more caveat, this needs to work on Windows and Linux.
The question: after the fwrite to the first stream has returned, is that written data guaranteed to be immediately visible to an fread on the second stream?
If not, what other options might I consider?
So the POSIX pread/pwrite functions turned out to be what I needed. Here's a version for Win32:
// Needs <windows.h> and <io.h> (for _get_osfhandle). The return type is
// signed so -1 can signal an error, and the 64-bit offset is split into
// the OVERLAPPED fields explicitly instead of through a pointer cast.
// Note: nbytes is truncated to 32 bits by the DWORD cast.
__int64 pread64(int fd, void* buf, size_t nbytes, __int64 offset)
{
    OVERLAPPED ovl;
    memset(&ovl, 0, sizeof(ovl));
    ovl.Offset = (DWORD)offset;
    ovl.OffsetHigh = (DWORD)(offset >> 32);
    DWORD nBytesRead;
    if (!ReadFile((HANDLE)_get_osfhandle(fd), buf, (DWORD)nbytes, &nBytesRead, &ovl))
        return -1;
    return nBytesRead;
}

__int64 pwrite64(int fd, const void* buf, size_t nbytes, __int64 offset)
{
    OVERLAPPED ovl;
    memset(&ovl, 0, sizeof(ovl));
    ovl.Offset = (DWORD)offset;
    ovl.OffsetHigh = (DWORD)(offset >> 32);
    DWORD nBytesWritten;
    if (!WriteFile((HANDLE)_get_osfhandle(fd), buf, (DWORD)nbytes, &nBytesWritten, &ovl))
        return -1;
    return nBytesWritten;
}
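For completeness, an illustrative usage fragment; 'fd' is assumed to be an open descriptor (e.g. from _open() on Windows) and the offsets are arbitrary:

// Writer and reader threads can each pass their own offset on one fd.
void page_roundtrip(int fd) {
    char page[4096] = {0};
    pwrite64(fd, page, sizeof(page), 0);       // writer thread: page 0
    pread64(fd, page, sizeof(page), 4096);     // reader thread: page 1
}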
(And thank you everyone for input on this - much appreciated).
This sounds like a great fit for memory-mapped I/O. It's guaranteed to be coherent, very fast, and keeping track of multiple pointers is straightforward.
You'll need different functions to set up the memory mapping on different OSes, but the actual I/O is completely portable (plain pointer dereference); a minimal Linux-side sketch follows the list below.
Linux: open, mmap
Windows: CreateFileMapping, MapViewOfFile
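Here is that sketch, assuming a fixed-size spill file. Error handling is trimmed and the file name is made up; on Windows the equivalent setup uses CreateFileMapping/MapViewOfFile.

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>

int main() {
    const size_t kSize = 1 << 20;              // 1 MB spill file
    int fd = open("spill.dat", O_RDWR | O_CREAT, 0644);
    ftruncate(fd, kSize);                      // size the file first
    char *base = static_cast<char*>(
        mmap(nullptr, kSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    // Writer and reader just use pointers at different offsets; the
    // mapping is coherent, so no seek bookkeeping is needed.
    std::memcpy(base + 4096, "hello", 5);      // write at offset 4096
    char out[6] = {0};
    std::memcpy(out, base + 4096, 5);          // read it back elsewhere
    munmap(base, kSize);
    close(fd);
    return 0;
}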
Two FILE* streams over the same file definitely will not give you the semantics you want. If you disabled buffering, it might be reasonable to expect it to work, but I still don't think there are any guarantees. Stdio/FILE is really not the right tool for specialized I/O needs like this.
The POSIX way to do what you want is with file descriptors and the pread/pwrite functions. I suspect there's a Windows way (or you could emulate them based on some other underlying Windows primitive) but I don't know it.
Also Ben's suggestion of using memory-mapped IO is a very good one, assuming the file fits in your address space.