I have three processes communicating over named pipes: a server, a writer, and a reader. The basic idea is that the writer can store huge (~GB) binary blobs on the server, and the reader(s) can retrieve them. But instead of sending the data over the named pipe, memory mapping is used.
The server creates an unnamed file-backed mapping with CreateFileMapping with PAGE_READWRITE protection, then duplicates the handle into the writer. After the writer has done its job, the handle is duplicated into any number of interested readers.
The writer maps the handle with MapViewOfFile in FILE_MAP_WRITE mode.
The reader maps the handle with MapViewOfFile in FILE_MAP_READ|FILE_MAP_COPY mode.
On the reader I want copy-on-write semantics, so as long as the mapping is only read, it is shared between all reader instances. But if a reader wants to write into it (e.g. in-place parsing or image processing), the impact should be limited to the modifying process, with as few copied pages as possible.
The problem
When the reader tries to write into the mapping, it dies with a segmentation fault, as if FILE_MAP_COPY were not being honored.
What's wrong with the above described method? According to MSDN this should work...
We have the same mechanism implemented on Linux as well (with mmap and fd passing in AF_UNIX ancillary buffers) and it works as expected.
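For reference, here is a minimal sketch of the server side of the setup (simplified: the client process handle would come from OpenProcess() on a PID exchanged over the pipe, and a pagefile-backed section stands in for our real one; error handling elided):

    #include <windows.h>

    // Create the section and duplicate its handle into a client
    // (writer or reader). hClient is an assumption standing in for
    // whatever the pipe protocol provides.
    HANDLE ShareSection(HANDLE hClient, ULONGLONG cb)
    {
        HANDLE hSection = CreateFileMappingW(
            INVALID_HANDLE_VALUE,              // backed by the paging file
            nullptr,
            PAGE_READWRITE,                    // writer needs write access
            static_cast<DWORD>(cb >> 32),      // size, high part
            static_cast<DWORD>(cb),            // size, low part
            nullptr);                          // unnamed

        HANDLE hRemote = nullptr;              // valid in hClient's handle table
        DuplicateHandle(GetCurrentProcess(), hSection, hClient, &hRemote,
                        0, FALSE, DUPLICATE_SAME_ACCESS);
        return hRemote;                        // send this value over the pipe
    }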
The problem here is that MapViewOfFile is badly designed and/or documented. It is a shell (with restricted functionality) over ZwMapViewOfSection: the dwDesiredAccess parameter of MapViewOfFile is converted into the Win32Protect parameter of ZwMapViewOfSection.
The FILE_MAP_READ|FILE_MAP_COPY combination is converted to PAGE_READONLY page protection; this is why you get a page fault on write.
You need to use the FILE_MAP_COPY flag on its own. It is converted to PAGE_WRITECOPY page protection, and in that case everything works.
The best solution, of course, is to use ZwMapViewOfSection directly with PAGE_WRITECOPY page protection.
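That said, the plain Win32 fix is enough here. A minimal sketch of the reader side (the handle and view size are assumed to arrive via your pipe protocol):

    #include <windows.h>

    // Reader: pass FILE_MAP_COPY alone -- not FILE_MAP_READ|FILE_MAP_COPY --
    // so the view is created with PAGE_WRITECOPY protection.
    void* MapCopyOnWrite(HANDLE hSection, SIZE_T cb)
    {
        void* p = MapViewOfFile(hSection,
                                FILE_MAP_COPY, // copy-on-write (readable too)
                                0, 0,          // offset high/low
                                cb);           // 0 maps the whole section
        // Reads stay shared between readers; the first write to a page
        // gives this process a private copy of just that page.
        return p;
    }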
TL;DR: RbMm is correct; you must pass just FILE_MAP_COPY to MapViewOfFile to get copy-on-write behavior.
The current Microsoft documentation is wrong: it states that FILE_MAP_COPY can be OR'ed with FILE_MAP_ALL_ACCESS, FILE_MAP_READ, or FILE_MAP_WRITE.
Older versions of MSDN correctly say that you must choose exactly one of the access modes:
Type of access to the file view and, therefore, the protection of the pages mapped by the file. This parameter can be one of the following values.
FILE_MAP_WRITE
FILE_MAP_READ
FILE_MAP_ALL_ACCESS
FILE_MAP_COPY
No longer relevant but still surprising: on Windows 95/98/ME the copy-on-write behavior only applied to the file; writes were propagated to views in other processes!
Related
According to the Boost.Iostreams reference (in section 3.6, at the very bottom):
http://www.boost.org/doc/libs/1_64_0/libs/iostreams/doc/index.html
Although the Boost.Iostreams Filter and Device concepts can
accommodate non-blocking i/o, the C++ standard library stream and
stream buffer interfaces cannot, since they lack a means to
distinguish between temporary and permanent failures to satisfy a read
or write request
However, the function std::istream::readsome appears to be non-blocking, in that the available characters are returned immediately, without a blocking wait (the only cost being a copy from memory). My understanding is that:
std::istream::read will block until EOF is reached or the requested number of characters has been read.
std::istream::readsome will return immediately with characters copied from the internal buffer.
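To illustrate, a minimal sketch of my understanding (data.bin is a placeholder file name):

    #include <fstream>
    #include <iostream>

    int main()
    {
        std::ifstream in("data.bin", std::ios::binary);
        char buf[4096];

        // read(): blocks until 4096 characters arrive or EOF/error hits.
        in.read(buf, sizeof buf);
        std::cout << "read() got " << in.gcount() << " chars\n";

        in.clear(); // a short file sets eofbit/failbit; reset before readsome()

        // readsome(): returns at once with whatever is already buffered,
        // which may legitimately be zero characters.
        std::cout << "readsome() got " << in.readsome(buf, sizeof buf)
                  << " chars\n";
    }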
I agree with you that readsome is not a blocking operation. However, as specified, it is wholly inadequate as an interface for performing what is usually called "non-blocking I/O".
First, there is no guarantee that readsome will ever return new data, even if it is available. So to guarantee you actually make progress, you must use one of the blocking interfaces eventually.
Second, there is no way to know when readsome will return data. There is no way to "poll" the stream, or to get a "notification" or "event" or "callback". A usable non-blocking interface needs at least one of these.
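To make that concrete, here is the pattern the interface forces on you, sketched under those constraints: poll with readsome(), then fall back to a blocking read() anyway, because nothing tells you when data will appear:

    #include <istream>

    // Sketch only: without a poll/notification mechanism, progress can
    // only be guaranteed by eventually making the blocking call.
    std::streamsize get_some(std::istream& in, char* buf, std::streamsize n)
    {
        std::streamsize got = in.readsome(buf, n); // may return 0 forever
        if (got > 0)
            return got;        // data happened to be sitting in the buffer
        in.read(buf, n);       // no event/callback exists, so just block
        return in.gcount();
    }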
In short, readsome appears to be a half-baked and under-specified attempt to provide a non-blocking interface to I/O streams. But I have never seen it used in production code, and I would not expect to.
I think the Boost documentation overstates the argument, because as you observe, readsome is certainly capable of distinguishing temporary from permanent failure. But their conclusion is still correct for the reasons above.
When looking into non-blocking portability, I didn't find anything in the C++ standard library that actually does what you think readsome does.
If your goal is portability, my interpretation was that the section that mattered most was this:
http://en.cppreference.com/w/cpp/io/basic_istream/readsome
For example, when used with std::ifstream, some library
implementations fill the underlying filebuf with data as soon as the
file is opened (and readsome() on such implementations reads data,
potentially, but not necessarily, the entire file), while other
implementations only read from file when an actual input operation is
requested (and readsome() issued after file opening never extracts any
characters).
This says that different implementations that use the iostream interface are allowed to do their work lazily, and readsome() doesn't guarantee that the work even gets kicked off.
However, I think your interpretation that readsome is guaranteed not to block is true.
In C++, I know I can read or write a file using system calls like read or write, and I can also do it with fstream's help.
Now I'm implementing disk management, which is a component of a DBMS. For simplicity I only use the disk manager to manage the space of a single Unix file.
All I know is that fstream wraps system calls like read and write and provides some buffering.
However, I was wondering whether this affects atomicity and synchronization or not?
My question is which way should I use and why?
No, and particularly not on Unix. A DBMS is going to want contiguous files. That means either a Unix variant that supports them or creating a disk partition.
You're also going to want to handle the buffering yourself rather than rely on the C++ library's buffering.
I could go on, but streams are for streams of data, not for secure, reliable, structured data.
The following information about synchronization and thread safety of fstream can be found in the ISO C++ standard:
27.2.3 Thread safety [iostreams.threadsafety]
Concurrent access to a stream object (27.8, 27.9), stream buffer
object (27.6), or C Library stream (27.9.2) by multiple threads may
result in a data race (1.10) unless otherwise specified (27.4). [
Note: Data races result in undefined behavior (1.10). —end note ]
If one thread makes a library call a that writes a value to a stream
and, as a result, another thread reads this value from the stream
through a library call b such that this does not result in a data
race, then a’s write synchronizes with b’s read.
C/C++ file I/O operations are not thread-safe by default. So if you are planning to use fstream or the open/write/read system calls, you will have to provide the synchronization yourself in your implementation. You can use the std::mutex mechanism provided in the new C++ standard (i.e. C++11) to synchronize your file I/O.
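For example, a minimal sketch of such caller-side locking with std::mutex (names and the page-writing scheme are placeholders; note the lock must cover the seek and the write together, since the stream's file position is shared state):

    #include <fstream>
    #include <mutex>
    #include <string>

    std::mutex   g_io_mutex;  // guards g_db_file across threads
    std::fstream g_db_file;   // opened elsewhere on the DBMS's backing file

    void write_page(std::streamoff offset, const std::string& page)
    {
        std::lock_guard<std::mutex> lock(g_io_mutex);
        g_db_file.seekp(offset);                   // seek + write must be
        g_db_file.write(page.data(),               // atomic as a pair
                        static_cast<std::streamsize>(page.size()));
        g_db_file.flush();
    }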
I'm trying to understand something about HGLOBALs, because I just found out that what I thought is simply wrong.
In app A I GlobalAlloc() data (with GMEM_SHARE|GMEM_MOVABLE) and place the string "Test" in it. Now, what can I give to another application to get to that data?
I thought (wrongly!) that HGLOBALs are valid in all processes, which is obviously wrong, because an HGLOBAL is a HANDLE to the global data, not a pointer to the global data (that's where I said "OHHHH!").
So how can I pass the HGLOBAL to another application?
Notice: I want to pass just a "pointer" to the data, not the data itself, like in the clipboard.
Thanks a lot! :-)
(This is just a very long comment as others have already explained that Win32 takes different approach to memory sharing.)
I would say that you are reading books (or tutorials) on Windows programming which are quite old and obsolete, as Win16 has been virtually dead for quite some time.
16-bit Windows (3.x) didn't have the concept of memory isolation (or a virtual, flat address space) that 32-bit (and later) Windows versions provide. Memory there was divided into local (to the process) and global sections, both living in the same global address space. Descriptors like HGLOBAL were used so that memory blocks could be moved around in physical memory and still be accessed correctly despite their new location in the address space (after being properly pinned with LocalLock()/GlobalLock()).
Win32 uses pointers instead, since physical memory pages can be moved without affecting their location in the virtual address space. It still provides all of the Global* and Local* API functions for compatibility reasons, but they should not be used anymore; the usual heap management should be used instead (e.g. malloc() in C or the new operator in C++).
Also, several different kinds of pointers existed on Win16 in order to reflect the several different addressing modes available on x86: near (same segment), far (segment:offset) and huge (normalised segment:offset). You can still see things like FARPTR in legacy Win16 code that got ported to Win32, but such macros are defined to be empty, since in flat mode only near pointers are used.
Read the documentation. With the introduction of 32-bit processing, GlobalAlloc() does not actually allocate global memory anymore.
To share a memory block with another process, you could allocate the block with GlobalAlloc() and put it on the clipboard, then have the other process retrieve it. Or you can allocate a block of shared memory using CreateFileMapping() and MapViewOfFile() instead.
Each process "thinks" that it owns the full memory space available on the computer. No process can "see" the memory space of another process. As such, normally, nothing a process stores can be seen by another process.
Because it can be necessary to pass information between processes, certain mechanisms exist to provide this functionality.
One approach is message passing; one process issues a message to another, for example over a pipe, or a socket, or by a Windows message.
Another is shared memory, where a given block of memory is made available to two or more processes, such that whatever one process writes can be seen by the others.
Don't be confused by the GMEM_SHARE flag. It does not work the way you might have supposed. From MSDN:
The following values are obsolete, but are provided for compatibility
with 16-bit Windows. They are ignored.
GMEM_SHARE
The GMEM_SHARE flag is explained by Raymond Chen:
In 16-bit Windows, the GMEM_SHARE flag controlled whether the memory
should outlive the process that allocated it.
To share memory with another process/application, you should instead take a look at file mappings: Memory-mapped files and how they work.
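A minimal sketch of that approach, assuming both apps agree on a placeholder object name (error handling elided):

    #include <windows.h>
    #include <cstring>

    // App A: create a named mapping and store the string.
    HANDLE Publish()
    {
        HANDLE h = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                      PAGE_READWRITE, 0, 4096,
                                      L"Local\\MySharedBlock"); // agreed name
        char* p = static_cast<char*>(MapViewOfFile(h, FILE_MAP_WRITE, 0, 0, 0));
        std::strcpy(p, "Test");
        UnmapViewOfFile(p);
        return h; // keep this open as long as the data must stay alive
    }

    // App B: open the same mapping by name and read the string.
    void Consume()
    {
        HANDLE h = OpenFileMappingW(FILE_MAP_READ, FALSE, L"Local\\MySharedBlock");
        const char* p = static_cast<const char*>(
            MapViewOfFile(h, FILE_MAP_READ, 0, 0, 0));
        // p now points at the "Test" written by app A
        UnmapViewOfFile(p);
        CloseHandle(h);
    }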
I SEEM to be having an issue with GlobalLock in my application. I say seem because I haven't been able to witness the issue by stepping through yet, but when I let it run it breaks in one of two locations.
The app has multiple threads (say 2) simultaneously reading and writing bitmaps from PDF files. Each thread handles a different file.
The first location where it breaks is while I am reading a DIB from the PDF to be OCRed (OCR reads the characters on the bitmap and turns them into string data). The second location is while a new PDF is being created, with the string data being added over the bitmap.
GlobalLock is being used on a HANDLE created by the following:
GlobalAlloc(GMEM_MOVEABLE, uBytes);
I either get an access violation (always in the first location) or GlobalLock returns a NULL pointer (in the second).
It seems like one file is being read and another is having a copy written at the same time. There seems to be no pattern to which files it happens on.
Now, I understand that the VC++ runtime has been multithreaded since 2005 (I am using VS2010 with the 2008 toolchain). But is GlobalLock part of the runtime? It seems to me more like a platform thing, independent of the runtime.
I want to avoid just putting a CRITICAL_SECTION around GlobalLock and GlobalUnlock to get them to work, or at least I don't want to do so without knowing why.
Can anyone inform me better about GlobalLock/Unlock?
-A fish out of water
First, the Global* heap routines are provided for compatibility with 16-bit Windows. They still work, but there's no real reason to use them anymore, except for compatibility with routines that still use global heap object handles. Note that GlobalLock/GlobalUnlock are not threading locks - they prevent the memory from moving, but multiple threads can GlobalLock the same object at the same time.
That said, they are otherwise thread-safe; they take a heap lock internally, so there is no need to wrap your own locking around every Global* call. If you are having problems like this, it suggests you may be trying to GlobalLock a freed object, or you may be corrupting the heap (heap overflows, use-after-free, etc). You may also be missing thread synchronization on the contents of the heap object - the Global* API does not prevent multiple threads from accessing or modifying the same object at once.
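A sketch of what that means in practice (names are placeholders): the critical section below is what guards the object's contents; GlobalLock only pins the block and yields its address.

    #include <windows.h>
    #include <cstring>

    CRITICAL_SECTION g_cs; // initialized elsewhere; guards the block's contents
    HGLOBAL g_hData;       // allocated elsewhere with GMEM_MOVEABLE

    void WriteBytes(const void* src, SIZE_T len)
    {
        EnterCriticalSection(&g_cs);
        void* p = GlobalLock(g_hData); // pins the block; NULL here usually
        if (p != nullptr)              // means a freed or corrupted handle
        {
            std::memcpy(p, src, len);
            GlobalUnlock(g_hData);     // unpins; does not free the block
        }
        LeaveCriticalSection(&g_cs);
    }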
I'm working with two independent C/C++ applications on Windows where one of them constantly updates an image on disk (from a webcam) and the other reads that image for processing. This works fine and dandy 99.99% of the time, but every once in a while the reader app is in the middle of reading the image when the writer deletes it to refresh it with a new one.
The obvious solution to me seems to be to have the reader put some sort of lock on the file, so that the writer can see that it can't delete it and thus spin until it can delete and update. Is there any way to do this? Or is there another simple design pattern I can use to get the same sort of constant image refreshing between two programs?
Thanks,
-Robert
Try using a synchronization object; a mutex will probably do. Whenever a process wants to read from or write to the file, it should first acquire the mutex.
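For instance, a sketch with a named Win32 mutex (the name is a placeholder both processes must agree on; error handling elided):

    #include <windows.h>

    // Both processes run this: whichever comes second simply opens the
    // mutex the first one created.
    HANDLE g_hMutex = CreateMutexW(nullptr, FALSE, L"Local\\ImageFileMutex");

    void UpdateOrReadImage()
    {
        WaitForSingleObject(g_hMutex, INFINITE); // acquire before touching file
        // ... read or rewrite the image file here ...
        ReleaseMutex(g_hMutex);
    }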
Yes, a locking mechanism would help. There are, unfortunately, several to choose from. Linux/Unix has e.g. flock(2); Windows has a similar (but different) mechanism.
Another (somewhat hacky) solution is to just write the file under a temporary name and then rename it. Many filesystems guarantee that a rename is atomic, so this may work; it does, however, depend on the filesystem, which is why it's a bit hacky.
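A sketch of the writer side of that trick on Windows (file names are placeholders; as noted, how atomic the replace is depends on the filesystem):

    #include <windows.h>

    bool PublishNewImage()
    {
        // ... write the fresh image to image.tmp and close it first ...
        return MoveFileExW(L"image.tmp", L"image.bmp",
                           MOVEFILE_REPLACE_EXISTING) != 0;
        // The reader then sees either the complete old file or the
        // complete new one, never a half-written image.
    }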
If you are willing to go with the Windows API, opening the file with CreateFile and passing in 0 for the dwShareMode will not allow any other application to open the file.
From the documentation:
Prevents other processes from opening a file or device if they
request delete, read, or write access.
Then you'd have to use ReadFile, WriteFile, CloseHandle, etc. rather than the C standard library functions.
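A minimal sketch of the reader side (the file name is a placeholder):

    #include <windows.h>

    HANDLE OpenImageExclusively()
    {
        return CreateFileW(L"image.bmp",
                           GENERIC_READ,
                           0,                    // dwShareMode = 0: no sharing
                           nullptr,
                           OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL,
                           nullptr);
        // INVALID_HANDLE_VALUE means the writer has the file open right
        // now; read with ReadFile(), then release it with CloseHandle().
    }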
Or, as a really simple kludge, the reader creates a temp file (say, .lock) before it starts reading and deletes it afterwards. The writer doesn't touch the image file so long as .lock exists.
That's how OpenOffice (and others) does it, and it's probably the simplest to implement, no matter which platform.
Robert, many solutions have been proposed; I commented on some of them, but I'd like to chime in with an overall view and some specifics and recommendations:
You have the following options:
use filesystem locking: under Windows have both the reader and writer open (and create with the CREATE_ALWAYS disposition, respectively) the shared file in OF_SHARE_EXCLUSIVE mode; have both the reader and writer ready to handle ERROR_SHARING_VIOLATION and retry after some predefined period of time (e.g. 250ms)
use file renaming to essentially transfer file ownership: have the writer create a writer-private file (e.g. shared_file.tmpwrite), write to it, close it, then make it publicly available to the reader by renaming it to an agreed-upon "public" name (e.g. simply shared-file); have the reader periodically test for the existence of a file with the agreed-upon "public" name (e.g. shared-file) and, when one is found, attempt to first rename it to a reader-private name (e.g. shared_file.tmpread) before having the reader open it (under the reader-private name); under Windows use MOVEFILE_REPLACE_EXISTING; the rename operation does not have to be atomic for this to work
use other forms of interprocess communication (IPC): under Windows you can create a named mutex, and have both the reader and writer attempt to create (the existing mutex will be returned if it already exists) then acquire the named mutex before opening the shared file for reading or writing
implement your own filesystem-backed locking: take advantage of open(O_CREAT|O_EXCL) or, under Windows, of the CREATE_NEW disposition to atomically create an application lock file; unlike the OF_SHARE_EXCLUSIVE approach above, it would be up to you to deal with stale lock files (i.e. lock files left behind by a process that did not shut down gracefully, e.g. after a crash)
I would implement method 1; a sketch follows at the end of this answer.
Method 2 would also work, but it is in a sense reinventing the wheel.
Method 3 arguably has the advantage of allowing your reader process to wait on the writer process and vice-versa, eliminating the need for the arbitrary sleep delays between the retries of methods 1 and 2 (polling); however, if you are OK with polling then you should still use method 1
Method 4 is listed for completeness only, as it is complex to implement: when the lock file is detected to be stale (e.g. by checking whether the PID contained therein still exists), multiple processes can be competing for its removal, which introduces a race condition requiring a second lock, which in turn can become stale, etc. For example:
process A creates the lock file but dies without removing the lock file
process A restarts and tries to acquire the lock file but realizes it is stale
process B comes out of a sleep delay and also tries to acquire the lock file but realizes it is stale
process A removes the lock file, which it knew to be stale, and recreates it essentially reacquiring the lock
process B removes the lock file, which it (still) thinks is stale (although at this point it is no longer stale and is owned by process A) -- violation
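Here is the promised minimal sketch of method 1, using CreateFile's dwShareMode = 0 (the modern equivalent of OF_SHARE_EXCLUSIVE); the 250 ms delay and the file name are the placeholders from the list above:

    #include <windows.h>

    HANDLE OpenSharedFile(const wchar_t* path, DWORD access, DWORD disposition)
    {
        for (;;)
        {
            HANDLE h = CreateFileW(path, access,
                                   0,            // exclusive: no sharing
                                   nullptr, disposition,
                                   FILE_ATTRIBUTE_NORMAL, nullptr);
            if (h != INVALID_HANDLE_VALUE)
                return h;                        // got the file
            if (GetLastError() != ERROR_SHARING_VIOLATION)
                return INVALID_HANDLE_VALUE;     // real error: give up
            Sleep(250);                          // other side has it; retry
        }
    }

    // Reader: OpenSharedFile(L"shared-file", GENERIC_READ,  OPEN_EXISTING);
    // Writer: OpenSharedFile(L"shared-file", GENERIC_WRITE, CREATE_ALWAYS);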
Instead of deleting images, what about appending them to the end of the file? This would allow you to keep adding to the file while the reader is still operating, without destroying it. The reader can then delete the image when it's done with it (if necessary) and move on to the next one. Alternatively, store the image in a buffer for writing and test the file pointer: if it's at the head of the file, go ahead and write from the buffer to the file; otherwise, wait until the reader finishes and puts the pointer back at the head of the file.
Couldn't you store a few images? ('n' sounds like a good number :-)
Not too many to fill your disk, but surely 3 would be enough? If not, you are writing faster than you can process and have a fundamental problem anyhoo (tune to discover 'n').
Cyclically overwrite.