I'm setting the stale lock time to 100 ms using this code:
QLockFile lock1(fn);
lock1.setStaleLockTime(100);
QVERIFY(lock1.lock());
QLockFile lock2(fn);
lock2.setStaleLockTime(100);
QVERIFY(lock2.lock());
I expected it to block for only 100ms, but it blocks indefinitely. Why is that?
Am I misunderstanding how lock files should become stale? Here's what the docs say:
The value of staleLockTime is used by lock() and tryLock() in order to determine when an existing lock file is considered stale, i.e. left over by a crashed process. This is useful for the case where the PID got reused meanwhile, so one way to detect a stale lock file is by the fact that it has been around for a long time.
You are misunderstanding something. The QLockFile documentation describes how staleness is actually detected:
If the process holding the lock crashes, the lock file stays on disk
and can prevent any other process from accessing the shared resource,
ever. For this reason, QLockFile tries to detect such a "stale" lock
file, based on the process ID written into the file. To cover the
situation that the process ID got reused meanwhile, the current
process name is compared to the name of the process that corresponds
to the process ID from the lock file. If the process names differ, the
lock file is considered stale. Additionally, the last modification
time of the lock file (30s by default, for the use case of a
short-lived operation) is taken into account. If the lock file is
found to be stale, it will be deleted.
So it is not only staleLockTime that is taken into account: the process ID and process name from the lock file are checked as well. You can't use this method that way to get a timed wait.
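If the goal is simply to wait at most ~100 ms and then give up, QLockFile::tryLock() with a timeout expresses that directly; a minimal sketch reusing fn from the question:

QLockFile lock2(fn);
if (!lock2.tryLock(100)) {   // wait at most ~100 ms for the lock
    // lock is still held elsewhere; handle the failure (retry, report, ...)
}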
Scenario
Suppose there are "Thread_Main" and "Thread_DB", with a shared SQLite database object. It's guaranteed that,
"Thread_main" seldom uses SQLite object for reading (i.e. SELECT())
"Thread_DB" uses the SQLite object most of the time for various INSERT, UPDATE, DELETE operations
To avoid data races and UB, SQLite should be compiled with the SQLITE_THREADSAFE=1 (default) option. That means that, before every operation, an internal mutex is locked, so that the DB is not being written while it is read and vice versa.
"Thread_Main" "Thread_DB" no. of operation on DB
============= =========== ======================
something INSERT 1
something UPDATE 2
something DELETE 3
something INSERT 4
... ... ... (collapsed)
something INSERT 500
something DELETE 501
... ... ... (collapsed)
something UPDATE 1000
something UPDATE 1001
... ... ... (collapsed)
SELECT INSERT 1200 <--- here is a serious requirement of mutex
... ... ... (collapsed)
Problem
As seen above, out of hundreds of operations, the mutex is genuinely needed only once in a while. However, to safeguard that rare situation, we have to lock it for every single operation.
Question: Is there a way in which "Thread_DB" holds the mutex most of the time, so that it doesn't have to lock/unlock on every operation? The lock/unlock would happen only when "Thread_Main" requests it.
Notes
One way is to queue up the SELECT in "Thread_DB". But in a larger scenario with several DBs running, this will slow down the response and it won't be real time; I can't keep the main thread waiting for it.
I also considered having an integer/boolean flag indicating that "Thread_Main" wants to SELECT. If some operation is running in "Thread_DB" at that moment, it can release the mutex when it finishes; that is fine. But if no write operation is currently running on that SQLite object, "Thread_Main" will keep waiting, as there is nothing in "Thread_DB" to release the mutex, which will again delay or even hang "Thread_Main".
Here's a suggestion: modify your program somewhat so that Thread_Main has no access to the shared object; only Thread_DB is able to access it. Once you've done that, you won't need to do any serialization at all, and Thread_DB can work at full efficiency.
Of course the fly in the ointment is that Thread_Main does sometimes need to interact with the DB object; how can it do that if it doesn't have any access to it?
The solution to that issue is message-passing. When Thread_Main needs to do something with the DB, it should pass a Message object of some sort to Thread_DB. The Message object should contain all the details necessary to characterize the desired interaction. When Thread_DB receives the Message object, Thread_DB can call its execute(SQLite & db) method (or whatever you want to call it), at which point the necessary data insertion/extraction can occur from within the context of the Thread_DB thread. When the interaction has completed, any results can be stored inside the Message object and the Message object can then be passed back to the main thread for the main thread to deal with the results. (the main thread can either block waiting for the Message to be sent back, or continue to operate asynchronously to the DB thread, it's up to you)
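For illustration, here is a minimal sketch of that message-passing pattern using only the standard library (the names DbMessage, Database, postToDbThread etc. are placeholders, not part of any particular framework):

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

struct Database { /* wraps the SQLite handle; placeholder for this sketch */ };

using DbMessage = std::function<void(Database&)>;   // one queued interaction

std::mutex              g_queueMutex;
std::condition_variable g_queueCv;
std::queue<DbMessage>   g_queue;

// Called from Thread_Main (or anywhere): just enqueue, never touch the DB directly.
void postToDbThread(DbMessage msg)
{
    {
        std::lock_guard<std::mutex> lock(g_queueMutex);
        g_queue.push(std::move(msg));
    }
    g_queueCv.notify_one();
}

// Thread_DB: the only thread that ever touches the Database object.
void dbThreadLoop(Database& db)
{
    for (;;) {
        DbMessage msg;
        {
            std::unique_lock<std::mutex> lock(g_queueMutex);
            g_queueCv.wait(lock, [] { return !g_queue.empty(); });
            msg = std::move(g_queue.front());
            g_queue.pop();
        }
        msg(db);   // run the interaction in Thread_DB's context
    }
}

Only the tiny queue mutex is ever contended; the database object itself is touched from exactly one thread. A SELECT from the main thread becomes a message that captures what to read and how to hand the result back (e.g. via a promise/future or a reply queue).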
I have to write a daemon that decides the access policy for mutexes (it establishes which process gets a mutex when more than one wants the same mutex, based on whatever criteria).
For that I established some codes, e.g. "L 1 231" (LOCK mtx_id process_pid).
When a process requests a mutex, it writes a code like the one above into a shared memory zone.
The daemon reads it. (For every mutex I have a queue of processes waiting to get it.) It puts the process PID into the queue.
If the mutex is unlocked, it pops the queue and hands out the mutex (writing the mutex ID and the PID of the process that got it into shared memory, so other processes can read it and know who holds the mutex).
My question is: how do several processes request the same mutex? Creating them all up front and selecting the requesting process manually does not seem like a good option.
Any help is appreciated. Thank you.
Many OSes have a container (a catalog, directory or registry) of OS objects that can be stored by name. Once stored in the container, they can be looked up by name and a reference token returned. That token can then be used to access the object.
A synchro object like an inter-process mutex would be a good candidate for storage in the container. Multiple processes could then look up the mutex by name and use it.
Such cataloged objects are often reference-counted so that they are only destroyed when the last process with a token calls for it to be closed.
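On Windows, for instance, a named mutex behaves exactly like this; a minimal sketch (the mutex name is a made-up placeholder):

#include <windows.h>

// Every cooperating process runs the same code: the first caller creates the
// mutex, later callers simply receive another handle to the same kernel object.
void useSharedResource()
{
    HANDLE hMutex = CreateMutexW(nullptr, FALSE, L"MyApp_SharedResourceMutex");
    if (hMutex == nullptr)
        return;                              // creation/open failed

    WaitForSingleObject(hMutex, INFINITE);   // acquire
    // ... touch the shared resource here ...
    ReleaseMutex(hMutex);                    // release

    CloseHandle(hMutex);                     // object is destroyed when the last handle closes
}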
BTW - see comments, your design suc.... has issues :(
I'm trying to implement inter-process communication in C/C++ in a Windows environment.
I am creating a shared memory area (a file mapping backed by the page file) and two processes get a handle to it. It's like this:
Process1: Initialize shared memory area. Wait for Process2 to fill it.
Process2: Get handle to shared memory area. Put stuff in it.
I am creating a named mutex in process1 as well. Process1 acquires ownership of the mutex soon after creating it (using WaitForSingleObject). Obviously, there is nothing in the memory area yet, so I need to release the mutex. I then need to wait until the memory is filled instead of trying to acquire the mutex again.
I was thinking of condition variables: process2 signals the condition variable once it fills the memory area, and process1 picks up the information immediately.
However, as per the MS documentation on condition variables, they are not shared across processes, which is also clear from their initialization, as they are not named.
Furthermore, the shared memory area can hold at most one element at any given moment, which means process2 cannot refill it until process1 has extracted the information.
From the description it seems that condition variables (or monitors) would be the natural fit for this purpose. So is there a way around this?
Condition variables can be used within a process, but not across processes.
Try a named pipe with PIPE_ACCESS_DUPLEX as the open mode, so that you have a communication channel in both directions between the processes.
https://msdn.microsoft.com/en-us/library/windows/desktop/aa365150(v=vs.85).aspx
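A minimal sketch of the server side (the pipe name is a placeholder; the other process would open the same name with CreateFileW):

#include <windows.h>

void serveOneRequest()
{
    HANDLE hPipe = CreateNamedPipeW(
        L"\\\\.\\pipe\\MyAppPipe",
        PIPE_ACCESS_DUPLEX,                                   // read and write from both ends
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        1,                                                    // one instance
        4096, 4096,                                           // out/in buffer sizes
        0, nullptr);
    if (hPipe == INVALID_HANDLE_VALUE)
        return;

    if (ConnectNamedPipe(hPipe, nullptr))                     // wait for the client to connect
    {
        char  buffer[256];
        DWORD bytesRead = 0;
        if (ReadFile(hPipe, buffer, sizeof(buffer), &bytesRead, nullptr))
        {
            // ... handle the request, then reply with WriteFile ...
        }
    }
    CloseHandle(hPipe);
}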
I have used events for this before. Use two named auto-reset events: one "data ready" event and one "buffer ready" event. The writer waits for "buffer ready", writes the data and sets the "data ready" event. The reader waits for the "data ready" event, reads the memory and sets the "buffer ready" event. If done properly you should not need the mutex.
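A minimal sketch of that hand-off (the event names are placeholders; each process creates/opens the same named events, and "buffer ready" starts signaled so the writer can fill the buffer immediately):

#include <windows.h>

// Auto-reset events: second argument FALSE.
HANDLE hDataReady   = CreateEventW(nullptr, FALSE, FALSE, L"MyApp_DataReady");
HANDLE hBufferReady = CreateEventW(nullptr, FALSE, TRUE,  L"MyApp_BufferReady");

void writerLoop(void* sharedBuffer)            // runs in process2
{
    for (;;) {
        WaitForSingleObject(hBufferReady, INFINITE);  // wait until the buffer is free
        // ... fill sharedBuffer ...
        SetEvent(hDataReady);                         // tell the reader data is there
    }
}

void readerLoop(const void* sharedBuffer)      // runs in process1
{
    for (;;) {
        WaitForSingleObject(hDataReady, INFINITE);    // wait until data is available
        // ... consume sharedBuffer ...
        SetEvent(hBufferReady);                       // tell the writer it may refill
    }
}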
I've seen a project where communication between processes was done using shared memory (e.g. using ::CreateFileMapping under Windows), and every time one of the processes wanted to signal that some data was available in shared memory, a synchronization mechanism using named events notified the interested party that the content of the shared memory had changed.
I am concerned that the appropriate memory fences are not in place for the process that reads the new information to know that it has to invalidate its copy of the data and read it from main memory again once it is "published" by the producer process.
Do you know how this can be accomplished on Windows using shared memory?
EDIT
Just wanted to add that, after creating the file mapping, the processes call the MapViewOfFile() API only once, and every later modification to the shared data is read through the pointer obtained from that initial call to MapViewOfFile(). Does correct synchronization require the reading process to call MapViewOfFile() again every time the data in shared memory changes?
If you use a Windows Named Event for signaling changes, then everything should be OK.
Process A changes the data and calls SetEvent.
Process B waits for the event using WaitForSingleObject or similar, and sees that it is set.
Process B then reads the data. WaitForSingleObject contains all the necessary synchronization to ensure that the changes made by process A before the call to SetEvent are read by process B.
Of course, if you make any changes to the data after calling SetEvent, then these may or may not show up when process B reads the data.
If you don't want to use Events, you could use a Mutex created with CreateMutex, or you could write lock-free code using the Interlocked... functions such as InterlockedExchange and InterlockedIncrement.
However you do the synchronization, you do not need to call MapViewOfFile more than once.
What you're looking for, for shared memory on Windows, is the InterlockedExchange function. See the MSDN article here. The REALLY important part is quoted:
This function generates a full memory barrier (or fence) to ensure
that memory operations are completed in order.
This works cross-process. I've worked with it before and found it 100% reliable for implementing a mutex-like construct on top of shared memory.
The way you use it is to exchange the lock word with the "set" value. If you get "clear" back, you have the lock (it was clear); if you get "set" back, somebody else already had it. You loop, sleeping between iterations, until you "get" it. Basically this:
#include <windows.h>

#define LOCK_SET   1
#define LOCK_CLEAR 0

// lock_location must point into the shared memory region
bool try_acquire(volatile LONG* lock_location)
{
    if (InterlockedExchange(lock_location, LOCK_SET) == LOCK_CLEAR)
    {
        return true;    // it was clear: we now hold the lock
    }
    else
    {
        return false;   // it was already set: somebody else holds it
    }
}
As above, and loop until you "get" it.
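Building on the snippet above, the "loop until you get it" part might look like this (the 1 ms sleep is an arbitrary back-off, not a requirement):

void acquire(volatile LONG* lock_location)
{
    while (!try_acquire(lock_location))
        Sleep(1);                                    // somebody else holds it; back off and retry
}

void release(volatile LONG* lock_location)
{
    InterlockedExchange(lock_location, LOCK_CLEAR);  // full barrier on release as well
}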
Let's call process A the data producer and process B the data consumer. Until now, you have a mechanism for process A to notify process B that new data has been produced. I suggest you create a reverse notification (from B to A) which tells process A that the data has been consumed. If, for performance reasons, you don't want process A to wait for the data to be consumed, you could set up a ring-buffer in the shared memory.
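One possible layout for such a ring-buffer (sizes and the Item type are placeholders; the producer only ever advances head, the consumer only ever advances tail, and the named events described above provide the ordering when each index is published):

#include <windows.h>

struct Item { char payload[256]; };   // whatever one unit of data looks like

struct SharedRing
{
    volatile LONG head;               // next slot the producer (A) will write
    volatile LONG tail;               // next slot the consumer (B) will read
    Item slots[16];
};

// A: if ((head + 1) % 16 != tail) -> write slots[head], advance head, signal "data ready".
// B: if (tail != head)            -> read  slots[tail], advance tail, signal "space ready".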
I'm working with two independent c/c++ applications on Windows where one of them constantly updates an image on disk (from a webcam) and the other reads that image for processing. This works fine and dandy 99.99% of the time, but every once in a while the reader app is in the middle of reading the image when the writer deletes it to refresh it with a new one.
The obvious solution to me seems to be to have the reader put some sort of a lock on the file so that the writer can see that it can't delete it and thus spin-lock on it until it can delete and update. Is there anyway to do this? Or is there another simple design pattern I can use to get the same sort of constant image refreshing between two programs?
Thanks,
-Robert
Try using a synchronization object; a mutex will probably do. Whenever a process wants to read or write the file, it should first acquire the mutex.
Yes, a locking mechanism would help. There are, unfortunately, several to choose from: Linux/Unix has e.g. flock(2), and Windows has a similar (but different) mechanism.
Another (somewhat hacky) solution is to just write the file under a temporary name and then rename it. Many filesystems guarantee that a rename is atomic, so this may work. It does, however, depend on the filesystem, so it's a bit hacky.
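On Windows, the writer side of that could look roughly like this (paths are placeholders; as noted, how atomic the replacement really is depends on the filesystem):

#include <windows.h>

// Write the new image under a temporary name, then swap it into place.
bool publishImage(const wchar_t* tempPath, const wchar_t* finalPath)
{
    // ... write the complete image to tempPath first ...
    return MoveFileExW(tempPath, finalPath, MOVEFILE_REPLACE_EXISTING) != 0;
}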
If you are willing to go with the Windows API, opening the file with CreateFile and passing in 0 for the dwShareMode will not allow any other application to open the file.
From the documentation:
Prevents other processes from opening a file or device if they
request delete, read, or write access.
Then you'd have to use ReadFile, WriteFile, CloseHandle, etc. rather than the C standard library functions.
Or, as a really simple kludge, the reader creates a temp file (say, .lock) before it starts reading and deletes it afterwards. The writer doesn't touch the image file as long as .lock exists.
That's how OpenOffice (and others) does it, and it's probably the simplest to implement, no matter which platform.
Joe, many solutions have been proposed; I commented on some of them but I'd like to chime in with an overall view and some specifics and recommendations:
You have the following options:
use filesystem locking: under Windows have both the reader and writer open (and create with the CREATE_ALWAYS disposition, respectively) the shared file in OF_SHARE_EXCLUSIVE mode; have both the reader and writer ready to handle ERROR_SHARING_VIOLATION and retry after some predefined period of time (e.g. 250ms)
use file renaming to essentially transfer file ownership: have the writer create a writer-private file (e.g. shared_file.tmpwrite), write to it, close it, then make it publicly available to the reader by renaming it to an agreed-upon "public" name (e.g. simply shared-file); have the reader periodically test for the existence of a file with the agreed-upon "public" name (e.g. shared-file) and, when one is found, attempt to first rename it to a reader-private name (e.g. shared_file.tmpread) before having the reader open it (under the reader-private name); under Windows use MOVEFILE_REPLACE_EXISTING; the rename operation does not have to be atomic for this to work
use other forms of interprocess communication (IPC): under Windows you can create a named mutex, and have both the reader and writer attempt to create (the existing mutex will be returned if it already exists) then acquire the named mutex before opening the shared file for reading or writing
implement your own filesystem-backed locking: take advantage of open(O_CREAT|O_EXCL) or, under Windows, of the CREATE_NEW disposition to atomically create an application lock file; unlike OF_SHARE_EXCLUSIVE approach above, it would be up to you to deal with stale lock files (i.e. lock files left by a process which did not shut down gracefully such as after a crash.)
I would implement method 1.
Method 2 would also work, but it is in a sense reinventing the wheel.
Method 3 arguably has the advantage of allowing your reader process to wait on the writer process and vice versa, eliminating the need for the arbitrary sleep delays between the retries of methods 1 and 2 (polling); however, if you are OK with polling then you should still use method 1.
Method 4 is listed for completeness only, as it is complex to implement: when the lock file is detected to be stale (e.g. by checking whether the PID contained therein still exists), multiple processes can potentially compete for its removal, which introduces a race condition requiring a second lock, which in turn can become stale, etc. For example:
process A creates the lock file but dies without removing the lock file
process A restarts and tries to acquire the lock file but realizes it is stale
process B comes out of a sleep delay and also tries to acquire the lock file but realizes it is stale
process A removes the lock file, which it knew to be stale, and recreates it essentially reacquiring the lock
process B removes the lock file, which it (still) thinks is stale (although at this point it is no longer stale and owned by process A) -- violation
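For reference, a minimal sketch of method 1 on Windows, using CreateFile with dwShareMode = 0 (the modern equivalent of OF_SHARE_EXCLUSIVE); the writer would pass CREATE_ALWAYS as the disposition, the reader OPEN_EXISTING, and 250 ms is the arbitrary retry delay mentioned above:

#include <windows.h>

// Open the shared file exclusively, retrying while the other side has it open.
HANDLE openExclusive(const wchar_t* path, DWORD access, DWORD disposition)
{
    for (;;)
    {
        HANDLE h = CreateFileW(path, access, /*dwShareMode=*/0, nullptr,
                               disposition, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h != INVALID_HANDLE_VALUE)
            return h;                          // we own the file until CloseHandle
        if (GetLastError() != ERROR_SHARING_VIOLATION)
            return INVALID_HANDLE_VALUE;       // some other failure: give up
        Sleep(250);                            // the other process has it open; retry shortly
    }
}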
Instead of deleting images, what about appending them to the end of the file? This would allow you to keep adding to the file while the reader is still operating, without destroying the file. The reader can then delete the image when it's done with it (if necessary) and move on to the next image. Alternatively, store the image in a buffer for writing and test the file pointer: if it's at the head of the file, you can go ahead and write from the buffer to the file; otherwise, wait until the reader finishes and puts the pointer back at the head of the file.
Couldn't you store a few images? ('n' sounds like a good number :-)
Not so many that they fill your disk, but surely 3 would be enough? If not, you are writing faster than you can process and have a fundamental problem anyhoo (tune to discover 'n').
Cyclically overwrite.