Redisson: Why doesn't the reactive client have PermitExpirableSemaphore?

I am using the Redisson reactive Java client. In the non-reactive client one can get an expirable semaphore as:
RPermitExpirableSemaphore semaphore = redisson.getPermitExpirableSemaphore("mySemaphore");
But if I create a reactive client, I can only find the redisson.getSemaphore("value") function. I need a PermitExpirableSemaphore because:
I need a lock that can be released by a different thread (so I can't use RLock).
I need a lease timeout to prevent deadlock in case the lock-acquiring thread is killed or stuck.
Is there any way to achieve this behavior with Redisson distributed locking?
Edit 1: I can set a global lease time in the config as Config().setLockWatchdogTimeout(leaseTimeMs), but I really need different lease times for different locks.
Edit 2: Asked a question on the Redisson GitHub at https://github.com/redisson/redisson/issues/1391

As linked in Edit 2, there was no way of doing this in Redisson at the time. Nikita, the lead developer of Redisson, quickly implemented the requested feature, to be released in 2.11.6.

Related

SQLExecDirect doesn't return when deadlocked

I have a C++ app calling a stored procedure via SQLExecDirect. If there is a deadlock SQLExecDirect doesn't return until the deadlock is resolved.
I've read that in the .NET world it can detect deadlocks and throw an exception. Is there any way with C++/ODBC to regain control while deadlocked? I suspect the answer is no, but I'm hoping there's some ODBC feature I haven't found yet.
The only workaround I can think of is kicking off another thread to run it and setting a timeout for the thread to return.
And no, I can't fix the deadlock. This app is running queries or stored procedures from my customer's DBs that they choose. I just don't want it unresponsive for the duration of the deadlock.
If your ODBC driver supports asynchronous execution you could perform the operation that way; ODBC 3.8+ even supports an event-based notification mode in addition to the older 3.0 polling-only mode. Note that unless you are using something like SQL Server's MARS, your connection will still be deadlocked; async execution just allows the thread to do something else while waiting for an answer.
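For example, with ODBC 3.0-style polling you set the statement attribute SQL_ATTR_ASYNC_ENABLE and repeat the same call until it stops returning SQL_STILL_EXECUTING. A minimal sketch, assuming the driver supports statement-level asynchronous execution (the function name and the 100 ms poll interval are just illustrative):

    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>
    #include <chrono>
    #include <thread>

    // Run a statement in ODBC polling mode so the calling thread regains
    // control while the server is still working (e.g. waiting on a deadlock).
    bool execWithPolling(SQLHSTMT hstmt, SQLCHAR* sql)
    {
        // Enable statement-level asynchronous execution (polling mode).
        SQLSetStmtAttr(hstmt, SQL_ATTR_ASYNC_ENABLE,
                       (SQLPOINTER)SQL_ASYNC_ENABLE_ON, 0);

        SQLRETURN rc = SQLExecDirect(hstmt, sql, SQL_NTS);
        while (rc == SQL_STILL_EXECUTING) {
            // The thread is free here: pump the UI, check a cancel flag,
            // or give up and call SQLCancel(hstmt) after a timeout.
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            rc = SQLExecDirect(hstmt, sql, SQL_NTS);  // poll the same call again
        }
        return SQL_SUCCEEDED(rc);
    }

As noted above, the statement is still blocked server-side until the deadlock is resolved; polling only frees the client thread.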

POCO C++: Can you have multiple logging hierarchies?

I am looking for a unique logging solution using the POCO C++ library.
I will try to explain our design and the issue we are facing.
We have a TCPServerConnectionFactory that spawns a new environment (a new set of objects) with each new connection. The spawned environment gets a new socket and has a listening thread. If a message comes in on an established connection, a pthread will handle it. Each useful message contains an identifier that identifies all actions that happen until the process is completed by closing the connection and destroying the set of objects that were created for it.
Many connections may happen simultaneously. Before moving to a pthreaded environment we were able to use Thread::setName along with the %T identifier to clearly see which log messages were coming from which connection. Now that we are multi-threaded we need a new solution.
I have been unable to find a clean way to make each object that gets spawned through the life of a connection aware of our unique identifier. A global would get overwritten by a new transaction. Passing the ID to each new object would be messy.
My next attempt was to use the POCO Logger channel framework to save each connection's logs to a new file named by the unique identifier we would receive in a message. The issue here is that if a new connection comes in during the life of another, it will overwrite the channel properties and start pointing logs to a different file.
Using the Logger framework, is there a way for me to have a new Logger hierarchy per connection? Basically, we need the set of objects spawned by the new connection to all use the same logging properties, and to not affect any other set of objects' logging properties.
Any insight as to a proper way to share the identifier among all objects created during the life of a connection would be useful as well.
Thanks!
If you only log tiny amounts of information, you can use a single shared instance of your logger protected by a mutex. But if you're expecting lots of connections, blocking on that mutex will slow things down, so you should consider using one logger instance per connection.
If you do go with the shared instance, note that std::mutex has no built-in deadlock protection; if you ever need to hold more than one lock at a time, acquire them together with std::scoped_lock (or std::lock), which uses a deadlock-avoidance algorithm.
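If you go the one-logger-per-connection route, POCO lets each named logger have its own channel, so one connection's log file never affects another's. A minimal sketch, assuming a FileChannel per connection; the "connection.<id>" naming scheme and the function name are made up for illustration:

    #include <Poco/AutoPtr.h>
    #include <Poco/FileChannel.h>
    #include <Poco/Logger.h>
    #include <Poco/Message.h>
    #include <string>

    // Fetch (or create) a logger dedicated to one connection, writing to a
    // file named after the connection's unique identifier.
    Poco::Logger& loggerForConnection(const std::string& connectionId)
    {
        Poco::AutoPtr<Poco::FileChannel> channel(new Poco::FileChannel);
        channel->setProperty("path", connectionId + ".log");

        Poco::Logger& logger = Poco::Logger::get("connection." + connectionId);
        logger.setChannel(channel);   // only this logger uses this channel
        logger.setLevel(Poco::Message::PRIO_INFORMATION);
        return logger;
    }

Because Poco::Logger::get returns the same instance for the same name, every object spawned for the connection can retrieve the logger with nothing more than the identifier, which also side-steps the problem of passing a context object around.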

POCO raise event on TCPServer connected threads

I'm new to the POCO framework and not too good with C++, but I am learning. I have to create a server-client based application on Windows.
The problem I have now is that I need to send some data to the clients repeatedly, minute by minute. I need to do this for the clients that have an active TCP connection with the server. I don't know how I can create an event, or something that is triggered in a thread and gets all the active threads to send data to the clients.
My first idea is that I have to rewrite, or extend, the TCPServerDispatcher class. And I don't know how I can identify the active threads in the ThreadPool.
Do you have any ideas, or maybe suggestions, or a tutorial, something?
I can't figure out how to do it...
Hope somebody can give me an idea, or some code example. Thank you.
Can these server<> client threads not obtain the data for themselves? It would be fairly easy to add a 60-second timeout on a read() in each thread and send the data then. Maybe this would involve too many database connections?
Failing that, can you put the latest data in a lockable object and have the threads just lock, write and unlock the latest data on a timeout? Such a solution should really have a write timeout as well to prevent a badly-behaved client causing its server thread to block while holding the lock. If it's not too large, I suppose the server<> client thread could make a copy of the data to send, but I'm not a great fan of copying, TBH.
There are more complex ways of signaling the server<> client threads that new data is available. It is quite possible to signal each thread that new data is available and have them act upon it 'immediately'. This usually means the server<> client thread waiting on more than one signal. In general, the lower the latency, the more complex the solution :(
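A minimal sketch of the lockable "latest data" object described above, using POCO's FastMutex; the class name and the use of std::string for the payload are just illustrative:

    #include <Poco/Mutex.h>
    #include <string>

    // Holds the most recent payload. The producer updates it once a minute;
    // each server<>client thread copies it under the lock and sends the copy
    // outside the lock, so a badly-behaved client never blocks the others.
    class LatestData
    {
    public:
        void set(const std::string& data)
        {
            Poco::FastMutex::ScopedLock lock(_mutex);
            _data = data;
        }

        std::string get() const
        {
            Poco::FastMutex::ScopedLock lock(_mutex);
            return _data;   // copy taken under the lock, used outside it
        }

    private:
        mutable Poco::FastMutex _mutex;
        std::string _data;
    };

Each TCPServerConnection::run() loop could then wait on its socket with a 60-second timeout, e.g. socket().poll(Poco::Timespan(60, 0), Poco::Net::Socket::SELECT_READ), and send latestData.get() whenever the poll times out.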
Rgds,
Martin

Does the SqlCeClientSyncProvider lock the .sdf file during synchronisation on a mobile device?

I am using SQL Server Compact Edition 3.5 on a mobile device. Synchronisation using the sync framework works fine in the context of the user doing it via a button press and waiting for it to complete. There are no issues there.
I have recently attempted to do this in a background thread which runs every 'n' minutes or so. This also works fine, provided I am not using the database at the time. If I am using the database, the whole app locks up and I haven't yet found the specific exception that must be happening. I will continue to do that, but that is not part of my question.
My question is does the SqlCeClientSyncProvider somehow throw an exclusive lock or otherwise physically lock the .SDF file during synchronisation? If so, are there any options to override this behaviour?
No, it doesn't lock the .SDF file; after testing I can see that at most it creates a transaction with read committed isolation level. The issue I was having was a deadlocking issue in my own threading code, which I was able to resolve after some careful refactoring. I was raising a 'SyncBegun' event before the sync happened and a subsequent 'SyncEnded' event afterwards. These were using separate locks which were stepping on each other's toes.
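The original handlers were C#, but the pattern is language-agnostic. A hedged C++ sketch of the kind of two-lock trap described above and one way out of it (the handler and lock names are invented):

    #include <mutex>

    std::mutex beginLock;
    std::mutex endLock;

    // Deadlock-prone shape: two handlers take the same pair of locks in
    // opposite orders. Run concurrently, each can end up holding one lock
    // while waiting forever for the other.
    //
    // std::scoped_lock acquires all the mutexes it is given with a
    // deadlock-avoidance algorithm, so the listing order no longer matters.
    // (Using a single lock for both events is an even simpler fix.)
    void onSyncBegun()
    {
        std::scoped_lock lock(beginLock, endLock);
        // raise the SyncBegun event ...
    }

    void onSyncEnded()
    {
        std::scoped_lock lock(endLock, beginLock);
        // raise the SyncEnded event ...
    }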

Concurrency with SQLite

We are planning to use SQLite in our project, and I am a bit confused about its concurrency model (because I have read many different views on it in the community), so I am putting down my questions hoping to clear up my apprehensions.
We are planning to prepare all our statements during application startup, with multiple connections for reads and one connection for writes. So basically we create the connection and prepare the statements in one thread, and use another thread for binding and executing the statements.
I am using the C API on Windows and Linux.
1) Creating a connection on one thread and using it in another: does this pose any problem?
2) Should I use "shared cache mode"?
3) I am thinking of using one lock to synchronize between reads and writes, with no synchronization between reads. Should I synchronize between reads as well?
4) Do multiple concurrent reads work on the same connection?
5) Do multiple concurrent reads work on different connections?
EDIT: One more question: how do we validate the connection? We open the connection at application startup and the same connection is used until the application exits, so how do we validate the connection before using it?
Thanks in Advance
Regards
DEE
1) I do not think SQLite uses any thread-specific data, so creating a connection on one thread and using it on another should be fine (they say this is so for version 3.5 onwards).
2) I don't think shared cache mode will give any significant performance benefit; experiment and see, it takes only a single statement per thread to enable it.
3) You need a single-writer/multiple-reader kind of lock; using a simple lock will serialize all reads and writes and nullify any performance benefit of using multiple threads (see the sketch after this answer).
4 & 5) Any read operation should work concurrently without any problem.
The SQLite FAQ covers threading in detail; see in particular the FAQ entry on threads. As of 3.3.1 it is safe to do what you say, under certain conditions (see the FAQ).
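A minimal sketch of the single-writer/multiple-reader locking mentioned in 3), using C++17's std::shared_mutex around already-prepared statements; the function names are illustrative and error handling is omitted:

    #include <shared_mutex>
    #include <sqlite3.h>

    // Readers take a shared lock and may run concurrently; the single writer
    // takes an exclusive lock. A plain mutex would serialize the readers too.
    std::shared_mutex dbLock;

    int runRead(sqlite3_stmt* readStmt)
    {
        std::shared_lock<std::shared_mutex> lock(dbLock);   // many readers at once
        int rc;
        while ((rc = sqlite3_step(readStmt)) == SQLITE_ROW) {
            // consume the row with sqlite3_column_*() ...
        }
        sqlite3_reset(readStmt);
        return rc;
    }

    int runWrite(sqlite3_stmt* writeStmt)
    {
        std::unique_lock<std::shared_mutex> lock(dbLock);   // exclusive for the writer
        int rc = sqlite3_step(writeStmt);
        sqlite3_reset(writeStmt);
        return rc;
    }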