I'm running the sample code from Redisson. I notice the RMap write-through will write to the cache before running the MapWriter method. Since it is synchronous, I would like to update the database first, and then update the Redis cache only if the query/transaction is successful. I am throwing errors in the MapWriter method, but the cache still has the data as if the error never occurred. I am using the Write Through MapOptions.
https://github.com/redisson/redisson-examples/blob/master/collections-examples/src/main/java/org/redisson/example/collections/MapReadWriteThroughExamples.java
I'm creating a few simple helper classes and methods for working with libpq, and I am wondering: if I receive an error from the database (e.g. an SQL error), how should I handle it?
At the moment, each method returns a bool depending on whether the operation was a success, so it is up to the user to check it before continuing with new operations.
However, after reading the libpq docs, the best I can come up with is that if an error occurs I should log the error message/status and otherwise ignore it. For example, if the application is in the middle of a transaction, then I believe it can still continue (PostgreSQL won't cancel the transaction, as far as I know).
Is there something I can do with PostgreSQL / libpq to make the consequences of such errors safe regarding the database server, or is ignorance the better policy?
You should examine the SQLSTATE in the error and make handling decisions based on that and that alone. Never try to make decisions in code based on the error message text.
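For example, with libpq the SQLSTATE is available on the failed result through PQresultErrorField. A minimal sketch (the query text and the error reporting are placeholders for your own helpers):

#include <libpq-fe.h>
#include <cstdio>

// Minimal sketch: pull the SQLSTATE out of a failed result and branch
// on it, never on the human-readable message text.
bool run_command(PGconn *conn, const char *sql) {
    PGresult *res = PQexec(conn, sql);
    ExecStatusType st = PQresultStatus(res);
    if (st != PGRES_COMMAND_OK && st != PGRES_TUPLES_OK) {
        // Five-character code such as "40001" (serialization_failure).
        const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
        std::fprintf(stderr, "query failed, SQLSTATE=%s: %s",
                     sqlstate ? sqlstate : "?????",
                     PQresultErrorMessage(res));
        PQclear(res);
        return false;
    }
    PQclear(res);
    return true;
}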
An application should simply retry transactions for certain kinds of errors:
Serialization failures
Deadlock detection transaction aborts
For connection errors, you should reconnect then re-try the transaction.
Of course you want to set a limit on the number of retries, so you don't loop forever if the issue doesn't clear up.
Other kinds of errors aren't going to be resolved by trying again, so the app should report an error to the client. Syntax error? Unique violation? Check constraint violation? Running the statement again won't help.
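Putting that together, a bounded retry loop might look like the following sketch. run_transaction and reconnect are hypothetical helpers standing in for your own wrapper; run_transaction is assumed to report the SQLSTATE captured from the most recent failure:

#include <libpq-fe.h>
#include <string>

// Hypothetical helpers from your own wrapper code:
bool run_transaction(PGconn *conn, std::string *sqlstate_out);
PGconn *reconnect(PGconn *old_conn);

bool run_with_retries(PGconn *&conn) {
    const int kMaxRetries = 3;  // don't loop forever
    for (int attempt = 0; attempt < kMaxRetries; ++attempt) {
        std::string sqlstate;
        if (run_transaction(conn, &sqlstate))
            return true;                          // committed
        if (sqlstate == "40001" || sqlstate == "40P01")
            continue;                             // transient: retry as-is
        if (sqlstate.compare(0, 2, "08") == 0) {  // connection exception
            conn = reconnect(conn);
            continue;
        }
        return false;  // syntax error, unique violation...: retrying won't help
    }
    return false;      // retry budget exhausted; report to the caller
}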
There is a list of error codes in the documentation. The docs don't explain much about each individual error, but the preamble is quite informative.
On a side note: One trap to avoid falling into is "testing" connections with a trivial query before using them, and assuming that means the real query can't fail. That's a race condition. Don't bother testing connections; simply run the real query and handle any error.
The details of what exactly to do depend on the error and on the application. If there was a single always-right answer, libpq would already do it for you.
My suggestions (a code sketch encoding them follows the list):
Always keep a record of the transaction until you've got a confirmed commit from the DB, in case you have to re-run. Don't just fire-and-forget SQL statements.
Retry the transaction without a disconnect and reconnect for SQLSTATEs 40001 (serialization_failure) and 40P01 (deadlock_detected), as these are transient conditions generally resolved by re-trying. You should log them, as they're opportunities to improve how the app interacts with the DB and if they happen a lot they're a performance problem.
Disconnect, reconnect, and retry the transaction at least once for error class 08 (connection exceptions).
Handle 53300 (too_many_connections) and 53400 (configuration_limit_exceeded) with specific and informative errors to the user. Same with the other class 53 entries.
Handle class 57's entries with specific and informative errors to the user. Do not retry if you get query_canceled (57014); it'll make sysadmins very angry.
Handle 25006 (read_only_sql_transaction) by reporting a different error, telling the user that they tried to write to a read-only database or used a read-only transaction.
Report a different error for 23505 (unique_violation), indicating that there's a conflict with a unique constraint or primary key constraint. There's no point retrying.
Error class 01 (warnings) should never produce an exception.
Treat other cases as errors and report them to the caller, with details of the problem, most importantly the SQLSTATE. Log all the details if you return a simplified error.
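Here is the sketch promised above: the suggestion list folded into a single classification function. The Action enum and the exact groupings are illustrative only; extend them to suit your application:

#include <string>

enum class Action {
    Retry,              // re-run the transaction on the same connection
    ReconnectAndRetry,  // rebuild the connection, then re-run
    ReportToUser,       // specific, informative error; no retry
};

Action classify(const std::string &sqlstate) {
    if (sqlstate == "40001" || sqlstate == "40P01")
        return Action::Retry;              // serialization failure / deadlock
    if (sqlstate.compare(0, 2, "08") == 0)
        return Action::ReconnectAndRetry;  // connection exceptions
    if (sqlstate == "57014")
        return Action::ReportToUser;       // query cancelled: never retry
    if (sqlstate.compare(0, 2, "53") == 0)
        return Action::ReportToUser;       // insufficient resources
    if (sqlstate == "25006" || sqlstate == "23505")
        return Action::ReportToUser;       // read-only txn / unique violation
    return Action::ReportToUser;           // default: report and log details
}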
Hope that's useful.
I have a worker thread, which runs forever, holding a connection to a PostgreSQL database (in C++, using libpqxx):
#include <pqxx/connection>
// later in the code; executed only once, when the thread starts
connection* conn = new connection(createConnectionString(this->database, this->port, this->username, this->password));
How can I check if the connection is still active a couple of hours later? For example, if I restart the PostgreSQL server in the meantime (while the worker thread keeps running and is not restarted), then when I try to execute a new query I should check whether the connection is still valid and reconnect if it is not.
How can I know if it is still alive?
How can I check if the connection is still active a couple of hours later
Don't.
Just use it, as if it were alive. If an exception is thrown because there's something wrong, catch the exception and retry the failed transaction from the beginning.
Attempts to "test" connections or "validate" them are doomed. There is an inherent race condition where the connection could go away between validation and actually being used. So you have to handle exceptions correctly anyway - at which point there's no point doing that connection validation in the first place.
Queries can fail and transactions can be aborted for many reasons. So your app must always execute transactions in a retry loop that detects possibly transient failure conditions. This is just one possibility - you could also have a query cancelled by the admin, a transaction aborted by the deadlock detector, a transaction cancelled by a serialization failure, etc.
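As a sketch with libpqxx, assuming the query and connection string are placeholders for your own (pqxx::broken_connection and pqxx::sql_error are the library's real exception types):

#include <pqxx/pqxx>
#include <memory>
#include <string>

// Sketch: execute each transaction in a retry loop. A broken connection
// is dropped and rebuilt on the next pass; other SQL errors are logged
// and handled according to their SQLSTATE.
void worker_loop(const std::string &connstr) {
    std::unique_ptr<pqxx::connection> conn;
    for (;;) {
        try {
            if (!conn)
                conn = std::make_unique<pqxx::connection>(connstr);
            pqxx::work txn(*conn);
            txn.exec("INSERT INTO ticks DEFAULT VALUES");  // placeholder query
            txn.commit();
        } catch (const pqxx::broken_connection &) {
            conn.reset();  // dead connection; rebuilt at the top of the loop
        } catch (const pqxx::sql_error &e) {
            // log e.what() and e.query(); decide from the SQLSTATE whether
            // the transaction is worth retrying
        }
        // sleep between iterations in a real worker thread
    }
}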
To avoid unwanted low level TCP timeouts you can set a TCP keepalive on the connection, server-side or client-side.
If you really insist on doing this, knowing that it's wrong, just issue an empty query, i.e. "".
If the same UIManagedDocument is open on both of my devices, and I save (using the following code):
[self.documentDatabase.managedObjectContext performBlockAndWait:^{
    STNoteLabelCell *cell = (STNoteLabelCell *)[self.noteTableView cellForRowAtIndexPath:indexPath];
    [cell setNote:newNote animated:YES];
}];
I am told that the UIManagedDocument's documentState changes to UIDocumentStateSavingError, and then I get this error:
CoreData: error: (1) I/O error for database at /var/mobile/Applications/some-long-id/Documents/Read.dox/StoreContent.nosync/persistentStore. SQLite error code:1, 'cannot rollback - no transaction is active'
2013-05-14 16:30:09.062 myApp[11711:4d23] -[_PFUbiquityRecordImportOperation main](312): CoreData: Ubiquity: Threw trying to get the knowledge vector from the store: <NSSQLCore: 0x1e9e2680> (URL: file://localhost/var/mobile/Applications/some-long-id/Documents/Read.dox/StoreContent.nosync/persistentStore)
Does anybody know why this error happens?
A couple of things...
I think saving the same document on two different devices is a red herring, as each device will actually be working on a local copy of your database; only the transaction logs get uploaded to iCloud.
Secondly, I may be confused, but I don't see anything in the above code snippet that indicates you are performing a save (unless it is triggered by one of those calls, or autosave happens).
What that code snippet does seem to be doing is:
On your database document's child MOC thread, run the following block of code.
And that block of code is doing purely UI-related stuff, nothing to actually do with the database? The only thing that might be going out to the DB is cellForRowAtIndexPath, and this usually would be expected to only be doing read operations, not something that needs to save.
The only other thing: if the above code does trigger a save, you might have an issue with performing that inside performBlockAndWait. The UIManagedDocument save routines do their work asynchronously, but they need the run loop to execute before the async part actually gets a chance to run. So by blocking before continuing, you may actually be preventing the save itself from being executed.
I'm guessing wildly here, but have seen enough with saving managed documents to know to be really careful with which thread things are actually being called from, and that after a save request has been made, to allow the run loop to have a chance to actually do it. To be fair, this has only ever been an issue with calling saveToURL: repeatedly within one method, or in a loop, in which case all the async parts of the saves get queued up and executed at the end, usually to great comical effect.
I am using SQL Server Compact Edition 3.5 on a mobile device. Synchronisation using the sync framework works fine in the context of the user doing it via a button press and waiting for it to complete. There are no issues there.
I have recently attempted to do this in a background thread which runs every 'n' minutes or so. This also works fine, provided I am not using the database at the time. If I am using the database, the whole app locks up and I haven't yet found the specific exception that must be happening. I will continue to do that, but that is not part of my question.
My question is does the SqlCeClientSyncProvider somehow throw an exclusive lock or otherwise physically lock the .SDF file during synchronisation? If so, are there any options to override this behaviour?
No, it doesn't lock the .SDF file; after testing, I see that at most it creates a transaction with a read-committed isolation level. The issue I was having was a deadlocking issue in my own threading code, which I was able to resolve after some careful refactoring. I was raising a 'SyncBegun' event before the sync happened and a subsequent 'SyncEnded' event afterwards. These were using separate locks which were stepping on each other's toes.
I have a problem with an sqlite3 db which remains locked/inaccessible after a certain access.
The behaviour occurs so far on Ubuntu 10.04 and on a custom (OpenEmbedded) Linux.
The SQLite version is 3.7.7.1. The db is a local file.
One C++ application accesses the db periodically (every 5 s). Each time, several insert statements are executed, wrapped in a deferred transaction. This happens in one thread only. The connection to the db is held over the whole lifetime of the application. The prepared statements are also persistent and reused via sqlite3_reset. SQLITE_THREADSAFE is set to 1 (serialized), and journaling is set to WAL.
Then, in parallel, I open the SQLite db with the sqlite3 command-line tool. I enter BEGIN IMMEDIATE;, wait more than 5 s, and commit with END;.
After this, the application's db access fails: the BEGIN TRANSACTION returns error code 1 (SQLITE_ERROR, "SQL error or missing database"). If I execute a ROLLBACK TRANSACTION right before the BEGIN, just to be sure there is not already an active transaction, it fails with error code 5 (SQLITE_BUSY, "The database file is locked").
Has anyone an idea how to approach this problem or has an idea what may cause it?
EDIT: There is a workaround: if the described error occurs, I close and reopen the db connection. This fixes the problem, but I'm currently at a loss as to why.
SQLite is a serverless database. As far as I know, it does not support this kind of concurrent access from multiple sources by design. You are trying to access the same backing file from both your application and the command-line tool, so you are attempting concurrent access. This is why it is failing.
SQLite connections should only be used from a single thread, as among other things they contain mutexes that are used to ensure correct concurrent access. (Be aware that SQLite also only ever supports a single writer at a time anyway, and, outside WAL mode, no concurrent reads while a write is in progress; that's a limitation of being a serverless DB.)
Luckily, SQLite connections are relatively cheap when they're not doing anything and the cost of things like cached prepared statements is actually fairly small; open up as many as you need.
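A sketch of that per-thread pattern with the C API (the database path and the 5-second busy timeout are arbitrary choices for illustration):

#include <sqlite3.h>
#include <cstdio>

// Sketch: each thread opens its own connection, with a busy timeout so
// that lock contention shows up as a bounded wait rather than an
// immediate SQLITE_BUSY failure.
sqlite3 *open_for_this_thread(const char *path) {
    sqlite3 *db = nullptr;
    if (sqlite3_open_v2(path, &db,
                        SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                        nullptr) != SQLITE_OK) {
        std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return nullptr;
    }
    sqlite3_busy_timeout(db, 5000);  // wait up to 5 s on a locked database
    return db;
}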
[EDIT]:
Moreover, this would explain why closing and reopening the connection works: it builds a fresh connection in the current thread (and releases all the locks etc. held by the old one).