I had previously observed that SQLite db writes are significantly faster when I wrap them in an atomic transaction (Django): 0.3 secs without and 0.004 secs with a transaction. Hence, I applied transactions throughout my project. Strangely, after doing that I started encountering the 'database is locked' error. Debugging it, I found that when an update is running inside a transaction (let's call it update A) and I concurrently run another update (B) inside a transaction, B fails instantly without waiting for the timeout (5 secs by default). But when I ran update B without a transaction, it waited for A to finish and then completed the update. Could anyone provide a possible explanation for this that doesn't involve removing the transactions?
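For context, this is roughly how the timing difference above can be measured (a hedged sketch; the model MyModel, its value field, and the row count are illustrative, not from the actual project):

import time
from django.db import transaction
from myapp.models import MyModel  # hypothetical model

start = time.monotonic()
for i in range(100):
    MyModel.objects.create(value=i)        # autocommit: one commit (and fsync) per row
print("without transaction:", time.monotonic() - start)

start = time.monotonic()
with transaction.atomic():                 # a single commit for the whole batch
    for i in range(100):
        MyModel.objects.create(value=i)
print("with transaction:", time.monotonic() - start)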
This happens because of these two conditions:
by default, transaction.atomic() starts a DEFERRED transaction, so no lock is acquired at first
you are reading inside the transaction first and then trying to write, while another process already holds a write lock on the database.
For example:
# No lock is acquired here because this executes a plain BEGIN,
# which defaults to BEGIN DEFERRED.
with transaction.atomic():
    # This acquires a read (SHARED) lock on the database.
    MyModelName.objects.all().first()
    # This tries to upgrade the read lock to a write lock,
    # but fails because another process is holding a write lock.
    MyModelName.objects.create(name='Example')
    # "OperationalError: database is locked" is raised here
    # immediately, ignoring the timeout.
I am not entirely sure why this happens, but I found another post saying that it could be due to a deadlock (the connection's own SHARED lock is what keeps the other writer from finishing, so waiting out the timeout could never help and SQLite gives up immediately):
sqlite3 ignores sqlite3_busy_timeout?
So your options are:
Make sure that the write lock is acquired first inside the transaction (i.e. you don't have any read queries before the first write query). You can do this by moving the read query outside the transaction, if possible.
Monkey-patch and force transaction.atomic() to acquire the write lock immediately, as described by btimby here (a hedged sketch of the idea follows below):
Fix SQLite "database is locked" problems using "BEGIN IMMEDIATE"
SQLite's timeout can be set with PRAGMA busy_timeout.
The default value is zero, and this setting applies only to the connection (not to the database), so it looks as if not all of your connections got those 5 seconds.
Ensure that all connections have a proper timeout set by executing that PRAGMA. (And five seconds is dangerous; better use thirty seconds.)
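For example, with Python's built-in sqlite3 module (the value is in milliseconds and applies only to this connection; the file name is illustrative):

import sqlite3

conn = sqlite3.connect("db.sqlite3")
conn.execute("PRAGMA busy_timeout = 30000")   # wait up to 30 s for locks on this connection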
Related
We are using OCCI in order to access Oracle 12 from a C++ process. One of the operations has to ensure that the client picks the latest data in the database and operates according to the latest value. The statement is
std::string sqlStmt = "SELECT REF(a) FROM O_RECORD a WHERE G_ID= :1 AND P_STATUS IN (:2, :3) FOR UPDATE OF PL_STATUS"
(we are using TYPES). For some reason this command did not go through and the database table is LOCKED. All other operations are waiting for the first thread to finish; however, that thread was killed and we have reached a dead end.
What is the optimal solution to avoid this catastrophic scenario? Can I set a timeout on the statement so as to be 100% sure that a thread can operate on the "select for update" for, say, a maximum of 10 seconds? In other words, the thread of execution can lock the database table/row, but for no more than a predefined time.
Is this possible?
There is a session parameter ddl_lock_timeout but no dml_lock_timeout, so you cannot go that way. Either you have to use
SELECT REF(a)
FROM O_RECORD a
WHERE G_ID= :1 AND P_STATUS IN (:2, :3)
FOR UPDATE OF PL_STATUS SKIP LOCKED
And modify the application logic. Or you can implement your own interruption mechanism: simply fire a parallel thread and after some time execute OCIBreak. It is a documented and supported solution, and calling OCIBreak is thread safe. The blocked SELECT ... FOR UPDATE statement will be released and you will get the error ORA-01013: user requested cancel of current operation.
So at the OCCI level you will have to handle this error.
Edit: added the Resource Manager, which can impose an even more precise limitation, focused just on those sessions that are blocking others...
by means of the Resource Manager:
The Resource Manager allows the definition of more complex policies than those available with profiles, and in your case it is more suitable than the latter.
You have to define a plan and the groups of users associated with the plan, specify the policies associated with the plan/groups, and finally attach the users to the groups. To get an idea of how to do this, you can reuse this example #support.oracle.com (it is a bit too long to be posted here), but replace MAX_IDLE_TIME with MAX_IDLE_BLOCKER_TIME.
The core line would be
dbms_resource_manager.create_plan_directive(
    plan                  => 'TEST_PLAN',
    group_or_subplan      => 'my_limited_throttled_group',
    comment               => 'Limit blocking idle time to 300 seconds',
    MAX_IDLE_BLOCKER_TIME => 300
);
by means of profiles:
You can limit the inactivity period of those sessions by specifying an IDLE_TIME.
CREATE PROFILE:
If a user exceeds the CONNECT_TIME or IDLE_TIME session resource limit, then the database rolls back the current transaction and ends the session. When the user process next issues a call, the database returns an error
To do so, specify a profile with a maximum idle time, and apply it to just the relevant users (so you won't affect all users or applications):
CREATE PROFILE o_record_consumer
LIMIT IDLE_TIME 2; --2 minutes timeout
alter user the_record_consumer profile o_record_consumer;
The drawback is that this setting is session-wide, so if the same session should be able to stay idle in the course of other operations, this policy will be enforced anyway.
of interest...
Maybe you already know that the other sessions may coordinate their access to the same record in several ways:
FOR UPDATE WAIT x; if you append the WAIT x clause to your select for update statement, the waiting session will give up after "x" seconds have elapsed (the integer "x" must be hardcoded there, for instance the value "3"; a variable won't do, at least in Oracle 11gR2).
SKIP LOCKED; if you append the SKIP LOCKED clause to your select for update statement, the select won't return the records that are locked (as ibre5041 already pointed out).
You may signal an additional session (a sort of watchdog) that your session is about to start the query and, upon successful execution, alert it about the completion. The watchdog session may implement its "kill-the-session-after-timeout" logic. You pay the added complexity but get the benefit of having the timeout applied to that specific statement, not to the session. To do so, see ORACLE-BASE - DBMS_PIPE or 3.2 DBMS_ALERT: Broadcasting Alerts to Users, by Steven Feuerstein, 1998.
Finally, it may be that you are attempting to implement a homemade queue infrastructure. In this case, bear in mind that Oracle already has its own queuing mechanism, called Advanced Queuing, and you may get a lot for very little by simply using it; see ORACLE-BASE - Oracle Advanced Queuing.
I'm developing a program (using C++ running on a Linux machine) that uses SQLite as a back-end.
It has 2 threads which carry out the following tasks:
Thread 1
Waits for a piece of data to arrive (in this case, via a radio module)
Immediately inserts it into the database
Returns to waiting for new data
It is important this thread is "listening" for as much of the time as possible and isn't blocked waiting to insert into the database
Thread 2
Every 2 minutes, runs a SELECT on the database to find un-processed data
Processes the data
UPDATEs the rows fetched with a flag to show they have been processed
The key thing is to make sure that Thread 1 can always INSERT into the database, even if this means that Thread 2 is unable to SELECT or UPDATE (as this can just take place at a future point, the timing isn't critical).
I was hoping to find a way to prioritise INSERTs somehow using SQLite, but have failed to find one so far. Another thought was for Thread 1 to push the data into a basic queue (held in memory) and then bulk INSERT it every so often (this wouldn't block the receiving of data, and it could do a simple check to see if the database was locked and, if so, wait a few milliseconds and try again).
However, what is the "proper" way to do this with SQLite and C++ threads?
SQLite databases can be opened with or without multi-threading support. Both threads should open the database separately, i.e. use their own connection.
If you want to do it the hard way, you can use a priority queue and process the queries from it, as sketched below.
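A minimal sketch of that pattern in Python (the question's C++ code would be analogous); the table, column names, queue, and pragma values are all illustrative assumptions:

import queue
import sqlite3
import threading

writes = queue.Queue()          # Thread 1 only enqueues here, so it is never blocked by the DB

def writer_loop(db_path):
    conn = sqlite3.connect(db_path)              # each thread opens its own connection
    conn.execute("PRAGMA busy_timeout = 30000")  # wait instead of failing immediately
    while True:
        payload = writes.get()                   # blocks until the listener enqueues data
        if payload is None:                      # sentinel used to shut the writer down
            break
        with conn:                               # short transaction: one commit per insert
            conn.execute("INSERT INTO readings(payload, processed) VALUES (?, 0)", (payload,))
    conn.close()

def processor_loop(db_path, stop_event):
    conn = sqlite3.connect(db_path)
    conn.execute("PRAGMA busy_timeout = 30000")
    while not stop_event.wait(120):              # every 2 minutes
        with conn:
            rows = conn.execute(
                "SELECT id, payload FROM readings WHERE processed = 0").fetchall()
            # ... process the rows here ...
            conn.executemany("UPDATE readings SET processed = 1 WHERE id = ?",
                             [(r[0],) for r in rows])
    conn.close()

stop = threading.Event()
threading.Thread(target=writer_loop, args=("log.db",), daemon=True).start()
threading.Thread(target=processor_loop, args=("log.db", stop), daemon=True).start()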
We have a C++ application on OSX (10.10.4) running natively on a MacBook Pro (no VM), using SQLite 3.7.7.1 (old, I realize), and I'm seeing the following behavior:
Thread 1 of the app has a connection to file1.db (on a Journaled HFS+ partition), creates a savepoint, inserts rows into table1, then releases the savepoint.
Thread 2 of same app has connection to file2.db, and has previously attached file1.db in the same connection. It now tries to insert into file2.table2 by selecting from file1.table1.
If 1 & 2 occur close together (within 1 second), we are seeing that the row inserted in step 1 is not inserted into table2. But if we wait 1 second between steps 1 & 2, we always see the row inserted. Step 2 is definitely occurring after Step 1 commits, as we are logging the statements and can observe it.
We are running in the default journal_mode (delete).
Is there some kind of propagation delay between connections shared by the same application when in journal_mode=delete? The rules governing isolation levels (https://www.sqlite.org/isolation.html) don't expressly forbid it.
Problem Solved:
The source of this behavior was a misunderstanding on our part about how savepoints work. Specifically, rolling back to a savepoint does not remove the savepoint, so subsequent operations on a connection after a "ROLLBACK TO" command are still within the original savepoint's transaction.
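A small demonstration of that behaviour using Python's sqlite3 module in autocommit mode (the table is just for illustration):

import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; we issue the SQL ourselves
conn.execute("CREATE TABLE t(x)")

conn.execute("SAVEPOINT sp1")            # outermost savepoint starts a transaction
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("ROLLBACK TO sp1")          # undoes the insert, but sp1 itself is still open
conn.execute("INSERT INTO t VALUES (2)")
print(conn.in_transaction)               # True: still inside sp1's transaction
conn.execute("RELEASE sp1")              # only now does the work become visible to others
print(conn.in_transaction)               # False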
In our case, there was a savepoint prior to the savepoint described in step 1 which had been rolled back, such that Thread 1's work within its released savepoint was still not visible to other connections. Subsequent operations within Thread 1 using a savepoint with the same name as the previously rolled-back savepoint were RELEASEd, which finally committed the data from Thread 1, and that is why we saw the data appear later on in Thread 2's connection.
Since we're using a simple Savepoint class as a stack-created protection object, which keeps track of how many levels deep we are when creating savepoints (which is how the savepoint names get reused), the solution was simply to make sure that the outermost savepoint created (i.e., the one created when there are no other savepoints currently active on that connection) uses "BEGIN TRANSACTION"/"COMMIT"/"ROLLBACK", and all savepoints created within that one use "SAVEPOINT SP_1", "SAVEPOINT SP_2", etc.
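A rough sketch of that guard logic in Python (the original class is C++; the depth bookkeeping is purely illustrative and assumes the connection was opened with isolation_level=None so the wrapper does not issue its own BEGINs):

import sqlite3

_depth = {}  # connection id -> current nesting depth (illustrative bookkeeping)

class TxGuard:
    """Outermost level uses BEGIN/COMMIT/ROLLBACK; nested levels use SAVEPOINT SP_n."""
    def __init__(self, conn):
        self.conn = conn

    def __enter__(self):
        self.level = _depth.get(id(self.conn), 0)
        _depth[id(self.conn)] = self.level + 1
        self.conn.execute("BEGIN" if self.level == 0 else f"SAVEPOINT SP_{self.level}")
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.conn.execute("COMMIT" if self.level == 0 else f"RELEASE SP_{self.level}")
        else:
            if self.level == 0:
                self.conn.execute("ROLLBACK")
            else:
                # Undo this level's work and drop its savepoint, leaving the outer
                # transaction free to commit or roll back on its own.
                self.conn.execute(f"ROLLBACK TO SP_{self.level}")
                self.conn.execute(f"RELEASE SP_{self.level}")
        _depth[id(self.conn)] = self.level
        return False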
I'm opening the sqlite database file with sqlite3_open and inserting data with sqlite3_exec.
The file is a global log file and many users are writing to it.
Now I wonder what happens if two different users with two different program instances try to insert data at the same time... Does opening fail for the second user? Or the insert?
What will happen in this case?
Is there a way to handle the problem if this scenario does not work, without a server-side database?
In most cases, yes. SQLite uses file locking, but it is broken on some systems (notably some network filesystems); see http://www.sqlite.org/faq.html#q5
In short, the lock is taken when a transaction starts writing and is released as soon as the transaction ends. While locked, other instances can neither read nor write to the db (in a "big" client/server database, they could usually still read). You can also open SQLite in exclusive locking mode.
When you want to write to a db that is locked by another process, execution waits for the configured busy timeout (set with sqlite3_busy_timeout; some wrappers default to 5 seconds, while the raw C API defaults to not waiting at all). If the lock is released within that time, the write proceeds; if not, an error is raised.
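For illustration, the equivalent handling in Python (the file path and table are hypothetical; the timeout argument maps to the busy timeout discussed above):

import sqlite3

# Each program instance opens its own connection to the shared log file.
conn = sqlite3.connect("/var/log/shared_log.db", timeout=30.0)  # wait up to 30 s for locks
try:
    with conn:  # keep each insert in its own short transaction
        conn.execute("INSERT INTO log(msg) VALUES (?)", ("hello",))
except sqlite3.OperationalError as exc:
    # Raised as "database is locked" if another writer held the lock past the timeout.
    print("write failed:", exc)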
I have a problem with an sqlite3 db which remains locked/inaccessible after a certain access.
The behaviour occurs so far on Ubuntu 10.4 and on a custom (OpenEmbedded) Linux.
The SQLite version is 3.7.7.1. The db is a local file.
One C++ application accesses the db periodically (every 5 s). Each time, several insert statements are executed, wrapped in a deferred transaction. This happens in one thread only. The connection to the db is held over the whole lifetime of the application. The prepared statements are also persistent and reused via sqlite3_reset. SQLITE_THREADSAFE is set to 1 (serialized), and journaling is set to WAL.
Then I open the sqlite db in parallel with the sqlite command line tool. I enter BEGIN IMMEDIATE;, wait >5 s, and commit with END;.
After this the db access of the application fails: the BEGIN TRANSACTION returns return code 1 ("SQL error or missing database"). If I execute a ROLLBACK TRANSACTION right before the BEGIN, just to be sure there is not already an active transaction, it fails with return code 5 ("The database file is locked").
Does anyone have an idea how to approach this problem or what may cause it?
EDIT: There is a workaround: if the described error occurs, I close and reopen the db connection. This fixes the problem, but I'm currently at a loss as to why.
SQLite is a serverless database. As far as I know, it is not designed for concurrent access from multiple sources. You are trying to access the same backing file from both your application and the command line tool, so you are attempting concurrent access. This is why it is failing.
SQLite connections should only be used from a single thread, as among other things they contain mutexes that are used to ensure correct concurrent access. (Be aware that SQLite also only ever supports a single updating thread at once anyway, with no concurrent reads at that time; that's a limitation of being a serverless DB.)
Luckily, SQLite connections are relatively cheap when they're not doing anything and the cost of things like cached prepared statements is actually fairly small; open up as many as you need.
[EDIT]:
Moreover, this explains why closing and reopening the connection works: it builds the connection in the new thread (and finalizes all the locks etc. in the old one).
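A minimal per-thread connection helper along those lines (Python sketch; the database path and busy timeout are illustrative):

import sqlite3
import threading

_local = threading.local()

def get_connection(db_path="app.db"):
    # Give every thread its own connection; SQLite connections are cheap to open.
    if not hasattr(_local, "conn"):
        _local.conn = sqlite3.connect(db_path)
        _local.conn.execute("PRAGMA busy_timeout = 30000")
    return _local.conn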