Avoiding sqlite3 database locked - c++

I have a multithreaded application that uses SQLite (3.7.3).
I'm hitting the "database is locked" error that seems to be quite prevalent.
I'm wondering how to avoid it in my case.
Let me describe what I'm building. Sorry, no code; it's too large and complex.
I have around 8 threads that simultaneously access the database. Any one of those threads can either read or write at the same time.
Each row in a table in the database has a file path that points to a resource + other attributes related to that resource.
3 fields of note are readers, status and del.
Readers is incremented each time a thread reads from the resource, but only if status > 0 and del = 0.
So I have some SQL that does
UPDATE resource set readers=readers+1 where id=? AND del=0 AND status>0
After that, I check the number of rows updated. It should only be 1.
After that I try to read the row back with a select. I do that even if the update failed, because I need to know the reason it failed.
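(For reference, in the C API the affected-row count for the last UPDATE comes from sqlite3_changes():)

int changed = sqlite3_changes(db);  // should be exactly 1 here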
I tried wrapping both the update and the select in a transaction, but that didn't help.
I've checked that I'm calling finalize on my statements too.
Now, I thought that SQLite serializes by default. I've tried a couple of open modes, but I still get the same error.
And before you ask, no I'm not intending to go to mysql. I absolutely need zero config.
Can someone provide some pointers on how to avoid this type of problem? Should I move the readers lock out of the DB? If I do that, what mechanism should I replace it with? I'm using C++ on Linux with the Boost library available.
EDIT:
Interestingly, adding COMMIT after my update call improved things dramatically.

When you open the db, you should configure the 'busy timeout'
int sqlite3_busy_timeout(sqlite3*, int ms);
http://www.sqlite.org/c3ref/busy_timeout.html
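For example, right after opening (a minimal sketch; the file name and the 2000 ms value are placeholders):

#include <sqlite3.h>

sqlite3* open_with_timeout(const char* path)   // e.g. "app.db"
{
    sqlite3* db = nullptr;
    if (sqlite3_open(path, &db) != SQLITE_OK) return nullptr;
    // Retry for up to 2000 ms when another connection holds the lock,
    // instead of failing immediately with SQLITE_BUSY.
    sqlite3_busy_timeout(db, 2000);
    return db;
}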

First question: are you trying to use one connection across all eight threads? If so, make sure each thread has its own connection. I don't know of any database that likes that.
Also check out the FAQ: http://www.sqlite.org/faq.html
Apparently SQLite has to be compiled with the SQLITE_THREADSAFE preprocessor option set to 1. There is also a way to determine from code whether that's your problem.
Another issue is that writes can only safely happen from one process at a time.
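To check the compile-time threading mode from code, one way is the sqlite3_threadsafe() API (a small sketch):

#include <sqlite3.h>
#include <cstdio>

int main()
{
    // Reports the compile-time SQLITE_THREADSAFE setting:
    // 0 = single-thread (unsafe here), 1 = serialized, 2 = multi-thread.
    std::printf("SQLITE_THREADSAFE=%d\n", sqlite3_threadsafe());
    return 0;
}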

Related

How to enable WAL (write ahead log) in a C++ program?

I currently have two different processes: one process writes into a database, and the other reads and updates the records that the first process inserted. Every time I try to run both processes concurrently, the database gets locked, which makes sense given the simultaneous reads and writes. I came across WAL, but I haven't found any example of how to enable WAL from C++ code. Any help would be appreciated. Thanks.
To activate it, as per the documentation you just need to run the following statement:
PRAGMA journal_mode=WAL
This is persistent and only needs to be applied once.
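From C++, the pragma can be issued once through sqlite3_exec right after opening the connection (a sketch; minimal error handling):

#include <sqlite3.h>

bool enable_wal(sqlite3* db)
{
    char* errMsg = nullptr;
    // The journal mode is stored in the database file itself,
    // so this only needs to succeed once.
    int rc = sqlite3_exec(db, "PRAGMA journal_mode=WAL;",
                          nullptr, nullptr, &errMsg);
    if (rc != SQLITE_OK) sqlite3_free(errMsg);
    return rc == SQLITE_OK;
}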

sqlite in c++ - parallel inserts from different applications, what will happen?

I'm opening the sqlite database file with sqlite3_open and inserting data with a sqlite3_exec.
The file is a global log file and many users are writing to it.
Now I wonder, what happens if two different users with two different program instances try to insert data at the same time... Is the opening failing for the second user? Or the inserting?
What will happen in this case?
Is there a way to handle the problem, if this scenario is not working? Without a server side database?
In most cases, yes, this works. SQLite uses file locking, but the locking is broken on some systems; see http://www.sqlite.org/faq.html#q5
In short, the lock is created when you start a transaction and released immediately after it ends. While locked, other instances can neither read nor write to the db (in "big" client/server databases, readers could still proceed). However, you can also connect to SQLite in exclusive mode.
When you want to write to a db that is locked by another process, execution halts for a specified timeout, by default 5 seconds. If the lock is released in time, it proceeds with the write; if not, it raises an error.
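One way to deal with this from C++ is to set a busy timeout and retry on SQLITE_BUSY (a sketch; the log(msg) table and the retry count are made up for illustration):

#include <sqlite3.h>

int insert_with_retry(sqlite3* db)
{
    sqlite3_busy_timeout(db, 5000);          // wait up to 5 s per attempt
    int rc = SQLITE_BUSY;
    for (int attempt = 0; attempt < 3 && rc == SQLITE_BUSY; ++attempt) {
        rc = sqlite3_exec(db, "INSERT INTO log(msg) VALUES('hello');",
                          nullptr, nullptr, nullptr);
    }
    return rc;                               // SQLITE_OK on success
}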

sqlite db remains locked/unaccessible

I have a problem with an sqlite3 db which remains locked/unaccessible after a certain access.
Behaviour occurs so far on Ubuntu 10.4 and on custom (OpenEmbedded) Linux.
The SQLite version is 3.7.7.1. The db is a local file.
One C++-applications accesses the db periodically (5s). Each time several insert statements are done wrapped in a deferred transaction. This happens in one thread only. The connection to the db is held over the whole lifetime of the application. The statements used are also persistent and reused via sqlite3_reset. sqlite_threadsafe is set to 1 (serialized), journaling is set to WAL.
Then I open the sqlite db in parallel with the sqlite command-line tool. I enter BEGIN IMMEDIATE;, wait >5s, and commit with END;.
After this, the db access of the application fails: the BEGIN TRANSACTION returns code 1 ("SQL error or missing database"). If I execute a ROLLBACK TRANSACTION right before the begin, just to be sure there is not already an active transaction, it fails with return code 5 ("The database file is locked").
Has anyone an idea how to approach this problem or has an idea what may cause it?
EDIT: There is a workaround: if the described error occurs, I close and reopen the db connection. This fixes the problem, but I'm currently at a loss as to why.
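For reference, the periodic writer described above might look roughly like this (a sketch with hypothetical types; error handling omitted):

#include <sqlite3.h>
#include <string>
#include <vector>

struct Record { std::string path; };   // stands in for the real data

void write_batch(sqlite3* db, sqlite3_stmt* stmt,   // stmt prepared once
                 const std::vector<Record>& batch)
{
    sqlite3_exec(db, "BEGIN DEFERRED;", nullptr, nullptr, nullptr);
    for (const Record& r : batch) {
        sqlite3_bind_text(stmt, 1, r.path.c_str(), -1, SQLITE_TRANSIENT);
        sqlite3_step(stmt);
        sqlite3_reset(stmt);   // keep the prepared statement for reuse
    }
    sqlite3_exec(db, "COMMIT;", nullptr, nullptr, nullptr);
}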
SQLite is a serverless database. As far as I know, it does not support concurrent access from multiple sources by design. You are trying to access the same backing file from both your application and the command-line tool, so you are attempting concurrent access. That is why it is failing.
SQLite connections should only be used from a single thread, as among other things they contain mutexes that are used to ensure correct concurrent access. (Be aware that SQLite also only ever supports a single updating thread at once anyway, with no concurrent reads at that time; that's a limitation of being a serverless DB.)
Luckily, SQLite connections are relatively cheap when they're not doing anything and the cost of things like cached prepared statements is actually fairly small; open up as many as you need.
[EDIT]:
Moreover, this would explain why closing and reopening the connection works: it builds the connection in the new thread (and finalizes all the locks etc. in the old one).
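A minimal sketch of that recovery path (the file name and timeout value are placeholders):

#include <sqlite3.h>

// On a persistent error, tear the connection down and rebuild it.
// Finalize any persistent statements before closing, or the close fails.
sqlite3* reopen_connection(sqlite3* db)
{
    sqlite3_close(db);
    sqlite3* fresh = nullptr;
    if (sqlite3_open("app.db", &fresh) != SQLITE_OK) return nullptr;
    sqlite3_busy_timeout(fresh, 5000);
    // ... re-prepare any persistent statements here ...
    return fresh;
}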

Race condition with Performance Counters for current process

I am trying to get around the old "How do I get a Windows Performance Counter for the current process" issue. Basically I am enumerating Process Object instances to get a list of Process objects that I can then query for their process id and compare to my own.
Based on this I can build a performance counter path using the correct instance index (to create something similar to \Process(my_program#3)\<counter>) that I can then use to query whatever counter it is that I am interested in. But what happens if one or more of the other instances of my_program exit prior to the PdhAddCounter call? If I understand correctly, this would mean that my counter path now refers to a different process or is now invalid. They might even disappear while querying for the process id...
How do I prevent the counter path from becoming invalid before I can use it to get a counter handle?
Wow, you are right. This seems like a major design flaw to me. Basically, it is impossible to reliably monitor an instance if its name is not unique. I did stumble across a workaround specifically for the Process and Thread objects, but that's a global setting that could affect other applications.
I think the safest way to do this would be to watch all process objects, and each time you collect the data go through and find the one with the desired Process Id.
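A sketch of that approach using a wildcard PDH counter path, so every Process instance is fetched in one pass and matched by pid (an illustration with error handling trimmed, not a hardened implementation):

#include <windows.h>
#include <pdh.h>
#include <string>
#include <vector>
#pragma comment(lib, "pdh.lib")

// Returns the instance name (e.g. L"my_program#3") whose "ID Process"
// counter matches our own pid. Instances can still change between
// collections, so repeat this on every pass rather than caching the name.
std::wstring FindOwnInstanceName()
{
    PDH_HQUERY query = nullptr;
    PDH_HCOUNTER counter = nullptr;
    PdhOpenQueryW(nullptr, 0, &query);
    // One wildcard counter covers every Process instance at once.
    PdhAddCounterW(query, L"\\Process(*)\\ID Process", 0, &counter);
    PdhCollectQueryData(query);

    DWORD bytes = 0, count = 0;   // first call just reports the needed size
    PdhGetFormattedCounterArrayW(counter, PDH_FMT_LONG, &bytes, &count, nullptr);
    std::vector<BYTE> buf(bytes);
    auto items = reinterpret_cast<PPDH_FMT_COUNTERVALUE_ITEM_W>(buf.data());
    PdhGetFormattedCounterArrayW(counter, PDH_FMT_LONG, &bytes, &count, items);

    std::wstring name;
    const DWORD myPid = GetCurrentProcessId();
    for (DWORD i = 0; i < count; ++i) {
        if (static_cast<DWORD>(items[i].FmtValue.longValue) == myPid) {
            name = items[i].szName;
            break;
        }
    }
    PdhCloseQuery(query);
    return name;
}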

How to detect Oracle broken/stalled connection?

In our server/client-setup we're experiencing some weird behaviour. The client is a C/C++-application which uses OCI to connect to an Oracle server (using the OTL library).
Every now and then the DB server dies in such a way (yes, this is the core issue, but we cannot solve it from the application side and have to deal with it anyway) that the machine no longer responds to new requests/connections, while the existing ones, like the Oracle connections, neither drop nor time out. Queries sent to the DB simply never return.
What possibilities (if any) are provided by Oracle to detect these stalled connections from the client-application side and recover in a more or less safe way?
This is a bug (or call it a feature) in Oracle up to 11.1.0.6; the patch set for Oracle 11g Release 1 (11.1.0.7) is said to contain the fix. Worth verifying.
If it happens, you will have to cancel (kill) the thread performing the action.
Not a good approach, though.
In all my DB schemas I have a table with one constant record. Just poll that table periodically with a simple SQL request; all other methods are unreliable.
There's a set_timeout API in OTL that might be useful for this.
Edit: Actually, ignore that. set_timeout doesn't work with OCI. Have a look at the set_timeout description from here where it describes a technique that can be used with OCI
Sounds like you need to fire off a query to the database (e.g. SELECT * FROM dual), and if the database hasn't responded within a specified amount of time, assume the server has died and react accordingly. I'm afraid I don't know C/C++, but can you use multi-threading to fire off the statement, then wait for the response, without hanging the application?
This works; I have done exactly what you are looking for.
Have a parent process (A) create a child process (B). The child process (B) connects to the database and performs a query (something like select 1 from a_table; you will get better performance if you avoid using dual for this and create your own table). If (B) is successful, it reports success and exits. (A) waits for a specified amount of time (I used 15 seconds); if (A) detects that (B) is still running, it can assume that the database is hung, kill (B), and take the necessary actions (like calling me on the phone with an SMS).
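A rough sketch of that parent/child watchdog, assuming a POSIX environment; run_probe_query() is a hypothetical helper that performs the actual OTL/OCI query:

#include <sys/types.h>
#include <sys/wait.h>
#include <signal.h>
#include <unistd.h>

bool run_probe_query();   // hypothetical: connects and runs the probe query

// Returns false if the probe did not finish within timeout_seconds,
// in which case the database is assumed to be hung.
bool db_is_alive(int timeout_seconds /* e.g. 15 */)
{
    pid_t child = fork();
    if (child == 0)                       // child (B): probe and exit
        _exit(run_probe_query() ? 0 : 1);

    for (int elapsed = 0; elapsed < timeout_seconds; ++elapsed) {
        int status = 0;
        if (waitpid(child, &status, WNOHANG) == child)
            return WIFEXITED(status) && WEXITSTATUS(status) == 0;
        sleep(1);                         // parent (A) waits and re-checks
    }
    kill(child, SIGKILL);                 // (B) still running: assume hang
    waitpid(child, nullptr, 0);           // reap the killed child
    return false;
}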
If you configure SQL*NET to use a timeout you will probably notice that large queries will fail because of it. The OCI set_timeout configuration will also cause this.
There is a manual way to avoid this: ping the database after every specified duration of time, so that the connection does not get dropped (for example, by a firewall).
The idea:
if (current_time - lastPingTime > configuredPingTime)
{
    // Dummy keep-alive query
    select 1 from dual;
    lastPingTime = current_time;
}
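A hedged sketch of that keep-alive using OTL; the OTL_ORA* macro, the connect details, and the interval handling are assumptions to adapt to your environment:

#define OTL_ORA11G     // assumption: pick the macro matching your Oracle client
#include <otlv4.h>
#include <ctime>

void ping_if_due(otl_connect& db, time_t& lastPingTime, int configuredPingTime)
{
    time_t now = time(nullptr);
    if (now - lastPingTime > configuredPingTime) {
        try {
            otl_stream s(1, "select 1 from dual", db);  // dummy query
            int one = 0;
            s >> one;              // force a round trip to the server
            lastPingTime = now;
        } catch (otl_exception&) {
            // No answer: treat the connection as broken.
        }
    }
}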