I am working on a Qt C++ application that uses an SQLite database. The tables are displayed using QTableView and QSqlTableModel. Some tables have around 10K records.
My issue is that when I try to update any record in a table with 10K records, I get the error "Database is locked. Unable to fetch row". This doesn't happen when the row count is small (say, 20). The journal file is created in the application's folder. It seems some process is holding a lock on the database, but I can't figure out the actual cause.
Can anyone suggest some solution?
Thanks,
Priyanka
In Qt, you send a PRAGMA to your database like this:
dbObj = QSqlDatabase::addDatabase(...);
dbObj.setDatabaseName(...);
dbObj.open();
dbObj.exec("PRAGMA locking_mode = EXCLUSIVE");
However, I don't think that is what you want. From the Qt documentation:
The driver is locked for updates while a select is executed. This may cause problems when using QSqlTableModel because Qt's item views fetch data as needed (with QSqlQuery::fetchMore() in the case of QSqlTableModel).
Take a look at QSqlQuery::isActive, which says:
Returns true if the query is active. An active QSqlQuery is one that has been exec()'d successfully but not yet finished with. When you are finished with an active query, you can make the query inactive by calling finish() or clear(), or you can delete the QSqlQuery instance.
The bottom line is that you have a blocking query originating from somewhere that you either need to properly make "inactive" or that you'll need to arbitrate with.
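As a minimal sketch of what that can look like with SQLite and QSqlTableModel (the names model and someQuery are placeholders, not from your code): make sure no half-fetched SELECT is still open before you write.

// Force the model to fetch all remaining rows so the implicit SELECT
// finishes and releases its read lock before any UPDATE/submitAll().
while (model->canFetchMore())
    model->fetchMore();

// Likewise, release any QSqlQuery you keep around yourself.
// finish() makes the query inactive and frees its result set.
someQuery.finish();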
Check to see if you have the sqlite database open in another window. I had the same issue but then noticed I had unsaved changes in another open window on the database. All worked perfectly once that instance was closed.
First of all, I assure you, I have googled for hours now. My main problem is that I'm trying to fix a corrupted database of the tool paperless-ngx, which I'm using. I am an IT admin, but I have no experience with SQL whatsoever.
I'm getting this error:
ERROR: missing chunk number 0 for toast value 52399 in pg_toast_2619
Now every guide on the entire internet on how to fix this (I'm going to post this one for reference) tells me to REINDEX the table.
When I do this using
reindex (verbose) table django_q_task;
it keeps waiting indefinitely with this error message:
WARNING: concurrent insert in progress within table "django_q_task"
I am positive that there is no write happening from the paperless side; all containers except the database container have been stopped. I tried locking the table using
lock table django_q_task in exclusive mode nowait;
but the error persists. I'm at my wits' end. I beg of you, can someone provide detailed instructions for someone with no PostgreSQL experience at all?
My goal is that files can be hydrated or dehydrated on user request via the Explorer "free up space" or "Always keep on Device" context-menu entries. If I create a new placeholder file that is dehydrated from the beginning, everything works and I can hydrate it via the callback mechanism. But the other way around does not work for me. In Explorer the file is marked as unpinned and shown as syncing, but my application does not receive any callback from CF_CALLBACK_TYPE_NOTIFY_DEHYDRATE or CF_CALLBACK_TYPE_NOTIFY_DEHYDRATE_COMPLETION. Then I tried to do it manually with CfDehydratePlaceholder, but I get exactly the same behaviour: nothing happens and the file remains in the syncing state. Even if I use CfSetInSyncState to set the state to CF_IN_SYNC_STATE_IN_SYNC, it stays in the syncing state.
I then wanted to implement a minimal example with the help of the Cloud Mirror example, but I realized it has the same behaviour: when I try to dehydrate a file, exactly the same thing happens there as well. From my perspective, it feels like cfapi expects an ack from the cloud service that it never gets.
But in OneDrive everything works as expected. What am I missing? Do I have to set some specific settings?
I had a misunderstanding of the whole API; here is how I understand it now, to help other people who are struggling with it.
You have to register your sync root and connect your app to it. When connecting, you receive a CF_CONNECTION_KEY, which is needed to communicate with the virtual filesystem. Then you can add extended attributes to the files inside your sync root. The most important ones are custom attributes you can choose yourself, so your app can identify the file object if needed, plus the PinState and SyncState.

The SyncState mostly does not have to be changed by the app, apart from marking a file as in sync after your app has processed it (you can do that at the moment you update your custom attributes), because whenever a file changes, its SyncState is updated automatically. The PinState declares which final state a file should have: UNPINNED means the file should be dehydrated, and PINNED the opposite. It does not mean the file necessarily has that state already.

My misunderstanding was that I thought that if I unpinned a file it would be dehydrated automatically, or that if I pinned a placeholder I would receive a request via the callback function I mentioned in my question. That is not the case. Your app needs to find out via a file watcher (I can recommend my own FileWatcher project: https://github.com/neXenio/panoptes) that the file attributes of specific files have changed, and then your app has to perform every step itself. As already mentioned, for dehydration the app needs to call CfDehydratePlaceholder. For hydration, you need to open a transfer session via CfGetTransferKey and then hydrate (send the data to the empty file) via CfExecute, where you need the connection key and the transfer key.

Those are the basics. There is much more to tell about it, but I guess with this starting point everybody can figure out the rest.
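As a rough illustration of the "your app has to perform every step itself" part, here is a minimal sketch of dehydrating a placeholder once your watcher notices it was unpinned. DehydrateByPath is a hypothetical helper; the flag combination and error handling are my assumptions from reading cfapi.h, not lifted from the Cloud Mirror sample.

#include <windows.h>
#include <cfapi.h>
#pragma comment(lib, "CldApi.lib")

// Sketch: dehydrate one placeholder by path and clear the "syncing" overlay.
HRESULT DehydrateByPath(PCWSTR path)
{
    HANDLE handle = nullptr;
    // Take an exclusive oplock with write access so the dehydration
    // cannot race with other handles on the same file.
    HRESULT hr = CfOpenFileWithOplock(path,
        CF_OPEN_FILE_FLAG_EXCLUSIVE | CF_OPEN_FILE_FLAG_WRITE_ACCESS, &handle);
    if (FAILED(hr))
        return hr;

    // StartingOffset must be 0; a Length of 0 means "to end of file".
    LARGE_INTEGER offset = {};
    LARGE_INTEGER length = {};
    hr = CfDehydratePlaceholder(handle, offset, length, CF_DEHYDRATE_FLAG_NONE, nullptr);

    if (SUCCEEDED(hr))
    {
        // Mark the placeholder as in sync so Explorer stops showing it as syncing.
        hr = CfSetInSyncState(handle, CF_IN_SYNC_STATE_IN_SYNC,
                              CF_SET_IN_SYNC_FLAG_NONE, nullptr);
    }

    CfCloseHandle(handle);
    return hr;
}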
I am using SQLite3 in my RTOS system. I've set the configuration such that it will lock for each transaction. On my system I end up with one file on the drive
"SQLDB.db"
When there is a transaction, you can usually see a lock file, "SQLDB.db.lock", if you are fast enough.
What's driving me wild is that when I delete "SQLDB.db" I still have the ability to do SELECTs from the database, but I cannot insert. It's not a caching issue because I can do selects on multiple tables (that I haven't done any operations on before rebooting the system).
So my question is, is the DB file being cached? Is it saved in RAM somewhere? How is it possible to query this ghost database?
In Unix, when you delete a file, the directory entry is deleted immediately, but the actual file data is deleted only when all open file handles have been closed.
Apparently, your RTOS behaves the same.
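You can see the same effect outside SQLite. A minimal POSIX sketch (assuming your RTOS exposes Unix-like open/unlink semantics; only the file name is taken from the question, the rest is a demo):

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    // Hold a handle to the database file, the way a running process does.
    int fd = open("SQLDB.db", O_RDONLY);

    // Remove the directory entry; the data blocks are not freed yet because
    // an open descriptor still references them.
    unlink("SQLDB.db");

    // Reads through the old descriptor still succeed: this is the "ghost" file.
    char buf[16];
    ssize_t n = read(fd, buf, sizeof buf);
    std::printf("read %zd bytes after unlink\n", n);

    // Only when the last handle closes can the data actually be reclaimed.
    close(fd);
    return 0;
}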
My application creates a lot of "configuration" models (i.e., they only live in the app at runtime and will never be persisted). I load these on demand, so my app is constantly creating records and then throwing them away.
//create a record that will never be persisted
this.store.createRecord('foo', {name: 'wat'});
In the past I would just clear the store, but I realized this doesn't actually "remove" anything. I've decided to use unloadAll instead:
this.store.unloadAll('foo');
... but I run into this error because I have these "configuration" models:
Error while loading route: Error: Attempted to handle event
unloadRecord on while in state
root.loaded.created.uncommitted.
at new Error (native)
How can I avoid this error (while still using unloadAll, since I need to truly remove these records from the browser)?
Actually, this has now been fixed (or should be) by my PR, which was merged 2 days ago:
see: https://github.com/emberjs/data/pull/1714
That PR loosens the constraint that disallowed unloading all dirty records to disallowing only inFlight records. I believe that with some time and proper thought, that constraint may also be lifted.
The rest of the PR is specifically about proper cleanup when unloading a model or a record array, or when destroying the store. I do believe this is a good first pass at proper cleanup.
I hope this (merged) PR solves your issue; if not, please open a descriptive issue and let's squash the bug.
I've tried the Win32_DesktopMonitor class and checked "Availability", but the value returned is always 3 (powered on), even when the monitor is physically turned off.
Is the data cached and is there a "force refresh" command in WMI, or is "Availability" simply not reliable in this particular case?
I think there is caching going on somewhere. I've observed it recently.
I wrote code that polled for updates to Win32_PnPSignedDriver via SelectQuery / ManagementObjectSearcher, and the results appear to be cached because it never notices that a new device/driver has been added. Running the same query from a separate app instantly sees the update.
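If you want to rule stale results out, one option is to build the whole query from scratch on every poll instead of reusing an old searcher/enumerator. A rough C++ sketch of that pattern (error checking omitted; it just reads Availability from Win32_DesktopMonitor):

#include <windows.h>
#include <comdef.h>
#include <wbemidl.h>
#pragma comment(lib, "wbemuuid.lib")

int main()
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
                         RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
                         nullptr, EOAC_NONE, nullptr);

    IWbemLocator* locator = nullptr;
    CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, reinterpret_cast<void**>(&locator));

    IWbemServices* services = nullptr;
    locator->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), nullptr, nullptr, nullptr,
                           0, nullptr, nullptr, &services);
    CoSetProxyBlanket(services, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, nullptr,
                      RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
                      nullptr, EOAC_NONE);

    // Execute the query fresh each time you poll, rather than holding on
    // to a previous enumerator.
    IEnumWbemClassObject* results = nullptr;
    services->ExecQuery(_bstr_t(L"WQL"),
                        _bstr_t(L"SELECT Availability FROM Win32_DesktopMonitor"),
                        WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                        nullptr, &results);

    IWbemClassObject* row = nullptr;
    ULONG returned = 0;
    while (results && results->Next(WBEM_INFINITE, 1, &row, &returned) == S_OK && returned)
    {
        VARIANT value;
        row->Get(L"Availability", 0, &value, nullptr, nullptr);
        wprintf(L"Availability = %d\n", value.vt == VT_I4 ? value.lVal : -1);
        VariantClear(&value);
        row->Release();
    }

    if (results) results->Release();
    services->Release();
    locator->Release();
    CoUninitialize();
    return 0;
}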
You may also have a look at your driver. According to the documentation, starting with Windows Vista, hardware that is not compatible with the Windows Display Driver Model (WDDM) returns inaccurate property values for instances of this class. To me, that's another way of saying it's not reliable.