I have a simple DBExpress application using a SQLite database in C++Builder XE3.
The UI has a few DBGrids and DBNavigators. The DBGrids are connected to TDataSources, which are connected to TClientDataSets. The TClientDataSets get their data from TDataSetProviders, which in turn get data from a TSQLDataSet. Everything is connected to a TSQLConnection.
The application works fine except when data is posted by another db component not related to the application. To get the new 'remote' data, I have to click the Refresh button on the navigator twice.
I'm getting the same behavior with the following code:
mClientDBSession.insertImageFile(getFileNameNoPath(f), "Note..");
DBNavigator1->BtnClick(nbRefresh);
DBNavigator1->BtnClick(nbRefresh);
The mClientDBSession is a database object not related to DBExpress. insertImageFile inserts a record directly into the SQLite database and is synchronous, so I know the DB has the data after the function exits.
Oddly, if I don't call BtnClick(nbRefresh) twice, the data is not updated, as far as I can tell from looking at the DBGrid.
Ideally I would have a timer automatically updating the DBExpress components with new server data every so often.
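A minimal sketch of that idea, assuming a TTimer dropped on the form (here called RefreshTimer) and that ClientDataSet1 is one of the TClientDataSets behind a grid; this is only a sketch, not a verified fix for the double-refresh behavior:

void __fastcall TForm1::RefreshTimerTimer(TObject *Sender)
{
    // Hypothetical periodic refresh: Close/Open forces the TClientDataSet
    // to refetch through its provider from the TSQLDataSet.
    ClientDataSet1->Close();
    ClientDataSet1->Open();
}

The interval would be set on the timer (e.g. 5000 ms) in the Object Inspector.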
We have a remote event receiver associated with a list and hooked on all events there. When you update any list item using the OOB SharePoint page, the event receiver is executed, and a web service that takes care of the follow-up actions works nicely. However, when you update the item using CSOM code, e.g. in a simple console application, nothing happens. The event receiver is not called at all. I found this issue on both SP 2013 and 2016.
I will not post any code since it is irrelevant: the item is updated using the standard approach and the values are actually changed in the list item; only the event receiver is not fired. To put it simply:
item updated manually from site -> event receiver fired
item updated via CSOM -> event receiver not fired.
I remember a similar issue on SharePoint 2010 when using server-side code and the system account. Could it be that behind the scenes the web service called by CSOM (e.g. list.asmx) is also using the system account to make changes? It's just a hypothesis...
So after deeper investigation and many tries and failures, we found out it was indeed an issue with the code in our event receiver. For some strange reason the original developers were checking the Title field in the after properties and cancelling the code if it was not present. I guess it was probably an attempt to prevent looping calls.
One lesson learned: when using CSOM, the after event properties contain only those fields which were altered by the CSOM code. Keep that in mind in case you need values other than the ones you are updating. You may need to stupidly copy and assign them again just because of this.
I am experimenting with turning a more traditional ember-data based app into a real-time app that uses websockets to keep multiple instances in sync.
My first attempt involves sending any updated record back to all open sessions that have accessed the record so that they all can have the latest copy. This includes the session that initiated the change. This means that after I call record.save() in the client, I get back the updated copy both from the REST API and the websocket. The client-end of the websocket simply calls store.pushPayload(data) to update the store.
This causes problems because the record might be inFlight at the time, and I get the error:
Attempted to handle event `pushedData` on [...] while in state root.deleted.inFlight.
I have several ideas:
Somehow prevent the client from receiving its own records back and only send them to other websocket connections.
Somehow synchronize access to the store so that when I call pushPayload the affected records are not in-flight (a rough sketch of this follows below).
Both of these seem rather complicated and I was hoping there's an established means of keeping multiple Ember apps up-to-date.
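For the second idea, here is roughly what I mean (the socket API, event name and 'post' model are made up; I'd key off the isSaving flag):

socket.on('recordUpdated', function (data) {
  var record = store.getById('post', data.post.id);
  // skip (or queue) the push while the record is still in flight
  if (record && record.get('isSaving')) {
    return;
  }
  store.pushPayload(data);
});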
I'm working on a system which uses the Versant object database.
We have functional tests which send requests to the server; the server performs the requested operation on the database and returns the results.
Afterwards we send an opposite request which is supposed to restore the db to its previous state before the next test starts.
This is an invalid approach: we try to restore the db to its previous state using the very same requests we are testing.
Is there a feature similar to Oracle Flashback in Versant? If not, what is the proper way to handle this problem?
I'm running a system with a few workers that take jobs from a message queue, all using Django's ORM.
In one case I'm actually passing a message along from one worker to another via a second queue.
It works like this:
Worker1 in queue1 creates an object (MySQL INSERT) and pushes a message to queue2
Worker2 accepts the new message in queue2 and retrieves the object (MySQL SELECT), using Django's objects.get(pk=object_id)
This works for the first message. But for the second message, worker2 always fails because it can't find the object with id object_id (Django raises DoesNotExist).
This works seamlessly in my local setup with Django 1.2.3 and MySQL 5.1.66, the problem occurs only in my test environment which runs Django 1.3.1 and MySQL 5.5.29.
If I restart worker2 every time before worker1 pushes a message, it works fine. This makes me believe there's some kind of caching going on.
Is there any caching involved in Django's objects.get() that differs between these versions? If that's the case, can I clear it in some way?
The issue is likely related to the use of MySQL transactions. On the sender's side, the transaction must be committed to the database before notifying the receiver of an item to read. On the receiver's side, the transaction isolation level used for the session must be set such that new data becomes visible in the session after the sender's commit.
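For the sender side, a rough sketch of the ordering (the model, queue, and field names are made up; commit_on_success is the old-style transaction decorator available in Django 1.2/1.3):

from django.db import transaction

@transaction.commit_on_success
def create_object(**fields):
    # the INSERT is committed when this function returns without raising
    return MyModel.objects.create(**fields)

obj = create_object(name='example')
# publish only after the commit, so worker2's SELECT can see the new row
queue2.push({'object_id': obj.pk})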
By default, MySQL uses the REPEATABLE READ isolation level. This poses problems when more than one process is reading from and writing to the database. One possible solution is to set the isolation level in the Django settings.py file using a DATABASES option like the following:
'OPTIONS': {'init_command': 'SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED'},
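For context, a full DATABASES entry might look like this (engine path and credentials are placeholders):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        # init_command runs once for every new connection
        'OPTIONS': {'init_command': 'SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED'},
    },
}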
Note, however, that changing the transaction isolation level may have other side effects, especially when using statement-based replication.
The following links provide more useful information:
How do I force Django to ignore any caches and reload data?
Django ticket#13906
I'm using the new router and ember data rev 11.
I need to force ember-data to reload the data for a record from the server. Calling App.MyRecord.find(2) inside a setInterval function loads the data from the client's local store.
How can I reload the data from the server?
I just pushed record.reload() to Ember Data. This will ask the adapter to reload the data from the server and update the record with the new data.
Constraints:
You can only call reload() on a record if it has completed loading and has not yet been modified. Otherwise, the returned data will conflict with the modified data. In the future, we will add support for a merge hook to address these sorts of conflicts, which will allow reload() in more record states.
If you call reload() and change or save the record before the adapter returns the new data, you will get an error for the same reason. The error currently looks something like Attempted to handle event 'reloadRecord' on <Person:ember263:1> while in state rootState.loaded.updated.uncommitted. Basically, this means that your record was in the "updated but uncommitted" state, and you aren't allowed to call reload() outside of the "loaded and unmodified" state.
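A minimal polling sketch along those lines, assuming the record is already loaded and using the isLoaded/isDirty flags to stay inside the allowed state (interval and model name are arbitrary):

var record = App.MyRecord.find(2);
setInterval(function () {
  // only reload once the record has finished loading and has no local changes
  if (record.get('isLoaded') && !record.get('isDirty')) {
    record.reload();
  }
}, 5000);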