I'm writing a Qt application where different models can insert/delete/update the same table. When one model changes the database, I would like the other models to be notified of the change, so they can update their views accordingly.
It seems that the best way to monitor inserts, deletes and updates in SQLite is to use QSqlDriver::subscribeToNotification and then react to the notification signal. I know the syntax is along the lines of:
db.driver()->subscribeToNotification("anEventId");
However, I'm not sure what anEventId means. Is anEventId a constant provided by SQLite or do I code these specific events into SQLite using triggers or something else and then subscribe to them?
The subscribeToNotification implementation in the Qt SQLite driver relies on the sqlite3_update_hook function of the SQLite C API. The Qt driver, however, does not forward the operation performed, just the table name, and that is the mysterious anEventId argument to pass to subscribeToNotification. In short, you can listen for events occurring in any table (provided it is a rowid table, which it generally is) by passing the table name to the subscribeToNotification method. When your slot catches the notification signal, though, you only know that an operation occurred in that table; (sadly) Qt won't tell you which one (INSERT, UPDATE, or DELETE).
So, given a QSqlDriver * driver:
driver->subscribeToNotification("mytable1");
driver->subscribeToNotification("mytable2");
driver->subscribeToNotification("mytable3");
then in your slot:
void MyClass::notificationSlot(const QString &name)
{
    if (name == "mytable1")
    {
        // do something
    }
    else if (name == "mytable2")
    {
        // etc...
    }
}
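For completeness, a minimal sketch of wiring the driver's notification signal to that slot might look like this (db is assumed to be an open QSQLITE connection and handler an instance of MyClass; both names are illustrative):
// Connect the SQLite driver's notification signal to the slot above.
QObject::connect(db.driver(), SIGNAL(notification(const QString&)),
                 &handler, SLOT(notificationSlot(const QString&)));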
I am currently attempting to use transactions in my C++ app, but I have a problem with ODBC's auto commit mode.
I am using the POCO libraries to create a connection to a PostgreSQL database on the same machine. Currently, I can send data to this database as single statements, but I cannot get my head around how to use Poco's transaction facilities to send this data more quickly.
As I have several thousand records to insert, continuing to use single insert statements is extremely slow and impractical, so I am trying to use Poco's transactions to speed this up (a fair bit).
The error I am encountering is theoretically a simple one - Poco is throwing the following error:
'Invalid access: Session is in auto commit mode.'
I understand, as a result of this, that I should somehow set "auto commit" to false, as auto commit mode only allows me to commit data to the database line by line, rather than as a single transaction.
The problem is how I set this.
Currently, I have a session created from Session.h that looks a lot like this:
session = new Poco::Data::Session(
    "ODBC",
    connection_data.str()
);
Where connection_data is a simple stringstream with the login information, password, database, server, and "Driver={PostgreSQL ANSI};" to tell ODBC to use PostgreSQL's driver.
I have tried simply setting an "autocommit" property to false through the session's setFeature or setProperty methods; this, of course, was to no avail (it was more of a last-ditch attempt at that point).
session->setFeature("AUTOCOMMIT", false);
Looking around, I saw a possible alternative: creating an ODBC SessionImpl directly from ODBC/session/SessionImpl.h instead of using the generic method above, and then creating a new session object from that.
The benefit of this is that ODBC's SessionImpl has references to autocommit mode in its header, which suggests it would be able to handle this:
void autoCommit(const std::string&, bool val);
/// Sets autocommit property for the session.
However, having not used SessionImpl before, I cannot guarantee that this will work, or that I can get it to work with the limited documentation available.
I am using C++03 (not C++11), with Visual Studio 2015
Poco 1.7.5
Boost (where needed)
Would anyone know the correct way of setting this feature (above), or an alternative method of achieving this?
Edit: Looking at the source of Poco, at:
https://github.com/pocoproject/poco/blob/develop/Data/ODBC/src/SessionImpl.cpp#L153
The property seems to be named autoCommit, and looking at
https://github.com/pocoproject/poco/blob/develop/Data/include/Poco/Data/AbstractSessionImpl.h#L120
the case of the property names seems to matter. So, does it help if you use session->setFeature("autoCommit", false);?
Can't you just call session->begin(); and session->commit(); on the corresponding Session object?
What is returned by session->canTransact()?
According to the docs, begin() will start a new transaction; the docs do not mention any property that needs to be set before or after.
See: https://pocoproject.org/docs/Poco.Data.Session.html
I also faced a similar issue.
First of all, before begin() you need:
m_ses.setFeature("autoCommit", false);
m_ses.begin();
The second issue is that the autoCommit feature stays false for all subsequent sessions. So don't forget, for the next session, to call
session.setFeature("autoCommit", true);
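Putting those two pieces together, a minimal sketch of a bulk insert inside an explicit transaction might look like the following (the table and column names are placeholders, and the container binding uses Poco::Data's standard use() keyword):
#include <vector>
#include <string>
#include <Poco/Data/Session.h>

using namespace Poco::Data::Keywords;

// Sketch only: "mytable", "id" and "name" are placeholder names.
void bulkInsert(Poco::Data::Session& session,
                std::vector<int>& ids,
                std::vector<std::string>& names)
{
    session.setFeature("autoCommit", false); // must come before begin()
    session.begin();
    // Binding whole containers executes the insert once per element.
    session << "INSERT INTO mytable (id, name) VALUES (?, ?)",
        use(ids), use(names), now;
    session.commit();
    session.setFeature("autoCommit", true);  // restore for later sessions
}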
I am creating a new Ember app. I want to use the newest version of ember-data (ember-data 2.0). I want it to be a mobile web app, so it must handle variable network access and even being offline.
I want it to store all data locally and use that data when it goes offline so the user gets the same experience regardless of the network connectivity.
Is ember-data 2.0 capable of handling the offline case? Do I just make an adapter that detects offline/online and then...?
Or do I have to make my own in-between layer to hide the offline handling from ember-data?
Are there any libraries out there that have this problem solved? I have found some, but are there any that are up to date with the latest version of ember-data?
If the device goes offline and the user tries to transition to a route whose model is not loaded yet, you will get an error. You need to handle these situations yourself. For example, you may create a nice page with an error message and a refresh button. To do this, you need:
First, in the application route, create an error action (it will catch errors during the model hook), and when an error occurs, save the transition in memory. Do not try to use local storage for this task; it will save only properties, while we need the actual transition object. Use either window.failedTransition, or inject into controllers and routes a simple object which will contain the failed transition.
actions: {
    error: function (error, transition) {
        transition.abort();
        /**
         * You need to correct this line, as you don't have memoryStorage
         * injected. Use window.failedTransition, or create a simple
         * storage, it's up to you.
         */
        this.get('memoryStorage').set('failedTransition', transition);
        return true; // This line is important, or the whole idea will not work
    }
}
Second, create an error controller and template. In the error controller, define a retry action:
actions: {
    retry: function () {
        /**
         * Correct this line too
         */
        var transition = this.get('memoryStorage').getAndRemove('failedTransition');
        if (transition !== undefined) {
            transition.retry();
        }
    }
}
Finally, in the error template, display a status and an error text (if any is available) and a button with that action to retry the transition.
This is a simple solution for a simple case (the device going offline for just a few seconds); maybe you will need something way more complex. If you want your application to fully work without network access, then you may want to use local storage (there is an addon: https://github.com/funkensturm/ember-local-storage) for all data and sync it with the server from time to time (e.g. sync data every 10 seconds in the background). Unfortunately I didn't try such things, but I think it is possible.
I wrote a C++ program using Qt. Some variables inside my algorithm are changed outside of my program, in a web page. Every time the user changes the variable values in the web page, I modify a pre-created SQL database.
Now I want my code to pick up the new variable values at run time without stopping the program. There are two options:
Every n seconds, check the database and retrieve the variable values -> this is not good, since I would have to check the database for changes every n seconds (it might go unchanged for years, and I don't want to keep checking whether the database content has changed)
Every time the database is changed, my Qt program emits a signal, so by catching this signal I can refresh the variable values. This seems the optimal solution, and I want to write the code for this part.
The C++ part of my code is:
void UpdateDatabase()
{
    QSqlDatabase db = QSqlDatabase::addDatabase("QMYSQL");
    db.setHostName("localhost");
    db.setDatabaseName("Mydataset");
    db.setUserName("user");
    db.setPassword("pass");
    if (!db.open())
    {
        qDebug() << "Error is: " << db.lastError();
        qFatal("Failed To Connect");
    }
    QSqlQuery qry;
    qry.exec("SELECT * from tblsystemoptions");
    QSqlRecord rec = qry.record();
    int cols = rec.count();
    qry.next();
    MCH = qry.value(0).toString(); // some global variables used in other functions
    MCh = qry.value(1).toString();
    // ... this goes on ...
}
QSqlDriver supports notifications, which emit a signal when a specific event has occurred. To subscribe to an event, just use QSqlDriver::subscribeToNotification(const QString &name). When an event that you're subscribing to is posted by the database, the driver will emit the notification() signal and your application can take appropriate action.
db.driver()->subscribeToNotification("someEventId");
The message can be posted automatically from a trigger or a stored procedure. The message is very lightweight: nothing more than a string containing the name of the event that occurred.
You can connect the notification(const QString&) signal to your slot like:
QObject::connect(db.driver(), SIGNAL(notification(const QString&)), this, SLOT(refreshView()));
I should note that this feature is not supported by MySQL as it does not have an event posting mechanism.
There is no such thing. The Qt event loop and the database are not connected in any way. You only fetch/alter/delete/insert data, and that's it. Option 1 is what you have to do. There are ways to use a TRIGGER on the server side to launch external scripts, but this would not help you very much.
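If polling is the only option with QMYSQL, a QTimer that checks a cheap change marker keeps the cost down. A sketch, assuming a last_modified timestamp column on tblsystemoptions that is updated whenever the web page writes to the table (that column is an assumption, not part of the original schema):
#include <QObject>
#include <QTimer>
#include <QDateTime>
#include <QSqlQuery>
#include <QVariant>

class OptionsWatcher : public QObject
{
    Q_OBJECT
public:
    explicit OptionsWatcher(QObject *parent = 0) : QObject(parent)
    {
        connect(&m_timer, SIGNAL(timeout()), this, SLOT(poll()));
        m_timer.start(5000); // check every 5 seconds
    }

signals:
    void optionsChanged(); // connect this to a slot that re-reads the table

private slots:
    void poll()
    {
        QSqlQuery qry("SELECT MAX(last_modified) FROM tblsystemoptions");
        if (qry.next()) {
            QDateTime stamp = qry.value(0).toDateTime();
            if (m_lastStamp.isValid() && stamp != m_lastStamp)
                emit optionsChanged();
            m_lastStamp = stamp;
        }
    }

private:
    QTimer m_timer;
    QDateTime m_lastStamp;
};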
Using Doctrine 2.1 (and Zend Framework 1.11, not that it matters for this matter), how can I do post-persist and post-update actions that involve re-saving to the db?
For example, creating a unique token based on the just-generated primary key id, or generating a thumbnail for an uploaded image (which actually doesn't require re-saving to the db, but still)?
EDIT - let's explain, shall we?
The above is actually a question regarding two scenarios. Both scenarios relate to the following state:
Let's say I have a User entity. When the object is flushed after it has been marked to be persisted, it'll have the normal auto-generated id of MySQL - meaning running numbers normally beginning at 1, 2, 3, etc.
Each user can upload an image - which he will be able to use in the application - which will have a record in the db as well. So I have another entity called Image. Each Image entity also has an auto-generated id - same methodology as the user id.
Now - here are the scenarios:
When a user uploads an image, I want to generate a thumbnail for that image right after it is saved to the db. This should happen for every new or updated image.
Since we're trying to stay smart, I don't want the code to generate the thumbnail to be written like this:
$image = new Image();
...
$entityManager->persist($image);
$entityManager->flush();
callToFunctionThatGeneratesThumbnailOnImage($image);
but rather I want it to occur automatically on the persisting of the object (well, flush of the persisted object), like the prePersist or preUpdate methods.
Since the user uploaded an image, he gets a link to it. It will probably look something like: http://www.mysite.com/showImage?id=[IMAGEID].
This allows anyone to just change the imageid in this link, and see other user's images.
So in order to prevent such a thing, I want to generate a unique token for every image. Since it doesn't really need to be sophisticated, I thought about using the md5 value of the image id, with some salt.
But for that, I need to have the id of that image - which I'll only have after flushing the persisted object - then generate the md5, and then save it again to the db.
Understand that the links for the images are supposed to be publicly accessible so I can't just allow an authenticated user to view them by some kind of permission rules.
You probably know already about Doctrine events. What you could do:
Use the postPersist event handler. That one occurs after the DB insert, so the auto-generated ids are available.
The EventManager class can help you with this:
class MyEventListener
{
    public function postPersist(LifecycleEventArgs $eventArgs)
    {
        // in a listener you have the entity instance and the
        // EntityManager available via the event arguments
        $entity = $eventArgs->getEntity();
        $em = $eventArgs->getEntityManager();

        if ($entity instanceof User) {
            // do some stuff
        }
    }
}
$eventManager = $em->getEventManager();
$eventManager->addEventListener(Events::postPersist, new MyEventListener());
Be sure to check, e.g., whether the User already has an Image; otherwise, if you call flush in the event listener, you might be caught in an endless loop.
Of course you could also make your User class aware of that image creation operation with an inline postPersist event handler, add @HasLifecycleCallbacks to your mapping, and then always flush at the end of the request, e.g. in a shutdown function, but in my opinion this kind of stuff belongs in a separate listener. YMMV.
If you need the entity id before flushing, just after creating the object, another approach is to generate the ids for the entities within your application, e.g. using UUIDs.
Now you can do something like:
class Entity {
    public function __construct()
    {
        $this->id = uuid_create();
    }
}
Now you have an id already set when you just do:
$e = new Entity();
And you only need to call EntityManager::flush at the end of the request.
In the end, I listened to @Arms, who commented on the question.
I started using a service layer for doing such things.
So now, I have a method in the service layer which creates the Image entity. After it calls the persist and flush, it calls the method that generates the thumbnail.
The Service Layer pattern is a good solution for such things.
I have a C++ application that uses ADO to talk to an Oracle database. I'm updating the application to support offline documents, and I've decided to use SQLite for the local side.
I've implemented a wrapper around the ADO classes that will call the appropriate code. However, ADO's way of adding/editing/deleting rows is a bit difficult to implement for SQLite.
For ADO I'd write something like:
CADODatabase db;
CADORecordset rs( &db );

db.Open( "connection string" );
rs.Open( "select * from table1 where table1key=123" );
if (!rs.IsEOF())
{
    int value;
    rs.GetFieldValue( "field", value );
    if (value == 456)
    {
        rs.Edit();
        rs.SetFieldValue( "field", 456 );
        rs.Update();
    }
}
rs.Close();
db.Close();
For this simple example I realize that I could have just issued an UPDATE, but the real code is considerably more complex.
How would I get calls between the Edit() and Update() to actually update the data? My first thought is to have the Edit() construct a separate query and the Update() actually run it, but I'm not sure what fields will be changed nor what keys from the table to limit an update query to.
" but I'm not sure what fields will be changed nor what keys from the table to limit an update query to."
How about just selecting ROWID with the rest of the fields and then building an update based on that ?
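A sketch of how the wrapper could do that: Edit() remembers the rowid (selected along with the other columns), SetFieldValue() records only the fields that actually changed, and Update() builds one UPDATE statement from them. All class and member names here are illustrative, and production code should bind parameters instead of concatenating values:
#include <map>
#include <string>
#include <sstream>

class CSQLiteRecordset
{
public:
    void Edit()
    {
        m_dirty.clear();
        m_editRowid = GetLongField( "rowid" ); // rowid was part of the SELECT
    }

    void SetFieldValue( const std::string& field, int value )
    {
        std::ostringstream os;
        os << value;
        m_dirty[field] = os.str(); // remember only what actually changed
    }

    void Update()
    {
        if (m_dirty.empty())
            return;
        std::ostringstream sql;
        sql << "UPDATE table1 SET ";
        for (std::map<std::string, std::string>::const_iterator it =
                 m_dirty.begin(); it != m_dirty.end(); ++it)
        {
            if (it != m_dirty.begin())
                sql << ", ";
            sql << it->first << "=" << it->second;
        }
        sql << " WHERE rowid=" << m_editRowid;
        Execute( sql.str() ); // hand the statement to the SQLite layer
    }

private:
    long GetLongField( const std::string& name ); // provided by the wrapper
    void Execute( const std::string& sql );       // provided by the wrapper
    std::map<std::string, std::string> m_dirty;
    long m_editRowid;
};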