Before asking the actual question, I will briefly explain my current situation:
I have a C++ application written using the Qt framework. The application runs on a network, and several instances of it can live on different machines; they all reflect the same data from a common shared database.
In this application I have several UIs and other internal processes that need to be aware of changes to the database (SQL Server 2012). They don't watch the database themselves, so they need to be told when something changes. The way it works now is that whenever one instance of the application executes a stored procedure in the database, it also creates an XML-based event that is broadcast to all other application instances. Once they receive this event, they update the corresponding UI or start the internal processing they need to do.
I'm working to extend this application across systems, which means I want another network somewhere else, running another database that should be identical. Changes to the database in the first network need to be mirrored to the database in the second network, and whenever this happens I want the same events to be fired on the second network so all UIs and internal processing can start in the second system.
My solution so far is to use SQL Server's replication feature. This seems very nice for synchronizing the two databases (and admittedly looks quite smooth as a solution), but the problem I am facing now is that on the other system, where the database is being synchronized, the events will not be fired. This is because the changes to the database do not happen through the application code, so no one on the second network is creating the XML-based event, and therefore none of the application instances on the second network will know the database has changed.
The solution I've been considering for this problem is to add triggers to each table, so that when a table changes I populate another table that one application instance watches by polling it periodically (every second, let's say). Every time a change appears in that table, that instance creates the XML event and broadcasts it on the network.
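To make that concrete, here is a rough sketch of what the polling side could look like in the Qt application. The dbo.ChangeLog table, its columns, and the signal name are assumptions made for this illustration, not part of the existing system:

    // Rough sketch of the polling watcher (Qt). The dbo.ChangeLog table, its
    // columns and the signal name are assumptions made for this example.
    #include <QObject>
    #include <QTimer>
    #include <QSqlDatabase>
    #include <QSqlQuery>
    #include <QVariant>

    class ChangeLogPoller : public QObject
    {
        Q_OBJECT
    public:
        explicit ChangeLogPoller(QSqlDatabase db, QObject *parent = nullptr)
            : QObject(parent), m_db(db)
        {
            connect(&m_timer, &QTimer::timeout, this, &ChangeLogPoller::poll);
            m_timer.start(1000);   // poll every second, as described above
        }

    signals:
        // Connect a slot to this signal that builds and broadcasts the XML event.
        void databaseChanged(QString tableName, qlonglong changeId);

    private slots:
        void poll()
        {
            QSqlQuery q(m_db);
            q.prepare("SELECT Id, TableName FROM dbo.ChangeLog "
                      "WHERE Id > :lastId ORDER BY Id");
            q.bindValue(":lastId", m_lastSeenId);
            if (!q.exec())
                return;                       // real code should log/handle the error
            while (q.next()) {
                m_lastSeenId = q.value(0).toLongLong();
                emit databaseChanged(q.value(1).toString(), m_lastSeenId);
            }
        }

    private:
        QSqlDatabase m_db;
        QTimer m_timer;
        qlonglong m_lastSeenId = 0;
    };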
Questions: Is my solution maintainable over time? Are triggers really the only way to achieve what I want (firing events on the synchronized system's network)? Is there some other way (possibly a SQL Server built-in feature) to get updates from the database in a C++ application when something changes, so I don't have to use triggers and watch the tables they populate?
Related
We're looking into implementing audit logs in our application and we're not sure how to do it correctly.
I know that django-reversion works, and works well, but there's a cost to using it.
The web server will have to make two round trips to the database when saving a record, even if the save is in the same transaction, because (at least in Postgres) the changes are written to the database and committing the transaction makes them visible.
So this will block the web server until the revision is saved to the database if we're not using async I/O which is currently the case. Even if we would use async I/O generating the revision's data takes CPU time which again blocks the web server from handling other requests.
We can use database triggers instead but our DBA claims that offloading this sort of work to the database will use resources that are meant for handling more transactions.
Is using database triggers for this sort of work a bad idea?
We can scale both the web servers using a load balancer and the database using read/write replicas.
Are there any tradeoffs we're missing here?
What would help us decide?
You need to think about the pattern of db usage in your website, which may be unique to you.
However, most web apps read much more often than they write to the db. In fact it's fairly common to see optimisations done to help scale a web app which trade off more complicated 'save' operations in order to get faster reads. An example would be denormalisation, where some data from related records is copied to the parent record on each save so as to avoid repeatedly doing complicated aggregate/join queries.
This is just an example, but unless you know your specific situation is different I'd say don't worry about doing a bit of extra work on save.
One caveat would be to consider excluding some models from the revisioning system. For example if you are using Django db-backed sessions, the session records are saved on every request. You'd want to avoid doing unnecessary work there.
As for doing it via triggers vs Django app... I think the main considerations here are not to do with performance:
The Django app solution is more 'obvious' and 'maintainable'... the app will be in your pip requirements file and in Django's INSTALLED_APPS, so it's obvious to other developers that it's there and working, and it doesn't need someone to remember to run custom SQL on the db server when you move to a new server.
With a db trigger solution you can be certain it will run whenever a record is changed by any means... whereas with the Django app, anyone changing records via a psql console will bypass it. Even in the Django ORM, certain bulk operations bypass the model save method/save signals. Sometimes this is desirable, however.
Another thing I'd point out is that your production webserver will be multiprocess/multithreaded... so although, yes, a lengthy db write will block the webserver, it will only block the current process. Your webserver will have other processes which are able to serve other requests concurrently. So it won't block the whole webserver.
So again, unless you have a pattern of usage where you anticipate a high frequency of concurrent writes to the db, I'd say probably don't worry about it.
We have an internal application. As time went on and new applications that exchange data with each other were requested, the interaction became bound to the database schema, meaning changes in the database require changes everywhere else. As we plan to build even more applications that will depend on the same data, this will quickly become an unmanageable mess.
Now I'm looking to abstract that interaction behind an API. Currently I have trouble choosing the right tool.
The interaction can at times be complex, meaning data is posted to one service and, once the action has been completed, it should notify the sender of that.
Another example would be that some data has no context without data from other services. Let's say there is one service for [Schools] and one for [Students]. If the [School] gets deleted or changed, the [Student] needs to be informed about it immediately and not when he comes to [School].
Advice? Suggestions? SOAP/REST/?
I don't think you need an API. In my opinion you need an architecture which decouples your database from the domain logic and other parts of the application. Examples of such architectures are clean architecture, onion architecture and hexagonal architecture (ports & adapters, by its newer name). They share the same concepts: you have domain logic which does not depend on any framework, external lib, delivery method, data storage solution, etc. This domain logic communicates with the outside world through adapters having well-defined interfaces. If you first design the inside of your domain logic and the interfaces of the adapters, and only after that the outside components, then it is called domain driven design (DDD).
So for example if you want to move from MySQL to MongoDB you already have a DataStorageInterface, and the only thing you need is to write a MongoDBAdapter which implements this interface and, of course, migrate the data...
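As a tiny sketch of that idea (DataStorageInterface and the MongoDB adapter are the names used above; the User type and the method names are made up for illustration):

    // Ports & adapters in miniature: the domain logic sees only the interface
    // (the port); concrete storage technologies are plugged in behind it.
    #include <string>

    struct User { std::string id; std::string name; };

    class DataStorageInterface {              // the port the domain logic depends on
    public:
        virtual ~DataStorageInterface() = default;
        virtual void saveUser(const User &user) = 0;
        virtual User loadUser(const std::string &id) = 0;
    };

    class MySqlAdapter : public DataStorageInterface {
    public:
        void saveUser(const User &user) override { /* SQL INSERT/UPDATE here */ }
        User loadUser(const std::string &id) override { /* SQL SELECT here */ return {}; }
    };

    class MongoDBAdapter : public DataStorageInterface {  // the new adapter to write
    public:
        void saveUser(const User &user) override { /* MongoDB driver call here */ }
        User loadUser(const std::string &id) override { /* MongoDB driver call here */ return {}; }
    };

The domain logic only ever holds a DataStorageInterface reference, so swapping MySQL for MongoDB means constructing a different adapter and migrating the data, nothing more.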
To design the adapters you can use two additional concepts: command query responsibility segregation (CQRS) and event sourcing (ES).

CQRS is for connecting delivery methods like REST, SOAP, web applications, etc. to the domain logic. For example, you can raise a CreateUserCommand from your REST API. After that, the proper listener in the domain logic processes that command and, on success, raises a domain event such as UserCreatedEvent. Your REST API can listen to that event and respond with a success message to the REST client. The UserCreatedEvent can be listened to by one or more storage adapters too, so they can process that event and persist the new user. You don't necessarily have to use only a single database: if a relational database is faster for a specific type of query you can use that, and if a NoSQL database suits the job better you can use that too. You can use as many databases as you want for your queries; the only thing you need is a storage adapter for each of them. For example, if your REST client wants to retrieve the profile of a specific user, it can raise a GetUserProfileByIdQuery, and the domain logic can ask the adapter of a database which can serve the query. The adapter can then send, for example, an SQL query to a MySQL database and return the response.

With ES you add an EventStorage to your system, which stores the raised domain events. It can be very useful if you want to migrate your data from one query database to another: in that case you create a new storage adapter for your new database and replay all of the domain events from the EventStorage in historical order against that adapter, so it can fill the new database with the relevant data. That's all; you don't have to write complicated migration scripts...
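A very small sketch of the command -> domain logic -> event flow described above; CreateUserCommand and UserCreatedEvent are the names from the text, while the bus and handler are invented for the example:

    // Minimal CQRS-style flow: an adapter raises a command, the domain logic
    // handles it and publishes a domain event that any number of listeners
    // (REST API, storage adapters, an event store) can react to.
    #include <functional>
    #include <string>
    #include <vector>

    struct CreateUserCommand { std::string name; };      // raised by the REST/SOAP adapter
    struct UserCreatedEvent  { std::string userId; std::string name; };

    class EventBus {
    public:
        using Listener = std::function<void(const UserCreatedEvent &)>;
        void subscribe(Listener l) { listeners.push_back(std::move(l)); }
        void publish(const UserCreatedEvent &e) { for (auto &l : listeners) l(e); }
    private:
        std::vector<Listener> listeners;
    };

    class UserCommandHandler {                            // lives inside the domain logic
    public:
        explicit UserCommandHandler(EventBus &bus) : bus(bus) {}
        void handle(const CreateUserCommand &cmd)
        {
            UserCreatedEvent event{"user-42", cmd.name};  // id generation omitted
            bus.publish(event);   // storage adapters persist it, the event store appends it
        }
    private:
        EventBus &bus;
    };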
In your case I think you should create at least domain events and use event sourcing. That will totally decouple your database from the other parts of your application. Adding a REST or SOAP API can have a similar effect, but building HTTP connections to access your database can slow down your application.
I'm working on a very simple small application which will be using Qt/SQLite. Sometimes the application will be accessing shared SQLite databases from multiple computers around the office.
Is there a way to detect (through events?) when an update has been made to the database from another computer on the network so the program can know to refresh their information?
Also, can connections to the DB be monitored by other computers (so they know who's editing the DB)?
Ideally, I don't want to create a separate table in the database for 'collaborative' purposes, and it's not worth setting up timers and loops just to monitor activity; I'd just like to know if there's anything built-in I could leverage to make sure everyone connected is always 'on the same page' in an efficient way.
As a last resort, would a Qt signal monitoring the SQLite database files' last modification time be a reliable way to track if there has been an update, or does the Qt SQLite driver tend to touch the database file outside of SQL transactions?
I agree that it'd be very nice if this were easy, but it isn't. SQLite doesn't provide any mechanisms to directly provide such functionality. The sqlite3_update_hook system only monitors one connection that must be within the monitoring process.
What you could do is to create a local distributed notification system. Roughly:
Create an SQL trigger that calls a custom SQLite function.
The custom SQLite function posts an event to some notifier QObject. This needs to be done via an extern "C" {...} stub (see the sketch below).
The notifier QObject broadcasts on the local subnet what has changed and where. This can be picked up by all of your applications running on the network.
If you want to be really, really clever, you can have a custom proxy model on top of the sqlite model that receives the notifications and sends relevant signals.
This whole thing can be very, very simple, if you're after a particular case, not a general solution. It only sounds complicated :) If you want to see what might be involved otherwise, look here.
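A rough, untested sketch of steps 1 and 2 above, for the case where the Qt app talks to SQLite through the QSQLITE driver. The function name, trigger, and table are made up; pulling the raw sqlite3* out of the driver via QSqlDriver::handle() is the usual trick, but verify it against your Qt version:

    // Each application instance registers a custom SQLite function on its own
    // connection and installs a TEMP trigger that calls it, so the instance can
    // broadcast its own changes to the rest of the network.
    #include <QObject>
    #include <QSqlDatabase>
    #include <QSqlDriver>
    #include <QSqlQuery>
    #include <QVariant>
    #include <sqlite3.h>

    class ChangeNotifier : public QObject
    {
        Q_OBJECT
    signals:
        void rowChanged(QString table, qlonglong rowId);  // broadcast this on the subnet
    };

    extern "C" void notifyChangeFunc(sqlite3_context *ctx, int, sqlite3_value **argv)
    {
        auto *notifier = static_cast<ChangeNotifier *>(sqlite3_user_data(ctx));
        const QString table = QString::fromUtf8(
            reinterpret_cast<const char *>(sqlite3_value_text(argv[0])));
        const qlonglong rowId = sqlite3_value_int64(argv[1]);
        // Queue the emission so it happens on the notifier's thread, not inside SQLite.
        QMetaObject::invokeMethod(notifier, "rowChanged", Qt::QueuedConnection,
                                  Q_ARG(QString, table), Q_ARG(qlonglong, rowId));
        sqlite3_result_null(ctx);
    }

    void installNotification(QSqlDatabase db, ChangeNotifier *notifier)
    {
        QVariant v = db.driver()->handle();
        if (qstrcmp(v.typeName(), "sqlite3*") != 0)
            return;                                       // not the SQLITE driver
        sqlite3 *handle = *static_cast<sqlite3 **>(v.data());
        sqlite3_create_function(handle, "notify_change", 2, SQLITE_UTF8,
                                notifier, notifyChangeFunc, nullptr, nullptr);

        // Step 1: the trigger calls the custom function whenever the table changes.
        QSqlQuery(db).exec("CREATE TEMP TRIGGER orders_changed AFTER INSERT ON orders "
                           "BEGIN SELECT notify_change('orders', NEW.rowid); END;");
    }

Step 3 (the subnet broadcast) can then be a slot connected to rowChanged that writes a datagram with QUdpSocket.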
I'm building out a RESTful API for an iPhone app.
When a user "checks-in" [Inserts new row into a table] I want to then take data from that insert and call a web service, which would send push notifications based upon that insert.
The only way I can think of doing this is either doing it through a trigger, or having the actual insert method, upon successful insert, call the web service. That seems like a bad idea to me.
I was wondering if you had any thoughts on this, or if there is a better approach that I haven't thought of.
Even if it technically could, it's really not a good idea! A trigger should be very lean, and it should definitely not involve a lengthy operation (which a webservice call definitely is)! Rethink your architecture - there should be a better way to do this!
My recommendation would be to separate the task of "noticing" that you need to call the webservice, in your trigger, from the actual execution of that web service call.
Something like:
in your trigger code, insert a "do call the web service later" row into a queue table (just the INSERT, to keep it lean and fast - that's all)
have an asynchronous service (a SQL job, or preferably a Windows NT Service) that makes those calls separately from the actual trigger execution and stores any data retrieved from that web service into the appropriate tables in your database.
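Purely as an illustration of that second step (your service will likely be a C# Windows service or a SQL job; the WebServiceQueue table, its columns, and the endpoint are invented here), the drain-and-call loop might look roughly like this in Qt/C++:

    // Sketch of the separate worker that drains the queue table and calls the
    // web service asynchronously. Table, column and endpoint names are made up.
    #include <QByteArray>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QObject>
    #include <QSqlDatabase>
    #include <QSqlQuery>
    #include <QUrl>
    #include <QVariant>

    void drainQueue(QSqlDatabase db, QNetworkAccessManager *nam)
    {
        QSqlQuery q(db);
        if (!q.exec("SELECT Id, Payload FROM dbo.WebServiceQueue WHERE Processed = 0"))
            return;
        while (q.next()) {
            const qlonglong id = q.value(0).toLongLong();
            const QByteArray payload = q.value(1).toByteArray();

            QNetworkRequest req(QUrl("https://example.com/push"));   // hypothetical endpoint
            req.setHeader(QNetworkRequest::ContentTypeHeader, "application/json");
            QNetworkReply *reply = nam->post(req, payload);

            // Mark the row as processed only after the call succeeds.
            QObject::connect(reply, &QNetworkReply::finished, [reply, db, id]() {
                if (reply->error() == QNetworkReply::NoError) {
                    QSqlQuery mark(db);
                    mark.prepare("UPDATE dbo.WebServiceQueue SET Processed = 1 WHERE Id = :id");
                    mark.bindValue(":id", id);
                    mark.exec();
                }
                reply->deleteLater();
            });
        }
    }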
A trigger is a very finicky thing - it should always be very quick, very lean - do an INSERT or two at most - and by all means avoid cursors in triggers, or other lengthy operations (like a web service call)
Brent Ozar has a great webcast (presented at SQL PASS) on The Top 10 Developer Mistakes That Don't Scale, and triggers are the first thing he puts his focus on! Highly recommended.
It depends on the business needs. Usually I would stay away from using triggers for that, as this is business logic and should be handled by the business layer (BL).
But the answer is Yes to your question - you can do that, just make sure to call the web service asynchronously, so it does not delay the insert while the web service call finishes.
You may also consider using OneWay web service - i.e. fire and forget.
But, as others pointed out - you are always better off not using a trigger.
If properly architected, there should be only one piece of code which can communicate with the database, i.e. some abstraction of the DAL in a single service. Hook in there to do whatever is needed after an insert.
I would go with a trigger if there are many different applications which can write to the database with direct access, not through a DAL service - which again is a disaster waiting to happen.
Another situation in which I may go with a trigger is if I have to deal with an internally hosted third-party application, i.e. if I have access to the database server itself, but not to the code which writes to the database.
What about a stored procedure? Instead of setting it up on a trigger, call a stored procedure, which will both insert the data, and possibly do something else.
As far as I know, triggers are pretty limited in their scope of what they can do. A stored procedure may have more scope (or maybe not).
In the worst case, you can always build your own "API" page; instead of directly inserting the data, request the API page, which can both insert the data and do the push notification.
Trigger->Queue->SP->XP_CMDShell->BAT->cURL->3rd party web service
I used a trigger to insert a record in a Queue table,
then a Stored procedure using a cursor to pull Queued entries off.
I had no WSDL or access to the 3rd party API developers and an urgent need to complete a prototype, so the stored procedure calls XP_CMDShell, which runs a .bat file with parameters.
The bat file calls cURL which manages the REST/JSON call and response.
It was free, quick and works reliably. Not architecturally pure but got the prototype off the ground.
A good practice is to have that web page make an entry into another table (I will call it message_queue) when the user hits the page.
Then have a windows service / *nix daemon on a server scan the message_queue table and perform the pushes via a web service to the mobile app. You can leverage the power of transaction processing in SQL to manage the queue processing.
The nice thing about this approach is you can start with everything on one standalone server, and even separate the website, database, and service/daemon onto different physical servers or server clusters as you scale up.
Microsoft Sync Framework with SQL 2005 - is it possible? The documentation seems to hint that the out-of-the-box (OOTB) providers use SQL 2008 functionality.
I'm looking for some quick wins in relation to a sync project.
The client app will be offline for a number of days.
There will be a central server that MUST be SQL Server 2005.
I can use .net 3.5.
Basically the client app could go offline for a week. When it comes back online it needs to sync its data, but the good thing is that the data only needs to be pushed to the server. The stuff that syncs back to the client will just be lookup data which the client never changes, so this means I don't care about sync collisions.
To simplify the scenario for you: this smart client goes offline and the user records survey data about some observations. They enter the data into the system. When the laptop is reconnected to the network, it syncs all that data back to the server. There will be other clients doing the same thing too, but no one ever touches anyone else's data. Then there are some reports on the server for viewing the data that has been pushed to the server. This also needs to use ClickOnce.
My biggest concern is that there is an interim release while a client is offline. This release might require a new field in the database, and a new field to fill in on the survey.
Obviously that new field will be nullable because we can't update old data; that's fine to set as an assumption. But when the client connects and its local data schema and the server schema don't match, will the Sync Framework be able to handle this? After the data is pushed to the server it is discarded locally.
Hope my problem makes sense.
I've been using the Microsoft Sync Framework for an application that has to deal with collisions, and it's miserable. The *.sdf file is locked for certain schema changes (like dropping a column, etc.).
Your scenario sounds like the Sync Framework would work for you, out of the box... just don't enforce any referential integrity, since the Sync Framework will cause insert issues in these instances.
As far as updating the schema goes, if you follow the Sync Framework forums, you are instructed to create a temporary table the way the table should look at the end, copy your data over, and then drop the old table. Once done, rename the new table to what it should be and re-hook up the pieces in SQL Server CE that allow you to handle the sync classes.
IMHO, I'm going to be working on removing the Sync functionality from my application, because it is more of a hindrance than an aid.