Application update server - C++

My application (Qt, MSVC2010) requires constant updates both in code (the executable file itself) and in data (files to be used by the customer).
The main issue is that not every user has the right to download the whole set of updates, so I need a way to send each user only the appropriate files.
I decided to do something like this (sketched in code below):
Client: sends its user ID
Server: checks the user ID in the database and sends back the appropriate updates
Client: receives the updates
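To make that concrete, here is a minimal sketch of the server side in Python; the table name, column names and wire format are all just made up for illustration:

    import socketserver, sqlite3

    class UpdateHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Client: sends its user ID as a single line
            user_id = self.rfile.readline().strip().decode()
            # Server: look up which update files this user is entitled to
            db = sqlite3.connect("updates.db")  # hypothetical permissions database
            rows = db.execute(
                "SELECT file_path FROM user_updates WHERE user_id = ?", (user_id,))
            for (path,) in rows:
                with open(path, "rb") as f:
                    data = f.read()
                # simple header: file name and size, then the raw bytes
                self.wfile.write(b"%s %d\n" % (path.encode(), len(data)))
                self.wfile.write(data)
            self.wfile.write(b"DONE\n")  # tell the client we're finished

    if __name__ == "__main__":
        socketserver.TCPServer(("", 9000), UpdateHandler).serve_forever()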
At this stage I'm not focusing on security issues (authentication, encryption); I'd just like to know if there is any ready-made solution I could use or if I have to code this myself. Even a partial solution would be of great help.
I'm not aware of any server-side application that can handle this kind of situation, but I must admit this is really not my field.
Last point: I need to avoid any web-based solution (users logging in to a website, PHP and so on) for a very long list of reasons.
Thank you!

I don't know if it's really an answer; I can just describe how I implemented a very similar design in a simple way some time ago.
1) Client has version information built in (through the .rc file) and user credentials.
2) Client accesses a central database, checking whether there is a URL for its current version and user credentials. Roughly:
select url from updates where user_id = :user and version > :current_version
3) Client fetches the updates as a single zip file from that URL over HTTP/FTP using standard Qt classes. If you need a custom protocol you might want to implement some logic on top of it.
4) Client updates itself based on the received data.
5) Client notifies the server that the update completed, so we know what's installed where.
It's really a very simple skeleton with a lot of limitations, but it solved everything I needed in that project perfectly. So you can deploy an update for a particular user without affecting others.
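If it helps, the whole client-side flow is small enough to sketch. Here it is in Python rather than Qt, with the URL scheme, table names and paths invented purely for illustration:

    import sqlite3, urllib.request, zipfile

    CURRENT_VERSION = 42          # baked into the client at build time
    USER = "some_user"            # the stored user credentials

    # Step 2: ask the central database whether a newer package exists for us
    db = sqlite3.connect("central.db")   # stand-in for the real central database
    row = db.execute(
        "SELECT url, version FROM updates WHERE user_id = ? AND version > ? "
        "ORDER BY version DESC LIMIT 1", (USER, CURRENT_VERSION)).fetchone()

    if row:
        url, version = row
        # Step 3: fetch the update as a single zip file
        urllib.request.urlretrieve(url, "update.zip")
        # Step 4: apply it (here: just unpack over the install directory)
        zipfile.ZipFile("update.zip").extractall("install_dir")
        # Step 5: report back so the server knows what's installed where
        db.execute("UPDATE installed SET version = ? WHERE user_id = ?",
                   (version, USER))
        db.commit()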

Django communicating with another Python application?

Is it possible to have Django running on the server, with one Django application communicating with another Python process (say, one that I developed) to fetch a response from it, or even just to make it perform a particular action?
It can be synchronous or asynchronous; I have some idea of going asynchronous, where a package like hendrix, crossbar.io or even Celery could be used. But I don't understand what this kind of inter-communication is called, or how I should plan the architecture for it.
Going around my head I have the two following situations for which I'm seeking a plan:
1. Say I have Django and an e-mail sender built with Python's smtplib. A user making a request to a view would make Django execute the Python module I developed for sending an email to a particular user (via an SMTP server from Google/Gmail). It could be synchronous or asynchronous.
OR
2. I have Django (some application) and I want it to communicate with some server I maintain, say to make this server execute some code or just fetch a file (if it is an FTP server). Is this an appropriate situation for the term 'microservices'? Or is there another term or approach here?
Your first solution would be called an installable Python module, just like any package you install with pip. You can keep this as a separate module if you need your code to be reusable across multiple, or just future, projects.
Your second solution would be a microservice. This requires setting up your small module as a service that exposes, say, a REST API you can communicate with to make it do whatever you intend it to do.
If your question is "what is the right approach?", then I would tell you it depends on your use case. If this is just some reusable code that you don't want to repeat over and over throughout your project, then make it into a separate module. If instead it is a service that you expect other services to use and rely on, then make it into a microservice. You can use a microframework such as Flask for an easier and faster setup of your service (a minimal sketch follows below). Otherwise, if it's just some code that you will use once and that serves a single function in your application, just write it and keep it there.
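For instance, a minimal Flask sketch of the e-mail case as a microservice; the endpoint name, payload fields and credentials are placeholders:

    from flask import Flask, request, jsonify
    import smtplib
    from email.message import EmailMessage

    app = Flask(__name__)

    @app.route("/send-email", methods=["POST"])
    def send_email():
        payload = request.get_json()
        msg = EmailMessage()
        msg["From"] = "noreply@example.com"                # placeholder sender
        msg["To"] = payload["to"]
        msg["Subject"] = payload["subject"]
        msg.set_content(payload["body"])
        with smtplib.SMTP("smtp.gmail.com", 587) as smtp:
            smtp.starttls()
            smtp.login("user@gmail.com", "app-password")   # placeholder credentials
            smtp.send_message(msg)
        return jsonify({"status": "sent"})

Django (or any other client) would then call it over HTTP, e.g. requests.post("http://email-service:5000/send-email", json={"to": ..., "subject": ..., "body": ...}).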
There are no rules or standards on which approach should be taken. I personally judge things depending on the use-case.
Hope this helps!

How can I make a SQL application work offline?

I'm making a C++/Qt application. It connects to a small online database to look up different information. I need to make sure that the application works offline, so I would like to do the following
On start-up of the application:
- check if an internet connection is available
- if available, connect to the online database and download the database locally (for the next time no internet is available)
- if not available, connect to the locally stored version of the database
My problem is I can't find a simple way to "download" the database. The user will not update the database, so there is no need for syncing when online again, just the ability to download the newest version of the database whenever online. It is an MS SQL server that the application uses.
My only idea for a solution is to have an SQLite db in the application and then write a script that clears the SQLite database and puts everything from the online server into it, but this requires writing a script that goes through the whole database. There must be a better solution. I'm also not sure how this solution would work if the database structure changes. A solution for that could be to send out an application update with a new SQLite db whenever the structure changes.
I tried searching for a solution, but I could not find anything simple. Since I don't need syncing back and forth, I thought there must be a simple solution. Any help pointing me in the right direction is appreciated.
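For what it's worth, the mirror script described above doesn't have to be written per table by hand; here is a rough Python sketch that copies every base table, with the connection strings and file names invented:

    import pyodbc, sqlite3

    src = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;"
                         "DATABASE=mydb;UID=user;PWD=secret")   # placeholder DSN
    dst = sqlite3.connect("local_mirror.db")

    # discover the tables instead of hard-coding them
    tables = [r.TABLE_NAME for r in src.cursor().execute(
        "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
        "WHERE TABLE_TYPE = 'BASE TABLE'")]

    for table in tables:
        cur = src.cursor().execute("SELECT * FROM [%s]" % table)
        cols = [d[0] for d in cur.description]
        dst.execute("DROP TABLE IF EXISTS [%s]" % table)
        # untyped columns are fine here: SQLite column types are dynamic
        dst.execute("CREATE TABLE [%s] (%s)" % (table, ", ".join(cols)))
        marks = ", ".join("?" for _ in cols)
        dst.executemany("INSERT INTO [%s] VALUES (%s)" % (table, marks),
                        (tuple(row) for row in cur))
    dst.commit()

Because the table list and columns are discovered at run time, a structure change on the server only breaks things if the application's own queries depend on the old structure, which matches the idea of shipping an application update in that case.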

Is it better to pull files from remote locations or grant users FTP access to my system?

I need to set up a process to update a database table with user-supplied CSV data (running ColdFusion 8/MySQL 5.0.88).
I'm not sure about the best way to do this.
Should I give users FTP access to my system, generate a directory for every user and pick files up from there, or should I pick files up from external locations, so the user has to set up an FTP folder my system can access? I'm sort of leaning towards the second way and wanted to set this up using cfschedule and cfftp, but I'm not sure this is the best way to go forward. Security-wise, I'm more inclined to have users specify an FTP location from which I pull, rather than handing out and maintaining FTP folders for every user.
Question:
Which approach is better both in terms of security and automation?
Thanks for input!
I wouldn't use either approach. I would give the users a web page to upload their CSV files. The CF page that accepts the files would place them into a specific folder and make sure they have unique filenames. The cffile tag will help you with that.
The scheduled job would start with a cfdirectory tag on the target folder. This creates a query object. Loop through it and do what you have to do with each file.
Remember to check for the correct file extension. Then look at the first line of the file to ensure it matches the expected format.
Once you have finished processing the file, do something with it so that you don't process it again on the next scheduled job.
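That scheduled job's logic is simple enough to sketch; here it is in Python rather than CFML, just to show the shape of it, with the folder names and expected header invented:

    import csv, shutil
    from pathlib import Path

    UPLOAD_DIR = Path("/data/uploads")         # where the upload page drops files
    PROCESSED_DIR = Path("/data/processed")    # moved here so nothing runs twice
    EXPECTED_HEADER = ["id", "name", "email"]  # whatever format you require

    for path in UPLOAD_DIR.iterdir():
        if path.suffix.lower() != ".csv":
            continue                           # wrong extension, skip it
        with path.open(newline="") as f:
            reader = csv.reader(f)
            if next(reader, None) != EXPECTED_HEADER:
                continue                       # first line doesn't match the format
            for row in reader:
                pass                           # insert the row into MySQL here
        shutil.move(str(path), str(PROCESSED_DIR / path.name))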
Setting up a custom FTP server is certainly a possibility, since you are able to create users and give them privileges in an automated way. It is also secure.
But I don't know the best place to start if you don't have any experience with setting up an FTP server.
Try https://www.dropbox.com/
a) Create a Dropbox account and send invites to your users/clients.
b) You can upload files/folders into Dropbox, and your clients/users can access them from their Dropbox account or the Dropbox desktop app.
c) Your users/clients can upload files/folders, and you can access them from your Dropbox website account or desktop app.
Dropbox is a leading product of its kind, and strong on both security and automation.
Other solutions:
Best of these is Google Drive (5 GB free): create a new Gmail account, give your ID and password to your users, and ask them to open Google Drive and import/export files. Or try SkyDrive (25 GB free).
http://www.syncplicity.com/
https://www.cubby.com/
http://www.huddle.com/
http://www.egnyte.com/
http://www.sharefile.com/

Business Process "Observer" application

My client is requesting to be notified any time one of their business processes fails for any reason. I had the idea of writing a separate application that will run as an "observer" and check various parts of the process.
An example would be a daily file that is generated and uploaded to an FTP location. The "observer" might run the following "tests":
Connect to the FTP server
Go to the folder where the file should exist
Find the file matching the naming convention
Verify the creation date of the file
Failure at any step will send an alert email and also log to a report (both, in case the database is down OR email is down).
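In Python terms, those tests might boil down to something like this sketch, where the host, folder and naming convention are all invented:

    from datetime import datetime, timedelta
    from ftplib import FTP

    def check_daily_file():
        # each step raises on failure; the caller turns that into an alert
        ftp = FTP("ftp.example.com")                  # placeholder host
        ftp.login("observer", "secret")               # placeholder credentials
        ftp.cwd("/daily-exports")                     # folder where the file should be
        expected = datetime.now().strftime("report_%Y%m%d.csv")
        if expected not in ftp.nlst():
            raise RuntimeError("missing file: " + expected)
        # MDTM replies '213 YYYYMMDDHHMMSS'; check the file is recent enough
        stamp = ftp.sendcmd("MDTM " + expected)[4:].strip()
        if datetime.strptime(stamp, "%Y%m%d%H%M%S") < datetime.now() - timedelta(days=1):
            raise RuntimeError("stale file: " + expected)

    try:
        check_daily_file()
    except Exception as exc:
        print("ALERT:", exc)   # hook in the alert email and report logging here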
My question is: are there any products out there that do something close to this? I'd rather buy if there is something robust out there. If not, this almost seems like a unit testing platform... Is there anything out there for testing that I could potentially repurpose?
As an FYI, we are a Microsoft/Windows-based shop.
Thx in advance!
You could even use a Continuous Integration framework for this. They normally monitor source code repositories and build and test things, but they could be used for this as well.
For instance, Hudson, Jenkins and CruiseControl.NET are a few good open-source ones that can easily be set up for something like this. Just change the monitoring of a repository to monitoring the filesystem or FTP, and write a small script that checks what you need. Everything else comes for free with the framework, i.e. email, a web interface for monitoring, and running things.
Just an idea.

How to monitor an FTP upload directory in ColdFusion without using event gateways?

Having spent a couple of hours coding an event gateway solution, I discovered that they are not supported by CF Standard Edition. Buggerit! So back to the drawing board.
I can see how I can check the folder's dateLastModified attribute using cfdirectory, and so I can run a scheduled task to see when a new file has been uploaded, but what's the best way of storing/comparing the file list so as to get a list of just the ones added since the last check?
General hints/links appreciated.
Assuming that, for whatever reason, you can't use a gateway, the simplest solution that springs to mind is to move files you've processed to a separate directory. Then your scheduled task can just deal with the files in the FTP directory itself.
"they are not supported by CF standard edition"
Are you still using CF7? Event gateways have been supported by CF Standard Edition since CF8.
As @Henry pointed out, you can use an Event Gateway.
If you decide not to use that approach, I'd suggest a ColdFusion scheduled task. The most foolproof algorithm for that task is storing the results of the last <cfdirectory/> call, either in a persistent scope (application or server) or by writing it out to a database or file (e.g. WDDX). The reason to hold on to all this information, rather than just a timestamp, is to handle situations where newly added or changed files do not take on the correct timestamp for whatever reason (the system clock being off comes to mind).
If you use a database to capture the data, you could use an EXCEPT query in SQL Server or a MINUS query in Oracle to determine what's new. Otherwise you'll need to perform some nested looping in ColdFusion over the old and new queries to generate the list of new files.
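Whatever the storage, the comparison itself is just a set difference. Here is the idea sketched in Python, with the snapshot file and directory invented:

    import json
    from pathlib import Path

    SNAPSHOT = Path("last_listing.json")   # persisted between scheduled runs
    FTP_DIR = Path("/ftp/uploads")         # the watched upload directory

    def current_listing():
        # file name -> modification time, like the cfdirectory query
        return {p.name: p.stat().st_mtime
                for p in FTP_DIR.iterdir() if p.is_file()}

    old = json.loads(SNAPSHOT.read_text()) if SNAPSHOT.exists() else {}
    now = current_listing()

    # new or changed since the last run: compare whole records, not just a cutoff
    added_or_changed = [name for name, mtime in now.items()
                        if old.get(name) != mtime]

    SNAPSHOT.write_text(json.dumps(now))
    print(added_or_changed)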