I am working on an application that, among other things, allows users to send emails. It works by writing everything to an SQL server, so you can have multiple instances of the application.
The email sending currently works with an "Outbox" table on the SQL server, to which application instances write directly with SQL statements. However, I have hit an issue: a requirement for attachments on the emails has arisen.
My thinking is that I could send the attached files to a directory on the machine where the SQL server resides (possibly the TEMP directory?) and then store the path to each file (or a UUID, if the file is constant) in the table. The issue is that I have no idea where to start with sending the file, as I am still fairly new to C++.
One term I have come across is sending it with sockets, but I am struggling with where to start and do not know whether it is actually the best option. Could anyone provide some advice on this matter?
Thanks in advance.
If I correctly understand the way it works (applications save the emails to SQL, then another application takes them out and sends them), you have two choices:
Save the attachment as binary in the SQL database and have the mailer application do the rest.
Use sockets to transfer the file to the SQL server and save the path to it just as you said.
I'd say option 1 would be the best choice, if I've understood correctly how it currently works. As for option 2, there are probably other ways to transfer the file, but sockets would be the most easily cross-platform option.
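For option 1, the idea is simply to write the attachment bytes into a varbinary(max) column alongside the outbox row, using a parameterized insert. A minimal sketch in Python with pyodbc, purely to show the shape (the same parameterized insert works from C++ via ODBC/OleDB); the table and column names are made up:

```python
# Minimal sketch of option 1: store the attachment bytes directly in the
# outbox table.  Table/column names (Outbox, Attachment) are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=MailDB;Trusted_Connection=yes"
)

with open("report.pdf", "rb") as f:
    blob = f.read()

cur = conn.cursor()
cur.execute(
    "INSERT INTO Outbox (Recipient, Subject, Body, Attachment) VALUES (?, ?, ?, ?)",
    "user@example.com", "Monthly report", "See attached.", pyodbc.Binary(blob),
)
conn.commit()
```

The mailer application then reads the blob back out of the same row when it builds the message, so no shared filesystem is needed.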
It's not hard to get started with sockets; there are a lot of examples all over the internet (some starting points below, plus a minimal sketch after the links):
winsock
more winsock
sys/socket.h
more sys/socket.h
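If you do go with option 2, the core pattern is the same on every platform: the receiver listens on a TCP port and writes whatever it reads to disk, and the sender connects and streams the file. A minimal sketch in Python just to show the flow; with winsock or sys/socket.h in C++ the calls map onto socket/bind/listen/accept on one side and socket/connect/send on the other:

```python
# Minimal sketch of a file transfer over a TCP socket.
import socket

# Receiver side (runs on the machine hosting the SQL server):
def receive_file(port: int, dest_path: str) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("0.0.0.0", port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn, open(dest_path, "wb") as out:
            while chunk := conn.recv(65536):   # empty bytes => sender closed
                out.write(chunk)

# Sender side (runs in the application instance):
def send_file(host: str, port: int, src_path: str) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect((host, port))
        with open(src_path, "rb") as f:
            sock.sendfile(f)                   # streams the whole file
```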
I'm trying to implement a rather complex architecture for a desktop application (not to be distributed, so it's ok to use technology usually adopted for servers - please don't tell me to use electron or .NET).
It basically must store data coming from a UDP stream (new data arrives at ~90 Hz). The application should also open a websocket server and accept new clients, specifically from a tablet. The tablet user should be able to set a flag, enabling or disabling data storage.
This is a very simple block diagram of the system
I have used Django before, but for more standard use cases (CMS, REST APIs, etc.). After some research, I found some tools I could use to build the system:
1 - Celery, which to my understanding enables running asynchronous tasks (I guess I could use it to store the data coming from the UDP stream, maybe after accumulating a hundred values or so; a minimal task sketch follows this list)
2 - Django channels, which should help me in the websocket communication
3 - Twisted, to receive UDP messages.
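To make item 1 a bit more concrete, here's a minimal sketch of what a Celery task that persists a batch of samples to a Django model could look like. The Sample model, its fields, and the batch format are assumptions for illustration; the Twisted (or Channels) side would call store_samples.delay(batch) once it has accumulated enough values:

```python
# tasks.py -- minimal sketch; the Sample model and its fields are hypothetical.
from celery import shared_task

from myapp.models import Sample  # assumed Django model with timestamp/value fields


@shared_task
def store_samples(batch):
    """Persist a batch of UDP samples, e.g. [(timestamp, value), ...]."""
    Sample.objects.bulk_create(
        [Sample(timestamp=ts, value=v) for ts, v in batch]
    )
```

The UDP receiver would then only have to buffer values and fire store_samples.delay(buffer) every hundred samples or so, keeping the hot path free of database writes.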
What confuses me is how to integrate these components and exchange data between them. It looks like Twisted is a completely separate server, so how can I run a Celery task that takes the data as input and writes it to a Django model?
How should I implement the flag coming from the websockets? A global variable?
any help appreciated!
The question is a little general, so to help narrow the focus, I'll share my current setup that is motivating this question. I have a LAMP web service running a RESTful API. We have two client implementations: one browser-based javascript client (local storage store) and one iOS-based client (core data store). Obviously these two clients store data very differently, but the data itself needs to be kept in two-way sync with the remote server as often as possible.
Currently, our "sync" process is a little dumb (as in, non-smart). Conceptually, it looks like:
Client periodically asks the server for ALL of the most-recent data.
Server sends down the remote data, which overwrites the current set of local data in the client's store.
Any local creates/updates/deletes after this point are treated as gold, and immediately sent to the server.
The data itself is stored relationally, and updated occasionally by client users. The clients in my specific case don't care too much about the relationships themselves (which is why we can get away with local storage in the browser client for now).
Obviously this isn't true synchronization. I want to move to a system where, conceptually, a "diff" of the most recent changes is sent to the server periodically, and the server sends back a "diff" of the most recent changes it knows about. It seems very difficult to get to this point, but maybe I just don't understand the problem very well.
REST feels like a good start, but REST only talks about the way two data stores talk to each other, not how the data itself is synchronized between them. (This sync process is left up to the implementer of each store.) What is the best way to implement this process? Is there a modern set of programming design patterns that could inform a specific solution to this problem? I'm mostly interested in a general (technology-agnostic) approach if possible... but specific frameworks would be useful to look at too, if they exist.
Multi-master replication is always (and will always be) difficult and bespoke, because how conflicts are handled will be specific to your application.
IMO a more robust approach is to use master-slave replication, with your web service as the master and the clients as slaves. To keep the clients in sync, use an archived Atom feed of the changes (see event sourcing) as per RFC 5005. This is the closest you'll get to a modern standard for this type of replication, and it's RESTful.
When the clients are online, they do not update their replica directly, instead they send commands to the server and have their replica updated via the atom feed.
When the clients are offline, things get difficult. Each client will need to have a model of how your web service behaves. It will need an offline copy of your replica, which should be copied on write from the online replica (the online replica is the one that is updated by the Atom feed). When the client executes commands that modify the data, it should store the command (for later replay against the web service), the expected result (for verification during replay), and update the offline replica.
When the client goes back online, it should replay the commands, compare the result with the expected result and notify the client of any variances. How these variances are handled will vary based on your application. The offline replica can then be discarded.
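To make the online part a little more concrete, here's a minimal sketch of a client pulling changes from a change feed and applying them to its local replica. The /changes endpoint, its JSON shape, and apply_change are assumptions for illustration (a real RFC 5005 feed would be Atom and would be walked via its prev-archive links):

```python
# Minimal sketch: pull changes the client hasn't seen yet and apply them
# to the local replica.  Endpoint, payload shape and apply_change() are
# hypothetical -- the real feed would be Atom; JSON is used here for brevity.
import requests

def sync_replica(base_url: str, last_seen_id: int, apply_change) -> int:
    """Fetch every change newer than last_seen_id and return the new high-water mark."""
    resp = requests.get(f"{base_url}/changes", params={"since": last_seen_id})
    resp.raise_for_status()
    for change in resp.json()["changes"]:      # e.g. {"id": 42, "op": "update", ...}
        apply_change(change)                   # update the local store
        last_seen_id = max(last_seen_id, change["id"])
    return last_seen_id
```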
CouchDB replication works over HTTP and does what you are looking to do. Once databases are synced on either end it will send diffs for adds/updates/deletes.
Couch can do this with other Couch machines or with a mobile framework like TouchDB.
https://github.com/couchbaselabs/TouchDB-iOS
I've done a fair amount of it, but you can always set up CouchDB on one machine, set up TouchDB on a mobile device and then watch the HTTP traffic go back and forth to get an idea of how they do it.
Or read this: http://guide.couchdb.org/draft/replication.html
Maybe something from the link above will help you get an idea of how to do your own diffs for your REST service. (Since they are both over HTTP, I thought it could be useful.)
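If you just want to see the replication happen, you can also kick it off yourself with a single POST to CouchDB's _replicate endpoint and then watch the traffic. A minimal sketch (host and database names are placeholders):

```python
# Minimal sketch: trigger CouchDB replication between two databases via its
# HTTP API.  Host and database names are placeholders.
import requests

resp = requests.post(
    "http://localhost:5984/_replicate",
    json={
        "source": "app_data",          # replicate from this database...
        "target": "app_data_mirror",   # ...into this one
        "continuous": True,            # keep pushing changes as they happen
    },
)
resp.raise_for_status()
print(resp.json())
```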
You may want to look into the Dropbox Datastore API:
https://www.dropbox.com/developers/datastore
It sounds like it might be a very good fit for your purposes. They have iOS and javascript clients.
Lately, I've been interested in Meteor.
The platform sets up Mongo on the server and minimongo in the browser. The client subscribes to some data and when that data changes, the platform automatically sends down the new data to the client.
It's a clever solution to the syncing problem, and it solves several other problems as well. It will be interesting to see if more platforms do this in the future.
I have a framework doing a specific task in C++ and a Django-based web app. The idea is to launch this framework, receive some data from it, send it some data or a request, and periodically check its status.
I'm looking for the best way for them to communicate. Both apps run on the same server. I was wondering if a JSON server in C++ is a good idea. Django would send a request to this server, and the server would parse it and delegate a worker thread to complete the task. Almost all the data that needs to be sent is string-like. Other data will be stored in the database, so there is no problem with that.
Is JSON a good idea? Maybe you know of some better mechanism for local communication between C++ and Django?
If your requirement guarantees that the C++ application will always be on the same machine as the Django web application, include the C++ code by converting it into a shared library and wrapping Python around it, just like this: Calling C/C++ from python?
JSON and other serializations make sense if you are going to do remote calls and the code needs to communicate across machines.
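To illustrate the shared-library route: compile the C++ code into a shared library with an extern "C" entry point and call it from the Django process with ctypes. A minimal sketch; the library name libframework.so and the run_task signature are made up:

```python
# Minimal sketch: call a C++ shared library from Python with ctypes.
# "libframework.so" and run_task(const char*) -> const char* are hypothetical.
import ctypes

lib = ctypes.CDLL("./libframework.so")
lib.run_task.argtypes = [ctypes.c_char_p]
lib.run_task.restype = ctypes.c_char_p

def run_task(request: str) -> str:
    """Pass a string request to the C++ code and return its string result."""
    return lib.run_task(request.encode("utf-8")).decode("utf-8")
```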
JSON seems like a fair enough choice for data serialization - it's good at handling strings, and there are existing libraries for encoding/decoding JSON in both Python and C++.
However, I think your bigger problem is likely to be the transport protocol that you use for transferring JSON between your client and server. Here are some options:
You could build an HTTP server into your C++ application (which I think might be what you mean by "JSON server" in your question), which would work fine, though it might be a bit of a pain to implement unless you get hold of a library to handle the hard work for you.
Another option might be to use the 0MQ library to send JSON (or otherwise) messages between your client and server. I think this would probably be a lot easier than implementing a full HTTP server, and 0MQ has some interprocess communication code that would likely be a lot faster than sending things over the network.
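To give a feel for the 0MQ option, here's a minimal sketch of the Django-side client using pyzmq over the ipc:// transport (both processes being on the same server). The socket path and message fields are made up; the C++ side would bind a REP socket on the same endpoint and answer each request:

```python
# Minimal sketch of the Django-side client for a 0MQ request/reply exchange.
# The ipc path and the message fields are hypothetical; the C++ side would
# bind a REP socket on the same endpoint and reply to each request.
import zmq

context = zmq.Context.instance()
sock = context.socket(zmq.REQ)
sock.connect("ipc:///tmp/framework.sock")

sock.send_json({"command": "start_task", "params": {"name": "job-42"}})
reply = sock.recv_json()            # blocks until the C++ server answers
print(reply)                        # e.g. {"status": "accepted"}
```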
A third option would be to run your C++ code as a standalone application and pass the data in to it via stdin or command-line parameters. This is probably the simplest way to do things, though it may not be the most flexible. If you were to go this way, you might be better off just building a Python/C++ binding as suggested by ablm.
Alternatively, you could attempt to build some sort of job queue based on redis or some other database system. The idea being that your Django application puts some JSON describing the job into the job queue, and the C++ application periodically polls the queue, using a separate redis entry to pass the results back to the client. This could have the advantage that you could reasonably easily have several "workers" reading from the job queue with minimal effort.
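A bare-bones sketch of that job-queue idea from the Django side, assuming redis-py; the key names and job payload are made up, and the C++ worker would do the mirror image (BRPOP the job list, do the work, LPUSH the result onto the per-job result key):

```python
# Minimal sketch of a redis-backed job queue (Django side).  Key names and
# the job payload are hypothetical.
import json
import uuid
import redis

r = redis.Redis()

def submit_job(payload: dict, timeout: int = 60) -> dict:
    job_id = str(uuid.uuid4())
    r.lpush("jobs", json.dumps({"id": job_id, **payload}))
    # Block until the worker publishes the result for this particular job.
    item = r.brpop(f"result:{job_id}", timeout=timeout)
    if item is None:
        raise TimeoutError("worker did not answer in time")
    _key, raw = item
    return json.loads(raw)
```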
There are almost definitely other ways to go about it, but those are the ones that immediately spring to mind.
I am writing an application, similar to SETI@home, that allows users to run processing on their home machine and then upload the result to the central server.
However, the final result is maybe a 10K binary file. (The processing to achieve this output takes several hours.)
What is the simplest reliable automatic method to upload this file to the central server? What do I need to do server-side to prevent blocking? Perhaps having the client send mail is simple and reliable? NB the client program is currently written in Python, if that matters.
Email is not a good solution; you will run into potential ISP blocking and other anti-spam mechanisms.
The easiest way is over HTTP via a simple web service. Have a listener at your server that accepts the uploaded files as part of an HTTP POST and then dumps them wherever they need to be post-processed.
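Since the client is already written in Python, the upload itself can be a couple of lines with the requests library; the server just needs to accept a multipart POST and write the file somewhere for post-processing. A minimal client-side sketch (the URL and form-field name are placeholders):

```python
# Minimal sketch of the client-side upload.  URL and form-field name are
# placeholders; the server just needs to accept a multipart/form-data POST.
import requests

def upload_result(path: str) -> None:
    with open(path, "rb") as f:
        resp = requests.post(
            "https://example.com/api/results",
            files={"result": ("result.bin", f, "application/octet-stream")},
        )
    resp.raise_for_status()
```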
I've posted this before but haven't obtained a suitable answer that fits my requirements. I'm looking for a technology to notify a C++ application when a change to a SQL Server table is made. Our middle tier is C++ and we're not looking to move onto .NET infrastructure, which means we can't use SqlDependency or SQL Notification Services. We're also stuck with SQL Server 2005 for the time being, which also eliminates SQL Service Broker External Activation (introduced in SQL Server 2008).
To give a broader understanding of what I'm trying to achieve: our database is being updated with new information; whenever a new piece of information is received, we'd like to push this to the C++ application so that its dashboard reflects up-to-date data for the user.
We know we can do this by having the C++ application poll the database, but I see this as an inefficient architecture and would like to have SQL Server push the information, or a notification, to the C++ application.
You can actually use Query Notifications from C++. Both the OleDB and ODBC clients for the SQLNCLI10 and SQLNCLI providers support Query Notifications. See Working with Query Notifications; in the second half of the page you'll find the SSPROP_QP_NOTIFICATION... properties for OleDB rowsets and the SQL_SOPT_SS_QUERYNOTIFICATION... options for ODBC statements. So subscribing to notifications from a C++ mid-tier is absolutely doable. The second piece of the puzzle is actually getting the notifications, which is nothing more than posting a RECEIVE and waiting. In other words, you can roll your own SqlDependency in pure C++ over OleDB or ODBC. Once you have the mid-tier notified, it's a piece of cake (well, sort of) to update the client displays.
Among all the alternatives for detecting data changes, you won't find anything better than Query Notifications.
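To illustrate the "post a RECEIVE and wait" part: the notification arrives as a Service Broker message on the queue backing the subscription, so the mid-tier just loops on a WAITFOR (RECEIVE ...). A minimal sketch shown with Python/pyodbc purely to keep it short (the C++ mid-tier would issue the same T-SQL through ODBC or OleDB); the connection string and queue name are assumptions:

```python
# Minimal sketch of the "post a RECEIVE and wait" loop.  The driver name and
# queue name are hypothetical; the same T-SQL works from C++ over ODBC/OleDB.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 10.0};SERVER=myserver;"
    "DATABASE=AppDb;Trusted_Connection=yes",
    autocommit=True,
)
cur = conn.cursor()

while True:
    # Blocks for up to 60 seconds waiting for a notification message.
    cur.execute(
        "WAITFOR (RECEIVE TOP(1) message_type_name, "
        "CAST(message_body AS NVARCHAR(MAX)) "
        "FROM dbo.QueryNotificationQueue), TIMEOUT 60000;"
    )
    row = cur.fetchone()
    if row is not None:
        message_type, body = row
        # A notification arrived: re-query the data, refresh the dashboard,
        # and re-subscribe (each subscription fires only once).
        print(message_type, body)
```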
BTW, one thing you should absolutely avoid is notifying the clients from triggers (oh, the horror...).
I would suggest writing a SQL CLR trigger that uses Net Pipes to notify your app.
HTH
There's another suggestion, and it may be simpler, though you may deem it rubbish because of the long-winded way of doing it: why not send an email from an insert/update/delete trigger on the table to a private email address, with a subject line of 'DATA CHANGE NOTIFY'? Your C++ application can then periodically poll that private email address via POP3 to check for the emails... Mercury POP3 mailboxes can be used on the server (it's part of the XAMPP package). Just a thought...
Hope this helps,
Best regards,
Tom.