I found this in the Firebase docs:
Every client sharing a Firebase maintains its own internal version of any active data. When data is updated or saved, it is written to this local version of the Firebase. The Firebase client then synchronizes that data with the Firebase servers and with other clients on a 'best-effort' basis.
As a result, all writes to Firebase will trigger local events immediately, before any data has even been written to the server. This means the app will remain responsive regardless of network latency or Internet connectivity.
Once connectivity is reestablished, we'll receive the appropriate set of events so that the client "catches up" with the current server state, without having to write any custom code.
That's good to know, but I have a more specific question (and I wasn't able to find an answer to it):
I am using the REST API from a C++ program, which executes a curl request. Everything is working so far. For insertions this is not a big deal: in case of an error I can easily store the data in Redis or something similar and push it later. But how does reading work?
To give you a scenario:
I built a scanner that recognizes an ID. After this process, the ID is inserted into Firebase (as explained above). People can also register on a corresponding homepage and enter their ID manually; this is saved in Firebase as well, under the same node obviously.
Firebase is designed to deliver the data from one endpoint to another by accessing the database, which is fine. Now suppose a user registers on the site and is inserted into the database, and suddenly my internet connection goes away.
Is there any way to get the last "stack" or full dataset that was in use before my connection went away? Is there a way to replicate the database and queue jobs that will sync once the connection is re-established?
Disclaimer: I work for Firebase.
The passage you're quoting above specifically refers to the client libraries that we maintain (currently in Objective-C, Java, and JavaScript) - which are pieces of code that we've written that you would run in your app.
In this case, you're specifically not using a client library - you're just hitting our regular REST endpoint, so you won't get any of the benefits. To implement your own client would be a significant undertaking; it's the client code that maintains the internal view of the data, compensates when it's offline, triggers local events, etc.
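If you only need write-side resilience over REST (the Redis buffering you mention), a rough sketch of that buffer-and-replay idea could look like the following. This is illustrative Python rather than C++, and the database URL and node name are placeholders; the same flow maps directly onto libcurl calls from your program.

    import json
    import time
    import queue
    import requests  # any HTTP client works; shown here for brevity

    FIREBASE_URL = "https://example.firebaseio.com/ids.json"  # placeholder node

    pending = queue.Queue()  # stand-in for Redis or an on-disk buffer

    def write_id(scanned_id):
        """Try to write immediately; buffer the payload if the request fails."""
        payload = {"id": scanned_id, "ts": int(time.time())}
        try:
            r = requests.post(FIREBASE_URL, data=json.dumps(payload), timeout=5)
            r.raise_for_status()
        except requests.RequestException:
            pending.put(payload)  # connection lost: keep it for later

    def flush_pending():
        """Replay buffered writes once connectivity is back."""
        while not pending.empty():
            payload = pending.get()
            try:
                r = requests.post(FIREBASE_URL, data=json.dumps(payload), timeout=5)
                r.raise_for_status()
            except requests.RequestException:
                pending.put(payload)  # still offline: stop and retry later
                break

Reading is the harder half: without a client library there is no local replica to fall back on, so the closest you can get over plain REST is to cache the last successful GET response and treat it as a stale-but-available snapshot until the connection returns.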
Related
I am writing a (Django-based) website which is working just fine. It displays a list of sensors and their status. If a new sensor is attached, the user needs to wait for a certain amount of time until it has warmed up and is ready to use. The user also needs to wait when the sensors are updated (which the user can trigger, but which can also happen automatically).
On the server side I have all signals/status updates available. Now I want to create an overlay on the current webpage where the status change is displayed for x seconds and user input is disabled.
I have no clue what technology to use. I could ask the server for updates from the client at frequent intervals, but that doesn't feel like the correct way. Any suggestions on what to search for?
No code here because the answer is probably independent of my website code.
The standard solution is to use Ajax (JavaScript) or similar to fetch state from your backend at regular intervals; that is the approach you mention.
You can also "push" changes from your backend to the frontend using WebSockets, but that is a bit more complex. A popular framework is socket.io; I recommend taking a look at it.
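For the polling approach, the Django side can be a very small JSON endpoint that the page requests on an interval. A rough sketch, where the model and field names are made up for illustration:

    # views.py -- minimal status endpoint for the page to poll via Ajax
    from django.http import JsonResponse
    from .models import Sensor  # hypothetical model

    def sensor_status(request):
        """Return the current state of every sensor as JSON."""
        data = [
            {"id": s.pk, "name": s.name, "status": s.status, "warming_up": s.warming_up}
            for s in Sensor.objects.all()
        ]
        return JsonResponse({"sensors": data})

The page then fetches this URL every few seconds (setInterval plus Ajax/fetch) and shows or hides the overlay depending on the response; if you later switch to WebSockets, only the transport changes, not this server-side representation of the status.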
I'm trying to modify a game engine so that it records events (like key presses) and stores these in a MySQL database on a remote server. The game engine is written in C++, and I currently have a straightforward architecture that uses mysql++ to INSERT records directly into the appropriate databases.
Unfortunately, there's a very large overhead when connecting to the MySQL server, and the game stops for a significant amount of time. Pushing a batch of X seconds' worth of events to the server causes a significant delay in gameplay (60 seconds' worth of events can take 12 seconds to synchronise). There are also apparent security concerns with leaving the MySQL port publicly accessible.
I was considering an alternative option: instead, send commands to the server, which can interact with the database in its own time.
Here the game would only send the necessary information (e.g. the table to update and the data to insert). I'm not sure whether the speed increase would be sufficient, or what system would be appropriate for managing the commands sent from the game.
Someone else suggested Log4j, but obviously I need a C++ solution. Is there an appropriate existing framework for accomplishing what I want?
Most applications gathering user-interface interaction data (in your case keystrokes) put it into a local file of some sort.
Then at an appropriate time (for example at the end of the game, or the beginning of another game), they POST that file, often in compressed form, to a publicly accessible web server. The software on the web server decompresses the data and loads it into the analytics system (the MySQL server in your case) for processing.
So, I suggest the following:
Stop making your MySQL server's port available to people you don't know and trust.
Get your game to gather keystrokes locally somehow.
Get it to upload that data in big bunches when your game is not in realtime mode.
Write a web service to receive and interpret these files.
That way you'll build a more secure analytics system and a more responsive game.
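The web service in the last step can be very small. A rough sketch in Python (Flask), where the route, directory and file naming are placeholders: the game appends events to a local file during play, compresses it, and POSTs it here when it is not in realtime mode.

    # Minimal receiver for batched, compressed event files.
    import gzip
    import os
    import time
    from flask import Flask, request

    app = Flask(__name__)
    INCOMING_DIR = "incoming"  # decompressed batches land here for a separate loader job
    os.makedirs(INCOMING_DIR, exist_ok=True)

    @app.route("/upload", methods=["POST"])
    def upload():
        """Accept a gzip-compressed batch of game events and stash it for later import."""
        raw = gzip.decompress(request.get_data())
        path = os.path.join(INCOMING_DIR, "events-%d.log" % int(time.time()))
        with open(path, "wb") as f:
            f.write(raw)
        return "OK", 200

    if __name__ == "__main__":
        app.run()

A separate job (a cron task or small worker) then parses the stored files and performs the MySQL INSERTs at its own pace, so the expensive database connection never touches the game loop.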
The question is a little general, so to help narrow the focus, I'll share my current setup that is motivating this question. I have a LAMP web service running a RESTful API. We have two client implementations: one browser-based JavaScript client (backed by local storage) and one iOS-based client (backed by Core Data). Obviously these two clients store data very differently, but the data itself needs to be kept in two-way sync with the remote server as often as possible.
Currently, our "sync" process is a little dumb (as in, non-smart). Conceptually, it looks like:
Client periodically asks the server for ALL of the most-recent data.
Server sends down the remote data, which overwrites the current set of local data in the client's store.
Any local creates/updates/deletes after this point are treated as gold, and immediately sent to the server.
The data itself is stored relationally, and updated occasionally by client users. The clients in my specific case don't care too much about the relationships themselves (which is why we can get away with local storage in the browser client for now).
Obviously this isn't true synchronization. I want to move to a system where, conceptually, a "diff" of the most recent changes is sent to the server periodically, and the server sends back a "diff" of the most recent changes it knows about. It seems very difficult to get to this point, but maybe I just don't understand the problem very well.
REST feels like a good start, but REST only talks about the way two data stores talk to each other, not how the data itself is synchronized between them. (This sync process is left up to the implementer of each store.) What is the best way to implement this process? Is there a modern set of programming design patterns that can inform a specific solution to this problem? I'm mostly interested in a general (technology-agnostic) approach if possible, but specific frameworks would be useful to look at too, if they exist.
Multi-master replication is always (and will always be) difficult and bespoke, because how conflicts are handled will be specific to your application.
IMO a more robust approach is to use master-slave replication, with your web service as the master and the clients as slaves. To keep the clients in sync, use an archived Atom feed of the changes (see event sourcing) as per RFC 5005. This is the closest you'll get to a modern standard for this type of replication, and it's RESTful.
When the clients are online, they do not update their replica directly; instead they send commands to the server and have their replica updated via the Atom feed.
When the clients are offline, things get difficult. Each client will need a model of how your web service behaves, and an offline copy of the replica, which should be copied on write from the online replica (the online replica is the one that is updated by the Atom feed). When the client executes commands that modify the data, it should store the command (for later replay against the web service) and the expected result (for verification during replay), and update the offline replica.
When the client goes back online, it should replay the commands, compare the results with the expected results, and report any variances. How these variances are handled will vary based on your application. The offline replica can then be discarded.
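To make the offline part concrete, here is a rough sketch of the command log and replay step described above. The service URL, endpoint and payload shapes are all invented for illustration; resolving whatever ends up in the list of variances is the application-specific part.

    import requests

    SERVICE = "https://api.example.com"  # hypothetical web service

    class OfflineSession:
        def __init__(self, online_replica):
            # Copy-on-write stand-in: start from the last known online state.
            self.replica = dict(online_replica)
            self.log = []  # (command, expected_result) pairs for later replay

        def update_item(self, item_id, fields):
            """Apply a change locally and record what we expect the server to return."""
            self.replica.setdefault(item_id, {}).update(fields)
            expected = dict(self.replica[item_id])
            self.log.append(({"op": "update", "id": item_id, "fields": fields}, expected))

        def replay(self):
            """Back online: replay each command and collect any variances."""
            variances = []
            for command, expected in self.log:
                r = requests.post(SERVICE + "/commands", json=command, timeout=10)
                r.raise_for_status()
                actual = r.json()
                if actual != expected:
                    variances.append((command, expected, actual))
            self.log.clear()  # the offline replica can now be discarded
            return variances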
CouchDB replication works over HTTP and does what you are looking to do. Once databases are synced on either end it will send diffs for adds/updates/deletes.
Couch can do this with other Couch machines or with a mobile framework like TouchDB.
https://github.com/couchbaselabs/TouchDB-iOS
I've done a fair amount of it; you can always set up CouchDB on one machine, set up TouchDB on a mobile device, and then watch the HTTP traffic go back and forth to get an idea of how they do it.
Or read this: http://guide.couchdb.org/draft/replication.html
Maybe something from the link above will help you get an idea of how to do your own diffs for your REST service. (Since both are over HTTP, I thought it could be useful.)
You may want to look into the Dropbox Datastore API:
https://www.dropbox.com/developers/datastore
It sounds like it might be a very good fit for your purposes. They have iOS and JavaScript clients.
Lately, I've been interested in Meteor.
The platform sets up Mongo on the server and minimongo in the browser. The client subscribes to some data and when that data changes, the platform automatically sends down the new data to the client.
It's a clever solution to the syncing problem, and it solves several other problems as well. It will be interesting to see if more platforms do this in the future.
I am working on a project where a website needs to exchange complex and confidential (and thus encrypted) data with other systems. The data includes personal information, technical drawings, public documents etc.
We would prefer to avoid a request-reply pattern towards the dependent systems (and there are a LOT of them), as that would create an awful lot of empty traffic.
On the other hand, I am not sure that a pure publisher/subscriber pattern would be appropriate, mainly because of the complex and bulky nature of the data to be exchanged.
For that reason we have discussed the possibility of a "publish/subscribe/request" solution. The publish/subscribe part would be to publish a message to the dependent systems saying that something is ready for pickup. The actual content is then picked up by an old-school request-reply action.
How does this sound to you?
Regards,
Morten
If the systems are always online, it sounds good.
You might want to look at PubSubHubbub because:
1. Don't solve a problem that has already been solved.
2. It is scalable and represents a good separation of concerns.
It involves 3 parties:
Publishers (who publish stuff)
Subscribers (who are interested in certain publications)
Hubs (who mediate and get rid of 'polling')
It works in the following way:
A subscriber registers their interest in a URL with a hub and provides a callback URL.
A publisher notifies the hub when publishing content.
A hub fetches the 'delta' and pushes it to interested subscribers.
The protocol itself is an extension to Atom, but it seems to fit your requirement, e.g. the new Atom 'content' could be an item containing URLs to newly published documents (which can then be downloaded separately).
New/modified documents => new/modified items in feed containing URLs to fetch them => Hub => Subscribers => Pull documents from Publisher
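For the generic notify-then-pull idea (ignoring the actual PubSubHubbub wire format), a rough sketch looks like this; all URLs and payload fields are placeholders:

    # "Publish a pointer, pull the payload" -- not the real PubSubHubbub protocol.
    import requests

    HUB_URL = "https://hub.example.com/notify"  # hypothetical hub endpoint

    def publisher_announce(document_url):
        """Publisher: tell the hub that a new document exists (metadata only)."""
        requests.post(HUB_URL, json={"new_item": document_url}, timeout=10)

    def subscriber_on_notification(notification):
        """Subscriber: receive the small notification, then pull the bulky content."""
        document_url = notification["new_item"]
        response = requests.get(document_url, timeout=60)  # old-school request-reply pickup
        response.raise_for_status()
        return response.content  # the (encrypted) document body

The bulky, confidential payload never travels through the hub; only the pointer does, which is exactly the publish/subscribe/request split proposed in the question.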
I don't have a great deal of experience with this, but a message queue should help you accomplish what you need. I am using such a system to publish data from a backend to multiple front-end clients.
If the client is offline, the data is not consumed and the server receives no acknowledgement that the data was received. Once the client comes back online, it consumes the data and keeps listening for more messages once the queue is clear. And of course the publisher receives an ack when the data has been consumed. As a bonus, this way we can identify and notify people who have problems at the receiving end. Could this work in your case?
This approach works if the dependent systems are always online - you can't send messages to PCs that are turned off for the night/weekend.
So if the clients are servers that run 24/7, this works. Otherwise, try this approach:
Let clients register themselves
When new documents come in, add an entry "client X needs to see this" in your database
When clients connect, send them all the entries.
When clients successfully downloaded a document, delete the "client X needs to see this" entry. That keeps the work table small.
This has several advantages:
Clients don't need to run 24/7
The flag is only removed after the client has seen the document (so no updates can be lost).
You have one place where you can see which client never pulls its documents. A simple select client, count(*) group by client having count(*) > 10 tells you about problems.
Most clients will fetch their data timely, so the work table will stay small. That means there is little overhead when you have to collect the "what's now" data.
EDIT: The problem with offline subscribers is that they don't know what they're missing, so the sending side needs to keep track of failed push/pull requests. That means you must implement bookkeeping along the lines suggested above to make sure broken connections can be resumed.
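A rough sketch of that bookkeeping, using an in-process SQLite table purely for illustration (your real system would use its own database, client identifiers and document references):

    # Work table: one row per (client, document) that still needs to be delivered.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE pending (client TEXT, document TEXT)")

    def publish(document, registered_clients):
        """A new document arrives: flag it for every registered client."""
        db.executemany("INSERT INTO pending VALUES (?, ?)",
                       [(c, document) for c in registered_clients])

    def on_client_connect(client):
        """A client connects: return everything it still needs to see."""
        rows = db.execute("SELECT document FROM pending WHERE client = ?", (client,))
        return [r[0] for r in rows]

    def on_downloaded(client, document):
        """The client confirmed the download: drop the flag so the table stays small."""
        db.execute("DELETE FROM pending WHERE client = ? AND document = ?",
                   (client, document))

    def laggards(threshold=10):
        """Clients that never pull their documents."""
        rows = db.execute("SELECT client, COUNT(*) FROM pending "
                          "GROUP BY client HAVING COUNT(*) > ?", (threshold,))
        return list(rows)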
I am developing a Windows Phone app where users can update a list. Each update, delete, add, etc. needs to be stored in a database that sits behind a web service. As well as ensuring all the operations made on the phone end up in the cloud, I need to make sure the app is really responsive and the user doesn't feel any lag time whatsoever.
What's the best design to use here? Should each check box change and each text box edit fire a new thread to contact the web service? Should I locally store a list of things that need to be updated, then send them to the server in a batch every so often (and what about the back button)? Am I missing another, even easier implementation?
Thanks in advance,
Data updates to your web service are going to take some time to execute, so in terms of providing the very best responsiveness to the user your best approach would be to fire these off on a background thread.
If it's a concern for your app that updates won't take place (until your app resumes) because of a back press, then you can increase the frequency with which you send these updates off.
Storing data locally following each change would be a good idea to make sure nothing is lost, since you don't know whether your app will get interrupted, for example by a phone call.
You are able to intercept the back button, which would allow you to notify the user that pending updates are being processed, or to request confirmation to defer transmission (say, in a location with poor network performance). Perhaps a visual cue in your UI would help indicate pending requests in your storage queue.
You may want to give some thought to the overall frequency of data updates in a typical usage scenario for your application and how intensely this would utilise the network connection. Depending on this you may want to balance frequency of updates with potential power consumption.
This may guide you on whether to fire updates on field-level changes, on a timer while the queue isn't empty, and/or when the user moves to a different row of data, among other possibilities.
General efficiency guidance with mobile network communications is to have larger and less frequent transmissions rather than a "chatty" or frequent transmissions pattern, however this is up to you to decide what is most applicable for your application.
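The pattern being described (queue each change locally, flush the queue from a background worker in batches) looks roughly like the sketch below. It's written in Python to keep it short and the endpoint is a placeholder; on Windows Phone the same shape would use a background thread plus isolated storage.

    # Queue changes locally; a background thread drains them in batches.
    import json
    import queue
    import threading
    import time
    import requests

    SERVICE_URL = "https://example.com/api/updates"  # hypothetical endpoint
    outbox = queue.Queue()

    def record_change(change):
        """Called from the UI: enqueue and return immediately so the UI never blocks."""
        outbox.put(change)

    def sender_loop(batch_size=20, idle_seconds=5):
        """Prefer fewer, larger transmissions over a chatty pattern."""
        while True:
            batch = [outbox.get()]  # block until there is at least one change
            while len(batch) < batch_size and not outbox.empty():
                batch.append(outbox.get())
            try:
                r = requests.post(SERVICE_URL, data=json.dumps(batch), timeout=10)
                r.raise_for_status()
            except requests.RequestException:
                for change in batch:  # failed: requeue the batch and retry later
                    outbox.put(change)
            time.sleep(idle_seconds)

    threading.Thread(target=sender_loop, daemon=True).start()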
You might want to look into something similar to REST or SOAP.
Each update, delete, add would send a request to the web service. After the request is fulfilled, the web service sends a message back to the Phone application.
Since you want to keep this simple on the Phone application, you would send a URL to the web service, and the web service would respond with a simple message you can easily parse.
Something like this:
http://webservice?action=update&id=10345&data=...
With a reply of:
Update 10345 successful
The id number is just an incrementing sequence to identify the request / response pair.
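On the phone side, that exchange amounts to one HTTP request and a trivial parse of the reply. A rough sketch mirroring the example above (the host and parameter names are placeholders):

    import requests

    def send_update(record_id, data):
        """Issue the update request and check that the reply reports success."""
        r = requests.get("http://webservice",
                         params={"action": "update", "id": record_id, "data": data},
                         timeout=10)
        r.raise_for_status()
        # Expected reply: "Update 10345 successful"
        parts = r.text.split()
        return len(parts) == 3 and parts[0] == "Update" and parts[2] == "successful"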
There is the Microsoft Sync Framework, recently released and discussed a few weeks back on DotNetRocks. I must admit I didn't consider it until I read your comment.
I've not yet looked into the Sync Framework's dependencies, and thus whether it can run on the WP7 platform, but it's probably worth checking out.
Here's a link to the framework.
And a link to Carl and Richard's show with Lev Novik, an architect on the project, if you're interested in some background info. It was quite an interesting show.