send data from server with java ee 6 to client - web-services

Problem
We have a client-server application; the server side is GlassFish 3.1.2. This app has many users, as well as many modules (e.g. View Transactions, View Banks etc.). There are some long-running processes, invoked by the client, which run on the server. Currently we have not found a nice solution to show the user what is happening on the server side. We want the users to get updated messages from the server at a given frequency. What would you suggest to use?
What we have done/tried
We (independently) used an approach with a Singleton bean and a Map of client IDs, similar to this, and it works, of course. But then every method doSomething(Object... vars) on the server side must be converted to doSomething(Object... vars, String clientID), or whatever type the ID is. The client polls the server for data, say once per second. I would like to avoid adding facades between server and client.
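A rough sketch of that approach (the bean and method names here are illustrative, not our actual code):

    // Sketch of the current approach: a singleton bean keeps per-client status
    // messages, long-running methods take an extra clientID parameter, and the
    // client polls getStatus(clientID) about once per second.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.ejb.Singleton;

    @Singleton
    public class ProgressTracker {

        private final Map<String, String> statusByClient =
                new ConcurrentHashMap<String, String>();

        // Called by the long-running server-side code.
        public void update(String clientID, String message) {
            statusByClient.put(clientID, message);
        }

        // Polled by the client, e.g. once per second.
        public String getStatus(String clientID) {
            String status = statusByClient.get(clientID);
            return status != null ? status : "idle";
        }
    }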
I was thinking about JAX-WS or JAX-RS, but I'm not deeply familiar with these technologies and I'm not sure what they can do.
Sockets
I should note that on the server side we have only Stateless beans (there is a reason for that), which is why I did not mention using a Stateful bean (which would be a very good candidate, I think).
Regards, Oleg

WebSocket could be a suitable choice: it allows the server to send unsolicited data to clients without strong coupling. You just have to store a client ID to map client connections to running tasks, so you can push updates to the right connection.
The client ID/socket connection mapping can be maintained in a singleton bean using an in-memory structure, e.g. a hash map, or in a permanent datastore if you need scalability or a more robust solution.
Some useful links to better understand WebSocket technology are this and this.
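As a minimal sketch of the push idea, assuming the standard Java API for WebSocket (JSR 356): note that JSR 356 ships with Java EE 7/GlassFish 4, so on GlassFish 3.1.2 you would have to fall back to its Grizzly-specific WebSocket support; the endpoint below is illustrative only.

    // Sketch: keep a map of client ID -> WebSocket session so a long-running task
    // (e.g. triggered from a stateless bean) can push progress to the right client.
    import java.io.IOException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.websocket.OnClose;
    import javax.websocket.OnOpen;
    import javax.websocket.Session;
    import javax.websocket.server.PathParam;
    import javax.websocket.server.ServerEndpoint;

    @ServerEndpoint("/progress/{clientId}")
    public class ProgressEndpoint {

        // Shared across endpoint instances; one entry per connected client.
        private static final Map<String, Session> SESSIONS =
                new ConcurrentHashMap<String, Session>();

        @OnOpen
        public void onOpen(Session session, @PathParam("clientId") String clientId) {
            SESSIONS.put(clientId, session);
        }

        @OnClose
        public void onClose(Session session, @PathParam("clientId") String clientId) {
            SESSIONS.remove(clientId);
        }

        // Called by the long-running task to push an update to one client.
        public static void pushUpdate(String clientId, String message) {
            Session session = SESSIONS.get(clientId);
            if (session != null && session.isOpen()) {
                try {
                    session.getBasicRemote().sendText(message);
                } catch (IOException e) {
                    SESSIONS.remove(clientId);
                }
            }
        }
    }

The long-running task would then call ProgressEndpoint.pushUpdate(clientId, ...) whenever it has something to report, instead of waiting to be polled.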

Related

How to decide between using messaging (e.g. RabbitMQ) versus a web service for backend component interactions/communication?

In developing backend components, I need to decide how these components will interact and communicate with each other. In particular, I need to decide whether it is better to use (RESTful, micro) web services versus a message broker (e.g. RabbitMQ). Are there certain criteria to help decide between using web services for each component versus messaging?
Eranda covered some of this in his answer, but I think three of the key drivers are:
Are you modeling a Request-Response type interaction?
Can your interaction be asynchronous?
How much knowledge does the sender of the information need to have about the recipients?
It is possible to do Request-Response type interactions over an asynchronous messaging infrastructure, but it adds significantly to the complexity, so generally Request-Response interactions (i.e. where the sender needs some data returned from the recipient) are more easily modeled as RPC/REST interactions.
If your interaction can be asynchronous then it is possible to implement it with REST, but it may scale better if you use a fire-and-forget messaging interaction.
An asynchronous messaging interaction will also be much more appropriate if the provider of the information doesn't care who is consuming the information. An information provider could be publishing information and new consumers of that information could be added to the system later without having to change the provider.
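As a hedged illustration of that fire-and-forget, publisher-doesn't-know-its-consumers style, here is a sketch using a JMS topic (the JNDI names and the bean itself are assumptions, not from the question):

    // Sketch: fire-and-forget publish to a JMS topic, so new consumers can be added
    // later without changing this provider. JNDI names are illustrative assumptions.
    import javax.annotation.Resource;
    import javax.ejb.Stateless;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.jms.Topic;

    @Stateless
    public class EventPublisher {

        @Resource(lookup = "jms/ConnectionFactory")
        private ConnectionFactory connectionFactory;

        @Resource(lookup = "jms/EventTopic")
        private Topic eventTopic;

        public void publish(String payload) throws JMSException {
            Connection connection = connectionFactory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(eventTopic);
                TextMessage message = session.createTextMessage(payload);
                producer.send(message); // fire and forget: no reply is expected
            } finally {
                connection.close();
            }
        }
    }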
Web servers and message brokers have their own use cases. A web server is used to host web services, while a message broker is used to exchange messages between two endpoints. If you need to deploy a web service, then you have to use a web server, where you can process the request and send back a response. Now suppose you need a publisher/subscriber pattern and/or reliable messaging between any two nodes: between two servers, between client and server, or server and client. That's where a message broker comes into the picture: you place it between the two nodes. Using a message broker gives you reliability, but you pay for it with performance. So which components you should use depends on your use case, though there are multiple options available.

How to create a full-duplex communication between API and various clients?

In my website, I'd like to create a public API that would allow clients (unknown people) to interact with my services. A classic REST API would work well in that case.
However, I need to be able to send events to the clients too. These events are not related to client HTTP requests. I saw that "webhooks" are a way to deal with this. If I understood correctly, with webhooks my service would send HTTP POST requests to a URL specified by the client, with the event data inside the request.
I think WebSocket can also be used as a solution for this full-duplex communication need.
What I want to know is which method would be the simplest for clients to implement to talk to my services? Simplicity is the key point here.
The hard thing is that my clients can use various technologies (full websites with HTTP servers, iOS/Android apps without a server, etc.).
What are the implications for clients if I use a REST API + webhooks? WebSockets? etc.?
How to make a choice?
Hope it's clear (but not sure). Thanks :)
I would consider webhooks the simpler solution. And yes, you understood it well: with webhooks, a developer using your API registers a URL to which your backend POSTs event data. It's a common pattern used in APIs.
A great benefit of a webhook design is that a client/server connection does not need to stay open. After all, if events occur infrequently (e.g. only a few times per hour, or per day), or keeping a persistent connection open is a challenge, establishing a connection only when it's needed is rather efficient.
The challenge of using webhooks for you, the API provider, is designing an evented backend system that deals with change-of-state detection and a reliable webhook calling mechanism (i.e. dealing with webhook receiver URLs that are unresponsive or throw errors).
The challenge of using webhooks on the developer's end is that they need to stand up a reliable web server that listens for the event POST data from your server.
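A hedged sketch of the provider side of that pattern: POST the event JSON to a subscriber-registered URL and treat non-2xx responses or timeouts as failures to retry later (the URL handling, payload, and retry policy are assumptions, kept deliberately simple):

    // Sketch of the provider side: POST event JSON to a subscriber's registered URL
    // and report failure (non-2xx or timeout) so the caller can schedule a retry.
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class WebhookCaller {

        public boolean deliver(String subscriberUrl, String eventJson) {
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(subscriberUrl).openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/json");
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);
                conn.setDoOutput(true);
                OutputStream out = conn.getOutputStream();
                out.write(eventJson.getBytes("UTF-8"));
                out.close();
                int status = conn.getResponseCode();
                return status >= 200 && status < 300;
            } catch (Exception e) {
                return false; // unresponsive or erroring receiver: retry later
            }
        }
    }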
Realtime APIs (e.g. based on WebSockets or Bayeux/CometD) are really swell because the live connection means new connections do not have to be established, which is particularly useful with very chatty sessions. Additionally, there are a lot of projects and companies out there that have taken care of the heavy lifting on the server and client with fully-baked libraries. One of those is Fanout.io, which makes pushing messages between client and server possible with just a few lines of code, utilizing XMPP, Bayeux, and WebSockets when possible.
(I am not affiliated with Fanout, but I have used it)
So, to sum it up, webhooks are simple mostly because you are already familiar with the architecture needed to implement them, and the pattern is a well-traveled one. If you are leaning toward a persistent-connection approach, I would look at tools/platforms like Fanout because they take care of the heavy lifting (i.e. subscribe/publish, concurrent connection scale, client/server libraries).

What is the modern programming standard for synchronizing data between a web service and a client?

The question is a little general, so to help narrow the focus, I'll share my current setup that is motivating this question. I have a LAMP web service running a RESTful API. We have two client implementations: one browser-based javascript client (local storage store) and one iOS-based client (core data store). Obviously these two clients store data very differently, but the data itself needs to be kept in two-way sync with the remote server as often as possible.
Currently, our "sync" process is a little dumb (as in, non-smart). Conceptually, it looks like:
Client periodically asks the server for ALL of the most-recent data.
Server sends down the remote data, which overwrites the current set of local data in the client's store.
Any local creates/updates/deletes after this point are treated as gold, and immediately sent to the server.
The data itself is stored relationally, and updated occasionally by client users. The clients in my specific case don't care too much about the relationships themselves (which is why we can get away with local storage in the browser client for now).
Obviously this isn't true synchronization. I want to move to a system where, conceptually, a "diff" of the most recent changes is sent to the server periodically, and the server sends back a "diff" of the most recent changes it knows about. It seems very difficult to get to this point, but maybe I just don't understand the problem very well.
REST feels like a good start, but REST only talks about the way two data stores talk to each other, not how the data itself is synchronized between them. (This sync process is left up to the implementer of each store.) What is the best way to implement this process? Is there a modern set of programming design patterns that can inform a specific solution to this problem? I'm mostly interested in a general (technology-agnostic) approach if possible... but specific frameworks would be useful to look at too, if they exist.
Multi-master replication is always (and will always be) difficult and bespoke, because how conflicts are handled will be specific to your application.
IMO a more robust approach is to use master-slave replication, with your web service as the master and the clients as slaves. To keep the clients in sync, use an archived Atom feed of the changes (see event sourcing), as per RFC 5005. This is the closest you'll get to a modern standard for this type of replication, and it's RESTful.
When the clients are online, they do not update their replica directly, instead they send commands to the server and have their replica updated via the atom feed.
When the clients are offline things get difficult. Your clients will need to have a model of how your web service behaves. It will need to have an offline copy of your replica, which should be copied on write from the online replica (the online replica is the one that is updated by the atom feed). When the client executes commands that modify the data, it should store the command (for later replay against the web service), the expected result (for verification during replay) and update the offline replica.
When the client goes back online, it should replay the commands, compare the result with the expected result and notify the client of any variances. How these variances are handled will vary based on your application. The offline replica can then be discarded.
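An illustrative sketch of such an offline command log (class and method names are assumptions, not from the original answer):

    // Sketch of the offline command log described above: record each mutating
    // command plus its expected result, then replay and verify when back online.
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class OfflineCommandLog {

        public interface RemoteService {
            String execute(String command); // replays a command against the web service
        }

        private static final class PendingCommand {
            final String command;        // serialized command for later replay
            final String expectedResult; // recorded for verification during replay
            PendingCommand(String command, String expectedResult) {
                this.command = command;
                this.expectedResult = expectedResult;
            }
        }

        private final Deque<PendingCommand> pending = new ArrayDeque<PendingCommand>();

        // Called while offline, together with updating the offline replica.
        public void record(String command, String expectedResult) {
            pending.add(new PendingCommand(command, expectedResult));
        }

        // Called when back online; returns true if every result matched expectations.
        public boolean replay(RemoteService service) {
            boolean allMatched = true;
            while (!pending.isEmpty()) {
                PendingCommand cmd = pending.poll();
                String actual = service.execute(cmd.command);
                if (!cmd.expectedResult.equals(actual)) {
                    allMatched = false; // a variance to surface to the user
                }
            }
            return allMatched;
        }
    }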
CouchDB replication works over HTTP and does what you are looking to do. Once databases are synced on either end it will send diffs for adds/updates/deletes.
Couch can do this with other Couch machines or with a mobile framework like TouchDB.
https://github.com/couchbaselabs/TouchDB-iOS
I've done a fair amount of it, but you can always set up CouchDB on one machine, set up TouchDB on a mobile device and then watch the HTTP traffic go back and forth to get an idea of how they do it.
Or read this: http://guide.couchdb.org/draft/replication.html
Maybe something from the link above will help you get an idea of how to do your own diffs for your REST service. (Since they both work over HTTP, I thought it could be useful.)
You may want to look into the Dropbox Datastore API:
https://www.dropbox.com/developers/datastore
It sounds like it might be a very good fit for your purposes. They have iOS and javascript clients.
Lately, I've been interested in Meteor.
The platform sets up Mongo on the server and minimongo in the browser. The client subscribes to some data and when that data changes, the platform automatically sends down the new data to the client.
It's a clever solution to the syncing problem, and it solves several other problems as well. It will be interesting to see if more platforms do this in the future.

How to develop stateful servers in Java EE without Jax-ws

I'm developing a web service in Java EE using Apache Tomcat, and so far I have written some basic server-side methods and a test client. I can successfully invoke methods and get results, but every time I invoke a method the server constructor gets called again, and I also can't modify the instance variables of the server using the set methods. Is there a particular way to make my server stateful without using JAX-WS or the EJB @Stateful annotation?
There is a bit of a misconception here. A stateful EJB maintains a session between one client and the server, so the EJB state still wouldn't be shared between various clients.
You can expose only stateless and singleton EJBs as a JAX-WS web service.
The best option is to use a database for storing all bids and, when the auction is finished, choose the winning one.
If you want to use a file it is fine, as long as you like to play with issues like:
synchronizing access to that file from many clients
handling transactional reads and writes
resolving file corruption problems
a bunch of other problems that might happen if you are sufficiently unlucky
Sounds like a lot of work, which can be done by any sane database engine.
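For illustration, a minimal sketch of the database route using JPA; the Bid entity, its fields, and the query are assumptions, not taken from the question:

    // Sketch: persist every bid and pick the winner with a query once the auction
    // closes. Entity, fields, and persistence setup are illustrative assumptions.
    import java.math.BigDecimal;
    import java.util.List;
    import javax.ejb.Stateless;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;

    @Entity
    class Bid {
        @Id @GeneratedValue
        Long id;
        long auctionId;
        BigDecimal amount;
        // getters/setters omitted for brevity
    }

    @Stateless
    public class BidService {

        @PersistenceContext
        private EntityManager em;

        public void placeBid(Bid bid) {
            em.persist(bid); // the database handles concurrency, transactions, durability
        }

        public Bid findWinningBid(long auctionId) {
            List<Bid> top = em.createQuery(
                    "SELECT b FROM Bid b WHERE b.auctionId = :auctionId ORDER BY b.amount DESC",
                    Bid.class)
                .setParameter("auctionId", auctionId)
                .setMaxResults(1)
                .getResultList();
            return top.isEmpty() ? null : top.get(0);
        }
    }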

online game best-practice

I'm developing a django-based MMO, and I'm wondering what would be the best way for server-client communication. The solutions I found are:
periodical AJAX calls
keeping a connection alive and sending data through it
Later edit:
This would consist of notifications like "you have a message", "user x attacked you", "your transport to x has arrived" and stuff like this. They could grow in number (to something like one per second), but for a typical user they shouldn't exceed one per minute.
Not sure if it's applicable to what you're looking for, but there's a pretty good live example of lightweight server-client communication using node.js for a simple chat service:
http://chat.nodejs.org/
You might want to take a look at crossbar
Crossbar.io is open-source server software that allows developers to create distributed systems, composed of application components which are loosely coupled, communicate in (soft) real time, and can be implemented in different languages.
There's also a third technique involving "hanging" queries:
Client requests an updated page (or whatever)
Server doesn't answer right away
Sometime before the request times out, there's a state update in the server, and the server finally answers the client, which can then update.
If there really is nothing new to tell the client within the update period, then the server responds before the timeout with a "no news" message, and the client starts up another "hanging" request.
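A hedged sketch of that loop using Servlet 3.0 async support (the shared event queue and all names are illustrative; a real implementation would route events per user and avoid blocking a pooled thread):

    // Sketch of a "hanging" request handler using Servlet 3.0 async support.
    import java.io.IOException;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/updates", asyncSupported = true)
    public class LongPollServlet extends HttpServlet {

        // Events produced elsewhere on the server, e.g. "user x attacked you".
        private static final BlockingQueue<String> EVENTS = new LinkedBlockingQueue<String>();

        public static void publish(String event) {
            EVENTS.offer(event);
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            final AsyncContext ctx = req.startAsync();
            ctx.setTimeout(35000); // safety net above our own 30 s answer deadline
            ctx.start(new Runnable() {
                public void run() {
                    try {
                        // Hold the request open until an event arrives or 30 s pass.
                        String event = EVENTS.poll(30, TimeUnit.SECONDS);
                        HttpServletResponse response = (HttpServletResponse) ctx.getResponse();
                        response.setContentType("text/plain");
                        response.getWriter().println(event != null ? event : "no news");
                    } catch (Exception e) {
                        // fall through: the client will simply start another request
                    } finally {
                        ctx.complete();
                    }
                }
            });
        }
    }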
Advantages:
Client doesn't have to do Ajax. You could even make regular HTML pages "interactive" like this.
Probably not quite as much senseless polling traffic.
Disadvantages:
Server needs to keep more active connections open, and service them at least once per timeout period.
Also, depending on how well the server code supports multi-threading (does PHP provide any help there?), it may be more difficult to code than AJAX response handling.