We are designing 6 web services that together serve another client component. The client component requires data from the web service we are implementing.
The problem is that we are not implementing a single web service: there is one web service that the client component hits, and it initiates a series of 5 more web services, each of which gathers data from its own data store and finally hands the data back to the original web service, which then delivers it to the client component.
So if the requested data becomes huge, this becomes a serious problem for our internal communication channel.
What do you suggest? What can be done to avoid overloading the communication channel between the internal web services while still delivering the data to the client component?
Update 1
Using 5 web services, where each one knows nothing about the others except the next one in the chain, is a business requirement. We are actually integrating the "small services" of 5 companies.
We use Java and Axis2.
We've had a similar problem. Apart from trying to avoid it (e.g., for internal communication, going directly to the database instead of through a web service), you can mitigate it by at least not performing the 5 or so tasks in series. Spawn new threads to collect them all in parallel and process the results at the end to reduce latency (except where they might contend for the same resource and bottleneck).
But before I'd do anything, load test it to see if it is even an issue, and get some baseline stats so you can see what improvement each change makes. Also, sometimes you might be better off tweaking network settings, or the actual network itself, rather than trying to optimise the code - but again, test and see.
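If you do parallelise, a minimal sketch of the fan-out might look like the following, assuming a plain ExecutorService (DownstreamService is a hypothetical stand-in for the Axis2 client stubs):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelAggregator {

    // Hypothetical downstream call; in the real system this would be an
    // Axis2 client stub invoking one of the five internal services.
    interface DownstreamService {
        String fetchData();
    }

    public List<String> collectAll(List<DownstreamService> services) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(services.size());
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (DownstreamService s : services) {
                Callable<String> task = s::fetchData; // one call per internal service
                futures.add(pool.submit(task));       // fan out in parallel
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get());                 // block here, gathering results
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

The total latency becomes roughly that of the slowest downstream call rather than the sum of all five.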
Put all the data in a temporary compressed file and give back the FTP URL of the file.
The client fetches the big data chunk, uncompresses it, and reads it. (Perhaps add some authentication mechanism for the FTP server.)
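A minimal sketch of the server side under this approach, assuming the data is already serialized as a string (the FTP base URL is a placeholder):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.GZIPOutputStream;

public class BulkResultWriter {

    // Writes the payload to a gzip-compressed temp file and returns the
    // URL the FTP server would expose for it.
    public String writeCompressed(String payload) throws IOException {
        Path file = Files.createTempFile("bulk-result-", ".gz");
        try (OutputStream out = new GZIPOutputStream(Files.newOutputStream(file))) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        // In practice the file would be moved into the FTP root, and the
        // URL could carry a one-time token for authentication.
        return "ftp://example.com/results/" + file.getFileName();
    }
}
```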
Background:
I have a local application that processes the user input for about 3 seconds and then returns an answer (output) to the user.
(I deliberately won't go into detail about my application, so as not to complicate the question and to keep it purely architectural.)
My Goal:
I want to make my application a service in the cloud and expose an API
(for the upcoming website and for clients that will connect to the service without installing the software locally).
Possible Solutions:
1. Deploy WCF in the cloud and host my application there, so clients can invoke the service and use my application in the cloud (RPC style).
2. Use a Web-API that inserts the request into a queue; a worker role then dequeues requests and posts the results to a DB. The client sends one request to create the queue entry and another request to fetch the result (which the Web-API reads from the DB). A toy sketch of this follows.
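The sketch is written in Java purely for illustration; an in-process queue and map stand in for the Azure Storage Queue and the results database, and all names are made up:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class TicketedProcessor {

    private final BlockingQueue<String[]> queue = new LinkedBlockingQueue<>();
    private final Map<String, String> results = new ConcurrentHashMap<>();

    // Web-API side: enqueue the work and hand back a ticket to poll with.
    public String enqueue(String input) {
        String ticket = UUID.randomUUID().toString();
        queue.add(new String[] { ticket, input });
        return ticket;
    }

    // Web-API side: null means "not done yet", so the client polls again.
    public String getResult(String ticket) {
        return results.get(ticket);
    }

    // Worker-role side: drain the queue and store results by ticket.
    public void startWorker() {
        Thread worker = new Thread(() -> {
            while (true) {
                try {
                    String[] job = queue.take();
                    results.put(job[0], process(job[1]));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    private String process(String input) {
        return "processed:" + input; // stands in for the ~3-second computation
    }
}
```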
The Problems:
If I go with the WCF solution (#1), I can't handle great loads of requests, maybe 10-20 simultaneously.
If I go with the WebAPI-Queue-WorkerRole solution (#2), the client will sometimes need to request the result multiple times, which can be a problem.
If I go with the WebAPI-Queue-WorkerRole solution (#2), the process isn't synchronous: the client will not get the result the moment his request has been processed; he has to come back and ask for it.
Questions:
In the WebAPI-Queue-WorkerRole solution (#2), can I somehow alert the client once his request has been processed, so I can save the client the multiple requests for the result?
Isn't asking multiple times for the result old-fashioned? I remember that 10-15 years ago it was accepted, but now? I know that the VirusTotal API uses this kind of design.
Is there a better solution? One that will handle great loads and be sync or async (returning the result to the client once it is done)?
Thank you.
If you're using Azure, why not simply fire up more servers and use load balancing to handle the load? That way, as your load increases, you have more servers to handle the requests.
Microsoft recently made available the Azure Service Fabric, which gives you a lot of control over spinning up and shutting down these services.
The question is a little general, so to help narrow the focus, I'll share my current setup that is motivating this question. I have a LAMP web service running a RESTful API. We have two client implementations: one browser-based JavaScript client (local-storage store) and one iOS-based client (Core Data store). Obviously these two clients store data very differently, but the data itself needs to be kept in two-way sync with the remote server as often as possible.
Currently, our "sync" process is a little dumb (as in, non-smart). Conceptually, it looks like:
Client periodically asks the server for ALL of the most-recent data.
Server sends down the remote data, which overwrites the current set of local data in the client's store.
Any local creates/updates/deletes after this point are treated as gold, and immediately sent to the server.
The data itself is stored relationally, and updated occasionally by client users. The clients in my specific case don't care too much about the relationships themselves (which is why we can get away with local storage in the browser client for now).
Obviously this isn't true synchronization. I want to move to a system where, conceptually, a "diff" of the most recent changes are sent to the server periodically, and the server sends back a "diff" of the most recent changes it knows about. It seems very difficult to get to this point, but maybe I just don't understand the problem very well.
REST feels like a good start, but REST only talks about the way two data stores talk to each other, not how the data itself is synchronized between them. (This sync process is left up to the implementer of each store.) What is the best way to implement this process? Is there a modern set of programming design patterns that apply to inform a specific solution to this problem? I'm mostly interested in a general (technology agnostic) approach if possible... but specific frameworks would be useful to look at too, if they exist.
Multi-master replication is always (and will always be) difficult and bespoke, because how conflicts are handled will be specific to your application.
IMO, a more robust approach is to use master-slave replication, with your web service as the master and the clients as slaves. To keep the clients in sync, use an archived Atom feed of the changes (see event sourcing) as per RFC 5005. This is the closest you'll get to a modern standard for this type of replication, and it's RESTful.
When the clients are online, they do not update their replica directly, instead they send commands to the server and have their replica updated via the atom feed.
When the clients are offline things get difficult. Your clients will need to have a model of how your web service behaves. It will need to have an offline copy of your replica, which should be copied on write from the online replica (the online replica is the one that is updated by the atom feed). When the client executes commands that modify the data, it should store the command (for later replay against the web service), the expected result (for verification during replay) and update the offline replica.
When the client goes back online, it should replay the commands, compare the result with the expected result and notify the client of any variances. How these variances are handled will vary based on your application. The offline replica can then be discarded.
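To make the offline part concrete, here is a rough sketch of such a command log (all names are illustrative; the command and result types would come from your own domain model):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class OfflineCommandLog {

    public interface ServiceClient { String execute(String command); }
    public interface VarianceHandler { void handle(String cmd, String expected, String actual); }

    private static final class Entry {
        final String command;        // serialized command, kept for replay
        final String expectedResult; // result predicted by the local model
        Entry(String command, String expectedResult) {
            this.command = command;
            this.expectedResult = expectedResult;
        }
    }

    private final Deque<Entry> log = new ArrayDeque<>();

    // Called while offline: store the command plus the result the local
    // model of the service predicted; the offline replica is updated too.
    public void record(String command, String expectedResult) {
        log.addLast(new Entry(command, expectedResult));
    }

    // Called when back online: replay each command against the real
    // service and surface any variance between predicted and actual.
    public void replay(ServiceClient service, VarianceHandler onVariance) {
        while (!log.isEmpty()) {
            Entry e = log.removeFirst();
            String actual = service.execute(e.command);
            if (!actual.equals(e.expectedResult)) {
                onVariance.handle(e.command, e.expectedResult, actual);
            }
        }
    }
}
```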
CouchDB replication works over HTTP and does what you are looking to do. Once databases are synced on either end it will send diffs for adds/updates/deletes.
Couch can do this with other Couch machines or with a mobile framework like TouchDB.
https://github.com/couchbaselabs/TouchDB-iOS
I've done a fair amount of it; you can always set up CouchDB on one machine, set up TouchDB on a mobile device, and then watch the HTTP traffic go back and forth to get an idea of how they do it.
Or read this: http://guide.couchdb.org/draft/replication.html
Maybe something from the links above will help you get an idea of how to do your own diffs for your REST service. (Since they are both over HTTP, I thought it could be useful.)
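For a sense of how small the protocol surface is, a one-shot replication can be kicked off with a single HTTP POST to CouchDB's _replicate endpoint; a sketch (host and database names are placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CouchReplicate {
    public static void main(String[] args) throws Exception {
        // Ask the local CouchDB to replicate "mydb" to another host.
        String body = "{\"source\":\"mydb\",\"target\":\"http://other-host:5984/mydb\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:5984/_replicate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // replication status document
    }
}
```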
You may want to look into the Dropbox Datastore API:
https://www.dropbox.com/developers/datastore
It sounds like it might be a very good fit for your purposes. They have iOS and javascript clients.
Lately, I've been interested in Meteor.
The platform sets up Mongo on the server and minimongo in the browser. The client subscribes to some data and when that data changes, the platform automatically sends down the new data to the client.
It's a clever solution to the syncing problem, and it solves several other problems as well. It will be interesting to see if more platforms do this in the future.
I always read that one reason to choose a RESTful architecture is (among others) better scalability for web applications under high load.
Why is that? One reason I can think of is that because of the defined resources, which are the same for every client, caching is made easier. After the first request, subsequent requests are served from a memcached instance, which also scales well horizontally.
But couldn't you also accomplish this with a traditional approach where actions are encoded in the URL, e.g. booking.php?userid=123&travelid=456&foobar=789?
A part of REST is indeed the URL part (it's the R in REST) but the S is more important for scaling: state.
The server end of REST is stateless, which means that the server doesn't have to store anything across requests. This means that there doesn't have to be (much) communication between servers, making it horizontally scalable.
Of course, there's a small bonus in the R (representational) in that a load balancer can easily route the request to the right server if you have nice URLs, and GET could go to a slave while POSTs go to masters.
I think what Tom said is very accurate; however, another problem with scalability is the barrier to change as you scale. One of the biggest tenets of REST as it was intended is hypermedia: the server owns the paths and passes them to the client at runtime. This allows you to change your code without breaking existing clients. However, you will find most implementations of REST to simply be RPC hiding behind the guise of REST, which is not scalable.
"Scalable" or "web scale" is one of the most abused terms when it comes to the web, the cloud and REST, and mainly used to convince management to get their support for moving their development team on board the REST train.
It is a buzzword that holds no value. If you search the web for "REST scalability" you'll find a lot of people parroting each other without any concrete evidence.
A REST service is exactly as scalable as a service exposed over a SOAP interface. Both are just HTTP interfaces to an application service. How well that service actually scales depends entirely on how it was implemented. It's possible to write a service that cannot scale at all in both REST and SOAP.
Yes, you can do things with SOAP that make it scale worse, like relying on state and sessions, but SOAP out of the box does not do this. Doing so requires you to use a smarter load balancer, which you want anyway if you're really concerned with any form of scaling.
One thing that REST allows that SOAP doesn't, and that some other answers here address, is caching cacheable responses through an HTTP caching proxy or at the client side. This may make a REST service somewhat more lightly loaded than a SOAP service when a lot of operations' responses are cacheable. All this means is that fewer requests end up in your service.
The main reason a REST application is said to be scalable is that it is built on HTTP, and HTTP is stateless. Stateless means nothing is shared between requests, so any request can go to any server in a load-balanced cluster; nothing forces a given user's request to go to a particular server. (Authentication can be handled with tokens rather than server-side sessions.)
Because of this statelessness, REST applications are easy to scale out. But if you want high throughput (requests handled per second) on each server, you should also remove blocking operations from the application. Some tips:
- Make each REST resource a small entity; don't read data that requires joining many tables.
- Read data from nearby databases.
- Use caches (e.g., Redis) in front of databases to save disk I/O (see the sketch after this list).
- Keep data sources as close as possible, because such blocking leaves server resources (CPU) idle, and no other request can use those resources while they sit idle.
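To illustrate the caching tip, here is a read-through cache sketch using the Jedis client (the key scheme, TTL, and database call are placeholders):

```java
import redis.clients.jedis.Jedis;

public class CachedResourceReader {

    private final Jedis jedis = new Jedis("localhost", 6379);
    private static final int TTL_SECONDS = 300;

    public String read(String resourceId) {
        String key = "resource:" + resourceId;
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;                    // cache hit: no disk I/O at all
        }
        String fresh = loadFromDatabase(resourceId);
        jedis.setex(key, TTL_SECONDS, fresh); // keep it warm for later requests
        return fresh;
    }

    private String loadFromDatabase(String resourceId) {
        return "...";                         // placeholder for real data access
    }
}
```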
A reason (perhaps not the reason) is that RESTful services are sessionless. This means you can easily use a load balancer to direct requests to various web servers without having to replicate session state among all of your web servers or making sure all requests from a single session go to the same web server.
I have a web service which performs the submission of a small amount of data. It provides a synchronous request response service for my clients. This is working well. I have a new requirement to also support the submission of a much larger amount of the same data; about 10,000 times more data volume. Naturally the larger data will be an asynchronous service for my clients.
The infrastructure I use for the small amount of data cannot support both types of service; the large volume submissions will kill the responsiveness of my small volume submissions.
What I would like to do is be flexible with my deployment and make life simple for the people developing the client software which submits the data. I have been looking for a standards based way to do this:
- client calls my data submission web service
- server determines the amount of data being submitted
- if the data is too big, the server responds to the client with a different URI; the client is to redo the submission against that URI, i.e., redirect the client to bigger infrastructure (see the sketch after this list)
- client calls the different URI and gets service
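To make step 3 concrete, here is roughly what I picture on the server, sketched as a servlet (the size threshold and bulk URI are placeholders). A 307 Temporary Redirect tells the client to repeat the same POST, with method and body preserved, against the bulk endpoint:

```java
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SubmissionServlet extends HttpServlet {

    private static final long MAX_SMALL_BYTES = 1024 * 1024; // illustrative threshold
    private static final String BULK_URI = "https://bulk.example.com/submit";

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        long length = req.getContentLengthLong();
        if (length > MAX_SMALL_BYTES) {
            resp.setStatus(307);                  // Temporary Redirect
            resp.setHeader("Location", BULK_URI); // the bigger infrastructure
            return;
        }
        // ... normal small-volume processing ...
        resp.setStatus(HttpServletResponse.SC_OK);
    }
}
```

One caveat I'm aware of: many HTTP client libraries will not automatically re-send a POST body on redirect, so client developers may need to handle the 307 explicitly.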
I've done some searching, and the general response is that this isn't something that is done in web services. I don't understand why. This seems like a reasonable requirement that probably also arises in clustered server scenarios.
Does anyone know if there are standards which cover this? If not, is there a better way?
A subtlety in my case is that I want all the traffic to flow differently for the large submission, so I can't simply front my infrastructure with a content-aware web service proxy. I need to push the web service call to a totally different place, much like an HTTP redirect.
Any help is appreciated.
What is the best approach for creating a fast response when a client application asks the web server to "check for updates"?
Skype, for example, takes about 1 second to answer. How can I achieve the same?
I assume you are running one or more web servers and one or more back-end servers (with business logic).
One possible approach that I have seen: keep a change counter in the web server, and when the back-end state changes, have the business logic notify all web servers of the new counter value.
Each web browser regularly polls the web server for the counter value and compares it to the previous value. If old_value != new_value, the browser goes and asks the web server for the new content.
This allows the regular polling to be super-fast (~1 ms) and cheap, and only if something has really changed does the browser ask for the more resource-expensive content generation.
The other option would be to use some asynchronous HTTP magic (CometD), but the approach outlined above is simpler, more understandable, and easier to troubleshoot.
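A sketch of the cheap polling endpoint, assuming a servlet container (how bump() gets called depends on your back-end notification channel):

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ChangeCounterServlet extends HttpServlet {

    private static final AtomicLong counter = new AtomicLong();

    // Called by whatever channel the business logic uses to notify this
    // web server that back-end state has changed.
    public static void bump() {
        counter.incrementAndGet();
    }

    // Serves the counter straight from memory, so each poll is nearly free.
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().print(counter.get());
    }
}
```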
The simple approach is to just have a flat text or XML file on the server containing the details of the most recent version. The client app fetches it via HTTP GET, compares the version, and reacts accordingly. The HTTP server is simply returning a small file, which is what HTTP servers are designed to do. You should be able to handle hundreds of requests per second this way.
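The client side can then be a few lines; a sketch using Java's built-in HTTP client (the URL and version string are placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UpdateChecker {
    public static void main(String[] args) throws Exception {
        String installedVersion = "1.4.2"; // would come from the app itself
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/latest-version.txt"))
                .GET()
                .build();
        String latest = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body()
                .trim();
        if (!latest.equals(installedVersion)) {
            System.out.println("Update available: " + latest);
        } else {
            System.out.println("Up to date.");
        }
    }
}
```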
Use a large, distributed system, sized to the number of your users. Put your web server(s) closer to the clients to avoid long latencies. Use clustering and load-balancing software to improve performance, and reverse proxies to cache data.
But is it really important that a "check for updates" be that fast? You could also check in a background thread. I would rather spend the effort improving the performance of other tasks.