I have read many articles and blog posts, including Wikipedia, and learned that REST is stateless. But could you please explain, in simple language, how REST handles multiple requests from a client?
Thanks.
I assume that your question is about multiple calls that depend on the sequence of prior calls, not independent ones. In other words, you would like to know about calls with a conversational state.
When a REST system needs to preserve conversational state between calls, it does so by transferring the additional information to the client. Each call from the client then carries the conversational state received in previous calls, which allows the server to stay stateless.
Because of the stateless architecture, each request is handled with no server-side knowledge of previous session data.
To create the illusion of state, the client application stores the session-specific data and attaches it to its HTTP requests when necessary. Take the following example (a small client-side sketch follows the steps below)...
The server requires authentication
After authentication, the key is sent to the server via the HTTP request
Images taken from
http://www.codeproject.com/Articles/149738/Basic-Authentication-on-a-WCF-REST-Service
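To make the flow above concrete, here is a minimal, hypothetical client-side sketch in Python. The endpoint paths, the X-Auth-Token header name and the use of the third-party requests library are illustrative assumptions, not something prescribed by REST: the client authenticates once, keeps the returned key itself, and re-attaches it to every later request, so the server never has to remember the session.

```python
# Hypothetical client; endpoint names and the X-Auth-Token header are
# assumptions made for illustration only.
import requests  # third-party HTTP client, assumed to be installed

BASE = "https://api.example.com"  # placeholder service URL

# 1. The server requires authentication; the client logs in once.
login = requests.post(f"{BASE}/login",
                      json={"user": "alice", "password": "secret"})
token = login.json()["token"]  # conversational state now lives client-side

# 2. The client re-sends that state (the key/token) with every later request,
#    so the server can validate each call without keeping a session.
profile = requests.get(f"{BASE}/me", headers={"X-Auth-Token": token})
print(profile.json())
```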
I'm writing an app using Akka Streams and Akka Http which needs to connect to an authenticated web service (which returns an authentication token) and then needs to regularly query the service and potentially perform other actions with it in response to the query (download files etc.). The authentication token times out after a certain amount of time and so will need to be refreshed.
How should I handle the authentication token? It needs to be passed to different Flows in the graph (everywhere I'm querying the service), and when the authentication token becomes invalid I need to request a new one.
One idea would be to do the authentication request outside the stream then pass in the token when materialising the stream so that each flow gets the token as a parameter during materialisation. Then when the token eventually times out the stream will fail and I tear it down and make a new one. I think this would work, but it seems a little clumsy and I'd like to know if there's a way to work entirely with the stream-based world.
One thought I had was that the authentication token could be zipped with the other data flowing through the stream and passed along to each Flow element that needed it. Then if the token fails at some point, the stream somehow requests a new one with some kind of feedback flow or recovery mechanism. But I don't know if this is possible or how to implement it.
Is there a third approach I haven't thought of, or something I've missed in Akka streams or Akka HTTP?
akka-http is built on akka-streams, so you are already covered on that front. For user session management using akka-http, take a look at akka-http-session. You might also want to read through this excellent post.
You might also take a look at some sample code I recently uploaded, which does not use akka-http-session - available here. Hope some of these materials help.
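The first idea in the question (authenticate outside the stream and refresh when the token expires) can also be expressed as a small token provider that every stage consults instead of receiving the token once at materialisation time. The sketch below is deliberately Akka-agnostic and written in plain Python to show only the pattern; the function names and the 30-second refresh margin are assumptions, and in Akka you would wrap the equivalent logic in a Flow or an actor.

```python
# "Fetch once, refresh on expiry" pattern, independent of Akka; names and
# the early-refresh margin are illustrative assumptions.
import threading
import time

class TokenProvider:
    """Caches an auth token and transparently refreshes it when it expires."""

    def __init__(self, fetch_token):
        self._fetch_token = fetch_token  # callable returning (token, ttl_seconds)
        self._lock = threading.Lock()
        self._token = None
        self._expires_at = 0.0

    def get(self):
        with self._lock:
            # Refresh slightly early so in-flight requests don't race expiry.
            if self._token is None or time.time() > self._expires_at - 30:
                self._token, ttl = self._fetch_token()
                self._expires_at = time.time() + ttl
            return self._token

# Usage: each stage that calls the web service asks the provider for the
# current token, e.g. headers = {"Authorization": f"Bearer {provider.get()}"}.
```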
Actors
Front-end (fat client-side JavaScript application), which has the Facebook access token.
Back-end, which relies 100% on OAuth2 authentication. All requests need to be authenticated via Facebook.
To mutate user data on the back-end, I require the user to be logged in via Facebook. Ideally, with every request, I would know the Facebook user ID (the one that graph.facebook.com/me provides).
Question 1
Is there a way to get whatever graph.facebook.com/me returns signed, so that I neither have to call Facebook to verify it with every request nor store state in my backend?
Situation 2
If the answer to Question 1 is "no", it means I have to invent my own. I am thinking of the following:
The user sends the access token to the backend.
The backend calls the token debug API, signs the result with my key, and sends it back to the client.
Every time the client makes a request, it includes the previously received signed blob.
On every incoming request, the backend verifies the signature; if it matches, the blob wasn't tampered with and I can trust that it comes from the previously verified token.
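Stripped down, the scheme above is a message-authentication problem: the backend signs the verified data with a key only it knows, and later checks that signature. A minimal, hypothetical Python sketch (the payload/signature encoding, the key handling and the absence of an expiry field are simplifications for illustration, not recommendations):

```python
# Hypothetical sketch of steps 2-4 above: the backend signs the token-debug
# result with its own key and later verifies the blob the client sends back.
# A real implementation would also embed an expiry and rotate the key.
import base64
import hashlib
import hmac
import json

SERVER_KEY = b"replace-with-a-long-random-secret"

def sign_blob(debug_result):
    payload = base64.urlsafe_b64encode(
        json.dumps(debug_result, sort_keys=True).encode())
    sig = base64.urlsafe_b64encode(
        hmac.new(SERVER_KEY, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()

def verify_blob(blob):
    payload, _, sig = blob.encode().partition(b".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SERVER_KEY, payload, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # tampered with, or not issued by this backend
    return json.loads(base64.urlsafe_b64decode(payload))
```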
Question 2
If I employ this scheme (sign the answer from Facebook and have the client send it with every request), how can I safely implement it? Are there resources I could read up on that would tell me:
Things to be cautious about with this scheme.
Which signature algorithm to use, how to safely verify the signatures.
How to avoid common types of attacks and stupid mistakes.
Thanks!
It's not really clear what exactly you want to do, but I think you should have a look at the docs at
https://developers.facebook.com/docs/graph-api/securing-requests
Quote:
Graph API calls can be made from clients or from your server on behalf of clients. Calls from a server can be better secured by adding a parameter called appsecret_proof.
Access tokens are portable. It's possible to take an access token generated on a client by Facebook's SDK, send it to a server and then make calls from that server on behalf of the person. An access token can also be stolen by malicious software on a person's computer or a man in the middle attack. Then that access token can be used from an entirely different system that's not the client and not your server, generating spam or stealing data.
You can prevent this by adding the appsecret_proof parameter to every API call from a server and enabling the setting to require proof on all calls. This prevents bad guys from making API calls with your access tokens from their servers. If you're using the official PHP SDK, the appsecret_proof parameter is automatically added.
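For reference, the appsecret_proof described in the quoted docs is a SHA-256 HMAC of the access token, keyed with your app secret and hex-encoded, sent as an extra parameter on server-side Graph API calls. A small Python sketch (the variable names are placeholders):

```python
# appsecret_proof = hex(HMAC-SHA256(key=app_secret, msg=access_token)),
# added as an extra parameter to server-side Graph API calls.
import hashlib
import hmac

def app_secret_proof(access_token, app_secret):
    return hmac.new(app_secret.encode(),
                    access_token.encode(),
                    hashlib.sha256).hexdigest()

# Example (placeholder values):
# params = {"access_token": token,
#           "appsecret_proof": app_secret_proof(token, APP_SECRET)}
```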
I have created a web service API, and its architecture is such that the server requires the client to sign each request with a secret key assigned to it (the signature is different for every request).
The server compares the client's signature with its own computed signature; if they match, the server returns the response.
I am wondering if a client should check the response coming back from the server to see if it's from the same application to which the request was made.
Is any kind of attack possible between HTTP request and HTTP response?
Do we need a security signature for the web service response?
It depends. There are a few types of web service APIs out there; some need strict security, others might not. You could have a few types of APIs:
(1) A completely open API. Say you have a blog where you post about writing RESTful services and clients. You host a complete working REST service based on one of your posts so that people can give it a spin. You don't care who calls your service, the service returns some dummy data, etc. It's just a demo, a toy: no security here, no request signing, nada. It's just plain HTTP calls.
(2) A service with an API key. Say you have a web service and you want to know who calls it. This kind of service needs pre-registration, and each client who wants to call your service needs to register and obtain a key first. Do note that the registration is not about authentication or authorization; you just want to know who's using your API (e.g. what business sector they operate in, how many clients they have, what they are using your API for, etc.) so that you can later do some analysis of your own and make some (marketing, maybe) decisions based on the data you get back.
There is nothing secret about this API key. It's just a UUID of some sort, the most basic way of differentiating between calls. This again involves only plain HTTP calls, with the key as an additional request parameter.
(3) A service with an API key and a secret key. This is similar to number (2), but you absolutely need to make sure that the calls are coming from the client that presents a given API key. You need this because you probably want to bill the client for how much they have used your service, and you want to make sure the calls actually come from that client and not from someone ill-intentioned who wants to run up the client's bill.
So the client uses its key for identification and a signature of the request, computed with the secret key, to actually vouch for its identity. This again can be plain HTTP calls, with the key and signature as additional request parameters.
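A rough sketch of what the signing in case (3) usually looks like; the canonical-string layout, parameter names and the choice of HMAC-SHA256 are illustrative assumptions rather than a fixed standard:

```python
# Illustrative request signing for case (3): the API key identifies the
# caller, an HMAC with the shared secret vouches for its identity.
import hashlib
import hmac
import time

def sign_request(method, path, body, api_key, secret_key):
    timestamp = str(int(time.time()))  # helps defeat simple replay attacks
    canonical = "\n".join([method, path, timestamp, body])
    signature = hmac.new(secret_key.encode(),
                         canonical.encode(),
                         hashlib.sha256).hexdigest()
    return {"api_key": api_key, "timestamp": timestamp, "signature": signature}

# Server side: rebuild the same canonical string from the incoming request,
# recompute the HMAC with the secret stored for that api_key, and compare
# using hmac.compare_digest(); reject on mismatch or a stale timestamp.
```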
(4) "Tamper-proof" web services. For numbers (1), (2) and (3) above I haven't considered any message security issues, because most APIs don't need it: what's exchanged isn't confidential and not all that important to protect. But sometimes, although the data isn't confidential, you need to make sure it wasn't tampered with in transit.
Say you are the owner of a shop that builds some product and you want to advertise it on partner web sites. You expose a service with the product details, and your partners just use this data to display your product details on their sites. Everybody knows what products you are building, so you don't need to hide that, but you are paranoid about your competition trying to ruin you, so you want to prevent them from intercepting the responses and multiplying all your prices by 10 just to scare potential buyers away.
Number (3) above, although it uses signing as a way to prove the identity of the caller, also ensures the request was not tampered with (the server will reject the request if the signature does not match). So if you need to guarantee an unaltered response, you can also sign the response.
For this, the service can provide the client with an API key and two secret keys. One secret key is used by the client to sign its requests, while the second is used by the client to verify the signature of the response (using a single server-side secret shared by all clients isn't all that safe, so the server issues a server secret key specific to each client).
But this has a weak point: you would need to trust your partners to actually validate the response signature before displaying the information on their sites, rather than just blindly displaying it. Paranoid as you are, you want to protect against this too, and for that you need HTTPS.
As @SilverlightFox mentioned, this proves the validity of the response. The data was not tampered with because the channel is encrypted. The client does not need an extra step to verify a response signature, because that verification is already done at a lower (transport) level.
(5) Secure services. And now we reach the last type of service, where the data is actually confidential. HTTPS is a must for these services. HTTPS keeps the data confidential, ensures it isn't tampered with in transit, identifies the server, and can also identify the client if client-side certificates are used.
So, in conclusion, it depends on what type of service you have.
Make the request over HTTPS to ensure the validity of the response.
This will ensure your data is not vulnerable to a MITM attack. Rolling your own untested encryption/hashing methods is a sure way to open up your application to attack, so you should use TLS/SSL, which means you should connect to your web service API over HTTPS. TLS is the proven and secure way to ensure the response is coming from the application the request was made to.
I'm designing a web service that parses a large document (150-200k) and returns some analytical data. The contents of the document are sensitive, and currently not persisted by the backend.
With a stateless REST web service, where all requests are idempotent, this would require every request to include the large document payload, which seems less than ideal.
Would a stateful alternative be a more appropriate design for this scenario, where a session is established after the initial document is POSTed? The client could then make further requests to endpoints which would provide differing analytical results, using the document in memory?
You can think of it as a REST interface tacked onto a document storage service.
The document is stored temporarily. Perhaps it stays for 10 minutes or until released by the owner. The doc storage service returns a token allowing access to the document. But the token expires with the document timeout.
Then you only need REST services to ask questions about the document. Each call needs to include the token but can be repeated indefinitely and still get the same response.
You may want to cache certain information about each document. That's a performance issue.
You might want to consider how to encrypt the token in such a way that it can't be copied off the "wire" and used by a "bad guy(TM)".
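A rough sketch of that shape of API, assuming an in-memory store, a 10-minute timeout and Flask purely for brevity (the route names and response fields are made up for illustration):

```python
# Hypothetical document-store facade: POST the document once, receive a
# short-lived token, then ask analysis questions using that token.
import secrets
import time
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
DOCS = {}            # token -> (document bytes, expiry timestamp)
TTL_SECONDS = 600    # "stays for 10 minutes"

@app.post("/documents")
def upload():
    token = secrets.token_urlsafe(32)
    DOCS[token] = (request.get_data(), time.time() + TTL_SECONDS)
    return jsonify({"token": token, "expires_in": TTL_SECONDS}), 201

def _lookup(token):
    doc, expires = DOCS.get(token, (None, 0))
    if doc is None or time.time() > expires:
        DOCS.pop(token, None)
        abort(410)   # gone: the document timed out or was never stored
    return doc

@app.get("/documents/<token>/word-count")
def word_count(token):
    doc = _lookup(token)
    # Repeating this call with the same token returns the same answer.
    return jsonify({"words": len(doc.split())})
```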
I'm developing a small REST service which should support client session persistence. As you know, because of REST we can't store any client data on the server; data must be stored on the client side, and the client's request must be self-sufficient. So... how can we store client sessions? Searching the internet I've found some ways to realize this. For example: we send the client an encrypted token which contains the client's id (nick, etc.), like token = AES(id, secretKey); then we authorize the user on every request by decrypting the token on the server with the secret key. Can anyone advise anything? Maybe there are other good ways to achieve the same functionality. Which crypto algorithm would be preferable for this? Thanks.
You mentioned:
As you know, because of REST we can't store any client data on the server; data must be stored on the client side, and the client's request must be self-sufficient.
REST doesn't say you can't store client data on the server; it just says you shouldn't store application state there, which you can think of as "what this client is in the middle of trying to do".
If you are primarily trying to just have a concept of authenticated users, then a standard login cookie will work just fine and is not "unRESTful".
It all comes down to your answer to this question: why do you need a "session" concept in the first place?
If you need to ensure that the client passes a cookie representing a set of credentials, consider instead having the client pass them as HTTPS authentication headers with each request instead.
If you need some sticky routing rules to be followed (to make sure that a client's requests get sent to a particular server), consider using this opportunity to get rid of that architectural straitjacket, as it is the quickest way to kill your chances of future scalability. Instead, make your server choice arbitrary.
If you absolutely must route to a specific node, try requiring that the client pass enough identification data that you can use it to hash or shard the client down a particular "swim lane". You could split things up based on their username, for example.
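If you do end up routing by identification data, the decision can be computed from what the client already sends, with no shared state between front-end nodes. A tiny sketch (the node list and the choice of hash are assumptions):

```python
# Illustrative "swim lane" routing: derive the target node from the username
# the client already supplies, so any gateway can compute the same route.
import hashlib

NODES = ["node-a", "node-b", "node-c"]  # placeholder back-end nodes

def pick_node(username):
    digest = hashlib.sha256(username.encode()).digest()
    return NODES[int.from_bytes(digest[:4], "big") % len(NODES)]

# pick_node("alice") always yields the same node while NODES is unchanged;
# a consistent-hashing ring would reduce reshuffling when nodes are added.
```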