I am thinking of a REST web service that ensures, for every request sent to it, that:
the request was generated by the user who claims to have sent it;
the request has not been modified by anyone else (URI/method/content/date).
For GET requests, it should be possible to generate a URI with enough information in it to check the signature and set an expiration date. That way a user can delegate temporary read permission on a resource to a collaborator, for a limited time, via a generated URI.
Clients are authenticated with an ID and a content signature based on their password.
There should be no sessions at all, and thus no server state! The server and the client share a secret key (a password).
After thinking about it and talking with some really nice folks, it seems there is no existing REST service that does this as simply as my use case requires (HTTP Digest and OAuth can do it, but they involve server state and are very chatty).
So I designed one, and I'm asking for your comments on how it should be designed (I will release it as open source and hope it can help others).
The service uses a custom "Content-signature" header to carry the credentials. An authenticated request should contain this header:
Content-signature: <METHOD>-<USERID>-<SIGNATURE>
<METHOD> is the signing method used, in our case SRAS.
<USERID> stands for the user ID mentioned earlier.
<SIGNATURE> = SHA2(SHA2(<PASSWORD>):SHA2(<REQUEST_HASH>));
<REQUEST_HASH> = <HTTP_METHOD>\n
<HTTP_URI>\n
<REQUEST_DATE>\n
<BODY_CONTENT>;
A request becomes invalid 10 minutes after it was created.
For example, a typical HTTP request would be:
POST /ressource HTTP/1.1
Host: www.elphia.fr
Date: Sun, 06 Nov 1994 08:49:37 GMT
Content-signature: SRAS-62ABCD651FD52614BC42FD-760FA9826BC654BC42FD
{ test: "yes" }
The server will answer :
401 Unauthorized
OR
200 OK
The variables would be:
<USERID> = 62ABCD651FD52614BC42FD
<REQUEST_HASH> = POST\n
/ressource\n
Sun, 06 Nov 1994 08:49:37 GMT\n
{ test: "yes" }\n
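To make the scheme concrete, the signing steps above could be sketched in Python like this (assuming "SHA2" means SHA-256, and that the inner digests are hex-encoded and joined with ":"; both are my reading of the spec, not something it states):

```python
import hashlib

def sha2_hex(data: str) -> str:
    """SHA-256 hex digest of a UTF-8 string (assumed meaning of 'SHA2')."""
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

def sign_request(password: str, http_method: str, uri: str,
                 request_date: str, body: str) -> str:
    # <REQUEST_HASH> = <HTTP_METHOD>\n<HTTP_URI>\n<REQUEST_DATE>\n<BODY_CONTENT>
    request_hash = "\n".join([http_method, uri, request_date, body])
    # <SIGNATURE> = SHA2(SHA2(<PASSWORD>):SHA2(<REQUEST_HASH>))
    return sha2_hex(sha2_hex(password) + ":" + sha2_hex(request_hash))

def content_signature_header(user_id: str, password: str, http_method: str,
                             uri: str, request_date: str, body: str) -> str:
    """Build the Content-signature header value: <METHOD>-<USERID>-<SIGNATURE>."""
    sig = sign_request(password, http_method, uri, request_date, body)
    return f"SRAS-{user_id}-{sig}"
```

The server would recompute the same signature from its stored password and compare it with the one in the header.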
URI Parameters
Some parameters can be added to the URI (they override the header information):
_sras.content-signature=<METHOD>-<USERID>-<SIGNATURE> : puts the credentials in the URI instead of the HTTP header. This allows a user to share a signed request;
_sras.date=Sun, 06 Nov 1994 08:49:37 GMT (request date*) : the date when the request was created;
_sras.expires=Sun, 06 Nov 1994 08:49:37 GMT (expiry date*) : tells the server the request should not expire before the specified date.
*date format : http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.18
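Under the same assumptions as before (SHA-256, hex digests, and an empty body for GET), generating a shareable signed URI with the _sras.* parameters might look like:

```python
import hashlib
from urllib.parse import urlencode

def _sha2(s: str) -> str:
    return hashlib.sha256(s.encode("utf-8")).hexdigest()

def make_shareable_uri(base_uri: str, user_id: str, password: str,
                       request_date: str, expires: str) -> str:
    """Build a delegated read-only URI carrying its own credentials.
    Signs the GET request the same way a header-based request would be
    signed, with an empty body."""
    request_hash = "\n".join(["GET", base_uri, request_date, ""])
    sig = _sha2(_sha2(password) + ":" + _sha2(request_hash))
    params = {
        "_sras.content-signature": f"SRAS-{user_id}-{sig}",
        "_sras.date": request_date,
        "_sras.expires": expires,
    }
    return base_uri + "?" + urlencode(params)
```

One open design question is whether the query parameters themselves should be part of the signed URI; if they are not, the expiry date must be covered by the signature some other way, or an attacker could simply extend it.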
Thanks for your comments.
There are several issues that you need to consider when designing a signature protocol. Some of these issues might not apply to your particular service:
1- It is customary to add an "X-Namespace-" prefix to non-standard headers, in your case you could name your header something like: "X-SRAS-Content-Signature".
2- The Date header might not provide enough resolution for the nonce value; I would therefore advise using a timestamp with at least 1 millisecond of resolution.
3- If you do not store at least the last nonce, someone could still replay a message within the 10-minute window, which is probably unacceptable for a POST request (it could create multiple instances with the same values in your REST web service). This should not be a problem for the GET, PUT, or DELETE verbs.
However, on a PUT this could be used for a denial-of-service attack, by forcing the same object to be updated many times within the proposed 10-minute window. A similar problem exists for GET and DELETE.
You therefore probably need to store at least the last used nonce associated with each user id and share this state between all your authentication servers in real-time.
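A minimal sketch of such a per-user last-nonce check, as a single-process, in-memory stand-in for the shared store the answer calls for (in production this state would live in something like Redis, replicated across authentication servers):

```python
import threading

class NonceStore:
    """Accepts only strictly increasing nonces (e.g. timestamps) per user id,
    rejecting replays of any message at or below the last accepted nonce."""

    def __init__(self):
        self._last = {}            # user_id -> last accepted nonce
        self._lock = threading.Lock()

    def check_and_update(self, user_id: str, nonce: float) -> bool:
        with self._lock:
            if nonce <= self._last.get(user_id, float("-inf")):
                return False       # replayed or out-of-order request
            self._last[user_id] = nonce
            return True
```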
4- This method also requires that the client and server clocks be synchronized to within 10 minutes of skew. This can be tricky to debug, or impossible to enforce if you have AJAX clients whose clocks you do not control. It also requires that all timestamps be set in UTC.
An alternative is to drop the 10-minute window requirement but verify that timestamps increase monotonically, which again requires storing the last nonce. This is still a problem if the client's clock is set back to a date prior to the last used nonce: access would be denied until the client's clock passes the last nonce, or until the server's nonce state is reset.
A monotonically increasing counter is not an option for clients that cannot store a state, unless the client could request the last used nonce to the server. This would be done once at the beginning of each session and then the counter would be incremented at each request.
5- You also need to pay attention to retransmissions due to network errors. You cannot assume that the server has not received the last message just because the client never received a TCP ACK before the connection dropped. Therefore the nonce needs to be incremented between retransmissions above the TCP level, and the signature recalculated with the new nonce. Yet a message number needs to be added to prevent double execution on the server: a double POST would result in two objects being created.
6- You also need to sign the user ID; otherwise, an attacker might be able to replay the same message against all users whose nonces have not yet reached that of the replayed message.
7- Your method does not guarantee to the client that the server is authentic and has not been DNS-hijacked. Server authentication is usually considered important for secure communications. This service could be provided by signing responses from the server, using the same nonce as that of the request.
I would note that you can accomplish this with OAuth, most notably "2-legged OAuth" where client and server share a secret. See https://www.rfc-editor.org/rfc/rfc5849#page-14. In your case, you want to omit the oauth_token parameter and probably use the HMAC-SHA1 signature method. There's nothing particularly chatty about this; you don't need to go through the OAuth token acquisition flows to do things this way. This has the advantage of being able to use any of several existing open source OAuth libraries.
As far as server-side state, you do need to keep track of what secrets go with which clients, as well as which nonces have been used recently (to prevent replay attacks). You can skip the nonce checking / lifetimes if you run things over HTTPS, but if you're going to do that, then HTTPS + Basic Auth gives you everything you described without having to write new software.
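For illustration, the HMAC-SHA1 signing step from RFC 5849 with an empty token secret (the 2-legged case) is short; building the signature base string from the normalized request is the involved part, and is omitted here:

```python
import base64
import hashlib
import hmac

def hmac_sha1_signature(base_string: str, consumer_secret: str,
                        token_secret: str = "") -> str:
    """RFC 5849 HMAC-SHA1 signature. The key is
    'consumer_secret&token_secret'; in 2-legged OAuth the token secret
    is empty, leaving a trailing '&'. (Real-world use should also
    percent-encode the secrets per the RFC.)"""
    key = f"{consumer_secret}&{token_secret}".encode("utf-8")
    digest = hmac.new(key, base_string.encode("utf-8"), hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")
```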
This page provides public keys to decrypt headers from Google's Identity Aware Proxy. Making a request to the page provides its own set of headers, one of which is Expires (it contains a datetime).
What does the expiration date actually mean? I have noticed it fluctuating occasionally, and have not noticed the public keys changing at the expiry time.
I have read about Securing Your App With Signed Headers, and it goes over how to fetch the keys after every key ID mismatch, but I am looking to make a more efficient cache that can fetch the keys less often based on the expiry time.
Here are all the headers from the public keys page:
Accept-Ranges: bytes
Age: 1358
Alt-Svc: quic=":443"; ma=2592000; v="39,38,37,36,35"
Cache-Control: public, max-age=3000
Content-Encoding: gzip
Content-Length: 519
Content-Type: text/html
Date: Thu, 29 Jun 2017 14:46:55 GMT
Expires: Thu, 29 Jun 2017 15:36:55 GMT
Last-Modified: Thu, 29 Jun 2017 04:46:21 GMT
Server: sffe
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
The Expires header controls how long HTTP caches are supposed to hold onto that page. We didn't bother giving Google's content-serving infrastructure any special instructions for the keyfile, so whatever you're seeing there is the default value.
Is there a reason the "refresh the keyfile on lookup failure" approach isn't a good fit for your application? I'm not sure you'll be able to do any better than that, since:
Unless there's a bug or problem, you should never get a key lookup failure.
Even if you did have some scheduled key fetch, it'd probably still be advisable to refresh the keyfile on lookup failure as a fail-safe.
We don't currently rotate the keys super-frequently, though that could change in the future (which is why we don't publish the rotation interval), so it shouldn't be a significant source of load. Are you observing that refreshing the keys is impacting you?
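The refresh-on-lookup-failure cache described above could be sketched as follows (the keyfile URL and the key-id-to-key JSON shape are assumptions for illustration, not confirmed details of the IAP endpoint):

```python
import json
import urllib.request

class KeyCache:
    """Caches public keys by key id and refetches the keyfile only when a
    lookup fails, as suggested in the answer above."""

    KEYS_URL = "https://example.com/public_key"  # illustrative URL

    def __init__(self, fetch=None):
        self._keys = {}
        # Injectable fetcher makes the cache testable without the network.
        self._fetch = fetch or self._fetch_http

    def _fetch_http(self):
        with urllib.request.urlopen(self.KEYS_URL) as resp:
            return json.load(resp)  # assumed: {key_id: key_material, ...}

    def get_key(self, key_id):
        if key_id not in self._keys:
            self._keys = self._fetch()      # refresh on lookup failure
        return self._keys.get(key_id)       # None if still unknown
```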
--Matthew, Google Cloud IAP engineer
Is X-Amz-Expires a required header/parameter? Official documentation is inconsistent and uses it in some examples, while not in others.
If it is not required, what is the default expiration value of a signed request? Does it equal the maximum possible value for the X-Amz-Expires parameter, which is 604800 (seven days)?
The documentation (see the links above) discusses the X-Amz-Expires parameter only in the context of passing signing parameters in a query string. If the X-Amz-Expires parameter is required, is it required only when passing signing parameters in the query string (as opposed to passing them with the Authorization header)?
Update:
The Introduction to AWS Security Processes paper says, on page 17:
A request must reach AWS within 15 minutes of the
time stamp in the request. Otherwise, AWS denies the request.
Now, what timestamp are we talking about here? My guess is that it is X-Amz-Date. If I am correct, another question crops up:
How do the X-Amz-Date and X-Amz-Expires parameters relate to each other? It sounds to me like the request-expiration algorithm falls back to 15 minutes from the X-Amz-Date timestamp if X-Amz-Expires is not present.
Is X-Amz-Expires a required header/parameter?
X-Amz-Expires is only used with query string authentication, not with the Authorization: header.
There is no default value with query string authentication. It is a required parameter, and the service will reject a request if X-Amz-Algorithm=AWS4-HMAC-SHA256 is present in the query string but X-Amz-Expires=... is not.
<Error>
<Code>AuthorizationQueryParametersError</Code>
...
Now what time stamp are we talking about here?
This refers to X-Amz-Date: when used with the Authorization: header. Because X-Amz-Date: is part of the input to the signing algorithm, a change in the date or time also changes the signature. An otherwise-identical request signed 1 second earlier or later has an entirely different signature. AWS essentially allows your server clock to be wrong by up to 15 minutes without breaking your ability to authenticate requests. It is not a fallback or a default. It is a fixed window.
The X-Amz-Date: of Authorization: header-based requests is compared by AWS to their system time, which is of course synced to UTC, and the request is rejected out of hand if this value is more than 15 minutes skewed from UTC when the request arrives. No other validation related to authentication occurs prior to the time check.
Validation of Query String authentication expiration involves different logic:
X-Amz-Expires must not be a value larger than 604800 or smaller than 0; otherwise the request is immediately denied without further processing, including a message similar to the one above.
X-Amz-Date must not be more than 15 minutes in the future, according to the AWS system clock. The error is Request is not yet valid.
X-Amz-Date must not be more than X-Amz-Expires number of seconds in the past, relative to the AWS system clock, and no 15 minute tolerance applies. The error is Request has expired.
If any of these conditions occur, no further validation is done on the signature, so these messages will not change depending on the validity of the signature. This is checked first.
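The three query-string checks, in the order described, can be sketched like this (the function name and return strings are illustrative; only the logic mirrors the rules above):

```python
from datetime import datetime, timedelta, timezone

def check_query_auth_times(amz_date: str, expires_seconds: int,
                           now: datetime) -> str:
    """Validate X-Amz-Date/X-Amz-Expires for query-string authentication.
    amz_date uses the X-Amz-Date format, e.g. '20170629T144655Z'."""
    # 1. X-Amz-Expires must be within [0, 604800].
    if not (0 <= expires_seconds <= 604800):
        return "AuthorizationQueryParametersError"
    signed_at = datetime.strptime(amz_date, "%Y%m%dT%H%M%SZ").replace(
        tzinfo=timezone.utc)
    # 2. X-Amz-Date must not be more than 15 minutes in the future.
    if signed_at > now + timedelta(minutes=15):
        return "Request is not yet valid"
    # 3. The request must not be older than X-Amz-Expires seconds;
    #    no 15-minute tolerance applies here.
    if now > signed_at + timedelta(seconds=expires_seconds):
        return "Request has expired"
    return "ok"
```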
Also, the leftmost 8 characters of your X-Amz-Date: must match the date portion of your Credential component of the Authorization: header. The date itself has zero tolerance for discrepancy against the credential (so, when signing, don't read your system time twice, else you risk generating an occasional invalid signature around midnight UTC).
Finally, requests do not expire while in the middle of processing. If you send a request using either signing method that is deemed valid when it arrives but would have expired very soon thereafter, it is always allowed to run to completion -- for example, a large S3 download or an EBS snapshot creation request will not start, then fail to continue, because the expiration timer struck while the request had already started on the AWS side. If the action was authorized when requested, then it continues and succeeds as normal.
Let's say we have a web service that creates and updates meeting room bookings. Updates can change various aspects of a booking, such as time and room number.
Let's imagine that user's network connection to the service may not be reliable (e.g. mobile network), and two users A and B try to update the same booking sequentially.
User A sends a POST request to change the meeting time to 2pm, the request reaches the server and server processed the request successfully. However, the response back to User A gets lost due to network connection, and User A thinks the request fails.
Before User A tries again, User B sends her request to change the meeting time to 2:30pm, and it succeeds and responds to User B successfully.
Now User A retries (perhaps automatically) the same request again, and this time both the request and response succeed without a problem. In other words, the meeting time is changed back to 2pm.
In the hypothetical scenario above, User A's duplicated request causes User B's request to be overwritten, resulting in an incorrect state on the server side.
One possible but naive solution is to assign an ID to every request on the client, where this ID does not change if a request is simply retried/resent. Then, on the server side, the server maintains a collection of received request IDs and checks for duplicates.
What are the better techniques or methods for solving this problem?
This is a common problem with concurrent users. One way to solve it is to enforce conditional requests, requiring clients to send the If-Unmodified-Since header with the Last-Modified value of the resource they are attempting to change. That guarantees nobody else changed it between the last time they checked and now. In your case, this would prevent A from overwriting B's changes.
For instance, user A wants to change the meeting time. It sends a GET request for the meeting resource and keeps the value of the Last-Modified response header. Then it sends the POST request with the Last-Modified value in the If-Unmodified-Since header. Following your example, this request actually succeeds, but the response is lost.
If A repeats the request immediately, it will fail with 412 Precondition Failed, since the condition is no longer valid.
If in the meantime B does the same thing and changes the meeting time again, when A tries to repeat the request, without checking for the current Last-Modified value corresponding to B's changes, it also fails with 412 Precondition Failed.
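The server-side precondition check can be sketched with a minimal in-memory model (class and field names are illustrative):

```python
class Booking:
    """Toy stand-in for a stored meeting resource."""
    def __init__(self, time_slot: str, last_modified: str):
        self.time_slot = time_slot
        self.last_modified = last_modified   # HTTP-date string

def handle_update(booking: Booking, if_unmodified_since: str,
                  new_time: str, now_http_date: str) -> int:
    """Apply the update only if the client's precondition still holds."""
    if if_unmodified_since != booking.last_modified:
        return 412   # Precondition Failed: resource changed in the meantime
    booking.time_slot = new_time
    booking.last_modified = now_http_date
    return 200
```

Note that HTTP-date has one-second resolution, so two updates within the same second can defeat this check; ETags with If-Match avoid that weakness.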
If I have a web API service (Order Notification) that allows a third-party client to call in periodically (every 10 minutes) to get new orders it has not yet received (they must call us; we do not push to them), how do I deal with failures?
For example there are 10 new Orders the client has not received since they last called in. The client calls into our Order Notification service. We retrieve the orders we have not sent (10 in this case). We update these 10 Orders as sent and return the response to the client.
However, the client did not receive the response (something happened after it left us, e.g. an HTTP timeout or something else).
So now we have a problem: on our side we have marked the orders as sent, but the client never received them.
Any thoughts on how to solve this?
Just an idea: can you assign the caller some sort of identifier, and have the caller reply back acknowledging the request once it succeeds? The server will never know that something failed on the client side unless the client reports it.
For example, when caller A calls in for the requests it may do something like this:
call -> http://server/requests
server replies back with some xml that contains the result set for this caller along with a unique identifier that it will track to know if that particular call had a response (you can time out this identifier after a reasonable period of time)
when the client gets the request it can call back again
call -> http://server/requestComplete?id=[generatedID]
and the server marks it successful.
Lots of API's require some sort of identification token so it would already lend itself well to this kind of send/ack messaging system.
If you have access to both sides of the system, you could add a "received" confirmation step: once the client picking up the data has received it, it makes a request back to the original host confirming successful receipt.
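The fetch/acknowledge idea can be sketched as follows (names are illustrative; a real service would also time out unacknowledged deliveries and persist this state):

```python
class OrderFeed:
    """Orders stay 'unacknowledged' until the client confirms receipt, so a
    lost response just means they are delivered again on the next poll."""

    def __init__(self):
        self._orders = {}    # order_id -> order data
        self._acked = set()  # order_ids the client has confirmed

    def add_order(self, order_id, data):
        self._orders[order_id] = data

    def fetch_unacked(self):
        """GET /requests: return every order not yet acknowledged."""
        return {oid: d for oid, d in self._orders.items()
                if oid not in self._acked}

    def acknowledge(self, order_ids):
        """GET /requestComplete?id=...: mark deliveries as successful."""
        self._acked.update(order_ids)
```

The key property is that marking orders as "sent" is driven by the client's acknowledgement, not by the server's act of responding, so a lost response is harmless (at the cost of occasional duplicate delivery, which the client must tolerate).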
The scenario is: I'm implementing a RESTful web service that will act as a cache for entities stored on a remote C system. One of the web service's requirements is that, when the remote C system is offline, it should answer GET requests with the last cached data, flagging it as "stale".
The way I was planning to flag the data as stale was to return an HTTP status code other than 200 (OK). I considered using 503 (Service Unavailable), but I believe it would make some C#/Java HTTP clients throw exceptions, and that would indirectly force the users to use exceptions for control flow.
Can you suggest a more appropriate status code? Or should I just return 200 and add a staleness flag to the response body? Another option would be defining a separate resource that informs the connectivity state, and let the clients handle that separately.
Simply set the Last-Modified header appropriately, and let the client decide if it's stale. Stale data will have the Last-Modified date farther back than "normal". For fresh data, keep the Last-Modified header current.
I would return 200 OK and an appropriate application-specific response. No other HTTP status code seems appropriate, because the decision of whether and how to use the response is being passed to the client. I would also advise against using standard HTTP cache-control headers for this purpose. I would use them only to control third-party (intermediary and client) caches. Using these headers to communicate application-specific information unnecessarily ties application logic to cache control. While it might not be immediately obvious, there are real long-term benefits in the ability to independently evolve application logic and caching strategy.
If you are serving stale responses RFC-2616 says:
If a stored response is not "fresh enough" by the most
restrictive freshness requirement of both the client and the
origin server, in carefully considered circumstances the cache
MAY still return the response with the appropriate Warning
header (see section 13.1.5 and 14.46), unless such a response
is prohibited (e.g., by a "no-store" cache-directive, or by a
"no-cache" cache-request-directive; see section 14.9).
In other words, serving 200 OK is perfectly fine.
In Mark Nottingham's caching article he says
Under certain circumstances — for example, when it’s disconnected from
a network — a cache can serve stale responses without checking with
the origin server.
In your case, your web service is behaving like an intermediary cache.
A representation is stale when either its Expires date has passed or its max-age has elapsed. Therefore, if you returned a representation with
Cache-Control: max-age=0
then you are effectively saying that the representation you are returning is already stale. Assuming that when you retrieve representations from the C system the data can be considered fresh for some non-zero amount of time, your web service can return representations with something like
Cache-Control: max-age=3600
The client can check the Cache-Control header for max-age == 0 to determine whether the representation was already stale when it was first retrieved.
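A client-side check for this convention might look like the following (a simplified parse for illustration; a production client should use a full Cache-Control parser, since e.g. quoted strings can legally contain "max-age="):

```python
import re

def max_age(cache_control: str):
    """Extract the max-age directive value from a Cache-Control header
    value, or None if the directive is absent."""
    m = re.search(r"max-age=(\d+)", cache_control, re.IGNORECASE)
    return int(m.group(1)) if m else None

def is_stale_on_arrival(cache_control: str) -> bool:
    """Per the convention above: max-age=0 signals the representation
    was already stale when served."""
    return max_age(cache_control) == 0
```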