Can I rely on ConnectionId for security with API Gateway Websockets? - amazon-web-services

I'm working on a project where the backend is built with the serverless framework. Recently, I added a feature using API Gateway's websockets. However, I have my doubts about my particular implementation's security, and wanted to ask how valid they were.
I struggled to build authentication into my websocket routes. There was an authorizer feature, but unfortunately the browser's native JavaScript WebSocket API provides no way to set custom headers on the connection request - this means I would have to submit authorization tokens in the URL params, which I would prefer not to do.
I came up with a workaround. I have existing HTTP microservices set up on API Gateway with serverless, authenticated through AWS Cognito Identity Federation. My solution was to "piggyback" my websocket authentication onto my HTTP services, as follows.
My client opens a websocket connection, and receives back the connectionId assigned to it by API Gateway.
My client calls an HTTP route with the connectionId, which is authenticated with Cognito. This serves to let my backend know that this particular connectionId is authenticated. I push the connectionId and the Cognito identity to a database, along with other information. This way, later I can find what connectionIds are associated with a particular Cognito identity.
When a client wants to call a "secured" websocket method, the websocket method checks the lookup table to see if that connectionId is associated with the correct Cognito identity. If it is, then the method goes through. Otherwise, the connection is closed.
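To make that check concrete, here is a rough sketch (not the poster's actual code) of what the "secured" route handler could look like in a Node.js Lambda, assuming the authenticated HTTP route has already written the connectionId-to-identity mapping to a hypothetical DynamoDB table named WsAuthTable:

// Minimal sketch of the "secured" route check described above. The table name
// "WsAuthTable" and attribute names are hypothetical.
const AWS = require('aws-sdk');

const db = new AWS.DynamoDB.DocumentClient();

exports.securedHandler = async (event) => {
  const connectionId = event.requestContext.connectionId;

  // Look up the mapping recorded by the authenticated HTTP route.
  const { Item } = await db.get({
    TableName: 'WsAuthTable',
    Key: { connectionId },
  }).promise();

  if (!Item || !Item.cognitoIdentityId) {
    // Not associated with an authenticated identity: close the connection.
    const api = new AWS.ApiGatewayManagementApi({
      endpoint: `${event.requestContext.domainName}/${event.requestContext.stage}`,
    });
    await api.deleteConnection({ ConnectionId: connectionId }).promise();
    return { statusCode: 403 };
  }

  // ... handle the secured action for Item.cognitoIdentityId ...
  return { statusCode: 200 };
};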
I found this resource from Heroku on websocket security, which recommends a similar, but not quite identical, process: https://devcenter.heroku.com/articles/websocket-security
It recommends the following:
"So, one pattern we’ve seen that seems to solve the WebSocket authentication problem well is a “ticket”-based authentication system. Broadly speaking, it works like this:
When the client-side code decides to open a WebSocket, it contacts the HTTP server to obtain an authorization “ticket”.
The server generates this ticket. It typically contains some sort of user/account ID, the IP of the client requesting the ticket, a timestamp, and any other sort of internal record keeping you might need.
The server stores this ticket (i.e. in a database or cache), and also returns it to the client.
The client opens the WebSocket connection, and sends along this “ticket” as part of an initial handshake.
The server can then compare this ticket, check source IPs, verify that the ticket hasn’t been re-used and hasn’t expired, and do any other sort of permission checking. If all goes well, the WebSocket connection is now verified."
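To make the quoted pattern concrete, a rough Node.js sketch of that ticket flow might look like this (the helper names, in-memory store and timings are illustrative, not from Heroku's article):

// Rough sketch of the ticket flow in the quote above.
const crypto = require('crypto');

const tickets = new Map(); // in practice: a database or cache with expiry

// Called from an authenticated HTTP endpoint before the client opens the socket.
function issueTicket(userId, clientIp) {
  const ticket = crypto.randomBytes(32).toString('hex');
  tickets.set(ticket, { userId, clientIp, issuedAt: Date.now() });
  return ticket; // returned to the client, which sends it on the WS handshake
}

// Called by the WebSocket server when the initial handshake message arrives.
function redeemTicket(ticket, sourceIp, maxAgeMs = 60 * 1000) {
  const record = tickets.get(ticket);
  if (!record) return null;                      // unknown ticket
  tickets.delete(ticket);                        // single use: no replay
  if (record.clientIp !== sourceIp) return null; // source IP check
  if (Date.now() - record.issuedAt > maxAgeMs) return null; // expired
  return record.userId;                          // connection is now verified
}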
As far as I can tell, my method and Heroku's are similar in that they both use an HTTP method to authenticate, but differ because
1) Heroku's method checks for authentication upon opening, while mine checks afterwards
2) Heroku's method requires generating and storing secure tokens
I don't want to send authorization over the websocket, because I'd have to put it in the URL params, and I also do not want to generate and store tokens, so I went with my method.
However, I have a couple of doubts about my method as well.
1) Because I don't check authorization on websocket open, in theory this approach is vulnerable to a DDoS attack, where an attacker simply opens as many sockets as they can. My assumption here is that the responsibility for preventing this falls on API Gateway, with its leaky-bucket throttling.
2) My strategy hinges on the connectionId being secure. If an attacker were able to spoof this connectionId, then my strategy would no longer work. I assume this connectionId is issued internally within API Gateway to mark specific connections, and should not be vulnerable as a result. However, I wanted to double check if this was the case.

I would suggest looking into JWTs. They were created for pretty much this purpose, where you need some way to authenticate client-side requests without exposing credentials. A JWT is fully self-contained, so you don't have to make a database request every time you need to validate the user making the request: https://jwt.io/
JWTs are very easy to implement in Serverless and to attach to a WebSocket connection request. You can then do something like add the user's IP address to the JWT payload and validate it at request time as an extra check on the caller's identity.
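As a rough illustration of that suggestion (not a drop-in implementation), signing and verifying such a token with the jsonwebtoken npm package could look like this; the secret, claim names and expiry are assumptions:

// Rough illustration using the jsonwebtoken npm package.
const jwt = require('jsonwebtoken');

const SECRET = process.env.JWT_SECRET;

// Issued by an authenticated HTTP endpoint.
function issueToken(userId, clientIp) {
  return jwt.sign({ sub: userId, ip: clientIp }, SECRET, { expiresIn: '15m' });
}

// Checked when the WebSocket connection request (or a later message) comes in.
function verifyToken(token, sourceIp) {
  try {
    const payload = jwt.verify(token, SECRET); // throws if invalid or expired
    return payload.ip === sourceIp ? payload.sub : null;
  } catch (err) {
    return null;
  }
}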

Related

webservices - public certificate update

I'm not very familiar with web services security concepts but as a provider of web services, we have to update the public cert in our .jks file.
Should we share anything with the consumers of this service so they can update things at their end?
Consumers sign their messages and send the request. The service endpoint is on the HTTP protocol.
Consumers sign their messages and send the request
Signing messages involves the private key of the one who does the signing, in your case the client. See here for some intro details of how this stuff works. Changing the web service certificate (the public key) might not cause problems (certificates are updated on a constant basis on the internet for HTTPS sites and no browser starts spewing errors, for example) but at the same time your clients might fail if you are using message level security.
If you encrypt at the message level (the data that gets exchanged) instead of at the transport level (how the data is sent - over HTTPS instead of HTTP), then you need to notify your clients.
You don't mention how the exchange is secured, so maybe find out about that first. If the endpoint is on HTTP as you mentioned, then it's message-level security, which means your service might sign or encrypt the message itself, so changing the keys will alter the signature and your clients will not trust the response anymore.
If you are still in doubt about what you need to do, then find someone who does know, and notify your clients before making the change so they have time to make changes themselves if needed. They can decide for themselves whether this has an impact on them. Whatever you do though, don't give them your new private key.

Online application with RESTful webservice design

I am just wondering how we can use a RESTful architecture/web service to implement an online-shopping kind of application.
Say we want to build something like Amazon, where a user can log in and shop. The first time, we will perform authentication using HTTP Basic or some other security mechanism, which is fine.
Now, when the user makes a second request, they need to send some authorization code or sessionId or something else so that the server knows this is the same user who logged in earlier. But a RESTful web service is stateless, so we are not supposed to store old session-related stuff. In that case, how can we authenticate the user?
I read something about client and server certificates, but that seems applicable where two different services are communicating with each other. Am I correct?
I am new to web services :-) so this type of silly question came to my mind.
HTTP Basic auth stores the username and password on the client side and sends them again with every request. So with REST you have to send these identification factors and authenticate on every request...
You can cache the result of authentication if you want it to be faster...
This is an important thing with REST... REST stores the session on the client side, not on the server side... If you want to store something important on the server side, then it has to be a resource or a property of a resource...
If you allow somebody to write a 3rd-party application (another client for your REST service), then the user should accept that this 3rd-party application can send requests in his/her name. Of course the user does not want to share his/her password, so giving permissions to 3rd-party applications is hard stuff. OAuth, for example, solves this problem...
The basic concept with 3rd-party applications (clients) is that you ask the user whether they allow them to send certain requests or not. For example, Facebook asks if you want to share your identity, your list of acquaintances, etc., and whether you allow the app to send posts on your behalf. After you click OK, the REST application should store that information and grant the client permissions to your account. How do you check who sends the requests? Of course CSRF is not allowed, so the 3rd-party client cannot send a cross-domain request on your behalf from the client you are using. So it has to send its requests through a different connection, probably with curl. What should it send? Of course the request details. What else? Its identity (an API key) and your identity. This is the most basic approach.
There are other solutions. You can use an approach similar to storing passwords in a database. You store only hashes of the passwords, hashed with a slow algorithm. At authentication time you create the hash again from the given password; when the stored hash is equal to the newly created one, the application accepts the identity and grants access to the account. You can use the same approach with requests. The 3rd-party client obtains a hash for a request, then sends the request along with that hash, and on receiving the request the server compares the hash with the one it creates itself based on the content of the request. If they are equal, the request is valid. This is cool stuff, because it prevents a CSRF attack on a 3rd-party client as well...
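A minimal sketch of that request-hash idea, assuming the hash is an HMAC over the request content computed with the client's secret key (all names are illustrative):

// Minimal sketch of the request-hash idea above: an HMAC over the request
// content, computed with the 3rd-party client's secret key.
const crypto = require('crypto');

// Client side: sign the request content before sending it.
function signRequest(content, secretKey) {
  return crypto.createHmac('sha256', secretKey).update(content).digest('hex');
}

// Server side: recompute the hash from the received content and compare.
function isRequestValid(content, receivedSignature, secretKey) {
  const expected = Buffer.from(signRequest(content, secretKey), 'hex');
  const received = Buffer.from(receivedSignature, 'hex');
  return expected.length === received.length &&
    crypto.timingSafeEqual(expected, received);
}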
I guess there are many other, more complex approaches; I don't know, I am not a security expert and probably you won't be either. You just have to understand the basics and use a tool, for example OAuth, if you want to allow 3rd-party access to your API. If you don't want that, then probably you don't need a REST application, just a simple web application... That depends on your needs, visitor count, etc...

Do we need a security signature for the web service response?

I have created a web service API, and its architecture is such that the server requires a client to sign the request with a secret key assigned to it (the signature is always different between multiple requests).
Server matches the client's signature with its own computed signature. If they are a match then the server returns the response.
I am wondering if a client should check the response coming back from the server to see if it's from the same application to which the request was made.
Is any kind of attack possible between HTTP request and HTTP response?
Do we need a security signature for the web service response?
It depends. There are a few types of web service APIs out there. Some need strict security, others might not. You could have a few types of APIs:
(1) completely opened API. Say you have a blog where you post about writing RESTful services and clients. You host a complete working REST service based on one of your posts so that people give it a spin. You don't care who calls your service, the service returns some dummy data etc. It's just a demo, a toy, no security here, no request signing, nada. It's just plain HTTP calls.
(2) service with an API key. Say you have a web service and you want to know who calls it. This kind of service needs a pre-registration, and each client who wants to call your service needs to register and obtain a key first. Do note that the registration is not about authentication or authorization; you just want to know who's using your API (e.g. what business sector they operate in, how many clients they have, what they are using your API for, etc.) so that you can later make some analysis of your own and take some (marketing, maybe) decisions based on the data you get back.
There is nothing secret about this API key. It's just a UUID of some sort, the most basic way of differentiating between calls. This again involves only plain HTTP calls with the key as an additional request parameter.
(3) service with an API key and a secret key. This is similar to number (2), but you need to absolutely make sure that the calls are coming from the client that presents some API key. You need this because you probably want to bill the client for how much they have used your service. You want to make sure the calls actually come from that client and not from someone ill-intentioned who wants to run up the client's bill.
So the client uses its key for identification and a signature of the request made with the secret key to actually vouch for its identity. This again can be plain HTTP calls with the key and signature as additional request parameters.
(4) data "tampered-safe" web services. For numbers (1), (2) and (3) above I haven't considered any message security issues because most APIs don't need it. What's exchanged isn't confidential and not all that important to protect. But sometimes although the data isn't confidential you need to make sure it wasn't tampered with during transit.
Say you are the owner of a shop that builds some product and you want to advertise your product on some partner web sites. You expose a service with the product details, and your partners just use this data to display your product details on their sites. Everybody knows what products you are building, so you don't need to hide that, but you are paranoid about your competition trying to ruin you, so you want to avoid them intercepting the request and multiplying all your prices by 10 in the responses just to scare potential buyers away.
Number (3) above, although it uses signing as a way to prove the identity of the caller, also ensures the request was not tampered with (the server will reject the request if the signature does not match). So if you need to assure an authentic response, you can also sign the response.
For this, the service can provide the client with an API key and two secret keys. One secret key is used by the client to sign their requests, while the second secret key is used by the client to verify the signature of the response (using a single secret key for the whole server isn't all that safe, so the server issues a server secret key specific to each client).
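A small sketch of that response-signing idea (key and function names invented for illustration):

// The server signs the response body with the per-client "server secret key",
// and the partner's client refuses to display data whose signature doesn't match.
const crypto = require('crypto');

const sign = (key, data) =>
  crypto.createHmac('sha256', key).update(data).digest('hex');

// Server side: attach a signature to the response body.
function signResponse(responseBody, serverSecretKeyForClient) {
  return { body: responseBody, signature: sign(serverSecretKeyForClient, responseBody) };
}

// Client side: verify before displaying anything.
function verifyResponse({ body, signature }, serverSecretKeyForClient) {
  return sign(serverSecretKeyForClient, body) === signature;
}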
But this has a weak point: you would need to trust your partners to actually validate the response signature before displaying the information on their site, and not just bluntly display it. Paranoid as you are, you want to protect against this too, and for that you need HTTPS.
As @SilverlightFox mentioned, this proves the validity of the response. The data was not tampered with because it's encrypted. The client does not need an extra step to verify the response signature, because that verification is already done at a lower (transport) level.
(5) secure services. And now we reach the last type of service, where the data is actually confidential. HTTPS is a must for these services. HTTPS ensures the data remains confidential, that it isn't tampered with in transit, identifies the server, and can also identify the client if client-side certificates are used.
So, in conclusion, it depends on what type of service you have.
Make the request over HTTPS to ensure the validity of the response.
This will ensure your data is not vulnerable to a MITM attack. Rolling your own untested encryption/hashing methods is a sure way to open up your application to attack, so you should use TLS/SSL which means that you should connect to your web service API over HTTPS. TLS is the proven and secure way to ensure the response is coming from the application that the request was made to.

Secure centralized HMAC-based authentication service

I need to centralize authentication for my REST web services and make this authentication the same for all of our web services. So I started writing an external web service to take care of the authentication.
To keep compatibility, since the authentication was performed using an HMAC signature (signed with a private key) sent alongside each individual request (so there is no token of any sort), I thought of making all web services send the HMAC included in the incoming request together with the StringToSign (a representation of the data used to generate the HMAC).
So the authorization service (knowing the private key) can try to compose the same signature; if it matches, it answers with 200 OK and a JSON object saying "authorized".
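As a rough illustration only, an Express-style version of that authorization endpoint could look like this; the route, field names and environment variable are assumptions, not the actual design:

// Rough Express-style sketch of the authorization service described above.
const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());

const PRIVATE_KEY = process.env.PRIVATE_KEY; // the key the callers sign with

app.post('/authorize', (req, res) => {
  const { stringToSign, hmac } = req.body; // forwarded by the calling web service
  if (typeof stringToSign !== 'string' || typeof hmac !== 'string') {
    return res.status(400).json({ authorized: false });
  }
  // Recompute the signature from the StringToSign and compare in constant time.
  const expected = crypto.createHmac('sha256', PRIVATE_KEY)
    .update(stringToSign)
    .digest('hex');
  const ok = hmac.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(hmac), Buffer.from(expected));
  res.status(ok ? 200 : 403).json({ authorized: ok });
});

app.listen(3000);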
All this communication happens over HTTPS, but I'm trying to figure out what could happen if someone were to intercept or modify this answer, turning a 403 Forbidden into a 200 OK...
Should I use some sort of way to recognize this is the original answer? If so, what could I do?
I do agree that SSL certificates issued by CAs are secure, but how could I make sure my HTTPS layer has not been compromised, allowing an attacker to modify authorization responses?
P.S. Please provide a standard solution if one exists. I don't want it to be tied to the technology I'm using right now, since each service may use its own stack, and I don't really want the answer to be .NET or something else specific, because there's a proprietary implementation for the authentication mechanism.
All this communication happens over HTTPS, but I'm trying to figure out what could happen if someone were to intercept or modify this answer
This is what the S in HTTPS is for: SSL guarantees the integrity of the message. If the attacker forges the response, the client will notice it.
You can ask the experts at #security.

Securing REST API without reinventing the wheel

When designing REST API is it common to authenticate a user first?
The typical use case I am looking for is:
User wants to get data. Sure cool we like to share! Get a public API key and read away!
User wants to store/update data... woah wait up! who are you, can you do this?
I would like to build it once and allow say a web-app, an android application or an iPhone application to use it.
A REST API appears to be a logical choice for requirements like these.
To illustrate my question I'll use a simple example.
I have an item in a database, which has a rating attribute (integer 1 to 5).
If I understand REST correctly, I would implement a GET request using the language of my choice that returns CSV, XML or JSON like this:
http://example.com/product/getrating/{id}/
Say we pick JSON we return:
{
  "id": "1",
  "name": "widget1",
  "attributes": { "rating": { "type": "int", "value": 4 } }
}
This is fine for public facing APIs. I get that part.
Where I have tons of questions is how to combine this with a security model. I'm used to web-app security, where I have a session state identifying my user at all times, so I can control what they can do no matter what they decide to send me. As I understand it, that isn't RESTful, so it would be a bad solution in this case.
I'll try to use another example using the same item/rating.
If user "JOE" wants to add a rating to an item
This could be done using:
http://example.com/product/addrating/{id}/{givenRating}/
At this point I want to store the data saying that "JOE" gave product {id} a rating of {givenRating}.
Question: How do I know the request came from "JOE" and not "BOB".
Furthermore, what if it were for more sensitive data, like a user's phone number?
What I've got so far is:
1) Use the built-in HTTP authentication features to authenticate on every request, over either plain HTTP or HTTPS.
This means that every request now take the form of:
https://joe:joepassword@example.com/product/addrating/{id}/{givenRating}/
2) Use an approach like Amazon's S3 with private and public key: http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/
3) Use a cookie anyway and break the stateless part of REST.
The second approach appears better to me, but I am left wondering: do I really have to re-invent this whole thing? Hashing, storing, generating the keys, etc., all by myself?
This sounds a lot like using sessions in a typical web application and rewriting the entire stack yourself, which usually to me means "You're doing it wrong", especially when dealing with security.
EDIT: I guess I should have mentioned OAuth as well.
Edit 5 years later
Use OAuth2!
Previous version
No, there is absolutely no need to use a cookie. It's not half as secure as HTTP Digest, OAuth or Amazon's AWS (which is not hard to copy).
The way you should look at a cookie is that it's an authentication token as much as Basic/Digest/OAuth/whichever would be, but less appropriate.
However, I don't feel using a cookie goes against RESTful principles per se, as long as the contents of the session cookie does not influence the contents of the resource you're returning from the server.
Cookies are evil, stop using them.
Don't worry about being "RESTful", worry about security. Here's how I do it:
Step 1: User hits authentication service with credentials.
Step 2: If credentials check out, return a fingerprint, session id, etc..., and pop them into shared memory for quick retrieval later or use a database if you don't mind adding a few milliseconds to your web service turnaround time.
Step 3: Add an entry point call to the top of every web service script that validates the fingerprint and session id for every web service request.
Step 4: If the fingerprint and session id aren't valid or have timed out, redirect to authentication.
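A bare-bones sketch of steps 2-4, with an in-memory Map standing in for the shared memory or database; all names are invented for illustration:

// Bare-bones sketch of the session-id/fingerprint flow described above.
const crypto = require('crypto');

const sessions = new Map(); // sessionId -> { fingerprint, userId, expiresAt }

// Step 2: credentials checked out, so create and hand back a session entry.
function createSession(userId, ttlMs = 30 * 60 * 1000) {
  const sessionId = crypto.randomBytes(16).toString('hex');
  const fingerprint = crypto.randomBytes(16).toString('hex');
  sessions.set(sessionId, { fingerprint, userId, expiresAt: Date.now() + ttlMs });
  return { sessionId, fingerprint };
}

// Steps 3-4: called at the top of every web service request.
function validateSession(sessionId, fingerprint) {
  const entry = sessions.get(sessionId);
  if (!entry || entry.fingerprint !== fingerprint || entry.expiresAt < Date.now()) {
    return null; // invalid or timed out -> caller redirects to authentication
  }
  return entry.userId;
}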
READ THIS:
RESTful Authentication
Edit 3 years later
I completely agree with Evert, use OAuth2 with HTTPS, and don't reinvent the wheel! :-)
For simpler REST APIs - not meant for 3rd-party clients - JSON Web Tokens can be good as well.
Previous version
Use a cookie anyway and break the stateless part of REST.
Don't use sessions; with sessions your REST service won't scale well... There are 2 kinds of state here: application state (or client state, the session) and resource state. Application state contains the session data and is maintained by the REST client. Resource state contains the resource properties and relations and is maintained by the REST service. You can decide very easily whether a particular variable is part of the application state or the resource state: if the amount of data increases with the number of active sessions, then it belongs to the application state. So, for example, the user identity of the current session belongs to the application state, but the list of users or user permissions belongs to the resource state.
So the REST client should store the identification factors and send them with every request. Don't confuse the REST client with the HTTP client; they are not the same. The REST client can be on the server side too if it uses curl, or it can create, for example, a server-side http-only cookie which it can share with the REST service via CORS. The only thing that matters is that the REST service has to authenticate every request, so you have to send the credentials (username, password) with every request.
If you write a client-side REST client, then this can be done with SSL + HTTP auth. In that case you can create a credentials -> (identity, permissions) cache on the server to make authentication faster. Be aware that if you clear that cache and the users send the same request, they will get the same response; it will just take a bit longer. You can compare this with sessions: if you clear the session store, users will get a status 401 unauthorized response...
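A small sketch of that credentials -> (identity, permissions) cache (names are invented; findUserByCredentials stands in for the real, slower lookup):

// Sketch of the credentials -> (identity, permissions) cache mentioned above.
const crypto = require('crypto');

const authCache = new Map(); // cache key -> { identity, permissions }

async function authenticate(username, password) {
  // The cache key is just a hash of the credentials, so plain passwords
  // are never stored in the cache itself.
  const cacheKey = crypto.createHash('sha256')
    .update(`${username}:${password}`)
    .digest('hex');

  if (authCache.has(cacheKey)) return authCache.get(cacheKey); // fast path

  const user = await findUserByCredentials(username, password); // hypothetical slow lookup
  if (!user) return null; // respond with 401 unauthorized

  const result = { identity: user.id, permissions: user.permissions };
  authCache.set(cacheKey, result); // clearing this only makes the next request slower
  return result;
}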
If you write a server-side REST client and you send the identification factors to the REST service via curl, then you have 2 choices: you can use HTTP auth as well, or you can use a session manager in your REST client (but not in the REST service).
If somebody untrusted writes your REST client, then you have to write an application that authenticates the users and gives them the ability to decide whether or not they want to grant permissions to different clients. OAuth is an already existing solution for that. OAuth 1 is more secure, OAuth 2 is less secure but simpler, and I guess there are several other solutions for this problem... You don't have to reinvent this. There are complete authentication and authorization solutions using OAuth, for example the WSO2 Identity Server.
Cookies are not necessarily bad. You can use them in a RESTful way as long as they hold client state and the service holds resource state only. For example, you can store the cart or the preferred pagination settings in cookies...