HTTP request method with no data - web-services

I have a REST endpoint in my application that takes no data and returns no data.
The endpoint is clearing out some data I previously stored in the user's session. I don't need to send or receive data from the client -- just hit the endpoint.
I currently allow the endpoint to only receive HTTP POST requests.
Is there a better HTTP request method than POST for this scenario? If so, why?

I think this is actually fine. POST doesn't necessarily need to create a resource. If it's modifying the client's session, that's OK in my book. For the return code, consider 204 No Content.
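For illustration, a minimal sketch of such an endpoint in Django (the framework, view name, and session key are assumptions made purely for the example, not something stated in the question):

```python
# Hypothetical Django view: accepts only POST, clears session data, returns 204 No Content.
from django.http import HttpResponse
from django.views.decorators.http import require_POST

@require_POST
def clear_stored_data(request):
    # "stored_data" is a made-up session key standing in for whatever was stashed earlier.
    request.session.pop("stored_data", None)
    return HttpResponse(status=204)  # success, empty body
```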

The endpoint is not actually RESTful. Clearing out session data means you aren't transferring state on each request; see If REST applications are supposed to be stateless, how do you manage sessions?


Can I rely on ConnectionId for security with API Gateway Websockets?

I'm working on a project where the backend is built with the serverless framework. Recently, I added a feature using API Gateway's websockets. However, I have my doubts about my particular implementation's security, and wanted to ask how valid they were.
I struggled to build authentication into my WebSocket routes. There was an authorizer feature, but unfortunately native JavaScript APIs provide no way to set headers on a WebSocket connection request - this means I would have to submit authorization tokens in the URL parameters, which I would prefer not to do.
I came up with a workaround. I have existing HTTP microservices set up on API Gateway with serverless, authenticated through AWS Cognito Identity Federation. My solution was to "piggyback" my websocket authentication onto my HTTP services, as follows.
My client opens a websocket connection, and receives back the connectionId assigned to it by API Gateway.
My client calls an HTTP route with the connectionId, which is authenticated with Cognito. This serves to let my backend know that this particular connectionId is authenticated. I push the connectionId and the Cognito identity to a database, along with other information. This way, later I can find what connectionIds are associated with a particular Cognito identity.
When a client wants to call a "secured" websocket method, the websocket method checks the lookup table to see if that connectionId is associated with the correct Cognito identity. If it is, then the method goes through. Otherwise, the connection is closed.
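Roughly, the check in a "secured" method looks something like the sketch below (table name, attribute names, and how the Cognito identity ends up in the table are all my own illustrative choices; actually closing the connection via the management API is omitted for brevity):

```python
# Hypothetical Lambda handler for a "secured" WebSocket route, assuming Python and DynamoDB.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
connections = dynamodb.Table("WebSocketConnections")  # connectionId -> cognitoIdentityId

def secured_route(event, context):
    connection_id = event["requestContext"]["connectionId"]
    item = connections.get_item(Key={"connectionId": connection_id}).get("Item")

    if item is None or not item.get("cognitoIdentityId"):
        # No authenticated identity was recorded for this connection via the HTTP route.
        return {"statusCode": 403, "body": "Unauthorized connection"}

    # The connection was previously associated with a Cognito identity; let it through.
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```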
I found this resource from Heroku on WebSocket security, which recommends a similar but not quite identical process: https://devcenter.heroku.com/articles/websocket-security
It recommends the following:
"So, one pattern we’ve seen that seems to solve the WebSocket authentication problem well is a “ticket”-based authentication system. Broadly speaking, it works like this:
When the client-side code decides to open a WebSocket, it contacts the HTTP server to obtain an authorization “ticket”.
The server generates this ticket. It typically contains some sort of user/account ID, the IP of the client requesting the ticket, a timestamp, and any other sort of internal record keeping you might need.
The server stores this ticket (i.e. in a database or cache), and also returns it to the client.
The client opens the WebSocket connection, and sends along this “ticket” as part of an initial handshake.
The server can then compare this ticket, check source IPs, verify that the ticket hasn’t been re-used and hasn’t expired, and do any other sort of permission checking. If all goes well, the WebSocket connection is now verified."
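For concreteness, a rough sketch of what that ticket flow could look like (endpoint split, names, and the in-memory store are made up; a real system would use a database or cache with expiry):

```python
# Illustrative-only sketch of the quoted "ticket" pattern.
import time
import uuid

TICKETS = {}  # in real life: a database or cache with expiry

def issue_ticket(user_id, client_ip):
    """HTTP endpoint: called by an already-authenticated client before opening the WebSocket."""
    ticket = uuid.uuid4().hex
    TICKETS[ticket] = {"user_id": user_id, "ip": client_ip, "issued": time.time()}
    return ticket

def verify_ticket(ticket, client_ip, max_age=60):
    """WebSocket handshake: called once; the ticket is discarded so it cannot be re-used."""
    record = TICKETS.pop(ticket, None)
    if record is None:
        return False                                     # unknown or already-used ticket
    if record["ip"] != client_ip:
        return False                                     # issued to a different source IP
    return (time.time() - record["issued"]) <= max_age   # reject expired tickets
```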
As far as I can tell, my method and Heroku's are similar in that they both use an HTTP request to authenticate, but they differ in that:
1) Heroku's method checks for authentication upon opening, while mine checks afterwards
2) Heroku's method requires generating and storing secure tokens
I don't want to send authorization over the WebSocket, because I'd have to put it in URL parameters, and I also do not want to generate and store tokens, so I went with my method.
However, I have a couple of doubts about my method as well.
1) Because I don't check authorization on WebSocket open, in theory this approach is vulnerable to a DDoS attack where an attacker simply opens as many sockets as they can. My assumption here is that the responsibility for preventing this falls on API Gateway, with its leaky bucket algorithm.
2) My strategy hinges on the connectionId being secure. If an attacker were able to spoof this connectionId, then my strategy would no longer work. I assume this connectionId is issued internally within API Gateway to mark specific connections, and should not be vulnerable as a result. However, I wanted to double check if this was the case.
I would suggest looking into JWTs. They were created for pretty much this purpose: authenticating client-side requests without exposing credentials. A JWT is fully self-contained, so you don't have to hit a database on every request just to validate the user making it: https://jwt.io/
JWTs are easy to implement in Serverless and to attach to a WebSocket connection request. You can also do something like add the user's IP address to the JWT payload and validate it at request time as an extra check on the user's identity.
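As a sketch of that idea, here is what verifying a JWT on the $connect route could look like with PyJWT. The library choice, the claim names, and the assumption that the token arrives as a query string parameter are all mine, not part of the answer above:

```python
# Hypothetical $connect handler; PyJWT, claim names, and token transport are assumptions.
import jwt  # PyJWT

SECRET = "replace-with-a-real-secret"  # or a public key if using RS256

def on_connect(event, context):
    token = (event.get("queryStringParameters") or {}).get("token")
    source_ip = event["requestContext"]["identity"]["sourceIp"]
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.PyJWTError:
        return {"statusCode": 401}      # missing, malformed, or expired token: refuse

    if claims.get("ip") and claims["ip"] != source_ip:
        return {"statusCode": 401}      # optional IP check suggested in the answer

    return {"statusCode": 200}          # allow the WebSocket connection
```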

In the backend, can you access data from previous requests?

This is more of a theory question, so I'm not going to post any code.
On the frontend, the user types in a search command. On the backend (Django in my case), it hits an API, and the results of the search are saved in a Django view in views.py. On the frontend, the user interacts with this returned data and sends another request. On the backend, is the data from the first Django view still available for use? How do you access it?
(The data is also in the frontend and I can send it with the second request. But if it's still stored on the backend then I wouldn't need to.)
HTTP is by its own nature a stateless protocol. That means the protocol itself knows nothing about previous requests: a request comes in and your API simply reacts to it with the logic you've implemented.
If you want to persist any state/data on the API side, you can do so by writing it to a database or saving it in some local/global variable. Then you can access that saved state/data when receiving later requests to your back-end and implement whatever logic combines the previous state with the new incoming data.
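For example, a sketch using Django's session framework to keep the first request's results around for a later request from the same user (view names, the session key, and the run_search helper are made up for illustration; the results are assumed to be JSON-serializable):

```python
# Hypothetical Django views; names and the search helper are illustrative only.
from django.http import JsonResponse

def search(request):
    results = run_search(request.GET.get("q", ""))   # run_search is an assumed helper
    request.session["last_results"] = results        # persisted server-side, per user
    return JsonResponse({"results": results})

def refine(request):
    # A later request from the same user can read what the first view stored.
    previous = request.session.get("last_results", [])
    return JsonResponse({"previous": previous})
```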

Securely transferring data to webpage

The question is related to securely transferring data to a webpage. I need to transfer some data to a webpage/website. Assume that for all the mentioned scenarios, I am using HTTPS as the protocol.
Do I need to append the data/parameters to the URL? Do I need to encrypt them so that they are not transmitted as plain text?
Or do I make a POST request to the website, which returns the rendered HTML page?
Security is the major concern for me, and I have to use HTTP or RESTful web services for this purpose.
Over HTTPS the query string data will be encrypted in transit, but it will also be visible in the browser address bar and could be logged in browser history. Even if it is a server-side request, query string data could be logged in server logs.
Sending the data via POST is preferred - it is not guaranteed not to be logged, but by POSTing the data you are implying that it is used to create a change in state and that it should not be replayed or cached.
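A small illustration of the difference using Python's requests library (the URL and field name are made up):

```python
# Illustrative only; the URL and field name are not from the original question.
import requests

# Sensitive value ends up in the URL: address bar, history, and server logs may record it.
requests.get("https://example.com/page", params={"token": "s3cr3t"})

# Same value sent in the request body over HTTPS: it is not part of the URL, so it is far
# less likely to show up in logs or history (though the server can still choose to log it).
requests.post("https://example.com/page", data={"token": "s3cr3t"})
```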

Embedded Linux clients and authentication

I need to come up with a scheme for remote devices running Linux to push data to a web service via HTTPS. I'm not sure how I want to handle authentication. Can anyone see any security risks in including some kind of authentication in the body of the request itself? I'm thinking of having the request body be JSON, and it would look like this:
{
  "id": "some unique id",
  "password": "my password",
  "data": 1234
}
If the id and password in the JSON don't match what is in my database, the request gets rejected.
Is there a problem with this? Is there a better way to ensure that only my clients can push data?
That scheme is primitive, but it works.
Usually a real session is preferred since it offers some advantages:
separation of authentication and request
history of requests in a session
credentials get sent only once for multiple requests
flexible change of authentication strategy
...
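A sketch of what "credentials get sent only once" could look like from the device, using Python's requests library (the endpoints, the cookie-based login, and the field names are assumptions for illustration, not part of the original question):

```python
# Hypothetical client-side flow on the device; endpoints and field names are made up.
import requests

session = requests.Session()

# 1. Authenticate once; the server sets a session cookie on success.
session.post(
    "https://example.com/login",
    json={"id": "some unique id", "password": "my password"},
    timeout=10,
).raise_for_status()

# 2. Subsequent pushes reuse the session cookie; no password travels with the data.
for reading in (1234, 1235):
    session.post("https://example.com/data", json={"data": reading}, timeout=10)
```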

Sessions in REST services

I'm developing a small REST service which should support client session persistence. As you know, because of REST we can't store any client data on the server; data must be stored on the client side and the client's requests must be self-sufficient. So... how can we store client sessions? Searching the internet I've found some methods for doing this. For example: we send the client an encrypted token which contains the client's id (nick, etc.), like token = AES(id, secretKey); then we authorize the user on every request by decrypting the token on the server with the secret key. Can anyone advise anything? Maybe there is another good way to achieve the same functionality. Which crypto algorithm would be preferable for this? Thanks.
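As a sketch of that token idea, here is one way to do it with Fernet from the Python cryptography package rather than raw AES (the library choice is mine, not the question's; Fernet adds integrity protection and expiry, which a bare AES ciphertext does not have, and the key handling is deliberately simplified):

```python
# Illustrative token sketch; Fernet (AES-CBC + HMAC under the hood) handles integrity and TTL.
from cryptography.fernet import Fernet, InvalidToken

secret_key = Fernet.generate_key()   # in practice: load from configuration, don't regenerate
fernet = Fernet(secret_key)

def issue_token(user_id: str) -> bytes:
    return fernet.encrypt(user_id.encode())            # hand this token to the client

def authorize(token: bytes, max_age_seconds: int = 3600) -> str:
    # Raises InvalidToken if the token was tampered with or has expired.
    return fernet.decrypt(token, ttl=max_age_seconds).decode()
```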
You mentioned:
As you know, because of REST we can't store any client data on the server; data must be stored on the client side and the client's requests must be self-sufficient.
REST doesn't say you can't store client data on the server; it just says you shouldn't store application state there, which you can think of as "what this client is in the middle of trying to do".
If you are primarily trying to just have a concept of authenticated users, then a standard login cookie will work just fine and is not "unRESTful".
It all comes down to your answer to this question: why do you need a "session" concept in the first place?
If you need to ensure that the client passes a cookie representing a set of credentials, consider having the client pass them as HTTP authentication headers (over HTTPS) with each request instead.
If you need some sticky routing rules to be followed (to make sure that the client's request gets sent to a particular server), consider using this opportunity to get rid of that architectural straightjacket as it is the quickest way to kill your chances of future scalability. Instead, make your server choice arbitrary.
If you absolutely must route to a specific node, try requiring that the client pass enough identification data that you can use it to hash or shard the client down a particular "swim lane". You could split things up based on their username, for example.
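For instance, a tiny sketch of deterministic routing by username (the node names, shard count, and hashing choice are arbitrary and only meant to illustrate the idea):

```python
# Illustrative only: pick a backend node deterministically from the username.
import hashlib

NODES = ["node-a", "node-b", "node-c"]   # made-up node names

def node_for(username: str) -> str:
    digest = hashlib.sha256(username.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]   # the same user always maps to the same node
```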