CSRF prevention in a browser-plugin-only API served over HTTPS

I have to design an API that will support browser plugins only (the latest versions of Chrome, Firefox, and IE). The API will be served over HTTPS and will use a cookie-based access control scheme.
I am wondering what tactics to employ for CSRF prevention. Specifically, I want my API to accept requests only from my own browser plugin and not from any other pages or plugins.
Would I be able to:
Assume that in most cases there would be an Origin header?
Compare and trust the Origin header to ensure that requests come only from a whitelisted set of origins?
Would this be compatible across the board (Chrome/Firefox/IE)?
I'm aware of multiple techniques used to prevent CSRF, such as the Synchronizer Token Pattern, but I would like to know whether, within the limited scope above, simply checking the Origin header would be sufficient.
Thanks in advance!

Set a custom header like X-From-My-Plugin: yes. The server should require its presence. It can be a constant. A web attacker can either:
make the request from the user's browser: this sends the cookie, but they can't send the custom header cross-origin; or
make the request from a different HTTP client: they can send the custom header, but they can't send the cookie because they don't know it
Either way, the attacker's request won't have both the cookie and the custom header.
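As a rough sketch of that idea, assuming a hypothetical endpoint https://api.example.com and an Express-style server check (the URL, header name, and middleware name are illustrative, not part of the original answer):

// Plugin side: send the session cookie plus the constant custom header.
async function callApi(path: string): Promise<unknown> {
  const response = await fetch(`https://api.example.com${path}`, {
    method: "POST",
    credentials: "include", // attach the session cookie
    headers: { "X-From-My-Plugin": "yes" },
  });
  if (!response.ok) throw new Error(`API error: ${response.status}`);
  return response.json();
}

// Server side (Express-style middleware): reject any request missing the header.
import type { Request, Response, NextFunction } from "express";
function requirePluginHeader(req: Request, res: Response, next: NextFunction): void {
  if (req.get("X-From-My-Plugin") !== "yes") {
    res.status(403).send("Forbidden");
    return;
  }
  next();
}

A forged cross-site form or image request can carry the cookie but not the header, and a non-browser client can fake the header but has no cookie, so the check rejects both.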

Related

How to store a JWT token response from DynamoDB

I have stored my JWT token in DynamoDB, from a Step Function used to generate it.
I have fetched the token via API Gateway from my static site hosted in S3.
Does anyone know how to save it in a cookie?
I am using the serverless framework to deploy my lambdas, if that is any help.
Thanks
To save this (or anything else, really) in a cookie, you need to either respond to a browser request with a Set-Cookie HTTP header, see:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie
or to set it with client-side JavaScript, see:
https://developer.mozilla.org/en-US/docs/Web/API/Document/cookie
How you do it exactly depends on:
do you want to set it with HTTP headers or client-side JavaScript?
what framework do you use?
how is your application accessed?
how do you want to use that cookie?
Remember that a cookie is client-side state. All you tell us is how you get the data you want stored in a cookie (which doesn't matter here), not how the client-server interaction is done (which is what matters).
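For the header-based route, a minimal sketch of what a Lambda-style handler behind API Gateway might return; the cookie name, attributes, and event shape here are assumptions for illustration:

// Return the JWT to the browser in a Set-Cookie response header.
export const handler = async (event: { token?: string }) => {
  const token = event.token ?? "example.jwt.value"; // placeholder for the value you fetched
  return {
    statusCode: 200,
    headers: {
      "Set-Cookie": `session=${token}; Secure; HttpOnly; SameSite=Strict; Path=/`,
    },
    body: JSON.stringify({ ok: true }),
  };
};

// Client-side alternative (note: no HttpOnly protection is possible this way):
// document.cookie = `session=${token}; Secure; SameSite=Strict; Path=/`;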

Specifying headers in Chromecast Receiver

The API that I'm retrieving the media payload from requires an authentication header to be specified. Right now, I'm simply setting the cookies of the document and then getting the content, because this seemed to be the easiest way.
I ran into issues with cookies: since cookies are domain-specific and I'm hosting the receiver file on Google Drive rather than on the domain that points to the API endpoints, I needed to set up proxy servers that overwrite the cookies before the requests reach the endpoints.
This is an extremely complicated way of authenticating. Does anyone know how I can specify a header in the receiver file?

Standard server-to-server and browser-to-server authentication method

I have a server with some resources; until now all these resources were requested through a browser by a human user, and authentication was done with a username/password method that generates a cookie holding a token (to keep the session open for some time).
Now the system requires that other servers make GET requests to this resource server, but they have to authenticate to access the resources. We have been using a list of authorized IPs, but having two authentication methods makes the code more complex.
My questions are:
Is there any standard method or pattern to authenticate human users and servers using the same code?
If there is not, are the methods I'm using now the right ones or is there a better / more standard way to accomplish what I need?
Thanks in advance for any suggestion.
I have used a combination of basic authentication and cookies in my web services before. In basic authentication you pass the user name and password, encoded, in the HTTP header, where it looks something like this:
Authorization: Basic QWxhZGluOnNlc2FtIG9wZW4=
The string after the word "Basic" is the Base64-encoded user name and password, separated by a colon. The REST API can grab this information from the HTTP header and perform authentication and authorization. If authentication fails I return an HTTP 401 Unauthorized error, and if the caller is authenticated but not authorized I return an HTTP 403 Forbidden error, to distinguish a failure to authenticate from a failure to authorize. If it is a web client and the person is already authenticated, then I pass the following in the HTTP header with a request:
Authorization: Cookie
This tells the web service to get the cookie from the HTTP request and use it for authorization instead of doing the authentication process over again.
This will allow clients that are not web browsers to use the same techniques. The client can always use basic authentication for every request, or they can use basic authentication on the initial request and maintain cookies thereafter. This technique also works well for Single Page Applications (SPAs) where you do not have a separate login page.
Note: Encoding the user name and password is not good enough security; you still want to use HTTPS/SSL to secure the communications channel.
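A rough sketch of how a server might accept both schemes, in Node/TypeScript; the Authorization: Cookie convention follows this answer, and verifyCredentials and verifySessionCookie are hypothetical placeholders:

import type { IncomingMessage } from "http";

// Decide how to authorize a request: Basic credentials or an existing session cookie.
function authorize(req: IncomingMessage): boolean {
  const auth = req.headers["authorization"];
  if (auth?.startsWith("Basic ")) {
    const decoded = Buffer.from(auth.slice("Basic ".length), "base64").toString("utf8");
    const separator = decoded.indexOf(":");
    const user = decoded.slice(0, separator);
    const password = decoded.slice(separator + 1);
    return verifyCredentials(user, password);
  }
  if (auth === "Cookie") {
    // This answer's convention: fall back to the session cookie sent with the request.
    return verifySessionCookie(req.headers["cookie"]);
  }
  return false;
}

// Placeholder implementations, only to keep the sketch self-contained.
function verifyCredentials(user: string, password: string): boolean {
  return user === "demo" && password === "demo"; // replace with a real lookup
}
function verifySessionCookie(cookieHeader: string | undefined): boolean {
  return cookieHeader?.includes("session=") ?? false; // replace with a real check
}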

Are JAXRS restful services prone to CSRF attack when content type negotiation is enabled?

I have a RESTful API with annotations like @Consumes(MediaType.APPLICATION_JSON). In that case, would a CSRF attack still be possible on such a service? I've been tinkering with securing my services with CSRFGuard on the server side, or with a double submit from the client side. However, when I tried to POST requests using a FORM with enctype="text/plain", it didn't work. (The technique is explained here.) It does work if I have MediaType.APPLICATION_FORM_URLENCODED in my consumes annotation. The content negotiation helps for the POST/PUT/DELETE verbs, but GET is still accessible, which might need looking into.
Any suggestions or inputs would be great, also please let me know if you need more info.
Cheers
JAX-RS is designed for creating REST APIs, which are supposed to be stateless.
Cross-Site Request Forgery is NOT a problem for stateless applications.
The way Cross-Site Request Forgery works is that someone tricks you into clicking a link, or into opening a link in your browser, that points to a site where you are already logged in, for example some online forum. Since you are already logged in on that forum, the attacker can construct a URL, say something like this: someforum.com/deletethread?id=23454
That forum program, being badly designed, will recognize you based on the session cookie, will confirm that you have the capability to delete the thread, and will in fact delete it.
All because the program authenticated you based on the session cookie (or even based on a "remember me" cookie).
With a RESTful API there is no cookie and no state is maintained between requests, so there is no need to protect against session hijacking.
The way you usually authenticate with a RESTful API is by sending some additional headers. If someone tricks you into clicking a URL that points to the RESTful API, the browser is not going to send those extra headers, so there is no risk.
In short: if a REST API is designed the way it is supposed to be (stateless), then there is no risk of cross-site request forgery and no need for CSRF protection.
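For instance, with header-based authentication the token is attached explicitly by application code on every call (a sketch; the endpoint and token handling are illustrative):

// A forged link or auto-submitted form cannot add this header,
// so merely visiting a malicious page cannot trigger an authenticated call.
async function getOrders(token: string): Promise<unknown> {
  const response = await fetch("https://api.example.com/orders", {
    headers: { Authorization: `Bearer ${token}` },
  });
  return response.json();
}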
Adding another answer, as Dmitri's answer mixes up server-side state and cookies.
An application is not stateless if your server stores user information in memory across multiple requests. This decreases horizontal scalability because you need to find the "correct" server for every request.
Cookies are just a special kind of HTTP header. They are often used to identify a user's session, but not every cookie implies server-side state. The server could also use the information from the cookie without starting a session. On the other hand, using other HTTP headers does not automatically make your application stateless: if you store user data in your server's memory, it isn't.
The difference between cookies and other headers is the way they are handled by the browser. Most important for us is that the browser will resend them on every subsequent request. This is a problem if someone tricks a user into making a request he doesn't want to make.
Is this a problem for an API which consumes JSON? Yes, in two cases:
The attacker makes the user submit a form with enctype=text/plain: URL-encoded content is not a problem because the result can't be valid JSON. text/plain is a problem if your server interprets the content not as plain text but as JSON. If your resource is annotated with @Consumes(MediaType.APPLICATION_JSON) you should not have a problem, because it won't accept text/plain and should return status 415. (Note that JSON may become a valid enctype one day, and then this won't hold any more.) A hand-rolled equivalent of this check is sketched after this list.
The attacker makes the user submit an AJAX request: the Same-Origin Policy prevents AJAX requests to other domains, so you are safe as long as you don't disable this protection with CORS headers such as Access-Control-Allow-Origin: *.
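For readers not using JAX-RS, roughly the same content-type check could be done by hand; this Node sketch only mirrors what the @Consumes annotation gives you for free (the port and response bodies are arbitrary):

import { createServer } from "http";

// Reject any request body not declared as JSON, answering with 415,
// which is what a @Consumes(MediaType.APPLICATION_JSON) resource does.
const server = createServer((req, res) => {
  const contentType = req.headers["content-type"] ?? "";
  if (req.method === "POST" && !contentType.startsWith("application/json")) {
    res.statusCode = 415; // Unsupported Media Type
    res.end("Unsupported Media Type");
    return;
  }
  // ...normal request handling would go here...
  res.statusCode = 200;
  res.end("ok");
});

server.listen(8080);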

How to stop people stealing from my website?

I consider myself a newbie when it comes to securing my web applications.
I have built a website which updates its pages regularly through an AJAX call. The AJAX call returns a decent JSON object to be used on the client side.
There is a simple problem I need to overcome: how can I prevent other people from using the same AJAX call without permission? What if they build a website and, on the client side, let their users make the same AJAX call to my servers and grab what they need, and then parse it to their own needs on the client side?
I cannot put an extra layer of security like user authentication.
They won't be able to actually do this from the client directly, because the browser will prevent cross-domain AJAX requests for anything other than JSONP (scripts). That said, they can proxy it through their own server if they want, so it doesn't buy you much.
ASP.NET MVC has an antiforgery token mechanism that you should look at for inspiration. The basic idea is that you use both an encrypted cookie and an encrypted, hidden form input containing the same data that you write to each page that you want to secure. Do your AJAX calls using a POST and make sure to send back the form input. On the server-side decrypt the cookie and input and compare the data to ensure they're the same. Since the cookie is tied to your domain, it will be much harder to inject in the request that is being sent back. Use SSL and regenerate the cookie/input content periodically to make it even harder to fake the cookie/input.
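A rough sketch of that double-submit idea in Node/TypeScript; unlike ASP.NET MVC it signs the token with an HMAC rather than encrypting it, and the secret, cookie value, and form value handling are illustrative only:

import { createHmac, randomBytes, timingSafeEqual } from "crypto";

const SECRET = "replace-with-a-real-server-side-secret";

// Issue a token: the raw nonce goes into a cookie, a signed copy into a hidden form input.
function issueToken(): { cookieValue: string; formValue: string } {
  const nonce = randomBytes(16).toString("hex");
  const signature = createHmac("sha256", SECRET).update(nonce).digest("hex");
  return { cookieValue: nonce, formValue: `${nonce}.${signature}` };
}

// Verify: the form value must carry the cookie's nonce and a valid signature.
function verifyToken(cookieValue: string, formValue: string): boolean {
  const [nonce, signature] = formValue.split(".");
  if (!nonce || !signature || nonce !== cookieValue) return false;
  const expected = createHmac("sha256", SECRET).update(nonce).digest("hex");
  return signature.length === expected.length &&
    timingSafeEqual(Buffer.from(signature), Buffer.from(expected));
}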
You can check the HTTP Referer header (HTTP_REFERER in many server environments) and see whether the request originates from your page. This can, however, be spoofed, so don't think of it as a bulletproof solution. The best counter-measure is user authentication, really.
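A minimal Referer check along those lines (the allowed origin is a placeholder, and as noted above, this can be spoofed):

import type { IncomingMessage } from "http";

const ALLOWED_ORIGIN = "https://www.example.com"; // placeholder for your own site

// Accept the request only if the Referer points back to our own pages.
function refererLooksOk(req: IncomingMessage): boolean {
  const referer = req.headers["referer"];
  return typeof referer === "string" && referer.startsWith(ALLOWED_ORIGIN);
}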
You can't, because you can't differentiate between an AJAX call from your web app and one from another user's web app.
Here are some things that might help a little bit.
Obscuring/encrypting your AJAX response. This fails mainly because you have to include the decryption code in your app as well.
Check the IP origin. If the IP didn't access your server before, you can assume that the AJAX call is not from your website. This doesn't work if a) the user switches the IP while being on your site / timing out or b) if another website sends a fake http request first before using your AJAX API.
Another idea would be to send JavaScript instead of a JSON object. The JavaScript would contain all the logic needed to update your website and could, of course, check whether the page is your own (via window.location). That has some disadvantages though: more work for you, a higher traffic load, and it can be broken anyway.
I don't think it's a bad thing actually. Another website could have just as easily scraped the info from your website.
If by "stealing" you mean getting some content from your website (using HTTP GET), that's more or less the same problem as hot-linking. You could have some basic protection technique using the HTTP Referer header (it can be worked around, but it works in most cases).
The other problem you have (making sure the requests come from your application) has to do with CSRF (Cross-Site Request Forgery). There are various protection mechanisms against this, mostly based on embedding tokens in forms, for example.
You could potentially combine the two approaches, although the real protection against getting the content would come from user authentication (otherwise, the other site could also get the page from which you're delivering those tokens and proxy it).
(In addition, techniques that rely on remembering the IP address would probably not work well in the whole web architecture: it might cause problems if you get a pool of proxy servers or if the client is a mobile device that may change IP address between various requests, which would be perfectly legitimate.)