I'm making requests from a JavaScript client to a REST API built with Django REST framework.
All GET requests to /api/test are public, so no session, token, or anything else is needed.
All POST requests to /api/test are private, and the user has to authenticate with OAuth2.
According to the documentation, I have to manage cross-origin requests with django-cors-headers. After installing this module in my Django project, I've set CORS_ORIGIN_ALLOW_ALL to True, but:
1) Is this good practice?
2) Is there a good way to allow cross-origin requests only on certain endpoints?
Thanks
With django-cors-headers you can restrict allowed origins with CORS_ORIGIN_WHITELIST and CORS_ORIGIN_REGEX_WHITELIST. If you don't need to allow arbitrary origins, set those; otherwise, you're good.
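A minimal settings sketch (the origin values are placeholders; note that newer versions of django-cors-headers rename these settings to CORS_ALLOWED_ORIGINS / CORS_ALLOWED_ORIGIN_REGEXES):

```python
# settings.py (sketch) -- restrict CORS instead of allowing every origin.
INSTALLED_APPS = [
    # ...
    "corsheaders",
]

MIDDLEWARE = [
    "corsheaders.middleware.CorsMiddleware",  # place before CommonMiddleware
    # ...
]

CORS_ORIGIN_ALLOW_ALL = False
CORS_ORIGIN_WHITELIST = [
    "https://app.example.com",  # placeholder origin
]
CORS_ORIGIN_REGEX_WHITELIST = [
    r"^https://\w+\.example\.com$",  # placeholder pattern
]
```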
You could, if you wanted to, write a decorator that checks the Origin header in your views against a desired origin (perhaps something set on whatever model tracks which users are authorized for POST requests?). But if you're allowing GET requests from any arbitrary origin, and don't care where POST requests come from as long as they are authorized, then you're in the clear; after all, how can you restrict origin if you don't know where clients might make requests from?
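A framework-neutral sketch of such a decorator (the allow-list and the request shape are illustrative; in Django you would read request.META.get("HTTP_ORIGIN") and return HttpResponseForbidden):

```python
from functools import wraps

# Hypothetical allow-list; in a real project this could live in settings.py
# or on whatever model tracks authorized clients.
ALLOWED_POST_ORIGINS = {"https://app.example.com"}

def origin_allowed(view_func):
    """Reject requests whose Origin header is not on the allow-list.

    The Origin header is set by the browser and can be forged by
    non-browser clients, so this only complements real authentication
    (OAuth2 here); it never replaces it.
    """
    @wraps(view_func)
    def wrapper(request, *args, **kwargs):
        origin = request.headers.get("Origin")
        if origin is not None and origin not in ALLOWED_POST_ORIGINS:
            return (403, "Origin not allowed")  # stand-in for HttpResponseForbidden
        return view_func(request, *args, **kwargs)
    return wrapper
```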
Related
I am making an app which includes a messaging feature. Through it, users can send photos to other users. These photos should be completely private.
At first, I thought of S3's signed URL feature. But then I realized that I cannot make caching work, either by my CDN provider or on the client side, because caching is done based on URLs and every signed URL is different.
So I moved on to CloudFront's signed cookies. They seemed promising at first, but I found another problem: users who get signed cookies can access any content in the allowed scope. But I must not show photos that were sent in other chat rooms; users with signed cookies should not be able to access photo URLs that were not shared in their own rooms. So I cannot use signed cookies.
I moved on to CloudFlare and found a post explaining that Enterprise customers can use custom cache keys instead of URL-based caching (https://blog.bigbinary.com/2019/01/29/how-to-cache-all-files-using-cloudflare-worker-along-with-hmac-authentication.html). I do not know how much the Enterprise Plan costs, but the Business Plan, one level below it, is $200/month.
The Business Plan allows CloudFlare users to use token authentication (https://blog.cloudflare.com/token-authentication-for-cached-private-content-and-apis/, https://support.cloudflare.com/hc/en-us/articles/115001376488-How-to-setup-Token-Authentication-). I might be able to utilize this token authentication by requesting my images with a token like this:
<Image
  source={{
    uri: 'https://image_url.jpeg',
    method: 'GET',
    headers: {
      Authorization: token,
    },
  }}
  style={{ width: width, height: height }}
/>
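CloudFlare's token authentication is HMAC-based, so something on my side has to mint the token. A rough sketch with Python's stdlib (the secret, the message layout, and the "{timestamp}-{mac}" format are assumptions modeled on CloudFlare's articles; the exact format must be checked against their docs):

```python
import hashlib
import hmac
import time
from base64 import urlsafe_b64encode

def make_token(secret, path, now=None):
    """Mint an HMAC token over the resource path plus a timestamp.

    The message format (path + timestamp) and the "{timestamp}-{mac}"
    output layout are assumptions; CloudFlare's Worker must verify the
    token with the same secret and the same message format.
    """
    timestamp = int(time.time()) if now is None else now
    message = "{}{}".format(path, timestamp).encode()
    mac = urlsafe_b64encode(hmac.new(secret, message, hashlib.sha256).digest()).decode()
    return "{}-{}".format(timestamp, mac)
```

The token would then be attached to the image request (for example in the Authorization header shown above), and the Worker recomputes the HMAC and rejects mismatches or expired timestamps.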
Another thing I could do is get signed URLs from CloudFront rather than at the S3 level. That way, I can have my CDN (CloudFront, in this case) properly cache my S3 images while still having unique URLs per photo. But I still have to deal with client-side caching, as the URLs clients see are always different. I would have to save URLs in localStorage, as this answer (https://stackoverflow.com/a/37817503) suggests, or use a React Native caching library. However, I will deploy this app on the web as well as on mobile, so I am not sure such a caching library is a viable option for me.
To sum up, signed URLs cause problems at two levels: they do not work with CDN caching, and they do not work with client caching. I could use CloudFront's signed URLs and deal with client-side caching (which is not ideal), or I could use CloudFlare's token method. Bandwidth is free with CloudFlare, though the Business Plan costs $200/month. Will it be worth it if I assume my app scales well?
What discourages me from using CloudFlare is that it is not well documented. I would have to deal with CloudFlare Workers, but the only document I found about how to use signed URLs at the CDN level is this (https://developers.cloudflare.com/workers/about/tips/signing-requests/#verifying-signed-requests), and the only one I found about how to access a private S3 bucket from CloudFlare is this (https://help.backblaze.com/hc/en-us/articles/360010017893-How-to-allow-Cloudflare-to-fetch-content-from-a-Backblaze-B2-private-bucket).
Is CloudFlare with token verification method the right way to go for me? Is there any other method I can try out?
I have to design an API that will support browser plugins only (latest version of Chrome, Firefox, IE). The API will be served over HTTPS. The API will be using a cookie-based access control scheme.
I am wondering what tactics to employ for CSRF prevention. Specifically, I want my API to accept requests only from my own browser plugin and not from any other pages/plugins.
Would I be able to:
Assume that in most cases there would be an Origin header?
Would I be able to compare and trust the Origin header to ensure that the requests only come from a white-listed set of Origins?
Would this be compatible across the board (Chrome/Firefox/IE)?
I'm aware of multiple techniques used to prevent CSRF, such as the Synchronizer Token Pattern, but I would like to know whether, within the limited scope above, simply checking the Origin header would be sufficient.
Thanks in advance!
Set a custom header like X-From-My-Plugin: yes. The server should require its presence. It can be a constant. A web attacker can either:
make the request from the user's browser: this sends the cookie, but they can't send the custom header cross-origin; or
make the request from a different HTTP client: they can send the custom header, but they can't send the cookie because they don't know it
Either way, the attacker's request won't have both the cookie and the custom header.
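A minimal sketch of the server-side check (the header name, cookie name, and framework-free request shape are all illustrative):

```python
def is_trusted_request(headers, cookies):
    """Require both the session cookie and the custom marker header.

    A cross-site page can make the browser send the cookie but cannot
    attach the custom header cross-origin; a non-browser client can
    attach the header but does not know the cookie. Only the plugin
    produces both.
    """
    has_marker = headers.get("X-From-My-Plugin") == "yes"
    has_session = "sessionid" in cookies  # hypothetical cookie name
    return has_marker and has_session
```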
My reading of this documentation is that if I wanted to make
an in-browser XMLHttpRequest on behalf of a logged-in user, I would need to use a URL of the form:
https://storage.cloud.google.com/BUCKET/OBJECT
Because those URLs respect cookie-based authentication.
However, my testing seems to indicate that the CORS headers set for the bucket are not sent along with the response from URLs of that form (but they are from URLs of the form storage.googleapis.com/BUCKET/OBJECT).
Is this true? Is there no way to get both cookie-based authentication, and CORS headers?
You are correct. Custom CORS policies are only fully supported via the "storage.googleapis.com" endpoint (including custom domain names with CNAME redirects to c.storage.googleapis.com), and that endpoint does not support cookies. There is not a good way to use both at once.
I would suggest avoiding cookie-based authentication if you can. OAuth 2 is a good alternative and may provide additional benefits, depending on what you are trying to do.
First apologies: This feels to me like a "dumb" question, and I expect I'll soon regret even asking it ...but I can't figure it out at the moment as my mind seems to be stuck in the wrong rut. So please bear with me and help me out:
My understanding is that "Same Origin" is a pain in the butt for web services, and in response CORS loosens the restrictions just enough to make web services work reasonably, yet still provides decent security to the user. My question is exactly how does CORS do this?
Suppose the user visits website A, which provides code that makes web service requests to website Z. But I've broken into and subverted website Z, and made it into an attack site. I quickly made it respond positively to all CORS requests (sending the header Access-Control-Allow-Origin: * on every response). Soon the user's computer is subverted by my attack from Z.
It seems to me the user never visited Z directly, knows nothing about Z's existence, and never "approved" Z. And it seems to me -even after the breakin becomes known- there's nothing website A can do to stop it (short of going offline itself:-). Wouldn't security concerns mandate A certifying Z, rather than Z certifying A? What am I missing?
I was investigating this as well, as my thought process was akin to yours. Per my new understanding: CORS doesn't provide security; it selectively relaxes it to provide functionality. Browsers in general don't allow cross-origin requests: if you go to shady.com, and a script there tries to access bank.com using a cookie on your machine, shady.com's script would be able to perform actions on bank.com using that cookie to impersonate you. To prevent this, bank.com would not mark its APIs as CORS-enabled, so that when shady.com's script begins the HTTP request, the browser itself blocks it.
So same-origin protects users from themselves, because they don't know what auth cookies are lying around; CORS allows a server that owns resources on behalf of the user to mark APIs as accessible from other sites' scripts, which causes the browser to relax its own cross-origin protection policy for them.
(anyone that understands this better, please add or correct as needed!)
CORS does nothing for security. It does allow someone selling web fonts to decide which websites get easy access to their fonts though. That's pretty much the only use case.
The user is just as unaware as they were before the introduction of CORS. And remember that cross-origin requests used to work before CORS (people often complain that you have to shim jQuery to get CORS support in IE... but in IE you could just make the request and get the response without any extra effort; it just worked).
Generally speaking, the trust model is backwards. As others said, you have implied trust by referencing some other site... so give me the freaking data!
CORS protects the website that receives the request (Z in your example) against the one that makes the request (A in your example) by telling the user's browser who is or is not allowed to see the response of the request.
When a JavaScript application asks the browser to make an HTTP request to an origin that's different from its own, the browser does not know if there is mutual agreement between the two origins to make such calls. Sure, if the request comes from origin A then A agrees (and A is responsible to its users if Z is malicious), but does Z, the recipient, agree? The only way for the browser to know is to ask Z, and it does that by actually making the request. Unless Z explicitly allows A to receive the response, the browser will not let A's application read it.
You are right that the only effect of CORS is to relax the same-origin policy. Before that policy, cross-origin requests were permitted, and the browser would automatically include the cookies it has for the destination; that is, it would send an authenticated request to Z. This means that, without the same-origin policy, A could browse Z just as if it were the user, see its data, etc. The same-origin policy fixes this very severe security vulnerability, but because some services still legitimately need cross-origin requests, CORS was created.
Note that CORS does not prevent the request from being sent, so if A's JS app sends a request to Z ordering it to send all the user's money to some account, Z will receive this request with all the cookies attached. This is called Cross-Site Request Forgery (CSRF). Interestingly, the main defence against this type of attack relies on the same browser machinery: it consists of requiring some secret value in the request (a "CSRF token") that can only be obtained through a request whose response the page is allowed to read, which A cannot make if it's not on Z's authorized list. Nowadays, same-site cookies can be used as well; they are easier to manage but don't work cross-origin.
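A sketch of the token mechanism described above (names are illustrative): the server issues a per-session secret that only readable responses can deliver, and later requires it back.

```python
import hmac
import secrets

def issue_csrf_token():
    """Generate a per-session secret. It is served only in responses that
    same-origin (or explicitly CORS-authorized) pages can read, so a page
    on attacker origin A never learns it."""
    return secrets.token_urlsafe(32)

def check_csrf(session_token, submitted_token):
    """Verify the token sent with a state-changing request against the one
    stored for the session, using a constant-time comparison."""
    if session_token is None or submitted_token is None:
        return False
    return hmac.compare_digest(session_token, submitted_token)
```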
I have a RESTful API whose resources have annotations like @Consumes(MediaType.APPLICATION_JSON). In that case, would a CSRF attack still be possible against such a service? I've been tinkering with securing my services with CSRFGuard on the server side, or with a double submit from the client side. However, when I tried to POST requests using a FORM with enctype="text/plain", it didn't work. (The technique is explained here.) The attack works if I have MediaType.APPLICATION_FORM_URLENCODED in my @Consumes annotation. The content negotiation helps for the POST/PUT/DELETE verbs, but GET is still accessible, which might need looking into.
Any suggestions or inputs would be great, also please let me know if you need more info.
Cheers
JAX-RS is designed for creating REST APIs, which are supposed to be stateless.
Cross-Site Request Forgery is NOT a problem for stateless applications.
The way Cross-Site Request Forgery works is that someone tricks you into clicking a link, or opening one in your browser, which directs you to a site where you are already logged in, for example some online forum. Since you are already logged in on that forum, the attacker can construct a URL, say something like this: someforum.com/deletethread?id=23454
That forum program, being badly designed, will recognize you based on the session cookie, confirm that you have the capability to delete the thread, and in fact delete it.
All because the program authenticated you based on the session cookie (or even based on a "remember me" cookie).
With a RESTful API there is no cookie and no state is maintained between requests, so there is no need to protect against session hijacking.
The way you usually authenticate with a RESTful API is by sending some additional headers. If someone tricks you into clicking a URL that points to the RESTful API, the browser is not going to send those extra headers, so there is no risk.
In short: if the REST API is designed the way it's supposed to be, stateless, then there is no risk of cross-site request forgery and no need for CSRF protection.
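For instance, header-based authentication as described above looks something like this (the endpoint and token value are placeholders); a tricked link click or form submit cannot produce this method/header combination:

```python
from urllib.request import Request

# A link click can only trigger a plain GET with whatever cookies the
# browser holds; it cannot set a verb or an Authorization header. An API
# client has to build the request explicitly:
req = Request(
    "https://api.example.com/threads/23454",  # placeholder endpoint
    method="DELETE",
    headers={"Authorization": "Bearer <access-token>"},  # placeholder token
)
```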
Adding another answer, as Dmitri's answer mixes up server-side state and cookies.
An application is not stateless if your server stores user information in memory across multiple requests. This decreases horizontal scalability, as you need to find the "correct" server for every request.
Cookies are just a special kind of HTTP header. They are often used to identify a user's session, but not every cookie implies server-side state: the server could also use the information from the cookie without starting a session. On the other hand, using other HTTP headers does not automatically make your application stateless; if you store user data in your server's memory, it isn't.
The difference between cookies and other headers is the way they are handled by the browser. Most important for us: the browser will resend cookies on every subsequent request. This is problematic if someone tricks a user into making a request he doesn't want to make.
Is this a problem for an API which consumes JSON? Yes, in two cases:
The attacker makes the user submit a form with enctype=text/plain: URL-encoded content is not a problem, because the result can't be valid JSON, but text/plain is a problem if your server interprets the content not as plain text but as JSON. If your resource is annotated with @Consumes(MediaType.APPLICATION_JSON), you should not have a problem, because it won't accept text/plain and should return status 415. (Note that JSON may become a valid enctype one day, and then this won't hold any more.)
The attacker makes the user submit an AJAX request: the Same-Origin Policy prevents AJAX requests to other domains, so you are safe as long as you don't disable this protection with CORS headers like Access-Control-Allow-Origin: *.
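The first case above can be sketched framework-independently (a rough Python stand-in for what @Consumes does in JAX-RS; names and return values are illustrative):

```python
import json

def handle_post(content_type, body):
    """Reject any request not declared as application/json with a 415,
    mirroring @Consumes(MediaType.APPLICATION_JSON).

    A form submitted with enctype=text/plain arrives as text/plain, so it
    is turned away before parsing, even if its body looks like JSON.
    """
    media_type = content_type.split(";", 1)[0].strip().lower()
    if media_type != "application/json":
        return (415, "Unsupported Media Type")
    return (200, json.loads(body))
```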