Django log only receiving some activity

Suddenly, as of this morning, our Django log does not show the log output from some requests, even though it appears that all requests are being serviced. Some requests show up in the log, others don't, and no one has changed anything. (I am the only person who has access to change anything.) Does anyone know of a reason this could occur?
Thanks.

This turns out not to be a Django issue at all. Unbeknownst to us, our web hosting provider adjusted an Apache policy regarding caching. Our iOS app now needs to explicitly specify its cache policy in the HTTP headers, or provide a unique URL parameter for each request; otherwise Apache serves up stale responses. (The requests were not even reaching Django.)
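For anyone hitting the same thing, here is a minimal sketch of the client-side workaround. Our client is an iOS app, but the idea is shown here in Python with the requests library; the endpoint URL and parameter name are made up:

```python
import time
import requests

# Hypothetical endpoint; substitute your own API URL.
API_URL = "https://example.com/api/status"

# Option 1: tell intermediate caches (Apache, in our case) not to serve a stale copy.
resp = requests.get(API_URL, headers={"Cache-Control": "no-cache"})

# Option 2: make every request URL unique so a cached response can never match.
resp = requests.get(API_URL, params={"_ts": int(time.time() * 1000)})

print(resp.status_code)
```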

Related

Privacy Policy(ies). Does the cookie "collect" browser data or "request" browser data?

I'm working on the legal portion of my site, the Privacy Policy in particular. I've done the research and found that nearly all the answers to my question (below) are generalized.
Question: Do cookies "collect" data from user browsers, or do cookies "request" then receive data from user browsers?
This seems to be a very important distinction. Do I put into my privacy policy that my site "collects" data from my users, or that it "requests" data from my users?
My understanding of the core functionality is that cookies request data from the user's browser or about browser activity. Users control how their browser responds to (or handles) cookies in their browser settings. If users have ultimate control over how "responses" to cookies are handled, is it proper for website privacy policies to state that they use cookies to collect browser data? Isn't it more accurate to state something like: "We use cookies to request data from your browser. Depending on how you have configured your settings, your response to our request may impact your experience." Or something along those lines.
For years, the way I understood the phrase "cookies collect browser data" was that we (websites) force code (the cookie) onto your browser, which opens a sieve for all your activity to flow back to us. But this isn't the case at all. Cookies actually make a "request" (i.e., ask) for the user's permission first, and depending on how the user has set up their browser settings, the cookie request is honored or denied.
I'm trying to stay away from the term "collect" as a general matter. I think it's improperly used and leaves the wrong impression on users.
Has anyone else thought about this? Am I missing something?
Cookies are stored on the system/computer, or, you could say, in the browser. Cookies are used for authentication, preferences, advertisements, performance and analytics, and security purposes. Yes, you need to mention that in the privacy policy; some organizations also add a separate cookie policy.
The following should be mentioned about cookies in policies for standard web applications:
The application may use and store cookies on your system/computer, which can help it better know your preferences when you visit the website later. Cookies can also be used for authentication/session checks, advertising, performance, analytics and research, and security purposes. //Remove whichever is not applicable to your site.
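To make the mechanics concrete (a minimal sketch only, not legal advice): the server sets a cookie, and the browser, subject to the user's settings, sends it back on later requests. Django is used here purely as an illustration; the cookie name and view are hypothetical:

```python
# A minimal Django view illustrating the mechanics: the server *sets* a cookie,
# and the browser (subject to the user's settings) resends it on later requests.
from django.http import HttpResponse

def homepage(request):
    # Read back whatever the browser chose to send us (may be absent).
    theme = request.COOKIES.get("theme", "light")
    response = HttpResponse(f"Rendering page with the {theme} theme")
    # Ask the browser to store a preference cookie for one year.
    response.set_cookie("theme", theme, max_age=60 * 60 * 24 * 365)
    return response
```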

Amazon S3 custom error pages and enabling web hosting on bucket

Hi, I'll try to keep it brief; I hope one of you knows the answer and I'm not duplicating content.
At the moment I'm using a bucket to take the strain off my server, uploading large user files to Amazon. These are then served back to users when they want them via expiring URLs. When a URL expires the user is sent an XML response saying access is denied, and I want to show them a custom error page instead.
Both this question (Create my own error page for Amazon S3) and the docs (http://docs.aws.amazon.com/AmazonS3/latest/dev/CustomErrorDocSupport.html) say you must enable web hosting on the bucket for custom error pages...
So the question is: if I do this, and then just grant all users permission to access only the custom error pages, will it mess anything up with my current usage scenario?
Or is it as simple as everything else staying the same? The docs seem vague and I don't want to mess up my current system...
Sorry if this is a noob question, but everyone with the same problem in my research seems happy with the 'Enable hosting' answer and I just want to be sure...
Cheers all
Ed
It's not possible to combine the two things you're trying to combine: query string authentication and custom error pages.
S3 buckets can be made accessible by two different sets of endpoints, each providing a different set of front-end behaviors.
The REST endpoints provide authentication and private content (and SSL), while the Web site endpoints provide custom error (and index) documents, but the objects must be public in order to be accessible, since the web site endpoint does not support authentication (or SSL).
The differences are explained here:
http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html#WebsiteRestEndpointDiff
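For reference, the expiring URLs described in the question come from the REST endpoint's query-string authentication. A minimal sketch with boto3 (bucket and key names are placeholders):

```python
import boto3

# Placeholder bucket/key; substitute your own.
s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-user-files", "Key": "uploads/report.pdf"},
    ExpiresIn=3600,  # URL stops working after an hour; S3 then returns a 403 XML body
)
print(url)
```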
In some environments, I use an intermediate reverse proxy, hosted in EC2, acting as a front-end for S3 (which gives me the additional capability of rewriting portions of the request headers and capturing access logs in real time). I suspect this is also the most viable mechanism for providing "friendly" errors. My proxy already does this when the URL is completely missing elements like Signature= (since that can't possibly be anything but an error), but I have not yet implemented anything to capture 403 Forbidden responses and style them up.
I did do some preliminary testing to add a Link: header to the error response (in the proxy), in an attempt to convince the browser to load an XSL stylesheet, but so far that has not proven viable.

Why CSRF protect session-less users?

Some frameworks (e.g. Django) support CSRF protection for users without any kind of session. What is the benefit of that?
What is the exploit that a CSRF attack could take advantage of when there is no existing session for the user?
Off the top of my head:
Having CSRF protection on day 1 means you don't need to worry about adding it after the fact if on day 17 you add user sessions
Even if there are no explicit sessions, there could still be some other authentication or mechanism protecting the site (for example, a Django site running on your private network: if you were browsing that site AND evil.com from inside your network, evil.com could trick your browser into sending requests to your private site. Rather unlikely, but it makes the point, I hope).
You might also want to raise this on the security stack overflow.
(Updated based on comment below)
Even if there was no authentication or other reason to trust the browser, there are two other weak benefits for using CSRF protection:
As Bobince points out, it does prevent simpler spamming attacks (since they now need to connect to the first page to get the CSRF token), and it also means that if someone does do something malicious, the IP in the server logs is going to be linked to them and not the client's IP. (Of course, that's spoofable etc. but it's still slightly better than making it trivial to make it look like the attack is coming from someone else)
If you were using some form of persistent authentication that wasn't based around session association (e.g., HTTP Basic Auth), that would still need CSRF protection.
For entirely anonymous connections, it can still act as an obfuscation measure to block automated form submissions from the stupider kind of bots.
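For context, this is roughly what Django's protection looks like from the client side even without a login session: the token arrives in the csrftoken cookie and has to be echoed back in a header or form field. A minimal sketch (the site URL and paths are placeholders; the cookie is only set on pages that actually render a CSRF token):

```python
import requests

BASE = "https://example.com"  # placeholder site protected by Django's CsrfViewMiddleware

with requests.Session() as s:
    # GET a page that renders a CSRF token so Django sets the csrftoken cookie
    # (no login or session required).
    s.get(BASE + "/contact/")
    token = s.cookies.get("csrftoken")

    # POSTs without the token are rejected with 403, even for anonymous users.
    ok = s.post(
        BASE + "/contact/",
        data={"message": "hi"},
        headers={"X-CSRFToken": token, "Referer": BASE + "/contact/"},
    )
    print(ok.status_code)
```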

Are JAXRS restful services prone to CSRF attack when content type negotiation is enabled?

I have a RESTful API with annotations like #Consumes(MediaType.JSON) - in that case, would a CSRF attack still be possible against such a service? I've been tinkering with securing my services with CSRFGuard on the server side, or having a double submit from the client side. However, when I tried to POST requests using a FORM with enctype="text/plain", it didn't work. The technique is explained here. It works if I have MediaType.APPLICATION_FORM_URLENCODED in my consumes annotation. Content negotiation is useful when I'm using the POST/PUT/DELETE verbs, but GET is still accessible, which might need looking into.
Any suggestions or inputs would be great, also please let me know if you need more info.
Cheers
JAX-RS is designed for creating REST APIs, which are supposed to be stateless.
Cross-Site Request Forgery is NOT a problem for stateless applications.
The way Cross-Site Request Forgery works is that someone tricks you into clicking a link, or opening a link in your browser, that directs you to a site where you are logged in, for example some online forum. Since you are already logged in on that forum, the attacker can construct a URL, say something like this: someforum.com/deletethread?id=23454
That forum program, being badly designed, will recognize you based on the session cookie, will confirm that you have the capability to delete the thread, and will in fact delete that thread.
All because the program authenticated you based on the session cookie (or even based on a "remember me" cookie).
With a RESTful API there are no cookies and no state is maintained between requests, so there is no need to protect against session hijacking.
The way you usually authenticate with a RESTful API is by sending some additional headers. If someone tricks you into clicking a URL that points to the REST API, the browser is not going to send those extra headers, so there is no risk.
In short: if the REST API is designed the way it is supposed to be - stateless - then there is no risk of cross-site request forgery and no need for CSRF protection.
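To illustrate the point about header-based authentication: credentials sent this way have to be attached explicitly by the client, so a forged cross-site link or form won't carry them. A sketch with Python's requests library (the URL and token are made up):

```python
import requests

API = "https://api.example.com/threads/23454"  # placeholder REST endpoint
TOKEN = "secret-token-issued-to-this-client"   # hypothetical bearer token

# The legitimate client attaches the credential explicitly on every call...
requests.delete(API, headers={"Authorization": f"Bearer {TOKEN}"})

# ...whereas a browser following an attacker's link sends no such header,
# so the same request made cross-site is rejected by a properly secured API.
resp = requests.delete(API)  # simulates the forged request: no Authorization header
print(resp.status_code)      # e.g. 401/403
```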
Adding another answer, as Dmitri's answer mixes up server-side state and cookies.
An application is not stateless if your server stores user information in memory across multiple requests. This decreases horizontal scalability, as you need to find the "correct" server for every request.
Cookies are just a special kind of HTTP header. They are often used to identify a user's session, but not every cookie implies server-side state. The server could also use the information from the cookie without starting a session. On the other hand, using other HTTP headers does not necessarily mean that your application is automatically stateless; if you store user data in your server's memory, it's not.
The difference between cookies and other headers is the way they are handled by the browser. Most important for us is that the browser will resend them on every subsequent request. This is problematic if someone tricks a user into making a request he doesn't want to make.
Is this a problem for an API which consumes JSON? Yes, in two cases:
The attacker makes the user submit a form with enctype=text/plain: URL-encoded content is not a problem because the result can't be valid JSON, but text/plain is a problem if your server interprets the content not as plain text but as JSON (see the sketch below). If your resource is annotated with #Consumes(MediaType.JSON), you should not have a problem, because it won't accept text/plain and should return status 415. (Note that JSON may become a valid enctype one day, and then this won't hold any more.)
The attacker makes the user submit an AJAX request: the Same-Origin Policy prevents AJAX requests to other domains, so you are safe as long as you don't disable this protection by using CORS headers such as Access-Control-Allow-Origin: *.
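As promised above, a small Python sketch (not JAX-RS code) of why the content-type check matters for the text/plain case: a text/plain form encodes each input as name=value, and by splitting the JSON around the "=" an attacker can still produce a body that parses as valid JSON. The field contents here are invented for illustration:

```python
import json

# A text/plain form post encodes each input as  name=value  joined by newlines.
# Splitting the attacker's JSON around the "=" keeps the resulting body parseable.
field_name = '{"thread": 23454, "action": "delete", "pad": "'
field_value = '"}'
body = f"{field_name}={field_value}"   # what enctype=text/plain would produce

print(body)                # {"thread": 23454, "action": "delete", "pad": "="}
print(json.loads(body))    # still valid JSON -> a lenient endpoint would accept it,
                           # which is why rejecting text/plain with 415 matters
```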

How to stop people stealing from my website?

I consider myself newbie when it comes to securing my web applications.
I have built a website which updates the webpages regularly through an AJAX call. The Ajax call returns a decent JSON object to be used at the client side.
There is a simple problem I need to overcome: how can I prevent other people from using the same AJAX call without permission? What if they build a website and, on the client side, allow their users to make the same AJAX call to my servers, grab what they need, AND THEN parse it for their own purposes on the client side?
I cannot put an extra layer of security like user authentication.
They won't be able to actually do this from the client directly because the browser will prevent cross domain AJAX requests for anything other than JSONP (scripts). That said, they can proxy it on their server if they want so it doesn't buy you much.
ASP.NET MVC has an antiforgery token mechanism that you should look at for inspiration. The basic idea is that you use both an encrypted cookie and an encrypted, hidden form input containing the same data that you write to each page that you want to secure. Do your AJAX calls using a POST and make sure to send back the form input. On the server-side decrypt the cookie and input and compare the data to ensure they're the same. Since the cookie is tied to your domain, it will be much harder to inject in the request that is being sent back. Use SSL and regenerate the cookie/input content periodically to make it even harder to fake the cookie/input.
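A minimal sketch of that double-submit idea in Python. This is framework-agnostic and signs the token rather than encrypting it, and all names are made up; ASP.NET MVC's actual implementation differs, but the cookie-versus-form-field comparison is the same:

```python
import hashlib
import hmac
import secrets

SECRET_KEY = b"server-side-secret"  # hypothetical signing key, kept on the server

def issue_token():
    """Generate the pair: one copy goes in a cookie, one in a hidden form input."""
    nonce = secrets.token_hex(16)
    sig = hmac.new(SECRET_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    token = f"{nonce}.{sig}"
    return token, token  # (cookie value, form-field value)

def verify(cookie_token, form_token):
    """On each AJAX POST, both copies must be present, well-signed, and identical."""
    try:
        nonce, sig = cookie_token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and hmac.compare_digest(cookie_token, form_token)
```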
You can check the HTTP_REFERER HTTP header and see if the request originates from your page. This can however be spoofed, so don't think of it as a bulletproof solution. The best countermeasure is user authentication, really.
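A quick sketch of that check in a Django view (the allowed prefix and view name are placeholders; treat this as a weak filter, not real security):

```python
from django.http import HttpResponseForbidden, JsonResponse

ALLOWED_REFERER_PREFIX = "https://www.example.com/"  # placeholder: your own site

def ajax_data(request):
    referer = request.META.get("HTTP_REFERER", "")
    # Spoofable, so this only filters out the laziest callers.
    if not referer.startswith(ALLOWED_REFERER_PREFIX):
        return HttpResponseForbidden("requests must originate from our pages")
    return JsonResponse({"items": [1, 2, 3]})
```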
You can't. That's because you can't differentiate between an AJAX call from your web app and one from another user's web app.
Here are some things that might help a little bit.
Obscuring/encrypting your AJAX response. This fails mainly because you have to include the decryption code in your app as well.
Check the IP origin. If the IP didn't access your server before, you can assume that the AJAX call is not coming from your website. This doesn't work if a) the user switches IPs while on your site or times out, or b) another website sends a fake HTTP request first before using your AJAX API.
Another idea would be to send JavaScript instead of a JSON object. The JavaScript would contain all the logic needed to update your website, and could of course check that the website is your own (via window.location). That has some disadvantages, though: more work for you, a higher traffic load, and it can be broken anyway.
I don't think it's a bad thing actually. Another website could have just as easily scraped the info from your website.
If by "stealing" you mean getting some content from your website (using HTTP GET), that's more or less the same problem as hot-linking. You could have some basic protection technique using the HTTP Referer header (it can be worked around, but it works in most cases).
The other problem you have (making sure the requests come from your application) have to do with CSRF (Cross-Site Request Forgery). There are various protection mechanisms against this, mostly based on embedding tokens in forms for example.
You could potentially combine the two approaches, although the real protection against getting the content would come from user authentication (otherwise, the other site could also get the page from which you're delivering those tokens and proxy it).
(In addition, techniques that rely on remembering the IP address would probably not work well in the whole web architecture: it might cause problems if you get a pool of proxy servers or if the client is a mobile device that may change IP address between various requests, which would be perfectly legitimate.)