I installed django-cors-headers in my Django application.
I want to display an SVG file in the web browser.
The first time, it does not load properly and it shows a 304 response in the network tab.
Can anyone help me rectify this problem?
This response should be fine; it just indicates that your browser has a cached version. It saves Django from having to send the response body again.
From Wikipedia
304 Not Modified
Indicates that the resource has not been modified since the version specified by the request headers If-Modified-Since or If-None-Match. This means that there is no need to retransmit the resource, since the client still has a previously-downloaded copy.
This sounds like what you are looking for, as the SVG should still be rendered by the browser.
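If you want Django itself to answer that conditional request, a minimal sketch using the condition decorator might look like this (the view name, URL wiring, and SVG path are hypothetical, not taken from the question):

import hashlib
from django.http import HttpResponse
from django.views.decorators.http import condition

SVG_PATH = "static/diagram.svg"  # hypothetical path

def svg_etag(request):
    # Hash the file so the ETag changes whenever the SVG changes.
    with open(SVG_PATH, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

@condition(etag_func=svg_etag)
def show_svg(request):
    # Only reached when the client's cached copy is stale;
    # otherwise the decorator answers with a 304.
    with open(SVG_PATH, "rb") as f:
        return HttpResponse(f.read(), content_type="image/svg+xml")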
Related
I am currently trying to implement this solution here. The solution seems pretty simple and possible since I am the owner of both of the hosts. On mysite1.com I have added the following img tag.
<img src="//www.mysite2.com/cookie_set/" style="display:none;">
On mysite2.com (Django), I have a view like so:
from django.http import HttpResponse

def cookie_set(request):
    response = HttpResponse()
    response.set_cookie('my_cookie', value='awesome')
    return response
When I release this code live, I get the following error:
Cross-Origin Read Blocking (CORB) blocked cross-origin response https://www.mysite2.com/cookie_set/ with MIME type text/html. See https://www.chromestatus.com/feature/121212121221 for more details.
I thought that maybe if I just added "Access-Control-Allow-Origin" in my view this might fix things, but according to the docs here: https://www.chromium.org/Home/chromium-security/corb-for-developers, there's one more consideration:
For example, it will block a cross-origin text/html response requested from a <script> or <img> tag, replacing it with an empty response instead.
Are my assumptions correct? After adding the correct headers, should I just change the content-type to something other than text/html?
Ultimately, my goal is to set a cookie for a different domain that I control (ideally without a redirect).
Best solution: use a different tag for this (i.e. an iframe).
The point behind CORB is to prevent certain tags from being used for XSSI data injection, so img tag requests should not return text/html, application/json, or XML content types.
So unless the call to the img tag really is just for capturing the request itself (for referrer tracking, for example), you get much more versatility by executing it in an iframe anyway (as in SSO-redirection workflows).
See also: Setting third party cookie by using 1x1 <img> tag - Javascript doesn't drop cookie
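For illustration, a rough sketch of the iframe variant, reusing the cookie_set view from the question; the SameSite/Secure flags are my assumption for a cross-site cookie over HTTPS (they need Django 2.1+), not something from the original post. On mysite1.com:

<iframe src="https://www.mysite2.com/cookie_set/" style="display:none;"></iframe>

and on mysite2.com:

from django.http import HttpResponse

def cookie_set(request):
    # text/html is fine here because the request comes from an <iframe>,
    # not an <img>, so CORB does not block it.
    response = HttpResponse("")
    # Assumption: cross-site cookies in modern browsers need SameSite=None
    # plus Secure, which in turn requires serving over HTTPS.
    response.set_cookie('my_cookie', value='awesome', samesite='None', secure=True)
    return response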
I fixed this for image files by updating the Content-Type metadata under Properties in S3 - image/jpeg for JPEG files and image/png for PNG files.
My application uploads image files via multer-s3, and it seems it applies Content-Type: 'application/x-www-form-urlencoded'. It has a contentType option with a content-type auto-detect feature; using that should prevent improper headers and fix the CORB issue.
It seems the Chrome 76 update started honoring the remote file's response headers, specifically Content-Type. CORB was not an issue in other browsers such as Firefox, Safari, and in-app browsers (e.g. Instagram's).
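If the upload happens from Python rather than multer-s3, a small sketch of setting the Content-Type explicitly with boto3 (the bucket name and helper function are hypothetical):

import mimetypes
import boto3

s3 = boto3.client("s3")

def upload_image(path, key, bucket="my-bucket"):
    # Guess image/png, image/jpeg, etc. from the filename so the object
    # is not served as application/x-www-form-urlencoded.
    content_type, _ = mimetypes.guess_type(path)
    s3.upload_file(
        path,
        bucket,
        key,
        ExtraArgs={"ContentType": content_type or "application/octet-stream"},
    )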
I'm trying to use Unity 2017.3 to send a basic HTTP POST request from Unity scripting. I want to send image data, which I can access in my script either as file.png or as a byte[]. I am only posting to a local server running Python/Django, and the server does register the POST request coming in, but no matter what I do the request arrives at my web app empty of any content, body, files, or raw data.
IEnumerator WriteAndSendPng() {
    // extra code that gets bytes from a render texture goes here
    // can verify that drawing.png is a valid image for my purposes
    System.IO.File.WriteAllBytes("drawing.png", bytes);
    List<IMultipartFormSection> formData = new List<IMultipartFormSection>();
    formData.Add(new MultipartFormFileSection("drawing", bytes, "drawing.png", "image/png"));
    UnityWebRequest www = UnityWebRequest.Post("http://127.0.0.1:8000/predict/", formData);
    yield return www.SendWebRequest();
    if (www.isNetworkError || www.isHttpError) Debug.Log(www.error);
    else Debug.Log("Form upload complete!");
}
I'm following the docs and there are a bunch of constructors for MultipartFormFileSection, most of which I feel like I've tried. I've also tried UploadHandlers and the old AddBinaryData WWW API; still, the request is always empty when it hits my app... I've read the thorough response to this SO ticket, Sending http requests in C# with Unity, and tried many of the implementations there, but again Django receives empty requests. Even submitting the simplest possible request from Unity sends an empty request. So weird.
List<IMultipartFormSection> formData = new List<IMultipartFormSection>();
formData.Add (new MultipartFormDataSection ("someVar=something"));
The Python server just sees:
[11/Feb/2018 14:14:12] "POST /predict/ HTTP/1.1" 200 1
>>> print(request.method) # POST
>>> print(request.encoding) # None
>>> print(request.content_type) #multipart/form-data
>>> print(request.POST) # <QueryDict: {}>
>>> print(request.body) # b''
I thought it might be a Django issue, but when I post to the same app with Postman or from other sources, it sees the incoming data just fine. Has anyone done this recently? I thought this would be a piece of cake, and many hours into it I remain stymied. All help appreciated! Thanks, all.
Figured it out courtesy of Unity staffer Aurimas-Cernius on their forums: "The issue most likely is that your Django server does not support HTTP 1.1 chunked transfer. You can either try updating your server, or set chunkedTransfer property on UnityWebRequest to false."
He was right. Setting that flag to false allowed me to start sending simple test-case data and receiving it as expected in the Django app; I bet I'll be able to get images working in no time. I was also experiencing side effects from using Python 3.5.x (which I had mistakenly assumed I needed). Upgrading that fixed the chunk issue, too. Cheers!
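For completeness, a rough sketch of what the receiving Django view could look like, using the "drawing" field name and the /predict/ URL from the question (the csrf_exempt decorator and the output filename are my assumptions):

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt  # Unity does not send a Django CSRF token
def predict(request):
    if request.method != "POST":
        return JsonResponse({"error": "POST required"}, status=405)
    uploaded = request.FILES.get("drawing")  # MultipartFormFileSection field name
    if uploaded is None:
        return JsonResponse({"error": "no file received"}, status=400)
    # Write the upload to disk in chunks (hypothetical output path).
    with open("received_drawing.png", "wb") as out:
        for chunk in uploaded.chunks():
            out.write(chunk)
    return JsonResponse({"status": "ok", "size": uploaded.size})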
My understanding of setting Cache-Control with a max-age value is that the browser is instructed to cache the file.
What I then expect is that if I hit "enter" on the address bar for the same link, the browser would return a 200 (from cache) response.
My question is that why is it returning a 304 Not Modified response?
The way I see it is that with the 200 (from cache), the browser no longer makes a connection with the server to validate the file and immediately serves the cached content. But with the 304, although the browser will not download the file again (the server simply instructs it to serve the cached copy), it still needs to send a request to validate the freshness of the content.
The assets here are served with Amazon's CloudFront CDN with Amazon S3 buckets as the origin. The Cache-Control headers there (in S3) have already been set. This is not an issue for any of the other, self-hosted assets.
Thanks for the help!
EDIT: I found this: What is the difference between HTTP status code 200 (cache) vs status code 304?. Additional question: I already have Cache-Control set to max-age=31536000, s-maxage=2592000, no-transform, public and I'm still getting a 304; do I need to set Expires as well? I could cache fine before on self-hosted sites with just the Cache-Control.
You expect to see a 200 with the content, rather than a 304 saying "not modified". The 304 is the result of the browser asking whether the content is newer than what it has cached, and it means "no, don't waste your bandwidth, your content is current". The browser can do this with a couple of methods: ETag and If-Modified-Since.
As an example, we can use your stackoverflow avatar image. When I load that in Chrome and look at the Developer Tools, I can see it has a 304 response and is passing those two headers:
if-modified-since:Thu, 28 Jan 2016 13:16:24 GMT
if-none-match:"484ab25da1294b24f8d9d13afb913afd"
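You can reproduce the same round trip outside the browser; a small sketch with the Python requests library (the URL is a placeholder, not your actual avatar):

import requests

url = "https://example.com/avatar.png"  # placeholder URL

first = requests.get(url)
print(first.status_code)  # 200, full body downloaded

# Re-request with the validators from the first response.
second = requests.get(
    url,
    headers={
        "If-None-Match": first.headers.get("ETag", ""),
        "If-Modified-Since": first.headers.get("Last-Modified", ""),
    },
)
print(second.status_code)   # 304 if the resource has not changed
print(len(second.content))  # 0 -- no body is retransmitted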
I use the Postman extension to check out my RESTful APIs.
I am trying to make a request to my "localhost", but it seems to have cached one of the query parameters.
I tried clearing my Chrome browser's cache, but this does not seem to work. I even went to the extent of changing the API resource name.
Has anyone come across such an issue?
The Cache-Control request header can be used, but one thing to clarify:
no-cache does not mean "do not cache". In fact, it means "revalidate with the server" on every HTTP request before using any cached response. If the server says that the resource is still valid, then the cache will still use the cached version.
no-store, on the other hand, effectively asks not to cache at all and is intended to prevent anything from being stored in the cache.
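If you want to see the effect outside Postman, a small sketch of sending that request header with the Python requests library (the URL is a placeholder):

import requests

resp = requests.get(
    "http://localhost:8000/api/items/",  # placeholder URL
    # Ask any cache in the path to revalidate with the server instead of
    # replaying a stored response.
    headers={"Cache-Control": "no-cache"},
)
print(resp.status_code)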
I tried the solution above and it didn't work for me. What worked was restarting the application. I'm using Eclipse and running a Spring Boot application.
In case someone is using the same environment and facing the same problem, this may help.
I suggest using the Postman app rather than the extension, because with the app you can do a lot more: use the console to debug your APIs, and create/delete cookies and cache entries through an excellent GUI.
I came across the same situation where requests were cached in Postman. I deleted the JSESSIONID cookie from the Cookies section in Postman rather than closing the app; that solved my problem (meaning the call reached my localhost app) and I got an accurate response. Please try this if you need a solution.
I usually just request the data in a Chrome incognito tab or Firefox private tab; I guess this just resets the cache, and then it appears in my Postman app.
(I would recommend using the Postman app instead of the website as it has many more features!)
So here's the story: I've got a Play Framework application that uses the org.apache.cxf plugin to provide SOAP services. In my routes file, I have the following:
GET /soap/*path org.apache.cxf.transport.play.CxfController.handle(path)
POST /soap/*path org.apache.cxf.transport.play.CxfController.handle(path)
This routes to one of my own functions that turns the path into another request that will hit my usual controllers. We do this by building up a WSRequestHolder object: we set headers, query parameters, etc.
This used to work quite well in play 2.2 but with the upgrade to 2.3.8, there seems to be an issue. I've traced it to this line:
Promise<WSResponse> responsePromise = request.get();
WSResponse response = responsePromise.get(2000);
When we make the request (when calling responsePromise.get), the call times out regardless of the timeout set. I was testing with a basic login request and it used to respond in less than 200 ms. I've reproduced the request parameters using Postman and the request works fine on its own, but when it's fired from my web service, it times out.
I may have missed something in the upgrade to 2.3, but I'm not even sure what to debug. It clearly doesn't hit the controller, and turning on Play logs at the DEBUG level doesn't even show the request.
Any help would be appreciated.
Update:
I have tested it in dev and prod mode. Both seem to fail in the same place.
I figured it out. The issue was that, during our redirection of the request, we were adding the Content-Length header twice, once as the actual length and once as zero (in an attempt to force the regeneration of the length). It turns out this works in Play 2.2 but causes the request to hang in 2.3. Making sure to add the Content-Length header only once prevents the request from hanging. Dev/prod modes tested and working.