In Django projects deployed on Heroku, I used to upload files to Google Cloud Storage via boto. However, recently I have had to upload large files, which causes Heroku request timeouts.
I am following Heroku's documentation on direct file upload to S3 and customizing it as follows:
Python:
conn = boto.connect_gs(gs_access_key_id=GS_ACCESS_KEY,
                       gs_secret_access_key=GS_SECRET_KEY)
presignedUrl = conn.generate_url(expires_in=3600, method='PUT', bucket=<bucketName>, key=<fileName>, force_http=True)
JS:
url = 'https://<bucketName>.storage.googleapis.com/<fileName>?Signature=...&Expires=1471451569&GoogleAccessId=...'; // "presignUrl"
postData = new FormData();
postData.append(...);
...
$.ajax({
    url: url,
    type: 'PUT',
    data: postData,
    processData: false,
    contentType: false,
});
I got the following error message:
XMLHttpRequest cannot load http:/... Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:8000' is therefore not allowed access.
EDIT:
The output of gsutil cors get gs://<bucketName>:
[{"maxAgeSeconds": 3600, "method": ["GET", "POST", "HEAD", "DELETE", "PUT"], "origin": ["*"], "responseHeader": ["Content-Type"]}]
It seems the CORS configuration is OK. So how do I solve the problem? Thanks.
EDIT 2:
The headers of the OPTIONS request from Firefox:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: zh-TW,zh;q=0.8,en-US;q=0.5,en;q=0.3
Access-Control-Request-Method: PUT
Connection: keep-alive
Host: <bucketName>.storage.googleapis.com
Origin: http://localhost:8000
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:48.0) Gecko/20100101 Firefox/48.0
The headers of the OPTIONS request from Chrome:
Accept:*/*
Accept-Encoding:gzip, deflate, sdch
Accept-Language:zh-TW,zh;q=0.8,en;q=0.6,en-US;q=0.4,zh-CN;q=0.2
Access-Control-Request-Headers:
Access-Control-Request-Method:PUT
Connection:keep-alive
Host:directupload.storage.googleapis.com
Origin:http://localhost:8000
Referer:http://localhost:8000/
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36
X-Client-Data:CIe2yQEIprbJAQjznMoB
The header issue is not coming from your app; I think it's coming from the Cloud Storage bucket. I had the same issue when setting up an API: the resource you are posting to is missing the header.
https://cloud.google.com/storage/docs/cross-origin
While useful for preventing malicious behavior, this security measure also prevents useful and legitimate interactions between known origins. For example, a script on a page hosted from Google App Engine at example.appspot.com might want to use static resources stored in a Cloud Storage bucket at example.storage.googleapis.com. However, because these are two different origins from the perspective of the browser, the browser won't allow a script from example.appspot.com to fetch resources from example.storage.googleapis.com using XMLHttpRequest because the resource being fetched is from a different origin.
So it looks like you need to configure the bucket to allow CORS requests. The Google documentation shows the following command, to be run with the gsutil CLI.
https://cloud.google.com/storage/docs/cross-origin#Configuring-CORS-on-a-Bucket
gsutil cors set cors-json-file.json gs://example
[
    {
        "origin": ["http://mysite.heroku.com"],
        "responseHeader": ["Content-Type"],
        "method": ["GET", "HEAD", "DELETE", "PUT"],
        "maxAgeSeconds": 3600
    }
]
This would allow you to get, upload, and delete content. Hope that helps.
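If you manage the bucket from Node.js rather than the gsutil CLI, the same configuration can also be applied programmatically. A minimal sketch, assuming the @google-cloud/storage client (the bucket name and origin below are placeholders):
// Sketch: apply the CORS configuration with the Node.js client instead of gsutil
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

async function setBucketCors() {
    await storage.bucket('example').setCorsConfiguration([
        {
            origin: ['http://mysite.heroku.com'], // placeholder origin
            responseHeader: ['Content-Type'],
            method: ['GET', 'HEAD', 'DELETE', 'PUT'],
            maxAgeSeconds: 3600,
        },
    ]);
}

setBucketCors().catch(console.error);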
Based on the information in EDIT 2, something is wrong with the request. The preflight (OPTIONS) request includes the header ACCESS-CONTROL-REQUEST-HEADER. This is not a valid CORS header; the correct header is ACCESS-CONTROL-REQUEST-HEADERS, notice the 'S' at the end.
Even if the header name were correct, it should not be requesting authorization for an Access-Control-Allow-Origin header. Access-Control-Allow-Origin is not a header sent from the client; it is a header the server automatically sends in its response when it receives a preflight request. The browser will not allow a cross-origin PUT request unless the preflight response from the cross-origin server includes an Access-Control-Allow-Origin header authorizing the document's current origin.
The presence of the bad header correlates well with the error response you are receiving. However, it looks like that header was probably not in your original code and was added later (based on your comments). Make sure to take that header configuration out; it is definitely not correct.
So I am a little confused about where that header is coming from, but I think it is the source of your problem.
It looks like you are using jQuery to make the AJAX PUT request. All I can really suggest is to make sure you haven't called $.ajaxSetup() somewhere in your JS code that might be configuring the bad header.
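For example, a global setup like the following hypothetical snippet would inject the bad header into every request, including the cross-origin PUT; if anything like it exists in your code, remove it:
// Hypothetical misconfiguration to look for and remove:
// Access-Control-Allow-Origin is a response header and must never be sent by the client.
$.ajaxSetup({
    headers: { 'Access-Control-Allow-Origin': '*' }
});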
After much trial and error, I came up with the following. The code works; however, some of the uploaded images are sometimes not visible, while at other times they are fine. I have no idea why this happens.
I'd like to solicit more ideas on why the uploads succeed but some of the images end up corrupted.
gsutil commands:
gsutil cors set cors.json gs://<bucketName>
gsutil defacl ch -u allUsers:R gs://<bucketName>
Content of cors.json file:
[
    {
        "origin": ["*"],
        "responseHeader": ["Content-Type"],
        "method": ["GET", "POST", "HEAD", "DELETE", "PUT"],
        "maxAgeSeconds": 3600
    }
]
HTML:
<p id="status">Choose your avatar:</p>
<input id="fileInput" type="file">
JavaScript:
$(document).on('change', '#fileInput', function() {
    var $this = $(this);
    var file = $this[0].files[0];
    $.ajax({
        url: 'upload/sign/?fileName=' + file.name + '&contentType=' + file.type,
        type: 'GET'
    })
    .done(function(data) {
        var response = JSON.parse(data);
        uploadFile(file, response.presignedUrl, response.url, response.contentType);
    })
    .fail(function() {
        alert('Unable to obtain a signed URL.');
    });
});
function uploadFile(file, presignedUrl, url, contentType) {
    var postData = new FormData();
    postData.append('file', file);
    $.ajax({
        url: presignedUrl,
        type: 'PUT',
        data: postData,
        headers: {
            'Content-Type': contentType,
        },
        processData: false,
        contentType: false
    })
    .done(function() {
        alert('File upload successful');
    })
    .fail(function() {
        alert('Unable to upload the file.');
    });
}
Django:
Project's urls.py:
urlpatterns = [
    ...
    url(r'upload/', include('upload.urls', namespace='upload')),
]
App's urls.py:
urlpatterns = [
    url(r'^$', views.upload, name='upload'),
    url(r'^sign/', views.sign, name='sign'),
]
views.py:
def upload(request):
    # ... render the template

def sign(request):
    fileName = request.GET.get('fileName')
    contentType = request.GET.get('contentType')
    conn = boto.connect_gs(gs_access_key_id=GS_ACCESS_KEY,
                           gs_secret_access_key=GS_SECRET_KEY)
    presignedUrl = conn.generate_url(3600, 'PUT', GS_BUCKET_NAME, fileName,
                                     headers={'Content-Type': contentType})
    return HttpResponse(
        json.dumps({
            'presignedUrl': presignedUrl,
            'url': GS_URL + fileName,
            'contentType': contentType
        })
    )
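A possible explanation for the intermittently corrupted images (see also the FormData discussion in the later answers on this page): wrapping the file in a FormData object and PUT-ing it to the signed URL stores the multipart boundaries as part of the object. A hedged sketch of uploadFile that sends the raw File instead, reusing the same names as above:
function uploadFile(file, presignedUrl, url, contentType) {
    $.ajax({
        url: presignedUrl,
        type: 'PUT',
        data: file,               // the File/Blob itself, with no FormData wrapper
        processData: false,       // keep jQuery from serializing the body
        contentType: contentType  // must match the Content-Type the URL was signed with
    })
    .done(function() { alert('File upload successful'); })
    .fail(function() { alert('Unable to upload the file.'); });
}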
In my experience, I would like to note that it is not possible to bypass Heroku's 30-second timeout without using the JavaScript AWS SDK. Don't use the Python AWS SDK (boto); you have to leave the back-end out of this entirely. As for your access origin error, the solution is your CORS configuration. You should put this in your CORS policy:
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": ["ETag"]
    }
]
Next, for the JavaScript AWS SDK, follow my answer here: Upload file to s3 in front-end with JavaScript AWS SDK on django
There's a lot missing from the answer I linked, as I had to come up with a custom solution because the JavaScript AWS SDK also runs past Heroku's 30-second timeout. What I did was upload the video via the JavaScript SDK, then pass the video's AWS URL to another view in a two-step Django form. By changing Django views I reset Heroku's 30-second timeout with the video already in my S3 bucket, and passed the fileKey to my URL with the redirect. In the second part of the form I gather the other information for my Django object and then submit it. It was hard going through all the documentation on direct upload to S3. If anyone is reading this and needs help, please comment for more. I'm on my phone now, but I'll kindly respond from my desktop to post code snippets ✌🏾
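As a rough illustration of the flow described above, here is a minimal sketch using the AWS SDK for JavaScript v2 in the browser; the bucket name, credential setup, and form field are placeholder assumptions, not the code from the linked answer:
// Assumes the AWS SDK v2 script is loaded and credentials are configured elsewhere.
const s3 = new AWS.S3();

function uploadVideo(file) {
    const params = {
        Bucket: 'my-bucket',   // placeholder bucket name
        Key: file.name,
        Body: file,
        ContentType: file.type
    };
    // The upload goes browser-to-S3, so Heroku's 30-second limit never applies.
    s3.upload(params, function(err, data) {
        if (err) return console.error(err);
        // Step 1 of the two-step form: stash the S3 URL, then submit to the next Django view.
        document.querySelector('#id_video_url').value = data.Location; // hypothetical form field
    });
}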
Related
I'm getting the following CORS error:
Access to fetch at 'https://___backend.herokuapp.com/api/tickets/21/' from origin 'http://___frontend.herokuapp.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
I believe all my django-cors-headers settings are correct:
CORS_ALLOW_ALL_ORIGINS = False
CORS_ALLOW_CREDENTIALS = True
CORS_ALLOWED_ORIGINS = ["https://____frontend.herokuapp.com"]
CSRF_TRUSTED_ORIGINS = ["https://____frontend.herokuapp.com"]
CORS_ALLOW_HEADERS = DEFAULT_HEADERS #this list includes Access-Control-Allow-Origin
CORS_ALLOW_METHODS = [
    "DELETE",
    "GET",
    "OPTIONS",
    "PATCH",
    "POST",
    "PUT",
]
The corsheaders middleware is at the top of the middleware list, and the app is in INSTALLED_APPS.
The weird part is that I'm only getting this error for 2 endpoints even though I'm using the exact same request options on the frontend:
const requestOptions = {
    method: 'GET',
    headers: {
        'Accept': 'application/json, text/plain, */*',
        'Content-Type': 'application/json',
        'Authorization': `Token ${localStorage.getItem('token')}`
    }
}
I'm really confused as to how API calls to other endpoints work fine but not these two. Another weird part is that once I get the CORS error for the two endpoints and then try to sign out, for example, I get the same error from that endpoint. If I don't access the two troublesome endpoints beforehand, the sign-out endpoint works fine.
I have a Google Cloud Storage bucket with the following CORS configuration:
[
    {
        "origin": ["http://localhost:8080"],
        "responseHeader": [
            "Content-Type",
            "Access-Control-Allow-Origin",
            "Origin"
        ],
        "method": ["GET", "HEAD", "DELETE", "POST", "PUT", "OPTIONS"],
        "maxAgeSeconds": 3600
    }
]
I am generating a signed URL with the following code:
let bucket = storage.bucket(bucketName);
let file = bucket.file(key);
const options = {
    version: "v4",
    action: "write",
    expires: Date.now() + 15 * 60 * 1000, // 15 minutes
    contentType: "application/zip"
};
let url = (await file.getSignedUrl(options))[0];
For my requests I am using the following headers:
Origin: http://localhost:8080
Content-Type: application/zip
When I try using a PUT request to upload the data, everything works fine and I get the Access-Control-Allow-Origin header containing my origin. But when I do an OPTIONS request with the exact same headers, it fails to return the Access-Control-Allow-Origin header. I have tried many alterations to my CORS config, but none have worked, such as:
Changing the origins to *
The different changes described in a related Stack Overflow answer and its comments.
The different changes described in GitHub Google Storage API Issues
I solved my own problem with some help from my colleagues. When I was testing the function in Postman, the CORS header was not sent in the response to the OPTIONS request because the request was missing the Access-Control-Request-Method header. Once I added that header, it worked fine.
Notice that, as stated in the public documentation, you shouldn't specify OPTIONS in your CORS configuration, and that Cloud Storage only supports DELETE, GET, HEAD, POST, and PUT for the XML API, and DELETE, GET, HEAD, PATCH, POST, and PUT for the JSON API. So I believe what you are experiencing with the OPTIONS method is expected behavior.
I'm trying to upload files to Google Cloud Storage (GCS) from the client browser. For that, I have a Node.js back-end request a signed URL from GCS, which is then transmitted to the front-end client to upload the file. The signed URL request code is the following:
const options = {
    version: "v4",
    action: "write",
    expires: Date.now() + 15 * 60 * 1000, // 15 minutes
    contentType: content_type,
};
// Get a v4 signed URL for uploading file
const [url] = await storage
    .bucket(bucketName)
    .file(filename)
    .getSignedUrl(options);
This is working. The issue comes when I try using the Signed URL on the client. Every request ends up with the following error:
Access to XMLHttpRequest at 'https://storage.googleapis.com/deploynets_models/keras_logo.png?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=urlsigner%40deploynets.iam.gserviceaccount.com%2F20200711%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20200711T192805Z&X-Goog-Expires=900&X-Goog-SignedHeaders=content-type%3Bhost&X-Goog-Signature=82b73435e4759e577e9d3b8056c7c69167fdaac5f0f450381ac616034b4830a7661bdb0951a82bf749a35dc1cf9a8493b761f8993127d53948551d7b33f552d118666dcf8f67f494cfaabf2268d7235e955e1243ce3cd453dcc32552677168ad94c6f1fca0032eb57941a806cc14139915e3cd3efc3585497715a8ad32a1ea0278f2e1165272951ae0733d5c6f77cc427fd7ff69431f74f1f3f0e7779c28c2437d323e13a2c6474283b264ab6dc6a94830b2b26fde8160684839a0c6ea551ca7eff8e2d348e09a8c213a93c0532f6fed1dd167cd9cf3480415c0c35987b27abd03684e088682eb5e89008d33dcbf630b58ea6b86e7d7f6574466aa2daa982566' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
I've set the CORS policy on the GCS bucket with the following JSON:
[
    {
        "origin": ["*"],
        "method": ["GET", "POST", "PUT", "DELETE", "OPTIONS"],
        "responseHeader": ["*"],
        "maxAgeSeconds": 120
    }
]
(I've also tried listing specifically the origin as http://localhost:3000 for example)
It should be mentioned that I have got it to work in one specific case: when the upload payload is just a plain string. For some reason it works fine then.
I've looked up all the similar errors I could find online, but to no avail so far. Any ideas?
Update: I finally got it to work! I had to add contentType: 'application/octet-stream' and processData: false. Otherwise I was getting the errors:
CORS error Access to fetch at 'https://storage.googleapis.com/...' from origin 'http://localhost:8080' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Uncaught TypeError: Illegal invocation and Uncaught (in promise) TypeError: Failed to execute 'arrayBuffer' on 'Blob': Illegal invocation
So the final AJAX request looks like this (working):
$.ajax({
    url: <GCS signed upload URL>,
    type: 'PUT',
    data: f, // file object
    contentType: 'application/octet-stream', // seems to be required
    processData: false, // seems to be required
});
I also had to set the bucket CORS using the command:
gsutil cors set gcs_cors.json gs://<my_bucket>
The file gcs_cors.json contents were:
[
    {
        "origin": ["https://<myapp>.appspot.com", "http://localhost:8080"],
        "responseHeader": ["Content-Type"],
        "method": ["GET", "HEAD", "DELETE", "PUT", "POST"],
        "maxAgeSeconds": 120
    }
]
Original post: I am not sure how this worked for you (happy it did, though!). I have tried using the file directly and using FormData, but it was not working. Uploading to GCS seems to be quite a frustrating process. I was hoping the documentation would have a complete working example (front-end/back-end), but it doesn't seem to be the case.
I figured out what was wrong. In the upload code (browser side), I was passing a FormData object as the data to be uploaded, which for some reason was triggering the CORS error above. I fixed it when I passed the file directly.
So instead of using the following:
var data = new FormData()
data.append('file', event.target.files[0])
I use the file in event.target.files[0] directly.
I was facing the exact same issue, and there was a problem with the options I was passing. Try changing the options to:
const options = {
    action: "write",
    expires: Date.now() + 15 * 60 * 1000, // 15 minutes
};
This solution worked for me.
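For context, here is a hedged sketch of why dropping contentType can help, assuming the same @google-cloud/storage client and a fetch-based upload (the bucket, key, and zipBlob below are placeholders). With a v4 URL, a contentType passed to getSignedUrl becomes part of the signature, so the client's PUT must send exactly that Content-Type header; omitting it signs a URL that does not pin the Content-Type:
// Server side: signing for a specific content type makes the header mandatory on upload.
const [signedUrl] = await storage.bucket('my-bucket').file('archive.zip').getSignedUrl({
    version: 'v4',
    action: 'write',
    expires: Date.now() + 15 * 60 * 1000,
    contentType: 'application/zip', // part of the signature
});

// Client side: the header must match what was signed, or GCS rejects the request.
await fetch(signedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/zip' },
    body: zipBlob, // placeholder Blob containing the zip bytes
});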
I'm attempting to use multiple CORS origin sites with Google Storage.
Mozilla Firefox seems to not have any issues with multiple origins. But Google Chrome throws this error: Access to XMLHttpRequest at 'FILE AT GOOGLE STORAGE' from origin 'https://example2.org' has been blocked by CORS policy: The 'Access-Control-Allow-Origin' header has a value 'https://example1.org' that is not equal to the supplied origin.
I have tried writing the cors json file like so:
[
    {
        "origin": ["https://example1.org", "https://example2.org"],
        "responseHeader": ["Content-Type"],
        "method": ["GET", "HEAD"],
        "maxAgeSeconds": 1800
    }
]
and like so:
[
    {
        "origin": ["https://example1.org"],
        "responseHeader": ["Content-Type"],
        "method": ["GET", "HEAD"],
        "maxAgeSeconds": 1800
    },
    {
        "origin": ["https://example2.org"],
        "responseHeader": ["Content-Type"],
        "method": ["GET", "HEAD"],
        "maxAgeSeconds": 1800
    }
]
Google Chrome doesn't like either variant.
The first way you specified it is correct. Save a JSON file (e.g. bucket-cors-config.json):
[{
    "origin": ["https://example1.org", "https://example2.org"],
    "responseHeader": ["Content-Type"],
    "method": ["GET", "HEAD"],
    "maxAgeSeconds": 1800
}]
Then use the gsutil CLI to set it on your bucket:
gsutil cors set bucket-cors-config.json gs://my-bucket
If you're checking your different origins in a browser and you're getting a CORS error, make sure it's not because of the browser cache. Some browsers will use the cached response on the wrong origin because the destination URLs match. This cached response will have the wrong origin header and cause the CORS error.
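A quick way to check whether the cache is the culprit is to bypass it for a single request; a minimal sketch (the URL is a placeholder):
// cache: 'no-store' forces a fresh request, so a stale cached CORS response can't interfere.
fetch('https://storage.googleapis.com/my-bucket/file.png', { cache: 'no-store' })
    .then(res => console.log('CORS passed, status:', res.status))
    .catch(err => console.error('still blocked by CORS:', err));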
Arjun's method works well.
In case anyone is having issues with the preflight being cached and doesn't wish to set the origin to a wildcard or reduce the cache window, old-school cache-busting tricks work well, such as adding a query-string value.
It's a bit of a hack, but it worked well in my use case to append an origin query-string value:
const fetchHandler = url => fetch(`${url}?cache=${window.origin}`)
    .then(res => {
        if (res.ok) return Promise.resolve(res);
        return Promise.reject(res);
    })
    .catch(err => Promise.reject(err));
export default fetchHandler;
I have a React application linked to a Django backend on two separate servers. I am using DRF for Django, and I allowed CORS using django-cors-headers. For some reason, when I curl a POST to the backend, the request goes through. However, when I POST with axios, I get an error: the status of the POST request from axios is failed, and the request takes more than 10 seconds to complete. My code was working locally (both the React and Django code), but when I deployed to an AWS EC2 Ubuntu instance, the axios requests stopped working.
Console error logs
OPTIONS http://10.0.3.98:8000/token-auth/ net::ERR_CONNECTION_TIMED_OUT
{
    "config": {
        "transformRequest": {},
        "transformResponse": {},
        "timeout": 0,
        "xsrfCookieName": "XSRF-TOKEN",
        "xsrfHeaderName": "X-XSRF-TOKEN",
        "maxContentLength": -1,
        "headers": {
            "Accept": "application/json, text/plain, */*",
            "Content-Type": "application/json;charset=UTF-8",
            "Access-Control-Allow-Origin": "*"
        },
        "method": "post",
        "url": "http://10.0.3.98:8000/token-auth/",
        "data": "{\"username\":\"testaccount\",\"password\":\"testpassword\"}"
    },
    "request": {}
}
Here is my request code
axios.post('http://10.0.3.98:8000/token-auth/',
    JSON.stringify(data),
    {
        mode: 'no-cors',
        headers: {
            'Content-Type': 'application/json',
            'Access-Control-Allow-Origin': '*'
        },
    },
).then(res =>
    console.log(JSON.stringify(res))
).catch(err =>
    console.log(JSON.stringify(err))
);
My curl command that worked:
curl -d '{"username":"testaccount", "password":"testpassword"}' -H "Content-Type: application/json" -X POST http://10.0.3.98:8000/token-auth/
UPDATE 1
On Firefox I am getting the warning:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://10.0.3.98:8000/token-auth/. (Reason: CORS request did not succeed).
UPDATE 2
Perhaps it has something to do with my AWS VPC and subnets? My Django server is in a private subnet, while my React app is in a public subnet.
UPDATE 3 - my idea of what the problem is
I think the reason my axios requests aren't working is that the requests I'm making set the Origin request header to http://18.207.204.70:3000 (the public/external IP address) instead of the private/internal IP address, which is http://10.0.2.219:3000. I found online that Origin is a forbidden header field, so it can't be changed. How can I set the origin then? Do I have to use a proxy, and how can I do that?
Try this HTTP request library instead of axios. It's called superagent (https://www.npmjs.com/package/superagent); just install it in your React app via npm:
npm i superagent
and use this instead of axios.
import request from 'superagent'

const payload = {
    "1": this.state.number,
    "2": this.state.message
}

request.post('LINK HERE')
    .set('Content-Type', 'application/x-www-form-urlencoded')
    .send(payload)
    .end((err, res) => { // arrow function keeps `this` bound to the component
        if (res.text === 'success') {
            this.setState({
                msgAlert: 'Message Sent!',
            })
        } else {
            console.log('message failed/error')
        }
    });
The issue here is that the request is being made from the client browser. You need to either use a reverse proxy or make the request directly to the API server. You cannot use local SSH forwarding either.
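If you go the reverse-proxy route, a minimal sketch with Express and http-proxy-middleware (v1 or later) might look like the following; the package choice and addresses are assumptions based on the setup described in the question, not a tested configuration:
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Forward browser requests for /token-auth to the Django server in the private subnet.
app.use('/token-auth', createProxyMiddleware({
    target: 'http://10.0.3.98:8000', // private-subnet Django instance from the question
    changeOrigin: true               // rewrite the Host header to match the target
}));

app.listen(3000);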