CORS issue with GCP signed URL - google-cloud-platform

I'm trying to upload to GCP from the frontend with fetch, using a signed URL, and I'm running into a persistent CORS issue.
Is the file to be uploaded supposed to be embedded in the signed URL, or sent to the signed URL in a request body?
This is the error:
Access to fetch at <signedurl> from origin 'http://my.domain.com:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
This is the CORS config on the bucket:
[
  {
    "origin": ["http://gcs.wuddit.com:3000"],
    "responseHeader": ["Content-Type", "Authorization", "Content-Length", "User-Agent", "x-goog-resumable", "Access-Control-Allow-Origin"],
    "method": ["GET", "POST", "PUT", "DELETE"],
    "maxAgeSeconds": 3600
  }
]
This is the fetch call:
const uploadHandler = async (theFile, signedUrl) => {
  try {
    const response = await fetch(signedUrl, {
      method: 'POST',
      headers: {
        'Content-Type': theFile.type,
      },
      body: theFile,
    });
    const data = await response;
  } catch (error) {
    console.error('Upload Error:', error);
  }
};
Signed URL example:
// https://storage.googleapis.com/my-bucket-name/my-filename.jpg?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=wuddit-images-service%40wuddit-427.iam.gserviceaccount.com%2F20210305%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20210305T032415Z&X-Goog-Expires=901&X-Goog-SignedHeaders=content-type%3Bhost&X-Goog-Signature=18a2428f051e59fbeba0a8b97a824bdee0c70cffe1f9cce5696e95b9fd81b74974f1c291e5195e874eac29ede5619c9cf07538f21b442fb81e7fc1e764bb5444f5d7acae78e3f2b5876450fccde122f059348efc6d5d8be1bbef7250a1aa2433957d85e65f51c69e8daf020341cbf8044ed2b532205a331acc3728437c9295b25bb6e61ef71a99798bb38a6f05e664678d5e12aed916ab41d2e2f9e7e0974588b797ebef87f2c0949f7071687d1d12f232e871d892f6cd2da397888285783d5372657822275f56a44f9ca14a21fb4e4d6552d380f9e4a597d12663c51aea0a2bdc0f47994f687b59c9d629c1010245cefc975718f3574cd6ae331aa1b89d797d

I figured this out. My lord. In my Node Express backend, on the endpoint I was using to call the generateV4UploadSignedUrl() function, I had to set the 'Access-Control-Allow-Origin' header on the `res`.
So this:
app.get('/api/gcloud', async (req, res) => {
  try {
    const url = await generateV4UploadSignedUrl().catch(console.error);
    res.json({ url });
  } catch (err) {
    console.log('err', err);
  }
});
Became this:
app.get('/api/gcloud', async (req, res) => {
  res.set('Access-Control-Allow-Origin', 'http://gcs.whatever.com:3000'); // magic line. Note this must match the domain on your GCP bucket config.
  try {
    const url = await generateV4UploadSignedUrl().catch(console.error);
    res.json({ url });
  } catch (err) {
    console.log('err', err);
  }
});
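For reference, the generateV4UploadSignedUrl() helper isn't shown here; below is a minimal sketch of what it might look like, assuming the @google-cloud/storage client (the bucket name, object name, and content type are placeholders):

const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

async function generateV4UploadSignedUrl() {
  const [url] = await storage
    .bucket('my-bucket-name')               // placeholder bucket
    .file('my-filename.jpg')                // placeholder object name
    .getSignedUrl({
      version: 'v4',
      action: 'write',                      // the URL is for uploading
      expires: Date.now() + 15 * 60 * 1000, // 15 minutes
      contentType: 'image/jpeg',            // must match the Content-Type the browser sends
    });
  return url;
}

Note that a URL signed with action: 'write' is normally consumed with an HTTP PUT, so if the fetch still fails after the CORS fix, it may also be worth trying method: 'PUT' instead of POST.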
My bucket CORS config:
[
  {
    "origin": ["http://gcs.whatever.com:3000"],
    "responseHeader": ["Content-Type", "Authorization"],
    "method": ["GET", "POST", "PUT", "DELETE"],
    "maxAgeSeconds": 3600
  }
]
Save the above into a file, e.g. 'cors.json', then cd into the directory where you saved it and use this gsutil command to set the bucket's CORS config:
gsutil cors set cors.json gs://your-bucket-name
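You can confirm what is actually applied on the bucket afterwards with:
gsutil cors get gs://your-bucket-name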

[
  {
    "origin": ["*"],
    "method": ["*"],
    "maxAgeSeconds": 3600,
    "responseHeader": ["*"]
  }
]
worked for me. responseHeader was the missing ingredient!

Related

Access to fetch blocked by CORS policy: Response to preflight request doesn't pass access control check

I am getting this error when trying to fetch a REST API from Amazon Web Services in a script defined in an HTML file:
Access to fetch at '$(url)' from origin 'null' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: It does not have HTTP ok status.
script
let body = { token: params.token };
const response = await fetch(
  url,
  {
    method: "POST",
    body: JSON.stringify(body),
    headers: { "Content-type": "application/json", "Access-Control-Allow-Origin": "*" },
  }
);
console.log(response);
const myJson = await response.json();
console.log("response-->", myJson);
if (myJson.statusCode != 200) {
  console.log("failed");
  return;
}
console.log("success");
return;
} // closing brace of the enclosing async function (not shown)
Running into CORS error
The API is deployed with the CORS configuration below:
[screenshot: CORS enabled in the API Gateway console]
Once you're done enabling CORS in the console (I see you have already done that in the screenshot),
you need to follow these steps to set up the Lambda.
In your function, include the headers and the response like this:
const headers = {
  'Content-Type': 'application/json',
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Methods': 'POST'
};
const response = {
  statusCode: 200,
  headers: headers,
  body: JSON.stringify({ token: params.token })
};
return response;
Then, in your fetch, you could call it directly and not await it, like this:
fetch(...).then((response) => {
  return response.json();
});
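Putting that together, a minimal handler might look something like this (a sketch assuming the Lambda proxy integration; the handler name and token handling are placeholders):

exports.handler = async (event) => {
  const params = JSON.parse(event.body || '{}'); // body sent by the fetch above

  return {
    statusCode: 200,
    headers: {
      'Content-Type': 'application/json',
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'POST'
    },
    body: JSON.stringify({ token: params.token })
  };
};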

How to validate a file which a user sends via presigned URL to an S3 bucket during upload

When a user sends a file or any data to an S3 bucket via a presigned URL, there is no restriction in between, so the user can send anything through the presigned URL to the S3 bucket.
But I want to check the data that the user sends between the presigned URL and the S3 bucket.
I am using the Serverless Framework.
Please help me. Thanks in advance.
My Lambda function code is here:
module.exports.uploadLarge = async (event) => {
  console.log({ event });
  try {
    const body = JSON.parse(event.body);
    console.log({ body });
    const action = body.action;
    const type = body.type;
    const key = body.key;
    const params = {
      Bucket: BucketName,
      Key: key,
      // ContentType: type,
      Expires: 10000,
    };
    if (action === "putObject") {
      params.ContentType = type;
      // params.Expires = 20000
    }
    console.log({ params });
    // const url = S3.getSignedUrlPromise(action, params);
    const u = S3.getSignedUrl(action, params);
    console.log({ u });
    // console.log({ url });
    return {
      statusCode: 200,
      body: JSON.stringify({ u }),
      headers: {
        // "Content-Type": "application/json"
        'Access-Control-Allow-Origin': '*',
      }
    };
  } catch (err) {
    return {
      statusCode: 500,
      headers: {
        "Content-Type": "application/json",
        "Access-Control-Allow-Origin": "*"
      },
      body: JSON.stringify(err)
    };
  }
};
But I want to check the data that the user sends between the presigned URL and the S3 bucket.
It's not possible with your current design. You can only perform a check after the user has uploaded the file. For example, set up an S3 trigger for the PutObject event, which will invoke a Lambda function to verify the file. Otherwise, you have to change your architecture and put some proxy between users and S3, for example API Gateway, CloudFront, or a custom application.
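As a rough sketch of the first option (a Lambda triggered by the S3 ObjectCreated event that validates the object after the upload), assuming the AWS SDK v2 client already used above; the size limit and content-type rule are only example checks:

const AWS = require('aws-sdk');
const S3 = new AWS.S3();

// handler wired to an s3:ObjectCreated:* event (e.g. in serverless.yml)
module.exports.validateUpload = async (event) => {
  for (const record of event.Records) {
    const Bucket = record.s3.bucket.name;
    const Key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    // inspect the uploaded object's metadata
    const head = await S3.headObject({ Bucket, Key }).promise();
    const tooLarge = head.ContentLength > 5 * 1024 * 1024;             // example rule: max 5 MB
    const wrongType = !(head.ContentType || '').startsWith('image/');  // example rule: images only

    // reject after the fact by deleting anything that fails validation
    if (tooLarge || wrongType) {
      await S3.deleteObject({ Bucket, Key }).promise();
    }
  }
};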

403 forbidden error when uploading to S3 bucket

I'm pretty new with AWS, but I'm fairly certain I had my IAM user set up properly... are there any other permissions I need to add other than AmazonS3FullAccess? The name implies that it should be enough... either it's a permissions issue or I messed up somewhere with my code.
I was trying to follow along with the guide at https://devcenter.heroku.com/articles/s3-upload-node. Any help would be appreciated. :)
Here is my relevant code:
// server side code
router.get('/sign-s3', (req, res) => {
  const s3 = new aws.S3();
  const { fileName, fileType } = req.query;
  s3.getSignedUrl('putObject', {
    Bucket: S3BUCKET,
    Key: fileName,
    Expires: 60,
    ContentType: fileType,
    ACL: 'public-read'
  }, (err, data) => {
    if (err) {
      console.log(err);
      res.status(500).json(err);
    }
    res.json({
      signedRequest: data,
      url: `https://${S3BUCKET}.s3.amazonaws.com/${fileName}`
    });
  });
});
// client side code
const onChangeHandler = (e) => {
  const file = e.target.files[0];
  axios
    .get(`/api/bucket/sign-s3?fileName=${file.name}&fileType=${file.type}`)
    .then(signedResponse => {
      axios
        .put(signedResponse.data.signedRequest, file, {
          headers: {
            'Content-Type': 'multipart/form-data'
          }
        })
        .then(response => {
          console.log("upload successful");
          props.addImages([signedResponse.data.url]);
        })
        .catch(error => console.error(error));
    })
    .catch(error => console.error(error));
};
And here is a screenshot of my error (403 Forbidden).
UPDATE:
Removing the line ACL: 'public-read' from my sign route allows the upload to go through, but then nobody can access the images. :P Based on John's comments down below I assumed it was some kind of header issue, so I added an 'x-amz-acl': 'public-read' header to my PUT request on the client side, but it's still giving me the same issue of an invalid signature.
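One thing that can cause exactly this with presigned PUT URLs: the request has to match what was signed. The route above signs with ContentType: fileType, but the client sends 'Content-Type': 'multipart/form-data', so the signatures won't agree. A sketch of the client call with the headers lined up with the signing parameters (whether the x-amz-acl header is required depends on how the ACL ends up in the signature, so treat that line as an assumption):

axios
  .put(signedResponse.data.signedRequest, file, {
    headers: {
      'Content-Type': file.type,     // must match the ContentType used in getSignedUrl
      'x-amz-acl': 'public-read'     // assumption: only if ACL was part of the signing params
    }
  })
  .then(response => console.log("upload successful"))
  .catch(error => console.error(error));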
I was receiving the same error with an IAM user with "AmazonS3FullAccess". What worked for me was adding this CORS configuration:
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": []
  }
]
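If you would rather apply that from the CLI than the console, the same rules can be set with aws s3api (note that put-bucket-cors expects the array wrapped in a CORSRules key):

aws s3api put-bucket-cors --bucket your-bucket-name \
  --cors-configuration '{"CORSRules":[{"AllowedHeaders":["*"],"AllowedMethods":["GET","PUT","POST"],"AllowedOrigins":["*"],"ExposeHeaders":[]}]}'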

Sending HTTP requests from AWS Lambda to Google Firebase Functions

I have set up Firebase Functions to receive HTTP requests and have verified that this is working. Now I'm trying to send an HTTP request to Firebase from an AWS Lambda function, but there is no response either in AWS Lambda or in the Firebase Functions log. This is my AWS Lambda code:
const postData = JSON.stringify({
  "queryresult": {
    "parameters": {
      "on": "1",
      "device": "1",
      "off": ""
    }
  }
});

const options = {
  hostname: 'https://<the firebase function endpoint>',
  port: 443,
  path: '',
  method: 'POST',
  headers: {
    'Content-Type': 'application/x-www-form-urlencoded',
    'Content-Length': Buffer.byteLength(postData)
  }
};

const req = https.request(options, postData)
  .then((response) => {
    console.log(response);
  })
  .catch((err) => {
    console.log(err);
  });

// Write data to request body
req.write(postData);
req.end();
} // closing brace of the enclosing handler (not shown)
The promise part here is supposed to execute the console logs, but it is not getting executed. Is there something that I'm missing here? The host is the URL that we obtain when we deploy a function. Or is there some Firebase or AWS plan-related problem? I'm using the Spark plan in Firebase. Thank you.
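For reference, https.request in Node does not return a promise, so the .then/.catch above never runs, and the hostname option must be a bare host with no 'https://' prefix. A minimal callback-based sketch (the hostname and path are placeholders for your Cloud Function's URL, and application/json is assumed since the body is JSON):

const https = require('https');

const options = {
  hostname: 'us-central1-your-project.cloudfunctions.net', // placeholder, no scheme here
  port: 443,
  path: '/yourFunction',                                   // placeholder
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(postData)
  }
};

const req = https.request(options, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => console.log('status:', res.statusCode, 'body:', body));
});

req.on('error', (err) => console.log(err));
req.write(postData);
req.end();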

AWS CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. The headers are present

I have been battling with these dreaded CORS issues with AWS for a while now. I thought I had it sorted out, and then it turned up again... I have done exactly what I have in the other Lambda functions that work fine.
Why won't it work now?
I have added the headers to the response in all of the Lambda functions in my handler.js file (I am using Serverless to deploy to AWS):
docClient.get(params, function (err, data) {
  if (err) {
    const response = {
      statusCode: 500,
      headers: {
        "Access-Control-Allow-Origin": "*", // Required for CORS support to work
        "Access-Control-Allow-Credentials": true
      },
      body: JSON.stringify({
        message: 'Failed to fetch service request from the database.',
        error: err
      }),
    };
    callback(null, response);
  }
  else {
    const response = {
      statusCode: 200,
      headers: {
        "Access-Control-Allow-Origin": "*", // Required for CORS support to work
        "Access-Control-Allow-Credentials": true
      }
    };
    callback(null, response);
  }
});
And in the .yml file:
myLambdaFunc:
handler: handler.myLambdaFunc
events:
- http:
path: myLambdaFunc
method: POST
cors: true
I figured out that the problem lies with the docClient.get call. I was testing with data where the primary key being searched for was not in the table.
I wish the browser didn't tell me it was a CORS issue, because it really wasn't: presumably, when the function fails before returning a proper response, API Gateway's default error response doesn't carry the Access-Control-Allow-Origin header, so the browser reports it as a CORS failure.