403 forbidden error when uploading to S3 bucket - amazon-web-services

I'm pretty new to AWS, but I'm fairly certain I had my IAM user set up properly. Are there any other permissions I need to add besides AmazonS3FullAccess? The name implies it should be enough, so either it's a permissions issue or I messed up somewhere in my code.
I was trying to follow along with the guide at https://devcenter.heroku.com/articles/s3-upload-node. Any help would be appreciated. :)
Here is my relevant code:
// server-side code
router.get('/sign-s3', (req, res) => {
  const s3 = new aws.S3();
  const { fileName, fileType } = req.query;
  s3.getSignedUrl('putObject', {
    Bucket: S3BUCKET,
    Key: fileName,
    Expires: 60,
    ContentType: fileType,
    ACL: 'public-read'
  }, (err, data) => {
    if (err) {
      console.log(err);
      return res.status(500).json(err); // return so we don't also send the success body below
    }
    res.json({
      signedRequest: data,
      url: `https://${S3BUCKET}.s3.amazonaws.com/${fileName}`
    });
  });
});
// client-side code
const onChangeHandler = (e) => {
  const file = e.target.files[0];
  axios
    .get(`/api/bucket/sign-s3?fileName=${file.name}&fileType=${file.type}`)
    .then(signedResponse => {
      axios
        .put(signedResponse.data.signedRequest, file, {
          headers: {
            'Content-Type': 'multipart/form-data'
          }
        })
        .then(response => {
          console.log("upload successful");
          props.addImages([signedResponse.data.url]);
        })
        .catch(error => console.error(error));
    })
    .catch(error => console.error(error));
}
The upload request comes back with a 403 Forbidden error.
UPDATE:
Removing the ACL: 'public-read' line from my sign route allows the upload to go through, but then nobody can access the images. :P Based on John's comments below, I assumed it was some kind of header issue, so I added an 'x-amz-acl': 'public-read' header to my PUT request on the client side, but it still gives me the same invalid-signature error.
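For context on the invalid-signature error: S3 validates a pre-signed PUT by recomputing the signature from the headers actually sent, so every parameter baked into the signature (ContentType, ACL) must arrive verbatim as a header. A minimal sketch of the matching client headers, assuming the server signs with ContentType: fileType and ACL: 'public-read' as in the route above (headersForSignedPut is a hypothetical helper name):

```javascript
// Each signed parameter maps to a concrete HTTP header on the PUT:
// ContentType -> Content-Type, ACL -> x-amz-acl. Sending anything else
// (e.g. 'multipart/form-data' when the URL was signed for file.type)
// makes the recomputed signature differ, and S3 answers 403.
const headersForSignedPut = (fileType) => ({
  'Content-Type': fileType,   // must equal the ContentType passed to getSignedUrl
  'x-amz-acl': 'public-read'  // must equal the ACL passed to getSignedUrl
});

// Usage sketch with the client code above:
// axios.put(signedResponse.data.signedRequest, file, {
//   headers: headersForSignedPut(file.type)
// });
```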

I was receiving the same error with an IAM user that had "AmazonS3FullAccess". What worked for me was adding this CORS configuration to the bucket:
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": []
  }
]
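If you prefer setting this from code rather than the console, the same rules can be applied through the SDK's PutBucketCors API. A hedged sketch, assuming aws-sdk v2 and a placeholder bucket name (corsParamsFor is a hypothetical helper):

```javascript
// Build the parameter object PutBucketCors expects from the rules above.
const corsParamsFor = (bucket) => ({
  Bucket: bucket,
  CORSConfiguration: {
    CORSRules: [{
      AllowedHeaders: ['*'],
      AllowedMethods: ['GET', 'PUT', 'POST'],
      AllowedOrigins: ['*'],
      ExposeHeaders: []
    }]
  }
});

// Usage sketch:
// const s3 = new (require('aws-sdk')).S3();
// s3.putBucketCors(corsParamsFor('your-bucket-name')).promise()
//   .then(() => console.log('CORS rules applied'));
```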

Related

CORS issue with GCP signed URL

I'm trying to upload to GCP from the frontend with fetch, using a signed URL, and I'm running into a persistent CORS issue.
Is the file to be uploaded supposed to be embedded in the signed URL, or sent to the signed URL in a request body?
This is the error:
Access to fetch at <signedurl> from origin 'http://my.domain.com:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
This is the CORS config on the bucket:
[
  {
    "origin": ["http://gcs.wuddit.com:3000"],
    "responseHeader": ["Content-Type", "Authorization", "Content-Length", "User-Agent", "x-goog-resumable", "Access-Control-Allow-Origin"],
    "method": ["GET", "POST", "PUT", "DELETE"],
    "maxAgeSeconds": 3600
  }
]
This is the fetch call:
const uploadHandler = async (theFile, signedUrl) => {
  try {
    const response = await fetch(signedUrl, {
      method: 'POST',
      headers: {
        'Content-Type': theFile.type,
      },
      body: theFile,
    });
    const data = await response;
  } catch (error) {
    console.error('Upload Error:', error);
  }
};
Signed URL example:
// https://storage.googleapis.com/my-bucket-name/my-filename.jpg?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=wuddit-images-service%40wuddit-427.iam.gserviceaccount.com%2F20210305%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20210305T032415Z&X-Goog-Expires=901&X-Goog-SignedHeaders=content-type%3Bhost&X-Goog-Signature=18a2428f051e59fbeba0a8b97a824bdee0c70cffe1f9cce5696e95b9fd81b74974f1c291e5195e874eac29ede5619c9cf07538f21b442fb81e7fc1e764bb5444f5d7acae78e3f2b5876450fccde122f059348efc6d5d8be1bbef7250a1aa2433957d85e65f51c69e8daf020341cbf8044ed2b532205a331acc3728437c9295b25bb6e61ef71a99798bb38a6f05e664678d5e12aed916ab41d2e2f9e7e0974588b797ebef87f2c0949f7071687d1d12f232e871d892f6cd2da397888285783d5372657822275f56a44f9ca14a21fb4e4d6552d380f9e4a597d12663c51aea0a2bdc0f47994f687b59c9d629c1010245cefc975718f3574cd6ae331aa1b89d797d
I figured this out. My lord. In my Node Express backend, on the endpoint I was using to call the generateV4UploadSignedUrl() function, I had to set the 'Access-Control-Allow-Origin' header on the response (res).
So this:
app.get('/api/gcloud', async (req, res) => {
  try {
    const url = await generateV4UploadSignedUrl().catch(console.error);
    res.json({ url });
  } catch (err) {
    console.log('err', err);
  }
});
Became this:
app.get('/api/gcloud', async (req, res) => {
  res.set('Access-Control-Allow-Origin', 'http://gcs.whatever.com:3000'); // magic line. Note this must match the domain in your GCP bucket config.
  try {
    const url = await generateV4UploadSignedUrl().catch(console.error);
    res.json({ url });
  } catch (err) {
    console.log('err', err);
  }
});
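One caveat worth hedging: setting the header on the GET response covers that endpoint, but the browser's preflight OPTIONS request (the one the error message complains about) also needs an answer. A minimal sketch of an Express middleware that does both, assuming plain Express and a placeholder origin (allowOrigin is a hypothetical helper):

```javascript
// Attach Access-Control-Allow-Origin to every response and short-circuit
// preflight OPTIONS requests with 204 No Content.
const allowOrigin = (origin) => (req, res, next) => {
  res.set('Access-Control-Allow-Origin', origin);
  if (req.method === 'OPTIONS') {
    res.set('Access-Control-Allow-Methods', 'GET,POST,PUT,DELETE');
    res.set('Access-Control-Allow-Headers', 'Content-Type, Authorization');
    return res.sendStatus(204);
  }
  next();
};

// Usage sketch:
// app.use(allowOrigin('http://gcs.whatever.com:3000'));
```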
My bucket CORS config:
[
  {
    "origin": ["http://gcs.whatever.com:3000"],
    "responseHeader": ["Content-Type", "Authorization"],
    "method": ["GET", "POST", "PUT", "DELETE"],
    "maxAgeSeconds": 3600
  }
]
Save the above into a file, e.g. 'cors.json', then cd into the location where you saved the file, then use this gsutil command to set bucket CORS config:
gsutil cors set cors.json gs://your-bucket-name
[
  {
    "origin": ["*"],
    "method": ["*"],
    "maxAgeSeconds": 3600,
    "responseHeader": ["*"]
  }
]
This config worked for me. responseHeader was the missing ingredient!

aws S3 bucket: Internal Server Error 500 on POST file upload from browser

I receive <Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message> when doing a POST form-data upload to an S3 bucket.
S3 configuration:
const attachmentBucket = new Bucket(this, 'caS3Bucket', {
  bucketName: environmentName + '.caattachments',
  cors: [{
    allowedMethods: [HttpMethods.GET, HttpMethods.POST],
    allowedOrigins: ['*'],
    allowedHeaders: ['*'],
    maxAge: 3000
  } as CorsRule]
} as BucketProps);
Pre-signing the upload URL through a Lambda:
const params = {
  Bucket: process.env.S3_BUCKET!.split(':')[5],
  Fields: {
    key: payload.path,
    acl: 'public-read'
  },
  Expires: 3600
};
const postData = await new Promise(resolve => {
  s3.createPresignedPost(params, (err, data) => {
    resolve(data);
  });
}) as AWS.S3.PresignedPost;
I append all parameters in the postData.fields to the input form with the file. Is there any way to debug this?
The issue was that the form was missing the 'Policy' field. I wish AWS errors were more descriptive. The final working form fields look like this:
const formData: FormData = new FormData();
formData.append('key', uploadData.fields.key);
formData.append('acl', uploadData.fields.acl);
formData.append('bucket', uploadData.fields.bucket);
formData.append('X-Amz-Algorithm', uploadData.fields.algorithm);
formData.append('X-Amz-Credential', uploadData.fields.credential);
formData.append('X-Amz-Date', uploadData.fields.date);
formData.append('X-Amz-Security-Token', uploadData.fields.token);
formData.append('Policy', uploadData.fields.policy);
formData.append('X-Amz-Signature', uploadData.fields.signature);
formData.append('file', file, file.name);
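An alternative sketch that avoids this class of bug entirely: since createPresignedPost already returns the fields under their required names (including 'Policy'), you can append whatever it returned instead of hand-mapping each one. fieldsToFormData is a hypothetical helper name:

```javascript
// Copy every pre-signed field into the form verbatim; the caller appends the
// file last (S3 ignores form fields that come after the 'file' field).
const fieldsToFormData = (fields, formData) => {
  Object.entries(fields).forEach(([name, value]) => formData.append(name, value));
  return formData;
};

// Usage sketch:
// const formData = fieldsToFormData(uploadData.fields, new FormData());
// formData.append('file', file, file.name); // file must be the last field
```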

Postman is seemingly ignoring my POST in the pre-request

I am trying to set up a DELETE call, but to do this I first need to create the data to delete, so I am calling a POST in the pre-request script. I have run the POST as a normal request and it works fine, but inside the pre-request it just seems to be ignored. I know this because I can change the URL to something that should not work, and it makes no difference and raises no errors.
This is what I have in my pre-request: -
pm.sendRequest({
  url: "http://someurl/test",
  method: 'POST',
  header: {
    'Authorization': 'Basic Tmlfefe89899eI='
  },
  body: {
    "ClientId": 594,
    "Name": null,
    "Disabled": false
  }, function (err, res) {
    console.log(res);
  }
});
Is there anything special I have to do to use a POST as a pre-request?
Any help would be greatly appreciated.
It seems strange that nothing is happening; the request might be returning a 400, but you're only going to see that in the Postman Console.
You can open the Console from the icon in the bottom left of the app.
In the Console, the icon next to the timing on a request shows that it was sent by a pm.sendRequest() function.
I would suggest just changing your request a little bit to something like the one below and it should be fine:
pm.sendRequest({
  url: 'http://someurl/test',
  method: 'POST',
  header: {
    'Authorization': 'Basic Tmlfefe89899eI=',
    'Content-Type': 'application/json'
  },
  body: {
    mode: 'raw',
    raw: JSON.stringify({ ClientId: 594, Name: null, Disabled: false })
  }
}, function (err, res) {
  console.log(res);
});
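To then feed the created record into the DELETE request itself, the callback can stash an identifier from the POST response in an environment variable. A sketch, assuming the endpoint echoes back the created record with an Id field (storeCreatedId is a hypothetical helper):

```javascript
// Callback for pm.sendRequest: on success, save the new record's id so the
// main DELETE request can reference it as {{deleteId}} in its URL.
const storeCreatedId = (pm) => (err, res) => {
  if (err) {
    console.log(err);
    return;
  }
  pm.environment.set('deleteId', res.json().Id); // the Id field name is an assumption
};

// Usage sketch inside the pre-request script:
// pm.sendRequest({ ...the POST shown above... }, storeCreatedId(pm));
```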
There is a trick to get the request information for a pre-request script.
Save the request in a separate (temporary) collection and export that collection. Then inspect the exported JSON in an IDE or notepad; you'll find all the request information there and can use it as-is in your pre-request script.
Using the information given in the question, here is how your pre-request script would look:
pm.sendRequest({
  url: "http://someurl/test",
  method: "POST",
  header: [{
    "key": "Authorization",
    "value": "Basic Tmlfefe89899eI=",
    "type": "text"
  },
  {
    "key": "Content-Type",
    "name": "Content-Type",
    "value": "application/json",
    "type": "text"
  }],
  body: {
    mode: 'raw',
    raw: "{\n  \"ClientId\": 594,\n  \"Name\": null,\n  \"Disabled\": false\n}"
  }
}, function (err, res) {
  console.log(res);
});
Also check Postman console, you'll get all the information there, including request, response and errors if any.

File Upload in React native using ImagePicker and s3

I am trying to upload an image from the device directly to S3. I can read the image metadata and send it to the server to generate the pre-signed URL for S3, and I have the pre-signed URL with which I want to upload the file/image using axios, but somehow the image/file is not getting uploaded. Here is my code.
Image data (read by the ImagePicker)
data: "" // image raw data
fileName: "acx.jpg"
fileSize: ""
uri: ""
path: ""
Code for sending the selected image to AWS S3:
const options = { headers: { 'Content-Type': fileType}};
axios.put(res.data.signedRequest, data , options);
I'm getting the following response:
res = {
  config:
  data: ""
  status: 200
  StatusText: undefined
  ...
}
So what should I pass as data in the axios request?
Have you explored this plugin? It would make the process a lot easier. You could then try:
import { RNS3 } from 'react-native-aws3';

upload = () => {
  const file = {
    uri: this.state.imageuri,
    name: "acx.jpg",
    type: "image/jpeg"
  };
  const options = {
    keyPrefix: "ts/",
    bucket: "celeb-c4u",
    region: "eu-west-1",
    accessKey: "AKIAI2NHLR7A5W2R3OLA",
    secretKey: "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",
    successActionStatus: 201
  };
  return RNS3.put(file, options)
    .then(response => {
      if (response.status !== 201)
        throw new Error("Failed to upload image to S3");
      else {
        console.log(
          "Successfully uploaded image to s3. s3 bucket url: ",
          response.body.postResponse.location
        );
        this.setState({
          url: response.body.postResponse.location
        });
      }
    })
    .catch(error => {
      console.log(error);
    });
};

AWS CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. The headers are present

I have been battling with these dreaded CORS issues with AWS for a while now. I thought I had it sorted out, and then it turned up again... I have done exactly what I have in the other Lambda functions that work fine.
Why won't it work now?
I have added the headers to the responses of all of the Lambda functions in my handler.js file (I am using Serverless to deploy to AWS):
docClient.get(params, function (err, data) {
  if (err) {
    const response = {
      statusCode: 500,
      headers: {
        "Access-Control-Allow-Origin": "*", // Required for CORS support to work
        "Access-Control-Allow-Credentials": true
      },
      body: JSON.stringify({
        message: 'Failed to fetch service request from the database.',
        error: err
      }),
    };
    callback(null, response);
  }
  else {
    const response = {
      statusCode: 200,
      headers: {
        "Access-Control-Allow-Origin": "*", // Required for CORS support to work
        "Access-Control-Allow-Credentials": true
      }
    };
    callback(null, response);
  }
});
And in the .yml file:
myLambdaFunc:
  handler: handler.myLambdaFunc
  events:
    - http:
        path: myLambdaFunc
        method: POST
        cors: true
I figured out that the problem lay with the docClient.get call: I was testing with data where the primary key being searched for was not in the table.
I wish the browser didn't report it as a CORS issue, because it really wasn't one.
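This is exactly why an error path that skips the headers surfaces as a CORS failure: the browser only reports the missing Access-Control-Allow-Origin header, not the underlying error. One way to make that harder to hit is a small response helper so every branch carries the headers; a sketch (corsResponse is a hypothetical name):

```javascript
// Wrap any status/body in a Lambda proxy response that always includes the
// CORS headers, so even error responses pass the browser's check.
const corsResponse = (statusCode, bodyObject) => ({
  statusCode,
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': true
  },
  body: JSON.stringify(bodyObject)
});

// Usage sketch inside the handler above:
// if (err) callback(null, corsResponse(500, { message: 'Failed to fetch...', error: err }));
// else callback(null, corsResponse(200, { item: data.Item }));
```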