I am working on an app deployed to AWS EC2 (the client and the server each run on a separate instance). The app uploads users' images to an S3 bucket.
I recently added domains to both instances to obtain HTTPS certificates for the client and the REST API,
and since then I have been getting this error when trying to save files to my S3 bucket:
code: "AccessDenied"
extendedRequestId: "*****"
message: "Access Denied"
region: null
requestId: "****"
retryDelay: 67.72439862213535
retryable: false
statusCode: 403
time: "2020-09-12T13:42:29.739Z"
I have made the bucket public, but it still doesn't work.
Here is my code:
require('dotenv').config();
let multer = require('multer');
let AWS = require('aws-sdk');
let { uuid } = require('uuidv4');

let s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ID,
  secretAccessKey: process.env.AWS_SECRET,
});

// memoryStorage keeps uploads in RAM as Buffers; it takes no options,
// so a destination callback would be ignored here.
let storage = multer.memoryStorage();

let multiUpload = multer({ storage }).fields([
  { name: 'profile', maxCount: 1 },
  { name: 'gallery' },
]);
router.post('/', auth.required, multiUpload, async function (req, res, next) {
  var profile = new Profile();
  profile.userId = req.payload.id;

  if (typeof req.files.profile !== 'undefined') {
    let myImage = req.files.profile[0].originalname.split('.');
    let fileType = myImage[myImage.length - 1];
    let params = {
      Bucket: process.env.AWS_BUCKET_NAME,
      Key: `${uuid()}.${fileType}`,
      Body: req.files.profile[0].buffer,
      ContentType: 'image/jpeg', // note: hardcoded even though fileType is computed above
      ACL: 'public-read',
    };
    let data = await s3.upload(params).promise();
    if (!data.Location) return res.sendStatus(500);
    profile.profileImage.url = data.Location;
    profile.profileImage.imageId = data.key;
  }

  if (typeof req.files.gallery !== 'undefined') {
    let galleryImageList = [];
    for (let i = 0; i < req.files.gallery.length; i++) {
      let myImage = req.files.gallery[i].originalname.split('.');
      let fileType = myImage[myImage.length - 1];
      let params = {
        Bucket: process.env.AWS_BUCKET_NAME,
        Key: `${uuid()}.${fileType}`,
        Body: req.files.gallery[i].buffer,
        ContentType: 'image/jpeg',
        ACL: 'public-read',
      };
      let data = await s3.upload(params).promise();
      if (!data.Location) return res.sendStatus(500);
      galleryImageList.push({
        url: data.Location,
        imageId: data.key,
      });
    }
    profile.gallery = galleryImageList;
  }

  profile
    .save()
    .then(function () {
      return res.json({ profile: profile.toProfileJSONFor() });
    })
    .catch(next);
});
I apparently don't have enough reputation to comment, but you should say whether you're getting Access Denied when reading from the bucket, writing to it, or both, and include the code snippets that read from and write to the bucket. You should also explain what you mean by "added domains to both instances for HTTPS certification", because that shouldn't be necessary.
Editing my response since you included your code:
It looks like you are using access keys to upload your files:

let s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ID,
  secretAccessKey: process.env.AWS_SECRET,
});

let data = await s3.upload(params).promise();

So, if you are getting Access Denied on your writes, you should look at the permissions of the IAM user those keys belong to, rather than only at the bucket's public settings.
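If the writes are what's failing, the IAM user behind those keys needs s3:PutObject on the bucket, plus s3:PutObjectAcl because the upload sets ACL: 'public-read'. A minimal policy sketch (the bucket name is a placeholder to replace with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
    }
  ]
}
```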
I'm using https://www.npmjs.com/package/aws-s3 and https://www.npmjs.com/package/filepond to upload images to my AWS S3 bucket. I've got it running, but I'm wondering whether there's an easy way to show all images in the S3 bucket. I don't want to save each link to an image in a database and then iterate over that. Any suggestions?
I switched to the https://www.npmjs.com/package/aws-sdk package and added this script:
<script>
import AWS from 'aws-sdk'

AWS.config.update({
  credentials: new AWS.CognitoIdentityCredentials({
    IdentityPoolId: '/******'
  }),
  region: '/******'
});

const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  params: { Bucket: 'gamesnap' }
})

export default {
  data() {
    return {
      baseUrl: 'https://******.s3.eu-central-1.amazonaws.com/',
      images: []
    }
  },
  mounted() {
    s3.listObjectsV2((err, data) => {
      if (err) {
        console.log('Error: ', err)
      } else {
        this.images = data.Contents
      }
    })
  }
}
</script>
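To render the images without storing any URLs in a database, you can build public object URLs straight from the listObjectsV2 result. A small sketch, assuming the objects are publicly readable (toImageUrls is a hypothetical helper; the bucket name matches the one used above):

```javascript
// Build public image URLs from a listObjectsV2 response.
const baseUrl = 'https://gamesnap.s3.eu-central-1.amazonaws.com/';

function toImageUrls(contents) {
  return contents
    .filter((obj) => obj.Size > 0) // skip zero-byte "folder" placeholder keys
    .map((obj) => baseUrl + encodeURIComponent(obj.Key));
}

// Example with a mocked response shape:
console.log(toImageUrls([{ Key: 'a.jpg', Size: 123 }, { Key: 'photos/', Size: 0 }]));
// → ['https://gamesnap.s3.eu-central-1.amazonaws.com/a.jpg']
```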
I receive <Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message> when doing a POST form-data upload to an S3 bucket.
S3 configuration:
const attachmentBucket = new Bucket(this, 'caS3Bucket', {
  bucketName: environmentName + '.caattachments',
  cors: [{
    allowedMethods: [HttpMethods.GET, HttpMethods.POST],
    allowedOrigins: ['*'],
    allowedHeaders: ['*'],
    maxAge: 3000
  } as CorsRule]
} as BucketProps);
Pre-signing the upload URL in a Lambda:
const params = {
  Bucket: process.env.S3_BUCKET!.split(':')[5],
  Fields: {
    key: payload.path,
    acl: 'public-read'
  },
  Expires: 3600
};

const postData = await new Promise(resolve => {
  s3.createPresignedPost(params, (err, data) => {
    resolve(data);
  });
}) as AWS.S3.PresignedPost;
I append all the parameters in postData.fields to the form along with the file. Is there any way to debug this?
The issue was that the form was missing the 'Policy' field. I wish AWS errors were more descriptive. The final working form fields look like this:
const formData: FormData = new FormData();
formData.append('key', uploadData.fields.key);
formData.append('acl', uploadData.fields.acl);
formData.append('bucket', uploadData.fields.bucket);
formData.append('X-Amz-Algorithm', uploadData.fields.algorithm);
formData.append('X-Amz-Credential', uploadData.fields.credential);
formData.append('X-Amz-Date', uploadData.fields.date);
formData.append('X-Amz-Security-Token', uploadData.fields.token);
formData.append('Policy', uploadData.fields.policy);
formData.append('X-Amz-Signature', uploadData.fields.signature);
formData.append('file', file, file.name);
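To avoid forgetting a field again, you can append everything createPresignedPost returned generically; the field names come back already in the exact form S3 expects, including 'Policy'. A sketch (buildUploadForm is a hypothetical helper, not part of any SDK):

```javascript
// Append every presigned-POST field generically so none (like 'Policy') is missed.
// `fields` is the `fields` object returned by s3.createPresignedPost.
function buildUploadForm(fields, file, fileName) {
  const formData = new FormData();
  Object.entries(fields).forEach(([name, value]) => formData.append(name, value));
  formData.append('file', file, fileName); // the file part must come last
  return formData;
}
```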
I am trying to upload an image from the device directly to S3. I can read the image metadata and send it to the server, which generates the pre-signed URL for S3. I then want to upload the file/image to that pre-signed URL with axios, but somehow the image/file never gets uploaded. Here is my code.
Image data (read by the ImagePicker)
data: "" // image raw data
fileName: "acx.jpg"
fileSize: ""
uri: ""
path: ""
Code for sending the selected image to S3:
const options = { headers: { 'Content-Type': fileType}};
axios.put(res.data.signedRequest, data , options);
I'm getting the following response:
res = {
config:
data: ""
status: 200
StatusText: undefined
...
}
So what should I pass as data in the axios request?
Have you explored the react-native-aws3 plugin (RNS3)? It would make the process a lot easier. You could then try:
upload = () => {
  const file = {
    uri: this.state.imageuri,
    name: "acx.jpg",
    type: "image/jpeg"
  };

  const options = {
    keyPrefix: "ts/",
    bucket: "celeb-c4u",
    region: "eu-west-1",
    // Never publish real credentials; load these from secure configuration.
    accessKey: "<AWS_ACCESS_KEY>",
    secretKey: "<AWS_SECRET_KEY>",
    successActionStatus: 201
  };

  return RNS3.put(file, options)
    .then(response => {
      if (response.status !== 201)
        throw new Error("Failed to upload image to S3");
      else {
        console.log(
          "Successfully uploaded image to s3. s3 bucket url: ",
          response.body.postResponse.location
        );
        this.setState({
          url: response.body.postResponse.location
        });
      }
    })
    .catch(error => {
      console.log(error);
    });
};
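If you'd rather keep the presigned-URL approach from the question, the body of the PUT should be the raw bytes of the file, not FormData or a JSON wrapper, and the Content-Type header must match the one used when signing. A sketch using fetch (uploadToPresignedUrl is a hypothetical helper; works in browsers and Node 18+):

```javascript
// Upload raw bytes to a presigned PUT URL.
async function uploadToPresignedUrl(signedUrl, bytes, contentType) {
  const res = await fetch(signedUrl, {
    method: 'PUT',
    body: bytes, // raw file contents, not FormData/JSON
    headers: { 'Content-Type': contentType }, // must match the signed Content-Type
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
  return res;
}
```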
I just used the following code to get an S3 pre-signed URL:
import AWS from 'aws-sdk';

AWS.config.update({
  accessKeyId: process.env.AWS_S3_KEY,
  secretAccessKey: process.env.AWS_S3_SECRET,
  region: process.env.AWS_S3_REGION
});

const s3 = new AWS.S3({
  region: process.env.AWS_S3_REGION,
  signatureVersion: 'v4'
});

export const s3Auth = (req, res) => {
  s3.getSignedUrl(
    'putObject',
    {
      Bucket: 'bucket',
      Key: 'mykey',
      ContentType: 'multipart/form-data',
      Expires: 60
    },
    (error, url) => {
      if (!error && url) {
        res.send({ url });
      } else {
        res.status(500);
        res.send({ error: 'AWS error!' });
        throw error;
      }
    }
  );
};
As you can see, I set the region on both the global AWS config and the AWS.S3 object. Still, the URL returned by this function does not include the correct region: it contains us-east-1 instead of the ap-southeast-1 I set. The environment variables are coming through correctly; I tested them. Any idea what's happening?
Sample URL:
https://{BUCKET_NAME}.s3.amazonaws.com/{FOLDER}?Content-Type=multipart%2Fform-data&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={KEY}%2F20171023%2Fus-east-1%2Fs3%2Faws4_request...
While following the AWS docs to set up DynamoDB in my front-end project, with settings taken from the docs, the API throws:
Error: Missing region in config
at constructor.<anonymous> (aws-sdk-2.129.0.min.js:42)
at constructor.callListeners (aws-sdk-2.129.0.min.js:44)
at i (aws-sdk-2.129.0.min.js:44)
at aws-sdk-2.129.0.min.js:42
at t (aws-sdk-2.129.0.min.js:41)
at constructor.getCredentials (aws-sdk-2.129.0.min.js:41)
at constructor.<anonymous> (aws-sdk-2.129.0.min.js:42)
at constructor.callListeners (aws-sdk-2.129.0.min.js:44)
at constructor.emit (aws-sdk-2.129.0.min.js:44)
at constructor.emitEvent (aws-sdk-2.129.0.min.js:43)
My settings:
<script src="https://sdk.amazonaws.com/js/aws-sdk-2.129.0.min.js"></script>
<script>
  var myCredentials = new AWS.CognitoIdentityCredentials({IdentityPoolId: 'eu-west-1_XXXXXX'});
  var myConfig = new AWS.Config({
    credentials: myCredentials, region: 'eu-west-1',
  });
  console.log(myConfig.region); // logs 'eu-west-1'

  var dynamodb = new AWS.DynamoDB({apiVersion: '2012-08-10'});
  dynamodb.listTables({Limit: 10}, function(err, data) {
    if (err) {
      console.log(err);
    } else {
      console.log("Table names are ", data.TableNames);
    }
  });
</script>
What am I missing?
It looks like you're newing up an AWS.Config object but never applying it, so the SDK's global configuration is left untouched.
Change the lines

var myConfig = new AWS.Config({
  credentials: myCredentials, region: 'eu-west-1',
});

to

AWS.config.update({
  credentials: myCredentials, region: 'eu-west-1',
});
Reference:
http://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-region.html
Hope it helps.
For others hitting the same issue: where the docs mention
If you have not yet created one, create an identity pool...
and you get forwarded to the Amazon Cognito service, choose the Manage Federated Identities option, not Manage User Pools.