In my REST API, I call the createPresignedPost method, which as usual returns a URL and a set of fields. Since the returned URL wasn't working, I rewrote it as http://${url.split('/')[3]}.s3.amazonaws.com. After appending my file to the form, I made the POST request with that URL and everything worked fine. But I opened the project again after a month and now it isn't working.
I have searched a lot to resolve this issue: created a new ACCESS KEY ID and ACCESS KEY SECRET, modified policies, and tried new buckets. I also haven't found any straightforward way of debugging these errors in Amazon S3. Can anyone explain what exactly the problem is here? This Access Denied message is too vague to understand.
The XML error message is:
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>SYBG5Q84R219PN35</RequestId>
<HostId>CVNuh2YSb6gYqYRWPjlONc7hi8wLKfhCQC09nsmwWJz9oqUJ3mF1Ji5U39PO4Tm0xqrGxb7ql1w=</HostId>
</Error>
Frontend code:
const res = await Axios.get(`RESTAPI`);
const { url, fields } = res.data;
const newUrl = `http://${url.split('/')[3]}.s3.amazonaws.com`;
const formData = new FormData();
const formArray = Object.entries({ ...fields, file: blob });
formArray.forEach(([key, value]) => formData.append(key, value as any));
// Uploading file to AWS
await Axios({
  method: 'POST',
  url: newUrl,
  headers: {
    'Content-Type': 'multipart/form-data',
  },
  data: formData,
  responseType: 'text',
});
Backend code:
aws.config.update({
  accessKeyId: process.env.AWS_ACCESS_KEY_VERCEL,
  secretAccessKey: process.env.AWS_SECRET_KEY_VERCEL,
  region: process.env.AWS_REGION_VERCEL,
  signatureVersion: 'v4',
});

const s3 = new aws.S3();
const fileName = `fileName`;

const params = {
  Bucket: process.env.AWS_BUCKET_NAME,
  Key: fileName,
  Expires: 10000000, // seconds
  Conditions: [
    ['content-length-range', 0, 1048576], // up to 1 MB
  ],
};

await s3.createPresignedPost(params, (err, data) => {
  if (data) {
    return res.status(200).json(data);
  }
  if (err) {
    return res.status(500).json({ message: 'Upload Failed!', type: 'subtitle' });
  }
});
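For reference, the documented createPresignedPost flow posts the form to the url field exactly as it is returned, rather than to a rebuilt bucket URL. A minimal sketch, assuming the same res.data and blob as in the frontend code above:
const { url, fields } = res.data;
const formData = new FormData();
Object.entries(fields).forEach(([key, value]) => formData.append(key, value));
formData.append('file', blob); // the file field has to come after the policy fields
await Axios.post(url, formData); // let Axios/the browser set the multipart boundary itself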
Related
Basically I'm trying to get a pre-signed URL for the putObject method in S3. I send this link back to my frontend, which then uses it to upload the file directly from the client.
Here's my code:
const AWS = require("aws-sdk");
const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_IAM_ACCESS,
  secretAccessKey: process.env.AWS_IAM_SECRET,
  region: 'ap-southeast-1',
});

const getPreSignedUrlS3 = async (data) => {
  try {
    // DO SOMETHING HERE TO GENERATE KEY
    const options = {
      Bucket: process.env.AWS_USER_CDN,
      ContentType: data.type,
      Key: key,
      Expires: 5 * 60
    };
    return new Promise((resolve, reject) => {
      s3.getSignedUrl(
        "putObject", options,
        (err, url) => {
          if (err) {
            reject(err);
          } else {
            resolve({ url, key });
          }
        }
      );
    });
  } catch (err) {
    console.log(err);
    return {
      status: 500,
      msg: 'Failed to sync with CDN, Please try again later!',
    };
  }
};
I'm getting the following error from the AWS SDK: The security token included in the request is invalid.
Things I have tried:
Double-checked the permissions on my IAM user, and even made the bucket public for testing. My IAM user has the full S3 access policy attached.
Tried using my root user's security key and access details. Still got the same error.
Regenerated new security credentials for my IAM user. I don't have any MFA turned on.
I'm following this documentation.
SDK version: 2.756.0
I've been stuck on this for a while now. Any help is appreciated. Thank you.
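(For reference, one quick way to confirm which credentials the v2 SDK is actually resolving is an STS getCallerIdentity call; this is a hypothetical check, assuming the same environment variables as above:)
const AWS = require("aws-sdk");
const sts = new AWS.STS({
  accessKeyId: process.env.AWS_IAM_ACCESS,
  secretAccessKey: process.env.AWS_IAM_SECRET
});
sts.getCallerIdentity({}, (err, data) => {
  if (err) console.error(err); // an invalid-token error here points at the credentials themselves
  else console.log(data.Arn);  // the IAM identity this key pair belongs to
});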
Pre-signed URLs are created locally in the SDK so there's no need to use the asynchronous calls.
Instead, use a synchronous call to simplify your code, something like this:
const getPreSignedUrlS3 = (Bucket, Key, ContentType, Expires = 5 * 60) => {
  const params = {
    Bucket,
    ContentType,
    Key,
    Expires
  };
  return s3.getSignedUrl("putObject", params);
};
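A caller could then be as simple as this (a sketch; generateKey is a hypothetical helper, not part of the answer):
// Build the key, sign synchronously, and return the result to the client
const key = generateKey(data);                                    // hypothetical key builder
const url = getPreSignedUrlS3(process.env.AWS_USER_CDN, key, data.type);
return { url, key };                                              // the client PUTs the file to `url`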
After a few days of trying to upload a video to AWS, I have (almost) succeeded. The main problem is that when I head to my S3 bucket, the file has a size of 0 B. I'm hoping someone can point out what I might be doing wrong that causes this.
On the backend I generate a presigned URL like so:
const s3 = new AWS.S3({
  accessKeyId: ACCESSKEY_ID,
  secretAccessKey: SECRETKEY
});

const s3Params = {
  Bucket: BUCKET_NAME,
  Key: uuidv4() + '.mov',
  Expires: 60 * 10,
  ContentType: 'mov',
  ACL: 'public-read'
};

let url = await s3.getSignedUrl('putObject', s3Params);
return { url };
Once I have the URL for the upload, this is how I send the file from the frontend:
const uploadFileToS3 = async (uri) => {
  const type = video.uri.split('.').pop();
  const respo = await fetch(uri, {
    method: 'PUT',
    body: {
      url: video.uri,
      type,
      name: 'testing'
    },
    headers: {
      'Content-Type': type,
      'x-amz-acl': 'public-read'
    }
  });
  const some = await JSON.stringify(respo);
};
It does seem to be saving something, since I can see an object in the bucket, but I'm unable to download or view it. It just shows an empty page, so it feels like the video itself was never actually uploaded to S3. Any pointers on where I might be going wrong when uploading a video to S3?
Thank you for all the help.
You cannot pass a URL as the request body when you upload a file. You need two fetches:
the first one downloads the video from video.uri;
the second one uploads the video to S3, with body: blob.
To download a file as a blob, use response.blob(). Then you can use that blob to upload the file (here is an example).
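A minimal sketch of that two-step flow (assuming the same presigned URL from the backend and the video.uri from the question):
const uploadFileToS3 = async (signedUrl, videoUri) => {
  // 1) Download the video as a Blob
  const blob = await (await fetch(videoUri)).blob();
  // 2) Upload the raw bytes to S3 through the presigned PUT URL
  const res = await fetch(signedUrl, {
    method: 'PUT',
    headers: {
      'Content-Type': blob.type || 'video/quicktime', // assumption: a .mov file
      'x-amz-acl': 'public-read' // carried over from the question; keep consistent with the ACL used when signing
    },
    body: blob
  });
  return res.ok;
};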
I am working on a project where I generate shareable links for users; the links expire after some time and can also be regenerated.
I have tried all the solutions I could find on the web to generate signed URLs, but none of them is working for me.
I have used the Node aws-sdk's getSignedUrl method as well as other third-party packages, but none of them works.
When I try to access the file via the S3 link, it returns an error.
Here is the code I am using:
const s3 = new AWS.S3({
  accessKeyId: ID,
  signatureVersion: 'v4',
  secretAccessKey: SECRET,
  region: region
});

const uploadFile = (filePath) => {
  // Read content from the file
  const fileContent = fs.readFileSync(filePath);

  const params = {
    Bucket: BUCKET_NAME,
    Key: `Share-TEST/${Date.now()}`,
    ACL: "private",
    ContentType: "image/jpeg",
    Body: fileContent
  };

  s3.upload(params, function(err, data) {
    if (err) {
      throw err;
    }
    console.log(data.Location);
    s3.getSignedUrl('getObject', { Bucket: BUCKET_NAME, Key: data.Key, Expires: 60 }, function (err, url) {
      console.log(err, url);
    });
  });
};

uploadFile("/path/to/file");
and the generated URL looks like this:
https://bucket-name.s3.aws-region.amazonaws.com/Share-TEST/15852504924-1?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=111111111111111/22222222/region-name/s3/aws4_request&X-Amz-Date=202033333416Z&X-Amz-Expires=60&X-Amz-Signature=333333333&X-Amz-SignedHeaders=host
Please help me with some code or links.
I am trying to create a presigned URL using boto3, as shown below:
s3 = boto3.client(
    's3',
    aws_access_key_id=settings.AWS_ACCESS_KEY,
    aws_secret_access_key=settings.AWS_ACCESS_SECRET,
    region_name=settings.AWS_SES_REGION_NAME,
    config=Config(signature_version='s3v4')
)

metadata = {
    'test': 'testing'
}

presigned_url = s3.generate_presigned_url(
    ClientMethod='put_object',
    Params={
        'Bucket': settings.AWS_S3_BUCKET_NAME,
        'Key': str(new_file.uuid),
        'ContentDisposition': 'inline',
        'Metadata': metadata
    })
After the URL is generated, when I try to upload to S3 using Ajax it returns 403 Forbidden. If I remove Metadata and ContentDisposition while creating the URL, the upload succeeds.
Boto3 version: 1.9.33
Below is the doc that I am referring to:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.generate_presigned_url
Yes, I got it working.
Basically, after the signed URL is generated, I need to send all the metadata and the Content-Disposition in the request headers along with the signed URL.
For example: if my metadata dictionary is {'test': 'test'}, then I need to send a header named x-amz-meta-test with that value, together with the Content-Disposition, to AWS.
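The client-side PUT might then look like this (a sketch, assuming a browser fetch; signedUrl and file are placeholders):
await fetch(signedUrl, {
  method: 'PUT',
  headers: {
    'Content-Disposition': 'inline',    // must match the ContentDisposition used when signing
    'x-amz-meta-test': 'testing'        // each Metadata key becomes an x-amz-meta-* header
  },
  body: file
});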
I was using createPresignedPost, and I got this working by adding the metadata I wanted to the Fields param, like so:
const params = {
  Expires: 60,
  Bucket: process.env.fileStorageName,
  Conditions: [['content-length-range', 1, 1000000000]], // 1 GB
  Fields: {
    'Content-Type': 'application/pdf',
    key: strippedName,
    'x-amz-meta-pdf-type': pdfType,
    'x-amz-meta-pdf-id': pdfId,
  },
};
As long as you pass the data you want in the file's metadata to the lambda that creates the presigned POST response, the above will work. Hopefully this helps someone else...
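On the client, the returned fields then go into the form verbatim before the file (a sketch; getPresignedPostFromLambda is a hypothetical call to the lambda above):
const { url, fields } = await getPresignedPostFromLambda(); // hypothetical request to the lambda
const form = new FormData();
Object.entries(fields).forEach(([k, v]) => form.append(k, v));
form.append('file', pdfBlob);                               // the file must be the last field
await fetch(url, { method: 'POST', body: form });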
I found the metadata object needs to be key/value pairs, with the value as a string (the example is a Node.js Lambda):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
exports.handler = async (event) => {
  const { key, type, metadata } = JSON.parse(event.body);
  // example
  /* metadata: {
       foo: 'bar',
       x: '123',
       y: '22.4213213'
     } */
  return await s3.getSignedUrlPromise('putObject', {
    Bucket: 'the-product-uploads',
    Key: key,
    Expires: 300,
    ContentType: type,
    Metadata: metadata
  });
};
Then in your request headers you need to add each k/v explicitly:
await fetch(signedUrl, {
  method: "PUT",
  headers: {
    "Content-Type": fileData.type,
    "x-amz-meta-foo": "bar",
    "x-amz-meta-x": x.toString(),
    "x-amz-meta-y": y.toString()
  },
  body: fileBuffer
});
In boto, you should provide the Metadata parameter, passing a dict of your key/value metadata. You don't need to prefix the keys with x-amz-meta-, as boto apparently does that for you now.
Also, I didn't have to pass the metadata again when uploading to the pre-signed URL:
params = {
    'Bucket': bucket_name,
    'Key': object_key,
    'Metadata': {'test-key': value}
}

response = s3_client.generate_presigned_url('put_object',
                                            Params=params,
                                            ExpiresIn=3600)
I'm using similar code in a Lambda function behind an API.
I managed to get my generated PDF uploaded to S3 from my Node.js server. The PDF looks fine in my local folder, but when I try to access it from the AWS console it says "Failed to load PDF document".
I have tried uploading it via both the s3.upload and s3.putObject APIs (for putObject I also used an .on('finish') check to ensure the file had been fully written before sending the request). But the file in the S3 bucket is still the same (small) size, 26 bytes, and cannot be loaded. Any help is greatly appreciated!
var pdfDoc = printer.createPdfKitDocument(inspectionReport);
var writeStream = fs.createWriteStream('pdfs/inspectionReport.pdf');
pdfDoc.pipe(writeStream);
pdfDoc.end();
writeStream.on('finish', function(){
  const s3 = new aws.S3();
  aws.config.loadFromPath('./modules/awsconfig.json');

  var s3Params = {
    Bucket: S3_BUCKET,
    Key: 'insp_report_test.pdf',
    Body: '/pdf/inspectionReport.pdf',
    Expires: 60,
    ContentType: 'application/pdf'
  };

  s3.putObject(s3Params, function(err, res){
    if (err)
      console.log(err);
    else
      console.log(res);
  });
});
I realised that pdfDoc.end() must come before piping starts. I have also used a callback to ensure that the S3 upload is called after the PDF write has finished. See the code below; hope it helps!
var pdfDoc = printer.createPdfKitDocument(inspectionReport);
pdfDoc.end();
async.parallel([
  function(callback){
    var writeStream = fs.createWriteStream('pdfs/inspectionReport.pdf');
    pdfDoc.pipe(writeStream);
    console.log('pdf write finished!');
    callback();
  }
], function(err){
  const s3 = new aws.S3();
  var s3Params = {
    Bucket: S3_BUCKET,
    Key: 'insp_report_test.pdf',
    Body: pdfDoc,
    Expires: 60,
    ContentType: 'application/pdf'
  };
  s3.upload(s3Params, function(err, result){
    if (err) console.log(err);
    else console.log(result);
  });
});
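An alternative sketch of the same idea, waiting for the write stream's 'finish' event instead of async.parallel (assuming the same printer, inspectionReport and S3_BUCKET as above):
var pdfDoc = printer.createPdfKitDocument(inspectionReport);
pdfDoc.end();
var writeStream = fs.createWriteStream('pdfs/inspectionReport.pdf');
pdfDoc.pipe(writeStream);
writeStream.on('finish', function () {
  // The PDF is fully on disk, so stream it to S3 from the file rather than from the consumed pdfDoc
  const s3 = new aws.S3();
  s3.upload({
    Bucket: S3_BUCKET,
    Key: 'insp_report_test.pdf',
    Body: fs.createReadStream('pdfs/inspectionReport.pdf'),
    ContentType: 'application/pdf'
  }, function (err, result) {
    if (err) console.log(err);
    else console.log(result.Location);
  });
});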