I know there is a way to upload to S3 directly from the web browser using POST, without the files going through your backend server. But is there a way to do it from a URL instead of the web browser?
For example, upload a file that resides at http://example.com/dude.jpg directly to S3 using POST. I don't want to download the asset to my server and then upload it to S3; I just want to make a POST request to S3 and have it upload the file automatically.
It sounds like you want S3 itself to download the file from a remote server, where you only pass the URL of the resource to S3.
This is not currently supported by S3.
An API client needs to actually transfer the content of the object to S3.
I thought I should share my code for achieving something similar. I was working on the backend, but you could possibly do something similar on the frontend, although be mindful that your AWS credentials are likely to be exposed there.
For my purposes, I wanted to download a file from an external URL and then get back the S3 URL of the uploaded file.
I also used axios to fetch the file in an uploadable format and file-type to detect the proper MIME type, but neither is a requirement.
Below is a snippet of my code:
const AWS = require('aws-sdk');
const axios = require('axios');
const FileType = require('file-type');

const s3 = new AWS.S3();
const BUCKET_NAME = 'your-bucket-name';

async function uploadAttachmentToS3(type, buffer) {
  var params = {
    // The file name can be taken from the URL or obtained in any other way;
    // you could also pass it into this function as a parameter if necessary
    Key: 'yourfolder/directory/filename',
    Body: buffer,
    Bucket: BUCKET_NAME,
    ContentType: type,
    ACL: 'public-read' // the object becomes reachable at a public URL
  };
  // Notice the use of the upload function, not the putObject function
  return s3.upload(params).promise().then((response) => {
    return response.Location;
  }, (err) => {
    return { type: 'error', err: err };
  });
}

async function downloadAttachment(url) {
  return axios.get(url, {
      responseType: 'arraybuffer'
    })
    .then(async (response) => {
      // response.data is already raw binary, so no encoding argument is needed
      const buffer = Buffer.from(response.data);
      const type = (await FileType.fromBuffer(buffer)).mime;
      return uploadAttachmentToS3(type, buffer);
    })
    .catch((err) => {
      return { type: 'error', err: err };
    });
}

// Usage (inside an async function):
let myS3Url = await downloadAttachment(url);
I hope it helps people who still struggle with similar issues. Good luck!
I found this article with some details. You will probably have to modify your buckets' security settings in some fashion to allow this type of interaction.
http://aws.amazon.com/articles/1434
There will be some security issues on the client as well, since you never want your keys to be publicly accessible.
You can use rclone to achieve this easily:
https://rclone.org/commands/rclone_copyurl/
Create a new access key on AWS for rclone and use rclone config like this:
https://rclone.org/s3/
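For reference, the resulting remote in your rclone config file looks roughly like this (the remote name, keys, and region below are placeholders):
[RCLONE_CONFIG_NAME]
type = s3
provider = AWS
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region = eu-central-1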
Then, you can easily interact with your S3 buckets using rclone.
To upload from URL:
rclone -Pva copy {URL} RCLONE_CONFIG_NAME:/{BUCKET_NAME}/{FOLDER}/
It is quite handy for me as I am archiving my old files from Dropbox Business to S3 Glacier Deep Archive to save on Dropbox costs.
I can easily create a file transfer from Dropbox (100GB per file limit), copy the download link and upload directly to S3 using rclone.
It is copying at 10-12 MiB/s on a small DigitalOcean droplet.
If you are able to, you can use Cloudinary as an alternative to S3. It supports remote upload via URL, among other things.
https://cloudinary.com/documentation/image_upload_api_reference#upload_examples
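As a rough sketch of that remote-upload feature using the Cloudinary Node.js SDK (the cloud name, credentials, and public_id below are placeholders):
// Cloudinary fetches the remote file itself; nothing passes through your server
const cloudinary = require('cloudinary').v2;

cloudinary.config({
  cloud_name: 'YOUR_CLOUD_NAME', // placeholder
  api_key: 'YOUR_API_KEY',       // placeholder
  api_secret: 'YOUR_API_SECRET'  // placeholder
});

cloudinary.uploader.upload('http://example.com/dude.jpg', { public_id: 'dude' })
  .then(result => console.log(result.secure_url))
  .catch(err => console.error(err));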
Related
I am working on a project that requires me to upload large files directly from the browser to Amazon S3 using JavaScript.
Does anyone know how to do it? Is there an Amazon JavaScript SDK that supports this?
Try EvaporateJS. It has a large community and broad browser support. https://github.com/TTLabs/EvaporateJS.
Use aws-sdk-js to upload directly to S3 from the browser. In my case the file sizes could go up to 100 GB. I used multipart upload, which is very easy to use.
I had to upload to a private bucket; for authentication I used WebIdentityCredentials. You also have the option of using CognitoIdentityCredentials.
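For reference, a minimal browser-side sketch assuming the AWS SDK for JavaScript v2 and Cognito-based credentials (the region, identity pool ID, and bucket name are placeholders); s3.upload() switches to multipart automatically for large bodies:
// Browser sketch: direct-to-S3 upload with aws-sdk v2 and Cognito credentials (all IDs are placeholders)
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
});

var s3 = new AWS.S3({ params: { Bucket: 'my-upload-bucket' } });

function uploadFile(file) {
  // upload() manages multipart uploads and retries for large files
  return s3.upload({
    Key: file.name,
    Body: file,
    ContentType: file.type
  }, {
    partSize: 10 * 1024 * 1024, // 10 MB parts
    queueSize: 4                // number of parts uploaded in parallel
  }).promise();
}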
If you can add logic to the server side, you could return a pre-signed S3 upload URL to the browser and upload the file straight to S3.
This answer has similar code, but using AWS SDK v2.
Example in JavaScript (source):
const { S3, PutObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
...
const credentials = {
  accessKeyId: "KEY", // UPDATE THIS
  secretAccessKey: "SECRET", // UPDATE THIS
};

const options = {
  credentials,
  region: "REGION", // UPDATE THIS
  apiVersion: "2006-03-01", // optional, if you want to pin the API version
};

const s3Client = new S3(options);

// Create the command
const command = new PutObjectCommand({
  Bucket: "BUCKET", // UPDATE THIS
  Key: "OBJ_ID_ON_S3", // UPDATE THIS
});

// Create the presigned URL
const signedUrl = await getSignedUrl(s3Client, command, {
  expiresIn: 60 * 2, // the URL expires after 2 minutes
});
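Once the browser has the signed URL, it can PUT the file straight to S3. A minimal browser-side sketch, assuming the signed URL was returned by the endpoint above:
// Upload the file directly to S3 using the presigned URL
async function uploadWithSignedUrl(signedUrl, file) {
  const response = await fetch(signedUrl, {
    method: "PUT",
    body: file,
  });
  if (!response.ok) {
    throw new Error(`Upload failed with status ${response.status}`);
  }
}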
Following this AWS documentation, I was able to create a new endpoint on my API Gateway that can manipulate files on an S3 repository. The problem I'm having is the file size (API Gateway has a payload limit of 10 MB).
I was wondering, without using a Lambda workaround (this link would help with that), would it be possible to upload and get files bigger than 10 MB (even as binary if needed), seeing as this is using an S3 service as a proxy, or does the limit apply regardless?
I've tried PUTting and GETting files bigger than 10MB, and each response is a typical "message": "Timeout waiting for endpoint response".
Looks like Lambda is the only way; I'm just wondering if anyone else got around this while using S3 as a proxy.
Thanks
You can create a Lambda proxy function that returns a redirect link containing an S3 pre-signed URL.
Example JavaScript code that generates a pre-signed S3 URL:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

var s3Params = {
  Bucket: 'test-bucket',
  Key: file_name, // the object key the client should upload to
  ContentType: 'application/octet-stream',
  Expires: 10000 // validity of the signed URL, in seconds
};

s3.getSignedUrl('putObject', s3Params, function(err, data) {
  ...
});
Then your Lambda function returns a redirect response to your client, like:
{
  "statusCode": 302,
  "headers": { "Location": "url" }
}
You might be able to find more information you need from this documentation.
If you have large files, consider uploading them directly to S3 from your client. You can create an API endpoint that returns a signed URL for the client to use for the upload (to implement access control over your private content).
You can also consider using multipart uploads for even larger files to speed up the upload.
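As an illustration of the multipart API, a minimal Node.js sketch using the v2 aws-sdk (the bucket, key, and part buffers are placeholders; for browser clients you would typically presign each uploadPart call instead of calling it directly):
// Sketch of S3's low-level multipart upload flow (all names are placeholders)
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function multipartUpload(bucket, key, partBuffers) {
  // Start the multipart upload and remember its UploadId
  const { UploadId } = await s3.createMultipartUpload({ Bucket: bucket, Key: key }).promise();

  // Upload each part (minimum 5 MB per part, except the last) and collect its ETag
  const parts = [];
  for (let i = 0; i < partBuffers.length; i++) {
    const { ETag } = await s3.uploadPart({
      Bucket: bucket,
      Key: key,
      UploadId: UploadId,
      PartNumber: i + 1,
      Body: partBuffers[i],
    }).promise();
    parts.push({ ETag: ETag, PartNumber: i + 1 });
  }

  // Ask S3 to assemble the parts into the final object
  return s3.completeMultipartUpload({
    Bucket: bucket,
    Key: key,
    UploadId: UploadId,
    MultipartUpload: { Parts: parts },
  }).promise();
}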
I'm hosting my static website on AWS S3, with CloudFront as a CDN, and I'm wondering how I can get clean URLs working.
I currently have to go to example.com/about.html to get the about page. I'd prefer example.com/about, and the same across all my other pages. Also, I kind of have to do this because my canonical URLs have been set with meta tags and search engines, and it's going to be a bit much to go changing them.
Is there a setting in CloudFront that I'm not seeing?
Updates
There are two options I've explored, one detailed by Matt below.
The first is trimming .html off the file name before uploading to S3 and then setting the Content-Type header for that file. This might work beautifully, but I can't figure out how to set content headers from the command line, where I'm writing my "push website update" bash script (see the CLI sketch after this list).
The second is detailed by Matt below and leverages S3's feature of recognizing default root files, usually index.html. It might be a great approach, but it makes my local testing challenging, and it leaves a trailing slash on the URLs, which doesn't work for me.
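For the first option, the AWS CLI can set the content type either at upload time or on an object already in the bucket; a rough sketch, with the file and bucket names as placeholders:
# Upload the extension-less file and declare it as HTML (names are placeholders)
aws s3 cp ./about s3://my-site-bucket/about --content-type "text/html"

# Or rewrite the metadata of an object that is already in the bucket
aws s3 cp s3://my-site-bucket/about s3://my-site-bucket/about \
  --content-type "text/html" --metadata-directive REPLACE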
Try AWS Lambda@Edge. It solves this completely.
First, create an AWS Lambda function and then attach your CloudFront distribution as a trigger.
In the code section of this AWS Lambda page, add the snippet from the repository below.
https://github.com/CloudUnder/lambda-edge-nice-urls/blob/master/lambdaRewrite.js
Note the options in the readme section of the repo.
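The repo above implements the rewrite in full; as a rough idea of the pattern (not the repo's exact code), an origin-request Lambda@Edge handler can map extension-less URIs onto the .html objects in the bucket:
// Rough sketch of a CloudFront origin-request handler for clean URLs
exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;

  if (request.uri.endsWith('/')) {
    // Directory-style request: serve its index document
    request.uri += 'index.html';
  } else if (!request.uri.split('/').pop().includes('.')) {
    // Extension-less request like /about: serve about.html
    request.uri += '.html';
  }

  callback(null, request);
};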
When you host your website in S3 (and by extension CloudFront), you can configure S3 to have a "default" file to load when a directory is requested. This is called the "index document".
For example, you can configure S3 to load index.html as the default file. This way, if the request was for example.com/abc/, then it would load abc/index.html.
When you do this, if they requested example.com/abc/123.html, then it will serve up abc/123.html. So the default file only applies when a folder is requested.
To address your request for example.com/about/, you could configure your bucket with a default file of index.html, and put about/index.html in your bucket.
More information can be found in Amazon's documentation: Index Document Support.
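If you script your deployments, the index document can also be set from the AWS CLI; a small sketch with a placeholder bucket name:
# Enable website hosting with index.html as the index document (bucket name is a placeholder)
aws s3 website s3://my-site-bucket/ --index-document index.html --error-document error.html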
You can overcome ugly URLs by using your S3 bucket's website endpoint as a custom origin for your CloudFront distribution. The downside is that you can't configure CloudFront to use HTTPS to communicate between CloudFront and your origin. You can still use HTTPS to the viewer, just not end-to-end encryption.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-cloudfront-to-s3-origin.html
You can use Lambda as a reverse proxy to your website.
In API Gateway you need to create a "proxy resource" with resource path = "{proxy+}"
Then, create a Lambda function to route the requests:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const myBucket = 'myBucket';

exports.handler = async (event) => {
  var responseBody = "";

  if (event.path == "/") {
    responseBody = "<h1>My Landing Page</h1>";
    responseBody += "<a href='/about'>link to about page</a>";
    return buildResponse(200, responseBody);
  }

  if (event.path == "/about") {
    var params = {
      Bucket: myBucket,
      Key: 'path/to/about.html',
    };
    const data = await s3.getObject(params).promise();
    return buildResponse(200, data.Body.toString('utf-8'));
  }

  return buildResponse(404, 'Page Not Found');
};

function buildResponse(statusCode, responseBody) {
  var response = {
    "isBase64Encoded": false,
    "statusCode": statusCode,
    "headers": {
      "Content-Type": "text/html; charset=utf-8"
    },
    "body": responseBody,
  };
  return response;
}
Then you can create a CloudFront distribution for your API Gateway, using your custom domain.
For more details check this answer:
https://stackoverflow.com/a/57913763/2444386
I'm trying to upload a video file to an AWS S3 bucket without the user logging in via Google or anything (the user shouldn't have to do anything!). This page suggests that there are 3 options:
Using Amazon Cognito to authenticate users
Using web identity federation to authenticate users
Hard-coded in your application
The 3rd option works for me, but it is not acceptable in a production environment.
The 1st and 2nd options both require a lot of hassle, not just for developers but also for the user, if I understood correctly.
// Not safe, but working!
/*AWS.config.update({
  accessKeyId: "myaccesskey",
  secretAccessKey: "mysecretkey",
  "region": "eu-central-1"
});*/

AWS.config.region = 'eu-central-1';

var s3 = new AWS.S3();
var params = {
  Bucket: 'mybucketname',
  Key: file.name,
  Body: file
};

s3.putObject(params, function (err, res) {
  if (err) {
    console.log("Error uploading data: ", err);
  } else {
    console.log("Successfully uploaded data to myBucket/myKey");
  }
});
I also tried loading the credentials from a JSON file, but this did not work either.
AWS.config.loadFromPath('./config.json');
The error with this approach is:
aws-sdk-2.4.13.min.js?ver=0.0.1:5 Uncaught TypeError:
n.FileSystemCredentials is not a constructor
Will I need to use server-side scripts to upload the file (i.e., create a proxy between my JS and S3)?
Any help appreciated.
Thanks.
EDIT: someone suggested I follow http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html to make safe requests via the browser/JS. Has anyone tried it yet, perhaps? Any JS examples anywhere?
Posting the answer (if it can be called that), as I noticed this was never answered: I actually used the server-side AWS libraries to make this work, and the credentials were safely stored as ENV variables. I also changed some functionality so that I did not need to use JS to make this work.
So I did not need to fiddle with the 1st and 2nd options mentioned above.
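For reference, a minimal sketch of that server-side approach, assuming Node.js and the v2 aws-sdk with credentials supplied via environment variables (the bucket variable name is a placeholder):
// Server-side sketch: the SDK reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: process.env.AWS_REGION });

async function uploadVideo(fileBuffer, key) {
  return s3.upload({
    Bucket: process.env.S3_BUCKET, // placeholder environment variable name
    Key: key,
    Body: fileBuffer
  }).promise();
}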
If someone finds a JS solution for the above two methods I'll gladly accept their answer.
I've got an application written in Sails JS.
I want to set caching for my S3 files.
I'm not really sure where to start; do I need to do something with my image GET function? Has anyone had any experience setting up caching for S3 assets?
This is my GET function for user avatars:
var SkipperDisk = require('skipper-s3');
var fileAdapter = SkipperDisk({
  key: 'xxx',
  secret: 'xxx+xxx',
  bucket: 'xxx-xxx'
});

fileAdapter.read(user.avatarFd)
  .on('error', function(err) {
    // return res.serverError(err);
    return res.redirect('/noavatar.gif');
  })
  .pipe(res);
Why not enable static website hosting for your S3 bucket? Upload the images to a bucket that can be referenced as images.yourapp.com/unique-image-path.
Store the avatar URL for each user in the database.
Return the image URL instead of returning the image itself.
Doing so might help you take advantage of client-side caching.
While uploading a file to S3, you can set metadata for the file. Set the Expires header to a future date to help caching. You can also set the Cache-Control header. Skipper-s3 supports setting headers for a file while uploading it to S3.
https://github.com/balderdashy/skipper#uploading-files-to-s3
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html#RESTObjectPUT-requests
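As an illustration of the headers approach mentioned above, a hedged sketch of a Sails upload action; the headers option follows the skipper-s3 documentation linked above, and the credentials and bucket name are placeholders:
// Sketch: upload an avatar to S3 with caching headers via skipper-s3 (credentials/bucket are placeholders)
uploadAvatar: function (req, res) {
  req.file('avatar').upload({
    adapter: require('skipper-s3'),
    key: 'YOUR_AWS_KEY',
    secret: 'YOUR_AWS_SECRET',
    bucket: 'your-avatar-bucket',
    headers: {
      'Cache-Control': 'max-age=31536000, public', // cache for one year
      'Expires': new Date(Date.now() + 31536000000).toUTCString()
    }
  }, function (err, uploadedFiles) {
    if (err) return res.serverError(err);
    return res.ok({ files: uploadedFiles });
  });
}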