I am working on uploading image files to an AWS S3 bucket using the putObject method in a Lambda function.
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property
But putObject is taking more than 20 seconds to upload a 5 MB image file, and
all of my resources are hosted in the same region, so enabling the accelerate endpoint makes no difference either.
http://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-speed-comparsion.html?region=REGION-NAME&origBucketName=BUCKET-NAME
Is this the expected upload time, or is there some other way to speed up the upload?
We created a VPC endpoint for the S3 service, and there is now a noticeable improvement in upload performance.
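Independent of the networking change, the Lambda-side call can also be switched from putObject to the SDK's managed uploader, which behaves like a single PUT for small bodies but transparently switches to a concurrent multipart upload for larger ones. A minimal sketch for a Node.js handler (AWS SDK for JavaScript v2; the bucket name, key, and base64-encoded body are assumptions):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  // Assumption: the image arrives base64-encoded in the request body.
  const imageBuffer = Buffer.from(event.body, 'base64');

  // upload() behaves like putObject for a 5 MB object, but automatically
  // performs a concurrent multipart upload once the body grows larger.
  await s3.upload({
    Bucket: 'my-image-bucket',   // placeholder bucket name
    Key: 'images/photo.jpg',     // placeholder key
    Body: imageBuffer,
    ContentType: 'image/jpeg'
  }).promise();

  return { statusCode: 200, body: JSON.stringify({ uploaded: true }) };
};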
Related
I'm hosting videos on my site via S3 buckets. I have a video that keeps buffering. The video is 4K and 6.5 GB. Smaller videos shot at a lower resolution do not buffer. I'm having a hard time deciding whether it's the video's size in GB or its 4K resolution that makes it buffer. Does anyone know what makes a video buffer when served from an S3 bucket: the size of the video or its resolution? Also, does anyone know how to stop the buffering? Yes, I've already tried using CloudFront, but with the same result.
Resolution
For large files, Amazon S3 might separate the file into multiple uploads to maximize the upload speed. The Amazon S3 console might time out during large uploads because of session timeouts. Instead of using the Amazon S3 console, try uploading the file using the AWS Command Line Interface (AWS CLI) or an AWS SDK.
Note: If you use the Amazon S3 console, the maximum file size for uploads is 160 GB. To upload a file that is larger than 160 GB, use the AWS CLI, AWS SDK, or Amazon S3 REST API.
AWS CLI
First, install and configure the AWS CLI. Be sure to configure the AWS CLI with the credentials of an AWS Identity and Access Management (IAM) user or role. The IAM user or role must have the correct permissions to access Amazon S3.
Important: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.
To upload a large file, run the cp command:
aws s3 cp cat.png s3://docexamplebucket
Note: The file must be in the same directory that you're running the command from.
When you run a high-level (aws s3) command such as aws s3 cp, Amazon S3 automatically performs a multipart upload for large objects. In a multipart upload, a large file is split into multiple parts and uploaded separately to Amazon S3. After all the parts are uploaded, Amazon S3 combines the parts into a single file. A multipart upload can result in faster uploads and lower chances of failure with large files.
For more information on multipart uploads, see How do I use the AWS CLI to perform a multipart upload of a file to Amazon S3?
AWS SDK
For a programmable approach to uploading large files, consider using an AWS SDK, such as the AWS SDK for Java. For an example operation, see Upload an object using the AWS SDK for Java.
Note: For a full list of AWS SDKs and programming toolkits for developing and managing applications, see Tools to build on AWS.
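The knowledge-center article points to a Java example; purely as an illustration, here is a minimal sketch of the same multipart idea using the low-level API of the AWS SDK for JavaScript v2 (bucket, key, and file path are placeholders, and in practice the SDK's managed s3.upload() does all of this for you):

const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function multipartUpload(filePath, bucket, key) {
  const partSize = 8 * 1024 * 1024; // 8 MB parts (the S3 minimum part size is 5 MB)
  const fileBuffer = fs.readFileSync(filePath);

  // 1. Start the multipart upload and get an upload ID.
  const { UploadId } = await s3.createMultipartUpload({ Bucket: bucket, Key: key }).promise();

  // 2. Upload each part (shown sequentially here; parts can also be sent in parallel).
  const parts = [];
  for (let start = 0, partNumber = 1; start < fileBuffer.length; start += partSize, partNumber++) {
    const { ETag } = await s3.uploadPart({
      Bucket: bucket,
      Key: key,
      UploadId,
      PartNumber: partNumber,
      Body: fileBuffer.slice(start, start + partSize)
    }).promise();
    parts.push({ ETag, PartNumber: partNumber });
  }

  // 3. Tell S3 to stitch the parts back into a single object.
  await s3.completeMultipartUpload({
    Bucket: bucket,
    Key: key,
    UploadId,
    MultipartUpload: { Parts: parts }
  }).promise();
}

multipartUpload('./big-file.bin', 'my-example-bucket', 'big-file.bin');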
For details:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-large-file-uploads/
I am trying to build a file upload and download app using AWS API Gateway, AWS Lambda, and S3 for storage.
AWS Lambda caps the payload size at 6 MB and API Gateway at 10 MB.
Therefore we decided to use pre-signed URLs for uploading and downloading files.
Step 1 - The client sends the list of filenames (let's say 5 files) to Lambda.
Step 2 - Lambda creates and returns the list of pre-signed (PUT) URLs for those files (5 URLs).
Step 3 - The client uploads the files to S3 using the URLs it received.
Note - The filenames are the S3 bucket keys.
A similar approach is used for downloading files.
Now the issue is latency: uploads take quite a long time, and performance suffers.
The question is: is the above approach the only way to do file upload and download with Lambda?
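For reference, Step 2 of the flow described above might look roughly like this in a Node.js Lambda (AWS SDK for JavaScript v2; the bucket name and the shape of the incoming filename list are assumptions):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  // Assumed input shape: { "filenames": ["a.jpg", "b.jpg", ...] }
  const { filenames } = JSON.parse(event.body);

  // One pre-signed PUT URL per filename; the filename doubles as the object key.
  const urls = filenames.map((name) =>
    s3.getSignedUrl('putObject', {
      Bucket: 'my-upload-bucket', // placeholder
      Key: name,
      Expires: 300                // URL valid for 5 minutes
    })
  );

  return { statusCode: 200, body: JSON.stringify({ urls }) };
};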
This looks like a case for S3 Transfer Acceleration. You'll still create pre-signed URLs, but you enable this setting on the bucket, which reduces latency.
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
Alternatively, you can use CloudFront with an S3 origin to upload and download files. You might have to re-architect your solution, but with CloudFront and the AWS networking backbone, latency can be reduced significantly.
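If you go the Transfer Acceleration route, the only code change on the Lambda side is to construct the S3 client with the accelerate endpoint enabled; the pre-signed URLs it generates then point at the accelerated hostname (a sketch, assuming acceleration is already enabled on the bucket):

const AWS = require('aws-sdk');

// useAccelerateEndpoint makes getSignedUrl() return URLs of the form
// https://BUCKET.s3-accelerate.amazonaws.com/... instead of the regional endpoint.
const s3 = new AWS.S3({ useAccelerateEndpoint: true });

const url = s3.getSignedUrl('putObject', {
  Bucket: 'my-upload-bucket', // placeholder; acceleration must be enabled on this bucket
  Key: 'example.jpg',
  Expires: 300
});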
I am using a RESTful API; the API provider hosts more than 80 GB of images in an S3 bucket.
I need to download these images and upload them to my own AWS S3 bucket, which is a time-consuming job.
Is there any way to copy the images from the API to my S3 bucket instead of downloading and re-uploading them?
I talked with the API's support; they say I am given the image URL, so it's up to me how I handle it.
I am using Laravel.
Is there a way to take the source image URLs and move the images directly to S3 instead of downloading them first and uploading again?
Thanks
I think downloading and re-uploading across accounts would be inefficient and pricey for the API provider. Instead, I would talk to the API provider and try to replicate the images across accounts.
After replication, you can use Amazon S3 Inventory for various information about the objects in the bucket.
Configuring replication when the source and destination buckets are owned by different accounts
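Purely as a hypothetical sketch of what the provider-side configuration could look like (AWS SDK for JavaScript v2; the role ARN, bucket names, and account ID are placeholders, both buckets need versioning enabled, and the destination bucket policy must trust the replication role):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// The API provider, as the source bucket owner, configures replication
// into the destination account's bucket.
s3.putBucketReplication({
  Bucket: 'provider-source-bucket',
  ReplicationConfiguration: {
    Role: 'arn:aws:iam::111111111111:role/replication-role',
    Rules: [{
      Status: 'Enabled',
      Priority: 1,
      Filter: { Prefix: '' },                          // replicate everything
      DeleteMarkerReplication: { Status: 'Disabled' },
      Destination: {
        Bucket: 'arn:aws:s3:::my-destination-bucket',
        Account: '222222222222',                       // destination (your) account ID
        AccessControlTranslation: { Owner: 'Destination' } // destination account owns the copies
      }
    }]
  }
}).promise();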
You want "S3 Batch Operations". Search for "xcopy".
You do not say how many images you have, but 1,000 at 80 GB is 80 TB, and at that size you would not even want to download them file by file to a temporary EC2 instance in the same region (which might otherwise be a one- or two-day option); you will still pay for ingress/egress.
I am sure AWS will do this in an ad-hoc manner for a price, as they would do if you were migrating from the platform.
It may also be easier to allow access to the original bucket from the alternative account, but this is not the question.
I tried uploading a file of size ~220 MB to S3 through the AWS console, and it took a lot of time; the upload speed was around 500 Kbps on average. I know my network isn't the bottleneck, because I'm able to upload the same file through the Google Drive console in about 47 seconds.
I've also tried uploading to the same location through the AWS S3 CLI, and it is much faster, about 2 minutes. I was wondering if there is an issue with doing uploads directly in the S3 console. I'm also thinking this is a risk, because I want my application to upload to S3 using a signed URL, but that takes a similar amount of time to the console upload.
Google Drive upload: 49 seconds
S3 console upload: REALLY SLOW (>10 minutes before I gave up).
AWS cli (no custom settings): ~ 2 minutes.
Upload through my UI: (similar to s3 console upload time).
You should be using the S3 multipart API for uploading large files to S3.
The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object.
The reason your CLI upload is quicker is that it automatically uses the multipart API internally for big objects.
The recommended method is to use aws s3 commands (such as aws s3 cp) for multipart uploads and downloads, because these aws s3 commands automatically perform multipart uploading and downloading based on the file size.
Source: https://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html
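If the application upload path goes through the SDK rather than a single signed PUT, the CLI's behavior can be approximated with the SDK's managed uploader and explicit part size and concurrency. A sketch with the AWS SDK for JavaScript v2 (bucket, key, and file path are placeholders):

const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Stream a ~220 MB file as 16 MB parts with up to 6 parts in flight,
// roughly what the CLI does automatically for large objects.
const upload = s3.upload(
  {
    Bucket: 'my-example-bucket',           // placeholder
    Key: 'large-file.bin',                 // placeholder
    Body: fs.createReadStream('./large-file.bin')
  },
  { partSize: 16 * 1024 * 1024, queueSize: 6 }
);

upload.on('httpUploadProgress', (progress) => console.log(`${progress.loaded} bytes sent`));
upload.promise().then(() => console.log('done'));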
I have to upload video files to an S3 bucket from my React web application. I am currently developing a simple React application, and from it I am trying to upload video files to an S3 bucket, so I have considered two approaches for implementing the upload part.
1) Amazon EC2 instance: From the front end, I hit an API whose server runs on an Amazon EC2 instance, so the EC2 instance uploads the files to the S3 bucket.
2) Amazon API Gateway + Lambda: I send the local files directly to an S3 bucket through API Gateway and a Lambda function by calling the HTTPS URL with the data.
But I am not happy with either method because both are costly. I have to upload files larger than 200 MB to an S3 bucket, and I don't know how to optimize this upload process. The video upload is a necessary part of my application, so I have to handle it carefully while improving performance and keeping it cost-effective.
If someone knows a solution, please share it; it will be very helpful for me to continue.
Thanks in advance.
You can upload files directly from your React app to S3 using the AWS JavaScript SDK and Cognito identity pools, and for the optimization part you can use the AWS multipart upload capability to upload a file in multiple parts. I'm providing links to read about it further:
AWS javascript upload image example
cognito identity pools
multipart upload to S3
Also take a look at the managed upload helper built for the AWS JavaScript SDK:
aws managed upload javascript
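Pulling those pieces together, a browser-side sketch might look like this (the identity pool ID, region, and bucket name are placeholders, and it assumes the identity pool's role grants s3:PutObject on the bucket):

import AWS from 'aws-sdk';

// Credentials come from a Cognito identity pool, so no secret keys ship with the app.
AWS.config.update({
  region: 'us-east-1', // placeholder region
  credentials: new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000' // placeholder
  })
});

const s3 = new AWS.S3();

// file is a File object taken from an <input type="file"> element.
export function uploadVideo(file) {
  return s3.upload(
    {
      Bucket: 'my-video-bucket', // placeholder
      Key: `videos/${file.name}`,
      Body: file,
      ContentType: file.type
    },
    { partSize: 10 * 1024 * 1024, queueSize: 4 } // multipart: 10 MB parts, 4 in parallel
  ).promise();
}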
In order to bypass EC2, you can use a pre-authenticated (pre-signed) POST request to upload your content directly from the browser to the S3 bucket.
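Such a pre-signed POST can be generated server-side, for example in a small Lambda, and then used by the browser as a plain form upload. A sketch with the AWS SDK for JavaScript v2 (bucket name, object key, and the size limit are placeholders):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async () => {
  const post = s3.createPresignedPost({
    Bucket: 'my-video-bucket',                      // placeholder
    Fields: { key: 'videos/example.mp4' },          // placeholder object key
    Expires: 600,                                   // form valid for 10 minutes
    Conditions: [['content-length-range', 0, 500 * 1024 * 1024]] // cap uploads at 500 MB
  });

  // The client POSTs a multipart/form-data request to post.url,
  // including every entry in post.fields plus the file itself.
  return { statusCode: 200, body: JSON.stringify(post) };
};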