I’m hosting videos on my site via S3 buckets. I have a video that keeps buffering. The video is 4K and 6.5 GB; smaller videos shot at a lower resolution do not buffer. I’m having a hard time deciding whether it’s the video’s size in GB or its 4K resolution that’s making it buffer. Does anyone know what makes a video buffer when served from an S3 bucket? Is it the size of the video or the resolution of the video? Also, does anyone know how to stop the video buffering? Yes, I’ve already tried using CloudFront, but I got the same result.
Resolution
For large files, Amazon S3 might separate the file into multiple uploads to maximize the upload speed. The Amazon S3 console might time out during large uploads because of session timeouts. Instead of using the Amazon S3 console, try uploading the file using the AWS Command Line Interface (AWS CLI) or an AWS SDK.
Note: If you use the Amazon S3 console, the maximum file size for uploads is 160 GB. To upload a file that is larger than 160 GB, use the AWS CLI, AWS SDK, or Amazon S3 REST API.
AWS CLI
First, install and configure the AWS CLI. Be sure to configure the AWS CLI with the credentials of an AWS Identity and Access Management (IAM) user or role. The IAM user or role must have the correct permissions to access Amazon S3.
Important: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.
To upload a large file, run the cp command:
aws s3 cp cat.png s3://docexamplebucket
Note: The file must be in the same directory that you're running the command from.
When you run a high-level (aws s3) command such as aws s3 cp, Amazon S3 automatically performs a multipart upload for large objects. In a multipart upload, a large file is split into multiple parts and uploaded separately to Amazon S3. After all the parts are uploaded, Amazon S3 combines the parts into a single file. A multipart upload can result in faster uploads and lower chances of failure with large files.
For more information on multipart uploads, see How do I use the AWS CLI to perform a multipart upload of a file to Amazon S3?
AWS SDK
For a programmable approach to uploading large files, consider using an AWS SDK, such as the AWS SDK for Java. For an example operation, see Upload an object using the AWS SDK for Java.
Note: For a full list of AWS SDKs and programming toolkits for developing and managing applications, see Tools to build on AWS.
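If a Node.js script suits you better than the CLI, the Upload helper in the AWS SDK for JavaScript v3 (the @aws-sdk/lib-storage package) gives you the same automatic multipart behavior as aws s3 cp. A minimal sketch, assuming an ES module so top-level await works; the region, bucket, key, and file path are placeholders:

import { createReadStream } from "node:fs";
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";

const client = new S3Client({ region: "us-east-1" }); // placeholder region

// Upload splits the stream into parts, sends several parts concurrently,
// and completes the multipart upload once every part has been received.
const upload = new Upload({
  client,
  params: {
    Bucket: "docexamplebucket",            // placeholder bucket
    Key: "videos/big-video.mp4",           // placeholder key
    Body: createReadStream("./big-video.mp4"),
  },
  partSize: 10 * 1024 * 1024, // 10 MiB parts (the minimum part size is 5 MiB)
  queueSize: 4,               // number of parts in flight at once
});

upload.on("httpUploadProgress", (p) => console.log(`${p.loaded} bytes sent`));
await upload.done();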
For details, see https://aws.amazon.com/premiumsupport/knowledge-center/s3-large-file-uploads/
Related
I am still new to the cloud and when I first started I used Clever Cloud.
But now I want to migrate to AWS, and I have data that I want to move from Cellar to Amazon S3.
I am not sure what the conventions or best practices are for this, so any documentation or an explanation of how I can proceed would be very much appreciated.
Thank you very much.
Clever Cloud Cellar is an Amazon S3 compatible service. This means it operates pretty much the same as S3.
Clever Cloud is not able to communicate directly to Amazon S3, and Amazon S3 is not able to communicate directly to Cellar. Therefore, you will need to:
Download the files from Cellar using s3cmd or the AWS Command-Line Interface (CLI) (see the instructions on the Clever Cloud Cellar website)
Upload the files to an Amazon S3 bucket using the AWS CLI and your AWS credentials
This activity would be most efficient if performed from an Amazon EC2 instance since it has high bandwidth connectivity to Amazon S3.
Note that there will be Data Transfer costs from Clever Cloud for "Outbound traffic".
I suggest you start by getting s3cmd or the AWS CLI working with Cellar to download a single file, and then get the AWS CLI working with Amazon S3 to upload a single file. You can then use the sync command to copy whole directories of files.
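If you would rather script the copy than drive the CLI by hand, the same idea works in code: point one S3 client at Cellar's endpoint and another at AWS, then stream objects across. A rough sketch with the AWS SDK for JavaScript v3; the endpoint, regions, bucket names, and environment variables are placeholders, and a real migration would paginate because ListObjectsV2 returns at most 1,000 keys per call:

import {
  S3Client,
  ListObjectsV2Command,
  GetObjectCommand,
  PutObjectCommand,
} from "@aws-sdk/client-s3";

// Cellar is S3-compatible, so the standard SDK works with a custom endpoint.
const cellar = new S3Client({
  region: "us-east-1",                      // required by the SDK; Cellar ignores it
  endpoint: "https://cellar.example.com",   // placeholder: use your Cellar add-on's endpoint
  forcePathStyle: true,                     // assumption: path-style addressing for the compatible service
  credentials: {
    accessKeyId: process.env.CELLAR_KEY_ID!,        // placeholder env vars
    secretAccessKey: process.env.CELLAR_KEY_SECRET!,
  },
});
const aws = new S3Client({ region: "eu-west-1" });  // placeholder region

const listed = await cellar.send(new ListObjectsV2Command({ Bucket: "my-cellar-bucket" }));
for (const obj of listed.Contents ?? []) {
  // Stream each object out of Cellar and straight into the AWS bucket.
  const { Body, ContentLength } = await cellar.send(
    new GetObjectCommand({ Bucket: "my-cellar-bucket", Key: obj.Key! }),
  );
  await aws.send(new PutObjectCommand({
    Bucket: "my-aws-bucket",  // placeholder bucket
    Key: obj.Key!,
    Body,
    ContentLength,            // a streaming Body needs an explicit length for PutObject
  }));
}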
I can't find some information about Amazon S3; I hope you can help me. When is a file available for a user to download after a POST upload? I mean a small JSON file that doesn't require much processing. Is it available to download immediately after uploading? Or does Amazon S3 work in some kind of sessions, so that it always takes a few hours?
According to the doc,
Amazon S3 provides strong read-after-write consistency for PUTs and DELETEs of objects in your Amazon S3 bucket in all AWS Regions.
This means that your objects are available to download immediately after they're uploaded.
An object that is uploaded to an Amazon S3 bucket is available right away. There is no time period that you have to wait. That means if you are writing a client app that uses these objects, you can access them as soon as they are uploaded.
In case anyone is wondering how to programmatically interact with objects in an Amazon S3 bucket through code, here is an example of uploading and reading objects in an Amazon S3 bucket from a client web app:
Creating an example AWS photo analyzer application using the AWS SDK for Java
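To make the read-after-write behavior concrete, here is a small sketch with the AWS SDK for JavaScript v3 that writes a JSON object and reads it straight back; the region, bucket, and key are placeholders:

import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });  // placeholder region
const Bucket = "docexamplebucket";                     // placeholder bucket
const Key = "data/settings.json";                      // placeholder key

await client.send(new PutObjectCommand({
  Bucket,
  Key,
  Body: JSON.stringify({ theme: "dark" }),
  ContentType: "application/json",
}));

// Strong read-after-write consistency: this GET sees the object immediately.
const { Body } = await client.send(new GetObjectCommand({ Bucket, Key }));
console.log(await Body!.transformToString());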
I tried uploading a file of about 220 MB to S3. I tried doing this through the AWS console and it took a lot of time; the upload speed was around 500 Kbps on average. I know the bottleneck isn't my network, because I'm able to upload the same file through the Google Drive console in about 47 seconds.
I've also tried uploading to the same directory through the AWS S3 CLI, and it is much faster, about 2 minutes. I was wondering whether there is a known issue with doing uploads directly in the S3 console. I'm also thinking this could be a risk, because I want my application to upload to S3 using a signed URL, but that is taking a similar amount of time to the console upload.
Google Drive upload: 49 seconds
S3 console upload: REALLY SLOW (>10 minutes before I gave up).
AWS cli (no custom settings): ~ 2 minutes.
Upload through my UI: similar to the S3 console upload time.
You should be using the S3 multipart upload API for uploading large files to S3.
The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object.
The reason your CLI upload is quicker is that, for big objects, the CLI internally uses the multipart API automatically.
The recommended method is to use aws s3 commands (such as aws s3 cp) for multipart uploads and downloads, because these aws s3 commands automatically perform multipart uploading and downloading based on the file size.
Source: https://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html
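For the curious, this is roughly what the CLI is doing under the hood. A sketch of the low-level multipart calls with the AWS SDK for JavaScript v3; the region, bucket, key, and file path are placeholders, and in practice the high-level CLI commands or the SDK's managed Upload helper are the easier route:

import { createReadStream, statSync } from "node:fs";
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });  // placeholder region
const Bucket = "docexamplebucket";   // placeholder bucket
const Key = "big-file.bin";          // placeholder key
const filePath = "./big-file.bin";   // placeholder path
const partSize = 8 * 1024 * 1024;    // 8 MiB; every part except the last must be at least 5 MiB

// 1. Start the multipart upload and remember its id.
const mpu = await client.send(new CreateMultipartUploadCommand({ Bucket, Key }));
const UploadId = mpu.UploadId!;

// 2. Upload the file in fixed-size parts, collecting each part's ETag.
const fileSize = statSync(filePath).size;
const parts: { ETag?: string; PartNumber: number }[] = [];
for (let partNumber = 1, offset = 0; offset < fileSize; partNumber++, offset += partSize) {
  const end = Math.min(offset + partSize, fileSize) - 1;  // inclusive byte range
  const { ETag } = await client.send(new UploadPartCommand({
    Bucket, Key, UploadId,
    PartNumber: partNumber,
    Body: createReadStream(filePath, { start: offset, end }),
    ContentLength: end - offset + 1,
  }));
  parts.push({ ETag, PartNumber: partNumber });
}

// 3. Ask S3 to stitch the parts together into a single object.
await client.send(new CompleteMultipartUploadCommand({
  Bucket, Key, UploadId,
  MultipartUpload: { Parts: parts },
}));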
I have to upload video files into an S3 bucket from my React web application. I am currently developing a simple React application, and from it I am trying to upload video files into an S3 bucket. I have settled on two candidate approaches for implementing the upload:
1) Amazon EC2 instance: from the front end I hit an API whose server runs on an Amazon EC2 instance, and that instance uploads the files into the S3 bucket.
2) Amazon API Gateway + Lambda: I send the local files directly into an S3 bucket through API Gateway and a Lambda function by calling the HTTPS URL with the data.
But I am not happy with these two methods, because both are costly. I have to upload files of more than 200 MB into an S3 bucket, and I don't know how I can optimize this uploading process. The video upload is essential to my application, so I have to be very careful with this part, and I also have to make it performant and cost-effective.
If anyone knows a solution, please share it; it would be very helpful for me as I continue.
Thanks in advance.
You can upload files directly from your React app to S3 using the AWS JavaScript SDK and Cognito identity pools. For the optimization part, you can use the AWS multipart upload capability to upload the file in multiple parts. I'm providing links to read about it further:
AWS javascript upload image example
cognito identity pools
multipart upload to S3
Also take a look at the AWS managed upload utility made for the JavaScript SDK:
aws managed upload javascript
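Putting those pieces together, here is a minimal browser-side sketch with the AWS SDK for JavaScript v3: Cognito identity pool credentials plus the managed Upload helper, which performs the multipart upload automatically. The region, identity pool ID, and bucket are placeholders:

import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { fromCognitoIdentityPool } from "@aws-sdk/credential-providers";

const region = "us-east-1";  // placeholder region
const client = new S3Client({
  region,
  // Temporary credentials scoped by the identity pool's IAM role.
  credentials: fromCognitoIdentityPool({
    clientConfig: { region },
    identityPoolId: "us-east-1:00000000-0000-0000-0000-000000000000",  // placeholder pool id
  }),
});

// `file` would come from an <input type="file"> change event in the React component.
export async function uploadVideo(file: File) {
  const upload = new Upload({
    client,
    params: { Bucket: "my-video-bucket", Key: `uploads/${file.name}`, Body: file },  // placeholder bucket
    partSize: 10 * 1024 * 1024,  // 10 MiB parts
    queueSize: 4,                // parts uploaded in parallel
  });
  upload.on("httpUploadProgress", (p) => console.log(`${p.loaded} / ${p.total}`));
  await upload.done();
}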
To bypass the EC2 instance, you can use a pre-authenticated POST request to upload your content directly from the browser to the S3 bucket; a sketch follows.
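A sketch of that approach, again with the AWS SDK for JavaScript v3: a small backend (a Lambda, for example) creates the pre-authenticated POST policy, and the browser posts the file straight to the bucket. The region, bucket, key, and size cap are placeholders:

// Server side: create the POST policy and hand { url, fields } to the browser.
import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

const client = new S3Client({ region: "us-east-1" });  // placeholder region
const { url, fields } = await createPresignedPost(client, {
  Bucket: "my-video-bucket",  // placeholder bucket
  Key: "uploads/video.mp4",   // placeholder key
  Conditions: [["content-length-range", 0, 500 * 1024 * 1024]],  // reject anything over 500 MB
  Expires: 600,  // the policy is valid for 10 minutes
});

// Browser side: POST the file directly to S3 with the returned policy fields.
export async function postToS3(file: File) {
  const form = new FormData();
  Object.entries(fields).forEach(([k, v]) => form.append(k, v));
  form.append("file", file);  // the file entry must be the last field
  await fetch(url, { method: "POST", body: form });
}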
Can aws s3 sync use the multipart upload option?
I am syncing an on-premises server to S3 using aws s3 sync, but the speed is very slow.
Assuming you mean the aws command (and not e.g. s3cmd): Yes, sync uses multipart upload by default. From the docs:
All high-level commands that involve uploading objects into an Amazon S3 bucket (aws s3 cp, aws s3 mv, and aws s3 sync) automatically perform a multipart upload when the object is large
I guess the slowness is caused by another factor: e.g. your bandwidth is low (check it with a tool like speedtest) or it is already saturated.