All my work accessing AWS S3 is in the region us-east-2, and the AZ is us-east-2a.
But I saw some throttling complaints from S3, so I am wondering: if I move some of my work to another AZ such as us-east-2b, could that mitigate the problem? (Or will it not help, since us-east-2a and us-east-2b actually point to the same endpoint?)
Thank you.
The throttling is not per AZ; it is per bucket (more precisely, per partitioned prefix within the bucket). The quote below is from the AWS documentation.
You can send 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per partitioned prefix in an S3 bucket. When you have an increased request rate to your bucket, Amazon S3 might return 503 Slow Down errors while it scales to support the request rate. This scaling process is called partitioning.
To avoid or minimize 503 Slow Down responses, verify that the number of unique prefixes in your bucket supports your required transactions per second (TPS). This helps your bucket leverage the scaling and partitioning capabilities of Amazon S3. Additionally, be sure that the objects and the requests for those objects are distributed evenly across the unique prefixes. For more information, see Best Practices Design Patterns: Optimizing Amazon S3 Performance.
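To make the quoted advice concrete, here is a minimal sketch (the bucket name and key scheme are made up) of spreading keys across hash-derived prefixes so that objects, and the requests for them, distribute evenly:

```python
import hashlib

import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"  # hypothetical bucket name


def sharded_key(original_key: str, shards: int = 16) -> str:
    """Prepend a short hash-derived prefix so keys (and the requests
    for them) spread evenly across multiple S3 partitions."""
    digest = hashlib.md5(original_key.encode("utf-8")).hexdigest()
    shard = int(digest[:4], 16) % shards
    return f"{shard:02x}/{original_key}"


# "logs/2024/01/app.log" becomes e.g. "0a/logs/2024/01/app.log"
s3.put_object(Bucket=BUCKET, Key=sharded_key("logs/2024/01/app.log"), Body=b"...")
```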
If possible, enable retries with exponential backoff in your S3 client. If the application uploading to S3 is performance sensitive, the suggestion would be to hand off to a background application that can upload to S3 at a later time.
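With boto3, retries with exponential backoff can be enabled through the client config; a minimal sketch, with placeholder bucket and file names:

```python
import boto3
from botocore.config import Config

# "standard" retry mode retries throttling errors such as 503 Slow Down
# with exponential backoff and jitter; "adaptive" additionally rate-limits
# the client itself.
retry_config = Config(retries={"max_attempts": 10, "mode": "standard"})

s3 = boto3.client("s3", config=retry_config)
s3.upload_file("local-file.bin", "my-example-bucket", "uploads/local-file.bin")
```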
Related
I'm hosting a static website in Amazon S3 with CloudFront. Is there a way to set a limit on how many reads (for example, per month) will be allowed for my Amazon S3 bucket, in order to make sure I don't go above my allocated budget?
If you are concerned about going over a budget, I would recommend Creating a Billing Alarm to Monitor Your Estimated AWS Charges.
AWS is designed for large-scale organizations that care more about providing a reliable service to customers than staying within a particular budget. For example, if their allocated budget was fully consumed, they would not want to stop providing services to their customers. They might, however, want to tweak their infrastructure to reduce costs in the future, such as changing the Price Class of a CloudFront distribution or using AWS WAF to prevent bots from consuming too much traffic.
Your static website will be rather low-cost. The biggest factor will likely be Data Transfer rather than charges for Requests. Changing the Price Class should assist with this. However, the only true way to stop accumulating Data Transfer charges is to stop serving content.
You could activate CloudTrail data (read) events for the bucket, create a CloudWatch Events rule that triggers an AWS Lambda function to increment a per-object read count in an Amazon DynamoDB table, and restrict access to an object once a certain number of reads has been reached.
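A rough sketch of that Lambda handler, assuming a hypothetical DynamoDB table named ObjectReadCounts (partition key ObjectKey) and a made-up read budget; the event shape is that of a CloudTrail API call delivered via CloudWatch Events:

```python
import boto3

dynamodb = boto3.client("dynamodb")
s3 = boto3.client("s3")

TABLE = "ObjectReadCounts"  # hypothetical table, partition key "ObjectKey"
MAX_READS = 1000            # hypothetical per-object read budget


def handler(event, context):
    # CloudTrail data events delivered via CloudWatch Events carry the
    # request parameters of the S3 API call.
    params = event["detail"]["requestParameters"]
    bucket, key = params["bucketName"], params["key"]

    # Atomically increment the per-object read counter.
    resp = dynamodb.update_item(
        TableName=TABLE,
        Key={"ObjectKey": {"S": key}},
        UpdateExpression="ADD ReadCount :one",
        ExpressionAttributeValues={":one": {"N": "1"}},
        ReturnValues="UPDATED_NEW",
    )
    reads = int(resp["Attributes"]["ReadCount"]["N"])

    # Once the budget is exhausted, make the object private again.
    if reads >= MAX_READS:
        s3.put_object_acl(Bucket=bucket, Key=key, ACL="private")
```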
What you're asking for is a very typical question in AWS. Unfortunately, with near-infinite scale comes near-infinite spend.
While you can put AWS WAF in front, it is really meant for security rather than scale restriction. From a cost perspective, I'd be more worried about the bandwidth charges than about the S3 request costs.
Plus, once you add things like CloudFront or Lambda, it gets hard to cap all of this.
The best way to limit spend is to put billing alerts on your account -- and you can tier them, so you get $10, $20, and $100 alerts, up to the point you're uncomfortable with. Then either manually disable the website -- or set up a Lambda function to disable it for you.
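A minimal sketch of such tiered alarms using boto3 (the SNS topic ARN is a placeholder, the account must have "Receive Billing Alerts" enabled, and billing metrics are published only in us-east-1):

```python
import boto3

# Billing metrics live only in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:billing-alerts"  # hypothetical

for threshold in (10, 20, 100):
    cloudwatch.put_metric_alarm(
        AlarmName=f"billing-over-{threshold}-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,  # billing metrics update only every few hours
        EvaluationPeriods=1,
        Threshold=float(threshold),
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[SNS_TOPIC_ARN],
    )
```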
I have a third-party client to which I have exposed my S3 bucket so that it can upload files into it. I want to rate-limit the bucket so that, in case of an anomaly at their end, I don't receive a flood of file upload requests on my bucket, which is connected to my SQS queue and DynamoDB; that would lead to throttling at the DB and in the queue as well. Also, I would be charged heftily. How do I prevent this?
It is not possible to configure a rate limit for Amazon S3. However, in some situations, Amazon S3 might itself impose a rate limit (returning 503 Slow Down errors) when the request rate is very high.
A way to handle this would be to process all uploads through Amazon API Gateway and your back-end service. However, this might add more overhead and cost than you are trying to save.
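If you do front the uploads with API Gateway, a usage plan can enforce rate and quota limits on the API keys you hand to the third party. A minimal sketch, with hypothetical API and stage IDs:

```python
import boto3

apigw = boto3.client("apigateway")

# Hypothetical IDs for an API that fronts the uploads.
API_ID = "a1b2c3d4e5"
STAGE = "prod"

# A usage plan caps steady-state and burst request rates for the
# API keys attached to it, and can also impose a hard quota.
plan = apigw.create_usage_plan(
    name="upload-rate-limit",
    apiStages=[{"apiId": API_ID, "stage": STAGE}],
    throttle={"rateLimit": 50.0, "burstLimit": 100},  # req/s and burst
    quota={"limit": 100000, "period": "DAY"},         # hard daily cap
)
```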
You could configure an AWS Lambda function to be triggered when a new object is created, then store information in a database to track the upload rate, but this again would involve more complexity and (a little) expense.
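As a sketch of that idea, a Lambda triggered by S3 ObjectCreated notifications could maintain a per-minute counter in DynamoDB and raise an SNS alert when the upload rate looks anomalous (the table, topic, and threshold below are all made up):

```python
import time

import boto3

dynamodb = boto3.client("dynamodb")
sns = boto3.client("sns")

TABLE = "UploadRate"  # hypothetical table, partition key "WindowStart" (N)
ALERT_TOPIC = "arn:aws:sns:us-east-2:123456789012:upload-anomaly"  # hypothetical
MAX_UPLOADS_PER_MINUTE = 600  # hypothetical threshold


def handler(event, context):
    # Triggered by an S3 ObjectCreated event notification; count the
    # uploads in the current one-minute window with an atomic counter.
    window = int(time.time()) // 60
    resp = dynamodb.update_item(
        TableName=TABLE,
        Key={"WindowStart": {"N": str(window)}},
        UpdateExpression="ADD UploadCount :n",
        ExpressionAttributeValues={":n": {"N": str(len(event["Records"]))}},
        ReturnValues="UPDATED_NEW",
    )
    count = int(resp["Attributes"]["UploadCount"]["N"])
    if count > MAX_UPLOADS_PER_MINUTE:
        sns.publish(
            TopicArn=ALERT_TOPIC,
            Message=f"{count} uploads in the current minute window",
        )
```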
I have been using the AmazonS3 service to store some files.
I have uploaded 4 videos, and they are public. I'm using a third-party video player (JW Player) for those videos. I am a new user on the AWS Free Tier, and my free PUT, POST, and LIST requests are almost used up out of the 2,000 allowed; for four videos that seems ridiculous.
Am I missing something? Shouldn't one upload be one PUT request? I don't understand how I've hit that limit already.
The AWS Free Tier for Amazon S3 includes:
5GB of standard storage (normally $0.023 per GB)
20,000 GET requests (normally $0.0004 per 1,000 requests)
2,000 PUT requests (normally $0.005 per 1,000 requests)
In total, it is worth up to 13.3 cents every month!
So, don't be too worried about your current level of usage, but do keep an eye on charges so you don't get too many surprises. You can always Create a Billing Alarm to Monitor Your Estimated AWS Charges.
The AWS Free Tier is provided to explore AWS services. It is not intended for production usage.
It would be very hard to find out the reason for this without debugging a bit, so I would suggest the following debugging steps:
See if you have CloudTrail enabled. If yes, then you can track the API calls to S3 to see if anything is wrong there.
If you have CloudTrail enabled, then it itself puts data into the S3 bucket, which might also use up some of the requests.
See if you have logging enabled at the bucket level; that might give you more insight into which requests are reaching your bucket (see the sketch after this list).
Your videos are public, and that is the biggest concern here, since you don't know who can access them.
Set up CloudWatch alarms to avoid any surprises, and look at the logs to find the issue. Also note that one upload is not necessarily one PUT request: a large file uploaded via multipart upload issues one PUT-class request per part, so a single video can consume many requests.
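For the logging suggestion above, server access logging can be enabled with one API call; a minimal sketch with placeholder bucket names (the target bucket must grant the S3 log delivery service write access):

```python
import boto3

s3 = boto3.client("s3")

SOURCE_BUCKET = "my-video-bucket"    # hypothetical bucket being debugged
LOG_BUCKET = "my-video-bucket-logs"  # hypothetical bucket receiving the logs

# Every request against SOURCE_BUCKET will be written as a log record
# under the given prefix in LOG_BUCKET.
s3.put_bucket_logging(
    Bucket=SOURCE_BUCKET,
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": LOG_BUCKET,
            "TargetPrefix": "access-logs/",
        }
    },
)
```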
My company uses signed URLs to upload directly to S3 (with accelerated upload) and CloudFront to download. Does the bucket need to be in the same region as our web app, or can I use US West (Oregon) for S3? The reason is to save cost, since some regions cost more than others (for S3 Infrequent Access).
By using CloudFront, you will traverse AWS's dedicated backbone to reach the bucket wherever it is anyway. A distant bucket does mean increased latency, though, so you should weigh what you want (best performance or best cost). Be aware, too, that if the same file is downloaded multiple times, the latency matters much less, as the file will be cached at the CloudFront edge node.
Agree with what @henry said; also consider data transfer costs between regions -- sometimes they add more cost than the storage savings if data is kept in a region with a low storage cost.
I have read the documentation about Amazon CloudFront and S3 Transfer Acceleration, but I don't know exactly how they differ.
Could you let me know what the difference is?
TL;DR: CloudFront is for content delivery. S3 Transfer Acceleration is for faster transfers and higher throughput to S3 buckets (mainly uploads).
Amazon S3 Transfer Acceleration is an S3 feature that accelerates uploads to S3 buckets using AWS edge locations -- the same edge locations used by the Amazon CloudFront service.
However, (a) creating a CloudFront distribution with an origin pointing to your S3 bucket and (b) enabling S3 Transfer acceleration for your bucket - are two different things serving two different purposes.
When you create a CloudFront distribution with an origin pointing to your S3 bucket, you enable caching on edge locations. Subsequent requests to the same objects will be served from the edge cache, which is faster for the end user and also reduces the load on your origin. CloudFront is primarily used as a content delivery service.
When you enable S3 Transfer Acceleration for your S3 bucket and use <bucket>.s3-accelerate.amazonaws.com instead of the default S3 endpoint, the transfers are performed via the same edge locations, but the network path is optimized for long-distance, large-object uploads. Extra resources and optimizations are used to achieve higher throughput, but there is no caching at the edge locations.
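In boto3 that looks roughly like the following (bucket and file names are placeholders): enable acceleration once on the bucket, then point the client at the accelerate endpoint:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"  # hypothetical bucket name

# One-time: turn Transfer Acceleration on for the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Then route transfers through <bucket>.s3-accelerate.amazonaws.com by
# telling the client to use the accelerate endpoint.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("big-video.mp4", BUCKET, "videos/big-video.mp4")
```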
More information:
https://aws.amazon.com/blogs/aws/aws-storage-update-amazon-s3-transfer-acceleration-larger-snowballs-in-more-regions/
http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html
https://aws.amazon.com/about-aws/whats-new/2016/04/transfer-files-into-amazon-s3-up-to-300-percent-faster/
If you are interested in how these two options differ for uploading content to S3, the following from Amazon's S3 FAQ may help:
Q: How should I choose between Transfer Acceleration and Amazon CloudFront's PUT/POST?

Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making Transfer Acceleration a better choice if a higher throughput is desired. If you have objects that are smaller than 1GB or if the data set is less than 1GB in size, you should consider using Amazon CloudFront's PUT/POST commands for optimal performance.
As the FAQ answer states, transfer acceleration should be used if you need higher throughput.
https://aws.amazon.com/s3/faqs/#s3ta
Amazon CloudFront and Amazon S3 are very different services. Here is what each is for:
Amazon S3 provides a storage service on the internet, while Amazon CloudFront is a web service for content delivery. Amazon S3 stores your objects in the region where the bucket was created, while Amazon CloudFront delivers your content through a worldwide network of edge locations.
As for S3 Transfer Acceleration, it takes advantage of Amazon CloudFront's globally distributed edge locations to provide a fast, easy, and secure way to transfer files over long distances between your client and an S3 bucket.
CloudFront is primarily a download-direction service; although it can forward PUT/POST requests to the origin, it does not offer a performant upload path. S3 with Transfer Acceleration, on the other hand, utilizes edge locations, as CloudFront does, for both upload and download.