Availability SLA vs Designed for availability - amazon-web-services

I am trying to find out the availability percentage of Amazon's S3. The link is below.
https://aws.amazon.com/s3/storage-classes/
What is the difference between Availability SLA and Designed for availability?

You seem to be a bit confused; let me try to explain in simpler terms, based on the AWS documentation.
Availability SLA and Designed for availability?
The plain-English translation of "designed for X% availability" would be: I have designed the service so that it is available X% of the time. That is just saying it was designed that way; it is a target, not a guarantee.
The Availability SLA, on the other hand, you can think of as a contract that legally binds AWS to deliver that level of service, so the terms defined in it are absolute.
It is designed for durability of 99.999999999% of objects.
- That simply means the chance that an object stored in S3 survives (is not lost) is the figure mentioned above.
Now,
Designed for 99.99% availability over a given year.
- This is what AWS says about how available S3 as a service will be to you, and that availability is backed by the Amazon S3 Service Level Agreement.
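To put the 99.99% design target in concrete terms, a quick back-of-the-envelope calculation (plain Python, nothing AWS-specific assumed) shows how much downtime that figure still allows per year:

```python
# Downtime allowed per year at a given availability percentage.
def allowed_downtime_minutes(availability_pct, minutes_per_year=365 * 24 * 60):
    return (1 - availability_pct / 100) * minutes_per_year

# 99.99% availability still permits roughly 52.6 minutes of downtime a year;
# 99.9% would permit roughly 525.6 minutes (almost nine hours).
print(round(allowed_downtime_minutes(99.99), 1))  # about 52.6
print(round(allowed_downtime_minutes(99.9), 1))   # about 525.6
```

This is why the distinction matters: the design target tells you what to expect, while the SLA tells you what you are owed if that expectation is missed.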
What is the S3 Service Commitment, according to the Amazon S3 Service Level Agreement for availability?
AWS will use commercially reasonable efforts to make Amazon S3
available with the applicable Monthly Uptime Percentage (as defined
below) during any monthly billing cycle (the “Service
Commitment”). In the event Amazon S3 does not meet the Service
Commitment, you will be eligible to receive a Service Credit.
Definitions:
“Error Rate” means: (i) the total number of internal server errors returned by Amazon S3 as error status “InternalError” or
“ServiceUnavailable” divided by (ii) the total number of
requests for the applicable request type during that five minute
period. We will calculate the Error Rate for each Amazon S3 account
as a percentage for each five minute period in the monthly billing
cycle. The calculation of the number of internal server errors will
not include errors that arise directly or indirectly as a result of
any of the Amazon S3 SLA Exclusions (as defined below).
“Monthly Uptime Percentage” is calculated by subtracting from 100% the average of the Error Rates from each five minute period in
the monthly billing cycle.
A “Service Credit” is a dollar credit, calculated as set forth below, that we may credit back to an eligible Amazon S3 account.
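The Monthly Uptime Percentage definition above is easy to sketch in code. A minimal illustration (plain Python; the error and request counts are made up for the example):

```python
# Monthly Uptime Percentage per the SLA: 100% minus the average of the
# per-five-minute Error Rates over the monthly billing cycle.
def error_rate(internal_errors, total_requests):
    # Error Rate for one five-minute period, as a percentage.
    if total_requests == 0:
        return 0.0
    return 100.0 * internal_errors / total_requests

def monthly_uptime_percentage(periods):
    # `periods` is a list of (internal_errors, total_requests) tuples,
    # one per five-minute period in the billing cycle.
    rates = [error_rate(errors, requests) for errors, requests in periods]
    return 100.0 - sum(rates) / len(rates)

# Hypothetical month of three periods, one with a 1% error rate:
periods = [(0, 1000), (10, 1000), (0, 1000)]
print(monthly_uptime_percentage(periods))  # about 99.67
```

Note that errors arising from the SLA Exclusions would be subtracted from the error counts before this calculation, per the definition above.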
Hope this clears your doubt.

Related

Why is CloudWatch billing me for GetMetricData API calls in every region? I run my app in a single region

I've been combing through my AWS bills and noticed something strange - Cloudwatch bills a very small amount for every available region with a charge it lists as $0.01 per 1,000 metrics requested using GetMetricData API - Asia Pacific (Singapore) (or insert region here).
In my main region (US East) I see billing as expected - a few bucks total for PutLogEvents and GetMetricData calls, as you'd expect from normal use.
But in every other region I also have these costs - a few cents each - for GetMetricData calls in that region. Does anybody know the source of these costs?
Please take a look at https://aws.amazon.com/cloudwatch/pricing/?nc1=h_ls
The costs for GetMetricData and GetMetricWidgetImage are not covered by the free tier you have on your AWS account.
Did you deploy something in your account which collects data from all your regions, or did you forget to restrict the values for those metrics? E.g. if you are using 3rd-party services like New Relic, Datadog, etc. to collect those metrics, this API call is used to retrieve the list of all available metrics.

How to limit number of reads from Amazon S3 bucket

I'm hosting a static website in Amazon S3 with CloudFront. Is there a way to set a limit for how many reads (for example per month) will be allowed for my Amazon S3 bucket in order to make sure I don't go above my allocated budget?
If you are concerned about going over a budget, I would recommend Creating a Billing Alarm to Monitor Your Estimated AWS Charges.
AWS is designed for large-scale organizations that care more about providing a reliable service to customers than staying within a particular budget. For example, if their allocated budget was fully consumed, they would not want to stop providing services to their customers. They might, however, want to tweak their infrastructure to reduce costs in future, such as changing the Price Class for a CloudFront Distribution or using AWS WAF to prevent bots from consuming too much traffic.
Your static website will be rather low-cost. The biggest factor will likely be Data Transfer rather than charges for Requests. Changing the Price Class should assist with this. However, the only true way to stop accumulating Data Transfer charges is to stop serving content.
You could activate CloudTrail data (read) events for the bucket, create a CloudWatch Events rule that triggers an AWS Lambda function to increment a per-object read counter in an Amazon DynamoDB table, and restrict access to an object once a certain number of reads has been reached.
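The counting core of such a Lambda function could look like the sketch below (plain Python; the table schema, attribute names, and read limit are all hypothetical, and the `table` argument stands in for a boto3 DynamoDB Table resource):

```python
READ_LIMIT = 1000  # hypothetical per-object read budget

def record_read(table, object_key):
    """Atomically increment the read counter for an object and report
    whether it has exceeded the budget (True means: restrict access)."""
    # DynamoDB's ADD update expression increments atomically, so concurrent
    # Lambda invocations do not lose counts.
    resp = table.update_item(
        Key={"object_key": object_key},
        UpdateExpression="ADD read_count :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    count = int(resp["Attributes"]["read_count"])
    return count > READ_LIMIT
```

In a real deployment, the Lambda handler would call this for each CloudTrail GetObject event and, when it returns True, update the bucket policy or object ACL to block further reads.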
What you're asking for is a very typical question in AWS. Unfortunately, with near-infinite scale comes near-infinite spend.
While you can put a WAF in front, that is actually meant for security rather than scale restriction. From a cost perspective, I'd be more worried about the bandwidth charges than about the S3 request costs.
Plus, once you add things like CloudFront or Lambda, it gets hard to limit all of this.
The best way to limit spend is to put billing alerts on your account -- you can tier them, so you get $10, $20, $100 alerts, up to the point where you're uncomfortable. Then either manually disable the website, or set up a Lambda function to disable it for you.
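Tiered billing alarms like those can be created with the CloudWatch `PutMetricAlarm` API against the `AWS/Billing` `EstimatedCharges` metric. A sketch that just builds the request parameters (plain Python; the alarm names and thresholds are examples, and you would pass each dict to boto3's `cloudwatch.put_metric_alarm(**params)` in us-east-1, where billing metrics live):

```python
def billing_alarm_params(threshold_usd):
    # Parameters for one CloudWatch billing alarm at a given USD threshold.
    # EstimatedCharges is published roughly every 6 hours, in us-east-1 only.
    return {
        "AlarmName": f"billing-over-{threshold_usd}-usd",  # example name
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,  # 6 hours, in seconds
        "EvaluationPeriods": 1,
        "Threshold": float(threshold_usd),
        "ComparisonOperator": "GreaterThanThreshold",
        # "AlarmActions": [...],  # add your SNS topic ARN here to get notified
    }

# One alarm per tier, as suggested above.
tiers = [billing_alarm_params(t) for t in (10, 20, 100)]
```

Note that you must first enable billing alerts in the account's Billing preferences before the `EstimatedCharges` metric is published at all.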

Cloudwatch log store costing vs S3 costing

I have an EC2 instance which is running an Apache application.
I have to store my Apache logs somewhere. For this, I have used two approaches:
CloudWatch Agent to push logs to CloudWatch
Cron job to push the log file to S3
I have used both of these methods and both work fine for me. But I am a little worried about the cost.
Which of these will have the minimum cost?
S3 pricing is basically based upon three factors:
The amount of storage.
The amount of data transferred every month.
The number of requests made monthly.
The cost for data transfer between S3 and AWS resources within the same region is zero.
According to CloudWatch pricing for logs:
There is no Data Transfer IN charge for any log type in CloudWatch. Data Transfer OUT from CloudWatch Logs is priced.
Pricing details for CloudWatch Logs:
Collect (Data Ingestion): $0.50/GB
Store (Archival): $0.03/GB
Analyze (Logs Insights queries): $0.005/GB of data scanned
Refer to CloudWatch pricing for more details.
Similarly, according to AWS, S3 pricing differs by region.
E.g. for N. Virginia:
S3 Standard Storage
First 50 TB / Month: $0.023 per GB
Next 450 TB / Month: $0.022 per GB
Over 500 TB / Month: $0.021 per GB
Refer to S3 pricing for more details.
Hence, we can conclude that sending logs to S3 will be more cost-effective than sending them to CloudWatch.
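A quick worked comparison using the prices quoted above makes the gap obvious (plain Python; 100 GB/month of logs is an arbitrary example, and S3 request and transfer charges are ignored for simplicity):

```python
GB_PER_MONTH = 100  # example log volume

# CloudWatch Logs: ingestion plus archival storage (rates quoted above).
cloudwatch_cost = GB_PER_MONTH * 0.50 + GB_PER_MONTH * 0.03

# S3 Standard, first-50-TB tier (N. Virginia rate quoted above).
s3_cost = GB_PER_MONTH * 0.023

print(f"CloudWatch: ${cloudwatch_cost:.2f}, S3: ${s3_cost:.2f}")
# CloudWatch: $53.00, S3: $2.30
```

The $0.50/GB ingestion charge dominates: even though the storage rates are comparable, CloudWatch Logs costs over twenty times more in this example.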
They both have similar storage costs, but CloudWatch Logs has an additional ingest charge.
Therefore, it would be lower cost to send straight to Amazon S3.
See: Amazon CloudWatch Pricing – Amazon Web Services (AWS)

What are hidden costs or "NOT obvious" costs on AWS

AWS says that everything is "pay as you use". But are there any hidden or not-obvious costs on AWS?
Costs which are generally ignored by people and can come as a shock:
It is recommended to deploy applications across multiple AZs for high availability. We assume that data transfer between these servers will be free, since this is like an intranet; but that is not true. There are charges (around 10% of internet bandwidth charges) for data transfer across AZs in the same region.
Data transfer within AWS and across AWS regions is also charged.
On AWS Aurora, provisioned IOPS are enabled by default, which can lead to a huge bill.
If versioning is enabled on S3, then you need to pay for all versions of every object.
These are not hidden charges but can also give you a shock:
Even on other RDS engines, if you use provisioned IOPS it can lead to a huge bill depending on usage.
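To make the cross-AZ point concrete, here is a back-of-the-envelope estimate using the "around 10% of internet bandwidth charges" figure from above (plain Python; the per-GB rates and traffic volume are illustrative assumptions, not quoted AWS prices; check current pricing for your region):

```python
# Illustrative rates -- assumptions for the example, not quoted AWS prices.
internet_egress_per_gb = 0.09                     # hypothetical internet rate
cross_az_per_gb = internet_egress_per_gb * 0.10   # "around 10%" per the answer

gb_per_day = 50  # example of replication/chatter between AZs
monthly_cost = cross_az_per_gb * gb_per_day * 30

print(f"${monthly_cost:.2f}/month")  # $13.50/month under these assumptions
```

Small per-GB rates like this are exactly why these charges go unnoticed until traffic between AZs grows large.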
I think one of the most confusing parts of AWS is the 'EC2-Other' cost category. Most of these costs are based on utilization and can get out of control quickly. I did a write-up on how to break down EC2-Other here: EC2-Other Cost Breakdown

Does the AWS Billing Management Dashboard take into account Free Tier usage

About a month ago I opened an AWS account to try out Amazon's own tutorial for EC2 services, only to give up after encountering an error.
Today I accessed my account once again, only to find out three tasks have been running in the background the whole month. My Billing Management Dashboard shows a hefty total in the upper right, but in the "free usage" tier the only exceeded entry is S3 Puts, of about 10%.
I can't seem to find a source anywhere in the documentation explaining whether the total billing in the upper right takes the Free Tier into account or not. At the end of this month, will I be billed the entire amount or only the % difference? I'm more or less okay with the latter, but I can't really afford the former.
I've obviously opened a support ticket right away, but since I'm on the basic plan I'm afraid they might answer me after the current bill becomes active.
Thank you for any answers.
You will be billed only for the % difference.
All services that offer a free tier have limits on what you can use without being charged. Many services have multiple types of limits. For example, Amazon EC2 has limits on both the type of instance you can use, and how many hours you can use in one month. Amazon S3 has a limit on how much memory you can use, and also on how often you can call certain operations each month. For example, the free tier covers the first 20,000 times you retrieve a file from Amazon S3, but you are charged for additional file retrievals. Each service has limits that are unique to that service.
Source: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/free-tier-limits.html
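Using the S3 retrieval example from that quote, the billed amount is computed only on usage above the free-tier allowance. A small sketch (plain Python; the 20,000 free retrievals come from the quote, while the per-1,000-request price is an illustrative assumption):

```python
FREE_GETS = 20000            # free-tier S3 retrievals per month (per the quote)
PRICE_PER_1000_GETS = 0.0004  # hypothetical price per 1,000 requests

def billable_gets_cost(total_gets):
    # Only the requests beyond the free-tier allowance are billed.
    billable = max(0, total_gets - FREE_GETS)
    return billable / 1000 * PRICE_PER_1000_GETS

print(billable_gets_cost(25000))  # bills only the 5,000 excess requests
print(billable_gets_cost(10000))  # fully within the free tier: 0.0
```

The same pattern (usage minus allowance, then multiplied by the rate) applies to each free-tier limit independently, which is why only your exceeded S3 PUTs entry shows a charge.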