CloudWatch log storage costing vs S3 costing - amazon-web-services

I have an EC2 instance which is running an Apache application.
I have to store my Apache logs somewhere. For this, I have used two approaches:
CloudWatch Agent to push logs to CloudWatch
Cron job to push log files to S3 (sketched below)
I have used both methods, and both work fine for me. But I am a little worried about the cost.
Which of these will have the minimum cost?
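A minimal sketch of the cron-based approach, assuming a hypothetical bucket name and the default Apache log path (both placeholders, not from the question):

```python
import boto3
from datetime import datetime, timezone

# Placeholders - substitute your own bucket and log path.
BUCKET = "my-apache-logs-bucket"
LOG_FILE = "/var/log/apache2/access.log"

def upload_log():
    """Upload the current Apache log to S3 under a timestamped key."""
    s3 = boto3.client("s3")
    stamp = datetime.now(timezone.utc).strftime("%Y/%m/%d/%H%M%S")
    s3.upload_file(LOG_FILE, BUCKET, f"apache/access-{stamp}.log")

if __name__ == "__main__":
    upload_log()  # schedule this script with cron, e.g. hourly
```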

S3 pricing is basically based upon three factors:
The amount of storage.
The amount of data transferred every month.
The number of requests made monthly.
The cost for data transfer between S3 and AWS resources within the same region is zero.
According to CloudWatch pricing for logs:
All log types: there is no Data Transfer IN charge for any of CloudWatch. Data Transfer OUT from CloudWatch Logs is priced.
Pricing details for CloudWatch Logs:
Collect (Data Ingestion): $0.50/GB
Store (Archival): $0.03/GB
Analyze (Logs Insights queries): $0.005/GB of data scanned
Refer to CloudWatch pricing for more details.
Similarly, according to AWS, S3 pricing differs by region.
e.g., for N. Virginia:
S3 Standard Storage
First 50 TB / month: $0.023 per GB
Next 450 TB / month: $0.022 per GB
Over 500 TB / month: $0.021 per GB
Refer to S3 pricing for more details.
Hence, we can conclude that sending logs to S3 will be more cost-effective than sending them to CloudWatch.
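To make that concrete, here is a rough back-of-the-envelope comparison using the prices listed above, assuming a hypothetical 10 GB of logs per month and ignoring request charges and compression:

```python
# Back-of-the-envelope monthly comparison (USD, prices from above).
GB_PER_MONTH = 10  # assumed log volume

# CloudWatch Logs: the ingestion charge dominates.
cw_ingest = GB_PER_MONTH * 0.50  # Collect (Data Ingestion): $0.50/GB
cw_store = GB_PER_MONTH * 0.03   # Store (Archival): $0.03/GB

# S3: no transfer charge from EC2 in the same region, only storage.
s3_store = GB_PER_MONTH * 0.023  # S3 Standard, first 50 TB: $0.023/GB

print(f"CloudWatch Logs: ${cw_ingest + cw_store:.2f}")  # $5.30
print(f"S3 Standard:     ${s3_store:.2f}")              # $0.23
```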

They both have similar storage costs, but CloudWatch Logs has an additional ingest charge.
Therefore, it would be cheaper to send logs straight to Amazon S3.
See: Amazon CloudWatch Pricing – Amazon Web Services (AWS)

Related

AWS CloudWatch Logs Archive (not S3), how to use it

I am reading the AWS CloudWatch Logs documentation here. It says:
Archive log data – You can use CloudWatch Logs to store your log data in highly durable storage. The CloudWatch Logs agent makes it easy to quickly send both rotated and non-rotated log data off of a host and into the log service. You can then access the raw log data when you need it.
And on the pricing page, they have:
Store (Archival): $0.03 per GB
And in the Pricing Calculator, they mention
Log Storage/Archival (Standard and Vended Logs)
Log volume archived is estimated to be 15% of Log volume ingested (due to compression). Storage/Archival costs are estimated assuming the customer chooses a retention period of one (1) month. Default retention setting is ‘never expire’.
Problem
I am trying to understand the behavior of this archive feature to decide if I need to move my log data to S3, but I cannot find any further details. I have tried exploring every button and link in the CloudWatch Logs pages but cannot find a way to archive the data; I can only delete logs or edit their retention rules.
So how does it work? The remark in the Pricing Calculator says it is estimated to be 15% of ingested volume; does this mean it always archives 15% of the logs automatically? And why do they have to assume in the calculation that the retention period is set to one month? Does the archive feature behave differently otherwise?
The Archive log data feature refers to storing log data in CloudWatch Logs. You do not need to do anything additional to 'archive'. It is the regular storage you can see in the console.
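Since the retention policy is the only knob you get, the closest thing to 'archiving' inside CloudWatch Logs is setting one. A minimal sketch with boto3 (the log group name is a placeholder):

```python
import boto3

logs = boto3.client("logs")

# Hypothetical log group name - there is no separate "archive" API;
# you only control how long CloudWatch Logs keeps the (already
# compressed) data before deleting it.
logs.put_retention_policy(
    logGroupName="/my-app/apache/access",
    retentionInDays=30,  # default is "never expire"
)
```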
Considering only storage pricing, storing logs in S3 is cheaper. It varies by region, but on average S3 Standard is about $0.025 per GB versus $0.03 per GB for CloudWatch Logs storage. And if you move the objects to other storage classes, it becomes even cheaper.
About:
Log volume archived is estimated to be 15% of Log volume ingested (due to compression)
It means that if 100 GB of data is ingested into CloudWatch Logs, it shows up as only 15 GB (15%) in storage, due to the special compressed format in which the logs are stored.
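A quick illustration of that assumption, with the storage price from above:

```python
ingested_gb = 100
archived_gb = ingested_gb * 0.15        # ~15% remains after compression
monthly_cost = archived_gb * 0.03       # Store (Archival): $0.03/GB/month
print(f"{ingested_gb} GB ingested -> {archived_gb:.0f} GB stored "
      f"-> ${monthly_cost:.2f}/month")  # 100 GB -> 15 GB -> $0.45/month
```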

Pricing: AWS DynamoDB vs AWS CloudWatch

Does CloudWatch cost more than DynamoDB for storage, or is my understanding wrong?
From the pricing pages:
https://aws.amazon.com/cloudwatch/pricing/ - From this, in the Logs section for us-east-1, it looks like it costs $0.50/GB
https://aws.amazon.com/blogs/aws/dynamodb-price-reduction-and-new-reserved-capacity-model/ - From this, it looks like it costs $0.25/GB.
So it looks like CloudWatch for logs costs more than DynamoDB for actual data? Am I misunderstanding something?
I always used to think that CloudWatch is cheaper since it uses S3 or something, and also the logs we create are normally larger than the actual data.
There are many dimensions to the pricing for each service.
The pure storage cost for Amazon DynamoDB is $0.25/GB/month. (There are also costs for read/write capacity.)
The Store (Archival) cost for Amazon CloudWatch Logs is $0.03/GB/month. This is similar to Amazon S3 storage costs of $0.023/GB/month for standard storage.
The $0.50/GB figure you quote for CloudWatch appears to be the Collect (Data Ingestion) charge, which is not for storage.
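Putting the storage dimensions side by side for an assumed 10 GB (read/write capacity and request charges omitted) makes the distinction clear:

```python
GB = 10  # assumed stored volume

# Pure storage, per month (USD/GB figures from the answer above):
dynamodb = GB * 0.25   # DynamoDB storage
cw_logs = GB * 0.03    # CloudWatch Logs Store (Archival)
s3_std = GB * 0.023    # S3 Standard

# The $0.50/GB is a one-time Collect (ingestion) charge, not storage:
cw_ingest_once = GB * 0.50

print(f"DynamoDB ${dynamodb:.2f}/mo, CloudWatch ${cw_logs:.2f}/mo, "
      f"S3 ${s3_std:.2f}/mo (+${cw_ingest_once:.2f} one-time CW ingestion)")
```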

How to monitor daily costs of Amazon S3

I was looking around to learn about the dos and don'ts of Amazon S3 and was looking at options for monitoring usage daily. I have a couple of questions here:
I see on the Amazon S3 pricing page that PUT, COPY, POST, and LIST requests cost $0.005 per 1,000 requests.
I have 5 buckets in S3, and each of these buckets has subfolders containing 100k files.
If I do aws s3 ls --recursive, will I be charged
5 × 100k = 500,000 requests; 500,000 / 1,000 × $0.005 = $2.50?
Any suggestions on tools that can email me the daily usage of my S3 buckets?
A LIST request can return up to 1,000 objects. Therefore, obtaining the full list of 100k objects would require 100 LIST requests. That would cost a tenth of the $0.005 charge (about $0.0005).
If you were to repeat this for all 5 buckets, it would require 500 LIST requests, which would cost half of the $0.005 charge (about $0.0025).
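A sketch of how a recursive listing maps to billable LIST requests via boto3 pagination; the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Hypothetical bucket - each page is one LIST request (up to 1,000 keys).
pages = 0
objects = 0
for page in paginator.paginate(Bucket="my-example-bucket"):
    pages += 1
    objects += page.get("KeyCount", 0)

print(f"{objects} objects listed using {pages} LIST requests "
      f"(~${pages / 1000 * 0.005:.6f})")
```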
Alternatively, you could use Amazon S3 Inventory, which can provide a daily or weekly CSV file listing all of your objects. It is charged at $0.0025 per million objects listed (per bucket per run).
I sense that you are worried about your Amazon S3 costs. You can configure billing alerts and notifications to remain aware of when your costs rise above expected levels. This works for all AWS services.
See:
Monitoring Charges with Alerts and Notifications
Create a Billing Alarm to Monitor Your Estimated AWS Charges
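As a rough sketch of the billing-alarm approach (billing metrics must be enabled first and are published only in us-east-1; the alarm name, threshold, and SNS topic ARN are placeholders):

```python
import boto3

# Billing metrics live in us-east-1 only and must be enabled first
# (Billing preferences -> "Receive Billing Alerts").
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-above-50-usd",  # hypothetical name
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                  # billing metric updates a few times a day
    EvaluationPeriods=1,
    Threshold=50.0,                # alert when estimated charges exceed $50
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```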

Pricing for using SQS within EC2

If messages are both sent and received from different EC2 instances in the same region, there will be no cost, right?
In other words, are messages only billed if they are coming from or going to non-EC2 instances/systems?
See the Amazon SQS Pricing page for details of SQS pricing.
It is charged as:
Price per 1 Million Requests after Free Tier (Monthly)
Plus data transfer
A request includes SendMessage, ReceiveMessage, DeleteMessage, etc.
Data Transfer within a region is free. For example, if requests come from an Amazon EC2 instance in the same region as the Amazon SQS queue, there is no data transfer charge. The Data Transfer charge only applies to data going out of the cloud, e.g., to the Internet or to another region.
As per the page:
Data transferred between Amazon SQS and Amazon EC2 within a single region is free of charge (that is, $0.00 per GB). Data transferred between Amazon SQS and Amazon EC2 in different regions is charged at Internet Data Transfer rates on both sides of the transfer.
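For clarity, each API call below is one billable request no matter where it originates; only the in-region data transfer is free. The queue URL is a placeholder:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

# Each of these is one billable request, even between EC2 instances
# in the same region; only the data transfer is free in-region.
sqs.send_message(QueueUrl=queue_url, MessageBody="hello")

response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for message in response.get("Messages", []):
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```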

What AWS CloudWatch Logs are using for storage?

I started working with Amazon CloudWatch Logs. The question is: is AWS using Glacier or S3 to store the logs? They are using Kinesis to process the logs with filters. Can anyone please tell me the answer?
AWS is likely to use S3, not Glacier.
Glacier would be problematic if you wanted to access older logs, since retrieving data stored in Amazon Glacier can take a few hours, and that is definitely not the response time one expects from a CloudWatch log analysis solution.
Also, the price set for storing 1 GB of ingested logs seems to be derived from 1 GB stored on Amazon S3.
The S3 price for one GB stored per month is $0.03, and the price for storing 1 GB of logs per month is also $0.03.
There is a note on the CloudWatch pricing page:
Data archived by CloudWatch Logs includes 26 bytes of metadata per log event and is compressed using gzip level 6 compression. Archived data charges are based on the sum of the metadata and compressed log data size.
According to a Henry Hahn (AWS) presentation on CloudWatch, it is "3 cents per GB and we compress it," ... "so you get 3 cents per 10 GB".
This makes me believe they store it on Amazon S3.
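Based on the pricing-page note above, the billable archived size can be estimated as the compressed payload plus 26 bytes per event. A rough sketch with assumed numbers:

```python
# Rough estimate of billable archived size, per the pricing-page note.
events = 1_000_000_000        # assumed monthly log events
avg_event_bytes = 200         # assumed raw event size
compression = 0.15            # ~gzip level 6, per the AWS estimate

compressed = events * avg_event_bytes * compression
metadata = events * 26        # 26 bytes of metadata per event
archived_gb = (compressed + metadata) / 1024**3

# For small events, metadata is a large share of what is billed.
print(f"~{archived_gb:.1f} GB archived -> ${archived_gb * 0.03:.2f}/month")
```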
They are probably using DynamoDB. S3 (and Glacier) would not be good for files that are appended to on a very frequent basis.