Is there any limit on the amount of log data that can be sent to AWS CloudWatch Logs? In my case, my application produces about 6 million log records every 3 days. Will CloudWatch Logs be able to handle that much data?
Check out the AWS service quotas page. The limits on CloudWatch are more than adequate for the majority of use cases.
There is no published limit on the overall data volume held. There will be a practical limit somewhere, but it won't be hit by a single AWS customer. If you're using the PutLogEvents API you could be constrained by the limit of 5 requests per second per log stream, in which case consider using more streams or larger batches of events (up to 1 MB per batch).
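For illustration, here is a minimal boto3 sketch of batching events before calling PutLogEvents. The log group and stream names and the batch size are placeholders, and older API versions also required tracking a sequenceToken between calls:

```python
import time

import boto3

logs = boto3.client("logs")

# Placeholder names -- substitute your own log group and stream.
GROUP, STREAM = "/my-app/logs", "stream-1"


def send_in_batches(messages, batch_size=500):
    """Send messages in batches, keeping each request well under the
    1 MB batch cap and reducing the per-stream call rate."""
    for i in range(0, len(messages), batch_size):
        batch = [
            {"timestamp": int(time.time() * 1000), "message": m}
            for m in messages[i:i + batch_size]
        ]
        logs.put_log_events(
            logGroupName=GROUP, logStreamName=STREAM, logEvents=batch
        )
```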
I have an EC2 instance running an Apache application.
I need to store my Apache logs somewhere. For this, I have tried two approaches:
The CloudWatch Agent to push logs to CloudWatch
A cron job to push the log file to S3
Both methods work fine for me, but I am a little worried about the cost.
Which of these will have the minimum cost?
S3 pricing is basically based on three factors:
The amount of storage.
The amount of data transferred every month.
The number of requests made monthly.
The cost for data transfer between S3 and AWS resources within the same region is zero.
According to CloudWatch pricing for logs:
This applies to all log types. There is no Data Transfer IN charge for any of CloudWatch. Data Transfer OUT from CloudWatch Logs is priced.
Pricing details for CloudWatch Logs:
Collect (Data Ingestion): $0.50/GB
Store (Archival): $0.03/GB
Analyze (Logs Insights queries): $0.005/GB of data scanned
Refer to CloudWatch pricing for more details.
Similarly, according to AWS, S3 pricing differs by region.
e.g., for N. Virginia:
S3 Standard Storage
First 50 TB / Month: $0.023 per GB
Next 450 TB / Month: $0.022 per GB
Over 500 TB / Month: $0.021 per GB
Refer to S3 pricing for more details.
Hence, we can conclude that sending logs to S3 will be more cost-effective than sending them to CloudWatch.
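As a rough worked comparison using the figures quoted above (a sketch only; it ignores S3 request charges, data transfer, and any compression CloudWatch applies):

```python
# Rough first-month cost for 100 GB of logs, using the N. Virginia
# prices quoted above.
gb = 100

cloudwatch = gb * 0.50 + gb * 0.03   # ingest + archival storage
s3 = gb * 0.023                      # S3 Standard, first 50 TB tier

print(f"CloudWatch Logs: ${cloudwatch:.2f}")  # $53.00
print(f"S3 Standard:     ${s3:.2f}")          # $2.30
```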
They both have similar storage costs, but CloudWatch Logs has an additional ingest charge.
Therefore, it would be lower cost to send straight to Amazon S3.
See: Amazon CloudWatch Pricing – Amazon Web Services (AWS)
I just started using AWS services. I want to receive notifications if any service usage exceeds a limit. After searching for options, I found that this can be achieved using AWS CloudWatch alarms and the AWS Limit Monitor deployed via AWS CloudFormation. My question is: will I be charged if I use these services to receive notifications?
Yes, you can set up all kinds of notifications to keep a handle on what you are being billed, but that doesn't stop you from actually getting billed if you exceed your limits.
For example, I have alerts to notify me when I reach 25%, 50%, 75% and 100% of my typical monthly spend, so I should get roughly one notification each week. But a lot can happen between when the notification is sent and when you take action, especially if, for example, someone got access to your account and started crypto-mining on some big EC2 instances.
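For illustration, a minimal boto3 sketch of one such alarm on the EstimatedCharges billing metric. The threshold and the SNS topic ARN are placeholders, and note that billing metrics are only published to us-east-1 after "Receive Billing Alerts" is enabled in the account settings:

```python
import boto3

# Billing metrics live only in us-east-1.
cw = boto3.client("cloudwatch", region_name="us-east-1")

cw.put_metric_alarm(
    AlarmName="monthly-spend-75-percent",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,               # the billing metric updates a few times a day
    EvaluationPeriods=1,
    Threshold=75.0,             # placeholder: 75% of a $100 typical spend
    ComparisonOperator="GreaterThanThreshold",
    # Placeholder SNS topic that delivers the notification.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```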
I have messages being put into SQS by a cron job at a rate of about 1,000 per minute.
I am looking to run a Lambda function periodically that will grab some of the messages and put them into DynamoDB, keeping in mind the throughput, which will change over time.
You can go with 'on-demand' pricing for your use case (AWS link). The pricing is different from the provisioned-capacity method.
With on-demand capacity mode, you pay per request for the data reads and writes your application performs on your tables. You do not need to specify how much read and write throughput you expect your application to perform as DynamoDB instantly accommodates your workloads as they ramp up or down.
With this approach, you don't need to configure WCUs (or RCUs).
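A minimal boto3 sketch of creating such a table; the table and key names are placeholders, and the point is the BillingMode parameter, which replaces ProvisionedThroughput entirely:

```python
import boto3

ddb = boto3.client("dynamodb")

ddb.create_table(
    TableName="messages",  # placeholder name
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand: no WCUs/RCUs to size
)
```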
The AWS documentation indicates that multiple log event records are provided to Lambda when streaming logs from CloudWatch.
logEvents
The actual log data, represented as an array of log event
records. The "id" property is a unique identifier for every log event.
How does CloudWatch group these logs?
By time? By count? Randomly, from my perspective?
Currently you get one Lambda invocation for every PutLogEvents batch that CloudWatch Logs received against that log group. However, you should probably not rely on that, because AWS could always change the behaviour (for example, batching more events per invocation).
You can observe this behavior by running the CWL -> Lambda example in the AWS docs.
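If you want to inspect the batches yourself, the payload arrives gzipped and base64-encoded under awslogs.data, so a minimal handler looks roughly like this:

```python
import base64
import gzip
import json


def handler(event, context):
    # CloudWatch Logs delivers each batch gzipped and base64-encoded.
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )
    # One invocation carries one batch of events from one log stream.
    print(payload["logGroup"], payload["logStream"], len(payload["logEvents"]))
    for e in payload["logEvents"]:
        print(e["id"], e["timestamp"], e["message"])
```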
Some AWS services allow you to configure the log interval, such as Elastic Load Balancing, where there's a choice between five- and sixty-minute log intervals. You may not see a specific increment or parameter in the docs because the intervals are configurable per service.
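For example, a boto3 sketch of setting the access-log interval on a Classic Load Balancer; the load balancer name, bucket, and prefix are placeholders:

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-lb",  # placeholder
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-log-bucket",  # placeholder
            "S3BucketPrefix": "elb-logs/",    # placeholder
            "EmitInterval": 5,                # 5 or 60 minutes
        }
    },
)
```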
How many requests per minute can I send for each custom metric in CloudWatch?
P.S. I know that for standard metrics it is 1 per minute.
Here's a quote from the section Publishing Single Data Points in the Amazon CloudWatch Developer Guide:
Although you can publish data points with time stamps as granular as one-thousandth of a second, Amazon CloudWatch aggregates the data to a minimum granularity of one minute.
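In other words, you can stamp data points at sub-second resolution, but a standard-resolution metric still resolves to one-minute granularity when queried. A minimal boto3 publish call, with a hypothetical namespace and metric name:

```python
import datetime

import boto3

cw = boto3.client("cloudwatch")

cw.put_metric_data(
    Namespace="MyApp",  # placeholder namespace
    MetricData=[{
        "MetricName": "RequestLatency",  # placeholder metric
        "Timestamp": datetime.datetime.now(datetime.timezone.utc),
        "Value": 123.0,
        "Unit": "Milliseconds",
    }],
)
```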