I'm trying to view all events related to all of my RDS instances for audit purposes.
I am using this
aws rds describe-events --source-type db-instance
But I am getting an empty result
{
"Events": []
}
Even if I include a --source-identifier, it still yields an empty result
By default it retrieves events from the last hour, so if there were no events in that window the result will be empty.
So include the --duration flag (the value is in minutes) to look further back, e.g. --duration 1440 for the last day of events; the maximum look-back is 14 days.
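For example, to pull the last day of events for every DB instance (default region/profile assumed):
aws rds describe-events \
    --source-type db-instance \
    --duration 1440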
Just experienced the same issue: adding --start-time and --end-time fixed it for me. For some reason --duration did not.
So something like this should work:
aws rds describe-events --start-time 2020-07-24T00:00Z --end-time 2020-07-24T18:00Z
In my AWS organization we have multiple (5+) accounts and a CloudTrail trail for management events, with a CloudWatch log group covering all the accounts and regions in the org.
For that log group there is a series of generic metric filters set up, each with a CloudWatch alarm. These are all tied to one or more SNS topics and associated subscriptions, all essentially out of the box.
For example:
aws logs put-metric-filter --log-group-name CLOUDWATCH-LOG-GROUP --filter-name FILTER-NAME --metric-transformations metricName=METRIC-NAME,metricNamespace=NAMESPACE,metricValue=1 --filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }'
This creates a metric filter for failed console logins in any of the accounts; it sets the metric value to 1 whenever the filter pattern matches a log event.
Paired with
aws cloudwatch put-metric-alarm --alarm-name METRIC-NAME-alarm --metric-name METRIC-NAME --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace NAMESPACE --alarm-actions SNS-TOPIC-ARN
This creates a simple alarm that fires an event to the SNS topic.
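For completeness, the SNS side of the setup is just a topic and its subscriptions; something like the following (the topic name and email address are placeholders, not our actual values):
aws sns create-topic --name security-alerts
aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:123456789012:security-alerts \
    --protocol email \
    --notification-endpoint security-team@example.com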
The major drawback here, however, is that if my org account ID is #1234 and my worker account ID is #4321 and the failed login happens in the worker account, the alarm that is sent refers only to the org account ID, and I need to log in and dig around to figure out where it actually happened.
Now, I can add a dimension to the metric-transformation to include something to help me differentiate (for example recipientAccountId). That would look something like
aws logs put-metric-filter --log-group-name CLOUDWATCH-LOG-GROUP --filter-name FILTER-NAME --metric-transformations metricName=METRIC-NAME,metricNamespace=NAMESPACE,metricValue=1,dimensions={"Account"=($.recipientAccountId)} --filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }'
But in order for the CloudWatch alarm to trigger, it appears that the dimension must also be specified there; it isn't possible to have one alarm cover every possible value of the dimension. That seems to require one alarm per recipientAccountId (ACCOUNT-ID in the example below).
Ex:
aws cloudwatch put-metric-alarm --alarm-name METRIC-NAME-alarm --metric-name METRIC-NAME --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --dimensions Name=Account,Value=ACCOUNT-ID --namespace NAMESPACE --alarm-actions SNS-TOPIC-ARN
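Scaled out to every member account, I imagine that means something like this (the account IDs in the loop are hypothetical):
for ACCOUNT_ID in 111111111111 222222222222 333333333333; do
    aws cloudwatch put-metric-alarm \
        --alarm-name METRIC-NAME-${ACCOUNT_ID}-alarm \
        --metric-name METRIC-NAME \
        --namespace NAMESPACE \
        --statistic Sum \
        --period 300 \
        --threshold 1 \
        --comparison-operator GreaterThanOrEqualToThreshold \
        --evaluation-periods 1 \
        --dimensions Name=Account,Value=${ACCOUNT_ID} \
        --alarm-actions SNS-TOPIC-ARN
done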
Is this correct?
All of this is really just to include enough metadata with the alarm to tell me which account triggered it. Is there a better solution?
I am struggling to match the CloudWatch alarm count shown in the CloudWatch console with the count shown in Billing (estimate for the current month). Both screenshots are attached here.
1) According to the billing screenshot below, I should have 10 + 9.597 = 19.597 metric alarms.
2) According to the CloudWatch console, I have only 3 alarms. In fact, I don't remember creating more.
I have a couple of alarms in the "In alarm" state, but in the current month (billing period July) there is only one; see the screenshot below.
A couple of "In alarm" entries are shown in red, but they are from last month, which has already been billed.
Please let me know if you need more information or any specific screenshot from the AWS console.
4) Output of the AWS CLI query:
aws cloudwatch describe-alarms --query 'MetricAlarms[*].[AlarmName]' --region us-east-2 > metric-alarms
Music-ReadCapacityUnitsLimit-BasicAlarm
Music-WriteCapacityUnitsLimit-BasicAlarm
TargetTracking-table/TextNote-AlarmHigh-09765769-6e5d-6cab83249c9d
TargetTracking-table/TextNote-AlarmHigh-82c98240-0435-101ab605b404
TargetTracking-table/TextNote-AlarmLow-a0552914-7d04-bd0d74cb9d9a
TargetTracking-table/TextNote-AlarmLow-d4b5d3ff-9b62--b6fafd379abe
TargetTracking-table/TextNote-ProvisionedCapacityHigh-1fc9e0fc--8fc1-c5830689655d
TargetTracking-table/TextNote-ProvisionedCapacityHigh-e2f0ac8b--8826-fd764296f4e8
TargetTracking-table/TextNote-ProvisionedCapacityLow-3e182ade--a070-3d1a515b01a5
TargetTracking-table/TextNote-ProvisionedCapacityLow-e8f2afd9--8ccf-d7dad436cedb
TargetTracking-table/TextNote/index/textNoteSecondaryIndex-AlarmHigh-7693771a-92ee-8cd83a388fec
TargetTracking-table/TextNote/index/textNoteSecondaryIndex-AlarmHigh-b761bab7-a8e6-8386252be6b2
TargetTracking-table/TextNote/index/textNoteSecondaryIndex-AlarmLow-2bc4ee0c-95c6-31721866055d
TargetTracking-table/TextNote/index/textNoteSecondaryIndex-AlarmLow-8b591a75-be8f-ff3209a4b54e
TargetTracking-table/TextNote/index/textNoteSecondaryIndex-ProvisionedCapacityHigh-a369b9dc-8d8f-40d2bb7966cb
TargetTracking-table/TextNote/index/textNoteSecondaryIndex-ProvisionedCapacityHigh-d65c9c16-9313-aed4e691d811
TargetTracking-table/TextNote/index/textNoteSecondaryIndex-ProvisionedCapacityLow-3bd977f5-9acb-b6608ff14d91
TargetTracking-table/TextNote/index/textNoteSecondaryIndex-ProvisionedCapacityLow-f852b0c7-b066-5ac2734d9a65
TargetTracking-table/texthash-AlarmHigh-26e45329-b495-85f3eda0f92e
TargetTracking-table/texthash-AlarmHigh-7b2169a1-d914-50d8b09341d8
TargetTracking-table/texthash-AlarmLow-844f04e2-8e2d-b38bb95e8f1b
TargetTracking-table/texthash-AlarmLow-f7ae2480-7cb8-0bf1adffece6
TargetTracking-table/texthash-ProvisionedCapacityHigh-ad8c3e30-9861-feb73bb2b88f
TargetTracking-table/texthash-ProvisionedCapacityHigh-dc6e4a74-beab-1e55e10f25f6
TargetTracking-table/texthash-ProvisionedCapacityLow-7f34588a-872e-26413a88f905
TargetTracking-table/texthash-ProvisionedCapacityLow-c8bbf607-962b-c7ecd956a6f2
awsec2-i-0fc458fad8fc7fac2-LessThanOrEqualToThreshold-CPUCreditBalance
To view the alarm names of all the billable CW alarms in the us-east-2 region, use the following AWS CLI command:
aws cloudwatch describe-alarms --query 'MetricAlarms[*].[AlarmName]' --region us-east-2
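If you just want the total to compare against the billing page, a quick variant of the same query counts them:
aws cloudwatch describe-alarms --query 'length(MetricAlarms)' --region us-east-2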
You can delete them using:
aws cloudwatch delete-alarms --region us-east-2 --alarm-names ...
I ran this command:
gcloud --project=xxx beta container clusters create xxx --network xxx --subnetwork xxx --cluster-secondary-range-name=xxx
Turns out I had a typo. My secondary range is actually zzz, not xxx. So I have to wait about 30 minutes for the cluster creation to fail before I finally see the actual error:
Retry budget exhausted (80 attempts): Secondary range "xxx" does not
exist in network "xxx", subnetwork "xxx".
That's bad. Until the 30 or so minutes elapse, I get no error messages or logs, and I can't even delete the cluster until it "finishes" and fails.
There has to be a better way! Can I get this to fail fast or at least get some kind of verbose output?
Answering the question from the title:
Can I set the “Retry budget” for gcloud beta container clusters create command?
Currently there is no way to change the retry budget, which also means the non-beta $ gcloud container clusters create command doesn't support changing it either.
You can follow the feature request for it that was created on Issue Tracker:
Issuetracker.google.com: Allow configuration of GKE "maximum retry budget" when using custom ranges
Additional resources:
Stackoverflow.com: Questions: Terraform Google container cluster adjust maximum retry budget
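As a partial workaround for the slow failure, you can at least verify that the secondary range name exists before kicking off cluster creation, for example with a check along these lines (project, region and subnet name are placeholders):
gcloud compute networks subnets describe SUBNET_NAME \
    --project PROJECT_ID \
    --region REGION \
    --format="yaml(secondaryIpRanges)"
If the name you plan to pass via --cluster-secondary-range-name doesn't show up in that output, the create call would only fail after exhausting the retry budget as described above.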
Due to high costs in our environment, I have been tasked with creating a Lambda to tag all log groups the same way as their corresponding resources (the sources of those log groups). However, I am facing a challenge identifying the resource behind each log group. There are many log groups in our environment: logs for Lambda, logs for Elastic Beanstalk, logs for EC2. But how can I match a log group with its corresponding resource? I would appreciate any help very much!
I would try using describe-log-groups. To use it, you'll need to work your way "backwards" by going from the resources to the log groups, but I can't think of any other way at the moment.
aws logs describe-log-groups --query 'logGroups[*].arn' --log-group-name-prefix '/aws/lambda/[name-of-your-lambda]'
Output:
[
"arn:aws:logs:[region]:[account-id]:log-group:/aws/lambda/[name-of-your-lambda]:*"
]
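Once a log group is matched to its resource, applying the tag itself is a one-liner, for example (the tag key/value here are made up):
aws logs tag-log-group \
    --log-group-name /aws/lambda/[name-of-your-lambda] \
    --tags owner=team-a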
Hope that helps.
Is there a way to stream an AWS Log Group to multiple Elasticsearch Services or Lambda functions?
AWS only seems to allow one ES domain or Lambda, and I've tried everything at this point. I've even removed the ES subscription for the Log Group, created individual Lambda functions, and created the CloudWatch Logs trigger, but I can only apply the same CloudWatch Logs trigger to one Lambda function.
Here is what I'm trying to accomplish:
CloudWatch Log Group ABC -> No Filter -> Elasticsearch Service #1
CloudWatch Log Group ABC -> Filter: "XYZ" -> Elasticsearch Service #2
Basically, I need one ES cluster to store all logs, and another to only have a subset of filtered logs.
Is this possible?
I've run into this limitation as well. I have two Lambdas (doing different things) that need to subscribe to the same CloudWatch Log Group.
What I ended up doing was creating one Lambda that subscribes to the Log Group and proxies the events into an SNS topic.
Those two Lambdas are now subscribed to the SNS topic instead of the Log Group.
For filtering events, you could implement the filtering inside each Lambda.
It's not a perfect solution but it's a functioning workaround until AWS allows multiple Lambdas to subscribe to the same CloudWatch Log Group.
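Wiring each consumer Lambda to the topic is just a subscription plus an invoke permission; roughly something like this (the ARNs and function name are placeholders):
aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:123456789012:log-events \
    --protocol lambda \
    --notification-endpoint arn:aws:lambda:us-east-1:123456789012:function:consumer-one
aws lambda add-permission \
    --function-name consumer-one \
    --statement-id sns-invoke \
    --action lambda:InvokeFunction \
    --principal sns.amazonaws.com \
    --source-arn arn:aws:sns:us-east-1:123456789012:log-events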
I was able to resolve the issue using a bit of a workaround through the Lambda function and also using the response provided by Kannaiyan.
I created the subscription to ES via the console, and then unsubscribed, and modified the Lambda function default code.
I declared two Elasticsearch endpoints:
var endpoint1 = '<ELASTICSEARCH ENDPOINT 1>';
var endpoint2 = '<ELASTICSEARCH ENDPOINT 2>';
Then, declared an array named "endpoint" with the contents of endpoint1 and endpoint2:
var endpoint = [endpoint1, endpoint2];
I modified the "post" function which calls the "buildRequest" function that then references "endpoint"...
function post(body, callback) {
for (var index = 0; index < endpoint.length; ++index) {
var requestParams = buildRequest(endpoint[index], body);
...
So every time the "post" function is called it cycles through the array of endpoints.
Then, I modified the buildRequest function that is in charge of building the request. This function by default references the endpoint variable, but since the "post" function cycles through the array, I renamed "endpoint" to "endpoint_xy" to make sure it's not using the global variable and instead uses the value passed into the function:
function buildRequest(endpoint_xy, body) {
var endpointParts = endpoint_xy.match(/^([^\.]+)\.?([^\.]*)\.?([^\.]*)\.amazonaws\.com$/);
...
Finally, I used the response provided by Kannaiyan on using the AWS CLI to implement the subscription to the logs, but corrected a few variables:
aws logs put-subscription-filter \
--log-group-name <LOG GROUP NAME> \
--filter-name <FILTER NAME> \
--filter-pattern <FILTER PATTERN> \
--destination-arn <LAMBDA FUNCTION ARN>
I kept the filters completely open for now, but I will code the filtering directly into the Lambda function like dashmug suggested. At least I can now split one log group across two ES clusters.
Thank you everyone!
Seems like an AWS console limitation.
You can do it via the command line:
aws logs put-subscription-filter \
--log-group-name /aws/lambda/testfunc \
--filter-name filter1 \
--filter-pattern "Error" \
--destination-arn arn:aws:lambda:us-east-1:<ACCOUNT_NUMBER>:function:SendToKinesis
You also need to add the necessary permissions.
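For the Lambda destination in the example above, that would look something like this (per the subscription filter docs; note that some older examples use the regional logs.us-east-1.amazonaws.com principal instead):
aws lambda add-permission \
    --function-name SendToKinesis \
    --statement-id cwlogs-invoke \
    --principal logs.amazonaws.com \
    --action lambda:InvokeFunction \
    --source-arn "arn:aws:logs:us-east-1:<ACCOUNT_NUMBER>:log-group:/aws/lambda/testfunc:*" \
    --source-account <ACCOUNT_NUMBER>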
Full detailed instructions:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html
Hope it helps.
As of September 2020, CloudWatch allows two subscription filters on a single CloudWatch Log Group, as well as multiple metric filters per Log Group.
Update: AWS posted October 2, 2020, on their "What's New" blog that "Amazon CloudWatch Logs now supports two subscription filters per log group".