Sumo Logic and CloudWatch logs not working when querying source - amazon-web-services

I'm trying to ingest CloudWatch logs in Sumo Logic.
It works for metrics but not for logs. When I try to perform a log search querying
_sourceCategory=aws/cloudwatch
nothing is retrieved.
If I do the same in metrics, it works. So the issue seems to be with the logs.
Here's the context and how I set it up:
First, I created a role with their template. Since it wasn't working, I added open permissions for AWS CloudWatch and AWS Logs (top of the actions list):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:*",
        "cloudwatch:*",
        "cloudwatch:ListMetrics",
        "cloudwatch:GetMetricStatistics",
        "tag:GetResources"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}
Then, I used the wizard to set up streaming data.
AWS CloudWatch Logs didn't appear as an option, but AWS CloudWatch Metrics did.
(Could this be related to my issue?)
For source category: aws/cloudwatch
Regions: us-east-1
Namespaces to include:
AWS/Logs
AWS/Lambda
In role, I pasted the ARN of the role created previously.
As I said previously, I can use the metrics and query by metrics, but not query logs. I'm new to both AWS and Sumo Logic, and I don't know what I'm missing.
I would appreciate any advice.

If you used the wizard, then it makes sense that you only get the metrics. Collecting CloudWatch logs is done differently: you need a Lambda function to push the logs to Sumo Logic, because Sumo Logic cannot pull CloudWatch logs directly.
Grzegorz attached the link to the documentation for collecting CloudWatch logs.
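For a rough idea of what such a function does, here is a minimal sketch (not Sumo Logic's official function) of a Lambda that receives a CloudWatch Logs subscription event and forwards the log messages to a hosted HTTP collector; SUMO_ENDPOINT is a hypothetical environment variable standing in for your collector URL:

import base64
import gzip
import json
import os
import urllib.request

# Hypothetical env var holding the URL of a Sumo Logic hosted HTTP source.
SUMO_ENDPOINT = os.environ["SUMO_ENDPOINT"]

def handler(event, context):
    # CloudWatch Logs delivers subscription data base64-encoded and gzipped.
    payload = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))
    body = "\n".join(e["message"] for e in payload["logEvents"]).encode("utf-8")
    req = urllib.request.Request(SUMO_ENDPOINT, data=body,
                                 headers={"Content-Type": "text/plain"})
    urllib.request.urlopen(req)

You would then add a subscription filter on each log group you want forwarded, pointing at this function.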

Related

AWS Event Bus fails to write logs to CloudWatch on a custom log group from AWS Lambda

I have an AWS Lambda whose job is to consume logs from an external source and write these logs to a custom CloudWatch log group. Please note that this Lambda is already writing its own logs to its own log group; that's not my question. What I want is for it to write the externally derived logs to another CloudWatch log group.
Following the AWS documentation, and using CloudFormation, I created an event bus and a rule that targets CloudWatch:
redacted
I have omitted most of the CloudFormation template for clarity, just leaving in the parts that seem relevant.
What I am finding is that the Lambda receives the logs (via Kinesis), processes them and sends them to the event bus in the code snippet below:
redacted
The last line above indicates that the event is sent to the event bus:
redacted
However the Event Bus, having (I believe) received the event, does not send it on to CloudWatch, even if I manually create the log group ${AWS::StackName}-form-log-batch-function (I have kept the stack reference as a parameter to preserve anonymity).
I have checked the CloudFormation creation and all resources are present (confirmed by the Lambda not experiencing any exceptions when it tries to send the event).
Anyone understand what I am missing here?
You can't write to CloudWatch Logs (CWL) using your WebLogsEventBusLoggingRole role. As the AWS docs explain, you have to use CWL resource-based permissions:
When CloudWatch Logs is the target of a rule, EventBridge creates log streams, and CloudWatch Logs stores the text from the triggering events as log entries. To allow EventBridge to create the log stream and log the events, CloudWatch Logs must include a resource-based policy that enables EventBridge to write to CloudWatch Logs.
Sadly, you can't set up such permissions from vanilla CloudFormation (CFN). This is not supported:
AWS::Events::Rule targetting Cloudwatch logs
To do it from CFN, you have to create a custom resource in the form of a Lambda function. The function would set the CWL permissions using the AWS SDK.
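A minimal sketch of such a custom-resource Lambda, calling put_resource_policy through boto3; the policy name, account ID, and log-group pattern are illustrative:

import json
import urllib.request

import boto3

logs = boto3.client("logs")

# Illustrative resource policy letting EventBridge write to log groups
# under /aws/events/ (the account ID is a placeholder).
POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "events.amazonaws.com"},
        "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": "arn:aws:logs:us-east-1:111111111111:log-group:/aws/events/*:*",
    }],
}

def handler(event, context):
    status = "SUCCESS"
    try:
        if event["RequestType"] in ("Create", "Update"):
            logs.put_resource_policy(policyName="EventBridgeToCWLogs",
                                     policyDocument=json.dumps(POLICY))
    except Exception:
        status = "FAILED"
    # A custom resource must report its result to the pre-signed URL
    # that CloudFormation passes in the request.
    body = json.dumps({
        "Status": status,
        "Reason": "See CloudWatch logs for details",
        "PhysicalResourceId": "eventbridge-cwl-resource-policy",
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }).encode("utf-8")
    urllib.request.urlopen(urllib.request.Request(event["ResponseURL"], data=body, method="PUT"))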
I hope this is helpful for others still looking for answers.
Problem #1: I want CloudFormation to work
You can use the CloudFormation template from https://serverlessland.com/patterns/eventbridge-cloudwatch or Terraform.
Problem #2: Why EventBridge can't write to CloudWatch Logs in general
Just like what was said above, for EventBridge to write to CloudWatch Logs there needs to be a resource policy (a policy set on the destination, in this case CloudWatch Logs). However, please note:
If you create a CloudWatch Logs target in the console, a resource policy will be auto-generated for you, but the auto-generated one has a twist:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TrustEventsToStoreLogEvent",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "events.amazonaws.com",
          "delivery.logs.amazonaws.com"
        ]
      },
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:us-east-1:777777777:log-group:/*:*"
    }
  ]
}
You will notice that the Resource takes the form /*:*,
which means the log group name has to start with / if you are going to rely on the auto-generated policy.
So if your log group is not in a format like /event/myloggroup/, the policy will not help you.
So, for example:

Target Log Group Name | ARN | Does it work?
event-bridge-rule2 | arn:aws:logs:us-east-1:281458815962:log-group:event-bridge-rule2:* | No (the ARN is missing the leading /)
/aws/events/helpme | arn:aws:logs:us-east-1:281458815962:log-group:/aws/events/helpme:* | Works like a charm
My advice is to put in place a policy that makes sense to you and not rely on the automatic one.
Just create a log group with a name of the form /aws/events/<yourgroupname> and it will work fine; also set logs:*.

Why am I not seeing recent events under the RDS default sqlserver_audit parameter group?

I have an RDS SQL Server instance with the default sqlserver_audit parameter group, but I am not seeing any recent events. What is the issue?
A screenshot of what I am seeing: [screenshot omitted]
Events generated from the sqlserver_audit parameter group (HIPAA audit) are not directly visible to you in the AWS Console. For more info about the HIPAA audit implementation in RDS for SQL Server, see this AWS forum post.
When you want to see events from your SQL Server audits, you need to use the SQLSERVER_AUDIT option. In that case, RDS will stream data from the audits on your RDS instance to your S3 bucket. You can also configure a retention time, during which those .sqlaudit files are kept on the RDS instance and you can access them via msdb.dbo.rds_fn_get_audit_file. For more info, see the documentation.
In both cases, "Recent events" will contain only important messages related to your instance, not audited events. So, for example, whenever RDS can't write to your S3 bucket to store your audits, it will tell you so in "Recent events".
Vasek's answer helped me understand why I wasn't seeing logs show up in my S3 bucket: the inline IAM policy attached to the IAM role used to transfer the audit logs was incorrect.
If you use the automated option-group creation wizard to add the SQLSERVER_AUDIT option to your RDS instance, be sure you don't include a trailing slash on your S3 key prefix.
The incorrect IAM policy statement the AWS option-group creation wizard produced is shown below.
{
  "Effect": "Allow",
  "Action": [
    "s3:ListMultipartUploadParts",
    "s3:AbortMultipartUpload",
    "s3:PutObject"
  ],
  "Resource": [
    "arn:aws:s3:::my-audit-logs-bucket/audits//*" # <---- INCORRECT
  ]
}
I changed my SQLSERVER_AUDIT option group to use the bucket's root and changed the IAM policy to the correct configuration shown below, and my audit logs started showing up in my S3 bucket.
{
  "Effect": "Allow",
  "Action": [
    "s3:ListMultipartUploadParts",
    "s3:AbortMultipartUpload",
    "s3:PutObject"
  ],
  "Resource": [
    "arn:aws:s3:::my-audit-logs-bucket/*"
  ]
}
From the docs:
RDS uploads the completed audit logs to your S3 bucket, using the IAM role that you provide. If you enable retention, RDS keeps your audit logs on your DB instance for the configured period of time.
So the log events will be in S3 (assuming all permissions are set correctly), not in the RDS Events console.
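If you want to sanity-check that the files are arriving, a quick sketch that lists them with boto3 (bucket name and prefix are hypothetical):

import boto3

s3 = boto3.client("s3")
# List the uploaded .sqlaudit files under the configured prefix.
resp = s3.list_objects_v2(Bucket="my-audit-logs-bucket", Prefix="audits/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])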

AWS Backup - How to get notification from failed backups

I'm using AWS Backup to back up my resources. I would like to get notifications for failed backups, but the only way to check the status of backups seems to be the AWS Backup service page - there is nothing AWS Backup related in CloudWatch metrics. I was thinking of creating an SNS topic from a CloudWatch metric, but that doesn't seem to be possible right now.
Another question: is there any way to get a weekly report from AWS Backup, like "There are 25 resources currently being backed up, and from the last 7 days there are 175 restore points available"?
First of all, create an SNS topic and add AWS Backup as a trusted entity in the resource-based policy of that topic:
{
  "Sid": "__console_pub_0",
  "Effect": "Allow",
  "Principal": {
    "Service": "backup.amazonaws.com"
  },
  "Action": "SNS:Publish",
  "Resource": "arn:aws:sns:us-west-2:{accountId}:test"
}
Then turn on notifications for that topic and add the BACKUP_JOB_COMPLETED event, following the AWS documentation:
Using Amazon SNS to Track AWS Backup Events.
Each time an AWS Backup job completes or fails, the email addresses subscribed to the SNS topic will be notified.
However, I can't find a way to customize the notification.
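The same setup can be scripted; a sketch with boto3, assuming the default backup vault and the topic from the policy above (the account ID is a placeholder):

import boto3

backup = boto3.client("backup")
backup.put_backup_vault_notifications(
    BackupVaultName="Default",  # assumption: the default vault
    SNSTopicArn="arn:aws:sns:us-west-2:123456789012:test",  # placeholder account ID
    BackupVaultEvents=["BACKUP_JOB_COMPLETED"],  # fires when a job finishes, completed or failed
)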

AWS Lambda@Edge debugging

I'm currently working on a Lambda@Edge function.
I cannot find any logs on CloudWatch or other debugging options.
When running the lambda using the "Test" button, the logs are written to CloudWatch.
When the lambda function is triggered by a CloudFront event the logs are not written.
I'm 100% positive that the event trigger works, as I can see its result.
Any idea how to proceed?
Thanks ahead,
Yossi
1) Ensure you have given Lambda permission to send logs to CloudWatch. Below is the AWSLambdaBasicExecutionRole policy, which you need to attach to the execution role you are using for your Lambda function.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
2) Lambda creates CloudWatch Logs log groups in the regions closest to the locations where the function is executed. The name of each log group has the format /aws/lambda/us-east-1.function-name, where function-name is the name you gave the function when you created it. So ensure you are checking the CloudWatch logs in the correct region.
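Since the logs can land in any region, here is a quick sketch that scans every region for those prefixed log groups (the function name is hypothetical):

import boto3

FUNCTION_NAME = "my-edge-function"  # hypothetical

# Enumerate all regions, then look for the Lambda@Edge log groups in each.
ec2 = boto3.client("ec2", region_name="us-east-1")
for region in (r["RegionName"] for r in ec2.describe_regions()["Regions"]):
    logs = boto3.client("logs", region_name=region)
    resp = logs.describe_log_groups(
        logGroupNamePrefix=f"/aws/lambda/us-east-1.{FUNCTION_NAME}")
    for group in resp.get("logGroups", []):
        print(region, group["logGroupName"])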
In case anyone finds it useful:
The fact that AWS prefixes your function name (which breaks the built-in "CloudWatch at a glance" dashboard) and that Lambda@Edge runs across multiple regions inspired me to create this CloudWatch Dashboard template, which gives you similar standard monitoring for all regions in one dashboard.

AWS CloudWatch Logs are not Created

I am trying to use AWS Glue to run an ETL job that fetches data from Redshift to S3.
When I run a crawler it successfully connects to Redshift and fetches schema information. Relevant logs are created under a log group aws-glue/crawlers.
When I run the ETL job, it is supposed to create a log stream under log groups aws-glue/jobs/output and aws-glue/jobs/error, but it fails to create such log streams, and eventually the job too fails.
(I am using the AWS-managed AWSGlueServiceRole policy for the Glue service.)
Since it does not produce any logs, it is difficult to identify the reason for the ETL job failure. I would appreciate it if you could help me resolve this issue.
Most of the time this has to do with your AWS service not having the correct permissions (yes, even for just writing logs!).
Adding something like this to the Glue role might do the trick:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
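If you would rather attach that statement from a script, a sketch using boto3 (the role and policy names are hypothetical):

import json

import boto3

iam = boto3.client("iam")
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": "arn:aws:logs:*:*:*",
    }],
}
# Attach the statement as an inline policy on the Glue job's role.
iam.put_role_policy(RoleName="MyGlueServiceRole",  # hypothetical role name
                    PolicyName="GlueCloudWatchLogs",
                    PolicyDocument=json.dumps(policy))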
I would make sure that your endpoint and VPC are set up correctly via these instructions:
http://docs.aws.amazon.com/glue/latest/dg/setup-vpc-for-glue-access.html
I had my inbound rules set up correctly but had not set up the outbound rules, which I think was the issue.