Sending SNS notifications when there is an IAM Change - amazon-web-services

I set up an SNS notification to send me an email whenever there is a change to the IAM policies. When a change occurs, CloudTrail sends a log to CloudWatch, which triggers an alarm attached to an SNS topic. More details in this link.
Here is an example of what I get by mail:
Alarm Details:
- Name: PolicyAlarm
- Description: This alarm is to monitor IAM Changes
- State Change: INSUFFICIENT_DATA -> ALARM
- Reason for State Change: Threshold Crossed: 1 datapoint [1.0 (31/08/17 09:15:00)] was greater than or equal to the threshold (1.0).
- Timestamp: Thursday 31 August, 2017 09:20:39 UTC
- AWS Account: 00011100000
Threshold:
- The alarm is in the ALARM state when the metric is GreaterThanOrEqualToThreshold 1.0 for 300 seconds.
The only relevant information here is the AWS account ID. Is there a way to also include the change itself: who made it, when, and where? Or perhaps to include some information from the CloudWatch log, such as the "eventName"?

There are two ways to trigger notifications from AWS CloudTrail:
Configure Amazon CloudWatch Logs to look for specific strings. When one is found, it increments a metric. Then create an alarm that triggers when the metric exceeds a particular value over a particular period of time. When the notification is sent, only information about the alarm is included. OR...
Create a rule in Amazon CloudWatch Events to look for the event. Set an Amazon SNS topic as the target. When the notification is sent, the full details of the event are passed through.
You should use option 2, since it provides full details of the event.
Here's what I did to test:
Created an Amazon SQS queue in us-east-1 (where all IAM events take place)
Created an Amazon CloudWatch Events rule in us-east-1 with:
Service Name: IAM
Event Type: AWS API Call via CloudTrail
Specific Operations: PutUserPolicy
Edited an IAM policy
Within a short time, the event appeared in SQS:
Here are the relevant bits of the event that came through:
{
  "detail-type": "AWS API Call via CloudTrail",
  "source": "aws.iam",
  "region": "us-east-1",
  "detail": {
    "eventSource": "iam.amazonaws.com",
    "eventName": "PutUserPolicy",
    "awsRegion": "us-east-1",
    "requestParameters": {
      "policyDocument": "{\n \"Version\": \"2012-10-17\",\n ... }",
      "policyName": "my-policy",
      "userName": "my-user"
    },
    "eventType": "AwsApiCall"
  }
}
I sent the message to SQS, but you could also send it to SNS to then forward via email.
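If you subscribe a Lambda function to the rule instead of SQS, extracting the "who/what/where" is just a matter of reading fields from the event. A minimal sketch (the summary format and function name are mine; the field names come from the event above):

```python
def summarize_iam_change(event):
    """Pull the interesting fields out of a CloudWatch Events CloudTrail event."""
    detail = event["detail"]
    return "{} on user {} in {}".format(
        detail["eventName"],
        detail["requestParameters"].get("userName", "?"),
        detail["awsRegion"],
    )

# Trimmed version of the event shown above:
sample = {
    "detail": {
        "eventName": "PutUserPolicy",
        "awsRegion": "us-east-1",
        "requestParameters": {"policyName": "my-policy", "userName": "my-user"},
    }
}

print(summarize_iam_change(sample))  # -> PutUserPolicy on user my-user in us-east-1
```

The resulting string could then be published to SNS for the email body.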

Related

AWS EventBridge - is it possible to include an instance's tag values in the rule pattern & the output?

Here is a sample event pattern
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["terminated"]
  }
}
Would it be possible to include the tag values of a specific instance in the detail, and pass those tag values into the input message sent to the target?
Currently I am using an Input Transformer with the following configuration:
Input configuration
{"account":"$.account","instance-id":"$.detail.instance-id","region":"$.region","state":"$.detail.state","time":"$.time"}
"At <time>, the status of your EC2 instance <instance-id> on account <account> in the AWS Region <region> has changed to <state>."
Output message
"At 2022-01-26T00:29:41Z, the status of your EC2 instance i-0ae54c6931ad72f12 on account XXXXXX in the AWS Region us-west-2 has changed to terminated."
Preferred message
"At 2022-01-26T00:29:41Z, the status of your EC2 instance name ABC-XYZ on account XXXXXX in the AWS Region us-west-2 has changed to terminated."
{
  "version": "0",
  "id": "ef80b5de-5221-c559-b5c3-590c4dfgb8bf",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "xxxxxx",
  "time": "2022-01-26T11:24:35Z",
  "region": "xxxx",
  "resources": ["arn:aws:ec2:eu-west-1:xxxxx:instance/i-082xxxxxx"],
  "detail": {
    "instance-id": "i-082xxxxx",
    "state": "terminated"
  }
}
This is the event you get when you don't use the Input Transformer and use the matched event. It means you can only get the values present here, so what you ask about the instance name is not possible: as you can see, it's not present in the event.
You have to send the instance-id you get from your input transformer to a Lambda that runs aws ec2 describe-instances --instance-ids <instance-id> (the one you get from the input transformer), which will give you all the tags; that Lambda can then publish to SNS.
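Inside that Lambda, pulling the Name tag out of the describe-instances response is a small amount of parsing. A sketch (the helper name is mine; in the real function you would feed it the response of boto3.client("ec2").describe_instances(InstanceIds=[instance_id])):

```python
def name_tag_from_response(response):
    """Find the 'Name' tag in a describe_instances response dict."""
    for reservation in response.get("Reservations", []):
        for instance in reservation.get("Instances", []):
            for tag in instance.get("Tags", []):
                if tag["Key"] == "Name":
                    return tag["Value"]
    return None

# Trimmed example of the describe_instances response shape:
sample_response = {
    "Reservations": [{
        "Instances": [{
            "InstanceId": "i-0ae54c6931ad72f12",
            "Tags": [{"Key": "Name", "Value": "ABC-XYZ"}],
        }]
    }]
}

print(name_tag_from_response(sample_response))  # -> ABC-XYZ
```

The Lambda can then interpolate that name into the message before calling sns.publish.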

Amazon EventBridge rule S3 put object event cannot trigger the AWS StepFunction

After setting up the EventBridge rule, an S3 put object event still cannot trigger the Step Function.
However, when I tried changing the event rule to an EC2 status event, it worked!
I also tried changing the rule to all S3 events, but it still does not work.
Amazon EventBridge:
Event pattern:
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject"],
    "requestParameters": {
      "bucketName": ["MY_BUCKETNAME"]
    }
  }
}
Target(s):
Type:Step Functions state machine
ARN:arn:aws:states:us-east-1:xxxxxxx:stateMachine:MY_FUNCTION_NAME
Reference: https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-cloudwatch-events-s3.html
Your Step Function isn't being triggered because the PutObject events aren't being published to CloudTrail. S3 object-level operations are classified as data events, so you must enable data events when creating your CloudTrail trail. The tutorial says "next, next and create", which seems to suggest no additional options need to be selected, but by default Data events (on step 2 - Choose log events - as of this writing) is not checked. You have to check it and fill in the bottom part to specify which buckets/events are to be logged.
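For an existing trail, the same data-event logging can be enabled programmatically. A sketch using boto3 (the trail name is a placeholder; the put_event_selectors call needs AWS credentials, so it is left commented out and we only build and print the selector here):

```python
import json

# Event selector that turns on object-level (data event) logging for the bucket.
event_selectors = [{
    "ReadWriteType": "WriteOnly",          # PutObject is a write
    "IncludeManagementEvents": True,
    "DataResources": [{
        "Type": "AWS::S3::Object",
        # Trailing slash means "all objects in this bucket"
        "Values": ["arn:aws:s3:::MY_BUCKETNAME/"],
    }],
}]

# import boto3
# boto3.client("cloudtrail").put_event_selectors(
#     TrailName="my-trail", EventSelectors=event_selectors)

print(json.dumps(event_selectors, indent=2))
```

Once the selector is applied, PutObject calls against the bucket reach CloudTrail and the EventBridge rule can match them.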

How to test lambda using test event

I have a Lambda which is triggered by a CloudWatch event when VPN tunnels go down or come up. I searched online but can't find a way to trigger this CloudWatch event.
I see an option for a test event, but what can I enter here for it to trigger an event saying that the tunnel is up or down?
You can look into CloudWatch Events and Event Patterns.
Events in Amazon CloudWatch Events are represented as JSON objects.
For more information about JSON objects, see RFC 7159. The following
is an example event:
{
  "version": "0",
  "id": "6a7e8feb-b491-4cf7-a9f1-bf3703467718",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "111122223333",
  "time": "2017-12-22T18:43:48Z",
  "region": "us-west-1",
  "resources": [
    "arn:aws:ec2:us-west-1:123456789012:instance/i-1234567890abcdef0"
  ],
  "detail": {
    "instance-id": "i-1234567890abcdef0",
    "state": "terminated"
  }
}
Also, to log based on an event, you can pick the required event from the AWS CW event types.
I believe that in your scenario you don't need to pass any input data, as you must have built the logic to test the VPN tunnels' connectivity within the Lambda. You can remove that JSON from the test event and then run the test.
If you need to pass in some information as part of the input event, then follow the approach mentioned by @Adiii.
EDIT
The question is more clear through the comment which says
But question is how will I trigger the lambda? Lets say I want to trigger it when tunnel is down? How will let lambda know tunnel is in down state? – NoviceMe
This can be achieved by setting up a rule in CloudWatch to schedule the Lambda trigger at a periodic interval. More details here:
Tutorial: Schedule AWS Lambda Functions Using CloudWatch Events
Lambda does not have an invocation trigger right now that can monitor a VPN tunnel, so the only workaround is to poll the status from the Lambda.
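To exercise the handler from the console's test event (or locally), you can feed it a hand-written event in the same shape your rule would deliver. A sketch (the "tunnel-state" field and values are illustrative, not a documented AWS payload):

```python
def handler(event, context=None):
    """Branch on a tunnel state carried in the event's detail section."""
    state = event.get("detail", {}).get("tunnel-state", "unknown")
    if state == "DOWN":
        return "alerting: tunnel is down"
    return "tunnel state: " + state

# Hand-written test events simulating the two cases:
down_event = {"detail": {"tunnel-state": "DOWN"}}
up_event = {"detail": {"tunnel-state": "UP"}}

print(handler(down_event))  # -> alerting: tunnel is down
print(handler(up_event))    # -> tunnel state: UP
```

Pasting down_event as the console test event lets you verify the "down" path without waiting for a real tunnel outage.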

Are there tools to view SQS queue status with only API keys?

I am working with Amazon SES and SQS to receive the bounce list for email. For security reasons, I am only given the information necessary to connect to the SES and SQS services (host name, API keys, etc.), so I am not able to use the AWS console to see the status of the queue. This is reasonable, as I don't want to mess with the many other services under the same account - especially when those services are not free. However, as jobs are added to SQS by SES, I need a way to see what's in SQS, so as to know whether a bug is because the job is not inside SQS or simply because my code failed to retrieve it.
So, are there tools that let me view the SQS status when I don't have access to the AWS console?
Yes, you can use the AWS CLI (https://aws.amazon.com/cli/) to view basic information about the queue:
For example:
aws sqs get-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/99999999/HBDService-BackgroundTaskQueue --attribute-names All
will show you this:
{
  "Attributes": {
    "LastModifiedTimestamp": "1522235654",
    "ApproximateNumberOfMessages": "7",
    "ReceiveMessageWaitTimeSeconds": "20",
    "CreatedTimestamp": "1522235629",
    "ApproximateNumberOfMessagesDelayed": "0",
    "QueueArn": "arn:aws:sqs:us-east-1:999999999:HBDService-BackgroundTaskQueue",
    "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:999999999:HBDService-BackgroundTaskQueue-DLQ\",\"maxReceiveCount\":100}",
    "MaximumMessageSize": "262144",
    "DelaySeconds": "0",
    "ApproximateNumberOfMessagesNotVisible": "0",
    "MessageRetentionPeriod": "1209600",
    "VisibilityTimeout": "180"
  }
}
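If you'd rather do this from code than the CLI, the same dict comes back from boto3.client("sqs").get_queue_attributes(QueueUrl=..., AttributeNames=["All"]). A sketch (the summarize helper is mine) that condenses the output above into the counts you usually care about:

```python
def summarize(attrs):
    """Condense get-queue-attributes output into message counts."""
    return {
        "visible": int(attrs["ApproximateNumberOfMessages"]),
        "in_flight": int(attrs["ApproximateNumberOfMessagesNotVisible"]),
        "delayed": int(attrs["ApproximateNumberOfMessagesDelayed"]),
    }

# Trimmed version of the CLI output shown above:
response = {
    "Attributes": {
        "ApproximateNumberOfMessages": "7",
        "ApproximateNumberOfMessagesNotVisible": "0",
        "ApproximateNumberOfMessagesDelayed": "0",
    }
}

print(summarize(response["Attributes"]))
# -> {'visible': 7, 'in_flight': 0, 'delayed': 0}
```

"visible" tells you whether SES actually delivered a job to the queue; "in_flight" tells you whether your consumer received it but has not yet deleted it.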

AWS trigger when an instance is available

I have created a workflow like this:
A user requests instance creation through an API Gateway endpoint
The gateway invokes a Lambda function that executes the following code
Generate an RDP file with the public DNS name and give it to the user so that they can connect
import time
import boto3

ec2 = boto3.resource('ec2', region_name='us-east-1')
instances = ec2.create_instances(...)  # arguments elided
instance = instances[0]
time.sleep(3)
instance.load()  # refresh attributes so public_dns_name is populated
return instance.public_dns_name
The problem with this approach is that the user has to wait almost 2 minutes before they are able to log in successfully. I'm totally okay with letting the Lambda run for that time by adding the following code:
instance.wait_until_running()
But unfortunately API Gateway has a 29-second timeout for Lambda integration. So even if I'm willing to wait, it wouldn't work. What's the easiest way to overcome this?
My approach to your scenario would be a CloudWatch Event Rule.
The lambda function after Instance creation must store a kind of relation between the instance and user, something like this:
Proposed table:
The table structure is up to you, but these are the most important columns.
------------------------------
| Instance_id | User_Id |
------------------------------
Create a CloudWatch Event Rule to execute a Lambda function.
First, pick Event Type: EC2 Instance State-change Notification, then select Specific state(s): Running.
Second, pick the target: the Lambda function that sends the email to the user.
That Lambda function will receive the InstanceId. With that information, you can find the related User_Id and send the necessary information to the user. You can use the SDK to get information about your EC2 instance, for example its public_dns_name.
This is an example of the payload that will be sent by the CloudWatch Event Rule notification:
{
  "version": "0",
  "id": "6a7e8feb-b491-4cf7-a9f1-bf3703467718",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "111122223333",
  "time": "2015-12-22T18:43:48Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:ec2:us-east-1:123456789012:instance/i-12345678"
  ],
  "detail": {
    "instance-id": "i-12345678",
    "state": "running"
  }
}
That way, you can send the public_dns_name when your instance is totally in service.
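The notified Lambda's handler could look like this sketch (the user lookup is stubbed out; in practice you would query the Instance_id -> User_Id table above and use boto3 for the describe/SNS calls):

```python
def handler(event, context=None):
    """Triggered by the state-change rule; notifies the instance's owner."""
    instance_id = event["detail"]["instance-id"]
    state = event["detail"]["state"]
    if state != "running":
        return None  # rule should only match 'running', but be defensive
    # user_id = lookup_user(instance_id)   # query your table from above
    # dns = boto3.resource("ec2").Instance(instance_id).public_dns_name
    return "notify owner of " + instance_id

# Trimmed version of the payload shown above:
sample_event = {"detail": {"instance-id": "i-12345678", "state": "running"}}

print(handler(sample_event))  # -> notify owner of i-12345678
```

At that point the instance is fully in service, so the public_dns_name you email out will actually accept the RDP connection.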
Hope it helps!