Automated cross-account DB backup using AWS Lambda, RDS, and SNS

I have a Lambda function that shares an RDS manual snapshot with another AWS account.
I am trying to create a 'chain reaction': the Lambda runs in the first account, the snapshot becomes visible to the second account, and a second Lambda is triggered that copies the now-visible snapshot to another region (inside the second account).
I tried using RDS event subscriptions and SNS topics, but I noticed that there is no RDS event subscription for sharing and/or modifying an RDS snapshot.
Then I tried to set up cross-account permissions so that the Lambda in the first account publishes to an SNS topic that triggers the Lambda in the second account, but it seems that the topic and the target Lambda must be in the same region (while the code that copies the DB snapshot must run in the target region). I followed this guide and ended up with this error:
A client error (InvalidParameter) occurred when calling the Subscribe operation: Invalid parameter: TopicArn
Has anyone tried something like this?
Is cross-region communication eventually feasible?
Could I trigger something in one region from something in another (any AWS service is welcome)?
My next attempts will be:
Cross-region Lambda invocation
Making use of API Gateway
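For context, the sharing step itself boils down to a single RDS API call. A minimal sketch of such a Lambda, where the snapshot identifier and target account ID are hypothetical placeholders:

```python
import boto3

# Hypothetical values; replace with your snapshot ID and target account.
SNAPSHOT_ID = "mydb-manual-snapshot"
TARGET_ACCOUNT_ID = "222222222222"

rds = boto3.client("rds")

def lambda_handler(event, context):
    # Sharing a manual snapshot means adding the target account
    # to the snapshot's "restore" attribute.
    rds.modify_db_snapshot_attribute(
        DBSnapshotIdentifier=SNAPSHOT_ID,
        AttributeName="restore",
        ValuesToAdd=[TARGET_ACCOUNT_ID],
    )
    return {"shared": SNAPSHOT_ID, "with": TARGET_ACCOUNT_ID}
```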

Related

How to use an AWS Restore job to trigger a Lambda function

I have a DocumentDB cluster backed up using AWS Backup. When I restore it, it just creates a cluster with no instances and the cluster uses the default security group of the VPC.
I could not find any way to fix this as part of the restore job, so I am using a Lambda function that uses boto3 to update the security group and add instances to the cluster.
Now is it possible to trigger the Lambda function automatically when the restore job is completed?
When your Backup job finishes, you can capture an event using EventBridge and then trigger your Lambda off of that.
This blog post from AWS covers triggering a Lambda off the back of an AWS Backup job using EventBridge. It's not the exact same scenario since they're triggering the Lambda from the Backup AND Restore jobs, but you should be able to extract the steps you need from that.
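As a starting point, here is a sketch of wiring that up with boto3. The rule name and function ARN are hypothetical, and the exact detail-type and status strings are worth double-checking against the AWS Backup event documentation:

```python
import json
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Hypothetical names for the rule and the post-restore Lambda.
RULE_NAME = "on-backup-restore-complete"
FUNCTION_ARN = "arn:aws:lambda:us-east-1:111111111111:function:fix-restored-cluster"

# AWS Backup emits "Restore Job State Change" events to EventBridge;
# match only the ones reporting a completed job.
pattern = {
    "source": ["aws.backup"],
    "detail-type": ["Restore Job State Change"],
    "detail": {"status": ["COMPLETED"]},
}

events.put_rule(Name=RULE_NAME, EventPattern=json.dumps(pattern))
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "restore-handler", "Arn": FUNCTION_ARN}],
)

# EventBridge also needs permission to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="allow-eventbridge-restore-rule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
)
```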

Python Lambda code for when an AWS EC2 instance gets stopped

Could anyone please help me with the Lambda code? Whenever AWS EC2 instances get stopped, we need to get email notifications with SNS, and the email needs to include the instance name. I am able to get the instance ID but not the instance name.
AWS CloudTrail allows you to identify and track EC2 instance lifecycle API calls (launch, start, stop, terminate). See How do I use AWS CloudTrail to track API calls to my Amazon EC2 instances?
And you can trigger a Lambda function to run arbitrary code when CloudTrail logs certain events. See Triggering a Lambda function with AWS CloudTrail events.
You can also create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and triggers a Lambda via CloudWatch Events.
You can create a rule in Amazon CloudWatch Events that:
Triggers when an instance enters the Stopped state
Sends a message to an Amazon SNS Topic
If you want to modify the message that is being sent, configure the rule to trigger an AWS Lambda function instead. Your function should (see the sketch after this list):
Extract the instance information (e.g. InstanceId) from the event parameter
Call describe-instances to obtain the Name of the instance (presumably the tag with a Key of Name)
Publish a message to the Amazon SNS Topic
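A minimal sketch of such a function, assuming the rule delivers the standard EC2 state-change event; the topic ARN is a hypothetical placeholder:

```python
import boto3

ec2 = boto3.client("ec2")
sns = boto3.client("sns")

# Hypothetical topic ARN; replace with your own.
TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:ec2-stopped-alerts"

def lambda_handler(event, context):
    # "EC2 Instance State-change Notification" events carry the
    # instance ID and new state in event["detail"].
    instance_id = event["detail"]["instance-id"]
    state = event["detail"]["state"]

    # Look up the Name tag via describe_instances.
    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    tags = reservations[0]["Instances"][0].get("Tags", [])
    name = next((t["Value"] for t in tags if t["Key"] == "Name"), "<unnamed>")

    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject=f"EC2 instance {name} is {state}",
        Message=f"Instance {name} ({instance_id}) entered state: {state}",
    )
```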

Invoking Lambda from VPC with CodePipeline Fails with Timeout

I have a Lambda that I created following the example given in the AWS docs (https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html), but I am invoking my Lambda from within a VPC, and CodePipeline never successfully talks to the Lambda: the action times out, and execution never seems to enter the Lambda, as CloudWatch has none of my console.logs. This is despite the fact that I have created a CodePipeline endpoint within the VPC and associated it with the private subnet in which the Lambda runs.
I can give the Lambda an API Gateway endpoint and fire it manually just fine from Postman; it takes ~1 second to run. My CloudWatch logs just have "Task timed out after 20.02 seconds." I'm not sure what else I can try; what else might prevent CodePipeline from talking to the Lambda?
After additional logging, I discovered that I actually had the VPC set up correctly and that the Lambda was being invoked; the Lambda was failing to reach S3 and was hanging on getting objects. I created another endpoint for S3 in the VPC and was able to move past the initial issue.
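For anyone hitting the same wall, two things matter here: a Lambda invoked by CodePipeline must report back via put_job_success_result/put_job_failure_result or the action hangs until timeout, and a VPC-attached Lambda needs a route (e.g. a VPC endpoint) to every service it touches. A rough sketch, with a hypothetical bucket and key:

```python
import boto3

codepipeline = boto3.client("codepipeline")
s3 = boto3.client("s3")

def lambda_handler(event, context):
    # CodePipeline passes a job ID the function MUST acknowledge,
    # or the action sits in progress until it times out.
    job_id = event["CodePipeline.job"]["id"]
    try:
        # Any S3 access from a VPC-attached Lambda needs a route to S3,
        # e.g. a gateway VPC endpoint on the subnet's route table.
        s3.get_object(Bucket="my-artifact-bucket", Key="some/key")  # hypothetical
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)},
        )
```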

DynamoDB triggering a Lambda function in another Account

I have a DynamoDB table in Account A and an AWS Lambda function in Account B. I need to trigger the Lambda function when there are changes in the DynamoDB table.
I came across "It is possible to Access AWS DynamoDB streams accross accounts?" on Stack Overflow, which says it is not possible, but then I found "Cross account role for an AWS Lambda function", which says it is possible. I am not sure which one is correct.
Has somebody tried the same scenario as I am trying to achieve?
The first link is correct: stream-based event triggers for Lambda are limited to the same AWS account and the same region.
However, there is a way you will be able to achieve your goal.
Pre-requisite: I assume you already have a DynamoDB (DDB) table (let's call it Table_A) in AWS account A, and a processing Lambda (let's call it Processing_Lambda) in AWS account B.
Steps:
Create a new proxy Lambda (let's call it Proxy_Lambda) in account A. This Lambda will broadcast the events that it processes.
Enable a DynamoDB stream on table Table_A. The stream will contain all insert/update/delete events on the table.
Create a Lambda trigger for Proxy_Lambda to read events from Table_A's stream.
Create a new SNS topic (let's call it AuditEventFromTableA) in AWS account A.
Add code in Proxy_Lambda to publish the events read from the stream to the SNS topic AuditEventFromTableA (see the sketch after these steps).
Create an AWS SQS queue (it can also be a FIFO queue if your use case requires sequential events) in AWS account B. Let's call this queue AuditEventQueue-TableA-AccountA.
Create a subscription from SNS topic AuditEventFromTableA in AWS account A to the SQS queue AuditEventQueue-TableA-AccountA in AWS account B. This allows the SNS events from account A to be received in the SQS queue of account B.
Create a trigger for Processing_Lambda in AWS account B to consume messages from SQS queue AuditEventQueue-TableA-AccountA.
Result: this way you will be able to trigger the Lambda in account B based on changes in the DynamoDB table in account A.
Note: if your use case demands strict ordering of events, you may prefer publishing update events from Proxy_Lambda directly to a Kinesis stream in account B instead of the SNS-SQS path.
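A minimal sketch of Proxy_Lambda's publishing step (the topic ARN is hypothetical); note that the cross-account part is handled by the SNS topic policy and the SQS queue policy, not by this code:

```python
import json
import boto3

sns = boto3.client("sns")

# ARN of the AuditEventFromTableA topic in account A (hypothetical value).
TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:AuditEventFromTableA"

def lambda_handler(event, context):
    # A DynamoDB stream batch arrives as event["Records"]; forward each
    # record so account B's queue receives the full change event.
    for record in event["Records"]:
        sns.publish(
            TopicArn=TOPIC_ARN,
            Message=json.dumps(record, default=str),
        )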
Simple!
Create a proxy Lambda A in account A and permit A to call the target Lambda B in account B.
The DDB stream triggers Lambda A, and Lambda A calls Lambda B (a sketch follows).
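A sketch of that direct approach (all ARNs and account IDs are hypothetical). Account B must grant account A invoke permission on Lambda B, and Lambda A's execution role also needs lambda:InvokeFunction on the target ARN:

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical ARN of the target Lambda in account B. Account B must have
# granted account A invoke rights, e.g. via:
#   aws lambda add-permission --function-name lambda-b \
#       --statement-id allow-account-a --action lambda:InvokeFunction \
#       --principal 111111111111
TARGET_ARN = "arn:aws:lambda:us-east-1:222222222222:function:lambda-b"

def lambda_handler(event, context):
    # Forward each DynamoDB stream record to the target function.
    for record in event["Records"]:
        lambda_client.invoke(
            FunctionName=TARGET_ARN,
            InvocationType="Event",  # asynchronous, fire-and-forget
            Payload=json.dumps(record, default=str),
        )
```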

Run AWS Lambda code when creating a new AWS EC2 instance

I'd like to run some code using Lambda on the event that I create a new EC2 instance. Looking at the blueprint config-rule-change-triggered, I have the ability to run code on various configuration changes, but not when an instance is created. Is there a way to do what I want, or have I misunderstood the use case of Lambda?
We had a similar requirement a couple of days back (users were supposed to get emails whenever a new instance was launched):
1) Go to CloudWatch, then select Rules.
2) Select the service name (it's EC2 in your case), then select "EC2 Instance State-change Notification".
3) Select "pending" in the "Specific state" dropdown.
4) Click the Add target option and select your Lambda function.
That's it: whenever a new instance gets launched, CloudWatch will trigger your Lambda function.
Hope it helps!
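The same rule can also be created programmatically. A sketch with hypothetical names; the Lambda additionally needs a resource policy allowing events.amazonaws.com to invoke it:

```python
import json
import boto3

events = boto3.client("events")

# Hypothetical rule name and target function ARN.
RULE_NAME = "on-ec2-launch"
FUNCTION_ARN = "arn:aws:lambda:us-east-1:111111111111:function:notify-on-launch"

# Same rule as in the console steps above, expressed as an event pattern:
# fire when any instance enters the "pending" state.
pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["pending"]},
}

events.put_rule(Name=RULE_NAME, EventPattern=json.dumps(pattern))
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "launch-notifier", "Arn": FUNCTION_ARN}],
)
```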
You could do this by inserting code into your EC2 instance launch userdata and have that code explicitly invoke a Lambda function, but that's not the best way to do it.
A better way is to use a combination of CloudTrail and Lambda. If you enable CloudTrail logging (every account should have this enabled, all the time, in all regions), CloudTrail will log to S3 all of the API calls made in your account. You then connect this to Lambda by configuring S3 to publish events to Lambda. Your Lambda function will receive an S3 event, can then retrieve the API logs, find RunInstances API calls, and do whatever work you need to as a consequence of the new instance being launched.
I don't see a notification trigger for instance startup; however, what you can do is write a startup script and pass it in via userdata. That startup script would need to download and install the AWS CLI, authenticate to SNS, and publish a message to a pre-configured topic. The script would authenticate to SNS (and whatever other AWS services are needed) via your IAM role, so you would need to give the IAM role permission to do whatever you want the script to do. This can be done in the IAM console.
That topic would then have your Lambda function subscribed to it, which would execute. Similar to the below article (though the author is doing something similar for shutdown, not startup).
http://rogueleaderr.com/post/48795010760/how-to-notifyemail-yourself-when-an-ec2-instance
If you are putting the EC2 instances into an autoscale group, I believe there is a trigger that gets fired when the autoscale group launches a new instance, so you could take advantage of that.
I hope that helps.
If you have CloudTrail enabled, you can have S3 PutObject events on the trail bucket trigger a Lambda function. The Lambda function parses the object passed to it and, if it finds a RunInstances event, runs your code.
I do the exact same thing to notify certain users when a new instance is launched. With Lambda/Python, it is ~20 lines of code.
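For reference, a minimal sketch of such a parsing function, assuming the trail bucket is configured to send object-created events to the Lambda; the follow-up action is left as a print:

```python
import gzip
import json
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # S3 invokes this function with the bucket/key of each new trail file.
    for rec in event["Records"]:
        bucket = rec["s3"]["bucket"]["name"]
        key = unquote_plus(rec["s3"]["object"]["key"])

        # CloudTrail delivers gzipped JSON objects with a "Records" array.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        trail = json.loads(gzip.decompress(body))

        for api_call in trail.get("Records", []):
            if api_call.get("eventName") == "RunInstances":
                # A new instance was launched; do whatever follow-up you need.
                print("RunInstances by", api_call.get("userIdentity", {}).get("arn"))
```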