Add AWS Lambda function trigger to existing function using Terraform - amazon-web-services

Currently I have a Lambda function that is created in a different Terraform module. I need to create a CloudWatch Logs trigger for that Lambda function from a separate repository. So far, I don't see any Terraform resources (that I know of) to do this. I have also looked into using Boto3 in a local-exec provisioner through Terraform. This doesn't look possible either. Are there any ways that I am missing to accomplish this using Terraform, the AWS CLI, or Python?
Thanks

You need to define an aws_cloudwatch_log_subscription_filter with the Lambda function's ARN as the destination value. You could pass the Lambda function's ARN into the CloudWatch module, or you could have it look up the function by name to get the ARN. You'll probably also need to create an aws_lambda_permission resource to give CloudWatch Logs permission to invoke the Lambda function.
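A minimal Terraform sketch of that wiring, assuming hypothetical function and log group names (replace them with your own); looking the function up by name avoids having to pass the ARN between modules:

data "aws_lambda_function" "target" {
  function_name = "my-existing-function"           # hypothetical name from the other module
}

resource "aws_lambda_permission" "allow_cloudwatch_logs" {
  statement_id  = "AllowExecutionFromCloudWatchLogs"
  action        = "lambda:InvokeFunction"
  function_name = data.aws_lambda_function.target.function_name
  principal     = "logs.amazonaws.com"
  # Optionally add source_arn here to restrict the permission to one log group.
}

resource "aws_cloudwatch_log_subscription_filter" "to_lambda" {
  name            = "send-to-lambda"
  log_group_name  = "/aws/some/log-group"          # hypothetical log group name
  filter_pattern  = ""                             # an empty pattern forwards every log event
  destination_arn = data.aws_lambda_function.target.arn
  depends_on      = [aws_lambda_permission.allow_cloudwatch_logs]
}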

Related

AWS cloud9 use lambda function

The AWS Cloud9 UI has changed a lot and I cannot find how to create a Lambda function from it. What I have done is create a Lambda function and use the upload Lambda option for the folder, but it doesn't show anything. How do I do it?

Dynamically tagging AWS resources

I'm new to this and I'd like to get some ideas for code that can dynamically tag AWS resources. I'm confused as to what will trigger the execution of the code that will tag them. Can someone please point me to the right resources and sample code?
You need to monitor CloudTrail events for the creation of the resources you would like to tag and invoke a Lambda function for the matching events, which tags
the resources accordingly.
A CloudWatch Event Rule is set up to monitor :create* API calls via CloudTrail.
This rule triggers the Lambda function whenever a matching event is found.
The Lambda function fetches the resource identifier and principal information from the event and tags the resources accordingly.
I've devised a solution to tag EC2 resources for governance. It is developed in CDK Python and uses Boto3 to attach the tags.
You can further extend this code to cover other resource types, or maintain a DynamoDB table to store additional tags per principal,
such as Project, Team, or Cost Center. You can then simply fetch the tags of a principal and apply them all at once; a minimal sketch of such a tagging handler follows.
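A minimal Boto3 sketch of such a handler, assuming the rule matches EC2 RunInstances events recorded by CloudTrail; the tag key and event fields here are illustrative rather than the exact code from that CDK project:

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    detail = event["detail"]
    # ARN of the IAM principal that made the API call.
    principal = detail["userIdentity"]["arn"]
    # For RunInstances, the created instance ids are in the responseElements.
    instance_ids = [
        item["instanceId"]
        for item in detail["responseElements"]["instancesSet"]["items"]
    ]
    # Attach a tag recording who created the instances.
    ec2.create_tags(
        Resources=instance_ids,
        Tags=[{"Key": "CreatedBy", "Value": principal}],
    )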
You can also write Lambda functions and use CloudWatch Events to trigger those functions, which will assign the tags to your resources.
You can use the AWS SDK for Node.js or Boto3 for Python.

Serverless-ly Query External REST API from AWS and Store Results in S3?

Given a REST API, outside of my AWS environment, which can be queried for json data:
https://someExternalApi.com/?date=20190814
How can I set up a serverless job in AWS to hit the external endpoint on a periodic basis and store the results in S3?
I know that I can instantiate an EC2 instance and just set up a cron job. But I am looking for a serverless solution, which seems more idiomatic.
Thank you in advance for your consideration and response.
Yes, you absolutely can do this, and probably in several different ways!
The pieces I would use would be:
CloudWatch Event using a cron-like schedule, which then triggers...
A Lambda function (with the right IAM permissions) that calls the API using, e.g., the Python requests library or an equivalent HTTP library, and then uses the AWS SDK to write the results to...
An S3 bucket ready to receive!
This should be all you need to achieve what you want; a minimal Python sketch of the Lambda piece follows.
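A minimal sketch of that Lambda piece in Python, assuming a hypothetical bucket name and that the requests library is packaged with the function (or supplied via a layer):

import datetime
import json

import boto3
import requests  # not in the Lambda runtime by default; bundle it or use a layer

s3 = boto3.client("s3")

def handler(event, context):
    date = datetime.date.today().strftime("%Y%m%d")
    # Hit the external API for today's data.
    resp = requests.get("https://someExternalApi.com/", params={"date": date}, timeout=30)
    resp.raise_for_status()
    # Store the JSON result in S3 under a date-based key.
    s3.put_object(
        Bucket="my-results-bucket",              # hypothetical bucket name
        Key=f"api-results/{date}.json",
        Body=json.dumps(resp.json()).encode("utf-8"),
        ContentType="application/json",
    )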
I'm going to skip the implementation details, as they are largely outside the scope of your question. As such, I'm going to assume your function is already written and targets Node.js.
AWS can do this on its own, but to make it simpler, I'd recommend using the Serverless Framework. We're going to assume you're using it.
Assuming you're entirely new to serverless, the first thing you'll need to do is to create a handler:
serverless create --template "aws-nodejs" --path my-service
This creates a service based on the aws-nodejs template on the provided path. In there, you will find serverless.yml (the configuration for your function) and handler.js (the code itself).
Assuming your function is exported as crawlSomeExternalApi on the handler export (module.exports.crawlSomeExternalApi = () => {...}), the functions entry in your serverless file would look like this if you wanted to invoke it every 3 hours:
functions:
  crawl:
    handler: handler.crawlSomeExternalApi
    events:
      - schedule: rate(3 hours)
That's it! All you need now is to deploy it through serverless deploy -v
Under the hood, this creates a CloudWatch schedule entry for your function. An example of it can be found in the documentation.
The first thing you need is a Lambda function. Implement your logic of hitting the API and writing the data to S3 (or whatever else) inside the Lambda function. Next, you need a CloudWatch Events rule with a schedule to periodically trigger your Lambda function. A schedule expression can trigger an event periodically using either a cron expression or a rate expression. The Lambda function you created earlier should be configured as the target for this CloudWatch rule.
The resulting flow is: CloudWatch invokes the Lambda function whenever the rule's schedule fires, and Lambda then performs your logic.
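For example, either of the following CloudWatch Events schedule expressions would fire the rule every three hours (cron expressions are evaluated in UTC):

rate(3 hours)
cron(0 */3 * * ? *)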

CodePipeline and CloudFormation parameters

I am using CodePipeline to deploy my SAM (lambda etc) application referencing https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html.
The "issue" now is my CloudFormation has some parameters inside and CodePipeline requires that I set these. I could do so via parameter overrides
But is this the correct way? I actually only want it set once at the start. And I'd rather have users set it in CloudFormation and CodePipeline should follow those values.
This stack is already created, why isit that CodePipeline complains I need them set?
The input parameters are required by CloudFormation to perform the update.
Template configuration is the recommended way to specify the input parameters. You could create a template configuration file of input parameters for the customers to use; a sketch follows.
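A minimal sketch of such a template configuration file, using hypothetical parameter names; CodePipeline's CloudFormation deploy action reads the "Parameters" block of this JSON alongside the template:

{
  "Parameters": {
    "Stage": "prod",
    "LambdaMemorySize": "256"
  }
}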
A possible solution is to create a custom Lambda function which is invoked from CodePipeline using an Invoke action.
As a parameter to that Lambda you would pass the CloudFormation stack name. The Lambda then loads the CloudFormation parameters from the existing stack and produces an output artifact from them (using the appropriate AWS SDK). That artifact is then used as an input to the CloudFormation deployment.
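A minimal Boto3 sketch of such an Invoke-action function, assuming the stack name arrives in the action's UserParameters; writing the collected parameters into the pipeline's output artifact in S3 is omitted here:

import json
import boto3

cfn = boto3.client("cloudformation")
codepipeline = boto3.client("codepipeline")

def handler(event, context):
    job = event["CodePipeline.job"]
    user_params = job["data"]["actionConfiguration"]["configuration"]["UserParameters"]
    stack_name = json.loads(user_params)["stackName"]   # hypothetical UserParameters shape

    # Read the parameters already set on the existing stack.
    stack = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]
    parameters = {p["ParameterKey"]: p["ParameterValue"] for p in stack.get("Parameters", [])}

    # Report success back to CodePipeline so the pipeline can continue.
    codepipeline.put_job_success_result(jobId=job["id"])
    return parameters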
Another solution is to create a CodeBuild project which does the same thing.
It's a bit complex, but unfortunately it seems that CodePipeline always needs the full set of parameters.

How do I add a Lambda Function with an S3 Trigger in CloudFormation?

I've been working with CloudFormation YAML for a while and have found it to be comprehensive - until now. I'm struggling to use SAM/CloudFormation to create a Lambda function that is triggered whenever an object is added to an existing S3 bucket.
All of the examples I've seen thus far seem to require that you create the bucket in the same CloudFormation script in which you create the Lambda function. This doesn't work for me, because we have a design goal to be able to use CloudFormation to redeploy our entire stack to different regions or AWS accounts and quickly stand up our application. S3 bucket names must be globally unique, so if I create the bucket in CloudFormation, the script will break when I try to deploy it to a different region/account. I could probably get around this by creating buckets with the account name/region in the name, but that's just not desirable from a bucket-sprawl perspective.
So, does anyone have a solution for creating a Lambda function in CloudFormation that is triggered by objects being written to an existing S3 bucket?
Thanks!
This is impossible, according to the SAM team. This is something which the underlying CloudFormation service can't do.
There is a possible workaround, if you implement a Custom resource which would trigger a separate Lambda function to modify the existing bucket and link it to the Lambda function that you want to deploy.
As "implement a Custom Resource" isn't very specific: Here is an AWS github repo with scaffold code to help write it, and then you declare something like the following in your template (where LambdaToBucket) is the custom function you wrote. I've found that you need to configure two things in that function: one is a bucket notification configuration on the bucket (saying tell Lambda about changes), the other is a Lambda Permission on the function (saying allow invocations from S3).
Resources:
  JoinLambdaToBucket:
    Type: Custom::JoinLambdaToExistingBucket
    Properties:
      ServiceToken: !GetAtt LambdaToBucket.Arn
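A minimal Boto3 sketch of what the LambdaToBucket function could do, assuming hypothetical BucketName and TargetFunctionArn resource properties; a real custom resource must also send a success/failure response back to CloudFormation (e.g. via the cfn-response module), which is omitted here:

import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

def handler(event, context):
    props = event["ResourceProperties"]
    bucket = props["BucketName"]                 # hypothetical property names
    function_arn = props["TargetFunctionArn"]

    # Lambda Permission: allow S3 to invoke the target function.
    lam.add_permission(
        FunctionName=function_arn,
        StatementId="AllowS3Invoke",
        Action="lambda:InvokeFunction",
        Principal="s3.amazonaws.com",
        SourceArn=f"arn:aws:s3:::{bucket}",
    )

    # Bucket notification configuration: tell the existing bucket to notify
    # the function whenever an object is created.
    s3.put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [
                {"LambdaFunctionArn": function_arn, "Events": ["s3:ObjectCreated:*"]}
            ]
        },
    )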