Terraform: handle multiple Lambda functions

I have a requirement to create AWS Lambda functions dynamically based on input parameters such as name, Docker image, etc.
I have been able to build this using Terraform (triggered from GitLab pipelines).
The problem is that for every unique name I want a new Lambda function to be created or updated; i.e. if I trigger the pipeline 5 times with 5 different names, there should be 5 Lambda functions. Instead, the older function is destroyed and a new one is created.
How do I achieve this?
I am using the aws_lambda_function resource.
Terraform code
resource "aws_lambda_function" "executable" {
function_name = var.RUNNER_NAME
image_uri = var.DOCKER_PATH
package_type = "Image"
role = role.arn
architectures = ["x86_64"]
}

I think there is a misunderstanding of how Terraform works.
Terraform maps one resource to one item in state, and the state file is used to manage all created resources.
The reason your function keeps getting destroyed and recreated with the new values is that you have only one resource in your Terraform configuration.
This is the correct and expected behavior from Terraform.
Now, as others have mentioned, you could use count or for_each to add new Lambda functions without deleting the previous ones, as long as you can keep track of the previously passed values (always adding the new values to the list); see the sketch below.
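A minimal sketch of the for_each approach, assuming the pipeline supplies a map of function names to image URIs (the variable name runners and the IAM role reference are illustrative, not from the original question):

variable "runners" {
  # Map of function name => Docker image URI. Every function created so far
  # must stay in this map, otherwise Terraform will destroy it.
  type = map(string)
}

resource "aws_lambda_function" "executable" {
  for_each      = var.runners
  function_name = each.key
  image_uri     = each.value
  package_type  = "Image"
  role          = aws_iam_role.role.arn # execution role defined elsewhere
  architectures = ["x86_64"]
}

Each map entry gets its own entry in state, so adding a new name creates a new function instead of replacing the existing one.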
Or, if there is no need to keep track of the state of the Lambda functions you have created, Terraform may not be the best tool for this. The result you are looking for can easily be implemented in Python, or even in a shell script with AWS CLI commands.

Related

AWS CDK - Exclude stage name from logical ID of resource

I have a CDK project that was initially deployed via the CLI. I am now wrapping it in a pipelines construct.
Old: Project -> Stacks -> Resources
New: Project -> Pipeline -> Stage -> Stacks -> Resources
The issue I'm running into is that there are resources in the application I would rather not have deleted; however, adding the stage causes the logical IDs to change from Stack-Resource to Stage-Stack-Resource. I found an article that claims you can give a resource an id of 'Default' and have it omitted when the logical ID is built. However, for some reason, when I pass an id of Default to the stage it simply uses the literal value "Default" instead of omitting it.
End goal is that I can keep my existing cloudformation resources, but have them deployed via this pipeline.
You can override the logical id manually like this:
S3 example:
// Grab the low-level CfnBucket behind the high-level Bucket construct
const cfnBucket = s3Bucket.node.defaultChild as aws_s3.CfnBucket;
// Pin the logical ID so it matches the previously deployed template
cfnBucket.overrideLogicalId('CUSTOMLOGICALID');
However, if you did not specify a logical id initially and do it now, CloudFormation will delete the original resource and create a new one with the new custom logical id because CloudFormation identifies resources by their logical ID.
Stage is something you define and it is not related to CloudFormation. You are probably using it in your Stack name or in your Resource names and that's why it gets included in the logical id.
Based on your project description, the only option to not have any resources deleted is: make one of the pipeline stages use the exact same stack name and resource names (without stage) as the CLI deployed version.
I ended up doing a full redeploy of the application. Luckily this was a development environment where trashing our data stores isn't a huge loss, but it would be much more of a concern in a production environment.

How can I get an invoking lambda to run a cloud custodian policy in multiple different accounts on one run?

I have multiple c7n-org policies to be run in all regions across a list of accounts. Locally I can do this easily with c7n-org run -c accounts.yml -s out --region all -u cost-control.yml.
The goal is to have an AWS Lambda function run this daily across all accounts in the same way. Currently I have a child Lambda function for each policy in cost-control.yml, and an invoker Lambda function that loops through each child function and calls it, passing it the appropriate role ARN to assume and the region each time. Because I am calling the child functions for all accounts and all regions, the child functions are called over and over with different parameters to parse.
To get the regions to change each time I needed to remove an if statement in the SDK in handler.py (line 144) that caches the config file, so that it reads the new config with the parameters on subsequent invocations.
# one time initialization for cold starts.
global policy_config, policy_data
if policy_config is None:
    with open(file) as f:
        policy_data = json.load(f)
    policy_config = init_config(policy_data)
    load_resources(StructureParser().get_resource_types(policy_data))
I removed the "if policy_config is None:" line and changed the filename to point at a new config file, which I write to /tmp inside the custodian_policy.py Lambda code; it holds the parameters for the current invocation.
In the log streams for each invocation of the child Lambdas, the roles are not assumed properly. The regions change correctly and Cloud Custodian runs the policy against the different regions, but it keeps the initial account from the first invocation. Each log stream shows the Lambda assuming the role from the first set of parameters sent by the invoker and then not changing the role on later calls, even though it receives the correct parameters.
I've tried changing the Cloud Custodian SDK code in handler.py's init_config() to force it to change the account_id each time. I know I shouldn't be changing the SDK code, though, and there is probably a way to do this properly using the policies.
I've thought about trying the fargate route which would be more like running it locally but I'm not sure if I would come across this issue there too.
Could anyone give me some pointers on how to get cloud custodian to assume roles on many different lambda invocations?
I found the answer in the local_session function in utils.py of the c7n SDK. It was caching the session info for up to 45 minutes, which is why it was reusing the old account info on each Lambda invocation within each log stream.
By commenting out lines 324 and 325, I forced c7n to create a new session each time with the passed-in account parameter. The new function looks like this:
def local_session(factory, region=None):
    """Cache a session thread local for up to 45m"""
    factory_region = getattr(factory, 'region', 'global')
    if region:
        factory_region = region
    s = getattr(CONN_CACHE, factory_region, {}).get('session')
    t = getattr(CONN_CACHE, factory_region, {}).get('time')
    n = time.time()
    # if s is not None and t + (60 * 45) > n:
    #     return s
    s = factory()
    setattr(CONN_CACHE, factory_region, {'session': s, 'time': n})
    return s

Prevent Terraform from deleting existing paths in S3

I have a simple Terraform configuration where I manage an application's version code in S3.
I want to manage multiple versions of this code in S3.
My code is as follows:
main.tf
resource "aws_s3_bucket" "caam_test_bucket" {
bucket = "caam-test-bucket"
versioning {
enabled = true
}
}
resource "aws_s3_bucket_object" "caam_test_bucket_obj" {
bucket = aws_s3_bucket.caam_test_bucket.id
key = "${var.env}/v-${var.current_version}/app.zip"
source = "app.zip"
}
Every time I update the code, I export it to app.zip, increment the variable current_version, and push the Terraform code.
The issue is that instead of keeping multiple version folders in the S3 bucket, Terraform deletes the existing one and creates another.
I want Terraform to keep any paths and files it has created and not delete them.
For example, if the path dev/v-1.0/app.zip already exists and I increment the current version to 2.0 and push the code, I want Terraform to keep dev/v-1.0/app.zip and also add dev/v-2.0/app.zip to the bucket.
Is there a way to do that?
TF deletes your object, because that is how it works:
Destroy resources that exist in the state but no longer exist in the configuration.
One way to overcome this is to keep all your objects in the configuration through for_each. This way you keep adding new versions to a map of existing objects, rather than replacing them (see the sketch below). This can be problematic if you create lots of versions, as you have to keep them all.
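A minimal sketch of that approach, assuming a variable that lists every version released so far (the variable name versions is illustrative):

variable "versions" {
  # Every version released so far; append new versions and keep the old
  # entries so Terraform keeps their objects.
  type    = list(string)
  default = ["1.0", "2.0"]
}

resource "aws_s3_bucket_object" "caam_test_bucket_obj" {
  for_each = toset(var.versions)
  bucket   = aws_s3_bucket.caam_test_bucket.id
  key      = "${var.env}/v-${each.value}/app.zip"
  source   = "app.zip"
}

Each version gets its own object in state, so adding "3.0" to the list creates dev/v-3.0/app.zip without touching the older keys.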
Probably the easier way is to use a local-exec provisioner, which will call the AWS CLI to upload the object. This happens "outside" of TF, so TF will not delete pre-existing objects, as it won't be aware of them.

How to conditionally update a resource in Terraform

It seems to be common practice to use count on a resource together with a ternary expression to conditionally create it in Terraform.
I'd like to conditionally update an AWS Route 53 entry based on a push_to_prod variable, meaning I don't want to delete the resource if I'm not pushing to production; I only want to update it, or leave the CNAME value as it is.
Has anyone done something like this before in Terraform?
Currently, as it stands, interpolation syntax isn't supported in lifecycle blocks. You can read more here. That makes this harder, because otherwise you could use prevent_destroy. However, without more specifics I am going to take my best guess on how to get you there.
I would use the allow_overwrite property on the Route 53 record and set it based on your flag. That way, if you are pushing to prod you can set it to false, which should trigger creating a new record. I haven't tested that (see the sketch below).
Also note that if you don't make any changes to the Route 53 resource, Terraform shouldn't apply any changes to it; only updating some part of the record will trigger a deployment.
You may want to combine this with some lifecycle settings, but I don't have enough time to dig into that specific resource and how it behaves.
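A rough, untested sketch of that suggestion, assuming push_to_prod is a bool variable and that the zone, name, and record values are placeholders:

resource "aws_route53_record" "app" {
  zone_id         = var.zone_id       # placeholder
  name            = "app.example.com" # placeholder
  type            = "CNAME"
  ttl             = 300
  records         = [var.cname_value] # placeholder
  allow_overwrite = var.push_to_prod ? false : true
}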
Two examples I can think of are:
type = "${var.push_to_prod == "true" ? "CNAME" : var.other_value}" - this will have a fixed other_value, there is no way to have terraform "ignore" the resource once it's being managed by terraform.
or
type = "${var.aws_route53_record_type}" and you can have dev.tfvars and prod.tfvars, with aws_route53_record_type defined as whatever you want for dev and CNAME for prod.
The thing is, with what you're trying to do ("I only want to update it, or leave the CNAME value as it is"), that's not how Terraform works. Terraform either manages the resource for you or it doesn't. If it's managing it, it'll update the resource based on the config you've defined in your .tf file. If it's not managing the resource, it won't modify it. It sounds like what you're really after is the second option, where you pass two different configs from your .tfvars files into your .tf file and, based on the different configs, different resources are created. You can couple this with count to determine whether a resource should be created or not, as in the sketch below.
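A minimal sketch of the count pattern, assuming a bool variable push_to_prod and placeholder record values:

# dev.tfvars
push_to_prod = false

# prod.tfvars
push_to_prod = true

variable "push_to_prod" {
  type    = bool
  default = false
}

resource "aws_route53_record" "app" {
  count   = var.push_to_prod ? 1 : 0 # record only managed when pushing to prod
  zone_id = var.zone_id              # placeholder
  name    = "app.example.com"        # placeholder
  type    = "CNAME"
  ttl     = 300
  records = [var.cname_value]        # placeholder
}

Running terraform apply -var-file=prod.tfvars manages the record, while the dev run leaves it out of the configuration entirely.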

Can I have terraform keep the old versions of objects?

New to Terraform, so perhaps it's just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket as part of an app deploy. I'm going to be changing the package for each deploy and I want to keep the old versions.
resource "aws_s3_bucket_object" "object" {
bucket = "mybucket-app-versions"
key = "version01.zip"
source = "version01.zip"
}
But after running this for a future deploy, I will want to upload version02, then version03, etc. Terraform replaces the old zip with the new one, which is expected behavior.
But is there a way to have Terraform not destroy the old version? Is this a supported use case, or is this not how I'm supposed to use Terraform? I wouldn't want to force this with an ugly hack if Terraform doesn't officially support something like what I'm trying to do here.
I could of course just call the S3 API via a script, but it would be great to have this defined with the rest of the Terraform definition for this app.
When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.
The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a docker container, a virtual machine image, etc), publish it somewhere, and then pass its location or identifier to Terraform for deployment.
This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.
In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:
variable "archive_name" {
}
This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:
$ terraform apply -var="archive_name=version01.zip"
Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
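For the Consul variant, a minimal sketch using the consul_keys data source (the key name and path here are hypothetical):

data "consul_keys" "app" {
  key {
    name = "archive_name"
    path = "apps/myapp/archive_name" # hypothetical key path
  }
}

The current archive name is then available as data.consul_keys.app.var.archive_name wherever the deployment resources need it.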
Currently, you tell Terraform to manage one aws_s3_bucket_object, and Terraform takes care of its whole life cycle, meaning Terraform will also replace the file if it sees any changes to it.
What you are maybe looking for is the null_resource. You can use it to run a local-exec provisioner that uploads the file with a script. That way the old file won't be deleted, as it is not directly managed by Terraform. You'd still be calling the API via a script, but the whole process of uploading to S3 would still be included in your terraform apply step.
Here is an outline of the null_resource:
resource "null_resource" "upload_to_s3" {
depends_on = ["<any resource that should already be created before upload>"]
...
triggers = ["<A resource change that must have happened so terraform starts the upload>"]
provisioner "local-exec" {
command = "<command to upload local package to s3>"
}
}
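A filled-in version of that outline might look like the following; the bucket name comes from the question, while the app_version variable and the triggers map are assumptions:

resource "null_resource" "upload_to_s3" {
  # Re-run the upload whenever the version changes.
  triggers = {
    version = var.app_version
  }

  provisioner "local-exec" {
    command = "aws s3 cp version${var.app_version}.zip s3://mybucket-app-versions/version${var.app_version}.zip"
  }
}

Because the upload goes through the AWS CLI rather than an aws_s3_bucket_object resource, earlier uploads stay in the bucket untouched.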