AWS CDK - Exclude stage name from logical ID of resource

I have a CDK project where initially it was deployed via CLI. I am now wrapping it in a pipelines construct.
Old: Project -> Stacks -> Resources
New: Project -> Pipeline -> Stage -> Stacks -> Resources
The issue I'm running into is that there are resources in the application I would rather not have deleted, but adding the stage causes the logical IDs to change from Stack-Resource to Stage-Stack-Resource. I found an article claiming that you can give a construct an id of 'Default' so that it is omitted when the logical ID is built, however when I pass an id of 'Default' to the stage it simply uses the literal value "Default" instead of omitting it.
The end goal is to keep my existing CloudFormation resources but have them deployed via this pipeline.

You can override the logical id manually like this:
S3 example:
const cfnBucket = s3Bucket.node.defaultChild as aws_s3.CfnBucket;
cfnBucket.overrideLogicalId('CUSTOMLOGICALID');
However, if you did not specify a logical ID initially and you do it now, CloudFormation will delete the original resource and create a new one with the new custom logical ID, because CloudFormation identifies resources by their logical ID.
Stage is something you define in CDK and it is not a CloudFormation concept. It is probably being included in your stack name or in your resource names, and that's why it ends up in the logical ID.
Based on your project description, the only option that avoids deleting any resources is to make one of the pipeline stages use the exact same stack name and resource names (without the stage) as the CLI-deployed version.

I ended up doing a full redeploy of the application. Luckily this was a development environment, where trashing our data stores isn't a huge loss, but it would be much more of a concern in a production environment.

Related

Terraform handle multiple lambda functions

I have a requirement to create AWS Lambda functions dynamically based on some input parameters like name, docker image, etc.
I have been able to build this using Terraform (triggered via GitLab pipelines).
Now the problem is that for every unique name I want a new Lambda function to be created/updated, i.e. if I trigger the pipeline 5 times with 5 names then there should be 5 Lambda functions. Instead, what I get is the older function being destroyed and a new one created.
How do I achieve this?
I am using Resource: aws_lambda_function
Terraform code:
resource "aws_lambda_function" "executable" {
  function_name = var.RUNNER_NAME
  image_uri     = var.DOCKER_PATH
  package_type  = "Image"
  role          = role.arn
  architectures = ["x86_64"]
}
I think there is a misunderstanding of how Terraform works.
Terraform maps one resource block to one item in state, and the state file is used to manage all created resources.
The reason your function keeps getting destroyed and recreated with the new values is that you have only one resource in your Terraform configuration.
This is the correct and expected behavior of Terraform.
Now, as mentioned by some people above, you could use count or for_each to add new Lambda functions without deleting the previous ones, as long as you keep track of the previously passed values (always adding the new values to the list), for example as sketched below.
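For illustration, a rough sketch of the for_each variant, keeping the question's arguments but replacing the two input variables with a single map (the aws_iam_role.lambda_exec reference is a placeholder for whatever execution role you already have):
variable "runners" {
  # map of runner name => docker image URI; keep all previously passed
  # values in this map, otherwise removed entries will be destroyed
  type = map(string)
}

resource "aws_lambda_function" "executable" {
  for_each      = var.runners
  function_name = each.key
  image_uri     = each.value
  package_type  = "Image"
  role          = aws_iam_role.lambda_exec.arn # placeholder IAM role
  architectures = ["x86_64"]
}
Triggering the pipeline with a new name then means adding one entry to the map, which creates one additional function instead of replacing the existing one.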
Or, if there is no need to keep track of the state of the Lambda functions you have created, Terraform may not be the best solution for your needs. The result you are looking for can easily be implemented in Python or even a shell script using AWS CLI commands.

Migrate a CDK-managed CloudFront distribution from the CloudFrontWebDistribution to the Distribution API

I have an existing CDK setup in which a CloudFront distribution is configured using the deprecated CloudFrontWebDistribution API. Now I need to configure an OriginRequestPolicy, so after some Googling I switched to the Distribution API (https://docs.aws.amazon.com/cdk/api/latest/docs/aws-cloudfront-readme.html) and reused the same "id" -
Distribution distribution = Distribution.Builder.create(this, "CFDistribution")
When I synth the stack I can already see in the YAML that the ID - e.g. CloudFrontCFDistribution12345689 - is different from the one before.
When trying to deploy, it fails since the HTTP origin CNAMEs are already associated with the existing distribution: "Invalid request provided: One or more of the CNAMEs you provided are already associated with a different resource. (Service: CloudFront, Status Code: 409, Request ID: 123457657, Extended Request ID: null)"
Is there a way to either add the OriginRequestPolicy (I just want to transfer an additional header) to the CloudFrontWebDistribution or a way to use the new Distribution API while maintaining the existing distribution instead of creating a new one?
(The same operation takes around 3 clicks in the AWS Console).
You could use the following trick to assign the logical ID yourself instead of relying on the autogenerated logical ID. The other option is to execute it in two steps, first update it without the additional CNAME and then do a second update with the additional CNAME.
const cfDistro = new Distribution(this, 'distro', {...});
(cfDistro.node.defaultChild as CfnDistribution).overrideLogicalId('CloudfrontDistribution');
This will result in the following stack:
CloudfrontDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    ...
Small edit to explain why this happens:
Since you're switching to a new construct, you're also getting a new logical ID. To ensure a rollback is possible, CloudFormation first creates all new resources and creates the updated resources that need to be recreated. Only when everything has been created and updated does it clean up by removing the old resources. This is also why a two-step approach works when changing the logical IDs of resources; alternatively, you can force a normal update by keeping the same logical ID.
Thanks a lot @stijndepestel - simply assigning the existing logical ID worked on the first try.
Here's the Java variant of the code in the answer:
import software.amazon.awscdk.services.cloudfront.CfnDistribution;
...
((CfnDistribution) distribution.getNode().getDefaultChild()).overrideLogicalId("CloudfrontDistribution");

Terraform `name` vs `self_link` in GCP

In GCP, when using Terraform, I see that I can use the name attribute as well as self_link. So I am wondering whether there are cases where I must use one of them.
For example:
resource "google_compute_ssl_policy" "custom_ssl_policy" {
name = "my-ssl-policy"
profile = "MODERN"
min_tls_version = "TLS_1_1"
}
This object can then be referred to as:
ssl_policy = google_compute_ssl_policy.custom_ssl_policy.name
and
ssl_policy = google_compute_ssl_policy.custom_ssl_policy.self_link
I know that object.name returns the name given to the resource, and object.self_link returns the GCP resource's URI.
I have tried this with several objects and it works with both attributes, so I want to know whether this is trivial or whether there are situations where I should use one over the other.
Here is the definition from the official documentation:
Nearly every GCP resource will have a name field. They are used as a short way to identify resources, and a resource's display name in the Cloud Console will be the one defined in the name field.
When linking resources in a Terraform config though, you'll primarily want to use a different field, the self_link of a resource. Like name, nearly every resource has a self_link. They look like:
https://www.googleapis.com/compute/v1/projects/foo/zones/us-central1-c/instances/terraform-instance
A resource's self_link is a unique reference to that resource. When linking two resources in Terraform, you can use Terraform interpolation to avoid typing out the self link!
Reference: https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/getting_started
One example: I can deploy two Cloud Functions with the same name in the same project but in different regions. In that case, if you had to reference both resources in Terraform code, you would be better off using the self_link, since it is a unique URI.
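As an illustration of the same point with a different regional resource (the names and CIDR ranges below are made up), two subnetworks can share a name across regions, so only their self_link tells them apart when you link them elsewhere:
resource "google_compute_network" "vpc" {
  name                    = "app-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "app_us" {
  name          = "app-subnet"
  region        = "us-central1"
  network       = google_compute_network.vpc.self_link
  ip_cidr_range = "10.0.1.0/24"
}

resource "google_compute_subnetwork" "app_eu" {
  name          = "app-subnet"
  region        = "europe-west1"
  network       = google_compute_network.vpc.self_link
  ip_cidr_range = "10.0.2.0/24"
}

# Both .name values are "app-subnet"; only the region-qualified
# .self_link uniquely identifies each subnetwork.
output "subnet_links" {
  value = [
    google_compute_subnetwork.app_us.self_link,
    google_compute_subnetwork.app_eu.self_link,
  ]
}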

How to conditionally update a resource in Terraform

It seems common practice to make use of count on a resource to conditionally create it in Terraform, using a ternary statement.
I'd like to conditionally update an AWS Route 53 entry based on a push_to_prod variable. That is, I don't want to delete the resource if I'm not pushing to production; I only want to update it, or leave the CNAME value as it is.
Has anyone done something like this before in Terraform?
Currently, as it stands, interpolation syntax isn't supported in lifecycle blocks (you can read more here), which makes this harder, because otherwise you could use prevent_destroy. However, without more specifics, I am going to take my best guess at how to get you there.
I would use the allow_overwrite property on the Route 53 record and set it based on your flag. That way, if you are pushing to prod, you can set it to false, which should trigger creating a new record. I haven't tested that.
Also note that if you don't make any changes to the Route 53 resource, it shouldn't trigger any changes for Terraform to apply; updating any part of the record, however, will trigger a deployment.
You may want to combine this with some lifecycle events, but I don't have enough time to dig into that specific resource and how it behaves.
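A rough, untested sketch of that allow_overwrite idea; the hosted zone ID, record name and CNAME target below are placeholders:
variable "push_to_prod" {
  type    = bool
  default = false
}

resource "aws_route53_record" "app" {
  zone_id         = "Z123456ABCDEFG"       # placeholder hosted zone ID
  name            = "app.example.com"
  type            = "CNAME"
  ttl             = 300
  records         = ["target.example.net"] # placeholder CNAME target
  allow_overwrite = !var.push_to_prod      # false when pushing to prod, per the suggestion above
}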
Two examples I can think of are:
type = "${var.push_to_prod == "true" ? "CNAME" : var.other_value}" - this will have a fixed other_value, there is no way to have terraform "ignore" the resource once it's being managed by terraform.
or
type = "${var.aws_route53_record_type}" and you can have dev.tfvars and prod.tfvars, with aws_route53_record_type defined as whatever you want for dev and CNAME for prod.
The thing is, with what you're trying to do ("I only want to update it, or leave the CNAME value as it is"), that's not how Terraform works. Terraform either manages the resource for you or it doesn't. If it's managing it, it updates the resource based on the config you've defined in your .tf file. If it's not managing the resource, it won't modify it. It sounds like what you're really after is the second solution, where you pass two different configs from your .tfvars files into your .tf file, and based on the different configs, different resources are created. You can couple this with count to determine whether a resource should be created or not, as sketched below.
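A minimal sketch of that second approach combined with count; apart from aws_route53_record_type, the variable names and the zone/record values are made up:
# variables.tf
variable "create_dns_record" {
  type    = bool
  default = false
}

variable "aws_route53_record_type" {
  type    = string
  default = "CNAME"
}

# main.tf - the record only exists when the flag is set
resource "aws_route53_record" "app" {
  count   = var.create_dns_record ? 1 : 0
  zone_id = "Z123456ABCDEFG"        # placeholder hosted zone ID
  name    = "app.example.com"
  type    = var.aws_route53_record_type
  ttl     = 300
  records = ["target.example.net"]  # placeholder CNAME target
}

# prod.tfvars
#   create_dns_record       = true
#   aws_route53_record_type = "CNAME"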

Can I have terraform keep the old versions of objects?

New to Terraform, so perhaps it's just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket as part of an app deploy. I'm going to be changing the package for each deploy and I want to keep the old versions.
resource "aws_s3_bucket_object" "object" {
bucket = "mybucket-app-versions"
key = "version01.zip"
source = "version01.zip"
}
But after running this for a future deploy, I will want to upload version02, then version03, etc. Terraform replaces the old zip with the new one, which is the expected behavior.
But is there a way to have Terraform not destroy the old version? Is this a supported use case, or is this not how I'm supposed to use Terraform? I wouldn't want to force this with an ugly hack if Terraform doesn't have official support for doing something like this.
I could of course just call the S3 API via a script, but it would be great to have this defined with the rest of the Terraform definition for this app.
When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.
The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a docker container, a virtual machine image, etc), publish it somewhere, and then pass its location or identifier to Terraform for deployment.
This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.
In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:
variable "archive_name" {
}
This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:
$ terraform apply -var="archive_name=version01.zip"
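For instance (purely illustrative: the Lambda function and the aws_iam_role.lambda_exec reference are assumptions, not part of the original answer), the variable can feed whichever resource points at the published artifact:
resource "aws_lambda_function" "app" {
  function_name = "my-app"
  s3_bucket     = "mybucket-app-versions"
  s3_key        = var.archive_name             # e.g. "version01.zip"
  role          = aws_iam_role.lambda_exec.arn # assumed to exist elsewhere
  handler       = "index.handler"
  runtime       = "nodejs18.x"
}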
Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
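A sketch of that pattern using the consul_keys data source (the key path here is invented, and the consul provider must be configured separately):
data "consul_keys" "app" {
  key {
    name = "archive_name"
    path = "apps/my-app/current-archive"   # invented key path
  }
}

# Then reference it the same way you would a variable, e.g.:
#   s3_key = data.consul_keys.app.var.archive_name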
Currently, you tell Terraform to manage one aws_s3_bucket_object, and Terraform takes care of its whole life-cycle, meaning Terraform will also replace the file if it sees any changes to it.
What you may be looking for is the null_resource. You can use it to run a local-exec provisioner that uploads the file with a script. That way the old file won't be deleted, as it is not directly managed by Terraform. You'd still be calling the API via a script, but the whole process of uploading to S3 would still be part of your terraform apply step.
Here is an outline of the null_resource:
resource "null_resource" "upload_to_s3" {
depends_on = ["<any resource that should already be created before upload>"]
...
triggers = ["<A resource change that must have happened so terraform starts the upload>"]
provisioner "local-exec" {
command = "<command to upload local package to s3>"
}
}
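For illustration, a more concrete version of that outline might look like the following; the bucket resource, the file name and the use of the AWS CLI are assumptions, not part of the answer above:
resource "aws_s3_bucket" "app_versions" {
  bucket = "mybucket-app-versions"
}

resource "null_resource" "upload_to_s3" {
  # the bucket must exist before the upload runs
  depends_on = [aws_s3_bucket.app_versions]

  # re-run the upload whenever the local package content changes
  triggers = {
    package_hash = filemd5("${path.module}/version01.zip")
  }

  provisioner "local-exec" {
    command = "aws s3 cp ${path.module}/version01.zip s3://${aws_s3_bucket.app_versions.bucket}/version01.zip"
  }
}
Because the object itself is never in Terraform state, uploading version02.zip later adds a new object without touching the old one.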