Adding a condition to existing Terraform code that has already been run successfully - amazon-web-services

I'm working with existing Terraform code that has already been run successfully against AWS, and I'd like to reuse it in a different region without maintaining a second copy of the same code. Some of the code affects global services, so it doesn't need to be rerun in the other regions, and I would therefore like to add count = "${var.alreadyrun}" == "yes" ? 1 : 0 to some of the Terraform modules.
However, when I add that line to the specific modules and run terraform plan against the same region the code was already applied to, Terraform tells me it's going to destroy and re-create those modules. I don't want those modules destroyed and recreated; I just want Terraform to skip them and move on to the next. Is there a way I can do this?

Adding count to a module block causes Terraform to track multiple instances for that block, so the address of the module changes from something like module.example to module.example[0]. By default Terraform therefore assumes you want to destroy the old module instance (the one with no instance key) and create a new one with instance key zero.
However, if you are using Terraform v1.1 or later you can add an additional declaration to tell Terraform that you want to "move" the existing module instance to a new address instead. For a module "example" block, that would look like this:
module "example" {
source = "./modules/example"
count = var.enable_example ? 1 : 0
# ...
}
moved {
from = module.example
to = module.example[0]
}
There are more details on moved blocks in the Terraform documentation section Refactoring.
As a side note, when declaring a conditional module or resource based on an input variable like this, it's more typical to name it something like enable_example as I showed above, rather than a name like "already run", because a Terraform configuration should typically declare a desired state rather than describe how to reach that state.
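For example, such a variable declaration might look roughly like this (the type, default, and description here are just illustrative):

variable "enable_example" {
  type        = bool
  default     = true
  description = "Whether to create the example module's resources in this region."
}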
You might also wish to investigate splitting your Terraform configuration into multiple parts, so that there's a "global" configuration you use only once and a "regional" configuration you use for each region. That avoids the need to treat one of the regions as "special" by also making it responsible for the global infrastructure, and it creates a clearer dependency graph between all of your configurations for future maintainers to understand.
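As a minimal sketch of that split, a regional configuration could read the global configuration's output values via the terraform_remote_state data source; the backend settings and output name below are placeholders:

data "terraform_remote_state" "global" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state"        # placeholder bucket holding the global state
    key    = "global/terraform.tfstate"  # placeholder key for the global configuration's state
    region = "us-east-1"
  }
}

# Consuming a hypothetical output published by the global configuration
locals {
  logging_bucket_arn = data.terraform_remote_state.global.outputs.logging_bucket_arn
}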
Both of those suggestions are away from your direct question, though; a moved block as I described above is the more direct answer.

Related

Where do I tell AWS SAM which file to choose depending on the stage/environment?

In the app.js, I want to require a different "config" file depending on the stage/account.
for example:
dev account: const config = require("config-dev.json")
prod account: const config = require("config-prod.json")
At first I tried passing it using build --container-env-var-file, but after getting undefined when using process.env.myVar, I think that env file is used at the build stage and has nothing to do with my function, though I could use it in the template creation stage.
So I'm looking now at deploy, and there are a few different things that seem relevant, but it's quite confusing to choose which one is relevant for my use case.
There is the config file, in which case I have no idea how to configure it since I'm in a pipeline context, so where would I instruct my process to use the correct JSON?
There are also parameters and mappings.
My JSON is not just a few vars; it's a bit of a complex object. Nothing crazy, but not simple enough to pass the vars one by one.
So I thought a single parameter containing the filename I want to use could do the job.
But I have no idea how to tell which stage of deployment I am currently in, or how to pass that value so I can access it from the Lambda function.
I also faced this issue while executing an AWS Lambda function locally. My issue was solved by this command:
Try configuring your file using the sam build command.

Use a dynamic block to create an optional attribute inside a resource

I have to set an optional attribute to add a custom provider inside a Terraform resource, so that I can reuse the resource with multiple providers.
I need something like this:
resource "aws_kms" "key" {
provider = aws."custom_alias"
description = "xxx"
policy = "yyy"
}
In the above resource block, I want to pass different values to the provider attribute: to use the default provider I want to pass a null value, and to use the custom provider I want to pass the provider's custom alias.
The provider attribute doesn't support variables, so I can't just set it from a variable (that would be very easy; not sure why it's not supported!).
I'm thinking I can use a dynamic block to create this attribute inside a resource: provider = aws.custom_alias
I'm not sure if that's possible, as most of the examples I see for dynamic blocks create a nested block inside a resource, like:
settings {
  xyz = abc
  abc = xyz
}
I'm not sure whether I can use dynamic to create an optional attribute inside a resource.
I'm looking for a suggestion on how to handle this use case.
The goal is to set the provider attribute inside resources to different values.
Thanks in advance!
Terraform does not support dynamic provider selection. There's already a popular [feature request][1] for this.
What you can do instead is put your re-usable code inside a [module][2] and create the module multiple times with different providers:
module "mymodule_provider1" {
source = "./path/to/module"
providers = {
aws = aws.provider1
}
}
module "mymodule_provider2" {
source = "./path/to/module"
providers = {
aws = aws.provider2
}
}
This is the "suggested" way to do it by HashiCorp, but it has the limitation that the number of modules can't be dynamic. If you really need the number of modules to be dynamic and the providers can't be created statically, then you can create the provider inside the module and then use the for_each argument on the module itself. You'd have to pass in the provider initialization values as input arguments into the module.
EDIT:
Sorry, it wasn't until I tried it myself that I remembered that Terraform doesn't allow for_each in a module if the module creates providers internally. So, I'm afraid, there's no way I'm aware of to do what you're trying to do. You'll have to create the providers statically.

Difference between an Output & an Export

In CloudFormation we have the ability to output some values from a template so that they can be retrieved by other processes, stacks, etc. This is typically the name of something, maybe a URL or something generated during stack creation (deployment), etc.
We also have the ability to 'export' from a template. What is the difference between returning a value as an 'output' vs as an 'export'?
Regular output values can't be referenced from other stacks. They can be useful when you chain or nest your stacks, and their scope/visibility is local. Exported outputs are visible globally within an account and region, and can be used by any future stack you deploy.
Chaining
When you chain your stacks, you deploy one stack, take its outputs, and use them as input parameters to the second stack you are going to deploy.
For example, let's say you have two templates called instance.yaml and eip.yaml. The instance.yaml outputs its instance-id (no export), while eip.yaml takes an instance id as an input parameter.
To deploy them both, you have to chain them:
Deploy instance.yaml and wait for its completion.
Note its output values (i.e. instance-id) - usually retrieved programmatically, not manually.
Deploy eip.yaml and pass instance-id as its input parameter.
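For illustration, that chaining could look roughly like this with the AWS CLI (the stack names, output key, and parameter name are just placeholders):

# Deploy the first stack and wait for it to complete
aws cloudformation deploy --template-file instance.yaml --stack-name instance-stack

# Read the instance id from the stack's outputs (output key is illustrative)
INSTANCE_ID=$(aws cloudformation describe-stacks \
  --stack-name instance-stack \
  --query "Stacks[0].Outputs[?OutputKey=='InstanceId'].OutputValue" \
  --output text)

# Pass it to the second stack as an input parameter
aws cloudformation deploy --template-file eip.yaml --stack-name eip-stack \
  --parameter-overrides InstanceId="$INSTANCE_ID"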
Nesting
When you nest stacks, you have a parent template and a child template. The child stack is created from inside the parent stack. In this case the child stack produces some outputs (not exports) for the parent stack to use.
For example, let's again use instance.yaml and eip.yaml, but this time eip.yaml will be the parent and instance.yaml the child. Also, eip.yaml does not take any input parameters, while instance.yaml outputs its instance-id (not exported).
In this case, to deploy them you do the following:
Upload the child template (instance.yaml) to S3.
In eip.yaml, create the child instance stack using AWS::CloudFormation::Stack and the S3 URL from step 1.
This way eip.yaml will be able to access the instance-id from the outputs of the nested stack using GetAtt.
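For illustration, the relevant fragment of the parent template might look roughly like this (the S3 URL, logical IDs, and output name are placeholders):

# eip.yaml (parent) - illustrative fragment
Resources:
  InstanceStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/instance.yaml  # URL from step 1

  EIP:
    Type: AWS::EC2::EIP
    Properties:
      InstanceId: !GetAtt InstanceStack.Outputs.InstanceId  # output declared in instance.yaml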
Cross-referencing
When you cross-reference stacks, you have one stack that exports its outputs so that they can be used by any other stack in the same region and account.
For example, let's again use instance.yaml and eip.yaml. instance.yaml is going to export its output (instance-id). To use the instance-id, eip.yaml will have to use ImportValue in its template, without the need for any input parameters or nested stacks.
In this case, to deploy them you do the following:
Deploy instance.yaml and wait till it completes.
Deploy eip.yaml which will import the instance-id.
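For illustration, the export/import pair might look roughly like this (the logical IDs and the export name are placeholders):

# instance.yaml - exports its instance id
Outputs:
  InstanceId:
    Value: !Ref MyInstance
    Export:
      Name: my-instance-id

# eip.yaml - imports the exported value, no input parameter needed
Resources:
  EIP:
    Type: AWS::EC2::EIP
    Properties:
      InstanceId: !ImportValue my-instance-id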
Although cross-referencing seems very useful, it has one major issue: it's very difficult to update or delete cross-referenced stacks:
After another stack imports an output value, you can't delete the stack that is exporting the output value or modify the exported output value. All of the imports must be removed before you can delete the exporting stack or modify the output value.
This is very problematic if you are starting your design and your templates can change often.
When to use which?
Use cross-references (exported values) when you have some global resources that are going to be shared among many stacks in a given region and account. Also they should not change often as they are difficult to modify. Common examples are: a global bucket for centralized logging location, a VPC.
Use nested stack (not exported outputs) when you have some common components that you often deploy, but each time they can be a bit different. Examples are: ALB, a bastion host instance, vpc interface endpoint.
Finally, chained stacks (not exported outputs) are useful for designing loosely-coupled templates, where you can mix and match templates based on new requirements.
Short answer (from here): use export between stacks, and use output with nested stacks.
Export
To share information between stacks, export a stack's output values. Other stacks that are in the same AWS account and region can import the exported values.
Output
With nested stacks, you deploy and manage all resources from a single stack. You can use outputs from one stack in the nested stack group as inputs to another stack in the group. This differs from exporting values.

Terraform 0.12: Provider produced inconsistent final plan

I have a Terraform configuration which creates an aws_api_gateway_usage_plan resource, using a computed value during the apply stage from a local_file resource.
resource "aws_api_gateway_usage_plan" "api_plan" {
name = var.usage_plan_name
api_stages {
api_id = jsondecode(file("dev.json")).resources[1].rest_api_id
stage = "api"
}
# Have to wait for the API to be created before we can create the usage plan
depends_on = [local_file.chalice_config]
}
As you can see, I read dev.json to determine the api_id Terraform needs. The problem is that when I run terraform apply, the new safety checks described here notice that the previous value that api_id evaluated to has changed!
Provider produced inconsistent final plan: When expanding the plan for aws_api_gateway_usage_plan.api_plan
to include new values learned so far during apply, provider "aws" produced an invalid new value
for .api_stages[0].api_id: was cty.StringVal("****"), but now cty.StringVal("****").
As that documentation describes, the correct way to solve this error is to specify that during the plan phase this api_id actually has yet to be computed. The problem is I'm not sure how to do this through a Terraform config - the documentation I've referenced is for the writers of the actual Terraform providers.
Looking at issues on GitHub, it seems like setting the initial value to null isn't a reasonable way to do this.
Any ideas? I am considering downgrading to Terraform 0.11 to get around this new safety check, but I was hoping this would be possible in 0.12.
Thanks in advance!
Okay, after thinking for a while I came up with a silly workaround that enabled me to "trick" Terraform into believing that the value for the api_id was to be computed during the apply phase, thereby disregarding the safety check.
What I did was replace the api_id expression with the following:
api_id = replace("=${aws_security_group.sg.vpc_id}=${jsondecode(file("files/handler/.chalice/deployed/dev.json")).resources[1].rest_api_id}", "=${aws_security_group.sg.vpc_id}=", "")
Essentially what I am doing is saying that the api_id's value depends on a computed value - namely, the vpc_id of an aws_security_group I create named sg. In doing so, Terraform recognizes that this value is to be computed later, so the safety check is ignored.
Obviously, I don't actually want to have the vpc_id in here, so I used Terraform's string functions to remove it from the final expression.
This is a pretty hacky workaround, and I'm open to a better solution - just thought I'd share what I have now in case someone else runs into the same issue.
Thanks!
I was facing the same issue while creating a Lambda event source mapping. I got around it by running
terraform plan
and then
terraform apply
I got the same error when I encoded my user_data scripts (with filebase64 or base64encode) in places where I had to simply use file or templatefile:
user_data = file("${path.module}/provisioning_scripts/init_script.sh")

user_data = templatefile("${path.module}/provisioning_scripts/init_script.tpl", {
  USER  = "my-user"
  GROUP = "my-group"
})
(*) I can't 100% reproduce it but I'm adding this solution as another possible reason for receiving the mentioned error.
Read also here.

What is the difference between kubectl apply and kubectl replace

I have been learning Kubernetes recently, and I am not very clear about the difference between kubectl apply and kubectl replace. Is there any situation where we can only use one of them?
I have written up a thorough explanation of the differences between apply, replace, and patch: Kubernetes Apply vs. Replace vs. Patch. It includes an explanation that the current top-ranked answer to this question is wrong.
Briefly, kubectl apply uses the provided spec to create a resource if it does not exist and to update, i.e., patch, it if it does. The spec provided to apply need only contain the required parts of a spec; when creating a resource the API will use defaults for the rest, and when updating a resource it will use its current values.
kubectl replace completely replaces the existing resource with the one defined by the provided spec. replace wants a complete spec as input, including read-only properties supplied by the API like .metadata.resourceVersion, .spec.nodeName for pods, .spec.clusterIP for services, and .secrets for service accounts. kubectl has some internal tricks to help you get that right, but typically the use case for replace is getting a resource spec, changing a property, and then using that changed, complete spec to replace the existing resource.
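For example, that typical replace workflow might look like this (the deployment name and file names are just illustrative):

# Fetch the full, current spec of an existing object
kubectl get deployment my-app -o yaml > my-app.yaml

# Edit a property in my-app.yaml, then replace the whole object with that complete spec
kubectl replace -f my-app.yaml

# By contrast, apply only needs the fields you want to manage and patches the rest
kubectl apply -f my-app.yaml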
The kubectl replace command has a --force option which actually does not use the replace, i.e., PUT, API endpoint. It forcibly deletes (DELETE) and then recreates (POST) the resource using the provided spec.
Updated Answer
My original was rather controversial and I would even say now, in hindsight, half incorrect. So here is an updated answer which I hope will be more helpful:
Commands like kubectl patch, replace, delete, create, even edit are all imperative: they tell kubectl exactly what to do.
The kubectl apply command, on the other hand, is "declarative", in that it tells Kubernetes: here is a desired state (the YAML from the file provided to the apply command), now figure out how to get there - create, patch, or replace the object, whatever it takes... you get the idea.
So the two commands are hugely different.
E.g., with apply you can give it just the changes you want: it will figure out which properties of the object need to be changed and leave the others alone; if those properties are "immutable" (e.g., the nodeName of a pod), it will complain, and if you then repeat the command with --force, it is smart enough to do the equivalent of replace --force.
In general, you should favor apply (with --force when necessary), and only use the imperative commands when the declarative approach does not give the expected result (although I would love to see examples of this -- I'm guessing this would happen only when you would need several steps because of interdependencies that will have negative consequences if done with apply).
The difference between apply and replace is similar to the difference between apply and create.
create / replace uses the imperative approach, while apply uses the declarative approach.
If you used create to create the resource, then use replace to update it. If you used apply to create the resource, then use apply to update it.
Note that both replace and apply require a complete spec, and both create the new resources first before deleting the old ones (unless --force is specified).
You can add the option -v=8 when using kubectl, and you will see logs like this:
apply --force
patch 422
delete 200
get 200
get 200
get 404
post 201
replace --force
get 200
delete 200
get 404
post 201
kubectl apply .. will use various heuristics to selectively update the values specified within the resource.
kubectl replace ... will replace / overwrite the entire object with the values specified. This should be preferred as you're avoiding the complexity of the selective heuristic update. However some resources like ingresses/load balancers can't really be replaced as they're immutable.
Example of the heuristic update leading to non obvious operation: https://github.com/kubernetes/kubernetes/issues/67135
From: https://github.com/kubernetes/website/blob/master/content/en/docs/concepts/cluster-administration/manage-deployment.md
Disruptive updates
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use replace --force, which deletes and re-creates the resource.