I'm managing an autoscaling cloud infrastructure in AWS.
Every time I run Terraform it wants to override the desired_count, i.e. the number of running instances.
I would like this to not happen. How do I do that?
Constraints: I manage multiple different microservices, each of which sets up its running instances with a shared module where desired_count is specified. I don't want to change the shared module such that desired_count is ignored for all my microservices. Rather, I want to be able to override or not on a service-by-service (i.e. caller-by-caller) basis.
This rules out a straightforward use of lifecycle { ignore_changes = ... }. As far as I can tell, the list of changes to ignore cannot be given as arguments (my Terraform complains when I try; feel free to tell me how you succeed at this).
My next idea (if possible) is to read the value from the stored state, if present, and ask for a desired_count equal to its current value, or my chosen initial value if it has no current value. If there are no concurrent Terraform runs (i.e. no races), this should accomplish the same thing. Is this possible?
I'm no expert terraformer. I would appreciate it a lot if you give very detailed answers.
The lifecycle meta-arguments affect how Terraform builds its graph, so they can't be parameterized. The Terraform team hasn't ruled out that this could be implemented, but it hasn't happened, even though the issue requesting it has been open for a couple of years.
What you could do is create two aws_ecs_service resources, and switch between them:
resource "aws_ecs_service" "service" {
count = var.use_lifecycle ? 0 : 1
...
}
resource "aws_ecs_service" "service_with_lifecycle" {
count = var.use_lifecycle ? 1 : 0
...
lifecycle {
ignore_changes = ["desired_count"]
}
}
Given that, you need a way to reference the service you created. You can do that with a local:
locals {
  service = var.use_lifecycle ? aws_ecs_service.service_with_lifecycle[0] : aws_ecs_service.service[0]
}
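To round this out, here is a minimal sketch of how the toggle and the reference could be wired up; the variable declaration, the output, and the module call below (including the module path and the name checkout_service) are illustrative assumptions, not part of the original answer:

# Inside the shared module: the per-caller switch.
variable "use_lifecycle" {
  type        = bool
  default     = false
  description = "When true, ignore drift in desired_count for this service."
}

# Inside the shared module: expose whichever variant was created.
output "service_name" {
  value = local.service.name
}

# In one service's root configuration: this caller opts in, others can keep the default.
module "checkout_service" {
  source        = "../modules/ecs-service"
  use_lifecycle = true
  # ...other arguments the shared module expects...
}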
I have a requirement to create AWS Lambda functions dynamically based on some input parameters like name, Docker image, etc.
I have been able to build this using terraform (triggered using gitlab pipelines).
Now the problem is that for every unique name I want a new Lambda function to be created/updated, i.e. if I trigger the pipeline 5 times with 5 names, there should be 5 Lambda functions. Instead, what I get is the older function being destroyed and a new one being created.
How do I achieve this?
I am using Resource: aws_lambda_function
Terraform code
resource "aws_lambda_function" "executable" {
function_name = var.RUNNER_NAME
image_uri = var.DOCKER_PATH
package_type = "Image"
role = role.arn
architectures = ["x86_64"]
}
I think there is a misunderstanding on how terraform works.
Terraform maps 1 resource to 1 item in state and the state file is used to manage all created resources.
The reason why your function keeps getting destroyed and recreated with the new values is because you have only 1 resource in your terraform configuration.
This is the correct and expected behavior from terraform.
Now, as mentioned by some people above, you could use count or for_each to add new Lambda functions without deleting the previous ones, as long as you keep track of the previously passed values (always appending the new values to the list/map), as sketched below.
Or, if there is no need to keep track of the state of the Lambda functions you have created, Terraform may not be the best solution for your needs. The result you are looking for can easily be implemented in Python, or even in shell with AWS CLI commands.
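A minimal sketch of the for_each approach, assuming you keep a map of function names to image URIs (the variable name runners, the role name, and the example values are illustrative):

# One entry per Lambda function you want to exist; add to this map, never replace it.
variable "runners" {
  type = map(string) # function name => ECR image URI
  # e.g. { "runner-a" = "123456789012.dkr.ecr.eu-west-1.amazonaws.com/runner:a" }
}

# Execution role for the functions (trust policy kept minimal for the sketch).
resource "aws_iam_role" "role" {
  name = "lambda-runner-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

resource "aws_lambda_function" "executable" {
  for_each = var.runners

  function_name = each.key
  image_uri     = each.value
  package_type  = "Image"
  role          = aws_iam_role.role.arn
  architectures = ["x86_64"]
}

Each key in the map becomes its own instance in state, so adding a new name creates an additional function instead of replacing the existing one.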
I have custom resource defined in my terraform module:
resource "aws_alb_target_group" "whatever"
{
....
}
Turns out whatever is not a good name, and I need to update it.
The classic way of doing it would be to log in to each environment and execute terraform state mv; however, I have lots of environments and no automation for such an action.
How can I change the name of the resource without manually moving state (only through editing Terraform modules and applying plans)?
Based on the explanation in the question, I guess your best bet would be to use the moved block [1]. So for example, in your case that would be:
resource "aws_alb_target_group" "a_much_better_whatever"
{
....
}
moved {
from = aws_alb_target_group.whatever
to = aws_alb_target_group.a_much_better_whatever
}
EDIT: As #Matt Schuchard noted, the moved block is available only for Terraform versions >=1.1.0.
EDIT 2: As per #Martin Atkins' comments, changed the resource name to be the name of the resource moving to instead of moving from.
[1] https://www.terraform.io/language/modules/develop/refactoring#moved-block-syntax
I'm in the same situation.
My plan is to create the new resource group in Terraform, apply it, move the resources in the Azure portal to the new resource group, and then do terraform state mv to move the resources in terraform.
Yes, if you have a lot of resources it's boring... but I guess I won't break anything this way.
I have a simple site set up on AWS and have a terraform script working to deploy it (at least from my local machine).
When I have a successful deployment through terraform apply, quite often if I then run terraform plan again (immediately after the apply) I will see changes like this:
  # aws_route53_record.alias_route53_record_portal will be updated in-place
  ~ resource "aws_route53_record" "alias_route53_record_portal" {
        fqdn    = "mysite.co.uk"
        id      = "Z12345678UR1K1IFUBA_mysite.co.uk_A"
        name    = "mysite.co.uk"
        records = []
        ttl     = 0
        type    = "A"
        zone_id = "Z12345678UR1K1IFUBA"

      - alias {
          - evaluate_target_health = false -> null
          - name                   = "d12345mkpmx9ii.cloudfront.net" -> null
          - zone_id                = "Z2FDTNDATAQYW2" -> null
        }
      + alias {
          + evaluate_target_health = true
          + name                   = "d12345mkpmx9ii.cloudfront.net"
          + zone_id                = "Z2FDTNDATAQYW2"
        }
    }
Why is terraform saying that some parts of resources need recreating when nothing has changed?
EDIT My actual tf resource...
resource "aws_route53_record" "alias_route53_record_portal" {
zone_id = data.aws_route53_zone.sds_zone.zone_id
name = "mysite.co.uk"
type = "A"
alias {
name = aws_cloudfront_distribution.s3_distribution.domain_name
zone_id = aws_cloudfront_distribution.s3_distribution.hosted_zone_id
evaluate_target_health = true
}
}
You have changed evaluate_target_health from false to true. Terraform will just update the fields that have changed; the reason it shows the whole alias block being removed and re-added is that AWS often doesn't provide separate APIs for each field. Since Terraform is showing that this resource will be updated in-place, it will touch the minimum necessary to make this change.
The "plan" operation in Terraform first synchronizes the Terraform state with remote objects (by making calls to the remote API), and then it compares the updated state with the configuration.
Terraform (or, more accurately, the relevant Terraform provider) then generates a planned update or replace for any case where the state and the configuration disagree.
If you see a planned update for a resource whose configuration you know you haven't changed, then by process of elimination that suggests that the remote system is what has changed.
Sometimes that can happen if some other process (or a human in the admin console) changes an object that Terraform believes itself to be responsible for. In that case, the typical resolution is to ensure that each object is only managed by one system and that no-one is routinely making changes to Terraform-managed objects outside of Terraform.
One way to diagnose this would be to consult the remote system and see whether its current settings agree with your Terraform configuration. If not, that would suggest that something other than Terraform has changed the value.
A less common reason this can arise is due to a bug in the provider itself. There are two variations of this class of bug:
When creating the object, the provider doesn't correctly translate the given configuration to a remote API call, and so it ends up creating an object that doesn't match the configuration. A subsequent Terraform plan will then notice that inconsistency and plan an update to fix it. If the provider's update operation has a similar bug then this will never converge, causing the provider to repeatedly plan the same update.
Conversely, the create/update may be implemented correctly but the "refresh" operation (updating the state to match the remote system) may inaccurately translate the remote object data back to Terraform state data, causing the state to not accurately reflect the remote system. In that case, the provider will probably then find that the configuration doesn't match the state anymore, even though the state was correct after the initial create.
Both of these bugs are typically nicknamed "permadiff" by provider developers, because the symptom is Terraform seeming to plan the same change indefinitely, no matter how many times you apply it. If you think you've encountered a "permadiff" bug then usually the path forward is to report a bug in the provider's development repository so that the maintainers can investigate.
One specific variation of "permadiff" is a situation where the remote system does some sort of normalization of your given values which the provider doesn't take into account. For example, some remote systems will accept strings containing uppercase letters but will convert them to lowercase for permanent storage. If a provider doesn't take that into account, it will probably incorrectly plan to change the value back to the one containing uppercase letters again in order to try to make the state match the configuration. This subclass of bug is a normalization permadiff, which provider teams will typically address by re-implementing the remote system's normalization logic in the provider itself.
If you find a normalization permadiff then you can often work around it until the bug is fixed by figuring out what normalization the remote system expects and then manually normalizing your configuration to match it, so that the provider will then see the configuration as matching the remote system.
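As a sketch of what that workaround can look like, suppose (purely for illustration) that the remote system stores a DNS record name lowercased; applying the same normalization in the configuration keeps the plan clean. The variables and values below are made up for the example:

variable "zone_id" {
  type = string
}

variable "record_name" {
  type = string # e.g. "WWW.MySite.co.uk"
}

locals {
  # Apply the same normalization the remote system applies, so the refreshed
  # state always matches the configured value and no spurious update is planned.
  normalized_name = lower(var.record_name)
}

resource "aws_route53_record" "example" {
  zone_id = var.zone_id
  name    = local.normalized_name
  type    = "A"
  ttl     = 300
  records = ["203.0.113.10"]
}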
Seems it's common practice to make use of count on a resource to conditionally create it in Terraform using a ternary statement.
I'd like to conditionally update an AWS Route 53 entry based on a push_to_prod variable. Meaning I don't want to delete the resource if I'm not pushing to production, I only want to update it, or leave the CNAME value as it is.
Has anyone done something like this before in Terraform?
Currently, as it stands, interpolation syntax isn't supported in lifecycle blocks. You can read more here. That makes this harder, because otherwise you could use prevent_destroy. However, without more specifics, I am going to take my best guess at how to get you there.
I would use the allow_overwrite property on the Route 53 record and set that based on your flag. That way, if you are pushing to prod you can set it to false, which should trigger creating a new one. I haven't tested that.
Also note that if you don't make any changes to the Route 53 resource, it shouldn't trigger any changes for Terraform to apply; updating any part of the record, however, will trigger the deployment.
You may want to combine this with some lifecycle events, but I don't have enough time to dig into that specific resource and how it happens.
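A rough, untested sketch of that suggestion, assuming a boolean push_to_prod flag (the variable names and record values here are illustrative):

variable "push_to_prod" {
  type    = bool
  default = false
}

variable "zone_id" {
  type = string
}

resource "aws_route53_record" "app" {
  zone_id = var.zone_id
  name    = "app.example.com"
  type    = "CNAME"
  ttl     = 300
  records = ["target.example.com"]

  # Per the suggestion above: set allow_overwrite to false when pushing to prod.
  allow_overwrite = !var.push_to_prod
}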
Two examples I can think of are:
type = "${var.push_to_prod == "true" ? "CNAME" : var.other_value}" - this will have a fixed other_value, there is no way to have terraform "ignore" the resource once it's being managed by terraform.
or
type = "${var.aws_route53_record_type}" and you can have dev.tfvars and prod.tfvars, with aws_route53_record_type defined as whatever you want for dev and CNAME for prod.
The thing is, with what you're trying to do ("I only want to update it, or leave the CNAME value as it is"), that's not how Terraform works. Terraform either manages the resource for you or it doesn't. If it's managing it, it'll update the resource based on the config you've defined in your .tf file; if it's not managing the resource, it won't modify it. It sounds like what you're really after is the second solution, where you pass two different configs from your .tfvars files into your .tf file and, based on the different configs, different resources are created. You can couple this with count to determine whether a resource should be created at all, as in the sketch below.
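A minimal sketch of that second approach combined with count; apart from push_to_prod and aws_route53_record_type, the names and values are illustrative:

variable "push_to_prod" {
  type    = bool
  default = false
}

variable "aws_route53_record_type" {
  type = string
}

variable "zone_id" {
  type = string
}

# prod.tfvars: push_to_prod = true,  aws_route53_record_type = "CNAME"
# dev.tfvars:  push_to_prod = false, aws_route53_record_type = "A" (or whatever dev needs)

resource "aws_route53_record" "app" {
  # Only create and manage this record at all when pushing to production.
  count = var.push_to_prod ? 1 : 0

  zone_id = var.zone_id
  name    = "app.example.com"
  type    = var.aws_route53_record_type
  ttl     = 300
  records = ["target.example.com"]
}

You would then pick the behaviour per environment by applying with the corresponding .tfvars file.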
I have a module within my Terraform file that creates some database servers and does a few things.
First, it creates an auto scaling group that uses a specific image, then it creates some EBS volumes and attaches them, and then it adds some Lambda code so that on launch the instances get registered in Route 53. So in all, about 80 lines of text.
Extract
module "systemt-sql-db01" {
source = "localmodules/tf-aws-asg"
name = "${var.envname}-sys-db01"
envname = "${var.envname}"
service = "dbpx"
ami_id = "${data.aws_ami.app_sqlproxy.id}"
user_data = "${data.template_cloudinit_config.config-enforcement-sqlproxy.rendered}"
#subnets = ["${module.subnets-enforcement.web_private_subnets}"]
subnets = ["${element(module.subnets-enforcement.web_private_subnets, 1)}"]
security_groups = ["${aws_security_group.unfiltered-egress-sg.id}", "${aws_security_group.sysopssg.id}", "${aws_security_group.system-sqlproxy.id}"]
key_name = "${var.keypair}"
load_balancers = ["${var.envname}-enf-dbpx-int-elb"]
iam_instance_profile = "${module.iam_profile_generic.profile_arn}"
instance_type = "${var.enforcement_instancesize_dbpx}"
min = 0
max = 0
}
And I then have two parameter files: one that I call when launching to pre-production and one called when launching to production. I don't want these to contain anything other than variables.
The problem is that for pre-production I need to call the module twice, but for production I need it called three times.
People talk about a count parameter for modules, but I don't think this is possible as yet. Can anyone suggest any other way to do this? What I would like is to be able to set a list variable of all the DB ASG names in my parameter file, and then loop through this, calling the module each time.
I hope that makes sense?
thank you
EDIT Looping in modules is in beta for Terraform 0.13 (https://discuss.hashicorp.com/t/terraform-0-13-beta-released/9555).
This is a highly requested feature in Terraform and as mentioned it is not yet supported. Later releases of Terraform v0.12 will introduce this feature (https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each).
I had a similar problem where I had to create multiple KMS keys for multiple accounts from a base KMS module. I ended up creating a second module that uses the core KMS module, this second module had many instances of the core module, but only required me to input the account details once.
This is still not ideal, but it worked well enough without overcomplicating things.
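For reference, once module-level looping landed in Terraform 0.13, the pattern the question asks for looks roughly like this; the variable db_asgs and the trimmed-down argument list are illustrative, not the asker's actual interface:

variable "envname" {
  type = string
}

variable "db_asgs" {
  # One entry per DB auto scaling group, driven from a .tfvars file,
  # e.g. db_asgs = ["db01", "db02", "db03"] in production.
  type = set(string)
}

module "system_sql_db" {
  source   = "localmodules/tf-aws-asg"
  for_each = var.db_asgs

  name    = "${var.envname}-sys-${each.value}"
  envname = var.envname
  service = "dbpx"
  # ...remaining arguments as in the original module call...
}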