Terraform 0.12: Provider produced inconsistent final plan

I have a Terraform configuration which creates an aws_api_gateway_usage_plan resource, using a value that is computed during the apply stage from a local_file resource.
resource "aws_api_gateway_usage_plan" "api_plan" {
name = var.usage_plan_name
api_stages {
api_id = jsondecode(file("dev.json")).resources[1].rest_api_id
stage = "api"
}
# Have to wait for the API to be created before we can create the usage plan
depends_on = [local_file.chalice_config]
}
As you can see, I read dev.json to determine the api_id Terraform needs. The problem is that when I run terraform apply, the new safety checks described here notice that the value api_id evaluated to during the plan has since changed:
Provider produced inconsistent final plan: When expanding the plan for aws_api_gateway_usage_plan.api_plan
to include new values learned so far during apply, provider "aws" produced an invalid new value
for .api_stages[0].api_id: was cty.StringVal("****"), but now cty.StringVal("****").
As that documentation describes, the correct way to solve this error is to specify that during the plan phase this api_id actually has yet to be computed. The problem is I'm not sure how to do this through a Terraform config - the documentation I've referenced is for the writers of the actual Terraform providers.
Looking at issues on GitHub, it seems like setting the initial value to null isn't a reasonable way to do this.
Any ideas? I am considering downgrading to Terraform 0.11 to get around this new safety check, but I was hoping this would be possible in 0.12.
Thanks in advance!

Okay, after thinking for a while I came up with a silly workaround that lets me "trick" Terraform into believing that the value of api_id is to be computed during the apply phase, so the safety check no longer applies.
What I did was replace the api_id expression with the following:
api_id = replace("=${aws_security_group.sg.vpc_id}=${jsondecode(file("files/handler/.chalice/deployed/dev.json")).resources[1].rest_api_id}", "=${aws_security_group.sg.vpc_id}=", "")
Essentially what I am doing is saying that the api_id's value depends on a computed attribute - namely, the vpc_id of an aws_security_group I create named sg. In doing so, Terraform recognizes this value is to be computed later, so the safety check is skipped.
Obviously, I don't actually want to have the vpc_id in here, so I used Terraform's string functions to remove it from the final expression.
This is a pretty hacky workaround, and I'm open to a better solution - just thought I'd share what I have now in case someone else runs into the same issue.
Thanks!
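Edit: a possibly cleaner variant of the same idea, which I haven't verified on 0.12, is to read the file through a local_file data source that has depends_on set on the resource. Terraform defers such data source reads until apply, so the value shows up as "known after apply" during planning and the safety check shouldn't trigger. The data source name below is just illustrative:
data "local_file" "chalice_deployed" {
  filename = "files/handler/.chalice/deployed/dev.json"
  # Depending on the config resource pushes the file read to apply time
  depends_on = [local_file.chalice_config]
}

resource "aws_api_gateway_usage_plan" "api_plan" {
  name = var.usage_plan_name

  api_stages {
    # content is unknown until apply, so no "inconsistent final plan"
    api_id = jsondecode(data.local_file.chalice_deployed.content).resources[1].rest_api_id
    stage  = "api"
  }
}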

I was facing the same issue while creating a Lambda event source mapping. I got around it by running
terraform plan
and then
terraform apply

I got the same error when I encoded my user_data scripts (with filebase64 or base64encode) in places where I had to just simply use file or templatefile:
user_data = file("${path.module}/provisioning_scripts/init_script.sh")

user_data = templatefile("${path.module}/provisioning_scripts/init_script.tpl", {
  USER  = "my-user"
  GROUP = "my-group"
})
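For reference, the variables passed to templatefile are interpolated inside the template itself. A hypothetical sketch of what init_script.tpl could contain (the commands are made up for illustration):
#!/bin/bash
# USER and GROUP are substituted by templatefile() before the script runs
useradd -m "${USER}"
usermod -g "${GROUP}" "${USER}"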
(*) I can't 100% reproduce it, but I'm adding this solution as another possible cause of the mentioned error.
See also here.

Related

AWS CDK conditional ImportValue

I'm importing an ARN from another stack with the cdk.Fn.importValue method. This works fine if I know that the output value is always present, but I don't know how to handle the case when the value I try to import is optional.
How can I get something similar to: (checking if the value exists before importing it)
if (value exists) {
  cdk.Fn.importValue("value")
}
AFAIK there currently is no way in CDK to look up a CloudFormation export during synthesis time.
If you don't want to fiddle around with performing CloudFormation API calls with the aws-sdk before creating the CDK stack, in my opinion the most elegant way to share conditional values between stacks, is to use SSM parameters instead of CloudFormation exports.
SSM parameters can be looked up during synthesis time. See docs: https://docs.aws.amazon.com/cdk/v2/guide/get_ssm_value.html
So, with StringParameter.valueFromLookup you can use the value only if it exists (IIRC the method throws an error if the parameter doesn't exist, so try-catch is your friend here, but I'm not 100% sure).
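A minimal sketch of that idea, inside a stack's constructor (the parameter name is made up, and whether the lookup actually throws for a missing parameter should be verified before relying on it):
import * as ssm from 'aws-cdk-lib/aws-ssm';

// '/shared/optional-arn' is a hypothetical parameter that the exporting
// stack may or may not have written.
let importedArn: string | undefined;
try {
  importedArn = ssm.StringParameter.valueFromLookup(this, '/shared/optional-arn');
} catch {
  importedArn = undefined; // treat a failed lookup as "value doesn't exist"
}
if (importedArn) {
  // ...wire up whatever depends on the imported ARN here
}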

Adding condition to existing Terraform code that was successfully run

I'm working with existing Terraform code that has already been run successfully against AWS, and I'd like to reuse it in a different region without maintaining a second copy of the same code. Some of the code affects global services, which means it doesn't need to be rerun in the other regions, so I would like to add count = "${var.alreadyrun}" == "yes" ? 1 : 0 to some of the Terraform modules.
However, when I add the above line to the existing code for those specific modules and run terraform plan against the region it was already applied to, it tells me it's going to destroy and re-add those modules. I don't want to destroy and recreate the modules; I just want to skip them and move on to the next. Is there a way I can do this?
Adding count to a module block causes Terraform to track multiple instances for that block, and so the address of the module will change from something like module.example to be like module.example[0] instead, and so by default Terraform will assume you want to destroy the old module instance with no instance key and create a new one with instance key zero.
However, if you are using Terraform v1.1 or later you can add an additional declaration to tell Terraform that you want to "move" the existing module instance to a new address instead. For a module "example" block, that would look like this:
module "example" {
source = "./modules/example"
count = var.enable_example ? 1 : 0
# ...
}
moved {
from = module.example
to = module.example[0]
}
There are more details on moved blocks in the Terraform documentation section Refactoring.
As a side-note, when declaring a conditional module or resource based on an input variable like this, it's more typical to name it something like enable_example as I showed above, rather than a name like "already run", because a Terraform configuration should typically declare a desired state rather than describe how to reach that state.
You might also wish to investigate the possibility of splitting your Terraform configuration into multiple parts so that there's a "global" configuration that you use only once and then a "regional" configuration that you use for each region. That will then avoid the need to treat one of the regions as "special", also being responsible for the global infrastructure, and thus create a clearer dependency graph between all of your configurations for future maintainers to understand.
Both of those suggestions are tangential to your direct question, though; a moved block as I described above is the more direct answer.
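If you're on a version older than v1.1, where moved blocks aren't available, the same record-keeping change can be made imperatively with terraform state mv (the quotes keep the shell from interpreting the brackets):
terraform state mv 'module.example' 'module.example[0]'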

use dynamic block to create optional attribute inside resource

I have to set an optional provider attribute on a Terraform resource so that I can reuse the resource with multiple providers.
I need something like this:
resource "aws_kms" "key" {
provider = aws."custom_alias"
description = "xxx"
policy = "yyy"
}
In the above resource block, I want to pass different values to the provider attribute: to use the default provider I want to pass a null value, and to use the custom provider I want to pass the provider's custom alias.
The provider attribute doesn't support variables, so I can't just set it to a variable (that would be very easy; not sure why it's not supported!).
I'm thinking I could use a dynamic block to create this attribute inside the resource: provider = aws.custom_alias.
I'm not sure that's possible, as most of the examples I see for dynamic blocks create a nested block inside a resource, like:
settings {
  xyz = abc
  abc = xyz
}
I'm not sure whether dynamic can create an optional attribute (rather than a nested block) inside a resource.
I'm looking for suggestions on how to handle this use case.
The goal is to set the provider attribute on resources with different values.
Thanks in advance!
Terraform does not support dynamic provider selection. There's already a popular feature request for this.
What you can do instead is put your re-usable code inside a module and create the module multiple times with different providers:
module "mymodule_provider1" {
source = "./path/to/module"
providers = {
aws = aws.provider1
}
}
module "mymodule_provider2" {
source = "./path/to/module"
providers = {
aws = aws.provider2
}
}
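For completeness, a sketch of what the module itself might contain: it declares no provider block of its own and simply uses whichever aws provider the caller hands it (the resource mirrors the question; the values are placeholders):
# ./path/to/module/main.tf
resource "aws_kms_key" "key" {
  description = "xxx"  # placeholder from the question
  policy      = "yyy"  # placeholder; a real key policy document goes here
}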
This is the "suggested" way to do it by HashiCorp, but it has the limitation that the number of modules can't be dynamic. If you really need the number of modules to be dynamic and the providers can't be created statically, then you can create the provider inside the module and then use the for_each argument on the module itself. You'd have to pass in the provider initialization values as input arguments into the module.
EDIT:
Sorry, it wasn't until I tried it myself that I remembered that Terraform doesn't allow for_each in a module if the module creates providers internally. So, I'm afraid, there's no way I'm aware of to do what you're trying to do. You'll have to create the providers statically.

How to work around Cfn action's character limit in CodePipeline

Using the AWS CDK, I have a CodePipeline that produces build artifacts for 5 different Lambda functions, and then passes those artifacts as parameters to a CloudFormation template. The basic setup is the same as this example, and the CloudFormation deploy action looks basically like this:
new CloudFormationCreateUpdateStackAction({
  actionName: 'Lambda_CFN_Deploy',
  templatePath: cdkBuildOutput.atPath('LambdaStack.template.json'),
  stackName: 'LambdaDeploymentStack',
  adminPermissions: true,
  parameterOverrides: {
    ...props.lambdaCode.assign(lambdaBuildOutput.s3Location),
    // more parameter overrides here
  },
  extraInputs: [lambdaBuildOutput],
})
However, when I try to deploy, I get this error:
1 validation error detected: Value at 'pipeline.stages.3.member.actions.1.member.configuration' failed to satisfy constraint:
Map value must satisfy constraint:
[Member must have length less than or equal to 1000, Member must have length greater than or equal to 1]
The CodePipeline documentation specifies that values in the Configuration property of the ActionDeclaration can be up to 1000 characters. If I look at the YAML output from cdk synth, the ParameterOverrides property comes out to 1351 characters. So that's a problem.
How can I work around this issue? I may need to add more Lambda functions in the future, so this problem will only get worse. Part of the problem is that the CDK code inserts 'LambdaSourceBucketNameParameter' and 'LambdaSourceObjectKeyParameter' into each bucket/object pair name in the configuration output, which costs me 61 * 5 = 305 characters of verbosity alone. Could I get part of the way there by overriding those generated names?
I got some assistance from a CDK maintainer here, which let me get well under the 1000-character limit. Reproducing the workaround here:
LambdaSourceBucketNameParameter and LambdaSourceObjectKeyParameter are just the default parameter names. You can create your own:
lambda.Code.fromCfnParameters({
  bucketNameParam: new CfnParameter(this, 'A'),
  objectKeyParam: new CfnParameter(this, 'B'),
});
You can also name Artifacts explicitly, thus saving a lot of characters over the defaults:
const sourceOutput = new codepipeline.Artifact('S');
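(The savings come from the synthesized ParameterOverrides map: with explicit ids, keys like LambdaSourceBucketNameParameter and LambdaSourceObjectKeyParameter collapse to A and B, and explicitly named artifacts should shrink the generated artifact-location strings in the same way.)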
EDIT 10-Jan-2020
I finally got a response from AWS Support regarding the issue:
I've queried the CodePipeline team and searched though the current development workflows and couldn't find any current activity related to increasing the limit for parameters passed to a CloudFormation stack or any alternative method for this action, so we have issued a feature request based on your request for our development team.
I'm not able to provide an estimated time for this feature to be available, but you can follow the release on new features through the CloudFormation and CodePipeline official pages to check when the new feature will be available.
So for now, it looks like the CfnParameter workaround is the best option.

Check if Resource type is Static Resource or Document Resource

I'd like to know the right syntax for a conditional statement that checks whether the resource being used is a static resource or a document resource. I have a single template that can be used with two different types of resource.
So the conditional statement might be like this:
[[+template:is=static:then=This is static resource:else=This is document resource]]
Does anyone know how to do this in MODx? I'm using the latest version of MODx Revolution.
Thanks :)
You want to check the [[*class_key]] variable of the resource in question.
[[*class_key:is='modDocument':then='This is document resource']]
You also have modWebLink & modSymLink. I don't know what the static resource's key is ~ at a guess, modStatic?
This is coming a lot later, but for the benefit of those who come along later: the class key for a static resource is modStaticResource.
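Putting the two answers together, the full conditional from the question would then be (untested, written in the same filter style as above):
[[*class_key:is='modStaticResource':then='This is static resource':else='This is document resource']]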