Terraform Version: v0.11.8
Use case
I tried to destroy resources with Terraform and got an error while it evaluated an output.
Code:
output "frontend_rendered" {
  value = "${data.template_file.user_data.rendered}"
}
Debug Output
module.test.output.test_rendered: Resource
'data.template_file.user_data' does not have attribute 'rendered' for
variable 'data.template_file.user_data.rendered'
Expected Behavior
The destroy completes without any error.
Additional Context
This issue appeared after I upgraded Terraform from v0.11.4 to v0.11.8; I have also updated the AWS provider to the latest 1.33.0.
Any help?
Thanks!
Finally, I was able to find the solution: since Terraform v0.11.4, unused outputs should not be evaluated during a full destroy operation.
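Until the fixed behavior applies in your setup, one workaround that has been reported for this class of error (an assumption here, not verified against your configuration) is to temporarily comment out the offending output before running the destroy:

```
# Temporarily disabled so `terraform destroy` does not try to
# evaluate the data source's attributes:
# output "frontend_rendered" {
#   value = "${data.template_file.user_data.rendered}"
# }
```

Once the destroy has completed, the output block can be restored unchanged.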
Related
I am trying to deploy my current application using CDK Pipelines.
In doing so, I stumbled across an unexpected behavior (described here, if interested) which I am now trying to resolve. I have a Lambda function whose asset is a directory that is dynamically generated during a CodeBuild step. The line is currently defined like this in my CDK stack:
code: lambda.Code.fromAsset(process.env.CODEBUILD_SRC_DIR_BuildLambda || "")
The issue is that locally this triggers the unexpected and undesired behavior, because the environment variable does not exist and the expression therefore falls back to the default "".
What is the proper way to avoid this issue?
Thanks!
Option 1: Set the env var locally, pointing to the correct source directory:
CODEBUILD_SRC_DIR_BuildLambda=path/to/lambda cdk deploy
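Note that the assignment has to prefix the command on the same line: with `VAR=value && cdk deploy`, the variable is set in the current shell but not exported, so the `cdk` process never sees it. The difference can be checked with any child process standing in for `cdk deploy` (the `sh -c 'echo …'` below is just a stand-in):

```shell
# Assignment prefixes the command: exported to the child for this one invocation.
CODEBUILD_SRC_DIR_BuildLambda=path/to/lambda sh -c 'echo "src=$CODEBUILD_SRC_DIR_BuildLambda"'
# prints: src=path/to/lambda

# Assignment followed by &&: set in the current shell, but not exported.
CODEBUILD_SRC_DIR_BuildLambda=path/to/lambda && sh -c 'echo "src=$CODEBUILD_SRC_DIR_BuildLambda"'
# prints: src=
```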
Option 2: Define a dummy asset if CODEBUILD_SRC_DIR_BuildLambda is undefined
code: process.env.CODEBUILD_SRC_DIR_BuildLambda
? lambda.Code.fromAsset(process.env.CODEBUILD_SRC_DIR_BuildLambda)
: new lambda.InlineCode('exports.handler = async () => console.log("NEVER")'),
I am trying to upgrade my Terraform version from 0.12 to 0.13, but I had previously run init and plan with a globally installed Terraform 0.14.5.
I'm struggling to understand how this affects the state snapshot and how I can remove this error; the remote state hasn't changed, so where is Terraform getting this from? I have removed every .terraform directory in the project.
Terraform holds its state either in a remote backend or in a local file.
If your configuration files contain no block like the following (the backend type, and therefore the name in "...", varies depending on which backend is used):
terraform {
  backend "..." {
  }
}
Then it is safe to assume you have a local JSON state file named terraform.tfstate and, since your project existed before the upgrade, a backup file terraform.tfstate.backup.
If you peek into those files, you will see the version of Terraform that wrote the state near the beginning of the file.
For example:
{
  "version": 4,
  "terraform_version": "0.14.5"
}
From there, and with all the caution in the world, having made sure you indeed didn't change anything in the remote state, you have a few options:
- if your terraform.tfstate.backup still contains "terraform_version": "0.13.0", you can roll back by removing terraform.tfstate and renaming terraform.tfstate.backup to terraform.tfstate
- you can try to "hack" the actual terraform.tfstate and change the version there by adapting the line "terraform_version": "0.14.5"
- as advised in the link below, you can create a state version using the API, overriding the state by manually specifying the expected terraform_version
My advice would still be to diff terraform.tfstate against terraform.tfstate.backup to see what has possibly changed, or to use a versioning tool if your terraform.tfstate is under version control.
Useful read: https://support.hashicorp.com/hc/en-us/articles/360001147287-Downgrading-Terraform-Version-in-the-State
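Before choosing between those options, it can help to extract the recorded version rather than eyeballing the JSON. A small sketch (the file name terraform.tfstate.example and its content are illustrative only; point the grep at your real state file):

```shell
# Write a minimal state-like file for the demo (real states hold much more):
cat > terraform.tfstate.example <<'EOF'
{
  "version": 4,
  "terraform_version": "0.14.5"
}
EOF

# Extract the version Terraform recorded when it last wrote this state:
grep -o '"terraform_version": *"[^"]*"' terraform.tfstate.example | cut -d'"' -f4
# prints: 0.14.5
```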
I'm trying to create a terraform aws_lb_listener_rule resource and am getting the error "Unsupported block type - Blocks of type "host_header" are not expected here." (and the same error for the path_pattern) when I run terraform plan.
I'm using terraform 0.12 and upgraded the folder from 0.11 so there's a version.tf file with required_version = ">= 0.12". I'm using this link as a reference https://www.terraform.io/docs/providers/aws/r/lb_listener_rule.html
This is the resource block I'm using
resource "aws_lb_listener_rule" "260" {
  listener_arn = data.terraform_remote_state.alb.outputs.alb_https_listener_arn
  priority     = 260

  action {
    type             = "forward"
    target_group_arn = module.x.target_group_arn
  }

  condition {
    host_header {
      values = ["something.com"]
    }
  }

  condition {
    path_pattern {
      values = ["/a/*", "/b/*"]
    }
  }
}
I'm using this setup in other files, so I know it can run successfully. I'm wondering if there's a conflicting resource or something else I'm missing that's causing the error.
I am using the deprecated condition syntax in a different file in the same folder, if that could cause an issue. When I isolate this resource and try to modify it, it still gives me the error, so I might need to delete the rule and then recreate it the new way.
I've tried deleting the .terraform directory and running terraform init again to see if that would reset anything, rearranging the conditions in case the order mattered, and copying the exact code from the docs and modifying it, but it still throws the error.
I can use the deprecated condition syntax:
condition {
  field  = "path-pattern"
  values = ["/a/*", "/b/*"]
}
I've been searching online for a similar problem and had trouble finding one that matches this issue.
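One cause consistent with this error is an AWS provider release that predates the block-style condition syntax: the host_header and path_pattern blocks were only added in a later 2.x provider release (around 2.42, an assumption worth verifying in the provider changelog), and older providers reject them exactly like this. A version constraint makes the requirement explicit; a sketch, with the region hypothetical:

```
provider "aws" {
  region  = "us-east-1"  # hypothetical region
  version = ">= 2.42"    # assumed minimum for block-style condition syntax
}
```

After adding the constraint, run terraform init again so the newer provider is actually downloaded.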
I run WSO2 APIM 2.0.1-SNAPSHOT on Windows. When I modify a subscription tier and save it, it reports the exception below, and although the billing plan changed, the API still displays the FREE label.
[2016-08-12 15:30:02,504] ERROR - EventProcessorAdminService Error while deleting the execution plan file
org.wso2.carbon.event.processor.core.exception.ExecutionPlanConfigurationException: Error while deleting the execution plan file
at org.wso2.carbon.event.processor.core.internal.util.EventProcessorConfigurationFilesystemInvoker.delete(EventProcessorConfigurationFilesystemInvoker.java:124)
......
Caused by: java.nio.file.InvalidPathException: Illegal char <:> at index 2: /D:/emman/PROJECT/AA/apimgmt/wso2am-2.0.1-SNAPSHOT/repository/deployment/server/\executionplans
at sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94)
at sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255)
at java.nio.file.Paths.get(Paths.java:84)
at org.wso2.carbon.event.processor.core.internal.util.EventProcessorUtil.validateFilePath(EventProcessorUtil.java:387)
at org.wso2.carbon.event.processor.core.internal.util.EventProcessorConfigurationFilesystemInvoker.delete(EventProcessorConfigurationFilesystemInvoker.java:109)
... 65 more
[2016-08-12 15:30:02,539] ERROR - ThrottlePolicyDeploymentManager Error while deploying policy to global policy server.Error while deleting the execution plan file
[2016-08-12 15:30:02,541] INFO - subscription-policy-edit:jag SubscriptionPolicy [policyName=Gold, description=Allows 5000 requests per minute, defaultQuotaPolicy=QuotaPolicy [type=requestCount, limit=RequestCountLimit [requestCount=5000,
toString()=Limit [timeUnit=min, unitTime=1]]]rateLimitCount=-1, tenantId=-1234,ratelimitTimeUnit=NA]
As per your logs, the error happens due to the invalid file path below.
/D:/emman/PROJECT/AA/apimgmt/wso2am-2.0.1-SNAPSHOT/repository/deployment/server/\executionplans
I had a look at the code. It reads the first part of this path from the <RepositoryLocation> tag of the carbon.xml file. By default, it looks like this:
<RepositoryLocation>${carbon.home}/repository/deployment/server</RepositoryLocation>
Please verify that you have the same value in carbon.xml. If you are getting this error with that same config, change it to an absolute path like the one below and try again:
D:\emman\PROJECT\AA\apimgmt\wso2am-2.0.1-SNAPSHOT\repository\deployment\server
To make the path more Linux-like, I used this trick: share your carbon home folder over the network, then set the RepositoryLocation in carbon.xml to the UNC-style path //machinename/share.
I'm new to Whirr and AWS, so apologies in advance if I'm asking something silly.
I'm following the directions here to set up Whirr, and
bin/whirr launch-cluster --config hadoop.properties
fails with the following:
[~/src/cloudera/whirr-0.1.0+23]$ bin/whirr version rvm:ruby-1.8.7-p299
Apache Whirr 0.1.0+23
[~/src/cloudera/whirr-0.1.0+23]$ bin/whirr launch-cluster --config hadoop.properties rvm:ruby-1.8.7-p299
Launching myhadoopcluster cluster
Exception in thread "main" com.google.inject.CreationException: Guice creation errors:
1) No implementation for java.lang.String annotated with @com.google.inject.name.Named(value=jclouds.credential) was bound.
while locating java.lang.String annotated with @com.google.inject.name.Named(value=jclouds.credential)
for parameter 2 at org.jclouds.aws.filters.FormSigner.<init>(FormSigner.java:91)
at org.jclouds.aws.config.AWSFormSigningRestClientModule.provideRequestSigner(AWSFormSigningRestClientModule.java:66)
1 error
at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:410)
at com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:166)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:118)
at com.google.inject.InjectorBuilder.build(InjectorBuilder.java:100)
at com.google.inject.Guice.createInjector(Guice.java:95)
at com.google.inject.Guice.createInjector(Guice.java:72)
at org.jclouds.rest.RestContextBuilder.buildInjector(RestContextBuilder.java:141)
at org.jclouds.compute.ComputeServiceContextBuilder.buildInjector(ComputeServiceContextBuilder.java:53)
at org.jclouds.aws.ec2.EC2ContextBuilder.buildInjector(EC2ContextBuilder.java:101)
at org.jclouds.compute.ComputeServiceContextBuilder.buildComputeServiceContext(ComputeServiceContextBuilder.java:66)
at org.jclouds.compute.ComputeServiceContextFactory.buildContextUnwrappingExceptions(ComputeServiceContextFactory.java:72)
at org.jclouds.compute.ComputeServiceContextFactory.createContext(ComputeServiceContextFactory.java:114)
at org.apache.whirr.service.ComputeServiceContextBuilder.build(ComputeServiceContextBuilder.java:41)
at org.apache.whirr.service.hadoop.HadoopService.launchCluster(HadoopService.java:84)
at org.apache.whirr.service.hadoop.HadoopService.launchCluster(HadoopService.java:61)
at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:61)
at org.apache.whirr.cli.Main.run(Main.java:65)
at org.apache.whirr.cli.Main.main(Main.java:91)
My hadoop.properties file has an AWS Access Key and Secret Access Key.
Any pointers on what I might have done wrong and what I need to do to fix this?
Thanks!
Okay, so this turned out to be a problem with the syntax in my hadoop.properties file. In the process of copying my keys across from the AWS management console, "whirr.credential" got truncated to "whirr.cred".
A classic face palm moment!
Anyway, I'm leaving this up so that anyone googling for this error message knows to go triple-check their hadoop.properties file!
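For anyone double-checking theirs, here is a minimal hadoop.properties sketch with the credential keys written out in full (the cluster name and instance template are placeholders, and the key names follow the Whirr 0.1.x docs; verify them against your version):

```
whirr.service-name=hadoop
whirr.cluster-name=myhadoopcluster
whirr.instance-templates=1 jt+nn,1 dn+tt
whirr.provider=ec2
whirr.identity=YOUR_AWS_ACCESS_KEY_ID
whirr.credential=YOUR_AWS_SECRET_ACCESS_KEY
```

If either of the last two keys is misspelled or truncated, jclouds never receives the credential and fails with exactly the Guice binding error above.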