I need to set the optional provider attribute on a Terraform resource so that I can reuse the resource with multiple providers. I need something like this:
resource "aws_kms_key" "key" {
  provider    = aws.custom_alias
  description = "xxx"
  policy      = "yyy"
}
In the above resource block, I want to pass different values to the provider attribute: to use the default provider, I want to pass a null value, and to use the custom provider, I want to pass the provider's alias. The provider attribute doesn't support variables, so I can't just set it to a variable (that would be very easy; not sure why it's not supported!).
I'm thinking I can use a dynamic block to create this attribute inside a resource: provider = aws.custom_alias. I'm not sure if that's possible, as most of the examples I see for dynamic blocks create a nested block inside a resource, like:
settings {
  xyz = abc
  abc = xyz
}
I'm not sure whether dynamic can create an optional attribute inside a resource. I'm looking for a suggestion on how to handle this use case. The goal is to set the provider attribute inside resources with different values.
Thanks in advance!
Terraform does not support dynamic provider selection. There's already a popular [feature request][1] for this.
What you can do instead is put your reusable code inside a [module][2] and instantiate the module multiple times with different providers:
module "mymodule_provider1" {
  source = "./path/to/module"
  providers = {
    aws = aws.provider1
  }
}

module "mymodule_provider2" {
  source = "./path/to/module"
  providers = {
    aws = aws.provider2
  }
}
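The aliased providers referenced by those modules still have to be declared statically in the root configuration, for example (regions are placeholders):

```hcl
provider "aws" {
  alias  = "provider1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "provider2"
  region = "eu-west-1"
}
```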
This is the "suggested" way to do it by HashiCorp, but it has the limitation that the number of modules can't be dynamic. If you really need the number of modules to be dynamic and the providers can't be created statically, then you can create the provider inside the module and then use the for_each argument on the module itself. You'd have to pass in the provider initialization values as input arguments into the module.
EDIT:
Sorry, it wasn't until I tried it myself that I remembered that Terraform doesn't allow for_each in a module if the module creates providers internally. So, I'm afraid, there's no way I'm aware of to do what you're trying to do. You'll have to create the providers statically.
Related
As I'm working with existing Terraform code that has been run successfully against AWS, I discovered I'd like to reuse the code in a different region without having to keep a second copy of the same code. Some of the code affects global services, which means I don't need it to be rerun in the other regions, so I would like to include count = var.alreadyrun == "yes" ? 1 : 0 in some of the Terraform modules.
However, when I add the above line to the existing code for the specific modules and run terraform plan against the same region it was already run against, it tells me it's going to destroy and re-add those modules. I don't want to destroy and recreate those modules; I just want to skip them and move on to the next. Is there a way I can do this?
Adding count to a module block causes Terraform to track multiple instances for that block, so the address of the module changes from something like module.example to module.example[0]. By default, Terraform will therefore assume you want to destroy the old module instance (with no instance key) and create a new one with instance key zero.
However, if you are using Terraform v1.1 or later you can add an additional declaration to tell Terraform that you want to "move" the existing module instance to a new address instead. For a module "example" block, that would look like this:
module "example" {
  source = "./modules/example"
  count  = var.enable_example ? 1 : 0
  # ...
}

moved {
  from = module.example
  to   = module.example[0]
}
There are more details on moved blocks in the Terraform documentation section Refactoring.
As a side note, when declaring a conditional module or resource based on an input variable like this, it's more typical to name it something like enable_example, as I showed above, rather than a name like "already run", because a Terraform configuration should typically declare a desired state rather than describe how to reach that state.
You might also wish to investigate the possibility of splitting your Terraform configuration into multiple parts so that there's a "global" configuration that you use only once and then a "regional" configuration that you use for each region. That will then avoid the need to treat one of the regions as "special", also being responsible for the global infrastructure, and thus create a clearer dependency graph between all of your configurations for future maintainers to understand.
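Such a split could look like this (directory names are illustrative); each root configuration then keeps its own state:

```
infra/
├── global/      # applied once: IAM, Route 53, CloudFront, ...
│   └── main.tf
└── regional/    # applied once per region, e.g. via workspaces
    └── main.tf  # or separate backends / .tfvars files per region
```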
Both of those suggestions are aside from your direct question, though; a moved block as I described above is the more direct answer.
I am trying to set up CloudWatch alarms using the auto-generated metrics via CDK on a CfnDeliveryStream, which is part of @aws-cdk/aws-kinesisfirehose. From the documentation here, https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-kinesisfirehose.CfnDeliveryStream.html, it looks like there is no metric() that can be used for this. However, the DeliveryStream class in the same library has that method; is it possible to leverage that?
There are basically two strategies:
Use the DeliveryStream construct primarily (const mystream = new DeliveryStream(...)) and then modify the underlying CfnDeliveryStream by accessing the default child (const cfnstream = mystream.node.defaultChild) and modifying that construct.
Create the Cfn stream first and then convert it to a DeliveryStream using DeliveryStream.fromDeliveryStreamAttributes(scope, id, attrs), fromDeliveryStreamArn(scope, id, arn), or fromDeliveryStreamName(scope, id, name). The downside is that constructs imported this way often limit which properties and methods can be used properly, since not all information about the original stream is imported. fromDeliveryStreamAttributes is the most complete, but it's quite verbose since you need to pass all the attributes you need to use.
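A minimal TypeScript sketch of the first strategy, assuming the experimental CDK v1 L2 DeliveryStream construct (module names, the S3 destination, and the alarm metric/threshold are illustrative; exact APIs depend on your CDK version):

```typescript
import * as cdk from '@aws-cdk/core';
import * as s3 from '@aws-cdk/aws-s3';
import * as firehose from '@aws-cdk/aws-kinesisfirehose';
import * as destinations from '@aws-cdk/aws-kinesisfirehose-destinations';
import * as cloudwatch from '@aws-cdk/aws-cloudwatch';

class MetricsStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string) {
    super(scope, id);

    const bucket = new s3.Bucket(this, 'Bucket');

    // Start from the L2 DeliveryStream so metric() is available.
    const stream = new firehose.DeliveryStream(this, 'Stream', {
      destinations: [new destinations.S3Bucket(bucket)],
    });

    // Escape hatch: reach the underlying CfnDeliveryStream for any
    // L1-only properties you still need to tweak.
    const cfnStream = stream.node.defaultChild as firehose.CfnDeliveryStream;

    // Alarm on an auto-generated metric (name and threshold are examples).
    new cloudwatch.Alarm(this, 'IncomingRecordsAlarm', {
      metric: stream.metric('IncomingRecords'),
      threshold: 1,
      evaluationPeriods: 1,
      comparisonOperator: cloudwatch.ComparisonOperator.LESS_THAN_THRESHOLD,
    });
  }
}
```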
I've applied the guidance on programmatic usage of M2Doc (also with this help) to successfully generate a document via the API, which was previously prepared by using the M2Doc GUI (configured .docx plus a .genconf file). It seems to also work with a configured .docx, but without a .genconf file.
Now I would like to go a step further and ease the user interface in our application. The user should come with a .docx, include the {m:...} fields there, especially for variable definition, and then in our Eclipse application just assign model elements to the list of variables. Finally press "generate". The rest I would like to handle via the M2Doc API:
Get list of variables from the .docx
Tell M2Doc the variable objects (and their types and other required information, if that is separately necessary)
Provide M2Doc with sufficient information to handle AQL expressions like projectmodel::PJDiagram.allInstances() in the Word fields
I tried to analyse the M2Doc source code for this, but have some questions to achieve the goal:
The parse/generate API does not write any config information to the .docx or .genconf files, right? What would be the API to at least generate the .docx config information?
The source code mentions "if you are using a Generation" - what is meant with that? The use of a .genconf file (which seems to be optional for the generate API)?
Where can I get the list of variables from, which M2Doc found in a .docx (during parse?), so that I can present it to the user for Object (Model Element) assignment?
Do I have to tell M2Doc the types of the variables, and in which resource file they are located, besides handing over the variable objects? My guess is no, as using a blank .docx file without any M2Doc information stored also worked for the variables themselves (not for any additional AQL expressions using other types, or .oclAsType() type castings).
How can I provide M2Doc with the types information for the AQL expressions mentioned above, which I normally tell it via the nsURI configuration? I handed over the complete resourceSet of my application, but that doesn't seem to be enough.
Any help would be very much appreciated!
To give you an impression of my code so far, see below - note that it's actually Javascript instead of Java, as our application has a built-in JS-Java interface.
//=================== PARSING OF THE DOCUMENT ==============================
var templateURIString = "file:///.../templateReqs.docx";
var templateURI = URI.createURI(templateURIString);
// canNOT be empty, as we get nullpointer exceptions otherwise
var options = {"TemplateURI":templateURIString};
var exceptions = new java.util.ArrayList();
var resourceSetForModels = ...; //here our application's resource set for the whole model is used, instead of M2Doc "createResourceSetForModels" - works for the moment, but not sure if some services linking is not working
var queryEnvironment = m2doc.M2DocUtils.getQueryEnvironment(resourceSetForModels, templateURI, options);
var classProvider = m2doc.M2DocPlugin.getClassProvider();
// empty Monitor for the moment
var monitor = new BasicMonitor();
var template = m2doc.M2DocUtils.parse(resourceSetForModels.getURIConverter(), templateURI, queryEnvironment, classProvider, monitor);
// =================== GENERATION OF THE DOCUMENT ==============================
var outputURIString = "file:///.../templateReqs.autogenerated.docx";
var outputURI = URI.createURI(outputURIString);
var variables = new java.util.HashMap(); // M2DocUtils.generate expects a Map<String, Object>
variables.put("myVar1", ...); // assignment of objects...
m2doc.M2DocUtils.generate(template, queryEnvironment, variables, resourceSetForModels, outputURI, monitor);
Thanks!
No, the API used to parse and generate doesn't modify the template file or the .genconf file. To modify the configuration of the template you will need to use the TemplateCustomProperties class. That will allow you to register your metamodels and service classes. This information is then used to configure the IQueryEnvironment, so you might also want to directly configure the IQueryEnvironment in your code.
The generation in this context refers to the .genconf file. Note that the .genconf file is also an EMF model, so you can also craft one in memory to launch your generation if that's easier for you. But yes, the use of a .genconf file is optional, as in your code example.
To get the list of variables in the template you can use the class TemplateCustomProperties:
TemplateCustomProperties.getVariables() lists the variables that are declared, with their type
TemplateCustomProperties.getMissingVariables() lists variables that are used in the template but not declared
You can also find the list of used metamodels (EPackage nsURIs) and imported service classes.
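A sketch in the same JavaScript-bridge style as the question's code. How TemplateCustomProperties is constructed from the parsed template, and the method name for the metamodel list, are assumptions; verify them against the M2Doc sources for your version:

```javascript
// Assumption: TemplateCustomProperties wraps the template's XWPFDocument,
// and 'template' is the result of M2DocUtils.parse() as in the question.
var properties = new m2doc.TemplateCustomProperties(template.getDocument());

// Declared variables: map of variable name -> declared type (as a string).
var declared = properties.getVariables();

// Variables used in {m:...} fields but not declared yet; these are the
// ones to present to the user for model-element assignment.
var missing = properties.getMissingVariables();

// Metamodels (EPackage nsURIs) referenced by the template
// (method name is an assumption).
var nsURIs = properties.getPackagesURIs();
```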
The type of variables is not needed at generation time; it's only needed if you want to validate your template. At generation time you need to pass a map from variable name to value, as you did in your example. The value of a variable can be any object from your model (an EObject), a String, an Integer, ... If you want to use something like oclIsKindOf(pkg::MyEClass) you will need to register the nsURI of pkg first; see the next point.
The code you provided should let you use something like projectmodel::PJDiagram.allInstances(). This service needs a ResourceSetRootEObjectProvider, which is initialized in M2DocUtils.getQueryEnvironment(). But you need to declare the nsURI of your metamodel in your template (see TemplateCustomProperties). This will register it in the IQueryEnvironment. You can also register it yourself using IQueryEnvironment.registerEPackage().
This should help you find the missing parts in the configuration of the AQL environment. Your code seems good and should work once you add the configuration part.
I have a Terraform configuration which creates an aws_api_gateway_usage_plan resource, using a computed value during the apply stage from a local_file resource.
resource "aws_api_gateway_usage_plan" "api_plan" {
  name = var.usage_plan_name

  api_stages {
    api_id = jsondecode(file("dev.json")).resources[1].rest_api_id
    stage  = "api"
  }

  # Have to wait for the API to be created before we can create the usage plan
  depends_on = [local_file.chalice_config]
}
As you can see, I read dev.json to determine the api_id Terraform needs. The problem is that when I run terraform apply, the new safety checks described here notice that the previous value that api_id evaluated to has changed!
Provider produced inconsistent final plan: When expanding the plan for aws_api_gateway_usage_plan.api_plan
to include new values learned so far during apply, provider "aws" produced an invalid new value
for .api_stages[0].api_id: was cty.StringVal("****"), but now cty.StringVal("****").
As that documentation describes, the correct way to solve this error is to specify that during the plan phase this api_id actually has yet to be computed. The problem is I'm not sure how to do this through a Terraform config - the documentation I've referenced is for the writers of the actual Terraform providers.
Looking at issues on GitHub, it seems like setting the initial value to null isn't a reasonable way to do this.
Any ideas? I am considering downgrading to Terraform 0.11 to get around this new safety check, but I was hoping this would be possible in 0.12.
Thanks in advance!
Okay, after thinking for a while I came up with a silly workaround that enabled me to "trick" Terraform into believing that the value for the api_id was to be computed during the apply phase, thereby disregarding the safety check.
What I did was replace the api_id expression with the following:
api_id = replace("=${aws_security_group.sg.vpc_id}=${jsondecode(file("files/handler/.chalice/deployed/dev.json")).resources[1].rest_api_id}", "=${aws_security_group.sg.vpc_id}=", "")
Essentially what I am doing is saying that the api_id's value depends on a computed variable - namely, the vpc_id of a aws_security_group I create named sg. In doing so, Terraform recognizes this value is to be computed later, so the safety check is ignored.
Obviously, I don't actually want to have the vpc_id in here, so I used Terraform's string functions to remove it from the final expression.
This is a pretty hacky workaround, and I'm open to a better solution - just thought I'd share what I have now in case someone else runs into the same issue.
Thanks!
I was facing the same issue while creating a Lambda event source mapping. I got around it by running
terraform plan
and then
terraform apply
I got the same error when I encoded my user_data scripts (with filebase64 or base64encode) in places where I had to simply use file or templatefile:
user_data = file("${path.module}/provisioning_scripts/init_script.sh")
user_data = templatefile("${path.module}/provisioning_scripts/init_script.tpl", {
  USER  = "my-user"
  GROUP = "my-group"
})
(*) I can't 100% reproduce it but I'm adding this solution as another possible reason for receiving the mentioned error.
Read also in here.
I'm writing code for my GraphQL resolvers in AWS AppSync with resolver mapping template.
I know that there is a put method that I can use to add a field to the input object or any other object, like this (for example):
$util.qr($name.put("firstName", "$ctx.args.input.firstName"))
But now I want to remove a field from an object, for example, the input object.
Is there any method similar to the put method, but for removing a field? Something like:
$util.qr($ctx.args.input.remove("firstName"))
I am new to AWS and DynamoDB and AppSync.( you can consider me as an absolute beginner. )
Use #foreach and build a new object:
#set($newInput = {})
#foreach ($key in $ctx.args.input.keySet())
  #if ($key != "firstName")
    $util.qr($newInput.put($key, $ctx.args.input.get($key)))
  #end
#end
Yes, generally you can use $myObject.remove("myKey") on objects that you create in a mapping template, however, I will add the disclaimer that this will not always work on objects in the $ctx as some parts are immutable. AppSync bundles utility methods that make dealing with objects in mapping templates easier (e.g. making copies of objects). This functionality is actually tied to that of Apache Velocity so you can read more about how it works in those docs.
In AppSync, the arguments in a query or mutation are exposed in the request mapping template as $context.args. If you have passed in an argument named input you can remove it as follows:
$util.quiet($context.args.remove("input"))
or, using the qr alias for quiet (identical to the above):
$util.qr($context.args.remove("input"))
This can be used in both the request and response mapping template. It can also be used to remove nested properties:
$util.qr($context.args.input.remove("nestedProp"))