Terraform - Optional SSM parameter lookup - amazon-web-services

I'm doing a lookup for an SSM parameter which may or may not exist depending on a variable passed in:
data "aws_ssm_parameter" "server_tags" {
  name = "/${var.env_number}/server_tags"
}
I am then using it like below in my locals and passing to my module:
locals {
  server_tags   = data.aws_ssm_parameter.server_tags != null ? jsondecode(data.aws_ssm_parameter.server_tags.value) : {}
  instance_tags = merge(var.instance_tags, local.server_tags)
}
This works fine when my parameter exists, but if I pass in a value where my parameter doesn't exist, I get an error:
Error describing SSM parameter (/997/server_tags): ParameterNotFound:
Is there any way I can do a pre-check to see if the parameter exists, or make it optional somehow?
Thanks

Sadly you can't do this directly. There is no built-in mechanism in Terraform to check whether a data source exists or not. But you can program your own logic for that using an External Data Source.
Since you write the external program yourself, you can implement the logic for checking whether a resource exists.
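A sketch of that approach (the helper script name and its output contract are illustrative, not a fixed API): the external data source runs a script that always succeeds, returning the parameter value or an empty string, so Terraform never fails on a missing parameter.

```hcl
# lookup_ssm.sh (hypothetical helper) would do something like:
#   value=$(aws ssm get-parameter --name "$1" --query 'Parameter.Value' --output text 2>/dev/null || echo "")
#   printf '{"value": %s}' "$(printf '%s' "$value" | jq -R -s .)"
data "external" "server_tags" {
  program = ["bash", "${path.module}/lookup_ssm.sh", "/${var.env_number}/server_tags"]
}

locals {
  server_tags   = data.external.server_tags.result.value != "" ? jsondecode(data.external.server_tags.result.value) : {}
  instance_tags = merge(var.instance_tags, local.server_tags)
}
```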


How to create a Dataflow Template without running code?

I am creating a Dataflow template that runs perfectly if I pass all required parameters, but my use case is to create a generic template where I can pass parameters at run time.
All my options are ValueProviders, but the pipeline still tries to execute the code while creating the template, and that gives an error. Is there any way I can create a Dataflow template without executing code?
I have also faced one weird issue: when I use the create-template command along with all the parameters, it creates the template successfully, but all the parameter values get hard-coded into the template, and if I pass a new parameter value while running it, the template does not pick up the new value.
Is this the correct behavior of a Dataflow template?
Command used without passing all parameters:
mvn compile exec:java -D"exec.mainClass"="org.example.Main" -D"exec.args"="--runner=DataflowRunner --project=<project> --stagingLocation=gs://test-bucket/staging_4 --templateLocation=gs://test-bucket/templates/my_template --region=asia-south1 " -P dataflow-runner -X
After redesigning my code multiple times, I got to know what mistake I was making.
Your code should be designed in such a way that it does not ask for any value to generate the pipeline graph except the project ID, staging path, and template path. If it does ask, check the practices below and design accordingly.
The following are the practices you should follow when creating a custom template.
1. Declare all the options as ValueProvider so that they are evaluated at run time and the template creation process skips them.
2. Do not assign a ValueProvider value to a normal variable, as below:
String query = options.getQuery().toString();
PCollection<TableRow> rows = p.apply("Read From Source", JdbcIO.<TableRow>read()
    .withQuery(query));
3. Pass the ValueProvider value directly to the method. For example, the code snippet above should look like this:
PCollection<TableRow> rows = p.apply("Read From Source", JdbcIO.<TableRow>read()
    .withQuery(options.getQuery()));
4. Do not use options only inside a particular block of code that is never reached during template creation, as in the example below:
boolean check = false;
if (options.getA().isAccessible()) {
    check = true;
}
if (check) {
    // Use of options.getB()
}
In the above case, if you do not pass the --a option while creating the template, check will always be false and the block guarded by it will never execute. Because options.getB() is not referenced outside that block, option b stays unknown to the template, and passing --b at run time will give an error.
5. If you have a use case where you need to manipulate the value of an input option, use NestedValueProvider instead of converting the ValueProvider into a normal variable such as a String and manipulating that.
For example, avoid the practice below:
String query = String.format("SELECT * FROM %s", options.getTableName().toString());
PCollection<TableRow> rows = p.apply("Read From Source", JdbcIO.<TableRow>read()
    .withQuery(query));
Instead, implement the same logic as below:
ValueProvider<String> query = NestedValueProvider.of(options.getTableName(),
    new SerializableFunction<String, String>() {
        @Override
        public String apply(String input) {
            return String.format("SELECT * FROM %s", input);
        }
    });
PCollection<TableRow> rows = p.apply("Read From Source", JdbcIO.<TableRow>read()
    .withQuery(query));
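The NestedValueProvider pattern above boils down to deferred computation: the derived value is only produced when it is asked for, which is what lets template creation skip it. A plain-Java analogy (not the real Beam API; class and variable names are made up for illustration):

```java
import java.util.function.Function;
import java.util.function.Supplier;

// Plain-Java analogy of Beam's NestedValueProvider: the derived value is
// computed only when get() is called, mirroring how ValueProviders defer
// evaluation from template creation time to run time.
class LazyValue<T> implements Supplier<T> {
    private final Supplier<T> source;

    LazyValue(Supplier<T> source) { this.source = source; }

    @Override
    public T get() { return source.get(); }

    // Analogous to NestedValueProvider.of(inner, fn): wrap an inner provider
    // with a transformation that runs lazily.
    static <A, B> LazyValue<B> of(Supplier<A> inner, Function<A, B> fn) {
        return new LazyValue<>(() -> fn.apply(inner.get()));
    }
}

public class Main {
    public static void main(String[] args) {
        Supplier<String> tableName = () -> "users"; // stands in for options.getTableName()
        Supplier<String> query = LazyValue.of(tableName,
                t -> String.format("SELECT * FROM %s", t));
        System.out.println(query.get()); // SELECT * FROM users
    }
}
```

Nothing is computed when the LazyValue is constructed; the format string is only built at the final get(), just as the real query string is only built when the job runs.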
Use Flex Templates instead; see the Dataflow Flex Templates documentation.

aws-cdk There is already a Construct with name; when trying to create multiple instances of same construct

I want to do something like following in the AWS CDK.
const a = new RandomObject(this, 'randomValue1', 'randomValue2').lambdaFunction;
const b = new RandomObject(this, 'randomValue111', 'randomValue2222').lambdaFunction;
However, CDK complains that "There is already a Construct with name 'RandomObject' in InfraStack"
Is there a way to achieve what I am trying to do?
Context: I have a step function that I want to create dynamically by passing two variables, 'randomValue1' and 'randomValue2'. randomValue1 will be the name of the step function and randomValue2 will be the trigger of the lambda function inside it, keeping everything else the same.
I want to avoid creating multiple constructs for the same state machine if I can.
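The error is about the construct tree, not about having two instances per se: every child of a scope must be registered under a unique id, so some id inside RandomObject (or the ids passed to it) must be colliding. A minimal stand-in for that rule, not the real CDK API:

```typescript
// Sketch of CDK's construct-tree rule (hypothetical Scope class, not aws-cdk):
// a scope rejects two children registered under the same id, which is why the
// id argument has to differ between instances of the same construct class.
class Scope {
  private readonly children = new Set<string>();

  register(id: string): void {
    if (this.children.has(id)) {
      throw new Error(`There is already a Construct with name '${id}' in this scope`);
    }
    this.children.add(id);
  }
}

const stack = new Scope();
stack.register("StateMachineA"); // ok: unique id
stack.register("StateMachineB"); // ok: a second instance just needs its own id
// stack.register("StateMachineA"); // would throw: duplicate id in the same scope
```

In CDK terms: reusing one construct class for many instances is fine, as long as each instance (and each child it creates internally) derives a unique id, e.g. from the values you pass in.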

Terraform - why can't I assign a value from 1 variable to another variable

I am trying to assign an output value from a module to a local variable so that I can use it conveniently. Is there another way?
variable "vpc_id" {
  default = "${module.vpc.vpc_id}"
}
The error I am getting is:
Error: Unsupported argument
on main.tf line 22, in variable "subnetid_private":
22: default = "${module.vpc.subnet_private}"
Variables may not be used here.
I spent a good amount of time googling this but could not find any example. Am I missing something here? This is a pretty standard convenience feature of any language.
You can replace it with locals:
https://www.terraform.io/docs/configuration/locals.html
locals {
  vpc_id = module.vpc.vpc_id
}
and later reference it as local.vpc_id.
I had the same thought when I first started using Terraform.
The problem is naming. A Terraform variable block is more like a final or constant constructor or method parameter in other languages.
The Local Values documentation says this:
A local value assigns a name to an expression, allowing it to be used multiple times within a module without repeating it.
Comparing modules to functions in a traditional programming language: if input variables are analogous to function arguments and output values are analogous to function return values, then local values are comparable to a function's local temporary symbols.
When you write:
variable "vpc_id" {
}
Terraform says, "ah, you'd like callers of this module to be able to hand in a string called vpc_id". That is definitely not what you are looking for.
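To make the contrast concrete, this is what a variable is for: a value handed in by the module's caller (the module name and source path here are made up for illustration):

```hcl
module "network" {
  source = "./modules/network"   # hypothetical child module that declares variable "vpc_id"
  vpc_id = module.vpc.vpc_id     # the caller supplies the input variable's value
}
```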
For the kind of case you have, a Local Value is what Terraform provides:
locals {
  vpc_id = module.vpc.vpc_id
}
In Terraform 0.12 you might try to reference one variable from another like so:
variable "var1" {
  type    = string
  default = "this is var 1"
}
variable "var2" {
  type    = string
  default = "$${variable.var1}"
}
Note, however, that $$ is an escape sequence: var2's default becomes the literal string ${variable.var1}, not the value of var1. A variable's default must be a constant, so terraform apply only succeeds because nothing is actually interpolated:
$ terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Changing model parameters by cPar in other module

I am using this module hierarchy :
Node: {udpApp[0]<->udp<->networkLayer->wlan[0]} and wlan[0]: {CNPCBeacon<->mac<->radio}
I have given some initial parameter in the ini file for udpApp as :
**.host*.numUdpApps = 2
**.host*.udpApp[0].typename = "UDPBasicApp"
**.host*.udpApp[0].chooseDestAddrMode = "perBurst"
**.host*.udpApp[0].destAddresses = "gw1"
**.host*.udpApp[0].startTime = 1.32s
**.host*.udpApp[0].stopTime = 1.48s
But at run time I want to change the startTime and stopTime of udpApp[0] through the CNPCBeacon module. Hence I changed CNPCBeacon.cc as follows:
cModule* parentmod = getParentModule();
cModule* grantParentmod = parentmod->getParentModule();
cModule* udpmod = nullptr;  // initialize so a failed lookup is detectable
for (cSubModIterator iter(*grantParentmod); !iter.end(); iter++)
{
    //EV << "got the module " << iter()->getFullName() << endl;
    if (strcmp(iter()->getFullName(), "udpApp[0]") == 0)
    {
        udpmod = iter();
        break;
    }
}
cPar& startTime = udpmod->par("startTime");
cPar& stopTime = udpmod->par("stopTime");
And I am able to read the values of startTime and stopTime successfully. However, I want to change these values from the current module, and the following code results in an error:
udpmod->par("startTime").setDoubleValue(4.2);
Can anybody please suggest a way to change them at run time?
Declaring your parameter as volatile should solve your problem, but for future reference I'll provide further explanation below.
Volatile vs. non-volatile:
Here it depends on how you want to use the parameter. Via the .ini file you have two types of parameters: volatile and non-volatile.
volatile parameters are re-read every time they are used during your run. That would be helpful if you want the parameter to be generated by a built-in function; for example, with uniform(0,10) the volatile parameter will get a different value on each read.
On the other hand, non-volatile parameters are read just once, as they don't change during a run.
Using a volatile parameter does not give you full flexibility, in the sense that its value will always fall within the range predefined in the .ini file.
Dynamic Variable (parameter) Reassignment:
Instead, you could use a more robust approach: store the value of the module parameter in a variable, and re-assign that variable each time you need to change it.
For example, in your case you could do the following:
varHoldingStartTime = par("startTime").doubleValue();
varHoldingStartTime = 4.2;
This way the value changes internally, without the change being reflected in the stored parameter itself.
Parameter Studies:
Alternatively, if you want this change of the parameter to apply across multiple runs, you could use the advanced built-in approach provided by OMNeT++ for performing Parameter Studies.
I have explained how Parameter Studies work here: https://stackoverflow.com/a/30572095/4786271 and how they can be combined with constraints etc. here: https://stackoverflow.com/a/29622426/4786271
If none of the approaches suggested by me fits your case, the answers to this question might solve your problem altogether: How to change configuration of network during simulation in OMNeT++?
EDIT: extending the answer to roughly explain handleParameterChange().
I have not used handleParameterChange() before either, but from what I can see this function provides a watchdog functionality to the module that utilizes it.
To activate this functionality, first re-define void handleParameterChange(const char *parameterName); in your module.
In essence what it seems to do is the following:
Assume we have two modules, moduleA and moduleB, and moduleB has a parameter parB. When moduleA changes parB, moduleB reacts to the change based on the behaviour defined in:
moduleB::handleParameterChange("parB");
The behaviour could be re-reading the original value of parB from the .ini file, etc.
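The callback shape can be sketched in isolation. This is a minimal stand-in, not the real OMNeT++ API (the Module struct and setPar() here are invented to show the notification flow; in OMNeT++ the simulator invokes handleParameterChange() for you after cPar changes):

```cpp
#include <cstring>
#include <iostream>

// Stand-in for an OMNeT++ module: changing a parameter notifies the owning
// module via handleParameterChange(), which can then react, e.g. by
// rescheduling its application.
struct Module {
    double startTime = 1.32;   // mirrors udpApp[0].startTime from the .ini

    void setPar(const char* name, double value) {
        if (std::strcmp(name, "startTime") == 0)
            startTime = value;
        handleParameterChange(name);   // in OMNeT++ the kernel calls this
    }

    void handleParameterChange(const char* parname) {
        if (std::strcmp(parname, "startTime") == 0)
            std::cout << "startTime is now " << startTime << "\n";
    }
};
```

The point of the mechanism is that the module owning the parameter decides how to react to the change, rather than the module making the change reaching into its internals.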

Filter by scope in Boost.Log

I'm using Boost.Log library. I've created a named_scope attribute that keeps track of where I am in the code. (I specify it by hand with BOOST_LOG_NAMED_SCOPE("...").) Is it possible to create a filter (using set_filter) that would select only the messages from a particular scope?
Please refer to Andrey's latest & greatest logging documentation:
The scope stack is implemented as a thread-specific global storage
internally. There is the named_scope attribute that allows hooking
this stack into the logging pipeline. This attribute generates value
of the nested type named_scope::scope_stack which is the instance of
the scope stack. The attribute can be registered in the following way:
logging::core::get()->add_global_attribute("Scope", attrs::named_scope());
Then configure your frontend sink filter to latch onto only the scopes of interest. In the filter lambda, or in the custom filter you pass to set_filter(), you can use the following to extract the scope name (assuming you work with multi-byte character strings):
typedef attrs::basic_named_scope< char >::value_type scope_stack;
logging::value_extractor< char, scope_stack > S("Scope");
scope_stack s = *S(rec);
if (!s.empty())
{
    const attrs::basic_named_scope_entry< char >& e = s.back();
    // Filter by e.scope_name
    ...
}
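Stripped of the Boost types, the decision the filter makes is simple: a record carries a stack of named scopes, and the filter passes the record only when the innermost scope matches the one you care about. A self-contained sketch of just that logic (function name is illustrative, not a Boost API):

```cpp
#include <string>
#include <vector>

// A log record carries the stack of scopes it was emitted from; accept the
// record only if the innermost (most recently entered) scope is the wanted one.
bool scope_filter(const std::vector<std::string>& scope_stack,
                  const std::string& wanted) {
    return !scope_stack.empty() && scope_stack.back() == wanted;
}
```

In the real filter you would compare e.scope_name from the extracted scope_stack against your target scope the same way.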
I hope it will work for you :)