We have common variables in our AWS infrastructure that are meant to be used by several modules, for example subnet IDs, the VPC ID, and so on.
We want to avoid duplicating those variables in each module's *.tfvars files. Is it possible to make them available from any Terraform module, while the modules themselves stay isolated from each other?
I am thinking about some kind of core module that could be imported wherever those common variables are needed. But I have doubts that a module is the right way to do that, as modules are intended to contain resources, while we need to expose only variables. Is it right to use modules to share variables? Or how do you cope with this problem? Is this common, or is it a bad approach in Terraform?
Thanks.
If you have a set of expressions (including hard-coded literal values) that you want to reuse, then it is valid to write a module that contains only input variable declarations, local values, and output values as a way to model that.
The simplest form of this would be a module that only contains output blocks whose values are hard-coded literal values, like this:
output "example" {
  value = "result"
}
The official module hashicorp/subnets/cidr is an example of that: it doesn't declare any resources of its own, and instead it just encapsulates some logic for calculating a set of subnets based on some input variables.
This is a special case of data-only modules, where the data comes from inside the module itself rather than from data sources. A nice thing about modelling shared data in this way is that if you later decide to calculate those results automatically based on data sources, you'll be able to do so while keeping the details encapsulated. For example, if you define a module that takes an environment name as an input variable and returns derived settings about that environment, the module could contain local logic to calculate those results today, but could later determine some of those settings by fetching them from a prescribed remote location, such as the AWS SSM Parameter Store, if the need arises.
Related
Let's say I have a program which uses n big modules:
A) Network/Communication
B) I/O files
C) I/O database
D) GUI
E) ...
Of course, the list of modules could be bigger.
Let's say I want to have some global variable, but with a scope limited to a single module.
As an example, let's say the I/O database module consists of 10 classes, each representing a table in the database, but it needs some global const state such as the name of table A, the columns of table A, etc. (since it is a relational database, in table D I may need to refer to table A).
It is also obvious that I do not need to access these table names through the Network/Communication module. Is there a way to make a variable "globally" accessible only to some subset of classes?
Just for clarification: I know that "global for some part" is a contradiction, but my idea is that I want to keep the accessibility (no need to pass a pointer to each object) while limiting the places from which it can be called (for example, limiting the scope from global to module).
You don't need globals for that; I strongly advise you to learn about dependency injection. Basically you have one "factory" module, and into each module you can inject an interface that has getters to access the centralized data (e.g. members of an instance of a class). This also allows you to test each module independently using mocks and stubs (e.g. a test class that returns other values).
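To make the commenter's suggestion concrete, here is a minimal sketch in C++. The interface name `TableInfoProvider`, the implementation `StaticTableInfo`, and the table IDs are all hypothetical names invented for illustration; the point is only that the database module's classes receive the shared metadata through a constructor-injected interface rather than a global.

```cpp
#include <string>
#include <vector>

// Hypothetical interface: getters for the centralized table metadata.
// Only the database module's classes receive an implementation of it,
// so the data is reachable there but invisible to, say, the network module.
struct TableInfoProvider {
    virtual ~TableInfoProvider() = default;
    virtual std::string tableName(int tableId) const = 0;
    virtual std::vector<std::string> columns(int tableId) const = 0;
};

// Production implementation, owned by the "factory" module.
class StaticTableInfo : public TableInfoProvider {
public:
    std::string tableName(int tableId) const override {
        return tableId == 0 ? "users" : "orders";
    }
    std::vector<std::string> columns(int tableId) const override {
        return tableId == 0 ? std::vector<std::string>{"id", "name"}
                            : std::vector<std::string>{"id", "user_id"};
    }
};

// A table class inside the database module: the provider is injected once
// at construction, so there is no global and no pointer passed per call.
class OrdersTable {
public:
    explicit OrdersTable(const TableInfoProvider& info) : info_(info) {}
    std::string qualifiedColumn(const std::string& col) const {
        return info_.tableName(1) + "." + col;
    }
private:
    const TableInfoProvider& info_;
};
```

A test can then construct `OrdersTable` with a stub implementation of `TableInfoProvider` instead of `StaticTableInfo`, which is exactly the testability benefit the comment mentions.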
Assuming I have 2 different sources:
node_module.cc
threaded_class.cc
node_module.cc is where I call NODE_MODULE to initialize my module. This module has a function that creates an instance of the class defined in threaded_class.cc (in a separate thread). I understand that I need to use Lockers and Isolates to access V8 from a separate thread, but my issue is bigger than that.
From my understanding, the NODE_MODULE macro is my only chance to capture the module's instance. I found an article that uses a piece of code that is exactly what I am looking for. The author stores the module handle in a persistent object like this:
auto module_handle = Persistent<Object>::New(target);
But this seems either deprecated or no longer possible. However, I figured it can be achieved like this:
auto module_handle = Persistent<Object>(context->GetIsolate(), target);
However, when I try to access the latter's properties, I mostly find private methods and fields; nothing seems worth using, or I don't know how to use it.
My question is: is there any updated guide on how to properly handle this kind of thing when writing a Node module? Or can you show me an example of how to pass the latter module_handle to my thread and use it, for example, to execute a JS function called test?
I also want to know, what is the difference between NODE_MODULE and NODE_MODULE_CONTEXT_AWARE when initializing a node module?
I'm writing this, well, call it a library I guess. It offers a set of global variables of type MyType. Now, I want to write the source of each of these MyTypes in its own .cpp and .h files, unaware of all the rest, without needing some central header file saying MyType* offerings[] = { &global1, &global2, /*... */ };.
Now, had these been different classes I want to be able to instantiate, I would want to use a factory pattern; but here they're all of the same type, and I don't need to instantiate anything. I would think each variable needs to be 'registered' into a global array (or unordered set) from somewhere in its sources.
So, what's the idiomatic way to do this?
You could take a look at the Registry pattern and create a manager that owns these objects.
The registry could hold everything related to your domain, so you describe your object names and properties in a model in one config file or database; the registry can look that up and instantiate your objects at runtime.
You would then need a mechanism to communicate these objects to the rest of the system. But if your objects are not going to change, a registry with compile-time objects would do.
The Registry is a pattern for handling global objects in a fashion similar to the singleton.
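One common compile-time variant of this, which matches the asker's wish to avoid a central header, is self-registration: each .cpp file defines its variable and a small static registrar object whose constructor adds the variable to the registry during static initialization. A minimal sketch, assuming a `MyType` with just a name field (the names `registry` and `Registrar` are illustrative):

```cpp
#include <string>
#include <vector>

// Stand-in for the asker's MyType.
struct MyType {
    std::string name;
};

// Meyers-singleton registry: a function-local static avoids the static
// initialization order fiasco when registrars in other translation units
// call it before this file's globals would otherwise be initialized.
std::vector<MyType*>& registry() {
    static std::vector<MyType*> instances;
    return instances;
}

// Helper whose constructor runs during static initialization; placing one
// next to each global in its own .cpp registers that global with no
// central list anywhere.
struct Registrar {
    explicit Registrar(MyType& obj) { registry().push_back(&obj); }
};

// These pairs would normally live in separate files, e.g. global1.cpp:
MyType global1{"first"};
static Registrar reg1{global1};

// ...and global2.cpp:
MyType global2{"second"};
static Registrar reg2{global2};
```

Note that across translation units the registration order is unspecified, so code iterating over `registry()` should not depend on any particular ordering.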
Is it possible to display all variables available inside a Velocity Template?
Let's say one developer passes two values to a template: $headline and $body. Another developer has to deal with those two variables. How would they know the names of those variables?
Right now we use three approaches:
we simply tell each other which variables are present in a template
we agreed with all developers that all data passed to a template should be wrapped in one map ($data)
the developer who passes variables to a template has to update the template as well and describe all the fields available in it.
I'm looking for a way to do this correctly. Right now I'm not really satisfied with any of these approaches, but the second looks most preferable.
The short answer is:
$context.keys
The variables and "tools" are accessed by the templates via the Velocity "context". If the context tool is available, you can request the list of variables via $context.keys. If not, you need to add the ContextTool to the context; how this is done depends on your application.
Although it's technically possible to list all keys in the context, I'm not sure it's also good practice in the situation you describe.
Suppose I am recording data and want to associate some number of data elements, such that each recorded set always has a fixed composition, i.e. no missing fields.
Most of my experience as a programmer is with Ada or C/C++ variants. In Ada, I would use a record type and aggregate assignment, so that when the record type was updated with new fields, anyone using the record would be notified by the compiler. In C++, chances are I would use a storage class and constructor to do something similar.
What is the appropriate way to handle a similar situation in Python? Is this a case where classes are the proper answer, or is there a lighter weight analog to the Ada record?
An additional thought, both Ada records and C++ constructors allow for default initialization values. Is there a Python solution to the above question which provides that capability as well?
A namedtuple (from the collections module) may suit your purposes. It is basically a tuple that allows reference to fields by name as well as by index position, so it's a fixed structure of ordered, named fields. It's also lightweight: it declares empty __slots__, so instances don't carry a per-instance dictionary.
A typical use case is to define a point:
from collections import namedtuple
Point = namedtuple("Point", "x y")
p1 = Point(x=11, y=22)
Its main drawback is that, being a tuple, it is immutable. But there is a method, _replace, which allows you to replace one or more fields with new values; note that a new instance is created in the process.
There is also a mutable version of namedtuple available at ActiveState Python Recipes 576555 called records which permits direct field changes. I've used it and can vouch that it works well.
A dictionary is the classical way to do this in Python. It can't enforce that a key must exist, though, and doesn't provide default values.
config = {'maxusers': 20, 'port': 2345, 'quota': 20480000}
collections.namedtuple() is another option in versions of Python that support it.