Terraform: De-duping a map object - amazon-web-services

I'm creating a map object by looping through a list of elements. How do I enforce unique keys in the map such that there are no duplicates?
For example, I have something like below, but I get the error: │ Local value local.uri_bucket_map cannot use its own result as part of its expression.
locals {
  inference = [
    {
      "data_uri" : "s3://my_bucket/model.tar.gz",
      "model_uri" : "account_num.dkr.ecr.us-west-2.amazonaws.com/my_container:latest"
    },
    {
      "data_uri" : "s3://my_bucket/model.tar.gz",
      "model_uri" : "account_num.dkr.ecr.us-west-2.amazonaws.com/my_container2:latest"
    },
    {
      "data_uri" : "s3://my_bucket_3/model.tar.gz",
      "model_uri" : "account_num.dkr.ecr.us-west-2.amazonaws.com/my_container3:latest"
    }
  ]
  uri_bucket_map = {
    for x in local.inference : x.data_uri =>
    split("/", split("//", x.data_uri)[1])[0]
  }
}
Expected map:
{
  "s3://my_bucket/model.tar.gz"   -> "my_bucket"
  "s3://my_bucket_3/model.tar.gz" -> "my_bucket_3"
}

Without many changes to your original code, you can obtain what you want as follows:
uri_bucket_map = {
  for x in distinct(local.inference[*].data_uri) : x =>
  split("/", split("//", x)[1])[0]
}
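The splat expression collects every data_uri into a list, and distinct() drops the repeated entries before the for expression builds the map, so each key appears only once. A sketch of the intermediate values, using the bucket names from the question:

```terraform
# local.inference[*].data_uri
# => ["s3://my_bucket/model.tar.gz", "s3://my_bucket/model.tar.gz", "s3://my_bucket_3/model.tar.gz"]

# distinct(local.inference[*].data_uri)
# => ["s3://my_bucket/model.tar.gz", "s3://my_bucket_3/model.tar.gz"]

# For each remaining URI, the bucket name is extracted in two steps:
# split("//", "s3://my_bucket/model.tar.gz")[1]  => "my_bucket/model.tar.gz"
# split("/", "my_bucket/model.tar.gz")[0]        => "my_bucket"
```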


Terraform Variable looping to generate properties

I have to admit, this is the first time I have to ask something that I don't even know how to ask for or explain myself, so here is my code.
It's worth explaining that, for specific reasons, I CANNOT change the output resource; thus, the metadata sent to the resource has to stay as-is, otherwise it will cause a recreate and I don't want that.
Currently I have Terraform code that uses static/fixed variables like this:
user1_name = "Ed"
user1_Age  = "10"
user2_name = "Mat"
user2_Age  = "20"
and then those hard-typed variables get used in several places, but most importantly they are passed as metadata to instances, like so:
resource "google_compute_instance_template" "mytemplate" {
  ...
  metadata = {
    othervalues = var.other
    user1_name  = var.user1_name
    user1_Age   = var.user1_Age
    user2_name  = var.user2_name
    user2_Age   = var.user2_Age
  }
  ...
}
I am not an expert on Terraform, thus asking, but I know for a fact this is 100% ugly and wrong, and I need to use lists or arrays or whatever, so I am changing my declarations to this:
users = [
  { "name" : "yo", "age" : "10", "last" : "other" },
  { "name" : "El", "age" : "20", "last" : "other" }
]
but then, how do I go about generating the same result for that resource? The resulting resource still has to have the same metadata as shown.
Assuming, of course, that the order of the users will be used as the "index" of the value: the first one gets user1_name and so on.
I assume I need to use a for_each loop in there, but I can't figure out how to get around a loop inside the properties of a resource.
Not sure if I'm making myself clear on this, probably not, but I didn't find a better way to explain it.
From your example it seems like your intent is for these to all ultimately appear as a single map with keys built from two parts.
Your example doesn't make clear what the relationship is between user1 and Ed, though: your first example shows that "user1's" name is Ed, but in the data structure you want to create there is only one "name", and it isn't clear to me whether that name would replace "user1" or "Ed" from your first example.
Instead, I'm going to take a slightly different variable structure which still maintains both the key like "user1" and the name attribute, like this:
variable "users" {
  type = map(object({
    name = string
    age  = number
  }))
}
locals {
  # First we'll transform the map of objects into a
  # flat set of key/attribute/value objects, because
  # that's easier to work with when we generate the
  # flattened map below.
  users_flat = flatten([
    for key, user in var.users : [
      for attr, value in user : {
        key   = key
        attr  = attr
        value = value
      }
    ]
  ])
}
resource "google_compute_instance_template" "mytemplate" {
  metadata = merge(
    {
      othervalues = var.other
    },
    {
      for vo in local.users_flat : "${vo.key}_${vo.attr}" => vo.value
    }
  )
}
local.users_flat here is an intermediate data structure that flattens the two-level hierarchy of keys and object attributes from the input. It would be shaped something like this:
[
  { key = "user1", attr = "name", value = "Ed" },
  { key = "user1", attr = "age", value = 10 },
  { key = "user2", attr = "name", value = "Mat" },
  { key = "user2", attr = "age", value = 20 },
]
The merge call in the metadata argument then merges a directly-configured mapping of "other values" with a generated mapping derived from local.users_flat, shaped like this:
{
  "user1_name" = "Ed"
  "user1_age"  = 10
  "user2_name" = "Mat"
  "user2_age"  = 20
}
From the perspective of the caller of the module, the users variable should be defined with the following value in order to get the above results:
users = {
  user1 = {
    name = "Ed"
    age  = 10
  }
  user2 = {
    name = "Mat"
    age  = 20
  }
}
metadata is not a block, but a regular attribute of type map. So you can do:
# It would be better to use a map, not a list, for users:
variable "users" {
  default = {
    user1 = { "name" : "yo", "age" : "10", "last" : "other" }
    user2 = { "name" : "El", "age" : "20", "last" : "other" }
  }
}
resource "google_compute_instance_template" "mytemplate" {
  for_each = var.users
  metadata = each.value
  #...
}
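Note that for_each here creates one template per user, each carrying only that user's attributes as metadata. If you instead need a single template whose metadata keeps the original user1_name-style keys, a for expression over the same map is one way to sketch it (key and attribute names are assumed from the question; the merge(list...) expansion requires Terraform 0.15+):

```terraform
resource "google_compute_instance_template" "mytemplate" {
  metadata = merge(
    { othervalues = var.other },
    # Build one small map per user, then merge them all together.
    merge([
      for key, user in var.users : {
        "${key}_name" = user.name
        "${key}_Age"  = user.age
      }
    ]...)
  )
}
```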

Inconsistent conditional result types with locals templatization

I have this code, which works if I remove version from the msr code block. But if I add it, this error pops up. I've tried so far to interpolate the conditional and to change the types of the variables. No luck.
locals {
  mke_launchpad_tmpl = {
    apiVersion = "API"
    kind       = "mke"
    spec = {
      mke = {
        version       = var.mke_version
        adminUsername = "admin"
        adminPassword = var.admin_password
        installFlags = [
          "--default-node-orchestrator=kubernetes",
          "--san=${module.masters.lb_dns_name}",
        ]
        licenseFilePath = var.license_file_path
        upgradeFlags = [
          "--force-minimums",
          "--force-recent-backups",
        ]
      }
      mcr = {
        version = var.mcr_version
      }
      msr   = {}
      hosts = concat(local.managers, local.workers, local.windows_workers)
    }
  }
  msr_launchpad_tmpl = {
    apiVersion = "API"
    kind       = "mke+msr"
    spec = {
      mke = {
        version       = var.mke_version
        adminUsername = "admin"
        adminPassword = var.admin_password
        installFlags = [
          "--default-node-orchestrator=kubernetes",
          "--san=${module.masters.lb_dns_name}",
        ]
        licenseFilePath = var.license_file_path
        upgradeFlags = [
          "--force-minimums",
          "--force-recent-backups",
        ]
      }
      mcr = {
        version = var.mcr_version
      }
      msr = {
        version = var.msr_version
        installFlags = [
          "--ucp-insecure-tls",
          "--dtr-external-url ${module.msrs.lb_dns_name}",
        ]
      }
      hosts = concat(local.managers, local.msrs, local.workers, local.windows_workers)
    }
  }
  launchpad_tmpl = var.msr_count > 0 ? local.msr_launchpad_tmpl : local.mke_launchpad_tmpl
}
Expected behaviour:
To run plan and apply normally, and get the output at the end so I can change it for the launchpad and install everything by the versions from this output, which I can pass in terraform.tfvars.
Actual behaviour:
Error: Inconsistent conditional result types
on main.tf line 179, in locals:
179: launchpad_tmpl = var.msr_count > 0 ? local.msr_launchpad_tmpl : local.mke_launchpad_tmpl
|----------------
| local.mke_launchpad_tmpl is object with 3 attributes
| local.msr_launchpad_tmpl is object with 3 attributes
The true and false result expressions must have consistent types. The given
expressions are object and object, respectively.
Unfortunately this is a situation where Terraform doesn't really know how to explain the problem fully because the difference between your two result types is in some details in deeply nested attributes.
However, what Terraform is referring to here is that your local.msr_launchpad_tmpl and local.mke_launchpad_tmpl values have different object types, because an object type in Terraform is defined by the attribute names and associated types, and your msr attributes are not consistent across both objects.
One way you could make this work is to explicitly add the msr attributes to local.msr_launchpad_tmpl but set them to null, so that the object types will be compatible but the unneeded attributes will still be left without a specific value:
msr = {
  version      = null
  installFlags = null
}
This difference in msr's type was the only type difference I noticed between the two expressions, although I might have missed another example. If so, the general idea here is to make sure that both of the values have the same object structure, so that their types will be compatible with one another.
Terraform requires the true and false expressions in a conditional to have compatible types because it uses the common type as the return type for the conditional during type checking. However, in situations like this where you might intentionally want to use a different type for each case, you can use other language constructs that will allow Terraform to successfully complete type checking in other ways.
For example, if you combine both of those object values into a single object container then Terraform will be able to see that each of the two top-level attributes has a different type and see exactly what type each one has:
locals {
  launchpad_tmpls = {
    mke = {
      apiVersion = "API"
      kind       = "mke"
      spec = {
        mke = {
          version       = var.mke_version
          adminUsername = "admin"
          adminPassword = var.admin_password
          installFlags = [
            "--default-node-orchestrator=kubernetes",
            "--san=${module.masters.lb_dns_name}",
          ]
          licenseFilePath = var.license_file_path
          upgradeFlags = [
            "--force-minimums",
            "--force-recent-backups",
          ]
        }
        mcr = {
          version = var.mcr_version
        }
        msr   = {}
        hosts = concat(local.managers, local.workers, local.windows_workers)
      }
    }
    msr = {
      apiVersion = "API"
      kind       = "mke+msr"
      spec = {
        mke = {
          version       = var.mke_version
          adminUsername = "admin"
          adminPassword = var.admin_password
          installFlags = [
            "--default-node-orchestrator=kubernetes",
            "--san=${module.masters.lb_dns_name}",
          ]
          licenseFilePath = var.license_file_path
          upgradeFlags = [
            "--force-minimums",
            "--force-recent-backups",
          ]
        }
        mcr = {
          version = var.mcr_version
        }
        msr = {
          version = var.msr_version
          installFlags = [
            "--ucp-insecure-tls",
            "--dtr-external-url ${module.msrs.lb_dns_name}",
          ]
        }
        hosts = concat(local.managers, local.msrs, local.workers, local.windows_workers)
      }
    }
  }
  launchpad_tmpl = local.launchpad_tmpls[var.msr_count > 0 ? "msr" : "mke"]
}
Because Terraform can see the exact types of both local.launchpad_tmpls["msr"] and local.launchpad_tmpls["mke"], it will be able to determine the exact object type for local.launchpad_tmpl in each case, even though the two have different types.
There is one exception to this: if var.msr_count is unknown during planning (that is, if you've computed it based on a resource attribute that won't be known until the apply step) then Terraform will be left in a situation where it can't infer a specific type for local.launchpad_tmpl, and so Terraform will treat it as an "unknown value of unknown type", which effectively means that any uses you make of it elsewhere in the configuration won't be type checked during planning and so might fail at apply time. However, this caveat won't apply as long as var.msr_count is set to a static value you've specified directly in your configuration.
I ran into this issue with TF 0.14 while trying to conditionally set replication_configuration in a call to aws_s3_bucket:
replication_configuration = var.replication ? local.replication_configuration : {}
var.replication was defined as a bool, and local.replication_configuration looked something like this:
replication_configuration = {
  role = "arn:aws:iam::${account}:role/${name}-s3-replication"
  rules = [
    {
      id                               = "everything-without-filters"
      status                           = "Enabled" # Enabled or Disabled
      priority                         = 10
      delete_marker_replication_status = "Enabled"
      destination = {
        bucket        = "arn:aws:s3:::${name}-delete8-dr"
        storage_class = "STANDARD_IA"
      }
    }
  ]
}
Note: The contents of the block above are not real working code; they are provided only to illustrate the points below.
{} was not a close enough match to local.replication_configuration as it was defined, so the conditional failed, but the module for aws_s3_bucket errored when passed a null, so it was not possible to approach it this way, either.
Ultimately, I solved this by writing a conditional without using conditionals:
locals {
  repl_bool = {
    true  = local.replication_configuration
    false = {}
  }
}
...
module "s3-bucket" {
  ...
  replication_configuration = local.repl_bool[var.replication]
  ...
}
Writing code like the above really doesn't leave me with a good feeling. It looks awkward to me, and definitely has a hacky feel to it. But we needed to be able to write TF that only used one module, with or without replication, and this was a way to do that.
I ran into a similar error (The given expressions are list and list).
It took quite a bit of trial and error to figure out another hacky workaround.
Here is a modified simple non-working example.
output "wont_work" {
  value = false ? {
    foo: "foo",
    bar: {
      baz: "foo",
    },
  } : {}
}
And here is my workaround
output "works" {
  value = try(false ? {
    foo: "foo",
    bar: {
      baz: "foo",
    },
  } : throw_error(), {})
}
When the condition is false, the expression reduces step by step:
value = try(false ? { foo: "foo", bar: { baz: "foo" } } : throw_error(), {})
==> value = try(throw_error(), {})
==> value = {}
Because throw_error() is not a defined function, evaluating it fails, try() swallows the error, and the fallback {} is returned.

Is there an SQL LIKE-style statement available in MongoDB

I want to filter the decimal values in a child object of a JSON document. In the sample code below I want to apply a LIKE-style function to get the JSON values whose keys match (t1, t2) in the sample file below.
Sample code:
db.getCollection('temp').find({},{"temp.text./.*t.*/.value":1})
Sample Json file:
{
  "_id" : 0,
  "temp" : {
    "text" : {
      "t1" : { "value" : "960" },
      "t2" : { "value" : "959" },
      "t3" : { "value" : "961" },
      "t4" : { "value" : "962" },
      "t5" : { "value" : "6.0" }
    }
  }
}
MongoDB doesn't have a way to filter field names directly other than projection, which is exact match only.
However, using aggregation you can use $objectToArray, which would convert the object {"t1" : {"value" : "960"}} to [{"k":"t1","v":{"value":"960"}}]. You can then filter based on the value of k, and use $arrayToObject to convert the entries left back into an object.
.aggregate([
  {$addFields: {
    "temp.text": {
      $arrayToObject: {
        $filter: {
          input: {$objectToArray: "$temp.text"},
          cond: {
            $regexMatch: {
              input: "$$this.k",
              regex: /t/
            }
          }
        }
      }
    }
  }}
])
Playground
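As written, the regex /t/ matches every key in the sample (t1 through t5). If the goal is to keep only t1 and t2 specifically, a tighter anchored pattern could be used in the same pipeline; a sketch, assuming MongoDB 4.2+ for $regexMatch:

```javascript
// Keep only the keys t1 and t2 in temp.text
db.getCollection('temp').aggregate([
  {$addFields: {
    "temp.text": {
      $arrayToObject: {
        $filter: {
          input: {$objectToArray: "$temp.text"},
          cond: {$regexMatch: {input: "$$this.k", regex: /^t[12]$/}}
        }
      }
    }
  }}
])
```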

Converting a BSON document containing an array to JSON, removes the array from the converted JSON (C++)

I have a document that looks like this:
{
  "_id" : ObjectId("5bd37e0128f41363c0006ac2"),
  "source" : "Forge",
  "data" : [
    {
      "symbol" : "EURUSD",
      "bid" : 1.14021,
      "ask" : 1.14024,
      "price" : 1.14023,
      "timestamp" : 1540587008
    }
  ]
}
I want to get the data part out of the BSON document, which I'm doing with this code:
auto dataDocument = view["data"].get_value().get_document();
auto textMessage = bsoncxx::to_json(dataDocument);
However, when the data field is an array as in the example, the output is this:
{
  "0": {
    "symbol": "EURUSD",
    "bid": 1.1405199999999999783,
    "ask": 1.1405300000000000438,
    "price": 1.1405300000000000438,
    "timestamp": 1540580136
  }
}
Instead of this (correct):
[
  {
    "symbol": "EURUSD",
    "bid": 1.14056,
    "ask": 1.14057,
    "price": 1.14057,
    "timestamp": 1540580927
  }
]
How come the brackets are removed and a "0" field is put there instead?
If I do a to_json on the whole document, the array is kept; it's only when I do to_json on the data field, which is an array, that the error happens.
Any ideas?
Update, here's a working example which reproduces my issue:
#include <mongocxx/instance.hpp>
#include <bsoncxx/json.hpp>
#include <mongocxx/client.hpp>
#include <memory>

int main()
{
    mongocxx::instance inst{};
    auto conn = mongocxx::client{ mongocxx::uri{} };
    auto collection = conn["QuantBot"]["test"];
    auto jsonDoc = R"(
        {
            "source" : "Forge",
            "data" : [
                {
                    "symbol" : "EURUSD",
                    "bid" : 1.13875,
                    "ask" : 1.13925,
                    "price" : 1.139,
                    "timestamp" : 1540758149
                }
            ]
        }
    )";
    auto bsonDocument = bsoncxx::from_json(jsonDoc);
    collection.insert_one(bsonDocument.view());
    auto cursor = std::make_unique<mongocxx::cursor>(collection.find({}));
    auto cursorIt = std::make_unique<mongocxx::cursor::iterator>(cursor->begin());
    auto view = bsoncxx::document::view(**cursorIt);
    auto dataDocument = view["data"].get_value().get_document();
    auto textMessage = bsoncxx::to_json(dataDocument);
}
I believe the problem is here: get_value().get_document(). Try making that say get_value().get_array() instead. Also note that you should be checking the type of the value returned by get_value before calling any methods on it. This is working more by good luck than by good design. The reason that you are seeing a literal zero there is because BSON arrays are represented as documents with integral keys. You have, effectively, cast an array to a document, so it is printing as a document with numeric keys, instead of an array as intended.

wxJSONOBJECTARRAY - accessing the required key value pair

I have just started working with JSON objects with the help of wxJSON. It works fine when I work with a single JSON object (example given below).
{
"wxWidgets" : 1
{
"Version" :1
{
"Major" : 2,
},
"Languages" :
[
"C++",
"Python",
"Perl",
"C#/Net"
]
}
}
However, when there are an array of values in the JSON object, I fail to parse the JSON object and extract the required key value pair.
Example :
[ {
"wxWidgets" : 1
{
"Version" :1
{
"Major" : 2,
},
"Languages" :
[
"C++",
"Python",
"Perl",
"C#/Net"
]
}
}
{
"wxWidgets" : 2
{
"Version" :2
{
"Major" : 3
},
"Languages" :
[
"java",
"j2ee",
"j2me"
]
}
}
]
when it is a single object, I access the elements like this:
wxJSONValue root; //this will have the json object created above.
wxString c = root[_T("wxWidgets")][_T("Version")].AsString(); // to get the version.
How do I do it in case there is an array of JSONOBJECTs as given in the second example?
Since you're working with an array,
root[_T("wxWidgets")][_T("Version")]
doesn't dereference the first object of the array (index 0 in this case) in the JSON document. The code needs to reflect that you're using an array now, so:
root[0][_T("wxWidgets")][_T("Version")]
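To read every object in the array rather than just the first, you can loop over the indices; a sketch, assuming wxJSON's wxJSONValue::Size() method and that root holds the parsed top-level array:

```cpp
// Iterate over each object in the top-level JSON array
// (illustration based on the question's structure).
for (int i = 0; i < root.Size(); ++i) {
    wxString version = root[i][_T("wxWidgets")][_T("Version")].AsString();
    // ... use `version` for each entry ...
}
```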