I have two different scenarios, one with public subnets and one with private.
For both, I would like to apply some combination of a for expression and zipmap to produce a single map object. You can assume I have checked the ordering of the two input maps/lists and that they align.
Using the public-subnet as the first example, I have one cidr for each user:
pub_cidr_map = {
"user2" = "10.0.8.0/21"
"user4" = "10.0.24.0/21"
"user1" = "10.0.0.0/21"
"user3" = "10.0.16.0/21"
}
pub_id_list = [
"subnet-666666662ee6f3442",
"subnet-6666666696b92d895",
"subnet-66666666cbaa4bfb3",
"subnet-6666666655a09d064",
]
I would like this to look like this so I can access both values with a single key:
pub_lookup_map = {
"user2" = ["10.0.8.0/21", "subnet-666666662ee6f3442"]
"user4" = ["10.0.24.0/21", "subnet-6666666696b92d895"]
"user1" = ["10.0.0.0/21", "subnet-66666666cbaa4bfb3"]
"user3" = ["10.0.16.0/21", "subnet-6666666655a09d064"]
}
I'd also like to accomplish something similar with my private subnets, which are apportioned two per user:
priv_cidr_map = {
"user1" = [
"10.0.96.0/20",
"10.0.112.0/20",
]
"user2" = [
"10.0.160.0/20",
"10.0.176.0/20",
]
"user3" = [
"10.0.64.0/20",
"10.0.80.0/20",
]
"user4" = [
"10.0.128.0/20",
"10.0.144.0/20",
]
}
priv_id_list = [
"subnet-666666662f611f9a5",
"subnet-6666666689f1eff5e",
"subnet-66666666a3fe6efb9",
"subnet-66666666faf4a62a8",
"subnet-666666668f1442700",
"subnet-66666666328a4b134",
"subnet-666666661b147a933",
"subnet-666666661ce02c330"
]
I would like this to look like
priv_lookup_map = {
"user1" = [
["10.0.96.0/20","subnet-666666662f611f9a5"]
["10.0.112.0/20","subnet-6666666689f1eff5e"]
]
"user2" = [
["10.0.160.0/20","subnet-66666666a3fe6efb9"]
["10.0.176.0/20","subnet-66666666faf4a62a8"]
]
"user3" = [
["10.0.64.0/20","subnet-666666668f1442700"]
["10.0.80.0/20","subnet-66666666328a4b134"]
]
"user4" = [
["10.0.128.0/20","subnet-666666661b147a933"]
["10.0.144.0/20","subnet-666666661ce02c330"]
]
}
I am open to any other structures someone might think are useful; the use case here is to provision subnets and EIPs as part of a separate, stateful deployment of a VPC prior to the deployment of the resources (like EC2, RDS) that will reside within these ranges.
I cannot guarantee that this solution is correct, but I will share my attempt anyway in the hope that it is helpful.
For the public subnets:
locals {
pub_lookup_map = {for key, value in zipmap(keys(var.pub_cidr_map), var.pub_id_list) : key => [var.pub_cidr_map[key], value] }
}
This will produce the following output:
pub = {
"user1" = [
"10.0.0.0/21",
"subnet-666666662ee6f3442",
]
"user2" = [
"10.0.8.0/21",
"subnet-6666666696b92d895",
]
"user3" = [
"10.0.16.0/21",
"subnet-66666666cbaa4bfb3",
]
"user4" = [
"10.0.24.0/21",
"subnet-6666666655a09d064",
]
}
The problem with this output is that, as I noted in the comments, iteration over the keys of a map happens in lexicographical order. This means that user1 will be mapped to the first entry from pub_id_list, user2 to the second entry, and so on. Even though you suggested in the comments that you've checked this ordering issue and not to worry about it, please double-check this solution before using it.
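If restructuring the input is an option, one way to sidestep the ordering concern entirely is to key the subnet IDs by user as well, so the pairing is explicit rather than positional. This is only a sketch under that assumption; pub_id_map is a hypothetical variable that does not appear in the original question:

```hcl
# Hypothetical input: subnet IDs keyed by user instead of a bare list,
# so no positional alignment between a map and a list is needed.
variable "pub_id_map" {
  type = map(string)
  default = {
    "user1" = "subnet-666666662ee6f3442"
    "user2" = "subnet-6666666696b92d895"
    "user3" = "subnet-66666666cbaa4bfb3"
    "user4" = "subnet-6666666655a09d064"
  }
}

locals {
  # Pair each user's CIDR with its subnet ID by key, not by position.
  pub_lookup_map = {
    for k, cidr in var.pub_cidr_map : k => [cidr, var.pub_id_map[k]]
  }
}
```

With this shape, reordering either input cannot silently mispair a CIDR with another user's subnet ID.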
For the private subnets:
locals {
cidr_subnet_id = zipmap(flatten(values(var.priv_cidr_map)), var.priv_id_list)
priv_lookup_map = {for key, value in var.priv_cidr_map: key => [ for cidr in value: [cidr, local.cidr_subnet_id[cidr]]]}
}
Please note, that I'm using an intermediary local variable to make my code readable. The value for cidr_subnet_id will be:
cidr_subnet_id = {
"10.0.112.0/20" = "subnet-6666666689f1eff5e"
"10.0.128.0/20" = "subnet-666666661b147a933"
"10.0.144.0/20" = "subnet-666666661ce02c330"
"10.0.160.0/20" = "subnet-66666666a3fe6efb9"
"10.0.176.0/20" = "subnet-66666666faf4a62a8"
"10.0.64.0/20" = "subnet-666666668f1442700"
"10.0.80.0/20" = "subnet-66666666328a4b134"
"10.0.96.0/20" = "subnet-666666662f611f9a5"
}
This is essentially a map between each CIDR and its subnet id. This works correctly because Terraform returns the values of priv_cidr_map in lexicographical key order, so flatten(values(var.priv_cidr_map)) yields the CIDRs in exactly the order in which the IDs appear in priv_id_list. I think this somewhat answers your question in the comments, "if both maps are in lexicographical order, isn't this a non-issue?"
The output for priv_lookup_map will be:
priv = {
"user1" = [
[
"10.0.96.0/20",
"subnet-666666662f611f9a5",
],
[
"10.0.112.0/20",
"subnet-6666666689f1eff5e",
],
]
"user2" = [
[
"10.0.160.0/20",
"subnet-66666666a3fe6efb9",
],
[
"10.0.176.0/20",
"subnet-66666666faf4a62a8",
],
]
"user3" = [
[
"10.0.64.0/20",
"subnet-666666668f1442700",
],
[
"10.0.80.0/20",
"subnet-66666666328a4b134",
],
]
"user4" = [
[
"10.0.128.0/20",
"subnet-666666661b147a933",
],
[
"10.0.144.0/20",
"subnet-666666661ce02c330",
],
]
}
final List _icons = [
[
'IDENTITY',
FontAwesomeIcons.camera,
],
[
'SPECIES',
FontAwesomeIcons.tree,
],
[
'ARTICLES',
FontAwesomeIcons.bookOpen,
],
];
You can try
_icons[0][1] //FontAwesomeIcons.camera
but it seems like you could use a map instead of a list:
final Map _icons = { 'IDENTITY': FontAwesomeIcons.camera, 'SPECIES': FontAwesomeIcons.tree, 'ARTICLES': FontAwesomeIcons.bookOpen };
_icons['IDENTITY']; // FontAwesomeIcons.camera
I am struggling with passing a list variable to an event pattern in Terraform. Here is what I currently have and the error I am struggling with:
Structure:
main.tf
variable.tf
input.tfvars
---modules
------cwr-cwt
---------main.tf
---------variable.tf
My module:
resource "aws_cloudwatch_event_rule" "somename" {
.
.
.
event_pattern = <<PATTERN
{ "source":[
"aws.config"
],
"detail-type":[
"Config Configuration Item Change"
],
"detail":{
"messageType":[
"ConfigurationItemChangeNotification"
],
"configurationItem":{
"resourceType":${var.resource_type},
"configurationItemStatus":[
"ResourceDiscovered"
],
"configuration":{
"instanceType":${var.instance_type_list}
}
}
}
}
PATTERN
}
.
.
.
This is what I am passing from main.tf:
module "somename" {
source = "./modules/cwr-cwt"
resource_type = [ "AWS::EC2::Instance" ]
sns_topic_arn = aws_sns_topic.sns-euwe01-expense-watcher-group-01.arn
rule_name = "somename"
instance_type_list = [
"t2.2xlarge","t3.2xlarge","t3a.2xlarge","t4g.2xlarge","m2.2xlarge","m2.4xlarge","m3.2xlarge","m4.2xlarge","m4.4xlarge",
"m4.10xlarge","m4.16xlarge ","cr1.8xlarge","r3.2xlarge","r3.4xlarge","r3.8xlarge","r4.2xlarge","r4.8xlarge","r4.16xlarge",
"r5.2xlarge","r5.4xlarge","r5.8xlarge","r5.12xlarge","r5.16xlarge","r5.24xlarge","r5.metal","r5a.2xlarge", "r5a.4xlarge",
"r5a.8xlarge","r5a.12xlarge","r5a.16xlarge","r5a.24xlarge","r5b.2xlarge","r5b.4xlarge","r5b.8xlarge","r5b.12xlarge","r5b.16xlarge",
"r5b.24xlarge","r5b.metal","r5d.2xlarge","r5d.4xlarge","r5d.8xlarge","m5ad.12xlarge","m5ad.16xlarge","m5a.16xlarge","m5a.24xlarge",
"r5d.12xlarge","r5d.16xlarge","r5d.24xlarge","r5d.metal","r5ad.2xlarge","r5ad.4xlarge","r5ad.8xlarge","r5ad.12xlarge","x2gd.8xlarge",
"r5ad.16xlarge","r5ad.24xlarge","r6g.2xlarge","r6g.4xlarge","r6g.8xlarge","r6g.12xlarge","r6g.16xlarge","r6gd.2xlarge","r6gd.4xlarge",
"r6gd.8xlarge","r6gd.12xlarge","r6gd.16xlarge","x1.16xlarge","x1.32xlarge","x1e.2xlarge","x1e.4xlarge","g3.8xlarge","g3.16xlarge",
"x1e.8xlarge","x1e.16xlarge","x1e.32xlarge","i2.2xlarge","i2.4xlarge","i2.8xlarge","i3.2xlarge","i3.4xlarge","i3.8xlarge",
"i3.16xlarge","i3.metal","i3en.2xlarge","i3en.3xlarge","i3en.6xlarge","i3en.12xlarge","i3en.24xlarge","i3en.metal","hi1.4xlarge",
"hs1.8xlarge" ,"c3.2xlarge","c3.4xlarge","c3.8xlarge","c4.2xlarge","c4.4xlarge"
]
}
But running this gives me the errors below:
var.instance_type_list is list of string with n elements
Cannot include the given value in a string template: string required.
If I don't create a module and instead use the pattern directly, it works just fine... :(
Note: I don't want to build a JSON string by hand to pass to the event pattern.
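The error occurs because a list cannot be interpolated directly into a string template. One common fix, which I believe addresses this case, is to wrap the list variables in jsonencode(), which renders a Terraform list as a valid JSON array inside the heredoc:

```hcl
resource "aws_cloudwatch_event_rule" "somename" {
  # ... other arguments unchanged ...
  event_pattern = <<PATTERN
{
  "source": ["aws.config"],
  "detail-type": ["Config Configuration Item Change"],
  "detail": {
    "messageType": ["ConfigurationItemChangeNotification"],
    "configurationItem": {
      "resourceType": ${jsonencode(var.resource_type)},
      "configurationItemStatus": ["ResourceDiscovered"],
      "configuration": {
        "instanceType": ${jsonencode(var.instance_type_list)}
      }
    }
  }
}
PATTERN
}
```

This keeps the heredoc but lets Terraform serialize each list; an alternative is to build the entire pattern with a single jsonencode() call over a Terraform object, which avoids the template interpolation altogether.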
In Terraform, I need to transform my input data structure from e.g.:
vip_lists = [
["1.0.1.1", "1.0.1.2", "1.0.1.3", "1.0.1.4"]
["1.0.2.1", "1.0.2.2", "1.0.2.3", "1.0.2.4"]
["1.0.0.1", "1.0.0.2", "1.0.0.3", "1.0.0.4"]
]
to produce an output like this:
vip_sets = [
["1.0.1.1", "1.0.2.1", "1.0.0.1"]
["1.0.1.2", "1.0.2.2", "1.0.0.2"]
["1.0.1.3", "1.0.2.3", "1.0.0.3"]
["1.0.1.4", "1.0.2.4", "1.0.0.4"]
]
So essentially, I need to take my input list of lists and create an output which is again a list of lists, but whose 0th list contains the 0th elements from each of the input lists, then the same again for the 1st, and so on.
I can't know in advance how many lists will be in the input or how long they will be, but we can assume the lists will all be the same length if that helps.
I've tried pretty much everything I can think of and searched the web, but with no luck. All suggestions would be very welcome!
I once wrote a version of this for lists of lists that are not the same length, for one of our modules on github.com/mineiros-io, where we used such transformations to create two-dimensional sets of resources using count. (Those are not in use at the moment, as we converted them to maps for use with resource-level for_each.)
locals {
matrix = [
["1.0.1.1", "1.0.1.4"],
["1.0.2.1", "1.0.2.2", "1.0.2.3", "1.0.2.4"],
["1.0.0.1", "1.0.0.3", "1.0.0.4"]
]
row_lengths = [
for row in local.matrix : length(row)
]
max_row_length = max(0, local.row_lengths...)
output = [
for i in range(0, local.max_row_length) : [
for j, _ in local.matrix : try(local.matrix[j][i], null)
]
]
output_compact = [
for i in range(0, local.max_row_length) : compact([
for j, _ in local.matrix : try(local.matrix[j][i], null)
])
]
}
output "matrix" {
value = local.output
}
output "compact" {
value = local.output_compact
}
which can handle dynamic list sizes and output them either compacted or padded with null values:
Outputs:
compact = [
[ "1.0.1.1", "1.0.2.1", "1.0.0.1" ],
[ "1.0.1.4", "1.0.2.2", "1.0.0.3" ],
[ "1.0.2.3", "1.0.0.4" ],
[ "1.0.2.4" ],
]
matrix = [
[ "1.0.1.1", "1.0.2.1", "1.0.0.1" ],
[ "1.0.1.4", "1.0.2.2", "1.0.0.3" ],
[ null, "1.0.2.3", "1.0.0.4" ],
[ null, "1.0.2.4", null ],
]
I know an answer is already accepted, but maybe someone can still make use of this dynamic version.
This is sort of horrible, but it works (although I haven't tested what it would do if vip_lists were empty; probably crash, as I'm indexing vip_lists[0] without checking):
locals {
vip_lists = [
["1.0.1.1", "1.0.1.2", "1.0.1.3", "1.0.1.4"],
["1.0.2.1", "1.0.2.2", "1.0.2.3", "1.0.2.4"],
["1.0.0.1", "1.0.0.2", "1.0.0.3", "1.0.0.4"]
]
vip_sets = [
  for i in range(0, length(local.vip_lists[0])) : [
    for j in range(0, length(local.vip_lists)) : local.vip_lists[j][i]
  ]
]
}
output "vip_sets" {
value = local.vip_sets
}
$ terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
vip_sets = [
[
"1.0.1.1",
"1.0.2.1",
"1.0.0.1",
],
[
"1.0.1.2",
"1.0.2.2",
"1.0.0.2",
],
[
"1.0.1.3",
"1.0.2.3",
"1.0.0.3",
],
[
"1.0.1.4",
"1.0.2.4",
"1.0.0.4",
],
]
This is my JSON data, which is stored into cosmos db
{
"id": "e064a694-8e1e-4660-a3ef-6b894e9414f7",
"Name": "Name",
"keyData": {
"Keys": [
"Government",
"Training",
"support"
]
}
}
Now I want to write a query that eliminates the keyData wrapper and returns only the Keys, like below:
{
"userid": "e064a694-8e1e-4660-a3ef-6b894e9414f7",
"Name": "Name",
"Keys" :[
"Government",
"Training",
"support"
]
}
So far I tried the query like
SELECT c.id,k.Keys FROM c
JOIN k in c.keyPhraseBatchResult
This is not working.
Update 1:
After trying Sajeetharan's suggestion, I can now get the result, but the issue is that it produces another JSON object inside the array, like this:
{
"id": "ee885fdc-9951-40e2-b1e7-8564003cd554",
"keys": [
{
"serving": "Government"
},
{
"serving": "Training"
},
{
"serving": "support"
}
]
}
Is there any way to extract only the array, without having the key-value pair again?
{
"userid": "e064a694-8e1e-4660-a3ef-6b894e9414f7",
"Name": "Name",
"Keys" :[
"Government",
"Training",
"support"
]
}
You could try this one,
SELECT C.id, ARRAY(SELECT VALUE serving FROM serving IN C.keyData.Keys) AS Keys FROM C
You could use a Cosmos DB stored procedure to produce your desired format, based on Sajeetharan's SQL:
function sample() {
    var collection = getContext().getCollection();
    var isAccepted = collection.queryDocuments(
        collection.getSelfLink(),
        'SELECT C.id,ARRAY(SELECT serving FROM serving IN C.keyData.Keys) AS keys FROM C',
        function (err, feed, options) {
            if (err) throw err;
            if (!feed || !feed.length) {
                var response = getContext().getResponse();
                response.setBody('no docs found');
            } else {
                var response = getContext().getResponse();
                // Flatten each {"serving": "..."} object into a plain string
                for (var i = 0; i < feed.length; i++) {
                    var keyArray = feed[i].keys;
                    var array = [];
                    for (var j = 0; j < keyArray.length; j++) {
                        array.push(keyArray[j].serving);
                    }
                    feed[i].keys = array;
                }
                response.setBody(feed);
            }
        });
    if (!isAccepted) throw new Error('The query was not accepted by the server.');
}
I have the table 'column_defn' with the following schema. The keys are column_name,database_name and table_name
column_name STRING(130) NOT NULL
database_name STRING(150) NOT NULL
table_name STRING(130) NOT NULL
column_description STRING(1000) NOT NULL
I am trying to delete a row using the following REST request
{
"session":"xxxxxxxxx"
"singleUseTransaction": {
"readWrite": {}
},
"mutations": [
{
"delete": {
"table": "column_defn",
"keySet": {
"keys": [
[
{
"column_name": "testd"
},
{
"table_name": "test atbd"
},
{
"database_name": "ASDFDFS"
}
]
]
}
}
}
]
}
but I keep getting the following error. Any idea where the above request is wrong?
{
"error": {
"code": 400,
"message": "Invalid value for column database_name in table column_defn: Expected STRING.",
"status": "FAILED_PRECONDITION"
}
}
Update: The following request seems to be successful; at least it returns status code 200 and a commitTimestamp. However, the row doesn't get deleted:
{
"singleUseTransaction": {
"readWrite": {}
},
"mutations": [
{
"delete": {
"table": "column_defn",
"keySet": {
"keys": [
[
"testd",
"dsafd",
"test atbd"
]
]
}
}
}
]
}
keys should contain an array-of-arrays. In the outer array, there will be one entry for each row you are trying to delete. Each inner array will be the ordered list of key-values that define a single row (order matters). So in your example, you want:
"keys": [["testd","ASDFDFS","test atbd"]]
Note that the original question is inconsistent in the true ordering of the keys in the table. The above answer assumes the primary key is defined something like:
PRIMARY KEY(column_name,database_name,table_name)
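Putting that together, the full request body would look something like the following (assuming the primary key order above; the key values are taken from the question, and the session field is omitted as in the question's update):

```json
{
  "singleUseTransaction": {
    "readWrite": {}
  },
  "mutations": [
    {
      "delete": {
        "table": "column_defn",
        "keySet": {
          "keys": [
            ["testd", "ASDFDFS", "test atbd"]
          ]
        }
      }
    }
  ]
}
```

Each inner array supplies bare key values in primary-key order, not objects naming the columns, which is why the original request failed type-checking on database_name.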