BigQuery multiple-table access with Terraform with dynamic table names - google-cloud-platform

I am trying to create access to multiple tables using a local list, passing the values into a single resource block:
locals {
  map_of_all_tables = [
    {
      "table_name" : "table1"
      "dataset_id" : "dataset_id1"
      "table_id"   : "table_id1"
    },
    {
      "table_name" : "table2"
      "dataset_id" : "dataset_id2"
      "table_id"   : "table_id2"
    }
  ]
}
resource "google_bigquery_table_iam_member" "access" {
count = contains(var.table_name_list, local.map_of_all_tables[*].table_name) ? <(no. of matching tables)> : 0
project = "test-project1"
dataset_id = locals.map_of_all_tables[<indexOfMatchingTable>].dataset_id #dataset_id of matching table name
table_id = locals.map_of_all_tables[<indexOfMatchingTable>].table_id #table_id of matching table name
role = "roles/bigquery.dataViewer"
member = "user:${var.user_email}"
}
If var.table_name_list contains any table names that match a table name in the local list, the configuration should create the resource access[...] for each of those tables, using the dataset IDs and table IDs from the list for those particular tables. Is this possible in Terraform? Any help would be appreciated. Thanks!

If I understand your question correctly, you have a list of tables in the var.table_name_list variable for which access needs to be granted. All the tables are present in the local.map_of_all_tables local value, and you want to filter it against var.table_name_list.
I'm assuming this scenario, since you haven't shown what var.table_name_list looks like.
locals {
  map_of_all_tables = [
    {
      "table_name" : "table1"
      "dataset_id" : "dataset_id1"
      "table_id"   : "table_id1"
    },
    {
      "table_name" : "table2"
      "dataset_id" : "dataset_id2"
      "table_id"   : "table_id2"
    },
    {
      "table_name" : "table3"
      "dataset_id" : "dataset_id3"
      "table_id"   : "table_id3"
    }
  ]

  ## this will filter
  table_access_list = [for table in local.map_of_all_tables : table if contains(var.table_name_list, table.table_name)]
}
## assuming the var like below
variable "table_name_list" {
  type    = list(any)
  default = ["table1", "table2"]
}

## output displaying the filtered tables
output "table_access_list" {
  value = local.table_access_list
}
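With the default value above, the two matching entries are kept and table3 is filtered out, so the output would look roughly like this:
table_access_list = [
  {
    "dataset_id" = "dataset_id1"
    "table_id" = "table_id1"
    "table_name" = "table1"
  },
  {
    "dataset_id" = "dataset_id2"
    "table_id" = "table_id2"
    "table_name" = "table2"
  },
]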
Then you can iterate over local.table_access_list with for_each to grant access only to the desired tables.
resource "google_bigquery_table_iam_member" "access" {
for_each = {
for table_access in local.table_access_list : table_access.table_name => table_access
}
project = "test-project1-${each.value.table_name}"
dataset_id = local.table_access_list[each.value.table_name].dataset_id #dataset_id of matching table name
table_id = local.table_access_list[each.value.table_name].table_id #table_id of matching table name
role = "roles/bigquery.dataViewer"
member = "user:${var.user_email}"
}
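For what it's worth, the filtering and the for_each map can also be combined into a single expression, avoiding the intermediate local; a minimal sketch using the same names as above:
resource "google_bigquery_table_iam_member" "access" {
  for_each = {
    for table in local.map_of_all_tables :
    table.table_name => table
    if contains(var.table_name_list, table.table_name)
  }

  project    = "test-project1"
  dataset_id = each.value.dataset_id
  table_id   = each.value.table_id
  role       = "roles/bigquery.dataViewer"
  member     = "user:${var.user_email}"
}
Either way, keying the for_each map by table_name gives each grant a stable address such as google_bigquery_table_iam_member.access["table1"], so adding or removing names in var.table_name_list does not disturb the other grants.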

Related

Terraform - Copy AWS SSM Parameters

Longtime lurker, first time poster.
Looking for some guidance from you all. I'm trying to replicate the AWS CLI workflow of getting the parameters by path (ssm get-parameters-by-path), looping through the parameters to fetch each one,
and then putting them into new parameters (ssm put-parameter).
I understand there's a for expression in TF, but for the life of me I can't put together how I would achieve this.
So, thanks to the wonderful breakdown below, I've gotten closer! But I have this one issue. Code below:
provider "aws" {
region = "us-east-1"
}
data "aws_ssm_parameters_by_path" "parameters" {
path = "/${var.old_env}"
recursive = true
}
output "old_params_by_path" {
value = data.aws_ssm_parameters_by_path.parameters
sensitive = true
}
locals {
names = toset(data.aws_ssm_parameters_by_path.parameters.names)
}
data "aws_ssm_parameter" "old_param_name" {
for_each = local.names
name = each.key
}
output "old_params_names" {
value = data.aws_ssm_parameter.old_param_name
sensitive = true
}
resource "aws_ssm_parameter" "new_params" {
for_each = local.names
name = replace(data.aws_ssm_parameter.old_param_name[each.key].name, var.old_env, var.new_env)
type = data.aws_ssm_parameter.old_param_name[each.key].type
value = data.aws_ssm_parameter.old_param_name[each.key].value
}
I have another file, as the helpful poster below suggested, that created the initial data set. But here's what's interesting: after you create a second set, it overwrites the first set! The idea is to tell Terraform: I have this current set of SSM parameters, and I want you to copy that info (values, type) and create a brand-new set of parameters, without destroying anything that's already there.
Any and all help would be appreciated!
I understand, it's not easy at the beginning. I will try to elaborate step by step on how I achieved that.
As an aside, it's good practice to include any code you tried before, even if it doesn't work.
So, firstly I create some example parameters:
# create_parameters.tf
resource "aws_ssm_parameter" "p" {
  count = 3
  name  = "/test/${count.index}/p${count.index}"
  type  = "String"
  value = "test-${count.index}"
}
Then I try to view them:
# example.tf
data "aws_ssm_parameters_by_path" "parameters" {
  path      = "/test/"
  recursive = true
}

output "params_by_path" {
  value     = data.aws_ssm_parameters_by_path.parameters
  sensitive = true
}
As an output I received:
terraform output params_by_path
{
  "arns" = tolist([
    "arn:aws:ssm:eu-central-1:999999999999:parameter/test/0/p0",
    "arn:aws:ssm:eu-central-1:999999999999:parameter/test/1/p1",
    "arn:aws:ssm:eu-central-1:999999999999:parameter/test/2/p2",
  ])
  "id" = "/test/"
  "names" = tolist([
    "/test/0/p0",
    "/test/1/p1",
    "/test/2/p2",
  ])
  "path" = "/test/"
  "recursive" = true
  "types" = tolist([
    "String",
    "String",
    "String",
  ])
  "values" = tolist([
    "test-0",
    "test-1",
    "test-2",
  ])
  "with_decryption" = true
}
The aws_ssm_parameters_by_path data source is unusable without additional processing, so we need another data source to get an object suitable for copying the provided parameters. In the documentation I found aws_ssm_parameter. However, to use it, I need the full name of each parameter.
The list of parameter names was retrieved in the previous stage, so all that's left is to iterate through them:
# example.tf
locals {
  names = toset(data.aws_ssm_parameters_by_path.parameters.names)
}

data "aws_ssm_parameter" "param" {
  for_each = local.names
  name     = each.key
}

output "params" {
  value     = data.aws_ssm_parameter.param
  sensitive = true
}
And as a result, I get:
terraform output params
{
  "/test/0/p0" = {
    "arn" = "arn:aws:ssm:eu-central-1:999999999999:parameter/test/0/p0"
    "id" = "/test/0/p0"
    "name" = "/test/0/p0"
    "type" = "String"
    "value" = "test-0"
    "version" = 1
    "with_decryption" = true
  }
  "/test/1/p1" = {
    "arn" = "arn:aws:ssm:eu-central-1:999999999999:parameter/test/1/p1"
    "id" = "/test/1/p1"
    "name" = "/test/1/p1"
    "type" = "String"
    "value" = "test-1"
    "version" = 1
    "with_decryption" = true
  }
  "/test/2/p2" = {
    "arn" = "arn:aws:ssm:eu-central-1:999999999999:parameter/test/2/p2"
    "id" = "/test/2/p2"
    "name" = "/test/2/p2"
    "type" = "String"
    "value" = "test-2"
    "version" = 1
    "with_decryption" = true
  }
}
Each parameter object has been retrieved, so now it is possible to create new parameters - which can be done like this:
# example.tf
resource "aws_ssm_parameter" "new_param" {
  for_each = local.names
  name     = "/new_path${data.aws_ssm_parameter.param[each.key].name}"
  type     = data.aws_ssm_parameter.param[each.key].type
  value    = data.aws_ssm_parameter.param[each.key].value
}
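As for the follow-up issue in the question (the second copy overwriting the first): that behaviour is expected with the question's code, because the for_each keys are the old parameter names. Changing var.new_env between runs changes each resource's name in place, and since a changed name forces replacement for aws_ssm_parameter, Terraform destroys the previous copy while creating the new one. One way around it (a sketch, not part of the original answer; the environment names are hypothetical, and it reuses the question's data source old_param_name) is to key the copies by target environment as well, so each environment's set gets its own resource addresses:
locals {
  # Every environment whose copies should be kept; extend as needed.
  target_envs = toset(["dev2", "dev3"])

  # One entry per (environment, parameter) pair.
  copies = {
    for pair in setproduct(local.target_envs, local.names) :
    "${pair[0]}:${pair[1]}" => { env = pair[0], name = pair[1] }
  }
}

resource "aws_ssm_parameter" "copy" {
  for_each = local.copies

  name  = replace(data.aws_ssm_parameter.old_param_name[each.value.name].name, var.old_env, each.value.env)
  type  = data.aws_ssm_parameter.old_param_name[each.value.name].type
  value = data.aws_ssm_parameter.old_param_name[each.value.name].value
}
With per-environment keys, adding a new environment adds resources instead of replacing the ones created for the previous environment.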

Terraform variable referencing locals not working

I need to pass the database host name (which is dynamically generated) as an environment variable into my task definition. I thought I could set locals and have the variable map refer to a local, but it seems not to work, as I receive this error: error="failed to check table existence: dial tcp: lookup local.grafana-db-address on 10.0.0.2:53: no such host". I am able to execute terraform plan without issues, and the code works when I hard-code the database host name, but that is not optimal.
My Variables and Locals
//MySql Database Grafana Username (Stored as ENV Var in Terraform Cloud)
variable "username_grafana" {
  description = "The username for the DB grafana user"
  type        = string
  sensitive   = true
}

//MySql Database Grafana Password (Stored as ENV Var in Terraform Cloud)
variable "password_grafana" {
  description = "The password for the DB grafana password"
  type        = string
  sensitive   = true
}

variable "db-port" {
  description = "Port for the sql db"
  type        = string
  default     = "3306"
}

locals {
  gra-db-user = var.username_grafana
}

locals {
  gra-db-password = var.password_grafana
}

locals {
  db-address = aws_db_instance.grafana-db.address
}

locals {
  grafana-db-address = "${local.db-address}.${var.db-port}"
}
variable "app_environments_vars" {
type = list(map(string))
description = "Database environment variables needed by Grafana"
default = [
{
"name" = "GF_DATABASE_TYPE",
"value" = "mysql"
},
{
"name" = "GF_DATABASE_HOST",
"value" = "local.grafana-db-address"
},
{
"name" = "GF_DATABASE_USER",
"value" = "local.gra-db-user"
},
{
"name" = "GF_DATABASE_PASSWORD",
"value" = "local.gra-db-password"
}
]
}
Task Definition Variable reference
"environment": ${jsonencode(var.app_environments_vars)},
Thank you to everyone who has helped me with this project. I am new to all of this and could not have done it without help from this community.
You can't use dynamic references in your app_environments_vars variable's default. So default values such as "value" = "local.grafana-db-address" will never get resolved by TF. It will just be the literal string "local.grafana-db-address".
You have to modify your code so that all these dynamic references in app_environments_vars get populated in locals.
UPDATE
Your app_environments_vars should be a local value for it to be resolved:
locals {
  app_environments_vars = [
    {
      "name"  = "GF_DATABASE_TYPE",
      "value" = "mysql"
    },
    {
      "name"  = "GF_DATABASE_HOST",
      "value" = local.grafana-db-address
    },
    {
      "name"  = "GF_DATABASE_USER",
      "value" = local.gra-db-user
    },
    {
      "name"  = "GF_DATABASE_PASSWORD",
      "value" = local.gra-db-password
    }
  ]
}
Then you pass that local to your template for the task definition.
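For illustration (the thread doesn't show this part, so the resource name and template path are assumptions), the wiring could look roughly like this, with the template file containing "environment": ${jsonencode(app_environments_vars)}, as before:
resource "aws_ecs_task_definition" "grafana" {
  family = "grafana"

  # Pass the resolved local into the template; jsonencode runs inside
  # the template file via the interpolation shown above.
  container_definitions = templatefile("${path.module}/task-definition.json.tpl", {
    app_environments_vars = local.app_environments_vars
  })

  # ... other task definition arguments (cpu, memory, roles, etc.) elided ...
}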

Terraform Variable looping to generate properties

I have to admit, this is the first time I have to ask something that I don't even know how to ask for or explain, so here is my code.
It's worth explaining that, for specific reasons, I CANNOT change the output resource; the metadata sent to the resource has to stay as is, otherwise it will cause a recreate, and I don't want that.
Currently I have Terraform code that uses static/fixed variables like this:
user1_name="Ed"
user1_Age ="10"
user2_name="Mat"
user2_Age ="20"
and then those hard-coded variables get used in several places, but most importantly they are passed as metadata to instances, like so:
resource "google_compute_instance_template" "mytemplate" {
...
metadata = {
othervalues = var.other
user1_name = var.user1_name
user1_Age = var.user1_Age
user2_name = var.user2_name
user2_Age = var.user2_Age
}
...
}
I am not an expert on Terraform, thus the question, but I know for a fact this is ugly and wrong, and I need to use lists or arrays or whatever, so I am changing my declarations to this:
users = [
  { "name" : "yo", "age" : "10", "last" : "other" },
  { "name" : "El", "age" : "20", "last" : "other" }
]
But then, how do I get around to generating the same result for that resource? The resulting resource still has to have the same metadata as shown.
Assume, of course, that the order of the users will be used as the "index" of the value: the first one gets user1_name and so on.
I assume I need to use a for_each loop in there, but I can't figure out how to handle a loop inside the properties of a resource.
Not sure if I'm making myself clear on this, probably not, but I didn't find a better way to explain it.
From your example it seems like your intent is for these to all ultimately appear as a single map with keys built from two parts.
Your example doesn't make clear what the relationship is between user1 and Ed, though: your first example shows that "user1's" name is Ed, but in your example of the data structure you want to create there is only one "name" and it isn't clear to me whether that name would replace "user1" or "Ed" from your first example.
Instead, I'm going to take a slightly different variable structure which still maintains both the key like "user1" and the name attribute, like this:
variable "users" {
type = map(object({
name = string
age = number
})
}
locals {
  # First we'll transform the map of objects into a
  # flat set of key/attribute/value objects, because
  # that's easier to work with when we generate the
  # flattened map below.
  users_flat = flatten([
    for key, user in var.users : [
      for attr, value in user : {
        key   = key
        attr  = attr
        value = value
      }
    ]
  ])
}
resource "google_compute_instance_template" "mytemplate" {
metadata = merge(
{
othervalues = var.other
},
{
for vo in local.users_flat : "${vo.key}_${vo.attr}" => vo.value
}
)
}
local.users_flat here is an intermediate data structure that flattens the two-level hierarchy of keys and object attributes from the input. It would be shaped something like this:
[
  { key = "user1", attr = "name", value = "Ed" },
  { key = "user1", attr = "age", value = 10 },
  { key = "user2", attr = "name", value = "Mat" },
  { key = "user2", attr = "age", value = 20 },
]
The merge call in the metadata argument then merges a directly-configured mapping of "other values" with a generated mapping derived from local.users_flat, shaped like this:
{
  "user1_name" = "Ed"
  "user1_age" = 10
  "user2_name" = "Mat"
  "user2_age" = 20
}
From the perspective of the caller of the module, the users variable should be defined with the following value in order to get the above results:
users = {
  user1 = {
    name = "Ed"
    age  = 10
  }
  user2 = {
    name = "Mat"
    age  = 20
  }
}
metadata is not a block, but a regular attribute of type map. So you can do:
# it would be better to use a map, not a list, for users:
variable "users" {
  default = {
    user1 = { "name" : "yo", "age" : "10", "last" : "other" }
    user2 = { "name" : "El", "age" : "20", "last" : "other" }
  }
}
resource "google_compute_instance_template" "mytemplate" {
for_each = var.users
metadata = each.value
#...
}
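Note that for_each here creates one instance template per user, which differs from the question's single template with user1_name-style keys. If the single-template shape must be preserved (the question says a recreate is unacceptable), the same map variable can feed a merged metadata map directly; a compact sketch of that variant:
resource "google_compute_instance_template" "mytemplate" {
  metadata = merge(
    { othervalues = var.other },
    # Build one map per user ("user1_name" = "yo", ...) and merge them all.
    merge([
      for key, user in var.users : {
        for attr, value in user : "${key}_${attr}" => value
      }
    ]...)
  )
}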

dynamoDB update-item python boto3

I have a column in DynamoDB table which will be of the following type:
{"History": {"L": [{"M": {"id": {"S": "id"}, "Flow": {"L":[{"S": "test2"}]},"UUID": {"S": "1234"}}}]}}
The History column is of type List, in which each element is a Map with 3 values: id (String), Flow (List), and UUID (String).
My code triggers update-item multiple times, and all I want is that, given the same id and UUID, new values are appended to the Flow list without disturbing anything else.
I have referred to the documentation but am unable to figure out how to write the UpdateExpression.
My existing code is as below:
response_update = client.update_item(
    TableName='tableName',
    Key={
        'k1': {'S': 'v1'},
        'k2': {'S': 'v2'}
    },
    UpdateExpression="SET History = list_append(if_not_exists(History, :empty_list), :attrValue)",
    ExpressionAttributeValues={
        ":attrValue": {
            "L": [
                {
                    "M": {
                        "id": {"S": "123"},
                        "UUID": {"S": "uuid123"},
                        "Flow": {"L": [{"S": "now2"}]}
                    }
                }
            ]
        },
        ":empty_list": {"L": []}
    }
)
With this code, each time I trigger the update function, a new element is appended to the History list. Instead, I need my desired string to be appended to the Flow list of the matching element.
Please let me know how the expression should look.
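For what it's worth, one common workaround is sketched below, under the question's own placeholder names (tableName, k1/k2); it is an assumption, not a confirmed answer from this thread. An UpdateExpression cannot search inside a list, so the item is read first to find the index of the History element whose id and UUID match, and the append then targets History[<index>].Flow by position, since document path indexes cannot be parameterized.
import boto3

client = boto3.client("dynamodb")

def append_to_flow(key, target_id, target_uuid, new_value):
    # Read the item and locate the History element to update.
    item = client.get_item(TableName="tableName", Key=key)["Item"]
    history = item.get("History", {"L": []})["L"]
    for index, element in enumerate(history):
        entry = element["M"]
        if entry["id"]["S"] == target_id and entry["UUID"]["S"] == target_uuid:
            # The index must be interpolated into the expression itself.
            client.update_item(
                TableName="tableName",
                Key=key,
                UpdateExpression=(
                    f"SET History[{index}].Flow = "
                    f"list_append(History[{index}].Flow, :v)"
                ),
                ExpressionAttributeValues={":v": {"L": [{"S": new_value}]}},
            )
            return
    # No matching element: append a whole new History entry instead,
    # which is what the question's original expression already does.
Because a concurrent writer could reorder the list between the read and the write, a ConditionExpression that re-checks History[index].id against target_id would make the update safer.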

Complex Queries in DynamoDB

I am working on an application that uses DynamoDB.
Is there a way I can create a GSI with multiple attributes? My aim is to query the table with a query of the following kind:
(attrA.val1 === someVal1 AND attrB.val2 === someVal2 AND attrC.val3 === someVal3)
OR (attrA.val4 === someVal4 AND attrB.val5 === someVal5 AND attrC.val6 === someVal6)
I am aware we can use Query when we have the key attribute, and Scan operations when the key attribute is unknown. I am also aware of GSIs for querying on non-key attributes. But I need some help in this scenario: is there a way to model a GSI to suit the above query?
I have the below item (i.e. data) in my Movies table. The below query params work fine for me.
You can add the third attribute as present in the OP; it should work fine.
DynamoDB does support complex conditions in FilterExpression.
Query table based on some condition:-
var table = "Movies";
var year_val = 1991;
var title = "Movie with map attribute";
var params = {
TableName : table,
KeyConditionExpression : 'yearkey = :hkey and title = :rkey',
FilterExpression : '(records.K1 = :k1Val AND records.K2 = :k2Val) OR (records.K3 = :k3Val AND records.K4 = :k4Val)',
ExpressionAttributeValues : {
':hkey' : year_val,
':rkey' : title,
':k3Val' : 'V3',
':k4Val' : 'V4',
':k1Val' : 'V1',
':k2Val' : 'V2'
}
};
docClient.query(params, function(err, data) {
    if (err) {
        console.error("Unable to read item. Error JSON:", JSON.stringify(err, null, 2));
    } else {
        console.log("GetItem succeeded:", JSON.stringify(data, null, 2));
    }
});
My data (posted as a screenshot in the original answer) is the single item shown in the result below.
Result:-
GetItem succeeded: {
  "Items": [
    {
      "title": "Movie with map attribute",
      "yearkey": 1991,
      "records": {
        "K3": "V3",
        "K4": "V4",
        "K1": "V1",
        "K2": "V2"
      }
    }
  ],
  "Count": 1,
  "ScannedCount": 1
}