I am working on setting up VPC Service Controls in GCP using the google_access_context_manager_service_perimeter resource. In this resource, within the status block, I have to specify a list of Google projects as the resources value, in the format "projects/123456789". I want to put the project numbers in a variable, so I created something like this:
variable "project_numbers_to_protect" {
type = list(any)
default = [
"123456",
"456789",
"894321"
]
}
I am able to reference the variable as below.
resources = ["projects/${var.project_numbers_to_protect[0]}",
"projects/${var.project_numbers_to_protect[1]}",
"projects/${var.project_numbers_to_protect[2]}"]
But in my production case, I have a large number of projects in the list, and I am looking for an option to reference them dynamically. I tried the count option, but that didn't work.
count = var.project_numbers_to_protect
resources = ["projects/${var.project_numbers_to_protect[count.index]}"]
Error message
vpc-sc-module $ terraform validate
╷
│ Error: Reference to "count" in non-counted context
│
│ on vpc-sc-copy.tf line 16, in resource "google_access_context_manager_service_perimeter" "regular_service_perimeter":
│ 16: resources = ["projects/${var.project_numbers_to_protect[count.index]}"]
│
│ The "count" object can only be used in "module", "resource", and "data" blocks, and only when the "count" argument is set.
╵
Appreciate any help. Thanks.
Full Code
vpc-sc-copy.tf
resource "google_access_context_manager_service_perimeter" "regular_service_perimeter" {
parent = "accessPolicies/${var.access_context_manager_policy_number}"
name = "accessPolicies/${var.access_context_manager_policy_number}/servicePerimeters/${var.perimeter_name}"
perimeter_type = var.perimeter_type
title = var.perimeter_name
use_explicit_dry_run_spec = false
status {
restricted_services = var.restricted_services
## The two lines below work.
# resources = ["projects/${var.project_numbers_to_protect[0]}",
# "projects/${var.project_numbers_to_protect[1]}",]
## The option below doesn't work.
count = var.project_numbers_to_protect
resources = ["projects/${var.project_numbers_to_protect[count.index]}"]
ingress_policies {
ingress_from {
identity_type = "ANY_IDENTITY"
sources {
access_level = "*"
}
}
ingress_to {
resources = [
"*"
]
dynamic "operations" {
for_each = var.ingress_rule1_service_name
content {
service_name = operations.value
method_selectors {
method = "*"
}
}
}
}
}
egress_policies {
egress_from {
identities = ["serviceAccount:service-${var.project_number_to_protect}#gcp-sa-aiplatform-cc.iam.gserviceaccount.com"]
}
egress_to {
resources = [
"projects/${var.egress_rule1_project_number}"
]
operations {
service_name = "storage.googleapis.com"
dynamic "method_selectors" {
for_each = var.egress_rule1_methods
content {
method = method_selectors.value
}
}
}
}
}
egress_policies {
egress_from {
identity_type = "ANY_IDENTITY"
}
egress_to {
resources = [
"projects/${var.egress_rule2_project_number}"
]
operations {
service_name = "storage.googleapis.com"
dynamic "method_selectors" {
for_each = var.egress_rule2_methods
content {
method = method_selectors.value
}
}
}
}
}
}
}
Relevant section of vars.tf
variable "project_numbers_to_protect" {
type = list(any)
default = [
"123456",
"456789",
"894321"
]
}
As the error says, you can't use count the way you want. Instead, use a for expression:
resource "google_access_context_manager_service_perimeter" "regular_service_perimeter" {
parent = "accessPolicies/${var.access_context_manager_policy_number}"
name = "accessPolicies/${var.access_context_manager_policy_number}/servicePerimeters/${var.perimeter_name}"
perimeter_type = var.perimeter_type
title = var.perimeter_name
use_explicit_dry_run_spec = false
status {
restricted_services = var.restricted_services
resources = [for project_number in var.project_numbers_to_protect:
"projects/${project_number}" ]
ingress_policies {
ingress_from {
identity_type = "ANY_IDENTITY"
sources {
access_level = "*"
}
}
ingress_to {
resources = [
"*"
]
dynamic "operations" {
for_each = var.ingress_rule1_service_name
content {
service_name = operations.value
method_selectors {
method = "*"
}
}
}
}
}
egress_policies {
egress_from {
identities = ["serviceAccount:service-${var.project_number_to_protect}#gcp-sa-aiplatform-cc.iam.gserviceaccount.com"]
}
egress_to {
resources = [
"projects/${var.egress_rule1_project_number}"
]
operations {
service_name = "storage.googleapis.com"
dynamic "method_selectors" {
for_each = var.egress_rule1_methods
content {
method = method_selectors.value
}
}
}
}
}
egress_policies {
egress_from {
identity_type = "ANY_IDENTITY"
}
egress_to {
resources = [
"projects/${var.egress_rule2_project_number}"
]
operations {
service_name = "storage.googleapis.com"
dynamic "method_selectors" {
for_each = var.egress_rule2_methods
content {
method = method_selectors.value
}
}
}
}
}
}
}
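Two optional refinements, assuming nothing else depends on the current variable shape. Inside the status block, the same transformation can be written with Terraform's built-in formatlist function:
resources = formatlist("projects/%s", var.project_numbers_to_protect)
And the variable can be declared as list(string) instead of list(any), so non-string values are caught at plan time:
variable "project_numbers_to_protect" {
type = list(string)
default = ["123456", "456789", "894321"]
}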
I'm trying to create an IP whitelist in nonprod for load testing, the WAF is dynamically created in prod and nonprod based on the envname/envtype:
resource "aws_waf_ipset" "pwa_cloudfront_ip_restricted" {
name = "${var.envname}-pwa-cloudfront-whitelist"
dynamic "ip_set_descriptors" {
for_each = var.cloudfront_ip_restricted_waf_cidr_whitelist
content {
type = ip_set_descriptors.value.type
value = ip_set_descriptors.value.value
}
}
}
resource "aws_waf_rule" "pwa_cloudfront_ip_restricted" {
depends_on = [aws_waf_ipset.pwa_cloudfront_ip_restricted]
name = "${var.envname}-pwa-cloudfront-whitelist"
metric_name = "${var.envname}PWACloudfrontWhitelist"
predicates {
data_id = aws_waf_ipset.pwa_cloudfront_ip_restricted.id
negated = false
type = "IPMatch"
}
}
resource "aws_waf_ipset" "pwa_cloudfront_ip_restricted_load_testing" {
name = "${var.envname}-pwa-cloudfront-whitelist_load_testing"
count = var.envtype == "nonprod" ? 1 : 0
dynamic "ip_set_descriptors" {
for_each = var.cloudfront_ip_restricted_waf_cidr_whitelist_load_testing
content {
type = ip_set_descriptors.value.type
value = ip_set_descriptors.value.value
}
}
}
resource "aws_waf_rule" "pwa_cloudfront_ip_restricted_load_testing" {
depends_on = [aws_waf_ipset.pwa_cloudfront_ip_restricted_load_testing]
count = var.envtype == "nonprod" ? 1 : 0
name = "${var.envname}-pwa-cloudfront-whitelist-load_testing"
metric_name = "${var.envname}PWACloudfrontWhitelistload_testing"
predicates {
data_id = aws_waf_ipset.pwa_cloudfront_ip_restricted_load_testing[count.index].id
negated = false
type = "IPMatch"
}
}
resource "aws_waf_web_acl" "pwa_cloudfront_ip_restricted" {
name = "${var.envname}-pwa-cloudfront-whitelist"
metric_name = "${var.envname}PWACloudfrontWhitelist"
default_action {
type = "BLOCK"
}
rules {
action {
type = "ALLOW"
}
priority = 1
rule_id = aws_waf_rule.pwa_cloudfront_ip_restricted.id
type = "REGULAR"
}
rules {
action {
type = "ALLOW"
}
priority = 2
rule_id = aws_waf_rule.pwa_cloudfront_ip_restricted_load_testing.id
type = "REGULAR"
}
}
The second rules block throws an error in the terraform plan:
Error: Missing resource instance key
on waf.tf line 73, in resource "aws_waf_web_acl" "pwa_cloudfront_ip_restricted":
73: rule_id = aws_waf_rule.pwa_cloudfront_ip_restricted_load_testing.id
Because aws_waf_rule.pwa_cloudfront_ip_restricted_load_testing has "count" set,
its attributes must be accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
aws_waf_rule.pwa_cloudfront_ip_restricted_load_testing[count.index]
However if I add [count.index] :
Error: Reference to "count" in non-counted context
on waf.tf line 73, in resource "aws_waf_web_acl" "pwa_cloudfront_ip_restricted":
73: rule_id = aws_waf_rule.pwa_cloudfront_ip_restricted_load_testing[count.index].id
The "count" object can only be used in "module", "resource", and "data"
blocks, and only when the "count" argument is set.
Is there a way to do this that doesn't use the count param? Or am I missing something in the way that I am using it?
Since there is a difference between the prod and non-prod environments, this should be tackled using dynamic blocks [1] and the for_each meta-argument [2]:
resource "aws_waf_web_acl" "pwa_cloudfront_ip_restricted" {
name = "${var.envname}-pwa-cloudfront-whitelist"
metric_name = "${var.envname}PWACloudfrontWhitelist"
default_action {
type = "BLOCK"
}
dynamic "rules" {
for_each = var.envtype == "nonprod" ? [1] : []
content {
action {
type = "ALLOW"
}
priority = 1
rule_id = aws_waf_rule.pwa_cloudfront_ip_restricted[0].id
type = "REGULAR"
}
}
dynamic "rules" {
for_each = var.envtype == "nonprod" ? [1] : []
content {
action {
type = "ALLOW"
}
priority = 2
rule_id = aws_waf_rule.pwa_cloudfront_ip_restricted_load_testing[0].id
type = "REGULAR"
}
}
}
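A note on the [0] index: it is safe here because the surrounding dynamic block is only rendered when var.envtype == "nonprod", which is exactly when the load-testing rule's count is 1. On Terraform 0.15 or later, the one() function is a slightly more explicit alternative (a stylistic option, not required):
rule_id = one(aws_waf_rule.pwa_cloudfront_ip_restricted_load_testing[*].id)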
[1] https://developer.hashicorp.com/terraform/language/expressions/dynamic-blocks
[2] https://developer.hashicorp.com/terraform/language/meta-arguments/for_each
I want to create two Amazon SNS topics with the same aws_iam_policy_document, aws_sns_topic_policy & time_sleep configs.
This is my terraform, my_sns_topic.tf:
resource "aws_sns_topic" "topic_a" {
name = "topic-a"
}
resource "aws_sns_topic" "topic_b" {
name = "topic-b"
}
data "aws_iam_policy_document" "topic_notification" {
version = "2008-10-17"
statement {
sid = "__default_statement_ID"
actions = [
"SNS:Publish"
]
# Cut off some lines for simplification.
## NEW LINE ADDED
statement {
sid = "allow_snowflake_subscription"
principals {
type = "AWS"
identifiers = [var.storage_aws_iam_user_arn]
}
actions = ["SNS:Subscribe"]
resources = [aws_sns_topic.topic_a.arn] # Troubles with this line
}
}
resource "aws_sns_topic_policy" "topic_policy_notification" {
arn = aws_sns_topic.topic_a.arn
policy = data.aws_iam_policy_document.topic_policy_notification.json
}
resource "time_sleep" "topic_wait_10s" {
depends_on = [aws_sns_topic.topic_a]
create_duration = "10s"
}
As you can see here, I set up the configuration only for topic-a. I want to loop over this so that it applies to topic-b as well.
It would be better to use for_each over a set of topic names, instead of separately creating the "a" and "b" topics:
variable "topics" {
default = ["a", "b"]
}
resource "aws_sns_topic" "topic" {
for_each = toset(var.topics)
name = "topic-${each.key}"
}
data "aws_iam_policy_document" "topic_notification" {
version = "2008-10-17"
statement {
sid = "__default_statement_ID"
actions = [
"SNS:Publish"
]
# Cut off some lines for simplification.
}
}
resource "aws_sns_topic_policy" "topic_policy_notification" {
for_each = toset(var.topics)
arn = aws_sns_topic.topic[each.key].arn
policy = data.aws_iam_policy_document.topic_notification.json
}
resource "time_sleep" "topic_wait_10s" {
for_each = toset(var.topics)
depends_on = [aws_sns_topic.topic]
create_duration = "10s"
}
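With for_each the topics become a map keyed by "a" and "b", so anything that previously referenced aws_sns_topic.topic_a now needs to reference a specific key, for example:
output "topic_a_arn" {
value = aws_sns_topic.topic["a"].arn
}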
I have my main.tf file with this code:
provider "aws" {
region = var.region
}
/*
REST API
*/
resource "aws_api_gateway_rest_api" "my_api" {
name = format("mock-api-%s-%s", var.environment, var.region)
endpoint_configuration {
types = ["REGIONAL"]
}
body = jsonencode({
openapi = "3.0.1"
info = {
title = "example"
version = "1.0"
}
paths = {
"/testapi" = {
get = {
responses = {
"200" = {
description = "200 response"
}
}
x-amazon-apigateway-integration = {
httpMethod = "GET"
payloadFormatVersion = "1.0"
type = "MOCK"
passthroughBehavior = "when_no_match",
requestTemplates = {
"application/json" = "{\"statusCode\": 200}"
}
responses = {
default = {
statusCode = 200
responseTemplates = {
"application/json" = <<TEMPLATE
{
"foo": "bar"
}
TEMPLATE
}
}
}
}
}
},
"/myusers" = {
get = {
responses = {
"200" = {
description = "200 response"
}
}
x-amazon-apigateway-integration = {
httpMethod = "GET"
payloadFormatVersion = "1.0"
type = "MOCK"
passthroughBehavior = "when_no_match",
requestTemplates = {
"application/json" = "{\"statusCode\": 200}"
}
responses = {
default = {
statusCode = 200
responseTemplates = {
"application/json" = <<TEMPLATE
[
{
"firstName": "TestUser1"
},
{
"firstName": "TestUser2"
}
]
TEMPLATE
}
}
}
}
}
}
}
})
}
resource "aws_api_gateway_deployment" "my_api" {
rest_api_id = aws_api_gateway_rest_api.my_api.id
triggers = {
redeployment = sha1(jsonencode(aws_api_gateway_rest_api.my_api.body))
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_api_gateway_stage" "dev" {
deployment_id = aws_api_gateway_deployment.my_api.id
rest_api_id = aws_api_gateway_rest_api.my_api.id
stage_name = "mystage"
}
As I keep adding more paths, the file is getting too big. I want to put the code related to /testapi and /myusers in two different files; that would be easier to maintain and modify, and I could add more files in the future for more APIs.
Any other, more scalable/modular solution is also welcome.
I think the best way would be to use templatefile for your content. You would end up with something like this:
body = templatefile("path/to/body.json", vars)
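If the goal is one file per path, a possible sketch (file names and layout here are assumptions, not from the original post) is to keep each path's OpenAPI fragment in its own template file, decode the fragments, and merge them into the body:
locals {
# Each file contains only that path's OpenAPI object, e.g. { "get": { ... } }
testapi_path = jsondecode(templatefile("${path.module}/paths/testapi.json.tftpl", {}))
myusers_path = jsondecode(templatefile("${path.module}/paths/myusers.json.tftpl", {}))
}
resource "aws_api_gateway_rest_api" "my_api" {
name = format("mock-api-%s-%s", var.environment, var.region)
endpoint_configuration {
types = ["REGIONAL"]
}
body = jsonencode({
openapi = "3.0.1"
info = {
title = "example"
version = "1.0"
}
paths = {
"/testapi" = local.testapi_path
"/myusers" = local.myusers_path
}
})
}
Adding a new API is then just a new template file plus one line in the paths map, and any per-environment values can be passed through the second argument of templatefile.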
I am trying to replicate my AWS ECR repository to multiple regions within the same account using Terraform. I tried it manually from the AWS console and it works fine, but with Terraform I am not able to find the solution.
What I tried:
I tried making a separate variable for the region, called replicate_region, and providing the regions as a list, but it keeps giving me the error
Inappropriate value for attribute "region": string required.
Here is the variable code:
variable "replicate_region" {
description = "value"
type = list(string)
}
Here is my code for ecr replication:
resource "aws_ecr_replication_configuration" "replication" {
replication_configuration {
rule {
destination {
region = var.replicate_region
registry_id = "xxxxxxxx"
}
}}}
Can anyone please help me out?
Thanks,
Your replicate_region should be a string, not a list of strings. For example:
variable "replicate_region" {
description = "value"
type = string
default = "us-east-1"
}
Update
Iteration using a dynamic block:
variable "replicate_region" {
description = "value"
type = list(string)
default = ["us-east-1", "ap-southeast-1", "ap-south-1"]
}
resource "aws_ecr_replication_configuration" "replication" {
replication_configuration {
rule {
dynamic "destination" {
for_each = toset(var.replicate_region)
content {
region = destination.key
registry_id = "xxxxxxxx"
}
}
}}}
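With this in place, the regions can be supplied per environment, for example via a terraform.tfvars file (example values only):
replicate_region = ["us-east-1", "ap-southeast-1", "ap-south-1"]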
An easier way:
resource "aws_ecr_replication_configuration" "replication" {
replication_configuration {
rule {
destination {
region = "us-east-2"
registry_id = "xxxxxxxx"
}
destination {
region = "ap-southeast-1"
registry_id = "xxxxxxxx"
}
}
}
}
variable "replicas" {
description = "ECR replicas region list"
type = list(string)
default = [
{
region = "aaa"
registry_id = "11111111"
},
{
region = "bbb"
registry_id = "22222222"
}
]
}
resource "aws_ecr_replication_configuration" "replication" {
count = length(var.replicas) != 0 ? 1 : 0
replication_configuration {
rule {
dynamic "destination" {
for_each = var.replicas
content {
region = destination.value.region
registry_id = destination.value.registry_id
}
}
repository_filter {
filter = var.filter
filter_type = "PREFIX_MATCH"
}
}
}
}
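Note that this last snippet also references var.filter for the repository prefix, which would need a declaration along these lines (the name and description are assumptions, not taken from the original snippet):
variable "filter" {
description = "Repository name prefix to replicate"
type = string
}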
I'm trying to create GCP SQL databases by iterating over a list of strings with Terraform's count parameter, while for_each loops over the map keys (maindb & replicadb).
Unfortunately, I get the error that appears below.
Is it possible to do this in Terraform?
variables.tf
variable "sql_var" {
default = {
"maindb" = {
"db_list" = ["firstdb", "secondsdb", "thirddb"],
"disk_size" = "20",
},
"replicadb" = {
"db_list" = ["firstdb"],
"disk_size" = "",
}
}
}
main.tf
resource "google_sql_database_instance" "master_sql_instance" {
...
}
resource "google_sql_database" "database" {
for_each = var.sql_var
name = "${element(each.value.db_list, count.index)}"
instance = "${google_sql_database_instance.master_sql_instance[each.key].name}"
count = "${length(each.value.db_list)}"
}
Error Message
Error: Invalid combination of "count" and "for_each"
on ../main.tf line 43, in resource
"google_sql_database" "database": 43: for_each =
var.sql_var
The "count" and "for_each" meta-arguments are mutually-exclusive, only
one should be used to be explicit about the number of resources to be
created.
What the error message tells you is that you cannot use count and for_each together. It looks like you are trying to create 3 databases on the main instance and 1 on the replica, am I correct? What I would do is create your two instances and then transform your map variable to create the databases.
terraform {
required_version = ">=0.13.3"
required_providers {
google = ">=3.36.0"
}
}
variable "sql_instances" {
default = {
"main_instance" = {
"db_list" = ["first_db", "second_db", "third_db"],
"disk_size" = "20",
},
"replica_instance" = {
"db_list" = ["first_db"],
"disk_size" = "20",
}
}
}
locals {
databases = flatten([
for key, value in var.sql_instances : [
for item in value.db_list : {
name = item
instance = key
}
]
])
sql_databases = {
for item in local.databases :
"${item.instance}-${item.name}" => item
}
}
resource "google_sql_database_instance" "sql_instance" {
for_each = var.sql_instances
name = each.key
settings {
disk_size = each.value.disk_size
tier = "db-f1-micro"
}
}
resource "google_sql_database" "sql_database" {
for_each = local.sql_databases
name = each.value.name
instance = each.value.instance
depends_on = [
google_sql_database_instance.sql_instance,
]
}
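For clarity, with the example variable above, local.databases and local.sql_databases evaluate to roughly the following (written out as HCL literals):
# local.databases
[
{ name = "first_db", instance = "main_instance" },
{ name = "second_db", instance = "main_instance" },
{ name = "third_db", instance = "main_instance" },
{ name = "first_db", instance = "replica_instance" },
]
# local.sql_databases
{
"main_instance-first_db" = { name = "first_db", instance = "main_instance" }
"main_instance-second_db" = { name = "second_db", instance = "main_instance" }
"main_instance-third_db" = { name = "third_db", instance = "main_instance" }
"replica_instance-first_db" = { name = "first_db", instance = "replica_instance" }
}
so each database gets a stable, human-readable instance key across runs.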
Then, first run terraform apply -target=google_sql_database_instance.sql_instance and after this run terraform apply.