build resources based on variable count - amazon-web-services

Is there a way with Terraform v0.14.10 to build an AWS resource based on the number of values in a defined variable, and to use each value as part of the name of the created ECR resource? In other words, I want to build ECR repositories, and since the variable I used has three values, three repositories should be created, named as below:
Results of ECR build creation
app1.repo
pogi2.repo
panget3.repo
Terraform Code:
MY.TF
variable RESOURCE_NAME { type = map }

locals {
  RESOURCE_NAME = "${var.app-name}-repo"
}

resource "aws_ecr_repository" "myrepo" {
  name = local.RESOURCE_NAME
}
VAR.tfvars
app-name = [ "app1", "pogi2", "panget3" ]

You can do that as follows:
resource "aws_ecr_repository" "myrepo" {
for_each = toset(var.app-name)
name = "${each.key}.repo"
}
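For this to work, the variable has to be declared as a collection of strings rather than a map. A minimal end-to-end sketch (the tfvars file stays as in the question, and individual repositories can then be referenced by key, e.g. aws_ecr_repository.myrepo["app1"]):

variable "app-name" {
  type = list(string)
}

resource "aws_ecr_repository" "myrepo" {
  # one repository per entry in var.app-name
  for_each = toset(var.app-name)
  # produces app1.repo, pogi2.repo, panget3.repo
  name     = "${each.key}.repo"
}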

Related

How to move existing Terraform resources from single items to set?

I have an SQL db module with single databases like this:
resource "google_sql_database" "projects" {
name = "projects"
instance = google_sql_database_instance.database.name
}
resource "google_sql_database" "markdown" {
name = "markdown"
instance = google_sql_database_instance.database.name
}
I'd like to switch to a variable-driven set instead:
variable "databases" {
type = list(string)
default = ["projects", "markdown"]
}
resource "google_sql_database" "database" {
for_each = toset(var.databases)
name = each.key
instance = google_sql_database_instance.database.name
}
And when I run terraform apply, the CLI wants to recreate everything:
# module.sql.google_sql_database.database["markdown"] will be created
+ resource "google_sql_database" "database" {
...
...
# module.sql.google_sql_database.markdown will be destroyed
- resource "google_sql_database" "markdown" {
...
...
How can I avoid that and map the existing resources onto the new configuration?
You either need to run the terraform state mv command for each resource, or add moved blocks to your Terraform code.
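As a rough sketch (assuming the module is instantiated as module.sql, as shown in the plan output above; moved blocks require Terraform v1.1 or later), one moved block per database inside the module:

moved {
  from = google_sql_database.projects
  to   = google_sql_database.database["projects"]
}

moved {
  from = google_sql_database.markdown
  to   = google_sql_database.database["markdown"]
}

Or, with terraform state mv, run from the root module:

terraform state mv 'module.sql.google_sql_database.projects' 'module.sql.google_sql_database.database["projects"]'
terraform state mv 'module.sql.google_sql_database.markdown' 'module.sql.google_sql_database.database["markdown"]'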

How to create an aws sagemaker project using terraform?

This is the Terraform configuration shown in the docs:
resource "aws_sagemaker_project" "example" {
project_name = "example"
service_catalog_provisioning_details {
product_id = aws_servicecatalog_product.example.id
}
}
I created a Service Catalog product with ID "prod-xxxxxxxxxxxxx". When I substitute the Service Catalog product ID into the above template, I get the following:
resource "aws_sagemaker_project" "example" {
project_name = "example"
service_catalog_provisioning_details {
product_id = aws_servicecatalog_product.prod-xxxxxxxxxxxxx
}
}
I run terraform plan, but the following error occurs:
A managed resource "aws_servicecatalog_product" "prod-xxxxxxxxxxxxx" has not been declared in the root module.
What do I need to do to fix this error?
Since the documentation is lacking a bit of clarity, in order to have this work as in the example, you would first have to create the Service Catalog product in Terraform as well, e.g.:
resource "aws_servicecatalog_product" "example" {
name = "example"
owner = [aws_security_group.example.id] # <---- This would need to be created first
type = aws_subnet.main.id # <---- This would need to be created first
provisioning_artifact_parameters {
template_url = "https://s3.amazonaws.com/cf-templates-ozkq9d3hgiq2-us-east-1/temp1.json"
}
tags = {
foo = "bar"
}
}
You can reference it then in the SageMaker project the same way as in the example:
resource "aws_sagemaker_project" "example" {
project_name = "example"
service_catalog_provisioning_details {
product_id = aws_servicecatalog_product.example.id
}
}
Each of the resources that gets created has a set of attributes that can be accessed as needed by other resources, data sources or outputs. In order to understand how this works, I strongly suggest reading the documentation about referencing values [1]. Since you already created the Service Catalog product, the only thing you need to do is provide the string value for the product ID:
resource "aws_sagemaker_project" "example" {
project_name = "example"
service_catalog_provisioning_details {
product_id = "prod-xxxxxxxxxxxxx"
}
}
When I can't understand what value is expected by an argument (e.g., product_id in this case), I usually read the docs and look for examples like the one in [2]. Note: that example is CloudFormation, but it can help you understand what type of value is expected (e.g., string, number, bool).
You could also import the created Service Catalog product into Terraform so you can manage it with IaC [3]. You should understand all the implications of terraform import though before trying it [4].
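A rough sketch of that import option (per the provider docs in [3], the import ID is the product ID; you first add a matching resource "aws_servicecatalog_product" "example" block to your configuration, then fill in its arguments to match the imported product):

terraform import aws_servicecatalog_product.example prod-xxxxxxxxxxxxx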
[1] https://www.terraform.io/language/expressions/references
[2] https://docs.amazonaws.cn/en_us/AWSCloudFormation/latest/UserGuide/aws-resource-sagemaker-project.html#aws-resource-sagemaker-project--examples--SageMaker_Project_Example
[3] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/servicecatalog_product#import
[4] https://www.terraform.io/cli/commands/import

How can I use var in resource calling

I'm importing roles which have already been created in the AWS console, and unfortunately the names are strange. So in order to use those roles, I am trying the following.
I have two IAM roles, as follows:
data "aws_iam_role" "reithera-rtcov201" {
name = "exomcloudrosareitherartcov-YRX1M2GJKD6H"
}
data "aws_iam_role" "dompe-rlx0120" {
name = "exomcloudrosadomperlx0120p-1SCGY0RG5JXFF"
}
In this file I have 2 variables as follows:
sponsor = ["reithera", "dompe"]
study = ["rtcov201", "rlx0120"]
I'm trying it in the following way, but Terraform doesn't allow using ${} there:
data.aws_iam_role.${var.sponsor}-${var.study}.arn
Do you know of any solution for this?
It's not possible. You can't dynamically create references to resources.
Instead of two separate data sources, you should create one:
variable "iam_roles"
default = ["exomcloudrosareitherartcov-YRX1M2GJKD6H", "exomcloudrosadomperlx0120p-1SCGY0RG5JXFF"]
}
and then
data "aws_iam_role" "role" {
for_each = toset(var.iam_roles)
name = each.key
}
and you can refer to them using the role name:
data.aws_iam_role.role["exomcloudrosareitherartcov-YRX1M2GJKD6H"].arn
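If you would rather look the roles up by the sponsor/study pair instead of the unwieldy role names, one option is a map variable keyed that way (a sketch; the variable shape and keys here are just illustrative):

variable "iam_roles" {
  type = map(string)
  default = {
    "reithera-rtcov201" = "exomcloudrosareitherartcov-YRX1M2GJKD6H"
    "dompe-rlx0120"     = "exomcloudrosadomperlx0120p-1SCGY0RG5JXFF"
  }
}

data "aws_iam_role" "role" {
  for_each = var.iam_roles # each.key is the sponsor-study pair
  name     = each.value    # the actual role name in AWS
}

# Then, for example:
# data.aws_iam_role.role["reithera-rtcov201"].arn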

Create multiple GCP storage buckets using terraform

I have used Terraform scripts to create resources in GCP. The scripts are working fine. But my question is: how do I create multiple storage buckets using a single script?
I have two files for creating the storage bucket:
main.tf, which has the Terraform code to create the buckets.
variables.tf, which has the actual variables like the storage bucket name, project_id, etc., and looks like this:
variable "storage_class" { default = "STANDARD" }
variable "name" { default = "internal-demo-bucket-1"}
variable "location" { default = "asia-southeast1" }
How can I provide more than one bucket name in the variable name? I tried to provide multiple names in an array but the build failed.
I don't know all your requirements; however, suppose you need to create a few buckets with different names, while all other bucket characteristics are constant for every bucket in the set under discussion.
I would create a variable, e.g. bucket_name_set, in the variables.tf file:
variable "bucket_name_set" {
description = "A set of GCS bucket names..."
type = list(string)
}
Then, in the terraform.tfvars file, I would provide unique names for the buckets:
bucket_name_set = [
  "some-bucket-name-001",
  "some-bucket-name-002",
  "some-bucket-name-003",
]
Now, for example, in the main.tf file I can describe the resources:
resource "google_storage_bucket" "my_bucket_set" {
project = "some project id should be here"
for_each = toset(var.bucket_name_set)
name = each.value # note: each.key and each.value are the same for a set
location = "some region should be here"
storage_class = "STANDARD"
force_destroy = true
uniform_bucket_level_access = true
}
Terraform documentation is here: The for_each Meta-Argument
Terraform documentation for the GCS bucket is here: google_storage_bucket
Terraform documentation for input variables is here: Input Variables
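If other parts of the configuration need the buckets, they can be referenced by key; for example, an output collecting the bucket URLs (a small illustrative addition, the output name is arbitrary):

output "bucket_urls" {
  # map each bucket name to its gs:// URL
  value = { for name, bucket in google_storage_bucket.my_bucket_set : name => bucket.url }
}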
Have you considered using pre-built Terraform modules? It becomes very easy if you use the GCS module for bucket creation. It has an option to specify how many buckets you need to create, and even the subfolders. I am including the module below for your reference:
https://registry.terraform.io/modules/terraform-google-modules/cloud-storage/google/latest
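A rough sketch of what that could look like (the input names such as names, prefix, and location are taken from the module's documentation and may differ between module versions, so double-check against the link above; project ID and version constraint are placeholders):

module "gcs_buckets" {
  source     = "terraform-google-modules/cloud-storage/google"
  version    = "~> 5.0"          # pick the version documented in the registry
  project_id = "my-project-id"   # hypothetical project ID
  prefix     = "internal-demo"   # common prefix for all bucket names
  names      = ["bucket-1", "bucket-2", "bucket-3"]
  location   = "asia-southeast1"
}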

Replicate infrastructure using Terraform module throws error for same name IAM policy

I have created basic infrastructure as below, and I'm trying to see if modules work for me to replicate infrastructure on AWS using Terraform.
variable "access_key" {}
variable "secret_key" {}
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
alias = "us-east-1"
region = "us-east-1"
}
variable "company" {}
module "test1" {
source = "./modules"
}
module "test2" {
source = "./modules"
}
And my module is as follows:
resource "aws_iam_policy" "api_dbaccess_policy" {
name = "lambda_dbaccess_policy"
policy = "${file("${path.module}/api-dynamodb-policy.json")}"
}
But somehow, when I use the same module twice in my main.tf, it gives me an error about the same-named IAM policy. How should I handle such a scenario?
I want to use the same main.tf for the prod/stage/dev environments. How do I achieve that?
My actual module looks like the code in this question.
How do I make use of modules and name module resources dynamically, e.g. stage_iam_policy / prod_iam_policy? Is this the right approach?
You're naming the IAM policy the same regardless of where you use the module. IAM policies are uniquely identified by their name rather than by a random ID (unlike EC2 instances, which are identified as i-...), so you can't have two IAM policies with the same name in the same AWS account.
Instead, you must add some extra uniqueness to the name, for example by appending a module parameter to it, like this:
module "test1" {
source = "./modules"
enviroment = "foo"
}
module "test1" {
source = "./modules"
enviroment = "bar"
}
and in your module you'd have the following:
variable "enviroment" {}
resource "aws_iam_policy" "api_dbaccess_policy" {
name = "lambda_dbaccess_policy_${var.enviroment}"
policy = "${file("${path.module}/api-dynamodb-policy.json")}"
}
Alternatively, if you don't have something useful to key off, such as a name or environment, then you could just use some randomness:
resource "random_pet" "random" {}
resource "aws_iam_policy" "api_dbaccess_policy" {
name = "lambda_dbaccess_policy_${random_pet.random.id}"
policy = "${file("${path.module}/api-dynamodb-policy.json")}"
}