I want access to my AWS Account ID in Terraform. I am able to get at it with aws_caller_identity, per the documentation. How do I then use the value it returns? In the example below I am trying to use it in an S3 bucket name:
data "aws_caller_identity" "current" {}
output "account_id" {
value = data.aws_caller_identity.current.account_id
}
resource "aws_s3_bucket" "test-bucket" {
bucket = "test-bucket-${account_id}"
}
Trying to use account_id this way gives me the error "A reference to a resource type must be followed by at least one attribute access, specifying the resource name." I expect I'm not referencing it correctly?
If you have a

data "aws_caller_identity" "current" {}

then you can define a local for that value:

locals {
  account_id = data.aws_caller_identity.current.account_id
}
and then use it like this:

output "account_id" {
  value = local.account_id
}

resource "aws_s3_bucket" "test-bucket" {
  bucket = "test-bucket-${local.account_id}"
}
Terraform resolves locals based on their dependencies, so you can create locals that depend on other locals, on resources, on data blocks, and so on.
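For example, a minimal sketch of one local building on another (the bucket_prefix name here is just illustrative):

locals {
  account_id    = data.aws_caller_identity.current.account_id
  # locals can reference other locals; Terraform orders them by dependency
  bucket_prefix = "test-bucket-${local.account_id}"
}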
Any time you declare a data source in Terraform, it exports attributes related to that data source so that you can reference them elsewhere in your configuration and interpolate them in various ways.
In your case, you are already referencing the account ID value in the output block.
In the same way, you can construct the string for the bucket name as follows:
resource "aws_s3_bucket" "test-bucket" {
bucket = "test-bucket-${data.aws_caller_identity.current.account_id}"
}
I would highly recommend you go through the Terraform syntax documentation, which can help you better understand resources, data sources, and expressions:
https://www.terraform.io/docs/language/expressions/references.html
Related
How can I get AWS configuration parameters stored in JSON format on S3 into Terraform scripts? I want to use those parameters in other resources.
I just want to externalise all the variable parameters in the script.
e.g. we have the data source aws_ssm_parameter to get AWS SSM parameters:
data "aws_ssm_parameter" "foo" {
  name = "foo"
}
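(For completeness, the parameter's value is then read through the data source's attributes; the output name below is illustrative:)

output "foo_value" {
  value     = data.aws_ssm_parameter.foo.value
  sensitive = true # SecureString parameter values should be treated as sensitive
}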
Similarly, how can we get AWS app configurations in Terraform scripts?
From my understanding, you need to read an S3 object's value and use it in Terraform.
A data source is used because it is an external resource that we are referencing.
I would use it like this:
data "aws_s3_object" "obj" {
bucket = "foo"
key = "foo.json"
}
output "s3_json_value" {
value = data.aws_s3_object.obj.body
}
To parse the JSON you can use jsondecode:
locals {
  a_variable = jsondecode(data.aws_s3_object.obj.body)
}

output "Username" {
  value = local.a_variable.name
}
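This assumes the foo.json object holds a JSON document with a name key, for example something like:

{
  "name": "some-user"
}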
I'm importing roles which have already been created in the AWS console, and unfortunately the names are strange. So in order to use those roles I am trying the following.
I have two IAM roles, as follows:
data "aws_iam_role" "reithera-rtcov201" {
name = "exomcloudrosareitherartcov-YRX1M2GJKD6H"
}
data "aws_iam_role" "dompe-rlx0120" {
name = "exomcloudrosadomperlx0120p-1SCGY0RG5JXFF"
}
In this file I have two variables, as follows:

sponsor = ["reithera", "dompe"]
study   = ["rtcov201", "rlx0120"]
I'm trying the following, but Terraform doesn't allow ${} interpolation in a resource reference:

data.aws_iam_role.${var.sponsor}-${var.study}.arn

Do you know of any solution for this?
It's not possible. You cannot dynamically create references to resources.
Instead of two separate data sources you should create one:

variable "iam_roles" {
  default = ["exomcloudrosareitherartcov-YRX1M2GJKD6H", "exomcloudrosadomperlx0120p-1SCGY0RG5JXFF"]
}
and then
data "aws_iam_role" "role" {
for_each = toset(var.iam_roles)
name = each.key
}
and you can refer to them using the role name:

data.aws_iam_role.role["exomcloudrosareitherartcov-YRX1M2GJKD6H"].arn
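If you would rather address the roles by the sponsor-study pairs from the question instead of the long generated names, one alternative sketch (the variable and data source names here are illustrative, reusing the role names above) is a map keyed that way:

variable "iam_roles_by_alias" {
  type = map(string)
  default = {
    "reithera-rtcov201" = "exomcloudrosareitherartcov-YRX1M2GJKD6H"
    "dompe-rlx0120"     = "exomcloudrosadomperlx0120p-1SCGY0RG5JXFF"
  }
}

data "aws_iam_role" "by_alias" {
  for_each = var.iam_roles_by_alias
  name     = each.value
}

# the friendly key now selects the role
output "reithera_rtcov201_arn" {
  value = data.aws_iam_role.by_alias["reithera-rtcov201"].arn
}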
Instead of having to update this value manually each time, can I read this value directly into my terraform.tfvars file?
monitoring_role_arn = "arn:aws:iam::account:role/value"
You can use locals.
Define in a *.tf file:

locals {
  monitoring_role_arn = "arn:aws:iam::account:role/value"
}

Elsewhere in your *.tf files (locals cannot be referenced from terraform.tfvars) you can refer to it as below:

your_var = local.monitoring_role_arn
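For instance, assuming the ARN feeds an RDS instance's enhanced monitoring (the question doesn't say where monitoring_role_arn is consumed, so this resource is illustrative):

resource "aws_db_instance" "example" {
  identifier          = "example-db" # illustrative values throughout
  engine              = "mysql"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "admin"
  password            = "change-me" # use a secrets manager in practice
  skip_final_snapshot = true

  monitoring_interval = 60
  monitoring_role_arn = local.monitoring_role_arn
}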
Role lookup option
Alternatively, use the IAM role lookup with the name given to the targeted role.
Ref: Data Source: aws_iam_role
To look up the resource by role name:

data "aws_iam_role" "monitoring_role_arn" {
  name = "an_example_role_name" # the name of the role as it appears in the AWS IAM console
}
To get the ARN, use the following expression:

data.aws_iam_role.monitoring_role_arn.arn
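Putting the two together, the local can be derived from the lookup instead of a hard-coded string:

locals {
  monitoring_role_arn = data.aws_iam_role.monitoring_role_arn.arn
}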
I have used Terraform scripts to create resources in GCP. The scripts are working fine. But my question is: how do I create multiple storage buckets using a single script?
I have two files for creating the storage buckets:
main.tf, which has the Terraform code to create the buckets.
variables.tf, which has the actual variables like the storage bucket name, project_id, etc., and looks like this:
variable "storage_class" { default = "STANDARD" }
variable "name" { default = "internal-demo-bucket-1"}
variable "location" { default = "asia-southeast1" }
How can I provide more than one bucket name in the name variable? I tried to provide multiple names in an array, but the build failed.
I don't know all your requirements; however, suppose you need to create a few buckets with different names, while all other bucket characteristics are constant for every bucket in the set under discussion.
I would create a variable, e.g. bucket_name_set, in a variables.tf file:
variable "bucket_name_set" {
description = "A set of GCS bucket names..."
type = list(string)
}
Then, in the terraform.tfvars file, I would provide unique names for the buckets:
bucket_name_set = [
  "some-bucket-name-001",
  "some-bucket-name-002",
  "some-bucket-name-003",
]
Now, for example, in the main.tf file I can describe the resources:
resource "google_storage_bucket" "my_bucket_set" {
project = "some project id should be here"
for_each = toset(var.bucket_name_set)
name = each.value # note: each.key and each.value are the same for a set
location = "some region should be here"
storage_class = "STANDARD"
force_destroy = true
uniform_bucket_level_access = true
}
Terraform description is here: The for_each Meta-Argument
Terraform description for the GCS bucket is here: google_storage_bucket
Terraform description for input variables is here: Input Variables
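As a usage note, each instance created with for_each is addressed by its key elsewhere in the configuration, for example:

output "first_bucket_url" {
  value = google_storage_bucket.my_bucket_set["some-bucket-name-001"].url
}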
Have you considered using Terraform-provided modules? It becomes very easy if you use the GCS module for bucket creation. It has an option to specify how many buckets you need to create, and even the subfolders. I am including the module below for your reference, with a short usage sketch after the link.
https://registry.terraform.io/modules/terraform-google-modules/cloud-storage/google/latest
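A minimal sketch of that module's usage (the input names follow the registry page for this module; verify the version and inputs against the current docs before use):

module "gcs_buckets" {
  source     = "terraform-google-modules/cloud-storage/google"
  version    = "~> 5.0" # pin to whatever release is current for your provider
  project_id = "your-project-id"
  prefix     = "internal-demo"
  names      = ["bucket-1", "bucket-2"] # one bucket is created per name
}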
I know the aws_s3_bucket data source can be used to get a reference to an existing bucket, but how would it be used to ensure that a new potential bucket name is unique?
I'm thinking of a loop using random numbers, but how can that be used to search for a bucket name which has not been used?
As discussed in the comments, this behaviour can be achieved with the bucket_prefix functionality.
This code:
resource "aws_s3_bucket" "my_s3_bucket" {
bucket_prefix = "my-stackoverflow-bucket-"
acl = "private"
tags = {
Name = "My bucket"
Environment = "Dev"
}
}
Produces a bucket whose name is the prefix followed by a unique generated suffix.
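Elsewhere in the configuration, the full generated name can be read back from the resource, e.g.:

output "generated_bucket_name" {
  value = aws_s3_bucket.my_s3_bucket.bucket # the prefix plus the generated suffix
}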
Another solution is to use bucket instead of bucket_prefix, together with random_uuid, for example:
resource "aws_s3_bucket" "my_s3_bucket" {
bucket = "my-s3-bucket-${random_uuid.uuid.result}"
}
resource "random_uuid" "uuid" {}
This will give you a name like this:

my-s3-bucket-ebb92011-3cd9-503f-0977-7371102405f5

Note that random_uuid generates its value once and stores it in state, so the bucket name stays stable across subsequent applies.