I am quite new to Terraform and GitLab CI, and there is something I am trying to do with them.
I want to use Terraform to create an IAM user and an S3 bucket, use policies to allow that IAM user certain operations on the bucket, and have the IAM user's credentials saved in the artifacts.
Now the above is going to be my core module.
The core module looks something like the below:
Contents of: aws-s3-iam-combo.git
(The credentials for the IAM user that Terraform itself runs as, say admin-user, would be stored in GitLab secrets.)
main.tf
resource "aws_s3_bucket" "bucket" {
bucket = "${var.name}"
acl = "private"
force_destroy = "true"
tags {
environment = "${var.tag_environment}"
team = "${var.tag_team}"
}
policy =<<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "${aws_iam_user.s3.arn}"
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::${var.name}",
"arn:aws:s3:::${var.name}/*"
]
}
]
}
EOF
}
resource "aws_iam_user" "s3" {
name = "${var.name}-s3"
force_destroy = "true"
}
resource "aws_iam_access_key" "s3" {
user = "${aws_iam_user.s3.name}"
}
resource "aws_iam_user_policy" "s3_policy" {
name = "${var.name}-policy-s3"
user = "${aws_iam_user.s3.name}"
policy =<<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::${var.name}",
"arn:aws:s3:::${var.name}/*"
]
}
]
}
EOF
}
outputs.tf
output "bucket" {
value = "${aws_s3_bucket.bucket.bucket}"
}
output "bucket_id" {
value = "${aws_s3_bucket.bucket.id}"
}
output "iam_access_key_id" {
value = "${aws_iam_access_key.s3.id}"
}
output "iam_access_key_secret" {
value = "${aws_iam_access_key.s3.secret}"
}
variables.tf
variable "name" {
type = "string"
}
variable "tag_team" {
type = "string"
default = ""
}
variable "tag_environment" {
type = "string"
default = ""
}
variable "versioning" {
type = "string"
default = false
}
variable "profile" {
type = "string"
default = ""
}
Anyone in the organization who now needs to create S3 buckets would create a new repo, something of the form:
main.tf
module "aws-s3-john-doe" {
source = "git::https://git#gitlab-address/terraform/aws-s3-iam-combo.git?ref=v0.0.1"
name = "john-doe"
tag_team = "my_team"
tag_environment = "staging"
}
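To actually hand the generated credentials to the requester (e.g. via terraform output or a pipeline artifact), the consumer repo could re-export the module's outputs. A minimal sketch, assuming the module and output names above:
output "iam_access_key_id" {
  value = "${module.aws-s3-john-doe.iam_access_key_id}"
}

output "iam_access_key_secret" {
  value = "${module.aws-s3-john-doe.iam_access_key_secret}"
}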
.gitlab-ci.yml
stages:
  - plan
  - apply

plan:
  image: hashicorp/terraform
  stage: plan
  script:
    - terraform init
    - terraform plan

apply:
  image: hashicorp/terraform
  stage: apply
  script:
    - terraform init
    - terraform apply
  when: manual
  only:
    - master
The pipeline would then trigger, and when this repo gets merged to master, the resources (S3 bucket and IAM user) would be created and the requester would have the IAM user's credentials.
Now the problem is that we have multiple AWS accounts. If a dev wants to create an S3 bucket in a particular account, that is not possible with the above setup, as the admin-user whose credentials are in GitLab secrets belongs to only one account.
I don't understand how to achieve this requirement. I have the idea below (please suggest if there's a better way to do this):
Have separate credentials set up in GitLab secrets for each AWS account in question
Take user input specifying the AWS account they want the resources created in, as a variable. Something like:
main.tf
module "aws-s3-john-doe" {
source = "git::https://git#gitlab-address/terraform/aws-s3-iam-combo.git?ref=v0.0.1"
name = "john-doe"
tag_team = "my_team"
tag_environment = "staging"
aws_account = "account1"
}
And then, in the aws-s3-iam-combo.git main.tf, somehow read the credentials for account1 from the GitLab secrets.
I do not know how to achieve this, e.g. how to read the required secret variable from GitLab.
Can someone please help here?
You asked this some time ago, but maybe my idea still helps someone...
You can do this with envsubst (requires the gettext package to be installed on your runner or in the Docker image used to run the pipeline).
Here is an example:
First, in the project settings you set your different account credentials as environment variables (project secrets):
SECRET_1: my-secret-1
SECRET_2: my-secret-2
SECRET_3: my-secret-3
Then, create a file that holds a Terraform variable, let's name it vars_template.tf:
variable "gitlab_secrets" {
description = "Variables from GitLab"
type = "map"
default = {
secret_1 = "$SECRET_1"
secret_2 = "$SECRET_2"
secret_3 = "$SECRET_3"
}
}
In your CI pipeline, you can now configure the following:
plan:dev:
  stage: plan dev
  script:
    - envsubst < vars_template.tf > ./vars_envsubst.tf
    - rm vars_template.tf
    - terraform init
    - terraform plan -out "planfile_dev"
  artifacts:
    paths:
      - environments/dev/planfile_dev
      - environments/dev/vars_envsubst.tf

apply:dev:
  stage: apply dev
  script:
    - cd environments/dev
    - rm vars_template.tf
    - terraform init
    - terraform apply -input=false "planfile_dev"
  dependencies:
    - plan:dev
It's important to note that the original vars_template.tf has to be deleted, otherwise Terraform will throw an error that the variable is defined multiple times. You could circumvent this by storing the template file in a directory which is outside the Terraform working directory though.
But from the Terraform state you can see that the variable values were correctly substituted:
"outputs": {
"gitlab_secrets": {
"sensitive": false,
"type": "map",
"value": {
"secret_1": "my-secret-1",
"secret_2": "my-secret-2",
"secret_3": "my-secret-3"
}
}
}
You can then access the values with "${var.gitlab_secrets["secret_1"]}" in your Terraform resources etc.
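Tying this back to the multi-account use case, such a value could for example feed an aliased AWS provider configuration. A minimal sketch; the key names account1_access_key/account1_secret_key and the region are assumptions and would have to match your secrets map:
provider "aws" {
  alias      = "account1"
  region     = "eu-central-1"
  access_key = "${var.gitlab_secrets["account1_access_key"]}"
  secret_key = "${var.gitlab_secrets["account1_secret_key"]}"
}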
UPDATE: Note that this method will store the secrets in the Terraform state file, which can be a potential security issue if the file is stored in an S3 bucket for collaborative work with Terraform. The bucket should at least be encrypted. In addition, it's recommended to limit access to the files with ACLs so that, e.g., only a user terraform has access to it. And, of course, a user could reveal the secrets via Terraform outputs...
I have a situation where I am having Terraform create a random password and store it in AWS Secrets Manager.
My Terraform password and secrets manager config:
resource "random_password" "my_password" {
length = 16
lower = true
upper = true
number = true
special = true
override_special = "##$%"
}
resource "aws_secretsmanager_secret" "my_password_secret" {
name = "/development/my_password"
}
resource "aws_secretsmanager_secret_version" "my_password_secret_version" {
secret_id = aws_secretsmanager_secret.my_password_secret.id
secret_string = random_password.my_password.result
}
The above works well. However, I am not clear on how to achieve my final goal...
I have an AWS EC2 instance which is also configured via Terraform. When the system boots, it executes some cloud-init config which runs a setup script (a Bash script). The setup script needs to install some server software and set a password for that software. I am not certain how to securely access my_password from that Bash script during setup.
My Terraform config for the instance and cloud-init config:
resource "aws_instance" "my_instance_1" {
ami = data.aws_ami.amazon_linux_2.id
instance_type = "m5a.2xlarge"
user_data = data.cloudinit_config.my_instance_1.rendered
...
}
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
my_password=`<MY PASSWORD IS NEEDED HERE>` # TODO retrieve via cURL call to Secrets Manager API?
server_password=$my_password /opt/srv/bin/install.sh
EOF
}
}
I need to be able to securely retrieve the password from the AWS Secrets Manager when the cloud-init script runs, as I have read that embedding it in the bash script is considered insecure.
I have also read that AWS has the notion of Temporary Credentials, and that these can be associated with an EC2 instance - https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
Using Terraform can I create temporary credentials (say 10 minutes TTL) and grant them to my AWS EC2 instance, so that when my Bash script runs during cloud-init it can retrieve the password from the AWS Secrets Manager?
I have seen that on the Terraform aws_instance resource I can associate an iam_instance_profile, and I have started by trying something like:
resource "aws_iam_instance_profile" "my_instance_iam_instance_profile" {
name = "my_instance_iam_instance_profile"
path = "/development/"
role = aws_iam_role.my_instance_iam_role.name
tags = {
Environment = "dev"
}
}
resource "aws_iam_role" "my_instance_iam_role" {
name = "my_instance_iam_role"
path = "/development/"
// TODO - what how to specify a temporary credential access to a specific secret in AWS Secrets Manager from EC2???
tags = {
Environment = "dev"
}
}
resource "aws_instance" "my_instance_1" {
ami = data.aws_ami.amazon_linux_2.id
instance_type = "m5a.2xlarge"
user_data = data.cloudinit_config.my_instance_1.rendered
iam_instance_profile = join("", [aws_iam_instance_profile.my_instance_iam_instance_profile.path, aws_iam_instance_profile.my_instance_iam_instance_profile.name])
...
}
Unfortunately, I can't seem to find any details on what I should put in the Terraform aws_iam_role that would allow my EC2 instance to access the secret in AWS Secrets Manager for a temporary period of time.
Can anyone advise? I would also be open to alternative approaches as long as they are also secure.
Thanks
You can create an aws_iam_policy or an inline policy which allows access to certain SSM parameters based on date and time.
In the case of an inline policy, this can be attached to the instance role, which would look something like this:
resource "aws_iam_role" "my_instance_iam_role" {
name = "my_instance_iam_role"
path = "/development/"
inline_policy {
name = "my_inline_policy"
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": "ssm:GetParameters",
"Resource": "arn:aws:ssm:us-east-2:123456789012:parameter/development-*",
"Condition": {
"DateGreaterThan": {"aws:CurrentTime": "2020-04-01T00:00:00Z"},
"DateLessThan": {"aws:CurrentTime": "2020-06-30T23:59:59Z"}
}
}]
})
}
tags = {
Environment = "dev"
}
}
So in the end the suggestions from @ervin-szilagyi got me 90% of the way there... I then needed to make some small changes to his suggestion. I am including my updated changes here to hopefully help others who struggle with this.
My aws_iam_role that allows temporary access (10 minutes) to the password now looks like:
resource "aws_iam_role" "my_instance_iam_role" {
name = "my_instance_iam_role"
path = "/development/"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
inline_policy {
name = "access_my_password_iam_policy"
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetResourcePolicy",
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:ListSecretVersionIds"
],
"Resource": aws_secretsmanager_secret.my_password_secret.arn,
"Condition": {
"DateGreaterThan": { "aws:CurrentTime": timestamp() },
"DateLessThan": { "aws:CurrentTime": timeadd(timestamp(), "10m") }
}
},
{
"Effect": "Allow",
"Action": "secretsmanager:ListSecrets",
"Resource": "*"
}
]
})
}
tags = {
Environment = "dev"
}
}
To retrieve the password during cloud-init, in the end I switched to using the aws CLI command as opposed to cURL, which yielded a cloud-init config like the following:
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
# Retrieve SA password from AWS Secrets Manager
command="aws --output text --region ${local.aws_region} secretsmanager get-secret-value --secret-id ${aws_secretsmanager_secret.my_password_secret.id} --query SecretString"
max_retry=5
counter=0
until my_password=$($command)
do
sleep 1
[[ counter -eq $max_retry ]] && echo "Failed!" && exit 1
echo "Attempt #$counter - Unable to retrieve AWS Secret, trying again..."
((counter++))
done
server_password=$my_password /opt/srv/bin/install.sh
EOF
}
}
There are two main ways to achieve this:
pass the value as is during the instance creation with terraform
post-bootstrap invocation of some script
Your approach of polling it in the cloud-init is a hybrid one, which is perfectly fine, but I'm not sure whether you actually need to go down that route.
Let's explore the first option, where you do everything in Terraform. There are two sub-options, depending on whether you create the secret and the instance within the same Terraform execution run (within the same folder in which the code resides), or whether it's a two-step process where you create the secret first and the instance afterwards. There is a minor difference between the two in how the secret value is passed to the script.
Case A: in case they are created together:
You can pass the password directly to the script.
resource "random_password" "my_password" {
length = 16
lower = true
upper = true
number = true
special = true
override_special = "##$%"
}
resource "aws_secretsmanager_secret" "my_password_secret" {
name = "/development/my_password"
}
resource "aws_secretsmanager_secret_version" "my_password_secret_version" {
secret_id = aws_secretsmanager_secret.my_password_secret.id
secret_string = random_password.my_password.result
}
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
server_password=${random_password.my_password.result} /opt/srv/bin/install.sh
EOF
}
}
Case B: if they are created in separate folders
You could use a data source to get the secret value in Terraform (the role with which you deploy your Terraform code will need the secretsmanager:GetSecretValue permission):
data "aws_secretsmanager_secret_version" "my_password" {
secret_id = "/development/my_password"
}
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
server_password=${data.aws_secretsmanager_secret_version.my_password.secret_string} /opt/srv/bin/install.sh
EOF
}
}
In both cases you wouldn't need to assign SSM permissions to the EC2 instance profile attached to the instance, you wouldn't need to use curl or other means in the script, and the password would not be part of your Bash script source.
It will be stored in your Terraform state, though, so you should make sure that access to the state is restricted.
Even with the hybrid approach, where you get the secret from Secrets Manager during instance bootstrap, the password would still be stored in your state, because you are creating the secret with the random_password resource (see Terraform's documentation on sensitive data in state).
Now, let's look at option 2. It is very similar to your approach, but instead of doing it in the user data, you can use Systems Manager Run Command to start your installation script as a post-bootstrap step. Then, depending on how you invoke the script (whether it is present locally on the instance, or you are using a document with State Manager), you can either pass the secret to it as a variable again, or get it from Secrets Manager with the AWS CLI or curl, or whatever you prefer (which will require the necessary IAM permissions).
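For illustration, a minimal sketch of option 2 using an SSM Command document invoked through a State Manager association; the secret id and install path are taken from the question, while the resource names and the exact command are assumptions:
resource "aws_ssm_document" "install_server" {
  name          = "install-server-software"
  document_type = "Command"

  content = jsonencode({
    schemaVersion = "2.2"
    description   = "Install server software and set its password"
    mainSteps = [
      {
        action = "aws:runShellScript"
        name   = "runInstallScript"
        inputs = {
          runCommand = [
            # The instance role still needs secretsmanager:GetSecretValue on this secret.
            "my_password=$(aws --output text secretsmanager get-secret-value --secret-id /development/my_password --query SecretString)",
            "server_password=$my_password /opt/srv/bin/install.sh",
          ]
        }
      },
    ]
  })
}

resource "aws_ssm_association" "install_server" {
  name = aws_ssm_document.install_server.name

  targets {
    key    = "InstanceIds"
    values = [aws_instance.my_instance_1.id]
  }
}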
I'm implementing a solution to back up my Oracle RDS database using AWS Backup. I'd like to have one vault in my current region and a backup vault in a different region. Being somewhat new to Terraform, I'm not quite sure how to accomplish this. Would I add another AWS provider in a different region? Some of my code is below for reference:
providers.tf:
# Configure the AWS Provider
provider "aws" {
  profile = "sandbox"
  region  = var.primary_region # resolves to us-east-1
  alias   = "primary"

  allowed_account_ids = [
    var.account_id
  ]
}
------------------------------------------------------
backups.tf:
resource "aws_backup_region_settings" "test" {
resource_type_opt_in_preference = {
"RDS" = true
}
}
resource "aws_backup_vault" "test" {
name = "backup_vault"
kms_key_arn = aws_kms_key.sensitive.arn
}
# Would like this to be created in us-west-2:
resource "aws_backup_vault" "test_destination" {
name = backup_destination_vault"
kms_key_arn = aws_kms_key.sensitive.arn
}
resource "aws_backup_plan" "backup" {
name = "oasis-backup-plan"
rule {
rule_name = "backup"
target_vault_name = aws_backup_vault.backup.name
schedule = "cron(0 12-20 * * ? *)"
copy_action {
destination_vault_arn = aws_backup_vault.backup_destination.arn
}
}
}
resource "aws_iam_role" "backup" {
name = "backup_role"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": ["sts:AssumeRole"],
"Effect": "allow",
"Principal": {
"Service": ["backup.amazonaws.com"]
}
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "backup" {
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSBackupServiceRolePolicyForBackup"
role = aws_iam_role.backup.name
}
resource "aws_backup_selection" "backup" {
iam_role_arn = aws_iam_role.backup.arn
name = "backup_selection"
plan_id = aws_backup_plan.backup.id
resources = [
aws_db_instance.oasis.arn
data.aws_db_instance.backup.db_instance_arn # My Oracle DB, already existing
]
}
I am aware that AWS Backup is heavily leveraged within AWS Organizations. Although we use that pattern across our numerous accounts, I'm trying to avoid getting that level of control involved at this point; I'm just doing a POC to get a reasonable backup plan to a DR region going.
In order to do what you want, you need to use the Terraform feature that allows you to configure multiple (aliased) provider configurations:
https://www.terraform.io/docs/language/providers/configuration.html
Once you've configured that, you can specify which provider to use when provisioning the second vault, and everything should work without much issue.
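Applied to your code, a minimal sketch might look like the following; the alias name replica and the second KMS key are assumptions (KMS keys are regional, so the destination vault needs a key in us-west-2):
provider "aws" {
  profile = "sandbox"
  region  = "us-west-2"
  alias   = "replica"

  allowed_account_ids = [
    var.account_id
  ]
}

# Destination vault created through the aliased provider, i.e. in us-west-2
resource "aws_backup_vault" "test_destination" {
  provider    = aws.replica
  name        = "backup_destination_vault"
  kms_key_arn = aws_kms_key.sensitive_replica.arn
}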
I am trying to update an IAM role and its attached policy with Terraform through GitLab CI. My Terraform code looks like the below:
data "aws_iam_policy_document" "billing-roles" {
statement {
effect = "Allow"
principals {
type = "Federated"
identifiers = ["${var.samlprovider_arn}"]
}
actions = ["sts:AssumeRoleWithSAML"]
condition {
test = "StringEquals"
variable ="SAML:aud"
values = ["https://signin.aws.amazon.com/saml"]
}
}
}
resource "aws_iam_role" "billing_role" {
name = "billing-role"
permissions_boundary = "${var.permissions_boundary_arn}"
assume_role_policy = "${data.aws_iam_policy_document.billing-roles.json}"
tags = {
Applicatio_ID = "${var.app_id}"
Environment = "${var.environment}"
Name = "billing-role"
Owner = "Terraform"
}
}
resource "aws_iam_policy" "billing_policy" {
name = "billing-policy"
policy= "${file("${path.module}/policies/billing-role-policy.json")}"
}
resource "aws_iam_role_policy_attachment" "billing_attachment" {
role = aws_iam_role.billing_role.name
policy_arn = aws_iam_policy.billing_policy.arn
}
I am running the various Terraform phases (init, plan, apply) through GitLab CI. This works the first time, but subsequent runs fail with an EntityAlreadyExists error.
The .gitlab-ci.yml looks like this:
include:
  - project: 'infrastructure/infrastructure-code-cicd-files'
    ref: master
    file: '.for_terraform_iam.yml'

stages:
  - init
  - plan
  - apply

tf_init:
  extends: .tf_init
  tags:
    - integration
  stage: init
  variables:
    ACCOUNT: "ACCOUNT_ID"
    ASSUME_ROLE: "arn:aws:iam::ACCOUNT_ID:role/devops-cross-account"
    backend_bucket_name: "iam-role-backend-${ACCOUNT}"
    tfstate_file: "iam-role/terraform.tfstate"

tf_plan:
  extends: .tf_plan
  variables:
    ASSUME_ROLE: "arn:aws:iam::ACCOUNT_ID:role/devops-cross-account"
  tags:
    - integration
  stage: plan

tf_apply:
  extends: .tf_apply
  variables:
    ASSUME_ROLE: "arn:aws:iam::ACCOUNT_ID:role/devops-cross-account"
  tags:
    - integration
  stage: apply
This GitLab CI configuration includes a utility file which has all the Terraform logic for init, plan and apply.
I am running the setup on Terraform 0.12.13.
Terraform import, though successful in importing the resources, does not help here, as Terraform still complains about "EntityAlreadyExists".
Terraform taint does not work due to a bug in the Terraform version I am using.
I want a workflow where, once the IAM role is created, its attached policy can be updated by an Ops engineer, an approver approves the merge request, and that way the IAM role gets the additional services the Ops engineer needs.
Is there a way we can update the IAM policy here? I understand that updating an IAM role would require detaching the policies first and then attaching the new policies to it.
Please help
The issue was with passing the terraform.tfstate file into the plan stage, which I had missed. We run "aws s3 cp s3://backend-bucket/keys ." to fetch the state file, and this has solved the problem.
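For reference, the more common way to make the state available to every stage is a remote backend rather than copying the state file by hand. A minimal sketch, reusing the bucket and key from the CI variables above; the region is an assumption:
terraform {
  backend "s3" {
    bucket = "iam-role-backend-ACCOUNT_ID" # matches backend_bucket_name above
    key    = "iam-role/terraform.tfstate"  # matches tfstate_file above
    region = "us-east-1"                   # assumption
  }
}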
I'm developing an SPA with several environments: dev, preprod and prod.
Each environment has a corresponding CloudFront distribution and website bucket.
We also have a static website with the user manual, which is served on the /documentation/* behavior.
This static website is stored in a separate bucket.
All environments share the same documentation, so there is only one bucket for all environments.
The project is a company portal, so the user documentation should not be accessible publicly.
To achieve that, we are using an OAI, so the bucket is accessible only through CloudFront (a Lambda@Edge function ensures the user has a valid token and redirects them otherwise, so the documentation is private).
Everything is fine when I deploy on dev using
terraform workspace select dev
terraform apply -var-file=dev.tfvars
But when I try to deploy on preprod
terraform workspace select preprod
terraform apply -var-file=preprod.tfvars
Terraform changes the OAI ID this way:
# module.s3.aws_s3_bucket_policy.documentation_policy will be updated in-place
~ resource "aws_s3_bucket_policy" "documentation_policy" {
      bucket = "my-bucket"
    ~ policy = jsonencode(
        ~ {
            ~ Statement = [
                ~ {
                      Action    = "s3:GetObject"
                      Effect    = "Allow"
                    ~ Principal = {
                        ~ AWS = "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3U64NEVQ9IQHH" -> "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3ORU58OAALJAP"
                      }
                      Resource  = "arn:aws:s3:::my-bucket/*"
                      Sid       = ""
                  },
              ]
              Version = "2012-10-17"
          }
      )
  }
Whereas I would like the principal to be added this way:
# module.s3.aws_s3_bucket_policy.documentation_policy will be updated in-place
~ resource "aws_s3_bucket_policy" "documentation_policy" {
      bucket = "my-bucket"
    ~ policy = jsonencode(
        ~ {
              Statement = [
                  {
                      Action    = "s3:GetObject"
                      Effect    = "Allow"
                      Principal = {
                          AWS = "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3U64NEVQ9IQHH"
                      }
                      Resource  = "arn:aws:s3:::my-bucket/*"
                      Sid       = ""
                  },
                + {
                    + Action    = "s3:GetObject"
                    + Effect    = "Allow"
                    + Principal = {
                        + AWS = "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3ORU58OAALJAP"
                      }
                    + Resource  = "arn:aws:s3:::my-bucket/*"
                    + Sid       = ""
                  },
              ]
              Version = "2012-10-17"
          }
      )
  }
Is there any way to achieve this using Terraform 0.13.5?
For information, here is my documentation-bucket.tf, which I import into each workspace once created:
resource "aws_s3_bucket" "documentation" {
bucket = var.documentation_bucket
tags = {
BillingProject = var.billing_project
Environment = var.env
Terraform = "Yes"
}
logging {
target_bucket = var.website_logs_bucket
target_prefix = "s3-access-logs/${var.documentation_bucket}/"
}
lifecycle {
prevent_destroy = true
}
}
data "aws_iam_policy_document" "documentation" {
statement {
actions = ["s3:GetObject"]
resources = ["${aws_s3_bucket.documentation.arn}/*"]
principals {
type = "AWS"
identifiers = [aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn]
}
}
}
resource "aws_s3_bucket_policy" "documentation_policy" {
bucket = aws_s3_bucket.documentation.id
policy = data.aws_iam_policy_document.documentation.json
}
Best regards
Assumptions:
Based on what you said, it seems you manage the same resource in different state files (an assumption based on "[...] which I import in each workspace once created").
You basically created a split-brain situation by doing so.
Assumption number two: you are deploying a single S3 bucket and multiple CloudFront distributions accessing this single bucket all in the same AWS Account.
Answer:
While it is totally fine to do so, this is not how it is supposed to be set up. A single resource should only be managed by a single Terraform state (workspace), or you will see this expected but unwanted behavior of an unstable state.
I would suggest managing the S3 bucket in a single workspace configuration, or even creating a new workspace called 'shared'.
In this workspace, you can use the terraform_remote_state data source to read the state of the other workspaces and build a policy that includes all the OAIs extracted from those states. Of course, you can do so without creating a new shared workspace; a rough sketch of the idea follows.
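A minimal sketch of that idea, assuming an S3 backend and that each environment workspace exports its OAI IAM ARN as an output named oai_iam_arn (the backend details and output name are assumptions):
# Read the state of each environment workspace
data "terraform_remote_state" "env" {
  for_each = toset(["dev", "preprod", "prod"])

  backend = "s3"
  config = {
    bucket = "my-terraform-state-bucket"
    key    = "env:/${each.key}/spa/terraform.tfstate"
    region = "eu-west-1"
  }
}

# One bucket policy statement granting s3:GetObject to every environment's OAI
data "aws_iam_policy_document" "documentation" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.documentation.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = [for env in data.terraform_remote_state.env : env.outputs.oai_iam_arn]
    }
  }
}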
I hope this helps, while it might not be the expected solution - and maybe my assumptions are wrong.
Last words:
It's not considered good practice to share resources between environments, as data will most likely stay when you decommission environments, and managing access can get complex and insecure.
Better to keep the environments as close as possible, as in the dev/prod parity principle of the twelve-factor app, but try not to share resources. If you feel you need to share resources, take some time and challenge your architecture again.
I am trying to use specific code from sibling directories and I am having some trouble doing so. As an example, please see below for how my files are structured:
parents/
  brother/
    main.tf
    outputs.tf
    variables.tf
  sister/
    main.tf
    outputs.tf
    variables.tf
I want to use a definition that I created in brother/main.tf in sister/main.tf and I can't seem to figure out the right way to do so. I have tried to use modules:
module "brother" {
source = "../brother"
}
Doing this sort of works, but not as intended. I am able to import and use the code, but Terraform then creates a bunch of additional resources under new resource addresses, using the new module name (if that makes any sense). Essentially, it creates the desired resource, but also creates 100+ other unwanted ones.
I can easily get this to work by putting the definition I want to use in the same sister directory, but that is not how I want to structure my files. What is the right way to do this? If I have an IAM role that is defined in brother, and I want to reference it in sister, how can I do that? Thanks in advance!
EDIT:
Current Code:
sister/main.tf
resource "aws_config_config_rule" "test-rule" {
name = "test-rule"
source {
owner = "AWS"
source_identifier = "TEST"
}
depends_on = ["aws_config_configuration_recorder.config_configuration_recorder"]
}
resource "aws_config_configuration_recorder" "config_configuration_recorder" {
name = "config_configuration_recorder"
role_arn = "${var.test_assume_role_arn}"
}
brother/main.tf
resource "aws_iam_role" "test_assume_role" {
name = "${var.test_assume_role_name}"
path = "/"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "config.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
So basically, I want to be able to use the test_assume_role ARN in sister/main.tf.
When you pull in another directory as a module, Terraform will create all of its resources again under the module's address.
It sounds like you want to reference the state of the already created resources. You can do this using the terraform_remote_state data source.
This allows you to read the outputs of another state without creating additional resources:
data "terraform_remote_state" "brother" {
backend = "..."
}
resource "aws_instance" "sister" {
# ...
subnet_id = "${data.terraform_remote_state.brother.my_output}"
}
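For this to work, brother has to expose the value as a root-level output in its state; the output name test_assume_role_arn below is just an illustration and must match whatever you reference from the remote state:
# brother/outputs.tf
output "test_assume_role_arn" {
  value = "${aws_iam_role.test_assume_role.arn}"
}
In sister you would then reference it as "${data.terraform_remote_state.brother.test_assume_role_arn}" (or data.terraform_remote_state.brother.outputs.test_assume_role_arn on Terraform 0.12 and later).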
An alternative to outputting an attribute of a resource to the Terraform state and reading it in with the terraform_remote_state data source would be to just use the appropriate data source for your resource in the first place where possible.
In this case you can use the aws_iam_role data source to look up the ARN for an IAM role by its name:
data "aws_iam_role" "example" {
name = "an_example_role_name"
}
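In sister/main.tf you could then reference the looked-up ARN directly, for example in the configuration recorder from the question (a sketch assuming the data source above):
resource "aws_config_configuration_recorder" "config_configuration_recorder" {
  name     = "config_configuration_recorder"
  role_arn = "${data.aws_iam_role.example.arn}"
}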