How to use code from terraform sibling directory - amazon-web-services

I am trying to use specific code from sibling directories and I am having some trouble doing so. As an example, please see below for how my files are structured:
parents/
    brother/
        main.tf
        outputs.tf
        variables.tf
    sister/
        main.tf
        outputs.tf
        variables.tf
I want to use a definition that I created in brother/main.tf in sister/main.tf and I can't seem to figure out the right way to do so. I have tried to use modules:
module "brother" {
source = "../brother"
}
This technically works, but not the way I intended. I am able to reference the code, but Terraform creates a whole second copy of brother's resources under the new module name (if that makes any sense). Essentially, it creates the desired resource, but also creates 100+ other unwanted resources.
I can easily get this to work by putting the definition I want to use in the same sister directory, but that is not how I want to structure my files. What is the right way to do this? If I have an IAM role that is defined in brother, and I want to reference it in sister, how can I do that? Thanks in advance!
EDIT:
Current Code:
sister/main.tf
resource "aws_config_config_rule" "test-rule" {
name = "test-rule"
source {
owner = "AWS"
source_identifier = "TEST"
}
depends_on = ["aws_config_configuration_recorder.config_configuration_recorder"]
}
resource "aws_config_configuration_recorder" "config_configuration_recorder" {
name = "config_configuration_recorder"
role_arn = "${var.test_assume_role_arn}"
}
brother/main.tf
resource "aws_iam_role" "test_assume_role" {
name = "${var.test_assume_role_name}"
path = "/"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "config.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
So basically, I want to be able to use the test_assume_role ARN in sister/main.tf.

When you include another directory as a module, Terraform instantiates every resource it defines a second time, under the new module path, which is why you see all those extra resources.
It sounds like you want to reference the already created resources instead. You can do this with the terraform_remote_state data source.
This lets you read the outputs of another state without creating any additional resources:
data "terraform_remote_state" "brother" {
backend = "..."
}
resource "aws_instance" "sister" {
# ...
subnet_id = "${data.terraform_remote_state.brother.my_output}"
}
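For this to work, brother must export the value as an output. Applied to the role ARN from the question, a minimal sketch could look like this (the S3 backend and its bucket/key/region settings are assumptions for illustration):

# brother/outputs.tf - expose the role ARN so other configurations can read it
output "test_assume_role_arn" {
  value = "${aws_iam_role.test_assume_role.arn}"
}

# sister/main.tf - read brother's state and use the exported ARN
data "terraform_remote_state" "brother" {
  backend = "s3"

  config {
    bucket = "my-terraform-state"        # assumed state bucket
    key    = "brother/terraform.tfstate" # assumed state key
    region = "us-east-1"                 # assumed region
  }
}

resource "aws_config_configuration_recorder" "config_configuration_recorder" {
  name     = "config_configuration_recorder"
  role_arn = "${data.terraform_remote_state.brother.test_assume_role_arn}"
}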

An alternative to outputting an attribute of a resource to the Terraform state and reading it in with the terraform_remote_state data source would be to just use the appropriate data source for your resource in the first place where possible.
In this case you can use the aws_iam_role data source to look up the ARN for an IAM role by its name:
data "aws_iam_role" "example" {
name = "an_example_role_name"
}
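You can then reference the ARN directly; for example, with the recorder from the question (assuming the data source's name matches the role brother created):

resource "aws_config_configuration_recorder" "config_configuration_recorder" {
  name     = "config_configuration_recorder"
  role_arn = "${data.aws_iam_role.example.arn}"
}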

Related

Terraform depends_on aws_iam_policy

I have a module that creates some AWS policies from JSON files.
terraform plan returns an error when it tries to attach the new resources (policies) to the role it is creating:
The "for_each" value depends on resource attributes that cannot be determined until apply
Fair enough, so I tried adding depends_on to the module that consumes the new resources (policies), but I still get the same error.
Here is my module:
module "admin" {
source = "./my_repo/admin"
depends_on = [
aws_iam_policy.common,
aws_iam_policy.ses_sending,
aws_iam_policy.athena_readonly,
]
policies = [
aws_iam_policy.common.arn,
aws_iam_policy.ses_sending.arn,
aws_iam_policy.athena_readonly.arn,
]
In the module ./my_repo/admin I have a file with this code (this is where the error occurs):
resource "aws_iam_role_policy_attachment" "me" {
for_each = toset(var.policies)
role = aws_iam_role.me.name
policy_arn = each.value
}
What's wrong?
Thank you
The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many policies will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.

Terraform resolving loop

I want an AWS instance that is allowed to read its own tags, but not those of any other resource. Normally, the idea of an instance being allowed to do something is expressed with aws_iam_role and aws_iam_instance_profile, but when writing the policy for the role, I can't refer to the ARN of the instance, since that creates a loop.
That makes sense: Terraform creates resources in dependency order, and once created it never revisits them. What I want seems to require creating the instance without an IAM role and attaching the role after the instance is created.
Is this possible with Terraform?
EDIT: (minimal example):
+; cat problem.tf
resource "aws_instance" "problem" {
instance_type = "t2.medium"
ami = "ami-08d489468314a58df"
iam_instance_profile = aws_iam_instance_profile.problem.name
}
resource "aws_iam_policy" "problem" {
name = "problem"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{ Effect = "Allow"
Action = ["ssm:GetParameters"]
Resource = [aws_instance.problem.arn]
}
]
})
}
resource "aws_iam_role" "problem" {
name = "problem"
managed_policy_arns = [aws_iam_policy.problem.id]
# Copy-pasted from aws provider documentation. AWS is overcomplicated.
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
}
resource "aws_iam_instance_profile" "problem" {
name = "problem"
role = aws_iam_role.problem.name
}
+; terraform apply -refresh=false
Acquiring state lock. This may take a few moments...
Releasing state lock. This may take a few moments...
╷
│ Error: Cycle: aws_iam_instance_profile.problem, aws_instance.problem, aws_iam_policy.problem, aws_iam_role.problem
│
│
╵
The problem here arises because you've used the managed_policy_arns shorthand to attach the policy to the role in the same resource that declares the role. That shorthand can be convenient in simple cases, but it can also create cycle problems as you've seen here, because it makes the role refer to the policy rather than the policy refer to the role.
The good news is that you can avoid the cycle by declaring that relationship in the opposite direction, either by using the separate aws_iam_role_policy_attachment resource type (which only declares the connection between the role and the policy) or by using aws_iam_role_policy to declare a policy that's directly attached to the role. You only really need the separate attachment if you intend to attach the same policy to multiple principals, so I'm going to show the simpler approach with aws_iam_role_policy here:
resource "aws_instance" "example" {
instance_type = "t2.medium"
ami = "ami-08d489468314a58df"
iam_instance_profile = aws_iam_instance_profile.example.name
}
resource "aws_iam_instance_profile" "example" {
name = "example"
role = aws_iam_role.example.name
}
resource "aws_iam_role" "example" {
name = "example"
# Allow the EC2 service to assume this role, so
# that the EC2 instance can act as it through its
# profile.
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
}
resource "aws_iam_role_policy" "example" {
name = "example"
role = aws_iam_role.example.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = ["ssm:GetParameters"]
Resource = [aws_instance.example.arn]
},
]
})
}
Now all of the dependency edges go in the correct order to avoid a cycle.
The policy won't be attached to the role until both the role and the instance exist, so it's important to consider that the software running in the instance might start up before the role's policy is assigned. That software should be prepared to encounter access-denied errors for some time after boot and keep retrying periodically until it succeeds, rather than aborting at the first error.
If this is part of a shared module that uses the EC2 instance as part of the abstraction it's creating, it can help the module's caller to make that hidden dependency on the aws_iam_role_policy explicit, by including it in any output values that refer to behavior of the EC2 instance that won't work until the role policy is ready. For example, if the EC2 instance provides an HTTPS service on port 443 that won't work until the policy is active:
output "service_url" {
value = "https://${aws_instance.example.private_ip}/"
# Anything which refers to this output value
# should also wait until the role policy is
# created before taking any of its actions,
# even though Terraform can't "see" that
# dependency automatically.
depends_on = [aws_iam_role_policy.example]
}
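For completeness, the attachment-based variant mentioned above would look roughly like this; a sketch that is mainly useful when the same policy must be attached to several roles:

resource "aws_iam_policy" "example" {
  name = "example"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["ssm:GetParameters"]
        Resource = [aws_instance.example.arn]
      },
    ]
  })
}

# The attachment refers to both the role and the policy, so neither of
# them has to refer to the other and no cycle can form.
resource "aws_iam_role_policy_attachment" "example" {
  role       = aws_iam_role.example.name
  policy_arn = aws_iam_policy.example.arn
}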

How to securely allow access to AWS Secrets Manager with Terraform and cloud-init

I have a setup in which Terraform creates a random password and stores it in AWS Secrets Manager.
My Terraform password and secrets manager config:
resource "random_password" "my_password" {
length = 16
lower = true
upper = true
number = true
special = true
override_special = "##$%"
}
resource "aws_secretsmanager_secret" "my_password_secret" {
name = "/development/my_password"
}
resource "aws_secretsmanager_secret_version" "my_password_secret_version" {
secret_id = aws_secretsmanager_secret.my_password_secret.id
secret_string = random_password.my_password.result
}
The above works well. However I am not clear on how to achieve my final goal...
I have an AWS EC2 instance which is also configured via Terraform. When the system boots, it executes some cloud-init config which runs a setup script (a Bash script). The Bash setup script needs to install some server software and set a password for that software. I am not certain how to securely access my_password from that Bash script during setup.
My Terraform config for the instance and cloud-init config:
resource "aws_instance" "my_instance_1" {
ami = data.aws_ami.amazon_linux_2.id
instance_type = "m5a.2xlarge"
user_data = data.cloudinit_config.my_instance_1.rendered
...
}
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
my_password=`<MY PASSWORD IS NEEDED HERE>` # TODO retrieve via cURL call to Secrets Manager API?
server_password=$my_password /opt/srv/bin/install.sh
EOF
}
}
I need to be able to securely retrieve the password from the AWS Secrets Manager when the cloud-init script runs, as I have read that embedding it in the bash script is considered insecure.
I have also read that AWS has the notion of Temporary Credentials, and that these can be associated with an EC2 instance - https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
Using Terraform, can I create temporary credentials (say, with a 10-minute TTL) and grant them to my EC2 instance, so that when my Bash script runs during cloud-init it can retrieve the password from AWS Secrets Manager?
I have seen that on the Terraform aws_instance resource I can associate an iam_instance_profile, and I have started by trying something like:
resource "aws_iam_instance_profile" "my_instance_iam_instance_profile" {
name = "my_instance_iam_instance_profile"
path = "/development/"
role = aws_iam_role.my_instance_iam_role.name
tags = {
Environment = "dev"
}
}
resource "aws_iam_role" "my_instance_iam_role" {
name = "my_instance_iam_role"
path = "/development/"
// TODO - what how to specify a temporary credential access to a specific secret in AWS Secrets Manager from EC2???
tags = {
Environment = "dev"
}
}
resource "aws_instance" "my_instance_1" {
ami = data.aws_ami.amazon_linux_2.id
instance_type = "m5a.2xlarge"
user_data = data.cloudinit_config.my_instance_1.rendered
iam_instance_profile = join("", [aws_iam_instance_profile.my_instance_iam_instance_profile.path, aws_iam_instance_profile.my_instance_iam_instance_profile.name])
...
}
Unfortunately I can't seem to find any details on what I should put in the Terraform aws_iam_role which would allow my EC2 instance to access the Secret in the AWS Secrets Manager for a temporary period of time.
Can anyone advise? I would also be open to alternative approaches as long as they are also secure.
Thanks
You can create an aws_iam_policy or an inline policy that allows access to certain resources (the example below uses SSM parameters) based on date and time.
In the case of an inline policy, it can be attached to the instance role and would look something like this:
resource "aws_iam_role" "my_instance_iam_role" {
name = "my_instance_iam_role"
path = "/development/"
inline_policy {
name = "my_inline_policy"
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": "ssm:GetParameters",
"Resource": "arn:aws:ssm:us-east-2:123456789012:parameter/development-*",
"Condition": {
"DateGreaterThan": {"aws:CurrentTime": "2020-04-01T00:00:00Z"},
"DateLessThan": {"aws:CurrentTime": "2020-06-30T23:59:59Z"}
}
}]
})
}
tags = {
Environment = "dev"
}
}
So in the end the suggestions from @ervin-szilagyi got me 90% of the way there... I then needed to make some small changes to his suggestion. I am including my updated changes here to hopefully help others who struggle with this.
My aws_iam_role that allows temporary access (10 minutes) to the password now looks like:
resource "aws_iam_role" "my_instance_iam_role" {
name = "my_instance_iam_role"
path = "/development/"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
inline_policy {
name = "access_my_password_iam_policy"
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetResourcePolicy",
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:ListSecretVersionIds"
],
"Resource": aws_secretsmanager_secret.my_password_secret.arn,
"Condition": {
"DateGreaterThan": { "aws:CurrentTime": timestamp() },
"DateLessThan": { "aws:CurrentTime": timeadd(timestamp(), "10m") }
}
},
{
"Effect": "Allow",
"Action": "secretsmanager:ListSecrets",
"Resource": "*"
}
]
})
}
tags = {
Environment = "dev"
}
}
To retrieve the password during cloud-init, in the end I switched to using the aws CLI command as opposed to cURL, which yielded a cloud-init config like the following:
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
# Retrieve SA password from AWS Secrets Manager
command="aws --output text --region ${local.aws_region} secretsmanager get-secret-value --secret-id ${aws_secretsmanager_secret.my_password_secret.id} --query SecretString"
max_retry=5
counter=0
until my_password=$($command)
do
sleep 1
[[ counter -eq $max_retry ]] && echo "Failed!" && exit 1
echo "Attempt #$counter - Unable to retrieve AWS Secret, trying again..."
((counter++))
done
server_password=$my_password /opt/srv/bin/install.sh
EOF
}
}
There are two main ways to achieve this:
1. pass the value as-is during instance creation with Terraform
2. post-bootstrap invocation of some script

Your approach of pulling it in cloud-init is a hybrid of the two, which is perfectly fine, but I'm not sure you actually need to go down that route.
Let's explore the first option, where you do everything in Terraform. There are two sub-options, depending on whether the secret and the instance are created within the same Terraform execution run (within the same folder in which the code resides), or in a two-step process where you create the secret first and then the instance. There is a minor difference between the two in how you pass the secret value to the script.
Case A: the secret and the instance are created together.
You can pass the password directly to the script.
resource "random_password" "my_password" {
length = 16
lower = true
upper = true
number = true
special = true
override_special = "##$%"
}
resource "aws_secretsmanager_secret" "my_password_secret" {
name = "/development/my_password"
}
resource "aws_secretsmanager_secret_version" "my_password_secret_version" {
secret_id = aws_secretsmanager_secret.my_password_secret.id
secret_string = random_password.my_password.result
}
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
server_password=${random_password.my_password.result} /opt/srv/bin/install.sh
EOF
}
}
Case B: the secret and the instance are created in separate folders.
You can use a data source to read the secret value in Terraform (the role with which you deploy your Terraform code will need the secretsmanager:GetSecretValue permission):
data "aws_secretsmanager_secret_version" "my_password" {
secret_id = "/development/my_password"
}
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
server_password=${data.aws_secretsmanager_secret_version.my_password.secret_string} /opt/srv/bin/install.sh
EOF
}
}
In both cases you wouldn't need to assign Secrets Manager permissions to the EC2 instance profile, you won't need curl or other tooling in the script, and the password won't be part of your Bash script.
It will, however, be stored in your Terraform state, so you should make sure that access to the state is restricted.
Note that even with the hybrid approach, where you fetch the secret from Secrets Manager during instance bootstrap, the password is still stored in your state, because you create the secret with resource "random_password" (see Terraform's documentation on sensitive data in state).
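One small, related precaution (my addition, not part of the original answer): if you also surface the password through an output, marking it sensitive at least keeps it out of plan and apply logs, although the value still lands in the state file:

# Hypothetical output; sensitive = true only redacts CLI output,
# the value is still written to the state in cleartext.
output "my_password" {
  value     = random_password.my_password.result
  sensitive = true
}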
Now, let's look at option 2. It is very similar to your approach, but instead of doing the work in user-data, you can use Systems Manager Run Command to start your installation script as a post-bootstrap step. Then, depending on how you invoke the script (whether it is present locally on the instance, or you are using a document with State Manager), you can either pass the secret to it as a variable again, or fetch it from Secrets Manager with the aws CLI, curl, or whatever you prefer (which will require the necessary level of IAM permissions).
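For illustration, invoking a script that already exists on the instance via Run Command could look like this (a sketch; the instance ID is a placeholder, and AWS-RunShellScript is the stock SSM document for running shell commands):

aws ssm send-command \
  --instance-ids "i-0123456789abcdef0" \
  --document-name "AWS-RunShellScript" \
  --parameters 'commands=["/opt/srv/bin/install.sh"]'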

How do I use AWS Backup in Terraform to create a vault in a different region?

I'm implementing a solution to back up my Oracle RDS database using AWS Backup. I'd like to have one vault in my current region and a backup vault in a different region. Being somewhat new to Terraform, I'm not quite sure how to accomplish this. Would I add another AWS provider in a different region? Some of my code is below for reference:
providers.tf:
# Configure the AWS Provider
provider "aws" {
  profile = "sandbox"
  region  = var.primary_region # resolves to us-east-1
  alias   = "primary"

  allowed_account_ids = [
    var.account_id
  ]
}
------------------------------------------------------
backups.tf:
resource "aws_backup_region_settings" "test" {
resource_type_opt_in_preference = {
"RDS" = true
}
}
resource "aws_backup_vault" "test" {
name = "backup_vault"
kms_key_arn = aws_kms_key.sensitive.arn
}
# Would like this to be created in us-west-2:
resource "aws_backup_vault" "test_destination" {
name = backup_destination_vault"
kms_key_arn = aws_kms_key.sensitive.arn
}
resource "aws_backup_plan" "backup" {
name = "oasis-backup-plan"
rule {
rule_name = "backup"
target_vault_name = aws_backup_vault.backup.name
schedule = "cron(0 12-20 * * ? *)"
copy_action {
destination_vault_arn = aws_backup_vault.backup_destination.arn
}
}
}
resource "aws_iam_role" "backup" {
name = "backup_role"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": ["sts:AssumeRole"],
"Effect": "allow",
"Principal": {
"Service": ["backup.amazonaws.com"]
}
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "backup" {
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSBackupServiceRolePolicyForBackup"
role = aws_iam_role.backup.name
}
resource "aws_backup_selection" "backup" {
iam_role_arn = aws_iam_role.backup.arn
name = "backup_selection"
plan_id = aws_backup_plan.backup.id
resources = [
aws_db_instance.oasis.arn
data.aws_db_instance.backup.db_instance_arn # My Oracle DB, already existing
]
}
I am aware that AWS Backup is often managed at the AWS Organizations level. Although we use that pattern across our numerous accounts, I'm trying to avoid involving that level of control at this point; I'm just doing a POC to get a reasonable backup plan to a DR region going.
So in order to do what you want, you need to use Terraform's support for multiple (aliased) provider configurations:
https://www.terraform.io/docs/language/providers/configuration.html
Once you've configured a second provider for the destination region, you can set the provider argument on the second vault, and everything should work without much issue.
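A minimal sketch of what that could look like; the alias name replica, the hard-coded us-west-2 region, and the extra KMS key are assumptions for illustration (KMS keys are regional, so the destination vault needs a key in its own region):

provider "aws" {
  profile = "sandbox"
  region  = "us-west-2"
  alias   = "replica"

  allowed_account_ids = [
    var.account_id
  ]
}

# KMS keys are regional, so the DR vault needs its own key in us-west-2.
resource "aws_kms_key" "sensitive_replica" {
  provider    = aws.replica
  description = "Key for the DR backup vault"
}

resource "aws_backup_vault" "test_destination" {
  provider    = aws.replica
  name        = "backup_destination_vault"
  kms_key_arn = aws_kms_key.sensitive_replica.arn
}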

Referencing gitlab secrets in Terraform

I am quite new to Terraform and GitLab CI and there is something that I am trying to do here.
I want to use Terraform to create an IAM user and an S3 bucket, use policies to allow certain operations on this S3 bucket for this IAM user, and have the IAM user's credentials saved in the artifactory.
Now the above is going to be my core module.
The core module looks something like the below:
Contents of: aws-s3-iam-combo.git
(The credentials for the IAM user using which all the Terraform would be run, say admin-user, would be stored in gitlab secrets.)
main.tf
resource "aws_s3_bucket" "bucket" {
bucket = "${var.name}"
acl = "private"
force_destroy = "true"
tags {
environment = "${var.tag_environment}"
team = "${var.tag_team}"
}
policy =<<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "${aws_iam_user.s3.arn}"
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::${var.name}",
"arn:aws:s3:::${var.name}/*"
]
}
]
}
EOF
}
resource "aws_iam_user" "s3" {
name = "${var.name}-s3"
force_destroy = "true"
}
resource "aws_iam_access_key" "s3" {
user = "${aws_iam_user.s3.name}"
}
resource "aws_iam_user_policy" "s3_policy" {
name = "${var.name}-policy-s3"
user = "${aws_iam_user.s3.name}"
policy =<<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::${var.name}",
"arn:aws:s3:::${var.name}/*"
]
}
]
}
EOF
}
outputs.tf
output "bucket" {
value = "${aws_s3_bucket.bucket.bucket}"
}
output "bucket_id" {
value = "${aws_s3_bucket.bucket.id}"
}
output "iam_access_key_id" {
value = "${aws_iam_access_key.s3.id}"
}
output "iam_access_key_secret" {
value = "${aws_iam_access_key.s3.secret}"
}
variables.tf
variable "name" {
type = "string"
}
variable "tag_team" {
type = "string"
default = ""
}
variable "tag_environment" {
type = "string"
default = ""
}
variable "versioning" {
type = "string"
default = false
}
variable "profile" {
type = "string"
default = ""
}
Anyone in the organization who now needs to create S3 buckets would need to create a new repo, something of the form:
main.tf
module "aws-s3-john-doe" {
source = "git::https://git#gitlab-address/terraform/aws-s3-iam-combo.git?ref=v0.0.1"
name = "john-doe"
tag_team = "my_team"
tag_environment = "staging"
}
gitlab-ci.yml
stages:
  - plan
  - apply

plan:
  image: hashicorp/terraform
  stage: plan
  script:
    - terraform init
    - terraform plan

apply:
  image: hashicorp/terraform
  stage: apply
  script:
    - terraform init
    - terraform apply
  when: manual
  only:
    - master
And then the pipeline would trigger, and when this repo gets merged to master, the resources (S3 bucket and IAM user) would be created and the user would have the IAM user's credentials.
Now the problem is that we have multiple AWS accounts. So if a dev wants to create an S3 bucket in a certain account, it would not be possible with the above setup, as the admin-user whose creds are in GitLab secrets is for one account alone.
I don't understand how to achieve this requirement. I have the below idea (please suggest if there's a better way to do this):
Have multiple different creds set up in GitLab secrets, one for each AWS account in question.
Take user input, specifying the AWS account they want the resources created in, as a variable. So something like, say:
main.tf
module "aws-s3-john-doe" {
source = "git::https://git#gitlab-address/terraform/aws-s3-iam-combo.git?ref=v0.0.1"
name = "john-doe"
tag_team = "my_team"
tag_environment = "staging"
aws_account = "account1"
}
And then in the aws-s3-iam-combo.git main.tf somehow read the creds for account1 from the GitLab secrets.
I don't know how to achieve this, i.e. how to read the required secret variable from GitLab, etc.
Can someone please help here?
You asked this some time ago, but maybe my idea still helps someone...
You can do this with envsubst (requires the gettext package to be installed on your runner or in the Docker image used to run the pipeline).
Here is an example:
First, in the project settings you set your different user accounts as environment variables (project secrets):
SECRET_1: my-secret-1
SECRET_2: my-secret-2
SECRET_3: my-secret-3
Then, create a file that holds a Terraform variable, let's name it vars_template.tf:
variable "gitlab_secrets" {
description = "Variables from GitLab"
type = "map"
default = {
secret_1 = "$SECRET_1"
secret_2 = "$SECRET_2"
secret_3 = "$SECRET_3"
}
}
In your CI pipeline, you can now configure the following:
plan:dev:
  stage: plan dev
  script:
    - envsubst < vars_template.tf > ./vars_envsubst.tf
    - rm vars_template.tf
    - terraform init
    - terraform plan -out "planfile_dev"
  artifacts:
    paths:
      - environments/dev/planfile_dev
      - environments/dev/vars_envsubst.tf

apply:dev:
  stage: apply dev
  script:
    - cd environments/dev
    - rm vars_template.tf
    - terraform init
    - terraform apply -input=false "planfile_dev"
  dependencies:
    - plan:dev
It's important to note that the original vars_template.tf has to be deleted, otherwise Terraform will throw an error that the variable is defined multiple times. You could circumvent this by storing the template file in a directory outside the Terraform working directory, though.
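That variant could look roughly like this (a sketch; the templates/ directory name and the environments/dev layout are assumptions based on the paths above):

plan:dev:
  stage: plan dev
  script:
    - envsubst < templates/vars_template.tf > environments/dev/vars_envsubst.tf
    - cd environments/dev
    - terraform init
    - terraform plan -out "planfile_dev"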
From the Terraform state you can see that the variable values were correctly substituted:
"outputs": {
"gitlab_secrets": {
"sensitive": false,
"type": "map",
"value": {
"secret_1": "my-secret-1",
"secret_2": "my-secret-2",
"secret_3": "my-secret-3"
}
}
}
You can then access the values with "${var.gitlab_secrets["secret_1"]}" in your Terraform resources etc.
UPDATE: Note that this method will store the secrets in the Terraform state file, which can be a potential security issue if the file is stored in an S3 bucket for collaborative work with Terraform. The bucket should at least be encrypted. In addition, it's recommended to limit access to the files with ACLs so that, e.g., only a terraform user has access to it. And, of course, a user could reveal the secrets via Terraform outputs...