I'm using Terraform to deploy an ECS cluster, and the task definition's secrets section looks like this:
{
name = "secret"
valueFrom = "SECRET_NAME"
}
Now, if I'm using a parameter, I can just place a static parameter ARN in the valueFrom field. However, I'm using secrets and Secrets Manager. AWS automatically adds 6 random characters to every deployed secret ARN. I have 4 environments, all taken care of by a single pipeline. I obviously need a way to fit all 4 environments and their secrets in a single valueFrom field. Does this field only take an ARN, or is there some other way for me to do this?
Edit:
I forgot to mention I already have these secrets as an imported resource by name:
data "aws_secretsmanager_secret" "secret" {
name = "secret_name"
}
Is there something I can do with that?
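As a sketch of what that data source gives you: its arn attribute resolves to the full ARN, including the random suffix, so it could be interpolated into the container definition. The container name, image, and jsonencode-based layout below are illustrative assumptions:

```hcl
data "aws_secretsmanager_secret" "secret" {
  # per-environment name, e.g. built from a workspace or environment variable
  name = "secret_name"
}

resource "aws_ecs_task_definition" "example" {
  family                = "example"
  container_definitions = jsonencode([{
    name  = "app"
    image = "example-image"
    secrets = [{
      name = "secret"
      # the data source's arn attribute already includes the random
      # 6-character suffix AWS appended at creation time
      valueFrom = data.aws_secretsmanager_secret.secret.arn
    }]
  }])
}
```

Since the lookup is by name, the same config works in all four environments as long as each environment's secret follows the same naming scheme.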
Overview
Currently, dashboards are being deployed via Terraform using values from a dictionary in locals.tf:
resource "aws_cloudwatch_dashboard" "my_alb" {
  for_each       = local.env_mapping[var.env]
  dashboard_name = "${each.key}_alb_web_operational"
  dashboard_body = templatefile("templates/alb_ops.tpl", {
    environment = each.value.env,
    account     = each.value.account,
    region      = each.value.region,
    alb         = each.value.alb,
    tg          = each.value.alb_tg
  })
}
This is fragile because the values of AWS infrastructure resources like the ALB and ALB target group are hard-coded. Sometimes, when applying updates, AWS resources are destroyed and recreated.
Question
What's the best approach to get these values dynamically? For example, this could be achieved by writing a Python/Boto3 Lambda, which looks up these values and then passes them to Terraform as env variables. Are there any other recommended ways to achieve the same?
It depends on how dynamic the environment is, but it sounds like Terraform data sources are what you are looking for.
Usually, load balancer names are fixed or generated by some rule, so they should be known before the dashboard is created.
Let's suppose the names are fixed:
variable "loadbalancers" {
  type = map(string)
  default = {
    alb01 = "alb01",
    alb02 = "alb02"
  }
}
In that case the load balancers can be looked up with:
data "aws_lb" "albs" {
for_each = var.loadbalancers
name = each.value # or each.key
}
After that you will be able to read the dynamically generated attributes:
data.aws_lb.albs["alb01"].id
data.aws_lb.albs["alb01"].arn
etc.
If the load balancer names are generated by some rule, you can use the AWS CLI or SDK to list them, or generate the names with the same rule used inside the AWS environment and pass them to Terraform as a variable.
Note: terraform plan (and apply, destroy) will raise an error if you pass a non-existent name, so you should check that an LB with the provided name exists.
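Tying this back to the dashboard resource from the question, one possible sketch follows. The naming rule ("${each.key}-alb") and the template variable names are assumptions; arn_suffix is the aws_lb data source attribute that CloudWatch ALB metrics are dimensioned on:

```hcl
data "aws_lb" "albs" {
  for_each = local.env_mapping[var.env]
  name     = "${each.key}-alb" # assumed naming rule for the ALBs
}

resource "aws_cloudwatch_dashboard" "my_alb" {
  for_each       = local.env_mapping[var.env]
  dashboard_name = "${each.key}_alb_web_operational"
  dashboard_body = templatefile("templates/alb_ops.tpl", {
    environment = each.value.env,
    account     = each.value.account,
    region      = each.value.region,
    # looked up at plan time instead of hard-coded in locals.tf
    alb         = data.aws_lb.albs[each.key].arn_suffix
  })
}
```

This removes the hard-coded ALB values from locals.tf; if AWS recreates the load balancer, the next plan picks up the new ARN suffix automatically.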
I must be missing something in how AWS secrets can be accessed through Terraform. Here is the scenario I am struggling with:
I create an IAM user named "infra_user", create ID and secret access key for the user, download the values in plain txt.
"infra_user" will be used to authenticate via Terraform to provision resources, let's say an S3 bucket and an EC2 instance.
To protect the ID and secret key of "infra_user", I store them in AWS secrets manager.
In order to authenticate with "infra_user" in my terraform script, I will need to retrieve the secrets via the following block:
data "aws_secretsmanager_secret" "arn" {
arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456"
}
But, to even use the data block in my script and retrieve the secrets wouldn't I need to authenticate to AWS in some other way in my provider block before I declare any resources?
If I create another user, say "tf_user", to just retrieve the secrets where would I store the access key for "tf_user"? How do I avoid this circular authentication loop?
The Terraform AWS provider documentation has a section on Authentication and Configuration and lists an order of precedence for how the provider discovers which credentials to use. You can choose the method that makes the most sense for your use case.
For example, one (insecure) method would be to set the credentials directly in the provider:
provider "aws" {
region = "us-west-2"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
Or, you could set the environment variables in your shell:
export AWS_ACCESS_KEY_ID="my-access-key"
export AWS_SECRET_ACCESS_KEY="my-secret-key"
export AWS_DEFAULT_REGION="us-west-2"
Now your provider block simplifies to:
provider "aws" {}
When you run Terraform commands, the provider will automatically use the credentials from your environment.
Or as yet another alternative, you can store the credentials in a profile configuration file and choose which profile to use by setting the AWS_PROFILE environment variable.
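As a sketch of that last option (the profile name infra_user is just an example):

```ini
# ~/.aws/credentials
[infra_user]
aws_access_key_id = my-access-key
aws_secret_access_key = my-secret-key
```

Then export AWS_PROFILE=infra_user before running Terraform, and the empty provider "aws" {} block will pick up that profile.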
Authentication is somewhat more complex and configurable than it seems at first glance, so I'd encourage you to read the documentation.
I'm using Terraform to spin up an ECS service that consumes secrets via SSM Parameter Store parameters. The problem is that I cannot keep the secret values in my Terraform config files. So I first create the parameters with default values using terraform apply -target=aws_ssm_parameter.example, then manually update the secrets in the AWS console, and finally run terraform apply on all the resources, which picks up the new parameter values. Is there a workaround to keep this simple and do it in one go with terraform apply, without using -target? I have tried hard to find a solution but couldn't.
Here is my Terraform code for the SSM parameter, where I pass a default value and then change it in the AWS console before applying Terraform to the remaining resources:
resource "aws_ssm_parameter" "example" {
name = "/${terraform.workspace}/${local.app}/username"
type = "SecureString"
value = "default"
lifecycle {
ignore_changes = [value]
}
}
The idea is that I want to use the Terraform resource aws_secretsmanager_secret to create only three secrets (not a workspace-specific secret): one for the dev environment, one for preprod, and the third for production.
Something like:
resource "aws_secretsmanager_secret" "dev_secret" {
name = "example-secret-dev"
}
resource "aws_secretsmanager_secret" "preprod_secret" {
name = "example-secret-preprod"
}
resource "aws_secretsmanager_secret" "prod_secret" {
name = "example-secret-prod"
}
But after creating them, I don't want to overwrite them every time I run terraform apply. Is there a way to tell Terraform to skip creating a secret if it already exists, and not overwrite it?
I had a look at this page, but it still doesn't give a clear solution; any suggestion will be appreciated.
It will not overwrite the secret value if you set it manually in the console or using the AWS SDK. aws_secretsmanager_secret creates only the secret itself, not its value. To set a value you have to use aws_secretsmanager_secret_version.
Anyway, this is something you can easily test yourself. Just run your code with a secret, update its value in AWS console, and re-run terraform apply. You should see no change in the secret's value.
You could have Terraform generate random secret values for you using:
data "aws_secretsmanager_random_password" "dev_password" {
password_length = 16
}
Then create the secret metadata using:
resource "aws_secretsmanager_secret" "dev_secret" {
name = "dev-secret"
recovery_window_in_days = 7
}
And then create the secret version:
resource "aws_secretsmanager_secret_version" "dev_sv" {
secret_id = aws_secretsmanager_secret.dev_secret.id
secret_string = data.aws_secretsmanager_random_password.dev_password.random_password
lifecycle {
ignore_changes = [secret_string, ]
}
}
Adding the ignore_changes lifecycle block to the secret version prevents Terraform from overwriting the secret once it has been created. I tested this just now to confirm: a new secret with a new random value is created, and subsequent executions of terraform apply do not overwrite it.
We have an AWS SecretsManager Secret that was created once. That secret will be updated by an external job every hour.
I have the problem that sometimes the terraform plan/apply fails with the following message:
AWS Provider 2.48
Error: Error refreshing state: 1 error occurred:
* module.xxx.xxx: 1 error occurred:
* module.xxx.aws_secretsmanager_secret_version.xxx:
aws_secretsmanager_secret_version.xxx: error reading Secrets Manager Secret Version: InvalidRequestException: You can't perform this operation on secret version 68AEABC3-34BE-4723-8BF5-469A44F9B1D9 because it was deleted.
We've tried two solutions:
1) Force-delete the whole secret via the AWS CLI, but this has the side effect that one of our dependent resources is also recreated (the ECS task definition depends on that secret). This works, but we do not want the side effect of recreating the ECS resources.
2) Manually edit the backend .tfstate file and set the current AWS secret version. Then run the plan again.
Both solutions seem hacky. What is the best way to solve this issue?
You can use terraform import to reconcile the state difference before you run a plan or apply.
In your case, this would look like:
terraform import module.xxx.aws_secretsmanager_secret_version.xxx arn:aws:secretsmanager:some_region:some_account_id:secret:example-123456|xxxxx-xxxxxxx-xxxxxxx-xxxxx
I think the problem you are having is that by default AWS tries to "help you" by not letting you delete secrets until 7 days have elapsed: a grace period so you can update any code that may still rely on the secret. This makes automation more difficult.
I have worked around this by setting the recovery window to 0 days, effectively eliminating that grace period.
Then Terraform can rename or delete your secret at will, either manually (via the AWS CLI) or via Terraform.
Apply recovery_window_in_days = 0 to the existing secret first; then change the name of the secret (if you wish to), or delete it (this Terraform section) as desired, and run Terraform again.
Here is an example:
resource "aws_secretsmanager_secret" "mySecret" {
name = "your secret name"
recovery_window_in_days = 0
// this is optional and can be set to true | false
lifecycle {
create_before_destroy = true
}
}
*Note: there is also a create_before_destroy option you can set on the lifecycle block, as shown above.
https://www.terraform.io/docs/configuration/resources.html
Also, you can use a Terraform resource to update the secret values, as in the example below.
It sets the secret values once and then tells Terraform to ignore any changes made to the values (username and password in this example) after the initial creation.
If you remove the lifecycle section, Terraform will keep track of whether the secret values themselves have changed; if they have, they will be reverted to the values in the Terraform state.
Store your tfstate files in a protected S3 bucket: the secret values are plaintext in the state file, so anyone with access to your Terraform state file can see them.
I would suggest: 1) figuring out what is deleting your secrets unexpectedly, and 2) having your "external job" be a Terraform/bash script that updates the values using a resource as in the example below.
Hope this gives you some ideas.
resource "aws_secretsmanager_secret_version" "your-secret-data" {
secret_id = aws_secretsmanager_secret.your-secret.id
secret_string = <<-EOF
{
"username": "usernameValue",
"password": "passwordValue"
}
EOF
// ignore any updates to the initial values above done after creation.
lifecycle {
ignore_changes = [
secret_string
]
}
}