Terraform: Build resource in one environment - amazon-web-services

I want to build an S3 bucket only when we are using our prod account. We are using Terraform 0.11.7, which I don't think supports null in conditional expressions.
Does anyone know if something like this can be done in Terraform 0.11:
resource "aws_s3_bucket" "feedback_service_bucket" {
bucket = "${var.account_name == "prod" ? "ces-${var.environment}-QA-compendex" : null}"
acl = "private"
}
I will upgrade to Terraform 0.13 soon, but as this is required in the current sprint I don't have time. We have a prod.tfvars file which is used when the applied environment is set to prod (obviously), so if the above is not possible, is there a way to put an entire resource into a tfvars file? I know you can use variables, but I'm not sure about resources. Any help would be great, thanks.

Does anyone know if something like this can be done in Terraform 0.11:
You can use count for such a scenario:
resource "aws_s3_bucket" "feedback_service_bucket" {
count = "${var.account_name == "prod" ? 1 : 0}"
bucket = "ces-${var.environment}-QA-compendex"
acl = "private"
}
You can refer to this: https://www.terraform.io/docs/configuration-0-11/interpolation.html#conditionals
I will upgrade to Terraform 0.13 soon, but as this is required in the current sprint I don't have time.
For info, Terraform 0.13 won't give you many advantages for this specific use case. You would still use count. What changes in 0.13 regarding conditional resources is that you have an alternative using for_each (introduced with v0.12), with which you can achieve the same behavior.
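For illustration, a minimal sketch of the for_each variant on 0.12/0.13 syntax, reusing the variables from the question (the "prod" set element is just an arbitrary label):
resource "aws_s3_bucket" "feedback_service_bucket" {
  # An empty set means no bucket; a one-element set means exactly one bucket
  for_each = toset(var.account_name == "prod" ? ["prod"] : [])

  bucket = "ces-${var.environment}-QA-compendex"
  acl    = "private"
}
The bucket is then addressed as aws_s3_bucket.feedback_service_bucket["prod"] rather than with a [0] index.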

Related

How to move terraform resource in terraform apply

I have a custom resource defined in my Terraform module:
resource "aws_alb_target_group" "whatever"
{
....
}
Turns out whatever is not a good name, and I need to update it.
The classic way of doing this would be to log in to each environment and run terraform state mv, but I have lots of environments and no automation for such an action.
How can I change the name of the resource without manually moving state (only by editing Terraform modules and applying plans)?
Based on the explanation in the question, I guess your best bet would be to use the moved block [1]. So for example, in your case that would be:
resource "aws_alb_target_group" "a_much_better_whatever"
{
....
}
moved {
from = aws_alb_target_group.whatever
to = aws_alb_target_group.a_much_better_whatever
}
EDIT: As @Matt Schuchard noted, the moved block is available only for Terraform versions >= 1.1.0.
EDIT 2: As per @Martin Atkins' comments, changed the resource name to the name being moved to rather than the name being moved from.
[1] https://www.terraform.io/language/modules/develop/refactoring#moved-block-syntax
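For reference, the manual alternative mentioned in the question (running a state move in each environment) would look roughly like this, using the addresses from the question:
terraform state mv aws_alb_target_group.whatever aws_alb_target_group.a_much_better_whatever
The moved block lets Terraform plan this same rename automatically during apply, which is what makes it suitable when you have many environments and no automation.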
I'm in the same situation.
My plan is to create the new resource group in Terraform, apply it, move the resources in the Azure portal to the new resource group, and then do terraform state mv to move the resources in terraform.
Yes, if you have a lot of resources it's tedious, but I guess I won't break anything this way.

How to not override AWS desired_count with Terraform?

I'm managing an autoscaling cloud infrastructure in AWS.
Every time I run Terraform it wants to override the desired_count, i.e. the number of running instances.
I would like this to not happen. How do I do that?
Constraints: I manage multiple different microservices, each of which sets up its running instances with a shared module where desired_count is specified. I don't want to change the shared module such that desired_count is ignored for all my microservices. Rather, I want to be able to override or not on a service-by-service (i.e. caller-by-caller) basis.
This rules out a straightforward use of lifecycle { ignore_changes = ... }. As far as I can tell, the list of changes to ignore cannot be given as arguments (my Terraform complains when I try; feel free to tell me how you succeed at this).
My next idea (if possible) is to read the value from the stored state, if present, and ask for a desired_count equal to its current value, or my chosen initial value if it has no current value. If there are no concurrent Terraform runs (i.e. no races), this should accomplish the same thing. Is this possible?
I'm no expert terraformer. I would appreciate it a lot if you give very detailed answers.
The lifecycle meta-arguments affect how the graph is built, so they can't be parameterized. The Terraform team hasn't ruled out implementing this, but the issue has been open for a couple of years without it being done.
What you could do is create two aws_ecs_service resources, and switch between them:
resource "aws_ecs_service" "service" {
count = var.use_lifecycle ? 0 : 1
...
}
resource "aws_ecs_service" "service_with_lifecycle" {
count = var.use_lifecycle ? 1 : 0
...
lifecycle {
ignore_changes = ["desired_count"]
}
}
Given that, you need a way to reference the service you created. You can do that with a local:
locals {
  service = var.use_lifecycle ? aws_ecs_service.service_with_lifecycle[0] : aws_ecs_service.service[0]
}
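Elsewhere in the shared module you can then refer to whichever resource was actually created through that local, for example (hypothetical output name):
output "service_name" {
  value = local.service.name
}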

Terraform single resource on multiple workspaces

I have two workspaces (like dev and prd) and I have to create a single resource to use in both of them.
My example is to create AWS ECR repository:
resource "aws_ecr_repository" "example" {
name = "example"
}
I applied it in the prd workspace, and after switching to the dev workspace Terraform wants to create the same repository, but it already exists.
After consideration I used count to create it only in prd, like this:
resource "aws_ecr_repository" "example" {
count = local.stage == "prd" ? 1 : 0
name = "example"
}
and in the prd workspace I reference it like this:
aws_ecr_repository.example[0].repository_url
but there is a problem with how to use it in the dev workspace.
What is a better way to solve this?
Since I'm not able to add a comment (I don't have enough rep), I'm adding this as an answer.
As Jens mentioned, it's best to avoid this approach, but you can import a remote state with something like this:
data "terraform_remote_state" "my_remote_state" {
backend = "local" # could also be a remote state like s3
config = {
key = "project-key"
}
workspace = "prd"
}
In your prd workspace you have to define an output for your repo:
output "ecr_repo_url" {
aws_ecr_repository.default[0].repository_url
}
In your dev workspace you can then access the value with (on Terraform 0.12+ remote state outputs are read through the outputs attribute):
data.terraform_remote_state.my_remote_state.outputs.ecr_repo_url
In some cases this may be useful, but be aware of what Jens said: if you destroy your prod environment, you can't apply or change your dev environment!

How to convert the AWS Secrets Manager string to a map in Terraform (0.11.13)

I have a secret stored in AWS Secrets Manager and am trying to integrate it within Terraform at runtime. We are using Terraform 0.11.13, and updating to the latest Terraform is on the roadmap.
We would all like to use jsondecode(), available as part of the latest Terraform, but we need to get a few things integrated before we upgrade.
We tried to use the helper external data program below, suggested in https://github.com/terraform-providers/terraform-provider-aws/issues/4789.
data "external" "helper" {
program = ["echo", "${replace(data.aws_secretsmanager_secret_version.map_example.secret_string, "\\\"", "\"")}"]
}
But we ended up getting this error now.
data.external.helper: can't find external program "echo"
Google search didn't help much.
Any help will be much appreciated.
OS: Windows 10
It sounds like you want to use a data source for the aws_secretsmanager_secret.
Resources in Terraform create new infrastructure; data sources reference existing infrastructure and expose its values.
data "aws_secretsmanager_secret" "example" {
arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456"
}
data "aws_secretsmanager_secret_version" "example" {
secret_id = data.aws_secretsmanager_secret.example.id
version_stage = "example"
}
Note: you can also use the secret name
Docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret
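For example, a lookup by name instead of ARN could look like this (the secret name here is hypothetical):
data "aws_secretsmanager_secret" "by_name" {
  name = "example"
}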
Then you can use the value from this like so:
output "MySecretJsonAsString" {
  value = data.aws_secretsmanager_secret_version.example.secret_string
}
Per the docs, the secret_string property of this resource is:
The decrypted part of the protected secret information that was originally provided as a string.
You should also be able to pass that value into jsondecode and then access the properties of the json body individually.
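For illustration, on Terraform 0.12+ that could look like the sketch below, assuming the secret string is a JSON object containing a hypothetical password key:
locals {
  example_secret = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)
}

output "example_password" {
  value     = local.example_secret["password"]
  sensitive = true
}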
But you asked for a Terraform 0.11.13 solution. If the secret value is defined by Terraform, you can use the terraform_remote_state data source to get the value. This does assume that nothing other than Terraform is updating the secret. The best answer is still to upgrade your Terraform, but this could be a useful stopgap until then.
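A rough 0.11-style sketch of that stopgap, assuming the secret value is exposed as an output of another state stored in S3 (the bucket, key, and output name are hypothetical):
data "terraform_remote_state" "secrets" {
  backend = "s3"
  config {
    bucket = "my-terraform-state"
    key    = "secrets/terraform.tfstate"
    region = "us-east-1"
  }
}

# In 0.11, remote state outputs are read directly as attributes:
# "${data.terraform_remote_state.secrets.my_secret_value}"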
As a recommendation, you can pin the Terraform version per module rather than for your whole organization. I do this with Docker containers that run specific versions of the Terraform binary; a script in the root of every module wraps the terraform commands so they run with the version of Terraform meant for that project. Just a tip.
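A sketch of such a wrapper script, assuming Docker is available and AWS credentials are passed through from the environment (the image tag pins the module's Terraform version):
#!/bin/sh
# Run the terraform CLI from a container pinned to this module's Terraform version
exec docker run --rm -it \
  -v "$(pwd):/workspace" \
  -w /workspace \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
  hashicorp/terraform:0.11.13 "$@"
It would be invoked from the module root as, for example, ./terraform.sh plan (the script name is hypothetical).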

Terraform - Upload file to S3 on every apply

I need to upload a folder to an S3 bucket. When I apply for the first time, it uploads fine, but I have two problems here:
The uploaded version outputs as null. I would expect some version_id like 1, 2, 3.
When running terraform apply again, it says Apply complete! Resources: 0 added, 0 changed, 0 destroyed. I would expect it to upload every time I run terraform apply and create a new version.
What am I doing wrong? Here is my Terraform config:
resource "aws_s3_bucket" "my_bucket" {
bucket = "my_bucket_name"
versioning {
enabled = true
}
}
resource "aws_s3_bucket_object" "file_upload" {
bucket = "my_bucket"
key = "my_bucket_key"
source = "my_files.zip"
}
output "my_bucket_file_version" {
value = "${aws_s3_bucket_object.file_upload.version_id}"
}
Terraform only makes changes to remote objects when it detects a difference between the configuration and the remote object attributes. The configuration as you've written it includes only the filename; it says nothing about the content of the file, so Terraform can't react to the file changing.
To make subsequent changes, there are a few options:
You could use a different local filename for each new version.
You could use a different remote object path for each new version.
You can use the object etag to let Terraform recognize when the content has changed, regardless of the local filename or object path.
The last of these seems closest to what you want in this case. To do that, add the etag argument and set it to an MD5 hash of the file:
resource "aws_s3_bucket_object" "file_upload" {
bucket = "my_bucket"
key = "my_bucket_key"
source = "${path.module}/my_files.zip"
etag = "${filemd5("${path.module}/my_files.zip")}"
}
With that extra argument in place, Terraform will detect when the MD5 hash of the file on disk is different than that stored remotely in S3 and will plan to update the object accordingly.
(I'm not sure what's going on with version_id. It should work as long as versioning is enabled on the bucket.)
The preferred solution is now to use the source_hash property. Note that aws_s3_bucket_object has been replaced by aws_s3_object.
locals {
  object_source = "${path.module}/my_files.zip"
}

resource "aws_s3_object" "file_upload" {
  bucket      = "my_bucket"
  key         = "my_bucket_key"
  source      = local.object_source
  source_hash = filemd5(local.object_source)
}
Note that etag can have issues when encryption is used.
You shouldn't be using Terraform to do this. Terraform is meant to orchestrate and provision your infrastructure and its configuration, not files. That said, Terraform is not aware of changes to your files; unless you change their names, Terraform will not update the state.
Also, it is better to use local-exec to do that. Something like:
resource "aws_s3_bucket" "my-bucket" {
# ...
provisioner "local-exec" {
command = "aws s3 cp path_to_my_file ${aws_s3_bucket.my-bucket.id}"
}
}