Terraform plan does not include all of my .tf changes - amazon-web-services

I am using the AWS provider. I've added transition blocks to my lifecycle_rule block with the appropriate days and storage_class properties. Besides that change, I've also increased expiry_days from 30 to 180.
The variable looks like this:
variable "bucket_details" {
type = map(object({
bucket_name = string
purpose = string
infrequent_transition_days = number
infrequent_transition_storage = string
archive_transition_days = number
archive_transition_storage = string
expiry_days = number
versioning = bool
}))
}
The resource looks like this: (I've removed unrelated configs)
resource "aws_s3_bucket" "bucket-s3" {
for_each = var.bucket_details
bucket = "${each.key}-${var.region}-${var.environment}"
lifecycle_rule {
id = "clear"
enabled = true
transition {
days = each.value.infrequent_transition_days
storage_class = each.value.infrequent_transition_storage
}
transition {
days = each.value.archive_transition_days
storage_class = each.value.archive_transition_storage
}
expiration {
days = each.value.expiry_days
}
}
}
I've followed this transition example for reference.
When I run terraform plan I get the following output:
~ lifecycle_rule {
abort_incomplete_multipart_upload_days = 0
enabled = true
id = "clear"
tags = {}
+ expiration {
+ days = 180
}
- expiration {
- days = 30 -> null
- expired_object_delete_marker = false -> null
}
}
No transition changes are listed. Could it be because transition is AWS-specific and Terraform therefore does not pick it up?

I tried your code as is and here is the response:
provider "aws" {
region = "us-west-2"
}
variable "region" {
default = "us-west-2"
}
variable "environment" {
default = "dev"
}
variable "bucket_details" {
type = map(object({
bucket_name = string
infrequent_transition_days = number
infrequent_transition_storage = string
archive_transition_days = number
archive_transition_storage = string
expiry_days = number
}))
default = {
hello_world = {
bucket_name: "demo-001",
infrequent_transition_days: 10,
infrequent_transition_storage: "STANDARD_IA",
archive_transition_days: 10,
archive_transition_storage: "GLACIER",
expiry_days = 30
}}
}
resource "aws_s3_bucket" "bucket-s3" {
for_each = var.bucket_details
bucket = "${each.key}-${var.region}-${var.environment}"
lifecycle_rule {
id = "clear"
enabled = true
transition {
days = each.value.infrequent_transition_days
storage_class = each.value.infrequent_transition_storage
}
transition {
days = each.value.archive_transition_days
storage_class = each.value.archive_transition_storage
}
expiration {
days = each.value.expiry_days
}
}
}
Response of Terraform plan:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_s3_bucket.bucket-s3["hello_world"] will be created
+ resource "aws_s3_bucket" "bucket-s3" {
+ acceleration_status = (known after apply)
+ acl = "private"
+ arn = (known after apply)
+ bucket = "hello_world-us-west-2-dev"
+ bucket_domain_name = (known after apply)
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = false
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ region = (known after apply)
+ request_payer = (known after apply)
+ tags_all = (known after apply)
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
+ lifecycle_rule {
+ enabled = true
+ id = "clear"
+ expiration {
+ days = 30
}
+ transition {
+ days = 10
+ storage_class = "GLACIER"
}
+ transition {
+ days = 10
+ storage_class = "STANDARD_IA"
}
}
+ versioning {
+ enabled = (known after apply)
+ mfa_delete = (known after apply)
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply"
now.
As you can see, the transition changes are there. Can you try setting default values for your variables and check the response?
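For reference, a default for the question's own bucket_details schema could look something like this (the values below are placeholders for reproducing the plan locally, not the asker's real configuration):
variable "bucket_details" {
  type = map(object({
    bucket_name                   = string
    purpose                       = string
    infrequent_transition_days    = number
    infrequent_transition_storage = string
    archive_transition_days       = number
    archive_transition_storage    = string
    expiry_days                   = number
    versioning                    = bool
  }))
  # Placeholder values, only used to test the plan output locally.
  default = {
    hello_world = {
      bucket_name                   = "demo-001"
      purpose                       = "testing"
      infrequent_transition_days    = 30
      infrequent_transition_storage = "STANDARD_IA"
      archive_transition_days       = 90
      archive_transition_storage    = "GLACIER"
      expiry_days                   = 180
      versioning                    = false
    }
  }
}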

Related

Terraform kubernetes service account and role binding modules not working

I am trying to create a Kubernetes service account in a namespace I created, along with a secret and a cluster role binding. However, even though the terraform plan and apply stages show that it is being created, it isn't. Please see the module code and output below:
resource "kubernetes_service_account" "serviceaccount" {
metadata {
name = var.name
namespace = "kube-system"
}
}
resource "kubernetes_cluster_role_binding" "serviceaccount" {
metadata {
name = var.name
}
subject {
kind = "User"
name = "system:serviceaccount:kube-system:${var.name}"
}
role_ref {
kind = "ClusterRole"
name = "cluster-admin"
api_group = "rbac.authorization.k8s.io"
}
}
data "kubernetes_service_account" "serviceaccount" {
metadata {
name = var.name
namespace = "kube-system"
}
depends_on = [
resource.kubernetes_service_account.serviceaccount
]
}
data "kubernetes_secret" "serviceaccount" {
metadata {
name = data.kubernetes_service_account.serviceaccount.default_secret_name
namespace = "kube-system"
}
binary_data = {
"token": ""
}
depends_on = [
resource.kubernetes_service_account.serviceaccount
]
}
And the output from terraform run in devops:
# module.dd_service_account.data.kubernetes_secret.serviceaccount will be read during apply
# (config refers to values not yet known)
<= data "kubernetes_secret" "serviceaccount" {
+ binary_data = (sensitive value)
+ data = (sensitive value)
+ id = (known after apply)
+ immutable = (known after apply)
+ type = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = (known after apply)
+ namespace = "kube-system"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
# module.dd_service_account.data.kubernetes_service_account.serviceaccount will be read during apply
# (depends on a resource or a module with changes pending)
<= data "kubernetes_service_account" "serviceaccount" {
+ automount_service_account_token = (known after apply)
+ default_secret_name = (known after apply)
+ id = (known after apply)
+ image_pull_secret = (known after apply)
+ secret = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "deployer-new"
+ namespace = "kube-system"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
# module.dd_service_account.kubernetes_cluster_role_binding.serviceaccount will be created
+ resource "kubernetes_cluster_role_binding" "serviceaccount" {
+ id = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "deployer-new"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
+ role_ref {
+ api_group = "rbac.authorization.k8s.io"
+ kind = "ClusterRole"
+ name = "cluster-admin"
}
+ subject {
+ api_group = (known after apply)
+ kind = "User"
+ name = "system:serviceaccount:kube-system:deployer-new"
+ namespace = "default"
}
}
# module.dd_service_account.kubernetes_service_account.serviceaccount will be created
+ resource "kubernetes_service_account" "serviceaccount" {
+ automount_service_account_token = true
+ default_secret_name = (known after apply)
+ id = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "deployer-new"
+ namespace = "kube-system"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
When I run kubectl against the cluster, the namespace I created is there, but no service accounts are.
Any ideas?
Thanks.

Terraform tries to replace the previous DynamoDB table instead of creating a new one

I am new to Terraform. In my Terraform AWS DynamoDB module, Terraform tries to replace the existing table instead of creating a new one each time. But if we use a new Terraform state file, it creates another DynamoDB table without replacing anything.
Terraform Version: 0.15
locals {
  table_name_from_env = var.dynamodb_table
  table_name          = join("-", [local.table_name_from_env, lower(var.Component)])
  kinesis_name        = join("-", [local.table_name, "kinesis"])
}

resource "aws_dynamodb_table" "non_autoscaled" {
  count          = !var.autoscaling_enabled ? 1 : 0
  name           = "${local.table_name}"
  read_capacity  = "${var.read_capacity}"
  write_capacity = "${var.write_capacity}"
  billing_mode   = "${var.billing_mode}"
  hash_key       = "${var.hash_key}"
  range_key      = var.range_key

  dynamic "attribute" {
    for_each = var.attributes
    content {
      name = attribute.value.name
      type = attribute.value.type
    }
  }

  ttl {
    enabled        = var.ttl_enabled
    attribute_name = var.ttl_attribute_name
  }

  # tags = tomap({"organization" = "${var.organization}", "businessunit" = "${var.businessunit}"})
  tags = tomap({ "Component" = "${var.Component}" })
  # tags = local.common_tags
}

resource "aws_dynamodb_table" "autoscaled" {
  count          = var.autoscaling_enabled ? 1 : 0
  name           = "${local.table_name}"
  read_capacity  = "${var.read_capacity}"
  write_capacity = "${var.write_capacity}"
  billing_mode   = "${var.billing_mode}"
  hash_key       = "${var.hash_key}"
  range_key      = var.range_key

  dynamic "attribute" {
    for_each = var.attributes
    content {
      name = attribute.value.name
      type = attribute.value.type
    }
  }

  ttl {
    enabled        = var.ttl_enabled
    attribute_name = var.ttl_attribute_name
  }
}

resource "aws_kinesis_stream" "dynamodb_table_kinesis" {
  count       = var.kinesis_enabled ? 1 : 0
  name        = "${local.kinesis_name}"
  shard_count = "${var.shard_count}"
  stream_mode_details {
    stream_mode = "${var.kinesis_stream_mode}"
  }
}

resource "aws_dynamodb_kinesis_streaming_destination" "dynamodb_table_kinesis_dest_non_autoscaled" {
  count      = var.kinesis_enabled && !var.autoscaling_enabled ? 1 : 0
  stream_arn = aws_kinesis_stream.dynamodb_table_kinesis[0].arn
  table_name = aws_dynamodb_table.non_autoscaled[0].name
}

resource "aws_dynamodb_kinesis_streaming_destination" "dynamodb_table_kinesis_dest_autoscaled" {
  count      = var.kinesis_enabled && var.autoscaling_enabled ? 1 : 0
  stream_arn = aws_kinesis_stream.dynamodb_table_kinesis[0].arn
  table_name = aws_dynamodb_table.autoscaled[0].name
}
Can anybody suggest what is missing in my approach?
Terraform Plan output:
+ terraform plan
module.aws_managed_dynamodb_table.aws_kinesis_stream.dynamodb_table_kinesis[0]: Refreshing state... [id=arn:aws:kinesis:stream/dynamodb-testing12345-coms-kinesis]
module.aws_managed_dynamodb_table.aws_dynamodb_table.non_autoscaled[0]: Refreshing state... [id=dynamodb-testing12345-coms]
module.aws_managed_dynamodb_table.aws_dynamodb_kinesis_streaming_destination.dynamodb_table_kinesis_dest_non_autoscaled[0]: Refreshing state... [id=dynamodb-testing12345-coms,arn:aws:kinesis:ap-south-1:stream/dynamodb-testing12345-coms-kinesis]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# module.aws_managed_dynamodb_table.aws_dynamodb_kinesis_streaming_destination.dynamodb_table_kinesis_dest_non_autoscaled[0] must be replaced
-/+ resource "aws_dynamodb_kinesis_streaming_destination" "dynamodb_table_kinesis_dest_non_autoscaled" {
~ id = "dynamodb-testing12345-coms,arn:aws:kinesis:ap-south-1:stream/dynamodb-testing12345-coms-kinesis" -> (known after apply)
~ stream_arn = "arn:aws:kinesis:ap-south-1:stream/dynamodb-testing12345-coms-kinesis" -> (known after apply) # forces replacement
~ table_name = "dynamodb-testing12345-coms" -> "dynamodb-testing123456-coms" # forces replacement
}
# module.aws_managed_dynamodb_table.aws_dynamodb_table.non_autoscaled[0] must be replaced
-/+ resource "aws_dynamodb_table" "non_autoscaled" {
~ arn = "arn:aws:dynamodb:ap-south-1:table/dynamodb-testing12345-coms" -> (known after apply)
~ id = "dynamodb-testing12345-coms" -> (known after apply)
~ name = "dynamodb-testing12345-coms" -> "dynamodb-testing123456-coms" # forces replacement
+ stream_arn = (known after apply)
- stream_enabled = false -> null
+ stream_label = (known after apply)
+ stream_view_type = (known after apply)
tags = {
"Component" = "XXX"
}
# (6 unchanged attributes hidden)
~ point_in_time_recovery {
~ enabled = false -> (known after apply)
}
+ server_side_encryption {
+ enabled = (known after apply)
+ kms_key_arn = (known after apply)
}
# (3 unchanged blocks hidden)
}
# module.aws_managed_dynamodb_table.aws_kinesis_stream.dynamodb_table_kinesis[0] must be replaced
-/+ resource "aws_kinesis_stream" "dynamodb_table_kinesis" {
~ arn = (known after apply)
~ id = (known after apply)
~ name = "dynamodb-testing12345-coms-kinesis" -> "dynamodb-testing123456-coms-kinesis" # forces replacement
- shard_level_metrics = [] -> null
- tags = {} -> null
~ tags_all = {} -> (known after apply)
# (4 unchanged attributes hidden)
# (1 unchanged block hidden)
}
Plan: 3 to add, 0 to change, 3 to destroy.

Terraform module S3 lifecycle rule not working

I have an S3 lifecycle rule that should delete failed multipart uploads after n days. I want to use lookup instead of try.
resource "aws_s3_bucket_lifecycle_configuration" "default" {
count = length(var.lifecycle_rule) != 0 ? 1 : 0
bucket = aws_s3_bucket.bucket.bucket
dynamic "rule" {
for_each = try(jsondecode(var.lifecycle_rule), var.lifecycle_rule)
content {
id = lookup(rule.value, "id", "default")
status = lookup(rule.value, "status", "Enabled")
dynamic "abort_incomplete_multipart_upload" {
for_each = lookup(rule.value, "abort_incomplete_multipart_upload", null) != null ? [rule.value.abort_incomplete_multipart_upload] : []
content {
days_after_initiation = abort_incomplete_multipart_upload.value.days_after_initiation
}
}
}
}
}
When I try to use this module from my calling module, it does not work:
module "test" {
source = "./s3"
bucket_name = "test"
lifecycle_rule = [
{
expiration = {
days = 7
}
},
{
id = "abort-incomplete-multipart-upload-lifecyle-rule"
abort_incomplete_multipart_upload_days = {
days_after_initiation = 6
}
}
]
}
terraform plan gives me
+ rule {
+ id = "abort-incomplete-multipart-upload-lifecyle-rule"
+ status = "Enabled"
+ filter {
}
}
expected output:
+ rule {
+ id = "abort-incomplete-multipart-upload-lifecyle-rule"
+ status = "Enabled"
+ abort_incomplete_multipart_upload {
+ days_after_initiation = 8
}
+ filter {
}
}
Here's the code that works:
resource "aws_s3_bucket_lifecycle_configuration" "default" {
count = length(var.lifecycle_rule) != 0 ? 1 : 0
bucket = aws_s3_bucket.bucket.bucket
dynamic "rule" {
for_each = try(jsondecode(var.lifecycle_rule), var.lifecycle_rule)
content {
id = lookup(rule.value, "id", "default")
status = lookup(rule.value, "status", "Enabled")
dynamic "abort_incomplete_multipart_upload" {
for_each = lookup(rule.value, "abort_incomplete_multipart_upload_days", null) != null ? [rule.value.abort_incomplete_multipart_upload_days] : []
content {
days_after_initiation = abort_incomplete_multipart_upload.value.days_after_initiation
}
}
}
}
}
There are basically two issues:
1. The lookup was looking for a non-existent key in your map, abort_incomplete_multipart_upload, instead of abort_incomplete_multipart_upload_days.
2. Because of the first error, the wrong name propagated to the value you wanted, i.e., rule.value.abort_incomplete_multipart_upload instead of rule.value.abort_incomplete_multipart_upload_days.
This code yields the following output:
# aws_s3_bucket_lifecycle_configuration.default[0] will be created
+ resource "aws_s3_bucket_lifecycle_configuration" "default" {
+ bucket = (known after apply)
+ id = (known after apply)
+ rule {
+ id = "default"
+ status = "Enabled"
}
+ rule {
+ id = "abort-incomplete-multipart-upload-lifecyle-rule"
+ status = "Enabled"
+ abort_incomplete_multipart_upload {
+ days_after_initiation = 6
}
}
}
However, if you want it to be one rule (i.e., the example output you want), you need to make a change to your lifecycle_rule variable:
lifecycle_rule = [
  {
    expiration = {
      days = 7
    }
    id = "abort-incomplete-multipart-upload-lifecyle-rule"
    abort_incomplete_multipart_upload_days = {
      days_after_initiation = 6
    }
  }
]
This gives:
+ resource "aws_s3_bucket_lifecycle_configuration" "default" {
+ bucket = (known after apply)
+ id = (known after apply)
+ rule {
+ id = "abort-incomplete-multipart-upload-lifecyle-rule"
+ status = "Enabled"
+ abort_incomplete_multipart_upload {
+ days_after_initiation = 6
}
}
}

Resource planned for creation although count evaluates to false

I have the following variables
variable "policies" {
type = list(string)
description = "List of policy document to attach to the IAM Role."
default = []
}
variable "policy_name" {
type = string
description = "Name of the policy attached to the IAM Role."
default = null
}
variable "policy_description" {
type = string
description = "Description of the policy attached to the IAM Role."
default = ""
}
Which are used by the following Terraform resources:
resource "aws_iam_role" "this" {
name = var.role_name
assume_role_policy = var.assume_role_policy
}
data "aws_iam_policy_document" "this" {
count = var.policies != [] ? 1 : 0
source_policy_documents = var.policies
}
resource "aws_iam_policy" "this" {
count = var.policies != [] ? 1 : 0
name = var.policy_name
description = var.policy_description
policy = data.aws_iam_policy_document.this[count.index].json
}
resource "aws_iam_role_policy_attachment" "this" {
count = var.policies != [] ? 1 : 0
policy_arn = aws_iam_policy.this[count.index].arn
role = aws_iam_role.this.name
}
Now, my understanding is that aws_iam_policy_document, aws_iam_policy and aws_iam_role_policy_attachment are to be created only when var.policies is not empty.
However, these resources are still planned for creation when the module is called like this:
module "iam_role_batch" {
source = "./resources/iam/role"
role_name = local.iam_role_batch_service_name
assume_role_policy = data.aws_iam_policy_document.batch_service.json
}
# module.iam_role_batch.aws_iam_policy.this[0] will be created
+ resource "aws_iam_policy" "this" {
+ arn = (known after apply)
+ id = (known after apply)
+ name = (known after apply)
+ path = "/"
+ policy = jsonencode(
{
+ Statement = null
+ Version = "2012-10-17"
}
)
+ policy_id = (known after apply)
+ tags_all = (known after apply)
}
# module.iam_role_batch.aws_iam_role_policy_attachment.this[0] will be created
+ resource "aws_iam_role_policy_attachment" "this" {
+ id = (known after apply)
+ policy_arn = (known after apply)
+ role = "xxxxxxx"
}
Plan: 2 to add, 0 to change, 0 to destroy.
Why? AFAIK, policies is by default set to [], so the resources should not be planned for creation.
What am I missing?
is by default set to []
Actually, it is set to an empty value of type list(string). The literal [] in your condition is an empty tuple, and an empty tuple is never equal to an empty list(string), so var.policies != [] is always true and that is why the resources are always created.
Usually you would do the following instead:
count = length(var.policies) > 0 ? 1 : 0
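Applied to the question's resources, that condition looks like the following (a sketch showing only the affected count arguments; the other arguments stay exactly as in the question):
data "aws_iam_policy_document" "this" {
  # create the document only when at least one policy is passed in
  count                   = length(var.policies) > 0 ? 1 : 0
  source_policy_documents = var.policies
}

resource "aws_iam_policy" "this" {
  count       = length(var.policies) > 0 ? 1 : 0
  name        = var.policy_name
  description = var.policy_description
  policy      = data.aws_iam_policy_document.this[count.index].json
}

resource "aws_iam_role_policy_attachment" "this" {
  count      = length(var.policies) > 0 ? 1 : 0
  policy_arn = aws_iam_policy.this[count.index].arn
  role       = aws_iam_role.this.name
}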

AWS Multiple DNS A record creation

I need to create multiple DNS records with their respective IPs. I need to assign the first IP to the first DNS name and the second IP to the second, something like dns1 - 10.1.20.70 and dns2 - 10.1.20.40. But both of the IPs are getting assigned to both DNS names (dns1 and dns2). Any suggestions?
Code:
resource "aws_route53_record" "onprem_api_record" {
for_each = toset(local.vm_fqdn)
zone_id = data.aws_route53_zone.dns_zone.zone_id
name = each.value
type = "A"
records = var.api_ips[terraform.workspace]
ttl = "300"
}
locals {
vm_fqdn = flatten(["dns1-${terraform.workspace}.${local.domain}", "dns2-${terraform.workspace}.${local.domain}"] )
}
variable "api_ips" {
type = map(list(string))
default = {
"dev" = [ "10.1.20.70", "10.1.20.140" ]
"qa" = [ "10.1.22.180", "10.1.22.150" ]
"test" = [ "10.1.23.190", "10.1.23.160" ]
}
}
Output
+ resource "aws_route53_record" "onprem_api_record" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "dns1.dev.ciscodcloud.com"
+ records = [
+ "10.1.20.40",
+ "10.1.20.70",
]
+ ttl = 300
+ type = "A"
+ zone_id = "Z30HW9VL6PYDXQ"
}
aws_route53_record.onprem_api_record["dna2.dev.cisco.com"] will be created
+ resource "aws_route53_record" "onprem_api_record" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "dns2.dev.cisco.com"
+ records = [
+ "10.1.20.40",
+ "10.1.20.70",
]
+ ttl = 300
+ type = "A"
+ zone_id = "Z30HW9VL6PYDXQ"
}
Plan: 2 to add, 0 to change, 1 to destroy.
You may want to use zipmap. Here is a terse example showing its use in for_each together with a for expression, as could be used in your case.
resource "aws_route53_record" "onprem_api_record" {
for_each = { for fqdn, ip in zipmap(local.vm_fqdn, local.ips["dev"]) : fqdn => ip }
zone_id = "x"
name = each.key
type = "A"
records = [each.value]
ttl = "300"
}
locals {
ips = {
"dev" = ["10.1.20.70", "10.1.20.140"]
"qa" = ["10.1.22.180", "10.1.22.150"]
"test" = ["10.1.23.190", "10.1.23.160"]
}
vm_fqdn = ["dns1-dev.domain", "dns2-dev.domain"]
}
And the plan looks like:
# aws_route53_record.onprem_api_record["dns1-dev.domain"] will be created
+ resource "aws_route53_record" "onprem_api_record" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "dns1-dev.domain"
+ records = [
+ "10.1.20.70",
]
+ ttl = 300
+ type = "A"
+ zone_id = "x"
}
# aws_route53_record.onprem_api_record["dns2-dev.domain"] will be created
+ resource "aws_route53_record" "onprem_api_record" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "dns2-dev.domain"
+ records = [
+ "10.1.20.140",
]
+ ttl = 300
+ type = "A"
+ zone_id = "x"
}
Plan: 2 to add, 0 to change, 0 to destroy.
You can do this as follows with count:
resource "aws_route53_record" "onprem_api_record" {
count = length(local.vm_fqdn)
zone_id = data.aws_route53_zone.dns_zone.zone_id
name = local.vm_fqdn[count.index]
type = "A"
records = [var.api_ips[terraform.workspace][count.index]]
ttl = "300"
}