terraform: tfsec not able to read EKS cluster encryption configuration

I have an EKS cluster resource to which the team has added encryption_config. We use a dynamic block so that multiple configurations can be added. Now, when I run tfsec (version 1.28.0) on my code, I get "Cluster does not have secret encryption enabled."
Here is the dynamic block:
resource "aws_eks_cluster" "this" {
...
dynamic "encryption_config" {
for_each = toset(var.cluster_encryption_config)
content {
provider {
key_arn = encryption_config.value["provider_key_arn"]
}
resources = encryption_config.value["resources"]
}
}
}
Definition inside variables.tf:
variable "cluster_encryption_config" {
  description = "Configuration block with encryption configuration for the cluster. See examples/secrets_encryption/main.tf for example format"
  type = list(object({
    provider_key_arn = string
    resources        = list(string)
  }))
  default = []
}

From what you show, cluster_encryption_config defaults to an empty list []. With an empty list, the dynamic encryption_config block generates nothing, so no encryption is configured. You have to set cluster_encryption_config to a non-empty list with valid values.
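For example, a minimal sketch of a non-empty value, assuming an existing KMS key (the ARN below is a placeholder):
cluster_encryption_config = [
  {
    provider_key_arn = "arn:aws:kms:eu-west-1:111122223333:key/example-key-id" # placeholder ARN
    resources        = ["secrets"]
  }
]
With a value tfsec can resolve, the dynamic block expands to a real encryption_config block and the finding should go away.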

Related

How can I configure Terraform to update a GCP compute engine instance template without destroying and re-creating?

I have a service deployed on GCP compute engine. It consists of a compute engine instance template, instance group, instance group manager, and load balancer + associated forwarding rules etc.
We're forced into using compute engine rather than Cloud Run or some other serverless offering due to the need for docker-in-docker for the service in question.
The deployment is managed by terraform. I have a config that looks something like this:
data "google_compute_image" "debian_image" {
family = "debian-11"
project = "debian-cloud"
}
resource "google_compute_instance_template" "my_service_template" {
name = "my_service"
machine_type = "n1-standard-1"
disk {
source_image = data.google_compute_image.debian_image.self_link
auto_delete = true
boot = true
}
...
metadata_startup_script = data.local_file.startup_script.content
metadata = {
MY_ENV_VAR = var.whatever
}
}
resource "google_compute_region_instance_group_manager" "my_service_mig" {
version {
instance_template = google_compute_instance_template.my_service_template.id
name = "primary"
}
...
}
resource "google_compute_region_backend_service" "my_service_backend" {
...
backend {
group = google_compute_region_instance_group_manager.my_service_mig.instance_group
}
}
resource "google_compute_forwarding_rule" "my_service_frontend" {
depends_on = [
google_compute_region_instance_group_manager.my_service_mig,
]
name = "my_service_ilb"
backend_service = google_compute_region_backend_service.my_service_backend.id
...
}
I'm running into issues where Terraform is unable to perform any kind of update to this service without running into conflicts. It seems that instance templates are immutable in GCP, and doing anything like updating the startup script, adding an env var, or similar forces it to be deleted and re-created.
Terraform prints info like this in that situation:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.connectors_compute_engine.google_compute_instance_template.airbyte_translation_instance1 must be replaced
-/+ resource "google_compute_instance_template" "my_service_template" {
      ~ id       = "projects/project/..." -> (known after apply)
      ~ metadata = { # forces replacement
          + "TEST" = "test"
            # (1 unchanged element hidden)
        }
The only solution I've found for getting out of this situation is to delete the entire service and all associated entities, from the load balancer down to the instance template, and re-create them.
Is there some way to avoid this situation so that I'm able to change the instance template without having to manually update all the terraform config twice? At this point I'm even fine if it ends up creating some downtime for the service in question rather than a full rolling update or something, since that's what's happening now anyway.
I ran into this issue as well.
However, according to:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance_template#using-with-instance-group-manager
Instance Templates cannot be updated after creation with the Google
Cloud Platform API. In order to update an Instance Template, Terraform
will destroy the existing resource and create a replacement. In order
to effectively use an Instance Template resource with an Instance
Group Manager resource, it's recommended to specify
create_before_destroy in a lifecycle block. Either omit the Instance
Template name attribute, or specify a partial name with name_prefix.
I would also test and plan with this lifecycle meta-argument:
+ lifecycle {
+   prevent_destroy = true
+ }
}
Or, more realistically in your specific case, something like:
resource "google_compute_instance_template" "my_service_template" {
  # Either omit name or switch to name_prefix so the replacement can get a fresh name
  name_prefix  = "my-service-"
  machine_type = "n1-standard-1"
  ...
+ lifecycle {
+   create_before_destroy = true
+ }
}
So run terraform plan with either create_before_destroy or prevent_destroy = true on google_compute_instance_template before terraform apply, to see the results.
Ultimately, you can also remove google_compute_instance_template.my_service_template from the state file and import it back.
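If you go the state route, it would look roughly like this (addresses and names are taken from the example above; adjust them to your module path):
# Drop the template from state only; the real resource in GCP is untouched
terraform state rm google_compute_instance_template.my_service_template
# Re-import it; the full projects/<project>/global/instanceTemplates/<name> path should also work as the import ID
terraform import google_compute_instance_template.my_service_template my_service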
Some suggested workarounds in this thread:
terraform lifecycle prevent destroy

Terraform conditional option_settings in a dynamic option block

When using RDS option_groups, there are some options that require option_settings and some that don't. Terraform throws an error if the option_settings block is included with an option that doesn't use option settings, and terraform apply fails.
I have a module that accepts a map of objects for RDS instances, including their option groups/options/option_settings. Within this is a resource that has an option requiring the option_settings block to be omitted (the S3_INTEGRATION option). Below is the option_group resource block code being used:
resource "aws_db_option_group" "main" {
for_each = {
for name, rds in var.main : name => rds
if rds.option_group_name != ""
}
name = each.value["option_group_name"]
option_group_description = "Terraform Option Group"
engine_name = each.value["engine"]
major_engine_version = each.value["major_engine_version"]
dynamic "option" {
for_each = each.value["options"]
content {
option_name = option.key
option_settings {
name = option.value["option_name"]
value = option.value["option_value"]
}
}
}
}
Is there a way to make the creation of the option_settings block within an option conditional, to circumvent this?
Terraform supports nested dynamic blocks too, which, in my understanding, is what you are looking for.
Hashicorp documentation on Nested Dynamic Blocks: https://developer.hashicorp.com/terraform/language/expressions/dynamic-blocks#multi-level-nested-block-structures
You can modify your aws_db_option_group resource with the code below and make option_settings optional for the module. The nested block is only generated when values are supplied (how flexible this is also depends on the typing of the variable).
If you are already using Terraform version >= 1.3, you can also use optional object type attributes for this.
Hashicorp documentation: https://www.hashicorp.com/blog/terraform-1-3-improves-extensibility-and-maintainability-of-terraform-modules
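For instance, a sketch of what that could look like, assuming a simplified shape for var.main (your real variable likely carries more RDS attributes):
variable "main" {
  type = map(object({
    option_group_name    = string
    engine               = string
    major_engine_version = string
    options = map(object({
      # option_settings defaults to an empty map when not supplied (Terraform >= 1.3)
      option_settings = optional(map(string), {})
    }))
  }))
}
With something like that in place, the resource itself can use a nested dynamic "option_settings" block: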
resource "aws_db_option_group" "main" {
for_each = {
for name, rds in var.main : name => rds
if rds.option_group_name != ""
}
name = each.value["option_group_name"]
option_group_description = "Terraform Option Group"
engine_name = each.value["engine"]
major_engine_version = each.value["major_engine_version"]
dynamic "option" {
for_each = each.value["options"]
content {
option_name = option.key
dynamic "option_settings" {
for_each = option.value["option_settings"]
content {
name = option_settings.key
value = option_settings.value
}
## Uncomment if you find this better and remove the above content block.
# content {
# name = option_settings.value["name"]
# value = option_settings.value["value"]
# }
}
}
}
}
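For reference, an illustrative input shaped for the code above (the names, engine, and ARN values are made up):
main = {
  "example-db" = {
    option_group_name    = "example-option-group"
    engine               = "sqlserver-ee"
    major_engine_version = "15.00"
    options = {
      # option_settings omitted entirely (needs the optional() typing), or pass option_settings = {}
      S3_INTEGRATION = {}
      # settings supplied as a simple name => value map
      SQLSERVER_AUDIT = {
        option_settings = {
          IAM_ROLE_ARN = "arn:aws:iam::111122223333:role/example-audit-role"
        }
      }
    }
  }
}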
Hope it helps.

GCP Terraform Forwarding Rule Target

Hey Terraform friends,
Trying to navigate my way through some basic load balancing in the GCP environment. It's my first time doing it, so I'm still picking up on some of the nuances between the various types and environments.
My goal is to have a target pool of hosts be the destination for connections made to the forwarding rule, with whatever load balancing policy the business needs.
The setup is a bit awkward, because I have the project that I'm doing the work in, but the destination subnetworks are in a shared VPC in another project. Now, according to the powers that be who implemented it, the project has permission to use that subnetwork, which so far has proven accurate (I've deployed compute instances to it).
Here's my code:
#target pool
resource "google_compute_target_pool" "target_pool" {
  name             = "${var.name}-pool"
  instances        = var.instances
  session_affinity = var.session_affinity
}

resource "google_compute_address" "lb_address" {
  name         = "${var.name}-load-balancer"
  address_type = "INTERNAL"
  address      = "10.129.48.250" // hard-coded while I figure this out
  subnetwork   = data.google_compute_subnetwork.gcp_subnetwork.name
  project      = var.subnetwork_project
}

#load balancer
resource "google_compute_forwarding_rule" "basic_load_balancer" {
  name                  = var.name
  target                = google_compute_target_pool.target_pool.id
  ip_address            = google_compute_address.lb_address.address
  port_range            = join(",", var.port_range)
  labels                = var.labels
  subnetwork            = data.google_compute_subnetwork.gcp_subnetwork.name
  project               = var.subnetwork_project
  load_balancing_scheme = var.lb_scheme
}
With the relevant data:
data "google_compute_subnetwork" "gcp_subnetwork" {
name = var.subnetwork
project = var.subnetwork_project
}
And variables:
variable "name" {
description = "Name of the application usage"
type = string
}
variable "port_range" {
description = "List of ports to load balance"
}
variable "port_range_health" {
description = "List of ports to health check"
type = list(string)
default = [ "" ]
}
variable "instances" {
description = "List of Instances by Zone/Name value"
type = list(string)
}
variable "backup_instances" {
description = "List of Instances by Zone/Name value for backup use"
type = list(string)
default = []
}
variable "session_affinity" {
description = "Load balancer session affinity type"
type = string
default = "NONE"
}
variable "labels" {
description = "Labels to apply to the load balancer"
type = map
default = {}
}
variable "lb_scheme" {
description = "What the LB Forwarding rule is to be used for"
type = string
default = "INTERNAL"
}
variable "subnetwork" {
description = "What network the load balancer should be deployed into"
type = string
}
variable "subnetwork_project" {
description = "Project that contains the subnetwork to deploy into"
type = string
default = ""
}
variable "project" {
description = "Project supplied by user"
type = string
}
My subnetwork project is set to the project ID of my shared VPC, and I specify the name of the subnetwork.
And when I try to apply, I keep getting hit with an error message about the structure of the target:
Error: Error creating ForwardingRule: googleapi: Error 400: Invalid value for field 'resource.target': 'projects/insight-dev-272215/regions/us-central1/targetPools/insight-dev-pool'. The URL is malformed. Must be a valid In-Project Target Proxy URL or a supported Google API bundle., invalid
│
│ with module.insight_solr_lb.google_compute_forwarding_rule.basic_load_balancer,
│ on ..\..\..\modules\load_balancer\gcp\basic\main.tf line 29, in resource "google_compute_forwarding_rule" "basic_load_balancer":
│ 29: resource "google_compute_forwarding_rule" "basic_load_balancer" {
│
I've beaten my head against this a few times, but I'm not making much progress. Can anyone experienced with GCP forwarding rules see my issue?
I've tried to pass the URI of the target group to the target, but it didn't like that either.
Thanks!

Terraform Resource attribute not being removed when passing in empty values

I am working with a GCP Cloud Composer resource and added a dynamic block to set allowed_ip_range on the resource, which can be used as an IP filter for accessing the Apache Airflow web UI.
I was able to get the allowed ranges set up and can also update them in place to new values.
If I pass in an empty list, I expect the IP address(es) to be removed from the resource, but Terraform thinks no changes are needed.
There is probably something wrong in my code, but I am not sure what exactly I would need to change. Does it involve adding a conditional expression to the for_each in the dynamic block?
Child module main.tf
web_server_network_access_control {
  dynamic "allowed_ip_range" {
    for_each = var.allowed_ip_range
    content {
      value       = allowed_ip_range.value["value"]
      description = allowed_ip_range.value["description"]
    }
  }
}
Child module variables.tf
variable "allowed_ip_range" {
description = "The IP ranges which are allowed to access the Apache Airflow Web Server UI."
type = list(map(string))
default = []
}
Parent module terraform.tfvars
allowed_ip_range = [
  {
    value       = "11.0.0.2/32"
    description = "Test dynamic block 1"
  },
]
You can set the default value in your variables.tf file:
variable "allowed_ip_range" {
description = "The IP ranges which are allowed to access the Apache Airflow Web Server UI"
type = list(map(string))
default = [
{
value = "0.0.0.0/0"
description = "Allows access from all IPv4 addresses (default value)"
},
{
value = "::0/0"
description = "Allows access from all IPv6 addresses (default value)"
},
]
}
And when you delete the variable from terraform.tfvars, you will get the default values.
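For example, a call from the parent module might then look like this (module path is hypothetical): omit allowed_ip_range to fall back to the open defaults above, or pass an explicit list to restrict access again.
module "composer_env" {
  source = "./modules/composer" # hypothetical path

  # allowed_ip_range omitted -> the 0.0.0.0/0 and ::0/0 defaults apply
  # allowed_ip_range = [
  #   { value = "11.0.0.2/32", description = "Test dynamic block 1" },
  # ]
}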

terraform count dependent on data from target environment

I'm getting the following error when trying to initially plan or apply a resource that uses data values from the AWS environment in a count.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

Error: Invalid count argument

  on main.tf line 24, in resource "aws_efs_mount_target" "target":
  24: count = length(data.aws_subnet_ids.subnets.ids)

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
$ terraform --version
Terraform v0.12.9
+ provider.aws v2.30.0
I tried using the -target option, but it doesn't seem to work on a data source.
$ terraform apply -target aws_subnet_ids.subnets
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
The only solution I found that works is:
remove the resource
apply the project
add the resource back
apply again
Here is a terraform config I created for testing.
provider "aws" {
version = "~> 2.0"
}
locals {
project_id = "it_broke_like_3_collar_watch"
}
terraform {
required_version = ">= 0.12"
}
resource aws_default_vpc default {
}
data aws_subnet_ids subnets {
vpc_id = aws_default_vpc.default.id
}
resource aws_efs_file_system efs {
creation_token = local.project_id
encrypted = true
}
resource aws_efs_mount_target target {
depends_on = [ aws_efs_file_system.efs ]
count = length(data.aws_subnet_ids.subnets.ids)
file_system_id = aws_efs_file_system.efs.id
subnet_id = tolist(data.aws_subnet_ids.subnets.ids)[count.index]
}
Finally figured out the answer after researching the answer by Dude0001.
Short answer: use the aws_vpc data source with the default argument instead of the aws_default_vpc resource. Here is the working sample with comments on the changes.
locals {
  project_id = "it_broke_like_3_collar_watch"
}

terraform {
  required_version = ">= 0.12"
}

// Delete this --> resource aws_default_vpc default {}

// Add this
data aws_vpc default {
  default = true
}

data "aws_subnet_ids" "subnets" {
  // Update this from aws_default_vpc.default.id
  vpc_id = "${data.aws_vpc.default.id}"
}

resource aws_efs_file_system efs {
  creation_token = local.project_id
  encrypted      = true
}

resource aws_efs_mount_target target {
  depends_on     = [aws_efs_file_system.efs]
  count          = length(data.aws_subnet_ids.subnets.ids)
  file_system_id = aws_efs_file_system.efs.id
  subnet_id      = tolist(data.aws_subnet_ids.subnets.ids)[count.index]
}
What I couldn't figure out was why my workaround of removing aws_efs_mount_target on the first apply worked. It's because, after the first apply, the aws_default_vpc was loaded into the state file.
So an alternate solution, without making changes to the original tf file, would be to use the target option on the first apply:
$ terraform apply --target aws_default_vpc.default
However, I don't like this, as it requires a special case on the first deployment, which is pretty unusual for the terraform deployments I've worked with.
The aws_default_vpc isn't a resource Terraform can create or destroy. It is the default VPC for your account in each region, which AWS creates automatically for you and which is protected from being destroyed. You can only (and need to) adopt it into management and into your Terraform state. This will let you begin managing it and inspect it when you run plan or apply. Otherwise, Terraform doesn't know what the resource is or what state it is in, and it cannot create a new one for you, since it is a special, protected type of resource as described above.
With that said, get the default VPC ID from the region you are deploying to in your account, then import it into your Terraform state. Terraform should then be able to inspect it and count the number of subnets.
For example
terraform import aws_default_vpc.default vpc-xxxxxx
https://www.terraform.io/docs/providers/aws/r/default_vpc.html
Using the data source for this looks a little odd to me as well. Can you change your Terraform script to get the count directly through the aws_default_vpc resource?