How to associate new "aws_wafregional_rule" with existing WAF ACL - amazon-web-services

I have created a WAF ACL using the AWS Console. Now I need to create a WAF rule using Terraform, so I have implemented the rule below.
resource "aws_wafregional_byte_match_set" "blocked_path_match_set" {
name = format("%s-%s-blocked-path", local.name, var.module)
dynamic "byte_match_tuples" {
for_each = length(var.blocked_path_prefixes) > 0 ? var.blocked_path_prefixes : []
content {
field_to_match {
type = lookup(byte_match_tuples.value, "type", null)
}
target_string = lookup(byte_match_tuples.value, "target_string", null)
positional_constraint = lookup(byte_match_tuples.value, "positional_constraint", null)
text_transformation = lookup(byte_match_tuples.value, "text_transformation", null)
}
}
}
resource "aws_wafregional_rule" "blocked_path_allowed_ipaccess" {
metric_name = format("%s%s%sBlockedPathIpaccess", var.application, var.environment, var.module)
name = format("%s%s%sBlockedPathIpaccessRule", var.application, var.environment, var.module)
predicate {
type = "ByteMatch"
data_id = aws_wafregional_byte_match_set.blocked_path_match_set.id
negated = false
}
}
But how do I map this new rule to the existing web ACL that was created through the AWS Console? As per the documentation I can use "aws_wafregional_web_acl" to create a new web ACL, but is there a way to associate a rule created through Terraform with an existing WAF ACL? I have a GitLab pipeline that deploys Terraform code to AWS, so eventually I will pass the id/ARN of the existing web ACL and, through the pipeline, just add/update the new rule without impacting the existing rules that were created through the console.
Please share your valuable feedback.
Thank you.

As per the WAF documentation, you associate the rule with a web ACL via the aws_wafregional_web_acl resource; see the code snippet below.
resource "aws_wafregional_web_acl" "foo" {
name = "foo"
metric_name = "foo"
default_action {
type = "ALLOW"
}
rule {
action {
type = "BLOCK"
}
priority = 1
rule_id = aws_wafregional_rule.blocked_path_allowed_ipaccess.id
}
}
However, as you said, you have already created the web ACL in the AWS Console. Terraform does support importing existing AWS resources, so you would need to import the web ACL if you would like to manage it (and attach rules to it) via Terraform.
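For illustration, a rough sketch of that flow: import the console-created ACL into state, then describe it in configuration and attach the Terraform-managed rule. The resource name, ACL name/metric name, and WEB-ACL-ID below are placeholders, not your real values.

# one-off import, e.g. as a step in the GitLab pipeline:
#   terraform import aws_wafregional_web_acl.existing WEB-ACL-ID

resource "aws_wafregional_web_acl" "existing" {
  # these must match the ACL that was created in the console
  name        = "existing-web-acl-name"
  metric_name = "existingWebAclMetric"

  default_action {
    type = "ALLOW"
  }

  # rules that were added in the console also need to be declared here,
  # because Terraform reconciles the web ACL's full rule list on apply
  rule {
    action {
      type = "BLOCK"
    }
    priority = 10
    rule_id  = aws_wafregional_rule.blocked_path_allowed_ipaccess.id
  }
}

Note that once the web ACL is imported, Terraform owns its rule list, so console-created rules that are not declared in the configuration would be removed on the next apply.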

Related

How can I configure Terraform to update a GCP compute engine instance template without destroying and re-creating it?

I have a service deployed on GCP compute engine. It consists of a compute engine instance template, instance group, instance group manager, and load balancer + associated forwarding rules etc.
We're forced into using compute engine rather than Cloud Run or some other serverless offering due to the need for docker-in-docker for the service in question.
The deployment is managed by terraform. I have a config that looks something like this:
data "google_compute_image" "debian_image" {
family = "debian-11"
project = "debian-cloud"
}
resource "google_compute_instance_template" "my_service_template" {
name = "my_service"
machine_type = "n1-standard-1"
disk {
source_image = data.google_compute_image.debian_image.self_link
auto_delete = true
boot = true
}
...
metadata_startup_script = data.local_file.startup_script.content
metadata = {
MY_ENV_VAR = var.whatever
}
}
resource "google_compute_region_instance_group_manager" "my_service_mig" {
version {
instance_template = google_compute_instance_template.my_service_template.id
name = "primary"
}
...
}
resource "google_compute_region_backend_service" "my_service_backend" {
...
backend {
group = google_compute_region_instance_group_manager.my_service_mig.instance_group
}
}
resource "google_compute_forwarding_rule" "my_service_frontend" {
depends_on = [
google_compute_region_instance_group_manager.my_service_mig,
]
name = "my_service_ilb"
backend_service = google_compute_region_backend_service.my_service_backend.id
...
}
I'm running into issues where Terraform is unable to perform any kind of update to this service without running into conflicts. It seems that instance templates are immutable in GCP, and doing anything like updating the startup script, adding an env var, or similar forces it to be deleted and re-created.
Terraform prints info like this in that situation:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
    ~ update in-place
  -/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.connectors_compute_engine.google_compute_instance_template.airbyte_translation_instance1 must be replaced
-/+ resource "google_compute_instance_template" "my_service_template" {
      ~ id       = "projects/project/..." -> (known after apply)
      ~ metadata = { # forces replacement
          + "TEST" = "test"
            # (1 unchanged element hidden)
        }
The only solution I've found for getting out of this situation is to delete the entire service and all associated entities, from the load balancer down to the instance template, and re-create them.
Is there some way to avoid this situation so that I'm able to change the instance template without having to manually update all the Terraform config twice? At this point I'm even fine if it ends up creating some downtime for the service in question rather than a full rolling update or something, since that's what's happening now anyway.
I ran into this issue as well.
However, according to https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance_template#using-with-instance-group-manager:

Instance Templates cannot be updated after creation with the Google Cloud Platform API. In order to update an Instance Template, Terraform will destroy the existing resource and create a replacement. In order to effectively use an Instance Template resource with an Instance Group Manager resource, it's recommended to specify create_before_destroy in a lifecycle block. Either omit the Instance Template name attribute, or specify a partial name with name_prefix.
I would also test and plan with adding this lifecycle meta-argument to the resource:

resource "google_compute_instance_template" "my_service_template" {
  # ...

  lifecycle {
    prevent_destroy = true
  }
}
Or, more realistically in your specific case, something like:

resource "google_compute_instance_template" "my_service_template" {
  # use name_prefix (or omit name) so each replacement template gets a new
  # unique name, as the documentation quoted above recommends
  name_prefix = "my-service-"
  ...

  lifecycle {
    create_before_destroy = true
  }
}
So run terraform plan with either create_before_destroy or prevent_destroy = true on google_compute_instance_template before terraform apply to see the results.
Ultimately, you can remove google_compute_instance_template.my_service_template from the state file and import it back, as sketched below.
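A rough sketch of that state surgery with the Terraform CLI (the resource address comes from the config above; MY-PROJECT-ID and TEMPLATE-NAME are placeholders, and the import ID format should be checked against the provider docs for your version):

# remove the template from Terraform state without destroying it in GCP
terraform state rm google_compute_instance_template.my_service_template

# re-import the existing template under the same resource address
terraform import google_compute_instance_template.my_service_template \
  projects/MY-PROJECT-ID/global/instanceTemplates/TEMPLATE-NAME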
Some suggested workarounds in this thread:
terraform lifecycle prevent destroy

Is there a way in terraform to have multiple lifecycle configuration blocks for a single AWS S3 bucket?

I am using a module to create an AWS S3 bucket via Terraform. This module creates a bucket with a lot of default policies/configuration as mandated by my company. Along with that, it sets some lifecycle rules using aws_s3_bucket_lifecycle_configuration.
I don't want to use those rules, and they can be disabled via the inputs to the said module. But the problem is that when I try to add my custom lifecycle configuration, I get a different result each time: sometimes my rules are applied, while at other times they are not present in the configuration.
Even the documentation says that:
NOTE: S3 Buckets only support a single lifecycle configuration. Declaring multiple aws_s3_bucket_lifecycle_configuration resources to the same S3 Bucket will cause a perpetual difference in configuration.
What can be a way around this issue?
I can't set enable_private_bucket to false, but here is the code for the configuration resource in the module.
resource "aws_s3_bucket_lifecycle_configuration" "pca_private_bucket_infrequent_access" {
count = var.enable_private_bucket ? 1 : 0
bucket = aws_s3_bucket.pca_private_bucket[0].id
}
You need to use the v3-style inline lifecycle_rule blocks, which are deprecated, but that seems to be the only way of doing it.
Here's how I have it set up, with extra lifecycle rules added via a dynamic block:
resource "aws_s3_bucket" "cache" {
bucket = local.cache_bucket_name
force_destroy = false
tags = {
Name = "${var.vpc_name} cache"
}
lifecycle_rule {
id = "${local.cache_bucket_name} lifecycle rule"
abort_incomplete_multipart_upload_days = 1
enabled = true
noncurrent_version_expiration {
days = 1
}
transition {
days = 1
storage_class = "INTELLIGENT_TIERING"
}
}
dynamic "lifecycle_rule" {
for_each = var.cache_expiration_rules
content {
id = "${lifecycle_rule.value["prefix"]} expiration in ${lifecycle_rule.value["days"]} days"
enabled = true
prefix = lifecycle_rule.value["prefix"]
expiration {
days = lifecycle_rule.value["days"]
}
}
}
lifecycle {
prevent_destroy = true
}
}
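For context, the cache_expiration_rules variable that the dynamic block iterates over isn't shown in the answer; a shape that would satisfy those lookups might look like this (the variable definition and its default values are illustrative assumptions, not part of the original answer):

variable "cache_expiration_rules" {
  type = list(object({
    prefix = string # key prefix the expiration rule applies to
    days   = number # days after which objects under the prefix expire
  }))

  # illustrative defaults only
  default = [
    { prefix = "tmp/",    days = 7 },
    { prefix = "builds/", days = 30 },
  ]
}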

Deploy multiple Cloud Run services with the same Docker image

There are 25+ Cloud Run services that use the same Docker image (from GCR) but are configured with different variables. What is an easy and reliable method to deploy all the services with the latest container image from any kind of incoming event?
Currently I am using the CLI command below to execute them one by one manually. Is there any automated way to implement auto deployment for all the services, one after another or in parallel?
gcloud run deploy SERVICE --image IMAGE_URL
Addendum: labels are being used to mark the 25 containers that share the same container image. It is not required to build the Docker image from source every time; the same image can be used.
In case Terraform is an option for you, you can automate deployment of all the Cloud Run services using either the count or for_each meta-arguments:
count if you need the same service name with indexes
provider "google" {
project = "MY-PROJECT-ID"
}
resource "google_cloud_run_service" "default" {
count = 25
name = "MY-SERVICE-${count.index}"
location = "MY-REGION"
metadata {
annotations = {
"run.googleapis.com/client-name" = "terraform"
}
}
template {
spec {
containers {
image = "IMAGE_URL"
}
}
}
}
data "google_iam_policy" "noauth" {
binding {
role = "roles/run.invoker"
members = ["allUsers"]
}
}
resource "google_cloud_run_service_iam_policy" "noauth" {
for_each = google_cloud_run_service.default
location = each.value.location
project = each.value.project
service = each.value.name
policy_data = data.google_iam_policy.noauth.policy_data
}
where MY-PROJECT-ID and MY-REGION need to be replaced with your project-specific values.
for_each if you need different service names
provider "google" {
project = "MY-PROJECT-ID"
}
resource "google_cloud_run_service" "default" {
for_each = toset( ["Service 1", "Service 2", "Service 25"] )
name = each.key
location = "MY-REGION"
metadata {
annotations = {
"run.googleapis.com/client-name" = "terraform"
}
}
template {
spec {
containers {
image = "IMAGE_URL"
}
}
}
}
data "google_iam_policy" "noauth" {
binding {
role = "roles/run.invoker"
members = ["allUsers"]
}
}
resource "google_cloud_run_service_iam_policy" "noauth" {
for_each = google_cloud_run_service.default
location = each.value.location
project = each.value.project
service = each.value.name
policy_data = data.google_iam_policy.noauth.policy_data
}
where MY-PROJECT-ID and MY-REGION need to be replaced with your project-specific values as well.
You can refer to the official GCP Cloud Run documentation for further details on Terraform usage.
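Since the question mentions that the services share an image but differ in their configuration variables, a variation of the for_each approach could key on a map of per-service settings. The variable name, service names, and environment variable below are illustrative assumptions, not something from the answer above:

variable "services" {
  type = map(object({ env_value = string }))
  default = {
    "service-1" = { env_value = "foo" }
    "service-2" = { env_value = "bar" }
  }
}

resource "google_cloud_run_service" "configured" {
  for_each = var.services
  name     = each.key
  location = "MY-REGION"

  template {
    spec {
      containers {
        image = "IMAGE_URL"

        # per-service configuration injected as an environment variable
        env {
          name  = "MY_SETTING"
          value = each.value.env_value
        }
      }
    }
  }
}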

Scope down statement on WAFv2 using Terraform

I've created a managed rule group statement using Terraform and I'm now trying to add a scope-down statement to it in order to exclude requests from a specific URL. This can be done very easily in the AWS console; however, according to the Terraform docs, it appears that scope_down_statement can't be associated with managed_rule_group_statement.
Am I missing something? Here is where I'm trying to add the scope_down_statement:
resource "aws_wafv2_web_acl" "example" {
name = "waf-example"
description = "Example of a managed rule"
scope = "REGIONAL"
default_action {
allow {}
}
rule {
name = "AWSManagedRulesAnonymousIpList"
priority = 0
override_action {
none {}
}
statement {
managed_rule_group_statement {
name = "AWSManagedRulesAnonymousIpList"
vendor_name = "AWS"
}
}
I was experiencing the same issue. You need to upgrade the AWS provider to version 3.50; please see https://github.com/hashicorp/terraform-provider-aws/pull/19407.
Thanks
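For reference, once you're on a provider version that supports it, a scope-down statement that excludes requests to a particular path could look roughly like the sketch below. The "/excluded-path" prefix, positional constraint, and text transformation are placeholder assumptions to adapt to your case:

    statement {
      managed_rule_group_statement {
        name        = "AWSManagedRulesAnonymousIpList"
        vendor_name = "AWS"

        # only apply the managed rule group to requests NOT starting with the excluded path
        scope_down_statement {
          not_statement {
            statement {
              byte_match_statement {
                positional_constraint = "STARTS_WITH"
                search_string         = "/excluded-path"

                field_to_match {
                  uri_path {}
                }

                text_transformation {
                  priority = 0
                  type     = "NONE"
                }
              }
            }
          }
        }
      }
    }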

How to make a GCP Cloud Function public using Terraform

I will start by saying I am very new to both GCP and Terraform, so I hope there is a simple answer that I have just overlooked.
I am trying to create a GCP cloud function and then make it public using Terraform. I am able to create the function but not make it public, despite closely following the documentation's example: https://www.terraform.io/docs/providers/google/r/cloudfunctions_function.html
I receive the error "googleapi: Error 403: Permission 'cloudfunctions.functions.setIamPolicy' denied on resource ... (or resource may not exist)" when the google_cloudfunctions_function_iam_member resource is reached.
How can I make this function public? Does it have something to do with the account/api key I am using for credentials to create all these resources?
Thanks in advance.
my main.tf file:
provider "google" {
project = "my-project"
credentials = "key.json" #compute engine default service account api key
region = "us-central1"
}
terraform {
backend "gcs" {
bucket = "manually-created-bucket"
prefix = "terraform/state"
credentials = "key.json"
}
}
# create the storage bucket for our scripts
resource "google_storage_bucket" "source_code" {
name = "test-bucket-lh05111992"
location = "us-central1"
force_destroy = true
}
# zip up function source code
data "archive_file" "my_function_script_zip" {
type = "zip"
source_dir = "../source/scripts/my-function-script"
output_path = "../source/scripts/my-function-script.zip"
}
# add function source code to storage
resource "google_storage_bucket_object" "my_function_script_zip" {
name = "index.zip"
bucket = google_storage_bucket.source_code.name
source = "../source/scripts/my-function-script.zip"
}
#create the cloudfunction
resource "google_cloudfunctions_function" "function" {
name = "send_my_function_script"
description = "This function is called in GTM. It sends a users' google analytics id to BigQuery."
runtime = "nodejs10"
available_memory_mb = 128
source_archive_bucket = google_storage_bucket.source_code.name
source_archive_object = google_storage_bucket_object.my_function_script_zip.name
trigger_http = true
entry_point = "handleRequest"
}
# IAM entry for all users to invoke the function
resource "google_cloudfunctions_function_iam_member" "invoker" {
project = google_cloudfunctions_function.function.project
region = "us-central1"
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
member = "allUsers"
}
It seems the only problem with that example from the Terraform site is the Cloud Functions IAM resources, which were modified in November 2019: you now have to declare them with the newer google_cloudfunctions_function_iam_* resources. For your use case (a public Cloud Function), I'd recommend using google_cloudfunctions_function_iam_binding and just setting the members attribute to ["allUsers"], so it'd be something like this:
resource "google_cloudfunctions_function_iam_binding" "binding" {
project = google_cloudfunctions_function.function.project
region = google_cloudfunctions_function.function.region
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
members = [
"allUsers",
]
}
Finally, you can give it a test and modify the functions you've already created through the setIamPolicy method's "Try this API" panel, entering the proper resource and a request body like this (make sure to enter the "resource" parameter correctly):
{
  "policy": {
    "bindings": [
      {
        "members": [
          "allUsers"
        ],
        "role": "roles/cloudfunctions.invoker"
      }
    ]
  }
}
In addition to adjusting the IAM roles as #chinoche suggested, I also discovered that I needed to modify the service account I was using to give it project owner permissions (I guess the default one I was using didn't have this). I updated my key.json and it finally worked.
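As an aside, if granting full project owner feels too broad, one alternative (my assumption, not part of the answer above) is to give the deploying service account the Cloud Functions Admin role, which includes the cloudfunctions.functions.setIamPolicy permission from the 403 error. A minimal sketch, with a placeholder service account email:

# grant the deployer service account permission to manage Cloud Functions,
# including setting IAM policies on them (account email is a placeholder)
resource "google_project_iam_member" "deployer_cloudfunctions_admin" {
  project = "my-project"
  role    = "roles/cloudfunctions.admin"
  member  = "serviceAccount:deployer@my-project.iam.gserviceaccount.com"
}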