How to set log retention days for a CloudFront function in Terraform? - amazon-web-services

I have an example Cloudfront function:
resource "aws_cloudfront_function" "cool_function" {
name = "cool-function"
runtime = "cloudfront-js-1.0"
comment = "The cool function"
publish = true
code = <<EOT
function handler(event) {
var headers = event.request.headers;
if (
typeof headers.coolheader === "undefined" ||
headers.coolheader.value !== "That_is_cool_bro"
) {
console.log("That is not cool bro!")
}
return event.request;
}
EOT
}
When I create this function, the CloudWatch log group /aws/cloudfront/function/cool-function is created automatically.
But the log group's retention policy is Never Expire, and I can't see any parameter in Terraform that allows setting the retention days.
So the question is:
is it possible to automatically import the aws_cloudwatch_log_group every time a CloudFront function is created, and change retention_in_days for this resource?

Quite a few AWS services create their log groups implicitly on first use. To prevent that, you need to explicitly create the group before the service has a chance to do it.
To do so, define the aws_cloudwatch_log_group with the expected name yourself, set the desired retention, and add an explicit depends_on relation between the function and the log group so the log group is created first. For existing deployments, you would need to import the already-created log groups into your Terraform state.
resource "aws_cloudfront_function" "cool_function" {
name = "cool-function"
...
depends_on = [
aws_cloudwatch_log_group.logs
]
}
resource "aws_cloudwatch_log_group" "logs" {
name = "/aws/cloudfront/function/cool-function"
retention_in_days = 123
...
}
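For the migration case mentioned above, a minimal sketch of importing the automatically created log group into state (the resource address and log group name come from the example above; note that CloudFront function log groups are created in us-east-1, so the import has to run against that region):

terraform import aws_cloudwatch_log_group.logs /aws/cloudfront/function/cool-function

After the import, terraform plan should only show the retention_in_days change instead of attempting to create a second log group.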

Related

How can I configure Terraform to update a GCP compute engine instance template without destroying and re-creating?

I have a service deployed on GCP compute engine. It consists of a compute engine instance template, instance group, instance group manager, and load balancer + associated forwarding rules etc.
We're forced into using compute engine rather than Cloud Run or some other serverless offering due to the need for docker-in-docker for the service in question.
The deployment is managed by terraform. I have a config that looks something like this:
data "google_compute_image" "debian_image" {
family = "debian-11"
project = "debian-cloud"
}
resource "google_compute_instance_template" "my_service_template" {
name = "my_service"
machine_type = "n1-standard-1"
disk {
source_image = data.google_compute_image.debian_image.self_link
auto_delete = true
boot = true
}
...
metadata_startup_script = data.local_file.startup_script.content
metadata = {
MY_ENV_VAR = var.whatever
}
}
resource "google_compute_region_instance_group_manager" "my_service_mig" {
version {
instance_template = google_compute_instance_template.my_service_template.id
name = "primary"
}
...
}
resource "google_compute_region_backend_service" "my_service_backend" {
...
backend {
group = google_compute_region_instance_group_manager.my_service_mig.instance_group
}
}
resource "google_compute_forwarding_rule" "my_service_frontend" {
depends_on = [
google_compute_region_instance_group_manager.my_service_mig,
]
name = "my_service_ilb"
backend_service = google_compute_region_backend_service.my_service_backend.id
...
}
I'm running into issues where Terraform is unable to perform any kind of update to this service without running into conflicts. It seems that instance templates are immutable in GCP, and doing anything like updating the startup script, adding an env var, or similar forces it to be deleted and re-created.
Terraform prints info like this in that situation:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.connectors_compute_engine.google_compute_instance_template.airbyte_translation_instance1 must be replaced
-/+ resource "google_compute_instance_template" "my_service_template" {
      ~ id       = "projects/project/..." -> (known after apply)
      ~ metadata = { # forces replacement
          + "TEST" = "test"
            # (1 unchanged element hidden)
        }
The only solution I've found for getting out of this situation is to delete the entire service and all associated resources, from the load balancer down to the instance template, and re-create them.
Is there some way to avoid this situation so that I can change the instance template without having to manually update all the Terraform config twice? At this point I'm even fine with some downtime for the service in question rather than a full rolling update, since that's what's happening now anyway.
I ran into this issue as well.
However, according to:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance_template#using-with-instance-group-manager
Instance Templates cannot be updated after creation with the Google Cloud Platform API. In order to update an Instance Template, Terraform will destroy the existing resource and create a replacement. In order to effectively use an Instance Template resource with an Instance Group Manager resource, it's recommended to specify create_before_destroy in a lifecycle block. Either omit the Instance Template name attribute, or specify a partial name with name_prefix.
I would also test and plan with this lifecycle meta-argument:
  + lifecycle {
  +   prevent_destroy = true
  + }
}
Or, more realistically in your specific case, something like this (using name_prefix instead of name, as the documentation quoted above recommends; the prefix value is just illustrative):
resource "google_compute_instance_template" "my_service_template" {
  # Omit `name`, or use `name_prefix` so the replacement template gets a fresh name
  name_prefix = "my-service-"
  ...

  lifecycle {
    create_before_destroy = true
  }
}
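With create_before_destroy, Terraform creates the replacement template first and only afterwards destroys the old one, so the instance group manager can be pointed at the new template instead of the whole chain being torn down; using name_prefix (or omitting name) avoids a name collision while both templates briefly coexist.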
So run terraform plan with either create_before_destroy or prevent_destroy = true on google_compute_instance_template before terraform apply to see the results.
Ultimately, you can also remove google_compute_instance_template.my_service_template from the state file and import it back.
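A minimal sketch of that state operation, assuming the resource address from the snippets above and placeholder project/template names:

terraform state rm google_compute_instance_template.my_service_template
terraform import google_compute_instance_template.my_service_template projects/PROJECT_ID/global/instanceTemplates/TEMPLATE_NAME

Replace PROJECT_ID and TEMPLATE_NAME with the actual values from your project.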
Some suggested workarounds in this thread:
terraform lifecycle prevent destroy

Is there a way in terraform to have multiple lifecycle configuration blocks for a single AWS S3 bucket?

I am using a module to create an AWS S3 bucket via Terraform. This module creates a bucket with a lot of default policies/configuration as mandated by my company. Along with that, it sets some lifecycle rules using aws_s3_bucket_lifecycle_configuration.
I don't want to use those rules, and they can be disabled via the inputs to the said module. But the problem is that when I try to add my custom lifecycle configuration, I get a different result each time: sometimes my rules are applied, while at other times they are not present in the configuration.
Even the documentation says that:
NOTE: S3 Buckets only support a single lifecycle configuration. Declaring multiple aws_s3_bucket_lifecycle_configuration resources to the same S3 Bucket will cause a perpetual difference in configuration.
What is the way around this issue?
I can't set enable_private_bucket to false, but here is the code for the configuration resource in the module.
resource "aws_s3_bucket_lifecycle_configuration" "pca_private_bucket_infrequent_access" {
count = var.enable_private_bucket ? 1 : 0
bucket = aws_s3_bucket.pca_private_bucket[0].id
}
You need to use the v3-style inline lifecycle_rule blocks, which are deprecated, but it seems to be the only way of doing it.
Here's how I have it set up, with extra lifecycle rules added via a dynamic block:
resource "aws_s3_bucket" "cache" {
bucket = local.cache_bucket_name
force_destroy = false
tags = {
Name = "${var.vpc_name} cache"
}
lifecycle_rule {
id = "${local.cache_bucket_name} lifecycle rule"
abort_incomplete_multipart_upload_days = 1
enabled = true
noncurrent_version_expiration {
days = 1
}
transition {
days = 1
storage_class = "INTELLIGENT_TIERING"
}
}
dynamic "lifecycle_rule" {
for_each = var.cache_expiration_rules
content {
id = "${lifecycle_rule.value["prefix"]} expiration in ${lifecycle_rule.value["days"]} days"
enabled = true
prefix = lifecycle_rule.value["prefix"]
expiration {
days = lifecycle_rule.value["days"]
}
}
}
lifecycle {
prevent_destroy = true
}
}
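For completeness, a minimal sketch of the cache_expiration_rules variable that dynamic block consumes (the variable name comes from the snippet above; the exact object shape and default are assumptions based on how it is indexed):

variable "cache_expiration_rules" {
  # Each entry adds one expiration rule: `prefix` is the key prefix it applies
  # to and `days` is the age after which objects under that prefix expire.
  type = list(object({
    prefix = string
    days   = number
  }))
  default = []
}

It can then be set per environment, e.g. cache_expiration_rules = [{ prefix = "tmp/", days = 7 }].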

Dependency between pubsub topic and subscription using terraform script

I am using one Terraform script to create a Pub/Sub topic and subscription. If the subscription needs to subscribe to the topic created by the same script, is there a way to create a dependency so that Terraform attempts to create the Pub/Sub subscription only after the topic is created?
My main file looks like this:
version = ""
project = var.project_id
region = var.region
zone = var.zone
}
# module "Dataflow" {
#source = "../modules/cloud-dataflow"
#}
module "PubSubTopic" {
source = "../modules/pubsub_topic"
}
#module "PubSubSubscription" {
# source = "../modules/pubsub_subscription"
#}
#module "CloudFunction" {
# source = "../modules/cloud-function"
#}
Terraform will attempt to create the resources in the proper order, but to answer your question, what you're looking for is the module dependency meta-argument depends_on.
For example, the subscription module will be created only once the topic resource has already been created. For that, add depends_on to the subscription module.
Example:
resource "aws_iam_policy_attachment" "example" {
name = "example"
roles = [aws_iam_role.example.name]
policy_arn = aws_iam_policy.example.arn
}
module "uses-role" {
# ...
depends_on = [aws_iam_policy_attachment.example]
}
Official documentation: https://www.terraform.io/docs/language/meta-arguments/depends_on.html
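Applied to the modules in the question, a minimal sketch might look like this (it assumes the commented-out PubSubSubscription module is enabled; module names and sources come from the question):

module "PubSubTopic" {
  source = "../modules/pubsub_topic"
}

module "PubSubSubscription" {
  source = "../modules/pubsub_subscription"

  # Create the subscription only after the topic module has been applied
  depends_on = [module.PubSubTopic]
}

Note that module-level depends_on requires Terraform 0.13 or later, and that if the subscription resource references the topic's name or id directly, Terraform infers the dependency on its own and the explicit depends_on is unnecessary.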
You can create a simple Pub/Sub topic and a subscription with this snippet (just add the .json key file for a service account with enough privileges to your filesystem):
provider "google" {
credentials = "${file("account.json")}" # Or use GOOGLE_APPLICATION_CREDENTIALS
project = "__your_project_id__"
region = "europe-west4" # Amsterdam
}
resource "google_pubsub_topic" "incoming_data" {
name = "incoming-data"
}
resource "google_pubsub_subscription" "incoming_subs" {
name = "Subscription_for_incoming_data"
topic = google_pubsub_topic.incoming_data.name
# Time since Pubsub receives a message to deletion.
expiration_policy {
ttl = "300000s"
}
# Time from client reception to ACK
message_retention_duration = "1200s"
retain_acked_messages = false
enable_message_ordering = false
}
To link a subscription with a topic in Terraform, you just need to reference it:
topic = google_pubsub_topic.TERRAFORM_TOPIC.name
Be careful with Google's requirements for topic and subscription identifiers. If they're not valid, terraform plan will pass, but you'll get an Error 400: You have passed an invalid argument to the service.

How to associate new "aws_wafregional_rule" with existing WAF ACL

I have created a WAF ACL using the AWS Console. Now I need to create a WAF rule using Terraform, so I have implemented the rule below.
resource "aws_wafregional_byte_match_set" "blocked_path_match_set" {
name = format("%s-%s-blocked-path", local.name, var.module)
dynamic "byte_match_tuples" {
for_each = length(var.blocked_path_prefixes) > 0 ? var.blocked_path_prefixes : []
content {
field_to_match {
type = lookup(byte_match_tuples.value, "type", null)
}
target_string = lookup(byte_match_tuples.value, "target_string", null)
positional_constraint = lookup(byte_match_tuples.value, "positional_constraint", null)
text_transformation = lookup(byte_match_tuples.value, "text_transformation", null)
}
}
}
resource "aws_wafregional_rule" "blocked_path_allowed_ipaccess" {
metric_name = format("%s%s%sBlockedPathIpaccess", var.application, var.environment, var.module)
name = format("%s%s%sBlockedPathIpaccessRule", var.application, var.environment, var.module)
predicate {
type = "ByteMatch"
data_id = aws_wafregional_byte_match_set.blocked_path_match_set.id
negated = false
}
}
But how do I map this new rule to the existing web ACL which was created through the AWS Console? As per the documentation I can use "aws_wafregional_web_acl" to create a new web ACL, but is there a way to associate a rule created through Terraform with an existing WAF ACL? I have a GitLab pipeline which deploys Terraform code to AWS, so eventually I will pass the id/ARN of the existing web ACL and, through the pipeline, just add/update the new rule without impacting existing rules which were created through the console.
Please share your valuable feedback.
Thank you.
As per the WAF documentation, you associate the rule with the web ACL via the aws_wafregional_web_acl resource; see the code snippet below.
resource "aws_wafregional_web_acl" "foo" {
name = "foo"
metric_name = "foo"
default_action {
type = "ALLOW"
}
rule {
action {
type = "BLOCK"
}
priority = 1
rule_id = aws_wafregional_rule.blocked_path_allowed_ipaccess.id
}
}
However, as you said, you have already created the web ACL in the AWS console. Terraform does support importing an existing AWS resource, so you would need to import it if you would like to manage it (and its rule associations) via Terraform.
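A minimal sketch of such an import, assuming the resource address aws_wafregional_web_acl.foo from the snippet above and a placeholder web ACL ID:

terraform import aws_wafregional_web_acl.foo a1b2c3d4-5678-90ab-cdef-EXAMPLE11111

Once imported, the existing rules in the ACL need to be reflected in the Terraform configuration as well, since Terraform will manage the full rule set of the imported resource; the new rule block referencing aws_wafregional_rule.blocked_path_allowed_ipaccess can then be added alongside them.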

How to create an alert policy for unknown custom metric in GCP

Given the following alert policy in GCP (created with terraform)
resource "google_monitoring_alert_policy" "latency_alert_policy" {
display_name = "Latency of 95th percentile more than 1 second"
combiner = "OR"
conditions {
display_name = "Latency of 95th percentile more than 1 second"
condition_threshold {
filter = "metric.type=\"custom.googleapis.com/http/server/requests/p95\" resource.type=\"k8s_pod\""
threshold_value = 1000
duration = "60s"
comparison = "COMPARISON_GT"
aggregations {
alignment_period = "60s"
per_series_aligner= "ALIGN_NEXT_OLDER"
cross_series_reducer= "REDUCE_MAX"
group_by_fields = [
"metric.label.\"uri\"",
"metric.label.\"method\"",
"metric.label.\"status\"",
"metadata.user_labels.\"app.kubernetes.io/name\"",
"metadata.user_labels.\"app.kubernetes.io/component\""
]
}
trigger {
count = 1
percent = 0
}
}
}
}
I get the following error (the alert policy is part of a Terraform project that also creates the cluster):
Error creating AlertPolicy: googleapi: Error 404: The metric referenced by the provided filter is unknown. Check the metric name and labels.
Now, this is a custom metric (reported by a Spring Boot app with Micrometer), so the metric does not exist yet when the infrastructure is created. Does GCP have to know a metric before an alert can be created for it? That would mean a Spring Boot app has to be deployed on the cluster and already sending metrics before this policy can be created.
Am I missing something (like this should not be done in Terraform / infrastructure code)?
Interesting question. The reason for the 404 error is that the resource was not found: the metric descriptor is effectively a prerequisite for the alert policy. I would create the metric descriptor first (you can use the resource below as a reference) and then go on to create the alerting policy.
This is a way you may be able to avoid the problem. Please comment if it makes sense, and if you make it work like this, share it.
For reference (this can be referenced from the alert policy, according to the Terraform docs):
resource "google_monitoring_metric_descriptor" "p95_latency" {
description = ""
display_name = ""
type = "custom.googleapis.com/http/server/requests/p95"
metric_kind = "GAUGE"
value_type = "DOUBLE"
labels {
key = "status"
}
labels {
key = "uri"
}
labels {
key = "exception"
}
labels {
key = "method"
}
labels {
key = "outcome"
}
}
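To make sure the descriptor exists before the policy, a minimal sketch is to add an explicit dependency from the alert policy in the question to this descriptor (the depends_on line is the only assumed addition; resource names come from the snippets above):

resource "google_monitoring_alert_policy" "latency_alert_policy" {
  # ... same configuration as in the question ...

  # Ensure the custom metric descriptor exists before the alert policy is created
  depends_on = [google_monitoring_metric_descriptor.p95_latency]
}

Alternatively, building the filter string from google_monitoring_metric_descriptor.p95_latency.type lets Terraform infer the dependency without an explicit depends_on.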