AWS WAFv2 Terraform import ID issues - amazon-web-services

I'm trying to import an aws_wafv2 web ACL that I created via the console. I set up the Terraform configuration, but for some reason the import fails, saying my ID is not available. Any guidance on this is greatly appreciated.
Here is my main.tf
resource "aws_wafv2_web_acl" "waf_acl_buddyman" {
name = "waf_acl_buddyman"
description = "WAF ACL for buddyman"
id = data.terraform_remote_state.aws_wafv2_web_acl.outputs.aws_wafv2_web_acl_af_acl_buddyman_id
scope = "REGIONAL"
default_action {
allow {}
}
visibility_config {
cloudwatch_metrics_enabled = true
metric_name = "waf_acl_buddyman"
sampled_requests_enabled = true
}
rule {
name = "waf_buddyman_acl"
priority = 0
override_action {
count {}
}
statement {
managed_rule_group_statement {
name = "AWSManagedRulesCommonRuleSet"
vendor_name = "AWS"
excluded_rule {
name = "SizeRestrictions_QUERYSTRING"
}
excluded_rule {
name = "NoUserAgent_HEADER"
}
}
}
visibility_config {
cloudwatch_metrics_enabled = true
metric_name = "friendly-rule-metric-name"
sampled_requests_enabled = true
}
}
}
my output.tf looks like this:
output "aws_wafv2_web_acl_af_acl_buddyman_id" {
value = aws_wafv2_web_acl.waf_acl_buddyman.id
}
When I run terraform import aws_wafv2_web_acl.waf_acl_buddyman baf6e249-ec50-45df-ae9f-073e73f83900/waf_acl_buddyman/REGIONAL it shows:
Acquiring state lock. This may take a few moments...
aws_wafv2_web_acl.waf_acl_buddyman: Importing from ID "baf6e249-ec50-45df-ae9f-073e73f83900/waf_acl_buddyman/REGIONAL"...
aws_wafv2_web_acl.waf_acl_buddyman: Import prepared!
Prepared aws_wafv2_web_acl for import
aws_wafv2_web_acl.waf_acl_buddyman: Refreshing state... [id=baf6e249-ec50-45df-ae9f-073e73f83900]
╷
│ Error: Cannot import non-existent remote object
│
│ While attempting to import an existing object to "aws_wafv2_web_acl.waf_acl_buddyman", the provider detected that no object exists with the given id. Only
│ pre-existing objects can be imported; check that the id is correct and that it is associated with the provider's configured region or endpoint, or use
│ "terraform apply" to create a new remote object for this resource.
╵
Releasing state lock. This may take a few moments...

An output is what your Terraform configuration produces, but here you are trying to feed that output back in as input: the id argument references a remote-state output that is itself derived from this very resource, which is why it is empty. id is a computed attribute of aws_wafv2_web_acl and cannot be set in configuration. Remove the id line, then perform the import again.
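A minimal sketch of the corrected resource (assuming the rest of the configuration matches what exists in AWS), with the computed id argument removed, after which the same import command can be retried:

```hcl
resource "aws_wafv2_web_acl" "waf_acl_buddyman" {
  name        = "waf_acl_buddyman"
  description = "WAF ACL for buddyman"
  scope       = "REGIONAL"
  # no "id" argument here: id is read-only and is populated
  # by terraform import, so it must not be set from an output

  # ... default_action, rule, and visibility_config blocks as above ...
}
```

Note that the import must run against the same region and credentials that own the web ACL; otherwise the provider will still report a non-existent remote object.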

Related

Terraform loop through multiple providers (accounts) - invocation through module

I have a use case where I need help using for_each to loop through multiple providers (AWS accounts & regions). This is a module, and the Terraform will use a hub-and-spoke model.
Below is the Terraform pseudo-code I would like to achieve.
module.tf
---------
app_accounts = [
  { "account" : "53xxxx08", "app_vpc_id" : "vpc-0fxxxxxfec8", "role" : "xxxxxxx", "profile" : "child1" },
  { "account" : "53xxxx08", "app_vpc_id" : "vpc-0fxxxxxfec8", "role" : "xxxxxxx", "profile" : "child2" }
]
Below are the provider and resource files. Please ignore the variables and output files, as they are not relevant here.
provider.tf
------------
provider "aws" {
for_each = var.app_accounts
alias = "child"
profile = each.value.role
}
Here is the main resource block where I want to associate multiple child accounts with a single master account, so I want to iterate through the loop:
resource "aws_route53_vpc_association_authorization" "master" {
provider = aws.master
vpc_id = vpc_id
zone_id = zone_id
}
resource "aws_route53_zone_association" "child" {
provider = aws.child
vpc_id = vpc_id
zone_id = zone_id
}
Any idea on how to achieve this, please? Thanks in advance.
The typical way to achieve your goal in Terraform is to define a shared module representing the objects that should be present in a single account and then to call that module once for each account, passing a different provider configuration into each.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  alias = "master"
  # ...
}

provider "aws" {
  alias   = "example1"
  profile = "example1"
}

module "example1" {
  source     = "./modules/account"
  account    = "53xxxx08"
  app_vpc_id = "vpc-0fxxxxxfec8"

  providers = {
    aws        = aws.example1
    aws.master = aws.master
  }
}

provider "aws" {
  alias   = "example2"
  profile = "example2"
}

module "example2" {
  source     = "./modules/account"
  account    = "53xxxx08"
  app_vpc_id = "vpc-0fxxxxxfec8"

  providers = {
    aws        = aws.example2
    aws.master = aws.master
  }
}
The ./modules/account directory would then contain the resource blocks describing what should exist in each individual account. For example:
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws, aws.master]
    }
  }
}

variable "account" {
  type = string
}

variable "app_vpc_id" {
  type = string
}

resource "aws_route53_zone" "example" {
  # (omitting the provider argument will associate
  # with the default provider configuration, which
  # is different for each instance of this module)
  # ...
}

resource "aws_route53_vpc_association_authorization" "master" {
  provider = aws.master
  vpc_id   = var.app_vpc_id
  zone_id  = aws_route53_zone.example.id
}

resource "aws_route53_zone_association" "child" {
  provider = aws.master
  vpc_id   = var.app_vpc_id
  zone_id  = aws_route53_zone.example.id
}
(I'm not sure if you actually intended var.app_vpc_id to be the VPC specified for those zone associations, but my goal here is only to show the general pattern, not to show a fully-working example.)
Using a shared module in this way allows you to avoid repeating the definitions for each account separately, and keeps each account-specific setting specified in only one place (either in a provider "aws" block or in a module block).
There is no way to make this more dynamic within the Terraform language itself. However, if you expect to add and remove accounts regularly and want a more systematic process, you could use code generation for the root module to mechanically produce the provider and module blocks for each account. That keeps them all consistent and lets you update them together if you ever need to change the interface of the shared module in a way that affects all of the calls.
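That code-generation step can be as simple as a script that renders one provider block and one module block per account. Here is a minimal sketch in Python; the account list and field names are hypothetical, mirroring the example1/example2 blocks above:

```python
# Render one provider + module block per account for the root module.
# The accounts list mirrors the hand-written example1/example2 blocks above.
accounts = [
    {"name": "example1", "profile": "example1",
     "account": "53xxxx08", "app_vpc_id": "vpc-0fxxxxxfec8"},
    {"name": "example2", "profile": "example2",
     "account": "53xxxx08", "app_vpc_id": "vpc-0fxxxxxfec8"},
]

TEMPLATE = '''provider "aws" {{
  alias   = "{name}"
  profile = "{profile}"
}}

module "{name}" {{
  source     = "./modules/account"
  account    = "{account}"
  app_vpc_id = "{app_vpc_id}"

  providers = {{
    aws        = aws.{name}
    aws.master = aws.master
  }}
}}
'''

def render(accounts):
    """Return the generated accounts.tf content as a string."""
    return "\n".join(TEMPLATE.format(**a) for a in accounts)

if __name__ == "__main__":
    print(render(accounts))
```

Writing the result to an accounts.tf file in the root module keeps the per-account blocks consistent; regenerating after a template change updates every call at once.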

Passing certificate arn to ingress annotation using Terraform

Background
Hi all,
Terraform newbie here.
I'm trying to poll an existing AWS certificate ARN and use that value in my ingress.tf file ingress object annotation.
As a first step, I tried to poll the value using the below terraform code:
# get-certificate-arn.tf
data "aws_acm_certificate" "test" {
  domain   = "test.example.com"
  statuses = ["ISSUED"]
}

output "test" {
  value       = data.aws_acm_certificate.test.*.arn
  description = "TESTING"
}
When I run this code, it gives me my certificate ARN back (YEY!) like the example below:
Changes to Outputs:
  + debugging = [
      + [
          + "arn:aws:acm:us-east-1:1234567890:certificate/12345abc-123-456-789def-12345etc",
        ],
    ]
Question:
I'd like to take this to the next level and use the output from above to feed the ingress annotations as shown by "???" in the code below:
# ingress.tf
resource "kubernetes_ingress_v1" "test_ingress" {
  metadata {
    name      = "test-ingress"
    namespace = "default"
    annotations = {
      "alb.ingress.kubernetes.io/certificate-arn" = ???
      ...etc...
    }
  }
}
I've tried:
"alb.ingress.kubernetes.io/certificate-arn" = data.aws_acm_certificate.test.*.arn
which doesn't work, but I can't quite figure out how to pass the value from get-certificate-arn.tf (data.aws_acm_certificate.test.arn) to the ingress.tf file.
The error I get is:
Error: Incorrect attribute value type
│
│ on ingress.tf line 6, in resource "kubernetes_ingress_v1" "test_ingress":
│ 6: annotations = {
│ 9: "alb.ingress.kubernetes.io/certificate-arn" = data.aws_acm_certificate.test.*.arn
[...truncated...]
│ 16: }
│ ├────────────────
│ │ data.aws_acm_certificate.test is object with 11 attributes
│
│ Inappropriate value for attribute "annotations": element "alb.ingress.kubernetes.io/certificate-arn": string required.
If anyone could advise how (if at all!) one can pass a variable to kubernetes_ingress_v1 annotations, that would be amazing. I'm still learning Terraform and am still reviewing the fundamentals of passing values around.
Have you tried using:
"${data.aws_acm_certificate.test.arn}"
Alternatively, you can build the whole annotations block as a local:
locals {
  ingress_annotations = {
    somekey        = "somevalue"
    some_other_key = data.aws_acm_certificate.test.arn
  }
}
and use it in the resource:
annotations = local.ingress_annotations
I'm not that keen on TF, but you might need a more complex setup with a for expression:
locals {
  ingress_annotations = [
    { key = "somekey", value = "somevalue" },
    { key = "some_other_key", value = data.aws_acm_certificate.test.arn },
  ]
}

resource "kubernetes_ingress_v1" "test_ingress" {
  metadata {
    name      = "test-ingress"
    namespace = "default"
    annotations = { for line in local.ingress_annotations : line.key => line.value }
  }
}
In the end, the solution was a typo in the data reference: removing the "*" resolved the issue. For interest's sake, if you want to attach two certificates to an ingress annotation, you can join them like this:
"alb.ingress.kubernetes.io/certificate-arn" = format("%s,%s", data.aws_acm_certificate.test.arn, data.aws_acm_certificate.test2.arn)

Using Terraform Provider in aws module

I am going through the Terraform documentation, and it seems unclear to me. I'm quite new to Terraform, so no doubt I'm misunderstanding something here:
https://developer.hashicorp.com/terraform/language/modules/develop/providers
Problem:
My terraform pipeline is returning the following warning:
│
│ on waf-cdn.tf line 9, in module "waf_cdn":
│ 9: aws = aws.useastone
│
│ Module module.waf_cdn does not declare a provider named aws.
│ If you wish to specify a provider configuration for the module, add an entry for aws in the required_providers block within the module.
My root module is calling a child WAF module. I understand that I need to configure my provider within my root module. There are 2 files within my root module:
...terraform.tf...
terraform {
  backend "s3" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.33.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "3.1.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.1.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.1"
    }
  }
}
...and providers.tf...
provider "aws" {
  region = var.region
  assume_role {
    role_arn = "arn:aws:iam::${var.account_id}:role/${local.role_name}"
  }
}

provider "aws" {
  region = "us-east-1"
  alias  = "useastone"
  assume_role {
    role_arn = "arn:aws:iam::${var.account_id}:role/${local.role_name}"
  }
}

provider "aws" {
  region = var.region
  alias  = "master"
  assume_role {
    role_arn = replace(
      "arn:aws:iam::${var.master_account_id}:role/${local.role_name}",
      local.app_region,
      "master"
    )
  }
}
When calling the child module, the scope attribute of the WAF needs to be CLOUDFRONT, and the provider region must be us-east-1 because CloudFront is a global service in AWS. Therefore, I need to pass the useastone provider when calling the child WAF module, as seen below:
module "waf_cdn" {
source = "../modules/qa-aws-waf-common"
name = "${local.waf_prefix}-cdn"
logging_arn = aws_kinesis_firehose_delivery_stream.log_stream_cdn.arn
scope = "CLOUDFRONT"
tags = merge(module.tags.tags, { name = "${local.name_prefix}-qa-waf-cdn" })
providers = {
aws = aws.useastone
}
}
With this code I'm getting the error shown above. I'm banging my head against the documentation here, so any help would be really appreciated.
Here's hoping, thanks!
As per the documentation you linked, here is the passage you are interested in [1]:
Additional provider configurations (those with the alias argument set) are never inherited automatically by child modules, and so must always be passed explicitly using the providers map.
Since that is the case, you need to define the provider(s) on the module level as well:
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = ">= 4.33.0"
      configuration_aliases = [aws.useastone]
    }
  }
}
That would probably be an additional providers.tf file in ../modules/qa-aws-waf-common.
[1] https://developer.hashicorp.com/terraform/language/modules/develop/providers#passing-providers-explicitly
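With that declaration in place, the providers map in the root module's waf_cdn call must use the same alias name the child module declares; this is a sketch with the other arguments elided:

```hcl
module "waf_cdn" {
  source = "../modules/qa-aws-waf-common"
  # ... name, logging_arn, scope, tags as before ...

  providers = {
    # the key must match an entry in the module's configuration_aliases
    aws.useastone = aws.useastone
  }
}
```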

Terraform import : ignore specific resource from public module

I'm trying to import the state of a (private) S3 bucket that was created via the console. I'm using the public s3 module. I was able to create a module block and import the state of the bucket. However, terraform plan also tries to create an aws_s3_bucket_public_access_block. How do I ignore or stop Terraform from creating that specific resource from the module?
main.tf
locals {
  region = "dev"
}

module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-${local.region}-bucket"
  acl    = "private"

  block_public_acls   = true
  block_public_policy = true

  lifecycle_rule = [
    {
      id      = "weekly_expiration_rule"
      enabled = true
      expiration = {
        days = 7
      }
    }
  ]
}
Import command for bucket - terraform import module.s3_bucket.aws_s3_bucket.this my-dev-bucket
Meanwhile, when I try importing the public access block resource, I run into the error Error: Cannot import non-existent remote object, even though I have the settings configured on the bucket.
Looking into the source code more carefully, specifically this section:
resource "aws_s3_bucket_public_access_block" "this" {
  count = var.create_bucket && var.attach_public_policy ? 1 : 0
setting attach_public_policy to false got me what I needed.
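In module-block terms, the fix is a single extra argument; this is a sketch showing only the relevant lines of the module call from the question:

```hcl
module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-dev-bucket"
  acl    = "private"

  # drives the count above to 0, so terraform plan no longer
  # tries to create aws_s3_bucket_public_access_block
  attach_public_policy = false
}
```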
You should run terraform plan to see the real output, and read the source code on GitHub (resource "aws_s3_bucket" "this"); you can see the count at line 6.
# module.s3_bucket.aws_s3_bucket.this[0] will be created
...
# module.s3_bucket.aws_s3_bucket_public_access_block.this[0] will be created
...
You can import with these commands (note the [0] index, because the module uses count):
terraform import 'module.s3_bucket.aws_s3_bucket.this[0]' my-test-bucket-823567823576023
terraform import 'module.s3_bucket.aws_s3_bucket_public_access_block.this[0]' my-test-bucket-823567823576023
With my test main.tf below, terraform plan shows 0 to add after the import:
terraform {
  required_version = ">= 0.13.1"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.69"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 2.0"
    }
  }
}

provider "aws" {
  region = "ap-southeast-1"
}

module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-test-bucket-823567823576023"
  acl    = "private"
}

How to resolve the error message when adding SQS redrive policy for deadletter queue created using for_each

I want Terraform to associate my SQS Management Event queue with my DLQ Management Event queue, and the same for the SQS Data Event and DLQ Data Event queues. I am getting error messages when I run apply on the code below. Please, I need some help.
.tfvars
sqs_queue_names = ["CloudTrail_SQS_Management_Event", "CloudTrail_SQS_Data_Event"]
dead_queue_names = ["CloudTrail_DLQ_Management_Event", "CloudTrail_DLQ_Data_Event"]
variable.tf
variable "sqs_queue_names"{
description = "The name of different SQS to be created"
type = set(string)
}
variable "dead_queue_names"{
description = "The name of different Dead Queues to be created"
type = set(string)
}
main.tf
resource "aws_sqs_queue" "CloudTrail_SQS"{
for_each = var.sqs_queue_names
name = each.value
redrive_policy = jsonencode({
deadLetterTargetArn = values(aws_sqs_queue.CloudTrail_SQS_DLQ)[*].arn
maxReceiveCount = var.max_receive_count
})
tags = var.default_tags
}
resource "aws_sqs_queue" "CloudTrail_SQS_DLQ"{
for_each = var.dead_queue_names
name = each.value
tags = var.default_tags
}
ERROR MESSAGES:
Error: error creating SQS Queue (CloudTrail_SQS_Management_Event): InvalidParameterValue: Value {"deadLetterTargetArn":["arn:aws:sqs:us-east-1:123456789012:CloudTrail_DLQ_Data_Event","arn:aws:sqs:us-east-1:123456789012:CloudTrail_DLQ_Management_Event"],"maxReceiveCount":10} for parameter RedrivePolicy is invalid. Reason: Invalid value for deadLetterTargetArn.
│ status code: 400, request id: 9663b896-d86f-569e-92e2-e17152c2db26
│
│ with aws_sqs_queue.CloudTrail_SQS["CloudTrail_SQS_Management_Event"],
│ on main.tf line 5, in resource "aws_sqs_queue" "CloudTrail_SQS":
│ 5: resource "aws_sqs_queue" "CloudTrail_SQS"{
│
╵
╷
│ Error: error creating SQS Queue (CloudTrail_SQS_Data_Event): InvalidParameterValue: Value {"deadLetterTargetArn":["arn:aws:sqs:us-east-1:123456789012:CloudTrail_DLQ_Data_Event","arn:aws:sqs:us-east-1:123456789012:CloudTrail_DLQ_Management_Event"],"maxReceiveCount":10} for parameter RedrivePolicy is invalid. Reason: Invalid value for deadLetterTargetArn.
│ status code: 400, request id: 88b8e4c5-1d50-5559-92f8-bd2297fd231f
│
│ with aws_sqs_queue.CloudTrail_SQS["CloudTrail_SQS_Data_Event"],
│ on main.tf line 5, in resource "aws_sqs_queue" "CloudTrail_SQS":
│ 5: resource "aws_sqs_queue" "CloudTrail_SQS"{
The problem here is that you are not associating each dead-letter queue with its corresponding SQS queue: values(aws_sqs_queue.CloudTrail_SQS_DLQ)[*].arn passes every dead-letter queue ARN to every SQS queue, rather than only the matching ARN (and deadLetterTargetArn must be a single ARN, not a list).
To overcome this, I suggest creating a module that ties together an SQS queue and its DLQ. Let's call it my_sqs:
my_sqs/variables.tf:
variable "sqs_queue_name"{
description = "The name of different SQS to be created"
type = string
}
variable "dead_queue_name"{
description = "The name of different Dead Queues to be created"
type = string
}
variable "max_receive_count" {
type = number
}
my_sqs/main.tf:
resource "aws_sqs_queue" "sqs" {
name = var.sqs_queue_name
redrive_policy = jsonencode({
deadLetterTargetArn = aws_sqs_queue.dlq.arn
maxReceiveCount = var.max_receive_count
})
}
resource "aws_sqs_queue" "dlq" {
name = var.dead_queue_name
}
Now we can use this module like this:
variables.tf:
# Please note, we are tying the SQS queue and the DLQ together here as well.
variable "queue_names" {
  default = [
    {
      sqs_name = "CloudTrail_SQS_Management_Event"
      dlq_name = "CloudTrail_DLQ_Management_Event"
    },
    {
      sqs_name = "CloudTrail_SQS_Data_Event"
      dlq_name = "CloudTrail_DLQ_Data_Event"
    }
  ]
}
From the main.tf we call the module we created above:
main.tf:
module "my_sqs" {
source = "./my_sqs"
for_each = {
for sqs, dlq in var.queue_names : sqs => dlq
}
sqs_queue_name = each.value.sqs_name
dead_queue_name = each.value.dlq_name
max_receive_count = 4
}
Please note: for_each on a module block requires Terraform 0.13 or later, so this example will not work with older Terraform versions that do not support for_each on modules.
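If the queue ARNs are needed elsewhere (for example, to point CloudTrail at the queues), the module can expose them as outputs. These output names are hypothetical additions, not part of the code above:

```hcl
# my_sqs/outputs.tf
output "queue_arn" {
  value = aws_sqs_queue.sqs.arn
}

output "dlq_arn" {
  value = aws_sqs_queue.dlq.arn
}
```

The root module can then collect them with an expression such as [for m in module.my_sqs : m.queue_arn].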