Internal Exception while creating AWS FMS Policy for CloudFront

I am getting the error below while creating a Firewall Manager policy for a CloudFront distribution.
The documentation provides little detail on how to deploy a policy for a CloudFront distribution, which is a global resource.
The error occurs while executing my code:
aws_fms_policy.xxxx: Creating...
╷
│ Error: error creating FMS Policy: InternalErrorException:
│
│ with aws_fms_policy.xxxx,
│ on r_wafruleset.tf line 1, in resource "aws_fms_policy" "xxxx":
│ 1: resource "aws_fms_policy" "xxxx" {
│
╵
Releasing state lock. This may take a few moments...
main.tf looks like this, with the provider configuration:
provider "aws" {
region = "ap-southeast-2"
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/yyyy"
}
}
provider "aws" {
alias = "us_east_1"
region = "us-east-1"
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/yyyy"
}
}
r_fms.tf looks like this:
resource "aws_fms_policy" "xxxx" {
name = "xxxx"
exclude_resource_tags = true
resource_tags = var.exclude_tags
remediation_enabled = true
provider = aws.us_east_1
include_map {
account = ["123123123"]
}
resource_type = "AWS::CloudFront::Distribution"
security_service_policy_data {
type = "WAFV2"
managed_service_data = jsonencode(
{
type = "WAFV2"
defaultAction = {
type = "ALLOW"
}
overrideCustomerWebACLAssociation = false
postProcessRuleGroups = []
preProcessRuleGroups = [
{
excludeRules = []
managedRuleGroupIdentifier = {
vendorName = "AWS"
managedRuleGroupName = "AWSManagedRulesAmazonIpReputationList"
version = true
}
overrideAction = {
type = "COUNT"
}
ruleGroupArn = null
ruleGroupType = "ManagedRuleGroup"
sampledRequestsEnabled = true
},
{
excludeRules = []
managedRuleGroupIdentifier = {
managedRuleGroupName = "AWSManagedRulesWindowsRuleSet"
vendorName = "AWS"
version = null
}
overrideAction = {
type = "COUNT"
}
ruleGroupArn = null
ruleGroupType = "ManagedRuleGroup"
sampledRequestsEnabled = true
},
]
sampledRequestsEnabledForDefaultActions = true
})
}
}
I have tried to follow this thread, but I am still getting the same error:
https://github.com/hashicorp/terraform-provider-aws/issues/17821
Terraform Version:
Terraform v1.1.7
on windows_386
+ provider registry.terraform.io/hashicorp/aws v4.6.0

There is an open issue for this in the Terraform AWS provider.

A workaround for this issue is to remove the 'version' attribute.
AWS has recently introduced versioning for WAF policies managed by Firewall Manager, which is causing this error.
Though a permanent fix is in progress (refer to my earlier post about the open issue), we can remove the attribute to avoid the error.
Another approach is to use the new attribute versionEnabled = true if you want versioning enabled.
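As an illustrative sketch (based only on the notes above, not verified against the FMS API), the first pre-process rule group from the question could be written without the problematic attribute, or with managed versioning enabled instead:
preProcessRuleGroups = [
  {
    excludeRules = []
    managedRuleGroupIdentifier = {
      vendorName = "AWS"
      managedRuleGroupName = "AWSManagedRulesAmazonIpReputationList"
      # "version" removed; alternatively, opt in to managed versioning:
      versionEnabled = true
    }
    overrideAction = {
      type = "COUNT"
    }
    ruleGroupArn = null
    ruleGroupType = "ManagedRuleGroup"
    sampledRequestsEnabled = true
  },
]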


EventBridge Target RoleArn is required for target

I'm using Terraform 1.3.5 and this module previously worked flawlessly, until I renamed the module. Now I am getting this error:
Error: creating EventBridge Target (cleanup-terraform-20221130175229684800000001): ValidationException: RoleArn is required for target arn:aws:events:us-east-1:123456789012:api-destination/services-destination/c187090f-268b-4d9b-b09d-f9b077e0c0cf.
│ status code: 400, request id: 63dc6425-2a94-4f66-b7c2-106b0607d964
│
│ with module.a-eventbridge-trigger.aws_cloudwatch_event_target.api_destination,
│ on ..\a-eventbridge-trigger\main.tf line 61, in resource "aws_cloudwatch_event_target" "api_destination":
│ 61: resource "aws_cloudwatch_event_target" "api_destination" {
Here is the complete content of the main.tf in the module:
# configures api connection
resource "aws_cloudwatch_event_connection" "auth" {
  name               = "services-token"
  description        = "Gets oauth bearer token"
  authorization_type = "OAUTH_CLIENT_CREDENTIALS"
  auth_parameters {
    oauth {
      authorization_endpoint = "${var.vars.apiBaseUrl}${var.vars.auth}"
      http_method            = "POST"
      client_parameters {
        client_id     = var.secretContent.Client_Id
        client_secret = var.secretContent.Client_Secret
      }
      oauth_http_parameters {
        body {
          key             = "grant_type"
          value           = "client_credentials"
          is_value_secret = true
        }
        body {
          key             = "client_id"
          value           = var.secretContent.Client_Id
          is_value_secret = true
        }
        body {
          key             = "client_secret"
          value           = var.secretContent.Client_Secret
          is_value_secret = true
        }
      }
    }
  }
}

# configures api destination
resource "aws_cloudwatch_event_api_destination" "request" {
  name                             = "services-destination"
  description                      = "Requests clean up"
  invocation_endpoint              = "${var.vars.apiBaseUrl}${var.vars.endpoint}"
  http_method                      = "POST"
  invocation_rate_limit_per_second = 20
  connection_arn                   = aws_cloudwatch_event_connection.auth.arn
}

# sets up the scheduling
resource "aws_cloudwatch_event_rule" "every_midnight" {
  name                = "${var.name}-services-cleanup"
  description         = "Fires on every day at midnight of UTC+0"
  schedule_expression = "cron(0 0 * * ? *)"
  is_enabled          = true
}

# tells the scheduler to call the api destination
resource "aws_cloudwatch_event_target" "api_destination" {
  rule = aws_cloudwatch_event_rule.every_midnight.name
  arn  = aws_cloudwatch_event_api_destination.request.arn
}
And the module is called like this from the root module:
module "a-eventbridge-trigger" {
source = "../a-eventbridge-trigger"
name = local.prefixName
resourceTags = local.commonTags
vars = var.vars
secretContent = var.secrets
}
Here is the providers.tf:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.43.0"
    }
  }
  backend "s3" {}
}
What am I missing, and why would it stop working suddenly?
I have run a complete destroy and a fresh apply, but I still get this error.
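For context, EventBridge requires an IAM role on the target when the target is an API destination, which is what the ValidationException is complaining about. A minimal sketch of what that might look like follows; the role name and policy wiring are illustrative assumptions, not part of the original module:
# Hypothetical role that EventBridge can assume to invoke the API destination
resource "aws_iam_role" "eventbridge_invoke" {
  name = "eventbridge-invoke-api-destination"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "events.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "invoke_api_destination" {
  role = aws_iam_role.eventbridge_invoke.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "events:InvokeApiDestination"
      Resource = aws_cloudwatch_event_api_destination.request.arn
    }]
  })
}

# The existing target, extended with role_arn
resource "aws_cloudwatch_event_target" "api_destination" {
  rule     = aws_cloudwatch_event_rule.every_midnight.name
  arn      = aws_cloudwatch_event_api_destination.request.arn
  role_arn = aws_iam_role.eventbridge_invoke.arn
}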

Terraform: get account id for provider + for_each + account module

I'm trying to create multiple AWS accounts in an Organization, each containing resources.
The resources should be owned by the created accounts.
For that I created a module for the accounts:
resource "aws_organizations_account" "this" {
name = var.customer
email = var.email
parent_id = var.parent_id
role_name = "OrganizationAccountAccessRole"
provider = aws.src
}
resource "aws_s3_bucket" "this" {
bucket = "exconcept-terraform-state-${var.customer}"
provider = aws.dst
depends_on = [
aws_organizations_account.this
]
}
output "account_id" {
value = aws_organizations_account.this.id
}
output "account_arn" {
value = aws_organizations_account.this.arn
}
My provider file for the module:
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 4.0"
      configuration_aliases = [aws.src, aws.dst]
    }
  }
}
In the root module I'm calling the module like this:
module "account" {
source = "./modules/account"
for_each = var.accounts
customer = each.value["customer"]
email = each.value["email"]
# close_on_deletion = true
parent_id = aws_organizations_organizational_unit.testing.id
providers = {
aws.src = aws.default
aws.dst = aws.customer
}
}
Since the provider configuration comes from the root module and the accounts are created with a for_each map, how can I use the aws.dst provider for the account that is currently being created?
Here is my root provider file:
provider "aws" {
region = "eu-central-1"
profile = "default"
alias = "default"
}
provider "aws" {
assume_role {
role_arn = "arn:aws:iam::${module.account[each.key].account_id}:role/OrganizationAccountAccessRole"
}
alias = "customer"
region = "eu-central-1"
}
With Terraform init I got this error:
Error: Cycle: module.account.aws_s3_bucket_versioning.this, module.account.aws_s3_bucket.this, provider["registry.terraform.io/hashicorp/aws"].customer, module.account.aws_s3_bucket_acl.this, module.account (close)
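For context: a provider block cannot use for_each or each.key, and aws.customer above is configured from module.account[...].account_id while the module itself depends on that provider, which is exactly what produces the cycle. A rough sketch of one way to break it, assuming the customer account ID can be supplied as an input variable (for example after a first apply that only creates the accounts) instead of being read back from the module output:
# Sketch only: the variable name and workflow are assumptions, not from the original setup.
variable "customer_account_id" {
  type = string
}

provider "aws" {
  alias  = "customer"
  region = "eu-central-1"
  assume_role {
    role_arn = "arn:aws:iam::${var.customer_account_id}:role/OrganizationAccountAccessRole"
  }
}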

How do you declare a gcp rate_limit_options block in terraform

I'm trying to create a GCP Cloud Armor rate-limiting "throttle" resource, but I keep getting the error below.
Error: Unsupported block type
│
│ on main.tf line 20, in resource "google_compute_security_policy" "throttle":
│ 172: rate_limit_options {
│
│ Blocks of type "rate_limit_options" are not expected here.
Here is what my resource block looks like:
resource "google_compute_security_policy" "throttle" {
name = "${var.environment_name}-throttle"
description = "rate limits request based on throttle"
rule {
action = "throttle"
preview = true
priority = "1000"
match {
versioned_expr = "SRC_IPS_V1"
config {
src_ip_ranges = ["*"]
}
}
rate_limit_options {
conform_action = "allow"
exceed_action = "deny(429)"
enforce_on_key = "ALL"
rate_limit_threshold {
count = "200"
interval_sec = "300"
}
}
}
}
Here is what my provider block looks like:
provider "google-beta" {
project = var.project[var.environment_name]
region = "us-central1"
}
How do I declare the rate_limit_options block?
This worked for me:
resource "google_compute_security_policy" "throttle" {
name = ${var.environment_name}-throttle"
description = "rate limits"
provider = google-beta
rule {
action = "throttle"
preview = true
priority = "1000"
rate_limit_options {
conform_action = "allow"
exceed_action = "deny(429)"
enforce_on_key = "ALL"
rate_limit_threshold {
count = "200"
interval_sec = "300"
}
}
match {
versioned_expr = "SRC_IPS_V1"
config {
src_ip_ranges = ["*"]
}
}
}
}
The block rate_limit_options is supported by the google-beta provider.
Use this:
provider "google-beta" {
project = "my-project-id"
...
}
Using the google-beta provider

Getting an error while trying to copy data with google_bigquery_data_transfer_config using Terraform

I am trying to set up a BigQuery data transfer configuration using Terraform. I am using my personal GCP account and have Terraform set up on my laptop so that Terraform and GCP can work together.
I am trying the code below in main.tf:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.18.0"
    }
  }
}

provider "google" {
  # Configuration options
  project     = "gcp-project-100"
  region      = "us-central1"
  zone        = "us-central1-a"
  credentials = "keys.json"
}

data "google_project" "project" {}

resource "google_project_iam_member" "permissions" {
  project = data.google_project.project.project_id
  role    = "roles/iam.serviceAccountShortTermTokenMinter"
  member  = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com"
}

resource "google_bigquery_data_transfer_config" "query_config" {
  depends_on             = [google_project_iam_member.permissions]
  display_name           = "my-query"
  location               = "US"
  data_source_id         = "scheduled_query"
  schedule               = "every wednesday 09:30"
  service_account_name   = "service-${data.google_project.project.number}@gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com"
  destination_dataset_id = "practice"
  params = {
    destination_table_name_template = "test_gsod"
    write_disposition               = "WRITE_TRUNCATE"
    query                           = "select station_number , year , month,day, mean_temp,mean_dew_point ,mean_visibility from `bigquery-public-data.samples.gsod`"
  }
}
terraform apply is failing with the details below:
google_bigquery_data_transfer_config.query_config: Creating...
╷
│ Error: Error creating Config: googleapi: Error 403: The caller does not have permission
│
│ with google_bigquery_data_transfer_config.query_config,
│ on main.tf line 27, in resource "google_bigquery_data_transfer_config" "query_config":
│ 27: resource "google_bigquery_data_transfer_config" "query_config" {
Can someone please help me do this setup properly?
The issue is resolved now. I used the piece of code below, along with the BigQuery Admin role for my Terraform service account.
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.18.0"
    }
  }
}

provider "google" {
  # Configuration options
  project     = "gcp-project-100"
  region      = "us-central1"
  zone        = "us-central1-a"
  credentials = "keys.json"
}

resource "google_bigquery_data_transfer_config" "query_config" {
  display_name           = "my-query"
  location               = "US"
  data_source_id         = "scheduled_query"
  schedule               = "every 15 mins"
  destination_dataset_id = "practice"
  params = {
    destination_table_name_template = "test_gsod"
    write_disposition               = "WRITE_TRUNCATE"
    query                           = "select station_number , year , month,day, mean_temp,mean_dew_point ,mean_visibility from `bigquery-public-data.samples.gsod`"
  }
}
Now it is working fine. Thanks.
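For completeness, the BigQuery Admin grant mentioned above (often done once via the console or gcloud rather than in the same Terraform run) might look roughly like this; the service account email is a placeholder, not the poster's actual account:
# Sketch: grant BigQuery Admin to the service account whose key is in keys.json
resource "google_project_iam_member" "terraform_bigquery_admin" {
  project = "gcp-project-100"
  role    = "roles/bigquery.admin"
  member  = "serviceAccount:terraform@gcp-project-100.iam.gserviceaccount.com"
}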

Passing Alias AWS Provider to Child Module Terraform

I am trying to pass two AWS Terraform providers to my child module. I want the default to stay unaliased, because I can't go through and add a provider argument to all of the Terraform resources in the parent module.
Parent Module------------------------------------------
versions.tf
terraform {
  required_version = "~> 1.0"
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "some-org"
    workspaces {
      prefix = "some-state-file"
    }
  }
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 3.0"
      configuration_aliases = [ aws.domain-management ]
    }
  }
}

provider "aws" {
  access_key = var.aws_access_key_id
  secret_key = var.aws_secret_access_key
  region     = var.aws_region
  default_tags {
    tags = {
      Application = var.application_name
      Environment = var.environment
    }
  }
}

provider "aws" {
  alias      = "domain-management"
  region     = var.domain_management_aws_region
  access_key = var.domain_management_aws_access_key_id
  secret_key = var.domain_management_aws_secret_access_key
}
module.tf (calling child module)
module "vanity-cert-test" {
source = "some-source"
fully_qualified_domain_name = "some-domain.com"
alternative_names = ["*.${var.dns_zone.name}"]
application_name = var.application_name
environment = var.environment
service_name = var.service_name
domain_managment_zone_name = "some-domain02.com"
providers = {
aws.domain-management = aws.domain-management
}
}
Child Module-------------------------------------------------------
versions.tf
terraform {
  required_version = "~> 1.0"
  required_providers {
    aws = {
      source               = "hashicorp/aws"
      version              = "~> 3.0"
      confiuration_aliases = [aws.domain-management]
    }
  }
}

provider "aws" {
  alias = domain-management
}
route53.tf
# Create validation Route53 records
resource "aws_route53_record" "vanity_route53_cert_validation" {
  # use domain management secondary aws provider
  provider = aws.domain-management
  for_each = {
    for dvo in aws_acm_certificate.vanity_certificate.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }
  zone_id         = data.aws_route53_zone.vanity_zone.zone_id
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  allow_overwrite = true
}
The use case for this is to have a vanity certificate defined in a separate account from where the DNS validation records for the certificate need to go. Currently, when running this, I get the following error:
terraform plan -var-file=./application.tfvars
╷
│ Warning: Provider aws.domain-management is undefined
│
│ on services/self-service-ticket-portal-app/ssl-certificate.tf line 33, in module "vanity-cert-test":
│ 33: aws.domain-management = aws.domain-management
│
│ Module module.services.module.self-service-ticket-portal-app.module.vanity-cert-test does not declare a provider named aws.domain-management.
│ If you wish to specify a provider configuration for the module, add an entry for aws.domain-management in the required_providers block within the module.
╵
╷
│ Error: missing provider module.services.module.self-service-ticket-portal-app.provider["registry.terraform.io/hashicorp/aws"].domain-management
If your "Parent Module" is the root module, then you can't use configuration_aliases in it. configuration_aliases is only used in child modules:
To declare a configuration alias within a module in order to receive an alternate provider configuration from the parent module, add the configuration_aliases argument to that provider's required_providers entry.
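As a rough sketch of that arrangement (illustrative, not the poster's exact files): keep both provider blocks in the root module but drop configuration_aliases from its required_providers, and declare the alias only in the child module, with configuration_aliases spelled correctly and without a provider "aws" block inside the child:
# Root module versions.tf (sketch): no configuration_aliases here
terraform {
  required_version = "~> 1.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# Child module versions.tf (sketch): declare the alias it expects to receive
terraform {
  required_version = "~> 1.0"
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 3.0"
      configuration_aliases = [aws.domain-management]
    }
  }
}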