I've configured the following certificate using the aws_acm_certificate resource:
provider "aws" {
alias = "virginia"
region = "us-east-1"
}
resource "aws_acm_certificate" "primary" {
domain_name = var.domain_name
validation_method = "DNS"
subject_alternative_names = ["*.${var.domain_name}"]
provider = aws.virginia
lifecycle {
create_before_destroy = true
}
tags = merge(
var.tags,
{
Name = "${var.project}-ACM-certificate",
}
)
}
resource "aws_route53_record" "certificate_validator_record" {
allow_overwrite = true
name = tolist(aws_acm_certificate.primary.domain_validation_options)[0].resource_record_name
records = [tolist(aws_acm_certificate.primary.domain_validation_options)[0].resource_record_value]
type = tolist(aws_acm_certificate.primary.domain_validation_options)[0].resource_record_type
zone_id = aws_route53_zone.primary.zone_id
ttl = 60
}
resource "aws_acm_certificate_validation" "certificate_validator" {
certificate_arn = aws_acm_certificate.primary.arn
validation_record_fqdns = [aws_route53_record.certificate_validator_record.fqdn]
}
As you can see, I need the certificate to validate the configured domain and its subdomains. I then configured CloudFront:
module "cdn" {
source = "terraform-aws-modules/cloudfront/aws"
comment = "CloudFront for caching S3 private and static website"
is_ipv6_enabled = true
price_class = "PriceClass_100"
create_origin_access_identity = true
aliases = [var.frontend_domain_name]
origin_access_identities = {
s3_identity = "S3 dedicated for hosting the frontend"
}
origin = {
s3_identity = {
domain_name = module.s3_bucket.s3_bucket_bucket_regional_domain_name
s3_origin_config = {
origin_access_identity = "s3_identity"
}
}
}
default_cache_behavior = {
target_origin_id = "s3_identity"
viewer_protocol_policy = "redirect-to-https"
default_ttl = 5400
min_ttl = 3600
max_ttl = 7200
allowed_methods = ["GET", "HEAD"]
cached_methods = ["GET", "HEAD"]
compress = true
query_string = true
}
default_root_object = "index.html"
custom_error_response = [
{
error_code = 403
response_code = 404
response_page_path = "/index.html"
},
{
error_code = 404
response_code = 404
response_page_path = "/index.html"
}
]
viewer_certificate = {
acm_certificate_arn = aws_acm_certificate.primary.arn
ssl_support_method = "sni-only"
}
tags = merge(
var.tags,
{
Name = "${var.project}-Cloudfront",
Stack = "frontend"
}
)
}
But when I try to apply this Terraform configuration I get this error:
module.cdn.aws_cloudfront_distribution.this[0]: Still creating... [1m0s elapsed]
╷
│ Error: reading ACM Certificate (arn:aws:acm:us-east-1:***:certificate/ARN_PLACEHOLDER): couldn't find resource
│
│ with aws_acm_certificate_validation.certificate_validator,
│ on acm.tf line 33, in resource "aws_acm_certificate_validation" "certificate_validator":
│ 33: resource "aws_acm_certificate_validation" "certificate_validator" {
│
╵
╷
│ Error: error creating CloudFront Distribution: InvalidViewerCertificate: The certificate that is attached to your distribution doesn't cover the alternate domain name (CNAME) that you're trying to add. For more details, see: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html#alternate-domain-names-requirements
│ status code: 400, request id: blabla
│
│ with module.cdn.aws_cloudfront_distribution.this[0],
│ on .terraform/modules/cdn/main.tf line 15, in resource "aws_cloudfront_distribution" "this":
│ 15: resource "aws_cloudfront_distribution" "this" {
│
╵
Releasing state lock. This may take a few moments...
If I go to my AWS account and check the certificate, it shows as issued in us-east-1.
So if the certificate is valid and placed in us-east-1, where am I wrong?
I solved the issue with:
resource "aws_acm_certificate_validation" "certificate_validator" {
provider = aws.virginia
certificate_arn = aws_acm_certificate.primary.arn
validation_record_fqdns = [aws_route53_record.certificate_validator_record.fqdn]
}
The problem was that my certificate validation resource was created in my default region rather than in us-east-1, the region where the certificate itself lives.
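One further refinement worth noting, as a sketch based on the configuration above (not something the module requires): pointing the CloudFront viewer certificate at the validation resource instead of the certificate itself makes Terraform wait for DNS validation to finish before it creates the distribution.
viewer_certificate = {
  # aws_acm_certificate_validation re-exports the certificate ARN, but only
  # once validation has actually succeeded, so the distribution is created
  # against an issued certificate
  acm_certificate_arn = aws_acm_certificate_validation.certificate_validator.certificate_arn
  ssl_support_method  = "sni-only"
}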
Related
I am trying to generate a certificate and validate it via DNS... everything seems to work until the last step, when I use the resource "aws_acm_certificate_validation".
my code is the following:
# Create Certificate
resource "aws_acm_certificate" "ic_cert" {
provider = aws.us-east-1
domain_name = aws_s3_bucket.ic_bucket_main.bucket
subject_alternative_names = [aws_s3_bucket.ic_bucket_redirect.bucket]
validation_method = "DNS"
tags = {
Billing = "company X"
}
lifecycle {
create_before_destroy = true
}
}
# Validate Certificate via DNS
# get zone_id
data "aws_route53_zone" "selected" {
provider = aws.us-east-1
name = aws_s3_bucket.ic_bucket_main.bucket
}
# Generate DNS Records
resource "aws_route53_record" "ic_DNS_validation" {
provider = aws.us-east-1
for_each = {
for dvo in aws_acm_certificate.ic_cert.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
zone_id = data.aws_route53_zone.selected.zone_id
}
}
allow_overwrite = true
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = each.value.zone_id
}
# Confirm certificate creation
resource "aws_acm_certificate_validation" "ic_cert_validation" {
certificate_arn = aws_acm_certificate.ic_cert.arn
#validation_record_fqdns = [for record in aws_route53_record.ic_DNS_validation : record.fqdn]
#validation_record_fqdns = [aws_route53_record.ic_DNS_validation.fqdn]
validation_record_fqdns = [for record in aws_route53_record.ic_DNS_validation : record.fqdn]
}
and I get the following error:
Error: reading ACM Certificate (arn:aws:acm:us-east-1:xxxxxxxxxxxxxxxxxxxxx8:certificate/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx): couldn't find resource
│ with aws_acm_certificate_validation.ic_cert_validation,
│ on certificates.tf line 45, in resource "aws_acm_certificate_validation" "ic_cert_validation":
│ 45: resource "aws_acm_certificate_validation" "ic_cert_validation" {
Would anybody spot what the issue is?
Since ACM is a regional service and the certificate was created using provider = aws.us-east-1, the resource used for certificate validation must use the same provider configuration (the certificate already exists in that region):
resource "aws_acm_certificate_validation" "ic_cert_validation" {
provider = aws.us-east-1
certificate_arn = aws_acm_certificate.ic_cert.arn
#validation_record_fqdns = [for record in aws_route53_record.ic_DNS_validation : record.fqdn]
#validation_record_fqdns = [aws_route53_record.ic_DNS_validation.fqdn]
validation_record_fqdns = [for record in aws_route53_record.ic_DNS_validation : record.fqdn]
}
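For reference, the aliased provider configuration this assumes would look roughly like the following sketch (the default region below is only a placeholder; Route 53 itself is global, so the validation records don't care which region the provider uses):
provider "aws" {
  region = "eu-west-1" # your default region (placeholder)
}

provider "aws" {
  alias  = "us-east-1"
  region = "us-east-1"
}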
I'm trying to create a GCP Cloud Armor rate-limiting "throttle" resource, but I keep getting the error below.
Error: Unsupported block type
│
│ on main.tf line 20, in resource "google_compute_security_policy" "throttle":
│ 172: rate_limit_options {
│
│ Blocks of type "rate_limit_options" are not expected here.
Here is what my resource block looks like:
resource "google_compute_security_policy" "throttle" {
name = "${var.environment_name}-throttle"
description = "rate limits request based on throttle"
rule {
action = "throttle"
preview = true
priority = "1000"
match {
versioned_expr = "SRC_IPS_V1"
config {
src_ip_ranges = ["*"]
}
}
rate_limit_options {
conform_action = "allow"
exceed_action = "deny(429)"
enforce_on_key = "ALL"
rate_limit_threshold {
count = "200"
interval_sec = "300"
}
}
}
}
Here is what my provider block looks like:
provider "google-beta" {
project = var.project[var.environment_name]
region = "us-central1"
}
How do I declare the rate_limit_options block?
This worked for me:
resource "google_compute_security_policy" "throttle" {
name = "${var.environment_name}-throttle"
description = "rate limits"
provider = google-beta
rule {
action = "throttle"
preview = true
priority = "1000"
rate_limit_options {
conform_action = "allow"
exceed_action = "deny(429)"
enforce_on_key = "ALL"
rate_limit_threshold {
count = "200"
interval_sec = "300"
}
}
match {
versioned_expr = "SRC_IPS_V1"
config {
src_ip_ranges = ["*"]
}
}
}
}
The block rate_limit_options is supported by the google-beta provider.
Use this:
provider "google-beta" {
project = "my-project-id"
...
}
See the Terraform guide "Using the google-beta provider" for details.
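Pulling the pieces together, a minimal sketch of the wiring (the project ID is a placeholder): pin google-beta in required_providers, configure it, and point the resource at it with the provider meta-argument:
terraform {
  required_providers {
    google-beta = {
      source = "hashicorp/google-beta"
    }
  }
}

provider "google-beta" {
  project = "my-project-id" # placeholder
  region  = "us-central1"
}

resource "google_compute_security_policy" "throttle" {
  provider    = google-beta
  name        = "throttle"
  description = "rate limits"
  # rule { ... } with rate_limit_options as shown in the working snippet above
}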
I am getting the error below while creating a Firewall Manager policy for a CloudFront distribution.
The documentation provides little detail on how to deploy a policy for a CloudFront distribution, which is a global resource.
This is the error I get when executing my code:
aws_fms_policy.xxxx: Creating...
╷
│ Error: error creating FMS Policy: InternalErrorException:
│
│ with aws_fms_policy.xxxx,
│ on r_wafruleset.tf line 1, in resource "aws_fms_policy" "xxxx":
│ 1: resource "aws_fms_policy" "xxxx" {
│
╵
Releasing state lock. This may take a few moments...
main.tf looks like this with provider information:
provider "aws" {
region = "ap-southeast-2"
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/yyyy"
}
}
provider "aws" {
alias = "us_east_1"
region = "us-east-1"
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/yyyy"
}
}
r_fms.tf looks like this:
resource "aws_fms_policy" "xxxx" {
name = "xxxx"
exclude_resource_tags = true
resource_tags = var.exclude_tags
remediation_enabled = true
provider = aws.us_east_1
include_map {
account = ["123123123"]
}
resource_type = "AWS::CloudFront::Distribution"
security_service_policy_data {
type = "WAFV2"
managed_service_data = jsonencode(
{
type = "WAFV2"
defaultAction = {
type = "ALLOW"
}
overrideCustomerWebACLAssociation = false
postProcessRuleGroups = []
preProcessRuleGroups = [
{
excludeRules = []
managedRuleGroupIdentifier = {
vendorName = "AWS"
managedRuleGroupName = "AWSManagedRulesAmazonIpReputationList"
version = true
}
overrideAction = {
type = "COUNT"
}
ruleGroupArn = null
ruleGroupType = "ManagedRuleGroup"
sampledRequestsEnabled = true
},
{
excludeRules = []
managedRuleGroupIdentifier = {
managedRuleGroupName = "AWSManagedRulesWindowsRuleSet"
vendorName = "AWS"
version = null
}
overrideAction = {
type = "COUNT"
}
ruleGroupArn = null
ruleGroupType = "ManagedRuleGroup"
sampledRequestsEnabled = true
},
]
sampledRequestsEnabledForDefaultActions = true
})
}
}
I have tried to follow the thread below, but I am still getting the error shown above:
https://github.com/hashicorp/terraform-provider-aws/issues/17821
Terraform Version:
Terraform v1.1.7
on windows_386
+ provider registry.terraform.io/hashicorp/aws v4.6.0
There is an open issue for this in the Terraform AWS provider.
A workaround for this issue is to remove the 'version' attribute.
AWS has recently introduced versioning for WAF policies managed by Firewall Manager, which is what causes this confusing error.
Although a permanent fix is in progress (refer to my earlier post), removing the attribute avoids the error for now.
Another approach is to use the new attribute versionEnabled = true if you want versioning enabled.
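Applied to the managed_service_data in the question, each entry in preProcessRuleGroups would then look roughly like this (a fragment of the jsonencode(...) argument above; versionEnabled is only needed if you opt in to versioning):
preProcessRuleGroups = [
  {
    excludeRules = []
    managedRuleGroupIdentifier = {
      vendorName           = "AWS"
      managedRuleGroupName = "AWSManagedRulesAmazonIpReputationList"
      # no 'version' key here; optionally add versionEnabled = true instead
    }
    overrideAction = {
      type = "COUNT"
    }
    ruleGroupArn           = null
    ruleGroupType          = "ManagedRuleGroup"
    sampledRequestsEnabled = true
  },
]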
main.tf
module "vpc" {
source = "../modules/aws/vpc"
env_prefix = "prod"
environment = "production"
env_name = "test"
vpc_cidr = "10.1.0.0/16"
public_subnet_cidrs = {
test-prod-nat = {
subnet = "10.1.15.0/24"
name = "test-prod-nat"
az = "ap-northeast-1a"
}
}
}
nat.tf
resource "aws_nat_gateway" "private" {
for_each = var.public_subnet_cidrs
allocation_id = aws_eip.nat_gateway.id
subnet_id = aws_subnet.public[each.key].id
tags = merge(
local.tags,
{
Name = format("%s_%s_%s", var.env_prefix, var.env_name, "nat-gateway")
}
)
lifecycle {
prevent_destroy = false
}
}
route_table.tf
/**
* for private subnet
*/
resource "aws_route_table" "private" {
vpc_id = aws_vpc.dandori.id
tags = merge(
local.tags,
{
Name = format("%s_%s", var.env_prefix, var.env_name)
}
)
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = [
for v in aws_nat_gateway.private : v.id
]
}
lifecycle {
prevent_destroy = false
}
}
When I run terraform plan after creating the above tf files, I get the following error:
【Error】
╷
│ Error: Incorrect attribute value type
│
│ on ../modules/aws/vpc/route_table.tf line 55, in resource "aws_route_table" "private":
│ 55: nat_gateway_id = [
│ 56: for v in aws_nat_gateway.private : v.id
│ 57: ]
│ ├────────────────
│ │ aws_nat_gateway.private is object with 1 attribute "test-prod-nat"
│
│ Inappropriate value for attribute "nat_gateway_id": string required.
route_table.tf and nat.tf are files inside the module.
I'm trying to set nat_gateway_id in route_table.tf using a for expression, but as the error message shows, I can't set it correctly.
What should I do to solve this problem?
Please give me some advice.
If you want to create a route table for each aws_nat_gateway.private, then it should be:
resource "aws_route_table" "private" {
for_each = aws_nat_gateway.private
vpc_id = aws_vpc.dandori.id
tags = merge(
local.tags,
{
Name = format("%s_%s", var.env_prefix, var.env_name)
}
)
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = each.value["id"]
}
lifecycle {
prevent_destroy = false
}
}
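If, on the other hand, you only ever create a single NAT gateway and really want one shared route table, a sketch of that variant (tags and lifecycle omitted for brevity) would be:
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.dandori.id

  route {
    cidr_block = "0.0.0.0/0"
    # one() fails at plan time unless the map holds exactly one NAT gateway,
    # which makes the single-gateway assumption explicit
    nat_gateway_id = one(values(aws_nat_gateway.private)).id
  }
}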
Following this GitHub repo, the user pool domain farm_users is created, yet terraform apply returns this error. I tried destroy. I also tried deleting the user pool domain in the AWS console and repeating apply.
╷
│ Error: Error creating Cognito User Pool Domain: InvalidParameterException: Domain already associated with another user pool.
│
│ with module.api.aws_cognito_user_pool_domain.farm_users_pool_domain,
│ on modules/api/main.tf line 55, in resource "aws_cognito_user_pool_domain" "farm_users_pool_domain":
│ 55: resource "aws_cognito_user_pool_domain" "farm_users_pool_domain" {
│
After running apply:
$ aws cognito-idp describe-user-pool-domain --domain "fupdomain"
An error occurred (ResourceNotFoundException) when calling the DescribeUserPoolDomain operation: User pool domain fupdomain does not exist in this account.
main.tf
provider "aws" {
version = "~> 2.31"
region = var.region
}
data "aws_caller_identity" "current" {}
resource "random_string" "build_id" {
length = 16
special = false
upper = false
number = false
}
module "network" {
source = "./modules/network"
availability_zone = var.availability_zone
vpc_cidr = var.vpc_cidr
}
module "node_iam_role" {
source = "./modules/node_iam_role"
}
resource "aws_s3_bucket" "render_bucket" {
bucket = "${random_string.build_id.result}-render-data"
acl = "private"
}
# Stores server-side code bundles. i.e. Worker node and lambda layer
resource "aws_s3_bucket" "code_bundles_bucket" {
bucket = "${random_string.build_id.result}-code-bundles"
acl = "private"
}
# Stores and serves javascript client
resource "aws_s3_bucket" "client_bucket" {
bucket = "${random_string.build_id.result}-client-bucket"
acl = "public-read"
website {
index_document = "index.html"
error_document = "error.html"
}
}
# Code bundles
data "archive_file" "worker_node_code" {
type = "zip"
source_dir = "${path.root}/src/farm_worker"
output_path = "${path.root}/src/bundles/farm_worker.zip"
}
resource "aws_s3_bucket_object" "worker_code_bundle" {
bucket = aws_s3_bucket.code_bundles_bucket.id
key = "farm_worker.zip"
source = "${path.root}/src/bundles/farm_worker.zip"
depends_on = [data.archive_file.worker_node_code]
}
# Security groups for the worker nodes
resource "aws_security_group" "ssh" {
name = "allow_ssh"
vpc_id = module.network.vpc_id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "nfs" {
name = "NFS"
vpc_id = module.network.vpc_id
ingress {
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
# Build queues for project init and frame rendering
resource "aws_sqs_queue" "frame_render_deadletter" {
name = "frame_render_deadletter_queue"
}
resource "aws_sqs_queue" "frame_render_queue" {
name = "frame_render_queue"
visibility_timeout_seconds = 7000
redrive_policy = "{\"deadLetterTargetArn\":\"${aws_sqs_queue.frame_render_deadletter.arn}\",\"maxReceiveCount\":5}"
}
resource "aws_sqs_queue" "project_init_queue" {
name = "project_init_queue"
visibility_timeout_seconds = 7000
}
# EFS for shared storage during baking and rendering
resource "aws_efs_file_system" "shared_render_vol" {
tags = {
Name = "SharedRenderEFS"
}
}
resource "aws_efs_mount_target" "shared_mount" {
file_system_id = aws_efs_file_system.shared_render_vol.id
subnet_id = module.network.subnet_id
security_groups = [aws_security_group.nfs.id]
}
module "worker_node" {
source = "./modules/worker_node"
key_name = var.node_key_name
image_id = var.blender_node_image_id
vpc_security_group_ids = [aws_security_group.ssh.id, aws_security_group.nfs.id]
iam_instance_profile = module.node_iam_role.worker_iam_profile_name
build_id = random_string.build_id.result
region = var.region
render_bucket = aws_s3_bucket.render_bucket.id
code_bucket = aws_s3_bucket.code_bundles_bucket.id
frame_queue_url = aws_sqs_queue.frame_render_queue.id
project_init_queue_url = aws_sqs_queue.project_init_queue.id
shared_file_system_id = aws_efs_file_system.shared_render_vol.id
instance_types = var.instance_types
asg_name = var.worker_asg_name
asg_subnets = [module.network.subnet_id]
asg_max_workers = var.worker_node_max_count
asg_min_workers = 0
cloudwatch_namespace = var.cloudwatch_namespace
}
module "bpi_emitter" {
source = "./modules/bpi_emitter"
cloudwatch_namespace = var.cloudwatch_namespace
asg_name = module.worker_node.asg_name
frame_queue = aws_sqs_queue.frame_render_queue.id
project_init_queue = aws_sqs_queue.project_init_queue.id
frame_queue_bpi = var.frame_queue_bpi
project_init_queue_bpi = var.project_init_queue_bpi
}
# module "bucket_upload_listener" {
# source = "./modules/bucket_upload_listener"
# bucket_name = aws_s3_bucket.render_bucket.id
# bucket_arn = aws_s3_bucket.render_bucket.arn
# project_init_queue = aws_sqs_queue.project_init_queue.id
# }
resource "aws_dynamodb_table" "projects_table" {
name = "FarmProjects"
billing_mode = "PAY_PER_REQUEST"
hash_key = "ProjectId"
attribute {
name = "ProjectId"
type = "S"
}
}
resource "aws_dynamodb_table" "application_settings" {
name = "FarmApplicationSettings"
billing_mode = "PAY_PER_REQUEST"
hash_key = "SettingName"
attribute {
name = "SettingName"
type = "S"
}
}
module "api" {
source = "./modules/api"
region = var.region
bucket = aws_s3_bucket.render_bucket.id
frame_queue = aws_sqs_queue.frame_render_queue.id
project_init_queue = aws_sqs_queue.project_init_queue.id
client_endpoint = "https://${aws_s3_bucket.client_bucket.website_endpoint}"
dynamo_tables = {
projects = aws_dynamodb_table.projects_table.name,
application_settings = aws_dynamodb_table.application_settings.name
}
}
The domain name has to be globally unique. This means that if the same domain is already used in another account, you can't use it. Try, for example:
aws cognito-idp create-user-pool-domain --domain fupdomain --user-pool-id <pool-id>
The output will be:
An error occurred (InvalidParameterException) when calling the
CreateUserPoolDomain operation: Domain already associated with another
user pool.
This makes sense, as the domain name is used to build a URL of the form:
https://{domain}.auth.us-east-1.amazoncognito.com
This is the endpoint users are authenticated against.
You need to edit the template and pick another name.
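Since the root main.tf above already creates random_string.build_id, one option is to derive a unique prefix from it. This is only a sketch: the domain resource actually lives in modules/api, so the value would need to be passed in as a variable, and the user pool reference below is an assumed name.
variable "build_id" {} # assumed to be wired through from random_string.build_id.result

resource "aws_cognito_user_pool_domain" "farm_users_pool_domain" {
  domain       = "fupdomain-${var.build_id}"          # unique per deployment
  user_pool_id = aws_cognito_user_pool.farm_users.id  # assumed pool resource name
}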