Terraform WAF Web ACL Resource is useless?

Terraform provides a WAF Web ACL resource. Can it be attached to anything using Terraform, such as an ALB, or is it useless?

With the release of version 1.12 of the AWS provider, it is now possible to directly create regional WAF resources for use with load balancers.
You can now create any of aws_wafregional_byte_match_set, aws_wafregional_ipset, aws_wafregional_size_constraint_set, aws_wafregional_sql_injection_match_set or aws_wafregional_xss_match_set, link these to an aws_wafregional_rule as predicates, and then in turn add the WAF rules to an aws_wafregional_web_acl. Finally, you can attach the regional WAF to a load balancer with the aws_wafregional_web_acl_association resource.
The Regional WAF Web ACL association resource docs give a helpful example of how they all link together:
resource "aws_wafregional_ipset" "ipset" {
name = "tfIPSet"
ip_set_descriptor {
type = "IPV4"
value = "192.0.7.0/24"
}
}
resource "aws_wafregional_rule" "foo" {
name = "tfWAFRule"
metric_name = "tfWAFRule"
predicate {
data_id = "${aws_wafregional_ipset.ipset.id}"
negated = false
type = "IPMatch"
}
}
resource "aws_wafregional_web_acl" "foo" {
name = "foo"
metric_name = "foo"
default_action {
type = "ALLOW"
}
rule {
action {
type = "BLOCK"
}
priority = 1
rule_id = "${aws_wafregional_rule.foo.id}"
}
}
resource "aws_vpc" "foo" {
cidr_block = "10.1.0.0/16"
}
data "aws_availability_zones" "available" {}
resource "aws_subnet" "foo" {
vpc_id = "${aws_vpc.foo.id}"
cidr_block = "10.1.1.0/24"
availability_zone = "${data.aws_availability_zones.available.names[0]}"
}
resource "aws_subnet" "bar" {
vpc_id = "${aws_vpc.foo.id}"
cidr_block = "10.1.2.0/24"
availability_zone = "${data.aws_availability_zones.available.names[1]}"
}
resource "aws_alb" "foo" {
internal = true
subnets = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"]
}
resource "aws_wafregional_web_acl_association" "foo" {
resource_arn = "${aws_alb.foo.arn}"
web_acl_id = "${aws_wafregional_web_acl.foo.id}"
}
Original post:
The regional WAF resources have been caught up in a mixture of review and abandoned pull requests, but are scheduled for the 1.12.0 release of the AWS provider.
Currently only the byte match set and IP address set resources are available, so they're not much use without the rule, ACL and association resources to actually do things with.
Until then you could use CloudFormation via Terraform's escape-hatch aws_cloudformation_stack resource, with something like this:
resource "aws_lb" "load_balancer" {
...
}
resource "aws_cloudformation_stack" "waf" {
name = "waf-example"
parameters {
ALBArn = "${aws_lb.load_balancer.arn}"
}
template_body = <<STACK
Parameters:
ALBArn:
Type: String
Resources:
WAF:
Type: AWS::WAFRegional::WebACL
Properties:
Name: WAF-Example
DefaultAction:
Type: BLOCK
MetricName: WafExample
Rules:
- Action:
Type: ALLOW
Priority: 2
RuleId:
Ref: WhitelistRule
WhitelistRule:
Type: AWS::WAFRegional::Rule
Properties:
Name: WAF-Example-Whitelist
MetricName: WafExampleWhiteList
Predicates:
- DataId:
Ref: ExternalAPIURI
Negated: false
Type: ByteMatch
ExternalAPIURI:
Type: AWS::WAFRegional::ByteMatchSet
Properties:
Name: WAF-Example-StringMatch
ByteMatchTuples:
- FieldToMatch:
Type: URI
PositionalConstraint: STARTS_WITH
TargetString: /public/
TextTransformation: NONE
WAFALBattachment:
Type: AWS::WAFRegional::WebACLAssociation
Properties:
ResourceArn:
Ref: ALBArn
WebACLId:
Ref: WAF
STACK
}

Related

How can I set a Route53 record as an alias for EKS load balancer?

I set up an EKS cluster using Terraform. I am trying to set a Route53 record to map my domain name to the load balancer of my cluster.
I set up my EKS cluster like this:
resource "aws_eks_cluster" "main" {
name = "${var.project}-cluster"
role_arn = aws_iam_role.cluster.arn
version = "1.24"
vpc_config {
subnet_ids = flatten([aws_subnet.public[*].id, aws_subnet.private[*].id])
endpoint_private_access = true
endpoint_public_access = true
public_access_cidrs = ["0.0.0.0/0"]
}
tags = merge(
var.tags,
{
Stack = "backend"
Name = "${var.project}-eks-cluster",
}
)
depends_on = [
aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy
]
}
And I have created the following k8s service:
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: dashboard-backend
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: '$PORT'
      targetPort: '$PORT'
As far as I know, once I deploy a k8s service, AWS automatically generates an ALB resource for my service. So, I set up these Route53 resources:
resource "aws_route53_zone" "primary" {
name = var.domain_name
tags = merge(
var.tags,
{
Name = "${var.project}-Route53-zone",
}
)
}
data "kubernetes_service" "backend" {
metadata {
name = "backend-service"
}
}
resource "aws_route53_record" "backend_record" {
zone_id = aws_route53_zone.primary.zone_id
name = "www.api"
type = "A"
ttl = "300"
alias {
name = data.kubernetes_service.backend.status.0.load_balancer.0.ingress.0.hostname
zone_id = ??????
evaluate_target_health = true
}
}
I did get the load balancer host name using data.kubernetes_service.backend.status.0.load_balancer.0.ingress.0.hostname, but how can I get its zone ID to use in the zone_id key?
You can get the ELB hosted zone ID using the aws_elb_hosted_zone_id data source, as it only depends on the region the ELB was created in. Technically, you could also hardcode the value, because these are static values on a per-region basis.
Official AWS Documentation on Elastic Load Balancing endpoints
resource "aws_route53_zone" "primary" {
name = var.domain_name
tags = merge(
var.tags,
{
Name = "${var.project}-Route53-zone",
}
)
}
data "kubernetes_service" "backend" {
metadata {
name = "backend-service"
}
}
## Add data source ##
data "aws_elb_hosted_zone_id" "this" {}
### This will use your aws provider-level region config otherwise mention explicitly.
resource "aws_route53_record" "backend_record" {
zone_id = aws_route53_zone.primary.zone_id
name = "www.api"
type = "A"
ttl = "300"
alias {
name = data.kubernetes_service.backend.status.0.load_balancer.0.ingress.0.hostname
zone_id = data.aws_elb_hosted_zone_id.this.id ## Updated ##
evaluate_target_health = true
}
}
Outside the scope of your question: even though the above should work, I would also suggest looking into external-dns for managing DNS with EKS.
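For illustration, here is a minimal sketch of how external-dns can pick up a record from a Service annotation, expressed with the Terraform kubernetes provider. It assumes external-dns is already deployed in the cluster with permissions for your Route53 zone; the hostname, selector and ports are placeholders.
resource "kubernetes_service" "backend" {
  metadata {
    name = "backend-service"

    annotations = {
      # external-dns watches this annotation and creates/updates the Route53 record
      # pointing at the Service's load balancer.
      "external-dns.alpha.kubernetes.io/hostname" = "www.api.example.com"
    }
  }

  spec {
    selector = {
      app = "dashboard-backend"
    }

    type = "LoadBalancer"

    port {
      protocol    = "TCP"
      port        = 80
      target_port = 8080
    }
  }
}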

AWS EKS access denied "aws-auth ConfigMap in your cluster is invalid" error when creating an EKS managed node group using Terraform

I have an EKS cluster with Fargate compute capacity. Now I am adding an EKS node group as additional compute capacity, and I have written a Terraform script that creates the node group and a launch template for it.
When I run the Terraform script using the EKS cluster owner role, I get the following error message:
Error: error waiting for EKS Node Group to create: unexpected state 'CREATE_FAILED', wanted target 'ACTIVE'. last error: 1 error occurred:
* : AccessDenied: The aws-auth ConfigMap in your cluster is invalid.
Terraform code:
#--- setup launch template for eks nodegroups ---#
resource "aws_launch_template" "eks_launch_template" {
  name     = "launch-template"
  key_name = var.ssh_key_name

  block_device_mappings {
    device_name = "/dev/xvda"

    ebs {
      volume_size = var.disk_size
    }
  }

  tag_specifications {
    resource_type = "instance"
    tags          = merge(var.tags, { Name = "${local.name_prefix}-eks-node" })
  }

  tag_specifications {
    resource_type = "volume"
    tags          = var.tags
  }

  tag_specifications {
    resource_type = "network-interface"
    tags          = var.tags
  }

  tag_specifications {
    resource_type = "spot-instances-request"
    tags          = var.tags
  }

  vpc_security_group_ids = [aws_security_group.eks_worker_node_sg.id]
}

#--- setup eks ondemand nodegroup ---#
resource "aws_eks_node_group" "eks_on_demand" {
  cluster_name    = aws_eks_cluster.eks_cluster.name
  node_group_name = "${local.name_prefix}-group"
  node_role_arn   = aws_iam_role.eks_ec2_role.arn
  subnet_ids      = var.private_subnets
  instance_types  = var.nodegroup_instance_types

  launch_template {
    id      = aws_launch_template.eks_launch_template.id
    version = aws_launch_template.eks_launch_template.latest_version
  }

  scaling_config {
    desired_size = var.desire_size
    max_size     = var.max_size
    min_size     = var.min_size
  }

  update_config {
    max_unavailable = 1
  }

  tags = var.tags

  lifecycle {
    ignore_changes = [scaling_config[0].desired_size]
  }
}

#--- eks ec2 node iam role ---#
resource "aws_iam_role" "eks_ec2_role" {
  name = "${local.name_prefix}-eks-node-role"

  assume_role_policy = jsonencode({
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
    Version = "2012-10-17"
  })
}

#--- attach workernode policy to ec2 ---#
resource "aws_iam_role_policy_attachment" "eks_ec2_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_ec2_role.name
}

#--- attach cni policy to ec2 ---#
resource "aws_iam_role_policy_attachment" "eks_ec2_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_ec2_role.name
}

#--- attach ecr read access policy to ec2 ---#
resource "aws_iam_role_policy_attachment" "eks_ec2_ecr_read_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_ec2_role.name
}
The issue was in my aws-auth ConfigMap. It looks like EKS validates the ConfigMap: if your role mappings share a common username, EKS will throw this error. For example:
- groups:
    - Dev-viewer
  rolearn: arn:aws:iam::<>:role/<>
  username: {{SessionName}}
- groups:
    - Dev-manager
  rolearn: arn:aws:iam::<>:role/<>
  username: {{SessionName}}
- groups:
    - Dev-admin
  rolearn: arn:aws:iam::<>:role/<>
  username: {{SessionName}}
I changed the username of each role mapping:
- groups:
    - Dev-viewer
  rolearn: arn:aws:iam::<>:role/<>
  username: view-{{SessionName}}
- groups:
    - Dev-manager
  rolearn: arn:aws:iam::<>:role/<>
  username: manager-{{SessionName}}
- groups:
    - Dev-admin
  rolearn: arn:aws:iam::<>:role/<>
  username: admin-{{SessionName}}
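If you manage aws-auth from Terraform with the kubernetes provider, one way to keep the usernames distinct from the start is a sketch like the one below. It assumes a reasonably recent kubernetes provider (which offers the kubernetes_config_map_v1_data resource); the role ARNs and group names are placeholders.
resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  force = true # take ownership of mapRoles even if something else wrote it first

  data = {
    # Each mapping gets its own username prefix, so no two roles share a username.
    mapRoles = yamlencode([
      {
        rolearn  = "arn:aws:iam::<>:role/<>"
        username = "view-{{SessionName}}"
        groups   = ["Dev-viewer"]
      },
      {
        rolearn  = "arn:aws:iam::<>:role/<>"
        username = "manager-{{SessionName}}"
        groups   = ["Dev-manager"]
      },
      {
        rolearn  = "arn:aws:iam::<>:role/<>"
        username = "admin-{{SessionName}}"
        groups   = ["Dev-admin"]
      },
    ])
  }
}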

Is it possible to connect a Kubernetes ingress ALB with a Terraform-managed ALB/Route 53 while serving requests on the same DNS name?

I have a setup in AWS with different Lambdas, all managed by Terraform. Currently, requests to paths like https://example.com/home or https://example.com/blog are forwarded to different AWS Lambdas using a Route53 record and an ALB with different rules. Here is an example for the /home/ path:
resource "aws_route53_record" "dns-record" {
name = "example.com"
zone_id = var.zone_id
type = "CNAME"
ttl = "300"
records = [aws_lb.alb.dns_name]
}
resource "aws_lb" "alb" {
name = "my-alb..."
........
}
resource "aws_lb_listener" "alb-in-443" {
load_balancer_arn = aws_lb.alb.arn
port = "443"
protocol = "HTTPS"
........
default_action {
type = "fixed-response"
fixed_response {
content_type = "text/plain"
message_body = "Fixed response content"
status_code = "200"
}
}
resource "aws_lb_listener_rule" "home-in-443" {
listener_arn = aws_lb_listener.alb-in-443.arn
priority = 100
action {
type = "forward"
target_group_arn = aws_lb_target_group.home-alb-tg.arn
}
condition {
path_pattern {
values = ["/home/*"]
}
}
}
resource "aws_lb_target_group" "home-alb-tg" {
name = "home-alb-tg-lambda"
target_type = "lambda"
vpc_id = data.aws_vpc.vpc.id
}
resource "aws_lambda_permission" "home-lb-lambda-permission" {
......
}
resource "aws_lb_target_group_attachment" "home-alb-tg-attachment" {
.....
}
So far everything works fine, but now I need to add an AWS EKS cluster and forward all requests to https://example.com to EKS, while continuing to serve /home or /blog with AWS Lambda. I can create another ALB with the AWS Load Balancer Controller and then forward requests using an Ingress resource in front of my service, with a config like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-service
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: my-app
spec:
  rules:
    - host: "example.com"
      http:
        paths:
          - path: /*
            backend:
              serviceName: my-service
              servicePort: 80
But this ALB will be detached from Route53, and furthermore its path will conflict with the path defined in the Terraform load balancer rule described above. On the other hand, I could define the conditions for all paths (/home, /blog, etc.) in the Ingress config above, but then I wouldn't be able to bind them to Lambdas.
So, the question is: is such a setup, serving the main URL from EKS and different paths with Lambdas, even possible? Maybe this can be done with AWS CloudFront somehow?
Well, it seems that this is technically possible with CloudFront. I created two different origins: one points to the DNS name of the ALB created by Kubernetes, and the other points to the DNS name of the ALB created with Terraform.
Here is the config:
data "aws_lb" "eks-lb" {
name = "k8s-default-appservi-3f93453" -- we need to get alb name created in k8s - this doesn't look good but we can't specify alb name right now
}
resource "aws_cloudfront_distribution" "my-distribution" {
enabled = true
is_ipv6_enabled = true
aliases = "example.com"
origin {
domain_name = data.aws_lb.eks-lb.dns_name - use DNS name from eks alb here
origin_id = "my-app"
custom_origin_config {
http_port = 80
https_port = 443
origin_protocol_policy = "https-only"
origin_ssl_protocols = ["TLSv1.2"]
}
}
origin {
domain_name = aws_lb.alb.dns_name - use DNS name from "alb" lb created in terraform above
origin_id = "home"
custom_origin_config {
http_port = 80
https_port = 443
origin_protocol_policy = "https-only"
origin_ssl_protocols = ["TLSv1.2"]
}
}
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "my-app"
forwarded_values {
headers = [ "Host" , "Origin"]
query_string = true
cookies {
forward = "all"
}
}
min_ttl = 0
default_ttl = 0
max_ttl = 0
viewer_protocol_policy = "redirect-to-https"
}
ordered_cache_behavior {
path_pattern = "/home/*"
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "home"
forwarded_values {
headers = [ "Host", "Origin" ]
query_string = true
cookies {
forward = "all"
}
}
min_ttl = 0
default_ttl = 0
max_ttl = 0
viewer_protocol_policy = "redirect-to-https"
}
restrictions {
geo_restriction {
restriction_type = "none"
}
}
viewer_certificate {
acm_certificate_arn = "cert.arn"
ssl_support_method = "sni-only"
}
}
But I don't like this solution, because the Kubernetes ALB DNS name has to be hardcoded, and we also end up with AWS resources (the ALB and target group) that are not managed by Terraform and that stayed in the account even after I deleted both the AWS Load Balancer Controller and the Ingress (github issue). So maybe a better solution would be to replace the AWS Load Balancer Controller with ingress-nginx behind an NLB, and use external-dns to create the DNS record that is then used in the CloudFront configuration.
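One possible way to avoid hardcoding the generated ALB name would be to read it from the Ingress status instead. This is an untested sketch: it assumes the Ingress lives in the default namespace and that your kubernetes provider version exposes the Ingress status through the kubernetes_ingress_v1 data source (the exact attribute path is an assumption).
data "kubernetes_ingress_v1" "app" {
  metadata {
    name      = "my-ingress-service"
    namespace = "default"
  }
}

# The controller-created ALB hostname could then be used directly as the CloudFront
# origin domain_name instead of looking the ALB up by its generated name:
#   domain_name = data.kubernetes_ingress_v1.app.status.0.load_balancer.0.ingress.0.hostname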

Access S3 from Lambda within VPC using Terraform

I have a Lambda
resource "aws_lambda_function" "api" {
function_name = "ApiController"
timeout = 10
s3_bucket = "mn-lambda"
s3_key = "mn/v1.0.0/sketch-avatar-api-1.0.0-all.jar"
handler = "io.micronaut.function.aws.proxy.MicronautLambdaHandler"
runtime = "java11"
memory_size = 1024
role = aws_iam_role.api_lambda.arn
vpc_config {
security_group_ids = [aws_security_group.lambda.id]
subnet_ids = [for subnet in aws_subnet.private: subnet.id]
}
}
Within a VPC
resource "aws_vpc" "vpc" {
cidr_block = var.vpc_cidr_block
enable_dns_support = true
enable_dns_hostnames = true
}
I created an aws_vpc_endpoint because I read that's what's needed for my VPC to access S3:
resource "aws_vpc_endpoint" "s3" {
vpc_id = aws_vpc.vpc.id
service_name = "com.amazonaws.${var.region}.s3"
}
I created and attached a policy allowing access to S3
resource "aws_iam_role_policy_attachment" "s3" {
role = aws_iam_role.api_lambda.name
policy_arn = aws_iam_policy.s3.arn
}
resource "aws_iam_policy" "s3" {
policy = data.aws_iam_policy_document.s3.json
}
data "aws_iam_policy_document" "s3" {
statement {
effect = "Allow"
resources = ["*"]
actions = [
"s3:*",
]
}
}
It might be worth noting that the bucket I'm trying to access was created using the AWS CLI (so not with Terraform), but it is in the same region.
The problem is that my Lambda is timing out when I try to read files from S3.
The full project can be found here should anyone want to take a peek.
You are creating com.amazonaws.${var.region}.s3, which is a gateway VPC endpoint and shouldn't be confused with an interface VPC endpoint.
One of the key differences between the two is that the gateway type requires an association with route tables. Thus you should use route_table_ids to associate your S3 gateway endpoint with the route tables of your subnets.
For example, to use default main VPC route table:
resource "aws_vpc_endpoint" "s3" {
vpc_id = aws_vpc.vpc.id
service_name = "com.amazonaws.${var.region}.s3"
route_table_ids = [aws_vpc.vpc.main_route_table_id]
}
Alternatively, you can use aws_vpc_endpoint_route_table_association to do it as well.
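For completeness, a minimal sketch of that alternative. It assumes a route table for the Lambda's private subnets already exists; aws_route_table.private is a placeholder name.
resource "aws_vpc_endpoint" "s3" {
  vpc_id       = aws_vpc.vpc.id
  service_name = "com.amazonaws.${var.region}.s3"
}

# Associate the gateway endpoint with an existing route table instead of setting
# route_table_ids on the endpoint itself.
resource "aws_vpc_endpoint_route_table_association" "s3_private" {
  route_table_id  = aws_route_table.private.id
  vpc_endpoint_id = aws_vpc_endpoint.s3.id
}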

Terraform error creating subnet dependency

I'm trying to get a DocumentDB cluster up and running from within a private subnet I have created.
Running the config below without the depends_on, I get the following error message because the subnet hasn't been created yet:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 59b75d23-50a4-42f9-99a3-367af58e6e16
I added depends_on to wait for the subnet to be created, but I am still running into an issue:
resource "aws_docdb_cluster" "docdb" {
  cluster_identifier      = "my-docdb-cluster"
  engine                  = "docdb"
  master_username         = "myusername"
  master_password         = "mypassword"
  backup_retention_period = 5
  preferred_backup_window = "07:00-09:00"
  skip_final_snapshot     = true
  apply_immediately       = true
  db_subnet_group_name    = aws_subnet.eu-west-3a-private

  depends_on = [aws_subnet.eu-west-3a-private]
}
On running terraform apply I am getting an error on the config:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 8b992d86-eb7f-427e-8f69-d05cc13d5b2d
on main.tf line 230, in resource "aws_docdb_cluster" "docdb":
230: resource "aws_docdb_cluster" "docdb"
A DB subnet group is a logical resource in itself that tells AWS where it may schedule a database instance in a VPC. It does not refer to the subnets directly, which is what you're trying to do there.
To create a DB subnet group you should use the aws_db_subnet_group resource. You then refer to it by name directly when creating database instances or clusters.
A basic example would look like this:
resource "aws_vpc" "example" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "eu-west-3a" {
vpc_id = aws_vpc.example.id
availability_zone = "a"
cidr_block = "10.0.1.0/24"
tags = {
AZ = "a"
}
}
resource "aws_subnet" "eu-west-3b" {
vpc_id = aws_vpc.example.id
availability_zone = "b"
cidr_block = "10.0.2.0/24"
tags = {
AZ = "b"
}
}
resource "aws_db_subnet_group" "example" {
name = "main"
subnet_ids = [
aws_subnet.eu-west-3a.id,
aws_subnet.eu-west-3b.id
]
tags = {
Name = "My DB subnet group"
}
}
resource "aws_db_instance" "example" {
allocated_storage = 20
storage_type = "gp2"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
db_subnet_group_name = aws_db_subnet_group.example.name
}
The same thing applies to Elasticache subnet groups which use the aws_elasticache_subnet_group resource.
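For reference, a minimal sketch of the Elasticache equivalent, reusing the example subnets above (the group name is arbitrary):
resource "aws_elasticache_subnet_group" "example" {
  name = "main-cache"

  subnet_ids = [
    aws_subnet.eu-west-3a.id,
    aws_subnet.eu-west-3b.id
  ]
}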
It's also worth noting that adding depends_on to a resource that already references the dependent resource via interpolation does nothing: the depends_on meta-argument is only needed for resources that don't expose a parameter that would provide this dependency information directly.
It seems the value of the parameter is wrong. A db_subnet_group_name created elsewhere outputs an ID/ARN, so you need to use the ID value (the depends_on clause itself looks fine):
db_subnet_group_name = aws_db_subnet_group.eu-west-3a-private.id
That should be correct; you can also try the ARN in place of the ID.