How to create AWS ALB using kubernetes_ingress terraform resource?

I'm trying to deploy an Application Load Balancer to AWS using Terraform's kubernetes_ingress resource.
I'm using the aws-load-balancer-controller, which I've installed to my cluster using the helm_release resource.
Now I'm trying to deploy a deployment with a service and an ingress.
This is what my service looks like:
resource "kubernetes_service" "questo-server-service" {
metadata {
name = "questo-server-service-${var.env}"
namespace = kubernetes_namespace.app-namespace.metadata.0.name
}
spec {
selector = {
"app.kubernetes.io/name" = lookup(kubernetes_deployment.questo-server.metadata.0.labels, "app.kubernetes.io/name")
}
port {
port = 80
target_port = 4000
}
type = "LoadBalancer"
}
}
And this is what my ingress looks like:
resource "kubernetes_ingress" "questo-server-ingress" {
wait_for_load_balancer = true
metadata {
name = "questo-server-ingress-${var.env}"
namespace = kubernetes_namespace.app-namespace.metadata.0.name
annotations = {
"kubernetes.io/ingress.class" = "alb"
"alb.ingress.kubernetes.io/target-type" = "instance"
}
}
spec {
rule {
http {
path {
path = "/*"
backend {
service_name = kubernetes_service.questo-server-service.metadata.0.name
service_port = 80
}
}
}
}
}
}
The issue is that when I run terraform apply it creates a Classic Load Balancer instead of an Application Load Balancer.
I've tried changing the service's type to NodePort, but it didn't help.
I've also tried adding more annotations to the ingress, like "alb.ingress.kubernetes.io/load-balancer-name" = "${name}", but then it created two load balancers at once! One internal ALB and one internet-facing CLB.
Any ideas how I can create an internet-facing Application Load Balancer with this setup?
--- Update ----
I've noticed that the service is actually the Classic Load Balancer through which I can connect to my deployment.
The ingress creates an ALB, but it's prefixed with internal, so my question here is: how do I create an internet-facing ALB?
Thanks!

Try using the alb.ingress.kubernetes.io/scheme: internet-facing annotation.
You can find a list of all available annotations here: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/
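For example, added to the annotations of the ingress above (a minimal sketch; the other annotations stay as they are):
annotations = {
  "kubernetes.io/ingress.class"           = "alb"
  "alb.ingress.kubernetes.io/target-type" = "instance"
  "alb.ingress.kubernetes.io/scheme"      = "internet-facing"
}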

Answering my own question, as most of the time :)
This is the proper setup, just in case someone comes across it:
The service's type has to be NodePort:
resource "kubernetes_service" "questo-server-service" {
metadata {
name = "questo-server-service-${var.env}"
namespace = kubernetes_namespace.app-namespace.metadata.0.name
}
spec {
selector = {
"app.kubernetes.io/name" = lookup(kubernetes_deployment.questo-server.metadata.0.labels, "app.kubernetes.io/name")
}
port {
port = 80
target_port = 4000
}
type = "NodePort"
}
}
And the ingress's annotations have to be set as follows (you can ignore load-balancer-name and healthcheck-path as they are not relevant to the question):
resource "kubernetes_ingress" "questo-server-ingress" {
wait_for_load_balancer = true
metadata {
name = "questo-server-ingress-${var.env}"
namespace = kubernetes_namespace.app-namespace.metadata.0.name
annotations = {
"kubernetes.io/ingress.class" = "alb"
"alb.ingress.kubernetes.io/target-type" = "ip"
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"alb.ingress.kubernetes.io/load-balancer-name" = "questo-server-alb-${var.env}"
"alb.ingress.kubernetes.io/healthcheck-path" = "/health"
}
}
spec {
rule {
http {
path {
path = "/*"
backend {
service_name = kubernetes_service.questo-server-service.metadata.0.name
service_port = 80
}
}
}
}
}
}

Related

google cloud platform instance in MIG cannot access artifact registry

I'm trying to deploy a managed instance group with a load balancer which will be running a web server container.
The container is stored in the Google Artifact Registry.
If I manually create a VM and define the container usage, it is successfully able to pull and activate the container.
When I try to create the managed instance group via terraform, the VM does not pull nor activate the container.
When I ssh to the VM and try to manually pull the container, I get the following error:
Error response from daemon: Get https://us-docker.pkg.dev/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
The only notable difference between the VM I created manually and the VM created by Terraform is that the manual VM has an external IP address. I'm not sure if this matters, and I'm not sure how to add one in the Terraform file.
Below is my main.tf file. Can anyone tell me what I'm doing wrong?
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.53.0"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~> 4.0"
    }
  }
}

provider "google" {
  credentials = file("compute_lab2-347808-dab33a244827.json")
  project     = "lab2-347808"
  region      = "us-central1"
  zone        = "us-central1-f"
}

locals {
  google_load_balancer_ip_ranges = [
    "130.211.0.0/22",
    "35.191.0.0/16",
  ]
}

module "gce-container" {
  source  = "terraform-google-modules/container-vm/google"
  version = "~> 2.0"

  cos_image_name = "cos-stable-77-12371-89-0"

  container = {
    image = "us-docker.pkg.dev/lab2-347808/web-server-repo/web-server-image"
    volumeMounts = [
      {
        mountPath = "/cache"
        name      = "tempfs-0"
        readOnly  = false
      },
    ]
  }

  volumes = [
    {
      name = "tempfs-0"
      emptyDir = {
        medium = "Memory"
      }
    },
  ]

  restart_policy = "Always"
}

resource "google_compute_firewall" "rules" {
  project     = "lab2-347808"
  name        = "allow-web-ports"
  network     = "default"
  description = "Opens the relevant ports for the web server"

  allow {
    protocol = "tcp"
    ports    = ["80", "8080", "5432", "5000", "443"]
  }

  source_ranges = ["0.0.0.0/0"]
  #source_ranges = local.google_load_balancer_ip_ranges
  target_tags = ["web-server-ports"]
}

resource "google_compute_autoscaler" "default" {
  name   = "web-autoscaler"
  zone   = "us-central1-f"
  target = google_compute_instance_group_manager.default.id

  autoscaling_policy {
    max_replicas    = 10
    min_replicas    = 1
    cooldown_period = 60

    cpu_utilization {
      target = 0.5
    }
  }
}

resource "google_compute_instance_template" "default" {
  name           = "my-web-server-template"
  machine_type   = "e2-medium"
  can_ip_forward = false
  tags           = ["ssh", "http-server", "https-server", "web-server-ports"]

  disk {
    #source_image = "cos-cloud/cos-73-11647-217-0"
    source_image = module.gce-container.source_image
  }

  network_interface {
    network = "default"
  }

  service_account {
    #scopes = ["userinfo-email", "compute-ro", "storage-ro"]
    scopes = ["cloud-platform"]
  }

  metadata = {
    gce-container-declaration = module.gce-container.metadata_value
  }
}

resource "google_compute_target_pool" "default" {
  name = "web-server-target-pool"
}

resource "google_compute_instance_group_manager" "default" {
  name = "web-server-igm"
  zone = "us-central1-f"

  version {
    instance_template = google_compute_instance_template.default.id
    name              = "primary"
  }

  target_pools       = [google_compute_target_pool.default.id]
  base_instance_name = "web-server-instance"
}
Your VM template doesn't have a public IP; therefore, the VMs can't reach public IPs.
However, you have 3 ways to solve that issue:
Add a public IP on the VM template (bad idea)
Add a Cloud NAT on your VM private IP range to allow outgoing traffic to the internet (good idea; a sketch follows below)
Activate Private Google Access in the subnet that hosts the VM private IP range. It creates a bridge to access Google services without having a public IP (my preferred idea) -> https://cloud.google.com/vpc/docs/configure-private-google-access
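A minimal sketch of the Cloud NAT option, assuming the default network in us-central1 (the resource names are illustrative):
# Cloud Router that the NAT gateway attaches to.
resource "google_compute_router" "nat_router" {
  name    = "web-nat-router"
  network = "default"
  region  = "us-central1"
}

# NAT gateway giving instances without external IPs outbound internet access.
resource "google_compute_router_nat" "nat" {
  name                               = "web-nat"
  router                             = google_compute_router.nat_router.name
  region                             = google_compute_router.nat_router.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}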
Apparently I was missing the following access_config inside the network_interface of the google_compute_instance_template, as follows:
network_interface {
  network = "default"
  access_config {
    network_tier = "PREMIUM"
  }
}

Ingress annotations provision an unnecessary AWS classic load balancer

Within my AWS EKS cluster I'm provisioning an AWS application load balancer using annotations on the Ingress object. Additionally, an unnecessary classic load balancer is being provisioned. Any ideas or best practices on how to prevent this?
resource "kubernetes_service" "api" {
metadata {
name = "${var.project_prefix}-api-service"
}
spec {
selector = {
app = "${var.project_prefix}-api"
}
port {
name = "http"
port = 80
target_port = 1337
}
port {
name = "https"
port = 443
target_port = 1337
}
type = "LoadBalancer"
}
}
resource "kubernetes_ingress" "api" {
wait_for_load_balancer = true
metadata {
name = "${var.project_prefix}-api"
annotations = {
"kubernetes.io/ingress.class" = "alb"
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"alb.ingress.kubernetes.io/target-type" = "instance"
"alb.ingress.kubernetes.io/certificate-arn" = local.api-certificate_arn
"alb.ingress.kubernetes.io/load-balancer-name" = "${var.project_prefix}-api"
"alb.ingress.kubernetes.io/listen-ports" = "[{\"HTTP\": 80}, {\"HTTPS\":443}]"
"alb.ingress.kubernetes.io/actions.ssl-redirect" = "{\"Type\": \"redirect\", \"RedirectConfig\": { \"Protocol\": \"HTTPS\", \"Port\": \"443\", \"StatusCode\": \"HTTP_301\"}}"
}
}
spec {
backend {
service_name = kubernetes_service.api.metadata.0.name
service_port = 80
}
rule {
http {
path {
path = "/*"
backend {
service_name = "ssl-redirect"
service_port = "use-annotation"
}
}
}
}
}
}
Your LoadBalancer service is responsible for deploying the classic load balancer, and it is unnecessary if you just need an application load balancer.
resource "kubernetes_service" "api" {
metadata {
name = "${var.project_prefix}-api-service"
}
spec {
selector = {
app = "${var.project_prefix}-api"
}
port {
name = "http"
port = 80
target_port = 1337
}
port {
name = "https"
port = 443
target_port = 1337
}
type = "ClusterIP" # See comments below
}
}
resource "kubernetes_ingress" "api" {
wait_for_load_balancer = true
metadata {
name = "${var.project_prefix}-api"
annotations = {
"kubernetes.io/ingress.class" = "alb"
"alb.ingress.kubernetes.io/target-type" = "ip" # See comments below
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"alb.ingress.kubernetes.io/target-type" = "instance"
"alb.ingress.kubernetes.io/certificate-arn" = local.api-certificate_arn
"alb.ingress.kubernetes.io/load-balancer-name" = "${var.project_prefix}-api"
"alb.ingress.kubernetes.io/listen-ports" = "[{\"HTTP\": 80}, {\"HTTPS\":443}]"
"alb.ingress.kubernetes.io/actions.ssl-redirect" = "{\"Type\": \"redirect\", \"RedirectConfig\": { \"Protocol\": \"HTTPS\", \"Port\": \"443\", \"StatusCode\": \"HTTP_301\"}}"
}
}
spec {
backend {
service_name = kubernetes_service.api.metadata.0.name
service_port = 80
}
rule {
http {
path {
path = "/*"
backend {
service_name = "ssl-redirect"
service_port = "use-annotation"
}
}
}
}
}
}
Traffic Modes
Depending on your cluster and networking setup, you might be able to use the ip target type, where the load balancer communicates directly with Kubernetes pods via their IPs (so ClusterIP service types are fine), provided you have a suitable VPC CNI configuration; otherwise, use instance in conjunction with NodePort service types, since the load balancer cannot directly access the pod IPs. Some relevant links below:
ALB Target Types
VPC CNI EKS Plugin
Load Balancer Types
Some relevant links regarding Kubernetes load balancing and EKS load balancers. Note that Ingress resources are layer 7 and load-balanced Service resources are layer 4, hence ALBs are deployed for EKS Ingress resources and NLBs for load-balanced Service resources (a sketch of an NLB-backed Service follows the links below):
Rancher Kubernetes Load Balancers
AWS Load Balancer Comparison
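As a rough illustration of the layer-4 case (a sketch only; the service name, selector and ports are placeholders, and the annotations assume the AWS Load Balancer Controller is installed), a Service can be annotated so that an NLB is provisioned instead of a classic load balancer:
resource "kubernetes_service" "api_nlb" {
  metadata {
    name = "api-nlb" # placeholder name
    annotations = {
      # Handled by the AWS Load Balancer Controller; provisions an NLB.
      "service.beta.kubernetes.io/aws-load-balancer-type" = "external"
      "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type" = "ip"
      "service.beta.kubernetes.io/aws-load-balancer-scheme" = "internet-facing"
    }
  }
  spec {
    selector = {
      app = "api" # placeholder selector
    }
    port {
      port        = 80
      target_port = 1337
    }
    type = "LoadBalancer"
  }
}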

Deploying AWS Load Balancer Controller on EKS with Terraform

Trying to deploy aws-load-balancer-controller on Kubernetes.
I have the following TF code:
resource "kubernetes_deployment" "ingress" {
metadata {
name = "alb-ingress-controller"
namespace = "kube-system"
labels = {
app.kubernetes.io/name = "alb-ingress-controller"
app.kubernetes.io/version = "v2.2.3"
app.kubernetes.io/managed-by = "terraform"
}
}
spec {
replicas = 1
selector {
match_labels = {
app.kubernetes.io/name = "alb-ingress-controller"
}
}
strategy {
type = "Recreate"
}
template {
metadata {
labels = {
app.kubernetes.io/name = "alb-ingress-controller"
app.kubernetes.io/version = "v2.2.3"
}
}
spec {
dns_policy = "ClusterFirst"
restart_policy = "Always"
service_account_name = kubernetes_service_account.ingress.metadata[0].name
termination_grace_period_seconds = 60
container {
name = "alb-ingress-controller"
image = "docker.io/amazon/aws-alb-ingress-controller:v2.2.3"
image_pull_policy = "Always"
args = [
"--ingress-class=alb",
"--cluster-name=${local.k8s[var.env].esk_cluster_name}",
"--aws-vpc-id=${local.k8s[var.env].cluster_vpc}",
"--aws-region=${local.k8s[var.env].region}"
]
volume_mount {
mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
name = kubernetes_service_account.ingress.default_secret_name
read_only = true
}
}
volume {
name = kubernetes_service_account.ingress.default_secret_name
secret {
secret_name = kubernetes_service_account.ingress.default_secret_name
}
}
}
}
}
depends_on = [kubernetes_cluster_role_binding.ingress]
}
resource "kubernetes_ingress" "app" {
metadata {
name = "owncloud-lb"
namespace = "fargate-node"
annotations = {
"kubernetes.io/ingress.class" = "alb"
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"alb.ingress.kubernetes.io/target-type" = "ip"
}
labels = {
"app" = "owncloud"
}
}
spec {
backend {
service_name = "owncloud-service"
service_port = 80
}
rule {
http {
path {
path = "/"
backend {
service_name = "owncloud-service"
service_port = 80
}
}
}
}
}
depends_on = [kubernetes_service.app]
}
This works up to version 1.9 as required. As soon as I upgrade to version 2.2.3, the pod fails to update, and I get the following error from the pod:
{"level":"error","ts":1629207071.4385357,"logger":"setup","msg":"unable to create controller","controller":"TargetGroupBinding","error":"no matches for kind \"TargetGroupBinding\" in version \"elbv2.k8s.aws/v1beta1\""}
I have read the upgrade docs and have amended the IAM policy as they state, but they also mention:
updating the TargetGroupBinding CRDs
And that's where I am not sure how to do it using Terraform.
If I try to deploy on a new cluster (i.e. not an upgrade from 1.9), I get the same error.
With your Terraform code, you apply a Deployment and an Ingress resource, but you must also add the CustomResourceDefinitions for the TargetGroupBinding custom resource.
This is described under "Add Controller to Cluster" in the Load Balancer Controller installation documentation - with examples for Helm and Kubernetes Yaml provided.
Terraform has beta support for applying CRDs, including an example of deploying a CustomResourceDefinition.
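For instance, a minimal sketch with the kubernetes_manifest resource, assuming you have saved the controller's CRD manifest locally (the file path below is an assumption):
# Sketch only: applies a single-document CRD manifest downloaded from the
# aws-load-balancer-controller release artifacts.
resource "kubernetes_manifest" "targetgroupbinding_crd" {
  manifest = yamldecode(file("${path.module}/crds/targetgroupbinding.yaml"))
}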

Terraform 0.12 Creating ingress rules with template

I wonder if something like this could be used. I have functional code working, but as the number of pods in Kubernetes will grow fast, I want to convert it into templates. This is an example of the creation of an nginx ingress rule for each wordpress site pod.
Right now, each pod has its own wordpress ingress entry:
resource "kubernetes_ingress" "ingress_nginx_siteA" {
metadata {
name = "ingress-nginx-siteA"
namespace = "default"
annotations = { "kubernetes.io/ingress.class" = "nginx", "nginx.ingress.kubernetes.io/configuration-snippet" = "modsecurity_rules '\n SecRuleEngine On\n SecRequestBodyAccess On\n SecAuditEngine RelevantOnly\n SecAuditLogParts ABCIJDEFHZ\n SecAuditLog /var/log/modsec_audit.log\n SecRuleRemoveById 932140\n';\n", "nginx.ingress.kubernetes.io/ssl-passthrough" = "true" }
}
spec {
tls {
hosts = ["siteA.test.com"]
secret_name = "wildcard-test-com"
}
rule {
host = "siteA.test.com"
http {
path {
path = "/"
backend {
service_name = "siteA"
service_port = "80"
}
}
}
}
}
}
Now I want to split this into a variables.tf that contains all the sites' variables, a template file rules.tpl, and a main.tf that orchestrates this stuff.
variables.tf:
variable "wordpress_site" {
type = map(object({
name = string
url = string
certificate = string
}))
default = {
siteA = {
name = siteA
url = siteA.test.com
certificate = wildcard-test-com
}
siteB = {
name = siteB
url = siteB.test.com
certificate = wildcard-test-com
}
}
}
rules.tpl:
%{ for name in wordpress_site.name ~}
resource "kubernetes_ingress" "ingress_nginx_${name}" {
  metadata {
    name      = "ingress-nginx-${name}"
    namespace = "default"
    annotations = {
      "kubernetes.io/ingress.class" = "nginx"
      "nginx.ingress.kubernetes.io/configuration-snippet" = "modsecurity_rules '\n SecRuleEngine On\n SecRequestBodyAccess On\n SecAuditEngine RelevantOnly\n SecAuditLogParts ABCIJDEFHZ\n SecAuditLog /var/log/modsec_audit.log\n SecRuleRemoveById 932140\n';\n"
      "nginx.ingress.kubernetes.io/ssl-passthrough" = "true"
    }
  }
  spec {
    tls {
      hosts       = ["${wordpress_site.url}"]
      secret_name = "${wordpress_site.certificate}"
    }
    rule {
      host = "${wordpress_site.url}"
      http {
        path {
          path = "/"
          backend {
            service_name = "${name}"
            service_port = "80"
          }
        }
      }
    }
  }
}
%{ endfor ~}
and now, in main.tf, what is the best way to mix it all together? I see that new functionality has been added in TF 0.12, like the templatefile function, but I don't know if I can use it like this:
main.tf:
templatefile(${path.module}/rules.tpl, ${module.var.wordpress_site})
Thanks all for your support!
The templatefile function is for generating strings from a template, not for generating Terraform configuration. Although it would be possible to render your given template to produce a string containing Terraform configuration, Terraform would just see the result as a normal string, not as more configuration to be evaluated.
Instead, what we need to get the desired result is resource for_each, which allows creating multiple instances from a single resource based on a map value.
resource "kubernetes_ingress" "nginx" {
for_each = var.wordpress_site
metadata {
name = "ingress-nginx-${each.value.name}"
namespace = "default"
annotations = {
"kubernetes.io/ingress.class" = "nginx"
"nginx.ingress.kubernetes.io/configuration-snippet" = <<-EOT
modsecurity_rules '
SecRuleEngine On
SecRequestBodyAccess On
SecAuditEngine RelevantOnly
SecAuditLogParts ABCIJDEFHZ
SecAuditLog /var/log/modsec_audit.log
SecRuleRemoveById 932140
';
EOT
"nginx.ingress.kubernetes.io/ssl-passthrough" = "true"
}
}
spec {
tls {
hosts = [each.value.url]
secret_name = each.value.certificate
}
rule {
host = each.value.url
http {
path {
path = "/"
backend {
service_name = each.value.name
service_port = "80"
}
}
}
}
}
}
When a resource has for_each set, Terraform will evaluate the given argument to obtain a map, and will then create one instance of the resource for each element in that map, with each one identified by its corresponding map key. In this case, assuming the default value of var.wordpress_site, you'll get two instances with the following addresses:
kubernetes_ingress.nginx["siteA"]
kubernetes_ingress.nginx["siteB"]
Inside the resource block, references starting with each.value refer to the values from the map, which in this case are the objects describing each site.
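As a small usage sketch (the output name here is arbitrary), the per-site instances can then be referenced elsewhere, for example to expose each configured host:
# Sketch: collect the host configured for each site's ingress instance.
output "wordpress_ingress_hosts" {
  value = { for site, ing in kubernetes_ingress.nginx : site => ing.spec[0].rule[0].host }
}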

Terraform aws_elastic_beanstalk_environment redirect HTTP traffic to HTTPS

I'm using Terraform to provision an Elastic Beanstalk environment. I want to redirect all HTTP traffic to HTTPS. Here is my current configuration:
resource "aws_elastic_beanstalk_environment" "env" {
...
# Configure the default listener (port 80) on a classic load balancer.
setting {
namespace = "aws:elb:listener:80"
name = "InstancePort"
value = "80"
}
setting {
namespace = "aws:elb:listener:80"
name = "ListenerEnabled"
value = "true"
}
# Configure additional listeners on a classic load balancer.
setting {
namespace = "aws:elb:listener:443"
name = "ListenerProtocol"
value = "HTTPS"
}
setting {
namespace = "aws:elb:listener:443"
name = "InstancePort"
value = "80"
}
setting {
namespace = "aws:elb:listener:443"
name = "ListenerEnabled"
value = "true"
}
# Modify the default stickiness and global load balancer policies for a classic load balancer.
setting {
namespace = "aws:elb:policies"
name = "ConnectionSettingIdleTimeout"
value = "60"
}
setting {
namespace = "aws:elbv2:listener:443"
name = "SSLCertificateArns"
value = "${aws_acm_certificate.cert.arn}"
}
setting {
namespace = "aws:elb:listener:443"
name = "SSLCertificateId"
value = "${aws_acm_certificate.cert.arn}"
}
I've tried various solutions posted online, but none of them actually work. Any idea what is wrong with my current configuration? Thanks in advance.
edit: entire config (stackoverflow would not allow posting it all for some reason) https://pastebin.com/aMHgSiXr