I wonder if something like this could be done. I have working code, but since the number of pods in Kubernetes will grow quickly, I want to convert it into templates. This is an example of the nginx ingress rule created for each WordPress site pod.
Currently, each pod has its own WordPress ingress entry:
resource "kubernetes_ingress" "ingress_nginx_siteA" {
metadata {
name = "ingress-nginx-siteA"
namespace = "default"
annotations = { "kubernetes.io/ingress.class" = "nginx", "nginx.ingress.kubernetes.io/configuration-snippet" = "modsecurity_rules '\n SecRuleEngine On\n SecRequestBodyAccess On\n SecAuditEngine RelevantOnly\n SecAuditLogParts ABCIJDEFHZ\n SecAuditLog /var/log/modsec_audit.log\n SecRuleRemoveById 932140\n';\n", "nginx.ingress.kubernetes.io/ssl-passthrough" = "true" }
}
spec {
tls {
hosts = ["siteA.test.com"]
secret_name = "wildcard-test-com"
}
rule {
host = "siteA.test.com"
http {
path {
path = "/"
backend {
service_name = "siteA"
service_port = "80"
}
}
}
}
}
}
Now I want to split this into a variables.tf that contains all the site variables, a template file rules.tpl, and a main.tf that orchestrates it all.
variables.tf:
variable "wordpress_site" {
type = map(object({
name = string
url = string
certificate = string
}))
default = {
siteA = {
name = siteA
url = siteA.test.com
certificate = wildcard-test-com
}
siteB = {
name = siteB
url = siteB.test.com
certificate = wildcard-test-com
}
}
}
rules.tpl:
%{ for name in wordpress_site.name ~}
resource "kubernetes_ingress" "ingress_nginx_${name}" {
  metadata {
    name      = "ingress-nginx-${name}"
    namespace = "default"
    annotations = {
      "kubernetes.io/ingress.class" = "nginx"
      "nginx.ingress.kubernetes.io/configuration-snippet" = "modsecurity_rules '\n SecRuleEngine On\n SecRequestBodyAccess On\n SecAuditEngine RelevantOnly\n SecAuditLogParts ABCIJDEFHZ\n SecAuditLog /var/log/modsec_audit.log\n SecRuleRemoveById 932140\n';\n"
      "nginx.ingress.kubernetes.io/ssl-passthrough" = "true"
    }
  }
  spec {
    tls {
      hosts       = ["${wordpress_site.url}"]
      secret_name = "${wordpress_site.certificate}"
    }
    rule {
      host = "${wordpress_site.url}"
      http {
        path {
          path = "/"
          backend {
            service_name = "${name}"
            service_port = "80"
          }
        }
      }
    }
  }
}
%{ endfor ~}
And now, in main.tf, what is the best way to tie it all together? I see that new functionality was added in TF 0.12, like the templatefile function, but I don't know whether I can use it like this:
main.tf:
templatefile(${path.module}/rules.tpl, ${module.var.wordpress_site})
Thanks all for your support!
The templatefile function is for generating strings from a template, not for generating Terraform configuration. Although it would be possible to render your given template to produce a string containing Terraform configuration, Terraform would just see the result as a normal string, not as more configuration to be evaluated.
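For context, templatefile is the right tool when a string is the end product, such as rendering a configuration file's contents. A minimal sketch, assuming a hypothetical modsec.tpl file containing plain text with ${...} placeholders (the file name and variable are illustrative, not part of the original question):
locals {
  # Renders modsec.tpl (assumed to live next to this module) into a string;
  # the audit_log variable becomes available inside the template.
  modsecurity_snippet = templatefile("${path.module}/modsec.tpl", {
    audit_log = "/var/log/modsec_audit.log"
  })
}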
Instead, what we need to get the desired result is resource for_each, which allows creating multiple instances from a single resource based on a map value.
resource "kubernetes_ingress" "nginx" {
for_each = var.wordpress_site
metadata {
name = "ingress-nginx-${each.value.name}"
namespace = "default"
annotations = {
"kubernetes.io/ingress.class" = "nginx"
"nginx.ingress.kubernetes.io/configuration-snippet" = <<-EOT
modsecurity_rules '
SecRuleEngine On
SecRequestBodyAccess On
SecAuditEngine RelevantOnly
SecAuditLogParts ABCIJDEFHZ
SecAuditLog /var/log/modsec_audit.log
SecRuleRemoveById 932140
';
EOT
"nginx.ingress.kubernetes.io/ssl-passthrough" = "true"
}
}
spec {
tls {
hosts = [each.value.url]
secret_name = each.value.certificate
}
rule {
host = each.value.url
http {
path {
path = "/"
backend {
service_name = each.value.name
service_port = "80"
}
}
}
}
}
}
When a resource has for_each set, Terraform will evaluate the given argument to obtain a map, and will then create one instance of the resource for each element in that map, with each one identified by its corresponding map key. In this case, assuming the default value of var.wordpress_site, you'll get two instances with the following addresses:
kubernetes_ingress.nginx["siteA"]
kubernetes_ingress.nginx["siteB"]
Inside the resource block, references starting with each.value refer to the values from the map, which in this case are the objects describing each site.
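For example (a sketch going slightly beyond the original answer; the output name is arbitrary), you can iterate over all of the created instances elsewhere with a for expression:
output "ingress_names" {
  # One entry per for_each instance: site key => created ingress name.
  value = { for key, ingress in kubernetes_ingress.nginx : key => ingress.metadata[0].name }
}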
Related
I have a question about tags in Terraform. I have these variables, and I'd like to use the Transit variable's description as a Name tag in my main.tf file. How do I go about it?
#VPC CIDRs
variable "All_VPCs" {
type = map(any)
default = {
Dev_VPC = {
ip = "10.0.3.0/24"
instance_tenancy = "default"
}
Transit_VPC = {
ip = "10.0.4.0/23"
instance_tenancy = "default"
description = "Transit_VPC"
}
}
}
I used this, but it didn't work.
resource "aws_internet_gateway" "Transit_Internet_Gateway" {
vpc_id = var.All_VPCs.Transit_VPC
tags = {
Name = "${var.All_VPCs.Transit_VPC.description}" + " Internet_Gateway"
}
You can't concatenate strings in Terraform with a + operator. The correct method of doing this is to use string interpolation (which you are already partially doing):
tags = {
  Name = "${var.All_VPCs.Transit_VPC.description} Internet_Gateway"
}
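The built-in format function is an equivalent alternative if you prefer it over interpolation; a minimal sketch:
tags = {
  # format() substitutes the description into the "%s" placeholder.
  Name = format("%s Internet_Gateway", var.All_VPCs.Transit_VPC.description)
}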
I'm trying to deploy an Application Load Balancer to AWS using Terraform's kubernetes_ingress resource.
I'm using the aws-load-balancer-controller, which I've installed into my cluster using the helm_release resource.
Now I'm trying to deploy a Deployment with a Service and an Ingress.
This is what my service looks like:
resource "kubernetes_service" "questo-server-service" {
metadata {
name = "questo-server-service-${var.env}"
namespace = kubernetes_namespace.app-namespace.metadata.0.name
}
spec {
selector = {
"app.kubernetes.io/name" = lookup(kubernetes_deployment.questo-server.metadata.0.labels, "app.kubernetes.io/name")
}
port {
port = 80
target_port = 4000
}
type = "LoadBalancer"
}
}
And this is what my ingress looks like:
resource "kubernetes_ingress" "questo-server-ingress" {
wait_for_load_balancer = true
metadata {
name = "questo-server-ingress-${var.env}"
namespace = kubernetes_namespace.app-namespace.metadata.0.name
annotations = {
"kubernetes.io/ingress.class" = "alb"
"alb.ingress.kubernetes.io/target-type" = "instance"
}
}
spec {
rule {
http {
path {
path = "/*"
backend {
service_name = kubernetes_service.questo-server-service.metadata.0.name
service_port = 80
}
}
}
}
}
}
The issue is that when I run terraform apply, it creates a Classic Load Balancer instead of an Application Load Balancer.
I've tried changing the service's type to NodePort, but it didn't help.
I've also tried adding more annotations to the ingress, like "alb.ingress.kubernetes.io/load-balancer-name" = "${name}", but then it created two load balancers at once: one internal ALB and one internet-facing CLB.
Any ideas how I can create an internet-facing Application Load Balancer using this setup?
--- Update ---
I've noticed that it is actually the service that creates the Classic Load Balancer, via which I can connect to my deployment.
The ingress creates an ALB, but it's prefixed with internal, so my question here is: how do I create an internet-facing ALB?
Thanks!
Try using the alb.ingress.kubernetes.io/scheme: internet-facing annotation.
You can find a list of all available annotations here: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/
Answering my own question, like most of the time :)
This is the proper setup, just in case someone comes across it:
The service's type has to be NodePort:
resource "kubernetes_service" "questo-server-service" {
metadata {
name = "questo-server-service-${var.env}"
namespace = kubernetes_namespace.app-namespace.metadata.0.name
}
spec {
selector = {
"app.kubernetes.io/name" = lookup(kubernetes_deployment.questo-server.metadata.0.labels, "app.kubernetes.io/name")
}
port {
port = 80
target_port = 4000
}
type = "NodePort"
}
}
And the ingress's annotations have to be set as follows (you can ignore load-balancer-name and healthcheck-path, as they are not relevant to the question):
resource "kubernetes_ingress" "questo-server-ingress" {
wait_for_load_balancer = true
metadata {
name = "questo-server-ingress-${var.env}"
namespace = kubernetes_namespace.app-namespace.metadata.0.name
annotations = {
"kubernetes.io/ingress.class" = "alb"
"alb.ingress.kubernetes.io/target-type" = "ip"
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"alb.ingress.kubernetes.io/load-balancer-name" = "questo-server-alb-${var.env}"
"alb.ingress.kubernetes.io/healthcheck-path" = "/health"
}
}
spec {
rule {
http {
path {
path = "/*"
backend {
service_name = kubernetes_service.questo-server-service.metadata.0.name
service_port = 80
}
}
}
}
}
}
I'm trying to deploy the aws-load-balancer-controller on Kubernetes.
I have the following TF code:
resource "kubernetes_deployment" "ingress" {
metadata {
name = "alb-ingress-controller"
namespace = "kube-system"
labels = {
app.kubernetes.io/name = "alb-ingress-controller"
app.kubernetes.io/version = "v2.2.3"
app.kubernetes.io/managed-by = "terraform"
}
}
spec {
replicas = 1
selector {
match_labels = {
app.kubernetes.io/name = "alb-ingress-controller"
}
}
strategy {
type = "Recreate"
}
template {
metadata {
labels = {
app.kubernetes.io/name = "alb-ingress-controller"
app.kubernetes.io/version = "v2.2.3"
}
}
spec {
dns_policy = "ClusterFirst"
restart_policy = "Always"
service_account_name = kubernetes_service_account.ingress.metadata[0].name
termination_grace_period_seconds = 60
container {
name = "alb-ingress-controller"
image = "docker.io/amazon/aws-alb-ingress-controller:v2.2.3"
image_pull_policy = "Always"
args = [
"--ingress-class=alb",
"--cluster-name=${local.k8s[var.env].esk_cluster_name}",
"--aws-vpc-id=${local.k8s[var.env].cluster_vpc}",
"--aws-region=${local.k8s[var.env].region}"
]
volume_mount {
mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
name = kubernetes_service_account.ingress.default_secret_name
read_only = true
}
}
volume {
name = kubernetes_service_account.ingress.default_secret_name
secret {
secret_name = kubernetes_service_account.ingress.default_secret_name
}
}
}
}
}
depends_on = [kubernetes_cluster_role_binding.ingress]
}
resource "kubernetes_ingress" "app" {
metadata {
name = "owncloud-lb"
namespace = "fargate-node"
annotations = {
"kubernetes.io/ingress.class" = "alb"
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"alb.ingress.kubernetes.io/target-type" = "ip"
}
labels = {
"app" = "owncloud"
}
}
spec {
backend {
service_name = "owncloud-service"
service_port = 80
}
rule {
http {
path {
path = "/"
backend {
service_name = "owncloud-service"
service_port = 80
}
}
}
}
}
depends_on = [kubernetes_service.app]
}
This works up to version 1.9 as required. As soon as I upgrade to version 2.2.3, the pod fails to update and logs the following error:
{"level":"error","ts":1629207071.4385357,"logger":"setup","msg":"unable to create controller","controller":"TargetGroupBinding","error":"no matches for kind \"TargetGroupBinding\" in version \"elbv2.k8s.aws/v1beta1\""}
I have read the upgrade docs and amended the IAM policy as they state, but they also mention:
updating the TargetGroupBinding CRDs
and that's where I am not sure how to do it using Terraform.
If I try to deploy on a new cluster (i.e. not an upgrade from 1.9), I get the same error.
With your Terraform code, you apply a Deployment and an Ingress resource, but you must also add the CustomResourceDefinitions for the TargetGroupBinding custom resource.
This is described under "Add Controller to Cluster" in the Load Balancer Controller installation documentation - with examples provided for Helm and Kubernetes YAML.
Terraform has beta support for applying CRDs, including an example of deploying a CustomResourceDefinition.
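A minimal sketch of that approach, assuming you have downloaded the controller's CRD manifest next to your module (the file path and resource name here are illustrative, and kubernetes_manifest requires a Kubernetes provider version that includes the beta support mentioned above):
resource "kubernetes_manifest" "targetgroupbinding_crd" {
  # Parse the downloaded CRD YAML and apply it as a raw manifest.
  manifest = yamldecode(file("${path.module}/crds/targetgroupbinding.yaml"))
}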
I am trying to pass the nat_ip of the google compute instances created in module "microservice-instance" to another module, "database". Since I am creating more than one instance, I am getting the following error for the output variable in module "microservice-instance":
Error: Missing resource instance key
on modules/microservice-instance/ms-outputs.tf line 3, in output "nat_ip":
   3: value = google_compute_instance.apps.network_interface[*].access_config[0].nat_ip
Because google_compute_instance.apps has "count" set, its attributes must be accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
google_compute_instance.apps[count.index]
I have looked at the following and am using the same way of accessing the attribute, but it's not working. Here is the code:
main.tf
provider "google" {
credentials = "${file("../../service-account.json")}"
project = var.project
region =var.region
}
# Include modules
module "microservice-instance" {
count = var.appserver_count
source = "./modules/microservice-instance"
appserver_count = var.appserver_count
}
module "database" {
count = var.no_of_db_instances
source = "./modules/database"
nat_ip = module.microservice-instance.nat_ip
no_of_db_instances = var.no_of_db_instances
}
./modules/microservice-instance/microservice-instance.tf
resource "google_compute_instance" "apps" {
count = var.appserver_count
name = "apps-${count.index + 1}"
# name = "apps-${random_id.app_name_suffix.hex}"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "ubuntu-os-cloud/ubuntu-1804-lts"
}
}
network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}
}
./modules/microservice-instance/ms-outputs.tf
output "nat_ip" {
value = google_compute_instance.apps.network_interface[*].access_config[0].nat_ip
}
./modules/database/database.tf
resource "random_id" "db_name_suffix" {
byte_length = 4
}
resource "google_sql_database_instance" "postgres" {
name = "postgres-instance-${random_id.db_name_suffix.hex}"
database_version = "POSTGRES_11"
settings {
tier = "db-f1-micro"
ip_configuration {
dynamic "authorized_networks" {
for_each = var.nat_ip
# iterator = ip
content {
# value = ni.0.access_config.0.nat_ip
value = each.key
}
}
}
}
}
You are creating var.appserver_count google_compute_instance.apps resources, so you will have:
google_compute_instance.apps[0]
google_compute_instance.apps[1]
...
google_compute_instance.apps[var.appserver_count - 1]
Therefore, in your output, instead of:
output "nat_ip" {
value = google_compute_instance.apps.network_interface[*].access_config[0].nat_ip
}
you have to reference individual apps resources or all of them using [*], for example:
output "nat_ip" {
value = google_compute_instance.apps[*].network_interface[*].access_config[0].nat_ip
}
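Note that the double splat produces a list of lists (one inner list per instance's network interfaces). If the consuming side, such as the dynamic "authorized_networks" block above, needs a flat collection, one option is to flatten it; a sketch going beyond the original answer:
output "nat_ip" {
  # Flatten the [instance][interface] nesting into a single list of IPs.
  value = flatten(google_compute_instance.apps[*].network_interface[*].access_config[0].nat_ip)
}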
For example, I have two domains: example.com and example.org.
I want to make example.com the primary domain and set up a host redirect (in Google Cloud terms) from example.org and www.* to example.com.
Intuitively, it looks like I have to create two "path matchers": one will serve the backend and the other will do the host redirect.
variable "primary_domain" {
type = string
default = "example.com"
}
variable "secondary_domains" {
type = set(string)
default = ["example.org", "www.example.com", "www.example.org"]
}
resource "google_compute_url_map" "landing_url_map" {
name = "landing-url-map"
default_service = google_compute_backend_bucket.landing_backend_bucket.self_link
host_rule {
path_matcher = "primary"
hosts = [var.primary_domain]
}
path_matcher {
name = "primary"
default_service = google_compute_backend_bucket.landing_backend_bucket.self_link
}
host_rule {
path_matcher = "secondary"
hosts = var.secondary_domains
}
path_matcher {
name = "secondary"
default_url_redirect {
host_redirect = var.primary_domain
}
}
}
But it fails:
Error: "path_matcher.1.default_url_redirect": conflicts with default_service
on landing.tf line 47, in resource "google_compute_url_map" "landing_url_map":
47: resource "google_compute_url_map" "landing_url_map" {
I've tried multiple other ways to make it work, but none of them do. I've verified it works in the web console, but I can't find a way to implement this using Terraform.
It looks like you cannot have default_service at the top level of the URL map and have default_url_redirect in a path_matcher. Try removing default_service = google_compute_backend_bucket.landing_backend_bucket.self_link from the top-level as follows:
resource "google_compute_url_map" "landing_url_map" {
name = "landing-url-map"
host_rule {
path_matcher = "primary"
hosts = [var.primary_domain]
}
path_matcher {
name = "primary"
default_service = google_compute_backend_bucket.landing_backend_bucket.self_link
}
host_rule {
path_matcher = "secondary"
hosts = var.secondary_domains
}
path_matcher {
name = "secondary"
default_url_redirect {
host_redirect = var.primary_domain
}
}
}
When none of the specified hostRules match, the request is redirected to a URL specified by defaultUrlRedirect. If defaultUrlRedirect is specified, defaultService or defaultRouteAction must not be set. Structure is documented below.
From the default_url_redirect argument docs for the google_compute_url_map.
Rolling back to provider version 3.25.0 helped.
The issue has been reported on GitHub:
https://github.com/terraform-providers/terraform-provider-google/issues/6695
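If you need to stay on the older provider until the issue is fixed, you can pin the version explicitly; a minimal sketch using the Terraform 0.13+ required_providers syntax:
terraform {
  required_providers {
    google = {
      # Pin to the last known-good version while the upstream bug is open.
      source  = "hashicorp/google"
      version = "3.25.0"
    }
  }
}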