Using a public ECR image in a local Kubernetes cluster in Terraform

I've set up a very simple local Kubernetes cluster for development purposes, and for that I aim to pull a Docker image for my pods from ECR.
Here's the code:
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.0"
    }
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_deployment" "test" {
  metadata {
    name      = "test-deployment"
    namespace = kubernetes_namespace.test.metadata.0.name
  }
  spec {
    replicas = 2
    selector {
      match_labels = {
        app = "MyTestApp"
      }
    }
    template {
      metadata {
        labels = {
          app = "MyTestApp"
        }
      }
      spec {
        container {
          image = "public ECR URL" # <-- this times out
          name  = "myTestPod"
          port {
            container_port = 4000
          }
        }
      }
    }
  }
}
I've set that ECR repo to public and made sure that it's accessible. My challenge is that in a normal scenario you have to log in to ECR in order to retrieve the image, and I do not know how to achieve that in Terraform. So on terraform apply, it times out and fails.
I read the documentation on aws_ecr_repository, aws_ecr_authorization_token, the Terraform EKS module, and local-exec, but none of them seem to have a solution for this.
Achieving this in a GitLab pipeline is fairly easy, but how can one achieve it in Terraform? How can I pull an image from a public ECR repo for my local Kubernetes cluster?

After a while, I figured out the cleanest way to achieve this.
First, retrieve your ECR authorization token data:
data "aws_ecr_authorization_token" "token" {
}
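(This data source assumes the AWS provider is configured with credentials that are allowed to call ecr:GetAuthorizationToken. A minimal, purely illustrative provider block might look like the following; the region is a placeholder.)
provider "aws" {
  region = "eu-west-1" # placeholder; use the region your ECR registry lives in
}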
Second, create a secret for your Kubernetes cluster:
resource "kubernetes_secret" "docker" {
metadata {
name = "docker-cfg"
namespace = kubernetes_namespace.test.metadata.0.name
}
data = {
".dockerconfigjson" = jsonencode({
auths = {
"${data.aws_ecr_authorization_token.token.proxy_endpoint}" = {
auth = "${data.aws_ecr_authorization_token.token.authorization_token}"
}
}
})
}
type = "kubernetes.io/dockerconfigjson"
}
Bear in mind that the example in the docs base64-encodes the username and password; the exported authorization_token attribute is already encoded the same way.
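(To make that concrete, here is a small illustrative equivalent built from the data source's user_name and password attributes with Terraform's base64encode; the local name is made up.)
locals {
  # Same value as data.aws_ecr_authorization_token.token.authorization_token:
  # the token is just base64("<user_name>:<password>").
  ecr_auth = base64encode("${data.aws_ecr_authorization_token.token.user_name}:${data.aws_ecr_authorization_token.token.password}")
}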
Third, once the secret is created, you can have your pods reference it via image_pull_secrets:
resource "kubernetes_deployment" "test" {
metadata {
name = "MyTestApp"
namespace = kubernetes_namespace.test.metadata.0.name
}
spec {
replicas = 2
selector {
match_labels = {
app = "MyTestApp"
}
}
template {
metadata {
labels = {
app = "MyTestApp"
}
}
spec {
image_pull_secrets {
name = "docker-cfg"
}
container {
image = "test-image-URL"
name = "test-image-name"
image_pull_policy = "Always"
port {
container_port = 4000
}
}
}
}
}
depends_on = [
kubernetes_secret.docker,
]
}
Gotcha: the token expires after 12 hours, so you should either write a bash script that updates the secret in the corresponding namespace, or write a Terraform provisioner that is re-triggered before the token expires.
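One possible shape for the provisioner option (not from the original answer, just a sketch) assumes the aws CLI and kubectl are available where Terraform runs and that adding the hashicorp/time and null providers is acceptable; the resource names are made up for illustration.
# Rotates shortly before the 12-hour ECR token lifetime elapses.
resource "time_rotating" "ecr_token" {
  rotation_hours = 10
}

# Re-creates the image pull secret whenever the rotation timestamp changes.
resource "null_resource" "refresh_ecr_secret" {
  triggers = {
    rotation = time_rotating.ecr_token.id
  }

  provisioner "local-exec" {
    command = <<-EOT
      kubectl create secret docker-registry docker-cfg \
        --namespace ${kubernetes_namespace.test.metadata.0.name} \
        --docker-server ${data.aws_ecr_authorization_token.token.proxy_endpoint} \
        --docker-username AWS \
        --docker-password "$(aws ecr get-login-password)" \
        --dry-run=client -o yaml | kubectl apply -f -
    EOT
  }
}
Note that this still only runs when terraform apply is executed; if pods need to keep pulling between applies, a cron job re-running the same kubectl commands is the more common workaround.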
I hope this was helpful.

Related

Deploying AWS Load Balancer Controller on EKS with Terraform

Trying to deploy aws-load-balancer-controller on Kubernetes.
I have the following TF code:
resource "kubernetes_deployment" "ingress" {
metadata {
name = "alb-ingress-controller"
namespace = "kube-system"
labels = {
app.kubernetes.io/name = "alb-ingress-controller"
app.kubernetes.io/version = "v2.2.3"
app.kubernetes.io/managed-by = "terraform"
}
}
spec {
replicas = 1
selector {
match_labels = {
app.kubernetes.io/name = "alb-ingress-controller"
}
}
strategy {
type = "Recreate"
}
template {
metadata {
labels = {
app.kubernetes.io/name = "alb-ingress-controller"
app.kubernetes.io/version = "v2.2.3"
}
}
spec {
dns_policy = "ClusterFirst"
restart_policy = "Always"
service_account_name = kubernetes_service_account.ingress.metadata[0].name
termination_grace_period_seconds = 60
container {
name = "alb-ingress-controller"
image = "docker.io/amazon/aws-alb-ingress-controller:v2.2.3"
image_pull_policy = "Always"
args = [
"--ingress-class=alb",
"--cluster-name=${local.k8s[var.env].esk_cluster_name}",
"--aws-vpc-id=${local.k8s[var.env].cluster_vpc}",
"--aws-region=${local.k8s[var.env].region}"
]
volume_mount {
mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
name = kubernetes_service_account.ingress.default_secret_name
read_only = true
}
}
volume {
name = kubernetes_service_account.ingress.default_secret_name
secret {
secret_name = kubernetes_service_account.ingress.default_secret_name
}
}
}
}
}
depends_on = [kubernetes_cluster_role_binding.ingress]
}
resource "kubernetes_ingress" "app" {
metadata {
name = "owncloud-lb"
namespace = "fargate-node"
annotations = {
"kubernetes.io/ingress.class" = "alb"
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"alb.ingress.kubernetes.io/target-type" = "ip"
}
labels = {
"app" = "owncloud"
}
}
spec {
backend {
service_name = "owncloud-service"
service_port = 80
}
rule {
http {
path {
path = "/"
backend {
service_name = "owncloud-service"
service_port = 80
}
}
}
}
}
depends_on = [kubernetes_service.app]
}
This works up to version 1.9 as required. As soon as I upgrade to version 2.2.3, the pod fails to update, and I get the following error on the pod:
{"level":"error","ts":1629207071.4385357,"logger":"setup","msg":"unable to create controller","controller":"TargetGroupBinding","error":"no matches for kind \"TargetGroupBinding\" in version \"elbv2.k8s.aws/v1beta1\""}
I have read the upgrade doc and have amended the IAM policy as they state, but they also mention:
updating the TargetGroupBinding CRDs
and that is where I am not sure how to do it using Terraform.
If I deploy on a new cluster (i.e., not an upgrade from 1.9), I get the same error.
With your Terraform code, you apply a Deployment and an Ingress resource, but you must also add the CustomResourceDefinition for the TargetGroupBinding custom resource.
This is described under "Add Controller to Cluster" in the Load Balancer Controller installation documentation, with examples provided for Helm and Kubernetes YAML.
Terraform has beta support for applying CRDs, including an example of deploying a CustomResourceDefinition.
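In the hashicorp/kubernetes provider (2.x) that support is the kubernetes_manifest resource. A minimal sketch, assuming the TargetGroupBinding CRD has been downloaded from the controller's release assets to a local file (the path here is made up) and that the file holds a single YAML document:
resource "kubernetes_manifest" "target_group_binding_crd" {
  # CRD definition saved locally from the aws-load-balancer-controller project.
  manifest = yamldecode(file("${path.module}/crds/targetgroupbinding.yaml"))
}
Note that kubernetes_manifest contacts the cluster at plan time, so the cluster must already be reachable when this is planned.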

How do I (or is it possible to) create Google Cloud Run Capacity settings using Terraform?

I am a Terraform newbie, but I have done a lot of research and cannot find an answer. I have no problem creating a Cloud Run instance using a simple Terraform file, and I can set environment variables and secrets with no problems. However, I cannot figure out how to configure other settings such as "Memory allocated", "CPU allocated", etc.
My Terraform file looks like:
resource "google_cloud_run_service" "myproject" {
project ="myproject"
name = "cloud-run-name"
location = "us-east1"
template {
spec {
containers {
image = "gcr.io/myproject/image"
env {
name = "VARIABLE1"
value = "variable1"
}
env {
name = "VARIABLE2"
value = "variable1"
}
}
}
}
traffic {
percent = 100
latest_revision = true
}
}
Memory and CPU go under template > spec > containers > resources; timeout and concurrency go under template > spec.
Here is an example:
template {
  spec {
    container_concurrency = var.concurrency
    timeout_seconds       = var.timeout

    containers {
      image = "gcr.io/myproject/image"
      ports {
        container_port = var.port
      }
      resources {
        limits = {
          cpu    = "2" # e.g. "1000m" or "2"; Cloud Run requires 2 CPUs for this much memory
          memory = "8000Mi"
        }
      }
    }
  }
}

How to persist files and/or folders into google compute engine persistent disk for use in GKE?

I have a GKE cluster and a Google Compute Engine persistent disk created by Terraform. This disk is used to create a Kubernetes persistent volume, which is then claimed and mounted to a container in a pod.
What I want to do is persist some files and folders on that persistent disk so that when it is mounted, my container is able to access them. From my research, it seems like the way to do it is to mount the disk to a Compute Engine instance (or even a container) and then copy the files over from my local machine.
Is there a better way? Preferably using Terraform.
This is how those resources are defined.
resource "google_compute_disk" "app" {
name = "app-${var.project_id}"
type = "pd-standard"
zone = var.zone
size = var.volume_size_gb
labels = {
environment = var.environment
}
}
resource "kubernetes_persistent_volume" "app" {
metadata {
name = "app-${var.project_id}"
}
spec {
access_modes = ["ReadWriteOnce"]
capacity = {
storage = "${var.volume_size_gb}Gi"
}
persistent_volume_source {
gce_persistent_disk {
pd_name = google_compute_disk.app.name
fs_type = "ext4"
}
}
storage_class_name = "standard"
}
}
resource "kubernetes_persistent_volume_claim" "app" {
metadata {
name = "app-${var.project_id}"
}
spec {
access_modes = ["ReadWriteOnce"]
resources {
requests = {
storage = "${var.volume_size_gb}Gi"
}
}
volume_name = kubernetes_persistent_volume.app.metadata.0.name
storage_class_name = "standard"
}
}
resource "kubernetes_deployment" "core_app" {
metadata {
name = "core-app"
labels = {
app = "core"
}
}
spec {
replicas = 1
selector {
match_labels = {
app = "core"
}
}
template {
metadata {
labels = {
app = "core"
}
}
spec {
volume {
name = "app-volume"
persistent_volume_claim {
claim_name = kubernetes_persistent_volume_claim.app.metadata.0.name
}
}
container {
name = "core-app"
image = "core-image:latest"
port {
container_port = 8080
}
volume_mount {
mount_path = "/mnt/extra-addons"
name = "app-volume"
sub_path = "addons"
}
readiness_probe {
tcp_socket {
port = "8069"
}
initial_delay_seconds = 5
period_seconds = 10
}
image_pull_policy = "Always"
}
}
}
}
}
That way is correct. If you want to initialize a disk:
You can start from a blank disk and then write your data to it by mounting it on a Compute Engine instance.
You can create the disk from a snapshot, or from an image in which the data has already been stored; see the sketch below.
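A minimal sketch of that second option, reusing the google_compute_disk from the question and assuming a pre-built image (the image name here is hypothetical):
resource "google_compute_disk" "app" {
  name = "app-${var.project_id}"
  type = "pd-standard"
  zone = var.zone
  size = var.volume_size_gb

  # Seed the disk from an image (or use `snapshot` instead) that already
  # contains the files and folders the pod expects to find.
  image = "app-data-image" # hypothetical custom image

  labels = {
    environment = var.environment
  }
}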
Now about Terraform: I have an opinion, but no real answer for you. Terraform is an IaC (Infrastructure as Code) tool, which means it is dedicated to the infrastructure.
In your case, you want to perform software deployment: not only the K8S resources that you deploy, but also the disk preparation, mounting, and so on. IMO, Terraform isn't the right tool for this. Other tools, like Ansible, are more suitable for software/OS management.
Note: I'm sure that with Terraform 0.13 or 0.14 you can run scripts to achieve what you want, but I don't think that is the right way.

Terraform 0.12 Creating ingress rules with template

I wonder if something like this could be done. I have functional code working, but as the number of pods in Kubernetes will grow quickly, I want to convert it into templates. This is an example of the creation of an nginx ingress rule for each WordPress site pod.
Right now, each pod has its own WordPress ingress entry:
resource "kubernetes_ingress" "ingress_nginx_siteA" {
metadata {
name = "ingress-nginx-siteA"
namespace = "default"
annotations = { "kubernetes.io/ingress.class" = "nginx", "nginx.ingress.kubernetes.io/configuration-snippet" = "modsecurity_rules '\n SecRuleEngine On\n SecRequestBodyAccess On\n SecAuditEngine RelevantOnly\n SecAuditLogParts ABCIJDEFHZ\n SecAuditLog /var/log/modsec_audit.log\n SecRuleRemoveById 932140\n';\n", "nginx.ingress.kubernetes.io/ssl-passthrough" = "true" }
}
spec {
tls {
hosts = ["siteA.test.com"]
secret_name = "wildcard-test-com"
}
rule {
host = "siteA.test.com"
http {
path {
path = "/"
backend {
service_name = "siteA"
service_port = "80"
}
}
}
}
}
}
Now I want to split this into a variables.tf that contains all the site variables, a template file rules.tpl, and a main.tf that orchestrates it all.
variables.tf:
variable "wordpress_site" {
type = map(object({
name = string
url = string
certificate = string
}))
default = {
siteA = {
name = siteA
url = siteA.test.com
certificate = wildcard-test-com
}
siteB = {
name = siteB
url = siteB.test.com
certificate = wildcard-test-com
}
}
}
rules.tpl:
%{ for name in wordpress_site.name ~}
resource "kubernetes_ingress" "ingress_nginx_${name}" {
  metadata {
    name      = "ingress-nginx-${name}"
    namespace = "default"
    annotations = { "kubernetes.io/ingress.class" = "nginx", "nginx.ingress.kubernetes.io/configuration-snippet" = "modsecurity_rules '\n SecRuleEngine On\n SecRequestBodyAccess On\n SecAuditEngine RelevantOnly\n SecAuditLogParts ABCIJDEFHZ\n SecAuditLog /var/log/modsec_audit.log\n SecRuleRemoveById 932140\n';\n", "nginx.ingress.kubernetes.io/ssl-passthrough" = "true" }
  }
  spec {
    tls {
      hosts       = ["${wordpress_site.url}"]
      secret_name = "${wordpress_site.certificate}"
    }
    rule {
      host = "${wordpress_site.url}"
      http {
        path {
          path = "/"
          backend {
            service_name = "${name}"
            service_port = "80"
          }
        }
      }
    }
  }
}
%{ endfor ~}
And now, in main.tf, what is the best way to mix it all together? I see that new functionality was added in TF 0.12, like the templatefile function, but I don't know whether I can use it like this:
main.tf:
templatefile(${path.module}/rules.tpl, ${module.var.wordpress_site})
Thanks all for your support!
The templatefile function is for generating strings from a template, not for generating Terraform configuration. Although it would be possible to render your given template to produce a string containing Terraform configuration, Terraform would just see the result as a normal string, not as more configuration to be evaluated.
Instead, what we need to get the desired result is resource for_each, which allows creating multiple instances from a single resource based on a map value.
resource "kubernetes_ingress" "nginx" {
for_each = var.wordpress_site
metadata {
name = "ingress-nginx-${each.value.name}"
namespace = "default"
annotations = {
"kubernetes.io/ingress.class" = "nginx"
"nginx.ingress.kubernetes.io/configuration-snippet" = <<-EOT
modsecurity_rules '
SecRuleEngine On
SecRequestBodyAccess On
SecAuditEngine RelevantOnly
SecAuditLogParts ABCIJDEFHZ
SecAuditLog /var/log/modsec_audit.log
SecRuleRemoveById 932140
';
EOT
"nginx.ingress.kubernetes.io/ssl-passthrough" = "true"
}
}
spec {
tls {
hosts = [each.value.url]
secret_name = each.value.certificate
}
rule {
host = each.value.url
http {
path {
path = "/"
backend {
service_name = each.value.name
service_port = "80"
}
}
}
}
}
}
When a resource has for_each set, Terraform will evaluate the given argument to obtain a map, and will then create one instance of the resource for each element in that map, with each one identified by its corresponding map key. In this case, assuming the default value of var.wordpress_site, you'll get two instances with the following addresses:
kubernetes_ingress.nginx["siteA"]
kubernetes_ingress.nginx["siteB"]
Inside the resource block, references starting with each.value refer to the values from the map, which in this case are the objects describing each site.

How do I convert a terraform file to provision/create VMs in vSphere instead of GCP?

After going through the URL below, I am able to create/destroy a sample VM on vSphere:
VMware vSphere Provider
Contents of different files:
provider.tf
provider "vsphere" {
user = "${var.vsphere_user}"
password = "${var.vsphere_password}"
vsphere_server = "${var.vsphere_server}"
# If you have a self-signed cert
allow_unverified_ssl = true
}
data "vsphere_datacenter" "dc" {
name = "XYZ400"
}
data "vsphere_datastore" "datastore" {
name = "datastore1"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
data "vsphere_resource_pool" "pool" {
name = "Pool1"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
data "vsphere_network" "network" {
name = "Network1"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
resource "vsphere_virtual_machine" "vmtest" {
name = "terraform-test"
resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
datastore_id = "${data.vsphere_datastore.datastore.id}"
num_cpus = 2
memory = 1024
guest_id = "other3xLinux64Guest"
wait_for_guest_net_timeout = 0
network_interface {
network_id = "${data.vsphere_network.network.id}"
}
disk {
label = "disk0"
size = 20
}
}
variables.tf
variable "vsphere_user" {
description = "vSphere user name"
default = "username"
}
variable "vsphere_password" {
description = "vSphere password"
default = "password"
}
variable "vsphere_server" {
description = "vCenter server FQDN or IP"
default = "vCenterURL"
}
test.tf
data "vsphere_virtual_machine" "template" {
name = "CENTOS7_Template"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
Commands used
terraform init
terraform validate
terraform plan
terraform apply
terraform destroy
My next need is to convert existing Terraform files that provision GCP VMs into equivalents for a VMware vSphere environment. If anybody has any pointers, please share.