I'm trying to create a google_compute_instance_template with a container image.
In the GUI, under Instance template, you have to check the checkbox:
"Deploy a container image to this VM instance"
After that, I can add the container image URI and, under the advanced options, add environment variables, args, etc.
Unfortunately, I couldn't find how to do this from Terraform.
Thanks for the help.
I think this Terraform module is what you're looking for: https://github.com/terraform-google-modules/terraform-google-container-vm
Example usage:
module "gce-container" {
source = "github.com/terraform-google-modules/terraform-google-container-vm"
version = "0.1.0"
container = {
image="gcr.io/google-samples/hello-app:1.0"
env = [
{
name = "TEST_VAR"
value = "Hello World!"
}
],
volumeMounts = [
{
mountPath = "/cache"
name = "tempfs-0"
readOnly = "false"
},
{
mountPath = "/persistent-data"
name = "data-disk-0"
readOnly = "false"
},
]
}
volumes = [
{
name = "tempfs-0"
emptyDir = {
medium = "Memory"
}
},
{
name = "data-disk-0"
gcePersistentDisk = {
pdName = "data-disk-0"
fsType = "ext4"
}
},
]
restart_policy = "Always"
}
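The module only renders the container declaration; you still attach its output to a VM (or an instance template) through instance metadata. Below is a minimal sketch of a standalone instance doing that, assuming the metadata_value and vm_container_label outputs used in the next answer; the instance name, zone, and machine type are illustrative.

resource "google_compute_instance" "container_vm" {
  name         = "container-vm-example" # illustrative
  machine_type = "e2-small"             # illustrative
  zone         = "us-central1-a"        # illustrative

  boot_disk {
    initialize_params {
      # Container-Optimized OS is needed to run the declared container.
      image = "cos-cloud/cos-stable"
    }
  }

  network_interface {
    network = "default"
    access_config {} # ephemeral external IP
  }

  metadata = {
    "gce-container-declaration" = module.gce-container.metadata_value
  }

  labels = {
    "container-vm" = module.gce-container.vm_container_label
  }
}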
The Managed Instance example in https://github.com/terraform-google-modules/terraform-google-container-vm has not been updated for Terraform 1.1.2, which is what I'm using, so I'm going to post my configuration.
module "gce-container" {
source = "terraform-google-modules/container-vm/google"
version = "~> 2.0"
container = {
image="gcr.io/project-name/image-name:tag"
securityContext = {
privileged : true
}
tty : true
env = [
{
name = "PORT"
value = "3000"
}
],
# Declare volumes to be mounted.
# This is similar to how docker volumes are declared.
volumeMounts = []
}
# Declare the Volumes which will be used for mounting.
volumes = []
restart_policy = "Always"
}
data "google_compute_image" "gce_container_vm_image" {
family = "cos-stable"
project = "cos-cloud"
}
resource "google_compute_instance_template" "my_instance_template" {
name = "instance-template"
description = "This template is used to create app server instances"
// the `gce-container-declaration` key is very important
metadata = {
"gce-container-declaration" = module.gce-container.metadata_value
}
labels = {
"container-vm" = module.gce-container.vm_container_label
}
machine_type = "e2-small"
can_ip_forward = false
scheduling {
automatic_restart = true
on_host_maintenance = "MIGRATE"
}
// Create a new boot disk from an image
disk {
source_image = data.google_compute_image.gce_container_vm_image.self_link
auto_delete = true
boot = true
disk_type = "pd-balanced"
disk_size_gb = 10
}
network_interface {
subnetwork = google_compute_subnetwork.my-network-subnet.name
// Add an ephemeral external IP.
access_config {
// Ephemeral IP
}
}
service_account {
# Compute Engine default service account
email = "577646309382-compute#developer.gserviceaccount.com"
scopes = ["cloud-platform"]
}
}
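If you need a fleet of these VMs rather than a single one, the same template can back a managed instance group. A minimal sketch follows; the group name, zone, and target size are assumptions.

resource "google_compute_instance_group_manager" "app_servers" {
  name               = "appserver-mig" # assumed
  base_instance_name = "appserver"
  zone               = "us-central1-a" # assumed
  target_size        = 2

  version {
    instance_template = google_compute_instance_template.my_instance_template.id
  }
}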
My error:
Invalid value for "path" parameter: no file exists at "cis-userdata.sh"; this function works only with files that are distributed as part of the configuration source code, so if this file will be created by a resource in this configuration you must instead obtain this result from an attribute of that resource.
My code:
EC2.tf
# ------------------------------------------------------------------------------------------------------------
# ------------------------------- EC2 Module with Latest Ubuntu AMI ------------------------------------------
# ------------------------------ No Network Interfaces. Imports Only -----------------------------------------
# ------------------------------------------------------------------------------------------------------------
resource "aws_instance" "ec2" {
ami = data.aws_ami.ubuntu.id
instance_type = var.instance_type
iam_instance_profile = var.iam_instance_profile
monitoring = var.monitoring
disable_api_termination = var.disable_api_termination
ebs_optimized = true
key_name = var.key_name
vpc_security_group_ids = var.security_groups
subnet_id = var.subnet_id
user_data = templatefile(var.template, {
HOSTNAME = var.name,
linuxPlatform = "",
isRPM = "",
})
metadata_options {
http_endpoint = "enabled"
http_tokens = "required"
http_put_response_hop_limit = 1
}
tags = {
Creator = var.creator
"Cost Center" = var.cost_center
Stack = var.stack
Name = var.name
ControlledByAnsible = var.controlled_by_ansible
ConfigAnsible = var.configansible
}
root_block_device {
delete_on_termination = true
encrypted = true
kms_key_id = var.kms_key_arn # Arn instead of id to avoid forced replacement.
volume_size = 16
tags = {
Creator = var.creator
"Cost Center" = var.cost_center
Stack = var.stack
Name = var.name
}
}
lifecycle {
ignore_changes = [
ami,
user_data,
root_block_device,
]
}
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["xxxx"] # Canonical
}
variables.tf
variable "name" {
default = "xxx-prod"
}
variable "instance_type" {
default = "m5.large"
}
variable "public_ip" {
default = false
}
variable "instance_id" {
default = ""
}
variable "stateManager" {
default = ""
}
variable "iam_instance_profile" {
default = "infra-" # Required for systems manager
}
variable "security_groups" {
default = ["sg-xxxx"] #
}
variable "subnet_id" {
default = "subnet-xxxx"
}
variable "availability_zone" {
default = "us-east-1a"
}
variable "disable_api_termination" {
default = "true"
}
variable "kms_key_arn" {
default = "arn:aws:kms:us-east-1:xxxxx:key/xxxx"
}
variable "creator" {
default = "xxx#xxx.com"
}
variable "cost_center" {
default = "xxx"
}
variable "stack" {
default = "Production"
}
variable "controlled_by_ansible" {
default = "False"
}
variable "country" {
default = ""
}
variable "ec2_number" {
default = "01"
}
variable "monitoring" {
default = true
}
variable "device" {
default = "/dev/xvda"
}
variable "template" {
default = ("cis-userdata.sh")
}
variable "key_name" {
default = "xxx"
}
variable "image_id" {
default = "ami-xxx"
}
variable "volume_size" {
default = 16
}
resource "aws_instance" "ec2" {
ami = data.aws_ami.ubuntu.id
instance_type = var.instance_type
iam_instance_profile = var.iam_instance_profile
monitoring = var.monitoring
disable_api_termination = var.disable_api_termination
ebs_optimized = true
key_name = var.key_name
vpc_security_group_ids = var.security_groups
subnet_id = var.subnet_id
user_data = templatefile("${path.module}/${var.template}", {
HOSTNAME = var.name,
linuxPlatform = "",
isRPM = "",
})
metadata_options {
http_endpoint = "enabled"
http_tokens = "required"
http_put_response_hop_limit = 1
}
tags = {
Creator = var.creator
"Cost Center" = var.cost_center
Stack = var.stack
Name = var.name
ControlledByAnsible = var.controlled_by_ansible
ConfigAnsible = var.configansible
}
root_block_device {
delete_on_termination = true
encrypted = true
kms_key_id = var.kms_key_arn # Arn instead of id to avoid forced replacement.
volume_size = 16
tags = {
Creator = var.creator
"Cost Center" = var.cost_center
Stack = var.stack
Name = var.name
}
}
lifecycle {
ignore_changes = [
ami,
user_data,
root_block_device,
]
}
}
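path.module resolves relative to the directory of the module containing the expression, so the file is found regardless of the working directory Terraform is run from. For reference, a small sketch of the path references Terraform exposes (purely illustrative):

locals {
  # Resolves next to this module's .tf files (the fix used above).
  template_in_module = "${path.module}/${var.template}"

  # Resolves relative to the root configuration directory.
  template_in_root = "${path.root}/${var.template}"
}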
cis-userdata.sh contains the user_data for setting up the instance, as follows.
#!/bin/bash
sudo apt update
cd /home/ubuntu
sudo apt-get purge 'apache2*'
sudo apt install -y apache2
sudo ufw allow 'Apache Full'
sudo systemctl enable apache2
sudo systemctl start apache2
sudo echo "Hello World from $(hostname -f)" > /var/www/html/index.html
For more, visit this answer https://stackoverflow.com/a/62599263/11145307
I have a requirement to create multiple VMs in GCP using the Instance Template module located here:
https://github.com/terraform-google-modules/terraform-google-vm/tree/master/modules/instance_template
My Instance Template code looks like this:
module "db_template" {
source = "terraform-google-modules/vm/google//modules/instance_template"
version = "7.8.0"
name_prefix = "${var.project_short_name}-db-template"
machine_type = var.app_machine_type
disk_size_gb = 20
source_image = "debian-10-buster-v20220719"
source_image_family = "debian-10"
source_image_project = "debian-cloud"
additional_disks = var.additional_disks
labels = {
costing = "db",
inventory = "gcp",
}
network = var.network
subnetwork = var.subnetwork
access_config = []
service_account = {
email = var.service_account_email
scopes = ["cloud-platform"]
}
tags = ["compute"]
}
In my tfvars I have this:
additional_disks = [
  {
    disk_name    = "persistent-disk-1"
    device_name  = "persistent-disk-1"
    auto_delete  = true
    boot         = false
    disk_size_gb = 50
    disk_type    = "pd-standard"
    interface    = "SCSI"
    disk_labels  = {}
  }
]
However, when my code has multiple VMs to deploy with this template, only the first VM gets deployed; the subsequent VMs error out with this message:
Error: Error creating instance: googleapi: Error 409: The resource 'projects/<PATH>/persistent-disk-1' already exists, alreadyExists
I understand what is happening, but I don't know how to fix it. The subsequent VMs cannot be created because the additional disk name has already been taken by the first VM. I thought the whole point of using an instance template was that the same template could be reused to create multiple VMs of that type.
But it seems I have to do some additional coding to get multiple VMs deployed with this template.
Can anyone suggest how to do this?
I ultimately got this working with for_each constructs:
locals {
  app_servers = ["inbox", "batch", "transfer", "tools", "elastic", "artemis"]
  db_servers  = ["inboxdb", "batchdb", "transferdb", "gatewaydb", "artemisdb"]
}

resource "google_compute_disk" "db_add_disk" {
  for_each = toset(local.db_servers)

  name = "${each.value}-additional-disk"
  type = "pd-standard" // pd-ssd
  zone = var.zone
  size = 50
  // interface = "SCSI"

  labels = {
    environment = "dev"
  }

  physical_block_size_bytes = 4096
}

module "db_template" {
  source  = "terraform-google-modules/vm/google//modules/instance_template"
  version = "7.8.0"

  name_prefix          = "${var.project_short_name}-db-template"
  machine_type         = var.app_machine_type
  disk_size_gb         = 20
  source_image         = "debian-10-buster-v20220719"
  source_image_family  = "debian-10"
  source_image_project = "debian-cloud"

  labels = {
    costing   = "db",
    inventory = "gcp",
  }

  network       = var.network
  subnetwork    = var.subnetwork
  access_config = []

  service_account = {
    email  = var.service_account_email
    scopes = ["cloud-platform"]
  }

  tags = ["compute"]
}

resource "google_compute_instance_from_template" "db_server-1" {
  for_each = toset(local.db_servers)

  name                     = "${var.project_short_name}-${each.value}-1"
  zone                     = var.zone
  source_instance_template = module.db_template.self_link

  // Override fields from instance template
  labels = {
    costing   = "db",
    inventory = "gcp",
    component = each.value
  }

  lifecycle {
    ignore_changes = [attached_disk]
  }
}

resource "google_compute_attached_disk" "db_add_disk" {
  for_each = toset(local.db_servers)

  disk     = google_compute_disk.db_add_disk[each.key].id
  instance = google_compute_instance_from_template.db_server-1[each.key].id
}
We are using the GCP network and GKE modules in Terraform to create the VPC and then the GKE cluster. Now we would like to create a firewall rule whose target is the GKE nodes. We don't want to update the existing auto-created firewall rules, because the naming format GCP uses for them might change in the future and break our logic. That's why we need a separate firewall rule, along with a separate network tag, pointing to the GKE nodes.
Module info
VPC
module "vpc" {
source = "terraform-google-modules/network/google"
#version = "~> 2.5"
project_id = var.project_id
network_name = "${var.project_name}-${var.env_name}-vpc"
subnets = [
{
subnet_name = "${var.project_name}-${var.env_name}-subnet"
subnet_ip = "${var.subnetwork_cidr}"
subnet_region = var.region
}
]
secondary_ranges = {
"${var.project_name}-${var.env_name}-subnet" = [
{
range_name = "${var.project_name}-gke-pod-ip-range"
ip_cidr_range = "${var.ip_range_pods_cidr}"
},
{
range_name = "${var.project_name}-gke-service-ip-range"
ip_cidr_range = "${var.ip_range_services_cidr}"
}
]
}
}
GKE_CLUSTER
module "gke" {
source = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
project_id = var.project_id
name = "${var.project_name}-gke-${var.env_name}-cluster"
regional = true
region = var.region
zones = ["${var.region}-a", "${var.region}-b", "${var.region}-c"]
network = module.vpc.network_name
subnetwork = module.vpc.subnets_names[0]
ip_range_pods = "${var.project_name}-gke-pod-ip-range"
ip_range_services = "${var.project_name}-gke-service-ip-range"
http_load_balancing = false
network_policy = false
horizontal_pod_autoscaling = true
filestore_csi_driver = false
enable_private_endpoint = false
enable_private_nodes = true
master_ipv4_cidr_block = "${var.control_plane_cidr}"
istio = false
cloudrun = false
dns_cache = false
node_pools = [
{
name = "${var.project_name}-gke-node-pool"
machine_type = "${var.machine_type}"
node_locations = "${var.region}-a,${var.region}-b,${var.region}-c"
min_count = "${var.node_pools_min_count}"
max_count = "${var.node_pools_max_count}"
disk_size_gb = "${var.node_pools_disk_size_gb}"
# local_ssd_count = 0
# spot = false
# local_ssd_ephemeral_count = 0
# disk_type = "pd-standard"
# image_type = "COS_CONTAINERD"
# enable_gcfs = false
auto_repair = true
auto_upgrade = true
# service_account = "project-service-account#<PROJECT ID>.iam.gserviceaccount.com"
preemptible = false
# initial_node_count = 80
}
]
# node_pools_tags = {
# all = []
# default-node-pool = ["default-node-pool",]
# }
}
FIREWALL
module "firewall_rules" {
source = "terraform-google-modules/network/google//modules/firewall-rules"
project_id = var.project_id
network_name = module.vpc.network_name
rules = [{
name = "allow-istio-ingress"
description = null
direction = "INGRESS"
priority = null
ranges = ["${var.control_plane_cidr}"]
source_tags = null
source_service_accounts = null
target_tags = null
target_service_accounts = null
allow = [{
protocol = "tcp"
ports = ["15017"]
}]
deny = []
log_config = {
metadata = "INCLUDE_ALL_METADATA"
}
}]
depends_on = [module.gke]
}
Although the GKE module has a tags property for defining tags explicitly, we still need help instantiating it properly and then referencing the same tag value in the firewall module.
I found a working solution to the question posted earlier. Referring to the GKE module snippet above, we only need to modify the part below, and an explicit network tag will be created for all the nodes in that node pool.
module "gke" {
.
.
node_pools = [
{
name = "gke-node-pool"
.
.
.
},
]
node_pools_tags = {
"gke-node-pool" = "gke-node-pool-network-tag"
}
}
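With that tag on the node pool, the firewall rule from the earlier snippet can target the GKE nodes explicitly. The sketch below mirrors the rule above with only target_tags changed; the tag value matches the one declared in node_pools_tags.

module "firewall_rules" {
  source       = "terraform-google-modules/network/google//modules/firewall-rules"
  project_id   = var.project_id
  network_name = module.vpc.network_name

  rules = [{
    name                    = "allow-istio-ingress"
    description             = null
    direction               = "INGRESS"
    priority                = null
    ranges                  = [var.control_plane_cidr]
    source_tags             = null
    source_service_accounts = null
    target_tags             = ["gke-node-pool-network-tag"]
    target_service_accounts = null

    allow = [{
      protocol = "tcp"
      ports    = ["15017"]
    }]

    deny = []

    log_config = {
      metadata = "INCLUDE_ALL_METADATA"
    }
  }]

  depends_on = [module.gke]
}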
I'm trying to deploy the aws-load-balancer-controller on Kubernetes.
I have the following Terraform code:
resource "kubernetes_deployment" "ingress" {
metadata {
name = "alb-ingress-controller"
namespace = "kube-system"
labels = {
app.kubernetes.io/name = "alb-ingress-controller"
app.kubernetes.io/version = "v2.2.3"
app.kubernetes.io/managed-by = "terraform"
}
}
spec {
replicas = 1
selector {
match_labels = {
app.kubernetes.io/name = "alb-ingress-controller"
}
}
strategy {
type = "Recreate"
}
template {
metadata {
labels = {
app.kubernetes.io/name = "alb-ingress-controller"
app.kubernetes.io/version = "v2.2.3"
}
}
spec {
dns_policy = "ClusterFirst"
restart_policy = "Always"
service_account_name = kubernetes_service_account.ingress.metadata[0].name
termination_grace_period_seconds = 60
container {
name = "alb-ingress-controller"
image = "docker.io/amazon/aws-alb-ingress-controller:v2.2.3"
image_pull_policy = "Always"
args = [
"--ingress-class=alb",
"--cluster-name=${local.k8s[var.env].esk_cluster_name}",
"--aws-vpc-id=${local.k8s[var.env].cluster_vpc}",
"--aws-region=${local.k8s[var.env].region}"
]
volume_mount {
mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
name = kubernetes_service_account.ingress.default_secret_name
read_only = true
}
}
volume {
name = kubernetes_service_account.ingress.default_secret_name
secret {
secret_name = kubernetes_service_account.ingress.default_secret_name
}
}
}
}
}
depends_on = [kubernetes_cluster_role_binding.ingress]
}
resource "kubernetes_ingress" "app" {
metadata {
name = "owncloud-lb"
namespace = "fargate-node"
annotations = {
"kubernetes.io/ingress.class" = "alb"
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"alb.ingress.kubernetes.io/target-type" = "ip"
}
labels = {
"app" = "owncloud"
}
}
spec {
backend {
service_name = "owncloud-service"
service_port = 80
}
rule {
http {
path {
path = "/"
backend {
service_name = "owncloud-service"
service_port = 80
}
}
}
}
}
depends_on = [kubernetes_service.app]
}
This works up to version 1.9 as required. As soon as I upgrade to version 2.2.3, the pod fails to update, and the pod logs the following error:
{"level":"error","ts":1629207071.4385357,"logger":"setup","msg":"unable to create controller","controller":"TargetGroupBinding","error":"no matches for kind \"TargetGroupBinding\" in version \"elbv2.k8s.aws/v1beta1\""}
I have read the upgrade docs and have amended the IAM policy as they state, but they also mention:
updating the TargetGroupBinding CRDs
and that is where I'm not sure how to do it using Terraform.
If I try to deploy on a new cluster (i.e. not an upgrade from 1.9), I get the same error.
With your Terraform code you apply a Deployment and an Ingress resource, but you must also add the CustomResourceDefinition for the TargetGroupBinding custom resource.
This is described under "Add Controller to Cluster" in the Load Balancer Controller installation documentation, with examples provided for Helm and Kubernetes YAML.
Terraform has beta support for applying CRDs, including an example of deploying a CustomResourceDefinition.
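A minimal sketch of that approach, assuming the TargetGroupBinding CRD has been saved locally as a single-document YAML file (for example, copied from the controller's release manifests) and that your Kubernetes provider version supports the kubernetes_manifest resource:

resource "kubernetes_manifest" "targetgroupbinding_crd" {
  # crds/targetgroupbinding.yaml is an assumed local copy of the CRD
  # shipped with the aws-load-balancer-controller release.
  manifest = yamldecode(file("${path.module}/crds/targetgroupbinding.yaml"))
}

Adding this resource to the deployment's depends_on ensures the CRD exists before the controller starts looking for it.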
I have an issue where every time I create a new instance template, Terraform automatically deletes the previous one I created. This occurs even if auto_delete = false and the name is different. My code is below.
// Create instance template (replace with created image)
resource "google_compute_instance_template" "default" {
  name        = "appserver-template-020103"
  description = "This template is used to create app server instances."

  tags = ["foo", "bar"]

  labels = {
    environment = "dev"
  }

  instance_description = "description assigned to instances"
  machine_type         = "f1-micro"
  can_ip_forward       = false

  scheduling {
    automatic_restart   = true
    on_host_maintenance = "MIGRATE"
  }

  // Create a new boot disk from an image
  disk {
    source_image = "ubuntu-1604-xenial-v20180126"
    auto_delete  = false
    boot         = false
  }

  network_interface {
    network = "default"
  }

  metadata {
    foo = "bar"
  }

  service_account {
    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
  }
}
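For context on the behavior described above: google_compute_instance_template is immutable, so changing any of its arguments forces Terraform to destroy and recreate the resource it tracks in state, which is why the previous template disappears. A commonly used pattern (a sketch, not taken from the original post) is to let Terraform generate unique names and create the replacement before destroying the old one:

resource "google_compute_instance_template" "default" {
  name_prefix  = "appserver-template-" # Terraform appends a unique suffix
  machine_type = "f1-micro"

  disk {
    source_image = "ubuntu-1604-xenial-v20180126"
    auto_delete  = false
    boot         = true # a template needs a boot disk
  }

  network_interface {
    network = "default"
  }

  lifecycle {
    create_before_destroy = true
  }
}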