While running terraform plan I am getting an error. This is my .tf file:
provider "vsphere" {
user = "aaa"
password = "aaa"
vsphere_server = "172.22.1.139"
allow_unverified_ssl = "true"
}
resource "vsphere_folder" "frontend" {
path = "VirtualMachines"
datacenter = "A2MS0110-VMFS5"
}
resource "vsphere_virtual_machine" "FIRST_VM" {
name = "FIRST_VM"
vcpu = 1
memory = 2048
datacenter = "A2MS0110-VMFS5"
network_interface {
label = "VM Network"
}
disk {
datastore = "A2MS0110-VMFS5"
vmdk = "/demo2"
}
}
I am getting the following error:
* provider.vsphere: Error setting up client: Post https://172.22.1.139/sdk: net/http: TLS handshake timeout
Is this a network problem or a configuration error?
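In Go's net/http, a TLS handshake timeout means the TCP connection was established but the handshake never completed, which usually points at the network path (a firewall or proxy in between) rather than at the Terraform configuration itself. One way to check from the machine running Terraform, assuming the vSphere SDK endpoint is on the standard HTTPS port:
$ openssl s_client -connect 172.22.1.139:443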
I'm trying to deploy a managed instance group with a load balancer which will be running a web server container.
The container image is stored in Google Artifact Registry.
If I manually create a VM and define the container usage, it is successfully able to pull and activate the container.
When I try to create the managed instance group via terraform, the VM does not pull nor activate the container.
When I ssh to the VM and try to manually pull the container, I get the following error:
Error response from daemon: Get https://us-docker.pkg.dev/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
The only notable difference between the VM I created manually and the VM created by terraform is that the manual VM has an external IP address. I'm not sure if this matters, nor how to add one in the terraform file.
Below is my main.tf file. Can anyone tell me what I'm doing wrong?
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.53.0"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~> 4.0"
    }
  }
}
provider "google" {
credentials = file("compute_lab2-347808-dab33a244827.json")
project = "lab2-347808"
region = "us-central1"
zone = "us-central1-f"
}
locals {
google_load_balancer_ip_ranges = [
"130.211.0.0/22",
"35.191.0.0/16",
]
}
module "gce-container" {
source = "terraform-google-modules/container-vm/google"
version = "~> 2.0"
cos_image_name = "cos-stable-77-12371-89-0"
container = {
image = "us-docker.pkg.dev/lab2-347808/web-server-repo/web-server-image"
volumeMounts = [
{
mountPath = "/cache"
name = "tempfs-0"
readOnly = false
},
]
}
volumes = [
{
name = "tempfs-0"
emptyDir = {
medium = "Memory"
}
},
]
restart_policy = "Always"
}
resource "google_compute_firewall" "rules" {
project = "lab2-347808"
name = "allow-web-ports"
network = "default"
description = "Opens the relevant ports for the web server"
allow {
protocol = "tcp"
ports = ["80", "8080", "5432", "5000", "443"]
}
source_ranges = ["0.0.0.0/0"]
#source_ranges = local.google_load_balancer_ip_ranges
target_tags = ["web-server-ports"]
}
resource "google_compute_autoscaler" "default" {
name = "web-autoscaler"
zone = "us-central1-f"
target = google_compute_instance_group_manager.default.id
autoscaling_policy {
max_replicas = 10
min_replicas = 1
cooldown_period = 60
cpu_utilization {
target = 0.5
}
}
}
resource "google_compute_instance_template" "default" {
name = "my-web-server-template"
machine_type = "e2-medium"
can_ip_forward = false
tags = ["ssh", "http-server", "https-server", "web-server-ports"]
disk {
#source_image = "cos-cloud/cos-73-11647-217-0"
source_image = module.gce-container.source_image
}
network_interface {
network = "default"
}
service_account {
#scopes = ["userinfo-email", "compute-ro", "storage-ro"]
scopes = ["cloud-platform"]
}
metadata = {
gce-container-declaration = module.gce-container.metadata_value
}
}
resource "google_compute_target_pool" "default" {
name = "web-server-target-pool"
}
resource "google_compute_instance_group_manager" "default" {
name = "web-server-igm"
zone = "us-central1-f"
version {
instance_template = google_compute_instance_template.default.id
name = "primary"
}
target_pools = [google_compute_target_pool.default.id]
base_instance_name = "web-server-instance"
}
Your VM template has no public IP, so the instances it creates can't reach public IPs.
However, you have 3 ways to solve that issue:
Add a public IP to the VM template (bad idea)
Add a Cloud NAT on your VM's private IP range to allow outgoing traffic to the internet (good idea; a sketch follows this list)
Activate Private Google Access in the subnet that hosts the VM's private IP range. It creates a bridge to access Google services without having a public IP (my preferred idea) -> https://cloud.google.com/vpc/docs/configure-private-google-access
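A minimal sketch of the Cloud NAT option (the resource names here are hypothetical, and it assumes the default network and the us-central1 region used elsewhere in this question):
resource "google_compute_router" "nat_router" {
  name    = "nat-router"
  network = "default"
  region  = "us-central1"
}

resource "google_compute_router_nat" "nat" {
  name                               = "nat-gateway"
  router                             = google_compute_router.nat_router.name
  region                             = google_compute_router.nat_router.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}
With this in place the instances keep their private addresses but can still pull from us-docker.pkg.dev.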
Apparently I was missing the following access_config inside network_interface of the google_compute_instance_template:
network_interface {
  network = "default"

  access_config {
    network_tier = "PREMIUM"
  }
}
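Note that an empty access_config {} block is enough to get an ephemeral external IP; network_tier is optional and defaults to PREMIUM.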
I can't create a VM on GCP using terraform. I want to attach a KMS key via the attribute "kms_key_self_link", but while the machine is being created, time passes and after 2 minutes of waiting (in every case) a 503 error appears. I'm sharing my script below; it is worth noting that with the attribute "kms_key_self_link" disabled, the script runs fine.
data "google_compute_image" "tomcat_centos" {
name = var.vm_img_name
}
data "google_kms_key_ring" "keyring" {
name = "keyring-example"
location = "global"
}
data "google_kms_crypto_key" "cmek-key" {
name = "crypto-key-example"
key_ring = data.google_kms_key_ring.keyring.self_link
}
data "google_project" "project" {}
resource "google_kms_crypto_key_iam_member" "key_user" {
crypto_key_id = data.google_kms_crypto_key.cmek-key.id
role = "roles/owner"
member = "serviceAccount:service-${data.google_project.project.number}#compute-system.iam.gserviceaccount.com"
}
resource "google_compute_instance" "vm-hsbc" {
name = var.vm_name
machine_type = var.vm_machine_type
zone = var.zone
allow_stopping_for_update = true
can_ip_forward = false
deletion_protection = false
boot_disk {
kms_key_self_link = data.google_kms_crypto_key.cmek-key.self_link
initialize_params {
type = var.disk_type
#GCP-CE-CTRL-22
image = data.google_compute_image.tomcat_centos.self_link
}
}
network_interface {
network = var.network
}
#GCP-CE-CTRL-2-...-5, 7, 8
service_account {
email = var.service_account_email
scopes = var.scopes
}
#GCP-CE-CTRL-31
shielded_instance_config {
enable_secure_boot = true
enable_vtpm = true
enable_integrity_monitoring = true
}
}
And this is the complete error:
Error creating instance: googleapi: Error 503: Internal error. Please try again or contact Google Support. (Code: '5C54C97EB5265.AA25590.F4046F68'), backendError
I solved this issue by granting my compute service account the encrypter/decrypter role through this resource:
resource "google_kms_crypto_key_iam_binding" "key_iam_binding" {
crypto_key_id = data.google_kms_crypto_key.cmek-key.id
role = "roles/cloudkms.cryptoKeyEncrypter"
members = [
"serviceAccount:service-${data.google_project.gcp_project.number}#compute-system.iam.gserviceaccount.com",
]
}
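Note that the role Google's CMEK documentation names for the Compute Engine service agent is roles/cloudkms.cryptoKeyEncrypterDecrypter; the agent has to decrypt the disk as well as encrypt it, so that broader role is the safer choice here.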
I'm getting the below error with my terraform config.
Error: Post "https://35.224.178.141/api/v1/namespaces": x509: certificate signed by unknown authority
on main.tf line 66, in resource "kubernetes_namespace" "example":
66: resource "kubernetes_namespace" "example" {
Here is my config; all I want to do for now is create a cluster, auth with it, and create a namespace.
I have searched everywhere and can't see where anyone else has run into this problem.
It is most likely something stupid I am doing. I thought this would be relatively simple, but it's turning out to be a pain. I don't want to have to wrap gcloud commands in my build script.
provider "google" {
project = var.project
region = var.region
zone = var.zone
credentials = "google-key.json"
}
terraform {
backend "gcs" {
bucket = "tf-state-bucket-devenv"
prefix = "terraform"
credentials = "google-key.json"
}
}
resource "google_container_cluster" "my_cluster" {
name = var.kube-clustername
location = var.zone
remove_default_node_pool = true
initial_node_count = 1
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
resource "google_container_node_pool" "primary_preemptible_nodes" {
name = var.kube-poolname
location = var.zone
cluster = google_container_cluster.my_cluster.name
node_count = var.kube-nodecount
node_config {
preemptible = var.kube-preemptible
machine_type = "n1-standard-1"
disk_size_gb = 10
disk_type = "pd-standard"
metadata = {
disable-legacy-endpoints = "true",
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
}
}
data "google_client_config" "provider" {}
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.my_cluster.endpoint}"
cluster_ca_certificate = "{base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)}"
token = "{data.google_client_config.provider.access_token}"
}
resource "kubernetes_namespace" "example" {
metadata {
name = "my-first-namespace"
}
}
TL;DR
Change the provider definition to:
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.my_cluster.endpoint}"
cluster_ca_certificate = base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.provider.access_token
}
What changed?
The quotes and braces were removed from the cluster_ca_certificate and token values. Without a leading $, "{...}" is not an interpolation at all, just a literal string, so the provider was receiving the expression text instead of the actual certificate and token; in Terraform 0.12+ you can write the expressions directly, with no quoting.
I included the explanation below.
I used your original terraform file and I received the same error as you. I modified (simplified) your terraform file and added the output definitions:
resource "google_container_cluster" "my_cluster" {
OMMITED
}
data "google_client_config" "provider" {}
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.my_cluster.endpoint}"
cluster_ca_certificate = "{base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)}"
token = "{data.google_client_config.provider.access_token}"
}
output "cert" {
value = "{base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)}"
}
output "token" {
value = "{data.google_client_config.provider.access_token}"
}
Running the above file showed:
$ terraform apply --auto-approve
data.google_client_config.provider: Refreshing state...
google_container_cluster.my_cluster: Creating...
google_container_cluster.my_cluster: Creation complete after 2m48s [id=projects/PROJECT-NAME/locations/europe-west3-c/clusters/gke-terraform]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
cert = {base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)}
token = {data.google_client_config.provider.access_token}
As you can see, the values were treated by the provider as literal strings rather than being evaluated to produce the needed values. To fix that you will need to change the provider definition to:
cluster_ca_certificate = base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.provider.access_token
Running $ terraform apply --auto-approve once again:
data.google_client_config.provider: Refreshing state...
google_container_cluster.my_cluster: Creation complete after 3m18s [id=projects/PROJECT-NAME/locations/europe-west3-c/clusters/gke-terraform]
kubernetes_namespace.example: Creating...
kubernetes_namespace.example: Creation complete after 0s [id=my-first-namespace]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
cert = -----BEGIN CERTIFICATE-----
MIIDKzCCAhOgAwIBAgIRAO2bnO3FU6HZ0T2u3XBN1jgwDQYJKoZIhvcNAQELBQAw
<--OMITTED-->
a9Ybow5tZGu+fqvFHnuCg/v7tln/C3nVuTbwa4StSzujMsPxFv4ONVl4F4UaGw0=
-----END CERTIFICATE-----
token = ya29.a0AfH6SMBx<--OMITTED-->fUvCeFg
As you can see the namespace was created. You can check it by running:
$ gcloud container clusters get-credentials CLUSTER-NAME --zone=ZONE
$ kubectl get namespace my-first-namespace
Output:
NAME STATUS AGE
my-first-namespace Active 3m14s
Additional resources:
Terraform.io: Docs: Configuration: Outputs
Terraform.io: Docs: Configuration: Variables
After going through the below URL, I am able to create/destroy a sample VM on vSphere:
VMware vSphere Provider
Contents of different files:
provider.tf
provider "vsphere" {
user = "${var.vsphere_user}"
password = "${var.vsphere_password}"
vsphere_server = "${var.vsphere_server}"
# If you have a self-signed cert
allow_unverified_ssl = true
}
data "vsphere_datacenter" "dc" {
name = "XYZ400"
}
data "vsphere_datastore" "datastore" {
name = "datastore1"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
data "vsphere_resource_pool" "pool" {
name = "Pool1"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
data "vsphere_network" "network" {
name = "Network1"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
resource "vsphere_virtual_machine" "vmtest" {
name = "terraform-test"
resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
datastore_id = "${data.vsphere_datastore.datastore.id}"
num_cpus = 2
memory = 1024
guest_id = "other3xLinux64Guest"
wait_for_guest_net_timeout = 0
network_interface {
network_id = "${data.vsphere_network.network.id}"
}
disk {
label = "disk0"
size = 20
}
}
variables.tf
variable "vsphere_user" {
description = "vSphere user name"
default = "username"
}
variable "vsphere_password" {
description = "vSphere password"
default = "password"
}
variable "vsphere_server" {
description = "vCenter server FQDN or IP"
default = "vCenterURL"
}
test.tf
data "vsphere_virtual_machine" "template" {
name = "CENTOS7_Template"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
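The template data source above is not referenced by any resource yet; a typical next step is to clone it into a new VM. A minimal sketch under that assumption (the resource name and sizes are placeholders, and it reuses the data sources from provider.tf):
resource "vsphere_virtual_machine" "from_template" {
  name             = "terraform-clone-test"
  resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"

  num_cpus = 2
  memory   = 2048
  guest_id = "${data.vsphere_virtual_machine.template.guest_id}"

  network_interface {
    network_id = "${data.vsphere_network.network.id}"
  }

  # Size the boot disk from the template's first disk
  disk {
    label = "disk0"
    size  = "${data.vsphere_virtual_machine.template.disks.0.size}"
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template.id}"
  }
}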
Commands used
terraform init
terraform validate
terraform plan
terraform apply
terraform destroy
My next need is to convert existing terraform files provisioning GCP VMs into ones for the VMware vSphere environment. If anybody has any pointers, please share.
I want to connect my "private" network (instances without external IPs) to the internet via terraform, but I'm getting this error:
google_compute_router_nat.advanced-nat: Error patching router us-west2/my-router1: googleapi: Error 400: Invalid value for field 'resource.nats[0].subnetworks[0].name': 'test-us-west2-private-subnet'. The URL is malformed.
More details:
Reason: invalid, Message: Invalid value for field 'resource.nats[0].subnetworks[0].name': 'test-us-west2-private-subnet'. The URL is malformed.
Reason: invalid, Message: Invalid value for field 'resource.nats[0].natIps[0]': '10.0.0.0/16'. The URL is malformed.
Task: migrate a classic scheme from AWS to GCP: one VPC network, a bastion host in the public subnet, and all machines in the private subnet without external IPs, using a NAT gateway for the private subnet.
resource "google_compute_router" "router" {
name = "my-router1"
network = "${var.gcp_project_name}-net"
bgp {
asn = 64514
}
}
resource "google_compute_router_nat" "advanced-nat" {
name = "nat-1"
router = "${google_compute_router.router.name}"
region = "us-west2"
nat_ip_allocate_option = "MANUAL_ONLY"
nat_ips = ["10.0.0.0/16"]
source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"
subnetwork {
name = "${var.gcp_project_name}-${var.gcp_region_name}-private-subnet"
}
}
# VPC
resource "google_compute_network" "gcp_project_name" {
  name                    = "${var.gcp_project_name}-net"
  auto_create_subnetworks = "false"
}

# PRIVATE SUBNET
resource "google_compute_subnetwork" "gcp_project_name_private_subnet" {
  name          = "${var.gcp_project_name}-${var.gcp_region_name}-private-subnet"
  ip_cidr_range = "10.0.0.0/16"
  network       = "${google_compute_network.gcp_project_name.self_link}"
  region        = "${var.gcp_region_name}"
}

# PUBLIC SUBNET
resource "google_compute_subnetwork" "gcp_project_name_public_subnet" {
  name          = "${var.gcp_project_name}-${var.gcp_region_name}-public-subnet"
  ip_cidr_range = "10.8.0.0/16"
  network       = "${google_compute_network.gcp_project_name.self_link}"
  region        = "${var.gcp_region_name}"
}
resource "google_compute_router" "router" {
name = "${var.gcp_router_name}"
network = "${var.gcp_project_name}-net"
region = "${var.gcp_region_name}"
}
resource "google_compute_router_nat" "advanced-nat" {
name = "${var.gcp_nat_name}"
router = "${var.gcp_router_name}"
region = "${var.gcp_region_name}"
nat_ip_allocate_option = "AUTO_ONLY"
source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"
subnetwork {
name = "${google_compute_subnetwork.gcp_project_name_private_subnet.self_link}"
}
}
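The second router/NAT pair works because both failing fields expect resource URLs, not raw strings: nat_ips must reference reserved static external addresses (a CIDR range such as 10.0.0.0/16 is never valid there), and subnetwork.name should be the subnetwork's self_link. If MANUAL_ONLY allocation is really required, a sketch under that assumption (resource names are hypothetical; note that current provider versions also require source_ip_ranges_to_nat in the subnetwork block):
resource "google_compute_address" "nat_ip" {
  name   = "nat-ip-1"
  region = "${var.gcp_region_name}"
}

resource "google_compute_router_nat" "manual_nat" {
  name                               = "nat-manual"
  router                             = "${google_compute_router.router.name}"
  region                             = "${var.gcp_region_name}"
  nat_ip_allocate_option             = "MANUAL_ONLY"
  nat_ips                            = ["${google_compute_address.nat_ip.self_link}"]
  source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"

  subnetwork {
    name                    = "${google_compute_subnetwork.gcp_project_name_private_subnet.self_link}"
    source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
  }
}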