Hey folks,
I want to run a script on a GCP machine. For that, I created the resource file below:
resource "google_compute_attached_disk" "default3" {
disk = google_compute_disk.default2.id
instance = google_compute_instance.default.id
} # attach disk to VM
resource "google_compute_firewall" "firewall" {
name = "gritfy-firewall-externalssh"
network = "default"
allow {
protocol = "tcp"
ports = ["22"]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["externalssh"]
} # allow ssh
resource "google_compute_address" "static" {
name = "vm-public-address"
project = "fit-visitor-305606"
region = "asia-south1"
depends_on = [ google_compute_firewall.firewall ]
} # reserve ip
resource "google_compute_instance" "default" {
name = "new"
machine_type = "custom-8-16384"
zone = "asia-south1-a"
tags = ["foo", "bar"]
boot_disk {
initialize_params {
image = "centos-cloud/centos-7"
}
}
network_interface {
network = "default"
access_config {
nat_ip = google_compute_address.static.address
}
}
metadata = {
ssh-keys = "${var.user}:${file(var.publickeypath)}"
}
lifecycle {
ignore_changes = [attached_disk]
}
provisioner "file" {
source = "autoo.sh"
destination = "/tmp/autoo.sh"
}
provisioner "remote-exec" {
connection {
host = google_compute_address.static.address
type = "ssh"
user = var.user
timeout = "500s"
private_key = file(var.privatekeypath)
}
inline = [
"sudo yum -y install epel-release",
"sudo yum -y install nginx",
"sudo nginx -v",
]
}
} # Create VM
resource "google_compute_disk" "default2" {
name = "test-disk"
type = "pd-balanced"
zone = "asia-south1-a"
image = "centos-7-v20210609"
size = 100
} # Create Disk
Using this, I am able to create the VM and the disk, and also to attach the disk to the VM, but I am not able to run my script.
The error logs are:
The private key part is working fine: the key is assigned to the VM, and when I try to connect with that key it works. So the problem may be with the provisioner part only.
Any help or guidance would be really helpful...
As the error message says, you need a connection configuration for the provisioner. You also need a remote-exec provisioner to run scripts.
provisioner "file" {
source = "autoo.sh"
destination = "/tmp/autoo.sh"
connection {
type = "ssh"
host = google_compute_address.static.address # host is required here; reuse the reserved static IP from the question
user = var.user
private_key = file(var.privatekeypath)
}
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/autoo.sh",
"cd /tmp",
"./autoo.sh"
]
connection {
type = "ssh"
host = google_compute_address.static.address # host is required here; reuse the reserved static IP from the question
user = var.user
private_key = file(var.privatekeypath)
}
}
source: https://stackoverflow.com/a/36668395/5454632
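Alternatively, when the provisioners live inside the google_compute_instance resource itself, the connection block can reference the instance's own NAT IP through self. A minimal sketch, assuming the same var.user and var.privatekeypath variables as in the question:
connection {
  type        = "ssh"
  host        = self.network_interface[0].access_config[0].nat_ip
  user        = var.user
  private_key = file(var.privatekeypath)
}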
Related
Running the following Terraform GCP project, I can see that the machines communicate with each other, but they have no internet access: the machines seem to resolve domains but cannot ping them. I am adding internal static IPs since the addresses need to be static for the instances to communicate with each other.
Am I missing anything?
Thank you in advance.
provider "google" {
project = "terraform-368808"
region = "us-west1"
}
resource "google_compute_network" "default" {
name = "manager-network"
auto_create_subnetworks = false
mtu = 1460
}
resource "google_compute_subnetwork" "default" {
name = "manager-subnet"
ip_cidr_range = "10.10.10.0/24"
region = "us-west1"
network = google_compute_network.default.id
}
resource "google_compute_address" "manager_ip_one" {
name = "manager-ip-one"
subnetwork = google_compute_subnetwork.default.id
address_type = "INTERNAL"
address = "10.10.10.42"
region = "us-west1"
}
output "manager-ip-one" {
value = google_compute_address.manager_ip_one.address
}
resource "google_compute_address" "manager_ip_two" {
name = "manager-two"
subnetwork = google_compute_subnetwork.default.id
address_type = "INTERNAL"
address = "10.10.10.43"
region = "us-west1"
}
output "manager-ip-two" {
value = google_compute_address.manager_ip_two.address
}
resource "google_compute_instance" "manager1" {
name = "manager-node-1"
machine_type = "e2-medium"
zone = "us-west1-a"
tags = ["ssh"]
boot_disk {
initialize_params {
image = "debian-cloud/debian-10"
}
}
metadata_startup_script = "sudo apt update -y; sudo apt install wget htop -y;"
network_interface {
subnetwork = google_compute_subnetwork.default.id
network_ip = google_compute_address.manager_ip_one.address
}
provisioner "local-exec" {
command = "echo ${google_compute_address.manager_ip_one.address} >> private_ips.txt"
}
}
resource "google_compute_instance" "manager2" {
name = "manager-node-2"
machine_type = "e2-medium"
zone = "us-west1-a"
tags = ["ssh"]
boot_disk {
initialize_params {
image = "debian-cloud/debian-10"
}
}
metadata_startup_script = "sudo apt update -y; sudo apt install wget htop -y;"
network_interface {
subnetwork = google_compute_subnetwork.default.id
network_ip = google_compute_address.manager_ip_two.address
}
provisioner "local-exec" {
command = "echo ${google_compute_address.manager_ip_two.address} >> private_ips.txt"
}
}
resource "google_compute_firewall" "ssh" {
name = "allow-ssh"
allow {
ports = ["22"]
protocol = "tcp"
}
direction = "INGRESS"
network = google_compute_network.default.id
priority = 1000
source_ranges = ["0.0.0.0/0"]
target_tags = ["ssh"]
}
resource "google_compute_firewall" "icmp" {
name = "allow-icmp"
allow {
protocol = "icmp"
}
direction = "INGRESS"
network = google_compute_network.default.id
priority = 1001
source_ranges = ["0.0.0.0/0"]
target_tags = ["icmp"]
}
I think that in order to access the external internet from a Compute Engine instance (leaving aside firewall rules), the instance should either have an external IP address or make its connection through a Cloud NAT.
To communicate with the internet, you can use an external IPv4 or external IPv6 address configured on the instance. If the instance doesn't have an external address, Cloud NAT can be used for IPv4 traffic.
In the case of an external IP address, you might like to add a few lines to the Terraform script (I use a snippet of your code from above) configuring an access_config section:
network_interface {
subnetwork = google_compute_subnetwork.default.id
network_ip = google_compute_address.manager_ip_one.address
access_config {
// Ephemeral public IP
}
}
Or, presumably, you also created/reserved an external IP address in your Terraform script (let's say with the name manager_ip_ext):
network_interface {
subnetwork = google_compute_subnetwork.default.id
network_ip = google_compute_address.manager_ip_one.address
access_config {
// Explicit public IP
nat_ip = google_compute_address.manager_ip_ext.address
}
}
Another option, as mentioned above, is to organise egress through a Cloud NAT solution. Some details are provided in the documentation: Set up and manage network address translation with Cloud NAT. Cloud NAT can be deployed/managed through Terraform as well; a sketch follows.
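A minimal Terraform sketch of such a Cloud NAT for the network above, using the google_compute_router and google_compute_router_nat resources (the router and NAT names are illustrative):
resource "google_compute_router" "router" {
  name    = "manager-router" # illustrative name
  region  = "us-west1"
  network = google_compute_network.default.id
}
resource "google_compute_router_nat" "nat" {
  name                               = "manager-nat" # illustrative name
  router                             = google_compute_router.router.name
  region                             = "us-west1"
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}
With this in place, instances without external IPs get outbound internet access while remaining unreachable from outside.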
Suppose that the ec2 module creates two servers dynamically, like:
module "ec2-web" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "4.1.4"
count = 2
name = "${local.appName}-webserver-${count.index + 1}"
.....
}
Now I have a null_resource configuration, which has only one connection:
resource "null_resource" "web-upload" {
depends_on = [module.ec2-web]
connection {
type = "ssh"
host = module.ec2-web[0].public_ip
user = "ec2-user"
password = ""
private_key = file("keypair/a-ssh-key.pem")
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"sudo mkdir -p /var/www/html",
"sudo chown -R ec2-user:ec2-user /var/www/html",
]
}
provisioner "file" {
source = "web/"
destination = "/var/www/html"
}
}
Now, how should I update the configuration so that Terraform finally uploads the files to both servers?
You would use the same approach with the count meta-argument:
resource "null_resource" "web-upload" {
count = 2
connection {
type = "ssh"
host = module.ec2-web[count.index].public_ip
user = "ec2-user"
password = ""
private_key = file("keypair/a-ssh-key.pem")
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"sudo mkdir -p /var/www/html",
"sudo chown -R ec2-user:ec2-user /var/www/html",
]
}
provisioner "file" {
source = "web/"
destination = "/var/www/html"
}
}
The explicit dependency declared with the depends_on meta-argument is not required, because the reference to the module output (module.ec2-web[count.index].public_ip) is already used. This means Terraform will wait for the module to finish creating its resources before attempting the null_resource.
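One caveat, as a sketch: a null_resource re-runs its provisioners only when its triggers change, so if the upload should repeat whenever an instance is replaced, the triggers can be tied to the instance IDs (assuming the module exposes an id output):
resource "null_resource" "web-upload" {
  count = 2
  # Re-run the provisioners whenever the corresponding instance is replaced.
  triggers = {
    instance_id = module.ec2-web[count.index].id
  }
  # ... connection and provisioners as above ...
}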
I'm trying to deploy a managed instance group with a load balancer which will be running a web server container.
The container is stored in the Google Artifact Registry.
If I manually create a VM and define the container usage, it successfully pulls and starts the container.
When I create the managed instance group via Terraform, the VM neither pulls nor starts the container.
When I SSH into the VM and try to pull the container manually, I get the following error:
Error response from daemon: Get https://us-docker.pkg.dev/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
The only notable difference between the VM I created manually and the VM created by Terraform is that the manual VM has an external IP address. I'm not sure if this matters, and not sure how to add one in the Terraform file.
Below is my main.tf file. Can anyone tell me what I'm doing wrong?
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "3.53.0"
}
google-beta = {
source = "hashicorp/google-beta"
version = "~> 4.0"
}
}
}
provider "google" {
credentials = file("compute_lab2-347808-dab33a244827.json")
project = "lab2-347808"
region = "us-central1"
zone = "us-central1-f"
}
locals {
google_load_balancer_ip_ranges = [
"130.211.0.0/22",
"35.191.0.0/16",
]
}
module "gce-container" {
source = "terraform-google-modules/container-vm/google"
version = "~> 2.0"
cos_image_name = "cos-stable-77-12371-89-0"
container = {
image = "us-docker.pkg.dev/lab2-347808/web-server-repo/web-server-image"
volumeMounts = [
{
mountPath = "/cache"
name = "tempfs-0"
readOnly = false
},
]
}
volumes = [
{
name = "tempfs-0"
emptyDir = {
medium = "Memory"
}
},
]
restart_policy = "Always"
}
resource "google_compute_firewall" "rules" {
project = "lab2-347808"
name = "allow-web-ports"
network = "default"
description = "Opens the relevant ports for the web server"
allow {
protocol = "tcp"
ports = ["80", "8080", "5432", "5000", "443"]
}
source_ranges = ["0.0.0.0/0"]
#source_ranges = local.google_load_balancer_ip_ranges
target_tags = ["web-server-ports"]
}
resource "google_compute_autoscaler" "default" {
name = "web-autoscaler"
zone = "us-central1-f"
target = google_compute_instance_group_manager.default.id
autoscaling_policy {
max_replicas = 10
min_replicas = 1
cooldown_period = 60
cpu_utilization {
target = 0.5
}
}
}
resource "google_compute_instance_template" "default" {
name = "my-web-server-template"
machine_type = "e2-medium"
can_ip_forward = false
tags = ["ssh", "http-server", "https-server", "web-server-ports"]
disk {
#source_image = "cos-cloud/cos-73-11647-217-0"
source_image = module.gce-container.source_image
}
network_interface {
network = "default"
}
service_account {
#scopes = ["userinfo-email", "compute-ro", "storage-ro"]
scopes = ["cloud-platform"]
}
metadata = {
gce-container-declaration = module.gce-container.metadata_value
}
}
resource "google_compute_target_pool" "default" {
name = "web-server-target-pool"
}
resource "google_compute_instance_group_manager" "default" {
name = "web-server-igm"
zone = "us-central1-f"
version {
instance_template = google_compute_instance_template.default.id
name = "primary"
}
target_pools = [google_compute_target_pool.default.id]
base_instance_name = "web-server-instance"
}
Your VM template has no public IP; therefore, the VMs can't reach public IPs.
However, you have 3 ways to solve that issue:
Add a public IP to the VM template (bad idea)
Add a Cloud NAT on your VM private IP range to allow outgoing traffic to the internet (good idea)
Activate Private Google Access in the subnet that hosts the VM private IP range. It creates a bridge to reach Google services without having a public IP (my preferred idea; see the sketch below) -> https://cloud.google.com/vpc/docs/configure-private-google-access
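A minimal sketch of enabling Private Google Access on a Terraform-managed subnet (the subnet name and IP range are illustrative; for the default network's auto-created subnets, the flag would instead be toggled on the existing subnet):
resource "google_compute_subnetwork" "web" {
  name          = "web-server-subnet" # illustrative name
  ip_cidr_range = "10.0.0.0/24"       # illustrative range
  region        = "us-central1"
  network       = "default"
  # Lets VMs without external IPs reach Google APIs,
  # including Artifact Registry at us-docker.pkg.dev.
  private_ip_google_access = true
}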
Apparently I was missing the following access_config inside network_interface of the google_compute_instance_template, as follows:
network_interface {
network = "default"
access_config {
network_tier = "PREMIUM"
}
}
I have a script file named auto.sh on my local laptop, and I want to run this script on a GCP machine as soon as it is provisioned.
I created this Terraform file:
resource "google_compute_attached_disk" "default3" {
disk = google_compute_disk.default2.id
instance = google_compute_instance.default.id
}
resource "google_compute_instance" "default" {
name = "test"
machine_type = "custom-8-16384"
zone = "us-central1-a"
tags = ["foo", "bar"]
boot_disk {
initialize_params {
image = "centos-cloud/centos-7"
}
}
network_interface {
network = "default"
access_config {
}
}
metadata_startup_script = "touch abcd.txt"
lifecycle {
ignore_changes = [attached_disk]
}
}
resource "google_compute_disk" "default2" {
name = "test-disk"
type = "pd-balanced"
zone = "us-central1-a"
image = "centos-7-v20210609"
size = 100
}
and this is working fine. Now I want to run that script.
You should replace the
metadata_startup_script = "touch abcd.txt"
with either your script inline, if it's short enough, or with something like
metadata_startup_script = "${file("/path/to/your/file")}"
to load it from a file.
See the metadata_startup_script docs.
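For the auto.sh from the question, a minimal sketch, assuming the script sits next to the Terraform configuration:
resource "google_compute_instance" "default" {
  # ... same arguments as above ...
  # path.module is the directory containing this .tf file.
  metadata_startup_script = file("${path.module}/auto.sh")
}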
I currently have the following Terraform plan:
provider "aws" {
region = var.region
}
resource "aws_instance" "ec2" {
ami = var.ami
instance_type = var.instanceType
subnet_id = var.subnet
security_groups = var.securityGroups
timeouts {
create = "2h"
delete = "2h"
}
tags = {
Name = "${var.ec2ResourceName}"
CoreInfra = "false"
}
lifecycle {
prevent_destroy = true
}
key_name = "My_Key_Name"
connection {
type = "ssh"
user = "ec2-user"
password = ""
private_key = file(var.keyPath)
host = self.public_ip
}
provisioner "file" {
source = "/home/ec2-user/code/backend/ec2/setup_script.sh"
destination = "/tmp/setup_script.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/setup_script.sh",
"bash /tmp/setup_script.sh ${var.ec2ResourceName}"
]
}
}
resource "aws_eip" "eip_manager" {
name = "eip-${var.ec2ResourceName}"
instance = aws_instance.ec2.id
vpc = true
tags = {
Name = "eip-${var.ec2ResourceName}"
}
lifecycle {
prevent_destroy = true
}
}
This plan can be run multiple times, creating a new EC2 instance each time without removing the previous one. However, there is a single Elastic IP, which ends up being reassigned to the most recently created EC2 instance. How can I give each new instance its own Elastic IP that does not get reassigned?
Maybe with aws_eip_association; here is the snippet:
resource "aws_eip_association" "eip_assoc" {
instance_id = aws_instance.ec2.id
allocation_id = aws_eip.eip_manager.id
}
More info here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eip_association
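If the goal is one Elastic IP per instance within a single configuration, here is a sketch under the assumption that the instances themselves are created with count (var.instance_count is hypothetical):
resource "aws_eip" "per_instance" {
  count = var.instance_count # hypothetical variable
  vpc   = true
}
resource "aws_eip_association" "per_instance" {
  count         = var.instance_count
  instance_id   = aws_instance.ec2[count.index].id
  allocation_id = aws_eip.per_instance[count.index].id
}
Each instance then keeps its own allocation, and nothing is reassigned on later runs.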