I'm trying to create VM instances on GCP using Terraform. The instances do get created, but I can't get SSH access to them. My tf file:
# Cloud Provider
provider "google" {
  version     = "3.5.0"
  credentials = file("./terraform-service-account.json")
  project     = "terraform-279210"
  region      = "us-central1"
  zone        = "us-central1-c"
}

# Virtual Private Network
resource "google_compute_network" "vpc_network" {
  name = "terraform-network"
}

# VM Instance
resource "google_compute_instance" "demo-vm-instance" {
  name         = "demo-vm-instance"
  machine_type = "f1-micro"
  tags         = ["demo-vm-instance"]

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  metadata = {
    ssh-keys = "demouser:${file("./demouser.pub")}"
  }

  network_interface {
    network = google_compute_network.vpc_network.name
    access_config {
    }
  }
}
ssh -i demouser.pub demouser@<vm-external-ip> returns ssh: connect to host <vm-external-ip> port 22: Operation timed out
It looks like firewall rules are blocking TCP connections on port 22, since nc -zv <vm-external-ip> 22 doesn't succeed either.
Create a firewall rule using the following:
resource "google_compute_firewall" "ssh-rule" {
name = "demo-ssh"
network = google_compute_network.vpc_network.name
allow {
protocol = "tcp"
ports = ["22"]
}
target_tags = ["demo-vm-instance"]
source_ranges = ["0.0.0.0/0"]
}
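Not part of the original answer, but as a small convenience sketch using the resource names above, an output can surface the instance's external IP so you know what to connect to after terraform apply:
output "vm_external_ip" {
  # The ephemeral external IP assigned via the empty access_config block above
  value = google_compute_instance.demo-vm-instance.network_interface[0].access_config[0].nat_ip
}
With the firewall rule applied, ssh -i <private key file> demouser@<that IP> should then connect; note that -i expects the private key, not demouser.pub.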
I'm learning Terraform and want to create a new server and execute a shell script inside it using a Terraform provisioner. I'm facing this issue when Terraform tries to connect via SSH to the newly created server.
I've gone through other solutions and verified that the key is present at the correct location with 600 permissions, and that the user is set up with OS Login, but I'm still getting the same error.
This is my main.tf file
provider "google" {
project = var.project
region = var.region
}
resource "google_compute_firewall" "firewall" {
name = "gritfy-firewall-externalssh"
network = "default"
allow {
protocol = "tcp"
ports = ["22","443"]
}
source_ranges = ["0.0.0.0/0"] # Not So Secure. Limit the Source Range
target_tags = ["externalssh"]
}
resource "google_compute_network" "default" {
name = "test-network"
}
# We create a public IP address for our google compute instance to utilize
resource "google_compute_address" "static" {
name = "vm-public-address"
project = var.project
region = var.region
depends_on = [google_compute_firewall.firewall]
}
resource "google_compute_instance" "dev" {
name = "devserver"
machine_type = "e2-micro"
zone = "${var.region}-a"
tags = ["externalssh", "webserver"]
boot_disk {
initialize_params {
image = "centos-cloud/centos-7"
}
}
network_interface {
network = "default"
access_config {
nat_ip = google_compute_address.static.address
}
}
provisioner "file" {
# source file name on the local machine where you execute terraform plan and apply
source = "LAMP_Stack.sh"
# destination is the file location on the newly created instance
destination = "/tmp/LAMP_Stack.sh"
connection {
host = google_compute_address.static.address
type = "ssh"
user = var.user
timeout = "500s"
private_key = file(var.privatekeypath)
}
}
# to connect to the instance after the creation and execute few commands for provisioning
provisioner "remote-exec" {
connection {
host = google_compute_address.static.address
type = "ssh"
# username of the instance would vary for each account refer the OS Login in GCP documentation
user = var.user
timeout = "500s"
# private_key being used to connect to the VM. ( the public key was copied earlier using metadata )
private_key = file(var.privatekeypath)
}
# Commands to be executed as the instance gets ready.
# set execution permission and start the script
inline = [
"chmod a+x /tmp/LAMP_Stack.sh",
"sudo /tmp/LAMP_Stack.sh"
]
}
# Ensure firewall rule is provisioned before server, so that SSH doesn't fail.
depends_on = [google_compute_firewall.firewall]
service_account {
email = var.email
scopes = ["compute-ro"]
}
}
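For context, the variables referenced above (var.project, var.region, var.user, var.privatekeypath, var.email) need declarations of their own; a minimal sketch, with placeholder defaults rather than real values:
variable "project" {}

variable "region" {
  default = "us-central1" # placeholder
}

variable "user" {
  description = "SSH username used by the provisioner connections"
}

variable "privatekeypath" {
  description = "Path to the private key whose public half the instance trusts"
  default     = "~/.ssh/id_rsa" # placeholder
}

variable "email" {
  description = "Service account email attached to the instance"
}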
When you created the Compute Engine instance, you did not specify metadata for the SSH RSA public key that matches the RSA private key that you are using.
Modify the username and id_rsa.pub filename.
variable "gce_ssh_user" {
default = "username"
}
variable "gce_ssh_pub_key_file" {
default = "id_rsa.pub"
}
resource "google_compute_instance" "dev" {
...
metadata = {
ssh-keys = "${var.gce_ssh_user}:${file(var.gce_ssh_pub_key_file)}"
}
}
The public key should have this format:
ssh-rsa AAAAB3NzaC1yc ... 0a9Wpd
An example command to create the SSH RSA private and public keys (id_rsa, id_rsa.pub):
ssh-keygen -t rsa -C "replace with username@hostname" -f id_rsa
I really can't figure out, for the life of me, why I'm unable to SSH into my newly created EC2 instance.
Here is some of my Terraform code where I created the EC2 instance and its security groups.
This is my EC2 code
resource "aws_key_pair" "AzureDevOps" {
key_name = var.infra_env
public_key = var.public_ssh_key
}
# Create network inferface for EC2 instance and assign secruity groups
resource "aws_network_interface" "vm_nic_1" {
subnet_id = var.subnet_id
private_ips = ["10.0.0.100"]
tags = {
Name = "${var.infra_env}-nic-1"
}
security_groups = [
var.ssh_id
]
}
# Add elastic IP addresss for public connectivity
resource "aws_eip" "vm_eip_1" {
vpc = true
instance = aws_instance.virtualmachine_1.id
associate_with_private_ip = "10.0.0.100"
depends_on = [var.gw_1]
tags = {
Name = "${var.infra_env}-eip-1"
}
}
# Deploy virtual machine using Ubuntu ami
resource "aws_instance" "virtualmachine_1" {
ami = var.ami
instance_type = var.instance_type
key_name = aws_key_pair.AzureDevOps.id
#retrieve the Administrator password
get_password_data = true
connection {
type = "ssh"
port = 22
password = rsadecrypt(self.password_data, file("id_rsa"))
https = true
insecure = true
timeout = "10m"
}
network_interface {
network_interface_id = aws_network_interface.vm_nic_1.id
device_index = 0
}
user_data = file("./scripts/install-cwagent.ps1")
tags = {
Name = "${var.infra_env}-vm-1"
}
}
Here is the code for my security group
resource "aws_security_group" "ssh" {
name = "allow_ssh"
description = "Allow access to the instance via ssh"
vpc_id = var.vpc_id
ingress {
description = "Access the instance via ssh"
from_port = 22
to_port = 22
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.infra_env}-allow-ssh"
}
}
If I need to provide any more code or information, I can; it's my first time trying to do this, and it's frustrating trying to figure it out. I'm trying to use PuTTY as well, and I'm not sure if I just don't know how to use it correctly or if something is wrong with my EC2 configuration.
I used my public SSH key from my computer for the variable in my aws_key_pair resource. I saved my public SSH key pair as a .ppk file for PuTTY, and on the AWS console, when I go to "connect", it says to use ubuntu@10.0.0.100 as the host name in PuTTY, which I did. When I click OK and it tries to connect, I get a network error: connection timed out.
I used my public ssh key
You need to use your private key, not public.
use ubuntu@10.0.0.100
10.0.0.100 is a private IP address. To connect to your instance over the internet, you need to use the public IP address.
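A small sketch (reusing the aws_eip.vm_eip_1 resource from the question) that prints that public address after apply:
output "vm_public_ip" {
  # The Elastic IP's public address; this is the host PuTTY should connect to as ubuntu@<public-ip>
  value = aws_eip.vm_eip_1.public_ip
}
PuTTY also needs the private key loaded (converted to .ppk with PuTTYgen), not the public key.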
I'm trying to learn WireGuard. I found this great tutorial on how to install it on GCP:
https://sreejithag.medium.com/set-up-wireguard-vpn-with-google-cloud-57bb3267a6ef
Very basic (for somebody new to WireGuard), but it did work. The tutorial shows a VM being provisioned with IP forwarding through the GCP web interface.
I wanted to set this up with terraform. I've searched the terraform registry and found this...
https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/compute_forwarding_rule
Here's my main.tf with the virtual machine provisioning. Where would I put something like IP forwarding without Terraform complaining?
# This is the provider used to spin up the gcloud instance
provider "google" {
  project     = var.project_name
  region      = var.region_name
  zone        = var.zone_name
  credentials = "mycredentials.json"
}

# Locks the version of Terraform for this particular use case
terraform {
  required_version = "0.14.6"
}

# This creates the google instance
resource "google_compute_instance" "vm_instance" {
  name         = "development-vm"
  machine_type = var.machine_size
  tags         = ["allow-http", "allow-https", "allow-dns", "allow-tor", "allow-ssh", "allow-2277", "allow-mosh", "allow-whois", "allow-openvpn", "allow-wireguard"] # FIREWALL

  boot_disk {
    initialize_params {
      image = var.image_name
      size  = var.disk_size_gb
    }
  }

  network_interface {
    network = "default"
    # Associate our public IP address with this instance
    access_config {
      nat_ip = google_compute_address.static.address
    }
  }

  # We connect to our instance via Terraform and remotely execute our script using SSH
  provisioner "remote-exec" {
    script = var.script_path

    connection {
      type        = "ssh"
      host        = google_compute_address.static.address
      user        = var.username
      private_key = file(var.private_key_path)
    }
  }
}

# We create a public IP address for our google compute instance to utilize
resource "google_compute_address" "static" {
  name = "vm-public-address"
}
For WireGuard, you need to enable IP Forwarding. The resource you are trying to use is for HTTP(S) Load Balancers.
Instead, enable the google_compute_instance resource attribute can_ip_forward.
can_ip_forward - (Optional) Whether to allow sending and receiving of packets with non-matching source or destination IPs. This defaults to false.
resource "google_compute_instance" "vm_instance" {
name = "development-vm"
machine_type = var.machine_size
can_ip_forward = true
....
}
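Not from the original answer, but since the instance above is already tagged allow-wireguard, a matching firewall rule is also needed before peers can reach the VPN; a sketch for WireGuard's default UDP port 51820 (adjust if the tutorial uses a different port):
resource "google_compute_firewall" "allow_wireguard" {
  name    = "allow-wireguard"
  network = "default"

  allow {
    protocol = "udp"
    ports    = ["51820"] # WireGuard's default listen port; change if yours differs
  }

  target_tags   = ["allow-wireguard"]
  source_ranges = ["0.0.0.0/0"] # consider narrowing this range
}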
I am new to Terraform. I have a Dockerized application that can be deployed using a docker-compose file.
I wrote a Terraform script that creates a security group and an EC2 machine, and runs a script that downloads docker and docker-compose. I am trying to upload this docker-compose file from the local machine to the remote one. Whenever Terraform reaches this step, it generates the following error:
aws_instance.docker_swarm: Provisioning with 'file'...
Error: host for provisioner cannot be empty
Below is the terraform template:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.70"
    }
  }
}

provider "aws" {
  profile = var.profile
  region  = var.region
}

resource "aws_security_group" "allow_ssh_http" {
  name        = "allow_ssh_http"
  description = "Allow SSH and HTTP access to ports 22 and 1337"

  ingress {
    description = "SSH from everywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Access Port 1337 from Everywhere"
    from_port   = 1337
    to_port     = 1337
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_ssh_http"
  }
}

resource "aws_instance" "docker_swarm" {
  ami                    = var.amis[var.region]
  instance_type          = var.instance_type
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.allow_ssh_http.id]
  user_data              = "${file("deployServices.sh")}"

  provisioner "file" {
    source      = "./docker-compose.yml"
    destination = "/home/ubuntu/docker-compose.yml"
  }

  tags = {
    Name = "NK Microservices Stack"
  }
}

output "ec2_id" {
  value = aws_instance.docker_swarm.id
}

output "ec2_ip" {
  value = aws_instance.docker_swarm.public_ip
}
I think you will need to provide connection details to the provisioner. For example:
resource "aws_instance" "docker_swarm" {
ami = var.amis[var.region]
instance_type = var.instance_type
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.allow_ssh_http.id]
user_data = "${file("deployServices.sh")}"
provisioner "file" {
source = "./docker-compose.yml"
destination = "/home/ubuntu/docker-compose.yml"
connection {
host = self.public_ip
user = "ubuntu"
private_key = file("<path-to-private-ssh-key>")
}
}
tags = {
Name = "NK Microservices Stack"
}
}
where <path-to-private-ssh-key> is the path to the private SSH key associated with your var.key_name.
The Terraform documentation states that you should only use provisioners as a last resort.
User Data can contain files in addition to a startup script. I would definitely recommend that approach before trying to use a provisioner. Or you could even include the file as a heredoc in the user data script.
Alternatively, you could copy the file to S3 before running Terraform, or even use Terraform to upload the file to S3, and then simply have the EC2 instance download the file as part of the user data script.
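A rough sketch of that S3 route (the bucket name here is hypothetical, and the instance would additionally need an instance profile allowing s3:GetObject on the bucket):
resource "aws_s3_bucket_object" "compose" {
  bucket = "my-deploy-artifacts"           # hypothetical bucket name
  key    = "docker-compose.yml"
  source = "./docker-compose.yml"
  etag   = filemd5("./docker-compose.yml") # re-upload when the local file changes
}
The deployServices.sh user data script could then run aws s3 cp s3://my-deploy-artifacts/docker-compose.yml /home/ubuntu/ before starting the stack, which removes the SSH/provisioner dependency entirely.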
I am trying to SSH into a newly created EC2 instance with Terraform. My host is Windows 10, and I have no problem SSHing into the instance from my host using the Bitvise SSH Client, but Terraform can't seem to SSH in to create a directory on the instance:
My main.tf:
provider "aws" {
region = "us-west-2"
}
resource "aws_security_group" "instance" {
name = "inlets-server-instance"
description = "Security group for the inlets server"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "tunnel" {
ami = "ami-07b4f3c02c7f83d59"
instance_type = "t2.nano"
key_name = "${var.key_name}"
vpc_security_group_ids = [aws_security_group.instance.id]
tags = {
Name = "inlets-server"
}
provisioner "local-exec" {
command = "echo ${aws_instance.tunnel.public_ip} > ${var.public_ip_path}"
}
provisioner "remote-exec" {
inline = [
"mkdir /home/${var.ssh_user}/ansible",
]
connection {
type = "ssh"
host = "${file("${var.public_ip_path}")}"
user = "${var.ssh_user}"
private_key = "${file("${var.private_key_path}")}"
timeout = "1m"
agent = false
}
}
}
My variables.tf:
variable "key_name" {
description = "Name of the SSH key pair generated in Amazon EC2."
default = "aws_ssh_key"
}
variable "public_ip_path" {
description = "Path to the file that contains the instance's public IP address"
default = "ip_address.txt"
}
variable "private_key_path" {
description = "Path to the private SSH key, used to access the instance."
default = "aws_ssh_key.pem"
}
variable "ssh_user" {
description = "SSH user name to connect to your instance."
default = "ubuntu"
}
All I get are attempted connections:
aws_instance.tunnel (remote-exec): Connecting to remote host via SSH...
aws_instance.tunnel (remote-exec): Host: XX.XXX.XXX.XXX
aws_instance.tunnel (remote-exec): User: ubuntu
aws_instance.tunnel (remote-exec): Password: false
aws_instance.tunnel (remote-exec): Private key: true
aws_instance.tunnel (remote-exec): Certificate: false
aws_instance.tunnel (remote-exec): SSH Agent: false
aws_instance.tunnel (remote-exec): Checking Host Key: false
and it finally times out with:
Error: timeout - last error: dial tcp: lookup XX.XXX.XXX.XXX
: no such host
Any ideas?
You didn't talk about your network structure.
Is your Windows 10 machine inside the VPC? If not, do you have an internet gateway, routing table, and NAT gateway properly set up?
It would be cleaner and safer to create an Elastic IP resource so that Terraform knows the instance's IP address, instead of trying to get it from the machine. Sure, the local-exec will be quicker than the remote-exec, but it creates an implicit dependency that might generate problems.
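A minimal sketch of that suggestion, reusing the names from the question: attach an Elastic IP and run the remote-exec from a null_resource, so the connection uses an address Terraform already knows rather than the file written by local-exec (whose trailing newline is likely what makes the hostname lookup in the error above fail):
resource "aws_eip" "tunnel_ip" {
  vpc      = true
  instance = aws_instance.tunnel.id
}

resource "null_resource" "provision" {
  # Re-run the provisioner if the instance is replaced
  triggers = {
    instance_id = aws_instance.tunnel.id
  }

  provisioner "remote-exec" {
    inline = [
      "mkdir /home/${var.ssh_user}/ansible",
    ]

    connection {
      type        = "ssh"
      host        = aws_eip.tunnel_ip.public_ip # known to Terraform, no file round-trip
      user        = var.ssh_user
      private_key = file(var.private_key_path)
      timeout     = "1m"
      agent       = false
    }
  }
}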