Terraform Remote-Exec Provisioner Timeout - amazon-web-services

I'm creating a server in AWS using Terraform.
My remote-exec provisioner will not connect and execute, continually giving me the output:
aws_spot_instance_request.single_server_instance (remote-exec): Connecting to remote host via WinRM...
aws_spot_instance_request.single_server_instance (remote-exec): Host: 54.219.179.241
aws_spot_instance_request.single_server_instance (remote-exec): Port: 5985
aws_spot_instance_request.single_server_instance (remote-exec): User: Administrator
aws_spot_instance_request.single_server_instance (remote-exec): Password: true
aws_spot_instance_request.single_server_instance (remote-exec): HTTPS: false
aws_spot_instance_request.single_server_instance (remote-exec): Insecure: false
aws_spot_instance_request.single_server_instance (remote-exec): CACert: false
Before failing with:
Error applying plan:
1 error(s) occurred:
* aws_spot_instance_request.single_server_instance: 1 error(s) occurred:
* timeout
My Resource is as follows:
resource "aws_spot_instance_request" "single_server_instance" {
  # The connection block tells our provisioner how to
  # communicate with the resource (instance)
  connection {
    type     = "winrm"
    user     = "Administrator"
    password = "${var.admin_password}"
    #insecure = true
    #port     = 5985
    host     = "${self.public_ip}"
    #timeout  = "5m"
  }

  wait_for_fulfillment        = true
  associate_public_ip_address = true
  instance_type               = "${var.aws_instance_type}"
  ami                         = "${lookup(var.aws_amis, var.aws_region)}"
  spot_price                  = "1.00"

  vpc_security_group_ids = [
    "${aws_security_group.WinRM.id}",
    "${aws_security_group.RDP.id}"
  ]

  key_name = "sdlweb85"

  provisioner "remote-exec" {
    inline = [
      "mkdir c:\\installs"
      #"powershell.exe Invoke-WebRequest -Uri 'https://www.dropbox.com/s/meno68gl3rfbtio/install.ps1?dl=0' -OutFile 'C:/installs/install.ps1'"
    ]
  }

  #provisioner "file" {
  #  source      = "scripts/"
  #  destination = "c:/install_scripts/"
  #}

  user_data = <<-EOF
    <powershell>
    # Configure a Windows host for remote management (this works for both Ansible and Chef)
    # You will want to copy this script to a location you own (e.g. an S3 bucket) or paste it here
    Invoke-Expression ((New-Object System.Net.Webclient).DownloadString('https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1'))
    # Set Administrator password (note: the ADSI path must not contain a space before "user")
    $admin = [adsi]("WinNT://./administrator,user")
    $admin.psbase.invoke("SetPassword", "${var.admin_password}")
    Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False
    New-SelfSignedCertificate -DnsName "*.amazonaws.com" -CertStoreLocation "cert:\LocalMachine\My"
    #winrm quickconfig -quiet
    </powershell>
  EOF
}
Security Groups
resource "aws_security_group" "WinRM" {
  name = "WinRM"

  # WinRM access from anywhere
  ingress {
    from_port   = 5985
    to_port     = 5986
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "RDP" {
  name = "RDP"

  # RDP access from anywhere
  ingress {
    from_port   = 3389
    to_port     = 3389
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
I cannot figure out the problem. From my local machine, I can connect to the remote server using
Enter-PSSession -ComputerName $ip -Credential ~\Administrator

WinRM over HTTP is disabled by default on the Amazon Windows AMIs. You have to run a PowerShell command to enable it; Ansible ships a handy script (ConfigureRemotingForAnsible.ps1) that does exactly this. You can also bake an AMI that already has WinRM enabled and use that one to launch spot instances.
Alternatively, set the connection in Terraform to HTTPS:
https://www.terraform.io/docs/provisioners/connection.html#https
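A sketch of what the HTTPS variant of the connection block could look like (port 5986 and `insecure = true` for the self-signed certificate are assumptions, not from the original question):

```hcl
connection {
  type     = "winrm"
  user     = "Administrator"
  password = "${var.admin_password}"
  host     = "${self.public_ip}"
  https    = true
  port     = 5986
  insecure = true   # self-signed cert, so skip certificate validation
  timeout  = "10m"
}
```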

Note that Terraform uses Go WinRM, which doesn't support HTTPS at this time.
I had to stick with the following:
user_data = <<-EOF
  <script>
  winrm quickconfig -q & winrm set winrm/config @{MaxTimeoutms="1800000"} & winrm set winrm/config/service @{AllowUnencrypted="true"} & winrm set winrm/config/service/auth @{Basic="true"}
  </script>
  <powershell>
  netsh advfirewall firewall add rule name="WinRM in" protocol=TCP dir=in profile=any localport=5985 remoteip=any localip=any action=allow
  # Set Administrator password
  $admin = [adsi]("WinNT://./administrator,user")
  $admin.psbase.invoke("SetPassword", "${var.admin_password}")
  </powershell>
EOF
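For this HTTP-only setup, a matching connection block might look like the following sketch (the timeout value is an assumption; `insecure = true` corresponds to the AllowUnencrypted/Basic settings in the user_data above):

```hcl
connection {
  type     = "winrm"
  user     = "Administrator"
  password = "${var.admin_password}"
  host     = "${self.public_ip}"
  port     = 5985
  https    = false
  insecure = true    # unencrypted transport + basic auth, per the user_data
  timeout  = "10m"   # assumed; give the instance time to run user_data first
}
```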

Related

HTTP server in an EC2 instance via Terraform

terraform {
  required_providers {
    aws = {
      version = "~>3.27"
      source  = "hashicorp/aws"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "us-west-2"
}

variable "tag_name" {
  type = string
}

resource "aws_instance" "app_server" {
  ami                    = "ami-830c94e3"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.allow_port_8080.id]

  user_data = <<-EOF
    #!/bin/bash
    # Use this for your user data (script runs from top to bottom)
    # install httpd (Amazon Linux 2 version)
    yum update -y
    yum install -y httpd
    systemctl start httpd
    systemctl enable httpd
    echo "<h1>Hello World from $(hostname -f)</h1>" > /var/www/html/index.html
  EOF

  tags = {
    Name = var.tag_name
  }
}

resource "aws_security_group" "allow_port_8080" {
  name = "allow_port_8080"

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
This is the Terraform file I created. I want to set up an HTTP server on the EC2 instance and then access it via the public IPv4 address, but http://publicip:8080 gives the error:
This site can’t be reached
I tried modifying the user_data as below:
user_data = <<-EOF
  #!/bin/bash
  echo "<h1>Hello World</h1>" > index.html
  nohup busybox httpd -f -p 8080
EOF
I am following
https://www.youtube.com/watch?v=0i-Q6ZMDtlQ&list=PLqq-6Pq4lTTYwjFB9E9aLUJhTBLsCF0p_&index=32
Thank you.
Your aws_security_group does not allow any outgoing traffic, so the instance cannot reach the package repositories to install httpd. You have to explicitly allow outgoing traffic:
resource "aws_security_group" "allow_port_8080" {
  name = "allow_port_8080"

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}
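One more thing worth checking once egress is fixed: the yum-based user_data starts httpd on its default port 80, while the security group only opens 8080. If you keep that user_data, the ingress rule has to match the server's port. A sketch of the extra rule (not part of the original answer):

```hcl
# Allow HTTP on port 80, where the yum-installed httpd actually listens.
ingress {
  from_port   = 80
  to_port     = 80
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}
```

With the busybox variant (`httpd -f -p 8080`), the existing 8080 rule is the right one instead.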

how to fix the 504 error in my load balancer

I am new to Terraform and I am trying to create a simple structure with one ALB, two servers running a simple app, and one DB instance, but I get a 504 error when accessing the ALB's DNS name. Checking the Amazon documentation, this means: "The load balancer established a connection to the target but the target did not respond before the idle timeout period elapsed." I have gone over the code 100 times but I cannot find the mistake. This is my ALB config:
#ASG
resource "aws_launch_configuration" "web-lc" {
  name            = "web-lc"
  image_id        = "ami-0fc970315c2d38f01"
  instance_type   = "t2.micro"
  security_groups = [aws_security_group.ec2-webServers-sg.id]
  key_name        = "practica_final_kp"

  user_data = <<-EOF
    #!/bin/bash
    sudo yum update -y
    sudo yum install -y docker
    sudo service docker start
    sudo docker run -d --name rtb -p 8080:8080 vermicida/rtb
  EOF
}

resource "aws_autoscaling_group" "ec2-web-asg" {
  name                 = "ec2-web-asg"
  max_size             = 2
  min_size             = 2
  force_delete         = true
  launch_configuration = aws_launch_configuration.web-lc.name
  vpc_zone_identifier  = [aws_subnet.public-subnet1.id, aws_subnet.public-subnet2.id]

  tag {
    key                 = "Name"
    value               = "ec2-web-asg"
    propagate_at_launch = "true"
  }
}

#ALB
resource "aws_alb_target_group" "tg-alb" {
  name        = "tg-alb"
  port        = 80
  protocol    = "HTTP"
  target_type = "instance"
  vpc_id      = aws_vpc.final-vpc.id
}

resource "aws_alb" "web-alb" {
  name            = "web-alb"
  internal        = false
  subnets         = [aws_subnet.public-subnet1.id, aws_subnet.public-subnet2.id]
  security_groups = [aws_security_group.lb-sg.id]
}

resource "aws_alb_listener" "front_end" {
  load_balancer_arn = aws_alb.web-alb.id
  port              = "80"
  protocol          = "HTTP"

  default_action {
    target_group_arn = aws_alb_target_group.tg-alb.id
    type             = "forward"
  }
}

resource "aws_autoscaling_attachment" "asg_attachment" {
  autoscaling_group_name = aws_autoscaling_group.ec2-web-asg.id
  alb_target_group_arn   = aws_alb_target_group.tg-alb.arn
}
This is the security group:
resource "aws_security_group" "ec2-webServers-sg" {
  name   = "ec2-webServers-sg"
  vpc_id = aws_vpc.final-vpc.id

  ingress {
    description = "APP"
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24", "10.0.2.0/24"]
  }

  egress {
    description = "SQL"
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["10.0.10.0/24", "10.0.20.0/24"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "F-web-servers-sg"
  }
}
It looks like the service on your EC2 instances is running on port 8080, but your target group points to port 80, so the ALB forwards requests to a port nothing is listening on. You need to change the target group port to 8080.
There could also be a problem with security groups or VPC network ACLs blocking the traffic: ec2-webServers-sg only allows ingress on 8080 from 10.0.1.0/24 and 10.0.2.0/24, so the ALB's subnets must fall within those ranges for health checks and forwarded traffic to reach the instances.
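A sketch of the corrected target group (the health_check block is an addition, not in the original config, but makes it explicit where the ALB probes the targets):

```hcl
resource "aws_alb_target_group" "tg-alb" {
  name        = "tg-alb"
  port        = 8080            # match the port the container publishes
  protocol    = "HTTP"
  target_type = "instance"
  vpc_id      = aws_vpc.final-vpc.id

  # Assumed health check; "traffic-port" probes the same port traffic uses.
  health_check {
    path     = "/"
    port     = "traffic-port"
    protocol = "HTTP"
  }
}
```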

When I am running terraform apply

I am creating an EC2 instance and this is my main.tf file:
variable "aws_key_pair" {
  default = "~/aws/aws_keys/terraform-ec2.pem"
}

provider "aws" {
  region  = "us-east-1"
  version = "~>2.46"
}

resource "aws_security_group" "http_server_sg" {
  name   = "http_server_sg"
  vpc_id = "vpc-c5f40fb8"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = -1
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    name = "http_server_sg"
  }
}

resource "aws_instance" "http_server" {
  ami                    = "ami-0947d2ba12ee1ff75"
  key_name               = "terraform-ec2"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.http_server_sg.id]
  subnet_id              = "subnet-1169c14e"

  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ec2_user"
    private_key = file(var.aws_key_pair)
  }

  provisioner "remote_exec" {
    inline = [
      "sudo yum install httpd -y",
      "sudo service httpd start",
      "echo Welcome to virtual server setup by terraform , IP address ${self.public_dns} | sudo tee /var/www/html/index.html"
    ]
  }
}
When I run terraform apply, I get the following error:
Could not load plugin
Plugin reinitialization required. Please run "terraform init".
Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.
Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".
Failed to instantiate provisioner "remote_exec" to obtain schema: unknown
provisioner "remote_exec"
But I have already run terraform init, and terraform validate gives the same error.
It's "remote-exec", with a hyphen, not "remote_exec". Terraform treats the underscore variant as an unknown provisioner, which is exactly what the last line of the error message says.
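For reference, a corrected sketch of the provisioner block. Note that for Amazon Linux AMIs the SSH user is conventionally ec2-user (also hyphenated), so the `user = "ec2_user"` in the connection block above is likely the next thing to trip over:

```hcl
connection {
  type        = "ssh"
  host        = self.public_ip
  user        = "ec2-user"          # Amazon Linux default user, hyphenated
  private_key = file(var.aws_key_pair)
}

provisioner "remote-exec" {         # hyphen, not underscore
  inline = [
    "sudo yum install httpd -y",
    "sudo service httpd start",
    "echo Welcome to virtual server setup by terraform , IP address ${self.public_dns} | sudo tee /var/www/html/index.html"
  ]
}
```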

Can't SSH into EC2 instance created with Terraform

I am working on a very simple Terraform project, using the Windows command prompt. I only have one EC2 instance for now. This is the project structure -
terraform-project
|_ec2.tf
|_vars.tf
|_test-key
|_test-key.pub
|_.terraform
|_terraform.tfstate
The ec2.tf file is as below -
provider "aws" {
  region = "eu-central-1"
}

resource "aws_key_pair" "test-key" {
  key_name   = "test-key"
  public_key = "${file("test-key.pub")}"
}

resource "aws_instance" "my-ec2" {
  ami           = "${var.ami}"
  instance_type = "${var.instance_type}"
  key_name      = "${aws_key_pair.test-key.key_name}"

  tags = {
    Name  = "Terraform"
    Batch = "7am"
  }
}
The vars.tf file is as below -
variable "ami" {
  default = "ami-0233214e13e500f77"
}

variable "instance_type" {
  default = "t2.micro"
}
Terraform apply worked successfully and I can see the instance in the AWS Management Console. But when I try to SSH into the instance, I get permission issues:
ssh -i test-key ec2-user@54.xx.xxx.xxx
ssh: connect to host 54.xx.xxx.xxx port 22: Permission denied
The instance has default VPC and security group. All inbound and outbound traffic is allowed.
I am working behind a company proxy. Before I started I set the proxy settings on my windows command prompt -
set HTTP_PROXY=http://proxy.companytnet.net:port
set HTTPS_PROXY=http://proxy.companynet.net:port
SSH with verbose gives this:
ssh -vvv -i test-key ec2-user@54.xx.xxx.xxx
OpenSSH_for_Windows_7.7p1, LibreSSL 2.6.5
debug1: Reading configuration data C:\\Users\\M710583/.ssh/config
debug3: Failed to open file:C:/ProgramData/ssh/ssh_config error:2
debug2: resolve_canonicalize: hostname 54.xx.xxx.xxx is address
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to 54.xx.xxx.xxx [54.xx.xxx.xxx] port 22.
debug3: finish_connect - ERROR: async io completed with error: 10013, io:00000256B95289B0
debug1: connect to address 54.xx.xxx.xxx port 22: Permission denied
ssh: connect to host 54.xx.xxx.xxx port 22: Permission denied
I am able to SSH into other servers and instances, but not into the one created with Terraform. What am I doing wrong?
You need to add a proper security group. Something like:
resource "aws_security_group" "main" {
  egress = [
    {
      cidr_blocks      = ["0.0.0.0/0"]
      description      = ""
      from_port        = 0
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      protocol         = "-1"
      security_groups  = []
      self             = false
      to_port          = 0
    }
  ]
  ingress = [
    {
      cidr_blocks      = ["0.0.0.0/0"]
      description      = ""
      from_port        = 22
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      protocol         = "tcp"
      security_groups  = []
      self             = false
      to_port          = 22
    }
  ]
}

resource "aws_instance" "my-ec2" {
  ami           = "${var.ami}"
  instance_type = "${var.instance_type}"
  key_name      = "${aws_key_pair.test-key.key_name}"

  tags = {
    Name  = "Terraform"
    Batch = "7am"
  }

  vpc_security_group_ids = [aws_security_group.main.id]
}
I would recommend adding vpc_security_group_ids = [aws_security_group.main.id] to your EC2 resource block, as in the example above.
This attribute associates the instance with the security group; read more about it here.

Why can't Terraform SSH into my EC2 instance?

I am trying to SSH into a newly created EC2 instance with Terraform. My host is Windows 10, and I have no problem SSHing into the instance using the Bitvise SSH Client, but Terraform can't seem to SSH in to create a directory on the instance:
My main.tf:
provider "aws" {
  region = "us-west-2"
}

resource "aws_security_group" "instance" {
  name        = "inlets-server-instance"
  description = "Security group for the inlets server"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "tunnel" {
  ami                    = "ami-07b4f3c02c7f83d59"
  instance_type          = "t2.nano"
  key_name               = "${var.key_name}"
  vpc_security_group_ids = [aws_security_group.instance.id]

  tags = {
    Name = "inlets-server"
  }

  provisioner "local-exec" {
    command = "echo ${aws_instance.tunnel.public_ip} > ${var.public_ip_path}"
  }

  provisioner "remote-exec" {
    inline = [
      "mkdir /home/${var.ssh_user}/ansible",
    ]

    connection {
      type        = "ssh"
      host        = "${file("${var.public_ip_path}")}"
      user        = "${var.ssh_user}"
      private_key = "${file("${var.private_key_path}")}"
      timeout     = "1m"
      agent       = false
    }
  }
}
My variables.tf:
variable "key_name" {
  description = "Name of the SSH key pair generated in Amazon EC2."
  default     = "aws_ssh_key"
}

variable "public_ip_path" {
  description = "Path to the file that contains the instance's public IP address."
  default     = "ip_address.txt"
}

variable "private_key_path" {
  description = "Path to the private SSH key, used to access the instance."
  default     = "aws_ssh_key.pem"
}

variable "ssh_user" {
  description = "SSH user name to connect to your instance."
  default     = "ubuntu"
}
All I get are attempted connections:
aws_instance.tunnel (remote-exec): Connecting to remote host via SSH...
aws_instance.tunnel (remote-exec): Host: XX.XXX.XXX.XXX
aws_instance.tunnel (remote-exec): User: ubuntu
aws_instance.tunnel (remote-exec): Password: false
aws_instance.tunnel (remote-exec): Private key: true
aws_instance.tunnel (remote-exec): Certificate: false
aws_instance.tunnel (remote-exec): SSH Agent: false
aws_instance.tunnel (remote-exec): Checking Host Key: false
and it finally timeouts with:
Error: timeout - last error: dial tcp: lookup XX.XXX.XXX.XXX
: no such host
Any ideas?
You didn't say anything about your network structure. Is your Windows 10 machine inside the VPC? If not, do you have an internet gateway, route table, and NAT gateway properly set up?
Also look closely at the error: the line break after the IP address in "dial tcp: lookup XX.XXX.XXX.XXX" suggests the hostname ends in a newline. echo appends one when local-exec writes ip_address.txt, and file() preserves it, so the SSH client is asked to resolve an address with a trailing newline, which fails the lookup.
It would be cleaner and safer to reference the address through Terraform itself (self.public_ip, or a dedicated aws_eip resource) instead of reading it back from a file on disk. The local-exec round trip also creates an implicit ordering dependency between the two provisioners that can cause problems.
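A sketch of the provisioner with the file round trip removed (assuming Terraform 0.12+ syntax; inside the resource's own provisioner, self.public_ip refers to the instance being created, avoiding both the implicit dependency and the trailing newline):

```hcl
provisioner "remote-exec" {
  inline = [
    "mkdir /home/${var.ssh_user}/ansible",
  ]

  connection {
    type        = "ssh"
    host        = self.public_ip              # no file() round trip, no stray newline
    user        = var.ssh_user
    private_key = file(var.private_key_path)
    timeout     = "1m"
    agent       = false
  }
}
```

If you still need the IP from the file for some other reason, trimspace(file(var.public_ip_path)) would strip the trailing newline.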