Cannot provision aws_spot_instance via terraform - amazon-web-services

I am attempting to spin up a spot instance via Terraform. When I try to use a provisioner block (either "remote-exec" or "file"), it fails and I see an SSH error in DEBUG-level output. When I switch from a spot instance request to a standard aws_instance resource declaration, the provisioning works fine.
Code not working:
resource "aws_spot_instance_request" "worker01" {
ami = "ami-0cb95574"
spot_price = "0.02"
instance_type = "m3.medium"
vpc_security_group_ids = [ "${aws_security_group.ssh_access.id}", "${aws_security_group.tcp_internal_access.id}","${aws_security_group.splunk_access.id}","${aws_security_group.internet_access.id}" ]
subnet_id = "..."
associate_public_ip_address = true
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("${var.private_key_path}")}"
}
provisioner "remote-exec" {
inline = [
"touch foo",
]
}
}
Error:
aws_spot_instance_request.worker01 (remote-exec): Connecting to remote host via SSH...
aws_spot_instance_request.worker01 (remote-exec): Host:
aws_spot_instance_request.worker01 (remote-exec): User: ec2-user
2017/09/01 16:17:52 [DEBUG] plugin: terraform: remote-exec-provisioner (internal) 2017/09/01 16:17:52 handshaking with SSH
aws_spot_instance_request.worker01 (remote-exec): Password: false
aws_spot_instance_request.worker01 (remote-exec): Private key: true
aws_spot_instance_request.worker01 (remote-exec): SSH Agent: true
2017/09/01 16:17:52 [DEBUG] plugin: terraform: remote-exec-provisioner (internal) 2017/09/01 16:17:52 handshake error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
2017/09/01 16:17:52 [DEBUG] plugin: terraform: remote-exec-provisioner (internal) 2017/09/01 16:17:52 Retryable error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Working code:
resource "aws_instance" "worker01" {
ami = "ami-0cb95574"
instance_type = "m3.medium"
vpc_security_group_ids = [ "${aws_security_group.ssh_access.id}", "${aws_security_group.tcp_internal_access.id}","${aws_security_group.splunk_access.id}","${aws_security_group.internet_access.id}" ]
subnet_id = "..."
associate_public_ip_address = true
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("${var.private_key_path}")}"
}
provisioner "remote-exec" {
inline = [
"touch foo",
]
}
}
I have tried a few different iterations of the non-working code (including a silly attempt to hard-code a public IP for a spot instance, and an attempted self-reference to the spot instance's public IP, which gave a "no such attribute" error). Unfortunately, I could not find anyone with similar issues via Google. From what I have read, I should be able to provision a spot instance in this manner.
Thanks for any help you can provide.

You need to add wait_for_fulfillment = true to your spot instance request or the resource will return before the instance is created.
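Not the asker's verified fix, just a minimal sketch of how the request might look with that flag set. The explicit host line, and the public_ip attribute it relies on, are assumptions to check against your provider version:
# Sketch only: same request as above, with wait_for_fulfillment added.
resource "aws_spot_instance_request" "worker01" {
  ami                         = "ami-0cb95574"
  spot_price                  = "0.02"
  instance_type               = "m3.medium"
  associate_public_ip_address = true
  wait_for_fulfillment        = true   # block until the spot request is fulfilled
  # security groups, subnet, etc. as before

  connection {
    type        = "ssh"
    host        = "${self.public_ip}"  # assumption: set the host explicitly from the fulfilled request
    user        = "ec2-user"
    private_key = "${file("${var.private_key_path}")}"
  }

  provisioner "remote-exec" {
    inline = ["touch foo"]
  }
}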

Related

terraform v0.12.21 throws "Failed to read ssh private key: no key found"

Not able to connect to an EC2 instance from Terraform. The same key pair works if I create the EC2 instance manually (not via Terraform), which confirms my key pair is correct. Here is the code that I'm trying to run. The error I get: `aws_instance.ec2_test_instance: Provisioning with 'remote-exec'...
Error: Failed to read ssh private key: no key found
Error: Error import KeyPair: MissingParameter: The request must contain the parameter PublicKeyMaterial
status code: 400, request id: `
resource "aws_instance" "ec2_test_instance" {
ami = var.instance_test_ami
instance_type = var.instance_type
subnet_id = var.aws_subnet_id
key_name = aws_key_pair.deployer.key_name
tags = {
Name = var.environment_tag
}
connection {
type = "ssh"
host = self.public_ip
user = "centos"
private_key = "file(path.root/my-key)"
}
provisioner "remote-exec" {
inline = [
"sudo yum -y install wget, unzip",
"sudo yum -y install java-1.8.0-openjdk",
]
}
You will need to use ${} interpolation syntax in your path:
private_key = file("${path.module}/my-key")
In the documentation, the example shows ${} around the actual file path within the argument field:
https://www.terraform.io/docs/configuration/functions/file.html
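Applied to the asker's connection block, the corrected version might look roughly like this (a sketch; the ${path.module} prefix assumes the key file sits next to the configuration):
connection {
  type        = "ssh"
  host        = self.public_ip
  user        = "centos"
  # call the file() function instead of quoting the whole expression as a string
  private_key = file("${path.module}/my-key")
}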

Terraform aws: [WARN] retryable error: dial tcp: lookup self.public_ip on 127.0.0.53:53: no such host

Issue:
Hello All,
Thanks for your time. I am new to Terraform and to DevOps in general.
I feel I am doing something wrong in the provisioner connection block.
I'm trying to create an Ansible master/slave configuration using Terraform. For the master to be able to talk to the slaves, the master node's SSH public key needs to be present in each slave's .ssh/authorized_keys, so I'm trying to SSH in and push the master's public key while creating each slave.
For some reason I am not able to SSH into the slave during creation. I have tried everything I could think of and have gone through lots of forums. I'm sure I must be doing something wrong here.
Any help would be appreciated.
Regards.
Terraform Version
Terraform v0.13.0
Terraform Configuration Files
variable "region" {
default = "us-east-1"
}
variable "type" {
default = "t2.micro"
}
variable "ec2LinuxAmi" {
type = map(string)
default = {
us-east-1 = "ami-0bcc094591f354be2"
}
}
variable "keyname" {
default = "terraformKeys"
}
variable "privateKeyPath" {
description = "Path to private key"
default = "/home/userName/.ssh/id_rsa"
}
variable "awsKey" {
default = "terraformKeys.pem"
}
variable "user_names" {
description = "Create IAM users with these names"
type = list(string)
default = ["ansibleMaster"]
}
provider "aws" {
region = var.region
shared_credentials_file = "/home/userName/.aws/credentials"
profile = "default"
}
resource "aws_security_group" "port_22_ingress_globally_accessible" {
name = "port_22_ingress_globally_accessible"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "linux"{
count = length(var.user_names)
ami = lookup(var.ec2LinuxAmi, var.region)
instance_type = var.type
security_groups = [ "port_22_ingress_globally_accessible" ]
key_name = var.keyname
tags = {
Name = var.user_names[count.index]
}
provisioner "file" {
source = "foo"
destination = "/tmp/foo"
}
connection {
type = "ssh"
user = "ubuntu"
host = "self.public_ip"
port = 22
private_key = "${file("/home/userName/.ssh/id_rsa")}"
}
}
Debug Output
2020/08/30 19:10:30 [WARN] Provider "registry.terraform.io/hashicorp/aws" produced an unexpected new value for aws_instance.linux[0], but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .disable_api_termination: was null, but now cty.False
- .ebs_optimized: was null, but now cty.False
- .hibernation: was null, but now cty.False
- .monitoring: was null, but now cty.False
- .iam_instance_profile: was null, but now cty.StringVal("")
- .credit_specification: block count changed from 0 to 1
2020/08/30 19:10:30 [TRACE] eval: *terraform.EvalMaybeTainted
2020/08/30 19:10:30 [TRACE] eval: *terraform.EvalWriteState
2020/08/30 19:10:30 [TRACE] EvalWriteState: recording 0 dependencies for aws_instance.linux[0]
2020/08/30 19:10:30 [TRACE] EvalWriteState: writing current state object for aws_instance.linux[0]
2020/08/30 19:10:30 [TRACE] eval: *terraform.EvalApplyProvisioners
2020/08/30 19:10:30 [TRACE] EvalApplyProvisioners: provisioning aws_instance.linux[0] with "file"
aws_instance.linux[0]: Provisioning with 'file'...
2020-08-30T19:10:30.228-0400 [DEBUG] plugin.terraform: file-provisioner (internal) 2020/08/30 19:10:30 using private key for authentication
2020-08-30T19:10:30.229-0400 [DEBUG] plugin.terraform: file-provisioner (internal) 2020/08/30 19:10:30 [DEBUG] Connecting to self.public_ip:22 for SSH
2020-08-30T19:10:30.248-0400 [DEBUG] plugin.terraform: file-provisioner (internal) 2020/08/30 19:10:30 [ERROR] connection error: dial tcp: lookup self.public_ip on 127.0.0.53:53: no such host
The debug ends with:
"Error: timeout - last error: SSH authentication failed
(ubuntu#18.204.3.15:22): ssh: handshake failed: ssh: unable to
authenticate, attempted methods [none publickey], no supported methods
remain".
Your host will be just the literal string "self.public_ip":
host = "self.public_ip"
It should be:
host = self.public_ip
Also, the connection block should be inside the provisioner block:
provisioner "file" {
source = "foo"
destination = "/tmp/foo"
connection {
type = "ssh"
user = "ubuntu"
host = self.public_ip
port = 22
private_key = "${file("/home/userName/.ssh/id_rsa")}"
}
}
Lastly, the aws_key_pair resource is not being created, but maybe it was just excluded from the question.
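If it really is missing, a minimal aws_key_pair sketch that matches the var.keyname used above could look like this (the public key path is an assumption):
# Sketch only: registers the local public key under the name the instance expects.
resource "aws_key_pair" "terraformKeys" {
  key_name   = var.keyname                              # "terraformKeys"
  public_key = file("/home/userName/.ssh/id_rsa.pub")   # assumed path to the matching public key
}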

Terraform Resource: Connection Error while executing apply?

I am trying to log in to the EC2 instance that Terraform will create with the following code:
resource "aws_instance" "sess1" {
ami = "ami-c58c1dd3"
instance_type = "t2.micro"
key_name = "logon"
connection {
host= self.public_ip
user = "ec2-user"
private_key = file("/logon.pem")
}
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
But this gives me an error:
PS C:\Users\Amritvir Singh\Documents\GitHub\AWS-Scribble\Terraform> terraform apply
provider.aws.region
The region where AWS operations will take place. Examples
are us-east-1, us-west-2, etc.
Enter a value: us-east-1
Error: Invalid function argument
on Session1.tf line 13, in resource "aws_instance" "sess1":
13: private_key = file("/logon.pem")
Invalid value for "path" parameter: no file exists at logon.pem; this function
works only with files that are distributed as part of the configuration source
code, so if this file will be created by a resource in this configuration you
must instead obtain this result from an attribute of that resource.
How do I pass the key from the resource to the provisioner at runtime without logging into the console?
Have you tried using the full path? This is especially beneficial if you are using modules.
I.e.:
private_key = file("${path.module}/logon.pem")
Or I think even this will work:
private_key = file("./logon.pem")
I believe your existing code is looking for the file at the root of your filesystem.
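In context, the corrected connection block could look like this (a sketch assuming the key file actually lives in the module directory):
connection {
  host        = self.public_ip
  user        = "ec2-user"
  # look for the key next to the configuration rather than at the filesystem root
  private_key = file("${path.module}/logon.pem")
}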
The connection block should be in the provisioner block:
resource "aws_instance" "sess1" {
ami = "ami-c58c1dd3"
instance_type = "t2.micro"
key_name = "logon"
provisioner "remote-exec" {
connection {
host= self.public_ip
user = "ec2-user"
private_key = file("/logon.pem")
}
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
The above assumes that everything else is correct, e.g. that the key file exists and the security groups allow an SSH connection.

Why can't Terraform SSH into my EC2 instance?

I am trying to SSH into a newly created EC2 instance with Terraform. My host is Windows 10, and I have no problems SSHing into the instance using the Bitvise SSH Client from my host, but Terraform can't seem to SSH in to create a directory on the instance:
My main.tf:
provider "aws" {
region = "us-west-2"
}
resource "aws_security_group" "instance" {
name = "inlets-server-instance"
description = "Security group for the inlets server"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "tunnel" {
ami = "ami-07b4f3c02c7f83d59"
instance_type = "t2.nano"
key_name = "${var.key_name}"
vpc_security_group_ids = [aws_security_group.instance.id]
tags = {
Name = "inlets-server"
}
provisioner "local-exec" {
command = "echo ${aws_instance.tunnel.public_ip} > ${var.public_ip_path}"
}
provisioner "remote-exec" {
inline = [
"mkdir /home/${var.ssh_user}/ansible",
]
connection {
type = "ssh"
host = "${file("${var.public_ip_path}")}"
user = "${var.ssh_user}"
private_key = "${file("${var.private_key_path}")}"
timeout = "1m"
agent = false
}
}
}
My variables.tf:
variable "key_name" {
description = "Name of the SSH key pair generated in Amazon EC2."
default = "aws_ssh_key"
}
variable "public_ip_path" {
description = "Path to the file that contains the instance's public IP address"
default = "ip_address.txt"
}
variable "private_key_path" {
description = "Path to the private SSH key, used to access the instance."
default = "aws_ssh_key.pem"
}
variable "ssh_user" {
description = "SSH user name to connect to your instance."
default = "ubuntu"
}
All I get are attempted connections:
aws_instance.tunnel (remote-exec): Connecting to remote host via SSH...
aws_instance.tunnel (remote-exec): Host: XX.XXX.XXX.XXX
aws_instance.tunnel (remote-exec): User: ubuntu
aws_instance.tunnel (remote-exec): Password: false
aws_instance.tunnel (remote-exec): Private key: true
aws_instance.tunnel (remote-exec): Certificate: false
aws_instance.tunnel (remote-exec): SSH Agent: false
aws_instance.tunnel (remote-exec): Checking Host Key: false
and it finally times out with:
Error: timeout - last error: dial tcp: lookup XX.XXX.XXX.XXX
: no such host
Any ideas?
You didn't describe your network structure.
Is your win10 machine inside the VPC? If not, do you have internet gateway, routing table, NAT gateway properly set up?
It would be cleaner and safer to create an Elastic IP resource so that Terraform knows the instance's IP address directly, instead of trying to read it back from the machine. Sure, the local-exec will be quicker than the remote-exec, but you create an implicit dependency that might generate problems.
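For illustration, a rough sketch of that suggestion (not from the original answer; the resource names are mine, and moving the remote-exec into a null_resource is an assumption to avoid a dependency cycle between the instance and the Elastic IP):
# Sketch: attach an Elastic IP so Terraform knows the address directly,
# instead of reading it back from a file written by local-exec.
resource "aws_eip" "tunnel" {
  instance = aws_instance.tunnel.id
  vpc      = true
}

# Assumed restructuring: run the remote-exec from a null_resource so it can
# reference the EIP without creating a cycle inside aws_instance.tunnel.
resource "null_resource" "ansible_dir" {
  provisioner "remote-exec" {
    inline = [
      "mkdir /home/${var.ssh_user}/ansible",
    ]

    connection {
      type        = "ssh"
      host        = aws_eip.tunnel.public_ip
      user        = var.ssh_user
      private_key = file(var.private_key_path)
      timeout     = "1m"
      agent       = false
    }
  }
}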

How to run an apache server in custom vpc(non default)aws through terraform?

I created a custom VPC in AWS through Terraform with 1 public subnet and 2 private subnets. Now I need to launch an instance running Apache in a private subnet through Terraform. My code to launch an instance in the private subnet is this:
resource "aws_instance" "my_apache" {
ami = "ami-8437a5e4"
key_name = "clust"
subnet_id = "${aws_subnet.my_private1.id}"
vpc_security_group_ids = ["sg-40542d3b"]
availability_zone = "us-west-2a"
instance_type = "t2.micro"
tags {
Name = "apache"
}
provisioner "remote-exec" {
inline = [
"sudo apt-get update",
"sudo apt-get install apache2",
"sudo service apache2 start"
]
}
}
The instance is launching, but the Apache server is not running in the instance. I am getting an error like this:
aws_instance.my_apache (remote-exec): Connecting to remote host via SSH...
aws_instance.my_apache (remote-exec): Host: 172.16.2.163
aws_instance.my_apache (remote-exec): User: root
aws_instance.my_apache (remote-exec): Password: false
aws_instance.my_apache (remote-exec): Private key: false
aws_instance.my_apache (remote-exec): SSH Agent: true
aws_instance.my_apache: Still creating... (3m0s elapsed)
^CInterrupt received. Gracefully shutting down...
aws_instance.my_apache: Still creating... (3m10s elapsed)
aws_instance.my_apache (remote-exec): Connecting to remote host via SSH...
aws_instance.my_apache (remote-exec): Host: 172.16.2.163
aws_instance.my_apache (remote-exec): User: root
aws_instance.my_apache (remote-exec): Password: false
aws_instance.my_apache (remote-exec): Private key: false
aws_instance.my_apache (remote-exec): SSH Agent: true
It keeps on going like this.
What could be the issue? How do I make Apache run on that instance?
First of all, you are creating your instance in a private subnet, so please make sure you have connectivity to it from the machine where you are running Terraform.
Secondly, your logs show:
aws_instance.my_apache (remote-exec): Password: false
aws_instance.my_apache (remote-exec): Private key: false
Use the provisioner's connection block:
connection {
  type        = "ssh"
  user        = "root"
  private_key = "${var.private_key}"
}
Refer: https://www.terraform.io/docs/provisioners/connection.html
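Putting that together, the resource might look roughly like this (a sketch, not a verified fix; the var.private_key variable and the -y flag on apt-get are assumptions, and the user may need to be ubuntu rather than root depending on the AMI):
resource "aws_instance" "my_apache" {
  ami                    = "ami-8437a5e4"
  key_name               = "clust"
  subnet_id              = "${aws_subnet.my_private1.id}"
  vpc_security_group_ids = ["sg-40542d3b"]
  availability_zone      = "us-west-2a"
  instance_type          = "t2.micro"

  tags {
    Name = "apache"
  }

  connection {
    type        = "ssh"
    user        = "root"                  # may need to be "ubuntu" on Ubuntu AMIs
    private_key = "${var.private_key}"    # assumed variable holding the key contents
  }

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y apache2",  # -y added so the install does not prompt
      "sudo service apache2 start"
    ]
  }
}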