unable to connect with ssh key in terraform aws ec2

I created an SSH key with
ssh-keygen -t rsa -b 4096 -C "example@example.com" -f terraform_ec2_key
and referenced terraform_ec2_key in my Terraform file, which looks like this:
provider "aws" {
region = "us-west-2"
}
resource "aws_key_pair" "terraform_ec2_key" {
key_name = "terraform_ec2_key"
public_key = "${file("terraform_ec2_key.pub")}"
}
resource "aws_instance" "myfirst" {
ami = "ami-830c94e3"
instance_type = "t2.micro"
key_name = aws_key_pair.terraform_ec2_key.key_name
tags = {
name = "terraformec2"
}
}
When I ran terraform apply, it applied successfully and the instance was created. But when I connect to the instance using the connect command provided by AWS,
ssh -i terraform_ec2_key ubuntu@ec2-xx-xxx-x-xxx.us-west-2.compute.amazonaws.com
I get ubuntu@ec2-54-202-8-184.us-west-2.compute.amazonaws.com: Permission denied (publickey).
Here is what I have tried: chmod 400 terraform_ec2_key, but that did not help. I also opened port 22 in the inbound rules, but I still get the same error. Is there anything I am missing?

Based on the comments.
The code is perfectly fine in itself. The issue was caused by the wrong AMI ID. Instead of an Ubuntu AMI, a different one was used, for example Amazon Linux 2. This results in Permission denied (publickey) because Amazon Linux 2 uses the ec2-user login, not ubuntu.
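As an illustrative sketch (not part of the original answer), the hard-coded AMI ID could be replaced with an aws_ami data source so the instance is guaranteed to be an Ubuntu image and the ubuntu login user applies. The owner ID below is Canonical's AWS account and the name filter matches Ubuntu 20.04 images; verify both for your region and release:

# Sketch: look up a current Ubuntu AMI instead of hard-coding the ID.
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}

resource "aws_instance" "myfirst" {
  ami           = data.aws_ami.ubuntu.id # instead of a hard-coded AMI ID
  instance_type = "t2.micro"
  key_name      = aws_key_pair.terraform_ec2_key.key_name
}

With an Ubuntu AMI, ssh -i terraform_ec2_key ubuntu@... is the correct login; for Amazon Linux 2 it would be ec2-user@... instead.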

Related

Terraform: EC2 instance is still creating after running shell script using provisioner

I need some help from a Terraform expert.
I'm creating an EC2 instance and installing some packages on it using Terraform.
To install the packages, I used Terraform provisioners. This is the EC2 instance part:
resource "aws_instance" "lms_server" {
ami = var.AMI
instance_type = var.instance_type
key_name = var.private_key
iam_instance_profile = aws_iam_instance_profile.instance_profile.name
associate_public_ip_address = true
subnet_id = aws_subnet.main-public-1.id
vpc_security_group_ids = [aws_security_group.security_rule.id]
provisioner "file" {
source = "script.sh"
destination = "/tmp/script.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/script.sh",
"/tmp/script.sh ${var.gh_user} ${var.gh_token} ${var.gh_url} ${aws_db_instance.lms_mysql_db.address} ${var.db_name} ${var.db_username} ${var.db_password} ${aws_sqs_queue.lms_queue.id} ${var.sqs_name} ${self.public_ip} ${var.aws_region} ${var.bucket_name}",
]
}
connection {
type = "ssh"
host = "${self.public_ip}"
user = var.user_name
private_key = "${file("lms_key.pem")}"
}
root_block_device {
volume_size = var.volume_size
}
tags = {
lms_app = "lms_server"
}
}
As you can see, I access the EC2 instance via SSH, copy the script.sh file that contains all the commands, and then run it. I think the EC2 instance was created successfully and all packages were installed, but the Terraform CLI keeps showing the instance as "still creating...".
This means the creation of the EC2 instance never finishes, so if I interrupt it (Ctrl+C) and run terraform apply again, it destroys and recreates the instance from scratch and installs all the packages again.
This happens every time I update the Terraform configuration, even for resources other than the EC2 instance.
I'm looking forward to getting some help with this problem.
Thank you for your time and consideration.
Can you show us the script.sh content? Maybe something keeps that script running so that it cannot finish.
You can try using user_data instead. It will run your script during the bootstrap phase after the EC2 instance is created, and its output is logged on the instance.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#user_data
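A minimal sketch of that approach, assuming script.sh is turned into a template (script.sh.tpl is a hypothetical name) and showing only a few of the question's variables. Note that self.public_ip cannot be referenced inside user_data; the script would have to read the instance metadata service (http://169.254.169.254/latest/meta-data/public-ipv4) instead:

resource "aws_instance" "lms_server" {
  ami                    = var.AMI
  instance_type          = var.instance_type
  key_name               = var.private_key
  subnet_id              = aws_subnet.main-public-1.id
  vpc_security_group_ids = [aws_security_group.security_rule.id]

  # Rendered once and executed by cloud-init at first boot; output is written
  # to /var/log/cloud-init-output.log on the instance.
  user_data = templatefile("${path.module}/script.sh.tpl", {
    gh_user = var.gh_user
    db_host = aws_db_instance.lms_mysql_db.address
    db_name = var.db_name
  })

  tags = {
    lms_app = "lms_server"
  }
}

With this, Terraform finishes as soon as the instance is running; it does not wait for the script, so a long-running script no longer blocks terraform apply.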

Ansible with Terraform run playbook against one of multiple hosts

My end goal is to create a 3-node kubeadm Kubernetes cluster with Terraform and/or Ansible.
As of now I am provisioning three identical instances with Terraform.
Then, with remote-exec and inline commands, I install the packages that all three instances share.
Now I want to install specific packages on only one of those three instances, and I am trying to achieve this using local-exec.
I am struggling to target just one instance with local-exec. I know how to connect to all of them and execute the playbook against all three instances, but the end goal is to run it against one instance only.
The code snippet:
resource "aws_instance" "r100c96" {
count = 3
ami = "ami-0b9064170e32bde34"
instance_type = "t2.micro"
key_name = local.key_name
tags = {
Name = "terra-ans${count.index}"
}
provisioner "remote-exec" {
connection {
host = "${self.public_ip}"
type = "ssh"
user = local.ssh_user
private_key = file(local.private_key_path)
}
inline = ["sudo hostnamectl set-hostname test"]
}
provisioner "local-exec" {
command = "ansible-playbook -i ${element((aws_instance.r100c96.*.public_ip),0)}, --private-key ${local.private_key_path} helm.yaml"
}
...
}
Thanks,
You can use a null_resource and run the provisioner for the selected instance only, once all three instances in aws_instance.r100c96 are provisioned.
I think that instead of *, you should use count.index; on each loop iteration it will pass that specific VM's IP.
Also, there are multiple ways to provision a VM using Ansible.
Consider whether you can dynamically build your hosts file and provision the instances in parallel instead of one at a time.
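A minimal sketch of the null_resource approach, assuming the same key path and playbook name as in the question (the resource name and triggers block are illustrative):

# Run the playbook once, against the first instance only, after all three exist.
resource "null_resource" "helm_install" {
  # Wait until all three instances are provisioned.
  depends_on = [aws_instance.r100c96]

  # Re-run the playbook if the first instance is replaced.
  triggers = {
    target_ip = aws_instance.r100c96[0].public_ip
  }

  provisioner "local-exec" {
    command = "ansible-playbook -i ${aws_instance.r100c96[0].public_ip}, --private-key ${local.private_key_path} helm.yaml"
  }
}

Keeping the playbook in a null_resource also means changing it does not force the instances themselves to be recreated.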

How can a process started using terraform user_data on an aws_instance continue to run beyond terraform apply finishing?

I have a terraform file which creates an aws_instance and calls a process foo on that instance which should run for 10 mins. This process simulates some traffic which I can monitor elsewhere. I can manually ssh to the instance and run the process and it behaves as expected.
The problem is that the process seems to stop running once terraform apply has completed setting everything up (this is my assumption, judging by when I stop seeing traffic and when terraform apply finishes).
If my assumption is correct, is there a way to start the process so that it outlives terraform apply?
My terraform file creates the aws_instance like so, where foo has been previously uploaded to another bucket:
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
user_data = <<-EOF
#!/bin/bash
aws s3 cp s3://foobar-bucket/foo ./
chmod +x foo
sudo ./foo
EOF
tags = {
Name = "terraform-example"
}
}
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
user_data = <<-EOF
#!/bin/bash
aws s3 cp s3://foobar-bucket/foo ./
chmod +x foo
sudo nohup ./foo & disown
EOF
tags = {
Name = "terraform-example"
}
}
You can find the difference between &, nohup, and disown here, or use another combination depending on your needs.
How can I start a remote service using Terraform provisioning?

Terraform - AWS EC2 Instance - Userdata is available on the device but does not run when I apply the template

I've been struggling with this issue for the last few days. I have an instance that is created using a terraform template with userdata that is specified from a template file. The instance uses the Debian Jessie community AMI and I am able to view the user data on the instance through wget. I've tried a copy of the AMI, I've tried using #cloud-boothook, and I've tried putting the userdata script inside the main TF template.
I checked the cloud-init-output.log and there is an error that my research suggests is a problem with environment variables and sudo, but I would still expect the temp file to be created because there is no sudo call preceding that echo line (from here: user data script fails without giving a reason).
util.py[WARNING]: Running scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
Section of my TF template that creates the instance:
resource "aws_instance" "example" {
ami = "ami-116d857a"
instance_type = "t2.micro"
source_dest_check = "False"
subnet_id = "${aws_subnet.public.id}"
key_name = "my-generic-keyname"
vpc_security_group_ids = ["${aws_security_group.vpn-sg.id}"]
user_data = "${data.template_file.bootscript.rendered}"
depends_on = ["aws_subnet.public"]
}
User data contained in the template file:
#!/bin/bash
echo 'Running user data' > /tmp/user-script-output.txt
sudo apt install strongswan-starter -y
sudo apt install curl -y
# Write secrets file
[cat <<-EOF > /etc/ipsec.secrets
# This file holds shared secrets or RSA private keys for authentication.
# RSA private key for this host, authenticating it to any other host
# which knows the public part.
privateip : PSK "$clientPSK"
EOF
Expected Behavior:
When I SSH into the instance, I can view the metadata without a problem, but the file in /tmp isn't created, curl is not installed, and neither is the strongswan package.
I appreciate the help!

aws ec2 us-east-2 is an invalid region choice

I have created a new instance on us-east-2, configured the security groups, policies, and access rules, and I can see it running and can access it via the browser. However, when I attempt to connect to it via the aws-cli, it tells me us-east-2 is an invalid choice for the region.
What am I missing here? It is clearly a region on AWS.
I am running Ubuntu and aws --version results in: aws-cli/1.2.9 Python/3.4.3 Linux/3.13.0-100-generic
I am trying to connect to the instance via aws ec2 get-console-output --instance-id XXXXXXXX --region us-east-2
Your CLI version is outdated by 3 years and doesn't know about the new regions. Can you upgrade the CLI to 1.10.x and try again?
$ aws --version
aws-cli/1.10.66 Python/2.7.12 Linux/3.14.35-28.38.amzn1.x86_64 botocore/1.4.56
$ aws ec2 describe-regions
{
    "Regions": [
        {
            "Endpoint": "ec2.us-east-2.amazonaws.com",
            "RegionName": "us-east-2"
        },