ssh key pair in terraform - amazon-web-services

Can you please tell me a way to pass an SSH key pair in Terraform for an EC2 spin-up?
variable "public_path" {
default = "D:\"
}
resource "aws_key_pair" "app_keypair" {
public_key = file(var.public_path)
key_name = "my_key"
}
resource "aws_instance" "web" {
ami = "ami-12345678"
instance_type = "t1.micro"
key_name = aws_key_pair.app_keypair
security_groups = [ "${aws_security_group.test_sg.id}" ]
}
Error: Invalid value for "path" parameter: failed to read D:".

Bash: tree
.
├── data
│ └── key
└── main.tf
1 directory, 2 files
Above is what my file system looks like (I'm not on Windows). You were passing a directory and assuming that key_name would tell Terraform to look for a key with that name inside it. But the file() function has no idea what key_name is; that is a value local to the aws_key_pair resource. So make sure you give the file() function the full path to the key file.
Look below for my code. You also passed aws_key_pair.app_keypair to your aws_instance resource, but that's an object containing several attributes. You need to specify which attribute you want to pass, in this case aws_key_pair.app_keypair.key_name. AWS will then stand up the EC2 instance, look for a key pair with that name, and associate the two.
provider "aws" {
  profile = "myprofile"
  region  = "us-west-2"
}

variable "public_path" {
  default = "./data/key"
}

resource "aws_key_pair" "app_keypair" {
  public_key = file(var.public_path)
  key_name   = "somekeyname"
}

resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t1.micro"
  key_name      = aws_key_pair.app_keypair.key_name
}
Here is my plan output; you can see the key is getting injected correctly. This is the same key used in the Terraform docs, so it's safe to post here.
Terraform will perform the following actions:

  # aws_instance.web will be created
  + resource "aws_instance" "web" {
      <...omitted for Stack Overflow brevity...>
      + key_name = "somekeyname"
      <...omitted for Stack Overflow brevity...>
    }

  # aws_key_pair.app_keypair will be created
  + resource "aws_key_pair" "app_keypair" {
      + arn         = (known after apply)
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + key_name    = "somekeyname"
      + key_pair_id = (known after apply)
      + public_key  = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 email@example.com"
    }

Plan: 2 to add, 0 to change, 0 to destroy.
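Side note: if you don't already have a key file at ./data/key, one way to generate a pair locally (my assumption, not something the answer requires) is ssh-keygen; note that it writes the public key to a separate .pub file, so public_path should then point at data/key.pub:

# generates data/key (private) and data/key.pub (public); flags and paths are illustrative
ssh-keygen -t rsa -b 4096 -f data/key -N "" -C "email@example.com"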

Related

"Invalid function argument" with Userdata in Terraform

I'm trying to pass user data via a file so the code will look less clumsy, but I'm having trouble. I've tried all the different combinations, but nothing is working.
I went through the Terraform documentation, and it doesn't give any special instructions for the path value.
Folder structure:
project1/env/dr/compute/main.tf
module "share_server" {
count = 2
source = "../../../../terraform_modules/modules/compute/"
ami = data.aws_ami.amazonlinux2.id
instance_type = "t3.micro"
availability_zone = data.aws_availability_zones.az.names[count.index]
subnet_id = data.aws_subnets.app_subnet.ids[count.index]
associate_public_ip_address = "false"
key_name = "app7"
vpc_security_group_ids = ["sg-08d38198dc153c410"]
instance_root_device_size = 20
kms_key_id = "ea88e727-e506-4530-b92f-2827d8f9c94e"
volume_type = "gp3"
platform = "linux"
backup = true
Environment = "dr"
server_role = "Application_Server"
server_component = "share_servers"
hostname = "app-dr-test-10"
tags = {
Name = "${local.instance_name}-share-${count.index}"
}
}
My EC2 module resides in the below folder structure:
project1/modules/compute/ec2.tf
project1/modules/compute/userdata/share_userdata.tpl
The ec2.tf code is below. I have removed the bottom half of the code so the post won't be too big to read.
resource "aws_instance" "ec2" {
ami = var.ami
instance_type = var.instance_type
availability_zone = var.availability_zone
subnet_id = var.subnet_id
associate_public_ip_address = var.associate_public_ip_address
user_data = templatefile("userdata/share_userdata.tpl",
{ hostname = var.hostname }
)
Error:
PS B:\PubOps\app7_terraform\environments\dr\compute> terraform apply
╷
│ Error: Invalid function argument
│
│   on ..\..\..\..\terraform_modules\modules\compute\main.tf line 10, in resource "aws_instance" "ec2":
│   10:   user_data = templatefile("userdata/share_userdata.tpl",
│   11:     {
│   12:       hostname = var.hostname
│   13:     })
│
│ Invalid value for "path" parameter: no file exists at userdata/share_userdata.tpl; this function works only with files that are distributed as part of the
│ configuration source code, so if this file will be created by a resource in this configuration you must instead obtain this result from an attribute of that
│ resource.
╵
User data:
#!/bin/bash
yum update -y

### hostname
sudo hostnamectl set-hostname ${hostname}
echo "127.0.0.1 ${hostname}
${hostname} localhost4 localhost4.localdomain4" > /etc/hosts
echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg

# EFS utility and mounting
yum install -y amazon-efs-utils
References:
The same code is mentioned on GitHub; maybe it is working for the author, but not for me: https://github.com/kunduso/ec2-userdata-terraform/blob/add-userdata/ec2.tf
My goal is to set up user data and pass variables via the AWS Parameter Store as shown in the below URL, but I couldn't get past the basic setup.
https://skundunotes.com/2021/11/17/manage-sensitive-variables-in-aws-ec2-user-data-with-terraform/
I tried pointing at the file like this: ./share_userdata.tpl.
I tried the absolute path b/project1/dr/compute/share_userdata.tpl.
I also tried giving $module.path/share_userdata.tpl.
None of them worked.
The error is rather clear:
no file exists at userdata/share_userdata.tpl
Thus you must ensure that, in the folder from which you run Terraform, templatefile("userdata/share_userdata.tpl", ...) can find a subfolder called userdata containing a file share_userdata.tpl.
You need to pass the full or relative path to your template file. For a relative path you can try the below code, e.g.:
resource "aws_instance" "ec2" {
ami = var.ami
instance_type = var.instance_type
availability_zone = var.availability_zone
subnet_id = var.subnet_id
associate_public_ip_address = var.associate_public_ip_address
user_data = templatefile("./userdata/share_userdata.tpl",
{ hostname = var.hostname }
)
Here ./ indicates the current folder, assuming ec2.tf and the userdata folder exist in the same path, for example project1/modules/compute/.
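Note that a relative path in templatefile resolves against the directory where you run terraform, not against the module's own directory, which matters here because the template ships inside the module. A sketch of the more robust variant using Terraform's built-in path.module (other arguments unchanged from the code above):

resource "aws_instance" "ec2" {
  # ...other arguments as in the code above (omitted here)...

  # path.module resolves to the directory containing this .tf file,
  # so the lookup no longer depends on where terraform is run from
  user_data = templatefile("${path.module}/userdata/share_userdata.tpl",
    { hostname = var.hostname }
  )
}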

Using Terraform to create an AWS EC2 bastion

I am trying to spin up an AWS bastion host on AWS EC2, using the Terraform module provided by Guimove. I am getting stuck on the bastion_host_key_pair field. I need to provide a key pair that can be used to launch the EC2 template, but the bucket (aws_s3_bucket.bucket) that needs to contain the public key of the key pair gets created during the module run, so the key isn't there when it tries to launch the instance, and it fails. It feels like a chicken-and-egg scenario, so I am obviously doing something wrong. What am I doing wrong?
Error:
╷
│ Error: Error creating Auto Scaling Group: AccessDenied: You are not authorized to use launch template: lt-004b0af2895c684b3
│ status code: 403, request id: c6096e0d-dc83-4384-a036-f35b8ca292f8
│
│ with module.bastion.aws_autoscaling_group.bastion_auto_scaling_group,
│ on .terraform\modules\bastion\main.tf line 300, in resource "aws_autoscaling_group" "bastion_auto_scaling_group":
│ 300: resource "aws_autoscaling_group" "bastion_auto_scaling_group" {
│
╵
Terraform:
resource "tls_private_key" "bastion_host" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "bastion_host" {
key_name = "bastion_user"
public_key = tls_private_key.bastion_host.public_key_openssh
}
resource "aws_s3_bucket_object" "bucket_public_key" {
bucket = aws_s3_bucket.bucket.id
key = "public-keys/${aws_key_pair.bastion_host.key_name}.pub"
content = aws_key_pair.bastion_host.public_key
kms_key_id = aws_kms_key.key.arn
}
module "bastion" {
source = "Guimove/bastion/aws"
bucket_name = "${var.identifier}-ssh-bastion-bucket-${var.env}"
region = var.aws_region
vpc_id = var.vpc_id
is_lb_private = "false"
bastion_host_key_pair = aws_key_pair.bastion_host.key_name
create_dns_record = "false"
elb_subnets = var.public_subnet_ids
auto_scaling_group_subnets = var.public_subnet_ids
instance_type = "t2.micro"
tags = {
Name = "SSH Bastion Host - ${var.identifier}-${var.env}",
}
}
I had the same issue. The fix was to go into the AWS Marketplace, accept the EULA, and subscribe to the AMI I was trying to use.

How to decrypt windows administrator password in terraform?

I'm provisioning a single Windows server for testing with Terraform in AWS. Every time, I need to decrypt my Windows password with my PEM file to connect. Instead, I set the Terraform argument get_password_data and stored my password_data in the tfstate file. Now how do I decrypt it with the interpolation function rsadecrypt?
Please find my Terraform code below.
### Resource for EC2 instance creation ###
resource "aws_instance" "ec2" {
  ami               = "${var.ami}"
  instance_type     = "${var.instance_type}"
  key_name          = "${var.key_name}"
  subnet_id         = "${var.subnet_id}"
  security_groups   = ["${var.security_groups}"]
  availability_zone = "${var.availability_zone}"
  private_ip        = "x.x.x.x"
  get_password_data = "true"

  connection {
    password = "${rsadecrypt(self.password_data)}"
  }

  root_block_device {
    volume_type           = "${var.volume_type}"
    volume_size           = "${var.volume_size}"
    delete_on_termination = "true"
  }

  tags {
    "Cost Center" = "R1"
    "Name"        = "AD-test"
    "Purpose"     = "Task"
    "Server Name" = "Active Directory"
    "SME Name"    = "Ravi"
  }
}

output "instance_id" {
  value = "${aws_instance.ec2.id}"
}

### Resource for EBS volume creation ###
resource "aws_ebs_volume" "additional_vol" {
  availability_zone = "${var.availability_zone}"
  size              = "${var.size}"
  type              = "${var.type}"
}

### Output of Volume ID ###
output "vol_id" {
  value = "${aws_ebs_volume.additional_vol.id}"
}

### Resource for Volume attachment ###
resource "aws_volume_attachment" "attach_vol" {
  device_name  = "${var.device_name}"
  volume_id    = "${aws_ebs_volume.additional_vol.id}"
  instance_id  = "${aws_instance.ec2.id}"
  skip_destroy = "true"
}
The password is encrypted using the key pair you specified when launching the instance; you still need that key to decrypt it, as password_data is just the base64-encoded, encrypted password data.
You should use ${rsadecrypt(self.password_data, file("/path/to/private_key.pem"))}
This is for good reason: you really don't want just a base64-encoded password floating around in state.
Short version:
You are missing the second argument in the interpolation function.
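As a minimal sketch of the fix (the type and user arguments are illustrative additions on my part, not from the original question; the actual fix is only the second rsadecrypt argument, and the .pem path is a placeholder):

connection {
  type     = "winrm"          # illustrative; typical for a Windows host
  user     = "Administrator"  # illustrative default Windows admin user
  password = "${rsadecrypt(self.password_data, file("/path/to/private_key.pem"))}"
}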
I know this is not related to the actual question, but it might be useful if you don't want to expose your private key in a public environment (e.g. Git): I would rather print the encrypted password.
resource "aws_instance" "ec2" {
ami = .....
instance_type = .....
security_groups = [.....]
subnet_id = .....
iam_instance_profile = .....
key_name = .....
get_password_data = "true"
tags = {
Name = .....
}
}
Like this:
output "Administrator_Password" {
  value = [
    aws_instance.ec2.password_data
  ]
}
Then:
1. Get the base64 password and put it in a file called pwdbase64.txt.
2. Run this command to decode the base64 into a bin file:
certutil -decode pwdbase64.txt password.bin
3. Run this command to decrypt your password.bin:
openssl rsautl -decrypt -inkey privatekey.openssh -in password.bin
If you don't know how to play with openssl, please check this post.
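Side note: on OpenSSL 3.0 and later, rsautl is deprecated; the equivalent pkeyutl invocation (same key and ciphertext files as in the step above) is:

# same key and input files as the rsautl command above
openssl pkeyutl -decrypt -inkey privatekey.openssh -in password.bin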
privatekey.openssh should look like:
-----BEGIN RSA PRIVATE KEY-----
MIICXAIBAAKBgQCd+qQbLiSVuNludd67EtepR3g1+VzV6gjsZ+Q+RtuLf88cYQA3
6M4rjVAy......1svfaU/powWKk7WWeE58dnnTZoLvHQ
ZUvFlHE/LUHCQkx8sSECQGatJGiS5fgZhvpzLn4amNwKkozZ3tc02fMzu8IgdEit
jrk5Zq8Vg71vH1Z5OU0kjgrR4ZCjG9ngGdaFV7K7ki0=
-----END RSA PRIVATE KEY-----
public key should look like:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB......iFZmwQ==
The Terraform key pair code should look like:
resource "aws_key_pair" "key_pair_ec2" {
  key_name   = "key_pair_ec2"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB......iFZmwQ=="
}
PS: You can use PuTTYgen to generate the keys.
Rather than having .pem files lying around or explicitly inputting a public key, you can generate the key directly with tls_private_key and then copy the resulting password into the AWS SSM Parameter Store, so you can retrieve it from there after your infrastructure is stood up.
Here's the way I generate the key:
resource "tls_private_key" "instance_key" {
algorithm = "RSA"
}
resource "aws_key_pair" "instance_key_pair" {
key_name = "${local.name_prefix}-instance-key"
public_key = tls_private_key.instance_key.public_key_openssh
}
In your aws_instance you want to be sure these are set:
key_name          = aws_key_pair.instance_key_pair.key_name
get_password_data = true
Finally, store the resulting password in SSM (NOTE: you need to wrap the private key in nonsensitive()):
resource "aws_ssm_parameter" "windows_ec2" {
depends_on = [aws_instance.winserver_instance[0]]
name = "/Microsoft/AD/${var.environment}/ec2-win-password"
type = "SecureString"
value = rsadecrypt(aws_instance.winserver_instance[0].password_data, nonsensitive(tls_private_key.instance_key
.private_key_pem))
}
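Once applied, you can read the password back with the AWS CLI (a usage sketch; "prod" stands in for whatever var.environment was set to):

# --with-decryption is required to get the SecureString plaintext
aws ssm get-parameter --name "/Microsoft/AD/prod/ec2-win-password" --with-decryption --query Parameter.Value --output text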

Creating multiple AWS EBS volumes and attaching them to an instance using Terraform

I am creating a Terraform configuration that allows the user to input the number of AWS EBS volumes they want to attach to an EC2 instance.
variable "number_of_ebs" {}
resource "aws_volume_attachment" "ebs_att" {
count = "${var.number_of_ebs}"
device_name= "/dev/sdh"
volume_id = "${element(aws_ebs_volume.newVolume.*.id, count.index)}"
instance_id = "${aws_instance.web.id}"
}
resource "aws_instance" "web" {
ami = "ami-14c5486b"
instance_type = "t2.micro"
availability_zone = "us-east-1a"
vpc_security_group_ids=["${aws_security_group.instance.id}"]
key_name="KeyPairVirginia"
tags {
Name = "HelloWorld"
}
}
resource "aws_ebs_volume" "newVolume" {
count = "${var.number_of_ebs}"
name = "${format("vol-%02d", count.index + 1)}"
availability_zone = "us-east-1a"
size = 4
type="standard"
tags {
Name = "HelloWorld"
}
}
It surely is giving an error. I am unaware of how to dynamically assign a different name to each volume that is created, and how to get the volume IDs to attach them to the instance.
Below is the error that I get.
var.number_of_ebs
Enter a value: 2
Error: aws_ebs_volume.newVolume[0]: : invalid or unknown key: name
Error: aws_ebs_volume.newVolume[1]: : invalid or unknown key: name
If you check the docs for the resource aws_ebs_volume, you will see that the argument name is not supported, which explains the error message. An EBS volume gets its display name from its Name tag instead.
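A sketch of the corrected resource, with the question's format() expression moved into the Name tag (everything else unchanged):

resource "aws_ebs_volume" "newVolume" {
  count             = "${var.number_of_ebs}"
  availability_zone = "us-east-1a"
  size              = 4
  type              = "standard"

  tags {
    # the per-volume name moves into the Name tag, reusing the same format() call
    Name = "${format("vol-%02d", count.index + 1)}"
  }
}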

Terraform Spot Instance inside VPC

I'm trying to launch a spot instance inside a VPC using Terraform.
I had a working aws_instance setup, and just changed it to aws_spot_instance_request, but I always get this error:
* aws_spot_instance_request.machine: Error requesting spot instances: InvalidParameterCombination: VPC security groups may not be used for a non-VPC launch
status code: 400, request id: []
My .tf file looks like this:
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
resource "template_file" "userdata" {
filename = "${var.userdata}"
vars {
domain = "${var.domain}"
name = "${var.name}"
}
}
resource "aws_spot_instance_request" "machine" {
ami = "${var.amiPuppet}"
key_name = "${var.key}"
instance_type = "c3.4xlarge"
subnet_id = "${var.subnet}"
vpc_security_group_ids = [ "${var.securityGroup}" ]
user_data = "${template_file.userdata.rendered}"
wait_for_fulfillment = true
spot_price = "${var.price}"
tags {
Name = "${var.name}.${var.domain}"
Provider = "Terraform"
}
}
resource "aws_route53_record" "machine" {
zone_id = "${var.route53ZoneId}"
name = "${aws_spot_instance_request.machine.tags.Name}"
type = "A"
ttl = "300"
records = ["${aws_spot_instance_request.machine.private_ip}"]
}
I don't understand why it isn't working. The documentation states that aws_spot_instance_request supports all parameters of aws_instance, so I just changed a working aws_instance to an aws_spot_instance_request (with the addition of the price). Am I doing something wrong?
I originally opened this as an issue in the Terraform repo, but no one replied.
It's a bug in Terraform; it seems to be fixed in master.
https://github.com/hashicorp/terraform/issues/1339