How to decrypt the Windows administrator password in Terraform? - amazon-web-services

I'm provisioning a single Windows server for testing with Terraform in AWS. Every time, I need to decrypt my Windows password with my PEM file to connect. Instead, I set the Terraform argument get_password_data, which stores my password_data in the tfstate file. Now, how do I decrypt it with the interpolation function rsadecrypt?
Please find my Terraform code below:
### Resource for EC2 instance creation ###
resource "aws_instance" "ec2" {
  ami               = "${var.ami}"
  instance_type     = "${var.instance_type}"
  key_name          = "${var.key_name}"
  subnet_id         = "${var.subnet_id}"
  security_groups   = ["${var.security_groups}"]
  availability_zone = "${var.availability_zone}"
  private_ip        = "x.x.x.x"
  get_password_data = "true"

  connection {
    password = "${rsadecrypt(self.password_data)}"
  }

  root_block_device {
    volume_type           = "${var.volume_type}"
    volume_size           = "${var.volume_size}"
    delete_on_termination = "true"
  }

  tags {
    "Cost Center" = "R1"
    "Name"        = "AD-test"
    "Purpose"     = "Task"
    "Server Name" = "Active Directory"
    "SME Name"    = "Ravi"
  }
}

output "instance_id" {
  value = "${aws_instance.ec2.id}"
}

### Resource for EBS volume creation ###
resource "aws_ebs_volume" "additional_vol" {
  availability_zone = "${var.availability_zone}"
  size              = "${var.size}"
  type              = "${var.type}"
}

### Output of Volume ID ###
output "vol_id" {
  value = "${aws_ebs_volume.additional_vol.id}"
}

### Resource for Volume attachment ###
resource "aws_volume_attachment" "attach_vol" {
  device_name  = "${var.device_name}"
  volume_id    = "${aws_ebs_volume.additional_vol.id}"
  instance_id  = "${aws_instance.ec2.id}"
  skip_destroy = "true"
}

The password is encrypted using the key pair you specified when launching the instance; you still need that key to decrypt it, because password_data is just the base64-encoded, encrypted password data.
You should use ${rsadecrypt(self.password_data, file("/path/to/private_key.pem"))}
This is for good reason: you really don't want a merely base64-encoded password floating around in state.
Short version: you are missing the second argument to the interpolation function.
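Applied to the connection block in the question, the fix looks like this (a sketch; the key path is a placeholder, and the WinRM settings are an assumption for a Windows host):

```hcl
connection {
  type     = "winrm"                 # assumption: connecting to Windows over WinRM
  user     = "Administrator"
  password = "${rsadecrypt(self.password_data, file("/path/to/private_key.pem"))}"
}
```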

I know this is not related to the actual question, but it might be useful if you don't want to expose your private key in a public environment (e.g. Git).
I would rather print the encrypted password:
resource "aws_instance" "ec2" {
  ami                  = .....
  instance_type        = .....
  security_groups      = [.....]
  subnet_id            = .....
  iam_instance_profile = .....
  key_name             = .....
  get_password_data    = "true"

  tags = {
    Name = .....
  }
}
Like this:
output "Administrator_Password" {
  value = [
    aws_instance.ec2.password_data
  ]
}
Then:
1. Get the base64 password and put it in a file called pwdbase64.txt.
2. Run this command to decode the base64 into a binary file:
   certutil -decode pwdbase64.txt password.bin
3. Run this command to decrypt your password.bin:
   openssl rsautl -decrypt -inkey privatekey.openssh -in password.bin
If you don't know how to work with openssl, please check this post.
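If you'd rather stay inside Terraform, the same decryption can be expressed with rsadecrypt by pointing it at the private key file (a sketch; the key path is a placeholder):

```hcl
output "administrator_password" {
  value = "${rsadecrypt(aws_instance.ec2.password_data, file("/path/to/privatekey.pem"))}"
}
```

This does, of course, put the decrypted password into state and output, which is exactly what printing only the encrypted form avoids.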
privatekey.openssh should look like:
-----BEGIN RSA PRIVATE KEY-----
MIICXAIBAAKBgQCd+qQbLiSVuNludd67EtepR3g1+VzV6gjsZ+Q+RtuLf88cYQA3
6M4rjVAy......1svfaU/powWKk7WWeE58dnnTZoLvHQ
ZUvFlHE/LUHCQkx8sSECQGatJGiS5fgZhvpzLn4amNwKkozZ3tc02fMzu8IgdEit
jrk5Zq8Vg71vH1Z5OU0kjgrR4ZCjG9ngGdaFV7K7ki0=
-----END RSA PRIVATE KEY-----
public key should look like:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB......iFZmwQ==
The Terraform key pair code should look like this:
resource "aws_key_pair" "key_pair_ec2" {
  key_name   = "key_pair_ec2"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB......iFZmwQ=="
}
P.S.: You can use PuTTYgen to generate the keys.

Rather than having .pem files lying around or explicitly inputting a public key, you can generate the key directly with tls_private_key and then directly copy the resulting password into AWS SSM Parameter Store so you can retrieve it from there after your infrastructure is stood up.
Here's the way I generate the key:
resource "tls_private_key" "instance_key" {
  algorithm = "RSA"
}

resource "aws_key_pair" "instance_key_pair" {
  key_name   = "${local.name_prefix}-instance-key"
  public_key = tls_private_key.instance_key.public_key_openssh
}
In your aws_instance you want to be sure these are set:
  key_name          = aws_key_pair.instance_key_pair.key_name
  get_password_data = true
Finally, store the resulting password in SSM (NOTE: you need to wrap the private key in nonsensitive()):
resource "aws_ssm_parameter" "windows_ec2" {
  depends_on = [aws_instance.winserver_instance[0]]
  name       = "/Microsoft/AD/${var.environment}/ec2-win-password"
  type       = "SecureString"
  value      = rsadecrypt(aws_instance.winserver_instance[0].password_data, nonsensitive(tls_private_key.instance_key.private_key_pem))
}
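Once stored, other configurations (or the CLI) can read the password back through the aws_ssm_parameter data source; a minimal sketch reusing the same parameter name as above:

```hcl
data "aws_ssm_parameter" "windows_ec2_password" {
  name            = "/Microsoft/AD/${var.environment}/ec2-win-password"
  with_decryption = true   # decrypt the SecureString on read
}
```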

Related

Issue when using Terraform to manage credentials that access RDS database

I created a secret via Terraform. The secret is for accessing an RDS database, which is also defined in Terraform. I don't want to include the username and password in the secret's definition, so I created an empty secret and then added the credentials manually in the AWS console.
Then in the RDS definition:
resource "aws_rds_cluster" "example_db_cluster" {
  cluster_identifier = local.db_name
  engine             = "aurora-mysql"
  engine_version     = "xxx"
  engine_mode        = "xxx"
  availability_zones = [xxx]
  database_name      = "xxx"
  master_username    = jsondecode(aws_secretsmanager_secret_version.db_secret_string.secret_string)["username"]
  master_password    = jsondecode(aws_secretsmanager_secret_version.db_secret_string.secret_string)["password"]
  .....
}
The problem is that when I apply Terraform, because the secret is empty, Terraform can't find the username and password strings, which causes an error. Does anyone have a better way to implement this? It feels like it's easier to just create the secret in Secrets Manager manually.
You can generate a random_password and add it to your secret using an aws_secretsmanager_secret_version.
Here's an example:
resource "random_password" "default_password" {
  length  = 20
  special = false
}

# Note: a variable default can't reference another resource,
# so the credentials map lives in locals instead.
locals {
  secretString = {
    username = "dbuser"
    password = random_password.default_password.result
  }
}

resource "aws_secretsmanager_secret" "db_secret_string" {
  name = "db_secret_string"
}

resource "aws_secretsmanager_secret_version" "secret" {
  secret_id     = aws_secretsmanager_secret.db_secret_string.id
  secret_string = jsonencode(local.secretString)
}
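The RDS cluster from the question can then consume the generated credentials the same way; a sketch reusing the names above (referencing the secret version resource directly, so Terraform orders the creation correctly):

```hcl
locals {
  db_creds = jsondecode(aws_secretsmanager_secret_version.secret.secret_string)
}

resource "aws_rds_cluster" "example_db_cluster" {
  # ... other arguments as in the question ...
  master_username = local.db_creds["username"]
  master_password = local.db_creds["password"]
}
```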

ssh key pair in terraform

Can you please tell me a way to pass a key in Terraform for EC2 spin-up?
variable "public_path" {
  default = "D:\"
}

resource "aws_key_pair" "app_keypair" {
  public_key = file(var.public_path)
  key_name   = "my_key"
}

resource "aws_instance" "web" {
  ami             = "ami-12345678"
  instance_type   = "t1.micro"
  key_name        = aws_key_pair.app_keypair
  security_groups = ["${aws_security_group.test_sg.id}"]
}
Error : Invalid value for "path" parameter: failed to read D:".
Bash: tree
.
├── data
│ └── key
└── main.tf
1 directory, 2 files
Above is what my file system looks like. I'm not on Windows. You were passing the directory, thinking that key_name means it would find the name of your key in that directory. But the function file() has no idea what key_name is; that is a value local to the aws_key_pair resource. So make sure you give the file() function the full path to the file.
Look below for my code. You also passed aws_key_pair.app_keypair to your aws_instance resource, but that's an object that contains several properties. You need to specify which property you want to pass, in this case aws_key_pair.app_keypair.key_name. This tells AWS to stand up an EC2 instance, look for a key pair with that name, and associate the two together.
provider "aws" {
  profile = "myprofile"
  region  = "us-west-2"
}

variable "public_path" {
  default = "./data/key"
}

resource "aws_key_pair" "app_keypair" {
  public_key = file(var.public_path)
  key_name   = "somekeyname"
}

resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t1.micro"
  key_name      = aws_key_pair.app_keypair.key_name
}
Here is my plan output. You can see the key is getting injected correctly. This is the same key as in the Terraform docs, so it's safe to put here.
Terraform will perform the following actions:

  # aws_instance.web will be created
  + resource "aws_instance" "web" {
      <...omitted for Stack Overflow brevity...>
      + key_name = "somekeyname"
      <...omitted for Stack Overflow brevity...>
    }

  # aws_key_pair.app_keypair will be created
  + resource "aws_key_pair" "app_keypair" {
      + arn         = (known after apply)
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + key_name    = "somekeyname"
      + key_pair_id = (known after apply)
      + public_key  = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 email#example.com"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

How to get private key from secret manager?

I need to store a Private Key in AWS. Because when I create an ec2 instance from AWS I need to use this primary key to auth in provisioner "remote-exec". I don't want to save in repo AWS.
It's a good idea to save a private key in Secret Manager? And then consume it?
And in the case affirmative, How to save the primary key in Secret Manager and then retrieve in TF aws_secretsmanager_secret_version?
In my case, if I validate from a file(), it's working but if I validate from a string, is failed.
connection {
  host = self.private_ip
  type = "ssh"
  user = "ec2-user"
  #private_key = file("${path.module}/key") <-- Is working
  private_key = jsondecode(data.aws_secretsmanager_secret_version.secret_terraform.secret_string)["ec2_key"] <-- not working. Error: Failed to read ssh private key: no key found
}
I think the reason is due to how you store it. I verified the use of aws_secretsmanager_secret_version in my own sandbox account, and it works. However, I stored the key as plain text, not JSON:
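For reference, the data source used below isn't shown in the answer; a minimal version (the secret name is an assumption) would be:

```hcl
data "aws_secretsmanager_secret_version" "example" {
  secret_id = "ec2-private-key"   # hypothetical secret name holding the plain-text key
}
```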
Then I successfully used it as follows for an instance:
resource "aws_instance" "public" {
  ami             = "ami-02354e95b39ca8dec"
  instance_type   = "t2.micro"
  key_name        = "key-pair-name"
  security_groups = [aws_security_group.ec2_sg.name]

  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = data.aws_secretsmanager_secret_version.example.secret_string
      host        = "${self.public_ip}"
    }

    inline = [
      "ls -la"
    ]
  }

  depends_on = [aws_key_pair.key]
}

How to use user_data url when creating an AWS instance in terraform?

Terraform Version = 0.12
data "template_file" "user_data" {
  template = file("${path.module}/userdata.sh")
}

resource "aws_instance" "bespin-ec2-web-a" {
  ami                         = "ami-0bea7fd38fabe821a"
  instance_type               = "t2.micro"
  vpc_security_group_ids      = [aws_security_group.bespin-sg.id]
  subnet_id                   = aws_subnet.bespin-subnet-public-a.id
  associate_public_ip_address = true

  tags = {
    Name = "bespin-ec2-web-a"
  }

  user_data = data.template_file.user_data.rendered
}
I want to upload the user_data script to S3 and use it by calling its URL. How can I do that? For example:

resource "template_file" "userdata_sh" {
  template = "https://test.s3.ap-northeast-2.amazonaws.com/userdata.sh"
}
It's not 100% clear what you want to achieve; however, if the goal is to specify userdata for EC2 instances, then pointing at an .sh file in S3 is not possible.
You need to pass the userdata content directly to the aws_instance Terraform resource.
EC2/userdata
resource "aws_instance" "this" {
  ami                    = "${local.ami_this_id}"
  instance_type          = "${var.instance_type}"
  subnet_id              = "${var.subnet_id}"
  vpc_security_group_ids = "${var.security_group_ids}"
  key_name               = "${aws_key_pair.this.key_name}"
  iam_instance_profile   = "${var.ec2_instance_profile_id}"
  user_data              = data.template_file.user_data.rendered # <----- Specify userdata content

  root_block_device {
    volume_type           = "${var.root_volume_type}"
    volume_size           = "${var.root_volume_size}"
    delete_on_termination = true
  }
}
If the goal is instead to upload the script to S3, copy it into the EC2 instance, and run it there as a shell script, then upload it to S3 and copy it into the instance with AWS CLI S3 commands, or by mounting the S3 bucket inside the instance using e.g. S3 Fuse.
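If you do go the S3 route, the instance itself has to fetch the script; one common pattern (sketch only: the bucket name is hypothetical, and the instance profile must allow s3:GetObject) is a small bootstrap user_data:

```hcl
resource "aws_instance" "this" {
  # ... other arguments as above ...

  # Bootstrap script that pulls the real script from S3 and runs it
  user_data = <<-EOF
    #!/bin/bash
    aws s3 cp s3://your_s3_bucket_name/userdata.sh /tmp/userdata.sh
    bash /tmp/userdata.sh
  EOF
}
```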
S3 upload
First, use https://www.terraform.io/docs/providers/local/r/file.html
resource "local_file" "userdata_sh" {
  content  = data.template_file.user_data.rendered
  filename = "your_local_userdata_sh_path"
}
Then use https://www.terraform.io/docs/providers/aws/r/s3_bucket_object.html to upload to S3.
resource "aws_s3_bucket_object" "object" {
  bucket = "your_s3_bucket_name"
  key    = "userdata.sh"
  source = "your_local_userdata_sh_path"
  etag   = "${filemd5("your_local_userdata_sh_path")}"
}
URL in template resource
This will not be possible. The template file needs to be on your local machine. If sharing userdata.sh is the goal, then consider mounting S3 on your machine using e.g. S3 Fuse.
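As an aside, since the question uses Terraform 0.12, the template_file data source can be replaced by the built-in templatefile() function, which likewise only reads local files (a sketch based on the question's resource):

```hcl
resource "aws_instance" "bespin-ec2-web-a" {
  # ... other arguments as in the question ...

  # templatefile(path, vars) renders a local template; no variables needed here
  user_data = templatefile("${path.module}/userdata.sh", {})
}
```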

Terraform Spot Instance inside VPC

I'm trying to launch a spot instance inside a VPC using Terraform.
I had a working aws_instance setup and just changed it to aws_spot_instance_request, but I always get this error:
* aws_spot_instance_request.machine: Error requesting spot instances: InvalidParameterCombination: VPC security groups may not be used for a non-VPC launch
status code: 400, request id: []
My .tf file looks like this:
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

resource "template_file" "userdata" {
  filename = "${var.userdata}"

  vars {
    domain = "${var.domain}"
    name   = "${var.name}"
  }
}

resource "aws_spot_instance_request" "machine" {
  ami                    = "${var.amiPuppet}"
  key_name               = "${var.key}"
  instance_type          = "c3.4xlarge"
  subnet_id              = "${var.subnet}"
  vpc_security_group_ids = ["${var.securityGroup}"]
  user_data              = "${template_file.userdata.rendered}"
  wait_for_fulfillment   = true
  spot_price             = "${var.price}"

  tags {
    Name     = "${var.name}.${var.domain}"
    Provider = "Terraform"
  }
}

resource "aws_route53_record" "machine" {
  zone_id = "${var.route53ZoneId}"
  name    = "${aws_spot_instance_request.machine.tags.Name}"
  type    = "A"
  ttl     = "300"
  records = ["${aws_spot_instance_request.machine.private_ip}"]
}
I don't understand why it isn't working...
The documentation states that aws_spot_instance_request supports all parameters of aws_instance, so I just changed a working aws_instance to aws_spot_instance_request (with the addition of the spot price)... am I doing something wrong?
I originally opened this as an issue in the Terraform repo, but no one replied.
It's a bug in Terraform; it appears to be fixed in master:
https://github.com/hashicorp/terraform/issues/1339