How to get a private key from Secrets Manager? - amazon-web-services

I need to store a private key in AWS, because when I create an EC2 instance I need that private key to authenticate in the provisioner "remote-exec", and I don't want to keep it in the repository.
Is it a good idea to save a private key in Secrets Manager and then consume it from there?
If so, how do I save the private key in Secrets Manager and then retrieve it in Terraform with aws_secretsmanager_secret_version?
In my case, validating with file() works, but validating from the secret string fails:
connection {
  host = self.private_ip
  type = "ssh"
  user = "ec2-user"
  # private_key = file("${path.module}/key")  # <-- is working
  private_key = jsondecode(data.aws_secretsmanager_secret_version.secret_terraform.secret_string)["ec2_key"]  # <-- not working. Error: Failed to read ssh private key: no key found
}

I think the reason is how you store it. I verified the use of aws_secretsmanager_secret_version in my own sandbox account and it works. However, I stored the key as plain text, not JSON:
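For reference, a minimal sketch of how such a plain-text secret could also be created from Terraform (the secret name and resource labels are placeholders, not from the original setup):
resource "aws_secretsmanager_secret" "ec2_key" {
  # Placeholder name; any secret name works as long as the lookup matches it
  name = "ec2-ssh-private-key"
}

resource "aws_secretsmanager_secret_version" "ec2_key" {
  secret_id = aws_secretsmanager_secret.ec2_key.id
  # Store the raw PEM contents as plain text, not wrapped in JSON,
  # so the connection block can consume secret_string directly
  secret_string = file("${path.module}/key")
}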
Then I successfully used it as follows for an instance:
resource "aws_instance" "public" {
ami = "ami-02354e95b39ca8dec"
instance_type = "t2.micro"
key_name = "key-pair-name"
security_groups = [aws_security_group.ec2_sg.name]
provisioner "remote-exec" {
connection {
type = "ssh"
user = "ec2-user"
private_key = data.aws_secretsmanager_secret_version.example.secret_string
host = "${self.public_ip}"
}
inline = [
"ls -la"
]
}
depends_on = [aws_key_pair.key]
}
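The data.aws_secretsmanager_secret_version.example reference above is not shown in the answer; it would be a lookup along these lines (the secret name is a placeholder):
data "aws_secretsmanager_secret_version" "example" {
  # Resolves to the latest version of the secret that holds the plain-text key
  secret_id = "ec2-ssh-private-key" # placeholder; use your secret's name or ARN
}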

Related

GCP ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

I'm learning Terraform and want to create a new server and execute a shell script inside it using a Terraform provisioner. I'm facing this issue when Terraform tries to connect via SSH to the newly created server.
I've gone through other solutions and verified that the key is present at the correct location with 600 permissions and that the user is set by OS Login, but I'm still getting the same error.
This is my main.tf file:
provider "google" {
project = var.project
region = var.region
}
resource "google_compute_firewall" "firewall" {
name = "gritfy-firewall-externalssh"
network = "default"
allow {
protocol = "tcp"
ports = ["22","443"]
}
source_ranges = ["0.0.0.0/0"] # Not So Secure. Limit the Source Range
target_tags = ["externalssh"]
}
resource "google_compute_network" "default" {
name = "test-network"
}
# We create a public IP address for our google compute instance to utilize
resource "google_compute_address" "static" {
name = "vm-public-address"
project = var.project
region = var.region
depends_on = [google_compute_firewall.firewall]
}
resource "google_compute_instance" "dev" {
name = "devserver"
machine_type = "e2-micro"
zone = "${var.region}-a"
tags = ["externalssh", "webserver"]
boot_disk {
initialize_params {
image = "centos-cloud/centos-7"
}
}
network_interface {
network = "default"
access_config {
nat_ip = google_compute_address.static.address
}
}
provisioner "file" {
# source file name on the local machine where you execute terraform plan and apply
source = "LAMP_Stack.sh"
# destination is the file location on the newly created instance
destination = "/tmp/LAMP_Stack.sh"
connection {
host = google_compute_address.static.address
type = "ssh"
user = var.user
timeout = "500s"
private_key = file(var.privatekeypath)
}
}
# to connect to the instance after the creation and execute few commands for provisioning
provisioner "remote-exec" {
connection {
host = google_compute_address.static.address
type = "ssh"
# username of the instance would vary for each account refer the OS Login in GCP documentation
user = var.user
timeout = "500s"
# private_key being used to connect to the VM. ( the public key was copied earlier using metadata )
private_key = file(var.privatekeypath)
}
# Commands to be executed as the instance gets ready.
# set execution permission and start the script
inline = [
"chmod a+x /tmp/LAMP_Stack.sh",
"sudo /tmp/LAMP_Stack.sh"
]
}
# Ensure firewall rule is provisioned before server, so that SSH doesn't fail.
depends_on = [google_compute_firewall.firewall]
service_account {
email = var.email
scopes = ["compute-ro"]
}
}
When you created the Compute Engine instance, you did not specify metadata for the SSH RSA public key that matches the RSA private key that you are using.
Modify the username and id_rsa.pub filename.
variable "gce_ssh_user" {
default = "username"
}
variable "gce_ssh_pub_key_file" {
default = "id_rsa.pub"
}
resource "google_compute_instance" "dev" {
...
metadata = {
ssh-keys = "${var.gce_ssh_user}:${file(var.gce_ssh_pub_key_file)}"
}
}
The public key should have this format:
ssh-rsa AAAAB3NzaC1yc ... 0a9Wpd
An example command to create the SSH RSA private and public keys (id_rsa, id_rsa.pub):
ssh-keygen -t rsa -C "replace with username@hostname" -f id_rsa
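Putting the pieces together, a connection block that uses the private half of that key pair might look roughly like this (a sketch; it assumes the id_rsa file sits next to the configuration):
connection {
  host    = google_compute_address.static.address
  type    = "ssh"
  user    = var.gce_ssh_user
  timeout = "500s"
  # Must be the private key matching the id_rsa.pub added to the instance metadata above
  private_key = file("id_rsa")
}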

Issue when using Terraform to manage credentials that access RDS database

I created a secret via Terraform. The secret is for accessing an RDS database, which is also defined in Terraform. I don't want to include the username and password in code, so I created an empty secret and then added the credentials manually in the AWS console.
Then, in the RDS definition:
resource "aws_rds_cluster" "example_db_cluster" {
cluster_identifier = local.db_name
engine = "aurora-mysql"
engine_version = "xxx"
engine_mode = "xxx"
availability_zones = [xxx]
database_name = "xxx"
master_username = jsondecode(aws_secretsmanager_secret_version.db_secret_string.secret_string)["username"]
master_password = jsondecode(aws_secretsmanager_secret_version.db_secret_string.secret_string)["password"]
.....
The problem is that when I apply Terraform, the secret is empty, so Terraform won't find the strings for username and password, which causes an error. Does anyone have a better way to implement this? It feels like it's easier to just create the secret in Secrets Manager manually.
You can generate a random_password and add it to your secret using an aws_secretsmanager_secret_version.
Here's an example:
resource "random_password" "default_password" {
length = 20
special = false
}
variable "secretString" {
default = {
usernae = "dbuser"
password = random_password.default_password.result
}
type = map(string)
}
resource "aws_secretsmanager_secret" "db_secret_string" {
name = "db_secret_string"
}
resource "aws_secretsmanager_secret_version" "secret" {
secret_id = aws_secretsmanager_secret.db_secret_string.id
secret_string = jsonencode(var.secretString)
}
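The RDS cluster can then reference that secret version resource directly, so Terraform populates the secret before creating the cluster; roughly:
resource "aws_rds_cluster" "example_db_cluster" {
  # ... other arguments as in the question ...
  master_username = jsondecode(aws_secretsmanager_secret_version.secret.secret_string)["username"]
  master_password = jsondecode(aws_secretsmanager_secret_version.secret.secret_string)["password"]
}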

How to do ip forwarding with terraform on a Google Compute VM

I'm trying to learn WireGuard. I found this great tutorial on how to install it on GCP:
https://sreejithag.medium.com/set-up-wireguard-vpn-with-google-cloud-57bb3267a6ef
Very basic (for somebody new to WireGuard), but it did work. The tutorial shows a VM being provisioned with IP forwarding through the GCP web interface.
I wanted to set this up with Terraform. I searched the Terraform registry and found this:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/compute_forwarding_rule
Here's my main.tf with the virtual machine provisioning. Where would I put something like IP forwarding without Terraform complaining?
# This is the provider used to spin up the gcloud instance
provider "google" {
  project     = var.project_name
  region      = var.region_name
  zone        = var.zone_name
  credentials = "mycredentials.json"
}

# Locks the version of Terraform for this particular use case
terraform {
  required_version = "0.14.6"
}

# This creates the google instance
resource "google_compute_instance" "vm_instance" {
  name         = "development-vm"
  machine_type = var.machine_size
  tags         = ["allow-http", "allow-https", "allow-dns", "allow-tor", "allow-ssh", "allow-2277", "allow-mosh", "allow-whois", "allow-openvpn", "allow-wireguard"] # FIREWALL

  boot_disk {
    initialize_params {
      image = var.image_name
      size  = var.disk_size_gb
    }
  }

  network_interface {
    network = "default"
    # Associated our public IP address to this instance
    access_config {
      nat_ip = google_compute_address.static.address
    }
  }

  # We connect to our instance via Terraform and remotely execute our script using SSH
  provisioner "remote-exec" {
    script = var.script_path

    connection {
      type        = "ssh"
      host        = google_compute_address.static.address
      user        = var.username
      private_key = file(var.private_key_path)
    }
  }
}

# We create a public IP address for our google compute instance to utilize
resource "google_compute_address" "static" {
  name = "vm-public-address"
}
For WireGuard, you need to enable IP forwarding. The resource you are trying to use is for HTTP(S) load balancers.
Instead, enable the google_compute_instance resource attribute can_ip_forward:
can_ip_forward - (Optional) Whether to allow sending and receiving of packets with non-matching source or destination IPs. This defaults to false.
resource "google_compute_instance" "vm_instance" {
name = "development-vm"
machine_type = var.machine_size
can_ip_forward = true
....
}

Terraform Resource: Connection Error while executing apply?

I am trying to log in to the EC2 instance that Terraform will create with the following code:
resource "aws_instance" "sess1" {
ami = "ami-c58c1dd3"
instance_type = "t2.micro"
key_name = "logon"
connection {
host= self.public_ip
user = "ec2-user"
private_key = file("/logon.pem")
}
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
But this gives me an error:
PS C:\Users\Amritvir Singh\Documents\GitHub\AWS-Scribble\Terraform> terraform apply
provider.aws.region
The region where AWS operations will take place. Examples
are us-east-1, us-west-2, etc.
Enter a value: us-east-1
Error: Invalid function argument
on Session1.tf line 13, in resource "aws_instance" "sess1":
13: private_key = file("/logon.pem")
Invalid value for "path" parameter: no file exists at logon.pem; this function
works only with files that are distributed as part of the configuration source
code, so if this file will be created by a resource in this configuration you
must instead obtain this result from an attribute of that resource.
How do I pass the key from the resource to the provisioner at runtime without logging into the console?
Have you tried using the full path? This is especially beneficial if you are using modules, e.g.:
private_key = file("${path.module}/logon.pem")
Or I think even this will work:
private_key = file("./logon.pem")
I believe your existing code is looking for the file at the root of your filesystem.
connection should be in the provisioner block:
resource "aws_instance" "sess1" {
ami = "ami-c58c1dd3"
instance_type = "t2.micro"
key_name = "logon"
provisioner "remote-exec" {
connection {
host= self.public_ip
user = "ec2-user"
private_key = file("/logon.pem")
}
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
The above assumes that everything else is correct, e.g. the key file exists and security groups allow an SSH connection.

How to decrypt windows administrator password in terraform?

I'm provisioning a single Windows server for testing with Terraform in AWS. Every time, I need to decrypt my Windows password with my PEM file to connect. Instead, I set the Terraform argument get_password_data and stored the password_data in the tfstate file. Now how do I decrypt it with the interpolation function rsadecrypt?
Please find my Terraform code below:
### Resource for EC2 instance creation ###
resource "aws_instance" "ec2" {
  ami               = "${var.ami}"
  instance_type     = "${var.instance_type}"
  key_name          = "${var.key_name}"
  subnet_id         = "${var.subnet_id}"
  security_groups   = ["${var.security_groups}"]
  availability_zone = "${var.availability_zone}"
  private_ip        = "x.x.x.x"
  get_password_data = "true"

  connection {
    password = "${rsadecrypt(self.password_data)}"
  }

  root_block_device {
    volume_type           = "${var.volume_type}"
    volume_size           = "${var.volume_size}"
    delete_on_termination = "true"
  }

  tags {
    "Cost Center" = "R1"
    "Name"        = "AD-test"
    "Purpose"     = "Task"
    "Server Name" = "Active Directory"
    "SME Name"    = "Ravi"
  }
}

output "instance_id" {
  value = "${aws_instance.ec2.id}"
}

### Resource for EBS volume creation ###
resource "aws_ebs_volume" "additional_vol" {
  availability_zone = "${var.availability_zone}"
  size              = "${var.size}"
  type              = "${var.type}"
}

### Output of Volume ID ###
output "vol_id" {
  value = "${aws_ebs_volume.additional_vol.id}"
}

### Resource for Volume attachment ###
resource "aws_volume_attachment" "attach_vol" {
  device_name  = "${var.device_name}"
  volume_id    = "${aws_ebs_volume.additional_vol.id}"
  instance_id  = "${aws_instance.ec2.id}"
  skip_destroy = "true"
}
The password is encrypted using the key pair you specified when launching the instance; you still need to use it to decrypt, as password_data is just the base64-encoded encrypted password data.
You should use ${rsadecrypt(self.password_data, file("/path/to/private_key.pem"))}
This is for good reason: you really don't want just a base64-encoded password floating around in state.
Short version:
You are missing the second argument in the interpolation function.
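Applied to the configuration in the question, the connection block would look something like this (the .pem path is the placeholder from the answer above):
connection {
  # The second argument is the private key of the key pair used to launch the instance
  password = "${rsadecrypt(self.password_data, file("/path/to/private_key.pem"))}"
}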
I know this is not related to the actual question, but it might be useful if you don't want to expose your private key in a public environment (e.g. Git).
I would rather print the encrypted password:
resource "aws_instance" "ec2" {
ami = .....
instance_type = .....
security_groups = [.....]
subnet_id = .....
iam_instance_profile = .....
key_name = .....
get_password_data = "true"
tags = {
Name = .....
}
}
Like this:
output "Administrator_Password" {
  value = [
    aws_instance.ec2.password_data
  ]
}
Then:
1. Get the base64 password and put it in a file called pwdbase64.txt.
2. Run this command to decode the base64 to a bin file:
certutil -decode pwdbase64.txt password.bin
3. Run this command to decrypt your password.bin:
openssl rsautl -decrypt -inkey privatekey.openssh -in password.bin
If you don't know how to play with openssl, please check this post.
privatekey.openssh should look like:
-----BEGIN RSA PRIVATE KEY-----
MIICXAIBAAKBgQCd+qQbLiSVuNludd67EtepR3g1+VzV6gjsZ+Q+RtuLf88cYQA3
6M4rjVAy......1svfaU/powWKk7WWeE58dnnTZoLvHQ
ZUvFlHE/LUHCQkx8sSECQGatJGiS5fgZhvpzLn4amNwKkozZ3tc02fMzu8IgdEit
jrk5Zq8Vg71vH1Z5OU0kjgrR4ZCjG9ngGdaFV7K7ki0=
-----END RSA PRIVATE KEY-----
public key should look like:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB......iFZmwQ==
The Terraform key pair code should look like:
resource "aws_key_pair" "key_pair_ec2" {
  key_name   = "key_pair_ec2"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB......iFZmwQ=="
}
PS: You can use PuTTYgen to generate the keys.
Rather than having .pem files lying around or explicitly inputting a public key, you can generate the key directly with tls_private_key and then copy the resulting password into AWS SSM Parameter Store so you can retrieve it from there after your infrastructure is stood up.
Here's the way I generate the key:
resource "tls_private_key" "instance_key" {
algorithm = "RSA"
}
resource "aws_key_pair" "instance_key_pair" {
key_name = "${local.name_prefix}-instance-key"
public_key = tls_private_key.instance_key.public_key_openssh
}
In your aws_instance you want to be sure these are set:
key_name = aws_key_pair.instance_key_pair.key_name
get_password_data = true
Finally, store the resulting password in SSM (note: you need to wrap the private key in nonsensitive()):
resource "aws_ssm_parameter" "windows_ec2" {
depends_on = [aws_instance.winserver_instance[0]]
name = "/Microsoft/AD/${var.environment}/ec2-win-password"
type = "SecureString"
value = rsadecrypt(aws_instance.winserver_instance[0].password_data, nonsensitive(tls_private_key.instance_key
.private_key_pem))
}