Use acm certificate for lightsail instance via terraform - amazon-web-services

I have an ACM certificate already generated using Terraform and used for an ECS load balancer. Now I have to host a WordPress website using Lightsail at the same domain, so I want to reuse the same certificate's public key, but aws_acm_certificate doesn't expose it as an output. I found out I can get it using the AWS CLI like:
aws acm get-certificate --certificate-arn {certificate:arn} --output text --query CertificateChain
I tried to pass the certificate to an aws_lightsail_key_pair but I am getting different errors. For example:
resource "null_resource" "write_acm_public_key" {
  provisioner "local-exec" {
    command     = "aws acm get-certificate --certificate-arn ${aws_acm_certificate.default.arn} --output text --query CertificateChain > ${path.module}/acm-public-key.cert"
    interpreter = ["/bin/bash", "-c"]
  }

  depends_on = [aws_acm_certificate.default]
}

data "local_file" "acm_public_key" {
  filename = "${path.module}/acm-public-key.cert"

  depends_on = [null_resource.write_acm_public_key]
}

resource "aws_lightsail_key_pair" "key" {
  count = var.wp_enable ? 1 : 0

  name       = "${local.resource_prefix}-key"
  public_key = data.local_file.acm_public_key.content

  depends_on = [aws_acm_certificate.default, null_resource.write_acm_public_key, data.local_file.acm_public_key]
}
It fails with:
No such file or directory
How can I reuse the same certificate?
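The "No such file or directory" error typically means the local_file data source tries to read the file before the null_resource's provisioner has written it. One way to avoid the intermediate file entirely is to run the CLI through an external data source, which returns JSON directly to Terraform — a sketch, assuming the AWS CLI and bash are on the PATH:

```hcl
data "external" "acm_chain" {
  # The CLI's --query shapes the output into the flat JSON object of
  # strings that the external data source requires.
  program = [
    "bash", "-c",
    "aws acm get-certificate --certificate-arn '${aws_acm_certificate.default.arn}' --query '{chain: CertificateChain}' --output json",
  ]
}

# The chain is then available as data.external.acm_chain.result.chain
```

Note, though, that an ACM certificate chain is a TLS certificate, not an SSH public key, so aws_lightsail_key_pair is unlikely to accept it as public_key regardless of how it is fetched.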

Related

Create a key pair and download the .pem file with Terraform (AWS)

I could create the key pair myKey on AWS with Terraform:
resource "tls_private_key" "pk" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "kp" {
  key_name   = "myKey" # Create a "myKey" on AWS
  public_key = tls_private_key.pk.public_key_openssh
}
But I couldn't download the myKey.pem file. Is it possible to download the myKey.pem file with Terraform?
Feb 2022 Update:
No, it's not possible to download the myKey.pem file with Terraform. Instead, we can create a myKey.pem file that contains the same private key as the key pair myKey on AWS, so the myKey and myKey.pem created by Terraform are the same as those we would manually create and download from AWS. Here is the code (I used Terraform v0.15.4):
resource "tls_private_key" "pk" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "kp" {
  key_name   = "myKey" # Create a "myKey" on AWS
  public_key = tls_private_key.pk.public_key_openssh

  provisioner "local-exec" { # Create a "myKey.pem" on your computer
    command = "echo '${tls_private_key.pk.private_key_pem}' > ./myKey.pem"
  }
}
Don't forget to make the myKey.pem file readable only by you, by running the command below before you ssh to your EC2 instance.
chmod 400 myKey.pem
Otherwise the error below occurs.
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0664 for 'myKey.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "myKey.pem": bad permissions
ubuntu@35.72.30.251: Permission denied (publickey).
The Terraform resource tls_private_key has attributes that can be exported, such as private_key_pem.
The way you would download myKey.pem using Terraform would be by exporting the attribute private_key_pem to a local file.
So in your case, it would be:
resource "tls_private_key" "pk" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "kp" {
  key_name   = "myKey" # Create a "myKey" on AWS
  public_key = tls_private_key.pk.public_key_openssh
}

resource "local_file" "ssh_key" {
  filename = "${aws_key_pair.kp.key_name}.pem"
  content  = tls_private_key.pk.private_key_pem
}
Note:
You can't export the content of the attribute private_key_pem using either of the resources tls_private_key and local_file. If you really want to, here's how.
The file myKey.pem is generated by Terraform with permissions 0755. You would need to change this to 0400.
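On newer versions of the hashicorp/local provider, the permission fix can be done in Terraform itself via the file_permission argument, so no separate chmod is needed — a sketch, assuming a provider version that supports the argument:

```hcl
resource "local_file" "ssh_key" {
  filename        = "${aws_key_pair.kp.key_name}.pem"
  content         = tls_private_key.pk.private_key_pem
  file_permission = "0400" # private key readable only by the owner
}
```

There is also a local_sensitive_file resource that additionally keeps the content out of plan output.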

Revoke client vpn ingress on destroy

I'm trying to revoke a VPN client ingress rule on 'destroy' in Terraform. Everything worked fine with Terraform 0.12.
Unfortunately, after upgrading to version 0.14, the same method no longer works.
Here is what I have:
resource "null_resource" "client_vpn_ingress" {
  provisioner "local-exec" {
    when    = create
    command = "aws ec2 authorize-client-vpn-ingress --client-vpn-endpoint-id ${aws_ec2_client_vpn_endpoint.vpn_endpoint.id} --target-network-cidr ${var.vpc_cidr_block} --authorize-all-groups --region ${var.aws_region} --profile ${var.profile}"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "aws ec2 revoke-client-vpn-ingress --client-vpn-endpoint-id ${aws_ec2_client_vpn_endpoint.vpn_endpoint.id} --target-network-cidr ${var.vpc_cidr_block} --revoke-all-groups --region ${var.aws_region} --profile ${var.profile}"
  }
}
and here is the error message:
Error: Invalid reference from destroy provisioner

  on vpn_client_endpoint.tf line 84, in resource "null_resource" "client_vpn_ingress":
  84: command = "aws ec2 revoke-client-vpn-ingress --client-vpn-endpoint-id ${aws_ec2_client_vpn_endpoint.vpn_endpoint.id} --target-network-cidr ${var.vpc_cidr_block} --revoke-all-groups --region ${var.aws_region} --profile ${var.profile}"

Destroy-time provisioners and their connection configurations may only reference attributes of the related resource, via 'self', 'count.index', or 'each.key'.
References to other resources during the destroy phase can cause dependency cycles and interact poorly with create_before_destroy.
Unfortunately I'm no longer able to use Terraform 0.12.
Does anyone have any idea how to revoke it on 'terraform destroy' in version >= 0.14?
As of version 2.70.0 of the Terraform AWS provider (see this GitHub comment), you can now do something like this:
resource "aws_ec2_client_vpn_authorization_rule" "vpn_auth_rule" {
  depends_on = [
    aws_ec2_client_vpn_endpoint.vpn
  ]

  client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.vpn.id
  target_network_cidr    = "0.0.0.0/0"
  authorize_all_groups   = true
}
This way the ingress rule is handled as a first-class resource in state by Terraform, and you won't have to worry about if or when the command is executed.
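If a destroy-time local-exec is still needed for some other cleanup, the constraint from the error message can be satisfied by copying the required values into the null_resource's triggers and referencing them through self — a sketch, untested:

```hcl
resource "null_resource" "client_vpn_ingress" {
  # Values captured at create time; destroy provisioners may only
  # reference the resource itself, so they read them via self.triggers.
  triggers = {
    endpoint_id = aws_ec2_client_vpn_endpoint.vpn_endpoint.id
    cidr        = var.vpc_cidr_block
    region      = var.aws_region
    profile     = var.profile
  }

  provisioner "local-exec" {
    when    = destroy
    command = "aws ec2 revoke-client-vpn-ingress --client-vpn-endpoint-id ${self.triggers.endpoint_id} --target-network-cidr ${self.triggers.cidr} --revoke-all-groups --region ${self.triggers.region} --profile ${self.triggers.profile}"
  }
}
```

Note that changing any trigger value forces the null_resource to be replaced.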

Elastic IP for autoscaling group terraform

What's the best approach to use the same IP on an autoscaling group without using a load balancer?
I need a Route 53 subdomain to route to an instance in the autoscaling group.
For now I try to associate an Elastic IP with a network interface.
I have this:
resource "aws_eip" "one_vault" {
  vpc                       = true
  network_interface         = "${aws_network_interface.same_ip.id}"
  associate_with_private_ip = "10.0.1.16"
}

resource "aws_network_interface" "same_ip" {
  subnet_id   = "subnet-567uhbnmkiu"
  private_ips = ["10.0.1.16"]
}

resource "aws_launch_configuration" "launch_config" {
  image_id = "${var.ami}"
  key_name = "${var.keyname}"
}
You have to do it in your user data. https://forums.aws.amazon.com/thread.jspa?threadID=52601
#!/bin/bash
# configure AWS
aws configure set aws_access_key_id {MY_ACCESS_KEY}
aws configure set aws_secret_access_key {MY_SECRET_KEY}
aws configure set region {MY_REGION}
# associate Elastic IP
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
ALLOCATION_ID={MY_EIP_ALLOC_ID}
aws ec2 associate-address --instance-id $INSTANCE_ID --allocation-id $ALLOCATION_ID --allow-reassociation
Terraform doesn't support this functionality, as autoscaling group instances are managed by the cloud provider (AWS), not by Terraform.
For more details:
https://github.com/hashicorp/terraform/issues/7060
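Rather than hard-coding access keys in user data as above, the instance can be granted permission to call ec2:AssociateAddress through an instance profile, so the CLI picks up credentials automatically — a sketch with hypothetical resource names:

```hcl
resource "aws_iam_role" "eip_role" {
  name = "eip-associate-role" # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "eip_policy" {
  name = "eip-associate"
  role = aws_iam_role.eip_role.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["ec2:AssociateAddress", "ec2:DescribeAddresses"]
      Resource = "*"
    }]
  })
}

resource "aws_iam_instance_profile" "eip_profile" {
  name = "eip-associate-profile"
  role = aws_iam_role.eip_role.name
}

resource "aws_launch_configuration" "launch_config" {
  image_id             = "${var.ami}"
  key_name             = "${var.keyname}"
  iam_instance_profile = "${aws_iam_instance_profile.eip_profile.name}"
}
```

The user-data script can then drop the three `aws configure set` lines and just run the associate-address call.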

How to get private ip of EC2 instances spin up by ASG using Terraform

I have tried the following code in order to obtain IPs from an ASG that was created using Terraform. Is this good practice or bad? I got the correct output as I expected.
data "aws_instances" "test" {
  instance_tags {
    Environment = "${var.environment}"
    instance    = "${var.db_instance_name}"
  }

  instance_state_names = ["running"]

  depends_on = ["aws_sqs_queue.ansible", "aws_autoscaling_group.sample"]
}
output.tf
output "privateips" {
  value = "${data.aws_instances.test.private_ips}"
}
When creating the ASG, add a local provisioner at the end to execute a local script that interacts with AWS using the cli, so that you can query the ASG IPs:
resource "aws_autoscaling_group" "artifactory" {
  name_prefix          = "${var.env}-Application-ASG-"
  vpc_zone_identifier  = ["${var.app_subnets}"]
  max_size             = "${var.asg_max}"
  min_size             = "${var.asg_min}"
  desired_capacity     = "${var.asg_desired}"
  force_delete         = true
  launch_configuration = "${aws_launch_configuration.application.name}"
  target_group_arns    = ["${aws_alb_target_group.application.arn}"]

  provisioner "local-exec" {
    command = "./getips.sh"
  }
}
script:
#!/bin/bash
# ASG and REGION must be set to the autoscaling group name and AWS region
ips=""
ids=""
# wait until the ASG reports at least one instance ID
while [ "$ids" = "" ]; do
  ids=$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names "$ASG" --region "$REGION" --query "AutoScalingGroups[].Instances[].InstanceId" --output text)
  sleep 1
done
# collect the private IP of each instance
for ID in $ids; do
  IP=$(aws ec2 describe-instances --instance-ids "$ID" --region "$REGION" --query "Reservations[].Instances[].PrivateIpAddress" --output text)
  ips="$ips,$IP"
done
You can get the IPs of the instances in a JSON structure with a single AWS command line execution:
aws ec2 describe-instances \
--filters Name=tag:aws:autoscaling:groupName,Values=$ASG \
--query 'Reservations[*].Instances[*].{"private_ip":PrivateIpAddress}' \
--output json
Sample output:
[
  [
    {
      "private_ip": "10.24.2.120"
    }
  ],
  [
    {
      "private_ip": "10.24.1.147"
    }
  ]
]
This script takes advantage of the fact that an autoscaling group adds a tag to each instance it launches, where the value of that tag is the name of the ASG.
You can place this code directly in the ASG resource definition using the local-exec trick presented by @victorm:
resource "aws_autoscaling_group" "ecs" {
  name = var.asg_name
  ...

  provisioner "local-exec" {
    command = "aws ec2 describe-instances --filters Name=tag:aws:autoscaling:groupName,Values=${var.asg_name} --query 'Reservations[*].Instances[*].{private_ip:PrivateIpAddress}' --output json"
  }
}
I added this code to one of my own deployments to make sure it worked. It remains to be worked out how you'd grab and use the output of the execution. I did something slightly different with this code: I created an output that emits the command without executing it, and then copy/paste it into my terminal window to run it. I haven't found the need to fully automate the process beyond that one manual step. I use that trick (outputting a shell command that can be copy/pasted and executed) for a number of things.
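The copy/paste trick described above can be sketched as an output, assuming a var.asg_name variable holding the ASG name:

```hcl
output "asg_private_ips_command" {
  # Emits the CLI command as a string; run it manually to list the
  # private IPs of the instances the ASG has launched.
  value = "aws ec2 describe-instances --filters Name=tag:aws:autoscaling:groupName,Values=${var.asg_name} --query 'Reservations[*].Instances[*].PrivateIpAddress' --output json"
}
```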
You can create a data source like this and output the private or public IPs:
data "aws_instances" "web_instances" {
  # Note: without instance_tags filters this matches every running
  # instance in the account/region, not just the ASG's instances
  instance_state_names = ["running"]
}

output "instance_state_privip" {
  description = "Instance Private IPs"
  value       = data.aws_instances.web_instances.private_ips
}

output "instance_state_pubip" {
  description = "Instance Public IPs"
  value       = data.aws_instances.web_instances.public_ips
}

Getting IAM username in terraform

We have many IAM users, all creating self-serve infrastructure on EC2 using Terraform. Users don't necessarily set the key for their instances, so it's hard to tie an instance to a particular user. I realize we could dig through CloudTrail to find out which users are creating instances, but it seems like it would be simpler to tag the instances with the current IAM username.
The problem is Terraform doesn't appear to expose this - I can use aws_caller_identity or aws_canonical_user_id, but they both appear to return the organization account, not the specific IAM username. Is there a data source in Terraform that will return the IAM user creating the instances?
It looks like aws_caller_identity doesn't actually call the STS GetCallerIdentity endpoint, which would be able to provide the information you need - specifically the UserId and the Arn of the user running the command.
Instead it takes the simpler option and just returns the account ID that the AWS client has already defined.
So you have a couple of options here. You could raise a pull request to have the aws_caller_identity data source actually call the STS GetCallerIdentity endpoint, or you could shell out using a local provisioner and use that to tag your resources.
Obviously, if people are writing Terraform that directly uses the raw resources Terraform provides, then you can't really enforce this other than having something kill anything that's not tagged - and that still leaves the issue of people tagging things with someone else's UserId or Arn.
If instead you have a set of modules that people source and use, then you could do something ugly like this in the modules that create the EC2 instances:
resource "aws_instance" "instance" {
  ami           = "ami-123456"
  instance_type = "t2.micro"

  tags {
    Name = "HelloWorld"
  }

  lifecycle {
    ignore_changes = ["tags.Owner"]
  }

  provisioner "local-exec" {
    command = <<EOF
owner=`aws sts get-caller-identity --output text --query 'Arn' | cut -d"/" -f2`
aws ec2 create-tags --resources ${self.id} --tags Key=Owner,Value=$${owner}
EOF
  }
}
The above Terraform will create an EC2 instance as normal but then ignore the "Owner" tag. After creating the instance it will run a local shell script that fetches the IAM account name/role for the user and then create an "Owner" tag for the instance using that value.
To handle multiple instances (using count), you can refer to the code below:
resource "aws_instance" "instance" {
  count                   = "${var.instance_number}"
  ami                     = "ami-xxxxxx"
  instance_type           = "${var.instance_type}"
  security_groups         = "${concat(list("sg-xxxxxx"), var.security_groups)}"
  disable_api_termination = "${var.termination_protection}"
  subnet_id               = "${var.subnet_id}"
  iam_instance_profile    = "test_role"

  tags {
    Name        = "prod-${var.cluster_name}-${var.service_name}-${count.index + 1}"
    Environment = "prod"
    Product     = "${var.cluster_name}"
  }

  lifecycle {
    ignore_changes = ["tags.LaunchedBy"]
  }

  provisioner "local-exec" {
    command = <<EOF
launched_by=`aws iam get-user --profile prod | python -mjson.tool | grep UserName | awk '{print $2;exit; }'`
aws ec2 create-tags --resources ${self.id} --tags Key=LaunchedBy,Value=$${launched_by}
EOF
  }
}