I'm trying to revoke a VPN client ingress rule on 'destroy' in Terraform. Everything worked fine with Terraform 0.12.
Unfortunately, after upgrading to version 0.14, the same method no longer works.
Here is what I have:
resource "null_resource" "client_vpn_ingress" {
provisioner "local-exec" {
when = create
command = "aws ec2 authorize-client-vpn-ingress --client-vpn-endpoint-id ${aws_ec2_client_vpn_endpoint.vpn_endpoint.id} --target-network-cidr ${var.vpc_cidr_block} --authorize-all-groups --region ${var.aws_region} --profile ${var.profile}"
}
provisioner "local-exec" {
when = destroy
command = "aws ec2 revoke-client-vpn-ingress --client-vpn-endpoint-id ${aws_ec2_client_vpn_endpoint.vpn_endpoint.id} --target-network-cidr ${var.vpc_cidr_block} --revoke-all-groups --region ${var.aws_region} --profile ${var.profile}"
}
}
and here is the error message:
Error: Invalid reference from destroy provisioner

  on vpn_client_endpoint.tf line 84, in resource "null_resource" "client_vpn_ingress":
  84:   command = "aws ec2 revoke-client-vpn-ingress --client-vpn-endpoint-id ${aws_ec2_client_vpn_endpoint.vpn_endpoint.id} --target-network-cidr ${var.vpc_cidr_block} --revoke-all-groups --region ${var.aws_region} --profile ${var.profile}"

Destroy-time provisioners and their connection configurations may only reference attributes of the related resource, via 'self', 'count.index', or 'each.key'.

References to other resources during the destroy phase can cause dependency cycles and interact poorly with create_before_destroy.
Unfortunately I'm no longer able to use Terraform 0.12.
Does anyone have any idea how to revoke it on 'terraform destroy' in version >= 0.14?
As of version 2.70.0 of the Terraform AWS provider (see this GitHub comment), you can now do something like this:
resource "aws_ec2_client_vpn_authorization_rule" "vpn_auth_rule" {
depends_on = [
aws_ec2_client_vpn_endpoint.vpn
]
client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.vpn.id
target_network_cidr = "0.0.0.0/0"
authorize_all_groups = true
}
This way the ingress rule is handled by Terraform as a first-class resource in state, and you don't have to worry about if or when the command gets executed.
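That said, if you ever do need a destroy-time provisioner for something the provider cannot manage, the error message itself points at the supported pattern: copy the values you need into the null_resource's triggers and reference them through self, so the destroy provisioner no longer refers to other resources. A rough sketch of that workaround (not part of the answer above; the trigger names are just illustrative):

resource "null_resource" "client_vpn_ingress" {
  # Values are captured here so both provisioners can reach them via self.triggers.
  triggers = {
    endpoint_id = aws_ec2_client_vpn_endpoint.vpn_endpoint.id
    cidr        = var.vpc_cidr_block
    region      = var.aws_region
    profile     = var.profile
  }

  provisioner "local-exec" {
    command = "aws ec2 authorize-client-vpn-ingress --client-vpn-endpoint-id ${self.triggers.endpoint_id} --target-network-cidr ${self.triggers.cidr} --authorize-all-groups --region ${self.triggers.region} --profile ${self.triggers.profile}"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "aws ec2 revoke-client-vpn-ingress --client-vpn-endpoint-id ${self.triggers.endpoint_id} --target-network-cidr ${self.triggers.cidr} --revoke-all-groups --region ${self.triggers.region} --profile ${self.triggers.profile}"
  }
}

Keep in mind that changing any of the trigger values forces the null_resource to be replaced.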
Related
I am trying to log in to an EC2 instance that Terraform will create with the following code:
resource "aws_instance" "sess1" {
ami = "ami-c58c1dd3"
instance_type = "t2.micro"
key_name = "logon"
connection {
host= self.public_ip
user = "ec2-user"
private_key = file("/logon.pem")
}
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
But this gives me an error:
PS C:\Users\Amritvir Singh\Documents\GitHub\AWS-Scribble\Terraform> terraform apply
provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Enter a value: us-east-1

Error: Invalid function argument

  on Session1.tf line 13, in resource "aws_instance" "sess1":
  13:   private_key = file("/logon.pem")

Invalid value for "path" parameter: no file exists at logon.pem; this function works only with files that are distributed as part of the configuration source code, so if this file will be created by a resource in this configuration you must instead obtain this result from an attribute of that resource.
How do I safely pass the key from the resource to the provisioner at runtime without logging into the console?
Have you tried using the full path? That is especially helpful if you are using modules.
For example:
private_key = file("${path.module}/logon.pem")
Or I think even this will work
private_key = file("./logon.pem")
I believe your existing code is looking for the file at the root of your filesystem.
The connection block should be inside the provisioner block:
resource "aws_instance" "sess1" {
ami = "ami-c58c1dd3"
instance_type = "t2.micro"
key_name = "logon"
provisioner "remote-exec" {
connection {
host= self.public_ip
user = "ec2-user"
private_key = file("/logon.pem")
}
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
The above assumes that everything else is correct, e.g. the key file exists and the security groups allow SSH connections.
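For reference, a minimal security group along those lines could look like the following (the resource name and the wide-open CIDR are placeholders; lock cidr_blocks down to your own address range in practice):

resource "aws_security_group" "allow_ssh" {
  name        = "allow-ssh"
  description = "Allow inbound SSH for provisioning"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # placeholder: restrict this in real use
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

You would then attach it to the instance with vpc_security_group_ids = [aws_security_group.allow_ssh.id].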
I have some Terraform code with an aws_instance and a null_resource:
resource "aws_instance" "example" {
ami = data.aws_ami.server.id
instance_type = "t2.medium"
key_name = aws_key_pair.deployer.key_name
tags = {
name = "example"
}
vpc_security_group_ids = [aws_security_group.main.id]
}
resource "null_resource" "example" {
provisioner "local-exec" {
command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -T 300 -i ${aws_instance.example.public_dns}, --user centos --private-key files/id_rsa playbook.yml"
}
}
It kind of works, but sometimes it fails (probably when the instance is still in a pending state). When I rerun Terraform, it works as expected.
Question: How can I run local-exec only when the instance is running and accepting an SSH connection?
The null_resource is currently only going to wait until the aws_instance resource has completed, which in turn only waits until the AWS API returns that it is in the Running state. There's a long gap from there to the instance booting the OS and then being able to accept SSH connections before your local-exec provisioner can connect.
One way to handle this is to use the remote-exec provisioner on the instance first as that has the ability to wait for the instance to be ready. Changing your existing code to handle this would look like this:
resource "aws_instance" "example" {
ami = data.aws_ami.server.id
instance_type = "t2.medium"
key_name = aws_key_pair.deployer.key_name
tags = {
name = "example"
}
vpc_security_group_ids = [aws_security_group.main.id]
}
resource "null_resource" "example" {
provisioner "remote-exec" {
connection {
host = aws_instance.example.public_dns
user = "centos"
file = file("files/id_rsa")
}
inline = ["echo 'connected!'"]
}
provisioner "local-exec" {
command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -T 300 -i ${aws_instance.example.public_dns}, --user centos --private-key files/id_rsa playbook.yml"
}
}
This will first attempt to connect to the instance's public DNS address as the centos user with the files/id_rsa private key. Once it is connected it will then run echo 'connected!' as a simple command before moving on to your existing local-exec provisioner that runs Ansible against the instance.
Note that just being able to connect over SSH may not actually be enough for you to then provision the instance. If your Ansible script tries to interact with your package manager, then you may find that it is still locked by the instance's user data script running. If that is the case, you will need to remotely execute a script that waits for cloud-init to complete first. An example script looks like this:
#!/bin/bash
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
echo -e "\033[1;36mWaiting for cloud-init..."
sleep 1
done
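One way to wire such a script in (a sketch, assuming you save it alongside the configuration as files/wait-for-cloud-init.sh) is to run it through the same remote-exec provisioner before the local-exec step:

resource "null_resource" "example" {
  provisioner "remote-exec" {
    connection {
      host        = aws_instance.example.public_dns
      user        = "centos"
      private_key = file("files/id_rsa")
    }

    # Copies the local script to the instance and runs it,
    # blocking until cloud-init has finished.
    script = "files/wait-for-cloud-init.sh"
  }

  provisioner "local-exec" {
    command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -T 300 -i ${aws_instance.example.public_dns}, --user centos --private-key files/id_rsa playbook.yml"
  }
}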
There is an Ansible-specific solution for this problem. Add this code to your playbook (there is also a pre_tasks clause if you use roles):
- name: will wait till reachable
  hosts: all
  gather_facts: no # important
  tasks:
    - name: Wait for system to become reachable
      wait_for_connection:

    - name: Gather facts for the first time
      setup:
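With this play at the top of playbook.yml, the local-exec command from the question can stay as it is: Ansible itself keeps retrying the connection until the instance is reachable, and only then gathers facts and runs the rest of the playbook.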
For cases where instances are not externally exposed (about 90% of the time in most of my projects), and the SSM agent is installed on the target instance (newer AWS AMIs come pre-loaded with it), you can leverage SSM to probe the instance. Here's some sample code:
#!/bin/bash
# Usage: ./check-instance-state.sh <instance-id>
instanceId=$1

echo "Waiting for instance to bootstrap ..."
tries=0
responseCode=1
while [[ $responseCode != 0 && $tries -le 10 ]]
do
  echo "Try # $tries"
  # Run a simple validation command on the instance via SSM and read back its exit code.
  cmdId=$(aws ssm send-command --document-name AWS-RunShellScript --instance-ids $instanceId --parameters commands="cat /tmp/job-done.txt # or some other validation logic" --query Command.CommandId --output text)
  sleep 5
  responseCode=$(aws ssm get-command-invocation --command-id $cmdId --instance-id $instanceId --query ResponseCode --output text)
  echo "ResponseCode: $responseCode"
  if [ $responseCode != 0 ]; then
    echo "Sleeping ..."
    sleep 60
  fi
  (( tries++ ))
done
echo "Wait time over. ResponseCode: $responseCode"
Assuming you have the AWS CLI installed locally, you can make this null_resource a prerequisite before you act on the instance. In my case, I was building an AMI.
resource "null_resource" "wait_for_instance" {
depends_on = [
aws_instance.my_instance
]
triggers = {
always_run = "${timestamp()}"
}
provisioner "local-exec" {
command = "${path.module}/scripts/check-instance-state.sh ${aws_instance.my_instance.id}"
}
}
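Note that the always_run trigger uses timestamp(), so the null_resource is replaced and the check script runs again on every terraform apply; drop the triggers block if you only want the wait to happen when the instance is first created.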
I have just gotten started with Terraform using AWS services.
I created a new IAM user and gave it AdministratorAccess.
I copied down the access key and secret and pasted them into the Terraform instance.tf file under provider "aws" {}.
I ran the command terraform init and it worked fine.
I ran the command terraform apply, but in the end it gives me the following error:
aws_instance.example: Creating...

Error: Error launching source instance: Unsupported: The requested configuration is currently not supported. Please check the documentation for supported configurations. status code: 400, request id: cf85fdcf-432e-23d3-1233-790cfb2aa33fs

  on instance.tf line 7, in resource "aws_instance" "example":
   7: resource "aws_instance" "example" {
Here is my terraform code:
provider "aws" {
access_key = "ACCESS_KEY"
secret_key = "SECRET_KEY"
region = "us-east-2"
}
resource "aws_instance" "example" {
ami = "ami-0b9bd0b532ebcf4c9"
instance_type = "t2.micro"
}
Any help would be appreciated.
Cheers :)
The following worked for me after changing eu-west-1 to eu-west-2, because for some reason eu-west-1 had no VPC (strangely, link). The second thing to change was the AMI.
Paste the following into instance.tf with the correct ACCESS and SECRET keys, then run terraform init and terraform apply. It should work.
provider "aws" {
access_key = "ACCESS_KEY"
secret_key = "SECRET_KEY"
region = "eu-west-2"
}
resource "aws_instance" "example" {
ami = "ami-031e556ebe95c007e"
instance_type = "t2.micro"
}
In my case, I had used the wrong AMI ID: I used the "64-bit Arm" architecture instead of "64-bit x86".
Using a "64-bit x86" AMI ID fixed the issue.
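If you would rather have Terraform pick an AMI with the right architecture instead of hard-coding an ID, a data source along these lines can help (a sketch using Amazon Linux 2 as an assumed example; adjust owners and the name pattern to whatever image you actually want):

data "aws_ami" "x86_64_example" {
  most_recent = true
  owners      = ["amazon"]

  # Avoids accidentally selecting an Arm image.
  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_instance" "example" {
  ami           = data.aws_ami.x86_64_example.id
  instance_type = "t2.micro"
}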
The best option is to check your AMI and the region. This is not a Terraform issue; it's an AWS AMI or region problem.
When running the below file with Terraform, I get the following error:
Resource 'aws_instance.nodes-opt-us-k8s' not found for variable 'aws_instance.nodes-opt.us1-k8s.id'.
Do I need to include the provisioner twice because my 'count' variable is creating two instances? When I include only one provisioner for the 'count' variable, I get an error that my Ansible playbook needs playbook files to run, which makes sense because the inventory is empty until I figure this error out.
I am in the early stages with Terraform and Linux, so pardon my ignorance.
#-----------------------------Kubernetes Master & Worker Node Server Creations----------------------------
#-----key pair for Workernodes-----
resource "aws_key_pair" "k8s-node_auth" {
key_name = "${var.key_name2}"
public_key = "${file(var.public_key_path2)}"
}
#-----Workernodes-----
resource "aws_instance" "nodes-opt-us1-k8s" {
instance_type = "${var.k8s-node_instance_type}"
ami = "${var.k8s-node_ami}"
count = "${var.NodeCount}"
tags {
Name = "nodes-opt-us1-k8s"
}
key_name = "${aws_key_pair.k8s-node_auth.id}"
vpc_security_group_ids = ["${aws_security_group.opt-us1-k8s_sg.id}"]
subnet_id = "${aws_subnet.opt-us1-k8s.id}"
#-----Link Terraform worker nodes to Ansible playbooks-----
provisioner "local-exec" {
command = <<EOD
cat <<EOF >> workers
[workers]
${self.public_ip}
EOF
EOD
}
provisioner "local-exec" {
command = "aws ec2 wait instance-status-ok --instance-ids ${aws_instance.nodes-opt-us1-k8s.id} --profile Terraform && ansible-playbook -i workers Kubernetes-Nodes.yml"
}
}
Terraform 0.12.26 resolved a similar issue for me (when using multiple file provisioners while deploying multiple VMs to Azure).
Hope this helps you: https://github.com/hashicorp/terraform/issues/22006
When using a provisioner and referring to the resource the provisioner is attached to, you need to use the self keyword, as you've already spotted with what you are writing to the file.
So in your case you want to use the following provisioner block:
...
provisioner "local-exec" {
command = <<EOD
cat <<EOF >> workers
[workers]
${self.public_ip}
EOF
EOD
}
provisioner "local-exec" {
command = "aws ec2 wait instance-status-ok --instance-ids ${self.id} --profile Terraform && ansible-playbook -i workers Kubernetes-Nodes.yml"
}
We have many IAM users, all creating self-serve infrastructure on EC2 using Terraform. Users don't necessarily set the key for their instances, so it's hard to tie an instance to a particular user. I realize we could dig through CloudTrail to find out which users are creating instances, but it seems like it would be simpler to tag the instances with the current IAM username.
The problem is Terraform doesn't appear to expose this - I can use aws_caller_identity or aws_canonical_user_id, but they both appear to return the organization account, not the specific IAM username. Is there a data source in Terraform that will return the IAM user creating the instances?
It looks like aws_caller_identity doesn't actually call the STS GetCallerIdentity endpoint, which would be able to provide the information you need - specifically the UserId and the Arn of the user running the command.
Instead it takes the simpler option and just returns the account ID that the AWS client has already defined.
So you have a couple of options here. You could raise a pull request to have the aws_caller_identity data source actually call the STS GetCallerIdentity endpoint, or you could shell out using a local provisioner and use that to tag your resources.
Obviously, if people are writing Terraform that uses the raw resources directly, then you can't really enforce this other than having something kill anything that isn't tagged, and that still leaves the issue of people tagging things with someone else's UserId or Arn.
If instead you have a bunch of modules that people source and use, then you could do something ugly like this in the modules that create the EC2 instances:
resource "aws_instance" "instance" {
ami = "ami-123456"
instance_type = "t2.micro"
tags {
Name = "HelloWorld"
}
lifecycle {
ignore_changes = [ "tags.Owner" ]
}
provisioner "local-exec" {
command = <<EOF
owner=`aws sts get-caller-identity --output text --query 'Arn' | cut -d"/" -f2`
aws ec2 create-tags --resources ${self.id} --tags Key=Owner,Value=$${owner}
EOF
}
}
The above Terraform will create an EC2 instance as normal but then ignore the "Owner" tag. After creating the instance it will run a local shell script that fetches the IAM account name/role for the user and then create an "Owner" tag for the instance using that value.
To handle multiple instances (using count), you can refer to the code below:
resource "aws_instance" "instance" {
count = "${var.instance_number}"
ami = "ami-xxxxxx"
instance_type = "${var.instance_type}"
security_groups = "${concat(list("sg-xxxxxx"),var.security_groups)}"
disable_api_termination = "${var.termination_protection}"
subnet_id = "${var.subnet_id}"
iam_instance_profile = "test_role"
tags {
Name = "prod-${var.cluster_name}-${var.service_name}-${count.index+1}"
Environment = "prod"
Product = "${var.cluster_name}"
}
lifecycle {
ignore_changes = [ "tags.LaunchedBy" ]
}
provisioner "local-exec" {
command = <<EOF
launched_by=`aws iam get-user --profile prod | python -mjson.tool | grep UserName | awk '{print $2;exit; }'`
aws ec2 create-tags --resources ${self.id} --tags Key=LaunchedBy,Value=$${launched_by}
EOF
}
}