When I create the VPC, I create a subnet in every availability zone.
Then, when I create my application, I want to pass in the AMI and the instance type (e.g. t3a.nano).
I want to avoid getting this error:
Error: Error launching source instance: Unsupported: Your requested
instance type (a1.medium) is not supported in your requested
Availability Zone (us-west-2b). Please retry your request by not
specifying an Availability Zone or choosing us-west-2a, us-west-2c.
I am looking for a Terraform module that can tell me whether I can create my instance on a given subnet, given my AMI and instance type.
I didn't find an existing module, so I created my own.
It is doing what I want but I wonder if there is a better way.
I put my code here.
https://gitlab.com/korrident/terraform_calculate_ami_by_availability_zone
In short, I just use a data "external" block to call a bash script:
data "external" "subnet_available_for_ami" {
count = "${length(var.subnets_id)}"
program = ["bash", "${path.module}/check_subnet_ami.bash"]
query = {
ami = "${data.aws_ami.latest_ami.id}"
type = "${var.instance_type}"
subnet = "${var.subnets_id[count.index]}"
region = "${var.aws_region}"
profile = "${var.aws_profile}"
}
}
This script calls the AWS CLI with a dry run:
aws --region=${REGION} ec2 run-instances \
--instance-type ${INSTANCE_TYPE} \
--image-id ${AMI} \
--subnet-id ${SUBNET_ID} \
--profile ${PROFILE} \
--dry-run
In the end, I filter the results to return a clean list of subnets:
locals {
  uniq_answers = "${distinct(data.external.subnet_available_for_ami.*.result)}"

  uniq_answers_filtered = [
    for a in local.uniq_answers :
    a if length(a) != 0
  ]

  uniq_subnet_filtered = [
    for a in local.uniq_answers_filtered :
    a.subnet
  ]
}
Note that in the same module I also use the aws_ami data source:
data "aws_ami" "latest_ami" { ... }
Ideally, I would like this data source to return an AMI that is available on my subnets.
There is no error; it works, but it is minimal.
If nothing is found, the calling module will deal with it.
The most problematic case is when there is only one result (I want my instances spread across multiple availability zones, not just one).
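One direction I am considering, if the AWS provider version allows it, is the aws_ec2_instance_type_offerings data source, which lists the availability zones that offer a given instance type without any dry-run calls. A rough sketch (the variables match my module; everything else is an assumption):
data "aws_ec2_instance_type_offerings" "by_az" {
  # AZs that actually offer the requested instance type.
  filter {
    name   = "instance-type"
    values = [var.instance_type]
  }

  location_type = "availability-zone"
}

data "aws_subnet" "candidate" {
  # Look up the AZ of each candidate subnet.
  count = length(var.subnets_id)
  id    = var.subnets_id[count.index]
}

locals {
  # Keep only the subnets whose AZ supports the instance type.
  supported_subnet_ids = [
    for s in data.aws_subnet.candidate :
    s.id if contains(data.aws_ec2_instance_type_offerings.by_az.locations, s.availability_zone)
  ]
}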
Has anyone found a better design?
Related
I am trying to attach an EBS volume, which I created with Terraform, to an ASG instance using user data, but the issue is that the two end up in different AZs, so the attach fails. Below are the steps I am trying, which are failing:
resource "aws_ebs_volume" "this" {
for_each = var.ebs_block_device
size = lookup(each.value,"volume_size", null)
type = lookup(each.value,"volume_type", null)
iops = lookup(each.value, "iops", null)
encrypted = lookup(each.value, "volume_encrypt", null)
kms_key_id = lookup(each.value, "kms_key_id", null)
availability_zone = join(",",random_shuffle.az.result)
}
In the resource above, I am using the random provider to get one AZ from the list of AZs, and the same list is provided to the ASG resource below:
resource "aws_autoscaling_group" "this" {
desired_capacity = var.desired_capacity
launch_configuration = aws_launch_configuration.this.id
max_size = var.max_size
min_size = var.min_size
name = var.name
vpc_zone_identifier = var.subnet_ids // <------ HERE
health_check_grace_period = var.health_check_grace_period
load_balancers = var.load_balancer_names
target_group_arns = var.target_group_arns
tag {
key = "Name"
value = var.name
propagate_at_launch = true
}
}
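For reference, the random_shuffle resource I use to pick the AZ looks roughly like this (the exact input list is an assumption):
resource "random_shuffle" "az" {
  # Assumed: var.azs is the list of AZ names, e.g. ["us-east-1a", "us-east-1b"].
  input        = var.azs
  result_count = 1
}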
And here is the user data I am using:
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
instanceId=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 attach-volume --volume-id ${ebs_volume_id} --instance-id $instanceId --device /dev/nvme1n1
The above will attach the newly created volume, since I am passing the ${ebs_volume_id} output of the resource above.
But it is failing because the instance and the volume are in different AZs.
Can anyone suggest a better solution than hardcoding the AZ on both the ASG and the volume?
I'd have to understand more about what you're trying to do to solve this with just the aws provider and terraform. And honestly, most ideas are going to be a bit complex.
You could have an ASG per AZ. Otherwise, the ASG is going to select some AZ at each launch, and you'll end up with more instances in one AZ than you have volumes, and volumes in other AZs with no instances to attach to.
So you could create a number of volumes per AZ and an ASG per AZ. The user data should then list all the volumes in that AZ that are not attached to an instance, pick the ID of the first unattached volume, and attach it. If all of them are attached, you should trigger your alerting, because you have more instances than you have volumes.
Any attempt to do this with a single ASG is really an attempt at writing your own ASG but doing it in a way that fights with your actual ASG.
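A minimal sketch of that per-AZ layout (variable names like var.azs and var.subnet_id_by_az are assumptions, and the launch configuration is reused from your code):
# One volume and one single-AZ ASG per availability zone.
resource "aws_ebs_volume" "per_az" {
  for_each          = toset(var.azs)
  availability_zone = each.value
  size              = 20
}

resource "aws_autoscaling_group" "per_az" {
  for_each             = toset(var.azs)
  name                 = "${var.name}-${each.value}"
  launch_configuration = aws_launch_configuration.this.id
  min_size             = 1
  max_size             = 1
  desired_capacity     = 1

  # Restrict each ASG to the subnet(s) of its own AZ.
  vpc_zone_identifier = [var.subnet_id_by_az[each.value]]
}
The user data then only has to look for unattached volumes in its own AZ.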
But there is a company that offers managing this as a service. They also help you run these as spot instances to save cost: https://spot.io/
Their elastigroup resource is an ASG managed by them, so you won't have an AWS ASG anymore, but they have some interesting stateful configurations.
We support instance persistence via the following configurations. All values are boolean. For more information on instance persistence, please see: Stateful configuration
persist_root_device - (Optional) Boolean, should the instance maintain its root device volumes.
persist_block_devices - (Optional) Boolean, should the instance maintain its Data volumes.
persist_private_ip - (Optional) Boolean, should the instance maintain its private IP.
block_devices_mode - (Optional) String, determine the way we attach the data volumes to the data devices, possible values: "reattach" and "onLaunch" (default is onLaunch).
private_ips - (Optional) List of Private IPs to associate to the group instances.(e.g. "172.1.1.0"). Please note: This setting will only apply if persistence.persist_private_ip is set to true
stateful_deallocation {
  should_delete_images             = false
  should_delete_network_interfaces = false
  should_delete_volumes            = false
  should_delete_snapshots          = false
}
This allows you to have an autoscaler that preserves volumes and handles the complexities for you.
I want to create an AWS CloudWatch dashboard with some metrics via Terraform. All my infrastructure is described in Terraform, and I understand how to create an AWS CloudWatch dashboard (Terraform + JSON template). Where I am stuck is the AWS Auto Scaling group. When I want to display some metrics on a dashboard, I just use constructions like
some_monitored_instance_id = "${aws_instance.some_instance.id}"
which then puts to json template like
"metrics": [
["AWS/EC2", "CPUUtilization", "InstanceId", "${some_monitored_instance_id}"]
],
This is all fine when instances are started via
resource "aws_instance" "some_instance" {}
But I cannot use this method when instances are started via an Auto Scaling group. How can I extract the instance IDs of instances launched via an Auto Scaling group (and launch configuration) for later use in Terraform?
First, you really shouldn't. ASGs swap out instances, so those IDs will change. CloudWatch offers metrics for ASGs, so you can see metrics for the instances the ASG creates. You can also create a resource group and get metrics by resource group.
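For example, a dashboard widget can be keyed on the ASG name instead of instance IDs; a minimal sketch (the layout values are arbitrary, and aws_autoscaling_group.this is assumed to exist):
resource "aws_cloudwatch_dashboard" "asg" {
  dashboard_name = "asg-metrics"

  dashboard_body = jsonencode({
    widgets = [{
      type   = "metric"
      width  = 12
      height = 6
      properties = {
        # Aggregated CPU across whatever instances the ASG is currently running.
        metrics = [
          ["AWS/EC2", "CPUUtilization", "AutoScalingGroupName", aws_autoscaling_group.this.name]
        ]
        stat   = "Average"
        period = 300
        region = "us-west-2"
      }
    }]
  })
}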
But, if you really wanted to do this:
data "aws_instances" "test" {
instance_tags = {
SomeTag = "SomeValue"
}
instance_state_names = ["running", "stopped"]
}
output ids {
value = data.aws_instances.test.ids
}
This will work if you put a tag in your launch config that is set on EC2s at launch.
This works because:
instance_tags - (Optional) A map of tags, each pair of which must exactly match a pair on desired instances
see docs
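If you do go down that route, one way to feed the discovered IDs back into the dashboard body is a for expression; a sketch (the local name is an assumption):
locals {
  # One metric row per instance ID returned by the data source.
  instance_cpu_metrics = [
    for id in data.aws_instances.test.ids :
    ["AWS/EC2", "CPUUtilization", "InstanceId", id]
  ]
}
That list can then be dropped into the "metrics" key of the dashboard JSON via jsonencode, keeping in mind it only reflects the instances that existed at plan time.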
I have created an EBS volume that I can attach to EC2 instances using Terraform, but I cannot work out how to get the EBS to connect to an EC2 created by an autoscaling group.
Code that works:
resource "aws_volume_attachment" "ebs_name" {
device_name = "/dev/sdh"
volume_id = aws_ebs_volume.name.id
instance_id = aws_instance.server.id
}
Code that doesn't work:
resource "aws_volume_attachment" "ebs_name" {
device_name = "/dev/sdh"
volume_id = aws_ebs_volume.name.id
instance_id = aws_launch_template.asg-nginx.id
}
What I am hoping for is an auto-scaling launch template that adds an EBS that already exists, allowing for a high-performance EBS share instead of a "we told you not to put code on there" EFS share.
Edit: I am using a multi-attach EBS. I can attach it manually to multiple ASG-created EC2 instances and it works. I just can't do it using Terraform.
Edit 2: I finally settled on a user_data entry in Terraform that runs an AWS CLI bash script to attach the multi-attach EBS.
Script:
#!/bin/bash
[…aws keys here…]
aws ec2 attach-volume --device /dev/sdxx --instance-id `cat /var/lib/cloud/data/instance-id` --volume-id vol-01234567890abc
reboot
Terraform:
data "template_file" "shell-script" {
template = file("path/to/script.sh")
}
data "template_cloudinit_config" "script_sh" {
gzip = false
base64_encode = true
part {
content_type = "text/x-shellscript"
content = data.template_file.shell-script.rendered
}
}
resource "aws_launch_template" "template_name" {
[…]
user_data = data.template_cloudinit_config.mount_sh.rendered
[…]
}
The risk here is storing a user's AWS keys in a script, but as the script is never stored on the servers, it's no big deal. Anyone with access to the user_data already has access to better keys than the ones used here.
This would require Terraform to be executed every time a new instance is created as part of a scaling event, which would in turn require automation to invoke it.
Instead you should look at adding a lifecycle hook for your autoscaling group.
You could configure the hook to publish an SNS notification that invokes a Lambda function to attach the volume to your new instance.
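A rough sketch of the Terraform side of that (the SNS topic, IAM role and Lambda wiring are assumed to exist elsewhere):
resource "aws_autoscaling_lifecycle_hook" "attach_volume" {
  name                   = "attach-ebs-on-launch"
  autoscaling_group_name = aws_autoscaling_group.this.name
  lifecycle_transition   = "autoscaling:EC2_INSTANCE_LAUNCHING"
  default_result         = "CONTINUE"
  heartbeat_timeout      = 300

  # SNS topic that triggers the Lambda performing ec2:AttachVolume.
  notification_target_arn = aws_sns_topic.scale_events.arn
  role_arn                = aws_iam_role.asg_lifecycle.arn
}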
I'm brand new to Terraform, so I'm sure I'm missing something, but the answers I'm finding don't seem to be asking the same question I have.
I have an AWS VPC/security group that our EC2 instances need to be created in, and this VPC/SG already exists. To create an EC2 instance, Terraform requires that if I don't have a default VPC, I must import my own. But once I import it and apply my plan, when I wish to destroy the instance, Terraform tries to destroy my VPC as well. How do I encapsulate my resources so that when I run "terraform apply" I create an EC2 instance with my imported VPC, but when I run "terraform destroy" I only destroy my EC2 instance?
In case anyone wants to mention, I understand that:
lifecycle {
  prevent_destroy = true
}
is not what I'm looking for.
Here is my current practice code.
resource "aws_vpc" "my_vpc" {
cidr_block = "xx.xx.xx.xx/24"
}
provider "aws" {
region = "us-west-2"
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t3.nano"
vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]
subnet_id = "subnet-0755c2exxxxxxxx"
tags = {
Name = "HelloWorld"
}
}
Terraform should not require you to deploy or import a VPC in order to deploy an EC2 instance into it. You should be able to reference the VPC, subnets, and security groups by ID, so Terraform is aware of your existing network infrastructure, just as you've already done for the SG and subnet. All you should need to deploy the aws_instance is an existing subnet ID in the existing VPC, which you have already provided. Why do you say deploying or importing a VPC is required by Terraform? What error or issue do you get when deploying without the VPC block and just using the existing one?
You can protect the VPC through AWS if you really want to, but I don't think you really want to import the VPC into your Terraform state and let Terraform manage it here. It sounds like you want the VPC to serve other resources, perhaps applications deployed manually or through other Terraform stacks, and to live independently of any one application deployment.
To answer the original question, you can use a data source and match your VPC by ID or tag name:
data "aws_vpc" "main" {
tags = {
Name = "main_vpc"
}
}
Or
data "aws_vpc" "main" {
id = "vpc-nnnnnnnn"
}
Then refer to it with data.aws_vpc.main.
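For example, the instance from the question could then be pinned to a subnet looked up in that VPC rather than one managed by this configuration (the subnet filter values here are assumptions):
data "aws_subnet" "web" {
  vpc_id = data.aws_vpc.main.id

  filter {
    name   = "tag:Name"
    values = ["web-subnet-a"]
  }
}

resource "aws_instance" "web" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = "t3.nano"
  subnet_id              = data.aws_subnet.web.id
  vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]

  tags = {
    Name = "HelloWorld"
  }
}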
Also, if you have already included your VPC but would like to remove it from your state without destroying it, you can do that with the terraform state command (for example, terraform state rm aws_vpc.my_vpc): https://www.terraform.io/docs/commands/state/index.html
I'm using OpsWorks to deploy a bunch of applications and I want to tag the instances and all of their associated resources. I'm using the opscode aws cookbook (https://github.com/opscode-cookbooks/aws) to tag my instances and that works fine using the following recipe:
include_recipe 'aws'

custom_tags = node.fetch('aws_tag', {}).fetch('tags', nil)
instance_id = node.fetch('ec2', {}).fetch('instance_id', nil)

unless custom_tags.nil? || custom_tags.empty?
  aws_resource_tag 'tag instance' do
    tags custom_tags
    resource_id instance_id
    action :update
  end
end
I'd like to extend this recipe to tag EBS volumes that are attached to the instance. aws_resource_tag() can tag instances, snapshots, and volumes, but I need to provide it a list of the volumes to tag.
How can I get the volume ids attached to the instance?
I don't see anything in http://docs.aws.amazon.com/opsworks/latest/userguide/attributes-json-opsworks-instance.html so you'll probably just have to use the standard ohai data. Connect to the machine and run ohai ec2 and you'll see the full metadata tree.
First of all, you need to know that OpsWorks automatically tags the Layer or Stack resources associated with it, but those tags cannot currently be applied to the root (default) EBS volume of an instance.
If you are using an OpsWorks Windows stack, I suggest installing the following cookbooks from the Supermarket:
File metadata.rb
depends 'aws', '4.2.2'
depends 'ohai', '4.2.3'
depends 'compat_resource', '12.19.1'
Next, add to your stack an IAM role with the permissions necessary to perform list-tags on OpsWorks and create-tags in the EC2 service.
Finally, you can use this recipe, add-tags.rb:
Chef::Log.info("******TAGS VOLUME******")
#Chef::Log.level = :debug
instance = search("aws_opsworks_instance", "self:true").first
stack = search("aws_opsworks_stack").first
arnstack = "#{stack['arn']}"
cmd = "aws opsworks list-tags --resource-arn #{arnstack} --region eu-west-1"
Chef::Log.info("****** #{arnstack} ******")
batch 'find_tags' do
cwd "C:\\Program Files\\Amazon\\AWSCLI"
code <<-EOH
#{cmd} > C:\\tmp\\res.json
EOH
end
if ::File.exist?('C:\\tmp\\res.json')
myjsonfile = ::File.read('C:\\tmp\\res.json').chomp
data = JSON.parse("#{myjsonfile}")
data['Tags'].each do |key, value|
aws_resource_tag 'Boot Volume' do
resource_id lazy {instance['root_device_volume_id']}
tags(key => value)
end
end
end
That recipe adds all the tags found on the stack, but only to the root volume of my instance.