I want to create an AWS CloudWatch dashboard with some metrics via Terraform. All my infrastructure is described in Terraform, and I understand how to create an AWS CloudWatch dashboard (Terraform + JSON template). Where I'm stuck is the AWS Auto Scaling group. When I want to display some metrics on a dashboard, I just use constructions like
some_monitored_instance_id = "${aws_instance.some_instance.id}"
which is then put into the JSON template like
"metrics": [
["AWS/EC2", "CPUUtilization", "InstanceId", "${some_monitored_instance_id}"]
],
All is fine when instances are started via
resource "aws_instance" "some_instance" {}
But I cannot use that method when instances are started via an Auto Scaling group. How can I extract instance IDs when instances are launched via an Auto Scaling group (and launch configuration) for later use in Terraform?
First, you really shouldn't. ASGs swap instances in and out, so those IDs will change. CloudWatch offers metrics for ASGs, so you can see metrics for the instances the ASG creates. You can also create a resource group and get metrics by resource group.
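For example, the same dashboard JSON can aggregate the group's CPU through the AutoScalingGroupName dimension instead of individual instance IDs; a minimal sketch, where monitored_asg_name is a hypothetical template variable you would populate from your aws_autoscaling_group resource's name attribute:
"metrics": [
  ["AWS/EC2", "CPUUtilization", "AutoScalingGroupName", "${monitored_asg_name}"]
],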
But, if you really wanted to do this:
data "aws_instances" "test" {
instance_tags = {
SomeTag = "SomeValue"
}
instance_state_names = ["running", "stopped"]
}
output "ids" {
  value = data.aws_instances.test.ids
}
This will work if you put a tag in your launch configuration that is set on the EC2 instances at launch.
This works because:
instance_tags - (Optional) A map of tags, each pair of which must exactly match a pair on desired instances
see docs
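For completeness, a minimal sketch of propagating such a tag from the ASG itself so it lands on every instance at launch (aws_launch_configuration.some_lc and var.subnet_ids are placeholder names):
resource "aws_autoscaling_group" "test" {
  min_size             = 1
  max_size             = 2
  launch_configuration = aws_launch_configuration.some_lc.name
  vpc_zone_identifier  = var.subnet_ids

  # Propagated tag that the aws_instances data source above matches on.
  tag {
    key                 = "SomeTag"
    value               = "SomeValue"
    propagate_at_launch = true
  }
}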
I am trying to attach an IAM role to an EC2 instance using Terraform. After looking at some web pages, I found that attaching can be done at the time of creating the EC2 instance.
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
iam_instance_profile = "${aws_iam_instance_profile.ec2_profile.name}"
tags = {
Name = "HelloWorld"
}
}
As can be seen above, an AMI is being passed, which will create a new instance.
Is it somehow possible that instead of using an AMI ID, we can provide an instance ID, so that the role can be attached to that existing instance?
I found a link from the Terraform community pointing out that this feature is not yet released.
https://github.com/hashicorp/terraform/issues/11852
Please provide inputs on how to accomplish this task.
Thanks in advance
As you pointed out, this is not supported. But if you really want to do it with Terraform, you could consider two options:
Use local-exec, which would use the AWS CLI associate-iam-instance-profile command to attach the role to an existing instance (see the sketch after this list).
Use aws_lambda_invocation. This way you could invoke a custom Lambda function from your Terraform code which would use the AWS SDK to associate the profile with the instance. For example, for boto3 the method is associate_iam_instance_profile.
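A minimal sketch of the first option, assuming var.instance_id holds the target instance ID and the profile is declared as aws_iam_instance_profile.ec2_profile (both placeholders); the AWS CLI must be installed and configured on the machine running Terraform:
resource "null_resource" "attach_profile" {
  # Runs once when this resource is created; Terraform will not detect
  # or revert later drift in the association.
  provisioner "local-exec" {
    command = "aws ec2 associate-iam-instance-profile --instance-id ${var.instance_id} --iam-instance-profile Name=${aws_iam_instance_profile.ec2_profile.name}"
  }
}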
I have created an EBS volume that I can attach to EC2 instances using Terraform, but I cannot work out how to get the EBS volume to attach to an EC2 instance created by an Auto Scaling group.
Code that works:
resource "aws_volume_attachment" "ebs_name" {
device_name = "/dev/sdh"
volume_id = aws_ebs_volume.name.id
instance_id = aws_instance.server.id
}
Code that doesn't work:
resource "aws_volume_attachment" "ebs_name" {
device_name = "/dev/sdh"
volume_id = aws_ebs_volume.name.id
instance_id = aws_launch_template.asg-nginx.id
}
What I am hoping for is an auto-scaling launch template that adds an EBS that already exists, allowing for a high-performance EBS share instead of a "we told you not to put code on there" EFS share.
Edit: I am using a multi-attach EBS. I can attach it manually to multiple ASG-created EC2 instances and it works. I just can't do it using Terraform.
Edit 2: I finally settled on a user_data entry in Terraform that ran an AWS command line bash script to attach the multi-attach EBS.
Script:
#!/bin/bash
# AWS credentials for a user allowed to call ec2:AttachVolume go here
[…aws keys here…]
# Attach the shared multi-attach volume to this instance; the instance ID
# is read from the cloud-init data written at first boot.
aws ec2 attach-volume --device /dev/sdxx --instance-id `cat /var/lib/cloud/data/instance-id` --volume-id vol-01234567890abc
reboot
Terraform:
data "template_file" "shell-script" {
template = file("path/to/script.sh")
}
data "template_cloudinit_config" "script_sh" {
gzip = false
base64_encode = true
part {
content_type = "text/x-shellscript"
content = data.template_file.shell-script.rendered
}
}
resource "aws_launch_template" "template_name" {
[…]
user_data = data.template_cloudinit_config.mount_sh.rendered
[…]
}
The risk here is storing a user's AWS keys in a script, but as the script is never stored on the servers, it's no big deal. Anyone with access to the user_data already has access to better keys than the ones used here.
This approach would require Terraform to be executed every time a new instance is created as part of a scaling event, which would in turn require automation to invoke it.
Instead, you should look at adding a lifecycle hook to your Auto Scaling group.
You could configure the hook to publish an SNS notification that invokes a Lambda function to attach the volume to your new instance.
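A minimal sketch of the hook itself, assuming the Auto Scaling group, SNS topic, and IAM role are declared elsewhere as aws_autoscaling_group.asg, aws_sns_topic.scale_events, and aws_iam_role.hook_role (all placeholder names):
resource "aws_autoscaling_lifecycle_hook" "attach_volume" {
  name                   = "attach-ebs-on-launch"
  autoscaling_group_name = aws_autoscaling_group.asg.name
  lifecycle_transition   = "autoscaling:EC2_INSTANCE_LAUNCHING"

  # Launch events go to SNS; a Lambda subscribed to the topic can call
  # ec2:AttachVolume and then complete the lifecycle action.
  notification_target_arn = aws_sns_topic.scale_events.arn
  role_arn                = aws_iam_role.hook_role.arn

  heartbeat_timeout = 300
  default_result    = "ABANDON"
}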
I'm running a shell activity on an EC2 resource in AWS Data Pipeline. Here is a sample JSON for creating the EC2 resource:
{
  "id": "MyEC2Resource",
  "type": "Ec2Resource",
  "actionOnTaskFailure": "terminate",
  "actionOnResourceFailure": "retryAll",
  "maximumRetries": "1",
  "instanceType": "m5.large",
  "securityGroupIds": [
    "sg-12345678",
    "sg-12345678"
  ],
  "subnetId": "subnet-12345678",
  "associatePublicIpAddress": "true",
  "keyPair": "my-key-pair"
}
The JSON above creates the EC2 resource via Data Pipeline, but I want to give the resource a name, so that when I open the EC2 console it shows the resource's name along with the other attributes; currently the name is blank.
You have to tag the instance with:
Key: Name
Value: MyName
MyName is an example name; change it to whatever you want it to be.
Adding the tag to the pipeline should propagate the tags to instances. From docs:
Applying a tag to a pipeline also propagates the tags to its underlying resources (for example, Amazon EMR clusters and Amazon EC2 instances)
But it probably does not work retroactively: if you already have a pipeline with running instances, it's unlikely new tags will propagate, since propagation usually happens only at resource creation. For existing instances you may need to tag them in the EC2 console instead.
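For reference, a sketch of tagging an existing pipeline from the AWS CLI (the pipeline ID is a placeholder); resources the pipeline launches afterwards should then pick up the Name tag:
aws datapipeline add-tags --pipeline-id df-0123456789ABC --tags key=Name,value=MyName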
I can't find a way to specify user data after creating an ECS instance definition.
The documentation says, "You can pass this user data into the Amazon EC2 launch wizard in Step 6.g of Launching an Amazon ECS Container Instance."
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bootstrap_container_instance.html#multi-part_user_data
But the ECS instance is launched automatically, so how do you specify the user data?
I want to send /var/log/syslog to CloudWatch, and for that I need to add user data (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_cloudwatch_logs.html).
I had to add the user data through the Auto Scaling group's launch configuration.
The steps are:
1. Copy the existing launch configuration.
2. Edit the user data of the copied launch configuration.
3. Edit the Auto Scaling group to use the new launch configuration.
4. Terminate the ECS instances so that the modified Auto Scaling group launches new EC2 instances with the new launch configuration.
Via Terraform, we can pass it as a template file within the launch configuration:
data "template_file" "user_data" {
template = "${file("${path.module}/templates/user_data.sh")}"
vars = {
ecs_config = "${var.ecs_config}"
ecs_logging = "${var.ecs_logging}"
cluster_name = "${var.cluster}"
env_name = "${var.environment}"
custom_userdata = "${var.custom_userdata}"
cloudwatch_prefix = "${var.cloudwatch_prefix}"
}
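A minimal sketch of wiring the rendered template into a launch configuration (the resource name, AMI variable, and instance type are placeholders); the user_data.sh template itself would typically write values like ECS_CLUSTER into /etc/ecs/ecs.config:
resource "aws_launch_configuration" "ecs" {
  name_prefix   = "ecs-"
  image_id      = var.ecs_ami_id # hypothetical ECS-optimized AMI variable
  instance_type = "t3.medium"

  # Rendered user data from the template_file data source above.
  user_data = data.template_file.user_data.rendered

  lifecycle {
    create_before_destroy = true
  }
}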
By default, user data scripts and cloud-init directives run only during the first boot cycle when an EC2 instance is launched.
https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
The article also explains further possible workarounds.
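From memory, the core of the workaround in that article is a cloud-config part in the (multi-part) user data that tells cloud-init to run user data scripts on every boot instead of only the first; treat this as a sketch to verify against the article:
#cloud-config
# Re-run user data scripts on every boot, not only the first boot cycle
cloud_final_modules:
- [scripts-user, always]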
When I create the VPC, I create a subnet in every availability zone.
Then, when I create my application, I want to input the ami and the type of instance (e.g. t3a.nano).
I want to avoid getting this error:
Error: Error launching source instance: Unsupported: Your requested instance type (a1.medium) is not supported in your requested Availability Zone (us-west-2b). Please retry your request by not specifying an Availability Zone or choosing us-west-2a, us-west-2c.
I am looking for a terraform module that can tell me if I can create my instance on this subnet given my ami and instance type.
I didn't find an existing Terraform module for this, so I created my own.
It is doing what I want but I wonder if there is a better way.
I put my code here.
https://gitlab.com/korrident/terraform_calculate_ami_by_availability_zone
In short, I just use a data "external" block to call a bash script:
data "external" "subnet_available_for_ami" {
count = "${length(var.subnets_id)}"
program = ["bash", "${path.module}/check_subnet_ami.bash"]
query = {
ami = "${data.aws_ami.latest_ami.id}"
type = "${var.instance_type}"
subnet = "${var.subnets_id[count.index]}"
region = "${var.aws_region}"
profile = "${var.aws_profile}"
}
}
This script calls the AWS CLI with a dry run:
aws --region=${REGION} ec2 run-instances \
--instance-type ${INSTANCE_TYPE} \
--image-id ${AMI} \
--subnet-id ${SUBNET_ID} \
--profile ${PROFILE} \
--dry-run
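For context, the external data source protocol is: Terraform writes the query as a JSON object to the program's stdin, and the program must print a JSON object of strings to stdout. A sketch of how the script's input and output ends might look (jq is assumed to be available; an empty object marks an unusable subnet, which matches the filtering below):
# Parse the query JSON Terraform sends on stdin
eval "$(jq -r '@sh "AMI=\(.ami) INSTANCE_TYPE=\(.type) SUBNET_ID=\(.subnet) REGION=\(.region) PROFILE=\(.profile)"')"

# A dry run fails with DryRunOperation when the request would have succeeded
if aws --region=${REGION} ec2 run-instances \
    --instance-type ${INSTANCE_TYPE} \
    --image-id ${AMI} \
    --subnet-id ${SUBNET_ID} \
    --profile ${PROFILE} \
    --dry-run 2>&1 | grep -q DryRunOperation; then
  echo "{\"subnet\": \"${SUBNET_ID}\"}"
else
  echo "{}"
fi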
And in the end I filter the results to return a clean subnet list:
locals {
  uniq_answers = "${distinct(data.external.subnet_available_for_ami.*.result)}"

  uniq_answers_filtered = [
    for a in local.uniq_answers :
    a if length(a) != 0
  ]

  uniq_subnet_filtered = [
    for a in local.uniq_answers_filtered :
    a.subnet
  ]
}
Note that in the same module I also use the aws_ami data source:
data "aws_ami" "latest_ami" {
Ideally, I would like this data source to return an AMI that is available in my subnets.
There is no error and it works fine, but it is minimal.
If nothing is found, the calling module will deal with it.
The most problematic case is when there is only one result (I want my instances to be on multiple availability zones, not just one).
Has anyone found a better design?