I am very new to Terraform, but I am trying to grant this resource
resource "aws_instance" "myinstance" {
  ami                    = "${data.aws_ami.awsami.id}"
  instance_type          = "t2.small"
  key_name               = "${aws_key_pair.my_key.key_name}"
  vpc_security_group_ids = ["${module.security.my_sg_id}", "${module.security.my_security_group_id}"]
  subnet_id              = "${element(module.network.public_subnets, 1)}"

  tags {
    Name = "My instance"
  }
}
access to Secrets Manager. The instance needs to be able to read secrets via an Ansible script. I found a blog post on using instance profiles. How do I use an instance profile role to grant the instance access to Secrets Manager?
I was able to accomplish my goal with the code below. You'll still need to fill in ASSUME_ROLE_POLICY_HERE and POLICY_GOES_HERE. The important piece is specifying iam_instance_profile = "${aws_iam_instance_profile.myinstance_instance_profile.id}" on the instance.
locals {
  env_account     = "${terraform.workspace}"
  deploy_env_name = "${lookup(var.workspace_deploy_env, local.env_account)}"
}
resource "aws_eip" "myinstanceip" {
  instance = "${aws_instance.myinstance.id}"
  vpc      = true
}
resource "aws_instance" "myinstance" {
  ami                    = "${data.aws_ami.awsami.id}"
  instance_type          = "t2.small"
  key_name               = "${aws_key_pair.my_key.key_name}"
  vpc_security_group_ids = ["${module.security.my_sg_id}", "${module.security.my_security_group_id}"]
  subnet_id              = "${element(module.network.public_subnets, 1)}"
  iam_instance_profile   = "${aws_iam_instance_profile.myinstance_instance_profile.id}"

  tags {
    Name = "My instance"
  }
}
resource "aws_route53_record" "myinstance_domain_name" {
  zone_id = "${module.tf_aws_route53_zone.zone_id}"
  name    = "myinstance.${module.tf_aws_route53_zone.domain_name}"
  type    = "A"
  ttl     = "300"
  records = ["${aws_eip.myinstanceip.public_ip}"]
}
output "myinstance_ip" {
  value = "${aws_eip.myinstanceip.public_ip}"
}
resource "aws_iam_instance_profile" "myinstance_instance_profile" {
name = "myinstance-instance-profile"
role = "myinstance-role"
}
resource "aws_iam_role" "myinstance_role" {
name = "myinstance-role"
assume_role_policy = <<EOF
{
ASSUME_ROLE_POLICY_HERE
}
EOF
}
resource "aws_iam_policy" "secrets_manager" {
name = "secrets-manager-myinstance"
description = "Read secrets"
policy = <<POLICY
{
POLICY_GOES_HERE
}
POLICY
}
My launch template specifies an IAM instance profile and my node group has a node role ARN. Based on the error below, I removed the iam_instance_profile block from my launch template resource and it still gave me the same error:
"Launch template mtc should not specify an instance profile. The noderole in your request will be used to construct an instance profile."
Here are my launch template and IAM resource blocks with the instance profile included:
resource "aws_launch_template" "node" {
image_id = var.image_id
instance_type = var.instance_type
key_name = var.key_name
instance_initiated_shutdown_behavior = "terminate"
name = var.name
user_data = base64encode("node_userdata.tpl")
# vpc_security_group_ids = var.security_group_ids
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
iam_instance_profile {
name = aws_iam_instance_profile.node.name
}
monitoring {
enabled = true
}
}
resource "aws_iam_instance_profile" "node" {
name_prefix = var.name
role = aws_iam_role.node.id
}
resource "aws_iam_role" "node" {
assume_role_policy = data.aws_iam_policy_document.assume_role_ec2.json
name = var.name
}
data "aws_iam_policy_document" "assume_role_ec2" {
statement {
actions = ["sts:AssumeRole"]
effect = "Allow"
principals {
identifiers = ["ec2.amazonaws.com"]
type = "Service"
}
}
}
When I first tried to apply this I got that error, so I removed the iam_instance_profile block and tried again without it, like so:
resource "aws_launch_template" "node" {
image_id = var.image_id
instance_type = var.instance_type
key_name = var.key_name
instance_initiated_shutdown_behavior = "terminate"
name = var.name
user_data = base64encode("node_userdata.tpl")
# vpc_security_group_ids = var.security_group_ids
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
monitoring {
enabled = true
}
}
I got the same error both times. Here's my node group resource block as well:
resource "aws_eks_node_group" "nodes_eks" {
cluster_name = aws_eks_cluster.eks.name
node_group_name = "eks-node-group"
node_role_arn = aws_iam_role.eks_nodes.arn
subnet_ids = module.vpc.private_subnets
# remote_access {
# ec2_ssh_key = aws_key_pair.bastion_auth.id
# }
scaling_config {
desired_size = 3
max_size = 6
min_size = 3
}
ami_type = "CUSTOM"
capacity_type = "ON_DEMAND"
force_update_version = false
# instance_types = [var.instance_type]
labels = {
role = "nodes-pool-1"
}
launch_template {
id = aws_launch_template.node.id
version = aws_launch_template.node.default_version
}
# version = var.k8s_version
depends_on = [
aws_iam_role_policy_attachment.amazon_eks_worker_node_policy,
aws_iam_role_policy_attachment.amazon_eks_cni_policy,
aws_iam_role_policy_attachment.amazon_ec2_container_registry_read_only,
]
}
In this case, there are multiple points to take care of, starting with [1]:
An object representing a node group launch template specification. The launch template cannot include SubnetId, IamInstanceProfile, RequestSpotInstances, HibernationOptions, or TerminateInstances, or the node group deployment or update will fail.
As per the document [2], you cannot specify any of the following in the launch template:
Instance profile - the node IAM role from the node group will be used
Subnets - the subnet_ids defined in the node group configuration will be used
Shutdown behavior - EKS controls the instance lifecycle
Note that the table marks these as prohibited, which means they can never be set in the launch template. Additionally, in [2], you can find this as well:
Some of the settings in a launch template are similar to the settings used for managed node configuration. When deploying or updating a node group with a launch template, some settings must be specified in either the node group configuration or the launch template. Don't specify both places. If a setting exists where it shouldn't, then operations such as creating or updating a node group fail.
So you were pretty close when you removed the iam_instance_profile block, but you still have to get rid of the instance_initiated_shutdown_behavior argument:
resource "aws_launch_template" "node" {
image_id = var.image_id
instance_type = var.instance_type
key_name = var.key_name
name = var.name
user_data = base64encode("node_userdata.tpl")
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
monitoring {
enabled = true
}
}
I strongly suggest reading through the second document as it contains a lot of useful information about what to do when using a custom AMI.
[1] https://docs.aws.amazon.com/eks/latest/APIReference/API_LaunchTemplateSpecification.html
[2] https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-basics
I'm creating an AWS EKS cluster via Terraform, but when I create the cluster node group with a launch template I end up with 2 launch templates: one with the name and settings I specified, and a second one with a random name but the same settings. The only difference between the two launch templates is the IAM instance profile, which only the second (automatically created) one has.
If I try to specify an IAM instance profile in my launch template, I get an error saying I cannot use it there.
Am I doing something wrong, or is it normal that it creates 2 launch templates?
# eks node launch template
resource "aws_launch_template" "this" {
  name          = "${var.eks_cluster_name}-node-launch-template"
  instance_type = var.instance_types[0]
  image_id      = var.node_ami

  block_device_mappings {
    device_name = "/dev/xvda"
    ebs {
      volume_size = 80
      volume_type = "gp3"
      throughput  = "125"
      encrypted   = false
      iops        = 3000
    }
  }

  lifecycle {
    create_before_destroy = true
  }

  network_interfaces {
    security_groups = [data.aws_eks_cluster.this.vpc_config[0].cluster_security_group_id]
  }

  user_data = base64encode(templatefile("${path.module}/userdata.tpl", merge(local.userdata_vars, local.cluster_data)))

  tags = {
    "eks:cluster-name"   = var.eks_cluster_name
    "eks:nodegroup-name" = var.node_group_name
  }

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name                 = "${var.eks_cluster_name}-node"
      "eks:cluster-name"   = var.eks_cluster_name
      "eks:nodegroup-name" = var.node_group_name
    }
  }

  tag_specifications {
    resource_type = "volume"
    tags = {
      "eks:cluster-name"   = var.eks_cluster_name
      "eks:nodegroup-name" = var.node_group_name
    }
  }
}
# eks nodes
resource "aws_eks_node_group" "this" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = var.node_group_name
  node_role_arn   = aws_iam_role.eksNodesGroup.arn
  subnet_ids      = data.aws_subnet_ids.private.ids

  scaling_config {
    desired_size = 1
    max_size     = 10
    min_size     = 1
  }

  update_config {
    max_unavailable = 1
  }

  launch_template {
    version = aws_launch_template.this.latest_version
    id      = aws_launch_template.this.id
  }

  lifecycle {
    create_before_destroy = true
    ignore_changes = [
      scaling_config[0].desired_size
    ]
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
    aws_launch_template.this
  ]
}
I was expecting Terraform to create just one launch template.
Change the attribute name to name_prefix.
Use:
name_prefix = "${var.eks_cluster_name}-node-launch-template"
Instead of:
name = "${var.eks_cluster_name}-node-launch-template"
The name_prefix attribute is the best choice for creating a unique launch template name from a prefix (in your case, ${var.eks_cluster_name}): Terraform appends a unique suffix on each create, so the replacement template produced under create_before_destroy never collides with the existing one.
Read more here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/launch_template
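Applied to your configuration, the change is only in the naming; a sketch (everything omitted below stays exactly as you had it):
resource "aws_launch_template" "this" {
  # name_prefix tells Terraform to generate a unique name with this prefix,
  # so the replacement created by create_before_destroy can coexist with
  # the old template until the old one is destroyed
  name_prefix   = "${var.eks_cluster_name}-node-launch-template-"
  instance_type = var.instance_types[0]
  image_id      = var.node_ami

  lifecycle {
    create_before_destroy = true
  }

  # ... block_device_mappings, network_interfaces, user_data,
  # tags and tag_specifications unchanged ...
}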
I was wondering if anyone could help with this issue? I'm trying to call an SSM document from Terraform to stop an EC2 instance, but it doesn't seem to work. I keep getting the error:
Automation Step Execution fails when it is changing the state of each instance. Get Exception from StopInstances API of ec2 Service. Exception Message from StopInstances API: [You are not authorized to perform this operation.
Any suggestion here?
As you can see, the right roles are there; I pass the role in as a parameter.
provider "aws" {
profile = "profile"
region = "eu-west-1"
}
data "aws_ssm_document" "stop_ec2_doc" {
name = "AWS-StopEC2Instance"
document_format = "JSON"
}
data "aws_iam_policy_document" "assume_role" {
version = "2012-10-17"
statement {
sid = "EC2AssumeRole"
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
identifiers = ["ec2.amazonaws.com"]
type = "Service"
}
principals {
identifiers = ["ssm.amazonaws.com"]
type = "Service"
}
}
}
data "aws_ami" "latest_amazon_2" {
most_recent = true
owners = ["amazon"]
name_regex = "^amzn2-ami-hvm-.*x86_64-gp2"
}
#
resource "aws_iam_role" "iam_assume_role" {
name = "iam_assume_role"
assume_role_policy = data.aws_iam_policy_document.assume_role.json
}
#
resource "aws_iam_role_policy_attachment" "role_1" {
role = aws_iam_role.iam_assume_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
# the instance profile
resource "aws_iam_instance_profile" "iam_instance_profile" {
name = "iam_instance_profile"
role = aws_iam_role.iam_assume_role.name
}
# amazon ec2 instances
resource "aws_instance" "ec2_instances" {
count = 2
ami = data.aws_ami.latest_amazon_2.id
instance_type = "t2.micro"
subnet_id = "subnet-12345678901"
iam_instance_profile = aws_iam_instance_profile.iam_instance_profile.name
root_block_device {
volume_size = 8
volume_type = "gp2"
delete_on_termination = true
}
}
resource "aws_ssm_association" "example" {
name = data.aws_ssm_document.stop_ec2_doc.name
parameters = {
AutomationAssumeRole = "arn:aws:iam::12345678901:role/aws-service-role/ssm.amazonaws.com/AWSServiceRoleForAmazonSSM"
InstanceId = aws_instance.ec2_instances[0].id
}
}
Any suggestion is welcome. I tried to create a simple Terraform configuration to illustrate what I'm trying to do, and to me it should be straightforward:
I create the role. I create the instance profile. I create the association, passing the proper role and the instance ID.
The AWSServiceRoleForAmazonSSM role does not have permission to stop instances. Instead, you should create a new role for SSM with such permissions. The simplest way is as follows:
resource "aws_iam_role" "ssm_role" {
name = "ssm_role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ssm.amazonaws.com"
}
},
]
})
}
resource "aws_iam_role_policy_attachment" "ec2-attach" {
role = aws_iam_role.ssm_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
}
resource "aws_ssm_association" "example" {
name = data.aws_ssm_document.stop_ec2_doc.name
parameters = {
AutomationAssumeRole = aws_iam_role.ssm_role.arn
InstanceId = aws_instance.ec2_instances[0].id
}
}
AmazonEC2FullAccess is far too permissive if all you need is to stop instances, but I use it here as a working example.
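If you want to lock this down, here is a minimal sketch of a least-privilege replacement for that attachment (the stop_instances names and the exact action list are my assumptions; scope resources to specific instance ARNs if you can):
data "aws_iam_policy_document" "stop_instances" {
  statement {
    effect = "Allow"
    actions = [
      "ec2:StopInstances",
      "ec2:DescribeInstances",
      "ec2:DescribeInstanceStatus",
    ]
    resources = ["*"]
  }
}
# Inline policy on the SSM role instead of AmazonEC2FullAccess
resource "aws_iam_role_policy" "stop_instances" {
  name   = "stop-instances"
  role   = aws_iam_role.ssm_role.id
  policy = data.aws_iam_policy_document.stop_instances.json
}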
I currently have the following Terraform plan:
provider "aws" {
region = var.region
}
resource "aws_instance" "ec2" {
ami = var.ami
instance_type = var.instanceType
subnet_id = var.subnet
security_groups = var.securityGroups
timeouts {
create = "2h"
delete = "2h"
}
tags = {
Name = "${var.ec2ResourceName}"
CoreInfra = "false"
}
lifecycle {
prevent_destroy = true
}
key_name = "My_Key_Name"
connection {
type = "ssh"
user = "ec2-user"
password = ""
private_key = file(var.keyPath)
host = self.public_ip
}
provisioner "file" {
source = "/home/ec2-user/code/backend/ec2/setup_script.sh"
destination = "/tmp/setup_script.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/setup_script.sh",
"bash /tmp/setup_script.sh ${var.ec2ResourceName}"
]
}
}
resource "aws_eip" "eip_manager" {
name = "eip-${var.ec2ResourceName}"
instance = aws_instance.ec2.id
vpc = true
tags = {
Name = "eip-${var.ec2ResourceName}"
}
lifecycle {
prevent_destroy = true
}
}
This plan can be run multiple times, creating a new EC2 instance each time without removing the previous one. However, there is a single Elastic IP that ends up being reassigned to the most recently created EC2 instance. How can I give each new instance its own Elastic IP that does not get reassigned?
Maybe with aws_eip_association; here is the snippet:
resource "aws_eip_association" "eip_assoc" {
  instance_id   = aws_instance.ec2.id
  allocation_id = aws_eip.eip_manager.id
}
More info here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eip_association
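Note that for the association to be the single source of truth, the aws_eip resource itself should no longer set instance, or the two will fight over the attachment. A minimal sketch of the pair, keeping your resource names:
resource "aws_eip" "eip_manager" {
  vpc = true

  tags = {
    Name = "eip-${var.ec2ResourceName}"
  }
}

resource "aws_eip_association" "eip_assoc" {
  # the association, not the EIP, decides which instance holds the address
  instance_id   = aws_instance.ec2.id
  allocation_id = aws_eip.eip_manager.id
}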
I create instances with a default CentOS 7 AMI. This AMI automatically creates a volume and attaches it to the instance. Is it possible to read that volume's ID using Terraform? I create the instance with the following code:
resource "aws_instance" "DCOS-master3" {
ami = "${var.aws_centos_ami}"
availability_zone = "eu-west-1b"
instance_type = "t2.medium"
key_name = "${var.aws_key_name}"
security_groups = ["${aws_security_group.bastion.id}"]
associate_public_ip_address = true
private_ip = "10.0.0.13"
source_dest_check = false
subnet_id = "${aws_subnet.eu-west-1b-public.id}"
tags {
Name = "master3"
}
}
You won't be able to extract EBS details from aws_instance, since it's the AWS side that provides the EBS volume to the resource. But you can define an aws_ebs_volume data source with a filter:
data "aws_ebs_volume" "ebs_volume" {
most_recent = true
filter {
name = "attachment.instance-id"
values = ["${aws_instance.DCOS-master3.id}"]
}
}
output "ebs_volume_id" {
value = "${data.aws_ebs_volume.ebs_volume.id}"
}
You can refer to the available EBS filters here:
http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-volumes.html
You can use: aws_instance.DCOS-master3.root_block_device.0.volume_id
As described in the Terraform docs:
For any root_block_device and ebs_block_device the volume_id is exported. e.g. aws_instance.web.root_block_device.0.volume_id
output "volume-id-C" {
description = "root volume-id"
#get the root volume id form the instance
value = element(tolist(data.aws_instance.DCOS-master3.root_block_device.*.volume_id),0)
}
output "volume-id-D" {
description = "ebs-volume-id"
#get the 1st esb volume id form the instance
value = element(tolist(data.aws_instance.DCOS-master3.ebs_block_device.*.volume_id),0)
}
You can get the volume name of an aws_instance like this:
output "instance" {
value = aws_instance.ec2_instance.volume_tags["Name"]
}
And you can set it as follows:
resource "aws_instance" "ec2_instance" {
ami = var.instance_ami
instance_type = var.instance_type
key_name = var.instance_key
...
tags = {
Name = "${var.server_name}_${var.instance_name[count.index]}"
}
volume_tags = {
Name = "local_${var.instance_name[count.index]}"
}
}