I am trying to provision an RDS instance in private subnets using a Terraform template, with the following attributes/restrictions: not publicly accessible, and the security group opened only to the EKS cluster, not to the public. My template looks like this:
cat modules/rds/rds.tf
resource "aws_db_instance" "rds_instance" {
allocated_storage = 50
identifier = "rds-vaya"
storage_type = "gp2"
engine = "mysql"
engine_version = "8.0.23"
instance_class = "db.t2.micro"
db_name = "vaya"
username = "admin"
password = aws_secretsmanager_secret_version.password.secret_string
publicly_accessible = false
multi_az = true
db_subnet_group_name = aws_db_subnet_group.rdssubnet.id
vpc_security_group_ids = var.eks-sg
tags = {
Name = "OpsyRDSServerInstance"
}
}
cat modules/rds/security.tf
# make RDS subnet group
resource "aws_db_subnet_group" "rdssubnet" {
  name       = "database-subnet"
  subnet_ids = var.private_subnet_ids
}
cat modules/eks/security.tf
resource "aws_security_group" "main" {
name = "eks-sg-${var.env}"
vpc_id = var.vpc_id
}
resource "aws_security_group_rule" "ingress_rules" {
count = length(var.ingress_rule)
type = "ingress"
from_port = var.ingress_rule[count.index][0]
to_port = var.ingress_rule[count.index][1]
protocol = var.ingress_rule[count.index][2]
security_group_id = aws_security_group.main.id
source_security_group_id = aws_security_group.main.id
}
cat modules/eks/output.tf
output "eks-sg" {
value = aws_security_group_rule.ingress_rules.*.id
}
cat main.tf
module "eks_cluster" {
source = "./modules/eks"
eks_cluster_name = var.eks_cluster_name
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.subnet_ids
eks_cluster_role_arn = module.iam.eks_cluster_role_arn
eks_cluster_create_depends_on = module.iam.id
instance_count = var.instance_count
instance_type = var.instance_type
ingress_rule = var.ingress_rule
env = var.env
}
module "rds" {
source = "./modules/rds"
vpc_id = module.vpc.vpc_id
private_subnet_ids = module.vpc.private_subnet_ids
eks-sg = module.eks_cluster.eks-sg
env = var.env
}
Below is the terraform plan/apply:
# module.rds.aws_db_instance.rds_instance will be created
+ resource "aws_db_instance" "rds_instance" {
+ address = (known after apply)
+ allocated_storage = 50
+ apply_immediately = (known after apply)
+ arn = (known after apply)
+ auto_minor_version_upgrade = true
+ availability_zone = (known after apply)
+ backup_retention_period = (known after apply)
+ backup_window = (known after apply)
+ ca_cert_identifier = (known after apply)
+ character_set_name = (known after apply)
+ copy_tags_to_snapshot = false
+ db_name = "vaya"
+ db_subnet_group_name = "database-subnet"
+ delete_automated_backups = true
+ endpoint = (known after apply)
+ engine = "mysql"
+ engine_version = "8.0.23"
+ engine_version_actual = (known after apply)
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ identifier = "rds-vaya"
+ identifier_prefix = (known after apply)
+ instance_class = "db.t2.micro"
+ kms_key_id = (known after apply)
+ latest_restorable_time = (known after apply)
+ license_model = (known after apply)
+ maintenance_window = (known after apply)
+ monitoring_interval = 0
+ monitoring_role_arn = (known after apply)
+ multi_az = true
+ name = (known after apply)
+ nchar_character_set_name = (known after apply)
+ network_type = (known after apply)
+ option_group_name = (known after apply)
+ parameter_group_name = (known after apply)
+ password = (sensitive value)
+ performance_insights_enabled = false
+ performance_insights_kms_key_id = (known after apply)
+ performance_insights_retention_period = (known after apply)
+ port = (known after apply)
+ publicly_accessible = false
+ replica_mode = (known after apply)
+ replicas = (known after apply)
+ resource_id = (known after apply)
+ skip_final_snapshot = false
+ snapshot_identifier = (known after apply)
+ status = (known after apply)
+ storage_type = "gp2"
+ tags = {
+ "Name" = "OpsyRDSServerInstance"
}
+ tags_all = {
+ "Name" = "OpsyRDSServerInstance"
}
+ timezone = (known after apply)
+ username = "admin"
+ vpc_security_group_ids = [
+ "sgrule-2349526507",
+ "sgrule-2500829248",
+ "sgrule-2855048482",
+ "sgrule-4188522375",
]
}
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ eks-sg = [
+ "sgrule-2500829248",
+ "sgrule-2349526507",
+ "sgrule-2855048482",
+ "sgrule-4188522375",
]
Do you want to perform these actions in workspace "dev"?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
module.rds.aws_db_instance.rds_instance: Creating...
╷
│ Error: creating RDS DB Instance (rds-vaya): InvalidParameterValue: Invalid security group , groupId= sgrule-2349526507, sgrule-2500829248, sgrule-2855048482, sgrule-4188522375, groupName=.
│ status code: 400, request id: fb505df1-9202-4986-9d54-7e4af4fcbc91
│
│ with module.rds.aws_db_instance.rds_instance,
│ on modules/rds/rds.tf line 2, in resource "aws_db_instance" "rds_instance":
│ 2: resource "aws_db_instance" "rds_instance" {
│
I've no idea what the problem is or how to fix this issue.
You are outputting the SG rule IDs while you want SG IDs. You need to use the attributes of the SG itself:
resource "aws_security_group" "main" {
name = "eks-sg-${var.env}"
vpc_id = var.vpc_id
}
And the output should be (modules/eks/output.tf):
output "eks-sg" {
value = aws_security_group.main.id
}
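Note that vpc_security_group_ids on aws_db_instance expects a list of security group IDs, so once the output is a single string the types also need to line up. A minimal sketch, assuming the module wiring from your main.tf above; either return a list from the module:
output "eks-sg" {
  # wrap the SG ID in a list so it can be passed straight into vpc_security_group_ids
  value = [aws_security_group.main.id]
}
or keep the output a plain string and wrap it at the call site:
module "rds" {
  source             = "./modules/rds"
  vpc_id             = module.vpc.vpc_id
  private_subnet_ids = module.vpc.private_subnet_ids
  eks-sg             = [module.eks_cluster.eks-sg]
  env                = var.env
}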
Related
I am trying to create a Kubernetes service account in a namespace I have created, which will have a secret and a cluster role binding. However, even though the terraform plan and apply stages show that it is being created, it isn't. Please see the module code and output below:
resource "kubernetes_service_account" "serviceaccount" {
metadata {
name = var.name
namespace = "kube-system"
}
}
resource "kubernetes_cluster_role_binding" "serviceaccount" {
metadata {
name = var.name
}
subject {
kind = "User"
name = "system:serviceaccount:kube-system:${var.name}"
}
role_ref {
kind = "ClusterRole"
name = "cluster-admin"
api_group = "rbac.authorization.k8s.io"
}
}
data "kubernetes_service_account" "serviceaccount" {
metadata {
name = var.name
namespace = "kube-system"
}
depends_on = [
resource.kubernetes_service_account.serviceaccount
]
}
data "kubernetes_secret" "serviceaccount" {
metadata {
name = data.kubernetes_service_account.serviceaccount.default_secret_name
namespace = "kube-system"
}
binary_data = {
"token": ""
}
depends_on = [
resource.kubernetes_service_account.serviceaccount
]
}
And the output from the terraform run in our devops pipeline:
# module.dd_service_account.data.kubernetes_secret.serviceaccount will be read during apply
# (config refers to values not yet known)
<= data "kubernetes_secret" "serviceaccount" {
+ binary_data = (sensitive value)
+ data = (sensitive value)
+ id = (known after apply)
+ immutable = (known after apply)
+ type = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = (known after apply)
+ namespace = "kube-system"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
# module.dd_service_account.data.kubernetes_service_account.serviceaccount will be read during apply
# (depends on a resource or a module with changes pending)
<= data "kubernetes_service_account" "serviceaccount" {
+ automount_service_account_token = (known after apply)
+ default_secret_name = (known after apply)
+ id = (known after apply)
+ image_pull_secret = (known after apply)
+ secret = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "deployer-new"
+ namespace = "kube-system"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
# module.dd_service_account.kubernetes_cluster_role_binding.serviceaccount will be created
+ resource "kubernetes_cluster_role_binding" "serviceaccount" {
+ id = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "deployer-new"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
+ role_ref {
+ api_group = "rbac.authorization.k8s.io"
+ kind = "ClusterRole"
+ name = "cluster-admin"
}
+ subject {
+ api_group = (known after apply)
+ kind = "User"
+ name = "system:serviceaccount:kube-system:deployer-new"
+ namespace = "default"
}
}
# module.dd_service_account.kubernetes_service_account.serviceaccount will be created
+ resource "kubernetes_service_account" "serviceaccount" {
+ automount_service_account_token = true
+ default_secret_name = (known after apply)
+ id = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "deployer-new"
+ namespace = "kube-system"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
When I run kubectl against the cluster, the namespace I created is there, but no service accounts are.
Any ideas?
Thanks.
Terraform is creating the role and attaching it to the EC2 instance successfully.
However, when I try to run commands with the AWS CLI, it gives an error about a missing AccessKeyId:
aws ec2 describe-instances --debug
2022-01-12 18:44:25,755 - MainThread - botocore.utils - DEBUG - Retrieved credentials is missing required field: AccessKeyId
2022-01-12 18:44:25,755 - MainThread - botocore.utils - DEBUG - Error response received when retrieving credentials: {'Code': 'AssumeRoleUnauthorizedAccess', 'Message': 'EC2 cannot assume the role tf_eks_role_bastion. Please see documentation at https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_iam-ec2.html#troubleshoot_iam-ec2_errors-info-doc.', 'LastUpdated': '2022-01-12T18:42:15Z'}.
My main.tf creates a role, attaches two policies to it, creates an instance-profile for the role and attaches the instance-profile to the newly created ec2-instance.
I got it working while changing main.tf and constantly re-applying the changes. But after executing terraform destroy and then terraform apply again, it stopped working.
Also, when I create a role in AWS Console manually and attach it to the same ec2-instance, it starts working.
Does anyone understand this missing AccessKeyId error?
My main.tf:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "eu-central-1"
}
resource "aws_security_group" "http_sg" {
name = "tf_bastion_host allow ht_p inbound from anywhere"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "ssh_sg" {
name = "tf_bastion_host allow ssh from anywhere"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
# create role for bastion host
resource "aws_iam_role" "eks_role" {
  name = "tf_eks_role_bastion"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    "Statement" : [
      {
        "Effect" : "Allow",
        "Principal" : {
          "Service" : "eks.amazonaws.com"
        },
        "Action" : "sts:AssumeRole"
      },
    ]
  })
  tags = {
    tag-key = "tf_bastion_host"
  }
}

# attach policy to role
resource "aws_iam_policy_attachment" "eks_attachment_cluster_policy" {
  name       = "tf_eks_attachment_cluster_policy"
  roles      = ["${aws_iam_role.eks_role.name}"]
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

# attach policy to role
resource "aws_iam_policy_attachment" "eks_attachment_service_policy" {
  name       = "tf_eks_attachment_service_policy"
  roles      = ["${aws_iam_role.eks_role.name}"]
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
}

# create an instance profile
resource "aws_iam_instance_profile" "bastion_host_profile" {
  name = "tf_bastion_host_profile"
  role = aws_iam_role.eks_role.name
}

resource "aws_instance" "bastion_host" {
  ami                         = "ami-05d34d340fb1d89e5"
  instance_type               = "t2.micro"
  count                       = 1
  associate_public_ip_address = true
  # use the jenkins key-pair for now
  key_name = "jenkins"
  # attach the instance profile to the EC2 instance
  iam_instance_profile = aws_iam_instance_profile.bastion_host_profile.name
  vpc_security_group_ids = [
    aws_security_group.http_sg.id,
    aws_security_group.ssh_sg.id
  ]
  user_data = file("installs.sh")
  tags = {
    Name        = "tf_bastion_host",
    Environment = "production"
  }
}
Output of terraform apply -auto-approve:
Terraform used the selected providers to generate the
following execution plan. Resource actions are
indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_iam_instance_profile.bastion_host_profile will be created
+ resource "aws_iam_instance_profile" "bastion_host_profile" {
+ arn = (known after apply)
+ create_date = (known after apply)
+ id = (known after apply)
+ name = "tf_bastion_host_profile"
+ path = "/"
+ role = "tf_eks_role_bastion"
+ tags_all = (known after apply)
+ unique_id = (known after apply)
}
# aws_iam_policy_attachment.eks_attachment_cluster_policy will be created
+ resource "aws_iam_policy_attachment" "eks_attachment_cluster_policy" {
+ id = (known after apply)
+ name = "tf_eks_attachment_cluster_policy"
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
+ roles = [
+ "tf_eks_role_bastion",
]
}
# aws_iam_policy_attachment.eks_attachment_service_policy will be created
+ resource "aws_iam_policy_attachment" "eks_attachment_service_policy" {
+ id = (known after apply)
+ name = "tf_eks_attachment_service_policy"
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
+ roles = [
+ "tf_eks_role_bastion",
]
}
# aws_iam_role.eks_role will be created
+ resource "aws_iam_role" "eks_role" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = "eks.amazonaws.com"
}
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ managed_policy_arns = (known after apply)
+ max_session_duration = 3600
+ name = "tf_eks_role_bastion"
+ name_prefix = (known after apply)
+ path = "/"
+ tags = {
+ "tag-key" = "tf_bastion_host"
}
+ tags_all = {
+ "tag-key" = "tf_bastion_host"
}
+ unique_id = (known after apply)
+ inline_policy {
+ name = (known after apply)
+ policy = (known after apply)
}
}
# aws_instance.bastion_host[0] will be created
+ resource "aws_instance" "bastion_host" {
+ ami = "ami-05d34d340fb1d89e5"
+ arn = (known after apply)
+ associate_public_ip_address = true
+ availability_zone = (known after apply)
+ cpu_core_count = (known after apply)
+ cpu_threads_per_core = (known after apply)
+ disable_api_termination = (known after apply)
+ ebs_optimized = (known after apply)
+ get_password_data = false
+ host_id = (known after apply)
+ iam_instance_profile = "tf_bastion_host_profile"
+ id = (known after apply)
+ instance_initiated_shutdown_behavior = (known after apply)
+ instance_state = (known after apply)
+ instance_type = "t2.micro"
+ ipv6_address_count = (known after apply)
+ ipv6_addresses = (known after apply)
+ key_name = "jenkins"
+ monitoring = (known after apply)
+ outpost_arn = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
+ placement_partition_number = (known after apply)
+ primary_network_interface_id = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ secondary_private_ips = (known after apply)
+ security_groups = (known after apply)
+ source_dest_check = true
+ subnet_id = (known after apply)
+ tags = {
+ "Environment" = "production"
+ "Name" = "tf_bastion_host"
}
+ tags_all = {
+ "Environment" = "production"
+ "Name" = "tf_bastion_host"
}
+ tenancy = (known after apply)
+ user_data = "f27e2f754e7658f0f0cdd09facb579d44b20ea5f"
+ user_data_base64 = (known after apply)
+ vpc_security_group_ids = (known after apply)
+ capacity_reservation_specification {
+ capacity_reservation_preference = (known after apply)
+ capacity_reservation_target {
+ capacity_reservation_id = (known after apply)
}
}
+ ebs_block_device {
+ delete_on_termination = (known after apply)
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ snapshot_id = (known after apply)
+ tags = (known after apply)
+ throughput = (known after apply)
+ volume_id = (known after apply)
+ volume_size = (known after apply)
+ volume_type = (known after apply)
}
+ enclave_options {
+ enabled = (known after apply)
}
+ ephemeral_block_device {
+ device_name = (known after apply)
+ no_device = (known after apply)
+ virtual_name = (known after apply)
}
+ metadata_options {
+ http_endpoint = (known after apply)
+ http_put_response_hop_limit = (known after apply)
+ http_tokens = (known after apply)
}
+ network_interface {
+ delete_on_termination = (known after apply)
+ device_index = (known after apply)
+ network_interface_id = (known after apply)
}
+ root_block_device {
+ delete_on_termination = (known after apply)
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ tags = (known after apply)
+ throughput = (known after apply)
+ volume_id = (known after apply)
+ volume_size = (known after apply)
+ volume_type = (known after apply)
}
}
# aws_security_group.http_sg will be created
+ resource "aws_security_group" "http_sg" {
+ arn = (known after apply)
+ description = "Managed by Terraform"
+ egress = [
+ {
+ cidr_blocks = [
+ "0.0.0.0/0",
]
+ description = ""
+ from_port = 0
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "-1"
+ security_groups = []
+ self = false
+ to_port = 0
},
]
+ id = (known after apply)
+ ingress = [
+ {
+ cidr_blocks = [
+ "0.0.0.0/0",
]
+ description = ""
+ from_port = 80
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = []
+ self = false
+ to_port = 80
},
]
+ name = "tf_bastion_host allow ht_p inbound from anywhere"
+ name_prefix = (known after apply)
+ owner_id = (known after apply)
+ revoke_rules_on_delete = false
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}
# aws_security_group.ssh_sg will be created
+ resource "aws_security_group" "ssh_sg" {
+ arn = (known after apply)
+ description = "Managed by Terraform"
+ egress = [
+ {
+ cidr_blocks = [
+ "0.0.0.0/0",
]
+ description = ""
+ from_port = 0
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "-1"
+ security_groups = []
+ self = false
+ to_port = 0
},
]
+ id = (known after apply)
+ ingress = [
+ {
+ cidr_blocks = [
+ "0.0.0.0/0",
]
+ description = ""
+ from_port = 22
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = []
+ self = false
+ to_port = 22
},
]
+ name = "tf_bastion_host allow ssh from anywhere"
+ name_prefix = (known after apply)
+ owner_id = (known after apply)
+ revoke_rules_on_delete = false
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}
Plan: 7 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ private_instance_ip = (known after apply)
+ public_instance_dns = (known after apply)
aws_iam_role.eks_role: Creating...
aws_security_group.http_sg: Creating...
aws_security_group.ssh_sg: Creating...
aws_security_group.http_sg: Creation complete after 3s [id=sg-0ae7e9865c60ce9c9]
aws_security_group.ssh_sg: Creation complete after 3s [id=sg-016f588fb10a7dbad]
aws_iam_role.eks_role: Creation complete after 3s [id=tf_eks_role_bastion]
aws_iam_policy_attachment.eks_attachment_cluster_policy: Creating...
aws_iam_policy_attachment.eks_attachment_service_policy: Creating...
aws_iam_instance_profile.bastion_host_profile: Creating...
aws_iam_policy_attachment.eks_attachment_service_policy: Creation complete after 2s [id=tf_eks_attachment_service_policy]
aws_iam_instance_profile.bastion_host_profile: Creation complete after 2s [id=tf_bastion_host_profile]
aws_instance.bastion_host[0]: Creating...
aws_iam_policy_attachment.eks_attachment_cluster_policy: Creation complete after 2s [id=tf_eks_attachment_cluster_policy]
aws_instance.bastion_host[0]: Still creating... [10s elapsed]
aws_instance.bastion_host[0]: Still creating... [20s elapsed]
aws_instance.bastion_host[0]: Still creating... [30s elapsed]
aws_instance.bastion_host[0]: Creation complete after 39s [id=i-07926ae9044680939]
Apply complete! Resources: 7 added, 0 changed, 0 destroyed.
In the assume_role_policy of your IAM role
"Service" : "eks.amazonaws.com"
should be changed to
"Service" : "ec2.amazonaws.com"
If your role is going to be used by an EC2 instance, the allowed principal needs to be ec2.amazonaws.com. You might also want to review the managed policies you are attaching to the role, they are more suitable for an EKS cluster and not a bastion host.
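For reference, the corrected role would look something like this (a sketch of your eks_role resource with only the principal changed):
resource "aws_iam_role" "eks_role" {
  name = "tf_eks_role_bastion"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          # ec2, not eks: the bastion instance itself is what assumes this role
          Service = "ec2.amazonaws.com"
        }
        Action = "sts:AssumeRole"
      },
    ]
  })
  tags = {
    tag-key = "tf_bastion_host"
  }
}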
I need to create multiple DNS records with their respective IPs, assigning the first IP to the first DNS name and the second IP to the second, something like dns1 - 10.1.20.70 and dns2 - 10.1.20.40. But both IPs are getting assigned to both DNS records (dns1 and dns2). Any suggestions?
Code:
resource "aws_route53_record" "onprem_api_record" {
for_each = toset(local.vm_fqdn)
zone_id = data.aws_route53_zone.dns_zone.zone_id
name = each.value
type = "A"
records = var.api_ips[terraform.workspace]
ttl = "300"
}
locals {
vm_fqdn = flatten(["dns1-${terraform.workspace}.${local.domain}", "dns2-${terraform.workspace}.${local.domain}"] )
}
variable "api_ips" {
type = map(list(string))
default = {
"dev" = [ "10.1.20.70", "10.1.20.140" ]
"qa" = [ "10.1.22.180", "10.1.22.150" ]
"test" = [ "10.1.23.190", "10.1.23.160" ]
}
}
Output
+ resource "aws_route53_record" "onprem_api_record" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "dns1.dev.ciscodcloud.com"
+ records = [
+ "10.1.20.40",
+ "10.1.20.70",
]
+ ttl = 300
+ type = "A"
+ zone_id = "Z30HW9VL6PYDXQ"
}
aws_route53_record.onprem_api_record["dna2.dev.cisco.com"] will be created
+ resource "aws_route53_record" "onprem_api_record" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "dns2.dev.cisco.com"
+ records = [
+ "10.1.20.40",
+ "10.1.20.70",
]
+ ttl = 300
+ type = "A"
+ zone_id = "Z30HW9VL6PYDXQ"
}
Plan: 2 to add, 0 to change, 1 to destroy.
You may want to use zipmap. Here is a terse example showing its use in for_each with a for expression, as could be used in your case:
resource "aws_route53_record" "onprem_api_record" {
for_each = { for fqdn, ip in zipmap(local.vm_fqdn, local.ips["dev"]) : fqdn => ip }
zone_id = "x"
name = each.key
type = "A"
records = [each.value]
ttl = "300"
}
locals {
ips = {
"dev" = ["10.1.20.70", "10.1.20.140"]
"qa" = ["10.1.22.180", "10.1.22.150"]
"test" = ["10.1.23.190", "10.1.23.160"]
}
vm_fqdn = ["dns1-dev.domain", "dns2-dev.domain"]
}
And the plan looks like:
# aws_route53_record.onprem_api_record["dns1-dev.domain"] will be created
+ resource "aws_route53_record" "onprem_api_record" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "dns1-dev.domain"
+ records = [
+ "10.1.20.70",
]
+ ttl = 300
+ type = "A"
+ zone_id = "x"
}
# aws_route53_record.onprem_api_record["dns2-dev.domain"] will be created
+ resource "aws_route53_record" "onprem_api_record" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "dns2-dev.domain"
+ records = [
+ "10.1.20.140",
]
+ ttl = 300
+ type = "A"
+ zone_id = "x"
}
Plan: 2 to add, 0 to change, 0 to destroy.
You can do this as follows with count:
resource "aws_route53_record" "onprem_api_record" {
count = length(local.vm_fqdn)
zone_id = data.aws_route53_zone.dns_zone.zone_id
name = local.vm_fqdn[count.index]
type = "A"
records = [var.api_ips[terraform.workspace][count.index]]
ttl = "300"
}
I have a terraform script to create an instance and attach a role to it.
Everything is getting created as per expectation.
The issue I am facing is that the IAM role is getting created and the policy is being attached to the role, but the role is not getting attached to the instance. Please help.
I have the following terraform script:
provider "aws" {
region = "ap-southeast-1"
}
terraform {
backend "s3" {
bucket = "example.com"
key = "terraform/aws/ec2/xxxNNN/terraform.tfstate"
region = "ap-southeast-1"
}
}
resource "aws_iam_policy" "xxx_nodes_role_policy" {
name = "xxx_nodes_role_policy"
description = "IAM Policy for XXX nodes"
policy = "${file("xxx_nodes_role_policy.json")}"
}
resource "aws_iam_role" "ec2_role_for_xxx_nodes" {
name = "ec2_role_for_xxx_nodes"
assume_role_policy = "${file("ec2_assumerolepolicy.json")}"
}
resource "aws_iam_role_policy_attachment" "xxx_nodes_role_policy_attachment" {
role = "${aws_iam_role.ec2_role_for_xxx_nodes.name}"
policy_arn = "${aws_iam_policy.xxx_nodes_role_policy.arn}"
}
resource "aws_iam_instance_profile" "xxx_instance_profile" {
name = "xxx_instance_profile"
role = "${aws_iam_role.ec2_role_for_xxx_nodes.name}"
}
variable "sgids" {
type = list(string)
default = [ "sg-XXX", "sg-XXX" ]
}
resource "aws_instance" "xxxNNN" {
ami = "ami-063e3af9d2cc7fe94"
instance_type = "r5.large"
iam_instance_profile = "${aws_iam_instance_profile.xxx_instance_profile.name}"
availability_zone = "ap-southeast-1a"
key_name = "KKK"
vpc_security_group_ids = var.sgids
subnet_id = "subnet-XXX"
associate_public_ip_address = false
user_data = "${file("set-up.sh")}"
root_block_device {
volume_type = "gp2"
volume_size = "200"
delete_on_termination = true
}
tags = {
Name = "XXXXXXXXXXX/XXXNNN"
}
lifecycle {
prevent_destroy = true
}
}
resource "aws_eip" "XXXNNN" {
vpc = true
instance = "${aws_instance.xxxNNN.id}"
tags = {
Name = "XXXNNN"
}
lifecycle {
prevent_destroy = true
}
}
The contents of xxx_nodes_role_policy.json are as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:ap-southeast-1:0000:log-group:*",
        "arn:aws:logs:ap-southeast-1:0000:log-group:production:*"
      ]
    }
  ]
}
The contents of ec2_assumerolepolicy.json are as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EDIT: Adding terraform apply output below:
aws_iam_policy.xxx_nodes_role_policy: Refreshing state... [id=arn:aws:iam::XXX:policy/xxx_nodes_role_policy]
aws_iam_role.ec2_role_for_xxx_nodes: Refreshing state... [id=ec2_role_for_xxx_nodes]
data.aws_iam_policy_document.instance-assume-role-policy: Refreshing state...
aws_iam_role_policy_attachment.xxx_nodes_role_policy_attachment: Refreshing state... [id=ec2_role_for_xxx_nodes-20200717102807715700000001]
aws_iam_instance_profile.xxx_instance_profile: Refreshing state... [id=xxx_instance_profile]
aws_instance.xxxNNN: Refreshing state... [id=i-XXX]
aws_eip.XXXNNN: Refreshing state... [id=eipalloc-XXX]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_eip.XXXNNN will be created
+ resource "aws_eip" "XXXNNN" {
+ allocation_id = (known after apply)
+ association_id = (known after apply)
+ customer_owned_ip = (known after apply)
+ domain = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ network_interface = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ public_ipv4_pool = (known after apply)
+ tags = {
+ "Name" = "XXXNNN"
}
+ vpc = true
}
# aws_iam_policy.xxx_nodes_role_policy will be created
+ resource "aws_iam_policy" "xxx_nodes_role_policy" {
+ arn = (known after apply)
+ description = "IAM Policy for XXX nodes"
+ id = (known after apply)
+ name = "xxx_nodes_role_policy"
+ path = "/"
+ policy = jsonencode(
{
+ Statement = [
+ {
+ Action = [
+ "logs:CreateLogStream",
+ "logs:DescribeLogGroups",
+ "logs:DescribeLogStreams",
+ "logs:PutLogEvents",
]
+ Effect = "Allow"
+ Resource = [
+ "arn:aws:logs:ap-southeast-1:XXX:log-group:*",
+ "arn:aws:logs:ap-southeast-1:XXX:log-group:production:*",
]
+ Sid = "VisualEditor0"
},
]
+ Version = "2012-10-17"
}
)
}
# aws_iam_role.ec2_role_for_xxx_nodes will be created
+ resource "aws_iam_role" "ec2_role_for_xxx_nodes" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = "ec2.amazonaws.com"
}
+ Sid = ""
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ max_session_duration = 3600
+ name = "ec2_role_for_xxx_nodes"
+ path = "/"
+ unique_id = (known after apply)
}
# aws_iam_role_policy_attachment.xxx_nodes_role_policy_attachment will be created
+ resource "aws_iam_role_policy_attachment" "xxx_nodes_role_policy_attachment" {
+ id = (known after apply)
+ policy_arn = (known after apply)
+ role = "ec2_role_for_xxx_nodes"
}
# aws_instance.xxxNNN will be created
+ resource "aws_instance" "xxxNNN" {
+ ami = "ami-063e3af9d2cc7fe94"
+ arn = (known after apply)
+ associate_public_ip_address = false
+ availability_zone = "aws-region"
+ cpu_core_count = (known after apply)
+ cpu_threads_per_core = (known after apply)
+ get_password_data = false
+ host_id = (known after apply)
+ iam_instance_profile = "xxx_instance_profile"
+ id = (known after apply)
+ instance_state = (known after apply)
+ instance_type = "r5.large"
+ ipv6_address_count = (known after apply)
+ ipv6_addresses = (known after apply)
+ key_name = "key.name"
+ network_interface_id = (known after apply)
+ outpost_arn = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
+ primary_network_interface_id = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ security_groups = (known after apply)
+ source_dest_check = true
+ subnet_id = "subnet-XXX"
+ tags = {
+ "Name" = "XXXNNN"
}
+ tenancy = (known after apply)
+ user_data = "060b3d9c8929ff0f18bdd9fa151f5d982c256a78"
+ volume_tags = (known after apply)
+ vpc_security_group_ids = [
+ "sg-XXX",
+ "sg-XXX",
]
+ ebs_block_device {
+ delete_on_termination = (known after apply)
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ snapshot_id = (known after apply)
+ volume_id = (known after apply)
+ volume_size = (known after apply)
+ volume_type = (known after apply)
}
+ ephemeral_block_device {
+ device_name = (known after apply)
+ no_device = (known after apply)
+ virtual_name = (known after apply)
}
+ metadata_options {
+ http_endpoint = (known after apply)
+ http_put_response_hop_limit = (known after apply)
+ http_tokens = (known after apply)
}
+ network_interface {
+ delete_on_termination = (known after apply)
+ device_index = (known after apply)
+ network_interface_id = (known after apply)
}
+ root_block_device {
+ delete_on_termination = true
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ volume_id = (known after apply)
+ volume_size = 200
+ volume_type = "gp2"
}
}
Plan: 5 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_iam_policy.xxx_nodes_role_policy: Creating...
aws_iam_role.ec2_role_for_xxx_nodes: Creating...
aws_instance.xxxNNN: Creating...
aws_iam_role.ec2_role_for_xxx_nodes: Creation complete after 3s [id=ec2_role_for_xxx_nodes]
aws_iam_policy.xxx_nodes_role_policy: Creation complete after 4s [id=arn:aws:iam::XXX:policy/xxx_nodes_role_policy]
aws_iam_role_policy_attachment.xxx_nodes_role_policy_attachment: Creating...
aws_iam_role_policy_attachment.xxx_nodes_role_policy_attachment: Creation complete after 2s [id=ec2_role_for_xxx_nodes-20200717110045184300000001]
aws_instance.xxxNNN: Still creating... [10s elapsed]
aws_instance.xxxNNN: Still creating... [20s elapsed]
aws_instance.xxxNNN: Creation complete after 22s [id=i-XXX]
aws_eip.XXXNNN: Creating...
aws_eip.XXXNNN: Creation complete after 3s [id=eipalloc-XXX]
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
I was able to solve this by myself after a discussion with other folks.
RCA (Root Cause Analysis)
So, the issue was that the first time terraform apply was run, an instance profile was created, but, probably due to inadequate IAM permissions, the IAM role could not be attached to it. On all subsequent executions of terraform apply, terraform was using the already existing instance profile (probably because I had not given the terraform user destroy permissions).
How I discovered this
It was a simple thing that was being overlooked. If you count the resources in the terraform script in the question, you will see that six resources are defined, whereas in the output terraform always responds with:
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
This is how I figured out that the instance profile was probably never being deleted.
Removing the instance profile from IAM via AWS CLI and then re-doing a terraform apply fixed this.
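For the CLI cleanup, something along these lines should work (a sketch, using the profile and role names from the script above):
# detach the role first, then delete the orphaned instance profile
aws iam remove-role-from-instance-profile \
    --instance-profile-name xxx_instance_profile \
    --role-name ec2_role_for_xxx_nodes
aws iam delete-instance-profile \
    --instance-profile-name xxx_instance_profile
After that, terraform apply can recreate the instance profile cleanly.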
For IAM permissions you may refer to: https://iam.cloudonaut.io/reference/iam.html
Do a find-on-page on that URL, search for InstanceProfile, and add the permissions needed to your terraform user. I added all of them.
This is a surprise for me, as Terraform is ignoring a resource creation. I'm using modules in Terraform, and every other resource is getting created except the vpc_peering_connection; it is in fact ignoring that block. Hence, for debugging, I created an output for the ID of the peering connection, and then I get the error below:
$ terraform plan
Error: Reference to undeclared resource
on network/output.tf line 6, in output "peering-id":
6: value = "${aws_vpc_peering_connection.default.id}"
A managed resource "aws_vpc_peering_connection" "default" has not been
declared in network.
This is a tree snap of my code structure.
.
├── module-network.tf
├── network
│ ├── data.tf
│ ├── igw.tf
│ ├── output.tf
│ ├── peering-conn
│ ├── rt-public.tf
│ ├── security_group.tf
│ ├── subnet.tf
│ ├── var.tf
│ └── vpc.tf
├── output.tf
├── provider-aws.tf
├── terraform.tfstate
├── terraform.tfstate.backup
├── var-static.tf
└── versions.tf
1 directory, 16 files
Terraform Code:
$ cat module-network.tf
module "network" {
source = "./network"
AWS_REGION = var.REGION
ENVIRONMENT = var.ENVIRONMENT
PRODUCT = var.PRODUCT
VPC_CIDR = var.VPC_CIDR
SUBNET_COUNT = var.SUBNET_COUNT
VPC_PEER_ID = var.VPC_PEER_ID
}
$ cat output.tf
output "vpc-id" {
value = module.network.vpc-id
}
output "peering-id" {
value =module.network.peering-id
}
variable "PRODUCT" {
default = "jjmdb"
}
variable "ENVIRONMENT" {
default = "prod"
}
variable "REGION" {
default = "us-east-1"
}
variable "VPC_CIDR" {
default = "10.100.0.0/16"
}
variable "SUBNET_COUNT" {
default = "2"
}
variable "VPC_PEER_ID" {
default = "vpc-0724db2d24120ca8c"
}
$ cat data.tf
data "aws_vpc" "peer_vpc" {
id = "${var.VPC_PEER_ID}"
}
data "aws_subnet_ids" "private_nodes" {
vpc_id = "${data.aws_vpc.peer_vpc.id}"
tags = {
Tier = "node-private"
}
}
$ cat igw.tf
# Create internet gateway
resource "aws_internet_gateway" "igw" {
  vpc_id = "${aws_vpc.vpc.id}"
  tags = {
    Name        = "${format("%s-igw", var.PRODUCT)}"
    Environment = "${var.ENVIRONMENT}"
  }
}
$ cat output.tf
output "vpc-id" {
value = "${aws_vpc.vpc.id}"
}
output "peering-id" {
value = "${aws_vpc_peering_connection.default.id}"
}
$ cat peering-conn
# Create VPC Peering connection
resource "aws_vpc_peering_connection" "default" {
  peer_vpc_id = "${data.aws_vpc.peer_vpc.id}"
  vpc_id      = "${aws_vpc.vpc.id}"
  auto_accept = true
  tags = {
    Name        = "${format("%s-peering", var.PRODUCT)}"
    Environment = "${var.ENVIRONMENT}"
  }
}
$ cat rt-public.tf
# Create a Public Route table
resource "aws_route_table" "rt-public" {
  vpc_id = "${aws_vpc.vpc.id}"
  tags = {
    Name        = "${format("%s-rt-public", var.PRODUCT)}"
    Environment = "${var.ENVIRONMENT}"
  }
}

resource "aws_route" "rt-public-route" {
  route_table_id         = "${aws_route_table.rt-public.id}"
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = "${aws_internet_gateway.igw.id}"
}

# Associate public Route Table to public Subnet
resource "aws_route_table_association" "rt-sub-public" {
  count          = "${var.SUBNET_COUNT}"
  subnet_id      = "${aws_subnet.sub_public.*.id[count.index]}"
  route_table_id = "${aws_route_table.rt-public.id}"
}
$ cat security_group.tf
$ cat subnet.tf
resource "aws_subnet" "sub_public" {
count = "${var.SUBNET_COUNT}"
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "${cidrsubnet(var.VPC_CIDR, 2, count.index + 2)}"
tags = {
Name = "${format("%s-sub-public-%d",var.PRODUCT,count.index)}"
Environment = "${var.ENVIRONMENT}"
}
}
$ cat var.tf
variable "AWS_REGION" {}
variable "ENVIRONMENT" {}
variable "PRODUCT" {}
variable "VPC_CIDR" {}
variable "SUBNET_COUNT" {}
variable "VPC_PEER_ID" {}
$ cat vpc.tf
# Create VPC
resource "aws_vpc" "vpc" {
  cidr_block           = "${var.VPC_CIDR}"
  instance_tenancy     = "default"
  enable_dns_support   = "true"
  enable_dns_hostnames = "true"
  enable_classiclink   = "false"
  tags = {
    Name        = "${format("%s-vpc", var.PRODUCT)}"
    Environment = "${var.ENVIRONMENT}"
  }
}
If I remove the output blocks, the peering connection is silently ignored. The terraform plan is as below:
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
module.network.data.aws_vpc.peer_vpc: Refreshing state...
module.network.data.aws_subnet_ids.private_nodes: Refreshing state...
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# module.network.aws_internet_gateway.igw will be created
+ resource "aws_internet_gateway" "igw" {
+ id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Environment" = "prod"
+ "Name" = "jjmdb-igw"
}
+ vpc_id = (known after apply)
}
# module.network.aws_route.rt-public-route will be created
+ resource "aws_route" "rt-public-route" {
+ destination_cidr_block = "0.0.0.0/0"
+ destination_prefix_list_id = (known after apply)
+ egress_only_gateway_id = (known after apply)
+ gateway_id = (known after apply)
+ id = (known after apply)
+ instance_id = (known after apply)
+ instance_owner_id = (known after apply)
+ nat_gateway_id = (known after apply)
+ network_interface_id = (known after apply)
+ origin = (known after apply)
+ route_table_id = (known after apply)
+ state = (known after apply)
}
# module.network.aws_route_table.rt-public will be created
+ resource "aws_route_table" "rt-public" {
+ id = (known after apply)
+ owner_id = (known after apply)
+ propagating_vgws = (known after apply)
+ route = (known after apply)
+ tags = {
+ "Environment" = "prod"
+ "Name" = "jjmdb-rt-public"
}
+ vpc_id = (known after apply)
}
# module.network.aws_route_table_association.rt-sub-public[0] will be created
+ resource "aws_route_table_association" "rt-sub-public" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# module.network.aws_route_table_association.rt-sub-public[1] will be created
+ resource "aws_route_table_association" "rt-sub-public" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# module.network.aws_subnet.sub_public[0] will be created
+ resource "aws_subnet" "sub_public" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = (known after apply)
+ availability_zone_id = (known after apply)
+ cidr_block = "10.100.128.0/18"
+ id = (known after apply)
+ ipv6_cidr_block = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ tags = {
+ "Environment" = "prod"
+ "Name" = "jjmdb-sub-public-0"
}
+ vpc_id = (known after apply)
}
# module.network.aws_subnet.sub_public[1] will be created
+ resource "aws_subnet" "sub_public" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = (known after apply)
+ availability_zone_id = (known after apply)
+ cidr_block = "10.100.192.0/18"
+ id = (known after apply)
+ ipv6_cidr_block = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ tags = {
+ "Environment" = "prod"
+ "Name" = "jjmdb-sub-public-1"
}
+ vpc_id = (known after apply)
}
# module.network.aws_vpc.vpc will be created
+ resource "aws_vpc" "vpc" {
+ arn = (known after apply)
+ assign_generated_ipv6_cidr_block = false
+ cidr_block = "10.100.0.0/16"
+ default_network_acl_id = (known after apply)
+ default_route_table_id = (known after apply)
+ default_security_group_id = (known after apply)
+ dhcp_options_id = (known after apply)
+ enable_classiclink = false
+ enable_classiclink_dns_support = (known after apply)
+ enable_dns_hostnames = true
+ enable_dns_support = true
+ id = (known after apply)
+ instance_tenancy = "default"
+ ipv6_association_id = (known after apply)
+ ipv6_cidr_block = (known after apply)
+ main_route_table_id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Environment" = "prod"
+ "Name" = "jjmdb-vpc"
}
}
Plan: 8 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
Note: I was getting the same issue with Terraform version 0.11.1, so I upgraded to 0.12.3. But no luck.
Please advise.