AWS Multiple DNS A record creation - amazon-web-services

I need to create multiple DNS records, each with its respective IP: the first IP assigned to the first DNS name and the second IP to the second, e.g. dns1 - 10.1.20.70 and dns2 - 10.1.20.40. Instead, both IPs are being assigned to both DNS records (dns1 and dns2). Any suggestions?
Code:
resource "aws_route53_record" "onprem_api_record" {
for_each = toset(local.vm_fqdn)
zone_id = data.aws_route53_zone.dns_zone.zone_id
name = each.value
type = "A"
records = var.api_ips[terraform.workspace]
ttl = "300"
}
locals {
vm_fqdn = flatten(["dns1-${terraform.workspace}.${local.domain}", "dns2-${terraform.workspace}.${local.domain}"] )
}
variable "api_ips" {
type = map(list(string))
default = {
"dev" = [ "10.1.20.70", "10.1.20.140" ]
"qa" = [ "10.1.22.180", "10.1.22.150" ]
"test" = [ "10.1.23.190", "10.1.23.160" ]
}
}
Output
+ resource "aws_route53_record" "onprem_api_record" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "dns1.dev.ciscodcloud.com"
+ records = [
+ "10.1.20.40",
+ "10.1.20.70",
]
+ ttl = 300
+ type = "A"
+ zone_id = "Z30HW9VL6PYDXQ"
}
aws_route53_record.onprem_api_record["dns2.dev.cisco.com"] will be created
+ resource "aws_route53_record" "onprem_api_record" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "dns2.dev.cisco.com"
+ records = [
+ "10.1.20.40",
+ "10.1.20.70",
]
+ ttl = 300
+ type = "A"
+ zone_id = "Z30HW9VL6PYDXQ"
}
Plan: 2 to add, 0 to change, 1 to destroy.

You may want to use zipmap. Here is a terse example showing its use in a for_each (with a for expression), as it could be applied in your case.
resource "aws_route53_record" "onprem_api_record" {
for_each = { for fqdn, ip in zipmap(local.vm_fqdn, local.ips["dev"]) : fqdn => ip }
zone_id = "x"
name = each.key
type = "A"
records = [each.value]
ttl = "300"
}
locals {
ips = {
"dev" = ["10.1.20.70", "10.1.20.140"]
"qa" = ["10.1.22.180", "10.1.22.150"]
"test" = ["10.1.23.190", "10.1.23.160"]
}
vm_fqdn = ["dns1-dev.domain", "dns2-dev.domain"]
}
And the plan looks like:
# aws_route53_record.onprem_api_record["dns1-dev.domain"] will be created
+ resource "aws_route53_record" "onprem_api_record" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "dns1-dev.domain"
+ records = [
+ "10.1.20.70",
]
+ ttl = 300
+ type = "A"
+ zone_id = "x"
}
# aws_route53_record.onprem_api_record["dns2-dev.domain"] will be created
+ resource "aws_route53_record" "onprem_api_record" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "dns2-dev.domain"
+ records = [
+ "10.1.20.140",
]
+ ttl = 300
+ type = "A"
+ zone_id = "x"
}
Plan: 2 to add, 0 to change, 0 to destroy.
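Applied directly to your original variables, the map can also be built per workspace without the intermediate for expression, since zipmap already returns a map. A minimal sketch, assuming your existing local.vm_fqdn and var.api_ips definitions:
resource "aws_route53_record" "onprem_api_record" {
  # zipmap pairs each FQDN with the IP at the same index for the current workspace
  for_each = zipmap(local.vm_fqdn, var.api_ips[terraform.workspace])

  zone_id = data.aws_route53_zone.dns_zone.zone_id
  name    = each.key
  type    = "A"
  records = [each.value]
  ttl     = "300"
}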

You can do this as follows with count:
resource "aws_route53_record" "onprem_api_record" {
count = length(local.vm_fqdn)
zone_id = data.aws_route53_zone.dns_zone.zone_id
name = local.vm_fqdn[count.index]
type = "A"
records = [var.api_ips[terraform.workspace][count.index]]
ttl = "300"
}
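Note that with count the instances are tracked by list index, so reordering or inserting entries in local.vm_fqdn (or the IP lists) would cause existing records to be destroyed and recreated; the for_each/zipmap approach above keys each record by its FQDN and avoids that.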

Related

Retrieve the object output to use for other resources

I would like to use the output (an object) as an attribute of other resources.
I have a module like below:
locals {
  lb_domain = {
    lb_public = {
      domain = "dev.example.net"
    },
    lb_internal = {
      domain = "dev.internal.example.net"
    }
  }
}

module "dev_acm" {
  source  = "terraform-aws-modules/acm/aws"
  version = "3.5.0"

  for_each = local.lb_domain

  domain_name = each.value.domain
  zone_id     = data.aws_route53_zone.this.id
}
And this is the output of the module, the object I would like to use for other resources, module.dev_acm:
+ dev_acm = {
+ lb_internal = {
+ acm_certificate_arn = (known after apply)
+ acm_certificate_domain_validation_options = [
+ {
+ domain_name = "dev.internal.example.net"
+ resource_record_name = (known after apply)
+ resource_record_type = (known after apply)
+ resource_record_value = (known after apply)
},
]
+ acm_certificate_status = (known after apply)
+ acm_certificate_validation_emails = (known after apply)
+ distinct_domain_names = [
+ "dev.internal.example.net",
]
+ validation_domains = (known after apply)
+ validation_route53_record_fqdns = [
+ (known after apply),
]
}
+ lb_public = {
+ acm_certificate_arn = (known after apply)
+ acm_certificate_domain_validation_options = [
+ {
+ domain_name = "dev.example.net"
+ resource_record_name = (known after apply)
+ resource_record_type = (known after apply)
+ resource_record_value = (known after apply)
},
]
+ acm_certificate_status = (known after apply)
+ acm_certificate_validation_emails = (known after apply)
+ distinct_domain_names = [
+ "dev.example.net",
]
+ validation_domains = (known after apply)
+ validation_route53_record_fqdns = [
+ (known after apply),
]
}
}
How can I use the acm_certificate_arn of lb_public from this output in other services?
Something like: module.dev_acm.lb_internal.acm_certificate_arn
That's an unusual way of looking at a Terraform module output. I suggest looking at the documentation for the module you are using instead of reading the plan output that way; what you are looking at also doesn't indicate that these instances are created via for_each.
How can I use acm_certificate_arn in the output for other services?
Something like: module.dev_acm.lb_internal.acm_certificate_arn
Taking the documentation I linked, and the documentation for referencing for_each instances, it would be:
module.dev_acm["lb_public"].acm_certificate_arn
or
module.dev_acm["lb_internal"].acm_certificate_arn

InvalidParameterValue: Invalid security group: can't attach the EKS SG rule to RDS

I am trying to provision an RDS instance with private subnets using a Terraform template; my template is below. It has to meet the following restrictions while creating RDS:
Not publicly accessible, and the security group opened only for the EKS cluster, not the public.
cat modules/rds/rds.tf
resource "aws_db_instance" "rds_instance" {
allocated_storage = 50
identifier = "rds-vaya"
storage_type = "gp2"
engine = "mysql"
engine_version = "8.0.23"
instance_class = "db.t2.micro"
db_name = "vaya"
username = "admin"
password = aws_secretsmanager_secret_version.password.secret_string
publicly_accessible = false
multi_az = true
db_subnet_group_name = aws_db_subnet_group.rdssubnet.id
vpc_security_group_ids = var.eks-sg
tags = {
Name = "OpsyRDSServerInstance"
}
}
cat modules/rds/security.tf
# make rds subnet group
resource "aws_db_subnet_group" "rdssubnet" {
  name       = "database-subnet"
  subnet_ids = var.private_subnet_ids
}
cat modules/eks/security.tf
resource "aws_security_group" "main" {
name = "eks-sg-${var.env}"
vpc_id = var.vpc_id
}
resource "aws_security_group_rule" "ingress_rules" {
count = length(var.ingress_rule)
type = "ingress"
from_port = var.ingress_rule[count.index][0]
to_port = var.ingress_rule[count.index][1]
protocol = var.ingress_rule[count.index][2]
security_group_id = aws_security_group.main.id
source_security_group_id = aws_security_group.main.id
}
cat modules/eks/output.tf
output "eks-sg" {
value = aws_security_group_rule.ingress_rules.*.id
}
cat main.tf
module "eks_cluster" {
source = "./modules/eks"
eks_cluster_name = var.eks_cluster_name
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.subnet_ids
eks_cluster_role_arn = module.iam.eks_cluster_role_arn
eks_cluster_create_depends_on = module.iam.id
instance_count = var.instance_count
instance_type = var.instance_type
ingress_rule = var.ingress_rule
env = var.env
}
module "rds" {
source = "./modules/rds"
vpc_id = module.vpc.vpc_id
private_subnet_ids = module.vpc.private_subnet_ids
eks-sg = module.eks_cluster.eks-sg
env = var.env
}
Below is the terraform plan/apply:
# module.rds.aws_db_instance.rds_instance will be created
+ resource "aws_db_instance" "rds_instance" {
+ address = (known after apply)
+ allocated_storage = 50
+ apply_immediately = (known after apply)
+ arn = (known after apply)
+ auto_minor_version_upgrade = true
+ availability_zone = (known after apply)
+ backup_retention_period = (known after apply)
+ backup_window = (known after apply)
+ ca_cert_identifier = (known after apply)
+ character_set_name = (known after apply)
+ copy_tags_to_snapshot = false
+ db_name = "vaya"
+ db_subnet_group_name = "database-subnet"
+ delete_automated_backups = true
+ endpoint = (known after apply)
+ engine = "mysql"
+ engine_version = "8.0.23"
+ engine_version_actual = (known after apply)
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ identifier = "rds-vaya"
+ identifier_prefix = (known after apply)
+ instance_class = "db.t2.micro"
+ kms_key_id = (known after apply)
+ latest_restorable_time = (known after apply)
+ license_model = (known after apply)
+ maintenance_window = (known after apply)
+ monitoring_interval = 0
+ monitoring_role_arn = (known after apply)
+ multi_az = true
+ name = (known after apply)
+ nchar_character_set_name = (known after apply)
+ network_type = (known after apply)
+ option_group_name = (known after apply)
+ parameter_group_name = (known after apply)
+ password = (sensitive value)
+ performance_insights_enabled = false
+ performance_insights_kms_key_id = (known after apply)
+ performance_insights_retention_period = (known after apply)
+ port = (known after apply)
+ publicly_accessible = false
+ replica_mode = (known after apply)
+ replicas = (known after apply)
+ resource_id = (known after apply)
+ skip_final_snapshot = false
+ snapshot_identifier = (known after apply)
+ status = (known after apply)
+ storage_type = "gp2"
+ tags = {
+ "Name" = "OpsyRDSServerInstance"
}
+ tags_all = {
+ "Name" = "OpsyRDSServerInstance"
}
+ timezone = (known after apply)
+ username = "admin"
+ vpc_security_group_ids = [
+ "sgrule-2349526507",
+ "sgrule-2500829248",
+ "sgrule-2855048482",
+ "sgrule-4188522375",
]
}
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ eks-sg = [
+ "sgrule-2500829248",
+ "sgrule-2349526507",
+ "sgrule-2855048482",
+ "sgrule-4188522375",
]
Do you want to perform these actions in workspace "dev"?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
module.rds.aws_db_instance.rds_instance: Creating...
╷
│ Error: creating RDS DB Instance (rds-vaya): InvalidParameterValue: Invalid security group , groupId= sgrule-2349526507, sgrule-2500829248, sgrule-2855048482, sgrule-4188522375, groupName=.
│ status code: 400, request id: fb505df1-9202-4986-9d54-7e4af4fcbc91
│
│ with module.rds.aws_db_instance.rds_instance,
│ on modules/rds/rds.tf line 2, in resource "aws_db_instance" "rds_instance":
│ 2: resource "aws_db_instance" "rds_instance" {
│
I've no idea what the problem is or how to fix this issue.
You are outputting the SG rule IDs while you want SG IDs. You need to use the attributes of the SG itself:
resource "aws_security_group" "main" {
name = "eks-sg-${var.env}"
vpc_id = var.vpc_id
}
And the output should be (modules/eks/output.tf):
output "eks-sg" {
value = aws_security_group.main.id
}
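One follow-up: the RDS module passes var.eks-sg straight into vpc_security_group_ids, which expects a list. If the module output becomes a single security group ID as above, wrap it in the RDS module (a sketch, assuming the variable keeps its current name and is changed to type string):
# modules/rds/rds.tf (excerpt)
vpc_security_group_ids = [var.eks-sg]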

Terraform kubernetes service account and role binding modules not working

I am trying to create a Kubernetes service account in a namespace I created, which will have a secret and a cluster role binding. However, even though the terraform plan and apply stages show that it is being created, it isn't. Please see the module code and output below:
resource "kubernetes_service_account" "serviceaccount" {
metadata {
name = var.name
namespace = "kube-system"
}
}
resource "kubernetes_cluster_role_binding" "serviceaccount" {
metadata {
name = var.name
}
subject {
kind = "User"
name = "system:serviceaccount:kube-system:${var.name}"
}
role_ref {
kind = "ClusterRole"
name = "cluster-admin"
api_group = "rbac.authorization.k8s.io"
}
}
data "kubernetes_service_account" "serviceaccount" {
metadata {
name = var.name
namespace = "kube-system"
}
depends_on = [
resource.kubernetes_service_account.serviceaccount
]
}
data "kubernetes_secret" "serviceaccount" {
metadata {
name = data.kubernetes_service_account.serviceaccount.default_secret_name
namespace = "kube-system"
}
binary_data = {
"token": ""
}
depends_on = [
resource.kubernetes_service_account.serviceaccount
]
}
And the output from the terraform run in DevOps:
# module.dd_service_account.data.kubernetes_secret.serviceaccount will be read during apply
# (config refers to values not yet known)
<= data "kubernetes_secret" "serviceaccount" {
+ binary_data = (sensitive value)
+ data = (sensitive value)
+ id = (known after apply)
+ immutable = (known after apply)
+ type = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = (known after apply)
+ namespace = "kube-system"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
# module.dd_service_account.data.kubernetes_service_account.serviceaccount will be read during apply
# (depends on a resource or a module with changes pending)
<= data "kubernetes_service_account" "serviceaccount" {
+ automount_service_account_token = (known after apply)
+ default_secret_name = (known after apply)
+ id = (known after apply)
+ image_pull_secret = (known after apply)
+ secret = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "deployer-new"
+ namespace = "kube-system"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
# module.dd_service_account.kubernetes_cluster_role_binding.serviceaccount will be created
+ resource "kubernetes_cluster_role_binding" "serviceaccount" {
+ id = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "deployer-new"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
+ role_ref {
+ api_group = "rbac.authorization.k8s.io"
+ kind = "ClusterRole"
+ name = "cluster-admin"
}
+ subject {
+ api_group = (known after apply)
+ kind = "User"
+ name = "system:serviceaccount:kube-system:deployer-new"
+ namespace = "default"
}
}
# module.dd_service_account.kubernetes_service_account.serviceaccount will be created
+ resource "kubernetes_service_account" "serviceaccount" {
+ automount_service_account_token = true
+ default_secret_name = (known after apply)
+ id = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "deployer-new"
+ namespace = "kube-system"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
}
When I run kubectl on the cluster, the namespace I created is there, but no service accounts are.
Any ideas?
Thanks.

Create CALCULATED R53 health check based on previously created child health checks with for_each

Hello stackoverflow community.
I have 5 FQDNs (myurl{1..5}.mydomain.com) for which I need to create 3 Route53 health checks per FQDN (so 15 in total). Two IPs are behind each FQDN, e.g. myurl1.mydomain.com has the IPs 123.123.123.123 and 124.124.124.124. End goal:
2 health checks with each IP for the specific FQDN
1 CALCULATED health check which is monitoring the above two
The first point is accomplished by:
data "dns_a_record_set" "mywiz" {
for_each = toset(var.urls)
host = "${each.value}.mydomain.com"
}
resource "aws_route53_health_check" "hc-1" {
for_each = data.dns_a_record_set.sort(mywiz)
fqdn = each.value["host"]
ip_address = each.value["addrs"][0]
port = "443"
type = "HTTPS"
failure_threshold = "3"
request_interval = "30"
tags = {
"Name" = "r53-hc-gfp-${each.key}-1"
}
lifecycle {
ignore_changes = [tags]
}
}
resource "aws_route53_health_check" "hc-2" {
#count = length(var.urls)
for_each = data.dns_a_record_set.mywiz
fqdn = each.value["host"]
ip_address = each.value["addrs"][1]
port = "443"
type = "HTTPS"
failure_threshold = "3"
request_interval = "30"
tags = {
"Name" = "r53-hc-gfp-${each.key}-2"
}
lifecycle {
ignore_changes = [tags]
}
}
Output is:
# aws_route53_health_check.hc-1["myurl1"] will be created
+ resource "aws_route53_health_check" "hc-1" {
+ arn = (known after apply)
+ disabled = false
+ enable_sni = (known after apply)
+ failure_threshold = 3
+ fqdn = "myurl1.mydomain.com"
+ id = (known after apply)
+ ip_address = "123.123.123.123"
+ measure_latency = false
+ port = 443
+ request_interval = 30
+ tags = {
+ "Name" = "r53-hc-gfp-myurl1-1"
}
+ tags_all = {
+ "CreatedBy" = "foobar"
+ "CreatedDate" = "2022-03-10T07:48:05Z"
+ "LaunchSource" = "Terraform"
+ "Name" = "r53-hc-gfp-myurl1-1"
+ "Notes" = "Created for GFP"
}
+ type = "HTTPS"
}
# aws_route53_health_check.hc-2["myurl1"] will be created
+ resource "aws_route53_health_check" "hc-2" {
+ arn = (known after apply)
+ disabled = false
+ enable_sni = (known after apply)
+ failure_threshold = 3
+ fqdn = "myurl1.mydomain.com"
+ id = (known after apply)
+ ip_address = "124.124.124.124"
+ measure_latency = false
+ port = 443
+ request_interval = 30
+ tags = {
+ "Name" = "r53-hc-gfp-myurl1-2"
}
+ tags_all = {
+ "CreatedBy" = "foobar"
+ "CreatedDate" = "2022-03-10T07:48:05Z"
+ "LaunchSource" = "Terraform"
+ "Name" = "r53-hc-gfp-myurl1-2"
+ "Notes" = "Created for GFP"
}
+ type = "HTTPS"
}
However, I'm struggling with the CALCULATED Route 53 health check: how to structure the CALCULATED aws_route53_health_check resource, and how to pass the correct health check IDs (the ones belonging to the respective FQDN) as child_healthchecks. I've tried:
resource "aws_route53_health_check" "hc-status" {
for_each = aws_route53_health_check.hc-1
type = "CALCULATED"
failure_threshold = "1"
child_healthchecks = [aws_route53_health_check.hc-1.id[each.key]
child_health_threshold = "1"
tags = {
"Name" = "r53-hc-gfpstatus-${each.key}"
}
lifecycle {
ignore_changes = [tags]
}
}
and this resulted in:
|Error: Missing resource instance key
│
│ on main.tf line 58, in resource "aws_route53_health_check" "hc-status":
│ 58: child_healthchecks = [aws_route53_health_check.hc-1.id[each.key]]
│
│ Because aws_route53_health_check.hc-1 has "for_each" set, its attributes must be accessed
│ on specific instances.
│
│ For example, to correlate with indices of a referring resource, use:
│ aws_route53_health_check.hc-1[each.key]
It should be:
child_healthchecks = [aws_route53_health_check.hc-1[each.key].id]
not
child_healthchecks = [aws_route53_health_check.hc-1.id[each.key]]
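With that fix in place, a CALCULATED check that monitors both children for the same FQDN could look like this (a sketch based on your hc-1/hc-2 resources):
resource "aws_route53_health_check" "hc-status" {
  for_each = aws_route53_health_check.hc-1

  type                   = "CALCULATED"
  child_health_threshold = 1
  # reference both child health checks that belong to the same FQDN key
  child_healthchecks = [
    aws_route53_health_check.hc-1[each.key].id,
    aws_route53_health_check.hc-2[each.key].id,
  ]

  tags = {
    "Name" = "r53-hc-gfpstatus-${each.key}"
  }

  lifecycle {
    ignore_changes = [tags]
  }
}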

Terraform creating role with missing AccessKeyId

Terraform creates the role and attaches it to the EC2 instance successfully.
However, when I try to run commands with the AWS CLI, it fails with a missing AccessKeyId:
aws ec2 describe-instances --debug
2022-01-12 18:44:25,755 - MainThread - botocore.utils - DEBUG - Retrieved credentials is missing required field: AccessKeyId
2022-01-12 18:44:25,755 - MainThread - botocore.utils - DEBUG - Error response received when retrieving credentials: {'Code': 'AssumeRoleUnauthorizedAccess', 'Message': 'EC2 cannot assume the role tf_eks_role_bastion. Please see documentation at https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_iam-ec2.html#troubleshoot_iam-ec2_errors-info-doc.', 'LastUpdated': '2022-01-12T18:42:15Z'}.
My main.tf creates a role, attaches two policies to it, creates an instance-profile for the role and attaches the instance-profile to the newly created ec2-instance.
I got it working while changing main.tf and constantly re-applying the changes. But after running terraform destroy and then terraform apply again, it stopped working.
Also, when I create a role in AWS Console manually and attach it to the same ec2-instance, it starts working.
Does anyone understand this missing AccessKeyId error?
My main.tf:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "eu-central-1"
}

resource "aws_security_group" "http_sg" {
  name = "tf_bastion_host allow ht_p inbound from anywhere"
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "ssh_sg" {
  name = "tf_bastion_host allow ssh from anywhere"
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
# create role for bastion host
resource "aws_iam_role" "eks_role" {
  name = "tf_eks_role_bastion"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    "Statement" : [
      {
        "Effect" : "Allow",
        "Principal" : {
          "Service" : "eks.amazonaws.com"
        },
        "Action" : "sts:AssumeRole"
      },
    ]
  })
  tags = {
    tag-key = "tf_bastion_host"
  }
}

# attach policy to role
resource "aws_iam_policy_attachment" "eks_attachment_cluster_policy" {
  name       = "tf_eks_attachment_cluster_policy"
  roles      = ["${aws_iam_role.eks_role.name}"]
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

# attach policy to role
resource "aws_iam_policy_attachment" "eks_attachment_service_policy" {
  name       = "tf_eks_attachment_service_policy"
  roles      = ["${aws_iam_role.eks_role.name}"]
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
}

# create an instance profile
resource "aws_iam_instance_profile" "bastion_host_profile" {
  name = "tf_bastion_host_profile"
  role = aws_iam_role.eks_role.name
}
resource "aws_instance" "bastion_host" {
ami = "ami-05d34d340fb1d89e5"
instance_type = "t2.micro"
count = 1
associate_public_ip_address = true
# use the jenkins key-pair for now
key_name = "jenkins"
# attach the instance profile to the EC2 instance
iam_instance_profile = aws_iam_instance_profile.bastion_host_profile.name
vpc_security_group_ids = [
aws_security_group.http_sg.id,
aws_security_group.ssh_sg.id
]
user_data = file("installs.sh")
tags = {
Name = "tf_bastion_host",
Environment = "production"
}
}
Output of terraform apply -auto-approve:
Terraform used the selected providers to generate the
following execution plan. Resource actions are
indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_iam_instance_profile.bastion_host_profile will be created
+ resource "aws_iam_instance_profile" "bastion_host_profile" {
+ arn = (known after apply)
+ create_date = (known after apply)
+ id = (known after apply)
+ name = "tf_bastion_host_profile"
+ path = "/"
+ role = "tf_eks_role_bastion"
+ tags_all = (known after apply)
+ unique_id = (known after apply)
}
# aws_iam_policy_attachment.eks_attachment_cluster_policy will be created
+ resource "aws_iam_policy_attachment" "eks_attachment_cluster_policy" {
+ id = (known after apply)
+ name = "tf_eks_attachment_cluster_policy"
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
+ roles = [
+ "tf_eks_role_bastion",
]
}
# aws_iam_policy_attachment.eks_attachment_service_policy will be created
+ resource "aws_iam_policy_attachment" "eks_attachment_service_policy" {
+ id = (known after apply)
+ name = "tf_eks_attachment_service_policy"
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
+ roles = [
+ "tf_eks_role_bastion",
]
}
# aws_iam_role.eks_role will be created
+ resource "aws_iam_role" "eks_role" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = "eks.amazonaws.com"
}
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ managed_policy_arns = (known after apply)
+ max_session_duration = 3600
+ name = "tf_eks_role_bastion"
+ name_prefix = (known after apply)
+ path = "/"
+ tags = {
+ "tag-key" = "tf_bastion_host"
}
+ tags_all = {
+ "tag-key" = "tf_bastion_host"
}
+ unique_id = (known after apply)
+ inline_policy {
+ name = (known after apply)
+ policy = (known after apply)
}
}
# aws_instance.bastion_host[0] will be created
+ resource "aws_instance" "bastion_host" {
+ ami = "ami-05d34d340fb1d89e5"
+ arn = (known after apply)
+ associate_public_ip_address = true
+ availability_zone = (known after apply)
+ cpu_core_count = (known after apply)
+ cpu_threads_per_core = (known after apply)
+ disable_api_termination = (known after apply)
+ ebs_optimized = (known after apply)
+ get_password_data = false
+ host_id = (known after apply)
+ iam_instance_profile = "tf_bastion_host_profile"
+ id = (known after apply)
+ instance_initiated_shutdown_behavior = (known after apply)
+ instance_state = (known after apply)
+ instance_type = "t2.micro"
+ ipv6_address_count = (known after apply)
+ ipv6_addresses = (known after apply)
+ key_name = "jenkins"
+ monitoring = (known after apply)
+ outpost_arn = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
+ placement_partition_number = (known after apply)
+ primary_network_interface_id = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ secondary_private_ips = (known after apply)
+ security_groups = (known after apply)
+ source_dest_check = true
+ subnet_id = (known after apply)
+ tags = {
+ "Environment" = "production"
+ "Name" = "tf_bastion_host"
}
+ tags_all = {
+ "Environment" = "production"
+ "Name" = "tf_bastion_host"
}
+ tenancy = (known after apply)
+ user_data = "f27e2f754e7658f0f0cdd09facb579d44b20ea5f"
+ user_data_base64 = (known after apply)
+ vpc_security_group_ids = (known after apply)
+ capacity_reservation_specification {
+ capacity_reservation_preference = (known after apply)
+ capacity_reservation_target {
+ capacity_reservation_id = (known after apply)
}
}
+ ebs_block_device {
+ delete_on_termination = (known after apply)
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ snapshot_id = (known after apply)
+ tags = (known after apply)
+ throughput = (known after apply)
+ volume_id = (known after apply)
+ volume_size = (known after apply)
+ volume_type = (known after apply)
}
+ enclave_options {
+ enabled = (known after apply)
}
+ ephemeral_block_device {
+ device_name = (known after apply)
+ no_device = (known after apply)
+ virtual_name = (known after apply)
}
+ metadata_options {
+ http_endpoint = (known after apply)
+ http_put_response_hop_limit = (known after apply)
+ http_tokens = (known after apply)
}
+ network_interface {
+ delete_on_termination = (known after apply)
+ device_index = (known after apply)
+ network_interface_id = (known after apply)
}
+ root_block_device {
+ delete_on_termination = (known after apply)
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ tags = (known after apply)
+ throughput = (known after apply)
+ volume_id = (known after apply)
+ volume_size = (known after apply)
+ volume_type = (known after apply)
}
}
# aws_security_group.http_sg will be created
+ resource "aws_security_group" "http_sg" {
+ arn = (known after apply)
+ description = "Managed by Terraform"
+ egress = [
+ {
+ cidr_blocks = [
+ "0.0.0.0/0",
]
+ description = ""
+ from_port = 0
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "-1"
+ security_groups = []
+ self = false
+ to_port = 0
},
]
+ id = (known after apply)
+ ingress = [
+ {
+ cidr_blocks = [
+ "0.0.0.0/0",
]
+ description = ""
+ from_port = 80
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = []
+ self = false
+ to_port = 80
},
]
+ name = "tf_bastion_host allow ht_p inbound from anywhere"
+ name_prefix = (known after apply)
+ owner_id = (known after apply)
+ revoke_rules_on_delete = false
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}
# aws_security_group.ssh_sg will be created
+ resource "aws_security_group" "ssh_sg" {
+ arn = (known after apply)
+ description = "Managed by Terraform"
+ egress = [
+ {
+ cidr_blocks = [
+ "0.0.0.0/0",
]
+ description = ""
+ from_port = 0
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "-1"
+ security_groups = []
+ self = false
+ to_port = 0
},
]
+ id = (known after apply)
+ ingress = [
+ {
+ cidr_blocks = [
+ "0.0.0.0/0",
]
+ description = ""
+ from_port = 22
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = []
+ self = false
+ to_port = 22
},
]
+ name = "tf_bastion_host allow ssh from anywhere"
+ name_prefix = (known after apply)
+ owner_id = (known after apply)
+ revoke_rules_on_delete = false
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}
Plan: 7 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ private_instance_ip = (known after apply)
+ public_instance_dns = (known after apply)
aws_iam_role.eks_role: Creating...
aws_security_group.http_sg: Creating...
aws_security_group.ssh_sg: Creating...
aws_security_group.http_sg: Creation complete after 3s [id=sg-0ae7e9865c60ce9c9]
aws_security_group.ssh_sg: Creation complete after 3s [id=sg-016f588fb10a7dbad]
aws_iam_role.eks_role: Creation complete after 3s [id=tf_eks_role_bastion]
aws_iam_policy_attachment.eks_attachment_cluster_policy: Creating...
aws_iam_policy_attachment.eks_attachment_service_policy: Creating...
aws_iam_instance_profile.bastion_host_profile: Creating...
aws_iam_policy_attachment.eks_attachment_service_policy: Creation complete after 2s [id=tf_eks_attachment_service_policy]
aws_iam_instance_profile.bastion_host_profile: Creation complete after 2s [id=tf_bastion_host_profile]
aws_instance.bastion_host[0]: Creating...
aws_iam_policy_attachment.eks_attachment_cluster_policy: Creation complete after 2s [id=tf_eks_attachment_cluster_policy]
aws_instance.bastion_host[0]: Still creating... [10s elapsed]
aws_instance.bastion_host[0]: Still creating... [20s elapsed]
aws_instance.bastion_host[0]: Still creating... [30s elapsed]
aws_instance.bastion_host[0]: Creation complete after 39s [id=i-07926ae9044680939]
Apply complete! Resources: 7 added, 0 changed, 0 destroyed.
In the assume_role_policy of your IAM role
"Service" : "eks.amazonaws.com"
should be changed to
"Service" : "ec2.amazonaws.com"
If your role is going to be used by an EC2 instance, the allowed principal needs to be ec2.amazonaws.com. You might also want to review the managed policies you are attaching to the role; they are more suitable for an EKS cluster than for a bastion host.
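Applied to the role in your main.tf, the corrected trust policy looks like this (only the Principal changes):
resource "aws_iam_role" "eks_role" {
  name = "tf_eks_role_bastion"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        # EC2 must be the trusted service for a role used via an instance profile
        Principal = {
          Service = "ec2.amazonaws.com"
        }
        Action = "sts:AssumeRole"
      },
    ]
  })

  tags = {
    tag-key = "tf_bastion_host"
  }
}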