EKS Node Group creation error via Terraform

I am trying to provision EKS with a node group via Terraform:
resource "aws_eks_node_group" "eks-node-group" {
cluster_name = aws_eks_cluster.eks-cluster.name
instance_types = var.instance_types
node_group_name = "tf-name"
node_role_arn = aws_iam_role.eks-node-group.arn
subnet_ids = var.subnet_ids
scaling_config {
desired_size = 1
max_size = 1
min_size = 1
}
update_config {
max_unavailable = 1
}
depends_on = [
aws_iam_role_policy_attachment.eks-node-group-worker-node-policy,
aws_iam_role_policy_attachment.eks-node-group-cni-policy,
aws_iam_role_policy_attachment.eks-node-group-registry-read-only-policy
]
}
I am trying to provision it using a private subnet.
However, I am getting the following error:
One or more Amazon EC2 Subnets of [] for node group does not
automatically assign public IP addresses to instances launched into
it. If you want your instances to be assigned a public IP address,
then you need to enable auto-assign public IP address for the subnet.
See IP addressing in VPC guide:
What do I need to do?

You can find your answer in the AWS documentation: Managed node groups
"Amazon EKS managed node groups can be launched in both public and private subnets. If you launch a managed node group in a public subnet on or after April 22, 2020, the subnet must have MapPublicIpOnLaunch set to true for the instances to successfully join a cluster. If the public subnet was created using eksctl or the Amazon EKS vended AWS CloudFormation templates on or after March 26, 2020, then this setting is already set to true. If the public subnets were created before March 26, 2020, you must change the setting manually. For more information, see Modifying the public IPv4 addressing attribute for your subnet."

Related

How to create attachments in transit gateway module terraform

I have created a transit gateway using the terraform tgw module as shown below.
module "transit-gateway" {
source = "terraform-aws-modules/transit-gateway/aws"
version = "1.4.0"
name = "tgw-nprod"
description = "My TGW shared with several other AWS accounts"
amazon_side_asn = 64532
enable_auto_accept_shared_attachments = true
vpc_attachments = {
vpc1 = {
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
dns_support = true
ipv6_support = false
transit_gateway_default_route_table_association = false
transit_gateway_default_route_table_propagation = false
}
}
ram_allow_external_principals = true
ram_principals = [1234567890, 0987654321]
tags = {
Purpose = "tgw-testing"
}
}
I have created the VPC using the Terraform VPC module.
When I run the above Terraform, I am getting the error "Error: error creating EC2 Transit Gateway VPC Attachment: DuplicateSubnetsInSameZone: Duplicate Subnets for same AZ".
I have 2 private subnets in ap-south-1 and 1 public subnet in ap-south-1.
The AWS docs state that the gateway can be attached to only one subnet per AZ:
You must select at least one subnet. You can select only one subnet per Availability Zone.
Your error message suggests that your module.vpc.private_subnets are in the same AZ. You have to redefine your VPC so that module.vpc.private_subnets are in two different AZs (see the module sketch below), or just use one subnet in your subnet_ids.
To use one subnet:
subnet_ids = [module.vpc.private_subnets[0]]
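To spread the private subnets across AZs, the azs and private_subnets lists of the VPC module can be aligned one-to-one. A minimal sketch, assuming the terraform-aws-modules/vpc/aws module with placeholder name and CIDRs:
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "tgw-vpc"       # placeholder
  cidr = "10.0.0.0/16"   # placeholder

  # one private subnet per AZ, so the TGW attachment never sees two subnets in the same zone
  azs             = ["ap-south-1a", "ap-south-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24"]
}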

Share RDS instance with another VPC, but no other resources?

I have created two VPCs using Terraform:
resource "aws_vpc" "alpha" {
cidr_block = "10.16.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags = {
Name = "Alpha"
}
}
resource "aws_subnet" "alpha_private_a" {
vpc_id = aws_vpc.alpha.id
cidr_block = "10.16.192.0/24"
availability_zone = "${var.aws_region}a"
tags = {
Name = "Alpha Private A"
}
}
resource "aws_subnet" "alpha_private_b" {
vpc_id = aws_vpc.alpha.id
cidr_block = "10.16.224.0/24"
availability_zone = "${var.aws_region}b"
tags = {
Name = "Alpha Private B"
}
}
resource "aws_route_table" "alpha_private" {
vpc_id = aws_vpc.alpha.id
tags = {
Name = "Alpha Private"
}
}
resource "aws_route_table_association" "alpha_private_a" {
route_table_id = aws_route_table.alpha_private.id
subnet_id = aws_subnet.alpha_private_a.id
}
resource "aws_route_table_association" "alpha_private_b" {
route_table_id = aws_route_table.alpha_private.id
subnet_id = aws_subnet.alpha_private_b.id
}
# The same again for VPC "Bravo"
I also have an RDS in VPC "Alpha":
resource "aws_db_subnet_group" "alpha_rds" {
subnet_ids = [ aws_subnet.alpha_private_a.id, aws_subnet.alpha_private_b.id ]
tags = {
Name = "Alpha RDS"
}
}
resource "aws_db_instance" "alpha" {
identifier = "alpha"
allocated_storage = 20
max_allocated_storage = 1000
storage_type = "gp2"
engine = "postgres"
engine_version = "11.8"
publicly_accessible = false
db_subnet_group_name = aws_db_subnet_group.alpha_rds.name
performance_insights_enabled = true
vpc_security_group_ids = [ aws_security_group.alpha_rds.id ]
lifecycle {
prevent_destroy = true
}
}
Then I have an Elastic Beanstalk instance inside VPC "Bravo".
What I want to achieve:
alpha_rds is accessible to my Elastic Beanstalk instance inside Bravo VPC
Nothing else inside Alpha VPC is accessible to Bravo VPC
Nothing else inside Bravo VPC is accessible to Alpha VPC
I think VPC Peering is required for this?
How can I implement this in Terraform?
Related but not Terraform:
Access Private RDS DB From Another VPC
AWS Fargate connection to RDS in a different VPC
You should be able to set it up like this:
Create a VPC Peering Connection between Alpha and Bravo
In the route table for Alpha, add a route for the CIDR range of Bravo and set the target to the peering connection (pcx-XXXXXX) to Bravo
In the route table for Bravo, add a route for the IP address(es) of the database and point it at the peering connection to Alpha
This setup guarantees that resources in Bravo can only communicate with the database in Alpha; every other packet to that VPC can't be routed.
The inverse is a little tougher: right now this setup should stop TCP connections from Alpha to Bravo being established, because there is no return path except for the database. UDP traffic could still go through, although its responses will be dropped unless the traffic comes from the database.
At this point you could set up network ACLs on the subnets in Bravo to deny traffic from Alpha except for the database IPs. That depends on your level of paranoia or your requirements in terms of isolation; personally I wouldn't do it, but it's Friday afternoon and I'm in a lazy mood ;-).
Update
As Mark B correctly pointed out in the comments, there is a risk that the private IP addresses of your RDS cluster may change on failover if the underlying host can't be recovered.
To address these concerns, you could create separate subnets in Alpha for your database node(s) and substitute the database IPs in my description above with the CIDRs of these subnets. That allows for slightly more flexibility and gets around the NACL problem as well, because you can just edit the route table of the new database subnet(s) and only add the peering connection there.
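Since the question asks for Terraform, here is a minimal sketch of the peering setup described above. It assumes the Alpha resources from the question plus analogous (hypothetical) aws_vpc.bravo and aws_route_table.bravo_private resources, with both VPCs in the same account and region:
# Peering connection between the two VPCs
resource "aws_vpc_peering_connection" "alpha_bravo" {
  vpc_id      = aws_vpc.alpha.id
  peer_vpc_id = aws_vpc.bravo.id
  auto_accept = true

  tags = {
    Name = "alpha-bravo"
  }
}

# Alpha side: return route for the whole Bravo CIDR
resource "aws_route" "alpha_to_bravo" {
  route_table_id            = aws_route_table.alpha_private.id
  destination_cidr_block    = aws_vpc.bravo.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.alpha_bravo.id
}

# Bravo side: route only the database subnets across the peering connection
resource "aws_route" "bravo_to_alpha_db" {
  for_each = toset(["10.16.192.0/24", "10.16.224.0/24"]) # the RDS subnet CIDRs from the question

  route_table_id            = aws_route_table.bravo_private.id
  destination_cidr_block    = each.value
  vpc_peering_connection_id = aws_vpc_peering_connection.alpha_bravo.id
}
The security group on the RDS instance should still restrict ingress on the Postgres port to the Beanstalk instances (for example, to the Bravo CIDR).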

Terraform get IPs of vpc endpoint subnets

I am trying to set up AWS SFTP Transfer in VPC endpoint mode, but there is one thing I can't manage.
The problem I have is how to get target IPs for NLB target group.
The only output I found:
output "vpc_endpoint_transferserver_network_interface_ids" {
description = "One or more network interfaces for the VPC Endpoint for transferserver"
value = flatten(aws_vpc_endpoint.transfer_server.*.network_interface_ids)
}
gives network interface ids which cannot be used as targets:
Outputs:
api_url = https://12345.execute-api.eu-west-1.amazonaws.com/prod
vpc_endpoint_transferserver_network_interface_ids = [
"eni-12345",
"eni-67890",
"eni-abcde",
]
I went through:
terraform get subnet integration ips from vpc endpoint subnets tab
and
Terraform how to get IP address of aws_lb
but neither of them seems to work. The latter fails with:
on modules/sftp/main.tf line 134, in data "aws_network_interface" "ifs":
134: count = "${length(local.nlb_interface_ids)}"
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
You can create Elastic IPs:
resource "aws_eip" "example1" {
  vpc = true
}

resource "aws_eip" "example2" {
  vpc = true
}
Then specify the Elastic IPs when creating the Network Load Balancer:
resource "aws_lb" "example" {
  name               = "example"
  load_balancer_type = "network"

  subnet_mapping {
    subnet_id     = "${aws_subnet.example1.id}"
    allocation_id = "${aws_eip.example1.id}"
  }

  subnet_mapping {
    subnet_id     = "${aws_subnet.example2.id}"
    allocation_id = "${aws_eip.example2.id}"
  }
}
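Alternatively, the ENI IDs from the output above can be resolved to private IPs with the aws_network_interface data source. A sketch, assuming the aws_vpc_endpoint.transfer_server resource from the question; note that, just like the count error quoted above, the for_each values must be known at plan time, so a targeted or two-step apply may be needed when the endpoint is created in the same run:
# Look up each endpoint ENI to get its private IP
data "aws_network_interface" "sftp_endpoint" {
  for_each = toset(flatten(aws_vpc_endpoint.transfer_server[*].network_interface_ids))
  id       = each.value
}

output "vpc_endpoint_transferserver_private_ips" {
  description = "Private IPs of the Transfer Server VPC endpoint ENIs"
  value       = [for eni in data.aws_network_interface.sftp_endpoint : eni.private_ip]
}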

AWS Terraform target group for vpc endpoints

How can I create a target group for a network load balancer containing a VPC endpoint in Terraform?
In the AWS console, I would have done the following steps:
Create a VPC endpoint in two subnets to an endpoint service in another VPC
Create a target group of type IP and register the IP addresses of
the endpoints created in step 1
In Terraform, I can create target groups and endpoints, but I don't know how to assign the endpoints' IPs to the target group. Where can I find instructions or an example of how to do this? (Creating target groups of type instance is no problem; my question is specific to type IP.)
Late to the party! But this is what I did.
I created a null resource that gets the IP addresses of the VPC endpoint via dig and stores them in a file:
resource "null_resource" "nlb" {
triggers = {
always_run = "${timestamp()}"
}
provisioner "local-exec" {
command = "dig +short ${lookup(tomap(element(aws_vpc_endpoint.api-gw.dns_entry, 0)), "dns_name", "")} > /tmp/entry"
}
}
and then read the file entries
resource "aws_lb_target_group_attachment" "nlb" {
  depends_on = [
    null_resource.nlb
  ]

  for_each         = toset(slice(split("\n", file("/tmp/entry")), 0, 2))
  target_group_arn = aws_lb_target_group.nlb.arn
  target_id        = each.value
  port             = 443
}
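A provisioner-free alternative is to resolve the endpoint ENIs with the aws_network_interface data source and register their private IPs directly. A sketch, assuming the aws_vpc_endpoint.api-gw and aws_lb_target_group.nlb resources referenced above; as with any for_each over values produced in the same run, the ENI IDs must be known at plan time, so a targeted first apply may be needed when the endpoint is new:
# Resolve each endpoint ENI to its private IP
data "aws_network_interface" "api_gw" {
  for_each = aws_vpc_endpoint.api-gw.network_interface_ids
  id       = each.value
}

# One IP target per endpoint ENI
resource "aws_lb_target_group_attachment" "endpoint_ips" {
  for_each         = data.aws_network_interface.api_gw
  target_group_arn = aws_lb_target_group.nlb.arn
  target_id        = each.value.private_ip
  port             = 443
}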

AWS EKS Terraform - Tag "KubernetesCluster" nor "kubernetes.io/cluster/..." not found

I followed "https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html" to create an EKS cluster using terraform.
I was able to create a config map successfully but i am unable to get the node details -
$ ./kubectl_1.10.3_darwin get nodes
No resources found.
Service details -
$ ./kubectl_1.10.3_darwin get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 2h
Kubelet logs on the nodes -
Aug 5 09:14:32 ip-172-31-18-205 kubelet: I0805 09:14:32.617738 25463 aws.go:1026] Building AWS cloudprovider
Aug 5 09:14:32 ip-172-31-18-205 kubelet: I0805 09:14:32.618168 25463 aws.go:988] Zone not specified in configuration file; querying AWS metadata service
Aug 5 09:14:32 ip-172-31-18-205 kubelet: E0805 09:14:32.794914 25463 tags.go:94] Tag "KubernetesCluster" nor "kubernetes.io/cluster/..." not found; Kubernetes may behave unexpectedly.
Aug 5 09:14:32 ip-172-31-18-205 kubelet: F0805 09:14:32.795622 25463 server.go:233] failed to run Kubelet: could not init cloud provider "aws": AWS cloud failed to find ClusterID
Aug 5 09:14:32 ip-172-31-18-205 systemd: kubelet.service: main process exited, code=exited, status=255/n/a
Aug 5 09:14:32 ip-172-31-18-205 systemd: Unit kubelet.service entered failed state.
Aug 5 09:14:32 ip-172-31-18-205 systemd: kubelet.service failed.
The AWS getting started documentation doesn't mention any tag-related information ("https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html").
After a while I found out that I had missed putting resource tags like "kubernetes.io/cluster/*" on my networking resources.
My networking resources are pre-created; I use remote state to fetch the required details. I believe I can either add the tags to them or create a new VPC environment.
Is there any alternative way to solve this without adding tags or provisioning new resources?
Make sure you add a tag like the one below to your VPCs, subnets & ASGs -
"kubernetes.io/cluster/${CLUSTER_NAME}" = "shared"
NOTE: The usage of the specific kubernetes.io/cluster/* resource tags below are required for EKS and Kubernetes to discover and manage networking resources.
NOTE: The usage of the specific kubernetes.io/cluster/* resource tag below is required for EKS and Kubernetes to discover and manage compute resources. - Terraform docs
I had missed propagating the tags to the worker nodes via the Auto Scaling groups. I added the code below to the ASG Terraform module and it started working; at least the nodes were able to join the cluster. You also need to add the tag to the VPC & subnets for EKS and Kubernetes to discover and manage networking resources.
For VPC -
locals {
  cluster_tags = {
    "kubernetes.io/cluster/${var.project}-${var.env}-cluster" = "shared"
  }
}

resource "aws_vpc" "myvpc" {
  cidr_block           = "${var.vpc_cidr}"
  enable_dns_hostnames = true

  tags = "${merge(map("Name", format("%s-%s-vpcs", var.project, var.env)), var.default_tags, local.cluster_tags)}"
}

resource "aws_subnet" "private_subnet" {
  count             = "${length(var.private_subnets)}"
  vpc_id            = "${aws_vpc.myvpc.id}"
  cidr_block        = "${var.private_subnets[count.index]}"
  availability_zone = "${element(var.azs, count.index)}"

  tags = "${merge(map("Name", format("%s-%s-pvt-%s", var.project, var.env, element(var.azs, count.index))), var.default_tags, local.cluster_tags)}"
}

resource "aws_subnet" "public_subnet" {
  count                   = "${length(var.public_subnets)}"
  vpc_id                  = "${aws_vpc.myvpc.id}"
  cidr_block              = "${var.public_subnets[count.index]}"
  availability_zone       = "${element(var.azs, count.index)}"
  map_public_ip_on_launch = "true"

  tags = "${merge(map("Name", format("%s-%s-pub-%s", var.project, var.env, element(var.azs, count.index))), var.default_tags, local.cluster_tags)}"
}
For ASGs -
resource "aws_autoscaling_group" "asg-node" {
name = "${var.project}-${var.env}-asg-${aws_launch_configuration.lc-node.name}"
vpc_zone_identifier = ["${var.vpc_zone_identifier}"]
min_size = 1
desired_capacity = 1
max_size = 1
target_group_arns = ["${var.target_group_arns}"]
default_cooldown= 100
health_check_grace_period = 100
termination_policies = ["ClosestToNextInstanceHour", "NewestInstance"]
health_check_type="EC2"
depends_on = ["aws_launch_configuration.lc-node"]
launch_configuration = "${aws_launch_configuration.lc-node.name}"
lifecycle {
create_before_destroy = true
}
tags = ["${data.null_data_source.tags.*.outputs}"]
tags = [
{
key = "Name"
value = "${var.project}-${var.env}-asg-eks"
propagate_at_launch = true
},
{
key = "role"
value = "eks-worker"
propagate_at_launch = true
},
{
key = "kubernetes.io/cluster/${var.project}-${var.env}-cluster"
value = "owned"
propagate_at_launch = true
}
]
}
I was able to deploy a sample application after the above changes.
PS - I'm answering this since the AWS EKS getting started documentation doesn't make these instructions very clear, and people creating ASGs manually may run into this issue. This might save others some time.
I tried to summarize below all the resources that require tagging - I hope I haven't missed anything.
Tagging Network resources
(Summary of this doc).
1) VPC tagging requirement
When you create an Amazon EKS cluster earlier than version 1.15, Amazon EKS tags the VPC containing the subnets you specify in the following way so that Kubernetes can discover it:
kubernetes.io/cluster/<cluster-name> = shared
Key: The value matches your Amazon EKS cluster's name.
Value: The shared value allows more than one cluster to use this VPC.
2) Subnet tagging requirement
When you create your Amazon EKS cluster, Amazon EKS tags the subnets you specify in the following way so that Kubernetes can discover them:
Note: All subnets (public and private) that your cluster uses for
resources should have this tag.
kubernetes.io/cluster/<cluster-name> = shared
Key: The value matches your Amazon EKS cluster.
Value: The shared value allows more than one cluster to use this subnet.
3) Private subnet tagging requirement for internal load balancers
Private subnets must be tagged in the following way so that Kubernetes knows it can use the subnets for internal load balancers. If you use an Amazon EKS AWS CloudFormation template to create...
kubernetes.io/role/internal-elb = 1
4) Public subnet tagging option for external load balancers
You must tag the public subnets in your VPC so that Kubernetes knows to use only those subnets for external load balancers instead of choosing a public subnet in each Availability Zone (in lexicographical order by subnet ID). If you use an Amazon EKS AWS CloudFormation template...
kubernetes.io/role/elb = 1
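Pulling the network tags together, here is a hedged sketch of tag maps that could be merged into the subnet resources shown in the earlier answer; var.cluster_name is a placeholder for your cluster's name:
variable "cluster_name" {}

locals {
  # shared discovery tag for every subnet the cluster uses
  eks_shared_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }

  # role tags: internal-elb for private subnets, elb for public subnets
  private_subnet_tags = merge(local.eks_shared_tags, {
    "kubernetes.io/role/internal-elb" = "1"
  })

  public_subnet_tags = merge(local.eks_shared_tags, {
    "kubernetes.io/role/elb" = "1"
  })
}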
Tagging Auto Scaling group
(Summary of this doc).
The Cluster Autoscaler requires the following tags on your node group Auto Scaling groups so that they can be auto-discovered.
If you used the previous eksctl commands to create your node groups, these tags are automatically applied. If not, you must manually tag your Auto Scaling groups with the following tags.
k8s.io/cluster-autoscaler/<cluster-name> = owned
k8s.io/cluster-autoscaler/enabled = true
Tagging Security groups
(Taken from the end of this doc).
If you have more than one security group associated to your nodes, then one of the security groups must have the following tag applied to it. If you have only one security group associated to your nodes, then the tag is optional.
kubernetes.io/cluster/<cluster-name> = owned
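In Terraform, that security group tag looks like the sketch below; the security group itself and var.cluster_name are placeholders, shown only to illustrate where the tag goes:
# Hypothetical node security group; only the tag is the point here
resource "aws_security_group" "eks_nodes" {
  name_prefix = "eks-nodes-"
  vpc_id      = aws_vpc.myvpc.id   # assumes the VPC from the earlier snippet

  tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "owned"
  }
}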