I have provisioned an EKS cluster on AWS with public access to the API endpoint. While doing so, I configured the security group with ingress only from a specific IP, but I could still run kubectl get svc against the cluster when accessing it from another IP.
I want to have IP restricted access to EKS cluster.
ref - Terraform - Master cluster SG
If public access is enabled, does it mean that anyone who has the cluster name can deploy anything?
When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl as you have done).
By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC).
So the public access does not mean that anyone who has the cluster name can deploy anything. You can read more about that in the Amazon EKS Cluster Endpoint Access Control AWS documentation.
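Note also that the cluster security group governs traffic inside the VPC, not the internet-facing public endpoint, which is why your ingress rule didn't block other source IPs. If the goal is IP-restricted access to the public endpoint, EKS supports a CIDR allow-list for it; a minimal sketch with placeholder cluster name and CIDR:

# Restrict the public API endpoint to a single source IP (values are placeholders)
aws eks update-cluster-config \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.5/32"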
If you want to provision EKS with Terraform and manage the network topology, that happens through the VPC (Virtual Private Cloud). You can check this VPC Terraform Module to get all the proper settings.
Hope it'll help.
As well as Claire Bellivier's answer about how EKS clusters are protected via authentication using IAM and RBAC, you can now also configure your EKS cluster to be accessible only from private networks such as the VPC the cluster resides in or any peered VPCs.
This has been added in the (as yet unreleased) 2.3.0 version of the AWS provider and can be configured as part of the vpc_config block of the aws_eks_cluster resource:
resource "aws_eks_cluster" "example" {
name = %[2]q
role_arn = "${aws_iam_role.example.arn}"
vpc_config {
endpoint_private_access = true
endpoint_public_access = false
subnet_ids = [
"${aws_subnet.example.*.id[0]}",
"${aws_subnet.example.*.id[1]}",
]
}
}
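After applying, you can confirm the endpoint settings took effect; a quick check with the AWS CLI, assuming the cluster name above:

# Should return [false, true] for the config shown
aws eks describe-cluster --name example \
  --query "cluster.resourcesVpcConfig.[endpointPublicAccess,endpointPrivateAccess]"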
When deploying an EKS cluster, the best practice is to deploy the managed control plane in private subnets. In terms of accessibility, the default option is a public cluster, meaning that I can access it locally with the kubectl tool and an updated kubeconfig.
How am I able to access the cluster if it is deployed in private subnets with no inbound traffic? As per the documentation, AWS creates a managed endpoint that can access the cluster from within the AWS network.
What is the architecture behind it, and how does it work internally? Is there some kind of proxy (agent) being deployed? (I found aws-node.)
What I have tried:
- deployed my own EKS cluster
- read the documentation
- tried to scrape for additional info
The type of EKS networking you're setting up is configured to restrict access to the API server with a private endpoint that's only accessible from within the VPC. So any Kubernetes API requests (kubectl commands) have to originate from within the VPC (public or private subnets). If you are doing this as a personal project, then you can do the following:
1. Create a bastion host in the public subnet of your VPC with a key pair. Launch this host with user data that installs kubectl and any other CLI tools you need.
2. Access the bastion host via SSH from your workstation to ensure it works as expected.
3. Check that the security group attached to your EKS control plane can receive 443 traffic from the public subnet. You can create a rule for this if one doesn't exist. This will enable communication between the bastion host in the public subnet and the cluster in the private subnets.
4. Access the bastion host and then use it to communicate with the cluster just as you would with your personal machine. For example, run aws eks --region <region> update-kubeconfig --name <name-of-your-cluster> to update your kubeconfig and then proceed to run kubectl commands. A sketch of steps 3 and 4 follows below.
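All IDs, names, and the key file below are placeholders (the 443 rule here is keyed to the bastion's security group rather than the whole public subnet):

# Step 3: allow 443 from the bastion's security group to the control plane's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaa0controlplane \
  --protocol tcp --port 443 \
  --source-group sg-0bbb0bastion

# Step 4: from your workstation, hop onto the bastion
ssh -i bastion-key.pem ec2-user@<bastion-public-ip>

# Then, on the bastion, point kubectl at the private endpoint and test it
aws eks --region <region> update-kubeconfig --name <name-of-your-cluster>
kubectl get nodes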
Sidenote:
If this is for an enterprise project, you can also look into using AWS VPN or AWS Direct Connect to access the VPC.
Other helpful resources:
https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access
I am trying to set up Cloud Run (Node.js app, code is below) to privately connect to a Memorystore instance. I've followed this Google article to create a Serverless VPC Access connector, making sure I created the connector in the same region as the Cloud Run app, and that the connector is attached to the Redis instance's authorized VPC network.
Memorystore is isolated in a VPC with a private range address.
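For reference, the connector was created roughly like this, per the linked article (the name, network, and IP range here are placeholders):

# Serverless VPC Access connector in the same region as the Cloud Run service
gcloud compute networks vpc-access connectors create my-connector \
  --region us-central1 \
  --network my-vpc \
  --range 10.8.0.0/28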
The Node.js app code is shown below.
const { createClient } = require('redis');

// Builds a Redis client pointed at the Memorystore instance's private IP.
// The caller is expected to run `await client.connect()` before issuing commands.
function getClient() {
  const client = createClient({
    socket: {
      host: process.env.REDIS_HOST,
    },
    password: process.env.REDIS_PASS,
  });

  client.on('error', (err) => {
    throw Error(`redis client error: ${err}`);
  });

  return client;
}
The Google doc states that a firewall rule is created to allow ingress from the connector's subnet to all destinations in the VPC network. This is against my company's security policy, as we have other services in this VPC (VMs, GKE instances, etc.), so I need to restrict the connector from reaching all destinations in the VPC network. Is there a preferred way of achieving this?
Earlier in 2021, Google Cloud made it possible for the Cloud Run Serverless VPC Access connector to use the allow and target-tags flags when creating an ingress firewall rule. This allows targeting the traffic only to a specific resource within the VPC.
Per Google's documented procedure, you first create a low-priority ingress rule that denies all traffic from the connector's network tag, and then add a more specific allow rule that takes precedence over it.
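A sketch of the deny rule, with placeholder rule name, network, and priority (vpc-connector is the universal network tag Google assigns to connectors):

gcloud compute firewall-rules create deny-vpc-connector-traffic \
  --action=DENY \
  --rules=all \
  --source-tags=vpc-connector \
  --direction=INGRESS \
  --network=VPC_NETWORK \
  --priority=990

Then create the allow rule, giving it a lower priority value (i.e. higher precedence) than the deny rule and targeting only the resource the connector should reach: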
gcloud compute firewall-rules create RULE_NAME \
--allow=PROTOCOL \
--source-tags=VPC_CONNECTOR_NETWORK_TAG \
--direction=INGRESS \
--network=VPC_NETWORK \
--target-tags=RESOURCE_TAG \
--priority=PRIORITY
Hope it resolves your issue.
I have a use case where I need to deploy an EKS cluster in private subnets and access it through API Gateway.
Currently, if I deploy the EKS cluster in a public subnet and try to access it, it works fine. However, it does not work when the EKS cluster is deployed into a private subnet.
API Gateway is configured with a VPC link to access the EKS cluster securely.
A Network Load Balancer is configured to connect to the EKS cluster nodes.
Please let me know if there is anything that I am missing here.
Thanks,
Avinash
I'm trying to create an EKS cluster in a private subnet. I'm having issues getting it working: I get the error "unhealthy nodes in the kubernetes cluster". I wonder if it's due to a security group or some other issue like VPC endpoints.
When I use a NAT gateway setup it works fine, but I don't want to use a NAT gateway anymore.
One thing I'm not sure about is whether the EKS cluster subnet_ids should be only private subnets.
In the below config I'm using both public and private subnets.
resource "aws_eks_cluster" "main" {
name = var.eks_cluster_name
role_arn = aws_iam_role.eks_cluster.arn
vpc_config {
subnet_ids = concat(var.public_subnet_ids, var.private_subnet_ids)
security_group_ids = [aws_security_group.eks_cluster.id, aws_security_group.eks_nodes.id, aws_security_group.external_access.id]
endpoint_private_access = true
endpoint_public_access = false
}
# Ensure that IAM Role permissions are created before and deleted after EKS Cluster handling.
# Otherwise, EKS will not be able to properly delete EKS managed EC2 infrastructure such as Security Groups.
depends_on = [
"aws_iam_role_policy_attachment.aws_eks_cluster_policy",
"aws_iam_role_policy_attachment.aws_eks_service_policy"
]
}
Since you don't have a NAT gateway/instance, your nodes can't connect to the internet and fail because they can't "communicate with the control plane and other AWS services" (from here).
Thus, you can use VPC endpoints to enable communication with the control plane and those services. To see a properly set up VPC with private subnets for EKS, you can check the AWS-provided VPC template for EKS (from here).
From the template, the VPC endpoints in us-east-1:
com.amazonaws.us-east-1.ec2
com.amazonaws.us-east-1.ecr.api
com.amazonaws.us-east-1.s3
com.amazonaws.us-east-1.logs
com.amazonaws.us-east-1.ecr.dkr
com.amazonaws.us-east-1.sts
Please note that all these endpoints, except S3, are not free. So you have to consider whether running a cheap NAT instance or gateway would be cheaper or more expensive than maintaining these endpoints.
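If you do go the endpoint route, here is a sketch of creating two of them with the AWS CLI (all IDs are placeholders). Interface endpoints go in the private subnets with a security group that allows 443 from the nodes; the S3 endpoint is a gateway attached to the private route tables:

# Interface endpoint for the ECR API (repeat for ec2, logs, ecr.dkr, sts)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.ecr.api \
  --subnet-ids subnet-0aaa subnet-0bbb \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled

# Gateway endpoint for S3 (free), attached to the private route tables
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0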
I am using Docker containers with secrets on ECS without problems. After moving to Fargate and platform version 1.4 for EFS support, I started getting the following error:
ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve secret from asm: service call has been retried 1 time(s): secret arn:aws:secretsmanager:eu-central-1:.....
Any help please?
Here's a checklist:
1. If your ECS tasks are in a public subnet (0.0.0.0/0 routes to an Internet Gateway), make sure your tasks can call the "public" endpoint for Secrets Manager. Basically, outbound TCP/443.
2. If your ECS tasks are in a private subnet, make sure that one of the following is true: (a) your instances connect to the internet through a NAT gateway (0.0.0.0/0 routes to the NAT gateway), or (b) you have an AWS PrivateLink endpoint to Secrets Manager connected to your VPC (and to your subnets).
3. If you have an AWS PrivateLink connection, make sure the associated security group has inbound access from the security groups linked to your ECS tasks.
4. Make sure you have granted the GetSecretValue IAM permission on the ARN(s) of the Secrets Manager entry (or entries) to the ECS task execution role. Sketches for items 3 and 4 follow below.
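All IDs and the ARN below are placeholders:

# Item 3: let the ECS tasks' security group reach the Secrets Manager endpoint on 443
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaa0smendpoint \
  --protocol tcp --port 443 \
  --source-group sg-0bbb0ecstasks

# Item 4: if the task execution role can run this call, permissions are fine
# and the remaining suspect is networking
aws secretsmanager get-secret-value \
  --secret-id arn:aws:secretsmanager:eu-central-1:123456789012:secret:my-secret \
  --region eu-central-1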
Edit: Here's another excellent answer - https://stackoverflow.com/a/66802973
I had the same error message, but the checklist above misses the cause of my problem. If you are using VPC endpoints to access AWS services (i.e. Secrets Manager, ECR, SQS, etc.), then those endpoints MUST permit access from the security group that is associated with the VPC subnet your ECS instance is running in.
Another gotcha: if you are using EFS to host volumes, ensure that your volumes can be mounted by the same security group identified above. Go to EFS, select the appropriate file system, then the Network tab, then Manage.
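For the EFS case, the equivalent network check, sketched with placeholder IDs, is that the mount targets' security group accepts NFS from the tasks' security group:

# EFS mount targets must accept NFS (TCP 2049) from the ECS tasks' security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0ccc0efsmounts \
  --protocol tcp --port 2049 \
  --source-group sg-0bbb0ecstasks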