I'm looking for a way to restrict outgoing traffic from my pod so that it can only reach S3. Ingress is already completely locked down with a default deny-all policy for incoming traffic (which, as expected, still lets the pod connect to S3).
I was able to find the IP ranges for S3 in my region by following this documentation, and added them to my network policy below:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: aws-s3
spec:
  podSelector:
    matchLabels:
      name: aws-s3
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 52.95.144.0/24
    - ipBlock:
        cidr: 52.95.148.0/23
    - ipBlock:
        cidr: 3.5.244.0/22
    - ipBlock:
        cidr: 52.95.142.0/23
    - ipBlock:
        cidr: 52.95.150.0/24
    - ipBlock:
        cidr: 18.168.37.160/28
    - ipBlock:
        cidr: 18.168.37.176/28
After adding this policy, my pod can no longer reach the bucket using the AWS CLI. Has anyone been able to allow egress to S3, or does anyone have a fix for a similar issue?
Figured out the issue. The AWS CLI was trying to resolve s3.amazonaws.com but couldn't, because my policy was also blocking the DNS server. I whitelisted the DNS server and allowed all outbound traffic on the DNS ports (53/udp and 53/tcp).
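For reference, an extra rule along these lines, appended under the existing egress: list, is one way to allow DNS; the kube-system/kube-dns labels below are assumptions and may need adjusting for your cluster's DNS setup.
# Sketch: additional egress rule allowing DNS lookups so the AWS CLI can
# resolve s3.amazonaws.com. The namespace and pod labels for kube-dns/CoreDNS
# are assumptions; adjust them to match your cluster.
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: kube-system
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53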
Related
I have an Ingress in my AWS EKS cluster:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service-example
          servicePort: 80
The Ingress redirects all traffic on the domain example.com to my-service-example.
The problem is that the Ingress has no public URL. How can I expose this Ingress to the public internet?
You need an ingress controller in your cluster to route traffic to your Kubernetes service (nginx, Istio, AWS Load Balancer Controller).
You don't say which ingress controller is installed in your cluster, so I recommend using the official AWS controller for EKS. To install it, please follow this doc. Then change your Ingress object by adding these lines to its annotations block:
annotations:
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/target-type: ip
With that, the AWS Load Balancer Controller interacts with the AWS API and creates an Application Load Balancer in your public subnets, because the scheme annotation is internet-facing.
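Putting it together, the Ingress from the question might look roughly like this with the ALB annotations added; this sketch keeps the question's extensions/v1beta1 form, while newer clusters use apiVersion networking.k8s.io/v1 with slightly different backend fields.
# Sketch: the original Ingress with the ALB annotations from the answer added.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service-example
          servicePort: 80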
I have the following CloudFormation stack, which defines an ECS service:
ApiService:
  Type: AWS::ECS::Service
  DependsOn:
    - LoadBalancerListener80
    - LoadBalancerListener443
  Properties:
    Cluster: !Ref EcsClusterArn
    DeploymentConfiguration:
      MaximumPercent: 200
      MinimumHealthyPercent: 100
    DeploymentController:
      Type: ECS
    DesiredCount: 1
    HealthCheckGracePeriodSeconds: 10
    LaunchType: FARGATE
    LoadBalancers:
      - ContainerName: !Join ['-', ['container', !Ref AWS::StackName]]
        ContainerPort: !Ref Port
        TargetGroupArn: !Ref LoadBalancerTargetGroup
    NetworkConfiguration:
      AwsvpcConfiguration:
        AssignPublicIp: ENABLED # <-- if disabled, pulling from ecr registry fails
        SecurityGroups:
          - !Ref ApiServiceContainerSecurityGroup
        Subnets: !Ref Subnets
    SchedulingStrategy: REPLICA
    ServiceName: !Ref AWS::StackName
    TaskDefinition: !Ref ApiServiceTaskDefinition
I've noticed that without enabling public IP auto-assignment, the service tasks are unable to pull the Docker image from the ECR registry. I don't understand why the containers need a public IP to pull images from the registry: the service security group allows all outbound traffic, the subnets can access the internet through an internet gateway, and the IAM role allows pulling from ECR. So why the need for a public IP?
I don't want my containers to have a public IP; they should be reachable only inside the VPC. Or have I misunderstood, and it's only the task that receives a public IP (for whatever reason) while the containers stay private inside the VPC?
"the IAM role allows pulling from ECR"
The IAM role just grants permission; it doesn't provide a network connection.
"the subnets can access the internet through an internet gateway"
I think you'll find that the internet gateway only provides internet access to resources that have a public IP assigned to them.
ECR is a service that exists outside your VPC, so you need one of the following for the network connection to ECR to be established:
Public IP.
NAT Gateway, with a route to the NAT Gateway in the subnet.
ECR Interface VPC Endpoint, with a route to the endpoint from the subnet (see the sketch below).
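If you want to keep AssignPublicIp disabled, a rough CloudFormation sketch of the endpoints is shown below; the VpcId, Subnets, EndpointSecurityGroup and PrivateRouteTable names are assumptions for resources or parameters defined elsewhere in your template, and note that pulling images also needs an S3 gateway endpoint because the image layers are stored in S3.
# Sketch: private connectivity to ECR without public IPs. Names of
# referenced parameters/resources are assumptions.
EcrApiEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcEndpointType: Interface
    ServiceName: !Sub com.amazonaws.${AWS::Region}.ecr.api
    VpcId: !Ref VpcId
    SubnetIds: !Ref Subnets
    SecurityGroupIds:
      - !Ref EndpointSecurityGroup
    PrivateDnsEnabled: true
EcrDkrEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcEndpointType: Interface
    ServiceName: !Sub com.amazonaws.${AWS::Region}.ecr.dkr
    VpcId: !Ref VpcId
    SubnetIds: !Ref Subnets
    SecurityGroupIds:
      - !Ref EndpointSecurityGroup
    PrivateDnsEnabled: true
S3GatewayEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcEndpointType: Gateway
    ServiceName: !Sub com.amazonaws.${AWS::Region}.s3
    VpcId: !Ref VpcId
    RouteTableIds:
      - !Ref PrivateRouteTable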
I have my K8s setup (pods A, B, and C) and an ElastiCache cluster ("xxx.xxxx.xx.cache.amazonaws.com") in AWS. Right now all my pods have access to ElastiCache.
I am looking for a way to restrict that communication. I read about Calico, but I understand it can block communication between two pods. Is there any way I can allow A to communicate with xxx.xxxx.xx.cache.amazonaws.com but block pods B and C?
PS: ElastiCache is not something that resides inside the K8s cluster.
You can use a Kubernetes network policy, where you define an egress policy that allows or denies outgoing traffic to CIDR blocks or IPs from pods selected by a label.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
        except:
        - 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
The example above selects pods with the label role: db and, because the except range covers the whole allowed CIDR, effectively blocks their traffic to 10.0.0.0/24 on TCP port 5978.
A network plugin such as Calico is needed for this to work. Follow the docs to install Calico in an EKS cluster.
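For the scenario in the question, a policy roughly like the one below would allow only pod A to reach ElastiCache; the app: pod-a label, the 10.0.1.0/24 CIDR of the ElastiCache subnets and the Redis port 6379 are assumptions to adjust for your environment. Combined with a default-deny egress policy in the namespace, pods B and C would then have no allowed path to ElastiCache.
# Sketch: allow egress from pod A only. Label, CIDR and port are
# illustrative assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-pod-a-to-elasticache
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: pod-a
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.1.0/24
    ports:
    - protocol: TCP
      port: 6379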
I had an Elasticsearch database running on an EC2 instance. The backend services that connect to it run on AWS EKS nodes.
In order for the backend Kubernetes pods to access the Elasticsearch DB, I added the allowed security groups to the EKS nodes, and it is working fine.
But my question is: all other pods (not the backend ones) running on the same node also have potential access to the Elasticsearch DB because of the underlying node security groups. Is there a more secure way to handle this?
In this situation you could additionally use Kubernetes Network Policies to define rules that specify what traffic is allowed to the Elasticsearch DB from selected pods.
For instance, start by creating a default policy that denies all egress traffic for all pods in the namespace, like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Egress
and then allow outgoing traffic from specific pods (labeled role: db) to the CIDR 10.0.0.0/24 on TCP port 5978:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Please consult the official documentation for more information on NetworkPolicies.
I have two VPCs, VPC A and VPC B. One service runs in VPC B, and the Kubernetes cluster is in VPC A. I am using kops on AWS, with VPC peering enabled between the two VPCs. I can connect to the service in VPC B from the Kubernetes deployment server host in VPC A, but I cannot connect to it from inside a Kubernetes pod; the connection times out. I searched on the internet and found that iptables rules could work.
I have gone through this article,
https://ben.straub.cc/2015/08/19/kubernetes-aws-vpc-peering/
But it is not possible to manually SSH into the Kubernetes node servers and set iptables rules. I want to add this as part of the deployment.
This is what my Service looks like:
apiVersion: v1
kind: Service
metadata:
  name: test-microservice
  namespace: development
spec:
  # type: LoadBalancer
  type: NodePort
  # clusterIP: None
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: test-microservice