Can I block connections between a pod and ElastiCache in AWS?

I have my K8s setup (Pods A, B and C) and an ElastiCache endpoint ("xxx.xxxx.xx.cache.amazonaws.com") in AWS. Right now all my pods have access to ElastiCache.
I am looking for a way to restrict that communication. I read about Calico, but as I understand it, it blocks communication between two pods. Is there any way I can allow Pod A to communicate with xxx.xxxx.xx.cache.amazonaws.com but block Pods B and C?
PS: ElastiCache does not reside inside the k8s cluster.

You can use a Kubernetes NetworkPolicy, where you define an egress policy that allows or denies outgoing traffic to CIDR blocks or IPs from pods selected by a label.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
The example above blocks TCP traffic on port 5978 to 10.0.0.0/24 from pods labeled role: db, while still allowing it to other destinations. Note that once a pod is selected by a policy with the Egress policy type, any outgoing traffic not matched by an egress rule is denied.
A network plugin that enforces NetworkPolicy, such as Calico, is needed for this to work. Follow the docs to install Calico in an EKS cluster.
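Applied to the question, a minimal sketch could consist of a default deny-all-egress policy for the namespace plus a policy that lets only Pod A reach the cache. The app: pod-a label, the 10.0.1.0/24 CIDR for the ElastiCache subnet and the Redis port 6379 are assumptions for illustration; substitute the subnet your cache nodes actually live in, and keep DNS open so the xxx.xxxx.xx.cache.amazonaws.com hostname still resolves:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: default
spec:
  podSelector: {}          # applies to Pods A, B and C
  policyTypes:
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-pod-a-to-elasticache
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: pod-a           # hypothetical label on Pod A
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.1.0/24   # assumed ElastiCache subnet CIDR
      ports:
        - protocol: TCP
          port: 6379            # Redis; use 11211 for Memcached
    - ports:                    # allow DNS lookups for the cache hostname
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
Pods B and C only match the deny-all policy, so their connections to the cache endpoint are dropped.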

Related

Kubernetes Network Policy to allow egress to S3

I'm looking for a way to restrict outgoing traffic from my pod so it can only reach S3. My ingress is already completely locked down with a default deny-all for incoming traffic (which would still allow me to connect to S3 as expected).
I was able to find the IP ranges for S3 in my region by following this documentation, and added them to my network policy below:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: aws-s3
spec:
  podSelector:
    matchLabels:
      name: aws-s3
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 52.95.144.0/24
        - ipBlock:
            cidr: 52.95.148.0/23
        - ipBlock:
            cidr: 3.5.244.0/22
        - ipBlock:
            cidr: 52.95.142.0/23
        - ipBlock:
            cidr: 52.95.150.0/24
        - ipBlock:
            cidr: 18.168.37.160/28
        - ipBlock:
            cidr: 18.168.37.176/28
After adding this policy my pod can no longer reach the bucket using the AWS CLI. Has anyone been able to allow egress to S3, or found a fix for a similar issue?
Figured out the issue. The AWS CLI was trying to resolve s3.amazonaws.com but couldn't, because my policy was also blocking the DNS server. I whitelisted it and also allowed all outbound traffic on the DNS ports (53/udp and 53/tcp).
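Since NetworkPolicies are additive, one way to express that is a second policy that opens DNS for the same pods. A minimal sketch, assuming the same name: aws-s3 pod label as above (the allow-dns name is made up):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector:
    matchLabels:
      name: aws-s3
  policyTypes:
    - Egress
  egress:
    - ports:               # no "to" clause, so DNS is allowed to any destination
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
Alternatively, the same two port entries can be appended as an extra rule to the aws-s3 policy itself.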

AWS Network Load Balancer and TCP traffic with AWS Fargate

I want to expose a TCP-only service from my Fargate cluster to the public internet on port 80. To achieve this I want to use an AWS Network Load Balancer.
This is the configuration of my service:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "30"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Using the service from inside the cluster via its CLUSTER-IP works. When I apply my config with kubectl, the following happens:
- The Service is created in K8s
- An NLB is created in AWS
- The NLB gets status 'active'
- The VPC and other values for the NLB look correct
- A Target Group is created in AWS
- There are 0 targets registered
- I can't register targets manually because the group expects instances, which I do not have
- The service's EXTERNAL-IP stays pending
- A listener is not created automatically
Then I create a listener for port 80 and TCP. After some wait, an EXTERNAL-IP is assigned to the service.
My problem: it does not work. The service is not reachable using the DNS name of the NLB on port 80.
The in-tree Kubernetes Service LoadBalancer for AWS cannot be used with AWS Fargate.
You can use NLB instance targets with pods deployed to nodes, but not to Fargate.
But you can now install the AWS Load Balancer Controller and use IP mode on your Service of type LoadBalancer; this also works for AWS Fargate.
kind: Service
apiVersion: v1
metadata:
  name: nlb-ip-svc
  annotations:
    # route traffic directly to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
See Introducing AWS Load Balancer Controller and EKS Network Load Balancer - IP Targets
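Applied to the service from the question, a minimal sketch (assuming the same app: myapp selector and port 80, and that the AWS Load Balancer Controller is already installed) might look like this; the controller then creates the NLB listener and registers the Fargate pod IPs as targets itself:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
  annotations:
    # handled by the AWS Load Balancer Controller: NLB with pod IP targets
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80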

Kubernetes pod level restricted access to other EC2 instances from AWS EKS nodes

I have an Elasticsearch DB running on an EC2 instance. The backend services that connect to it run on AWS EKS nodes.
In order for the backend Kubernetes pods to access the Elasticsearch DB, I added the allowed security groups to the EKS nodes, and it is working fine.
But all other pods (not just the backend ones) running on the same node also have access to the Elasticsearch DB because of the underlying node's security groups. Is there a better, more secure way to handle this?
In this situation you could additionally use Kubernetes Network Policies to define rules that specify what traffic is allowed to the Elasticsearch DB, and from which pods.
For instance, start by creating a default deny-all-egress policy for all pods in the namespace, like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
    - Egress
and then allow outgoing traffic from specific pods (those labeled role: db) to CIDR 10.0.0.0/24 on TCP port 5978:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
Please consult the official documentation for more information on NetworkPolicies.
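Translated to this question, a sketch under assumptions could look like the policy below; the app: backend label, the instance IP 10.0.2.15 and the Elasticsearch port 9200 are all made-up values to be replaced with your own:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-elasticsearch
spec:
  podSelector:
    matchLabels:
      app: backend             # assumed label on the backend pods
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.2.15/32   # assumed private IP of the Elasticsearch EC2 instance
      ports:
        - protocol: TCP
          port: 9200             # Elasticsearch REST port
Combined with the default-deny policy above, only the backend pods can open connections to the instance, even though the node's security group would still allow the traffic from any pod on that node.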

Connect to a different VPC from inside a Kubernetes pod

I have two VPCs, VPC A and VPC B. I have one service running in VPC B; the Kubernetes cluster is in VPC A. I am using kops on AWS, and VPC peering is enabled between the two VPCs. I can connect to the service running in VPC B from the Kubernetes deployment server host in VPC A, but I cannot connect to that service from inside a Kubernetes pod; the connection times out. I searched the internet and found that iptables rules could work.
I have gone through this article:
https://ben.straub.cc/2015/08/19/kubernetes-aws-vpc-peering/
But it is not possible for me to manually SSH into the Kubernetes node servers and set the iptables rules. I want to add them as part of the deployment.
This is what my service looks like:
apiVersion: v1
kind: Service
metadata:
  name: test-microservice
  namespace: development
spec:
  # type: LoadBalancer
  type: NodePort
  # clusterIP: None
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    run: test-microservice
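The linked article boils down to adding an iptables NAT rule on every node so that pod traffic towards the peered VPC leaves with the node's IP, which VPC B can route back to. One way to apply such a rule without SSHing into the nodes is a privileged DaemonSet; the following is only a sketch, and the 10.1.0.0/16 CIDR for VPC B is a made-up value:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: vpc-peering-masquerade
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: vpc-peering-masquerade
  template:
    metadata:
      labels:
        app: vpc-peering-masquerade
    spec:
      hostNetwork: true              # operate on the node's iptables
      containers:
        - name: iptables
          image: alpine:3.18
          securityContext:
            privileged: true
          command:
            - /bin/sh
            - -c
            # add the MASQUERADE rule for the peered VPC (idempotently), then keep running
            - |
              apk add --no-cache iptables
              iptables -t nat -C POSTROUTING -d 10.1.0.0/16 -j MASQUERADE \
                || iptables -t nat -A POSTROUTING -d 10.1.0.0/16 -j MASQUERADE
              sleep infinity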

AWS cross-cluster communication with Kubernetes pods

I have a question about cross-cluster communication between pods on AWS.
I am using Kubernetes to deploy clusters on AWS. Both clusters are in the same region and AZ. Each cluster is deployed in its own VPC with non-overlapping subnets. I've successfully created VPC peering to establish communication between the two VPCs, and the minions (instances) in each VPC can ping each other through their private IPs.
The problem is that Kubernetes pods in one cluster (VPC) cannot ping pods in the other cluster through their internal IPs. I see the traffic leaving the pod and the minion, but I don't see it in the other VPC.
Here is the IP info:
Cluster 1 (VPC 1) - subnet 172.21.0.0/16
Minion (instance) in VPC 1 - internal IP 172.21.0.232
Pod on that minion - IP 10.240.1.54
Cluster 2 (VPC 2) - subnet 172.20.0.0/16
Minion (instance) in VPC 2 - internal IP 172.20.0.19
Pod on that minion - IP 10.241.2.36
I've configured VPC peering between the two VPCs, and I can ping the minion in VPC 1 (172.21.0.232) from the minion in VPC 2 (172.20.0.19).
But when I try to ping the pod in VPC 1 (IP 10.240.1.54) from the pod in VPC 2 (IP 10.241.2.36), it does not work.
Is this a supported use case in AWS? How can I achieve it? I have also configured the security groups on both instances to allow all traffic from source 10.0.0.0/8, but it did not help.
I really appreciate your help!
Direct communication with pods from outside the cluster is not supposed to work. Pods can be exposed to the outside through Services.
There is a wide range of options, but a basic Service with a definition like the following could expose a pod through a predefined port to the other cluster:
---
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
      nodePort: 34567
With that you can access your pod through port 34567, which is mapped on every Kubernetes node (for example via the peered minion IP from the question, 172.21.0.232:34567, from the other VPC).
Besides that, you should also consider checking out Ingress configurations.
A very good summary, besides the official documentation, is the Kubernetes Services and Ingress Under X-ray blog post.