Kubernetes not creating ELB - amazon-web-services

I'm trying to expose my Kubernetes services externally by using type: LoadBalancer on AWS. After I create my service using kubectl I can see the change, but no ELB is created, not even asynchronously. Any hints on what could cause this? The pod I'm trying to expose is running a Docker image which exposes a web server on port 8001.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    name: my-service
spec:
  type: LoadBalancer
  ports:
    # the port that this service should serve on
    - port: 8001
  selector:
    name: my-service

This was answered by Jan Garaj in "Google Container Engine: Kubernetes is not exposing external IP after creating container" for a GCE deployment, and the answer is the same for AWS: you need to wait a few minutes for the reconciliation loop to kick in; it will notice that an ELB should be created, talk to the AWS APIs, and create it for you.
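If nothing shows up after a few minutes, it is worth watching the service and checking its events; for example, with the service name from the question:
kubectl get service my-service --watch    # wait for the EXTERNAL-IP column to be filled in
kubectl describe service my-service       # the Events section shows load balancer creation progress or errors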

Related

AWS Load Balancer Controller on EKS - Sticky Sessions Not Working

I have deployed the AWS Load Balancer Controller on AWS EKS and created a k8s Ingress resource.
I am deploying a Java web application with a k8s Deployment, and I want sticky sessions to hold so that my application works.
I have read that sticky sessions will work if I set the annotation below:
alb.ingress.kubernetes.io/target-type: ip
But I am seeing the Ingress route requests to a different replica each time, which makes logins fail because session cookies do not persist.
What am I missing here?
alb.ingress.kubernetes.io/target-type: ip is required,
but the annotation that actually enables stickiness is:
alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
You can also set the cookie duration, for example:
alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=300
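For context, here is a minimal sketch of an Ingress carrying both annotations; the resource name, scheme, and backend service my-app are placeholders rather than values from the question:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=300
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80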
If you want to manage the sticky session at the Kubernetes level, you can use sessionAffinity: ClientIP:
kind: Service
apiVersion: v1
metadata:
  name: service
spec:
  selector:
    app: app
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000

AWS EKS WITH FARGATE PROFILE USING KONG INGRESS - Unable to expose port 80 to public

I deployed the Kong ingress controller on an AWS EKS cluster with the Fargate option.
I am unable to access our application over the internet on HTTP port 80.
I keep getting ERR_CONNECTION_TIMED_OUT in the browser.
I followed the Kong deployment steps given at
https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/deployment/eks.md
The kong-proxy service is created without issue, yet its EXTERNAL-IP is still showing as pending.
We are able to access our application on the internal network (by logging on to a running pod) via the kong-proxy CLUSTER-IP without any problem using curl.
An NLB load balancer was also created automatically in the AWS console when we created the kong-proxy service. We are using its DNS name to try to connect from the internet.
Kindly help me understand what could be the problem.
My kong-proxy yaml is:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kong-proxy
  namespace: kong
spec:
  externalTrafficPolicy: Local
  ports:
    - name: proxy
      port: 80
      protocol: TCP
      targetPort: 80
    - name: proxy-ssl
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app: ingress-kong
  type: LoadBalancer
I don't think this is supported yet, as per https://github.com/aws/containers-roadmap/issues/617
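When EXTERNAL-IP stays pending, the service's events are usually the first place to look; for example, with the name and namespace from the manifest above:
kubectl get service kong-proxy -n kong
kubectl describe service kong-proxy -n kong   # check the Events section at the bottom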

Access AWS cluster endpoint running Kubernetes

I am new to Kubernetes and I am currently deploying a cluster in AWS using kubeadm. The containers are deployed just fine, but I can't seem to access them with my browser. When I used to do this via Docker Swarm I could simply use the IP address of the AWS node to access and log in to my application with my browser, but this does not seem to work with my current Kubernetes setup.
Therefore my question is how can I access my running application under these new settings?
You should read about how to use Services in Kubernetes:
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service.
Basically, a Service allows a Deployment (or Pod) to be reached from inside or outside the cluster.
In your case, if you want to expose a single service in AWS, it is as simple as:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  ports:
    - port: 80         # port that the service exposes
      targetPort: 8080 # port of a container in "my-app"
  selector:
    app: my-app # your deployment must have the label "app: my-app"
  type: LoadBalancer
You can check whether the Service was created successfully in the AWS EC2 console under "Elastic Load Balancers", or with kubectl describe service my-app.
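If you prefer to stay in kubectl, the ELB's DNS name also shows up on the service itself once provisioning finishes; for example, with the service name from the manifest above:
kubectl get service my-app   # the EXTERNAL-IP column shows the ELB hostname once it exists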
Both answers were helpful in my pursuit of a solution to my problem, but I ended up getting lost in the details. Here is an example that may help others in a similar situation:
1) Consider the following application yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-web-app
  labels:
    app: my-web-app
spec:
  serviceName: my-web-app
  replicas: 1
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: my-web-app
          image: myregistry:443/mydomain/my-web-app
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              name: cp
2) I decided to adopt NodePort (thank you @Leandro for pointing it out) to expose my service, hence I added the following to my application yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: my-web-app
  labels:
    app: my-web-app
spec:
  type: NodePort
  ports:
    - name: http1
      port: 80
      nodePort: 30036
      targetPort: 8080
      protocol: TCP
  selector:
    app: my-web-app
One thing that I was missing is that the labels in both resources must match in order to link the my-web-app StatefulSet (1) to the my-web-app Service (2): the Service's selector must match the labels in the StatefulSet's pod template (app: my-web-app). Then, the StatefulSet's containerPort must be the same as the Service's targetPort (8080). Finally, the Service's nodePort is the port that we expose publicly, and it must be a value between 30000 and 32767.
3) The last step is to ensure that the security group in AWS allows inbound traffic on the chosen nodePort, in this case 30036; if the rule is missing, add it (a CLI sketch follows below).
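If you prefer the AWS CLI over the console, the rule can be added with something like the following; the security group ID is a placeholder for your worker nodes' group, and a narrower CIDR than 0.0.0.0/0 is safer:
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 30036 \
  --cidr 0.0.0.0/0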
After following these steps I was able to access my application via aws-node-ip:30036/my-web-app.
Basically, the way Kubernetes is constructed is different. First of all, your containers are kept hidden from the world unless you create a Service to expose them, either a LoadBalancer or a NodePort. If you create a Service of type ClusterIP, it will be available only from inside the cluster. For simplicity, use port forwarding to test your containers; if everything is working, then create a Service to expose them (NodePort or LoadBalancer). The best and more difficult approach is to create an Ingress to handle inbound traffic and routing to the services.
Port forwarding example:
kubectl port-forward redis-master-765d459796-258hz 6379:6379
Replace redis-master-765d459796-258hz with your pod name and use the appropriate port of your container.
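As a side note, kubectl can also forward to a Service or Deployment so you don't have to look up the pod name first; a sketch assuming a service called my-service that exposes port 80:
kubectl port-forward service/my-service 8080:80
# then test from your machine with: curl http://localhost:8080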

My kubernetes AWS NLB integration is not working

I am trying to deploy a service in Kubernetes that is reachable through a Network Load Balancer. I am aware this is an alpha feature at the moment, but I am running some tests. I have a deployment definition that is working fine as is. My service definition without the NLB annotation looks something like this and is working fine:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
    - port: 80
      protocol: TCP
However, when I switch to NLB, even when the load balancer is created and configured "correctly", the target in the AWS target group always appears unhealthy and I cannot access the service via HTTP. This is the service definition:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
    - port: 80
      protocol: TCP
  externalTrafficPolicy: Local
It seems there was a rule missing in the k8s nodes' security group, since the NLB forwards the client IP.
I don't think NLB is the problem.
externalTrafficPolicy: Local
is not supported by kops on AWS, and there are issues with some other K8s distros that run on AWS, due to some AWS limitation.
Try changing it to
externalTrafficPolicy: Cluster
There's an issue where the source IP seen by the application is that of the load balancer instead of the true external client; it can be worked around by using the proxy protocol annotation on the service plus adding some configuration to the ingress controller.
However, there is a second issue: while you can technically hack your way around it and force it to work, it's usually not worth bothering with.
externalTrafficPolicy: Local creates a NodePort /healthz endpoint so the LB sends traffic only to the subset of nodes that actually have service endpoints, instead of to all worker nodes. It's broken on initial provisioning, and the reconciliation loop is broken as well.
https://github.com/kubernetes/kubernetes/issues/80579 describes the problem in more depth.
https://github.com/kubernetes/kubernetes/issues/61486 describes a workaround that forces it to work using a kops hook.
But honestly, you should just stick to externalTrafficPolicy: Cluster, as it's always more stable.
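If the service is already deployed, the policy can be flipped in place with a patch; a sketch using the service name from the question:
kubectl patch service service1 -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'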
There was a bug in the NLB security groups implementation. It's fixed in 1.11.7, 1.12.5, and probably the next 1.13 patch.
https://github.com/kubernetes/kubernetes/pull/68422

How do pods communicate with other pods in a cluster created using Kubernetes

I have created 1 master node and 1 worker node with 2 pods using kops in AWS.
In one pod I have my Oracle database running, and in the other pod I have deployed my Java web application. To run, the Java web application needs to talk to the database pod.
To communicate between the 2 pods in the cluster, I have configured the database pod's IP address in my Java application. I am able to access the application using the cloud provider's public URL, and everything is good in the dev environment. But for a production environment, I cannot keep configuring the database pod's IP address in my Java application pod.
How do people solve this issue? Do you use a pod's IP address to communicate with other pods in Kubernetes, or is there another way for pods to communicate?
Here is what my pods look like in the cloud:
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE
csapp-8cd5d44556-7725f   1/1     Running   2          1d    100.96.1.54   ip-172-56-35-213.us-west-2.compute.internal
csdb-739d459467-92cmh    1/1     Running   0          1h    100.96.1.57   ip-172-27-86-213.us-west-2.compute.internal
Any help or direction on this issue would be appreciated.
To enable communication between two pods, you should use a Service resource of type ClusterIP, since they are in the same cluster.
According to the output of kubectl get pods, you have two tiers:
App tier: csapp-8cd5d44556-7725f
Data tier: csdb-739d459467-92cmh
Below is an example of a Service resource for the data tier, and then how it is used inside the app tier.
apiVersion: v1
kind: Service
metadata:
  name: example-data-tier
spec:
  selector:
    app: csdb # ⚠️ Make sure of this; it should select the csdb-... pod
  ports:
    - name: redis
      protocol: TCP
      port: 6379
  # type defaults to ClusterIP, which keeps the service internal to the cluster
And in the app-tier pod, you should feed the environment variable with the values Kubernetes injects for the Service above:
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: xx
      image: "xxx:v1"
      ports:
        - containerPort: 8080
          protocol: TCP
      env:
        - name: "REDIS_URL"
          value: "redis://$(EXAMPLE_DATA_TIER_SERVICE_HOST):$(EXAMPLE_DATA_TIER_SERVICE_PORT_REDIS)"
If your DB is something other than Redis, you need to consider that when applying this solution.
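As a usage note, cluster DNS also resolves the Service by its name, so the same connection string can be written without relying on the injected variables; a sketch that assumes the app pod runs in the same namespace as example-data-tier and that the DB really is Redis:
env:
  - name: "REDIS_URL"
    value: "redis://example-data-tier:6379" # <service-name>.<namespace>.svc.cluster.local also works across namespaces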