container engine ingress not working - google-cloud-platform

I have a simple container in Google Container Registry which does a few things and then executes a binary, a Go-based server. Here are the contents of the Dockerfile:
FROM debian:stable
WORKDIR /workspace/
COPY key.json .
COPY bin/user-creds .
EXPOSE 1108
ENV GOOGLE_APPLICATION_CREDENTIALS /workspace/key.json
RUN apt-get update \
&& apt-get install -y ca-certificates \
&& chmod +x user-creds
CMD ["./user-creds"]
This container has been tested locally and works perfectly. So, using the Google Cloud Shell, I ran it:
kubectl run user-creds --image=eu.gcr.io/GCLOUD_PROJECT/user-creds:COMMIT_SHA --port=1108
Then, as the docs say, I exposed it as a NodePort:
kubectl expose deployment user-creds --target-port=1108 --type=NodePort
Then I created an ingress with a path to the service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: INGRESS_NAME
  annotations:
    kubernetes.io/ingress.global-static-ip-name: IP_NAME
spec:
  rules:
  - http:
      paths:
      - path: /user/creds/*
        backend:
          serviceName: user-creds
          servicePort: 1108
Then I created the ingress:
kubectl create -f INGRESS_NAME.yaml
The ingress was created and I waited some time. Here are the details of the ingress:
NAME HOSTS ADDRESS PORTS AGE
INGRESS_NAME * IP_ADDRESS 80 38m
But when I go to the actual URL with the path, I get a 502 error.
When I go to any other path I get the default backend 404 error, but when I visit the specific /user/creds/ path I get the 502 error.
To check whether something is wrong with the cluster, or with my specific container, port, or something else, I tried exposing the container as a LoadBalancer, and it works perfectly. The command:
kubectl expose deployment user-creds --target-port=1108 --port=80 --type=LoadBalancer
service details:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP INT_IP_ADDRESS <none> 443/TCP 1h
user-creds LoadBalancer INT_IP_ADDRESS IP_ADDRESS 80:31618/TCP 1m
Result: 200 with the correct response body.
I've been stuck on this for some time now. I tried the ingress with no paths, just user-creds as the default backend, but it still gives the same error.
Any help or suggestion would be appreciated, thanks :)

Finally figured it out: it was to do with the health check. The health check visits / and expects a 200; if it doesn't get one, it marks the backend as unhealthy and returns 502 for every request sent to it. My problem was that my / endpoint would normally return a 400 when called with no specific request parameters.
It was really a human error on my side; the docs even say it specifically: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#remarks
Another thing to consider is that the ingress passes the full path through to the backend, so the server literally needs to listen for /user/creds/ in my case.
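For anyone hitting the same thing: one way to satisfy that health check is to give the container an endpoint that returns 200 and point a readiness probe at it, since GKE can derive the load balancer health check from the pod's readiness probe. A minimal sketch of the probe (the /healthz path is hypothetical; the Go server would need a handler for it that returns 200):
readinessProbe:
  httpGet:
    path: /healthz   # hypothetical endpoint that always returns 200
    port: 1108
  initialDelaySeconds: 5
  periodSeconds: 10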

Related

404 not found for GKE Ingress

I am trying out the Ingress feature in a GKE cluster. These are the steps I followed:
1. Create deployment with below command
kubectl create deployment hello --image=gcr.io/google-samples/hello-app:2.0
2. Exposed the deployment with a Service of type NodePort
kubectl expose deployment hello --port=8080 --type=NodePort
3. My ingress manifest is as follows:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.class: gce
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hello
          servicePort: 8080
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello NodePort 10.0.41.132 <None> 8080:30820/TCP 113m
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
basic-ingress * 35.X.X.X 80 26m
But when I access the external IP using curl, it throws 404 Not Found.
The error below can be seen in the GKE console.
I think I am missing something in the ingress definition. Please guide me on how to fix it.
The image definition has been taken from this guide:
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
I have tried to create the same ingress from scratch (new cluster, new ingress, new service), and I was able to create it and curl it successfully. These were the steps:
1.- Create a cluster (the details don't matter; just create it as you want)
2.- Connect to the cluster and install kubectl -> sudo apt-get install kubectl
3.- kubectl create deployment hello --image=gcr.io/google-samples/hello-app:2.0
4.- kubectl expose deployment hello --port=8080 --type=NodePort
5.- Create the ingress as follows (without annotations), as per Creating an Ingress resource:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: hello
    servicePort: 8080
6.- Review your ingress kubectl get ingress basic-ingress
#cloudshell:$ kubectl get ingress basic-ingress
NAME HOSTS ADDRESS PORTS AGE
basic-ingress * 130.211.xx.xxx 80 5m46s
7.- And now it works when I perform the curl:
#cloudshell:$ curl http://130.211.xx.xxx
Hello, world!
Version: 2.0.0
Hostname: hello-86dbf5b7c6-f7qgl
You were using ingress annotations, which is another way to create ingress services, but a little bit more advanced. My suggestion is to create it as simply as possible first.
Please try it this way and let me know how it goes.
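For what it's worth, on newer clusters where networking.k8s.io/v1 is served, the same minimal default-backend ingress would look roughly like this (a sketch; the backend fields were renamed between v1beta1 and v1):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  defaultBackend:
    service:
      name: hello
      port:
        number: 8080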
The same YAML definitions were failing for me in a Shared VPC. This got resolved after adding the firewall rule below:
gcloud compute firewall-rules create k8s-fw-l7--60cada75751e6d79 --network <SharedVPC> --description "GCE L7 firewall rule" --allow tcp:30000-32767 --source-ranges 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 --target-tags gke-privatetestgkecluster-cf899a18-node --project <Project>
https://cloud.google.com/load-balancing/docs/health-checks

kiali showing unknown traffic when sending through ambassador

I have installed a service mesh (Istio) and am working with Ambassador to route traffic to our application. Whenever I send traffic through the Istio ingress it works fine, and it also works with Ambassador, but when sending through Ambassador, Kiali shows the source as unknown (you can see it in the attached image). This could be related to the fact that Ambassador does not use an Istio sidecar.
The code used to deploy the Ambassador service:
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: ambassador-http
    port: 80
    targetPort: 8080
  selector:
    service: ambassador
---
Is there anything I can add here to make it possible?
Thanks
Yes, it is possible, and here is a detailed guide for this from the Ambassador documentation:
Getting Ambassador Working With Istio
Getting Ambassador working with Istio is straightforward. In this example, we'll use the bookinfo sample application from Istio.
Install Istio on Kubernetes, following the default instructions (without using mutual TLS auth between sidecars)
Next, install the Bookinfo sample application, following the instructions.
Verify that the sample application is working as expected.
By default, the Bookinfo application uses the Istio ingress. To use Ambassador, we need to:
Install Ambassador.
First you will need to deploy the Ambassador ambassador-admin service to your cluster:
It's simplest to use the YAML files we have online for this (though of course you can download them and use them locally if you prefer!).
First, you need to check if Kubernetes has RBAC enabled:
kubectl cluster-info dump --namespace kube-system | grep authorization-mode
If you see something like --authorization-mode=Node,RBAC in the output, then RBAC is enabled.
If RBAC is enabled, you'll need to use:
kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml
Without RBAC, you can use:
kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-no-rbac.yaml
(Note that if you are planning to use mutual TLS for communication between Ambassador and Istio/services in the future, then the order in which you deploy the ambassador-admin service and the ambassador LoadBalancer service below may need to be swapped)
Next you will deploy an ambassador service that acts as a point of ingress into the cluster via the LoadBalancer type. Create the following YAML and put it in a file called ambassador-service.yaml.
---
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: httpbin
spec:
  prefix: /httpbin/
  service: httpbin.org
  host_rewrite: httpbin.org
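Alongside the Mapping, ambassador-service.yaml should also contain the LoadBalancer Service itself that the bullet points below describe; a minimal sketch, essentially the same Service already shown in the question (assuming the default Ambassador port 8080 and the service: ambassador selector used elsewhere in this post):
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  ports:
  - name: ambassador-http
    port: 80
    targetPort: 8080
  selector:
    service: ambassador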
Then, apply it to Kubernetes with kubectl:
kubectl apply -f ambassador-service.yaml
The YAML above does several things:
It creates a Kubernetes service for Ambassador, of type LoadBalancer. Note that if you're not deploying in an environment where LoadBalancer is a supported type (e.g. Minikube), you'll need to change this to a different type of service, e.g., NodePort.
It creates a test route that will route traffic from /httpbin/ to the public httpbin.org HTTP Request and Response service (which provides useful endpoints that can be used for diagnostic purposes). In Ambassador, Kubernetes annotations (see the sketch just below) are used for configuration. More commonly, you'll want to configure routes as part of your service deployment process, as shown in this more advanced example.
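The annotation style mentioned above looks roughly like the following: the Mapping is embedded in a getambassador.io/config annotation on a Service. A sketch with illustrative names:
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: httpbin_mapping
      prefix: /httpbin/
      service: httpbin.org:80
      host_rewrite: httpbin.org
spec:
  ports:
  - port: 80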
You can see if the two Ambassador services are running correctly (and also obtain the LoadBalancer IP address when this is assigned after a few minutes) by executing the following commands:
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ambassador LoadBalancer 10.63.247.1 35.224.41.XX 8080:32171/TCP 11m
ambassador-admin NodePort 10.63.250.17 <none> 8877:32107/TCP 12m
details ClusterIP 10.63.241.224 <none> 9080/TCP 16m
kubernetes ClusterIP 10.63.240.1 <none> 443/TCP 24m
productpage ClusterIP 10.63.248.184 <none> 9080/TCP 16m
ratings ClusterIP 10.63.255.72 <none> 9080/TCP 16m
reviews ClusterIP 10.63.252.192 <none> 9080/TCP 16m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ambassador-2680035017-092rk 2/2 Running 0 13m
ambassador-2680035017-9mr97 2/2 Running 0 13m
ambassador-2680035017-thcpr 2/2 Running 0 13m
details-v1-3842766915-3bjwx 2/2 Running 0 17m
productpage-v1-449428215-dwf44 2/2 Running 0 16m
ratings-v1-555398331-80zts 2/2 Running 0 17m
reviews-v1-217127373-s3d91 2/2 Running 0 17m
reviews-v2-2104781143-2nxqf 2/2 Running 0 16m
reviews-v3-3240307257-xl1l6 2/2 Running 0 16m
Above we see that the external IP assigned to our LoadBalancer is 35.224.41.XX (XX is used to mask the actual value), and that all Ambassador pods are running (Ambassador relies on Kubernetes to provide high availability, and so there should be two small pods running on each node within the cluster).
You can test if Ambassador has been installed correctly by using the test route to httpbin.org to get the external cluster Origin IP from which the request was made:
$ curl 35.224.41.XX/httpbin/ip
{
"origin": "35.192.109.XX"
}
If you're seeing a similar response, then everything is working great!
(Bonus: If you want to use a little bit of awk magic to export the LoadBalancer IP to a variable AMBASSADOR_IP, then you can type export AMBASSADOR_IP=$(kubectl get services ambassador | tail -1 | awk '{ print $4 }') and use curl $AMBASSADOR_IP/httpbin/ip.)
Now you are going to modify the bookinfo demo bookinfo.yaml manifest to include the necessary Ambassador annotations. See below.
---
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: productpage
spec:
  prefix: /productpage/
  rewrite: /productpage
  service: productpage:9080
---
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
The annotation above implements an Ambassador mapping from the '/productpage/' URI to the Kubernetes productpage service running on port 9080 ('productpage:9080'). The 'prefix' mapping URI is taken from the context of the root of your Ambassador service that is acting as the ingress point (exposed externally via port 80 because it is a LoadBalancer) e.g. '35.224.41.XX/productpage/'.
You can now apply this manifest from the root of the Istio GitHub repo on your local file system (taking care to wrap the apply with istioctl kube-inject):
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)
Optionally, delete the Ingress controller from the bookinfo.yaml manifest by typing kubectl delete ingress gateway.
Test Ambassador by going to the IP of the Ambassador LoadBalancer you configured above e.g. 35.192.109.XX/productpage/. You can see the actual IP address again for Ambassador by typing kubectl get services ambassador.
Also, according to the documentation, there is no need for the Ambassador pods to be injected.
Yes, I have already configured all these things; that's what I showed in the attached image, which is taken from the Kiali dashboard. The output I shared is from the Bookinfo application. I have also deployed my own application and it works fine.
But I want to sort out this unknown thing.
I am using an AWS EKS cluster.
A note about Ambassador:
Ambassador should not have the Istio sidecar for two reasons. First, it cannot since running the two separate Envoy instances will result in a conflict over their shared memory segment. The second is Ambassador should not be in your mesh anyway. The mesh is great for handling traffic routing from service to service, but since Ambassador is your ingress point, it should be solely in charge of deciding which service to route to and how to do it. Having both Ambassador and Istio try to set routing rules would be a headache and wouldn't make much sense.
All the traffic coming from a source that is not part of the service mesh is going to be shown as unknown.
See what kiali says about the unknowns:
https://kiali.io/faq/graph/#many-unknown

How to expose kafka brokers ports externally in Kubernetes on AWS private/public VPC

I cannot access Kafka brokers externally (from a public IP address).
I am using https://github.com/Yolean/kubernetes-kafka
It has a very good guide, but I believe their built-in method of exposing ports publicly does not work since I am running this cluster privately in a private/public VPC on AWS.
I believe their built-in method of outside access simply exposes host ports on a private subnet address (is this correct?).
I know I can set up a load balancer per broker and alias a domain to each load balancer. But then I'm incurring extra costs on load balancers.
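A per-broker Service would look something like the sketch below (names and port are illustrative and assume the Yolean brokers run as a StatefulSet named kafka in the kafka namespace), but it multiplies the load balancer cost:
apiVersion: v1
kind: Service
metadata:
  name: broker-0-external   # one such Service per broker
  namespace: kafka
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: kafka-0   # pins the Service to a single broker pod
  ports:
  - port: 9094
    targetPort: 9094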
I have been looking at ingress resources and have successfully set up an nginx controller that routes to different services based on the URL path under the host domain.
However, with nginx I would receive a 503 Service Temporarily Unavailable when curling the URL (the echoserver URL succeeded). So I quickly realised that HTTP requests don't make sense here, not to the brokers anyway.
I'm now stuck on learning nginx and finding a successful way of proxying the requests.
Is there a specific proxy protocol I should use?
This could also be incorrect server.properties config.
When using nginx I had the ingress resource connect to the outside-${BROKER_ID} services (I changed the first one to a ClusterIP service; the others stayed as NodePort). To me this is external DNS mapping to internal IPs, so I would think the default listeners setting in the Kafka server.properties is OK for this? Otherwise, should the listener become the domain aliased to the load balancer? I had tried the domain with the URL path as an advertised listener, but that didn't make any sense to me and resulted in crash loops!
For anyone wanting to look at configs, I'm currently running with the default (kinda, 5 pzoos, no ezoos [they were always stuck as pending]) version of:
https://github.com/Yolean/kubernetes-kafka
This can be set up very quickly on an existing cluster (for AWS):
git clone https://github.com/Yolean/kubernetes-kafka
cd kubernetes-kafka
(AWS) rm configure/!(aws*)
kubectl apply -f configure
kubectl apply -f 00-namespace*
kubectl apply -f rbac*
kubectl apply -f zookeeper
kubectl apply -f kafka
kubectl config set-context $(kubectl config current-context) --namespace=kafka
I am running this version of nginx
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l4.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l4.yaml
The echoserver came from:
https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx
Specific lines used:
kubectl run echoheaders --image=k8s.gcr.io/echoserver:1.4 --replicas=1 --port=8080
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-y
Here is my ingress resource for nginx:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap
  # annotations:
  #   nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: brokers.my-domain.com
    http:
      paths:
      - path: /broker0
        backend:
          serviceName: outside-0
          servicePort: 31100
      - path: /broker1
        backend:
          serviceName: outside-1
          servicePort: 31101
      - path: /broker2
        backend:
          serviceName: outside-2
          servicePort: 31102
      - path: /bar
        backend:
          serviceName: echoheaders-y
          servicePort: 80
      - path: /foo
        backend:
          serviceName: echoheaders-x
          servicePort: 80
EDIT:
I focused on getting external access through load balancers, and somewhat succeeded. Problems can be found here https://serverfault.com/questions/949367/can-connect-to-kafka-but-cannot-consume
Pretty sure nginx isn't going to work as the ingress here? I can't figure out how an HTTP request becomes a TCP request.
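For the record, ingress-nginx can forward raw TCP via its tcp-services ConfigMap instead of HTTP path rules, which sidesteps the HTTP-to-TCP question entirely. A minimal sketch, assuming a broker service named outside-0 on port 9094 in the kafka namespace (names and ports are illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9094": kafka/outside-0:9094   # exposed port -> namespace/service:port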
Moving on to internal kafka streams apps now. Will come back to this when having a separate cluster for kafka streams becomes more necessary.

Flask with Gunicorn on Kubernetes ingress yields 502 nginx error

I have built a flask app that I would like to add to a Kubernetes ingress. Currently, I have 2 questions I cannot seem to get my head around:
In order for the flask app to be able to handle several requests, I figured I would add gunicorn. Do I need this, or can I mitigate it with some kind of automatic horizontal scaling and let the ingress routing layer handle it? I am new to Kubernetes, and perhaps the solution is simpler than what I am trying below.
With the presumption that I do need gunicorn, I have proceeded and added it to the flask Docker image. The problem is that I now get a 502 Bad Gateway error from nginx, and the pod's log has not printed any error. If I create a LoadBalancer service instead of the ClusterIP I use with the ingress, the flask app with gunicorn works fine, just as the flask app does on the ingress without gunicorn. I have no idea why, hence this question. The Dockerfile installs all dependencies to run flask and finishes with:
EXPOSE 8080
CMD ["gunicorn", "--config", "/flaskapp/gunicorn_config.py", "run:app"]
I have configured my ingress like this:
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      ingress.bluemix.net/client-max-body-size: 128m
      ingress.bluemix.net/rewrite-path: serviceName=flask-service rewrite=/;
  spec:
    rules:
    - host: <my-domain>
      http:
        paths:
        - backend:
            serviceName: flask-service
            servicePort: 8080
          path: /flask/
    tls:
    - hosts:
      - <my-domain>
      secretName: <my-secret>
  status:
    loadBalancer:
      ingress:
      - ip: <ip>
The service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: flask-service
  labels:
    app: flask-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: flask
The deployment is also very simple specifying the correct image and port.
Given that I need gunicorn (or similar), how can I solve the 502 Bad Gateway error I get?
IMO, you don't need gunicorn-level scaling (it's overkill), since an HPA will already scale your single-process application instances for you, based on CPU, memory, or custom metrics.
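For reference, a minimal sketch of such an HPA for this app (assuming the deployment is named flask and a metrics server is installed; the name and threshold are illustrative):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: flask-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: flask            # hypothetical deployment name
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70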
The 502 errors look to me more like a gunicorn configuration issue (is there a limit on the workers? Can you set the workers to just 1 to test? How is it scaling inside the container? What are the resource limits on the container?). It's hard to tell without looking at logs or the environment, but it could be that your gunicorn workers are thrashing in the container and thus returning an invalid response. You might want to try --log-level debug on the gunicorn command line.
Hope it helps.

Why doesn't my pod respond to requests on the exposed port?

I've just launched a fairly basic cluster based on the CoreOS kube-aws scripts.
https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html
I've activated the registry add-on, and I have it correctly proxying to my local box so I can push images to the cluster on localhost:5000. I also have the proxy pod correctly loaded on each node so that localhost:5000 will also pull images from that registry.
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry
Then I dockerized a fairly simple Sinatra app to run on my cluster and pushed it to the registry. I also prepared a ReplicationController definition and Service definition to run the app. The images pulled and started no problem, I can use kubectl to get the startup logs from each pod that belongs to the replication group.
My problem is that when I curl the public ELB endpoint for my service, it just hangs.
Things I've tried:
I got the public IP for one of the nodes running my pod and attempted to curl it at the NodePort described in the service description, same thing.
I SSH'd into that node and attempted curl localhost:3000, same result.
Also SSH'd into that node, I attempted to curl <pod-ip>:3000, same result.
ps shows the Puma process running and listening on port 3000.
docker ps on the node shows that the app container is not forwarding any ports to the host. Is that maybe the problem?
The requests must be routing correctly because hitting those IPs at any other port results in a connection refused rather than hanging.
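Another check that could be run from inside the cluster is hitting the Service by its DNS name from a throwaway pod; a minimal sketch, with an arbitrary pod name and curl image:
apiVersion: v1
kind: Pod
metadata:
  name: curl-debug          # throwaway debug pod
  namespace: app
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: curlimages/curl   # any image with curl will do
    args: ["-sv", "http://web-service.app.svc.cluster.local"]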
The Dockerfile for my app is fairly straightforward:
FROM ruby:2.2.4-onbuild
RUN apt-get update -qq && apt-get install -y \
libpq-dev \
postgresql-client
RUN mkdir -p /app
WORKDIR /app
COPY . /app
EXPOSE 3000
ENTRYPOINT ["ruby", "/app/bin/entrypoint.rb"]
Where entrypoint.rb will start a Puma server listening on port 3000.
My replication group is defined like so:
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-controller
  namespace: app
spec:
  replicas: 2
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      volumes:
      - name: secrets
        secret:
          secretName: secrets
      containers:
      - name: app
        image: localhost:5000/app:v2
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        env:
        - name: DATABASE_NAME
          value: app_production
        - name: DATABASE_URL
          value: postgresql://some.postgres.aws.com:5432
        - name: ENV
          value: production
        - name: REDIS_URL
          value: redis://some.redis.aws.com:6379
        volumeMounts:
        - name: secrets
          mountPath: "/etc/secrets"
          readOnly: true
        command: ['/app/bin/entrypoint.rb', 'web']
        ports:
        - containerPort: 3000
And here is my service:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: web
  type: LoadBalancer
Output of kubectl describe service web-service:
Name: web-service
Namespace: app
Labels: <none>
Selector: app=web
Type: LoadBalancer
IP: 10.3.0.204
LoadBalancer Ingress: some.elb.aws.com
Port: <unnamed> 80/TCP
NodePort: <unnamed> 32062/TCP
Endpoints: 10.2.47.3:3000,10.2.73.3:3000
Session Affinity: None
No events.
Edit to add entrypoint.rb and Procfile
entrypoint.rb:
#!/usr/bin/env ruby
db_user_file = '/etc/secrets/database_user'
db_password_file = '/etc/secrets/database_password'
ENV['DATABASE_USER'] = File.read(db_user_file) if File.exists?(db_user_file)
ENV['DATABASE_PASSWORD'] = File.read(db_password_file) if File.exists?(db_password_file)
exec("bundle exec foreman start #{ARGV[0]}")
Procfile:
web: PORT=3000 bundle exec puma
message_worker: bundle exec sidekiq -q messages -c 1 -r ./config/environment.rb
email_worker: bundle exec sidekiq -q emails -c 1 -r
There was nothing wrong with my Kubernetes set up. It turns out that the app was failing to start because the connection to the DB was timing out due to some unrelated networking issue.
For anyone curious: don't launch anything external to Kubernetes in the 10.x.x.x IP range (e.g. RDS, Elasticache, etc). Long story short, Kubernetes currently has an IPTables masquerade rule hardcoded that messes up communication with anything in that range that isn't part of the cluster. See the details here.
What I ended up doing was creating a separate VPC for my data stores on a different IP range and peering it with my Kubernetes VPC.