Connect a Flask Python script to Redis in a K8s cluster

I have a k8s cluster on EC2. I have implemented the guestbook k8s (PHP and Redis) example and it works well:
https://kubernetes.io/docs/tutorials/stateless-application/guestbook/
I have a Flask app, packaged as a Docker image. The app has a main.py script, and inside it I have tried connecting to Redis with the help of the following:
Connecting a flask container to a redis container over kubernetes
These are my Redis services running on EC2:
redis-follower ClusterIP 10.102.44.232 <none> 6379/TCP 16d
redis-leader ClusterIP 10.108.164.219 <none> 6379/TCP 16d
This is my Python script:
from flask import Flask
from redis import Redis, RedisError

app = Flask(__name__)

@app.route("/")
def sample_run_report():
    # Connect to the redis-leader service via cluster-internal DNS
    redis_conn = Redis(host='redis-leader.default.svc.cluster.local', port=6379, db=0)
    redis_conn.set("key", "foo")
    val = redis_conn.get("key")  # returns bytes, e.g. b"foo"
    return val

if __name__ == "__main__":
    app.run(host='0.0.0.0')
I don't get the value; I think my connection to Redis is not successful.
For the deployments I have used the guestbook PHP/Redis YAML:
application/guestbook/redis-leader-deployment.yaml
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: leader
        tier: backend
    spec:
      containers:
      - name: leader
        image: "docker.io/redis:6.0.5"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
This is the service and deployment YAML file I use for the Flask app deployment on EC2:
apiVersion: v1
kind: Service
metadata:
  name: redis-flask-python1-service
spec:
  selector:
    app: redis-flask-python1
  ports:
  - protocol: "TCP"
    port: 6000
    targetPort: 5000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-flask-python1
spec:
  selector:
    matchLabels:
      app: redis-flask-python1
  replicas: 1
  template:
    metadata:
      labels:
        app: redis-flask-python1
    spec:
      containers:
      - name: redis-flask-python1
        image: docker.io/dechenw/redis-flask-python1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000
      imagePullSecrets:
      - name: regcred
How can I resolve this issue?

Related

Kubernetes Istio Gateway & VirtualService Config Flask Example

I'm trying to get some hands-on experience with K8s & Istio. I am using minikube, and I'm trying to deploy a dummy Flask web app. However, for some reason I cannot get the Istio routing working.
E.g.
curl -v -H 'Host: hello.com' 'http://127.0.0.1/' --> 503 Service Unavailable
Do you see any issue in my specs?
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: flask-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "hello.com"
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flaskvirtualservice
spec:
  hosts:
  - "hello.com"
  gateways:
  - flask-gateway
  http:
  - route:
    - destination:
        host: flask.default.svc.cluster.local
        port:
          number: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask
  labels:
    app: flask
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask
  template:
    metadata:
      labels:
        app: flask
    spec:
      containers:
      - name: flask
        image: digitalocean/flask-helloworld
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: flask-service
spec:
  selector:
    app.kubernetes.io/name: flask
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 5000
Thanks for your support here!
Cheers
EDIT:
Here is the updated service definition:
apiVersion: v1
kind: Service
metadata:
  name: flask
  labels:
    app: flask
    service: flask
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: flask
I would suggest you look at this sample from the Istio repo:
https://github.com/istio/istio/tree/master/samples/helloworld
This helloworld app is a Flask app, and you can find the Python source code in src.
Syntax
In your YAML you do not have --- between your Gateway and VirtualService, so the two objects are not parsed as separate documents.
DNS
You also don't mention DNS, i.e. you need to make sure that the box you are running curl on can resolve your domain hello.com to the Istio ingress gateway IP. Since you are using minikube, you could add an entry to your OS hosts file.
Routability
The client also needs the ability to send requests to the gateway, i.e. if you are outside the cluster you need an external IP, or you can do something with kubectl port-forward ...
I hope this helps you sort things out!

Ingress Controller produces 502 Bad Gateway on every other request

I have a Kubernetes ingress controller terminating my SSL, with an ingress resource handling two routes: first my frontend SPA app, and second the backend API. When I hit each frontend and backend service directly, they perform flawlessly, but when I call the ingress controller, both frontend and backend services alternate between producing the correct result and a 502 Bad Gateway.
To me it smells like my ingress resource has some sort of path conflict that I'm not sure how to debug.
Reddit suggested that it could be a label and selector mismatch between my services and deployments, which I believe I checked thoroughly. They also mentioned an "api layer deployment and a worker layer deployment [that] both share a common app label and your PDB selects that app label with a 50% availability for example", which I haven't run down because I don't quite understand it.
I also realize SSL could play a role in gateway issues; however, my certificates appear to be working when I hit the https:// port of the ingress controller.
frontend-deploy:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: my-performance-front
      tier: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: my-performance-front
        tier: frontend
    spec:
      containers:
      - name: my-performance-frontend
        image: "<my current image and location>"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]
      imagePullSecrets:
      - name: regcred
frontend-svc:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: ingress-nginx
spec:
  selector:
    app: my-performance-front
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
backend-deploy:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: my-performance-back
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: my-performance-back
        tier: backend
    spec:
      containers:
      - name: my-performance-backend
        image: "<my current image and location>"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]
      imagePullSecrets:
      - name: regcred
backend-svc:
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: ingress-nginx
spec:
  selector:
    app: my-performance-back
    tier: backend
  ports:
  - protocol: TCP
    name: "http"
    port: 80
    targetPort: 8080
  type: LoadBalancer
ingress-rules:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-rules
  namespace: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
    # nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
  - http:
      paths:
      - path: /(api/v0(?:/|$).*)
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
Any ideas, critiques, or experiences are welcomed and appreciated!!!

DisallowedHost Django deployment in Kubernetes cluster: Invalid HTTP_HOST header

I have a Django deployment for a frontend service in my Azure Kubernetes cluster, with some basic configuration. Note that the same question applies to my local Minikube cluster. I fetch my Django frontend container image from my remote container registry and expose port 8010. My service configuration is quite simple as well.
frontend.deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-v1
  labels:
    app: frontend-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend-v1
  template:
    metadata:
      labels:
        app: frontend-v1
    spec:
      containers:
      - name: frontend-v1
        imagePullPolicy: Always
        image: yourremotename.azurecr.io/frontend-remote:v1
        ports:
        - containerPort: 8010
      imagePullSecrets:
      - name: acr-secret
frontend.service.yaml
kind: Service
apiVersion: v1
metadata:
  name: frontend-v1
spec:
  selector:
    app: frontend-v1
  ports:
  - protocol: TCP
    port: 8010
    targetPort: 8010
  type: NodePort
Now, when I access my deployed frontend service in the browser, i.e. http://172.17.194.253:31436, with Django's setting DEBUG = True, I get the error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/exception.py", line 34, in inner
    response = get_response(request)
  File "/usr/local/lib/python3.6/dist-packages/django/utils/deprecation.py", line 93, in __call__
    response = self.process_request(request)
  File "/usr/local/lib/python3.6/dist-packages/django/middleware/common.py", line 48, in process_request
    host = request.get_host()
  File "/usr/local/lib/python3.6/dist-packages/django/http/request.py", line 122, in get_host
    raise DisallowedHost(msg)
Exception Type: DisallowedHost at /
Exception Value: Invalid HTTP_HOST header: '172.17.194.253:31436'. You may need to add '172.17.194.253' to ALLOWED_HOSTS.
But how can I bind the dynamically created HostIp of the pod to Django's ALLOWED_HOSTS?
Since Kubernetes 1.7 it is possible to request the HostIp of the pod in your kubernetes deployment file.(1)
First adjust the deployment file to set the required environment variables for the HostIp. In the scenario beneath, I set both POD_IP and HOST_IP, as they are different. You can inject a variety of Kubernetes application data variables using environment variables in your Kubernetes deployment files; for more info about this topic, look here.
frontend.deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-v1
  labels:
    app: frontend-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend-v1
  template:
    metadata:
      labels:
        app: frontend-v1
    spec:
      containers:
      - name: frontend-v1
        imagePullPolicy: Always
        image: yourremotename.azurecr.io/frontend-remote:v1
        ports:
        - containerPort: 8010
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      imagePullSecrets:
      - name: acr-secret
Now, in your Django settings, adjust the ALLOWED_HOSTS configuration to point to the HOST_IP environment variable.
settings.py
import os
...
ALLOWED_HOSTS = [os.environ.get('HOST_IP'), '127.0.0.1']
....
Note that this allows the pod's HostIP as well as localhost, for local development purposes.
Warning! Some blog posts or tutorials advise you to set ALLOWED_HOSTS = ['*'] to accept all host IPs, but this is a serious security loophole. Don't do this!
Now redeploy your pod and your Django application should run smoothly again.
Or, simply add Host: yourdomain.com in the readinessProbe header. You can also customise the default path.
readinessProbe:
  httpGet:
    path: /
    port: 8010 # Must be same as containerPort
    httpHeaders:
    - name: Host
      value: yourdomain.com

AWS ingress controller setup

I have tried to expose my microservice to the internet with AWS EC2, using the deployment and service YAML files below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy: {}
  template:
    metadata:
      labels:
        app: my-app
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
      - name: my-app
        image: XXX
        ports:
        - name: my-app
          containerPort: 3000
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - name: my-app
    nodePort: 32000
    port: 3000
    targetPort: 3000
  type: NodePort
I also created an ingress resource:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.myApp.com
    http:
      paths:
      - path: /my-app
        backend:
          serviceName: my-app
          servicePort: 80
As the last step I opened port 80 in the AWS dashboard. How should I choose the ingress controller to realize my intent?
servicePort should be 3000, the same as port in your service object.
Note, however, that setting up a cluster with kubeadm on AWS is not the best way to go: EKS provides optimized, well-configured clusters with external load balancers and ingress controllers.

Why can Kubernetes not route a service on public ELB on AWS?

I've been trying to follow the example (guestbook) to reproduce another application which has to be available on a public interface.
This is my Kubernetes configuration (YAML):
apiVersion: v1
kind: Service
metadata:
  name: my-app-server
  labels:
    app: my-app-server
    tier: backend
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app-server
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app-server
        tier: backend
    spec:
      containers:
      - name: ppm-server
        image: docker/container:tag
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: myregistrykey
Not sure why this is not working.
The guestbook all-in-one example seems to work just fine, though.
I tried using the exact same configuration file, just changing the variables.