I built an API in Django and used docker-compose to orchestrate the Redis and Celery services. Now I would like to move the API to a Kubernetes cluster (AKS). However, when I run it in the cluster I get an error saying Python can't find manage.py. I used the Kompose tool to generate the Kubernetes manifest YAML. Here are my Dockerfile, docker-compose.yml, and Kubernetes manifest files:
# docker-compose.yml
version: '3'
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python3 ./manage.py makemigrations &&
python3 ./manage.py migrate &&
python3 ./manage.py runserver 0.0.0.0:8000"
The Dockerfile
# Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir /app
COPY ./app /app
WORKDIR /app
And the Kubernetes manifest:
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yml -o kb_manifests.yaml
kompose.version: 1.22.0 (955b78124)
creationTimestamp: null
labels:
io.kompose.service: app
name: app
spec:
ports:
- name: "8000"
port: 8000
targetPort: 8000
selector:
io.kompose.service: app
status:
loadBalancer: {}
- apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yml -o kb_manifests.yaml
kompose.version: 1.22.0 (955b78124)
creationTimestamp: null
labels:
io.kompose.service: app
name: app
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: app
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yml -o kb_manifests.yaml
kompose.version: 1.22.0 (955b78124)
creationTimestamp: null
labels:
io.kompose.service: app
spec:
containers:
- args:
- sh
- -c
- |-
python manage.py makemigrations &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000
image: <image pushed to an Azure Container Registry (ACR)>
name: app
ports:
- containerPort: 8000
resources: {}
volumeMounts:
- mountPath: /app
name: app-claim0
restartPolicy: Always
volumes:
- name: app-claim0
persistentVolumeClaim:
claimName: app-claim0
status: {}
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: app-claim0
name: app-claim0
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
status: {}
Error log:
$ kubectl logs app-6fc488bf56-hb8g9 --previous
# python: can't open file 'manage.py': [Errno 2] No such file or directory
I think your problem is with the volumes section of your Kubernetes config.
In your docker-compose config, you have a volume mount:
volumes:
- ./app:/app
This is great for local development: it takes the copy of the code on your machine and overlays it on the Docker image, so the changes you make locally are reflected in the running container, allowing runserver to see file changes and reload the Django server as needed.
This is less great in Kubernetes. When you're running in production, you want to be using the code that's been baked into the image. As this is currently configured, I think you're creating an empty PersistentVolumeClaim and then mounting it on top of your /app directory in the running container. Since it's empty, there is no manage.py file to be found.
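If you want to confirm that manage.py really is baked into the image, you can run the image on its own, without any volume, and list the working directory. This is just a quick check; substitute your actual ACR image reference:
$ kubectl run manage-check --rm -it --restart=Never --image=<your ACR image> --command -- ls /app
If manage.py shows up in that listing, the image is fine and the empty volume mount is what's hiding it.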
Try making your Kubernetes configuration look like this:
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yml -o kb_manifests.yaml
kompose.version: 1.22.0 (955b78124)
creationTimestamp: null
labels:
io.kompose.service: app
name: app
spec:
ports:
- name: "8000"
port: 8000
targetPort: 8000
selector:
io.kompose.service: app
status:
loadBalancer: {}
- apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yml -o kb_manifests.yaml
kompose.version: 1.22.0 (955b78124)
creationTimestamp: null
labels:
io.kompose.service: app
name: app
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: app
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yml -o kb_manifests.yaml
kompose.version: 1.22.0 (955b78124)
creationTimestamp: null
labels:
io.kompose.service: app
spec:
containers:
- args:
- sh
- -c
- |-
python manage.py makemigrations &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000
image: <image pushed to an Azure Container Registry (ACR)>
name: app
ports:
- containerPort: 8000
resources: {}
restartPolicy: Always
status: {}
That should be the same as what you have, minus any references to volumes.
If that works, then you're off to the races. Once it's up and running, take a look at the Django deployment guide for some other settings you should configure. In particular, take a look at gunicorn (see the sketch below); runserver isn't your best path forward for production deploys.
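As a rough sketch of what that could look like, assuming you add gunicorn to requirements.txt and that your project's WSGI module lives at config.wsgi (adjust that to your actual project name), the container args might become:
- args:
  - sh
  - -c
  - |-
    python manage.py migrate &&
    gunicorn --bind 0.0.0.0:8000 --workers 3 config.wsgi:application
makemigrations is normally run during development and the resulting migration files committed, so it is left out of the production command here.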
Related
I am new to Kubernetes. By reading some blogs and documentation I have successfully created an EKS cluster. I am using an ALB (layer-7 load balancing) for my Django app, and for controlling the routes/paths I am using the ALB ingress controller. But I am unable to serve the static content for the Django admin. I know that I need a web server (Nginx) to serve my static files, but I'm not sure how to configure it to do so.
Note: I don't want to use WhiteNoise.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "backend-ingress"
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/subnets: subnet-1, subnet-2, subnet-3
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-southeast-1:***:certificate/*
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
labels:
app: stage
spec:
rules:
- host: *.somedomain.com
http:
paths:
- path: /*
backend:
serviceName: backend-service
servicePort: 8000
This is the ingress YAML I am using. But whenever I visit my Django admin, it's not loading the CSS and JS files.
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: server-dashboard-backend
labels:
app: backend
spec:
replicas: 2
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
volumes:
- name: staticfiles
emptyDir: {}
containers:
- name: server-dashboard
image: *.dkr.ecr.ap-southeast-1.amazonaws.com/*:4
volumeMounts:
- name: staticfiles
mountPath: /data
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c" , "cp -r /static /data/"]
- name: nginx
image: nginx:stable
ports:
- containerPort: 80
volumeMounts:
- name: staticfiles
mountPath: /data
I solved the problem by creating a pod with the Django backend and an Nginx reverse proxy sharing the static-files volume:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
labels:
app: myapp
spec:
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
volumes:
- name: staticfiles
emptyDir: {}
containers:
- name: nginx
image: ...
ports:
- containerPort: 80
volumeMounts:
- name: staticfiles
mountPath: /data
- name: django
image: ...
ports:
- containerPort: 8000
volumeMounts:
- name: staticfiles
mountPath: /data
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "cp -r /path/to/staticfiles /data/"]
Then point the Service (and the Ingress) at Nginx's port 80.
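For example, a minimal Service for the Deployment above might look like this (the name and labels are assumed to match the myapp example; adjust them to your own):
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - name: http
      port: 80
      targetPort: 80
The Ingress then routes traffic to this Service on port 80, and Nginx serves the static files from the shared volume while proxying the remaining requests to the Django container inside the pod.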
I have solved the problem.
I removed the command ["/bin/sh", "-c", "cp -r /path/to/staticfiles /data/"] because I was mounting at the wrong path. The new deployment file is:
apiVersion: apps/v1
kind: Deployment
metadata:
name: server-dashboard-backend
labels:
app: backend
spec:
replicas: 2
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
volumes:
- name: staticfiles
emptyDir: {}
containers:
- name: server-dashboard
image: *.dkr.ecr.ap-southeast-1.amazonaws.com/*:4
volumeMounts:
- name: staticfiles
mountPath: /usr/src/code/static
- name: nginx
image: nginx:stable
ports:
- containerPort: 80
volumeMounts:
- name: staticfiles
mountPath: /usr/share/nginx/html/static/
I'm trying to deploy a small Django app that creates its own db.sqlite3 database in a Kubernetes pod. Without a persistent volume to save db.sqlite3 it works fine, but when I try to save it in a persistent volume, the pod outputs django.db.utils.OperationalError: unable to open database file.
The first problem I had was when these commands:
python ./manage.py migrate
sh -c "envdir ${ENVDIR} python manage.py collectstatic"
were run in the Dockerfile. After mounting the volume, none of my files were visible. I learned that Kubernetes volumes behave differently from Docker volumes, and my solution was to put the commands in a shell script and execute it in the CMD or ENTRYPOINT. This created the files after mounting, and they were visible, but it still doesn't help with the current problem.
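For reference, the same idea can also be expressed directly as the container command in the Deployment rather than a separate script. This is only a sketch, with the envdir path taken from the snippets in this question, assuming ENVDIR is set as an environment variable on the container; --noinput just avoids the interactive collectstatic prompt:
command: ["sh", "-c", "python ./manage.py migrate && envdir ${ENVDIR} python manage.py collectstatic --noinput && python ./manage.py runserver 0.0.0.0:8000"]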
I tried using a persistent volume, tried a hostPath-defined volume, and even tried an initContainer that has the same image as the app, except that it first sets the permissions of db.sqlite3 to 777 and executes the two commands above, but it can't even start for some reason.
Here's the deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
labels:
app: myapp
namespace: app-prod
spec:
replicas: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
securityContext:
runAsUser: 0
runAsGroup: 0
fsGroup: 0
fsGroupChangePolicy: "OnRootMismatch"
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: role
operator: In
values:
- on-demand-worker
terminationGracePeriodSeconds: 30
containers:
- name: notation
image: myimage
imagePullPolicy: Always
command: ["bash", "-c", "python ./manage.py runserver 0.0.0.0:8000"]
ports:
- containerPort: 8000
name: http
volumeMounts:
- name: myapp-data
mountPath: /app/db.sqlite3
subPath: db.sqlite3
securityContext:
privileged: true
#securityContext:
#allowPrivilegeEscalation: false
initContainers:
- name: sqlite-data-permission-fix
image: myimage
command: ["bash","-c","chmod -R 777 /app && python ./manage.py migrate && envdir ./notation/.devenv python manage.py collectstatic"]
volumeMounts:
- name: myapp-data
mountPath: /app/db.sqlite3
subPath: db.sqlite3
resources: {}
volumes:
- name: myapp-data
persistentVolumeClaim:
claimName: notation
The permissions I see on the host look good (777, as I wanted), so I really don't know what the problem is. Any help would be appreciated.
I have a deployment on Kubernetes (AWS EKS), with several environment variables defined in the deployment .yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: myApp
name: myAppName
spec:
replicas: 2
(...)
spec:
containers:
- env:
- name: MY_ENV_VAR
value: "my_value"
image: myDockerImage:prodV1
(...)
If I want to upgrade the pods to another version of the docker image, say prodV2, I can perform a rolling update which replaces the pods from prodV1 to prodV2 with zero downtime.
However, if I add another env variable, say MY_ENV_VAR_2: "my_value_2", and perform the same rolling update, I don't see the new env var in the container. The only solution I found to get both env vars was to manually execute:
kubectl delete deployment myAppName
kubectl create deployment -f myDeploymentFile.yaml
As you can see, this is not zero downtime, as deleting the deployment terminates my pods and introduces downtime until the new deployment is created and the new pods start.
Is there a way to better do this? Thank you!
Here is an example you might want to test yourself:
Notice that I used spec.strategy.type: RollingUpdate.
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx
spec:
replicas: 2
strategy:
type: RollingUpdate
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
env:
- name: MY_ENV_VAR
value: "my_value"
Apply:
➜ ~ kubectl apply -f deployment.yaml
➜ ~ kubectl exec -it nginx-<hash> env | grep MY_ENV_VAR
MY_ENV_VAR=my_value
Notice the env is set as in the YAML.
Now we edit the env in deployment.yaml:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx
spec:
replicas: 2
strategy:
type: RollingUpdate
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
env:
- name: MY_ENV_VAR
value: "my_new_value"
Apply and wait for it to update:
➜ ~ kubectl apply -f deployment.yaml
➜ ~ kubectl get po --watch
# after it updated use Ctrl+C to stop the watch and run:
➜ ~ kubectl exec -it nginx-<new_hash> env | grep MY_ENV_VAR
MY_ENV_VAR=my_new_value
As you should see, the env changed. That is pretty much it.
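Instead of watching the pods by hand, you can also let kubectl block until the rollout finishes:
➜ ~ kubectl rollout status deployment/nginx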
Following the steps outlined here, I created a basic Quorum network with 4 nodes and IBFT consensus. I then created a Docker image for each of the nodes, copying the contents of each node's directory onto the image. The image was created from the official quorumengineering/quorum image, and when started as a container it executes the geth command. An example Dockerfile follows (different nodes have different rpcports/ports):
FROM quorumengineering/quorum
WORKDIR /opt/node
COPY . /opt/node
ENTRYPOINT []
CMD PRIVATE_CONFIG=ignore nohup geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22001 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul --rpcvhosts="*" --emitcheckpoints --port 30304
I then made a docker-compose file to run the images.
version: '2'
volumes:
qnode0-data:
qnode1-data:
qnode2-data:
qnode3-data:
services:
qnode0:
container_name: qnode0
image: <myDockerHub>/qnode0
ports:
- 22000:22000
- 30303:30303
volumes:
- qnode0-data:/opt/node
qnode1:
container_name: qnode1
image: <myDockerHub>/qnode1
ports:
- 22001:22001
- 30304:30304
volumes:
- qnode1-data:/opt/node
qnode2:
container_name: qnode2
image: <myDockerHub>/qnode2
ports:
- 22002:22002
- 30305:30305
volumes:
- qnode2-data:/opt/node
qnode3:
container_name: qnode3
image: <myDockerHub>/qnode3
ports:
- 22003:22003
- 30306:30306
volumes:
- qnode3-data:/opt/node
When running these images locally with docker-compose, the nodes start and I can even see the created blocks via a blockchain explorer. However, when I try to run this in a Kubernetes cluster, either locally with minikube or on AWS, the nodes do not start but crash.
To deploy on Kubernetes I made the following three YAML files for each node (12 files in total):
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: qnode0
name: qnode0
spec:
replicas: 1
selector:
matchLabels:
app: qnode0
strategy:
type: Recreate
template:
metadata:
labels:
app: qnode0
spec:
containers:
- image: <myDockerHub>/qnode0
imagePullPolicy: ""
name: qnode0
ports:
- containerPort: 22000
- containerPort: 30303
resources: {}
volumeMounts:
- mountPath: /opt/node
name: qnode0-data
restartPolicy: Always
serviceAccountName: ""
volumes:
- name: qnode0-data
persistentVolumeClaim:
claimName: qnode0-data
status: {}
service.yaml
apiVersion: v1
kind: Service
metadata:
name: qnode0-service
spec:
selector:
app: qnode0
ports:
- name: rpcport
protocol: TCP
port: 22000
targetPort: 22000
- name: netlistenport
protocol: TCP
port: 30303
targetPort: 30303
persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: qnode0-data
name: qnode0-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
status: {}
When trying to run on a Kubernetes cluster, each node runs into this error:
ERROR[] Cannot start mining without etherbase err="etherbase must be explicitly specified"
Fatal: Failed to start mining: etherbase missing: etherbase must be explicitly specified
which does not occur when running locally with docker-compose. After examining the logs, I saw a difference between how the nodes start up locally with docker-compose and on Kubernetes, shown in the following lines:
Locally, I see the following lines in each node's output:
INFO [] Initialising Ethereum protocol name=istanbul versions="[99 64]" network=10 dbversion=7
...
DEBUG[] InProc registered namespace=istanbul
On Kubernetes (either in minikube or on AWS), I see these lines instead:
INFO [] Initialising Ethereum protocol name=eth versions="[64 63]" network=10 dbversion=7
...
DEBUG[] IPC registered namespace=eth
DEBUG[] IPC registered namespace=ethash
Why is this happening? What is the significance of name=istanbul vs name=eth? My common-sense logic says the error happens because name=eth is used instead of name=istanbul, but I don't know the significance of this and, more importantly, I don't know what I did to inadvertently affect the Kubernetes deployment.
Any ideas how to fix this?
EDIT
I tried to address what David Maze mentioned in his comment, i.e. that the node directory gets overwritten, so I created a new directory in the image with
RUN mkdir /opt/nodedata/
and used that as the volume mount path in Kubernetes. I also used StatefulSets instead of Deployments. The relevant YAML follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: qnode0
spec:
serviceName: qnode0
replicas: 1
selector:
matchLabels:
app: qnode0
template:
metadata:
labels:
app: qnode0
spec:
containers:
- image: <myDockerHub>/qnode0
imagePullPolicy: ""
name: qnode0
ports:
- protocol: TCP
containerPort: 22000
- protocol: TCP
containerPort: 30303
volumeMounts:
- mountPath: /opt/nodedata
name: qnode0-data
restartPolicy: Always
serviceAccountName: ""
volumes:
- name: qnode0-data
persistentVolumeClaim:
claimName: qnode0-data
Changing the volume mount immediately produced the correct behaviour of
INFO [] Initialising Ethereum protocol name=istanbul
However, I had networking issues, which I solved by using the environment variables that kubernetes sets for each service, which include the IP each service is running at, e.g.:
QNODE0_PORT_30303_TCP_ADDR=172.20.115.164
I also changed my kubernetes services a little, as follows:
apiVersion: v1
kind: Service
metadata:
labels:
app: qnode0
name: qnode0
spec:
ports:
- name: "22000"
port: 22000
targetPort: 22000
- name: "30303"
port: 30303
targetPort: 30303
selector:
app: qnode0
Using the environment variables to properly initialise the quorum files solved the networking problem.
However, when I delete my stateful sets and my services with:
kubectl delete -f <my_statefulset_and_service_yamls>
and then apply them again:
kubectl apply -f <my_statefulset_and_service_yamls>
Quorum starts from scratch, i.e. it does not continue block creation from where it stopped but starts from block 1 again, as follows:
Inserted new block number=1 hash=1c99d0…fe59bb
So the state of the blockchain is not saved, as was my initial fear. What should I do to address this?
I have an application with two images, one running NGINX and the other one running Flask/uWSGI. It works as expected using docker-compose. Now I try to deploy my application on a Kubernetes cluster, but I am unable to make a connection between NGINX and my uWSGI application server.
The logs of my nginx deployment say:
2019/09/13 11:29:53 [error] 6#6: *21 upstream timed out (110: Connection timed out) while connecting to upstream, client: 10.244.0.1, server: , request: "GET / HTTP/1.1", upstream: "uwsgi://10.0.232.218:8080", host: "52.166.xxxxxx"
However, it appears my flask service is running correctly.
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default flask NodePort 10.0.232.218 <none> 8080:32043/TCP 23m
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 19h
default nginx LoadBalancer 10.0.162.165 52.166.xxxxx 80:31669/TCP 23m
I have tried changing the settings in the services, but without success. I used kompose to convert my docker-compose.yaml into *-deployment.yaml and *-service.yaml files. Notably, kompose did not generate a flask-service.yaml during the process, so I had to create it manually.
Here is my code:
docker-compose.yaml
version: "3"
services:
flask:
image: registry.azurecr.io/model_flask
build: ./flask
container_name: flask
restart: always
environment:
- APP_NAME=fundamentalmodel
expose:
- 8080
nginx:
image: registry.azurecr.io/model_nginx
build: ./nginx
container_name: nginx
restart: always
ports:
- "80:80"
The Dockerfile for Flask
# Dockerfile to build glpk container images
# Based on Ubuntu
# Set the base image to Ubuntu
FROM ubuntu:latest
# Switch to root for install
USER root
# Install wget
RUN apt-get update -y && apt-get install -y \
wget \
build-essential \
python3 \
python3-pip \
python3.6-dev \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
RUN pip3 install setuptools Cython numpy wheel
RUN pip3 install uwsgi
# Create a user
ENV HOME /home/user
RUN useradd --create-home --home-dir $HOME user \
&& chmod -R u+rwx $HOME \
&& chown -R user:user $HOME
# switch back to user
WORKDIR $HOME
USER user
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN pip3 install -r requirements.txt
RUN pip3 install .
RUN export LC_ALL=C.UTF-8
RUN export LANG=C.UTF-8
RUN export FLASK_APP=server
CMD ["uwsgi", "uwsgi.ini"]
My nginx Dockerfile
# Use the Nginx image
FROM nginx
# Remove the default nginx.conf
RUN rm /etc/nginx/conf.d/default.conf
# Replace with our own nginx.conf
COPY nginx.conf /etc/nginx/conf.d/
Now my Kubernetes configuration files:
flask-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.17.0 (a74acad)
creationTimestamp: null
labels:
io.kompose.service: flask
name: flask
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: flask
spec:
containers:
- env:
- name: APP_NAME
value: fundamentalmodel
image: registry.azurecr.io/fundamentalmodel_flask
name: flask
resources: {}
ports:
- containerPort: 8080
restartPolicy: Always
status: {}
flask-service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
name: flask
name: flask
spec:
type: NodePort
selector:
app: flask
ports:
- name: http
port: 8080
targetPort: 8080
nginx-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose.exe convert
kompose.version: 1.17.0 (a74acad)
creationTimestamp: null
labels:
io.kompose.service: nginx
name: nginx
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: nginx
spec:
containers:
- image: registry.azurecr.io/fundamentalmodel_nginx
name: nginx
ports:
- containerPort: 80
resources: {}
restartPolicy: Always
status: {}
nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose.exe convert
kompose.version: 1.17.0 (a74acad)
creationTimestamp: null
labels:
io.kompose.service: nginx
name: nginx
spec:
type: LoadBalancer
ports:
- name: "80"
port: 80
targetPort: 80
selector:
io.kompose.service: nginx
status:
loadBalancer: {}
nginx.conf
server {
listen 80;
location / {
include uwsgi_params;
uwsgi_pass flask:8080;
}
}
uwsgi.ini
[uwsgi]
wsgi-file = run.py
callable = app
socket = :8080
processes = 4
threads = 2
master = true
chmod-socket = 660
vacuum = true
die-on-term = true
I think the solution here is to change a couple of settings. Your Service should use ClusterIP, as NodePort serves a different purpose (read more here: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport):
apiVersion: v1
kind: Service
metadata:
labels:
name: flask
name: flask
spec:
type: ClusterIP
selector:
app: flask
ports:
- name: http
port: 8080
targetPort: 8080
Also, your uWSGI config should bind the socket to 0.0.0.0:8080:
[uwsgi]
wsgi-file = run.py
callable = app
socket = 0.0.0.0:8080
processes = 4
threads = 2
master = true
chmod-socket = 660
vacuum = true
die-on-term = true
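After applying the updated Service, a quick sanity check (assuming the manifests above) is to make sure the flask Service actually has endpoints:
$ kubectl get endpoints flask
If the ENDPOINTS column is empty, the Service's selector is not matching the labels on the pod template, and nginx will keep timing out until the two agree.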