Why does my Django web application that displays graphs not load on Kubernetes? - django

I have a Django web application that displays forecast graphs using the machine learning library sktime and the Plotly graphing library. It runs fine on my local machine. However, when I run it on Kubernetes it doesn't load; the web page just stays loading forever. I have tried changing the resources in my YAML files by increasing CPU and memory to 2000m and 1000Mi, respectively. Unfortunately that does not fix the problem. Right now I start my application with the minikube command: minikube service --url mywebsite. I don't know whether that is the proper way to start it. Does anyone know?
Service + Deployment YAML:
apiVersion: v1
kind: Service
metadata:
  name: mywebsite
spec:
  type: LoadBalancer
  selector:
    app: mywebsite
  ports:
    - protocol: TCP
      name: http
      port: 8743
      targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebsite
spec:
  selector:
    matchLabels:
      app: mywebsite
  template:
    metadata:
      labels:
        app: mywebsite
    spec:
      containers:
        - name: mywebsite
          image: mywebsite
          imagePullPolicy: Never
          ports:
            - containerPort: 8000
          resources:
            requests:
              cpu: 200m
              memory: 100Mi
            limits:
              memory: "1Gi"
              cpu: "200m"

Posting an answer with a general solution, as no further details or logs were provided.
According to the official minikube documentation on accessing apps, minikube supports both NodePort and LoadBalancer services:
There are two major categories of services in Kubernetes: NodePort and LoadBalancer
To access a NodePort service you should use the minikube service --url <service-name> command - check this.
To access a LoadBalancer service you should use the minikube tunnel command - check this.
As the LoadBalancer type also exposes a NodePort, it should work with the minikube service command as you tried. I installed minikube with the Docker driver, created a sample deployment, and then created a sample LoadBalancer service for this deployment. After that I ran minikube service --url <my-service> and the output gave me an address like:
http://192.168.49.2:30711
30711 is the node port. Accessing this address works fine.
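For completeness, that address can also be checked directly from the host, for example:
curl http://192.168.49.2:30711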
Why doesn't it work for you? Some possible reasons:
You are not using Linux - on other OSes there are some limitations for minikube - e.g. check this answer for Mac. It also depends on which minikube driver you are using.
Your pods are not running - you can check this with the kubectl get pods command (see the diagnostic sketch below)
You specified wrong ports in the definitions
Something is wrong with your image
Also check the "Troubleshooting" section on the minikube website.
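To work through that checklist, here is a rough diagnostic sketch using standard kubectl commands (the mywebsite names come from the YAML above; the rest is an assumption about how the app is wired):
# are the pods running, and if not, why?
kubectl get pods -l app=mywebsite
kubectl describe pod -l app=mywebsite
kubectl logs -l app=mywebsite
# does the Service have endpoints, i.e. do selector and labels match?
kubectl get endpoints mywebsite
# does the app respond inside the cluster, bypassing the Service?
kubectl port-forward deployment/mywebsite 8000:8000
# then, from another terminal: curl http://localhost:8000/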

Related

Exposing a web application to the global network using Kubernetes

I'm new to K8s, trying some exercises for the first time.
I'm trying to expose a simple web app (nginx) to the outer network. I'm working on an EC2 instance with an Elastic IP (for a static IP address).
My deployment.yml file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
        - image: "nginx:latest"
          name: nginx
          ports:
            - containerPort: 80
After running the commands:
kubectl apply -f deployment.yml
kubectl expose deployment nginx-deployment --name my-service --port 8080 --target-port=80 --type=NodePort
I would expect to be able to address this simple app by elastic-ip:port (in my case, 8080), but I can't connect.
I've tried to see the details of my app via the command:
kubectl get services my-service
and got this:
NAME         TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
my-service   NodePort   10.99.98.56   <none>        8080:32725/TCP   26m
I've also tried opening ALL OF THE PORTS on my instance to check if there's any connection. What I did manage to do is retrieve the node's internal IP address with:
kubectl get nodes -o wide
and then, by adding the port number (32725) in a curl command, I managed to get the nginx base HTML page.
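For reference, the sequence described above looks roughly like this (the node port 32725 comes from the service output above; the node IP is a placeholder):
kubectl get nodes -o wide                  # shows the node's INTERNAL-IP
curl http://<node-internal-ip>:32725       # returns the nginx welcome page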
My question is this: why couldn't I get the nginx base page via the Elastic IP?
And how can I access my simple app?

Google-managed Cloud Run container fails to start when deployed from the CLI, but the same image works when deployed manually via the dashboard

So I have this issue: I have a (currently) local-only DevOps process, which is just a series of bash commands that build a Docker container for a Node.js application, upload it to Google Container Registry, and then deploy it to Google Cloud Run from there.
The issue I'm having is that the deployment step always fails with:
ERROR: (gcloud.beta.run.services.replace) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable.
There is nothing in the logs when I follow the link or when I manually try to access the logs for that service in Cloud Run.
At some point I had a code issue that prevented the container from starting, and I could see that error in the Cloud Run logs.
I'm using the following command & yaml to deploy:
gcloud beta run services replace .gcp/cloud_run/auth.yaml
and my yaml file:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: auth-service
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my_project_id/auth-service
      serviceAccountName: abc@my_project_id.iam.gserviceaccount.com
EDIT:
I have since pulled the yaml configuration of the service that I deployed manually, and it looks something like this:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  annotations:
    client.knative.dev/user-image: gcr.io/my_project_id/auth-service
    run.googleapis.com/ingress: all
    run.googleapis.com/ingress-status: all
    run.googleapis.com/launch-stage: BETA
  labels:
    cloud.googleapis.com/location: europe-west2
  name: auth-service
  namespace: "1032997338375"
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: "2"
        run.googleapis.com/client-name: cloud-console
        run.googleapis.com/sandbox: gvisor
      name: auth-service-00002-nux
    spec:
      containerConcurrency: 80
      containers:
        - image: gcr.io/my_project_id/auth-service
          ports:
            - containerPort: 3000
          resources:
            limits:
              cpu: 1000m
              memory: 512Mi
      serviceAccountName: abc@my_project_id.iam.gserviceaccount.com
      timeoutSeconds: 300
  traffic:
    - latestRevision: true
      percent: 100
I changed the name to that of the service I'm trying to deploy from the command line, deployed it as a new service just like before, and it worked right away without further modifications.
However, I'm not sure which configurations my initial file was missing, as the documentation on the YAML for Cloud Run deployments doesn't specify a minimum configuration.
Any ideas which configs I can keep & which can be filtered out?
If you check both yaml files, you can see the property containerPort in the file generated by the console.
By default, Cloud Run performs a health check and expects something to be listening on port 8080 (or, in this example, on the port that Docker/Cloud Run passes to the container).
In your case the container listens on port 3000; if you don't declare that port, Cloud Run can't run your image because it doesn't detect anything on 8080.
You can define the yaml like this:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: auth-service
spec:
  template:
    spec:
      containers:
        - image: gcr.io/myproject/myimage:latest
          ports:
            - containerPort: 3000
      serviceAccountName: abc@my_project_id.iam.gserviceaccount.com
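As a quick sketch of redeploying and verifying the change (the file path and service name come from the question, the region from the exported yaml above):
gcloud beta run services replace .gcp/cloud_run/auth.yaml
gcloud run services describe auth-service --region europe-west2 --format yaml
# the ports: section should now list containerPort: 3000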

Configure Cloud Run on Anthos to forward HTTP/2

How do you make Cloud Run for Anthos forward incoming HTTP/2 requests to a Cloud Run service as HTTP/2 instead of HTTP/1.1?
I'm using GCP with Cloud Run for Anthos to deploy a Java application that runs a gRPC server. The Cloud Run app is exposed publicly. I have also configured Cloud Run for Anthos with an SSL cert. When I try to use a gRPC client to call my service, the client sends the request over HTTP/2, which the load balancer accepts, but when the request is forwarded to my Cloud Run service (a Java application running a gRPC server), it comes in as HTTP/1.1 and gets rejected by the gRPC server. I assume somewhere between the Kubernetes load balancer and my pod the request is being forwarded as HTTP/1.1, but I don't see how to fix this.
Bringing together @whlee's answer and his very important follow-up comment, here is exactly what I had to do to get it to work.
You must deploy using the gcloud CLI in order to change the named port; the UI does not let you configure the port name. Deploying from a service yaml is currently a beta feature. To deploy, run: gcloud beta run services replace /path/to/service.yaml
In my case, the service was initially deployed using the GCP Cloud Console UI, so here are the steps I ran to export and replace it.
Export the existing service (named hermes-grpc) to a yaml file:
gcloud beta run services describe hermes-grpc --format yaml > hermes-grpc.yaml
Edit the exported yaml and make the following changes:
replaced:
  ports:
  - containerPort: 6565
with:
  ports:
  - name: h2c
    containerPort: 6565
deleted the following lines:
  tcpSocket:
    port: 0
Deleted the name: line from the section:
  spec:
    template:
      metadata:
        ...
        name:
Finally, redeploy the service from the edited yaml:
gcloud beta run services replace hermes-grpc.yaml
In the end my edited service yaml looked like this:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  annotations:
    client.knative.dev/user-image: interledger4j/hermes-server:latest
    run.googleapis.com/client-name: cloud-console
  creationTimestamp: '2020-01-09T00:02:29Z'
  generation: 3
  name: hermes-grpc
  namespace: default
  selfLink: /apis/serving.knative.dev/v1alpha1/namespaces/default/services/hermes-grpc
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: '2'
        autoscaling.knative.dev/minScale: '1'
        run.googleapis.com/client-name: cloud-console
    spec:
      containerConcurrency: 80
      containers:
        - image: interledger4j/hermes-server:latest
          name: user-container
          ports:
            - name: h2c
              containerPort: 6565
          readinessProbe:
            successThreshold: 1
          resources:
            limits:
              cpu: 500m
              memory: 384Mi
      timeoutSeconds: 300
  traffic:
    - latestRevision: true
      percent: 100
https://github.com/knative/docs/blob/master/docs/serving/samples/grpc-ping-go/README.md describes how to configure the named port to make HTTP/2 work.

Access AWS cluster endpoint running Kubernetes

I am new to Kubernetes and I am currently deploying a cluster in AWS using kubeadm. The containers are deployed just fine, but I can't seem to access them with my browser. When I used to do this via Docker Swarm, I could simply use the IP address of the AWS node to access and log in to my application with my browser, but this does not seem to work with my current Kubernetes setup.
Therefore my question is: how can I access my running application under these new settings?
You should read about how to use Services in Kubernetes:
A Kubernetes Service is an abstraction which defines a logical set of
Pods and a policy by which to access them - sometimes called a
micro-service.
Basically, Services allow a Deployment (or Pod) to be reached from inside or outside the cluster.
In your case, if you want to expose a single service in AWS, it is as simple as:
apiVersion: v1
kind: Service
metadata:
  name: myApp
  labels:
    app: myApp
spec:
  ports:
    - port: 80 # port that the service exposes
      targetPort: 8080 # port of a container in "myApp"
  selector:
    app: myApp # your deployment must have the label "app: myApp"
  type: LoadBalancer
You can check whether the Service was created successfully in the AWS EC2 console under "Elastic Load Balancers", or by using kubectl describe service myApp.
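A quick sketch of that check from the command line (the myApp name comes from the manifest above; the file name is an assumption):
kubectl apply -f myapp-service.yaml
kubectl get service myApp        # EXTERNAL-IP should show the ELB hostname once provisioned
kubectl describe service myApp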
Both answers were helpful in my pursuit for a solution to my problem, but I ended up getting lost in the details. Here is an example that may help others with a similar situation:
1) Consider the following application yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-web-app
  labels:
    app: my-web-app
spec:
  serviceName: my-web-app
  replicas: 1
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: my-web-app
          image: myregistry:443/mydomain/my-web-app
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              name: cp
2) I decided to adopt NodePort (thank you @Leandro for pointing it out) to expose my service, hence I added the following to my application yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: my-web-app
  labels:
    name: my-web-app
spec:
  type: NodePort
  ports:
    - name: http1
      port: 80
      nodePort: 30036
      targetPort: 8080
      protocol: TCP
  selector:
    name: my-web-app
One thing that I was missing is that the label names in both sets must match in order to link my-web-app:StatefulSet (1) to my-web-app:Service (2). Then, my-web-app:StatefulSet:containerPort must be the same as my-web-app:Service:targetPort (8080). Finally, my-web-app:Service:nodePort is the port that we expose publicly, and it must be a value between 30000 and 32767.
3) The last step is to ensure that the security group in AWS allows inbound traffic on the chosen my-web-app:Service:nodePort, in this case 30036; if not, add the rule.
After following these steps I was able to access my application via aws-node-ip:30036/my-web-app.
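A rough sketch of how each piece can be verified (the names and node port come from the manifests above; the node IP is a placeholder):
kubectl get statefulset my-web-app                    # is the workload running?
kubectl get endpoints my-web-app                      # does the Service selector match the pod labels?
kubectl get service my-web-app                        # confirms node port 30036
curl http://<aws-node-public-ip>:30036/my-web-app     # requires the security group rule from step 3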
Basically, the way Kubernetes is constructed is different. First of all, your containers are kept hidden from the world unless you create a service to expose them: a LoadBalancer or a NodePort. If you create a service of type ClusterIP, it will be available only from inside the cluster. For simplicity, use port forwarding to test your containers; if everything is working, then create a service to expose them (NodePort or LoadBalancer). The best and more difficult approach is to create an Ingress to handle inbound traffic and route it to the services.
Port forwarding example:
kubectl port-forward redis-master-765d459796-258hz 6379:6379
Replace redis-master-765d459796-258hz with your pod name and use the appropriate port of your container.

Flask with Gunicorn on Kubernetes ingress yields 502 nginx error

I have built a Flask app that I would like to add to a Kubernetes ingress. Currently, I have two questions I cannot seem to get my head around:
In order for the Flask app to be able to handle several requests, I figured I would add gunicorn. Do I need this, or can I mitigate it with some kind of automatic horizontal scaling and let the ingress routing layer handle it? I am new to Kubernetes, and perhaps the solution is simpler than what I am trying below.
With the presumption that I do need gunicorn, I have proceeded and added it to the Flask Docker image. The problem I have with this is that I now get a 502 Bad Gateway error from nginx, and the pod's log has not printed any error. If I create a LoadBalancer service instead of the ClusterIP one I use with the ingress, the Flask app with gunicorn works fine, just as the Flask app does on the ingress without gunicorn. I have no idea why, hence this question. The Dockerfile installs all dependencies to run Flask and finishes with:
EXPOSE 8080
CMD ["gunicorn", "--config", "/flaskapp/gunicorn_config.py", "run:app"]
I have configured my ingress like this:
apiVersion: v1
items:
  - apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      annotations:
        ingress.bluemix.net/client-max-body-size: 128m
        ingress.bluemix.net/rewrite-path: serviceName=flask-service rewrite=/;
    spec:
      rules:
        - host: <my-domain>
          http:
            paths:
              - backend:
                  serviceName: flask-service
                  servicePort: 8080
                path: /flask/
      tls:
        - hosts:
            - <my-domain>
          secretName: <my-secret>
    status:
      loadBalancer:
        ingress:
          - ip: <ip>
The service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: flask-service
  labels:
    app: flask-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: flask
The deployment is also very simple, specifying the correct image and port.
Given that I need gunicorn (or something similar), how can I solve the 502 Bad Gateway error I get?
IMO, you don't need gunicorn for scaling (it's overkill), since an HPA (Horizontal Pod Autoscaler) will already scale your single application instances, based on CPU, memory, or custom metrics.
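As a rough illustration (the deployment name flask-deployment and the thresholds are assumptions, not taken from the question), an HPA can be created imperatively like this:
kubectl autoscale deployment flask-deployment --cpu-percent=80 --min=1 --max=5
kubectl get hpa        # shows current/target CPU and the replica count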
The 502 errors look to me more like a gunicorn configuration issue (is there a limit on the workers? Can you set the workers to just 1 to test? How is it scaling inside the container? What are the resource limits on the container?). It's hard to tell without looking at logs or the environment, but it could be that your gunicorn workers are thrashing inside the container and returning an invalid response. You might want to try --log-level debug on the gunicorn command line.
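For example, a gunicorn invocation that binds to all interfaces on the exposed port with debug logging could look like this (a sketch only; the bind address and worker count are assumptions, while run:app comes from the question's Dockerfile):
gunicorn --bind 0.0.0.0:8080 --workers 2 --log-level debug run:app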
Hope it helps.