Google-managed Cloud Run container fails to start when deployed from the CLI, but the same image works when deployed manually via the dashboard - google-cloud-platform

I have a (currently local-only) DevOps process: a series of bash commands that build a Docker container for a Node.js application, upload it to Google Container Registry, and then deploy it to Google Cloud Run from there.
The issue I'm having is that the deployment step always fails with:
ERROR: (gcloud.beta.run.services.replace) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable.
There is nothing in the logs when I follow the link or manually open the logs for that service in Cloud Run.
At some point I had a code issue that prevented the container from starting, and I could see that error in the Cloud Run logs.
I'm using the following command & YAML to deploy:
gcloud beta run services replace .gcp/cloud_run/auth.yaml
and my YAML file:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: auth-service
spec:
  template:
    spec:
      containers:
      - image: gcr.io/my_project_id/auth-service
      serviceAccountName: abc@my_project_id.iam.gserviceaccount.com
EDIT:
I have since pulled the YAML configuration for the service that I deployed manually, and it looks something like this:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  annotations:
    client.knative.dev/user-image: gcr.io/my_project_id/auth-service
    run.googleapis.com/ingress: all
    run.googleapis.com/ingress-status: all
    run.googleapis.com/launch-stage: BETA
  labels:
    cloud.googleapis.com/location: europe-west2
  name: auth-service
  namespace: "1032997338375"
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: "2"
        run.googleapis.com/client-name: cloud-console
        run.googleapis.com/sandbox: gvisor
      name: auth-service-00002-nux
    spec:
      containerConcurrency: 80
      containers:
      - image: gcr.io/my_project_id/auth-service
        ports:
        - containerPort: 3000
        resources:
          limits:
            cpu: 1000m
            memory: 512Mi
      serviceAccountName: abc@my_project_id.iam.gserviceaccount.com
      timeoutSeconds: 300
  traffic:
  - latestRevision: true
    percent: 100
I changed the name to that of the service I'm trying to deploy from the command line and deployed it as a new service, just like before, and it worked right away without further modification.
However, I'm not sure which of these settings was missing from my initial file, as the documentation on the YAML for Cloud Run deployments doesn't specify a minimum configuration.
Any ideas which settings I need to keep and which can be left out?

If you compare both YAML files, you can see that the file generated by the console contains the containerPort property.
By default, Cloud Run performs a health check and expects the container to listen on port 8080, or more precisely on the port that Docker/Cloud Run passes to the container via the PORT environment variable.
In your case the container listens on port 3000, so if you don't declare that port, Cloud Run can't start your image because it doesn't detect anything listening on 8080.
You can define the YAML like this:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: auth-service
spec:
  template:
    spec:
      containers:
      - image: gcr.io/myproject/myimage:latest
        ports:
        - containerPort: 3000
      serviceAccountName: abc@my_project_id.iam.gserviceaccount.com
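With ports.containerPort declared, redeploying with the same command from the question should succeed, and you can compare the deployed configuration with the console-generated one (a quick sketch, assuming the service name auth-service and the europe-west2 region taken from the exported YAML in the question):
gcloud beta run services replace .gcp/cloud_run/auth.yaml
gcloud run services describe auth-service --region europe-west2 --format yaml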

Related

Why does my Django web application for my graphs not load on Kubernetes?

I have a Django web application that displays forecast graphs using the machine learning library sktime and the Plotly library for the graphs. It runs fine on my local machine. However, when I run it on Kubernetes it doesn't load; the web page just keeps loading forever. I have tried changing my YAML's resources by increasing CPU and memory to 2000m and 1000Mi, respectively. Unfortunately that does not fix the problem. Right now the way I run my application is with the minikube command: minikube service --url mywebsite. I don't know whether that's the proper way to start my application. Does anyone know?
Service + Deployment YAML:
apiVersion: v1
kind: Service
metadata:
  name: mywebsite
spec:
  type: LoadBalancer
  selector:
    app: mywebsite
  ports:
    - protocol: TCP
      name: http
      port: 8743
      targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebsite
spec:
  selector:
    matchLabels:
      app: mywebsite
  template:
    metadata:
      labels:
        app: mywebsite
    spec:
      containers:
      - name: mywebsite
        image: mywebsite
        imagePullPolicy: Never
        ports:
        - containerPort: 8000
        resources:
          requests:
            cpu: 200m
            memory: 100Mi
          limits:
            memory: "1Gi"
            cpu: "200m"
I'm posting an answer with a general solution, as no further details or logs were provided.
According to the official minikube documentation on accessing apps, minikube supports both NodePort and LoadBalancer services:
There are two major categories of services in Kubernetes: NodePort and LoadBalancer
To access a NodePort service you should use the minikube service --url <service-name> command - check this.
To access a LoadBalancer service you should use the minikube tunnel command - check this.
As the LoadBalancer type also exposes a NodePort, it should work with the minikube service command as you tried. I installed minikube with the Docker driver, created a sample deployment, and then created a sample LoadBalancer service for it. After that I ran minikube service --url <my-service> and got an address like:
http://192.168.49.2:30711
30711 is the node port. Accessing this address works fine.
Why doesn't it work for you? Some possible reasons:
You are not using Linux - on other OSes there are some limitations for minikube - e.g. check this answer for Mac. It also depends on which minikube driver you are using.
Your pods are not running - you can check this with the kubectl get pods command
You specified the wrong ports in the definitions
Something is wrong with your image
Also check the "Troubleshooting" section on the minikube website.
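As a starting point, the checks above translate into commands like the following (a sketch; mywebsite is the service/deployment name from the question, and the pod name is a placeholder):
kubectl get pods                       # are the pods Running and Ready?
kubectl describe pod <mywebsite-pod>   # events: image pull, scheduling or probe failures
kubectl logs <mywebsite-pod>           # output of the Django container
kubectl get svc mywebsite              # do port and targetPort match the containerPort (8000)?
minikube service --url mywebsite       # NodePort access
minikube tunnel                        # run in a separate terminal for LoadBalancer access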

Can't connect to the created cluster that is exposed using a NodePort Service

So, I have been trying to create a cluster on AWS EKS for a couple of days. I managed to upload the Docker image to ECR and created the appropriate VPC, but could not manage to connect to it at http://<ip>:<port>. I am using a NodePort service to expose the project. The project is a basic .NET Core REST API that returns JSON.
I am using the AWS CLI and kubectl for operations. I have already added the generated nodePort to the inbound rules of the worker nodes' (EC2 instances) security groups.
Here are my YAML files:
Cluster.yaml -> YAML file for using the pre-created VPC setup and defining node groups
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: EKS-Demo-Cluster
  region: eu-central-1
vpc:
  id: vpc-056dbccebc402e9a8
  cidr: "192.168.0.0/16"
  subnets:
    public:
      eu-central-1a:
        id: subnet-04192a691f3c156a6
      eu-central-1b:
        id: subnet-0f89762f3d78ccb47
    private:
      eu-central-1a:
        id: subnet-07fe8b089287a16c4
      eu-central-1b:
        id: subnet-0ae524ea2c78b49a7
nodeGroups:
  - name: EKS-public-workers
    instanceType: t3.medium
    desiredCapacity: 2
  - name: EKS-private-workers
    instanceType: t3.medium
    desiredCapacity: 1
    privateNetworking: true
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demojsonapp
  template:
    metadata:
      labels:
        app: demojsonapp
    spec:
      containers:
      - name: back-end
        image: 921915718885.dkr.ecr.eu-central-1.amazonaws.com/sample_repo:latest
        ports:
        - name: http
          containerPort: 8080
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: NodePort
  selector:
    app: demojsonapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
I don't understand where the problem is. Any help is very much appreciated.
Finally, here is the Dockerfile that created the image I uploaded to ECR for the EKS cluster:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 3000
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["WebApplication2/WebApplication2.csproj", "WebApplication2/"]
RUN dotnet restore "WebApplication2/WebApplication2.csproj"
COPY . .
WORKDIR "/src/WebApplication2"
RUN dotnet build "WebApplication2.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "WebApplication2.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApplication2.dll"]
First of all, you need to add a nodePort value under spec.ports in your service.yaml (you can refer to the documentation to find some examples).
Note that, by default, the range of nodePort values you can assign is limited to the interval 30000-32767 by the kube-apiserver (you can search for the keyword 'nodeport' in this document). It would be very hard, and you probably don't want, to change this range, because the kube-apiserver of the cluster resides in the control plane, not on the worker nodes.
In my case, FYI, requests are first accepted on port 443 of the load balancer, then forwarded to port 30000 of one of the worker nodes. The service with nodePort: 30000 then receives the requests and passes them to the appropriate Pods.
Summary
Add nodePort under spec.ports in service.yaml, with a value between 30000 and 32767 (see the sketch below).
If you want to defy your fate, have a try at changing the node port range of the kube-apiserver.
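A minimal sketch of the change (the nodePort value 30080 is an arbitrary choice within the allowed range; everything else is unchanged from the service.yaml in the question), applied directly from stdin:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: NodePort
  selector:
    app: demojsonapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080
EOF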

Configure Cloud Run on Anthos to forward HTTP2

How do you make Cloud Run for Anthos forward incoming HTTP/2 requests to a Cloud Run service as HTTP/2 instead of HTTP/1.1?
I'm using GCP with Cloud Run for Anthos to deploy a Java application that runs a gRPC server. The Cloud Run app is exposed publicly. I have also configured Cloud Run for Anthos with an SSL certificate. When I try to use a gRPC client to call my service, the client sends the request over HTTP/2, which the load balancer accepts, but when the request is forwarded to my Cloud Run service (a Java application running a gRPC server), it arrives as HTTP/1.1 and gets rejected by the gRPC server. I assume that somewhere between the Kubernetes load balancer and my pod the request is being downgraded to HTTP/1.1, but I don't see how to fix this.
Bringing together @whlee's answer and his very important follow-up comment, here's exactly what I had to do to get it to work.
You must deploy using the gcloud CLI in order to change the named port; the UI does not allow you to configure the port name. Deploying from a service YAML is currently a beta feature. To deploy, run: gcloud beta run services replace /path/to/service.yaml
In my case, my service was initially deployed using the GCP Cloud Console UI, so here are the steps I ran to export and replace it.
Export the existing service (named hermes-grpc) to a YAML file:
gcloud beta run services describe hermes-grpc --format yaml > hermes-grpc.yaml
Edit the exported YAML and make the following changes:
replaced:
ports:
- containerPort: 6565
with:
ports:
- name: h2c
  containerPort: 6565
deleted the following lines:
tcpSocket:
  port: 0
Deleted the name: line from the section:
spec:
  template:
    metadata:
      ...
      name:
Finally, redeploy the service from the edited YAML:
gcloud beta run services replace hermes-grpc.yaml
In the end, my edited service YAML looked like this:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  annotations:
    client.knative.dev/user-image: interledger4j/hermes-server:latest
    run.googleapis.com/client-name: cloud-console
  creationTimestamp: '2020-01-09T00:02:29Z'
  generation: 3
  name: hermes-grpc
  namespace: default
  selfLink: /apis/serving.knative.dev/v1alpha1/namespaces/default/services/hermes-grpc
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: '2'
        autoscaling.knative.dev/minScale: '1'
        run.googleapis.com/client-name: cloud-console
    spec:
      containerConcurrency: 80
      containers:
        image: interledger4j/hermes-server:latest
        name: user-container
        ports:
        - name: h2c
          containerPort: 6565
        readinessProbe:
          successThreshold: 1
        resources:
          limits:
            cpu: 500m
            memory: 384Mi
      timeoutSeconds: 300
  traffic:
  - latestRevision: true
    percent: 100
https://github.com/knative/docs/blob/master/docs/serving/samples/grpc-ping-go/README.md
Describes how to configure a named port to make HTTP/2 work.

Access AWS cluster endpoint running Kubernetes

I am new to Kubernetes and I am currently deploying a cluster on AWS using kubeadm. The containers are deployed just fine, but I can't seem to access them with my browser. When I used to do this via Docker Swarm, I could simply use the IP address of the AWS node to access and log in to my application from my browser, but this does not seem to work with my current Kubernetes setup.
Therefore my question is: how can I access my running application under this new setup?
You should read about how to use Services in Kubernetes:
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service.
Basically, a Service allows a Deployment (or Pod) to be reached from inside or outside the cluster.
In your case, if you want to expose a single service in AWS, it is as simple as:
apiVersion: v1
kind: Service
metadata:
  name: myApp
  labels:
    app: myApp
spec:
  ports:
  - port: 80 #port that the service exposes
    targetPort: 8080 #port of a container in "myApp"
  selector:
    app: myApp #your deployment must have the label "app: myApp"
  type: LoadBalancer
You can check if the Service was created successfully in the AWS EC2 console under "Elastic Load Balancers" or using kubectl describe service myApp
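For example (myApp being the Service name from the manifest above):
kubectl describe service myApp     # look for the LoadBalancer Ingress field (the ELB hostname)
kubectl get service myApp -o wide  # type, cluster IP, external IP and ports at a glance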
Both answers were helpful in my pursuit of a solution to my problem, but I ended up getting lost in the details. Here is an example that may help others in a similar situation:
1) Consider the following application YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-web-app
  labels:
    app: my-web-app
spec:
  serviceName: my-web-app
  replicas: 1
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
      - name: my-web-app
        image: myregistry:443/mydomain/my-web-app
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: cp
2) I decided to adopt NodePort (thank you @Leandro for pointing it out) to expose my service, hence I added the following to my application YAML:
---
apiVersion: v1
kind: Service
metadata:
  name: my-web-app
  labels:
    name: my-web-app
spec:
  type: NodePort
  ports:
  - name: http1
    port: 80
    nodePort: 30036
    targetPort: 8080
    protocol: TCP
  selector:
    name: my-web-app
One thing I was missing is that the label names in both manifests must match in order to link my-web-app:StatefulSet (1) to my-web-app:Service (2). Then, my-web-app:StatefulSet:containerPort must be the same as my-web-app:Service:targetPort (8080). Finally, my-web-app:Service:nodePort is the port that we expose publicly, and it must be a value in the range 30000-32767.
3) The last step is to ensure that the security group in AWS allows inbound traffic for the chosen my-web-app:Service:nodePort, in this case 30036; if not, add the rule (a CLI example follows below).
After following these steps I was able to access my application via aws-node-ip:30036/my-web-app.
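For step 3, if you prefer the CLI to the console, the inbound rule can be added with something like this (a sketch; the security group ID is a placeholder for the group attached to your worker nodes, and 0.0.0.0/0 opens the port to the whole internet, so narrow it as needed):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 30036 --cidr 0.0.0.0/0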
Basically, Kubernetes is constructed differently. First of all, your containers are kept hidden from the world unless you create a Service to expose them, either a LoadBalancer or a NodePort. If you create a Service of type ClusterIP, it will be available only from inside the cluster. For simplicity, use port forwarding to test your containers; if everything is working, then create a Service to expose them (NodePort or LoadBalancer). The best and more difficult approach is to create an Ingress to handle inbound traffic and routing to the Services.
Port forwarding example:
kubectl port-forward redis-master-765d459796-258hz 6379:6379
Replace redis-master-765d459796-258hz with your pod name and use the appropriate port of your container.

How to expose my pod to the internet and get to it from the browser?

First of all, I downloaded Kubernetes and kubectl and created a cluster on AWS (export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash).
I added some lines to my project's circle.yml to use CircleCI services to build my image.
To support Docker I added:
machine:
  services:
    - docker
and to create my image and push it to my artifact repository I added:
deployment:
  commands:
    - docker login -e admin@comp.com -u ${ARTUSER} -p ${ARTKEY} docker-docker-local.someartifactory.com
    - sbt -DBUILD_NUMBER="${CIRCLE_BUILD_NUM}" docker:publish
After that I created two folders:
my project (MyApp) folder with two files:
controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: MyApp
  labels:
    name: MyApp
spec:
  replicas: 1
  selector:
    name: MyApp
  template:
    metadata:
      labels:
        name: MyApp
        version: 0.1.4
    spec:
      containers:
      - name: MyApp
        # this is the image in Artifactory
        image: docker-docker-release.someartifactory.com/MyApp:0.1.4
        ports:
        - containerPort: 9000
      imagePullSecrets:
      - name: myCompany-artifactory
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: MyApp
  labels:
    name: MyApp
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
    # the port that this service should serve on
    - port: 9000
  selector:
    name: MyApp
And I have another folder for my Artifactory credentials (kind: Secret).
Now I created my pods with:
kubectl create -f controller.yaml
And now my pod shows as running when I check with kubectl get pods.
Now, how do I access my pod from the browser? My project is a Play project, so I want to reach it from the browser. What is the simplest way to expose it?
thanks
The ReplicationController's sole responsibility is to ensure that the specified number of pods with the given configuration is running on your cluster.
The Service is what exposes your pods, publicly or internally, to other parts of the system (or the internet).
You should create your Service from your YAML file (kubectl create -f service.yaml), which will create the Service, select pods via the label selector name: MyApp, and handle the load on the port given in your file (9000).
Afterwards, look at the registered service with kubectl get service to see which endpoint (IP) has been allocated for it.
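Putting those two commands together (MyApp being the Service defined in your service.yaml):
kubectl create -f service.yaml   # creates the LoadBalancer Service
kubectl get service MyApp        # the EXTERNAL-IP column shows the allocated endpoint
Once the external IP (on AWS, an ELB hostname) appears, the Play application should be reachable on port 9000 of that address.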