Google Kubernetes Engine & GitHub Actions deploy: deployments.apps "gke-deployment" not found

I've been trying to run the Google Kubernetes Engine deploy action for my GitHub repo.
I have a GitHub workflow job running, and everything works fine except the deploy step.
Here is the error:
Error from server (NotFound): deployments.apps "gke-deployment" not found
I'm assuming my YAML files are at fault. I'm fairly new to this, so I took these from the internet and edited them a bit to fit my code, but I don't know the details.
Kustomize.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: arbitrary
# Example configuration for the webserver
# at https://github.com/monopole/hello
commonLabels:
  app: videoo-render
resources:
  - deployment.yaml
  - service.yaml
deployment.yaml (I think the error is here):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      deployment: video-render
  template:
    metadata:
      labels:
        deployment: video-render
    spec:
      containers:
        - name: the-container
          image: monopole/hello:1
          command: ["/video-render",
                    "--port=8080",
                    "--enableRiskyFeature=$(ENABLE_RISKY)"]
          ports:
            - containerPort: 8080
          env:
            - name: ALT_GREETING
              valueFrom:
                configMapKeyRef:
                  name: the-map
                  key: altGreeting
            - name: ENABLE_RISKY
              valueFrom:
                configMapKeyRef:
                  name: the-map
                  key: enableRisky
service.yaml:
kind: Service
apiVersion: v1
metadata:
  name: the-service
spec:
  selector:
    deployment: video-render
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8666
      targetPort: 8080
Using the ubuntu-20.04 image; the repo is C++ code.

For anyone wondering why this happens:
You have to edit this line in the workflow so it points at an existing Deployment, from:
DEPLOYMENT_NAME: gke-deployment # TODO: update to deployment name
to:
DEPLOYMENT_NAME: existing-deployment-name
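In this question's case, DEPLOYMENT_NAME has to match the metadata.name that kustomize actually applies, i.e. the-deployment. Roughly, the env block at the top of the generated GKE workflow looks like this (the other variable names and values depend on your template version and cluster, so treat them as placeholders); the template's deploy step typically runs kubectl rollout status deployment/$DEPLOYMENT_NAME, which is what produces the NotFound error when the name doesn't exist:
env:
  PROJECT_ID: ${{ secrets.GKE_PROJECT }}
  GKE_CLUSTER: cluster-1          # your cluster name
  GKE_ZONE: us-central1-c         # your cluster zone
  DEPLOYMENT_NAME: the-deployment # must match metadata.name in deployment.yaml
  IMAGE: static-site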

Related

Containers in pod won't talk to each other in Kubernetes

I have three containers in a pod: nginx, redis, and a custom Django app. None of them seem to talk to each other in Kubernetes. In Docker Compose they do, but I can't use Docker Compose in production.
The Django container gets this error:
[2022-06-20 21:45:49,420: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379/0: Error 111 connecting to redis:6379. Connection refused..
Trying again in 32.00 seconds... (16/100)
and the nginx container starts but never shows any traffic. Trying to connect to localhost:8000 gets no reply.
Any idea what's wrong with my YAML file?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: null
  name: djangonetwork
spec:
  ingress:
    - from:
        - podSelector:
            matchLabels:
              io.kompose.network/djangonetwork: "true"
  podSelector:
    matchLabels:
      io.kompose.network/djangonetwork: "true"
---
apiVersion: v1
data:
  DB_HOST: db
  DB_NAME: django_db
  DB_PASSWORD: password
  DB_PORT: "5432"
  DB_USER: user
kind: ConfigMap
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: web
  name: envs--django
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        io.kompose.network/djangonetwork: "true"
        io.kompose.service: web
    spec:
      containers:
        - image: nginx:alpine
          name: nginxcontainer
          ports:
            - containerPort: 8000
        - image: redis:alpine
          name: rediscontainer
          ports:
            - containerPort: 6379
          resources: {}
        - env:
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  key: DB_HOST
                  name: envs--django
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  key: DB_NAME
                  name: envs--django
            - name: DB_PASSWORD
              valueFrom:
                configMapKeyRef:
                  key: DB_PASSWORD
                  name: envs--django
            - name: DB_PORT
              valueFrom:
                configMapKeyRef:
                  key: DB_PORT
                  name: envs--django
            - name: DB_USER
              valueFrom:
                configMapKeyRef:
                  key: DB_USER
                  name: envs--django
          image: localhost:5000/integration/web:latest
          name: djangocontainer
          ports:
            - containerPort: 8000
          resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: web
  name: web
spec:
  ports:
    - name: "8000"
      port: 8000
      targetPort: 8000
  selector:
    io.kompose.service: web
You've put all three containers into a single Pod. That's usually not the preferred approach: it means you can't restart one of the containers without restarting all of them (any update to your application code requires discarding your Redis cache) and you can't individually scale the component parts (if you need five replicas of your application, do you also need five reverse proxies and can you usefully use five Redises?).
Instead, a preferred approach is to split these into three separate Deployments (or possibly use a StatefulSet for Redis with persistence). Each has a corresponding Service, and then those Service names can be used as DNS names.
A very minimal example for Redis could look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        service: web
        component: redis
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - name: redis
              containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis # <-- this name will be a DNS name
spec:
  selector: # matches the template: { metadata: { labels: } }
    service: web
    component: redis
  ports:
    - name: redis
      port: 6379
      targetPort: redis # matches a containerPorts: [{ name: }]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  ...
        env:
          - name: REDIS_HOST
            value: redis # matches the Service
If all three parts are in the same Pod, then the Service can't really distinguish which container it's reaching. Within a Pod the containers share a network namespace and need to talk to each other as localhost; the containers: [{ name: }] values have no practical effect on networking.
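So if you did keep everything in one Pod, the Celery/Django broker URL would have to point at localhost rather than the redis hostname. A minimal sketch, assuming the broker is configured through an environment variable (CELERY_BROKER_URL here is an illustrative name, not something from the manifest above):
        - name: djangocontainer
          image: localhost:5000/integration/web:latest
          env:
            - name: CELERY_BROKER_URL           # illustrative variable name
              value: redis://localhost:6379/0   # same-Pod containers share localhost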

Kubernetes deployment resource limit

Here are my deployment & service files for Django. The 3 pods generated from deployment.yaml work, but the resource requests and limits are being ignored.
I have seen a lot of tutorials about applying resource specifications to Pods but not to Deployment files; is there a way around it?
Here is my yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: djangoapi
    type: web
  name: djangoapi
  namespace: "default"
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: djangoapi
        type: web
    spec:
      containers:
        - name: djangoapi
          image: wbivan/app:v0.8.1a
          imagePullPolicy: Always
          args:
            - gunicorn
            - api.wsgi
            - --bind
            - 0.0.0.0:8000
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          envFrom:
            - configMapRef:
                name: djangoapi-config
          ports:
            - containerPort: 8000
          resources: {}
      imagePullSecrets:
        - name: regcred
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: djangoapi-svc
  namespace: "default"
  labels:
    app: djangoapi
spec:
  ports:
    - port: 8000
      protocol: TCP
      targetPort: 8000
  selector:
    app: djangoapi
    type: web
  type: NodePort
There is a second resources attribute under your container definition, after ports:
resources: {}
Because it comes later, it overrides the original resources definition. Remove it and apply the manifest again.
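With the stray duplicate removed, the container block would look roughly like this (only the relevant part of the spec; everything else stays as in the original file):
      containers:
        - name: djangoapi
          image: wbivan/app:v0.8.1a
          imagePullPolicy: Always
          args:
            - gunicorn
            - api.wsgi
            - --bind
            - 0.0.0.0:8000
          envFrom:
            - configMapRef:
                name: djangoapi-config
          ports:
            - containerPort: 8000
          # exactly one resources block
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"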
A simple way to avoid this kind of issue is to use a YAML validator.
yamllint seems like a great tool to validate and parse YAML.
Once you run the validation, it lists everything that is wrong in the file.
Example:
# yamllint file.yml
38:9      error    duplication of key "resources" in mapping  (key-duplicates)

Istio - using VirtualService and Gateway instead of LoadBalancer not working

I have the following application, which I'm able to run in K8s successfully using a Service of type LoadBalancer. It's a very simple app with two routes:
/ - you should see 'hello application'
/api/books - should provide a list of books in JSON format
This is the service:
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  type: LoadBalancer
  ports:
    - port: 8080
  selector:
    app: go-ms
This is the deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
        - name: go-ms
          image: rayndockder/http:0.0.2
          ports:
            - containerPort: 8080
          env:
            - name: PORT
              value: "8080"
          resources:
            requests:
              memory: "64Mi"
              cpu: "125m"
            limits:
              memory: "128Mi"
              cpu: "250m"
After applying both YAMLs and calling the URL:
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I was able to see the data in the browser as expected, and also the root app using just the external IP.
Now I want to use Istio, so I followed the guide and installed it successfully via Helm
using https://istio.io/docs/setup/kubernetes/install/helm/ and verified that all 53 CRDs are there and that the istio-system
components (such as istio-ingressgateway, istio-pilot, etc.; all 8 deployments) are up and running.
I've changed the service above from LoadBalancer to NodePort
and created the following Istio config according to the Istio docs:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 8080
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice
spec:
  hosts:
    - "*"
  gateways:
    - http-gateway
  http:
    - match:
        - uri:
            prefix: "/"
        - uri:
            exact: "/api/books"
      route:
        - destination:
            port:
              number: 8080
            host: go-ms
In addition I've added the following:
kubectl label namespace books istio-injection=enabled
where the application is deployed.
Now to get the external IP I used the command:
kubectl get svc -n istio-system -l istio=ingressgateway
and got this as the external IP:
b0751-1302075110.eu-central-1.elb.amazonaws.com
When trying to access the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I got the error:
This site can’t be reached
ERR_CONNECTION_TIMED_OUT
If I run the Docker image rayndockder/http:0.0.2 via
docker run -it -p 8080:8080 httpv2
the paths work correctly!
Any idea/hint what the issue could be?
Is there a way to trace the Istio configs to see whether something is missing, or whether there is some collision with a port or a network policy?
BTW, the deployment and service can run on any cluster for testing, if someone could help...
If I change everything to port 80 (in all the YAML files, the application, and the Docker image), I'm able to get the data for the root path, but not for "api/books".
I tried your config in my local minikube setup of Kubernetes and Istio, with the gateway port changed from 8080 to 80. This is the command I used:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  ports:
    - port: 8080
  selector:
    app: go-ms
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
        - name: go-ms
          image: rayndockder/http:0.0.2
          ports:
            - containerPort: 8080
          env:
            - name: PORT
              value: "8080"
          resources:
            requests:
              memory: "64Mi"
              cpu: "125m"
            limits:
              memory: "128Mi"
              cpu: "250m"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: go-ms-virtualservice
spec:
  hosts:
    - "*"
  gateways:
    - http-gateway
  http:
    - match:
        - uri:
            prefix: /
        - uri:
            exact: /api/books
      route:
        - destination:
            port:
              number: 8080
            host: go-ms
EOF
The reason I changed the gateway port to 80 is that the Istio ingress gateway by default opens up a few ports such as 80, 443, and a few others. In my case, as minikube doesn't have an external load balancer, I used the node port, which is 31380 in my case.
I was able to access the app with the URL http://$(minikube ip):31380.
There is no point in changing the port of the services and deployments, since those are application specific.
Maybe this question specifies the ports opened by the Istio ingress gateway.
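To see which ports your ingress gateway actually exposes (and which node ports they map to), you can inspect the gateway Service directly; a sketch of the usual checks:
# list the ports exposed by the Istio ingress gateway Service
kubectl -n istio-system get svc istio-ingressgateway -o wide

# on a setup without an external load balancer, find the NodePort behind port 80
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'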

Connecting to Google cloud mysql instance from GKE cluster using cloudsqlproxy

I have two projects in GCP, viz. project1 and project2.
I have set up a MySQL instance in project1.
I have also set up cloudsqlproxy (a pod) and mypod in a GKE cluster in project2.
I want to access the MySQL instance from mypod through cloudsqlproxy.
I have the following config for cloudsqlproxy:
apiVersion: v1
kind: Service
metadata:
  name: cloudsqlproxy-service-mainnet
  namespace: dev
spec:
  ports:
    - port: 3306
      targetPort: port-mainnet
  selector:
    app: cloudsqlproxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsqlproxy
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsqlproxy
  template:
    metadata:
      labels:
        app: cloudsqlproxy
    spec:
      containers:
        # Make sure to specify image tag in production
        # Check out the newest version in release page
        # https://github.com/GoogleCloudPlatform/cloudsql-proxy/releases
        - name: cloudsqlproxy
          image: b.gcr.io/cloudsql-docker/gce-proxy:latest
          # 'Always' if imageTag is 'latest', else set to 'IfNotPresent'
          imagePullPolicy: Always
          command:
            - /cloud_sql_proxy
            - -dir=/cloudsql
            - -instances=project1:asia-east1:development=tcp:0.0.0.0:3306
            - -credential_file=/credentials/credentials.json
          ports:
            - name: port-mainnet
              containerPort: 3306
          volumeMounts:
            - mountPath: /cloudsql
              name: cloudsql
            - mountPath: /credentials
              name: cloud-sql-client-account-token
      volumes:
        - name: cloudsql
          emptyDir: {}
        - name: cloud-sql-client-account-token
          secret:
            secretName: cloud-sql-client-account-token
I have set up the cloud-sql-client-account-token secret in the following manner:
kubectl create secret generic cloud-sql-client-account-token --from-file=credentials.json=$HOME/credentials.json
where I downloaded the credentials.json file from a service account in project1.
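For context, a key file like that is typically created for the project1 service account along these lines (the account name here is illustrative; the account also needs the Cloud SQL Client role):
gcloud iam service-accounts keys create $HOME/credentials.json \
  --iam-account=cloudsql-client@project1.iam.gserviceaccount.com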
When I try to access the MySQL instance from my pod, I get the following error:
mysql --host=cloudsqlproxy-service-mainnet.dev.svc.cluster.local --port=3306
ERROR 1045 (28000): Access denied for user 'root'@'cloudsqlproxy~35.187.201.86' (using password: NO)
And in the cloud-proxy logs, I get the following:
2018/11/25 00:35:31 Instance project1:asia-east1:development closed connection
Is it necessary to launch the MySQL instance in the same project (project2) as the pod? What am I missing?
EDIT
I can access the proxy on my local machine by setting it up like this:
/cloud_sql_proxy -instances=project1:asia-east1:development=tcp:3306
and then connecting to the proxy using a mysql client.
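Spelled out, that local test looks roughly like this (a sketch; the database user and password are whatever is configured on the Cloud SQL instance):
# terminal 1: run the proxy locally with the same service-account key
./cloud_sql_proxy -instances=project1:asia-east1:development=tcp:3306 \
  -credential_file=$HOME/credentials.json

# terminal 2: connect through the proxy with a mysql client, supplying credentials
mysql --host=127.0.0.1 --port=3306 --user=root --password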

Why can Kubernetes not route a service on public ELB on AWS?

I've been trying to follow the guestbook example to reproduce another application which has to be available on a public interface.
This is my Kubernetes configuration (YAML):
apiVersion: v1
kind: Service
metadata:
  name: my-app-server
  labels:
    app: my-app-server
    tier: backend
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app-server
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app-server
        tier: backend
    spec:
      containers:
        - name: ppm-server
          image: docker/container:tag
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
            - containerPort: 3000
      imagePullSecrets:
        - name: myregistrykey
Not sure why this is not working.
The guestbook all-in-one example seems to work just fine, though.
I tried using the exact same configuration file, just changing the variables in the configuration.
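Without more detail, the usual first checks on AWS are whether the Service actually got an ELB hostname and whether it has endpoints behind it; a sketch (the names match the manifests above):
# does the Service get an ELB hostname under EXTERNAL-IP?
kubectl get svc my-app-server

# any events about load balancer creation, and which pod endpoints sit behind it?
kubectl describe svc my-app-server
kubectl get endpoints my-app-server

# are the pods running and listening on port 3000?
kubectl get pods -l app=my-app-server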