Failing Kubernetes deployment, expecting char '"' but got char '8' - google-cloud-platform

I have the following Deployment...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: socket-server-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: socket-server
    spec:
      containers:
      - name: socket-server
        image: gcr.io/project-haswell-recon/socket-server:production-production-2
        env:
        - name: PORT
          value: 80
        ports:
        - containerPort: 80
But I get the following error when I run kubectl create -f ./scripts/deployment.yml --namespace production
Error from server (BadRequest): error when creating "./scripts/deployment.yml": Deployment in version "v1beta1" cannot be handled as a Deployment: [pos 321]: json: expect char '"' but got char '8'
I pretty much copy and pasted this deployment from a previous working deployment, and altered a few details so I'm at a loss as to what this could be.

The problem is the number 80 on the PORT environment variable. It appears in an EnvVar context, where the value has to be of type string, not int, so it must be quoted.
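A minimal corrected env block for the deployment above; quoting the value makes the YAML parser emit a string, which is what the EnvVar type expects:

```yaml
env:
- name: PORT
  value: "80"        # quoted: EnvVar values must be strings, not integers
ports:
- containerPort: 80  # containerPort is an integer field, so no quotes here
```

Note the asymmetry: containerPort is declared as an integer in the API, while an env value is always a string.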

Related

Google Kubernetes Engine & Github actions deploy deployments.apps "gke-deployment" not found

I've been trying to run Google Kubernetes Engine deploy action for my github repo.
I have made a github workflow job run and everything works just fine except the deploy step.
Here is my error code:
Error from server (NotFound): deployments.apps "gke-deployment" not found
I'm assuming my yaml files are at fault. I'm fairly new to this, so I got these from the internet and just edited them a bit to fit my code, but I don't know the details.
Kustomize.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: arbitrary
# Example configuration for the webserver
# at https://github.com/monopole/hello
commonLabels:
  app: videoo-render
resources:
- deployment.yaml
- service.yaml
deployment.yaml (I think the error is here):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      deployment: video-render
  template:
    metadata:
      labels:
        deployment: video-render
    spec:
      containers:
      - name: the-container
        image: monopole/hello:1
        command: ["/video-render",
                  "--port=8080",
                  "--enableRiskyFeature=$(ENABLE_RISKY)"]
        ports:
        - containerPort: 8080
        env:
        - name: ALT_GREETING
          valueFrom:
            configMapKeyRef:
              name: the-map
              key: altGreeting
        - name: ENABLE_RISKY
          valueFrom:
            configMapKeyRef:
              name: the-map
              key: enableRisky
service.yaml:
kind: Service
apiVersion: v1
metadata:
  name: the-service
spec:
  selector:
    deployment: video-render
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8666
    targetPort: 8080
I'm using the ubuntu-20.04 runner image, and the repo is C++ code.
For anyone wondering why this happens:
You have to change this line in the workflow to reference an existing deployment:
DEPLOYMENT_NAME: gke-deployment # TODO: update to deployment name,
to:
DEPLOYMENT_NAME: existing-deployment-name
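For context, a hypothetical excerpt of the GKE deploy workflow (the surrounding variable names and values are assumptions taken from Google's sample workflow; the one thing that matters is that DEPLOYMENT_NAME matches the metadata.name of a Deployment that actually exists in the cluster, which for the manifests above would be the-deployment):

```yaml
# Hypothetical workflow excerpt; DEPLOYMENT_NAME must match the
# metadata.name of a Deployment that exists in the target namespace.
env:
  PROJECT_ID: my-project     # assumption: placeholder value
  GKE_CLUSTER: my-cluster    # assumption: placeholder value
  DEPLOYMENT_NAME: the-deployment  # matches metadata.name in deployment.yaml
```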

Recreate Kubernetes deployment with 0 downtime on AWS EKS

I have a deployment on Kubernetes (AWS EKS), with several environment variables defined in the deployment .yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myApp
  name: myAppName
spec:
  replicas: 2
  (...)
    spec:
      containers:
      - env:
        - name: MY_ENV_VAR
          value: "my_value"
        image: myDockerImage:prodV1
  (...)
If I want to upgrade the pods to another version of the docker image, say prodV2, I can perform a rolling update which replaces the pods from prodV1 to prodV2 with zero downtime.
However, if I add another env variable, say MY_ENV_VAR_2 : "my_value_2" and perform the same rolling update, I don't see the new env var in the container. The only solution I found in order to have both env vars was to manually execute
kubectl delete deployment myAppName
kubectl create -f myDeploymentFile.yaml
As you can see, this is not zero downtime, as deleting the deployment will terminate my pods and introduce a downtime until the new deployment is created and the new pods start.
Is there a way to better do this? Thank you!
Here is an example you might want to test yourself.
Notice I used spec.strategy.type: RollingUpdate.
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        env:
        - name: MY_ENV_VAR
          value: "my_value"
Apply:
➜ ~ kubectl apply -f deployment.yaml
➜ ~ kubectl exec -it nginx-<hash> env | grep MY_ENV_VAR
MY_ENV_VAR=my_value
Notice the env is as set in the yaml.
Now we edit the env in deployment.yaml:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        env:
        - name: MY_ENV_VAR
          value: "my_new_value"
Apply and wait for it to update:
➜ ~ kubectl apply -f deployment.yaml
➜ ~ kubectl get po --watch
# after it updated use Ctrl+C to stop the watch and run:
➜ ~ kubectl exec -it nginx-<new_hash> env | grep MY_ENV_VAR
MY_ENV_VAR=my_new_value
As you should see, the env changed. That is pretty much it.
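As a side note, the rollout behaviour can be tuned further; a sketch with illustrative values (not part of the original answer):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0  # never take an old pod down before its replacement is Ready
      maxSurge: 1        # allow one extra pod above `replicas` during the rollout
```

With maxUnavailable: 0 the Deployment always keeps the full replica count serving traffic, which is what makes a change like this zero-downtime.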

Quorum nodes run fine with docker-compose, but crash when deployed on kubernetes

Following the steps outlined here, I created a basic Quorum network with 4 nodes and IBFT consensus. I then created a docker image for each of the nodes, copying the contents of each node's directory on to the image. The image was created from the official quorumengineering/quorum image, and when started as a container it executes the geth command. An example Dockerfile follows (different nodes have different rpcports/ports):
FROM quorumengineering/quorum
WORKDIR /opt/node
COPY . /opt/node
ENTRYPOINT []
CMD PRIVATE_CONFIG=ignore nohup geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22001 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul --rpcvhosts="*" --emitcheckpoints --port 30304
I then made a docker-compose file to run the images.
version: '2'
volumes:
  qnode0-data:
  qnode1-data:
  qnode2-data:
  qnode3-data:
services:
  qnode0:
    container_name: qnode0
    image: <myDockerHub>/qnode0
    ports:
    - 22000:22000
    - 30303:30303
    volumes:
    - qnode0-data:/opt/node
  qnode1:
    container_name: qnode1
    image: <myDockerHub>/qnode1
    ports:
    - 22001:22001
    - 30304:30304
    volumes:
    - qnode1-data:/opt/node
  qnode2:
    container_name: qnode2
    image: <myDockerHub>/qnode2
    ports:
    - 22002:22002
    - 30305:30305
    volumes:
    - qnode2-data:/opt/node
  qnode3:
    container_name: qnode3
    image: <myDockerHub>/qnode3
    ports:
    - 22003:22003
    - 30306:30306
    volumes:
    - qnode3-data:/opt/node
When running these images locally with docker-compose, the nodes start and I can even see the created blocks via a blockchain explorer. However, when I try to run this in a kubernetes cluster, either locally with minikube, or on AWS, the nodes do not start but rather crash.
To deploy on kubernetes I made the following three yaml files for each node (12 files in total):
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: qnode0
  name: qnode0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qnode0
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: qnode0
    spec:
      containers:
      - image: <myDockerHub>/qnode0
        imagePullPolicy: ""
        name: qnode0
        ports:
        - containerPort: 22000
        - containerPort: 30303
        resources: {}
        volumeMounts:
        - mountPath: /opt/node
          name: qnode0-data
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - name: qnode0-data
        persistentVolumeClaim:
          claimName: qnode0-data
status: {}
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: qnode0-service
spec:
  selector:
    app: qnode0
  ports:
  - name: rpcport
    protocol: TCP
    port: 22000
    targetPort: 22000
  - name: netlistenport
    protocol: TCP
    port: 30303
    targetPort: 30303
persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: qnode0-data
  name: qnode0-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
When trying to run on a kubernetes cluster, each node runs into this error:
ERROR[] Cannot start mining without etherbase err="etherbase must be explicitly specified"
Fatal: Failed to start mining: etherbase missing: etherbase must be explicitly specified
which does not occur when running locally with docker-compose. After examining the logs, I saw there is a difference between how the nodes startup locally with docker-compose and on kubernetes, which is the following lines:
locally I see the following lines in each node's output:
INFO [] Initialising Ethereum protocol name=istanbul versions="[99 64]" network=10 dbversion=7
...
DEBUG[] InProc registered namespace=istanbul
on kubernetes (either in minikube or AWS) I see these lines differently:
INFO [] Initialising Ethereum protocol name=eth versions="[64 63]" network=10 dbversion=7
...
DEBUG[] IPC registered namespace=eth
DEBUG[] IPC registered namespace=ethash
Why is this happening? What is the significance of name=istanbul/eth? My common sense logic says that the error happens because of the use of name=eth, instead of name=istanbul. But I don't know the significance of this, and more importantly, I don't know what it is I did to inadvertently affect the kubernetes deployment.
Any ideas how to fix this?
EDIT
I tried to address what David Maze mentioned in his comment, i.e. that the node directory gets overwritten, so I created a new directory in the image with
RUN mkdir /opt/nodedata/
and used that to mount volumes in kubernetes. I also used StatefulSets instead of Deployments in kubernetes. The relevant yaml follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: qnode0
spec:
  serviceName: qnode0
  replicas: 1
  selector:
    matchLabels:
      app: qnode0
  template:
    metadata:
      labels:
        app: qnode0
    spec:
      containers:
      - image: <myDockerHub>/qnode0
        imagePullPolicy: ""
        name: qnode0
        ports:
        - protocol: TCP
          containerPort: 22000
        - protocol: TCP
          containerPort: 30303
        volumeMounts:
        - mountPath: /opt/nodedata
          name: qnode0-data
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - name: qnode0-data
        persistentVolumeClaim:
          claimName: qnode0-data
Changing the volume mount immediately produced the correct behaviour of
INFO [] Initialising Ethereum protocol name=istanbul
However, I had networking issues, which I solved by using the environment variables that kubernetes sets for each service, which include the IP each service is running at, e.g.:
QNODE0_PORT_30303_TCP_ADDR=172.20.115.164
I also changed my kubernetes services a little, as follows:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: qnode0
  name: qnode0
spec:
  ports:
  - name: "22000"
    port: 22000
    targetPort: 22000
  - name: "30303"
    port: 30303
    targetPort: 30303
  selector:
    app: qnode0
Using the environment variables to properly initialise the quorum files solved the networking problem.
However, when I delete my stateful sets and my services with:
kubectl delete -f <my_statefulset_and_service_yamls>
and then apply them again:
kubectl apply -f <my_statefulset_and_service_yamls>
quorum starts from scratch, i.e. it does not continue block creation from where it stopped but starts from 1 again, as follows:
Inserted new block number=1 hash=1c99d0…fe59bb
So the state of the blockchain is not saved, as was my initial fear. What should I do to address this?
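One option to look at (a sketch, not tested against the setup above): let the StatefulSet manage the claim through volumeClaimTemplates instead of referencing a standalone PersistentVolumeClaim. PVCs created from a template are not deleted when the StatefulSet itself is deleted, so the data directory can survive a delete/apply cycle as long as the PVC and its PersistentVolume are left in place:

```yaml
# Sketch: replace the `volumes:` entry and the separate PVC manifest
# with a claim template inside the StatefulSet spec. The generated PVC
# (named qnode0-data-qnode0-0) outlives `kubectl delete -f` on the
# StatefulSet, provided the PVC itself is not deleted.
  volumeClaimTemplates:
  - metadata:
      name: qnode0-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Mi
```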

Istio DestinationRule subset label not found on matching host

I'm trying to configure an Istio VirtialService / DestinationRule so that a grpc call to the service from a pod labeled datacenter=chi5 is routed to a grpc server on a pod labeled datacenter=chi5.
I have Istio 1.4 installed on a cluster running Kubernetes 1.15.
A route is not getting created in istio-sidecar envoy config for the chi5 subset and traffic is being routed round robin between each service endpoint regardless of pod label.
Kiali is reporting an error in the DestinationRule config: "this subset's labels are not found in any matching host".
Do I misunderstand the functionality of these Istio traffic management objects or is there an error in my configuration?
I believe my pods are correctly labeled:
$ (dev) kubectl get pods -n istio-demo --show-labels
NAME READY STATUS RESTARTS AGE LABELS
ticketclient-586c69f77d-wkj5d 2/2 Running 0 158m app=ticketclient,datacenter=chi6,pod-template-hash=586c69f77d,run=client-service,security.istio.io/tlsMode=istio
ticketserver-7654cb5f88-bqnqb 2/2 Running 0 158m app=ticketserver,datacenter=chi5,pod-template-hash=7654cb5f88,run=ticket-service,security.istio.io/tlsMode=istio
ticketserver-7654cb5f88-pms25 2/2 Running 0 158m app=ticketserver,datacenter=chi6,pod-template-hash=7654cb5f88,run=ticket-service,security.istio.io/tlsMode=istio
The port-name on my k8s Service object is correctly prefixed with the grpc protocol:
$ (dev) kubectl describe service -n istio-demo ticket-service
Name: ticket-service
Namespace: istio-demo
Labels: app=ticketserver
Annotations: <none>
Selector: run=ticket-service
Type: ClusterIP
IP: 10.234.14.53
Port: grpc-ticket 10000/TCP
TargetPort: 6001/TCP
Endpoints: 10.37.128.37:6001,10.44.0.0:6001
Session Affinity: None
Events: <none>
I've deployed the following Istio objects to Kubernetes:
Name:         ticket-destinationrule
Namespace:    istio-demo
Labels:       app=ticketserver
Annotations:  <none>
API Version:  networking.istio.io/v1alpha3
Kind:         DestinationRule
Spec:
  Host:  ticket-service.istio-demo.svc.cluster.local
  Subsets:
    Labels:
      Datacenter:  chi5
    Name:          chi5
    Labels:
      Datacenter:  chi6
    Name:          chi6
Events:  <none>
---
Name:         ticket-virtualservice
Namespace:    istio-demo
Labels:       app=ticketserver
Annotations:  <none>
API Version:  networking.istio.io/v1alpha3
Kind:         VirtualService
Spec:
  Hosts:
    ticket-service.istio-demo.svc.cluster.local
  Http:
    Match:
      Name:  ticket-chi5
      Port:  10000
      Source Labels:
        Datacenter:  chi5
    Route:
      Destination:
        Host:    ticket-service.istio-demo.svc.cluster.local
        Subset:  chi5
Events:  <none>
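The describe output above, rendered back into plain YAML, would look roughly like this (a sketch reconstructed from that output; note that kubectl describe capitalizes keys, so "Datacenter" is really the lowercase label key datacenter, which must match the pod labels exactly):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ticket-destinationrule
  namespace: istio-demo
spec:
  host: ticket-service.istio-demo.svc.cluster.local
  subsets:
  - name: chi5
    labels:
      datacenter: chi5   # label keys are case-sensitive
  - name: chi6
    labels:
      datacenter: chi6
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ticket-virtualservice
  namespace: istio-demo
spec:
  hosts:
  - ticket-service.istio-demo.svc.cluster.local
  http:
  - name: ticket-chi5
    match:
    - port: 10000
      sourceLabels:
        datacenter: chi5
    route:
    - destination:
        host: ticket-service.istio-demo.svc.cluster.local
        subset: chi5
```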
I have made a reproduction of your issue with 2 nginx pods.
What you want to have can be achieved with sourceLabels; check the example below, which I think explains everything.
For a start I made 2 ubuntu pods, one with the label app: ubuntu and one without any labels.
apiVersion: v1
kind: Pod
metadata:
  name: ubu2
  labels:
    app: ubuntu
spec:
  containers:
  - name: ubu2
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]
---
apiVersion: v1
kind: Pod
metadata:
  name: ubu1
spec:
  containers:
  - name: ubu1
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]
Then 2 deployments with service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
spec:
  selector:
    matchLabels:
      run: nginx1
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx1
        app: frontend
    spec:
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
spec:
  selector:
    matchLabels:
      run: nginx2
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx2
        app: frontend
    spec:
      containers:
      - name: nginx2
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: frontend
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: frontend
The next thing is the virtual service with the mesh gateway, so it works only inside the mesh. It has 2 matches: one with sourceLabels, which routes traffic from pods with the app: ubuntu label to the nginx pod behind the v1 subset, and a default match, which routes to the nginx pod behind the v2 subset.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt
spec:
  gateways:
  - mesh
  hosts:
  - nginx.default.svc.cluster.local
  http:
  - name: match-myuid
    match:
    - sourceLabels:
        app: ubuntu
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
        subset: v1
  - name: default
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
        subset: v2
And the last thing is the DestinationRule, which takes the subsets from the virtual service and sends the traffic to the proper nginx pod, labeled either nginx1 or nginx2.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginxdest
spec:
  host: nginx.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      run: nginx1
  - name: v2
    labels:
      run: nginx2
kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx1-5c5b84567c-tvtzm 2/2 Running 0 23m app=frontend,run=nginx1,security.istio.io/tlsMode=istio
nginx2-5d95c8b96-6m9zb 2/2 Running 0 23m app=frontend,run=nginx2,security.istio.io/tlsMode=istio
ubu1 2/2 Running 4 3h19m security.istio.io/tlsMode=istio
ubu2 2/2 Running 2 10m app=ubuntu,security.istio.io/tlsMode=istio
Results from the ubuntu pods:
Ubuntu with the label:
curl nginx/
Hello nginx1
Ubuntu without a label:
curl nginx/
Hello nginx2
Let me know if that helps.

istio - using vs service and gw instead loadbalancer not working

I have the following application, which I'm able to run successfully in K8S using a service of type LoadBalancer. It's a very simple app with two routes:
/ - you should see 'hello application'
/api/books - should provide a list of books in json format
This is the service
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: go-ms
This is the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
      - name: go-ms
        image: rayndockder/http:0.0.2
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
After applying both yamls and calling the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I was able to see the data in the browser as expected, and also the root app using just the external IP.
Now I want to use istio, so I followed the guide and installed it successfully via helm
using https://istio.io/docs/setup/kubernetes/install/helm/, and verified that all 53 CRDs are there and that the istio-system
components (such as istio-ingressgateway,
istio-pilot etc.; all 8 deployments) are up and running.
I've changed the service above from LoadBalancer to NodePort
and created the following istio config according to the istio docs:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 8080
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    - uri:
        exact: "/api/books"
    route:
    - destination:
        port:
          number: 8080
        host: go-ms
In addition, I've labeled the namespace where the application is deployed:
kubectl label namespace books istio-injection=enabled
Now to get the external IP I've used the command
kubectl get svc -n istio-system -l istio=ingressgateway
and got this as the external IP:
b0751-1302075110.eu-central-1.elb.amazonaws.com
When trying to access the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I got the error:
This site can't be reached
ERR_CONNECTION_TIMED_OUT
If I run the docker image rayndockder/http:0.0.2 via
docker run -it -p 8080:8080 httpv2
the paths work correctly!
Any idea/hint what the issue could be?
Is there a way to trace the istio configs to see whether something is missing, or whether we have some collision with a port or a network policy maybe?
btw, the deployment and service can run on any cluster for testing if someone could help...
If I change everything to port 80 (in all the yaml files, the application and the docker image), I am able to get the data for the root path, but not for /api/books.
I tried your config with the modification of the gateway port from 8080 to 80, in my local minikube setup of kubernetes and istio. This is the command I used:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  ports:
  - port: 8080
  selector:
    app: go-ms
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
      - name: go-ms
        image: rayndockder/http:0.0.2
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: go-ms-virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: /
    - uri:
        exact: /api/books
    route:
    - destination:
        port:
          number: 8080
        host: go-ms
EOF
The reason I changed the gateway port to 80 is that the istio ingress gateway by default opens up a few ports such as 80, 443 and a few others. In my case, as minikube doesn't have an external load balancer, I used the node port, which is 31380 in my case.
I was able to access the app with the url http://$(minikube ip):31380.
There is no point in changing the ports of the services and deployments, since these are application specific.
Maybe this question specifies the ports opened by the istio ingress gateway.
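For reference, an illustrative excerpt of what a default istio-ingressgateway Service from that era of Istio might look like (port numbers and names vary between versions, so treat these values as assumptions and check your own cluster with kubectl get svc istio-ingressgateway -n istio-system -o yaml):

```yaml
# Illustrative only; verify the real values on your cluster.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: LoadBalancer
  ports:
  - name: http2
    port: 80          # the port a Gateway's `port.number` should reference
    targetPort: 80
    nodePort: 31380   # default http nodePort in older Istio releases
  - name: https
    port: 443
    targetPort: 443
```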