I have a Deployment on Kubernetes (AWS EKS) with several environment variables defined in the deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myApp
  name: myAppName
spec:
  replicas: 2
  (...)
    spec:
      containers:
      - env:
        - name: MY_ENV_VAR
          value: "my_value"
        image: myDockerImage:prodV1
        (...)
If I want to upgrade the pods to another version of the Docker image, say prodV2, I can perform a rolling update that replaces the prodV1 pods with prodV2 pods with zero downtime.
However, if I add another env variable, say MY_ENV_VAR_2: "my_value_2", and perform the same rolling update, I don't see the new env var in the container. The only solution I found to get both env vars was to manually run
kubectl delete deployment myAppName
kubectl create -f myDeploymentFile.yaml
As you can see, this is not zero downtime: deleting the deployment terminates my pods and causes downtime until the new deployment is created and the new pods start.
Is there a way to better do this? Thank you!
Here is an example you might want to test yourself:
Notice I used spec.strategy.type: RollingUpdate.
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        env:
        - name: MY_ENV_VAR
          value: "my_value"
Apply:
➜ ~ kubectl apply -f deployment.yaml
➜ ~ kubectl exec -it nginx-<hash> -- env | grep MY_ENV_VAR
MY_ENV_VAR=my_value
Notice the env var is set as in the YAML.
Now we edit the env in deployment.yaml:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        env:
        - name: MY_ENV_VAR
          value: "my_new_value"
Apply and wait for it to update:
➜ ~ kubectl apply -f deployment.yaml
➜ ~ kubectl get po --watch
# after it has updated, use Ctrl+C to stop the watch and run:
➜ ~ kubectl exec -it nginx-<new_hash> -- env | grep MY_ENV_VAR
MY_ENV_VAR=my_new_value
As you should see, the env changed. That is pretty much it.
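By the way, if you'd rather not edit the YAML at all for a quick change like this, the same rolling update can be triggered from the CLI; a minimal sketch, assuming the Deployment is named nginx as above:
➜ ~ kubectl set env deployment/nginx MY_ENV_VAR_2=my_value_2
# this edits the pod template, so a new ReplicaSet is created and the pods roll over
➜ ~ kubectl rollout status deployment/nginx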
I've been trying to run the Google Kubernetes Engine deploy action for my GitHub repo.
I have a GitHub workflow job running, and everything works just fine except the deploy step.
Here is the error:
Error from server (NotFound): deployments.apps "gke-deployment" not found
I'm assuming my YAML files are at fault. I'm fairly new to this, so I got these from the internet and just edited them a bit to fit my code, but I don't know the details.
Kustomize.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: arbitrary
# Example configuration for the webserver
# at https://github.com/monopole/hello
commonLabels:
  app: videoo-render
resources:
- deployment.yaml
- service.yaml
deployment.yaml (I think the error is here):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      deployment: video-render
  template:
    metadata:
      labels:
        deployment: video-render
    spec:
      containers:
      - name: the-container
        image: monopole/hello:1
        command: ["/video-render",
                  "--port=8080",
                  "--enableRiskyFeature=$(ENABLE_RISKY)"]
        ports:
        - containerPort: 8080
        env:
        - name: ALT_GREETING
          valueFrom:
            configMapKeyRef:
              name: the-map
              key: altGreeting
        - name: ENABLE_RISKY
          valueFrom:
            configMapKeyRef:
              name: the-map
              key: enableRisky
service.yaml:
kind: Service
apiVersion: v1
metadata:
  name: the-service
spec:
  selector:
    deployment: video-render
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8666
    targetPort: 8080
I'm using the ubuntu-20.04 image, and the repo is C++ code.
For anyone wondering why this happens:
You have to change this line in the workflow so that it refers to an existing deployment:
DEPLOYMENT_NAME: gke-deployment # TODO: update to deployment name,
to:
DEPLOYMENT_NAME: existing-deployment-name
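In this question, the Deployment defined in deployment.yaml above is named the-deployment, so the workflow's env block would end up looking something like this (the surrounding keys come from the stock GKE deploy workflow template):
env:
  DEPLOYMENT_NAME: the-deployment
You can always double-check which names actually exist in the cluster with:
kubectl get deployments --namespace <your-namespace>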
I built an API in Django and used docker-compose to orchestrate Redis and Celery services. Now I would like to move the API to a Kubernetes cluster (AKS). However, I get the error python can't find manage.py when I run it in the cluster. I used the Kompose tool to write the Kubernetes manifest.yaml. These are my Dockerfile, docker-compose.yml, and Kubernetes manifest files:
# docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: >
      sh -c "python3 ./manage.py makemigrations &&
             python3 ./manage.py migrate &&
             python3 ./manage.py runserver 0.0.0.0:8000"
The Dockerfile
# Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir /app
COPY ./app /app
WORKDIR /app
And the Kubernetes manifest:
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o kb_manifests.yaml
      kompose.version: 1.22.0 (955b78124)
    creationTimestamp: null
    labels:
      io.kompose.service: app
    name: app
  spec:
    ports:
    - name: "8000"
      port: 8000
      targetPort: 8000
    selector:
      io.kompose.service: app
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o kb_manifests.yaml
      kompose.version: 1.22.0 (955b78124)
    creationTimestamp: null
    labels:
      io.kompose.service: app
    name: app
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: app
    strategy:
      type: Recreate
    template:
      metadata:
        annotations:
          kompose.cmd: kompose convert -f docker-compose.yml -o kb_manifests.yaml
          kompose.version: 1.22.0 (955b78124)
        creationTimestamp: null
        labels:
          io.kompose.service: app
      spec:
        containers:
        - args:
          - sh
          - -c
          - |-
            python manage.py makemigrations &&
            python manage.py migrate &&
            python manage.py runserver 0.0.0.0:8000
          image: <image pushed in a Azure Container Registry (ACR)>
          name: app
          ports:
          - containerPort: 8000
          resources: {}
          volumeMounts:
          - mountPath: /app
            name: app-claim0
        restartPolicy: Always
        volumes:
        - name: app-claim0
          persistentVolumeClaim:
            claimName: app-claim0
  status: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: app-claim0
    name: app-claim0
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 100Mi
  status: {}
Error log:
$ kubectl logs app-6fc488bf56-hb8g9 --previous
# python: can't open file 'manage.py': [Errno 2] No such file or directory
I think your problem is with the volumes segment of your Kubernetes config.
In your docker-compose config, you have a volume mount:
volumes:
  - ./app:/app
This is great for local development because it takes the copy of the code on your machine and overlays it on the Docker image, so the changes you make on your machine are reflected in your running container, allowing runserver to see file changes and reload the Django server as needed.
This is less great in Kubernetes. When you're running in prod, you want to be using the code that's been baked into the image. As this is currently configured, I think you're creating an empty PersistentVolumeClaim and then mounting it on top of your /app directory in the running container. Since it's empty, there is no manage.py file to be found.
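If you want to confirm that this is what's happening, you can look at the pod's mounts (pod name taken from the log output in the question; yours will differ) and see the claim sitting on top of /app:
$ kubectl describe pod app-6fc488bf56-hb8g9 | grep -A 3 Mounts
# app-claim0 should show up mounted at /app, hiding the code baked into the image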
Try making your Kubernetes configuration look like this:
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o kb_manifests.yaml
      kompose.version: 1.22.0 (955b78124)
    creationTimestamp: null
    labels:
      io.kompose.service: app
    name: app
  spec:
    ports:
    - name: "8000"
      port: 8000
      targetPort: 8000
    selector:
      io.kompose.service: app
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o kb_manifests.yaml
      kompose.version: 1.22.0 (955b78124)
    creationTimestamp: null
    labels:
      io.kompose.service: app
    name: app
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: app
    strategy:
      type: Recreate
    template:
      metadata:
        annotations:
          kompose.cmd: kompose convert -f docker-compose.yml -o kb_manifests.yaml
          kompose.version: 1.22.0 (955b78124)
        creationTimestamp: null
        labels:
          io.kompose.service: app
      spec:
        containers:
        - args:
          - sh
          - -c
          - |-
            python manage.py makemigrations &&
            python manage.py migrate &&
            python manage.py runserver 0.0.0.0:8000
          image: <image pushed in a Azure Container Registry (ACR)>
          name: app
          ports:
          - containerPort: 8000
          resources: {}
        restartPolicy: Always
  status: {}
That should be the same as what you have minus any references to volumes.
If that works, then you're off to the races. Once it's up and running, take a look at the Django deployment guide for some other settings you should configure. Namely, take a look at gunicorn; runserver isn't your best path forward for production deploys.
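As a rough sketch of that last point: the project package name mysite below is an assumption (substitute your own), and gunicorn would need to be added to requirements.txt. The container args in the Deployment could then become:
- args:
  - sh
  - -c
  - |-
    python manage.py migrate &&
    gunicorn mysite.wsgi:application --bind 0.0.0.0:8000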
I'm trying to configure an Istio VirtualService / DestinationRule so that a gRPC call to the service from a pod labeled datacenter=chi5 is routed to a gRPC server on a pod labeled datacenter=chi5.
I have Istio 1.4 installed on a cluster running Kubernetes 1.15.
A route is not getting created in the istio-sidecar Envoy config for the chi5 subset, and traffic is being routed round-robin between the service endpoints regardless of pod label.
Kiali is reporting an error in the DestinationRule config: "this subset's labels are not found in any matching host".
Do I misunderstand the functionality of these Istio traffic management objects or is there an error in my configuration?
I believe my pods are correctly labeled:
$ (dev) kubectl get pods -n istio-demo --show-labels
NAME READY STATUS RESTARTS AGE LABELS
ticketclient-586c69f77d-wkj5d 2/2 Running 0 158m app=ticketclient,datacenter=chi6,pod-template-hash=586c69f77d,run=client-service,security.istio.io/tlsMode=istio
ticketserver-7654cb5f88-bqnqb 2/2 Running 0 158m app=ticketserver,datacenter=chi5,pod-template-hash=7654cb5f88,run=ticket-service,security.istio.io/tlsMode=istio
ticketserver-7654cb5f88-pms25 2/2 Running 0 158m app=ticketserver,datacenter=chi6,pod-template-hash=7654cb5f88,run=ticket-service,security.istio.io/tlsMode=istio
The port name on my k8s Service object is correctly prefixed with the gRPC protocol:
$ (dev) kubectl describe service -n istio-demo ticket-service
Name: ticket-service
Namespace: istio-demo
Labels: app=ticketserver
Annotations: <none>
Selector: run=ticket-service
Type: ClusterIP
IP: 10.234.14.53
Port: grpc-ticket 10000/TCP
TargetPort: 6001/TCP
Endpoints: 10.37.128.37:6001,10.44.0.0:6001
Session Affinity: None
Events: <none>
I've deployed the following Istio objects to Kubernetes:
Name:         ticket-destinationrule
Namespace:    istio-demo
Labels:       app=ticketserver
Annotations:  <none>
API Version:  networking.istio.io/v1alpha3
Kind:         DestinationRule
Spec:
  Host:  ticket-service.istio-demo.svc.cluster.local
  Subsets:
    Labels:
      Datacenter:  chi5
    Name:          chi5
    Labels:
      Datacenter:  chi6
    Name:          chi6
Events:  <none>
---
Name:         ticket-virtualservice
Namespace:    istio-demo
Labels:       app=ticketserver
Annotations:  <none>
API Version:  networking.istio.io/v1alpha3
Kind:         VirtualService
Spec:
  Hosts:
    ticket-service.istio-demo.svc.cluster.local
  Http:
    Match:
      Name:  ticket-chi5
      Port:  10000
      Source Labels:
        Datacenter:  chi5
    Route:
      Destination:
        Host:    ticket-service.istio-demo.svc.cluster.local
        Subset:  chi5
Events:  <none>
I have made a reproduction of your issue with 2 nginx pods.
What you want can be achieved with sourceLabels; check the example below, I think it explains everything.
To start, I made 2 Ubuntu pods, one with the label app: ubuntu and one without any labels.
apiVersion: v1
kind: Pod
metadata:
  name: ubu2
  labels:
    app: ubuntu
spec:
  containers:
  - name: ubu2
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]
---
apiVersion: v1
kind: Pod
metadata:
  name: ubu1
spec:
  containers:
  - name: ubu1
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]
Then 2 Deployments with a Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
spec:
  selector:
    matchLabels:
      run: nginx1
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx1
        app: frontend
    spec:
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
spec:
  selector:
    matchLabels:
      run: nginx2
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx2
        app: frontend
    spec:
      containers:
      - name: nginx2
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: frontend
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: frontend
Another thing is the VirtualService with the mesh gateway, so it works only inside the mesh. It has 2 matches: one with sourceLabels, which routes traffic from pods with the app: ubuntu label to the nginx pod in the v1 subset, and a default match, which routes to the nginx pod in the v2 subset.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt
spec:
  gateways:
  - mesh
  hosts:
  - nginx.default.svc.cluster.local
  http:
  - name: match-myuid
    match:
    - sourceLabels:
        app: ubuntu
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
        subset: v1
  - name: default
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
        subset: v2
And the last thing is the DestinationRule, which takes the subsets from the VirtualService and sends the traffic to the proper nginx pod, labeled either nginx1 or nginx2.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginxdest
spec:
  host: nginx.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      run: nginx1
  - name: v2
    labels:
      run: nginx2
kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx1-5c5b84567c-tvtzm 2/2 Running 0 23m app=frontend,run=nginx1,security.istio.io/tlsMode=istio
nginx2-5d95c8b96-6m9zb 2/2 Running 0 23m app=frontend,run=nginx2,security.istio.io/tlsMode=istio
ubu1 2/2 Running 4 3h19m security.istio.io/tlsMode=istio
ubu2 2/2 Running 2 10m app=ubuntu,security.istio.io/tlsMode=istio
Results from the Ubuntu pods:
Ubuntu with the label:
curl nginx/
Hello nginx1
Ubuntu without the label:
curl nginx/
Hello nginx2
Let me know if that helps.
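If you want to verify what the sidecar actually received, you can dump the routes Envoy got for the nginx service; something like this, run against the labeled ubu2 pod from the listing above:
istioctl proxy-config routes ubu2 --name 80 -o json
# for ubu2 (app: ubuntu) the output should include the match-myuid route in addition
# to the default one; for ubu1 only the default route shows up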
I have created a GKE cluster using a Terraform script. I have a scenario where the /etc/hosts file has to be updated. Is it possible to update the hosts file on worker nodes during Kubernetes cluster creation using Terraform?
With Terraform it's not possible to access the node's filesystem directly. You can use a DaemonSet with a privileged security context instead; see below:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: ssd-startup-script
  labels:
    app: ssd-startup-script
spec:
  selector:
    matchLabels:
      app: ssd-startup-script
  template:
    metadata:
      labels:
        app: ssd-startup-script
    spec:
      hostPID: true
      containers:
      - name: ssd-startup-script
        image: gcr.io/google-containers/startup-script:v1
        imagePullPolicy: Always
        securityContext:
          privileged: true
        env:
        - name: STARTUP_SCRIPT
          value: |
            #!/bin/bash
            <YOUR COMMAND LINE>
            <YOUR COMMAND LINE>
            <YOUR COMMAND LINE>
            echo Done
You need to run kubectl apply -f <daemonset yaml file>
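For the /etc/hosts use case specifically, the STARTUP_SCRIPT could look something like this. The IP and hostname are placeholders, and this assumes the startup-script image executes the script in the node's namespaces, which is what the DaemonSet above relies on:
env:
- name: STARTUP_SCRIPT
  value: |
    #!/bin/bash
    # append a custom entry to the node's /etc/hosts, but only once
    grep -q "my-internal-host" /etc/hosts || echo "10.0.0.5 my-internal-host" >> /etc/hosts
    echo Done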
I have the following Deployment...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: socket-server-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: socket-server
    spec:
      containers:
      - name: socket-server
        image: gcr.io/project-haswell-recon/socket-server:production-production-2
        env:
        - name: PORT
          value: 80
        ports:
        - containerPort: 80
But I get the following error when I run kubectl create -f ./scripts/deployment.yml --namespace production
Error from server (BadRequest): error when creating "./scripts/deployment.yml": Deployment in version "v1beta1" cannot be handled as a Deployment: [pos 321]: json: expect char '"' but got char '8'
I pretty much copied and pasted this deployment from a previous working deployment and altered a few details, so I'm at a loss as to what this could be.
The problem is with the number 80. Here it's within an EnvVar context, so it has to be of type string and not an int.
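In other words, quoting the value fixes the parse error (containerPort stays an integer, since that field really is numeric):
env:
- name: PORT
  value: "80"
ports:
- containerPort: 80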