How to disable Istio sidecar injection for the Kubernetes Job?
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pod-restart
spec:
  concurrencyPolicy: Forbid
  schedule: '0 8 * * *'
  jobTemplate:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: pod-restart
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl
              command: ['kubectl', 'rollout', 'restart', 'deployment/myapp']
Sidecar still gets injected.
The annotation is in the wrong place. You have to put it on the pod template:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
spec:
  jobTemplate:
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "false"
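Once the CronJob has fired, a quick way to confirm that injection was skipped is to list the containers of the Job's pod; with the annotation on the pod template there should be no istio-proxy container. A minimal check (the pod name is a placeholder, take the real one from kubectl get pods):

# list container names of the Job's pod; istio-proxy should be absent
kubectl get pod <job-pod-name> -o jsonpath='{.spec.containers[*].name}'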
Here is a working CronJob example with Istio injection disabled:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "false"
        spec:
          containers:
            - name: hello
              image: busybox
              args:
                - /bin/sh
                - -c
                - date; echo "Hello, World!"
          restartPolicy: OnFailure
There is also a related GitHub issue about this.
The annotation has since been deprecated as per the docs (https://istio.io/latest/docs/reference/config/annotations/), so it is best to use a label instead:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: jobs-cleanup
spec:
  schedule: "*/4 * * * *"
  successfulJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            sidecar.istio.io/inject: "false"
        spec:
          serviceAccountName: cleaner
          containers:
            - name: kubectl-container
              image: bitnami/kubectl:latest
              command: ["sh", "/tmp/clean.sh"]
              volumeMounts:
                - name: cleaner-script
                  mountPath: /tmp/
          restartPolicy: Never
          volumes:
            - name: cleaner-script
              configMap:
                name: cleaner-script
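As a sanity check (a sketch, run after the CronJob has produced a pod), the pods carrying this label can be listed together with their container names; istio-proxy should not show up:

kubectl get pods -l sidecar.istio.io/inject=false \
  -o custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name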
I have an EKS cluster with multiple deployments (microservices). I would like all of them to write logs to the same folder in an EFS mount, even across restarts/scaling etc. Currently a folder is created with the persistent volume ID, which breaks our requirement. Is it possible to always mount the same folder, even when the persistent volume is recreated? Can it always point to the same folder in EFS?
It currently creates folders like this:
"/logs/pvc-2exxxcxs4-0xx7-4e11-813a-65xxxxxxxx/"
Instead, I would like it to be just "/logs" or a fixed path without any dependency on pvc id/name.
Below are the current yamls:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        ........
          volumeMounts:
            - name: dev-logs-efs
              mountPath: /logs
      volumes:
        - name: dev-logs-efs
          persistentVolumeClaim:
            claimName: dev-logs-efs-pvc
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dev-logs-efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-xxxxxxxxxxx
  basePath: "/logs"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-logs-efs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: dev-logs-efs-sc
  resources:
    requests:
      storage: 5Gi
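One way to keep the path fixed (not from the thread; just a sketch, assuming a recent aws-efs-csi-driver that accepts a subpath in volumeHandle) is to drop dynamic provisioning for this claim and bind it to a statically defined PersistentVolume pointing at /logs, which must already exist on the file system:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-logs-efs-pv              # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""               # empty class = no dynamic provisioning
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxxxxx:/logs   # file system ID plus the fixed directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-logs-efs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: dev-logs-efs-pv        # bind to the PV above
  resources:
    requests:
      storage: 5Gi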
I am deploying Jenkins on a one-master, one-node Kubernetes cluster, and I am getting an error when I try to do dynamic volume provisioning. I am not sure what went wrong; please help.
My StorageClass file:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate
My PVC file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  labels:
    type: amazonEBS
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 40Gi
  storageClassName: standard
  volumeMode: Filesystem
Deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  selector:
    matchLabels:
      app: jcasc
  replicas: 1
  template:
    metadata:
      labels:
        app: jcasc
    spec:
      volumes:
        - name: jenkins-pvc
          persistentVolumeClaim:
            claimName: jenkins-pvc
      containers:
        - name: jenkins
          image: jenkins:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-pvc
              mountPath: "/var/jenkins_home"
See this troubleshooting doc on fixing the "Pod Has Unbound Immediate PersistentVolumeClaims" error. For dynamic provisioning you can see this doc, and also the PV doc.
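When the claim stays Pending, a generic first step (a sketch using the names from the manifests above) is to look at the PVC's events and at the StorageClass the cluster actually resolves:

kubectl get pvc jenkins-pvc
kubectl describe pvc jenkins-pvc          # events usually show the provisioner error
kubectl get storageclass standard -o yaml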
After running kubectl apply -f pvc.yaml with the YAML file below, I can find the mount path /var/local/pvctest inside the container that has been created. But the host path /var/local/pvctest is not created on the worker node.
I'm new to PV & PVC with EKS and any help to fix this issue is much appreciated!
kind: Deployment
apiVersion: apps/v1
metadata:
  name: pvctest
  labels:
    alias: pvctest
spec:
  selector:
    matchLabels:
      alias: pvctest
  replicas: 1
  template:
    metadata:
      labels:
        alias: pvctest
    spec:
      containers:
        - name: pvctest
          image: neo4j
          ports:
            - containerPort: 7474
            - containerPort: 7687
          volumeMounts:
            - name: testpv
              mountPath: /var/local/pvctest
      volumes:
        - name: testpv
          persistentVolumeClaim:
            claimName: pvctest-claim
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvtest
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/local/pvctest
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvctest-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
PersistentVolume with hostPath requires the directory on the host to be pre-created. If you want the directory to be created automatically for you:
...
containers:
  - name: pvctest
    image: neo4j
    ...
    volumeMounts:
      - name: testpv
        mountPath: /var/local/pvctest
volumes:
  - name: testpv
    hostPath:
      path: /data
      type: DirectoryOrCreate
PV/PVC is actually optional for hostPath.
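If you prefer to keep the PV/PVC pair from the question, the same type field can be set on the PersistentVolume's hostPath instead (a sketch based on the pvtest PV above):

  hostPath:
    path: /var/local/pvctest
    type: DirectoryOrCreate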
I think a lot of DevOps engineers have run into this issue. I come from a software background, and explanations of the syntax are not enough for me. The YAML below works in an Azure environment but does not work on EKS/AWS.
Error:
error validating data: ValidationError(Deployment.spec): unknown field "spec" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false
My deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-flask
spec:
  selector:
    matchLabels:
      app: my-flask
  replicas: 2
  template:
    metadata:
      labels:
        app: my-flask
  spec:
    containers:
      - name: my-flask
        image: yusufkaratoprak/awsflaskeks:latest
        ports:
          - containerPort: 5000
There is an indentation problem in your YAML: the second spec field belongs under template. I would also encourage you to look at the official kubernetes_deployment docs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-flask
spec:
  selector:
    matchLabels:
      app: my-flask
  replicas: 2
  template:
    metadata:
      labels:
        app: my-flask
    spec:
      containers:
        - name: my-flask
          image: yusufkaratoprak/awsflaskeks:latest
          ports:
            - containerPort: 5000
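To catch this kind of schema problem before deploying, the manifest can be validated against the API server without creating anything (assuming the file is saved as deployment.yaml):

kubectl apply --dry-run=server -f deployment.yaml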
I have set up a k8s cluster on AWS EC2 instances with 1 master and 2 worker nodes using kops.
I deployed 2 services and they run fine in the browser with the LoadBalancer service type.
Now I have installed NGINX, but I am not able to hit my service through the LB IP; it gives a 504 GATEWAY_TIME_OUT error. I googled it but had no success. Where am I going wrong? Here is my sample code... [AWS FREE ACCOUNT]
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: $APP_NAME
  name: $APP_NAME
  namespace: $NAMESPACE
spec:
  replicas: 1
  selector:
    matchLabels:
      app: $APP_NAME
  template:
    metadata:
      labels:
        app: $APP_NAME
    spec:
      imagePullSecrets:
        - name: $IMG_PULL_SECRET
      containers:
        - image: $IMAGE_REG/$APP_NAME:$IMAGE_TAG
          name: $APP_NAME
          imagePullPolicy: Always
          ports:
            - containerPort: ${CONTAINER_PORT}
              protocol: TCP
          env:
            - name: spring.cloud.config.uri
              value: 'http://config-server-service'
service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: $APP_NAME
  name: $APP_NAME
  namespace: $NAMESPACE
spec:
  type: $SERVICE_TYPE
  #type: $SERVICE_TYPE
  ports:
    - port: 80
      targetPort: ${CONTAINER_PORT}
      protocol: TCP
  selector:
    app: $APP_NAME
ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${APP_NAME}
  namespace: $NAMESPACE
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-name: JSESSIONID
    nginx.ingress.kubernetes.io/ssl-passthrough: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.allow-http: "true"
    # kubernetes.io/ingress.global-static-ip-name: "my-gateway"
spec:
  rules:
    - http:
        paths:
          - path: /${APP_NAME}
            backend:
              serviceName: ${APP_NAME}
              servicePort: 80
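A 504 from the NGINX ingress controller generally means it cannot reach the backend pods before the proxy timeout. A few generic checks that usually narrow it down (a sketch; replace the templated variables with real values, and note that the controller namespace/name below assume a standard ingress-nginx install):

# does the Service have endpoints (selector and targetPort correct)?
kubectl get endpoints $APP_NAME -n $NAMESPACE

# does the pod answer locally on its container port?
kubectl port-forward deploy/$APP_NAME 8080:${CONTAINER_PORT} -n $NAMESPACE

# what does the controller log when the request times out?
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller --tail=50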