kubectl get -o yaml: is it possible to hide metadata.managedFields? - kubectl

Using kubectl version 1.18, on microk8s 1.18.3
When getting a resource definition in YAML format, for example kubectl get pod/mypod-6f855c5fff-j8mrw -o yaml, the output contains a metadata.managedFields section.
Is there a way to hide the metadata.managedFields section to shorten the console output?
Below is an example of output to better illustrate the question.
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"productpage","service":"productpage"},"name":"productpage","namespace":"bookinfo"},"spec":{"ports":[{"name":"http","port":9080}],"selector":{"app":"productpage"}}}
  creationTimestamp: "2020-05-28T05:22:41Z"
  labels:
    app: productpage
    service: productpage
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app: {}
          f:service: {}
      f:spec:
        f:ports:
          .: {}
          k:{"port":9080,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl
    operation: Update
    time: "2020-05-28T05:22:41Z"
  name: productpage
  namespace: bookinfo
  resourceVersion: "121804"
  selfLink: /api/v1/namespaces/bookinfo/services/productpage
  uid: feb5a62b-8784-41d2-b104-bf6ebc4a2763
spec:
  clusterIP: 10.152.183.9
  ports:
  - name: http
    port: 9080
    protocol: TCP
    targetPort: 9080
  selector:
    app: productpage
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Kubectl 1.21 doesn't include managed fields by default anymore
kubectl get now omits managed fields by default.
Users can set --show-managed-fields to true to show managedFields when the output format is either json or yaml.
https://github.com/kubernetes/kubernetes/pull/96878
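For example (a minimal sketch, assuming kubectl >= 1.21 and a pod named mypod):
# managedFields is omitted by default
kubectl get pod mypod -o yaml
# opt back into the verbose output when you need it
kubectl get pod mypod -o yaml --show-managed-fields=true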

Check out this kubectl plugin: https://github.com/itaysk/kubectl-neat.
It removes not only managedFields but also many other fields users are usually not interested in.
For example: kubectl get pod mypod -o yaml | kubectl neat, or kubectl neat pod mypod -o yaml
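If you use the krew plugin manager, installation should be as simple as the following (a sketch, assuming krew is already set up):
kubectl krew install neat
kubectl get pod mypod -o yaml | kubectl neat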

For those who like to download the YAML and delete unwanted keys, try this:
Install yq, then try (please make sure you get yq version 4.x):
cat k8s-config.yaml | yq eval 'del(.status)' -
--OR--
kubectl --namespace {namespace} --context {cluster} get pod {podname} -o yaml | yq ...
You may chain more yq calls to delete more keys. Here is what I did:
cat k8s-config.yaml | yq eval 'del(.status)' - | yq eval 'del (.metadata.managedFields)' - | yq eval 'del (.metadata.annotations)' - | yq eval 'del (.spec.tolerations)' - | yq eval 'del(.metadata.ownerReferences)' - | yq eval 'del(.metadata.resourceVersion)' - | yq eval 'del(.metadata.uid)' - | yq eval 'del(.metadata.selfLink)' - | yq eval 'del(.metadata.creationTimestamp)' - | yq eval 'del(.metadata.generateName)' -
--OR--
cat k8s-config.yaml | yq eval 'del(.status)' - \
| yq eval 'del (.metadata.managedFields)' - \
| yq eval 'del (.metadata.annotations)' - \
| yq eval 'del (.spec.tolerations)' - \
| yq eval 'del(.metadata.ownerReferences)' - \
| yq eval 'del(.metadata.resourceVersion)' - \
| yq eval 'del(.metadata.uid)' - \
| yq eval 'del(.metadata.selfLink)' - \
| yq eval 'del(.metadata.creationTimestamp)' - \
| yq eval 'del(.metadata.generateName)' -
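Since del() in yq v4 accepts several paths in one expression (the neat() function below relies on the same behavior), the chain above can likely be collapsed into a single invocation:
cat k8s-config.yaml | yq eval 'del(.status, .metadata.managedFields, .metadata.annotations, .spec.tolerations, .metadata.ownerReferences, .metadata.resourceVersion, .metadata.uid, .metadata.selfLink, .metadata.creationTimestamp, .metadata.generateName)' -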
Another way is to have a neat() function in your ~/.bashrc or ~/.zshrc and call it as below:
neat () {
yq eval 'del(.items[].metadata.managedFields,
.metadata,
.apiVersion,
.items[].apiVersion,
.items[].metadata.namespace,
.items[].kind,
.items[].status,
.items[].metadata.annotations,
.items[].metadata.resourceVersion,
.items[].metadata.selfLink,.items[].metadata.uid,
.items[].metadata.creationTimestamp,
.items[].metadata.ownerReferences)' -
}
then:
kubectl get pods -o yaml | neat
cat k8s-config.yaml | neat
You may read more on yq's delete operator in the yq documentation.

I'd like to add some basic information about that feature:
managedFields is a section created by the ServerSideApply feature. It helps track changes to cluster objects made by different managers.
If you disable it in the kube-apiserver manifest, all objects created after the change won't have metadata.managedFields sections, but it doesn't affect existing objects.
Open the kube-apiserver manifest with your favorite text editor:
$ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add the following command line argument to spec.containers.command:
- --feature-gates=ServerSideApply=false
kube-apiserver will restart immediately.
It usually takes a couple of minutes for the kube-apiserver to start serving requests again.
You can also disable the ServerSideApply feature gate at the cluster creation stage.
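For example, with kubeadm the gate could be set at cluster creation via the ClusterConfiguration (a sketch, assuming the v1beta2 kubeadm config format):
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    feature-gates: "ServerSideApply=false"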
Alternatively, managedFields can be patched to an empty list for the existing object:
$ kubectl patch pod podname -p '{"metadata":{"managedFields":[{}]}}'
This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the object. Note that just setting the managedFields to an empty list will not reset the field. This is on purpose, so managedFields never get stripped by clients not aware of the field.
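Applied to the Service from the question, the patch would presumably look like:
$ kubectl -n bookinfo patch service productpage -p '{"metadata":{"managedFields":[{}]}}'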

Now that --export is deprecated, to get the output from your resources in the 'original' format (just cleaned up, without any information you don't want in this situation), you can do the following using yq v4.x:
kubectl get <resource> -n <namespace> <resource-name> -o yaml \
| yq eval 'del(.metadata.resourceVersion, .metadata.uid, .metadata.annotations, .metadata.creationTimestamp, .metadata.selfLink, .metadata.managedFields)' -

The first thing that came to my mind was to just use a stream editor like sed to skip the part beginning at managedFields: and ending at another specific pattern.
It's a bit hardcoded, as you need to specify two patterns, like managedFields: and an ending pattern such as name: productpage, but it works for this scenario. If this doesn't fit you, please add more details on how you would like to achieve this.
The sed command would look like:
sed -n '/(Pattern1)/{p; :a; N; /(Pattern2)/!ba; s/.*\n//}; p'
For example, I've used an Nginx pod:
$ kubectl get po nginx -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      nginx'
  creationTimestamp: "2020-05-29T10:54:18Z"
  ...
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
  ...
status:
  conditions:
  ...
    startedAt: "2020-05-29T10:54:19Z"
  hostIP: 10.154.0.29
  phase: Running
  podIP: 10.52.1.6
  podIPs:
  - ip: 10.52.1.6
  qosClass: Burstable
  startTime: "2020-05-29T10:54:18Z"
After using sed
$ kubectl get po nginx -o yaml | sed -n '/annotations:/{p; :a; N; /hostIP: 10.154.0.29/!ba; s/.*\n//}; p'
apiVersion: v1
kind: Pod
metadata:
  annotations:
  hostIP: 10.154.0.29
  phase: Running
  podIP: 10.52.1.6
  podIPs:
  - ip: 10.52.1.6
  qosClass: Burstable
  startTime: "2020-05-29T10:54:18Z"
In your case, a command like:
$ kubectl get pod/mypod-6f855c5fff-j8mrw -o yaml | sed -n '/managedFields:/{p; :a; N; /name: productpage/!ba; s/.*\n//}; p'
Should give output like:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"productpage","service":"productpage"},"name":"productpage","namespace":"bookinfo"},"spec":{"ports":[{"name":"http","port":9080}],"selector":{"app":"productpage"}}}
  creationTimestamp: "2020-05-28T05:22:41Z"
  labels:
    app: productpage
    service: productpage
  managedFields:
  name: productpage
  namespace: bookinfo
  resourceVersion: "121804"
  selfLink: /api/v1/namespaces/bookinfo/services/productpage
  uid: feb5a62b-8784-41d2-b104-bf6ebc4a2763
spec:
  clusterIP: 10.152.183.9
  ports:
  - name: http
    port: 9080
    protocol: TCP
    targetPort: 9080
  selector:
    app: productpage
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}


How to use kubectl patch to add list entry without duplicate?

I have the following Minikube default service account:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10953591"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
I can add a new imagePullSecrets entry using the following kubectl patch command:
kubectl patch serviceaccount default --type=json -p '[{"op": "add", "path": "/imagePullSecrets/-", "value": {name: artifactory-credentials}}]'
Here's the updated default service account:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
- name: artifactory-credentials
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10956724"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
However, when I run the kubectl patch command a second time, a duplicate imagePullSecrets entry is added:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
- name: artifactory-credentials
- name: artifactory-credentials
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10957065"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
How can I use kubectl patch to add an imagePullSecrets entry only when the entry doesn't already exist? I don't want duplicate imagePullSecrets entries.
I'm using Minikube v1.28.0 and kubectl client version v1.26.1 / server version v1.25.3 on Ubuntu 20.04.5 LTS.
AFAIK, unfortunately there is no such filter available in the official documentation. But we can work around it by replacing the whole list with a merge patch, e.g. kubectl patch serviceaccount default -p '{"imagePullSecrets":[{"name": "gcr-secret"},{"name": "artifactory-credentials"},{"name": "acr-secret"}]}'. But we have to update all the imagePullSecrets every time.
As @Geoff Alexander mentioned in the comments, the other way is to get the details of the resource and validate whether the required property is already present, e.g. kubectl get serviceaccount -o json or kubectl get serviceaccount -o yaml.
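A minimal sketch of that check-before-patch approach (assuming jq is installed; the secret name artifactory-credentials is taken from the question):
# only append the entry if it is not already present
if ! kubectl get serviceaccount default -o json \
    | jq -e '.imagePullSecrets[]? | select(.name == "artifactory-credentials")' > /dev/null; then
  kubectl patch serviceaccount default --type=json \
    -p '[{"op": "add", "path": "/imagePullSecrets/-", "value": {"name": "artifactory-credentials"}}]'
fi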

ALB/Ingress to Keycloak SSL communication configuration

I have some services running on the cluster, and the ALB is working fine. I want to configure SSL communication from the ALB/ingress to Keycloak 17.0.1 by creating a self-signed certificate and establishing communication through port 8443 instead of HTTP (80). Keycloak is built from the Docker image, and the Docker Compose file exposes port 8443. I should also make sure the keystore is defined as a Kubernetes PVC within the deployment instead of a Docker volume.
Below is the deployment file:
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "keycloak"
  namespace: "test"
spec:
  volumes:
    - name: keycloak-pv-volume
      persistentVolumeClaim:
        claimName: keycloak-pv-claim
spec:
  selector:
    matchLabels:
      app: "keycloak"
  replicas: 3
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: "keycloak"
    spec:
      containers:
        - name: "keycloak"
          image: "quay.io/keycloak/keycloak:17.0.1"
          imagePullPolicy: "Always"
          livenessProbe:
            httpGet:
              path: /realms/master
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 300
            periodSeconds: 30
          env:
            - name: "KEYCLOAK_USER"
              value: "admin"
            - name: "KEYCLOAK_PASSWORD"
              value: "admin"
            - name: "PROXY_ADDRESS_FORWARDING"
              value: "true"
            - name: HTTPS_PROXY
              value: "https://engineering-exodos*********:3128"
            - name: KC_HIDE_REALM_IN_ISSUER
              value: ************
          ports:
            - name: "http"
              containerPort: 8080
            - name: "https"
              containerPort: 8443
The self-signed certificate is being created like below (.groovy):
def secretPatch = 'kc alb-secret-patch.yaml'
sh loadMixins() + """
openssl req -newkey rsa:4096 \
    -x509 \
    -sha256 \
    -days 395 \
    -nodes \
    -out keycloak_alb.crt \
    -keyout keycloak_alb.key \
    -subj "/C=US/ST=MN/L=MN/O=Security/OU=IT Department/CN=www.gateway.com"
EXISTS=$(kubectl -n istio-system get secret --ignore-not-found keycloak_alb-secret)
if [ -z "$EXISTS" ]; then
  kubectl -n istio-system create secret tls keycloak_alb-secret --key="keycloak_alb.key" --cert="keycloak_alb.crt"
else
  # base64 needs the '-w0' flag to avoid wrapping long lines
  echo -e "data:\n  tls.key: $(base64 -w0 keycloak_alb.key)\n  tls.crt: $(base64 -w0 keycloak_alb.crt)" > ${secretPatch}
  kubectl -n istio-system patch secret keycloak_alb-secret -p "$(cat ${secretPatch})"
fi
"""
}
Dockerfile:
FROM quay.io/keycloak/keycloak:17.0.1 as builder
ENV KC_METRICS_ENABLED=true
ENV KC_CACHE=ispn
ENV KC_DB=postgres
USER keycloak
RUN chmod -R 755 /opt/keycloak \
&& chown -R keycloak:keycloak /opt/keycloak
COPY ./keycloak-benchmark-dataset.jar /opt/keycloak/providers/keycloak-benchmark-dataset.jar
COPY ./ness-event-listener.jar /opt/keycloak/providers/ness-event-listener.jar
# RUN curl -o /opt/keycloak/providers/ness-event-listener-17.0.0.jar https://repo1.uhc.com/artifactory/repo/com/optum/dis/keycloak/ness_event_listener/17.0.0/ness-event-listener-17.0.0.jar
# Changes for hiding realm in issuer claim in access token
COPY ./keycloak-services-17.0.1.jar /opt/keycloak/lib/lib/main/org.keycloak.keycloak-services-17.0.1.jar
RUN /opt/keycloak/bin/kc.sh build
FROM quay.io/keycloak/keycloak:17.0.1
COPY --from=builder /opt/keycloak/lib/quarkus/ /opt/keycloak/lib/quarkus/
COPY --from=builder /opt/keycloak/providers /opt/keycloak/providers
COPY --from=builder /opt/keycloak/lib/lib/main/org.keycloak.keycloak-services-17.0.1.jar /opt/keycloak/lib/lib/main/org.keycloak.keycloak-services-17.0.1.jar
COPY --chown=keycloak:keycloak cache-ispn-remote.xml /opt/keycloak/conf/cache-ispn-remote.xml
COPY conf /opt/keycloak/conf/
# Elastic APM integration changes
USER root
RUN mkdir -p /opt/elastic/apm
RUN chmod -R 755 /opt/elastic/apm
RUN curl -L https://repo1.uhc.com/artifactory/Thirdparty-Snapshots/com/elastic/apm/agents/java/current/elastic-apm-agent.jar -o /opt/elastic/apm/elastic-apm-agent.jar
ENV ES_AGENT=" -javaagent:/opt/elastic/apm/elastic-apm-agent.jar"
ENV ELASTIC_APM_SERVICE_NAME="AIDE_007"
ENV ELASTIC_APM_SERVER_URL="https://nonprod.uhc.com:443"
ENV ELASTIC_APM_VERIFY_SERVER_CERT="false"
ENV ELASTIC_APM_ENABLED="true"
ENV ELASTIC_APM_LOG_LEVEL="WARN"
ENV ELASTIC_APM_ENVIRONMENT="Test-DIS"
CMD export JAVA_OPTS="$JAVA_OPTS $ES_AGENT"
USER keycloak
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start"]
Docker Compose:
services:
  keycloak:
    image: quay.io/keycloak/keycloak:17.0.1
    command: start-dev
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
      KC_HTTPS_CERTIFICATE_FILE: /opt/keycloak/tls.crt
      KC_HTTPS_CERTIFICATE_KEY_FILE: /opt/keycloak/tls.key
    ports:
      - 8080:8080
      - 8443:8443
    volumes:
      - ./localhost.crt:/opt/keycloak/conf/tls.crt
      - ./localhost.key:/opt/keycloak/conf/tls.key
What is the standard best-practice way to route the traffic via SSL from the ALB to Keycloak?

Using worker identity with gcloud

I have 2 Docker images with the gcloud SDK, and my entrypoint script performs some checks using gcloud, like the following:
gcloud pubsub subscriptions describe $GCP_SUB_NAME --quiet
result="$?"
if [ "$result" -ne 0 ]; then
  echo "Subscription not found, exited with non-zero status $result"
  exit $result
fi
I am running these in GKE.
I have a different GCP service account for each Docker image, connected to a GKE service account using workload identity.
My problem is that both deployments don't succeed at the same time. The one which runs first succeeds and the other fails with the following error, which seems to be something to do with the GKE/GCP credentials:
gcloud pubsub subscriptions describe local-test-v1 --quiet
ERROR: (gcloud.pubsub.subscriptions.describe) You do not currently have an active account selected.
Please run:
$ gcloud auth login
to obtain new credentials.
If you have already logged in with a different account:
$ gcloud config set account ACCOUNT
to select an already authenticated account to use.
Even if I make the following changes, I don't get it through:
gcloud config set account sa@project.iam.gserviceaccount.com
gcloud pubsub subscriptions describe $GCP_SUB_NAME --quiet
result="$?"
if [ "$result" -ne 0 ]; then
  echo "Subscription not found, exited with non-zero status $result"
  exit $result
fi
The error I get now:
gcloud config set account sa@project.iam.gserviceaccount.com
Updated property [core/account].
+ gcloud pubsub subscriptions describe local-test-v1 --quiet
ERROR: (gcloud.pubsub.subscriptions.describe) Your current active account [sa@project.iam.gserviceaccount.com] does not have any valid credentials
Please run:
$ gcloud auth login
to obtain new credentials.
For service account, please activate it first:
$ gcloud auth activate-service-account ACCOUNT
I don't want to use the GCP client libraries, as I want to keep it lightweight, so either gcloud or curl is the best option.
Can I use gcloud in GKE without the key file?
Can I call the Google APIs via curl without passing a bearer token, or how shall I get one in the Docker container?
Any ideas... Thanks...
Note #1: workload identity
resource "google_service_account_iam_member" "workload_identity_iam" {
  member             = "serviceAccount:${var.gcp_project}.svc.id.goog[${var.kubernetes_namespace}/${var.kubernetes_service_account_name}]"
  role               = "roles/iam.workloadIdentityUser"
  service_account_id = google_service_account.sa.name
  depends_on         = [google_project_iam_member.pubsub_subscriber_iam, google_project_iam_member.bucket_object_admin_iam]
}
Note #2: GKE SAs
Name: sa1
Namespace: some-namespace
Labels: <none>
Annotations: iam.gke.io/gcp-service-account: sa1@project.iam.gserviceaccount.com
Image pull secrets: <none>
Mountable secrets: sa1-token-shj9w
Tokens: sa1-token-shj9w
Events: <none>
Name: sa2
Namespace: some-namespace
Labels: <none>
Annotations: iam.gke.io/gcp-service-account: sa2@project.iam.gserviceaccount.com
Image pull secrets: <none>
Mountable secrets: sa2-token-dkhdl
Tokens: sa2-token-dkhdl
Events: <none>
Note #3: job template for the container
apiVersion: batch/v1
kind: Job
metadata:
  namespace: some-namespace
  name: check
  labels:
    helm.sh/chart: check-0.1.0
    app.kubernetes.io/name: check
    app.kubernetes.io/instance: check
    app: check
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-weight: "-4"
spec:
  backoffLimit: 1
  completions: 1
  parallelism: 1
  template:
    metadata:
      name: check
      labels:
        app.kubernetes.io/name: check
        app.kubernetes.io/instance: check
        app: check
    spec:
      restartPolicy: Never
      terminationGracePeriodSeconds: 0
      serviceAccountName: sa1
      securityContext: {}
      containers:
        - name: check
          securityContext: {}
          image: "eu.gcr.io/some-project/check:500c4166"
          imagePullPolicy: Always
          env:
            # Define the environment variables
            - name: GCP_PROJECT_ID
              valueFrom:
                configMapKeyRef:
                  name: check
                  key: gcpProjectID
            - name: GCP_SUB
              valueFrom:
                configMapKeyRef:
                  name: check
                  key: gcpSubscriptionName
            - name: GCP_BUCKET
              valueFrom:
                configMapKeyRef:
                  name: check
                  key: gcpBucket
          resources:
            limits:
              cpu: 1000m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
Docker image:
FROM ubuntu:18.04
COPY /checks/pre/ /checks/pre/
ENV HOME /checks/pre/
# Install needed packages
RUN apt-get update && \
apt-get -y install --no-install-recommends curl \
iputils-ping \
tar \
jq \
python \
ca-certificates \
&& mkdir -p /usr/local/gcloud && cd /usr/local/gcloud \
&& curl -o google-cloud-sdk.tar.gz -L -O https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz \
&& tar -xzf google-cloud-sdk.tar.gz \
&& rm -f google-cloud-sdk.tar.gz \
&& ./google-cloud-sdk/install.sh --quiet \
&& mkdir -p /.config/gcloud && chmod 775 -R /checks/pre /.config/gcloud \
&& apt-get autoclean \
&& apt-get autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
WORKDIR /checks/pre
USER 1001
ENTRYPOINT [ "/checks/pre/entrypoint.sh" ]

How can I automate the removal of kubernetes secrets from a yaml file?

I have a file containing many Kubernetes YAML objects.
I am seeking a way of removing all K8s Secret YAML objects from the text file, identified by the "kind: Secret" string contained within the YAML block. This should remove everything from the "apiVersion" through to just before the "---" signifying the start of the next object.
I've looked into Sed, Python and yq tools with no luck.
The YAML may contain any number of secrets in any order.
How can I automate stripping out of these "Secret" blocks?
apiVersion: v1
data:
  username: dGVzdAo=
  password: dGVzdHBhc3N3b3JkCg==
kind: Secret
metadata:
  name: my-secret-1
type: Opaque
---
apiVersion: v1
kind: Pod
metadata:
  name: test-site
  labels:
    app: web
spec:
  containers:
  - name: front-end
    image: nginx
    ports:
    - containerPort: 80
  - name: rss-reader
    image: nickchase/rss-php-nginx:v1
    ports:
    - containerPort: 88
---
apiVersion: v1
data:
  username: dGVzdAo=
  password: dGVzdHBhc3N3b3JkCg==
kind: Secret
metadata:
  name: my-secret-2
type: Opaque
---
yq can do this (it is jq underneath):
pip install yq
yq --yaml-output 'select(.kind != "Secret")' input.yaml
You might need to remove the null document at the end of your example; it caused a little bit of weirdness in the output.
Note that there is also a different yq utility that doesn't seem to do what jq does, so I'm not sure how to make that one work.
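For reference, mikefarah's Go yq v4 (the one used elsewhere on this page) can likely achieve the same with its own select syntax:
yq eval 'select(.kind != "Secret")' input.yaml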
What about a shell script that splits the file at every occurrence of --- by using the awk command? (See sections 5 and 6 of this link for an example of that.) In this way, the script can evaluate each part separately and send those that do not correspond to a Secret to a new output file, as sketched below.
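A rough sketch of that idea (assuming GNU awk, which supports a multi-character record separator, and that kind: Secret appears inside every Secret document):
# treat each '---'-separated document as one record and drop the Secret ones
awk 'BEGIN { RS = "---\n"; ORS = "---\n" } !/kind: Secret/' input.yaml > output.yaml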
Purely with regex, you might search for
(^|---).*?kind: Secret.*?(---|$)
and replace with:
---
Note: at the end, you might have some extra --- which you need to remove "manually" - but that should not be a big deal.

How to set dynamic values with Kubernetes yaml file

For example, a deployment yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      container:
        - name: guestbook
          image: {{Here want to read value from config file outside}}
There is a ConfigMap feature in Kubernetes, but that also writes the key/value into the YAML file. Is there a way to set the key from environment variables?
You can also use envsubst when deploying.
e.g.
cat app/deployment.yaml | envsubst | kubectl apply ...
It will replace all variables in the file with their values.
We are successfully using this approach on our CI when deploying to multiple environments, also to inject the CI_TAG etc into the deployments.
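A minimal sketch of such a template (the ${CI_TAG} variable is hypothetical and assumed to be exported by the CI job):
# app/deployment.yaml (template)
    spec:
      containers:
        - name: guestbook
          image: myrepo/guestbook:${CI_TAG}
Note that envsubst substitutes every $VARIABLE it finds; you can restrict it to specific variables if needed, e.g. envsubst '$CI_TAG'.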
You can't do it automatically; you need to use an external script to "compile" your template, or use helm as suggested by @Jakub.
You may want to use a custom bash script, maybe integrated with your CI pipeline.
Given a template yml file called deploy.yml.template very similar to the one you provided, you can use something like this:
#!/bin/bash
# sample value for your variables
MYVARVALUE="nginx:latest"
# read the yml template from a file and substitute the string
# {{MYVARNAME}} with the value of the MYVARVALUE variable
template=`cat "deploy.yml.template" | sed "s/{{MYVARNAME}}/$MYVARVALUE/g"`
# apply the yml with the substituted value
echo "$template" | kubectl apply -f -
I don't think it is possible to set the image through a variable or ConfigMap in Kubernetes. But you can use, for example, Helm to make your deployments much more flexible and configurable.
One line:
cat app-deployment.yaml | sed "s/{{BITBUCKET_COMMIT}}/$BITBUCKET_COMMIT/g" | kubectl apply -f -
In yaml:
...
containers:
- name: ulisses
image: niceuser/niceimage:{{BITBUCKET_COMMIT}}
...
This kind of thing is painfully easy with ytt:
deployment.yml
## load("#ytt:data", "data")
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: guestbook
spec:
replicas: 2
template:
metadata:
labels:
app: guestbook
spec:
container:
- name: guestbook
image: ## data.values.image
values.yml
##data/values
image: nginx#sha256:fe2fa7bb1ceb86c6d9c935bc25c3dd8cbd64f2e95ed5b894f93ae7ffbd1e92bb
Then...
$ ytt -f deployment.yml -f values.yml | kubectl apply -f -
or even better, use ytt's cousin, kapp for a high-control deployment experience:
$ ytt -f deployment.yml -f values.yml | kapp deploy -a guestbook -f -
I create a script called kubectl_create and use it to run the create command. It will substitute any value in the template that is referenced in an environment variable.
#!/bin/bash
set -e
eval "cat <<EOF
$(<$1)
EOF
" | kubectl create -f -
For example, if the template file has:
apiVersion: v1
kind: Service
metadata:
  name: nginx-external
  labels:
    app: nginx
spec:
  loadBalancerIP: ${PUBLIC_IP}
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: nginx
Run kubectl_create nginx-service.yaml and then the environment variable PUBLIC_IP will be substituted before running the actual kubectl create command.
After trying sed and envsubst, I found Kustomize the most elegant and Kubernetes-native way. As an alternative, yq also comes in handy sometimes.
Use Kustomize to change image name
Install the kustomize CLI (e.g. on a Mac this is brew install kustomize) and create a new file called kustomization.yaml in the same directory as your deployment.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
Now use the kustomize edit set image command to change the image name
# optionally define image name
IMAGE_NAME=ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
# replace image tag
kustomize edit set image $IMAGE_NAME
Finally, apply your kustomized deployment.yaml to your cluster using kubectl apply -k directory/where/your/kustomization/file/is, like this:
kubectl apply -k .
For debugging, you can see the resulting deployment.yaml if you run kustomize build .:
$ kustomize build .
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
      - image: ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
        name: guestbook
Alternative: Use yq to change image name
Install the YAML processor yq (e.g. via homebrew brew install yq), define your variables and let yq do the replacement:
# define image name
IMAGE_NAME=ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
# replace image tag
yq e ".spec.template.spec.containers[0].image = \"$IMAGE_NAME\"" -i deployment.yaml
Now your deployment.yaml gets the new image version and then looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
      - image: ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
        name: guestbook
FYI: Your deployment.yaml isn't really a valid Kubernetes configuration - the template.spec.container should not reside under the metadata tag - and it is also spelled containers.
YAML does not read values from another YAML file. As an alternative approach, you could try this:
kind: Pod
metadata:
  creationTimestamp: null
  annotations:
    namespace: &namespaceId dev
    imageId: &imageId nginx
    podName: &podName nginx-pod
    containerName: &containerName nginx-container
  name: *podName
  namespace: *namespaceId
spec:
  containers:
  - image: *imageId
    name: *containerName
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
My approach:
tools/jinja2-cli.py:
#!/usr/bin/env python3
import os
import sys
from jinja2 import Environment, FileSystemLoader
sys.stdout.write(Environment(loader=FileSystemLoader('templates/')).from_string(sys.stdin.read()).render(env=os.environ) + "\n")
Make rule:
_GENFILES = $(basename $(TEMPLATES))
GENFILES = $(_GENFILES:templates/%=%)
$(GENFILES): %: templates/%.j2 $(MKFILES) tools/jinja2-cli.py .env
	env $$(cat .env | xargs) tools/jinja2-cli.py < $< > $@ || (rm -f $@; false)
Inside the .j2 template file you can use any jinja syntax construct, e.g. {{env.GUEST}} will be replaced by the value of GUEST defined in .env
So your templates/deploy.yaml.j2 would look like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      container:
        - name: guestbook
          image: {{env.GUEST}}
Another approach (using just bash builtins and xargs) might be
export $(cat .env | xargs) && kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      container:
        - name: guestbook
          image: ${GUEST}
EOF
I have been using kubetpl
It has three different template flavors and supports ConfigMap/Secret freezing.
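A hypothetical invocation (the template file name and NAME variable are illustrative, assuming kubetpl's render command and -s key=value flags):
kubetpl render deployment.yaml.tpl -s NAME=guestbook | kubectl apply -f -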
I think the standard, Helm, should be used instead of custom scripts to solve this problem nowadays. You don't need to deploy anything to generate the Kubernetes YAMLs on your machine.
An example:
Install helm on your machine so the helm command exists
https://artifacthub.io/packages/helm/pauls-helm-charts/helloworld - Install button
helm repo add pauls-helm-charts http://tech.paulcz.net/charts
helm pull pauls-helm-charts/helloworld --version 2.0.0
tar -zxvf helloworld-2.0.0.tgz && cd helloworld
helm template -f values.yaml --output-dir helloworld . --namespace my-namespace --name-template=my-name
So it created these files from values.yaml:
wrote helloworld/helloworld/templates/serviceaccount.yaml
wrote helloworld/helloworld/templates/service.yaml
wrote helloworld/helloworld/templates/deployment.yaml
Inside values.yaml, you can change the predefined repository (or indeed any value that is repeated in the Kubernetes YAMLs, as you wish):
image:
  repository: paulczar/spring-helloworld
Now if you want to deploy, make sure kubectl works and just apply these generated files using kubectl apply -f serviceaccount.yaml, etc.
I create a script called kubectl_apply. It loads variables from .env, replaces ${CUSTOMVAR} in the yml and passes it to the kubectl command:
#!/bin/bash
set -a
source .env
set +a
eval "cat <<EOF
$(<$1)
EOF
" | kubectl apply -f -
I've published a command-line tool ysed that helps exactly with that, in case you plan to script it.
If you just want to change the image or a tag while your deployment is running, you could set the image of a specific container in your deployment:
kubectl apply -f k8s
kubectl set image deployments/worker-deployment worker=IMAGE:TAG
Create a file called kubectl_advance as below and enjoy calling it just like kubectl commands.
e.g.
export MY_VAL="my-v1"
kubectl_advance -c -f sample.yaml # -c option is to call the create command
kubectl_advance -r -f sample2.yaml # -r option is to call the replace command
This assumes the yaml file has a value like ${MY_VAL} to be replaced by the environment variable.
#!/usr/bin/env bash
helpFunction()
{
echo "Supported option is [-f] for file"
exit 1
}
while getopts "f:cr" opt
do
case "$opt" in
f ) yamlFile="$OPTARG" ;;
c ) COMMAND_IS_CREATE="true" ;;
r ) COMMAND_IS_REPLACE="true" ;;
? ) helpFunction ;; # Print helpFunction in case parameter is non-existent
esac
done
echo 'yaml file is : '$yamlFile
YAML_CONTENT=`eval "cat <<EOF
$(<$yamlFile)
EOF
"`
echo 'Final File Content is :=>'
echo '------------------'
echo "$YAML_CONTENT"
if [[ "$COMMAND_IS_CREATE" == "true" ]]; then
COMMAND="create"
fi
if [[ "$COMMAND_IS_REPLACE" == "true" ]]; then
COMMAND="replace"
fi
echo "$YAML_CONTENT" | kubectl $COMMAND -f -
Helm is exactly meant for such things, and a lot more. It handles complex sets of resource deployments as a group, etc.
But if we are still looking for a simple alternative, how about using ant?
If you want to modify the file as part of a build or test process, you can go with an ant task as well.
Using ant you can load all environment values as properties, or you can simply load a properties file like:
<property environment="env" />
<property file="build.properties" />
Then you can have a target which converts template files into your desired yaml file.
<target name="generate_from_template">
  <!-- Copy task to replace values and create a new file -->
  <copy todir="${dest.dir}" verbose="true" overwrite="true" failonerror="true">
    <!-- List of files to be processed -->
    <fileset file="${source.dir}/xyz.template.yml" />
    <!-- Mapper to transform filename. Removes '.template' from the file
         name when copying the file to the output directory -->
    <mapper type="regexp" from="(.*).template(.*)" to="\1\2" />
    <!-- Filter chain that replaces the template values with actual values
         fetched from the properties file -->
    <filterchain>
      <expandproperties />
    </filterchain>
  </copy>
</target>
Of course, you can use a fileset instead of a file in case you want to change values dynamically for multiple files (nested or whatever).
Your template file xyz.template.yml should look like:
apiVersion: v1
kind: Service
metadata:
  name: ${XYZ_RES_NAME}-ser
  labels:
    app: ${XYZ_RES_NAME}
    version: v1
spec:
  type: NodePort
  ports:
  - port: ${env.XYZ_RES_PORT}
    protocol: TCP
  selector:
    app: ${XYZ_RES_NAME}
    version: v1
The env.-prefixed properties are loaded from environment variables and the others from the properties file.
Hope it helped :)
In the jitsi project the tpl == frep command is used to substitute values, an extension of envsubst:
https://github.com/jitsi/docker-jitsi-meet/issues/65
I keep using the old shell tools like sed and friends, but such code quickly becomes unreadable once there is more than a handful of values to deal with.
For my deployments, I typically use Helm charts, which require me to update values.yaml files periodically.
For dynamically updating YAML files, I use envsubst, since it is simple and does not require sophisticated configuration.
In addition, most of the tools only work with valid Kubernetes manifests, not simple YAML files.
I created a simple script to handle the YAML modification and simplify its usage:
https://github.com/alexusarov/vars_replacer
Example:
./vars_replacer.sh -i [input_file] -o [output_file] -p "[key=value] [key=value]"