How to set dynamic values with Kubernetes yaml file - templates
For example, a deployment yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
      spec:
        container:
        - name: guestbook
          image: {{Here want to read value from config file outside}}
There is a ConfigMap feature in Kubernetes, but that also means writing the key/value pairs into a yaml file. Is there a way to set the keys from environment variables?
You can also use envsubst when deploying.
e.g.
cat app/deployment.yaml | envsubst | kubectl apply ...
It will replace all variables in the file with their values.
We are successfully using this approach on our CI when deploying to multiple environments, also to inject the CI_TAG etc into the deployments.
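A minimal end-to-end sketch (IMAGE_TAG and the ${IMAGE_TAG} placeholder in the manifest are hypothetical names); passing an explicit variable list to envsubst keeps it from touching any other $-strings in the file:

export IMAGE_TAG="1.2.3"
# substitute only ${IMAGE_TAG}; every other $-string passes through untouched
envsubst '${IMAGE_TAG}' < app/deployment.yaml | kubectl apply -f -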
You can't do it automatically; you need an external script to "compile" your template, or use Helm as suggested by @Jakub.
You may want to use a custom bash script, maybe integrated with your CI pipeline.
Given a template yml file called deploy.yml.template very similar to the one you provided, you can use something like this:
#!/bin/bash

# sample value for your variables
MYVARVALUE="nginx:latest"

# read the yml template from a file and substitute the string
# {{MYVARNAME}} with the value of the MYVARVALUE variable
template=$(sed "s/{{MYVARNAME}}/$MYVARVALUE/g" "deploy.yml.template")

# apply the yml with the substituted value
echo "$template" | kubectl apply -f -
I don't think it is possible to set the image through a variable or ConfigMap in Kubernetes. But you can use, for example, Helm to make your deployments much more flexible and configurable.
One line:
cat app-deployment.yaml | sed "s/{{BITBUCKET_COMMIT}}/$BITBUCKET_COMMIT/g" | kubectl apply -f -
In yaml:
...
containers:
- name: ulisses
  image: niceuser/niceimage:{{BITBUCKET_COMMIT}}
...
This kind of thing is painfully easy with ytt:
deployment.yml
#@ load("@ytt:data", "data")
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
      - name: guestbook
        image: #@ data.values.image
values.yml
#@data/values
---
image: nginx@sha256:fe2fa7bb1ceb86c6d9c935bc25c3dd8cbd64f2e95ed5b894f93ae7ffbd1e92bb
Then...
$ ytt -f deployment.yml -f values.yml | kubectl apply -f -
or even better, use ytt's cousin, kapp for a high-control deployment experience:
$ ytt -f deployment.yml -f values.yml | kapp deploy -a guestbook -f -
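If you'd rather not edit values.yml for every deploy, data values can also be overridden on the command line; a sketch (the nginx:1.25 value is just an example):

$ ytt -f deployment.yml -f values.yml --data-value image=nginx:1.25 | kubectl apply -f -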
I create a script called kubectl_create and use it to run the create command. It will substitute any value in the template that is referenced in an environment variable.
#!/bin/bash
set -e
# Expand any ${VAR} references in the template file ($1) using the
# current environment, then pipe the rendered manifest to kubectl.
eval "cat <<EOF
$(<$1)
EOF
" | kubectl create -f -
For example, if the template file has:
apiVersion: v1
kind: Service
metadata:
  name: nginx-external
  labels:
    app: nginx
spec:
  loadBalancerIP: ${PUBLIC_IP}
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: nginx
Run kubectl_create nginx-service.yaml and the PUBLIC_IP environment variable will be substituted before the actual kubectl create command runs.
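For a one-off run you can set the variable inline; a usage sketch (the address is a placeholder):

PUBLIC_IP=203.0.113.10 kubectl_create nginx-service.yaml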
After trying sed and envsubst I found Kustomize the most elegant and Kubernetes-native way. As an alternative, yq also comes in handy sometimes.
Use Kustomize to change image name
Install the kustomize CLI (e.g. on a Mac this is brew install kustomize) and create a new file called kustomization.yaml in the same directory as your deployment.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
Now use the kustomize edit set image command to change the image name:
# optionally define image name
IMAGE_NAME=ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
# replace image tag
kustomize edit set image $IMAGE_NAME
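Under the hood the command simply records an images override in kustomization.yaml, roughly like this (for the override to match, deployment.yaml must already reference an image named ghcr.io/yourrepo/guestbook):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: ghcr.io/yourrepo/guestbook
  newTag: c25a74c8f919a72e3f00928917dc4ab2944ab061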
Finally apply your kustomized deployment.yaml to your cluster using kubectl apply -k directory/where/your/kustomization/file/is like this:
kubectl apply -k .
For debugging you can see the resulting deployment.yaml if you run kustomize build .:
$ kustomize build .
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
      - image: ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
        name: guestbook
Alternative: Use yq to change image name
Install the YAML processor yq (e.g. via homebrew brew install yq), define your variables and let yq do the replacement:
# define image name
IMAGE_NAME=ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
# replace image tag
yq e ".spec.template.spec.containers[0].image = \"$IMAGE_NAME\"" -i deployment.yaml
Now your deployment.yaml gets the new image version and looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
      - image: ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
        name: guestbook
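If you'd rather leave deployment.yaml untouched, drop the -i flag and pipe the rewritten manifest straight to kubectl; a sketch:

yq e ".spec.template.spec.containers[0].image = \"$IMAGE_NAME\"" deployment.yaml | kubectl apply -f -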
FYI: your deployment.yaml isn't really valid Kubernetes configuration: the template.spec.container block should not reside under the metadata key, and the field is spelled containers.
YAML does not read values from another YAML file. As an alternative approach you could use YAML anchors and aliases, which work within a single file:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  annotations:
    namespace: &namespaceId dev
    imageId: &imageId nginx
    podName: &podName nginx-pod
    containerName: &containerName nginx-container
  name: *podName
  namespace: *namespaceId
spec:
  containers:
  - image: *imageId
    name: *containerName
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
My approach:
tools/jinja2-cli.py:
#!/usr/bin/env python3
import os
import sys
from jinja2 import Environment, FileSystemLoader

# render stdin as a template, exposing the process environment as `env`
jinja = Environment(loader=FileSystemLoader("templates/"))
sys.stdout.write(jinja.from_string(sys.stdin.read()).render(env=os.environ) + "\n")
Make rule:
_GENFILES = $(basename $(TEMPLATES))
GENFILES = $(_GENFILES:templates/%=%)

$(GENFILES): %: templates/%.j2 $(MKFILES) tools/jinja2-cli.py .env
	env $$(cat .env | xargs) tools/jinja2-cli.py < $< > $@ || (rm -f $@; false)
Inside the .j2 template file you can use any Jinja syntax construct; e.g. {{env.GUEST}} will be replaced by the value of GUEST defined in .env.
So your templates/deploy.yaml.j2 would look like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
      - name: guestbook
        image: {{env.GUEST}}
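A usage sketch, assuming TEMPLATES lists templates/deploy.yaml.j2 and .env holds the values (both hypothetical):

# .env
GUEST=nginx:1.25

# the make rule strips the templates/ prefix and the .j2 suffix,
# so this renders templates/deploy.yaml.j2 into deploy.yaml
make deploy.yaml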
Another approach, using just shell builtins, is to export the variables from .env and let the shell expand a heredoc. (Note that env $(cat .env | xargs) cat <<EOF would not work here: the heredoc is expanded by the current shell, not by cat, so the variables have to be set in the current shell.)

set -a; source .env; set +a  # export every variable defined in .env
kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
      - name: guestbook
        image: ${GUEST}
EOF
I have been using kubetpl. It has three different template flavors and supports ConfigMap/Secret freezing.
I think Helm, the de facto standard, should be used instead of custom scripts to solve this problem nowadays. You don't need to deploy anything to generate the Kubernetes yamls on your machine.
An example:
Install Helm on your machine so the helm command exists, then fetch an example chart (https://artifacthub.io/packages/helm/pauls-helm-charts/helloworld, via its Install button):
helm repo add pauls-helm-charts http://tech.paulcz.net/charts
helm pull pauls-helm-charts/helloworld --version 2.0.0
tar -zxvf helloworld-2.0.0.tgz && cd helloworld
helm template -f values.yaml --output-dir helloworld . --namespace my-namespace --name-template=my-name
So it created these files from values.yaml:
wrote helloworld/helloworld/templates/serviceaccount.yaml
wrote helloworld/helloworld/templates/service.yaml
wrote helloworld/helloworld/templates/deployment.yaml
Inside values.yaml you can change the predefined repository (or really any value that is repeated in the generated Kubernetes yamls):
image:
  repository: paulczar/spring-helloworld
Now if you want to deploy, make sure kubectl works and just apply these generated files using kubectl apply -f serviceaccount.yaml, etc.
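You can also override individual values at template time without editing values.yaml, using --set; a sketch (the repository value is an example):

helm template . --set image.repository=myrepo/spring-helloworld --namespace my-namespace --name-template=my-name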
I create a script called kubectl_apply. It loads variables from .env, replaces ${CUSTOMVAR} in the yml and passes the result to the kubectl command:
#!/bin/bash
set -a   # auto-export everything sourced below
source .env
set +a
# expand ${VAR} references in the template file ($1), then apply it
eval "cat <<EOF
$(<$1)
EOF
" | kubectl apply -f -
I've published a command-line tool ysed that helps exactly with that, in case you plan to script it.
If you just want to change the image or a tag while your deployment is running, you could set the image of a specific container in your deployment:
kubectl apply -f k8s
kubectl set image deployments/worker-deployment worker=IMAGE:TAG
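For example (the image name and tag are placeholders), followed by watching the resulting rolling update:

kubectl set image deployments/worker-deployment worker=ghcr.io/yourrepo/worker:1.2.3
kubectl rollout status deployments/worker-deployment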
Create a file called kubectl_advance as below and enjoy calling it just like kubectl commands.
e.g.
export MY_VAL="my-v1"
kubectl_advance -c -f sample.yaml # -c option is to call create command
kubectl_advance -r -f sample2.yaml # -r option is to call replace command
Assuming the yaml file has the value like ${MY_VAL} to be replaced by the environment variable.
#!/usr/bin/env bash
helpFunction()
{
echo "Supported option is [-f] for file"
exit 1
}
while getopts "f:cr" opt
do
case "$opt" in
f ) yamlFile="$OPTARG" ;;
c ) COMMAND_IS_CREATE="true" ;;
r ) COMMAND_IS_REPLACE="true" ;;
? ) helpFunction ;; # Print helpFunction in case parameter is non-existent
esac
done
echo "yaml file is: $yamlFile"

YAML_CONTENT=$(eval "cat <<EOF
$(<$yamlFile)
EOF
")
echo 'Final File Content is :=>'
echo '------------------'
echo "$YAML_CONTENT"
if [[ "$COMMAND_IS_CREATE" == "true" ]]; then
COMMAND="create"
fi
if [[ "$COMMAND_IS_REPLACE" == "true" ]]; then
COMMAND="replace"
fi
echo "$YAML_CONTENT" | kubectl $COMMAND -f -
Helm is exactly meant for such things, and a lot more. It handles complex sets of resources deployed as a group, etc.
But if we are still looking for a simple alternative, how about using Ant?
If you want to modify the file as part of a build or test process, you can go with an Ant task as well.
Using Ant you can load all environment values as properties, or you can simply load a properties file:
<property environment="env" />
<property file="build.properties" />
Then you can have a target which converts template files into your desired yaml file.
<target name="generate_from_template">
  <!-- Copy task to replace values and create the new file -->
  <copy todir="${dest.dir}" verbose="true" overwrite="true" failonerror="true">
    <!-- List of files to be processed -->
    <fileset file="${source.dir}/xyz.template.yml" />
    <!-- Mapper to transform filename. Removes '.template' from the file
         name when copying the file to the output directory -->
    <mapper type="regexp" from="(.*).template(.*)" to="\1\2" />
    <!-- Filter chain that replaces the template values with actual values
         fetched from the properties file -->
    <filterchain>
      <expandproperties />
    </filterchain>
  </copy>
</target>
Of course, you can use a fileset instead of a single file in case you want to change values dynamically for multiple files (nested or otherwise). A usage sketch follows the template below.
Your template file xyz.template.yml should look like:
apiVersion: v1
kind: Service
metadata:
  name: ${XYZ_RES_NAME}-ser
  labels:
    app: ${XYZ_RES_NAME}
    version: v1
spec:
  type: NodePort
  ports:
  - port: ${env.XYZ_RES_PORT}
    protocol: TCP
  selector:
    app: ${XYZ_RES_NAME}
    version: v1
Properties prefixed with env. are loaded from environment variables; the others come from the properties file.
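A usage sketch: set the environment variable, then run the target, supplying the remaining properties on the command line instead of build.properties (the paths and values are examples):

XYZ_RES_PORT=8080 ant generate_from_template -Dsource.dir=templates -Ddest.dir=generated -DXYZ_RES_NAME=myapp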
Hope it helped :)
In the Jitsi project the tpl command (a.k.a. frep) is used to substitute values; it is an extension of envsubst.
https://github.com/jitsi/docker-jitsi-meet/issues/65
I keep using the old shell tools like sed and friends, but such code quickly becomes unreadable once there are more than a handful of values to deal with.
For my deployments I typically use Helm charts, which requires me to update values.yaml files periodically.
For dynamically updating YAML files I used envsubst, since it is simple and does not require sophisticated configuration.
In addition, most of the tools only work with valid Kubernetes manifests, not plain YAML files.
I created a simple script to handle the YAML modification and simplify the usage:
https://github.com/alexusarov/vars_replacer
Example:
./vars_replacer.sh -i [input_file] -o [output_file] -p "[key=value] [key=value]"