I like the readability of YAML, so I am trying it here (no JSON). Do you know why this runs without error, but does not update or change anything?
kubectl patch configmap/config-domain -n knative-serving --type merge -p '
data:
  example.com:""
'
And this works, but I have no idea why:
kubectl patch configmap/config-network -n knative-serving --type merge -p '
data:
  autoTLS: Enabled
  httpProtocol: Redirected
'
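A note on the likely cause of the failing example: in YAML block mappings the colon must be followed by a space, so example.com:"" is parsed as a single plain scalar rather than as the key example.com with an empty-string value, and the patch no longer describes the data entry you intended. With the space restored (my guess at the intended patch):
kubectl patch configmap/config-domain -n knative-serving --type merge -p '
data:
  example.com: ""
'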
Here is a sample cm with two keys, key1 and key2; the output format is YAML:
k get cm my-config -o yaml
apiVersion: v1
data:
  key1: config1
  key2: config2
kind: ConfigMap
metadata:
  creationTimestamp: "2021-06-22T14:15:07Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:key1: {}
        f:key2: {}
    manager: kubectl-create
    operation: Update
    time: "2021-06-22T14:15:07Z"
  name: my-config
  namespace: default
  resourceVersion: "755842"
  selfLink: /api/v1/namespaces/default/configmaps/my-config
  uid: 18d87151-ae27-4aa1-8cf1-eee609c0dd7f
Patching the cm:
k patch cm my-config -p $'data:\n key1: "new_config1"'
configmap/my-config patched
Here is the updated cm:
k get cm my-config -o yaml
apiVersion: v1
data:
  key1: new_config1 # <---------- this is updated
  key2: config2
kind: ConfigMap
metadata:
  creationTimestamp: "2021-06-22T14:15:07Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:key2: {}
    manager: kubectl-create
    operation: Update
    time: "2021-06-22T14:15:07Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        f:key1: {}
    manager: kubectl-patch
    operation: Update
    time: "2021-06-22T14:16:11Z"
  name: my-config
  namespace: default
  resourceVersion: "755928"
  selfLink: /api/v1/namespaces/default/configmaps/my-config
  uid: 18d87151-ae27-4aa1-8cf1-eee609c0dd7f
Similarly, you can output the cm in JSON and build the patch after printing the cm in JSON format.
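For example, a sketch of the same change expressed as a JSON patch (using the default strategic merge patch):
k patch cm my-config -p '{"data":{"key1":"new_config1"}}'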
I have the following Minikube default service account:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10953591"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
I can add a new imagePullSecrets entry using the following kubectl patch command:
kubectl patch serviceaccount default --type=json -p '[{"op": "add", "path": "/imagePullSecrets/-", "value": {name: artifactory-credentials}}]'
Here's the updated default service account:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
- name: artifactory-credentials
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10956724"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
However, when I run the kubectl patch command a second time, a duplicate imagePullSecrets entry is added:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
- name: artifactory-credentials
- name: artifactory-credentials
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10957065"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
How can I use kubectl patch to add an imagePullSecrets entry only when the entry doesn't already exist? I don't want duplicate imagePullSecrets entries.
I'm using Minikube v1.28.0 and kubectl client version v1.26.1 / server version v1.25.3 on Ubuntu 20.04.5 LTS.
AFAIK, unfortunately there is no such filter available in the official documentation. But we can work around it by using the general syntax like kubectl patch serviceaccount default --type=merge -p '{"imagePullSecrets":[{"name": "gcr-secret"},{"name": "artifactory-credentials"},{"name": "acr-secret"}]}' (note --type=merge rather than --type=json, since this payload is a merge patch that replaces the whole list). But we have to update all the imagePullSecrets every time.
As @Geoff Alexander mentioned, the other way is to get the details of the resource and validate whether the required property is already present, as mentioned in the above comment, e.g. kubectl get serviceaccount -o json or kubectl get serviceaccount -o yaml.
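For example, a shell sketch of that check-then-patch approach (assuming jq is installed, and that imagePullSecrets already exists on the service account, as in the question):
# Only add artifactory-credentials if it is not already listed.
if ! kubectl get serviceaccount default -o json \
    | jq -e '(.imagePullSecrets // []) | any(.name == "artifactory-credentials")' > /dev/null; then
  kubectl patch serviceaccount default --type=json \
    -p '[{"op": "add", "path": "/imagePullSecrets/-", "value": {"name": "artifactory-credentials"}}]'
fi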
I have a sidecar like this:
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: test
  namespace: testns
spec:
  workloadSelector:
    labels:
      app: test
  ...
and a kustomization like:
resources:
- ../../base
nameSuffix: -dev
But kustomize doesn't adapt the workloadSelector label app to test-dev as I would expect it to. The name suffix is only appended to the name of the Sidecar. Any ideas why?
By default kustomize namePrefix and nameSuffix only apply to metadata/name for all resources.
There is a set of configured nameReferences that will also be transformed with the appropriate name, but they are limited to references to resource names.
See here for more info: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md#prefixsuffix-transformer
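If the selector genuinely needs to differ per overlay, one workaround (a sketch, assuming a reasonably recent kustomize; depending on the version, the patch target may need the base or the transformed name) is to patch the label explicitly in the overlay:
resources:
- ../../base
nameSuffix: -dev
patches:
- target:
    kind: Sidecar
    name: test
  patch: |-
    - op: replace
      path: /spec/workloadSelector/labels/app
      value: test-dev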
OBSOLETE:
I keep this post for further reference, but you can find a better diagnosis (not solved yet, but worked around) in
Istio: RequestAuthentication jwksUri does not resolve internal services names
UPDATE:
In the Istio log we see the error below. uaa is a Kubernetes pod serving OAuth authentication/authorization. It is accessed with the name uaa from the normal services. I do not know why istiod cannot find the uaa host name. Do I have to use a specific name? (Remember, standard services find the uaa host perfectly.)
2021-03-03T18:39:36.750311Z error model Failed to fetch public key from "http://uaa:8090/uaa/token_keys": Get "http://uaa:8090/uaa/token_keys": dial tcp: lookup uaa on 10.96.0.10:53: no such host
2021-03-03T18:39:36.750364Z error Failed to fetch jwt public key from "http://uaa:8090/uaa/token_keys": Get "http://uaa:8090/uaa/token_keys": dial tcp: lookup uaa on 10.96.0.10:53: no such host
2021-03-03T18:39:36.753394Z info ads LDS: PUSH for node:product-composite-5cbf8498c7-jd4n5.chp18 resources:29 size:134.3kB
2021-03-03T18:39:36.754623Z info ads RDS: PUSH for node:product-composite-5cbf8498c7-jd4n5.chp18 resources:14 size:14.2kB
2021-03-03T18:39:36.790916Z warn ads ADS:LDS: ACK ERROR sidecar~10.1.1.56~product-composite-5cbf8498c7-jd4n5.chp18~chp18.svc.cluster.local-10 Internal:Error adding/updating listener(s) virtualInbound: Provider 'origins-0' in jwt_authn config has invalid local jwks: Jwks RSA [n] or [e] field is missing or has a parse error
2021-03-03T18:39:55.618106Z info ads ADS: "10.1.1.55:41162" sidecar~10.1.1.55~review-65b6886c89-bcv5f.chp18~chp18.svc.cluster.local-6 terminated rpc error: code = Canceled desc = context canceled
Original question
I have a service that is working fine after injecting the Istio sidecar into a standard Kubernetes pod.
I'm trying to add JWT authentication, and for this I'm following the official guide Authorization with JWT.
My problem is:
If I create the JWT resources (RequestAuthentication and AuthorizationPolicy) AFTER injecting the Istio dependencies, everything (seems to) work fine.
But if I create the JWT resources (RequestAuthentication and AuthorizationPolicy) first and then inject Istio, the pod doesn't start. After checking the logs, it seems the sidecar is not able to work (maybe checking the health?).
My code:
JWT Resources
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
  name: "ra-product-composite"
spec:
  selector:
    matchLabels:
      app: "product-composite"
  jwtRules:
  - issuer: "http://uaa:8090/uaa/oauth/token"
    jwksUri: "http://uaa:8090/uaa/token_keys"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "ap-product-composite"
spec:
  selector:
    matchLabels:
      app: "product-composite"
  action: ALLOW
#  rules:
#  - from:
#    - source:
#        requestPrincipals: ["http://uaa:8090/uaa/oauth/token/faf5e647-74ab-42cc-acdb-13cc9c573d5d"]
#  b99ccf71-50ed-4714-a7fc-e85ebae4a8bb
2- I use destination rules as follows
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: dr-product-composite
spec:
  host: product-composite
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
3- My service deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-composite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: product-composite
  template:
    metadata:
      labels:
        app: product-composite
        version: latest
    spec:
      containers:
      - name: comp
        image: bthinking/product-composite-service
        imagePullPolicy: Never
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "docker"
        - name: SPRING_CONFIG_LOCATION
          value: file:/config-repo/application.yml,file:/config-repo/product-composite.yml
        envFrom:
        - secretRef:
            name: rabbitmq-client-secrets
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: 350Mi
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /actuator/info
            port: 4004
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 2
          failureThreshold: 20
          successThreshold: 1
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /actuator/health
            port: 4004
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 2
          failureThreshold: 3
          successThreshold: 1
        volumeMounts:
        - name: config-repo-volume
          mountPath: /config-repo
      volumes:
      - name: config-repo-volume
        configMap:
          name: config-repo-product-composite
---
apiVersion: v1
kind: Service
metadata:
  name: product-composite
spec:
  selector:
    app: "product-composite"
  ports:
  - port: 80
    name: http
    targetPort: 80
  - port: 4004
    name: http-mgm
    targetPort: 4004
4- Error log in the pod (combined service and sidecar)
2021-03-02 19:34:41.315 DEBUG 1 --- [undedElastic-12] o.s.s.w.s.a.AuthorizationWebFilter : Authorization successful
2021-03-02 19:34:41.315 DEBUG 1 --- [undedElastic-12] .b.a.e.w.r.WebFluxEndpointHandlerMapping : [0e009bf1-133] Mapped to org.springframework.boot.actuate.endpoint.web.reactive.AbstractWebFluxEndpointHandlerMapping$ReadOperationHandler#e13aa23
2021-03-02 19:34:41.316 DEBUG 1 --- [undedElastic-12] ebSessionServerSecurityContextRepository : No SecurityContext found in WebSession: 'org.springframework.web.server.session.InMemoryWebSessionStore$InMemoryWebSession#48e89a58'
2021-03-02 19:34:41.319 DEBUG 1 --- [undedElastic-15] .s.w.r.r.m.a.ResponseEntityResultHandler : [0e009bf1-133] Using 'application/vnd.spring-boot.actuator.v3+json' given [*/*] and supported [application/vnd.spring-boot.actuator.v3+json, application/vnd.spring-boot.actuator.v2+json, application/json]
2021-03-02 19:34:41.320 DEBUG 1 --- [undedElastic-15] .s.w.r.r.m.a.ResponseEntityResultHandler : [0e009bf1-133] 0..1 [java.util.Collections$UnmodifiableMap<?, ?>]
2021-03-02 19:34:41.321 DEBUG 1 --- [undedElastic-15] o.s.http.codec.json.Jackson2JsonEncoder : [0e009bf1-133] Encoding [{}]
2021-03-02 19:34:41.326 DEBUG 1 --- [or-http-epoll-3] r.n.http.server.HttpServerOperations : [id: 0x0e009bf1, L:/127.0.0.1:4004 - R:/127.0.0.1:57138] Detected non persistent http connection, preparing to close
2021-03-02 19:34:41.327 DEBUG 1 --- [or-http-epoll-3] o.s.w.s.adapter.HttpWebHandlerAdapter : [0e009bf1-133] Completed 200 OK
2021-03-02 19:34:41.327 DEBUG 1 --- [or-http-epoll-3] r.n.http.server.HttpServerOperations : [id: 0x0e009bf1, L:/127.0.0.1:4004 - R:/127.0.0.1:57138] Last HTTP response frame
2021-03-02 19:34:41.328 DEBUG 1 --- [or-http-epoll-3] r.n.http.server.HttpServerOperations : [id: 0x0e009bf1, L:/127.0.0.1:4004 - R:/127.0.0.1:57138] Last HTTP packet was sent, terminating the channel
2021-03-02T19:34:41.871551Z warn Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 1 successful, 0 rejected; lds updates: 0 successful, 1 rejected
5- Istio injection
kubectl get deployment product-composite -o yaml | istioctl kube-inject -f - | kubectl apply -f -
NOTICE: I have checked a lot of posts on SO, and it seems that health checking creates a lot of problems with sidecars and other configurations. I have checked the guide Health Checking of Istio Services with no success. Specifically, I tried to disable the probe rewrite with sidecar.istio.io/rewriteAppHTTPProbers: "false", but it is worse (in this case, neither the sidecar nor the service starts).
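For reference, that annotation is set on the pod template, roughly like this (a sketch; per the Istio health-check guide, "true" rewrites the app's HTTP probes and "false" disables the rewrite):
template:
  metadata:
    annotations:
      sidecar.istio.io/rewriteAppHTTPProbers: "false"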
I have a file containing many Kubernetes YAML objects.
I am seeking a way of removing all K8s Secret YAML objects from the text file, identified by the "kind: Secret" string contained within the YAML block. This should remove everything from the "apiVersion" through to just before the "---" signifying the start of the next object.
I've looked into sed, Python, and yq with no luck.
The YAML may contain any number of Secrets, in any order.
How can I automate stripping out these "Secret" blocks?
apiVersion: v1
data:
  username: dGVzdAo=
  password: dGVzdHBhc3N3b3JkCg==
kind: Secret
metadata:
  name: my-secret-1
type: Opaque
---
apiVersion: v1
kind: Pod
metadata:
  name: test-site
  labels:
    app: web
spec:
  containers:
  - name: front-end
    image: nginx
    ports:
    - containerPort: 80
  - name: rss-reader
    image: nickchase/rss-php-nginx:v1
    ports:
    - containerPort: 88
---
apiVersion: v1
data:
  username: dGVzdAo=
  password: dGVzdHBhc3N3b3JkCg==
kind: Secret
metadata:
  name: my-secret-2
type: Opaque
---
yq can do this (it wraps jq underneath):
pip install yq
yq --yaml-output 'select(.kind != "Secret")' input.yaml
You might need to remove the null document at the end of your example; it caused a little bit of weirdness in the output.
Note that there is also a different yq utility that doesn't seem to do what jq does, so I'm not sure how to make that one work.
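For what it's worth, the Go-based yq (v4) appears to handle this with similar syntax, since select there also filters whole documents:
yq eval 'select(.kind != "Secret")' input.yaml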
What about a shell script that splits the file at every occurrence of --- using awk? (See sections 5 and 6 of this link for an example.) That way the script can evaluate each part separately and send those that do not correspond to a Secret to a new output file, as in the sketch below.
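A minimal sketch of that idea (assuming GNU awk, whose record separator may be multi-character; it drops any document containing the text "kind: Secret" and re-emits the --- separators, so you may need to tidy a trailing ---):
awk 'BEGIN { RS = "---\n"; ORS = "---\n" } !/kind: Secret/' input.yaml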
Purely with regex (using the DOTALL/s flag, so that . also matches newlines), you might search for
(^|---).*?kind: Secret.*?(---|$)
and replace with:
---
Test here.
Note: at the end, you might have some extra --- separators, which you'll need to remove "manually", but that should not be a big deal.
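For instance, a hypothetical perl one-liner applying that regex (slurping the whole file with -0777 and using the s flag so . matches newlines):
perl -0777 -pe 's/(^|---).*?kind: Secret.*?(---|$)/---/gs' input.yaml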
Using kubectl version 1.18, on microk8s 1.18.3.
When getting a resource definition in YAML format, for example kubectl get pod/mypod-6f855c5fff-j8mrw -o yaml, the output contains a section related to metadata.managedFields.
Is there a way to hide that metadata.managedFields section to shorten the console output?
Below is an example of output to better illustrate the question.
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"productpage","service":"productpage"},"name":"productpage","namespace":"bookinfo"},"spec":{"ports":[{"name":"http","port":9080}],"selector":{"app":"productpage"}}}
  creationTimestamp: "2020-05-28T05:22:41Z"
  labels:
    app: productpage
    service: productpage
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app: {}
          f:service: {}
      f:spec:
        f:ports:
          .: {}
          k:{"port":9080,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl
    operation: Update
    time: "2020-05-28T05:22:41Z"
  name: productpage
  namespace: bookinfo
  resourceVersion: "121804"
  selfLink: /api/v1/namespaces/bookinfo/services/productpage
  uid: feb5a62b-8784-41d2-b104-bf6ebc4a2763
spec:
  clusterIP: 10.152.183.9
  ports:
  - name: http
    port: 9080
    protocol: TCP
    targetPort: 9080
  selector:
    app: productpage
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Kubectl 1.21 doesn't include managed fields by default anymore:
kubectl get now omits managed fields by default.
Users can set --show-managed-fields to true to show managedFields when the output format is either json or yaml.
https://github.com/kubernetes/kubernetes/pull/96878
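For example, on kubectl 1.21+:
kubectl get pod mypod -o yaml                         # managedFields omitted by default
kubectl get pod mypod -o yaml --show-managed-fields   # include them explicitly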
Check out this kubectl plugin: https://github.com/itaysk/kubectl-neat.
It not only removes managedFields but also many other fields users are not interested in.
For example: kubectl get pod mypod -o yaml | kubectl neat, or kubectl neat pod mypod -o yaml
For those who like to download the YAML and delete unwanted keys, try this:
Install yq, then try (please make sure you get yq version 4.x):
cat k8s-config.yaml | yq eval 'del(.status)' -
--OR--
kubectl --namespace {namespace} --context {cluster} get pod {podname} -o yaml | yq ...
You may chain more yq calls to delete more keys. Here is what I did:
cat k8s-config.yaml | yq eval 'del(.status)' - | yq eval 'del (.metadata.managedFields)' - | yq eval 'del (.metadata.annotations)' - | yq eval 'del (.spec.tolerations)' - | yq eval 'del(.metadata.ownerReferences)' - | yq eval 'del(.metadata.resourceVersion)' - | yq eval 'del(.metadata.uid)' - | yq eval 'del(.metadata.selfLink)' - | yq eval 'del(.metadata.creationTimestamp)' - | yq eval 'del(.metadata.generateName)' -
--OR--
cat k8s-config.yaml | yq eval 'del(.status)' - \
| yq eval 'del (.metadata.managedFields)' - \
| yq eval 'del (.metadata.annotations)' - \
| yq eval 'del (.spec.tolerations)' - \
| yq eval 'del(.metadata.ownerReferences)' - \
| yq eval 'del(.metadata.resourceVersion)' - \
| yq eval 'del(.metadata.uid)' - \
| yq eval 'del(.metadata.selfLink)' - \
| yq eval 'del(.metadata.creationTimestamp)' - \
| yq eval 'del(.metadata.generateName)' -
Another way is to have a neat() function in your ~/.bashrc or ~/.zshrc and call it as below:
neat() function:
neat () {
  yq eval 'del(.items[].metadata.managedFields,
    .metadata,
    .apiVersion,
    .items[].apiVersion,
    .items[].metadata.namespace,
    .items[].kind,
    .items[].status,
    .items[].metadata.annotations,
    .items[].metadata.resourceVersion,
    .items[].metadata.selfLink,
    .items[].metadata.uid,
    .items[].metadata.creationTimestamp,
    .items[].metadata.ownerReferences)' -
}
then:
kubectl get pods -o yaml | neat
cat k8s-config.yaml | neat
You may read more on yq delete here
I'd like to add some basic information about that feature:
managedFields is a section created by the ServerSideApply feature. It helps track changes to cluster objects made by different managers.
If you disable it in the kube-apiserver manifests, all objects created after this change won't have metadata.managedFields sections, but it doesn't affect existing objects.
Open the kube-apiserver manifest with your favorite text editor:
$ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add the command line argument to spec.containers.command:
- --feature-gates=ServerSideApply=false
kube-apiserver will restart immediately.
It usually takes a couple of minutes for the kube-apiserver to start serving requests again.
You can also disable ServerSideApply feature gate on the cluster creation stage.
Alternatively, managedFields can be patched to an empty list for the existing object:
$ kubectl patch pod podname -p '{"metadata":{"managedFields":[{}]}}'
This will overwrite the managedFields with a list containing a single empty entry, which results in the managedFields being stripped entirely from the object. Note that just setting the managedFields to an empty list will not reset the field. This is on purpose, so managedFields are never stripped by clients not aware of the field.
Now that --export is deprecated, to get the output from your resources in the 'original' format (just cleaned up, without any information you don't want in this situation) you can do the following using yq v4.x:
kubectl get <resource> -n <namespace> <resource-name> -o yaml \
| yq eval 'del(.metadata.resourceVersion, .metadata.uid, .metadata.annotations, .metadata.creationTimestamp, .metadata.selfLink, .metadata.managedFields)' -
The first thing that came to my mind was to use a stream editor like sed to skip the part beginning at managedFields: and running up to another specific pattern.
It's a bit hardcoded, as you need to specify two patterns, like managedFields: and an ending pattern like name: productpage, but it will work for this scenario. If this doesn't fit your case, please add more details about how you would like to achieve this.
The sed command would look like:
sed -n '/(Pattern1)/{p; :a; N; /(Pattern2)/!ba; s/.*\n//}; p'
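In short: -n suppresses automatic printing; when Pattern1 matches, p prints that line; the :a; N; /Pattern2/!ba loop then keeps appending the following lines to the pattern space until Pattern2 matches; s/.*\n// discards everything except the last appended line; and the trailing p prints whatever remains.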
For example, I've used an Nginx pod:
$ kubectl get po nginx -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      nginx'
  creationTimestamp: "2020-05-29T10:54:18Z"
  ...
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
  ...
status:
  conditions:
    ...
    startedAt: "2020-05-29T10:54:19Z"
  hostIP: 10.154.0.29
  phase: Running
  podIP: 10.52.1.6
  podIPs:
  - ip: 10.52.1.6
  qosClass: Burstable
  startTime: "2020-05-29T10:54:18Z"
After using sed:
$ kubectl get po nginx -o yaml | sed -n '/annotations:/{p; :a; N; /hostIP: 10.154.0.29/!ba; s/.*\n//}; p'
apiVersion: v1
kind: Pod
metadata:
  annotations:
  hostIP: 10.154.0.29
  phase: Running
  podIP: 10.52.1.6
  podIPs:
  - ip: 10.52.1.6
  qosClass: Burstable
  startTime: "2020-05-29T10:54:18Z"
In your case, a command like:
$ kubectl get pod/mypod-6f855c5fff-j8mrw -o yaml | sed -n '/managedFields:/{p; :a; N; /name: productpage/!ba; s/.*\n//}; p'
should give output like:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"productpage","service":"productpage"},"name":"productpage","namespace":"bookinfo"},"spec":{"ports":[{"name":"http","port":9080}],"selector":{"app":"productpage"}}}
  creationTimestamp: "2020-05-28T05:22:41Z"
  labels:
    app: productpage
    service: productpage
  managedFields:
  name: productpage
  namespace: bookinfo
  resourceVersion: "121804"
  selfLink: /api/v1/namespaces/bookinfo/services/productpage
  uid: feb5a62b-8784-41d2-b104-bf6ebc4a2763
spec:
  clusterIP: 10.152.183.9
  ports:
  - name: http
    port: 9080
    protocol: TCP
    targetPort: 9080
  selector:
    app: productpage
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}