How to use kubectl patch to add list entry without duplicate? - kubectl

I have the following Minikube default service account:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10953591"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
I can add a new imagePullSecrets entry using the following kubectl patch command:
kubectl patch serviceaccount default --type=json -p '[{"op": "add", "path": "/imagePullSecrets/-", "value": {"name": "artifactory-credentials"}}]'
Here's the updated default service account:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
- name: artifactory-credentials
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10956724"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
However, when I run the kubectl patch command a second time, a duplicate imagePullSecrets entry is added:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
- name: artifactory-credentials
- name: artifactory-credentials
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10957065"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
How can I use kubectl patch to add an imagePullSecrets entry only when the entry doesn't already exist? I don't want duplicate imagePullSecrets entries.
I'm using Minikube v1.28.0 and kubectl client version v1.26.1 / server version v1.25.3 on Ubuntu 20.04.5 LTS.

AFAIK there is unfortunately no such filter available in the official documentation. As a workaround, we can replace the whole list with a merge patch, for example: kubectl patch serviceaccount default --type=merge -p '{"imagePullSecrets":[{"name": "gcr-secret"},{"name": "artifactory-credentials"},{"name": "acr-secret"}]}'. The drawback is that we have to specify all of the imagePullSecrets every time.
As @Geoff Alexander mentioned, the other way is to get the details of the resource and check whether the required entry is already present, for example with kubectl get serviceaccount -o json or kubectl get serviceaccount -o yaml.
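For example, a minimal shell sketch of that check-then-patch idea (assuming jq is installed; the secret name artifactory-credentials is taken from the question):
# Append the imagePullSecrets entry only if it is not already present.
SECRET_NAME=artifactory-credentials
if ! kubectl get serviceaccount default -o json \
    | jq -e --arg name "$SECRET_NAME" '.imagePullSecrets // [] | any(.name == $name)' >/dev/null; then
  kubectl patch serviceaccount default --type=json \
    -p "[{\"op\": \"add\", \"path\": \"/imagePullSecrets/-\", \"value\": {\"name\": \"$SECRET_NAME\"}}]"
fi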

Related

OpenShift Dockerfile Build that references an ImageStream?

I would like to build an image from a Dockerfile using an OpenShift BuildConfig that references an existing ImageStream in the FROM line. That is, if I have:
$ oc get imagestream openshift-build-example -o yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: openshift-build-example
  namespace: sandbox
spec:
  lookupPolicy:
    local: true
I would like to be able to submit a build that uses a Dockerfile like
this:
FROM openshift-build-example:parent
But this doesn't work. If I use a fully qualified image specification,
like this...
FROM image-registry.openshift-image-registry.svc:5000/sandbox/openshift-build-example:parent
...it works, but this is problematic, because it requires referencing
the namespace in the image specification. This means the builds can't
be conveniently deployed into another namespace.
Is there any way to make this work?
For reference purposes, the build is configured in the following
BuildConfig resource:
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: buildconfig-child
spec:
  failedBuildsHistoryLimit: 5
  successfulBuildsHistoryLimit: 5
  output:
    to:
      kind: ImageStreamTag
      name: openshift-build-example:child
  runPolicy: Serial
  source:
    git:
      ref: main
      uri: https://github.com/larsks/openshift-build-example
    type: Git
    contextDir: image/child
  strategy:
    dockerStrategy:
      dockerfilePath: Dockerfile
    type: Docker
  triggers:
  - type: "GitHub"
    github:
      secretReference:
        name: "buildconfig-child-webhook"
  - type: "Generic"
    generic:
      secret: "buildconfig-child-webhook"
And the referenced Dockerfile is:
# FIXME
FROM openshift-build-example:parent
COPY index.html /var/www/localhost/htdocs/index.html
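One approach worth noting (this is not from the original post, just a hedged sketch): OpenShift's Docker build strategy can take its base image from an ImageStreamTag via spec.strategy.dockerStrategy.from, which overrides the FROM line of the Dockerfile and keeps the namespace out of the Dockerfile itself. For example, using the names from the BuildConfig above:
# Sketch: point the Docker build strategy at the ImageStreamTag so the
# Dockerfile no longer needs a fully qualified registry reference.
oc patch bc/buildconfig-child --type=merge -p '{
  "spec": {
    "strategy": {
      "dockerStrategy": {
        "from": {"kind": "ImageStreamTag", "name": "openshift-build-example:parent"}
      }
    }
  }
}'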

use AWS Secrets & Configuration Provider for EKS: Error from server (BadRequest)

I'm following this AWS documentation which explains how to properly configure AWS Secrets Manager so it works with EKS through Kubernetes Secrets.
I successfully followed step by step all the different commands as explained in the documentation.
The only difference I get is related to this step where I have to run:
kubectl get po --namespace=kube-system
The expected output should be:
csi-secrets-store-qp9r8 3/3 Running 0 4m
csi-secrets-store-zrjt2 3/3 Running 0 4m
but instead I get:
csi-secrets-store-provider-aws-lxxcz 1/1 Running 0 5d17h
csi-secrets-store-provider-aws-rhnc6 1/1 Running 0 5d17h
csi-secrets-store-secrets-store-csi-driver-ml6jf 3/3 Running 0 5d18h
csi-secrets-store-secrets-store-csi-driver-r5cbk 3/3 Running 0 5d18h
As you can see the names are different, but I'm quite sure it's ok :-)
The real problem starts here in step 4: I created the following YAML file (as you can see I added some parameters):
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: aws-secrets
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "mysecret"
        objectType: "secretsmanager"
And finally I created a deployment (as explained here in step 5) using the following YAML file:
# test-deployment.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-secrets-store-inline
spec:
  serviceAccountName: iamserviceaccountforkeyvaultsecretmanagerresearch
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: mysecret-volume
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: mysecret-volume
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "aws-secrets"
After the deployment through the command:
kubectl apply -f test-deployment.yaml -n mynamespace
The pod is not able to start properly because the following error is generated:
Error from server (BadRequest): container "nginx" in pod "nginx-secrets-store-inline" is waiting to start: ContainerCreating
But, for example, if I run the deployment with the following YAML, the pod is created successfully:
# test-deployment.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-secrets-store-inline
spec:
  serviceAccountName: iamserviceaccountforkeyvaultsecretmanagerresearch
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: keyvault-credential-volume
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: keyvault-credential-volume
    emptyDir: {} # <<== !! LOOK HERE !!
as you can see I used
emptyDir: {}
So as far as I can see, the problem here is related to the following YAML lines:
csi:
  driver: secrets-store.csi.k8s.io
  readOnly: true
  volumeAttributes:
    secretProviderClass: "aws-secrets"
To be honest, it's not even clear in my mind what's happening here.
Perhaps I didn't properly enable the volume permissions in EKS?
Sorry, but I'm a newbie in both AWS and Kubernetes configurations.
Thanks for your time.
--- NEW INFO ---
If I run
kubectl describe pod nginx-secrets-store-inline -n mynamespace
where nginx-secrets-store-inline is the name of the pod, I get the following output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30s default-scheduler Successfully assigned mynamespace/nginx-secrets-store-inline to ip-10-0-24-252.eu-central-1.compute.internal
Warning FailedMount 14s (x6 over 29s) kubelet MountVolume.SetUp failed for volume "keyvault-credential-volume" : rpc error: code = Unknown desc = failed to get secretproviderclass mynamespace/aws-secrets, error: SecretProviderClass.secrets-store.csi.x-k8s.io "aws-secrets" not found
Any hints?
Finally I realized why it wasn't working. As explained here, the error:
Warning FailedMount 3s (x4 over 6s) kubelet, kind-control-plane MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to get secretproviderclass default/azure, error: secretproviderclasses.secrets-store.csi.x-k8s.io "azure" not found
is related to the namespace:
The SecretProviderClass being referenced in the volumeMount needs to exist in the same namespace as the application pod.
So both YAML files should be deployed in the same namespace (adding, for example, the -n mynamespace argument).
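For example (a sketch using the manifests from the question; secretproviderclass.yaml is just an assumed file name for the SecretProviderClass above):
# Apply the SecretProviderClass and the pod into the same namespace so the
# CSI driver can resolve the reference at mount time.
kubectl apply -f secretproviderclass.yaml -n mynamespace
kubectl apply -f test-deployment.yaml -n mynamespace
# The events should no longer report the FailedMount / "not found" error.
kubectl describe pod nginx-secrets-store-inline -n mynamespace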
Finally I got it working!

Kustomize doesn't adapt workloadSelector label of sidecar when using nameSuffix?

I have a sidecar like this:
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: test
  namespace: testns
spec:
  workloadSelector:
    labels:
      app: test
...
and a kustomization like:
resources:
- ../../base
nameSuffix: -dev
But kustomize doesn't adapt the workloadSelector label app to test-dev as I would expect it to do. The name suffix is only appended to the name of the sidecar. Any ideas why?
By default kustomize namePrefix and nameSuffix only apply to metadata/name for all resources.
There are a set of configured nameReferences that will also be transformed with the appropriate name, but they are limited to resource names.
See here for more info: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md#prefixsuffix-transformer

How can I automate the removal of kubernetes secrets from a yaml file?

I have a file containing many Kubernetes YAML objects.
I am seeking a way of removing all K8s Secret YAML objects from the text file, identified by the "kind: Secret" string contained within the YAML block. This should remove everything from the "apiVersion" through to just before the "---" signifying the start of the next object.
I've looked into Sed, Python and yq tools with no luck.
The YAML may contain any number of secrets in any order.
How can I automate stripping out of these "Secret" blocks?
apiVersion: v1
data:
  username: dGVzdAo=
  password: dGVzdHBhc3N3b3JkCg==
kind: Secret
metadata:
  name: my-secret-1
type: Opaque
---
apiVersion: v1
kind: Pod
metadata:
  name: test-site
  labels:
    app: web
spec:
  containers:
  - name: front-end
    image: nginx
    ports:
    - containerPort: 80
  - name: rss-reader
    image: nickchase/rss-php-nginx:v1
    ports:
    - containerPort: 88
---
apiVersion: v1
data:
  username: dGVzdAo=
  password: dGVzdHBhc3N3b3JkCg==
kind: Secret
metadata:
  name: my-secret-2
type: Opaque
---
yq can do this (and jq underneath)
pip install yq
yq --yaml-output 'select(.kind != "Secret")' input.yaml
You might need to remove the null document at the end of your example; it caused a little bit of weirdness in the output.
Note that there is also a different yq utility that doesn't seem to do what jq does, so I'm not sure how to make that one work.
What about a shell script that splits the file at every occurrence of --- by using awk? (See sections 5 and 6 of this link for an example of that.) In this way, the script can evaluate each part separately and send those that do not correspond to a Secret to a new output file.
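A minimal sketch of that idea, assuming GNU awk (gawk) so the record separator can be the multi-character --- line:
# Treat each '---'-separated document as one awk record and keep only the
# documents that do not contain a "kind: Secret" line. The placement of
# leading/trailing '---' separators may need a small manual touch-up.
gawk 'BEGIN { RS = "---\n"; ORS = "---\n" } $0 !~ /(^|\n)kind: Secret(\n|$)/ { print }' input.yaml > filtered.yaml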
Purely with regex, you might search for
(^|---).*?kind: Secret.*?(---|$)
and replace with:
---
Note: at the end, you might have some extra --- which you need to remove "manually" - but that should not be a big deal.
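As a concrete (hedged) way to apply that regex from the command line, one option is Perl in slurp mode, so that . can match across newlines:
# -0777 slurps the whole file; /s lets '.' match newlines and /g removes
# every Secret block, leaving a '---' in its place.
perl -0777 -pe 's/(^|---).*?kind: Secret.*?(---|$)/---/sg' input.yaml > filtered.yaml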

How to get cluster subdomain in kubernetes deployment config template

On kubernetes 1.6.1 (Openshift 3.6 CP) I'm trying to get the subdomain of my cluster using $(OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN), but it's not dereferencing at runtime. Not sure what I'm doing wrong; the docs show this is how environment parameters should be acquired.
https://v1-6.docs.kubernetes.io/docs/api-reference/v1.6/#container-v1-core
- apiVersion: v1
  kind: DeploymentConfig
  spec:
    template:
      metadata:
        labels:
          deploymentconfig: ${APP_NAME}
        name: ${APP_NAME}
      spec:
        containers:
        - name: myapp
          env:
          - name: CLOUD_CLUSTER_SUBDOMAIN
            value: $(OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN)
You'll need to set that value as an environment variable; this is the usage:
oc set env <object-selection> KEY_1=VAL_1
for example if your pod is named foo and your subdomain is foo.bar, you would use this command:
oc set env dc/foo OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN=foo.bar
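As a small follow-up sketch (dc/foo and foo.bar come from the answer above): after setting the variable you can list the container environment to confirm it is present; $(VAR) references in a value only resolve against variables that are actually defined for that container.
# Set the variable on the DeploymentConfig, then list the resulting env to
# confirm OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN is now defined for the container.
oc set env dc/foo OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN=foo.bar
oc set env dc/foo --list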