Kustomize doesn't adapt workloadSelector label of sidecar when using nameSuffix? - istio

I have a sidecar like this:
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: test
  namespace: testns
spec:
  workloadSelector:
    labels:
      app: test
...
and a kustomization like:
resources:
- ../../base
nameSuffix: -dev
But kustomize doesn't adapt the workloadSelector label app to test-dev as I would expect it to; the name suffix is only appended to the name of the Sidecar. Any ideas why?

By default, kustomize's namePrefix and nameSuffix only apply to metadata/name for all resources.
There is also a set of configured nameReferences that will be updated with the transformed name, but they are limited to resource name fields, not arbitrary labels.
See here for more info: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md#prefixsuffix-transformer
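If you need the label value to carry the suffix as well, one workaround is to patch it explicitly in the overlay. A minimal sketch (this hard-codes the -dev suffix rather than deriving it from the transformer, and assumes the base Sidecar shown above):
# overlay kustomization.yaml
resources:
- ../../base
nameSuffix: -dev
patches:
- target:
    kind: Sidecar
    name: test
  patch: |-
    - op: replace
      path: /spec/workloadSelector/labels/app
      value: test-dev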

Related

How to use kubectl patch to add list entry without duplicate?

I have the following Minikube default service account:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10953591"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
I can add a new imagePullSecrets entry using the following kubectl patch command:
kubectl patch serviceaccount default --type=json -p '[{"op": "add", "path": "/imagePullSecrets/-", "value": {name: artifactory-credentials}}]'
Here's the updated default service account:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
- name: artifactory-credentials
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10956724"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
However, when I run the kubectl patch command a second time, a duplicate imagePullSecrets entry is added:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
- name: artifactory-credentials
- name: artifactory-credentials
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10957065"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
How can I use kubectl patch to add an imagePullSecrets entry only when the entry doesn't already exist? I don't want duplicate imagePullSecrets entries.
I'm using Minikube v1.28.0 and kubectl client version v1.26.1 / server version v1.25.3 on Ubuntu 20.04.5 LTS.
AFAIK, unfortunately there is no such filter available in the official documentation. As a workaround, you can replace the whole list in a single patch, e.g. kubectl patch serviceaccount default --type=merge -p '{"imagePullSecrets":[{"name": "gcr-secret"},{"name": "artifactory-credentials"},{"name": "acr-secret"}]}', but then you have to spell out all of the imagePullSecrets every time.
As @Geoff Alexander mentioned, the other way is to get the details of the resource and check whether the required property is already present, e.g. with kubectl get serviceaccount default -o json or kubectl get serviceaccount default -o yaml.
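Building on that, one way to make the patch effectively idempotent is to test for the entry first and only patch when it is missing. A minimal sketch, assuming jq is installed and that the imagePullSecrets list already exists (as in the question):
SECRET=artifactory-credentials
# only append if no existing entry has this name
if ! kubectl get serviceaccount default -o json \
    | jq -e --arg name "$SECRET" '.imagePullSecrets // [] | any(.name == $name)' >/dev/null; then
  kubectl patch serviceaccount default --type=json \
    -p "[{\"op\": \"add\", \"path\": \"/imagePullSecrets/-\", \"value\": {\"name\": \"$SECRET\"}}]"
fi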

OpenShift Dockerfile Build that references an ImageStream?

I would like to build an image from a Dockerfile using an OpenShift BuildConfig that references an existing ImageStream in the FROM line. That is, if I have:
$ oc get imagestream openshift-build-example -o yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: openshift-build-example
  namespace: sandbox
spec:
  lookupPolicy:
    local: true
I would like to be able to submit a build that uses a Dockerfile like
this:
FROM openshift-build-example:parent
But this doesn't work. If I use a fully qualified image specification,
like this...
FROM image-registry.openshift-image-registry.svc:5000/sandbox/openshift-build-example:parent
...it works, but this is problematic, because it requires referencing
the namespace in the image specification. This means the builds can't
be conveniently deployed into another namespace.
Is there any way to make this work?
For reference purposes, the build is configured in the following
BuildConfig resource:
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: buildconfig-child
spec:
  failedBuildsHistoryLimit: 5
  successfulBuildsHistoryLimit: 5
  output:
    to:
      kind: ImageStreamTag
      name: openshift-build-example:child
  runPolicy: Serial
  source:
    git:
      ref: main
      uri: https://github.com/larsks/openshift-build-example
    type: Git
    contextDir: image/child
  strategy:
    dockerStrategy:
      dockerfilePath: Dockerfile
    type: Docker
  triggers:
  - type: "GitHub"
    github:
      secretReference:
        name: "buildconfig-child-webhook"
  - type: "Generic"
    generic:
      secret: "buildconfig-child-webhook"
And the referenced Dockerfile is:
# FIXME
FROM openshift-build-example:parent
COPY index.html /var/www/localhost/htdocs/index.html
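For what it's worth, OpenShift's Docker build strategy can also override the Dockerfile's FROM via strategy.dockerStrategy.from pointing at an ImageStreamTag, which keeps the namespace out of the Dockerfile itself. A hedged sketch of that fragment in the BuildConfig above (untested for this particular setup):
  strategy:
    dockerStrategy:
      dockerfilePath: Dockerfile
      from:
        kind: ImageStreamTag
        name: openshift-build-example:parent
    type: Docker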

Fixing DataDog agent congestion issues in Amazon EKS cluster

A few months ago I integrated DataDog into my Kubernetes cluster by using a DaemonSet configuration. Since then I've been getting congestion alerts with the following message:
Please tune the hot-shots settings
https://github.com/brightcove/hot-shots#errors
By attempting to follow the docs with my limited Orchestration/DevOps knowledge, what I could gather is that I need to add the following to my DaemonSet config:
spec:
  ...
  securityContext:
    sysctls:
    - name: net.unix.max_dgram_qlen
      value: "1024"
    - name: net.core.wmem_max
      value: "4194304"
I attempted to add that configuration piece directly to one of the auto-deployed DataDog pods just to try it out (instead of adding it to the DaemonSet and risking bringing all agents down), but the edit hangs indefinitely and the configuration is never saved.
That hot-shots documentation also mentions that the above sysctl configuration requires unsafe sysctls to be enabled in the nodes that contain the pods:
kubelet --allowed-unsafe-sysctls \
'net.unix.max_dgram_qlen, net.core.wmem_max'
The cluster I am working with is fully deployed with EKS using the Dashboard in AWS (I have little knowledge of how it is configured). The above seems to be intended for manually deployed and managed clusters.
Why is the configuration I am attempting to apply to a single DataDog agent pod not saving/applying? Is it because it is managed by DaemonSet or is it because it doesn't have the proper unsafe sysctl allowed? Something else?
If I do need to enable the suggested unsafe sysctls on all nodes of my cluster, how do I go about it, given that the cluster is fully deployed and managed by Amazon EKS?
So we managed to achieve this using a custom launch template with our managed node group and then passing in a custom bootstrap script. This does mean, however, that you need to supply the AMI ID yourself and you lose the alerts in the console when it is outdated. In Terraform this would look like:
resource "aws_eks_node_group" "group" {
...
launch_template {
id = aws_launch_template.nodes.id
version = aws_launch_template.nodes.latest_version
}
...
}
data "template_file" "bootstrap" {
template = file("${path.module}/files/bootstrap.tpl")
vars = {
cluster_name = aws_eks_cluster.cluster.name
cluster_auth_base64 = aws_eks_cluster.cluster.certificate_authority.0.data
endpoint = aws_eks_cluster.cluster.endpoint
}
}
data "aws_ami" "eks_node" {
owners = ["602401143452"]
most_recent = true
filter {
name = "name"
values = ["amazon-eks-node-1.21-v20211008"]
}
}
resource "aws_launch_template" "nodes" {
...
image_id = data.aws_ami.eks_node.id
user_data = base64encode(data.template_file.bootstrap.rendered)
...
}
Then the bootstrap.tpl template looks like this:
#!/bin/bash
set -o xtrace
systemctl stop kubelet
/etc/eks/bootstrap.sh '${cluster_name}' \
--b64-cluster-ca '${cluster_auth_base64}' \
--apiserver-endpoint '${endpoint}' \
--kubelet-extra-args '"--allowed-unsafe-sysctls=net.unix.max_dgram_qlen"'
The next step is to set up the PodSecurityPolicy, ClusterRole and RoleBinding in your cluster so you can use the securityContext as you described above and then pods in that namespace will be able to run without a SysctlForbidden message.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: sysctl
spec:
  allowPrivilegeEscalation: false
  allowedUnsafeSysctls:
  - net.unix.max_dgram_qlen
  defaultAllowPrivilegeEscalation: false
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: allow-sysctl
rules:
- apiGroups:
  - policy
  resourceNames:
  - sysctl
  resources:
  - podsecuritypolicies
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-sysctl
  namespace: app-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: allow-sysctl
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:app-namespace
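As a sanity check on the binding, something like this should answer yes for service accounts in app-namespace (names taken from the YAML above):
kubectl auth can-i use podsecuritypolicy/sysctl \
  --as=system:serviceaccount:app-namespace:default \
  --as-group=system:serviceaccounts:app-namespace \
  -n app-namespace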
If using the DataDog Helm chart, you can set the following values to update the securityContext of the agent, but you will have to update the chart's PSP manually to set allowedUnsafeSysctls:
datadog:
  securityContext:
    sysctls:
    - name: net.unix.max_dgram_qlen
      value: "512"
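Applied with something along these lines (release name, chart name and namespace are assumptions; adjust to your install):
helm upgrade datadog datadog/datadog -n datadog -f values.yaml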

Istio 1.5.2: how to apply an AuthorizationPolicy with HTTP-conditions to a service?

Problem
We run Istio on our Kubernetes cluster and we're implementing AuthorizationPolicies.
We want to apply a filter on email address, an HTTP-condition only applicable to HTTP services.
Our Kiali service should be an HTTP service (it has an HTTP port, an HTTP listener, and even has HTTP conditions applied to its filters), and yet the AuthorizationPolicy does not work.
What gives?
Our setup
We have a management namespace with an ingressgateway (port 443), and a gateway+virtual service for Kiali.
These latter two point to the Kiali service in the kiali namespace.
Both the management and kiali namespace have a deny-all policy and an allow policy to make an exception for particular users.
(See AuthorizationPolicy YAMLs below.)
Authorization on the management ingress gateway works.
The ingress gateway has 3 listeners, all HTTP, and HTTP conditions are created and applied as you would expect.
You can visit its backend services other than Kiali if you're on the email list, and you cannot do so if you're not on the email list.
Authorization on the Kiali service does not work.
It has 99 listeners (!), including an HTTP listener on its configured 20001 port and its IP, but it does not work.
You cannot visit the Kiali service (due to the default deny-all policy).
The Kiali service has port 20001 enabled and named 'http-kiali', so the VirtualService should be ok with that. (See YAMLs for the service and virtual service below.)
EDIT: it was suggested that the syntax of the email values matters.
I think that has been taken care of:
In the management namespace, the YAML below works as expected.
In the kiali namespace, the same YAML fails to work as expected.
The empty brackets in the 'property(map[request.auth.claims[email]:{[brackets@test.com] []}])' message are, I think, the Values (present) and NotValues (absent), respectively, as per 'constructed internal model: &{Permissions:[{Properties:[map[request.auth.claims[email]:{Values:[brackets@test.com] NotValues:[]}]]}]}'.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: testpolicy-brackets
  namespace: kiali
spec:
  action: ALLOW
  rules:
  - when:
    - key: source.namespace
      values: ["brackets"]
    - key: request.auth.claims[email]
      values: ["brackets@test.com"]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: testpolicy-yamllist
  namespace: kiali
spec:
  action: ALLOW
  rules:
  - when:
    - key: source.namespace
      values:
      - list
    - key: request.auth.claims[email]
      values:
      - list@test.com
debug rbac found authorization allow policies for workload [app=kiali,pod-template-hash=5c97c4bb66,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=kiali,service.istio.io/canonical-revision=v1.16.0,version=v1.16.0] in kiali
debug rbac constructed internal model: &{Permissions:[{Services:[] Hosts:[] NotHosts:[] Paths:[] NotPaths:[] Methods:[] NotMethods:[] Ports:[] NotPorts:[] Constraints:[] AllowAll:true v1beta1:true}] Principals:[{Users:[] Names:[] NotNames:[] Group: Groups:[] NotGroups:[] Namespaces:[] NotNamespaces:[] IPs:[] NotIPs:[] RequestPrincipals:[] NotRequestPrincipals:[] Properties:[map[source.namespace:{Values:[brackets] NotValues:[]}] map[request.auth.claims[email]:{Values:[brackets@test.com] NotValues:[]}]] AllowAll:false v1beta1:true}]}
debug rbac generated policy ns[kiali]-policy[testpolicy-brackets]-rule[0]: permissions:<and_rules:<rules:<any:true > > > principals:<and_ids:<ids:<or_ids:<ids:<metadata:<filter:"istio_authn" path:<key:"source.principal" > value:<string_match:<safe_regex:<google_re2:<> regex:".*/ns/brackets/.*" > > > > > > > ids:<or_ids:<ids:<metadata:<filter:"istio_authn" path:<key:"request.auth.claims" > path:<key:"email" > value:<list_match:<one_of:<string_match:<exact:"brackets@test.com" > > > > > > > > > >
debug rbac ignored HTTP principal for TCP service: property(map[request.auth.claims[email]:{[brackets@test.com] []}])
debug rbac role skipped for no principals found
debug rbac found authorization allow policies for workload [app=kiali,pod-template-hash=5c97c4bb66,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=kiali,service.istio.io/canonical-revision=v1.16.0,version=v1.16.0] in kiali
debug rbac constructed internal model: &{Permissions:[{Services:[] Hosts:[] NotHosts:[] Paths:[] NotPaths:[] Methods:[] NotMethods:[] Ports:[] NotPorts:[] Constraints:[] AllowAll:true v1beta1:true}] Principals:[{Users:[] Names:[] NotNames:[] Group: Groups:[] NotGroups:[] Namespaces:[] NotNamespaces:[] IPs:[] NotIPs:[] RequestPrincipals:[] NotRequestPrincipals:[] Properties:[map[source.namespace:{Values:[list] NotValues:[]}] map[request.auth.claims[email]:{Values:[list@test.com] NotValues:[]}]] AllowAll:false v1beta1:true}]}
debug rbac generated policy ns[kiali]-policy[testpolicy-yamllist]-rule[0]: permissions:<and_rules:<rules:<any:true > > > principals:<and_ids:<ids:<or_ids:<ids:<metadata:<filter:"istio_authn" path:<key:"source.principal" > value:<string_match:<safe_regex:<google_re2:<> regex:".*/ns/list/.*" > > > > > > > ids:<or_ids:<ids:<metadata:<filter:"istio_authn" path:<key:"request.auth.claims" > path:<key:"email" > value:<list_match:<one_of:<string_match:<exact:"list@test.com" > > > > > > > > > >
debug rbac ignored HTTP principal for TCP service: property(map[request.auth.claims[email]:{[list@test.com] []}])
debug rbac role skipped for no principals found
(Follows: a list of YAMLs mentioned above)
# Cluster AuthorizationPolicies
## Management namespace
Name: default-deny-all-policy
Namespace: management
API Version: security.istio.io/v1beta1
Kind: AuthorizationPolicy
Spec:
---
Name: allow-specified-email-addresses
Namespace: management
API Version: security.istio.io/v1beta1
Kind: AuthorizationPolicy
Spec:
  Action: ALLOW
  Rules:
    When:
      Key: request.auth.claims[email]
      Values:
        my.email@my.provider.com
---
## Kiali namespace
Name: default-deny-all-policy
Namespace: kiali
API Version: security.istio.io/v1beta1
Kind: AuthorizationPolicy
Spec:
---
Name: allow-specified-email-addresses
Namespace: kiali
API Version: security.istio.io/v1beta1
Kind: AuthorizationPolicy
Spec:
  Action: ALLOW
  Rules:
    When:
      Key: request.auth.claims[email]
      Values:
        my.email@my.provider.com
---
# Kiali service YAML
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kiali
    version: v1.16.0
  name: kiali
  namespace: kiali
spec:
  clusterIP: 10.233.18.102
  ports:
  - name: http-kiali
    port: 20001
    protocol: TCP
    targetPort: 20001
  selector:
    app: kiali
    version: v1.16.0
  sessionAffinity: None
  type: ClusterIP
---
# Kiali VirtualService YAML
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kiali-virtualservice
  namespace: management
spec:
  gateways:
  - kiali-gateway
  hosts:
  - our_external_kiali_url
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: kiali.kiali.svc.cluster.local
        port:
          number: 20001
Marking as solved: I had forgotten to apply a RequestAuthentication to the Kiali namespace.
The problematic situation, with the fix described in the third point below:
RequestAuthentication on the management namespace adds a user JWT (through an EnvoyFilter that forwards requests to an authentication service)
AuthorizationPolicy on the management namespace checks the request.auth.claims[email]. These fields exist in the JWT and all is well.
RequestAuthentication on the Kiali namespace missing. I fixed the problem by adding a RequestAuthentication for the Kiali namespace, which populates the user information, which allows the AuthorizationPolicy to perform its checks on actually existing fields.
AuthorizationPolicy on the Kiali namespace also checks the request.auth.claims[email] field, but since there is no authentication, there is no JWT with these fields. (There are some fields populated, e.g. source.namespace, but nothing like a JWT.) Hence, user validation on that field fails, as you would expect.
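For illustration, a minimal sketch of the kind of RequestAuthentication that was missing in the Kiali namespace (the name, issuer and jwksUri are placeholders for your identity provider):
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: kiali
spec:
  jwtRules:
  - issuer: "https://your-issuer.example.com"
    jwksUri: "https://your-issuer.example.com/.well-known/jwks.json"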
According to istio documentation:
Unsupported keys and values are silently ignored.
In your debug log there is:
debug rbac ignored HTTP principal for TCP service: property(map[request.auth.claims[email]:{[my.email@my.provider.com] []}])
As you can see, there are []}] chars there, which might suggest that the value got parsed the wrong way and was ignored as an unsupported value.
Try putting your values inside [""], as suggested in the documentation:
request.auth.claims: Claims from the origin JWT. The actual claim name is surrounded by brackets. (HTTP only)
key: request.auth.claims[iss]
values: ["*@foo.com"]
Hope it helps.

How to get cluster subdomain in kubernetes deployment config template

On Kubernetes 1.6.1 (OpenShift 3.6 CP) I'm trying to get the subdomain of my cluster using $(OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN), but it's not dereferencing at runtime. Not sure what I'm doing wrong; the docs show this is how environment parameters should be acquired.
https://v1-6.docs.kubernetes.io/docs/api-reference/v1.6/#container-v1-core
- apiVersion: v1
  kind: DeploymentConfig
  spec:
    template:
      metadata:
        labels:
          deploymentconfig: ${APP_NAME}
        name: ${APP_NAME}
      spec:
        containers:
        - name: myapp
          env:
          - name: CLOUD_CLUSTER_SUBDOMAIN
            value: $(OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN)
You'll need to set that value as an environment variable yourself; this is the usage:
oc set env <object-selection> KEY_1=VAL_1
For example, if your deployment config is named foo and your subdomain is foo.bar, you would use this command:
oc set env dc/foo OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN=foo.bar
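Note that Kubernetes only expands $(VAR) references against variables defined earlier in the same container's env list, so the variable has to end up before the one that references it, roughly like this (foo.bar is just the example value from above):
env:
- name: OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN
  value: foo.bar
- name: CLOUD_CLUSTER_SUBDOMAIN
  value: $(OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN)
You can check what ended up set on the deployment config with oc set env dc/foo --list.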