How to add ArgoCD dex.config from argo App yaml Helm params? - argocd

In a self-managed Argo CD installation, Argo CD itself is defined as an Application installed from a Helm chart with parameters.
How do I add dex.config to the Helm parameters inside the Application definition?
This does NOT work!
It fails with an error saying that dex.config is passed as a map, not as a string.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: argocd
  source:
    chart: argo-cd
    repoURL: https://argoproj.github.io/argo-helm
    targetRevision: 5.19.1
    helm:
      parameters:
        - name: configs.cm."timeout\.reconciliation"
          value: "120s"
        - name: configs.cm."dex\.config"
          value: |
            logger:
              level: debug
              format: json
            connectors:
              - type: saml
                id: saml
                name: AzureAD
                config:
                  entityIssuer: https://argocd.example.com/api/dex/callback
                  ssoURL: https://login.microsoftonline.com/xxx/saml2
                  caData: |
                    ...
                  redirectURI: https://argocd.example.com/api/dex/callback
                  usernameAttr: email
                  emailAttr: email
                  groupsAttr: Group
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
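A common workaround (not part of the original post; a hedged sketch assuming the argo-cd chart reads configs.cm."dex.config" as a plain string, which holds for recent argo-helm releases) is to move the multi-line value out of helm.parameters, which are flat key/value pairs, and into helm.values, which Argo CD hands to Helm verbatim as a values file:
# Hedged sketch: pass dex.config via spec.source.helm.values instead of
# helm.parameters, so the multi-line string is not re-parsed as a map.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: argocd
  source:
    chart: argo-cd
    repoURL: https://argoproj.github.io/argo-helm
    targetRevision: 5.19.1
    helm:
      values: |
        configs:
          cm:
            timeout.reconciliation: 120s
            dex.config: |
              logger:
                level: debug
                format: json
              connectors:
                - type: saml
                  id: saml
                  name: AzureAD
                  config:
                    entityIssuer: https://argocd.example.com/api/dex/callback
                    ssoURL: https://login.microsoftonline.com/xxx/saml2
                    redirectURI: https://argocd.example.com/api/dex/callback
                    usernameAttr: email
                    emailAttr: email
                    groupsAttr: Group
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
Newer Argo CD versions also accept spec.source.helm.valuesObject, which takes the same structure as inline YAML rather than a string.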

Related

Google Kubernetes Engine & Github actions deploy deployments.apps "gke-deployment" not found

I've been trying to run the Google Kubernetes Engine deploy action for my GitHub repo.
I have a GitHub workflow job that runs, and everything works fine except the deploy step.
Here is my error:
Error from server (NotFound): deployments.apps "gke-deployment" not found
I'm assuming my YAML files are at fault. I'm fairly new to this, so I got them from the internet and just edited them a bit to fit my code, but I don't know the details.
Kustomize.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: arbitrary
# Example configuration for the webserver
# at https://github.com/monopole/hello
commonLabels:
  app: videoo-render
resources:
- deployment.yaml
- service.yaml
deployment.yaml (I think the error is here):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      deployment: video-render
  template:
    metadata:
      labels:
        deployment: video-render
    spec:
      containers:
        - name: the-container
          image: monopole/hello:1
          command: ["/video-render",
                    "--port=8080",
                    "--enableRiskyFeature=$(ENABLE_RISKY)"]
          ports:
            - containerPort: 8080
          env:
            - name: ALT_GREETING
              valueFrom:
                configMapKeyRef:
                  name: the-map
                  key: altGreeting
            - name: ENABLE_RISKY
              valueFrom:
                configMapKeyRef:
                  name: the-map
                  key: enableRisky
service.yaml:
kind: Service
apiVersion: v1
metadata:
  name: the-service
spec:
  selector:
    deployment: video-render
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8666
      targetPort: 8080
Using the ubuntu-20.04 image; the repo is C++ code.
For anyone wondering why this happens:
You have to change this line in the workflow to reference an existing deployment:
DEPLOYMENT_NAME: gke-deployment # TODO: update to deployment name
to:
DEPLOYMENT_NAME: existing-deployment-name
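For context, a hedged sketch of where that line lives in the stock GKE deploy workflow (the values other than DEPLOYMENT_NAME are placeholders to adjust for your repo); DEPLOYMENT_NAME must match the metadata.name of a Deployment that kustomize actually applies, which for the manifests above is the-deployment:
# Hedged sketch of the workflow env block (google-github-actions GKE template)
env:
  PROJECT_ID: my-gcp-project        # assumption: your GCP project id
  GKE_CLUSTER: my-cluster           # assumption: your cluster name
  GKE_ZONE: europe-west1-b          # assumption: your cluster zone
  DEPLOYMENT_NAME: the-deployment   # must match metadata.name in deployment.yaml
  IMAGE: videoo-render
The deploy step ends with kubectl rollout status deployment/$DEPLOYMENT_NAME, which is what produces the "deployments.apps ... not found" error when the name does not match an applied Deployment.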

istio oauth filter error with secrets creation - updating listener(s) 0.0.0.0_8443: paths must refer to an existing path in the system does not exist

I'm running Istio on Kubernetes (container istio/proxyv2:1.13.2) and currently use an oauth2-proxy pod to authenticate with Keycloak. I have a requirement to replace oauth2-proxy with an Istio OAuth filter, and I'm attempting to deploy the OAuth filter to the Istio ingress gateway by following this blog. When deploying the YAML below I see the following error in the istiod logs:
2022-05-12T16:59:58.080449Z warn ads ADS:LDS: ACK ERROR istio-ingressgateway-7fd568fc99-fvvcc.istio-system-150 Internal:Error adding/updating listener(s) 0.0.0.0_8443: paths must refer to an existing path in the system: '/etc/istio/config/token-secret.yaml' does not exist
It looks to me like there is an issue loading the secret YAML files into the ingressgateway pod using SDS, but I could be wrong on this, as I don't fully understand how the secret loading is supposed to work in this example. I can't find documentation on this for the latest Istio versions, so I'm struggling. Older docs talk about an SDS container running in the istio gateway pod, but this doesn't seem to be relevant in more recent Istio versions.
Can anyone help, or explain how the secrets get loaded into the ingressgateway in the example I'm following, and what the issue might be / how to diagnose it? Any help gratefully received.
The code is as follows:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: oauth2-ingress
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: CLUSTER
      match:
        cluster:
          service: oauth
      patch:
        operation: ADD
        value:
          name: oauth
          dns_lookup_family: V4_ONLY
          type: LOGICAL_DNS
          connect_timeout: 10s
          lb_policy: ROUND_ROBIN
          transport_socket:
            name: envoy.transport_sockets.tls
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
              sni: keycloak.mydomain.com
          load_assignment:
            cluster_name: oauth
            endpoints:
              - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address:
                          address: keycloak.mydomain.com
                          port_value: 443
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.filters.network.http_connection_manager"
              subFilter:
                name: "envoy.filters.http.jwt_authn"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.oauth2
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
            config:
              token_endpoint:
                cluster: oauth
                uri: https://keycloak.mydomain.com/auth/realms/myrealm/protocol/openid-connect/token
                timeout: 3s
              authorization_endpoint: https://keycloak.mydomain.com/auth/realms/myrealm/protocol/openid-connect/auth
              redirect_uri: "https://%REQ(:authority)%/callback"
              redirect_path_matcher:
                path:
                  exact: /callback
              signout_path:
                path:
                  exact: /signout
              credentials:
                client_id: myclient
                token_secret:
                  name: token
                  sds_config:
                    path: "/etc/istio/config/token-secret.yaml"
                hmac_secret:
                  name: hmac
                  sds_config:
                    path: "/etc/istio/config/hmac-secret.yaml"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-oauth2
  namespace: istio-system
data:
  token-secret.yaml: |-
    resources:
      - "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
        name: token
        generic_secret:
          secret:
            inline_string: "myclientsecrettext"
  hmac-secret.yaml: |-
    resources:
      - "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
        name: hmac
        generic_secret:
          secret:
            # generated using `head -c 32 /dev/urandom | base64`
            inline_bytes: "XYJ7ibKwXwmRrO/yL/37ZV+T3Q/WB+xfhmVlio+wmc0="
---
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-authentication
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  jwtRules:
    - issuer: "https://keycloak.mydomain.com/auth/realms/myrealm"
      jwksUri: "https://keycloak.mydomain.com/auth/realms/myrealm/protocol/openid-connect/certs"
      forwardOriginalToken: true
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: known-user
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  rules:
    - when:
        - key: request.headers[Authorization]
          notValues:
            - 'Bearer*'
    - when:
        - key: request.auth.audiences
          values:
            - 'oauth'
        - key: request.auth.presenter
          values:
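The error means Envoy inside the gateway pod cannot find the files at /etc/istio/config: with a path-based sds_config Envoy simply reads the secrets from local files, and the istio-oauth2 ConfigMap is never mounted into the istio-ingressgateway pod. Not part of the original post, but as a hedged sketch of one way to get the files there, add a volume and volumeMount for the ConfigMap to the istio-ingressgateway Deployment (via an IstioOperator overlay or a direct patch):
# Hedged sketch (assumption, not from the original post): fragment of the
# istio-ingressgateway Deployment pod spec that mounts the istio-oauth2
# ConfigMap where the EnvoyFilter's sds_config paths expect it.
spec:
  template:
    spec:
      containers:
        - name: istio-proxy
          volumeMounts:
            - name: istio-oauth2
              mountPath: /etc/istio/config   # must match the sds_config paths
              readOnly: true
      volumes:
        - name: istio-oauth2
          configMap:
            name: istio-oauth2
If /etc/istio/config is already taken by another mount in your gateway pod, mount the ConfigMap somewhere else (for example /etc/istio/oauth) and update both sds_config paths in the EnvoyFilter to match.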

Unable to connect to AWS services from EKS

I have created an EKS cluster using eksctl. I am following these steps to establish connectivity to AWS services like S3 and CloudWatch from a Spring Boot app:
1. Create the EKS cluster using eksctl - this has my service account details and OIDC enabled.
2. List the service accounts to see if they were created fine.
3. Create a deployment using the account name.
4. Create a service.
I am seeing a 403 in the logs:
User: arn:aws:sts:xxx/xxxx is not authorized to perform:
cloudformation:DescribeStackResources because no identity-based policy allows
the cloudformation:DescribeStackResources action (Service: AmazonCloudFormation; Status Code: 403;
Error Code: AccessDenied; Request ID: xxxx)
Can I get some help troubleshooting this issue, please?
What I have figured out since posting is that the node provisioned by eksctl has an instance role attached, and that is the role my app is picking up through the default credential chain.
What I haven't figured out yet is how to make the apps in the pod assume the service account role instead.
YAML for #1
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: name
  region: ap-south-1
availabilityZones: ["xxxx", "xxxx", "xxxx"]
managedNodeGroups:
  - name: c5large-nodes
    desiredCapacity: 1
    instanceType: c5.large
    labels:
      node-type: large
    volumeSize: 5
cloudWatch:
  clusterLogging:
    enableTypes: [ "*" ]
iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: cluster-autoscaler
        namespace: kube-system
        labels: {aws-usage: "autoscaling"}
      wellKnownPolicies:
        autoScaler: true
      roleName: eksctl-cluster-autoscaler-role
      roleOnly: true
    - metadata:
        name: backend-stage-iam-role
        namespace: backend-stage
        labels: { aws-usage: "all-backend-allow" }
      attachPolicyARNs:
        - "arn:aws:iam::xxxx"
    - metadata:
        name: mq-access
        namespace: backend-stage
        labels: { aws-usage: "MQ" }
      attachPolicyARNs:
        - "arn:aws:iam::aws:policy/AmazonMQFullAccess"
YAML for deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
  namespace: backend-stage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: backend-stage-iam-role
      containers:
        - image: xxx/my-app:latest
          imagePullPolicy: Always
          name: my-app
          ports:
            - containerPort: 8080
              protocol: TCP
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: stage
YAML for service
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: backend-stage
spec:
  selector:
    app: my-app
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
The role is defined like this for now:
- Effect: Allow
  Action:
    - cloudformation:*
  Resource: "*"
I did some further debugging: describing the pod shows the role passed in as an environment variable:
AWS_ROLE_ARN: arn:aws:iam::MYACCOUNT:role/MyRole
Just add the missing permission to the role that is being assumed (the arn:aws:sts:xxx/xxxx principal from the error).
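A hedged sketch of what that could look like (not from the original answer; it assumes the error really is just a missing action on the IRSA role shown in AWS_ROLE_ARN): attach the missing CloudFormation permission to that role, for example with eksctl's inline attachPolicy on the service account:
# Hedged sketch: grant the action the error complains about to the
# IAM role backing the backend-stage-iam-role service account.
iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: backend-stage-iam-role
        namespace: backend-stage
      attachPolicy:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - cloudformation:DescribeStackResources
            Resource: "*"
Also make sure the AWS SDK used by the Spring Boot app is recent enough to pick up the web identity token (AWS_ROLE_ARN / AWS_WEB_IDENTITY_TOKEN_FILE) from the default credential chain; otherwise it falls back to the node's instance role.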

ISTIO MIXER ADAPTER - Cannot make OPA adapter to work with the simplest example

I am trying to set up the OPA adapter in Istio with the simplest rule, to deny everything by default:
---
apiVersion: "config.istio.io/v1alpha2"
kind: authorization
metadata:
  name: authz-instance
  namespace: istio-demo
spec:
  subject:
    user: source.uid | ""
  action:
    namespace: destination.namespace | "default"
    service: destination.service | ""
    method: request.method | ""
    path: request.path | ""
---
apiVersion: "config.istio.io/v1alpha2"
kind: opa
metadata:
  name: opa-handler
  namespace: istio-demo
spec:
  policy:
    - |+
      package mixerauthz
      default allow = false
  checkMethod: "data.mixerauthz.allow"
  failClose: true
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: authz-rule
  namespace: istio-demo
spec:
  match: "true"
  actions:
    - handler: opa-handler.opa.istio-demo
      instances:
        - authz-instance.authorization.istio-demo
When I apply it, Istio's policy complains about not finding the handler:
istio-system/istio-policy-7f86484668-fc8lv[mixer]: 2019-08-12T15:58:21.798783Z info Built new config.Snapshot: id='9'
istio-system/istio-policy-7f86484668-fc8lv[mixer]: 2019-08-12T15:58:21.798819Z error 2 errors occurred:
istio-system/istio-policy-7f86484668-fc8lv[mixer]: * action='authz-rule.rule.istio-demo[0]': Handler not found: handler='opa-handler.opa.istio-demo'
istio-system/istio-policy-7f86484668-fc8lv[mixer]: * rule=authz-rule.rule.istio-demo: No valid actions found in rule
I've tried applying it in the istio-system namespace as well, but I get the same issue.
Can anyone help out here?
Thanks in advance.
I got this to work with Istio 1.4 installed with the demo profile.
It was also necessary to enable policy checks by running:
istioctl manifest apply --set values.global.disablePolicyChecks=false --set values.pilot.policy.enabled=true
Find the handler, authorization template, and rule configuration below:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: opa-handler
  namespace: istio-system
spec:
  compiledAdapter: opa
  params:
    policy:
      - |+
        package mixerauthz
        default allow = false
    checkMethod: "data.mixerauthz.allow"
    failClose: true
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: authz-instance
  namespace: istio-system
spec:
  compiledTemplate: authorization
  params:
    subject:
      user: source.uid | ""
    action:
      namespace: destination.namespace | "default"
      service: destination.service.host | ""
      path: request.path | ""
      method: request.method | ""
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: auth
  namespace: istio-system
spec:
  actions:
    - handler: opa-handler.handler.istio-system
      instances:
        - authz-instance.instance.istio-system
Then I got a 403 with this message from my web service (httpbin):
PERMISSION_DENIED:opa-handler.istio-system:opa: request was rejected, opa-handler.istio-system:opa: request was rejected
Alternatively, you can try out the OPA/Istio/Envoy integration, which enforces the same type of policies at the proxy layer instead of through Mixer.
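For reference, a hedged sketch of what that proxy-layer approach looks like on a current (post-Mixer) Istio, assuming an OPA service reachable in the mesh that speaks the Envoy ext_authz gRPC API; the service name, port, and policy scope are placeholders:
# Hedged sketch: register OPA as an ext_authz extension provider in meshConfig ...
meshConfig:
  extensionProviders:
    - name: opa-ext-authz
      envoyExtAuthzGrpc:
        service: opa.opa.svc.cluster.local   # assumption: your OPA service
        port: 9191
---
# ... and delegate authorization decisions to it with a CUSTOM policy.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: opa-authz
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: CUSTOM
  provider:
    name: opa-ext-authz
  rules:
    - {}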

kubectl - cert manager - credentials not found

I want to have TLS termination enabled on an Ingress (on top of Kubernetes) on Google Cloud Platform.
My Ingress is working, but my cert manager is failing with this error message:
textPayload: "2018/07/05 22:04:00 Error while processing certificate during sync: Error while creating ACME client for 'domain': Error while initializing challenge provider googlecloud: Unable to get Google Cloud client: google: error getting credentials using GOOGLE_APPLICATION_CREDENTIALS environment variable: open /opt/google/kube-cert-manager.json: no such file or directory
"
This is what I did in order to get into the current state:
created cluster, deployment, service, ingress
executed:
gcloud --project 'project' iam service-accounts create kube-cert-manager-sv-security --display-name "kube-cert-manager-sv-security"
gcloud --project 'project' iam service-accounts keys create ~/.config/gcloud/kube-cert-manager-sv-security.json --iam-account kube-cert-manager-sv-security@'project'.iam.gserviceaccount.com
gcloud --project 'project' projects add-iam-policy-binding --member serviceAccount:kube-cert-manager-sv-security@'project'.iam.gserviceaccount.com --role roles/dns.admin
kubectl create secret generic kube-cert-manager-sv-security-secret --from-file=/home/perre/.config/gcloud/kube-cert-manager-sv-security.json
and created the following resources:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kube-cert-manager-sv-security-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-cert-manager-sv-security
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-cert-manager-sv-security
rules:
  - apiGroups: ["*"]
    resources: ["certificates", "ingresses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["*"]
    resources: ["secrets"]
    verbs: ["get", "list", "create", "update", "delete"]
  - apiGroups: ["*"]
    resources: ["events"]
    verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-cert-manager-sv-security-service-account
subjects:
  - kind: ServiceAccount
    namespace: default
    name: kube-cert-manager-sv-security
roleRef:
  kind: ClusterRole
  name: kube-cert-manager-sv-security
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificates.stable.k8s.psg.io
spec:
  scope: Namespaced
  group: stable.k8s.psg.io
  version: v1
  names:
    kind: Certificate
    plural: certificates
    singular: certificate
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kube-cert-manager-sv-security
  name: kube-cert-manager-sv-security
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-cert-manager-sv-security
      name: kube-cert-manager-sv-security
    spec:
      serviceAccount: kube-cert-manager-sv-security
      containers:
        - name: kube-cert-manager
          env:
            - name: GCE_PROJECT
              value: solidair-vlaanderen-207315
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /opt/google/kube-cert-manager.json
          image: bcawthra/kube-cert-manager:2017-12-10
          args:
            - "-data-dir=/var/lib/cert-manager-sv-security"
            #- "-acme-url=https://acme-staging.api.letsencrypt.org/directory"
            # NOTE: the URL above points to the staging server, where you won't get real certs.
            # Uncomment the line below to use the production LetsEncrypt server:
            - "-acme-url=https://acme-v01.api.letsencrypt.org/directory"
            # You can run multiple instances of kube-cert-manager for the same namespace(s),
            # each watching for a different value for the 'class' label
            - "-class=kube-cert-manager"
            # You can choose to monitor only some namespaces, otherwise all namespaces will be monitored
            #- "-namespaces=default,test"
            # If you set a default email, you can omit the field/annotation from Certificates/Ingresses
            - "-default-email=viae.it@gmail.com"
            # If you set a default provider, you can omit the field/annotation from Certificates/Ingresses
            - "-default-provider=googlecloud"
          volumeMounts:
            - name: data-sv-security
              mountPath: /var/lib/cert-manager-sv-security
            - name: google-application-credentials
              mountPath: /opt/google
      volumes:
        - name: data-sv-security
          persistentVolumeClaim:
            claimName: kube-cert-manager-sv-security-data
        - name: google-application-credentials
          secret:
            secretName: kube-cert-manager-sv-security-secret
Does anyone know what I'm missing?
Your secret resource kube-cert-manager-sv-security-secret probably contains a JSON file named kube-cert-manager-sv-security.json, which does not match the GOOGLE_APPLICATION_CREDENTIALS value. You can confirm the file name inside the secret with kubectl get secret -o yaml YOUR-SECRET-NAME.
If you change the path to the actual file name, kube-cert-manager works fine:
- name: GOOGLE_APPLICATION_CREDENTIALS
  # value: /opt/google/kube-cert-manager.json
  value: /opt/google/kube-cert-manager-sv-security.json