OpenShift/Kubernetes: Use token from service account in YAML file - templates

I currently have the following problem. I am creating a Template in which I specify a ServiceAccount and a RoleBinding. OpenShift creates a token on its own and stores it in a secret with the name [service-account-name]-[a-z,1-9{5}]. Now I want to pass that secret into an env variable (it will be consumed by another config in that container that can process env variables).
Now you can easily use env variables like
env:
  - name: something
    valueFrom:
      secretKeyRef:
        name: someKey
        key: someValue
But now I've got the problem that the secret exists, but I don't know its exact name, because part of it is random. So my question is:
Is there a way to use the contents of a secret of a serviceaccount in a template?

You can check your secrets by running
kubectl get secret, and then view more detail by running kubectl describe secret mysecret. You will need to decode the values to view them (I do not have experience with OpenShift). You can also use them as environment variables, as explained here.
As for the ServiceAccount and its token, you can use it inside a container as specified in the OpenShift documentation:
A file containing an API token for a pod’s service account is
automatically mounted at
/var/run/secrets/kubernetes.io/serviceaccount/token.
I think you could add the commands from the documentation to the command: section of the Pod template, similar to this example. You can also find more about using secrets here.
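As a minimal sketch of that idea (the image name and entrypoint below are placeholders, not from the question), the auto-mounted token can be exported as an env variable in the container's startup command, which sidesteps not knowing the random secret name:

```yaml
containers:
  - name: app
    image: my-image:latest   # placeholder image
    command: ["/bin/sh", "-c"]
    # Read the auto-mounted service-account token and expose it as an
    # env variable before starting the app (/app/start.sh is a placeholder).
    args:
      - export SA_TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" && exec /app/start.sh
```

This works because the token file is always mounted at the same path, regardless of the random suffix in the secret's name.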

Related

Not able to read the Kubernetes secret from a nested

I am very new to Kubernetes. My task is to move an existing application from Kubernetes to EKS. I am using CDK EKS Blueprints to create the cluster in AWS and AWS Secrets Manager to create the Kubernetes secret. I followed the steps given here: https://aws-quickstart.github.io/cdk-eks-blueprints/addons/secrets-store/
As mentioned on that page, I got the service account, a role in the service account to access the secret, and the secret created.
Though I have a volume block, a mount path for the secret, and env variables referring to the secret, I am not able to get my pod up and running. Instead it complains that the key is not found in the secret.
The reason may be that when I create a secret manually using the create command, Kubernetes creates the secret with plain top-level keys.
But when the Kubernetes secret is created by EKS Blueprints by looking up the existing AWS secret, like
secretProvider: new blueprints.LookupSecretsManagerSecretByName('test-aws-secret'),
it is created as an encoded (nested) object.
Now I am not sure how to reference the nested object in the YAML. I tried many variations, but no luck. Any help is much appreciated.
Thanks.
The value of the key field should be key1:
- name: key1-value
  valueFrom:
    secretKeyRef:
      name: secret-test
      key: key1
Including data/secret-test/ before the key name is unnecessary because Kubernetes already knows the secret name from the name field and knows to look for keys under the data field of secrets.
See Secrets for more information.

How to pass (use) a Google SA JSON in AWX to run an Ansible playbook which creates/updates/modifies a VM in GCP?

I have an Ansible playbook which connects to GCP using a SA and its JSON key file.
I downloaded the JSON file to my local machine and provided the path as the value of "credentials_file". This works if I run the playbook from my local machine.
Now I want to run this playbook using AWX, and below are the steps I have done.
1. Created a credential.
   a. Credential Type: Google Compute Engine
   b. Name: ansible-gcp-secret
   c. Under the type details, I uploaded the SA JSON file and it loaded the rest of the data, such as the SA email, project, and RSA key.
2. Created a project and synced my git repo, which has my playbook.
3. Created a template to run my playbook.
Now I am not sure how to use the GCP SA credentials in AWX to run my playbook. Any help or documentation would greatly help.
Below is an example of my playbook:
- name: Update Machine Type of GCE Instance
  hosts: localhost
  gather_facts: no
  connection: local
  vars:
    instance_name: ansible-test
    machine_type: e2-medium
    image: Debian GNU/Linux 11 (bullseye)
    zone: us-central1-a
    service_account_email: myuser@project-stg-xxxxx.iam.gserviceaccount.com
    credentials_file: /Users/myuser/ansible/hackthonproject-stg-xxxxx-67d90cb0819c.json
    project_id: project-stg-xxxxx
  tasks:
    - name: Stop (terminate) an instance
      gcp_compute_instance:
        name: "{{ instance_name }}"
        project: "{{ project_id }}"
        zone: "{{ zone }}"
        auth_kind: serviceaccount
        service_account_file: "{{ credentials_file }}"
        status: TERMINATED
Below are the steps we did.
1. Created a credential type in AWX to pull the secrets from Vault, let's say secret_type. This exposes an env key, vaultkv_secret.
2. Created a credential to connect to Vault using a token, with type = HashiCorp Vault Secret Lookup, name = vault_token.
3. Created another credential to pull the secret (kv type) from Vault, with type = custom_vault_puller (this uses the first credential, vault_token). Let's say name = secret_for_template.
4. Created a kv secret in Vault with the key, and the JSON content as the value.
5. Created a template and attached the credential secret_for_template, providing the secret path and key.
Now, when the template is run, the env var vaultkv_secret will contain the content of the JSON file, and we can save that content to a file and use it as the file input to GCP commands.
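The "save that content to a file" step can be sketched as a pre_task in the playbook itself (a sketch only; the temp path is an assumption, and vaultkv_secret is the env key from step 1):

```yaml
- name: Update Machine Type of GCE Instance
  hosts: localhost
  gather_facts: no
  connection: local
  vars:
    credentials_file: /tmp/gcp-sa.json   # assumed temporary path
  pre_tasks:
    # Write the SA JSON that AWX injects via the vaultkv_secret env var
    # to a file, so gcp_* modules can consume it as service_account_file.
    - name: Write SA JSON from env var to a file
      copy:
        content: "{{ lookup('env', 'vaultkv_secret') }}"
        dest: "{{ credentials_file }}"
        mode: "0600"
```

The rest of the playbook can then keep using service_account_file: "{{ credentials_file }}" unchanged.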

Can workload identity work across 2 different GCP projects?

On GCP I need to use 2 GCP projects; one is for the web application, the other is for storing secrets for the web application (this structure comes from Google's repository).
As written in the README, I'll store secrets using GCP Secret Manager:
This project is allocated for GCP Secret Manager for secrets shared by the organization.
The procedure I'm planning:
prj-secret: create secrets in Secret Manager
prj-application: read the secrets using kubernetes-external-secrets
In prj-application I want to use workload identity, because I don't want to use a service account key, as the doc says.
What I did:
created the cluster with the --workload-pool=project-id.svc.id.goog option
helm install kubernetes-external-secrets
[skip] kubectl create namespace k8s-namespace (because I installed kubernetes-external-secrets in the default namespace)
[skip] kubectl create serviceaccount --namespace k8s-namespace ksa-name (because I use the default service account that exists by default when creating GKE)
created a Google service account with the workload-identity module:
module "workload-identity" {
  source              = "github.com/terraform-google-modules/terraform-google-kubernetes-engine//modules/workload-identity"
  use_existing_k8s_sa = true
  cluster_name        = var.cluster_name
  location            = var.cluter_locaton
  k8s_sa_name         = "external-secrets-kubernetes-external-secrets"
  name                = "external-secrets-kubernetes"
  roles               = ["roles/secretmanager.admin", "roles/secretmanager.secretAccessor"]
  project_id          = var.project_id # it is prj-application's project_id
}
The Kubernetes service account called external-secrets-kubernetes-external-secrets was already created when installing kubernetes-external-secrets with Helm, and the module binds k8s_sa_name to external-secrets-kubernetes@my-project-id.iam.gserviceaccount.com, which has ["roles/secretmanager.admin", "roles/secretmanager.secretAccessor"].
Created an ExternalSecret and applied it:
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: external-key-test
spec:
  backendType: gcpSecretsManager
  projectId: my-domain
  data:
    - key: key-test
      name: password
Result: I got a permission problem:
ERROR, 7 PERMISSION_DENIED: Permission 'secretmanager.versions.access' denied for resource 'projects/project-id/secrets/external-key-test/versions/latest' (or it may not exist).
I have already checked that if prj-secret and prj-application are the same project, it works.
So what I think is: the Kubernetes service account (in prj-secret) and the Google service account (in prj-application) are not bound correctly.
I wonder if someone knows:
whether workload identity works only within the same project
if it does, how I can get secret data from a different project
Thank you.
I think you have an issue in your role binding. When you say this:
The Kubernetes service account called external-secrets-kubernetes-external-secrets was already created when installing kubernetes-external-secrets with Helm, and the module binds k8s_sa_name to external-secrets-kubernetes@my-project-id.iam.gserviceaccount.com, which has ["roles/secretmanager.admin", "roles/secretmanager.secretAccessor"].
it's unclear.
external-secrets-kubernetes@my-project-id.iam.gserviceaccount.com is created in which project? I guess in prj-application, but it's not clear.
I'll take the assumption (based on the name and the link with the cluster) that the service account is created in prj-application. On which resource do you grant the roles "roles/secretmanager.admin" and "roles/secretmanager.secretAccessor"?
On the IAM page of prj-application?
On the IAM page of prj-secret?
On the secret itself in prj-secret?
If you did the 1st one, it's the wrong binding: the service account can only access the secrets of prj-application, not those of prj-secret.
Note: if you only need to access the secret, don't grant the admin role; only the accessor role is required.
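Putting that together (a sketch; the project IDs are taken from the question, everything else mirrors the posted manifest), the ExternalSecret should point at the project that actually holds the secret:

```yaml
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: external-key-test
spec:
  backendType: gcpSecretsManager
  projectId: prj-secret   # the Secret Manager project, not prj-application
  data:
    - key: key-test       # secret name in prj-secret
      name: password      # key name in the resulting Kubernetes Secret
```

The Google service account itself stays in prj-application; only the roles/secretmanager.secretAccessor grant lives in prj-secret (on its IAM page or on the individual secret).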

AWS Secrets Manager can’t find the specified secret

I'm using AWS Fargate and storing sensitive data with Secrets Manager. The task definition should get environment variables from the secrets store:
- name: "app"
  image: "ecr-image:tag"
  essential: true
  secrets:
    - name: "VAR1"
      valueFrom: "arn:aws:secretsmanager:us-east-1:111222333444:secret:var-one-secret"
    - name: "VAR2"
      valueFrom: "arn:aws:secretsmanager:us-east-1:111222333444:secret:var-two-secret"
    - name: "VAR3"
      valueFrom: "arn:aws:secretsmanager:us-east-1:111222333444:secret:var-two-private"
but for some reason it fails with the error below:
ResourceNotFoundException: Secrets Manager can’t find the specified secret. status code: 400, request id
It seems a bit strange to me because:
IAM has permission to get the secret value
when leaving only the VAR1 variable, everything works as expected
the AWS CLI is able to retrieve each secret without any issue, e.g.
aws secretsmanager get-secret-value --secret-id var-two-secret
What might be wrong with my configuration? Any hints appreciated.
OK, so the trick was to specify the ARN explicitly. Instead of just providing the secret name, you should use the full identifier:
arn:aws:secretsmanager:us-east-1:111222333444:secret:var-two-secret-ID0o2R
Note the -ID0o2R suffix at the end of the secret name.
It's still not clear to me why it works for some variables without it.
UPD
However, if your secret has a name that ends in a hyphen followed by
six characters (before Secrets Manager adds the hyphen and six
characters to the ARN) and you try to use that as a partial ARN, then
those characters cause Secrets Manager to assume that you’re
specifying a complete ARN. This confusion can cause unexpected
results.
So, as you can see from my secret name (which ends in a hyphen followed by six characters), Secrets Manager had a hard time resolving it by its short name.
Secrets Manager tries to do partial ARN matching when you do not specify the GUID at the end of the ARN. However, it is imperfect because partial ARNs can collide. If you are fetching secrets within the same account, you can just use the secret name (the part after secret: and excluding the 6-character -GUID suffix) instead of the full ARN. But using the full ARN, when you have it, is always best.
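In the task definition, that means using the complete ARN with its random suffix (the -ID0o2R suffix below is the one from the answer above; the rest mirrors the question):

```yaml
secrets:
  # Full ARN including the 6-character suffix Secrets Manager appends,
  # so no partial-ARN matching is attempted.
  - name: "VAR2"
    valueFrom: "arn:aws:secretsmanager:us-east-1:111222333444:secret:var-two-secret-ID0o2R"
```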
Another potential cause of this error is that the secret isn’t set; the secret name might exist, but not have a value. See https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_update-secret.html for steps on setting a value.
Just add a double colon to the end of the ARN:
"arn:aws:secretsmanager:us-east-1:1234567890:secret:example-ABC12:VARIABLE_NAME::"
Explanation:
arn:aws:secretsmanager:us-east-1:1234567890:secret:example-ABC12 is the ARN of your secret (vault).
VARIABLE_NAME is the JSON key inside the secret that you want to extract; the trailing :: leaves the version-stage and version-id fields empty so the defaults are used.
Check all the possible combinations in the docs.

Getting GKE secrets back even after deleting the KMS keys used for encryption

I followed this document to create a GKE cluster (1.13.6-gke.6) with the --database-encryption-key flag, providing a KMS key to enable application-layer secrets encryption.
I created a secret using the following command:
kubectl create secret generic dev-db-secret --from-literal=username=someuser --from-literal=password=somepass
So if my assumption is correct, these secrets are stored encrypted using the KMS key I provided while creating the cluster. However, even after I destroyed all versions of that key, I am still able to see the secret stored inside the GKE etcd using kubectl get secret dev-db-secret -o yaml, and I can also see it in a pod created with the manifest below:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: dev-db-secret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: test-secret
              key: password
  restartPolicy: Never
If I exec into the above pod and run echo $SECRET_USERNAME and echo $SECRET_PASSWORD, I get the username and password printed to my console in plain text.
Is this the way the encryption is supposed to work? If yes, where exactly does the encryption happen?
What am I doing wrong? Are the secrets really encrypted?
I'm not 100% sure, but I think those keys are cached, so it will probably take a while before decryption starts to fail. This is the case for Azure; I guess it's similar for GKE.
BTW, you might want to read about how to protect manifest files so you can store them in Git. I wrote a blog post describing some of the options you can use.