FluxCD ImageRepository authentication with AWS Elastic Container Registry is not working on an ARM64 Graviton node.
After debugging, I found that the image used in the init container to fetch registry credentials does not support ARM64 instances.
Image name: bitnami/kubectl
Doc link: https://fluxcd.io/docs/guides/image-update/
There are some workarounds provided in the FluxCD documentation:
- AWS Elastic Container Registry
- Using a JSON key
- Using Static Credentials
AWS Elastic Container Registry
The solution proposed is to create a CronJob that runs every 6 hours and re-creates the docker-registry secret with a freshly issued ECR token (ECR tokens expire after 12 hours).
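A minimal sketch of such a CronJob, assuming a multi-arch image that bundles both the aws CLI and kubectl (the image name, secret name, account ID, region, and namespace below are placeholders, not values from the FluxCD guide):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: ecr-credentials-sync
  namespace: flux-system
spec:
  schedule: "0 */6 * * *"   # ECR tokens expire after 12 hours; refresh every 6
  jobTemplate:
    spec:
      template:
        spec:
          # This service account needs RBAC permission to create/update the secret
          serviceAccountName: ecr-credentials-sync
          restartPolicy: Never
          containers:
            - name: sync
              # Hypothetical multi-arch image containing both aws and kubectl,
              # so it runs on Graviton nodes
              image: example.com/aws-cli-kubectl:latest
              env:
                - name: REGION
                  value: us-east-1
                - name: ECR_REGISTRY
                  value: 123456789012.dkr.ecr.us-east-1.amazonaws.com
              command:
                - /bin/sh
                - -c
                - |
                  # Fetch a fresh token and re-create the docker-registry secret
                  kubectl create secret docker-registry ecr-credentials \
                    --namespace=flux-system \
                    --docker-server="$ECR_REGISTRY" \
                    --docker-username=AWS \
                    --docker-password="$(aws ecr get-login-password --region "$REGION")" \
                    --dry-run=client -o yaml | kubectl apply -f -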
JSON key
A JSON key doesn't expire, so we don't need a CronJob; we just need to create the Secret and reference it in the ImageRepository.
First, create a JSON key file by following this documentation. Grant the service account the role Container Registry Service Agent so that it can access GCR, and download the JSON key file.
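A minimal sketch of creating that Secret with kubectl, assuming the key was downloaded as gcr-key.json and Flux runs in the flux-system namespace (both names are placeholders):

# GCR accepts the literal username _json_key with the key file contents as the password
kubectl create secret docker-registry gcr-credentials \
  --namespace=flux-system \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat gcr-key.json)"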
Static Credentials
Instead of creating the Secret directly in your Kubernetes cluster, encrypt it using Mozilla SOPS or Sealed Secrets, then commit and push the encrypted file to Git.
This Secret should be in the same Namespace as your Flux ImageRepository object. Update ImageRepository.spec.secretRef to point to it.
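For illustration, a sketch of an ImageRepository referencing such a Secret, assuming the v1beta2 API (the names, image URL, and interval are placeholders):

apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: app
  namespace: flux-system
spec:
  image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/app
  interval: 5m
  secretRef:
    name: ecr-credentials   # docker-registry Secret in the same namespace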
Related
I'm configuring a Docker agent template in Jenkins and specifying a Docker image in an ECR. When I select the "registry authentication" credentials dropdown, I can only see username/password credentials, despite having other types, such as AWS credentials, configured in Jenkins. The credentials have global scope, and I'm using them in another pipeline via the withCredentials(aws(...)) binding plugin syntax.
I tried creating a username/password credential using the AWS access key and secret key, but that doesn't seem to work, likely because Jenkins passes them directly as username/password rather than exchanging them for a token via aws ecr get-login-password.
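For reference, this is the token-based login flow ECR expects (the region and registry URL are placeholders); raw IAM keys passed as username/password will not authenticate:

# Exchange IAM credentials for a short-lived ECR token, then log Docker in with it
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com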
Is there any way to do this from the agent template configuration? I know I can specify the Docker agent more explicitly in the Jenkinsfile, but I'm trying to stay consistent with the other agents we use.
https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
According to the above document, in order to use an encryption configuration we need to edit the kube-apiserver.yaml file. But on GCP, Azure, or AWS we cannot access the API server, as it is managed by the cloud provider. How can we use an encryption configuration in this case? Has anyone managed to use an encryption configuration to encrypt Secrets on GCP, Azure, or AWS?
Google Secret Manager (GSM) is GCP's flagship service for storing, rotating, and retrieving secrets. A secret in GSM is stored in encrypted form, and the service supports IAM for authentication and fine-grained access control.
Azure Key Vault FlexVolume (on Azure) and Amazon Elastic Container Service for Kubernetes (EKS) (on AWS) are the other tools that can be used.
When deploying to AWS from a .gitlab-ci.yml file, you usually run aws-cli commands as scripts. At my current workplace, before I can use the aws-cli normally, I have to log in via aws-azure-cli and authenticate via 2FA; my workstation is then given a secret key that expires after 8 hours.
GitLab has CI/CD variables where I would usually put AWS_ACCESS_KEY and AWS_SECRET_KEY, but I can't create an IAM role to get these. So I can't use aws-cli commands in the script, which means I can't deploy.
Is there any way to authenticate GitLab other than this? I could reach out to our cloud services team, but that would take a week.
You can configure OpenID Connect (OIDC) to retrieve temporary credentials from AWS without needing to store secrets.
In my view it's actually a best practice, too: use OIDC roles instead of storing actual credentials.
1. Add the identity provider for GitLab in AWS.
2. Configure the role and its trust policy.
3. Retrieve temporary credentials.
Follow this guide: https://docs.gitlab.com/ee/ci/cloud_services/aws/ or a more detailed version: https://oblcc.com/blog/configure-openid-connect-for-gitlab-and-aws/
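A minimal sketch of the .gitlab-ci.yml side of the flow described in the linked GitLab guide (ROLE_ARN is a CI/CD variable you define yourself; the aud value and image are placeholders that must match your identity provider configuration):

deploy:
  image: public.ecr.aws/aws-cli/aws-cli:latest  # any image with the aws CLI
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.com
  script:
    # Exchange the GitLab-issued OIDC token for temporary AWS credentials
    - >
      export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s"
      $(aws sts assume-role-with-web-identity
      --role-arn "$ROLE_ARN"
      --role-session-name "gitlab-ci"
      --web-identity-token "$GITLAB_OIDC_TOKEN"
      --duration-seconds 3600
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
      --output text))
    - aws sts get-caller-identity  # verify the assumed role works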
I am trying to figure out whether there is a way to access secrets, like a DB password, from an AWS Batch job using Vault or Kubernetes Secrets. AWS's Secrets Manager is an option, but per my company's policy we use Vault across all projects. The AWS Batch documentation does not describe any option for storing secrets other than Secrets Manager.
Can anyone tell me whether using Vault or Kubernetes Secrets is an option with AWS Batch?
I have a general question about the RDS feature within AWS Secrets Manager. When I get the secret, it looks like this:
Does this mean these credentials will work directly, or is the password encrypted? If I wanted to sign in to my database over a connection, which credentials do I use, and do they rotate automatically with the rotation feature?
I assume you mean the RdsDataClient, used to access a database such as a Serverless Amazon Aurora instance.
To successfully connect to the database using the RdsDataClient object, you must set up an AWS Secrets Manager secret that is used for authentication. For details, see Rotate Amazon RDS database credentials automatically with AWS Secrets Manager.
To see an AWS tutorial that shows this concept and the corresponding code, see this example that uses the AWS SDK for Kotlin. You will need these values to make a successful connection:
private val secretArnVal = "<Enter the secret manager ARN>"
private val resourceArnVal = "<Enter the database ARN>"
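To show where those two values go, here is a minimal sketch of a query with the AWS SDK for Kotlin's RdsDataClient (the database name, table, and region are hypothetical, not taken from the tutorial):

import aws.sdk.kotlin.services.rdsdata.RdsDataClient
import aws.sdk.kotlin.services.rdsdata.model.ExecuteStatementRequest

suspend fun runQuery(secretArnVal: String, resourceArnVal: String) {
    val request = ExecuteStatementRequest {
        secretArn = secretArnVal      // Secrets Manager secret used for authentication
        resourceArn = resourceArnVal  // ARN of the Aurora Serverless cluster
        database = "jobs"             // hypothetical database name
        sql = "SELECT * FROM Work"    // hypothetical table
    }
    RdsDataClient { region = "us-east-1" }.use { client ->
        val response = client.executeStatement(request)
        response.records?.forEach { row -> println(row) }
    }
}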
See the full example here:
Creating the Serverless Amazon Aurora item tracker application using the Kotlin RdsDataClient API
I just tested this again (it's been a while since it was developed), and it works perfectly.
We will port this example to other supported programming languages too, such as the AWS SDK for Java.
UPDATE
You only need to use Secrets Manager when using the RdsDataClient. As mentioned in that tutorial, the RdsDataClient object is supported only for an Aurora Serverless DB cluster or Aurora PostgreSQL. If you are using MySQL on RDS, you cannot use the RdsDataClient object; you would use a supported JDBC API instead.