I'm configuring a Docker agent template in Jenkins and specifying a Docker image in ECR. When I select the "registry authentication" credentials drop-down, I can only see username/password credentials, despite having other things like AWS credentials configured in Jenkins. The credentials have global scope, and I'm using them in another pipeline via the withCredentials(aws(...)) credentials-binding plugin syntax.
I tried creating a username/password credential from the AWS access key and secret key, but that doesn't seem to work - likely because Jenkins just passes them directly as a username/password rather than running aws ecr get-login-password.
Is there any way to do this from the agent template configuration? I know I can specify the docker agent more explicitly in the Jenkinsfile, but I'm trying to stay consistent with the other agents we use.
When deploying to AWS from a gitlab-ci.yml file, you usually use aws-cli commands as scripts. At my current workplace, before I can use the aws-cli normally, I have to log in via aws-azure-cli, authenticate via 2FA, and then my workstation is given a secret key that expires after 8 hours.
GitLab has CI/CD variables where I would usually put the AWS_ACCESS_KEY and AWS_SECRET_KEY, but I can't create an IAM role to get these. So I can't use aws-cli commands in the script, which means I can't deploy.
Is there any way to authenticate GitLab other than this? I can reach out to our cloud services team, but that will take a week.
You can configure OpenID Connect to retrieve temporary credentials from AWS without needing to store secrets.
In my view it's actually a best practice, too, to use OpenID Connect roles instead of storing actual credentials.
Add the identity provider for GitLab in AWS
Configure the role and its trust policy
Retrieve temporary credentials
Follow this guide: https://docs.gitlab.com/ee/ci/cloud_services/aws/ or a more detailed version: https://oblcc.com/blog/configure-openid-connect-for-gitlab-and-aws/
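As a sketch of what the job side can look like once the identity provider and role exist in AWS (the audience URL and the ROLE_ARN variable are placeholders you would substitute for your setup), GitLab can mint an OIDC token per job and exchange it for short-lived credentials:

```yaml
assume role and deploy:
  image:
    name: amazon/aws-cli:latest
    entrypoint: [""]          # override the image's `aws` entrypoint so GitLab can run the script
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.example.com   # must match the audience configured on the AWS identity provider
  script:
    # Exchange the job's OIDC token for temporary AWS credentials (ROLE_ARN is a CI/CD variable you define)
    - >
      export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s"
      $(aws sts assume-role-with-web-identity
      --role-arn "$ROLE_ARN"
      --role-session-name "gitlab-$CI_PIPELINE_ID"
      --web-identity-token "$GITLAB_OIDC_TOKEN"
      --duration-seconds 3600
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
      --output text))
    - aws sts get-caller-identity   # sanity check: should report the assumed role
```

After the export, the usual aws-cli deploy commands work in the same job without any stored access keys.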
FluxCD ImageRepository authentication with AWS Elastic Container Registry not working on an ARM64 Graviton node.
After debugging, I found that the image used in the init container to get credentials does not support ARM64 instances.
Image name: bitnami/kubectl
Doc link: https://fluxcd.io/docs/guides/image-update/
There are some workarounds provided in the FluxCD documentation:
AWS Elastic Container Registry
Using a JSON key
Using Static Credentials
AWS Elastic Container Registry
The proposed solution is to create a cronjob that runs every 6 hours and re-creates the docker-registry secret using a fresh token.
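A minimal sketch of such a CronJob (the secret name, namespace, region, and account ID are placeholders; it assumes the pod has IAM permissions for ecr:GetAuthorizationToken and RBAC rights to manage the Secret, and that the container image provides both aws and kubectl):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ecr-credentials-sync
  namespace: flux-system
spec:
  schedule: "0 */6 * * *"       # the ECR token expires after 12 hours, so refresh every 6
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: sync
              image: my-registry/aws-cli-kubectl   # placeholder: any image with aws + kubectl
              command:
                - /bin/sh
                - -c
                - |
                  PASSWORD=$(aws ecr get-login-password --region <region>)
                  kubectl create secret docker-registry ecr-credentials \
                    --namespace flux-system \
                    --docker-server=<account>.dkr.ecr.<region>.amazonaws.com \
                    --docker-username=AWS \
                    --docker-password="$PASSWORD" \
                    --dry-run=client -o yaml | kubectl apply -f -
```

The `--dry-run=client -o yaml | kubectl apply` trick makes the job idempotent, so it re-creates the secret whether or not it already exists.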
JSON key
A JSON key doesn't expire, so we don't need a cronjob; we just need to create the secret and reference it in the ImageRepository.
First, create a JSON key file by following this documentation. Grant the service account the Container Registry Service Agent role so that it can access GCR, and download the JSON file. (Note that this option applies to Google Container Registry, not ECR.)
Static Credentials
Instead of creating the Secret directly in your Kubernetes cluster, encrypt it using Mozilla SOPS or Sealed Secrets, then commit and push the encrypted file to git.
This Secret should be in the same Namespace as your flux ImageRepository object. Update the ImageRepository.spec.secretRef to point to it.
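For example, assuming the decrypted Secret is named ecr-credentials in the flux-system namespace (the image URL is a placeholder, and the apiVersion may differ with your Flux release), the reference could look like:

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  image: <account>.dkr.ecr.<region>.amazonaws.com/my-app
  interval: 5m
  secretRef:
    name: ecr-credentials   # the Secret created (and encrypted) as described above
```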
I am trying to create a workflow where developers in my organisation can upload docker images to our AWS ECR. The following commands work:
Step-1: Get Token
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <repo-url>
Step-2: Tag the already built image
docker tag <local-image:tag> <ecr-repo-url>:latest
Step-3: Finally Push
docker push <ecr-repo-url>:latest
Now, this works absolutely fine.
However, I am trying to automate the above steps, and I will NOT have the AWS CLI configured on the end user's machine, so Step-1 will fail for the end user.
So two quick queries:
Can I get the token from a remote machine, with Step-2 and Step-3 happening on the client?
Can I do all three steps remotely, with a service that uploads the local docker image to the remote server, which in turn takes care of tag and push?
I'm hoping that the end-user will have docker installed
In that case you can make use of the AWS CLI Docker image to obtain the token from ECR.
The token itself is just a temporary password, so whether or not you use the AWS CLI on the remote server, it will be valid as the Docker credential.
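For instance, if the end user has Docker but no AWS CLI installed, a sketch using the official amazon/aws-cli image could look like this (credentials are passed through from the environment, and the region and repo URL are placeholders as in the question):

```shell
docker run --rm \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
  amazon/aws-cli ecr get-login-password --region <region> \
  | docker login --username AWS --password-stdin <repo-url>
```

This replaces Step-1 only; Step-2 and Step-3 run on the client unchanged.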
You also have the option of using an AWS SDK such as Boto3, which you could package with a small application to perform this action, although you would need to ensure that the host has the relevant language runtime configured.
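A sketch of the SDK route with Boto3 (the boto3 call itself is commented out since it needs AWS credentials; the helper name docker_credentials is made up for illustration). The parsing works because the authorization token is base64 of "AWS:&lt;password&gt;":

```python
import base64

def docker_credentials(response):
    """Turn an ECR GetAuthorizationToken response into (username, password, registry)."""
    auth = response["authorizationData"][0]
    # The token decodes to "AWS:<password>"; split it on the first colon
    username, _, password = base64.b64decode(auth["authorizationToken"]).decode().partition(":")
    return username, password, auth["proxyEndpoint"]

# With credentials configured, the real call would be:
#   import boto3
#   response = boto3.client("ecr", region_name="<region>").get_authorization_token()
#   username, password, registry = docker_credentials(response)
# and the pair can then be fed to `docker login`.
```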
Alternatively, if you want this to be automated, you could look at using a CI/CD pipeline.
GitHub has Actions, Bitbucket has Pipelines, and GitLab arguably has the most CI/CD functionality built in. These services would perform all of the above actions for you.
As a final suggestion you could use CodeBuild within a CodePipeline to build your image and then tag and deploy it to ECR for you. This will be automated by a trigger and not require any permanent infrastructure.
More information about this option is available in the Build a Continuous Delivery Pipeline for Your Container Images with Amazon ECR as Source article.
I have a Linux server that is not hosted by AWS. Now, I want to use AWS CodePipeline and CodeBuild to build my CI/CD workflow. During the build phase with CodeBuild, I want to transfer the build result files to my remote Linux server. I know I can do this using scp <source> <destination> over SSH. But I don't know how to store the SSH keys in CodeBuild. Is this possible?
Yes it is possible.
You keep the secret (the SSH private key) in AWS Secrets Manager or Parameter Store. CodeBuild has native support for fetching these secrets safely, and they will never be echoed anywhere. See this Stack Overflow answer: How to retrieve Secret Manager data in buildspec.yaml
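A minimal buildspec sketch of that mapping (the secret name, host, and paths here are placeholders): CodeBuild's env.secrets-manager section injects the secret as an environment variable, which the build commands can then write to a key file for scp:

```yaml
version: 0.2
env:
  secrets-manager:
    SSH_KEY: my-deploy-key        # placeholder: Secrets Manager secret holding the private key
phases:
  build:
    commands:
      - mkdir -p ~/.ssh
      - echo "$SSH_KEY" > ~/.ssh/id_rsa
      - chmod 600 ~/.ssh/id_rsa
      - scp -o StrictHostKeyChecking=no build/output.tar.gz deploy@my-server.example.com:/opt/app/
```

The CodeBuild service role also needs secretsmanager:GetSecretValue permission on that secret.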
Since we use AWS for a number of other projects, when it came time to publish private docker images in a repository, I really wanted to use Amazon Elastic Container Registry.
However, the login process seems overly complicated.
Is it correct that the only way to log into ECR is to use the AWS command line tools to generate a 12-hour token, and use that with the docker login command?
Any advice on scripting this process without AWS tools?
You must use AWS tools to generate a temporary authorization token to be used by Docker CLI since it does not support the standard AWS authentication methods. Quoting the explanation from the official AWS ECR authentication documentation:
Because the Docker CLI does not support the standard AWS authentication methods, you must authenticate your Docker client another way so that Amazon ECR knows who is requesting to push or pull an image. If you are using the Docker CLI, then use the docker login command to authenticate to an Amazon ECR registry with an authorization token that is provided by Amazon ECR and is valid for 12 hours. The GetAuthorizationToken API operation provides a base64-encoded authorization token that contains a user name (AWS) and a password that you can decode and use in a docker login command. However, a much simpler get-login command (which retrieves the token, decodes it, and converts it to a docker login command for you) is available in the AWS CLI.
Note that although you must use AWS tools to generate the authentication token, the AWS CLI is not the only option. You can call GetAuthorizationToken using whichever of the AWS tools is convenient to use from your scripts.
The get-login command is available only in the AWS CLI, as opposed to the other AWS tools. As quoted above, it is intended as a simpler way to perform the authorization. (Note that in AWS CLI v2 it has been replaced by get-login-password, which pipes directly into docker login --password-stdin.)
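To make the token format from the quote concrete, here is a small sketch (with a made-up token) of the decode step that get-login automates: the authorization token is base64 of "AWS:&lt;password&gt;":

```python
import base64

# A made-up token in the shape GetAuthorizationToken returns: base64("AWS:<password>")
token = base64.b64encode(b"AWS:temporary-ecr-password").decode()

# Decode and split into the username/password pair for `docker login`
username, _, password = base64.b64decode(token).decode().partition(":")
print(username)   # AWS
print(password)   # temporary-ecr-password
```

Any AWS SDK can perform this decode after calling GetAuthorizationToken, which is what makes scripting possible without the AWS CLI itself.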