Connecting acr with argocd-1 - argocd

I am trying to deploy my image, which is present in ACR, using Argo CD.
I have added the ACR credentials as a secret.
But I am getting an error (shared only as a screenshot).
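For reference, this is a minimal sketch of how an ACR pull secret is usually created and attached; the secret name acr-secret, the registry myregistry.azurecr.io, and the service principal credentials are placeholders, not values from the question:

# Create a docker-registry secret from an ACR service principal (all names are placeholders)
kubectl create secret docker-registry acr-secret \
  --docker-server=myregistry.azurecr.io \
  --docker-username=<service-principal-id> \
  --docker-password=<service-principal-password> \
  -n <application-namespace>

# Reference it from the pod spec that Argo CD deploys:
# spec:
#   imagePullSecrets:
#   - name: acr-secret

Note that the secret has to exist in the namespace of the workload, not in the argocd namespace; creating it in the wrong namespace is a common cause of image pull errors.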

Related

Error while pulling docker image from ECR to EC2 using github actions

The docker pull command from main.yaml is executed in the continuous deployment phase, but it throws an error.
This was the error:
Error response from daemon: repository ***/*** not found: name unknown: The repository with name '***/***' does not exist in the registry with id '***'
From AWS ECR, copy the URI (Uniform Resource Identifier) without the registry name.
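In other words, the image reference should contain only the repository name after the ECR host. A sketch with a placeholder account ID, region, and repository name (none of them from the question):

# Wrong: registry name repeated inside the repository path
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-registry/my-repo:latest
# Right: only the repository name after the registry host
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest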

How to pull a docker image from AWS ECR to Minikube Kubernetes cluster with MFA enabled

I have a Docker image in AWS ECR in my secondary account. I want to pull that image into a Minikube Kubernetes cluster using an AWS IAM role ARN that has MFA enabled on it. Because of this, my deployment fails while pulling the image.
I enabled the registry-creds addon to access the ECR image, but it didn't work.
Is there any other way to access the AWS ECR of Account B via an AWS IAM role ARN with MFA enabled, using the credentials of Account A?
For example, I provided the details like this:
Enter AWS Access Key ID: Access key of Account A
Enter AWS Secret Access Key: Secret key of Account A
(Optional) Enter AWS Session Token:
Enter AWS Region: us-west-2
Enter 12 digit AWS Account ID (Comma separated list): [AccountA, AccountB]
(Optional) Enter ARN of AWS role to assume: <role_arn of AccountB>
ERROR MESSAGE:
Warning Failed 2s (x3 over 42s) kubelet Failed to pull image "XXXXXXX.dkr.ecr.ca-central-1.amazonaws.com/sample-dev:latest": rpc error: code = Unknown desc = Error response from daemon: Head "https://XXXXXXX.dkr.ecr.ca-central-1.amazonaws.com/v2/sample-dev/manifests/latest": no basic auth credentials
Warning Failed 2s (x3 over 42s) kubelet Error: ErrImagePull
While the minikube addons-based solution shown by @DavidMaze is probably cleaner and generally preferable, I wasn't able to get it to work.
Instead, I found that you can give the pod's service account a copy of the Docker login tokens from your local home directory. If you haven't set a serviceAccount, it is default:
# Log in with aws ecr get-login or however
kubectl create secret generic regcred \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'
This works fine as a stopgap.
Minikube doesn't have a way to provide the MFA token. You need to create temporary credentials somehow and provide those credentials to minikube addons configure registry-creds.
My day job uses aws-vault and so my typical sequence for setting this up involves running
aws-vault exec some-profile -- env | grep AWS
minikube addons configure registry-creds
and then copying the temporary access key (starts with ASIA...), secret, and session token into the Minikube configuration. I do not enter a role ARN in the final prompt; the temporary credentials are already associated with the right AWS role.
The same restrictions and workaround would apply if you were using the Kubernetes-level imagePullSecrets.
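If you are not using aws-vault, one way to mint equivalent temporary credentials is plain aws sts assume-role with the MFA device passed explicitly; the role ARN, MFA serial, and session name below are placeholders:

# Assume the role in Account B with an MFA code from Account A's device (placeholder ARNs)
aws sts assume-role \
  --role-arn arn:aws:iam::<AccountB-id>:role/<role-name> \
  --role-session-name minikube-ecr \
  --serial-number arn:aws:iam::<AccountA-id>:mfa/<user-name> \
  --token-code 123456
# Copy the AccessKeyId (starts with ASIA), SecretAccessKey, and SessionToken from the
# output into `minikube addons configure registry-creds`, and leave the role ARN prompt empty.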

How to login to AWS ECR from Gitlab CI to download a Docker image

I am using a GitLab pipeline. The runner is hosted by GitLab.
To decrease build time, I built a custom image that contains the Maven dependencies, so they are not downloaded from the internet during each build.
I pushed my custom image to AWS ECR, but GitLab CI is unable to download this image.
Here is the error log:
Running with gitlab-runner 14.3.0-rc1 (ed15bfbf)
on docker-auto-scale z3WU8uu-
Preparing the "docker+machine" executor
Using Docker executor with image ***.dkr.ecr.eu-west-1.amazonaws.com/***:latest ...
Pulling docker image 301768173512.dkr.ecr.eu-west-1.amazonaws.com/inuka-ci:latest ...
WARNING: Failed to pull image with policy "always": Error response from daemon: Get https://301768173512.dkr.ecr.eu-west-1.amazonaws.com/v2/inuka-ci/manifests/latest: no basic auth credentials (manager.go:214:0s)
ERROR: Job failed (system failure): failed to pull image "301768173512.dkr.ecr.eu-west-1.amazonaws.com/inuka-ci:latest" with specified policies [always]: Error response from daemon: Get https://301768173512.dkr.ecr.eu-west-1.amazonaws.com/v2/inuka-ci/manifests/latest: no basic auth credentials (manager.go:214:0s)
Since the pipeline is triggered by GitLab CI, I am unable to execute a docker login command before the pipeline starts.
How can I make my GitLab pipeline log in to AWS ECR before the pipeline starts?
Edited answer; I had previously misread the question.
Create an IAM user with at least read-only access to ECR and set these environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION.
Before you can pull images from ECR, you need to obtain a token using the AWS CLI.
One way to provide auth credentials for ECR is to define a variable called DOCKER_AUTH_CONFIG, which has the following structure:
{
  "auths": {
    "myregistryurl.com": {
      "auth": "base64(username:password)"
    }
  }
}
You need to define a job like this in another pipeline, because the main pipeline needs the token at launch:
aws_token:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  script:
    - USER=AWS
    - TOKEN=$(aws ecr get-login-password)
    - AUTH=$(echo "$USER:$TOKEN" | base64 | tr -d "\n")
    - echo $AUTH
Take the value displayed in the logs and put it in the main pipeline as the value of the variable DOCKER_AUTH_CONFIG.
This way, the next run of the pipeline will pull the image correctly.
Note that the token expires after 12 hours; once it does, you will need to run this job again.
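Put together, the CI/CD variable would look roughly like this, using the registry host from the error log above; the auth value is whatever the aws_token job printed:

{
  "auths": {
    "301768173512.dkr.ecr.eu-west-1.amazonaws.com": {
      "auth": "<output of the aws_token job>"
    }
  }
}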

Can I use docker image registry from google cloud build?

With Google Cloud Build, I am creating a trigger to build using a Dockerfile, the end result of which is a Docker image.
I'd like to tag and push this image to the standard Docker registry (docker.io), but I get the following error:
The push refers to repository [docker.io/xxx/yyy]
Pushing xxx/yyy:master
denied: requested access to the resource is denied
I assume that this is because within the context of the build workspace, there has been no login to the Docker registry.
Is there a way to do this, or do I have to use the Google Image Repository?
You can configure Google Cloud Build to push to a different repository with a cloudbuild.yaml in addition to the Dockerfile. You can log in to Docker by passing your password as an encrypted secret env variable. An example of using a secret env variable can be found here: https://cloud.google.com/cloud-build/docs/securing-builds/use-encrypted-secrets-credentials#example_build_request_using_an_encrypted_variable
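As a rough sketch (using Cloud Build's Secret Manager-based secrets rather than the KMS-encrypted variable shown at that link; the project ID, secret name, and Docker Hub username are placeholders), a cloudbuild.yaml could look like:

steps:
# Build the image from the Dockerfile in the repository
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'docker.io/xxx/yyy:master', '.']
# Log in to docker.io with a password injected from Secret Manager, then push
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'echo "$$DOCKER_PASSWORD" | docker login --username=xxx --password-stdin && docker push docker.io/xxx/yyy:master']
  secretEnv: ['DOCKER_PASSWORD']
availableSecrets:
  secretManager:
  - versionName: projects/my-project/secrets/dockerhub-password/versions/latest
    env: 'DOCKER_PASSWORD'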

Unable to pull the image from AWS ECR to my kubernetes cluster on AWS

I have created a Kubernetes cluster using kubeadm on AWS. I have a private repository for my Docker images on AWS ECR. When I try to pull an image from AWS ECR into my Kubernetes cluster using the kubectl run command, it creates the deployment, but the pod shows ErrImagePull. I understand from the Kubernetes documentation that we don't need to create an image pull secret when pulling from AWS ECR, since both the cluster and the image registry are on AWS. I attached my policy document as well (as a screenshot). Can someone please help me out?
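For context, node-level ECR access without a pull secret depends on the nodes' instance role being allowed to call ECR. A minimal policy statement commonly used for pulling looks like this (a generic sketch, not the poster's attached policy document):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MinimalEcrPullSketch",
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}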