Multiple GCR Project Registry Login in Jenkins Machine

We have our cloud set up on GCP with multiple projects. In our Jenkins machines I can see multiple entries for Docker registries. One of them looks like this:
"https://gcr.io/abc-defghi-212121": {
"auth": "somethingsomethingsomething=",
"email": "not#val.id"
I want to do the same thing for another project, which would look like this:
"https://gcr.io/jkl-mnopqr-313131": {
"auth": "somethingsomethingsomething=",
"email": "not#val.id"
That way, docker login should work against both registries. I have followed the link below:
https://cloud.google.com/container-registry/docs/advanced-authentication
It describes several different methods, but I am still confused. Please help.

The most common way to authenticate Docker with Container Registry is to use gcloud as a Docker credential helper. This is configured by running:
$ gcloud auth configure-docker
This creates a JSON file, ~/.docker/config.json, that tells Docker to authenticate to Container Registry as the current gcloud user.
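For reference, the file it writes typically looks something like the snippet below (the exact host list may vary). Note that the credential helper is keyed by registry host, not by project, so a single gcr.io entry should cover images from every project hosted there, including both gcr.io/abc-defghi-212121 and gcr.io/jkl-mnopqr-313131:
{
    "credHelpers": {
        "gcr.io": "gcloud",
        "us.gcr.io": "gcloud",
        "eu.gcr.io": "gcloud",
        "asia.gcr.io": "gcloud"
    }
}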
However, I am assuming that in your case it is Jenkins that builds and pushes your images to Container Registry. You need to authenticate it, which I believe is done using the OAuth plugin. You can find it in the Jenkins interface:
Jenkins -> Credentials -> Global Credentials -> Add Credentials
Nevertheless, you could also refer to the documentation that explains how to troubleshoot common Container Registry and Docker issues. Also, make sure that you have the required permissions to push or pull.

Related

docker pull: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource

How do I give a new service account this permission?
I have a VM with "Compute Engine default service account" and it works.
I changed the service account to one with just:
Artifact Registry Administrator
Artifact Registry Reader
and this results in the above error on docker pull.
Thanks
Check whether you have correctly configured Docker to pull and push images to Artifact Registry: https://cloud.google.com/artifact-registry/docs/docker/pushing-and-pulling
You also have to be sure you are using the expected service account on the machine where you execute your command.
If you are executing from your local machine in bash, check that you are authenticated as the expected service account with:
gcloud auth activate-service-account --key-file=your_key_file_path.json
export GOOGLE_APPLICATION_CREDENTIALS=your_key_file_path.json
The permissions you have given to your service account seem correct for the action you need to perform.
This happens when you try to push or pull an image from a repository whose hostname (associated with the repository location) has not yet been added to the credential helper configuration for authentication.
For the gcloud credential helper or standalone credential helper, the Artifact Registry hosts you use must be in your Docker configuration file.
Artifact Registry does not automatically add all registry hosts to the Docker configuration file, because Docker response time is significantly slower when a large number of registries is configured. To minimize the number of registries in the configuration file, you add only the hosts that you need.
You need to run configure-docker while impersonating your service account ($SERVICE_ACCOUNT_EMAIL):
1. Run the following command to make sure you are still impersonating $SERVICE_ACCOUNT_EMAIL:
$ gcloud auth list
If the service account is not impersonated, run the following command:
$ gcloud auth activate-service-account "$SERVICE_ACCOUNT_EMAIL" \
    --key-file="$SERVICE_ACCOUNT_JSON_FILE_PATH"
2. Run the configure-docker command against the auth group:
$ gcloud auth configure-docker <location>-docker.pkg.dev
3. Finally, try pulling the Docker image again.
Refer to Authenticating to a repository and the related Stack Overflow post for more information.
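For reference, after step 2 the ~/.docker/config.json on that machine should contain an entry for the host you configured, along these lines (us-central1 here is only an illustration of <location>):
{
    "credHelpers": {
        "us-central1-docker.pkg.dev": "gcloud"
    }
}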

File upload from Docker container to Google Cloud Storage

I'm putting together a CI/CD pipeline on GKE based on
this guide by Google.
Everything is working well, except that as it's a Django project I also need to run the collectstatic command and upload the files to Google Storage.
In the Dockerfile I have the following commands:
RUN python manage.py collectstatic --noinput
RUN gsutil rsync -R /home/vmagent/app/myapp/static gs://mystorage/static
Collectstatic works as expected, but the gsutil upload fails with the following error message:
ServiceException: 401 Anonymous caller does not have storage.objects.create access to mystorage/static/...
What's the best way to authenticate gsutil?
If you are running your Docker container on GCP (GKE), you use Application Default Credentials to authenticate a pod, which then has the same permissions as that service account. More information about this can be found here. Both GKE and other Kubernetes clusters allow you to import key files as secrets. On all Kubernetes platforms this is done with the following command; the full guide can be found here.
kubectl create secret generic pubsub-key --from-file=key.json=PATH-TO-KEY-FILE.json
Then set the environment variable in your manifest like this:
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
  value: /var/secrets/google/key.json
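The secret created with the kubectl create secret command above also has to be mounted into the container at that path. A minimal sketch of the relevant manifest fragment, with a placeholder container name and image, could look like this:
spec:
  containers:
  - name: my-app                        # placeholder name
    image: gcr.io/my-project/my-app     # placeholder image
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
    volumeMounts:
    - name: google-cloud-key
      mountPath: /var/secrets/google
      readOnly: true
  volumes:
  - name: google-cloud-key
    secret:
      secretName: pubsub-key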
Too bad running pods under a certain role is not yet possible. GCP operates with service accounts, not with roles the way AWS does. In AWS you can assign a role to a task, which grants a container permissions under that role.

Is it possible to pull images from ECR without using docker login

I have an ECR repository and an EC2 instance running Docker. What I want to do is pull images without doing docker login first.
Is it possible at all? If yes what kind of policy should I attach to EC2 instance and/or ECR repo? I did a lot of experiments, but did not succeed.
And please - no suggestions on how to use aws get-login. My aim is to get rid of it by using IAM policy/roles.
To use an EC2 role without having to run docker login, the Amazon ECR credential helper (https://github.com/awslabs/amazon-ecr-credential-helper) can be used.
Place the docker-credential-ecr-login binary on your PATH and set the contents of your ~/.docker/config.json file to be:
{
    "credsStore": "ecr-login"
}
Now commands such as docker pull or docker push will work transparently.
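If you would rather scope the helper to a single registry instead of every registry, the same helper can also be configured per registry host; the account ID and region below are placeholders:
{
    "credHelpers": {
        "123456789012.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
    }
}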
My aim is to get rid of it by using IAM policy/roles.
I don't see how this is possible since some form of authentication is required.

Can I use docker image registry from google cloud build?

With Google Cloud Build, I am creating a trigger to build using a Dockerfile, the end result of which is a docker image.
I'd like to tag and push this to the standard Docker image repository (docker.io), but I get the following error:
The push refers to repository [docker.io/xxx/yyy]
Pushing xxx/yyy:master
denied: requested access to the resource is denied
I assume that this is because within the context of the build workspace, there has been no login to the Docker registry.
Is there a way to do this, or do I have to use the Google Image Repository?
You can configure Google Cloud Build to push to a different repository with a cloudbuild.yaml in addition to the Dockerfile. You can log in to Docker by passing your password as an encrypted secret env variable. An example of using a secret env variable can be found here: https://cloud.google.com/cloud-build/docs/securing-builds/use-encrypted-secrets-credentials#example_build_request_using_an_encrypted_variable
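A minimal sketch of such a cloudbuild.yaml, assuming a KMS-encrypted Docker Hub password as in the linked example (the username, image name, key ring, key, and ciphertext are all placeholders):
steps:
# Log in to Docker Hub using the decrypted secret environment variable
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker login --username=my-dockerhub-user --password=$$DOCKERHUB_PASSWORD']
  secretEnv: ['DOCKERHUB_PASSWORD']
# Build and push the image to docker.io
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'docker.io/xxx/yyy:master', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'docker.io/xxx/yyy:master']
secrets:
- kmsKeyName: 'projects/my-project/locations/global/keyRings/my-keyring/cryptoKeys/my-key'
  secretEnv:
    DOCKERHUB_PASSWORD: '<base64-encoded ciphertext>'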

Using Google Cloud Source Repositories with service account

Is it possible to access a Google Cloud Source Repository in an automated way, i.e. from a GCE instance using a service account?
The only authentication method I am seeing in the docs is to use the gcloud auth login command, which will authenticate my personal user to access the repo, not the machine I am running commands from.
If you want to clone with git rather than running through gcloud, you can run:
git config --global credential.helper gcloud.sh
...and then this will work:
git clone https://source.developers.google.com/p/$PROJECT/r/$REPO
On GCE VMs, running
gcloud source repos clone default ~/my_repo
should work automatically without an extra authentication step, as it will use the VM's service account.
If you are running on some other machine, you can download a service account .json key file from https://console.cloud.google.com and activate it with
gcloud auth activate-service-account --key-file KEY_FILE
and then run the above clone command.
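Putting those two steps together, a minimal sketch for a non-GCE machine (the key file path is a placeholder, and the repository is assumed to be named default as above):
# Authenticate gcloud as the service account using its downloaded key file
gcloud auth activate-service-account --key-file /path/to/key.json
# Clone the repository into ~/my_repo using the activated service account
gcloud source repos clone default ~/my_repo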
In case somebody like me is trying to do this as part of a Dockerfile: after struggling for a while, I only managed to get it to work like this:
RUN gcloud auth activate-service-account --key-file KEY_FILE ; \
gcloud source repos clone default ~/my_repo
As you can see, having it be part of the same RUN command was the key; otherwise it kept failing with:
ERROR: (gcloud.source.repos.clone) You do not currently have an active account selected.
Enable access to the "Cloud Source Repositories" Cloud API for the instance. You should do this while creating or editing the instance in the Admin console
From a shell inside the instance, execute gcloud source repos clone <repo_name_in_cloud_source> <target_path_to_clone_into>
If you are running on GCE, take advantage of the new authentication method that needs fewer lines of code.
When creating your VM instance, under "Access & Security," set "Cloud Platform" to "Enabled."
Then the authentication code is this simple:
import httplib2
from oauth2client.client import GoogleCredentials

# Uses the VM's service account via Application Default Credentials
credentials = GoogleCredentials.get_application_default()
http = credentials.authorize(httplib2.Http())
See
https://developers.google.com/identity/protocols/application-default-credentials