GCP - Impersonate Service Account from Local Machine

I have used AWS in the past, so I may be comparing its behavior with GCP's. Bear with me and point me to the correct way of doing things.
What I did:
I created a service account in GCP with the Storage Object Viewer role.
I also created a key pair and downloaded the JSON key file to my local machine.
If I have gcloud/gsutil installed on my local machine, how can I assume/impersonate the service account and work on GCP resources?
Where should I keep the downloaded JSON file? I already referred to this - https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating
I downloaded the file as key.json and kept it in my home directory.
Then I did this.
export GOOGLE_APPLICATION_CREDENTIALS="~/key.json"
Execute this command
gsutil cp dummy.txt gs://my-bucket/
Ideally, it should NOT work, but I am able to upload files.

You can set the path to your service account key file in an environment variable (when using the Google Cloud SDKs):
#linux
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/path/to/serviceAccountFile.json"
#windows
$env:GOOGLE_APPLICATION_CREDENTIALS="C:\Users\username\Downloads\path\to\serviceAccountFile.json"
If you don't have the key file, run the following command that'll download the key:
gcloud iam service-accounts keys create ./serviceAccount.json --iam-account=svc_name@project.iam.gserviceaccount.com
You can then use activate-service-account to use the given service account, as shown below:
gcloud auth activate-service-account --key-file=serviceAccount.json
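To confirm the activation worked, a quick check (a minimal sketch; the bucket name is a placeholder):
# List credentialed accounts; the active one is marked with an asterisk
gcloud auth list
# A read-only test against a bucket the service account can view
gsutil ls gs://my-bucket/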

Related

docker pull: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource

How do I give a new service account this permission?
I have a VM with "Compute Engine default service account" and it works.
I changed the service account to one with just:
Artifact Registry Administrator
Artifact Registry Reader
and this results in the above error on docker pull.
Thanks
Check whether you have correctly configured Docker to be able to pull and push images to Artifact Registry: https://cloud.google.com/artifact-registry/docs/docker/pushing-and-pulling
You also have to be sure you are using the expected service account in the place where you execute your command.
If you execute the command from your local machine in bash, check that you are connected with the expected service account:
gcloud auth activate-service-account --key-file=your_key_file_path.json
export GOOGLE_APPLICATION_CREDENTIALS=your_key_file_path.json
The permissions you have given to your service account seem to be correct for the action you need to execute.
This happens when you are trying to push/pull an image to/from a repository whose hostname (associated with the repository location) has not yet been added to the credential helper configuration for authentication.
For the gcloud credential helper or standalone credential helper, the Artifact Registry hosts you use must be in your Docker configuration file.
Artifact Registry does not automatically add all registry hosts to the Docker configuration file. Docker response time is significantly slower when there is a large number of configured registries. To minimize the number of registries in the configuration file, you add only the hosts that you need to the file.
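Once configured (a minimal sketch, assuming the us-central1 region), the host should show up under credHelpers in the Docker configuration file:
# Inspect the Docker configuration to confirm the registry host was added
cat ~/.docker/config.json
# expected to contain something like: "credHelpers": { "us-central1-docker.pkg.dev": "gcloud" }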
You need to run configure-docker while impersonating your service account ($SERVICE_ACCOUNT_EMAIL):
1. Run the following command to make sure you are still impersonating $SERVICE_ACCOUNT_EMAIL:
$ gcloud auth list
If the service account is not impersonated then run the following command:
$ gcloud auth activate-service-account "$SERVICE_ACCOUNT_EMAIL" --key-file=$SERVICE_ACCOUNT_JSON_FILE_PATH
2. Run the configure-docker command against the auth group:
$ gcloud auth configure-docker <location>-docker.pkg.dev
3. Finally, try pulling the Docker image again.
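For example (the location, project, repository, and image names below are placeholders):
docker pull us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest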
Refer to Authenticating to a repository and the related Stack Overflow post for more information.

How to read from google cloud storage buckets using service credentials json

I want to read from Google Cloud Storage with gsutil using a service credentials file.
However, I don't really understand how to pass it the credentials file.
I have tried running
"gcloud auth activate-service-account [ACCOUNT] --key-file=KEY_FILE"
and then running "gsutil list". However, I get the following output:
You are attempting to perform an operation that requires a project id, with none configured. Please re-run gsutil config and make sure to follow the instructions for finding and entering your default project id.
After authenticating with gcloud, should I be able to use gsutil?
Since you have done "gcloud auth activate-service-account [ACCOUNT] --key-file=KEY_FILE", the only thing you need to do is gcloud config set project <project id>
Note that gsutil has no global -p option of its own.
Add -p <project id> to your gsutil command (for subcommands that support it, such as ls and mb), or perform a gcloud config set project <project id> to set the project ID globally.
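Putting it together, a minimal sketch (the project ID is a placeholder):
# Set the default project once for the active gcloud configuration
gcloud config set project my-project-id
gsutil list
# or pass the project per command where the subcommand supports it
gsutil ls -p my-project-id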
You can also refer to this documentation for more gsutil arguments.

gsutil permission in Ansible

I would like to use gsutil as a command in Ansible (2.5.X).
On the managed server I have already set up Cloud access (service account).
When I use gsutil on the machine, it works without problems.
But when I create a playbook on my management machine and try to run an SDK command, I get no access to the cloud and permission denied errors.
I suspect that the SSH connection and environment are handled in a specific way by Ansible. Could someone explain how to use SDK commands in Ansible?
- name: use ansible command
  command: >
    gsutil list gs://project.something.com
I know that there is a gs_storage module, but I do not know where to look for gs_access_key in an already configured setup. In .config/gcloud? I'm still learning the Cloud, so some of these things are new to me. The Cloud access was set up using a .json key, but I deleted this key from the managed machine afterwards (it shouldn't be exposed).
Best Regards
Kamil
gsutil list would at least require the Viewer role assigned to the instance service account, or roles/storage.objectViewer in case it should also be able to get files from a bucket. Providing Credentials as Module Parameters shows how to authenticate with the gcp_compute_instance module; also see the Cloud Storage IAM Roles and Cloud Storage Authentication (the scopes).
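If the role still needs to be granted, a hedged sketch (the project ID and service account email are placeholders):
gcloud projects add-iam-policy-binding my-project-id \
    --member="serviceAccount:my-sa@my-project-id.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"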

Can I automate `gcloud auth login`?

I am unable to non-interactively activate my Google Cloud service account, even after reading several SO threads.
Creating a service account
gcloud iam service-accounts create my-awesome-acct ...
Creating a role for the service account
gcloud iam roles create AwesomeRole \
--permissions storage.objects.create,storage.objects.delete ....
Generating the keys
gcloud iam service-accounts keys create ~/awesome-key.json ...
Activating the service account
gcloud auth activate-service-account my-awesome-acct ~/awesome-key.json
My Issue
Even after following the above steps, when I run gsutil ... commands, I still get the error message:
$ gsutil cp my_file.tgz gs://my_bucket
Copying file://my_file.tgz [Content-Type=application/x-tar]...
Your credentials are invalid. Please run
$ gcloud auth login
The only way I could get this to work is to actually run gcloud auth login and allow the authentication in a web browser.
Am I doing something wrong? Or is this intended for every service account?
I'm going to answer my own question here.
My Solution
Instead of using gsutil, I decided to use the Google Cloud Client Libraries.
What I did:
gsutil cp my_file.tgz gs://my_bucket
What I am doing now:
import os
from google.cloud import storage  # successor to the old gcloud package

# Key file is located in the current directory; point the client at it
os.environ.setdefault('GOOGLE_APPLICATION_CREDENTIALS', 'gcloud-auth.json')

client = storage.Client()
bucket = client.get_bucket("my_bucket")
blob = bucket.blob("my_file.tgz")
blob.upload_from_filename("my_file.tgz")
Hindsight 20/20
After getting the above solution working, it seems that if I had also set the environment variable GOOGLE_APPLICATION_CREDENTIALS, my gsutil command should have worked too. (untested)
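For reference, a hedged sketch of that untested non-interactive gsutil flow, reusing the key generated above; whether gsutil honors GOOGLE_APPLICATION_CREDENTIALS depends on how it is installed, so the explicit gcloud activation is the safer path:
# Either point the environment variable at the key...
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/awesome-key.json
# ...or activate the service account explicitly via gcloud
gcloud auth activate-service-account --key-file=$HOME/awesome-key.json
gsutil cp my_file.tgz gs://my_bucket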

Using Google Cloud Source Repositories with service account

Is it possible to access a Google Cloud Source Repository in an automated way, i.e. from a GCE instance using a service account?
The only authentication method I am seeing in the docs is to use the gcloud auth login command, which will authenticate my personal user to access the repo, not the machine I am running commands from.
If you want to clone with git rather than running through gcloud, you can run:
git config --global credential.helper gcloud.sh
...and then this will work:
git clone https://source.developers.google.com/p/$PROJECT/r/$REPO
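A quick way to confirm the helper is registered (a minimal check):
git config --global --get credential.helper
# should print: gcloud.sh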
On GCE VMs, running
gcloud source repos clone default ~/my_repo
should work automatically without an extra authentication step, as it will use the VM's service account.
If you are running on some other machine, you can download a service account .json key file from https://console.cloud.google.com and activate it with
gcloud auth activate-service-account --key-file KEY_FILE
and then run the above clone command.
In case somebody like me is trying to do this as part of a Dockerfile: after struggling for a while, I only managed to get it to work like this:
RUN gcloud auth activate-service-account --key-file KEY_FILE ; \
    gcloud source repos clone default ~/my_repo
As you can see, having both commands in the same RUN instruction was the key; otherwise it kept failing with
ERROR: (gcloud.source.repos.clone) You do not currently have an active account selected.
Enable access to the "Cloud Source Repositories" Cloud API for the instance. You should do this while creating or editing the instance in the Google Cloud Console.
From a shell inside the instance, execute gcloud source repos clone <repo_name_in_cloud_source> <target_path_to_clone_into>
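If you create the instance from the command line instead, a hedged sketch (the instance name is a placeholder; cloud-platform is the broad scope that covers Cloud Source Repositories access):
gcloud compute instances create my-instance --scopes=cloud-platform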
If you are running on GCE, take advantage of the new authentication method that needs fewer lines of code.
When creating your VM instance, under "Access & Security," set "Cloud Platform" to "Enabled."
Then the authentication code is this simple:
import httplib2
from oauth2client.client import GoogleCredentials

# Application Default Credentials resolve to the VM's service account on GCE
credentials = GoogleCredentials.get_application_default()
http = credentials.authorize(httplib2.Http())
See
https://developers.google.com/identity/protocols/application-default-credentials