Google Cloud Compute - Cannot get access to account security instance - google-cloud-platform

I'm trying to run a Docker container via Compute Engine (not Cloud Run, as I need a long-term instance).
The container works fine on my local machine, where it can access GCloud account resources.
I can see that it's running on a Container-Optimized OS on Compute Engine.
I tried running the following commands in order, but I keep getting the same error: CONSUMER_INVALID. The IAM account has all the required permissions; I triple-checked this.
# Fix gcloud issue
alias gcloud='(docker images google/cloud-sdk || docker pull google/cloud-sdk) > /dev/null;docker run -t -i --net=host -v $HOME/.config:/.config -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker google/cloud-sdk gcloud'
# Set up gcloud
gcloud
# Enable Docker access via gcloud
gcloud auth configure-docker
Need to run
docker-credential-gcr configure-docker
export GOOGLE_CLOUD_PROJECT=project-352014
Not sure what to do now; it seems Compute isn't communicating properly with the internal account resources?
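One way to narrow this down, as a debugging sketch (these curl calls are standard Compute Engine metadata queries, not from the original post): check from inside the instance which service account and project the workload actually sees. The project ID returned must match the one your commands target; a mismatch is a typical source of CONSUMER_INVALID errors.
# Which service account does the instance expose to workloads?
curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email
# Which project does the instance belong to?
curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/project/project-id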

Related

How to copy a file from one gcp instance to another gcp instance in same project

I am currently running 29 instances, one in each available region on GCP, and I need all of the instances to have a particular Python script file.
As I was getting tired of uploading it manually through the console 29 times, I was wondering: is there a way to upload the script to just one instance and then copy it over to the 28 other instances with the gcloud scp command?
Currently, I was trying the following:
sudo gcloud compute scp --zone='asia-east1-b' /home/file.txt instance-asia-east1:/home/
The command above tries to scp "file.txt" over to instance-asia-east1.
I included sudo because I was having some permission issues, but after adding it I get another error message:
root@000.000.000.00: Permission denied (publickey).
lost connection
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
What can be the issue, and how can I resolve this?
You should avoid using sudo.
If you add --verbosity=debug to any gcloud command (in this case gcloud compute ssh or gcloud compute scp), you'll see that gcloud invokes your host's ssh and scp binaries (probably from /usr/bin). It uses a private key that was generated by gcloud for your credentialed account (gcloud config get account, or the default shown by gcloud auth list).
gcloud compute scp \
${PWD}/${FILE} \
${INSTANCE}:. \
--project=${PROJECT} \
--zone=${ZONE} \
--verbosity=debug
Yielding:
DEBUG: Running [gcloud.compute.scp] with arguments: ...
...
DEBUG: Current SSH keys in project: ['...:ssh-rsa ... user@host']
DEBUG: Running command [/usr/bin/scp -i .../.ssh/google_compute_engine -o ...
INFO: Display format: "default"
DEBUG: SDK update checks are disabled.
NOTE /usr/bin/scp -i .../.ssh/google_compute_engine ...
When you run as sudo, even if you copy your credentialed user's google_compute_engine SSH keys (to e.g. /root/.ssh), the authenticated user won't match, unless you also duplicate the gcloud config...
I recommend you solve the permission issue that triggered your use of sudo.
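To answer the fan-out part of the question: once plain (non-sudo) gcloud compute scp works, you can loop over every instance in the project from your local machine rather than copying instance-to-instance. A minimal sketch, assuming your account can reach all 29 instances:
# Emit "NAME ZONE" pairs for every instance, then scp the file to each.
gcloud compute instances list --format="value(name,zone)" |
while read -r NAME ZONE; do
  gcloud compute scp /home/file.txt "${NAME}:/home/" --zone="${ZONE}"
done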

Unable to SSH/gcloud into default Google Deep Learning VM

I created a new Google Deep Learning VM, keeping all the defaults except for requesting no GPU.
The VM instance was successfully launched, but I cannot SSH into it from the browser.
I get the same issue when attempting to connect with gcloud (using the command provided under the drop-down arrow at the right of the instance's SSH button):
ssh: connect to host 34.105.108.43 port 22: Connection timed out
ERROR: (gcloud.beta.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Why?
It turns out that the browser-based SSH client and browser-based gcloud client were disabled by my organization, which is why I couldn't access the VM. The reason I was given is that to allow browser-based SSH, one would have to expose the VMs to the entire web, because Google does not publish the list of IPs it uses for browser-based SSH.
So instead, one can SSH into a GCP VM with one's local SSH client by first uploading one's SSH key using the GCP web console. See https://cloud.google.com/compute/docs/instances/connecting-advanced#linux-macos for the documentation on how to use a local SSH client with GCP.
Since the documentation can be a bit tedious to parse, here are the commands I run on my local Ubuntu 18.04 LTS x64 to upload my SSH key and connect to the VM:
If you haven't installed gcloud yet:
# https://cloud.google.com/sdk/docs/install#linux (<- go there to get the latest gcloud URL to download via curl):
sudo apt-get install -y curl
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-310.0.0-linux-x86_64.tar.gz
tar -xvf google-cloud-sdk-310.0.0-linux-x86_64.tar.gz
./google-cloud-sdk/install.sh
./google-cloud-sdk/bin/gcloud init
Once gcloud is installed:
# Connect to gcloud
gcloud auth login
# Retrieve one's GCP "username"
gcloud compute os-login describe-profile
# The output will be "name: '[some large number, which is the username]'"
# Create a new SSH key
ssh-keygen -t rsa -f ~/.ssh/gcp001 -C USERNAME
chmod 400 ~/.ssh/gcp001
# if you want to view the public key: nano ~/.ssh/gcp001.pub
gcloud compute os-login ssh-keys add --key-file ~/.ssh/gcp001.pub
gcloud compute ssh --project PROJECT_ID --zone ZONE VM_NAME
# Note that PROJECT_ID can be viewed when running `gcloud auth login`,
# which will output "Your current project has been set to: [PROJECT_ID]".
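Once the key is registered, you can also connect with the plain OpenSSH client instead of going through gcloud. A sketch, where EXTERNAL_IP is the VM's external address from the console and USERNAME is the OS Login name retrieved above:
# Connect directly with the key generated earlier
ssh -i ~/.ssh/gcp001 USERNAME@EXTERNAL_IP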
In order to connect to the VM instance, you will have to follow the guide from GCP and then set up a role with the necessary authorization under IAM & Admin.
Please do:
sudo gcloud compute config-ssh
gcloud auth login
Log in to your Google account and accept access for Google Cloud.
Then set the project if not yet done:
gcloud config set project YOUR-PROJECT-ID
Run gcloud compute ssh with the options you need.
If you still have a problem, remove the generated key:
rm ~/.ssh/google_compute_engine
Run gcloud compute ssh again and the issue should be solved!

Why do I get "get-credentials requires edit permission" error in gcloud on my terminal, when it succeeds in Cloud Shell?

From my laptop, I am able to execute most gcloud commands, for example creating a cluster and many other commands. I have the Project Owner role.
But when I try to get credentials for a K8s cluster, I get a permission error, while in Cloud Shell the same command succeeds.
The logged-in account is the same in both.
% gcloud container clusters get-credentials my-first-cluster-1 --zone us-central1-c --project my-project
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) get-credentials requires edit permission on my-project
$ gcloud config list account --format "value(core.account)"
<MY EMAIL>
But in Cloud Shell, this succeeds!
$ gcloud container clusters get-credentials my-first-cluster-1 --zone us-central1-c --project my-project
Fetching cluster endpoint and auth data.
kubeconfig entry generated for my-first-cluster-1.
$ gcloud config list account --format "value(core.account)"
<MY EMAIL>
The error message is indeed incorrect and not very helpful in this case. This issue occurs when the gcloud config value container/use_client_certificate is set to True but no client certificate has been configured. (Note that the client certificate is a legacy authentication method and is disabled by default for clusters created with GKE 1.12 and higher.) Setting it to False via the following gcloud command solves the issue:
gcloud config set container/use_client_certificate False
This config value is set to False by default in Cloud Shell, which explains the different behavior you experienced.
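To confirm this is what you're hitting before changing anything, you can read the current value of the property on each machine (gcloud config get-value is the standard way to inspect a single setting):
gcloud config get-value container/use_client_certificate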

Location of /home/airflow

I specified 3 nodes when creating a Cloud Composer environment. I tried to connect to the worker nodes via SSH, but I am not able to find the airflow directory in /home. So where exactly is it located?
Cloud Composer runs Airflow on GKE, so you won't find data directly on any of the host GCE instances. Instead, Airflow processes are run within Kubernetes-managed containers, which either mount or sync data to the /home/airflow directory. To find the directory you will need to look within a running container.
Since each environment stores its Airflow data in a GCS bucket, you can alternatively inspect files by using Cloud Console or gsutil. If you really want to view /home/airflow with a shell, you can use kubectl exec which allows you to run commands/open a shell on any pod/container in the Kubernetes cluster. For example:
# Obtain the name of the Composer environment's GKE cluster
$ gcloud composer environments describe $ENV_NAME
# Fetch Kubernetes credentials for that cluster
$ gcloud container clusters get-credentials $GKE_CLUSTER_NAME
Once you have Kubernetes credentials, you can list running pods and SSH into them:
# List running pods
$ kubectl get pods
# SSH into a pod
$ kubectl exec -it $POD_NAME -- bash
airflow-worker-a93j$ ls /home/airflow
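If you only need to see the files rather than open a shell, reading the environment's bucket directly is usually quicker. A sketch, assuming $BUCKET_NAME is the bucket reported by the describe command above (its config.dagGcsPrefix field):
# List the DAGs that get synced into the workers' /home/airflow/gcs/dags
$ gsutil ls gs://$BUCKET_NAME/dags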

403: Request had insufficient authentication scopes - gcloud container clusters get-credentials

I need to connect to a GKE kubernetes cluster from a gitlab runner, but I don't want to use the AutoDevops feature; I would like to set all of those things up on my own. So, basically, I would like to install the gcloud sdk on a gitlab runner, then set the gcloud account to my service account, authorize with the generated key, and finally perform the "gcloud container clusters get-credentials ..." command to get a valid kubernetes config, so that I can interact with the cluster.
The interesting fact is that I tried to perform the entire procedure on my local machine using docker with the same image, and it works there! It fails only on the gitlab runner. The only difference is that the gitlab runner runs not with the docker executor but with the kubernetes one (on the same k8s cluster I want to interact with).
So the working case is:
$ winpty docker run -it --entrypoint=sh lachlanevenson/k8s-kubectl:latest
# apk add python
# wget https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz
# tar zxvf google-cloud-sdk.tar.gz && ./google-cloud-sdk/install.sh --usage-reporting=false --path-update=true > /dev/null
# PATH="google-cloud-sdk/bin:${PATH}"
# gcloud config set account <my-service-account>
# gcloud auth activate-service-account --key-file=key.json --project=<my_project>
# gcloud container clusters get-credentials cluster1 --zone europe-west2-b --project <my_project>
# kubectl get all
but when I try do do the same with gitlab runner:
.gitlab-ci.yml:
deployment_be:
  image: lachlanevenson/k8s-kubectl:latest
  stage: deploy
  only:
    - master
  tags:
    - kubernetes
  before_script:
    - apk add python
  script:
    # Download and install Google Cloud SDK
    - wget https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz
    - tar zxvf google-cloud-sdk.tar.gz && ./google-cloud-sdk/install.sh --usage-reporting=false --path-update=true
    - PATH="google-cloud-sdk/bin:${PATH}"
    # Authorize with service account and fetch k8s config file
    - gcloud config set account <my_service_account>
    - gcloud auth activate-service-account --key-file=key.json --project=<my_project>
    - gcloud container clusters get-credentials cluster1 --zone europe-west2-b --project <my_project>
    # Interact with kubectl
    - kubectl get all
I get the following error:
$ gcloud config set account <my_service_account>
Updated property [core/account].
$ gcloud auth activate-service-account --key-file=key.json --project=<my_project>
Activated service account credentials for: [<my_service_account>]
$ gcloud container clusters get-credentials cluster1 --zone europe-west2-b --project <my_project>
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Request had insufficient authentication scopes.
ERROR: Job failed: command terminated with exit code 1
I tried to set all possible roles for this service account, including: Compute Administrator, Kubernetes Engine Administrator, Kubernetes Engine Clusters Administrator, Container Administrator, Editor, and Owner.
Why does this service account work fine in an isolated docker image but fail when the same image is launched on the kubernetes cluster?
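A debugging sketch that may help narrow this down (not from the original post): inside the runner job, confirm which account gcloud actually has active after activation, and compare it with the identity the pod inherits from the GKE node's metadata server, whose OAuth scopes are limited by the node pool's settings:
# Which credentials is gcloud using?
gcloud auth list
# Which OAuth scopes does the node's default identity carry?
curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes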