How to SSH/SCP from Cloud Build thru IAP Tunnel? - google-cloud-platform

I need to execute commands on my Compute Engine VM. We need an initial setup for Cloud SQL, and the plan is to use Cloud Build for this (it will only be triggered once); IAP is implemented and the firewall rule is already in place (allow TCP 22 from 35.235.240.0/20).
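For context, a rule of that shape can be created roughly like this (the rule and network names are placeholders, not taken from the question):
# Placeholder rule/network names; allows IAP's TCP-forwarding range to reach port 22.
gcloud compute firewall-rules create allow-iap-ssh \
  --network=my-network \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:22 \
  --source-ranges=35.235.240.0/20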
This is my build step:
# Setup Cloud SQL
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  id: 'Setup Cloud SQL Tables'
  entrypoint: 'bash'
  args:
    - -c
    - |
      echo "Upload File to $_SQL_JUMP_BOX_NAME" &&
      gcloud compute scp --recurse cloud-sql/setup-sql.sh --tunnel-through-iap --zone $_ZONE "$_SQL_JUMP_BOX_NAME:~" &&
      echo "SSH to $_SQL_JUMP_BOX_NAME" &&
      gcloud compute ssh --tunnel-through-iap --zone $_ZONE "$_SQL_JUMP_BOX_NAME" --project "$_TARGET_PROJECT_ID" --command="chmod +x setup-sql.sh && ./setup-sql.sh"
I am receiving this error:
root@compute.3726515935009049919: Permission denied (publickey).
WARNING:
To increase the performance of the tunnel, consider installing NumPy. For instructions,
please see https://cloud.google.com/iap/docs/using-tcp-forwarding#increasing_the_tcp_upload_bandwidth
root@compute.3726515935009049919: Permission denied (publickey).
ERROR: (gcloud.compute.scp) Could not SSH into the instance. It is possible that your SSH key has not propagated to the instance yet. Try running this command again. If you still cannot connect, verify that the firewall and instance are set to accept ssh traffic.
This will also be triggered/executed in multiple environments, hence we use Cloud Build for reusability.
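Since the step relies on user-defined substitutions ($_ZONE, $_SQL_JUMP_BOX_NAME, $_TARGET_PROJECT_ID), one way to reuse it across environments is to pass different values at submit time; a rough sketch with placeholder values:
# Placeholder values; supply the real zone, jump box name and project per environment.
gcloud builds submit --config=cloudbuild.yaml \
  --substitutions=_ZONE=europe-west1-b,_SQL_JUMP_BOX_NAME=sql-jump-box,_TARGET_PROJECT_ID=my-target-project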

Already working!
I stumbled upon this blog -- https://hodo.dev/posts/post-14-cloud-build-iap/
I made changes to my script; you need to specify the user in the SCP/SSH command.
Working Script/Step:
# Setup Cloud SQL
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  id: 'Setup Cloud SQL Tables'
  entrypoint: 'bash'
  args:
    - -c
    - |
      echo "Upload File to $_SQL_JUMP_BOX_NAME" &&
      gcloud compute scp --recurse cloud-sql/setup-sql.sh --tunnel-through-iap --zone $_ZONE cloudbuild@$_SQL_JUMP_BOX_NAME:~ &&
      echo "SSH to $_SQL_JUMP_BOX_NAME" &&
      gcloud compute ssh --tunnel-through-iap --zone $_ZONE cloudbuild@$_SQL_JUMP_BOX_NAME --project "$_TARGET_PROJECT_ID" --command="chmod +x setup-sql.sh && ./setup-sql.sh"
The change needed relates to how the destination VM is referenced:
Before:
gcloud compute ssh --tunnel-through-iap --zone $_ZONE "$_SQL_JUMP_BOX_NAME"
After:
gcloud compute ssh --tunnel-through-iap --zone $_ZONE cloudbuild@$_SQL_JUMP_BOX_NAME
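Besides specifying the user, the service account that runs the build also needs IAM permissions to open the IAP tunnel and to push its temporary SSH key to the VM. A rough sketch of the grants on the target project (the exact roles are my assumption; the linked post is the authoritative reference, and PROJECT_NUMBER below is the number of the project running Cloud Build):
# Grant the Cloud Build service account access to IAP tunnelling and to the instance.
gcloud projects add-iam-policy-binding TARGET_PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/iap.tunnelResourceAccessor"
gcloud projects add-iam-policy-binding TARGET_PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/compute.instanceAdmin.v1"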

Related

Use Identity Aware Proxy to tunnel to a TPU

Can I use Google Cloud's Identity-Aware Proxy to connect to the gRPC endpoint on a TPU worker? By "TPU worker" I mean that I am creating a TPU with no associated compute instance (using gcloud compute tpus create), and I wish to connect to the gRPC endpoint found by running gcloud compute tpus describe my-tpu:
ipAddress: <XXX>
port: <YYY>
I can easily set up an SSH tunnel to connect to this endpoint from my local machine but I would like to use IAP to create that tunnel instead. I have tried the following:
gcloud compute start-iap-tunnel my-tpu 8470
but I get
- The resource 'projects/.../zones/.../instances/my-tpu' was not found
This makes sense because a TPU is a not a compute instance, and the command gcloud compute start-iap-tunnel expects an instance name.
Is there any way to use IAP to tunnel to an arbitrary internal IP address? Or more generally, is there any other way that I can use IAP to create a tunnel to my TPU worker?
Yes, it can be done using the internal IP address of the TPU worker. Here is an example:
gcloud alpha compute start-iap-tunnel \
  10.164.0.2 8470 \
  --local-host-port="localhost:$LOCAL_PORT" \
  --region $REGION \
  --network $SUBNET \
  --project $PROJECT
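Instead of hard-coding the internal IP (10.164.0.2 in the example), it can be read from the same describe output shown in the question; a small sketch, assuming the node-style TPU described above:
# ipAddress comes from the gcloud compute tpus describe output quoted in the question.
TPU_IP=$(gcloud compute tpus describe my-tpu --zone $ZONE --format='value(ipAddress)')
gcloud alpha compute start-iap-tunnel "$TPU_IP" 8470 \
  --local-host-port="localhost:$LOCAL_PORT" \
  --region $REGION \
  --network $SUBNET \
  --project $PROJECT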
Be aware that Private Google Access must be enabled in the TPU subnet, which can be easily done with the following command:
gcloud compute networks subnets update $SUBNET \
  --region=$REGION \
  --enable-private-ip-google-access
Just as a reference, here is an example of how to create a TPU worker with no external IP address:
gcloud alpha compute tpus tpu-vm create \
  --project $PROJECT \
  --zone $ZONE \
  --internal-ips \
  --version tpu-vm-tf-2.6.0 \
  --accelerator-type v2-8 \
  --network $SUBNET \
  $NAME
AUTHENTICATION
To successfully authenticate the source endpoint of the IAP tunnel, you need to add your SSH keys to the project's metadata, following these steps:
Check if you already have SSH keys generated in your endpoint:
ls -1 ~/.ssh/*
#=>
/. . ./id_rsa
/. . ./id_rsa.pub
If you don't have any, you can generate them with the command: ssh-keygen -t rsa -f ~/.ssh/id_rsa -C id_rsa.
Add the SSH keys to your project's metadata:
gcloud compute project-info add-metadata \
--metadata ssh-keys="$(gcloud compute project-info describe \
--format="value(commonInstanceMetadata.items.filter(key:ssh-keys).firstof(value))")
$(whoami):$(cat ~/.ssh/id_rsa.pub)"
#=>
Updated [https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME].
Assign the iap.tunnelResourceAccessor role to the user:
gcloud projects add-iam-policy-binding $GCP_PROJECT_NAME \
  --member=user:$USER_ID \
  --role=roles/iap.tunnelResourceAccessor
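To double-check that the binding is in place, one option (standard gcloud filtering, not part of the original steps) is:
# Lists members that hold the IAP tunnel role on the project.
gcloud projects get-iam-policy $GCP_PROJECT_NAME \
  --flatten="bindings[].members" \
  --filter="bindings.role:roles/iap.tunnelResourceAccessor" \
  --format="value(bindings.members)"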

Persistent disk missing when I SSH into GCP VM instance with Jupyter port forwarding

I have created a VM instance on Google Cloud, and also set up a Notebook instance. In this instance, I have a bunch of notebooks, python modules as well as a lot of data.
I want to run a script on my VM instance by using the terminal. I tried running it in a Jupyter Notebook, but it failed several hours in and crashed the notebook. I decided to try from the command line instead. However, when I used the commands found in the docs to ssh into my instance:
gcloud beta compute ssh --zone "<Zone>" "<Instance Name>" --project "<Project-ID>"
or
gcloud compute ssh --project <Project-ID> --zone <Zone> <Instance Name>
or
gcloud compute ssh --project $PROJECT_ID --zone $ZONE $INSTANCE_NAME -- -L 8080:localhost:8080
I successfully connect to the instance, but the file system is missing. I can't find my notebooks or scripts. The only way I can see those files is when I use the GUI and select 'Open Jupyter Lab' from the AI Platform > Notebooks console.
How do I access the VM through the command line so that I can still see my "persistent disk" that is associated with this VM instance?
I found the answer on the fast.ai getting started page. Namely, you have to specify the username as jupyter in the SSH command:
Solution 1: Default Zone and Project Configured:
gcloud compute ssh jupyter@<instance name>
or if you want to use port forwarding to have access to your notebook:
gcloud compute ssh jupyter@<instance name> -- -L 8080:localhost:8080
Solution 2: No Default Zone or Project:
Note that I left out the zone and project id from both of these commands. They are not necessary if you set a default zone and project during your initial gcloud init stage. If you did not do this, then the commands become:
gcloud compute ssh --project <project ID> --zone <zone> jupyter@<instance name>
or if you want to use port forwarding to run a notebook:
gcloud compute ssh --zone <zone> jupyter@<instance name> -- -L 8080:localhost:8080
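As a quick sanity check (my own addition, assuming the notebook files live under the jupyter user's home directory), you can list that directory without opening an interactive session:
# Lists the jupyter user's home directory over SSH and exits.
gcloud compute ssh --project <project ID> --zone <zone> jupyter@<instance name> --command='ls -la ~'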

Executing gcloud command `clusters get-credentials` in Bitbucket Pipeline is failing with --zone option

I am using the `gcloud container clusters get-credentials` command. When using it in the Cloud Shell it works fine. When using it in the `bitbucket-pipeline.yaml`, however, it fails on the `--zone` flag.
Used in Cloud Shell:
gcloud container clusters get-credentials xetra11-cluster-dev --zone=europe-west3-a --project xetra11-project
Fetching cluster endpoint and auth data.
kubeconfig entry generated for xetra11-cluster-dev.
It executed fine as you can see.
So here is what I set up in the bitbucket-pipeline.yaml:
image: google/cloud-sdk:latest
pipelines:
  branches:
    master:
      - step:
          name: Build and push Docker image
          deployment: dev
          caches:
            - docker
          services:
            - docker
          script:
            # gcloud setup
            - echo $GCLOUD_API_KEYFILE > ~/.gcloud-api-key.json
            - gcloud auth activate-service-account --key-file ~/.gcloud-api-key.json
            - gcloud config set project xetra11-project
            - gcloud container clusters get-credentials xetra11-cluster --zone=europe-west3-a --project xetra11-project
            - gcloud auth configure-docker --quiet
The pipeline is failing on:
- gcloud container clusters get-credentials xetra11-cluster --zone=europe-west3-a --project xetra11-project
gcloud container clusters get-credentials $GCLOUD_CLUSTER --zone=$GCLOUD_ZONE --project $GCLOUD_PROJECT
ERROR: (gcloud.container.clusters.get-credentials) unrecognized arguments: europe-west3-a
To search the help text of gcloud commands, run:
gcloud help -- SEARCH_TERMS
Can somebody tell me why this is happening? I am very sure I set up everything correctly.
EDIT: @Pievis gave me a hint to use the setter for the zone. Unfortunately it also resulted in an error:
+ gcloud config set compute zone $GCLOUD_ZONE
ERROR: (gcloud.config.set) unrecognized arguments: europe-west3-a
Putting the variables in quotes helped solve this error in my case.
I realised Bitbucket had added some spaces at the start of the deployment variable.
Adding the variable again solved the issue.
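In other words, the failing command becomes something like this (a sketch of the quoting fix, keeping the variable names from the pipeline):
gcloud container clusters get-credentials "$GCLOUD_CLUSTER" --zone="$GCLOUD_ZONE" --project "$GCLOUD_PROJECT"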

403: Request had insufficient authentication scopes - gcloud container clusters get-credentials

I need to connect to a GKE Kubernetes cluster from a GitLab runner, but I don't want to use the Auto DevOps feature; I would like to set up all of those things on my own. So, basically, I would like to install the gcloud SDK on the GitLab runner, then set the gcloud account to my service account, authorize with the generated key and finally run "gcloud container clusters get-credentials ..." to get a valid Kubernetes config, so I can interact with the Kubernetes cluster.
An interesting fact: I tried to perform the entire procedure on my local machine using Docker with the same image, and it works there! It only fails on the GitLab runner. The only difference is that the GitLab runner is running not with the Docker executor but with the Kubernetes one (on the same k8s cluster I want to interact with).
So the working case is:
$ winpty docker run -it --entrypoint=sh lachlanevenson/k8s-kubectl:latest
# apk add python
# wget https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz
# tar zxvf google-cloud-sdk.tar.gz && ./google-cloud-sdk/install.sh --usage-reporting=false --path-update=true > /dev/null
# PATH="google-cloud-sdk/bin:${PATH}"
# gcloud config set account <my-service-account>
# gcloud auth activate-service-account --key-file=key.json --project=<my_project>
# gcloud container clusters get-credentials cluster1 --zone europe-west2-b --project <my_project>
# kubectl get all
but when I try to do the same with the GitLab runner:
.gitlab-ci.yml:
deployment_be:
  image: lachlanevenson/k8s-kubectl:latest
  stage: deploy
  only:
    - master
  tags:
    - kubernetes
  before_script:
    - apk add python
  script:
    # Download and install Google Cloud SDK
    - wget https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz
    - tar zxvf google-cloud-sdk.tar.gz && ./google-cloud-sdk/install.sh --usage-reporting=false --path-update=true
    - PATH="google-cloud-sdk/bin:${PATH}"
    # Authorize with service account and fetch k8s config file
    - gcloud config set account <my_service_account>
    - gcloud auth activate-service-account --key-file=key.json --project=<my_project>
    - gcloud container clusters get-credentials cluster1 --zone europe-west2-b --project <my_project>
    # Interact with kubectl
    - kubectl get all
I get the following error:
$ gcloud config set account <my_service_account>
Updated property [core/account].
$ gcloud auth activate-service-account --key-file=key.json --project=<my_project>
Activated service account credentials for: [<my_service_account>]
$ gcloud container clusters get-credentials cluster1 --zone europe-west2-b --project <my_project>
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Request had insufficient authentication scopes.
ERROR: Job failed: command terminated with exit code 1
I tried to set all possible roles for this service account, including: Compute Administrator, Kubernetes Engine Administrator, Kubernetes Engine Clusters Administrator, Container Administrator, Editor, and Owner.
Why does this service account work fine in an isolated Docker image, but fail when the same image is launched on the Kubernetes cluster?
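One way to narrow this down (my suggestion, not part of the original question): code=403 "insufficient authentication scopes" refers to OAuth scopes rather than IAM roles, so it is worth checking which identity and scopes the runner pod actually sees:
# Run inside the runner pod; shows the active gcloud account and the scopes exposed by the metadata server.
gcloud auth list
curl -s -H 'Metadata-Flavor: Google' \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes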

how to use Google Container Registry

I tried to use Google Container Registry, but it did not work for me.
I wrote the following containers.yaml.
$ cat containers.yaml
version: v1
kind: Pod
spec:
  containers:
    - name: amazonssh
      image: asia.gcr.io/<project-id>/amazonssh
      imagePullPolicy: Always
  restartPolicy: Always
  dnsPolicy: Default
I run the instance with the following command:
$ gcloud compute instances create containervm-amazonssh --image container-vm --network product-network --metadata-from-file google-container-manifest=containers.yaml --zone asia-east1-a --machine-type f1-micro
I set the following ACL permission:
# gsutil acl ch -r -u <project-number>@developer.gserviceaccount.com:R gs://asia.artifacts.<project-id>.appspot.com
But I get "Access denied" when I docker pull the image from Google Container Registry.
# docker pull asia.gcr.io/<project-id>.a/amazonssh
Pulling repository asia.gcr.io/<project-id>.a/amazonssh
FATA[0000] Error: Status 403 trying to pull repository <project-id>/amazonssh: "Access denied."
Can you verify from your instance that you can read data from your Google Cloud Storage bucket? This can be verified by:
$ curl -H 'Metadata-Flavor: Google' $SVC_ACCT/scopes
...
https://www.googleapis.com/auth/devstorage.full_control
https://www.googleapis.com/auth/devstorage.read_write
https://www.googleapis.com/auth/devstorage.read_only
...
If so, then try the following.
On Google Compute Engine you can log in without gcloud:
$ METADATA=http://metadata.google.internal./computeMetadata/v1
$ SVC_ACCT=$METADATA/instance/service-accounts/default
$ ACCESS_TOKEN=$(curl -H 'Metadata-Flavor: Google' $SVC_ACCT/token \
| cut -d'"' -f 4)
$ docker login -e not@val.id -u _token -p $ACCESS_TOKEN https://gcr.io
Then try your docker pull command again.
You have an extra .a after project-id here, not sure if you ran the command that way?
docker pull asia.gcr.io/<project-id>.a/amazonssh
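That is, presumably the pull should match the image name used in containers.yaml:
docker pull asia.gcr.io/<project-id>/amazonssh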
The container-vm has a cron job running gcloud docker -a as root, so you should be able to docker pull as root.
The kubelet, which launches the container-vm Docker containers also understands how to natively authenticate with GCR, so it should just work.
Feel free to reach out to us at gcr-contact@google.com. It would be useful if you could include your project-id, and possibly the /var/log/kubelet.log from your container-vm.