403: Request had insufficient authentication scopes - gcloud container clusters get-credentials

I need to connect to a GKE Kubernetes cluster from a GitLab runner, but I don't want to use the Auto DevOps feature; I would like to set all of this up on my own. So, basically, I would like to install the Google Cloud SDK on the GitLab runner, set the gcloud account to my service account, authorize with the generated key, and finally run "gcloud container clusters get-credentials ..." to obtain a valid Kubernetes config, so that I can interact with the Kubernetes cluster.
Interestingly, I tried performing the entire procedure on my local machine using Docker with the same image, and it works there! It fails only on the GitLab runner. The only difference is that the GitLab runner uses the Kubernetes executor rather than the Docker executor (on the same cluster I want to interact with).
So the working case is:
$ winpty docker run -it --entrypoint=sh lachlanevenson/k8s-kubectl:latest
# apk add python
# wget https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz
# tar zxvf google-cloud-sdk.tar.gz && ./google-cloud-sdk/install.sh --usage-reporting=false --path-update=true > /dev/null
# PATH="google-cloud-sdk/bin:${PATH}"
# gcloud config set account <my-service-account>
# gcloud auth activate-service-account --key-file=key.json --project=<my_project>
# gcloud container clusters get-credentials cluster1 --zone europe-west2-b --project <my_project>
# kubectl get all
but when I try to do the same on the GitLab runner, with this .gitlab-ci.yml:
deployment_be:
  image: lachlanevenson/k8s-kubectl:latest
  stage: deploy
  only:
    - master
  tags:
    - kubernetes
  before_script:
    - apk add python
  script:
    # Download and install Google Cloud SDK
    - wget https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz
    - tar zxvf google-cloud-sdk.tar.gz && ./google-cloud-sdk/install.sh --usage-reporting=false --path-update=true
    - PATH="google-cloud-sdk/bin:${PATH}"
    # Authorize with service account and fetch k8s config file
    - gcloud config set account <my_service_account>
    - gcloud auth activate-service-account --key-file=key.json --project=<my_project>
    - gcloud container clusters get-credentials cluster1 --zone europe-west2-b --project <my_project>
    # Interact with kubectl
    - kubectl get all
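(Note that the script assumes key.json is already present in the build directory. One common pattern, sketched here with a hypothetical GitLab CI variable named GCLOUD_SERVICE_KEY that is not part of the original job, is to write the key out in before_script:

  before_script:
    - apk add python
    # Write the service account key from a CI variable (hypothetical variable name)
    - echo "$GCLOUD_SERVICE_KEY" > key.json
)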
I get the following error:
$ gcloud config set account <my_service_account>
Updated property [core/account].
$ gcloud auth activate-service-account --key-file=key.json --project=<my_project>
Activated service account credentials for: [<my_service_account>]
$ gcloud container clusters get-credentials cluster1 --zone europe-west2-b --project <my_project>
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Request had insufficient authentication scopes.
ERROR: Job failed: command terminated with exit code 1
I tried to set all possible roles for this service account, including:
- Compute Administrator
- Kubernetes Engine Administrator
- Kubernetes Engine Clusters Administrator
- Container Administrator
- Editor
- Owner
Why does this service account work fine in an isolated Docker container, but fail with the same image launched on the Kubernetes cluster?
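One way to narrow this down is to check which credentials and scopes the job is actually using (a hedged diagnostic sketch, not a confirmed fix; gcloud auth list, gcloud auth print-access-token, and Google's public tokeninfo endpoint are standard tools, but whether the runner's token is the culprit here is an assumption):
# Show every account gcloud knows about and which one is active
$ gcloud auth list
# Inspect the scopes carried by the current access token; on a GKE-hosted
# runner this token can come from the node's metadata server rather than
# the activated key file (assumption to verify)
$ TOKEN=$(gcloud auth print-access-token)
$ wget -qO- "https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=${TOKEN}"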

Related

Google Cloud Compute - Cannot get access to account security instance

I'm trying to run a Docker container via Compute Engine (not Cloud Run, as I need a long-term instance).
The container works fine on the local machine, where it can access GCloud account resources.
I can see that it's running on Container-Optimized OS on Compute Engine.
I tried running the following commands in order, but I get the same issue: CONSUMER_INVALID. The IAM account has all the required permissions; I triple-checked this.
# Fix gcloud issue
alias gcloud='(docker images google/cloud-sdk || docker pull google/cloud-sdk) > /dev/null;docker run -t -i --net=host -v $HOME/.config:/.config -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker google/cloud-sdk gcloud'
# Set up gcloud
gcloud
# Enable Docker access via gcloud
gcloud auth configure-docker
I also needed to run:
docker-credential-gcr configure-docker
export GOOGLE_CLOUD_PROJECT=project-352014
Not sure what to do now; it seems Compute Engine isn't communicating properly with the internal account resources?
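CONSUMER_INVALID generally points at the project consuming the API rather than at IAM bindings, so one thing worth ruling out (a hedged sketch; these are standard gcloud commands, but whether a project mismatch is the cause here is an assumption) is that gcloud inside the container is pointed at the right project:
# Check which project the containerized gcloud is actually using
$ gcloud config get-value project
# Confirm the project ID resolves and is active
$ gcloud projects describe project-352014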

Why do I get "get-credentials requires edit permission" error in gcloud on my terminal, when it succeeds in Cloud Shell?

From my laptop, I am able to execute most gcloud commands, for example creating a cluster. I have the Project Owner role.
But when I try to get credentials for a K8s cluster, I get a permission error, while in Cloud Shell the same command succeeds.
The logged-in account is the same in both.
% gcloud container clusters get-credentials my-first-cluster-1 --zone us-central1-c --project my-project
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) get-credentials requires edit permission on my-project
$ gcloud config list account --format "value(core.account)"
<MY EMAIL>
But in Cloud Shell, this succeeds!
$ gcloud container clusters get-credentials my-first-cluster-1 --zone us-central1-c --project my-project
Fetching cluster endpoint and auth data.
kubeconfig entry generated for my-first-cluster-1.
$ gcloud config list account --format "value(core.account)"
<MY EMAIL>
The error message is indeed incorrect and not very helpful in this case. This issue occurs when the gcloud config value container/use_client_certificate is set to True but no client certificate has been configured (note that client certificates are a legacy authentication method and are disabled by default for clusters created with GKE 1.12 and higher). Setting it to False via the following gcloud command solves the issue:
gcloud config set container/use_client_certificate False
This config value is set to False by default in Cloud Shell, which explains the different behavior you experienced.
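To confirm this is the difference, you can print the current value in both environments and compare (gcloud config get-value is the standard way to read a single property; the comparison itself is just a suggestion):
$ gcloud config get-value container/use_client_certificate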

Executing gcloud command `clusters get-credentials` in Bitbucket Pipeline is failing with --zone option

I am using gcloud container clusters get-credentials. When using it in the Cloud Shell it works fine. When using it in the bitbucket-pipeline.yaml, however, it fails on the --zone flag.
Used in Cloud Shell:
gcloud container clusters get-credentials xetra11-cluster-dev --zone=europe-west3-a --project xetra11-project
Fetching cluster endpoint and auth data.
kubeconfig entry generated for xetra11-cluster-dev.
It executed fine as you can see.
So here is what I set up in the bitbucket-pipeline.yaml:
image: google/cloud-sdk:latest

pipelines:
  branches:
    master:
      - step:
          name: Build and push Docker image
          deployment: dev
          caches:
            - docker
          services:
            - docker
          script:
            # gcloud setup
            - echo $GCLOUD_API_KEYFILE > ~/.gcloud-api-key.json
            - gcloud auth activate-service-account --key-file ~/.gcloud-api-key.json
            - gcloud config set project xetra11-project
            - gcloud container clusters get-credentials xetra11-cluster --zone=europe-west3-a --project xetra11-project
            - gcloud auth configure-docker --quiet
The pipeline is failing on:
- gcloud container clusters get-credentials xetra11-cluster --zone=europe-west3-a --project xetra11-project
gcloud container clusters get-credentials $GCLOUD_CLUSTER --zone=$GCLOUD_ZONE --project $GCLOUD_PROJECT
ERROR: (gcloud.container.clusters.get-credentials) unrecognized arguments: europe-west3-a
To search the help text of gcloud commands, run:
gcloud help -- SEARCH_TERMS
Can somebody tell me why this is happening? I am quite sure I set everything up correctly.
EDIT: @Pievis gave me a hint to use the setter for the zone. Unfortunately it also resulted in an error:
+ gcloud config set compute zone $GCLOUD_ZONE
ERROR: (gcloud.config.set) unrecognized arguments: europe-west3-a
Putting the variables in quotes helped solve this error in my case.
I realised Bitbucket had added some spaces at the start of the deployment variable; adding the variable again solved the issue.
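For example, the same steps with every variable quoted (a sketch using the variable names from the log above):
# compute/zone is the actual property name for the zone setter
- gcloud config set compute/zone "$GCLOUD_ZONE"
- gcloud container clusters get-credentials "$GCLOUD_CLUSTER" --zone="$GCLOUD_ZONE" --project "$GCLOUD_PROJECT"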

Location of /home/airflow

I specified 3 nodes when creating a Cloud Composer environment. I tried to connect to the worker nodes via SSH, but I am not able to find an airflow directory in /home. So where exactly is it located?
Cloud Composer runs Airflow on GKE, so you won't find data directly on any of the host GCE instances. Instead, Airflow processes are run within Kubernetes-managed containers, which either mount or sync data to the /home/airflow directory. To find the directory you will need to look within a running container.
Since each environment stores its Airflow data in a GCS bucket, you can alternatively inspect the files using Cloud Console or gsutil. If you really want to view /home/airflow with a shell, you can use kubectl exec, which allows you to run commands in or open a shell on any pod/container in the Kubernetes cluster. For example:
# Obtain the name of the Composer environment's GKE cluster
# (--location is required; $LOCATION is the environment's region)
$ gcloud composer environments describe $ENV_NAME --location $LOCATION
# Fetch Kubernetes credentials for that cluster
$ gcloud container clusters get-credentials $GKE_CLUSTER_NAME
Once you have Kubernetes credentials, you can list running pods and SSH into them:
# List running pods
$ kubectl get pods
# SSH into a pod
$ kubectl exec -it $POD_NAME -- bash
airflow-worker-a93j$ ls /home/airflow
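If you only need the DAG files rather than a live container, the GCS route mentioned above can be scripted as well (a sketch; config.dagGcsPrefix is the field Composer exposes for the environment's bucket, and $LOCATION is assumed to be the environment's region):
# Find the environment's DAG bucket
$ gcloud composer environments describe $ENV_NAME --location $LOCATION --format="value(config.dagGcsPrefix)"
gs://<bucket>/dags
# List the files in it
$ gsutil ls gs://<bucket>/dags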

How do I entitle serviceAccounts via gcloud command-line for Kubernetes API access?

I'm trying to automate the creation of service accounts for use with GKE via the gcloud command-line tool. I've figured out a flow that appears to mirror the process used by the Google Cloud Console, but my users don't seem to receive the appropriate access.
Here are the commands I'm executing, in order:
# Environment:
# - uname=<username>
# - email=<user's email address>
# - GCLOUD_PROJECT_ID=<project identifier>
# - serviceAccount="${uname}@${GCLOUD_PROJECT_ID}.iam.gserviceaccount.com"
$ gcloud iam service-accounts \
create "${uname}" --display-name "email:${email}" --format json
$ gcloud projects \
add-iam-policy-binding "${GCLOUD_PROJECT_ID}" \
--member "serviceAccount:${serviceAccount}" \
--role=roles/container.developer --format=json
$ gcloud iam service-accounts keys \
create "${GCLOUD_PROJECT_ID}-${uname}.json" \
--iam-account="${serviceAccount}"
When this executes, it creates a new service account and generates a key file locally. I then try to use this key to get credentials for my Kubernetes cluster.
$ gcloud config configurations create devcluster --activate
$ gcloud config set project devnet-166017
$ gcloud config set compute/zone us-central1-b
$ gcloud auth activate-service-account \
--key-file="${GCLOUD_PROJECT_ID}-${uname}.json"
$ gcloud container clusters get-credentials devcluster
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: \
code=403, message=Required "container.clusters.get" permission for \
"projects/${GCLOUD_PROJECT_ID}/zones/us-central1-b/clusters/devcluster".
It appears that for some reason my service account doesn't have one of the permissions it needs to get credentials, but based on what I've read and what I've observed in the Console, I believe this permission should be part of the roles/container.developer role.
Thanks!
I assume that by service account you mean a Google Cloud service account. Here are the IAM roles related to GKE: https://cloud.google.com/container-engine/docs/iam-integration (search for "container.").
First create a service account:
gcloud iam service-accounts create --display-name "GKE cluster access" gke-test
Then create a key:
gcloud iam service-accounts keys create key.json --iam-account=gke-test@[PROJECT_ID].iam.gserviceaccount.com
Now you need to assign some roles to this service account, your options are:
- roles/container.admin: Full management of Container Clusters and their Kubernetes API objects.
- roles/container.clusterAdmin: Management of Container Clusters.
- roles/container.developer: Full access to Kubernetes API objects inside Container Clusters.
- roles/container.viewer: Read-only access to Container Engine resources.
Again, see https://cloud.google.com/container-engine/docs/iam-integration for details.
I assign roles/container.viewer (a read-only role, the minimum you can assign for get-credentials to work) to this service account:
gcloud projects add-iam-policy-binding [PROJECT_ID] --role=roles/container.viewer --member=serviceAccount:gke-test@[PROJECT_ID].iam.gserviceaccount.com
Log out of your current account in gcloud:
gcloud auth revoke
Login to gcloud with the service account key:
gcloud auth activate-service-account --key-file=key.json
Try get-credentials:
$ gcloud container clusters get-credentials test --zone us-west1-a
Fetching cluster endpoint and auth data.
kubeconfig entry generated for test.
It works. I tried it with roles/container.developer, which also works.
You can try other roles and see what works and what doesn't, although, as you noted, the documentation does not make it clear which roles include the container.clusters.getCredentials permission.
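One way to check exactly which permissions a given role carries (gcloud iam roles describe is a standard command; inspecting its includedPermissions output is the suggestion here):
$ gcloud iam roles describe roles/container.viewer
# The output's includedPermissions list shows, among others,
# container.clusters.get, the permission named in the error in the question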