I'd like to run a kubectl command from within a cronjob pod to change the min replicas on an HPA for a deployment at the same time every week, i.e. time-based scaling. I've been playing around with the official google/cloud-sdk image, which has gcloud and kubectl installed.
I know I need to authenticate to the GKE cluster before I can run commands to interact via kubectl, and I really wanted to steer away from mounting a service account key (via a secret) to the pod, as we already have workload identity enabled.
Normal gcloud commands work fine using this method, e.g. gcloud compute instances list, but when I run gcloud container clusters get-credentials.... it fails, saying I need to run gcloud auth login - which of course can't be done inside a pod.
I've read this post; I don't really want to use cURL if I can avoid it, and I also know that gcloud doesn't use GOOGLE_APPLICATION_CREDENTIALS (this post).
Does anyone know of a way I can use workload identity and get this working?
I found a way to get this authenticated: I had to use the following command before running kubectl commands from within the cronjob pod:
gcloud --account <account-name>
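For reference, once authentication works, the weekly scaling step itself only needs a couple of commands in the CronJob container. A minimal sketch, assuming a hypothetical cluster my-cluster in europe-west1 and an HPA named backend-hpa in the default namespace:
gcloud container clusters get-credentials my-cluster --region europe-west1
kubectl patch hpa backend-hpa -n default -p '{"spec":{"minReplicas":5}}'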
Related
Working with a multicluster istio mesh:
is there a command or series of commands to get all the names of services on the other clusters
It is impossible to get all the names of services on the other clusters with a single command. kubectl only returns results from the cluster it is pointed at; if you want to collect data from the other clusters, you must connect to each cluster and execute kubectl commands against it separately.
You can of course create a script for this if you want to automate the process. In this case, it will be very helpful to use contexts. Have a look at how to configure access to multiple clusters.
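If it helps, here is a minimal sketch of such a script, assuming all clusters are already registered as contexts in your kubeconfig:
for ctx in $(kubectl config get-contexts -o name); do
  echo "== $ctx =="
  kubectl --context "$ctx" get services --all-namespaces
done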
I need a Compute Engine instance that imports the exact configuration (IP, services, files, etc.) of the original machine, without impacting the frontend if it is, for example, a web server. While that clone is running, I would shut down the original machine to increase its RAM or vCPUs, then start it again and delete the cloned instance.
The problem is that I want to automate this process, which is why I need a gcloud command. So is there a way to clone an entire GCP instance using gcloud or another tool?
This is not possible with gcloud. It is possible in the Cloud Console, but as you can see in this documentation:
Restrictions
You can only use this feature in the Cloud Console; this feature is not supported in the gcloud tool or the API.
What you could do is create similar (though not completely identical) instances from a custom image. Once you have an image, all you have to do is run the following command:
gcloud compute instances create INSTANCE_NAME --image=IMAGE
More details on that command can be found here
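To spell that out, here is a rough sketch under the assumption that the original machine's boot disk is called original-disk in europe-west1-b (names and machine type are hypothetical, and you may need to stop the source instance before imaging its disk):
gcloud compute images create clone-image --source-disk=original-disk --source-disk-zone=europe-west1-b
gcloud compute instances create cloned-instance --image=clone-image --zone=europe-west1-b --machine-type=n1-standard-2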
Is there an easy way to get the gcloud container clusters create ... command details for an existing cluster (i.e. the command that could be used to create the exact same cluster)?
Someone from my team created a cluster on GKE through the UI with specific region and machine type details, and a few other customizations I can't remember. I'll be deleting the cluster, as it was for a test. We may need to recreate it and for this, instead of running through the UI, I was hoping to document the gcloud command that can be used to create the same cluster.
I couldn't find anything in the GCP UI to help with this. We could construct the command that might build the same cluster from the docs (https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster), but I wanted to check if there was a better way.
You can configure your cluster in the GUI and use the button at the bottom to generate the equivalent REST request or gcloud command line. You can find this on several pages in the GUI.
I was recently trying to get the gcloud command that can be used to recreate an existing cluster.
I found a way to get the gcloud command with its parameters: go to GKE --> Create Cluster --> Clone an existing cluster --> choose your cluster, and at the bottom you will have the REST/command-line option.
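If you only need to document the configuration before deleting the cluster, it may also be enough to dump it with gcloud and keep the output as a reference (CLUSTER_NAME and ZONE are placeholders for your own values):
gcloud container clusters describe CLUSTER_NAME --zone ZONE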
This happens while trying to create a VPC-native GKE cluster. Per the documentation here the command to do this is
gcloud container clusters create [CLUSTER_NAME] --enable-ip-alias
However, this command gives the error below.
ERROR: (gcloud.container.clusters.create) Only alpha clusters (--enable_kubernetes_alpha) can use --enable-ip-alias
The command does work when the option --enable_kubernetes_alpha is added, but it then prints another message:
This will create a cluster with all Kubernetes Alpha features enabled.
- This cluster will not be covered by the Container Engine SLA and
should not be used for production workloads.
- You will not be able to upgrade the master or nodes.
- The cluster will be deleted after 30 days.
Edit: The test was done in zone asia-south1-c
My questions are:
Is VPC-Native cluster production ready?
If yes, what is the correct way to create a production ready cluster?
If VPC-Native cluster is not production ready, what is the way to connect privately from a GKE cluster to another GCP service (like Cloud SQL)?
Your command seems correct. It looks like something is going wrong during the creation of the cluster in your project. Are you using any flags other than the ones in the command you posted?
When I set my Google Cloud Shell to region europe-west1, the cluster deploys error-free and uses 1.11.6-gke.2 (the default).
You could try to create the cluster manually using the GUI instead of the gcloud command. While creating the cluster, check the “Enable VPC-native (using alias IP)” feature. Try using the newest non-alpha version of GKE if one is showing up for you.
The public documentation you posted on GKE IP aliasing, together with the GKE projects.locations.clusters API, shows this feature to be GA. All signs point to it being production ready. For whatever it’s worth, the feature was announced last May on the Google Cloud blog.
What you can try is updating your version of the Google Cloud SDK. This will bring everything up to the latest release and remove alpha messages for features that are now GA:
$ gcloud components update
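After updating, re-running the original command should no longer demand the alpha flag; a sketch using the zone from the question and a hypothetical cluster name:
gcloud container clusters create my-vpc-native-cluster --zone asia-south1-c --enable-ip-alias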
I was happily deploying to Kubernetes Engine for a while, but while working on an integrated cloud container builder pipeline, I started getting into trouble.
I don't know what changed. I can not deploy to kubernetes anymore, even in ways I did before without cloud builder.
The pod rollout process gives an error indicating that it is unable to pull from the registry, which seems weird because the images exist (I can pull them using the CLI) and I granted all possibly related permissions to my user and the cloud builder service account.
I get the error ImagePullBackOff and see this in the pod events:
Failed to pull image
"gcr.io/my-project/backend:f4711979-eaab-4de1-afd8-d2e37eaeb988":
rpc error: code = Unknown desc = unauthorized: authentication required
What's going on? Who needs authorization, and for what?
In my case, my cluster didn't have the Storage read permission, which is necessary for GKE to pull an image from GCR.
My cluster didn't have the proper permissions because I created it through Terraform and didn't include the node_config.oauth_scopes block. When creating a cluster through the console, the Storage read scope is added by default.
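For comparison, if you create node pools with gcloud rather than Terraform, the equivalent is to pass the scope explicitly; a sketch with hypothetical names:
gcloud container node-pools create default-pool --cluster=my-cluster --zone=us-central1-a --scopes=https://www.googleapis.com/auth/devstorage.read_only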
The credentials in my project somehow got messed up. I solved the problem by re-initializing a few APIs including Kubernetes Engine, Deployment Manager and Container Builder.
The first time I tried this I didn't succeed, because before you can disable an API you first have to disable all the APIs that depend on it. If you do this via the Google Cloud web UI, you'll likely see a list of dependent services, not all of which can be disabled from the UI.
I learned that using the gcloud CLI you can list all of your project's APIs and disable everything properly.
Things worked after that.
The reason I knew things were messed up is that I had a copy of the same setup in a production environment, and these problems did not exist there. The development environment had gone through a lot of iterations and messing around with credentials, so somewhere along the way things got corrupted.
These are some examples of useful commands:
gcloud projects get-iam-policy $PROJECT_ID
gcloud services disable container.googleapis.com --verbosity=debug
gcloud services enable container.googleapis.com
More info here, including how to restore service account credentials.
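As a starting point, the step of listing all the APIs enabled on the project (mentioned above) can be done with:
gcloud services list --enabled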