Working with a multicluster Istio mesh:
Is there a command or series of commands to get the names of all the services on the other clusters?
There is no single command that returns the names of all the services on the other clusters. kubectl only returns results for the cluster its current context points at, so to collect data from the other clusters you have to connect to each of them and run kubectl separately.
You can of course write a script for this if you want to automate the process, in which case kubeconfig contexts are very helpful. See Configure Access to Multiple Clusters.
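For example, a minimal sketch of such a script, assuming every remote cluster is already registered as a context in your local kubeconfig (the loop and output format are just illustrative):
#!/bin/bash
# List the services of every cluster registered in the local kubeconfig.
# Assumes each cluster already has a context entry (e.g. from merged kubeconfig files).
for ctx in $(kubectl config get-contexts -o name); do
  echo "=== services in context: $ctx ==="
  kubectl --context "$ctx" get services --all-namespaces
done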
Related
I'd like to run a kubectl command from within a cronjob pod to change the min replicas on an HPA for a deployment at the same time every week, i.e. time-based scaling. I've been playing around with the official google/cloud-sdk image, which has gcloud and kubectl installed.
I know I need to authenticate to the GKE cluster before I can run commands to interact via kubectl, and I really wanted to steer away from mounting a service account key (via a secret) to the pod, as we already have workload identity enabled.
Normal gcloud commands work fine using this method, e.g. gcloud compute instances list, but when I run gcloud container clusters get-credentials.... it fails, saying I need to run gcloud auth login - which of course can't be done here.
I've read this post; I don't really want to use cURL if I can avoid it, and I also know that gcloud doesn't use GOOGLE_APPLICATION_CREDENTIALS (this post).
Does anyone know of a way I can use workload identity and get this working?
I found a way to get this authenticated: I had to run the following command before running kubectl commands from within the cronjob pod:
gcloud --account <account-name>
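A rough sketch of the kind of cronjob entrypoint this fits into; the account, project, cluster, zone, namespace, and HPA names are all placeholders, and the account step below is spelled as gcloud config set account, the documented way to select the active account:
#!/bin/bash
# Hypothetical cronjob entrypoint: bump an HPA's minReplicas on a schedule.
# Every name below (account, project, cluster, zone, namespace, HPA) is a placeholder.
set -euo pipefail
# Select the service account to act as (documented form of the step above).
gcloud config set account my-gke-sa@my-project.iam.gserviceaccount.com
# Fetch cluster credentials so kubectl can reach the API server.
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project
# Time-based scaling: raise the HPA's minReplicas.
kubectl -n my-namespace patch hpa my-hpa -p '{"spec":{"minReplicas":3}}'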
I want to execute some commands on ECS container instances before the tasks start, like installing something on the EC2 instances. I am using the ECS CLI; is there any way of achieving this?
Not sure exactly where and when you want to run the commands, but you can use ecs-cli up, which has this option:
--extra-user-data string - Specifies additional user data for your container instance. Files can be shell scripts or cloud-init directives. They are packaged into a MIME multipart archive along with user data provided by the Amazon ECS CLI that directs instances to join your cluster. For more information, see Specifying User Data.
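For example, a sketch of how that might look. Assume user-data.sh is a placeholder bootstrap script such as:
#!/bin/bash
# Runs on each container instance at launch (placeholder install step)
yum install -y some-package
Then pass it when bringing the cluster up; the cluster name, key pair, size, and instance type here are placeholders:
ecs-cli up --cluster my-cluster --keypair my-keypair --capability-iam \
  --size 2 --instance-type t2.medium --extra-user-data user-data.sh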
We host Docker containers on AWS infrastructure using AWS EKS. My reading so far shows that the kubectl command-line tool gives me commands to query and manipulate the EKS cluster. The aws eks command-line tool also gives me commands to do this. To my inexperienced eye, they look like they offer the same facilities.
Are there certain situations when it's better to use one or the other?
The aws eks command is for interacting with the AWS EKS proprietary APIs to perform administrative tasks such as creating a cluster, updating a kubeconfig with the correct credentials, etc.
kubectl is an open source CLI tool which lets you interact with the Kubernetes API server to perform tasks such as creating pods, deployments, etc.
You cannot use the aws eks command to interact with the Kubernetes API server and perform Kubernetes-specific operations, because it does not understand the Kubernetes APIs.
Similarly, you cannot use kubectl to interact with the AWS EKS proprietary APIs, because kubectl does not understand them.
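A quick illustration of the split; the cluster name and region are placeholders:
# Administrative tasks against the EKS service itself: aws eks
aws eks list-clusters --region us-east-1
aws eks update-kubeconfig --name my-cluster --region us-east-1
# Day-to-day work against the Kubernetes API server: kubectl
kubectl get nodes
kubectl create deployment nginx --image=nginx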
Is there an easy way to get the gcloud container clusters create ... command details for an existing cluster? (i.e. a command that can be used to create the exact same cluster)
Someone from my team created a cluster on GKE through the UI with specific region and machine type details, and a few other customizations I can't remember. I'll be deleting the cluster, as it was for a test. We may need to recreate it, and for this, instead of clicking through the UI, I was hoping to document the gcloud command that can be used to create the same cluster.
I couldn't find anything in the GCP UI to help with this. We could construct the command that might build the same cluster from the docs (https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster), but I wanted to check if there was a better way.
You can configure your cluster in the GUI and use the button at the bottom to generate the equivalent REST request or gcloud command line. You can find this on several pages in the GUI.
I was recently trying to get the gcloud command that can be used to recreate an existing cluster.
I found a way to get the gcloud command with its parameters: go to GKE -> Create cluster -> Clone an existing cluster -> choose your cluster, and at the bottom you will have the REST/command-line option.
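For reference, the generated command comes out looking roughly like this; the name, region, machine type, and node count below are illustrative, not the values of any particular cluster:
# Illustrative only - the real generated command will include your cluster's actual settings
gcloud container clusters create my-test-cluster \
  --region us-central1 \
  --machine-type e2-standard-4 \
  --num-nodes 3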
I am looking to create a number of Deis clusters running in parallel on AWS and haven't been able to find any good documentation on how to do so. From what I understand I'd have to do the following:
When provisioning the cluster:
Create a new discovery URL
Give the stack a name other than the standard "deis" when using the ./provision-aws-cluster.sh script
Create different Deis profiles in $HOME/.deis/client.json that map to each cluster
And when utilizing the deisctl and deis command line interfaces, I need to specify the DEISCTL_TUNNEL and the DEIS_PROFILE each time, respectively.
Am I missing anything? Will this impact my current Deis cluster if I install using the changes listed above?
That is correct; I don't believe you are missing anything. You should save the cloud-config for each cluster (in contrib/coreos); it will have the discovery URL in it and possibly other customizations, depending on how your clusters will be configured. If the clusters are going to be different on the AWS side, make sure you save the cloudformation.json file for each as well.
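For what it's worth, a sketch of how the per-cluster switches from the question might look in day-to-day use; the controller IPs and profile names are placeholders:
# Cluster A
DEISCTL_TUNNEL=10.0.1.10 deisctl list
DEIS_PROFILE=cluster-a deis apps:list
# Cluster B
DEISCTL_TUNNEL=10.0.2.10 deisctl list
DEIS_PROFILE=cluster-b deis apps:list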