How do I call kubectl from CloudFormation? - kubectl

I would like to execute kubectl commands from a CloudFormation template. Any idea how I can achieve this?

Related

Is the update-kubeconfig command a client-only command or does it affect the cluster

I get the following warning/message when I run some k8s-related commands:
Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update
and then I know I should run the command like so:
aws eks update-kubeconfig --name cluster_name --dry-run
I think the change will be client-side only and will not cause any change on the server side, i.e. the actual cluster. I just wanted some verification of this, one way or the other. Many thanks
Yes, update-kubeconfig does not make any changes to the cluster. It will only update your local .kube/config file with the cluster info. Note that with the --dry-run flag, no change will be made at all - the resulting configuration will just be printed to stdout.
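If you want to see exactly what will change, a minimal check (a sketch, assuming a bash shell, an existing ~/.kube/config, and a cluster named cluster_name in your account) is to back up the file, run the command for real, and diff the two versions:
cp ~/.kube/config /tmp/kubeconfig.bak            # back up the current client-side config
aws eks update-kubeconfig --name cluster_name    # rewrites only the local kubeconfig
diff /tmp/kubeconfig.bak ~/.kube/config          # every difference shown here is a client-side change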

Kubectl show expanded command when using aliases or shorthand

kubectl has many aliases like svc, po, deploy, etc.
Is there a way to show the expanded command when a shorthand is used?
For example, kubectl get po
to
kubectl get pods
On a similar question, api-resources is used: What's kubernetes abbreviation for deployments?
But it only gives the top-level short names;
for example, kubectl get svc expands to kubectl get services,
but kubectl create svc expands to kubectl create service.
Kindly guide,
Thanks
kubectl explain may be of interest e.g.:
kubectl explain po
KIND: Pod
VERSION: v1
DESCRIPTION:
Pod is a collection of containers that can run on a host. This resource is
created by clients and scheduled onto hosts.
There are plugins for kubectl too.
I've not tried it but kubectl explore may be worth a try.
Unfortunately, kubectl isn't documented by explainshell.com, which would be a boon, as it would also document the various flags, e.g. -n (--namespace) and -o (--output).
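For mapping resource shorthands to full resource names specifically, kubectl api-resources prints the short names next to the full names; a trimmed sketch of its output (the exact column layout varies by kubectl version):
kubectl api-resources
NAME          SHORTNAMES   APIVERSION   NAMESPACED   KIND
pods          po           v1           true         Pod
services      svc          v1           true         Service
deployments   deploy       apps/v1      true         Deployment
As noted in the question, this only covers resource names (the get/describe/delete case); the create subcommands use their own singular names such as kubectl create service.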

Error `executable aws not found` with kubectl config defined by `aws eks update-kubeconfig`

I defined my KUBECONFIG for the AWS EKS cluster:
aws eks update-kubeconfig --region eu-west-1 --name yb-demo
but got the following error when using kubectl:
...
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
[opc@C eks]$ kubectl get sc
Unable to connect to the server: getting credentials: exec: executable aws not found
It looks like you are trying to use a client-go credential plugin that is not installed.
To learn more about this feature, consult the documentation available at:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
You can also append your custom AWS CLI installation path to the $PATH variable in ~/.bash_profile: export PATH=$PATH:<path to aws cli program directory>. This way you do not need to sed the kubeconfig file every time you add an EKS cluster, and you will also be able to use the aws command at the command prompt without specifying the full path to the program on every execution.
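For example (a sketch; the installation directory below is only a placeholder, substitute the directory that actually contains your aws binary):
echo 'export PATH=$PATH:/usr/local/aws-cli/v2/current/bin' >> ~/.bash_profile   # placeholder path
source ~/.bash_profile
which aws    # should now resolve without editing the kubeconfig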
I had this problem after installing kubectx on Ubuntu Linux via a Snap package; it did not seem to be able to access the AWS CLI in that case. I worked around the issue by removing the Snap package and just using the shell scripts instead.
It seems that the command: aws entry in ~/.kube/config doesn't use the PATH environment variable and so doesn't find the executable. Here is how to change it to the full path:
sed -e "/command: aws/s?aws?$(which aws)?" -i ~/.kube/config
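A quick way to confirm the substitution took effect (a sketch; the path shown is just an example of what which aws might resolve to on your machine):
grep "command:" ~/.kube/config
# expected output is something like:  command: /usr/local/bin/aws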

Can you access the Airflow CLI within Google Cloud Composer?

I'm aware that many of the common Airflow management commands are made available through the gcloud CLI. However, I'm troubleshooting some DAG scheduling and would like to use the schedule and next_execution commands directly on the cluster.
Is there an easy way to do this?
It's possible to access the full Airflow CLI by using kubectl exec to open a shell in the Composer pods. To do so, obtain the name of the GKE cluster associated with your environment and get cluster credentials for it:
gcloud container clusters get-credentials $CLUSTER_NAME --zone=$ZONE
Then, use kubectl to find the Composer namespace, locate an Airflow pod in it, and open a shell in that pod:
kubectl get namespaces | grep composer
kubectl get pods --namespace=$NAMESPACE | grep airflow
kubectl exec -it --namespace=$NAMESPACE $POD_NAME -- bash
From within a pod, you can use airflow with any command supported by that version of Airflow. However, note that this also provides full access to commands that can make your environment permanently unusable (such as resetdb), so use them with care.
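Once inside the pod, the scheduling commands from the question look roughly like this (a sketch; my_dag_id is a placeholder, and the exact syntax depends on your Airflow version, e.g. Airflow 2 uses airflow dags next-execution instead of airflow next_execution):
airflow list_dags                    # confirm the DAG is parsed (Airflow 1.x syntax)
airflow next_execution my_dag_id     # print the next scheduled execution date for the DAG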

Get aws EMR DNS address using CLI

I am trying to set up some easy code to run when trying to spin up an EMR for some ad hoc work I have to do, time to time.
Right now I run the 'aws emr create-cluster' command and then, once the cluster is created, find the DNS in the console so I can connect over SSH.
I'd like to skip having to open the console at all and instead use the cluster ID to get the DNS value for my SSH connection, but I am not seeing a clear command for this. I'm new to the CLI, so I imagine this is a simple task I am merely failing to figure out myself.
In my mind the solution should be something along the lines of
aws emr create-cluster [config for cluster here] > file.txt
set DNS = aws emr describe-cluster --cluster-id file.txt -MasterPublicDnsName
ssh -i Desktop/AWS/EMRKey.pem -o ServerAliveInterval=15 hadoop@$DNS
I will probably have to prepend 'hadoop@' to the DNS variable before passing it into a command, but I'm more curious at the moment whether the above makes any functional sense, and if so, how I can get the describe-cluster command to output the MasterPublicDnsName, as -MasterPublicDnsName is obviously just something I made up and not an actual option that I have found.
The AWS CLI has a --query option that lets you filter the output of a command. You'll also want to use a waiter to make sure the cluster is up before you try to connect to it.
You could simply run
cluster_id="j-2RNBSZZBLXTZ0"
aws emr wait cluster-running --cluster-id $cluster_id
hostname=`aws emr describe-cluster --output text --cluster-id $cluster_id --query Cluster.MasterPublicDnsName`
ssh hadoop@$hostname
That should work!
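If you also want to avoid looking up the cluster ID by hand, the create-cluster call itself can be queried for it; a possible end-to-end sketch (the create-cluster options are left as the placeholder from the question):
cluster_id=$(aws emr create-cluster [config for cluster here] --query ClusterId --output text)
aws emr wait cluster-running --cluster-id "$cluster_id"
hostname=$(aws emr describe-cluster --cluster-id "$cluster_id" --query Cluster.MasterPublicDnsName --output text)
ssh -i Desktop/AWS/EMRKey.pem -o ServerAliveInterval=15 "hadoop@$hostname"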