I'm aware that many of the common Airflow management commands are made available through the gcloud CLI. However, I'm troubleshooting some DAG scheduling and would like to use the schedule and next_execution commands directly on the cluster.
Is there an easy way to do this?
It's possible to access the full Airflow CLI by using kubectl exec to open a shell inside the Composer pods. To do so, obtain the name of the GKE cluster associated with your environment, and get cluster credentials for it:
gcloud container clusters get-credentials $CLUSTER_NAME --zone=$ZONE
Then, use kubectl to find the Composer namespace, locate an Airflow pod, and open a shell in it:
kubectl get namespaces | grep composer
kubectl get pods --namespace=$NAMESPACE | grep airflow
kubectl exec -it --namespace=$NAMESPACE $POD_NAME -- bash
From within a pod, you can run airflow with any command supported by that version of Airflow. Note, however, that this also gives full access to commands that can render your environment permanently unusable (such as resetdb), so use them with care.
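For example, to troubleshoot scheduling from inside the pod (a minimal sketch, assuming the Airflow 1.x CLI and a placeholder $DAG_ID for one of your DAGs):
airflow list_dags
airflow next_execution $DAG_ID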
I'm trying to run a Docker container via Compute Engine (not Cloud Run, as I need a long-running instance).
The container works fine on my local machine and can access Google Cloud account resources.
I can see that it's running on Container-Optimized OS on Compute Engine.
I tried running the following commands in order, but I keep getting the same CONSUMER_INVALID issue. The IAM account has all the required permissions; I triple-checked this.
# Fix gcloud issue
alias gcloud='(docker images google/cloud-sdk || docker pull google/cloud-sdk) > /dev/null;docker run -t -i --net=host -v $HOME/.config:/.config -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker google/cloud-sdk gcloud'
# Set up gcloud
gcloud
# Enable Docker authentication via gcloud
gcloud auth configure-docker
Then I needed to run:
docker-credential-gcr configure-docker
export GOOGLE_CLOUD_PROJECT=project-352014
Not sure what to do now; it seems Compute Engine isn't communicating properly with the internal account resources?
1. Can we get the Azure kubectl (exe/bin) as a downloadable link instead of installing it via the command line with 'az aks install-cli', or can I use the kubectl from Kubernetes?
2. Is there any azure-cli command to change the default location of the kubeconfig file? By default it points to '.kube\config' on Windows.
Example: instead of using the '--kubeconfig' flag, as in 'kubectl get nodes --kubeconfig D:\config', can we change the default location?
You can use the default Kubernetes CLI (kubectl) and get credentials with the Azure CLI: az aks get-credentials --resource-group <RESOURCE_GROUP> --name <AKS_CLUSTER_NAME>, or via the Azure portal UI.
You can use the flag --kubeconfig=<PATH>, or you can override the default value of the KUBECONFIG environment variable, which points to $HOME/.kube/config, with KUBECONFIG=<PATH>.
Example:
# Windows (cmd)
set KUBECONFIG=D:\config
# Windows (PowerShell)
$env:KUBECONFIG = "D:\config"
# bash
export KUBECONFIG=D:\config
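As a further sketch, assuming your Azure CLI version supports the --file flag of az aks get-credentials, you can also write the credentials directly to the custom path and then point kubectl at it:
az aks get-credentials --resource-group <RESOURCE_GROUP> --name <AKS_CLUSTER_NAME> --file D:\config
set KUBECONFIG=D:\config
kubectl get nodes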
I specified 3 nodes when creating a Cloud Composer environment. I tried to connect to the worker nodes via SSH, but I am not able to find an airflow directory in /home. So where exactly is it located?
Cloud Composer runs Airflow on GKE, so you won't find data directly on any of the host GCE instances. Instead, Airflow processes are run within Kubernetes-managed containers, which either mount or sync data to the /home/airflow directory. To find the directory you will need to look within a running container.
Since each environment stores its Airflow data in a GCS bucket, you can alternatively inspect files using the Cloud Console or gsutil. If you really want to view /home/airflow with a shell, you can use kubectl exec, which allows you to run commands in (or open a shell on) any pod/container in the Kubernetes cluster. For example:
# Obtain the name of the Composer environment's GKE cluster
$ gcloud composer environments describe $ENV_NAME --location=$LOCATION
# Fetch Kubernetes credentials for that cluster
$ gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone=$ZONE
Once you have Kubernetes credentials, you can list the running pods and open a shell in one of them:
# List running pods
$ kubectl get pods
# SSH into a pod
$ kubectl exec -it $POD_NAME -- bash
airflow-worker-a93j$ ls /home/airflow
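If you only need to inspect the DAG files, the GCS route mentioned above avoids opening a shell at all. A minimal sketch (the config.dagGcsPrefix field and $LOCATION are assumptions; verify them against your gcloud version):
# Look up the environment's GCS prefix for DAGs and list the files under it
$ DAGS_PREFIX=$(gcloud composer environments describe $ENV_NAME --location=$LOCATION --format="get(config.dagGcsPrefix)")
$ gsutil ls $DAGS_PREFIX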
It's been some time since I visited all the web pages mentioning "kops import", but I did not find a way to import my manually created K8s cluster. By manually created cluster I mean: infrastructure deployed on AWS using Terraform, and Kubernetes installed via a shell script run by Terraform's provisioner. Now that managing the environment manually is a pain, I'd like to move it under kops. To that end, I have done the following so far:
Installed aws cli, kubectl and kops in my local machine.
Created KOps user with policies AmazonEC2FullAccess,
AmazonRoute53FullAccess, AmazonS3FullAccess, IAMFullAccess,
AmazonVPCFullAccess and generated access and secret keys.
Configured credentials using aws configure.
Created S3 bucket to store state.
Set env variables like Region and Cluster name.
Finally, ran kops import command as below:
kops import cluster --region ${REGION} --name ${OLD_NAME}
But encountered below error:
Cluster.kops "jjm-prod-use1-kubernetes" not found
Verbose output:
$ kops import cluster --region ${REGION} --name ${OLD_NAME} -v 10
I0131 16:32:12.059651 25683 factory.go:68] state store s3://kops-state-store-jjm
I0131 16:32:13.133145 25683 s3context.go:194] found bucket in region "us-east-1"
I0131 16:32:13.133174 25683 s3fs.go:220] Reading file "s3://kops-state-store-jjm/jjm-prod-use1-kubernetes/config"
This is what made me post this question. Is there any possible way a K8s cluster created by anything other than kube-up.sh can be brought under the control of kops? Please advise.
Note: There's no way I can re-create (destroy and create) the clusters as they are running in production.
EDIT: I know this can be achieved only if the cluster was set up using kube-up.sh. But is there any other way?
That is only possible with a cluster bootstrapped via the kube-up.sh script, as officially stated in the kops documentation. In fact, kube-up.sh has since been excluded from the list of supported Kubernetes installation tools for AWS. Although a cluster built by kube-up.sh provides a lot of customization settings specifically applicable to AWS, the initial script uses environment variables to define these settings. Therefore, I assume that it's quite hard to achieve in your case.
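For context on the mechanism above (a sketch, reusing the state store and cluster name from your verbose output), you can confirm that the state store simply contains no spec for that cluster, which is why kops reports it as not found:
# Inspect the kops state store referenced in the verbose output
export KOPS_STATE_STORE=s3://kops-state-store-jjm
aws s3 ls ${KOPS_STATE_STORE}/jjm-prod-use1-kubernetes/
kops get clusters --state ${KOPS_STATE_STORE}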
I've been struggling with configuring Kubernetes for many hours and I don't know how to move forward.
What I did :
I created a few services using Spring Cloud
I created Docker images for each service
I pushed those images to Docker Hub
I launched the cluster on AWS by running
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
Command kubectl cluster-info shows that it actually works.
I created Kubernetes pods for each service. Command kubectl get pods
shows that all pods have status running.
The problem is that when I log to my AWS account I don't see any running instance, although I can see kubernetes-staging created in my S3 bucket.
My goal is to actually access my service, not just on localhost. How can I do it?
You should be able to see instances, of course. As @kichik mentioned, check whether your AWS console is using the same region as the deployment scripts.
To use your services/applications, the next step is to expose them to the public with Kubernetes Services, as described here and here.
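As a minimal sketch (the pod name, service name, and ports below are placeholders, not taken from your setup), exposing one of your pods through a cloud load balancer could look like this:
# Create a Service of type LoadBalancer in front of the pod
kubectl expose pod $POD_NAME --type=LoadBalancer --name=my-service --port=80 --target-port=8080
# Wait until the EXTERNAL-IP column is populated, then use that address
kubectl get service my-service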