ArgoCD CLI: get details of resources? - argocd

I am using ArgoCD CLI to access ArgoCD and watch/manage my K8s clusters.
How can I get details on some resources like events?
I know I can list the resources with argocd app resources <app-name>.
But how can I get the details that I can see in the web app?
I need to debug, e.g., why a container fails to start.
Please note: yes, I know how to use kubectl, but I don't have permissions on all clusters to use kubectl and need to access the logs and events with ArgoCD. Hence the question.

These are actually the events from the resources themselves. You can run
kubectl describe <resource_type> <resource_name> and you will see the events at the bottom of the output.
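For example, for a pod that fails to start, something along these lines should work (the pod and namespace names below are hypothetical placeholders):
# my-failing-pod and my-namespace are placeholder names
kubectl describe pod my-failing-pod -n my-namespace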

Related

How can I authenticate kubectl to an AWS EKS cluster from a CI/CD pipeline

I am setting up a CI/CD pipeline on GitLab to deploy a full AWS EKS cluster using Terraform. I got that to work rather well, but now I want to be able to perform some tasks on the cluster from that pipeline. Specifically, I am following this guide from GitLab on how to manually add a cluster to GitLab and am trying to put that in a script.
Now my issue is that I need to run kubectl commands from within the pipeline, but I cannot figure out how to authenticate from there without creating a custom image that contains AWS's aws-iam-authenticator. That honestly doesn't seem like the right way to do it, so I figure there has to be another way, a better way.
Maybe I am thinking totally wrong and I do not need to use kubectl and there is a totally different approach I can take. If that's true, please tell me so. If not, I'd love to know if there is a different way.
I have been researching the different ways of authenticating with a k8s cluster, but every single resource I have found that is tailored for EKS insists I need to use the aws-iam-authenticator program.
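For reference, a minimal sketch of the kubeconfig route, assuming the pipeline environment already has valid AWS credentials and a recent aws CLI (recent versions can issue the EKS token themselves via aws eks get-token, so a separate aws-iam-authenticator binary is not strictly required):
# region and cluster name are placeholders
aws eks update-kubeconfig --region eu-west-1 --name my-cluster
# kubectl then uses the kubeconfig entry written above
kubectl get nodes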

Multicluster istio without exposing kubeconfig between clusters

I managed to get multicluster istio working following the documentation.
However, this requires the kubeconfig of each cluster to be set up on the others. I am looking for an alternative to doing that. Based on a presentation from solo.io and Admiral, it seems that it might be possible to set up ServiceEntries to accomplish this manually. Istio docs are scarce in this area. Does anyone have pointers on how to make this work?
There are some advantages to setting up the discovery manually or through our CD processes...
if one cluster gets compromised, the creds to other clusters don't leak
allows us to limit which services are discovered
I posted the question on Twitter as well and hope to get some feedback from the Istio contributors.
As per Admiral docs:
Admiral acts as a controller watching k8s clusters that have a credential stored as a secret object in the namespace Admiral is running in. Admiral delivers Istio configuration to each cluster to enable services to communicate.
No matter how you manage the control-plane configuration (manually or with a controller), you have to store and provision credentials somehow; in this case, with the use of secrets.
You can store your secrets securely in git with sealed-secrets.
You can read more here.
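As a minimal sketch of that sealed-secrets flow, assuming the sealed-secrets controller is installed in the cluster and remote-cluster-secret.yaml is a hypothetical plain Secret manifest holding the remote cluster credentials:
# seal the plain Secret so it can be stored safely in git
kubeseal --format yaml < remote-cluster-secret.yaml > remote-cluster-sealedsecret.yaml
# only the in-cluster controller can decrypt the SealedSecret back into a Secret
kubectl apply -f remote-cluster-sealedsecret.yaml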

How to get all the pod details using CLI for any given region

As part of my work, I need to get all the pods details for any given region.
I normally get the pod details by running kubectl get pods -n <my_name_space>. But now, I need to get the pod details for any given region. Is there any option to do that?
From the AWS web UI I can see them by changing the region manually, but I am looking for automation.
I have tried the aws CLI as well, but I could not find any option to do that. Any suggestions?
Is there any way to achieve this?
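One way to script this with the aws CLI and kubectl, assuming the pods run on EKS clusters and your credentials are valid in the target region (the region name below is a placeholder):
REGION=eu-central-1
for CLUSTER in $(aws eks list-clusters --region "$REGION" --query 'clusters[]' --output text); do
  aws eks update-kubeconfig --region "$REGION" --name "$CLUSTER"
  kubectl get pods --all-namespaces
done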

kubectl vs aws eks - which one to use when?

We host Docker containers on AWS infrastructure using AWS EKS. My reading so far shows that the kubectl command-line tool gives me commands to query and manipulate the EKS cluster. The aws eks command-line tool also gives me commands to do this. To my inexperienced eye, they look like they offer the same facilities.
Are there certain situations when it's better to use one or the other?
The aws eks command is for interacting with the proprietary AWS EKS APIs to perform administrative tasks such as creating a cluster, updating your kubeconfig with the correct credentials, etc.
kubectl is an open-source CLI tool which lets you interact with the Kubernetes API server to perform tasks such as creating pods, deployments, etc.
You cannot use the aws eks command to interact with the Kubernetes API server and perform Kubernetes-specific operations, because it does not understand the Kubernetes APIs.
Similarly, you cannot use kubectl to interact with the proprietary AWS EKS APIs, because kubectl does not understand them.
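To make the split concrete, a typical session might look like this (cluster name, region and namespace are placeholders):
# EKS side: AWS API call that writes cluster credentials into your kubeconfig
aws eks update-kubeconfig --name my-cluster --region us-east-1
# Kubernetes side: kubectl talks to that cluster's API server
kubectl get deployments -n default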

Kubernetes Engine unable to pull image from non-private / GCR repository

I was happily deploying to Kubernetes Engine for a while, but while working on an integrated cloud container builder pipeline, I started getting into trouble.
I don't know what changed. I can not deploy to kubernetes anymore, even in ways I did before without cloud builder.
The pod rollout process gives an error indicating that it is unable to pull from the registry, which seems weird because the images exist (I can pull them using the CLI) and I granted all possibly related permissions to my user and the Cloud Builder service account.
I get the error ImagePullBackOff and see this in the pod events:
Failed to pull image
"gcr.io/my-project/backend:f4711979-eaab-4de1-afd8-d2e37eaeb988":
rpc error: code = Unknown desc = unauthorized: authentication required
What's going on? Who needs authorization, and for what?
In my case, my cluster didn't have the Storage read permission, which is necessary for GKE to pull an image from GCR.
My cluster didn't have proper permissions because I created the cluster through terraform and didn't include the node_config.oauth_scopes block. When creating a cluster through the console, the Storage read permission is added by default.
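For comparison, a sketch of creating a cluster with gcloud and explicit scopes (the cluster name and zone are placeholders; the gke-default scope set includes devstorage.read_only, which is what GKE needs to pull from GCR):
gcloud container clusters create my-cluster --zone europe-west1-b --scopes=gke-default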
The credentials in my project somehow got messed up. I solved the problem by re-initializing a few APIs including Kubernetes Engine, Deployment Manager and Container Builder.
The first time I tried this I didn't succeed, because to disable something you first have to disable all the APIs that depend on it. If you do this via the GCloud web UI, you'll likely see a list of services that are not all available for disabling in the UI.
I learned that using the gcloud CLI you can list all APIs of your project and disable everything properly.
Things worked after that.
The reason I knew things were messed up is that I had a copy of the same setup as a production environment, and there these problems did not exist. The development environment had gone through a lot of iterations and messing around with credentials, so somewhere things got corrupted.
These are some examples of useful commands:
gcloud projects get-iam-policy $PROJECT_ID
gcloud services disable container.googleapis.com --verbosity=debug
gcloud services enable container.googleapis.com
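To see which services are currently enabled before disabling anything, something like:
gcloud services list --enabled --project=$PROJECT_ID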
More info here, including how to restore service account credentials.