What is the default password of argocd? - argocd

I have installed argocd on aks using below command:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/install.yaml
Then I changed the service type to LoadBalancer:
kubectl edit svc argocd-server -n argocd
Now, when I connect to the Argo CD web UI, I am not able to log in with the credentials below.
user: admin
password: argocd-server-9b77b6575-ts54n
I got the password from the command below, as mentioned in the docs.
kubectl get po -n argocd
NAME READY STATUS RESTARTS AGE
argocd-application-controller-0 1/1 Running 0 21m
argocd-dex-server-5559bc9679-5mj4v 1/1 Running 1 21m
argocd-redis-74d8c6db65-sxbnt 1/1 Running 0 21m
argocd-repo-server-6866f58df-m59sr 1/1 Running 0 21m
argocd-server-9b77b6575-ts54n 1/1 Running 0 21m
Please suggest how I can log in. What are the default credentials?
I even tried resetting the password using this command.
kubectl -n argocd patch secret argocd-secret -p '{"stringData": {
"admin.password": "$2a$10$Ix3Pd7mywOwVWOK8eSSY0uo60V6Vf6DtZljGuLwGRHQNnWNBbOLhW",
"admin.passwordMtime": "'$(date +%FT%T%Z)'"
}}'
But I get this error:
Error from server (BadRequest): invalid character 's' looking for beginning of object key string
Error from server (NotFound): secrets "2021-07-08T12:59:15IST" not found
Error from server (NotFound): secrets "\n }}" not found

You get the password by typing
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
With kubectl get pods you get the pod name, not the password.
It is common for applications to save the password in a Kubernetes Secret. The secret values are base64 encoded, so to update a secret by hand the value has to be valid base64, e.g. echo -n newpassword | base64. Keep in mind, though, that updating the secret does not necessarily change the application password.
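For illustration, a minimal sketch of encoding a value and writing it into a secret's data field (the secret and key names are placeholders; note that Argo CD itself expects a bcrypt hash in argocd-secret, not a plain base64-encoded password):
NEW_VALUE=$(echo -n newpassword | base64)
kubectl -n mynamespace patch secret mysecret -p '{"data": {"mykey": "'$NEW_VALUE'"}}'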

user: admin
To get the password, type the command below:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
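If you have the argocd CLI installed, a sketch of logging in with that password (the server address is a placeholder for your LoadBalancer IP or hostname; --insecure is only needed if TLS is not set up):
ARGOCD_PWD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
argocd login <ARGOCD_SERVER> --username admin --password "$ARGOCD_PWD" --insecure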

Using tools like Rancher or Lens (or OpenLens), you can see the secrets.
You may find the Argo CD admin password in the argocd-initial-admin-secret secret (at least for Argo CD v2.3.3), as shown in OpenLens or Rancher (screenshots omitted).

Visit https://github.com/argoproj/argo-cd/blob/master/docs/faq.md
#bcrypt(password)=$2a$10$rRyBsGSHK6.uc8fntPwVIuLVHgsAhAX7TcdrqW/RADU0uh7CaChLa
kubectl -n argocd patch secret argocd-secret \
-p '{"stringData": {
"admin.password": "$2a$10$rRyBsGSHK6.uc8fntPwVIuLVHgsAhAX7TcdrqW/RADU0uh7CaChLa",
"admin.passwordMtime": "'$(date +%FT%T%Z)'"
}}'
Your new password is "password".
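If you want a password of your own instead, you first need a bcrypt hash of it. A sketch, assuming htpasswd (from apache2-utils) or a recent argocd CLI is available:
htpasswd -nbBC 10 "" mynewpassword | tr -d ':\n' | sed 's/$2y/$2a/'
argocd account bcrypt --password mynewpassword
Put the resulting hash into admin.password in the patch above.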

A solution to this (Local ArgoCD setup)👇🏼
Patch the secret to reset the password:
kubectl -n argocd patch secret argocd-secret -p '{"data": {"admin.password": null, "admin.passwordMtime": null}}'
That will reset the password to the pod name.
Restart the argocd-server pod by scaling the deployment to zero replicas and then back to one.
kubectl -n argocd scale deployment argocd-server --replicas=0
Once it has scaled down, scale back up and wait a few minutes before logging in:
kubectl -n argocd scale deployment argocd-server --replicas=1
The new Argo CD password will be the full argocd-server pod name, including the random suffix at the end (run kubectl -n argocd get po to find the pod name), e.g. login:
user: admin
pass: argocd-server-6cdb9b4b84-jvl58
That should work.
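As an alternative to scaling down and back up, a rollout restart of the argocd-server deployment should have the same effect (a sketch, not part of the original answer):
kubectl -n argocd patch secret argocd-secret -p '{"data": {"admin.password": null, "admin.passwordMtime": null}}'
kubectl -n argocd rollout restart deployment argocd-server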

To change the password, edit the argocd-secret secret and update the admin.password field with a new bcrypt hash.

Related

kubectl wait for Service on AWS EKS to expose Elastic Load Balancer (ELB) address reported in .status.loadBalancer.ingress field

As the kubernetes.io docs state about a Service of type LoadBalancer:
On cloud providers which support external load balancers, setting the
type field to LoadBalancer provisions a load balancer for your
Service. The actual creation of the load balancer happens
asynchronously, and information about the provisioned balancer is
published in the Service's .status.loadBalancer field.
On AWS Elastic Kubernetes Service (EKS) an AWS Load Balancer is provisioned that load balances network traffic (see the AWS docs & the example project on GitHub provisioning an EKS cluster with Pulumi). Assuming we have a Deployment ready with the selector app=tekton-dashboard (it's the default Tekton dashboard you can deploy as stated in the docs), a Service of type LoadBalancer defined in tekton-dashboard-service.yml could look like this:
apiVersion: v1
kind: Service
metadata:
  name: tekton-dashboard-external-svc-manual
spec:
  selector:
    app: tekton-dashboard
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9097
  type: LoadBalancer
If we create the Service in our cluster with kubectl apply -f tekton-dashboard-service.yml -n tekton-pipelines, the AWS ELB gets created automatically:
There's only one problem: the .status.loadBalancer field is populated with the ingress[0].hostname field asynchronously and is therefore not available immediately. We can check this if we run the following commands together:
kubectl apply -f tekton-dashboard-service.yml -n tekton-pipelines && \
kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}'
The output will be an empty field:
{}%
So if we want to run this setup in a CI pipeline for example (e.g. GitHub Actions, see the example project's workflow provision.yml), we need to somehow wait until the .status.loadBalancer field got populated with the AWS ELB's hostname. How can we achieve this using kubectl wait?
TLDR;
Prior to Kubernetes v1.23 it's not possible with kubectl wait, but it can be done using until together with grep like this:
until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done
or even enhance the command using timeout (brew install coreutils on a Mac) to prevent the command from running infinitely:
timeout 10s bash -c 'until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done'
Problem with kubectl wait & the solution explained in detail
As stated in this so Q&A and the kubernetes issues kubectl wait unable to not wait for service ready #80828 & kubectl wait on arbitrary jsonpath #83094 using kubectl wait for this isn't possible in current Kubernetes versions right now.
The main reason is, that kubectl wait assumes that the status field of a Kubernetes resource queried with kubectl get service/xyz --output=yaml contains a conditions list. Which a Service doesn't have. Using jsonpath here would be a solution and will be possible from Kubernetes v1.23 on (see this merged PR). But until this version is broadly available in managed Kubernetes clusters like EKS, we need another solution. And it should also be available as "one-liner" just as a kubectl wait would be.
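For reference, a hedged sketch of what a jsonpath-based wait could look like once your kubectl and cluster support it (assumption to verify: the v1.23-era syntax requires an explicit =value to match, while waiting for a field to merely exist, as below, needs a newer kubectl):
kubectl wait service/tekton-dashboard-external-svc-manual -n tekton-pipelines --for=jsonpath='{.status.loadBalancer.ingress[0].hostname}' --timeout=120s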
A good starting point could be this superuser answer about "watching" the output of a command until a particular string is observed and then exit:
until my_cmd | grep "String Im Looking For"; do : ; done
If we use this approach together with a kubectl get we can craft a command which will wait until the field ingress gets populated into the status.loadBalancer field in our Service:
until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done
This will wait until the ingress field is populated and then print out the AWS ELB address (e.g. by running kubectl get service tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}' thereafter):
$ until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done
{"ingress":[{"hostname":"a74b078064c7d4ba1b89bf4e92586af0-18561896.eu-central-1.elb.amazonaws.com"}]}
Now we have a one-liner command that behaves just like a kubectl wait for our Service to become available through the AWS LoadBalancer. We can double-check that this is working with the following commands combined (be sure to delete the Service using kubectl delete service/tekton-dashboard-external-svc-manual -n tekton-pipelines before you execute it, because otherwise the Service, including the AWS LoadBalancer, already exists):
kubectl apply -f tekton-dashboard-service.yml -n tekton-pipelines && \
until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done && \
kubectl get service tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}'
Here's also a full GitHub Actions pipeline run if you're interested.

Unable to deploy aws-load-balancer-controller on Kubernetes

I am trying to deploy the aws-load-balancer-controller on my Kubernetes cluster on AWS by following the steps given in https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
After the yaml file is applied, while trying to check the status of the deployment, I get:
$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 0/1 1 0 6m39s
I tried to debug it and I got this :
$ kubectl logs -n kube-system deployment.apps/aws-load-balancer-controller
{"level":"info","logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"error","logger":"setup","msg":"unable to create controller","controller":"Ingress","error":"the server could not find the requested resource"}
The yaml file is pulled directly from https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.3.0/v2_3_0_full.yaml and apart from changing the Kubernetes cluster name, no other modifications are done.
Please let me know if I am missing some step in the configuration.
Any help would be highly appreciated.
I am not sure if this helps, but for me the issue was that the version of the aws-load-balancer-controller was not compatible with the version of Kubernetes.
aws-load-balancer-controller = v2.3.1
Kubernetes/EKS = 1.22
Github issue for more information:
https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2495
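To compare against that compatibility discussion, you can check which versions you are actually running; a sketch (note that kubectl version --short is deprecated on newer clients, where plain kubectl version prints the same information):
# image tag of the deployed controller
kubectl get deployment -n kube-system aws-load-balancer-controller -o jsonpath='{.spec.template.spec.containers[0].image}'
# Kubernetes server version
kubectl version --short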

How to pull a private container from AWS ECR to a local cluster

I am currently having trouble trying to pull my remote Docker image hosted on AWS ECR. I am getting an error when running a deployment.
Step 1) Run:
aws ecr get-login-password --region cn-north-1 | docker login --username AWS --password-stdin xxxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn
Step 2) Run kubectl create -f backend.yaml.
From here, the following happens:
➜ backend git:(kubernetes-fresh) ✗ kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-89d75f7df-qwqdq 0/1 Pending 0 2s
➜ backend git:(kubernetes-fresh) ✗ kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-89d75f7df-qwqdq 0/1 ContainerCreating 0 4s
➜ backend git:(kubernetes-fresh) ✗ kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-89d75f7df-qwqdq 0/1 ErrImagePull 0 6s
➜ backend git:(kubernetes-fresh) ✗ kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-89d75f7df-qwqdq 0/1 ImagePullBackOff 0 7s
So then I run kubectl describe pod backend and it will output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 117s default-scheduler Successfully assigned default/backend-89d75f7df-qwqdq to minikube
Normal Pulling 32s (x4 over 114s) kubelet, minikube Pulling image "xxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn/baopals:latest"
Warning Failed 31s (x4 over 114s) kubelet, minikube Failed to pull image "xxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn/baopals:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://xxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn/v2/baopals/manifests/latest: no basic auth credentials
Warning Failed 31s (x4 over 114s) kubelet, minikube Error: ErrImagePull
Warning Failed 19s (x6 over 113s) kubelet, minikube Error: ImagePullBackOff
Normal BackOff 4s (x7 over 113s) kubelet, minikube Back-off pulling image "xxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn/baopals:latest"
the main error being no basic auth credentials
Now what I am confused about is that I can push images to my ECR fine and I can also push to my remote EKS cluster I feel like essentially the only thing I cant do right now is pull from my private repository that is hosted on ECR.
Is there something obvious that I'm missing here that is preventing me from pulling from private repos so I can use them on my local machine?
To fetch an ECR image locally, you have to log in to ECR and then pull the Docker image. On Kubernetes, you have to store the ECR login details in a Secret and use it each time you pull an image from ECR.
Here is a shell script for Kubernetes; it automatically takes values from your AWS configuration, or you can update the variables at the start of the script.
ACCOUNT=$(aws sts get-caller-identity --query 'Account' --output text) #aws account number
REGION=ap-south-1 #aws ECR region
SECRET_NAME=${REGION}-ecr-registry #secret_name
EMAIL=abc@xyz.com #can be anything
TOKEN=`aws ecr --region=$REGION get-authorization-token --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2`
kubectl delete secret --ignore-not-found $SECRET_NAME
kubectl create secret docker-registry $SECRET_NAME \
--docker-server=https://$ACCOUNT.dkr.ecr.${REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
The imagePullSecrets field in your YAML file then references this stored secret when pulling from private Docker repos.
https://github.com/harsh4870/ECR-Token-automation/blob/master/aws-token.sh
When a node in your cluster launches a container, it needs the credentials to access the private registry to pull the image. Even if you have authenticated on your local machine, the node cannot reuse the login, because by design it could be running on another machine; so you have to provide the credentials in the pod template. Follow this guide to do that:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
Basically, you store the ECR credentials as a secret and provide it in the imagePullSecrets of the pod spec. The pod will then be able to pull the image every time.
If you are developing with your cluster running on your local machine, you don't even need to do that. You can have the pod reuse the image that you have downloaded to your local cache by either setting imagePullPolicy under the container spec to IfNotPresent, or using a specific tag instead of latest for your image.
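For illustration, a minimal sketch of what the imagePullSecrets reference described above could look like in a pod manifest (the secret name matches the script above; the image is the one from the question):
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
    - name: backend
      image: xxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn/baopals:latest
  imagePullSecrets:
    - name: ap-south-1-ecr-registry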

Two clusters on EKS, how to switch between them

I am not exactly sure what's going on which is why I am asking this question. When I run this command:
kubectl config get-clusters
I get:
arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1
arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1
then I run:
kubectl config current-context
and I get:
arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1
and if I run kubectl get pods, I get the expected output.
But how do I switch to the other cluster/context? What's the difference between a cluster and a context? I can't figure out how these commands differ:
When I run them, I still get the pods from the wrong cluster:
root@4c2ab870baaf:/# kubectl config set-context arn:aws:eks:us-west-2:913617820371:cluster/eks1
Context "arn:aws:eks:us-west-2:913617820371:cluster/eks1" modified.
root@4c2ab870baaf:/#
root@4c2ab870baaf:/# kubectl get pods
NAME READY STATUS RESTARTS AGE
apache-spike-579598949b-5bjjs 1/1 Running 0 14d
apache-spike-579598949b-957gv 1/1 Running 0 14d
apache-spike-579598949b-k49hf 1/1 Running 0 14d
root@4c2ab870baaf:/# kubectl config set-cluster arn:aws:eks:us-west-2:91xxxxxx371:cluster/eks1
Cluster "arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1" set.
root@4c2ab870baaf:/# kubectl get pods
NAME READY STATUS RESTARTS AGE
apache-spike-579598949b-5bjjs 1/1 Running 0 14d
apache-spike-579598949b-957gv 1/1 Running 0 14d
apache-spike-579598949b-k49hf 1/1 Running 0 14d
So I really don't know how to properly switch between clusters or contexts, and also how to switch the auth routine when doing so.
For example:
contexts:
- context:
    cluster: arn:aws:eks:us-west-2:91xxxxx371:cluster/ignitecluster
    user: arn:aws:eks:us-west-2:91xxxx371:cluster/ignitecluster
  name: arn:aws:eks:us-west-2:91xxxxx371:cluster/ignitecluster
- context:
    cluster: arn:aws:eks:us-west-2:91xxxx371:cluster/teros-eks-cluster
    user: arn:aws:eks:us-west-2:91xxxxx371:cluster/teros-eks-cluster
  name: arn:aws:eks:us-west-2:91xxxxx371:cluster/teros-eks-cluster
To clarify the difference between set-context and use-context:
A context is a group of access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. So when you do set-context, you are just adding (or modifying) context details in your configuration file ~/.kube/config, but it doesn't switch you to that context, while use-context actually does.
Thus, as Vasily mentioned, in order to switch between clusters run
kubectl config use-context <CONTEXT-NAME>
Also, if you run kubectl config get-contexts you will see list of contexts with indication of the current one.
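For illustration, a minimal sketch of the two commands side by side (the context name and namespace are placeholders):
# set-context only writes/updates an entry in ~/.kube/config
kubectl config set-context my-eks-context --cluster=arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1 --user=arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1 --namespace=default
# use-context actually switches the current context
kubectl config use-context my-eks-context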
Use
kubectl config use-context arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1
and
kubectl config use-context arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1
Consider using kubectx for managing your contexts.
Usage
View all contexts (the current context is bolded):
$ kubectx
arn:aws:eks:us-east-1:12234567:cluster/eks_app
->gke_my_second_cluster
my-rnd
my-prod
Switch to other context:
$ kubectx my-rnd
Switched to context "my-rnd".
Bonus:
In the same link - check also the kubens tool.
This is the best command to switch between different EKS clusters.
I use it every day.
aws eks update-kubeconfig --name example
Documentation:
https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html
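If the long ARN context names bother you, update-kubeconfig also accepts an alias for the context it creates (a sketch; check the linked documentation for your CLI version):
aws eks update-kubeconfig --name eks-cluster-1 --alias eks-cluster-1
aws eks update-kubeconfig --name eks1 --alias eks1
kubectl config use-context eks1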

how to use Google Container Registry

I tried to use Google Container Registry, but it did not work for me.
I wrote the following containers.yaml.
$ cat containers.yaml
version: v1
kind: Pod
spec:
  containers:
    - name: amazonssh
      image: asia.gcr.io/<project-id>/amazonssh
      imagePullPolicy: Always
  restartPolicy: Always
  dnsPolicy: Default
I run the instance with the following command.
$ gcloud compute instances create containervm-amazonssh --image container-vm --network product-network --metadata-from-file google-container-manifest=containers.yaml --zone asia-east1-a --machine-type f1-micro
I set the following ACL permission.
# gsutil acl ch -r -u <project-number>@developer.gserviceaccount.com:R gs://asia.artifacts.<project-id>.appspot.com
But "Access denied" occurs when I docker pull the image from Google Container Registry.
# docker pull asia.gcr.io/<project-id>.a/amazonssh
Pulling repository asia.gcr.io/<project-id>.a/amazonssh
FATA[0000] Error: Status 403 trying to pull repository <project-id>/amazonssh: "Access denied."
Can you verify from your instance that you can read data from your Google Cloud Storage bucket? This can be verified by:
$ curl -H 'Metadata-Flavor: Google' $SVC_ACCT/scopes
...
https://www.googleapis.com/auth/devstorage.full_control
https://www.googleapis.com/auth/devstorage.read_write
https://www.googleapis.com/auth/devstorage.read_only
...
If so, then try the following.
On Google Compute Engine you can log in without gcloud with:
$ METADATA=http://metadata.google.internal./computeMetadata/v1
$ SVC_ACCT=$METADATA/instance/service-accounts/default
$ ACCESS_TOKEN=$(curl -H 'Metadata-Flavor: Google' $SVC_ACCT/token \
| cut -d'"' -f 4)
$ docker login -e not@val.id -u _token -p $ACCESS_TOKEN https://gcr.io
Then try your docker pull command again.
You have an extra .a after project-id here, not sure if you ran the command that way?
docker pull asia.gcr.io/<project-id>.a/amazonssh
The container-vm has a cron job running gcloud docker -a as root, so you should be able to docker pull as root.
The kubelet, which launches the container-vm Docker containers also understands how to natively authenticate with GCR, so it should just work.
Feel free to reach out to us at gcr-contact@google.com. It would be useful if you could include your project-id, and possibly the /var/log/kubelet.log from your container-vm.