ArgoCD login: operation was canceled - argocd

I am trying to log in to my ArgoCD server via the ArgoCD CLI like so:
sudo argocd login **** --username **** --password ***** --loglevel debug
However, I get the error:
FATA[0250] Failed to establish connection to ****:443: dial tcp ****:443: operation was canceled
I have tried various options such as --grpc-web, --insecure, and --plaintext. I am able to log in to the ArgoCD server I set up from my personal machine (macOS), but I cannot from the virtual instance. Are there any settings that need to be configured for argocd login to work?

In this case I was running the GitHub runner as a virtual instance, with the ArgoCD containers also running in the cloud. I hadn't added a firewall rule to allow traffic from the runner into the ArgoCD subnet.
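The post doesn't name the cloud provider, but if the runner and ArgoCD happen to sit in GCP VPC subnets (as in the related question below), a firewall rule along these lines would open the path; the rule name, network, target tag, and CIDR are placeholders, not values from the original setup:
# Hypothetical GCP example: allow HTTPS from the runner's subnet to the ArgoCD hosts.
gcloud compute firewall-rules create allow-runner-to-argocd \
  --network=my-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:443 \
  --source-ranges=RUNNER_SUBNET_CIDR \
  --target-tags=argocd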

Related

Self-hosted gitlab-runner cannot register with self-hosted GitLab on GKE

GitLab version: 13.8.1-ee (installed with Helm)
GKE version: 1.16.15-gke.6000
I installed GitLab & gitlab-runner on GKE, in a private cluster.
I also have the nginx-ingress-controller set up for the firewall rule, following the docs:
https://gitlab.com/gitlab-org/charts/gitlab/blob/70f31743e1ff37bb00298cd6d0b69a0e8e035c33/charts/nginx/index.md
nginx-ingress:
  controller:
    scope:
      enabled: true
      namespace: default
    service:
      loadBalancerSourceRanges:
        ["IP","ADDRESSES"]
With this setting, the gitlab-runner pod gets the error:
couldn't execute POST against https://gitlab.my-domain.com/api/v4/runners: Post https://gitlab.my-domain.com/api/v4/runners: dial tcp [my-domain's-IP]: i/o timeout
The issue is the same as this one:
Gitlab Runner can't access Gitlab self-hosted instance
But I have already set up Cloud NAT & a Cloud Route, and also added the Cloud NAT IP address to loadBalancerSourceRanges in GitLab's values.yaml.
To check whether Cloud NAT was being used, I exec'd into the pod and checked its outbound IP:
$ kubectl exec -it gitlab-gitlab-runner-xxxxxxxx -- /bin/sh
wget -qO- httpbin.org/ip
and it showed the Cloud NAT IP address.
So the request to
https://gitlab.my-domain.com/api/v4/runners
must be going out with the Cloud NAT IP as its source IP.
What can I do to solve it?
It worked when I added the Kubernetes pods' internal IP range to loadBalancerSourceRanges. Both stable/nginx and https://kubernetes.github.io/ingress-nginx worked.
gitlab-runner calls https://my-domain/api/v4/runners. I thought the request would go out over the public network, so I added only the Cloud NAT IP, but apparently it does not.
Still, it's a little bit weird.
The first time, I set 0.0.0.0/0 in loadBalancerSourceRanges and added only the Cloud NAT IP in the firewall, and https://my-domain/api/v4/runners worked.
So loadBalancerSourceRanges may be applied in two places: one is the firewall rule that is visible on GCP, the other is hidden.
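As a rough sketch of the fix described above (the release and chart names assume the standard gitlab/gitlab Helm chart; the CIDRs are placeholders), the values override would look something like this:
# Hypothetical override: allow both the Cloud NAT IP and the cluster's pod/node range.
cat > lb-sources.yaml <<'EOF'
nginx-ingress:
  controller:
    service:
      loadBalancerSourceRanges:
        - "CLOUD_NAT_IP/32"
        - "POD_OR_NODE_CIDR"
EOF
helm upgrade gitlab gitlab/gitlab -f lb-sources.yaml --reuse-values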

Error from server (InternalError): error when creating "v2_0_0_full.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io"

I am trying to follow the AWS instructions to create an ALB for EKS (Elastic Kubernetes Service on AWS).
The instructions are here: https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
I have a problem at step 7 (Install the controller manually). When I try to apply a YAML file in substep 7-b-c, I get an error:
Error from server (InternalError): error when creating "v2_0_0_full.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s: x509: certificate is valid for ip-192-168-121-78.eu-central-1.compute.internal, not cert-manager-webhook.cert-manager.svc
Has anyone experienced a similar kind of problem, and what are the best ways to troubleshoot and solve it?
It seems that cert-manager doesn't run on Fargate as expected - see #1606.
The first workaround is to install the Helm chart, which doesn't have the cert-manager dependency; Helm will generate the self-signed certificate and secret resources (a sketch follows below).
A different option is to remove all the cert-manager pieces from the YAML manifest and provide your own self-signed certificate, if you don't want Helm as a dependency.
Take a look: alb-cert-manager, alb-eks-cert-manager.
Useful article: aws-fargate.
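For the first option, a sketch of the Helm-based install of the AWS Load Balancer Controller (the repo and chart names are the ones published in eks-charts; the cluster name and service account are placeholders, so verify the flags against the current AWS docs):
# Install via Helm instead of the raw v2_0_0_full.yaml manifest, avoiding the cert-manager dependency.
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller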
For EKS with Fargate, the cert-manager-webhook server's port clashes with the kubelet port on the Fargate MicroVM.
Ref: https://github.com/jetstack/cert-manager/issues/3237#issuecomment-827523656
To remedy this, when installing the chart, set the parameter webhook.securePort to a port other than 10250 (e.g. 10260):
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.3.1 \
  --set webhook.securePort=10260 \
  --set installCRDs=true
Or, if cert-manager is already deployed, you could edit the cert-manager-webhook Deployment and Service to use this new port.
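If you go that route, the edit amounts to something like the following (assuming the default cert-manager namespace and resource names; the exact argument and port fields vary between chart versions, so check them with kubectl get -o yaml first):
# Point the webhook at 10260 instead of the kubelet's 10250.
kubectl -n cert-manager edit deployment cert-manager-webhook   # change --secure-port and the containerPort to 10260
kubectl -n cert-manager edit service cert-manager-webhook      # change the targetPort to 10260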

Jenkins installed via docker cannot run on AWS EC2

I'm new to DevOps. I want to install Jenkins on AWS EC2 with Docker.
I installed Jenkins with this command:
docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
In the AWS security group, I have opened ports 8080 and 50000. I also opened port 22 for SSH, 27017 for Mongo, and 3000 for Node.
I can see the Jenkins container when I run docker ps. However, when I open https://xxxx.us-east-2.compute.amazonaws.com:8080, the Jenkins setup page does not appear and the browser shows the error ERR_SSL_PROTOCOL_ERROR.
Does anyone know what's wrong here? Should I install Nginx as well? I haven't installed it yet.
The error is due to the fact that you are using https:
https://xxxx.us-east-2.compute.amazonaws.com:8080
From your description it does not seem that you've set up any kind of SSL/TLS termination for your instance, so you should connect using http only:
http://xxxx.us-east-2.compute.amazonaws.com:8080
But this is not good practice, as you communicate in plain text. A common solution is to access the Jenkins web UI through an SSH tunnel. This way the connection is encrypted and you don't have to expose any Jenkins ports in your security groups.
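A sketch of such a tunnel (the key file, user, and hostname are placeholders following the question's pattern):
# Forward local port 8080 to the Jenkins container on the EC2 host over SSH.
ssh -i "your-key.pem" -L 8080:localhost:8080 ec2-user@xxxx.us-east-2.compute.amazonaws.com
# Then browse to http://localhost:8080 on your workstation; port 8080 can stay closed in the security group.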

Kubernetes on AWS using Kops - kube-apiserver authentication for kubectl

I have set up a basic 2-node k8s cluster on AWS using kops. I have issues connecting to and interacting with the cluster using kubectl, and I keep getting the error:
The connection to the server api.euwest2.dev.avi.k8s.com was refused - did you specify the right host or port?
when trying to run any kubectl command.
I have done a basic kops export kubecfg --name xyz.hhh.kjh.k8s.com --config=~$KUBECONFIG to export the kubeconfig for the cluster I created. What else am I missing to make a successful connection to the kube-apiserver so that kubectl works?
Sounds like one of the following:
1) Your kube-apiserver is not running. Check with docker ps -a | grep apiserver on your Kubernetes master.
2) api.euwest2.dev.avi.k8s.com is resolving to an IP address where nothing is listening (208.73.210.217, perhaps?).
3) You have the wrong port configured for your kube-apiserver in your ~/.kube/config (server: https://api.euwest2.dev.avi.k8s.com:6443, perhaps?).
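A few quick checks, one per hypothesis above (the hostname and port are taken from the question; adjust them to your cluster):
# 1) Is the apiserver container up on the master?
docker ps -a | grep apiserver
# 2) Does the API hostname resolve to your master/ELB, and is anything listening on the API port?
dig +short api.euwest2.dev.avi.k8s.com
nc -vz api.euwest2.dev.avi.k8s.com 443
# 3) Which server URL (and port) is kubectl actually using?
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'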

Unable to install Kubernetes dashboard on AWS

I'm trying to install the Kubernetes dashboard on an AWS Linux image, but I'm getting JSON output in the browser. I have run the dashboard commands and provided the token, but it did not work.
Kubernetes 1.14+
1) Open a terminal on your workstation (standard SSH tunnel to port 8002):
$ ssh -i "aws.pem" -L 8002:localhost:8002 ec2-user@ec2-50-50-50-50.eu-west-1.compute.amazonaws.com
2) When you are connected, type:
$ kubectl proxy -p 8002
3) Open the following link in a web browser to access the dashboard endpoint: http://localhost:8002/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Try this:
$ kubectl proxy
Open the following link with a web browser to access the dashboard endpoint:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
More info
I had a similar issue reaching the dashboard when following your linked tutorial.
One way of approaching your issue is to change the type of the service to LoadBalancer:
Exposes the service externally using a cloud provider's load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
For that use:
kubectl get services --all-namespaces
kubectl edit service kubernetes-dashboard -n kube-system -o yaml
and change the type to LoadBalancer.
Wait until the ELB gets spawned (it takes a couple of minutes) and then run
kubectl get services --all-namespaces again; you will see the address of your dashboard service and you will be able to reach it under "EXTERNAL-IP".
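Equivalently, instead of kubectl edit you can patch the service type directly (assuming the dashboard service is named kubernetes-dashboard in kube-system, as above):
kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"LoadBalancer"}}'
kubectl -n kube-system get service kubernetes-dashboard -w   # wait for the EXTERNAL-IP to be populated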
As for the tutorial you posted: it is from 2016, and it turns out something went wrong with the /ui part of the address URL; you can read more about it in this GitHub issue. There is a claim that you should use /ui after authentication, but that does not work either.
With the default ClusterIP settings you will be able to reach the dashboard at this address:
'YOURHOSTNAME'/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Another option is to delete the old dashboard:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Install the official one:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Run kubectl proxy and reach it on localhost using:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview