Connection refused with kubectl on EC2 Ubuntu instance? - amazon-web-services

I installed Docker, Go and Kubernetes on an Ubuntu EC2 AWS instance using this guide: http://kubernetes.io/docs/getting-started-guides/aws/
I have kubectl installed but when I run a test:
kubectl run my-nginx --image=nginx --replicas=2 --port=80
I receive an error:
The connection to the server localhost:8080 was refused - did you
specify the right host or port?
How do I specify the host or port?

It looks like you haven't configured kubectl.
You need to either place a proper kubeconfig into ~/.kube/config or pass one on each call, like:
kubectl --kubeconfig=kubeconfig run my-nginx --image=nginx --replicas=2 --port=80
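For reference, a minimal kubeconfig looks roughly like this; every value below (server address, CA data, token, names) is a placeholder that you have to replace with the real values from your cluster:
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://<api-server-address>:6443
    certificate-authority-data: <base64-encoded-ca-cert>
  name: my-cluster
contexts:
- context:
    cluster: my-cluster
    user: admin
  name: admin@my-cluster
current-context: admin@my-cluster
users:
- name: admin
  user:
    token: <bearer-token>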

Related

How to expose app running on kind cluster in ec2 localhost port 3000 to external link?

I have an argocd server running on a kind cluster. That kind cluster runs in a container on an EC2 instance, and the EC2 instance sits behind an ALB with an external URL.
Right now traffic makes it to the EC2 layer, but the server only listens on localhost:3000. When I SSH into my EC2 instance and run this curl command, it gives me what I want:
curl --insecure -L localhost:3000
This is what it looks like in argocd
kubectl port-forward svc/argocd-server -n argocd 3000:443
Forwarding from 127.0.0.1:3000 -> 8080
Forwarding from [::1]:3000 -> 8080
(Also, an additional question: I'm not sure where the 8080 is coming from; I think it's the argocd pod.)
I am trying to access the UI externally, from outside the EC2 instance. How do I get that to work? Do I need to expose port 3000 through the ALB, with a TCP target group and listener, or is there something else?
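One thing to note: kubectl port-forward binds to 127.0.0.1 by default, which is why the UI is only reachable from inside the EC2 instance (the 8080 in the output is most likely the container port on the argocd-server pod that the Service's port 443 targets). A quick, non-production sketch of making the forward reachable by an ALB target group on instance port 3000 is to bind it to all interfaces:
kubectl port-forward svc/argocd-server -n argocd --address 0.0.0.0 3000:443
You would still need an ALB listener and target group pointing at instance port 3000, and the instance security group must allow traffic from the ALB on that port.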

How can I use kubectl commands on my cluster hosted on ec2 instances from my local machine without ssh

I want to be able to run kubectl commands against my master EC2 instance from my local machine without SSH. I tried copying .kube to my local machine, but the kubeconfig points at the private network address, so kubectl cannot connect from outside.
Here is what I tried:
user@somehost:~/aws$ scp -r -i some-key.pem ubuntu@some.ip.0.0:.kube/ .
user@somehost:~/aws$ cp -r .kube $HOME/
user@somehost:~/aws$ kubectl version
and I got:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp some.other.ip.0:6443: i/o timeout
Is there a way to change the kubeconfig so that when I run kubectl commands locally, they are executed against the master on the EC2 instance?
You have to change the clusters.cluster.server key in your kubeconfig to an externally accessible address.
For this, the VM running your master node must have an external IP assigned.
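For example, the relevant part of ~/.kube/config would then look something like this (the address below is a placeholder):
clusters:
- cluster:
    certificate-authority-data: <unchanged>
    server: https://<master-external-ip>:6443
  name: kubernetes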
Depending on how you provisioned your cluster, you may also need to add an additional name (SAN) to the Kubernetes API server certificate.
With kubeadm you can just reset the cluster with
kubeadm reset
on all nodes (including master), and then
kubeadm init --apiserver-cert-extra-sans=<master external IP>
Alternatively, you can issue your commands with the --insecure-skip-tls-verify flag, e.g.
kubectl --insecure-skip-tls-verify get pods

Kubernetes on AWS using Kops - kube-apiserver authentication for kubectl

I have set up a basic 2-node k8s cluster on AWS using kops. I have had issues connecting to and interacting with the cluster using kubectl, and I keep getting the error:
The connection to the server api.euwest2.dev.avi.k8s.com was refused - did you specify the right host or port?
whenever I try to run any kubectl command.
I have done a basic kops export kubecfg --name xyz.hhh.kjh.k8s.com --config=~$KUBECONFIG to export the kubeconfig for the cluster I created. Not sure what else I'm missing to make a successful connection to the kube-apiserver so that kubectl works?
Sounds like either:
Your kube-apiserver is not running.
Check with docker ps -a | grep apiserver on your Kubernetes master.
api.euwest2.dev.avi.k8s.com is resolving to an IP address where nothing is listening.
(208.73.210.217, perhaps?)
You have the wrong port configured for your kube-apiserver in your ~/.kube/config.
(server: https://api.euwest2.dev.avi.k8s.com:6443?)
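A few quick checks (not part of the original answer) can help narrow this down; adjust the hostname and port to whatever is in your kubeconfig:
dig +short api.euwest2.dev.avi.k8s.com      # what does the API hostname resolve to?
nc -zv api.euwest2.dev.avi.k8s.com 443      # is anything listening on the API server port?
kubectl config view --minify                # which server and port is kubectl actually using?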

Unable to install Kubernetes dashboard on AWS

I'm trying to install the Kubernetes dashboard on an AWS Linux image, but I'm getting JSON output in the browser. I have run the dashboard commands and supplied the token, but it did not work.
Kubernetes 1.14+
1) Open a terminal on your workstation (standard SSH tunnel to port 8002):
$ ssh -i "aws.pem" -L 8002:localhost:8002 ec2-user@ec2-50-50-50-50.eu-west-1.compute.amazonaws.com
2) When you are connected, type:
$ kubectl proxy -p 8002
3) Open the following link with a web browser to access the dashboard endpoint: http://localhost:8002/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
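If the dashboard then asks for a token, one common way to fetch it (secret names vary depending on how the dashboard was installed) is:
$ kubectl -n kube-system get secret | grep dashboard-token
$ kubectl -n kube-system describe secret <name-of-the-dashboard-token-secret>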
Try this:
$ kubectl proxy
Open the following link with a web browser to access the dashboard endpoint:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
More info
I had a similar issue reaching the dashboard when following your linked tutorial.
One way of approaching your issue is to change the type of the service to LoadBalancer:
Exposes the service externally using a cloud provider’s load balancer.
NodePort and ClusterIP services, to which the external load balancer
will route, are automatically created.
For that use:
kubectl get services --all-namespaces
kubectl edit service kubernetes-dashboard -n kube-system -o yaml
and change the type to LoadBalancer.
Wait till the ELB gets spawned (takes a couple of minutes) and then run
kubectl get services --all-namespaces again; you will see the address of your dashboard service and will be able to reach it via the external address shown there.
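If you prefer not to open an editor, the same type change can be made in one go (assuming the dashboard Service lives in kube-system, as above):
kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec": {"type": "LoadBalancer"}}'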
As for the tutorial you posted: it is from 2016, and it turns out something went wrong with the /ui part of the address URL; you can read more about it in this GitHub issue. There is a claim that you should use /ui after authentication, but that does not work either.
For the default settings of ClusterIP you will be able to reach the dashboard on this address:
'YOURHOSTNAME'/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Another option is to delete the old dashboard:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Install the official one:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Run kubectl proxy and reach it on localhost using:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview

Kubectl is not working on AWS EC2 instance

I am unable to get kubectl working on an AWS EC2 instance (Amazon AMI and Ubuntu).
After installing kops and kubectl, I tried to check the kubectl version, but it throws the error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I have already opened the ports, but I am still getting the same error.
I have also installed Minikube, but I am still facing the same issue.
This is because your ~/.kube/config file is not correct. Configure it correctly so that you can connect to your cluster using kubectl.
kubectl is the tool to control your cluster; the cluster itself can be created by kops, for example.
If you already have a cluster and want to manage it from a host that was not used for the initialization, you should export your kubeconfig with the kops export kubecfg command on the node where kops is configured.
If not, initialize the cluster first, and kops will set up the kubectl configuration for you automatically.
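For example (cluster name and state store below are placeholders):
kops export kubecfg --name <cluster-name> --state s3://<kops-state-store>
kubectl get nodes    # should now talk to the cluster instead of localhost:8080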
If you built the cluster yourself with kubeadm, follow the advice that kubeadm init prints at the end and run (admin.conf is the kubeconfig that kubeadm generates):
sudo cp /etc/kubernetes/admin.conf $HOME/config
sudo chown $(id -u):$(id -g) $HOME/config
export KUBECONFIG=$HOME/config
~/.kube/config is your missing file.
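Once that file is in place, you can sanity-check it with, for example:
kubectl config view     # confirm which config kubectl is actually reading
kubectl cluster-info    # should now reach the API server instead of localhost:8080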