kubectl error while querying URL of a service in minikube

I have been trying to run the echo server program (hello world) as part of learning minikube with kubectl.
I was able to run and expose the service using the commands below:
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
However, while trying to get the URL of the above service with minikube service hello-minikube --url, I got the error below:
Error: unknown command "service" for "kubectl"
Did anyone face a similar issue?

I have been following this document https://github.com/kubernetes/minikube
I also searched for a solution and found a workaround for the time being, using the following commands:
minikube ip
gives me the VM IP
kubectl get services
gives the port (NodePort) of the hello-minikube service. So now I'm able to hit it in the browser at http://192.168.99.100:32657/
Note: Your IP address may be different.
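If you want the same workaround as a single command, something like the sketch below should work (it assumes the service exposes only one port, so the first nodePort in the jsonpath output is the right one):
curl "http://$(minikube ip):$(kubectl get svc hello-minikube -o jsonpath='{.spec.ports[0].nodePort}')"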
Approach 2:
Found an easy option that opens the service in the browser itself:
minikube service hello-minikube
Hope this helps if someone faces a similar issue.

What command are you running? You should be running
$ minikube service hello-minikube --url
directly, without the kubectl prefix.

Use the command below:
.\minikube.exe service hello-minikube --url

Related

Testing a connection out from within a running container. Kubernetes. Amazon Linux 2

I am trying to test an outbound connection from within an Amazon Linux 2 container that is running in Kubernetes. I have a service set up and I am able to telnet to that service through a VPN. But I want to test a connection coming out from that container. Is there a way this can be done?
I have tried ping, etc., but the commands all say "command not found".
Is there any command I can run that can test an outbound connection?
Please provide more context. What exact image are you running? When debugging connectivity of Kubernetes pods and services, you can exec into the pod with
kubectl exec -it <pod_name> -n <namespace> -- <bash|ash|sh>
Once you gain access to the pod and have a shell inside, you can update and upgrade packages with the package manager (apt, yum, depending on the distro).
After upgrading, you can install curl and try to curl an external site.
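As a rough sketch of that flow, assuming an Amazon Linux 2 based image (which uses yum; package names and availability may differ in your image):
kubectl exec -it <pod_name> -n <namespace> -- sh
# inside the pod:
yum update -y
yum install -y curl bind-utils    # curl, plus nslookup/dig for DNS checks
curl -v https://example.com       # test an outbound HTTPS connection
nslookup example.com              # confirm DNS resolution works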

Cannot access localhost

I deployed an application on Google Cloud (GKE). In order to access its UI, I did port-forwarding (port 9090). When I use the Cloud Shell web preview, I can access the UI. However, when I try to open localhost:9090 in my browser, I cannot access it. Do you know why I cannot access it from my browser? Is this normal?
Thank you!
Answer provided in the comments by a community member.
Do you know why I cannot access it from my browser? Is this normal?
Cloud Shell is where you're running kubectl port-forward. Port forwarding only applies to the host on which the command is run unless you have a chain of port-forwarding commands. If you want to access the UI from your local host, then you will need to run the kubectl port-forward on your local host too.
So how can I run the kubectl port-forward command on my local host for the application that I deployed to the cloud? Should I install the Google Cloud CLI on my local machine?
I assumed (!) that you're using kubectl port-forward on Cloud Shell. If that's correct, then you need to install kubectl on your local machine to run it there. Because of the way that GKE authenticates, it may also be prudent to install gcloud on your local machine. You can then use gcloud container clusters get-credentials ... to create a local Kubernetes (GKE) config file on your local machine that is then used by kubectl commands.
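A minimal sketch of that local setup (cluster name, zone, project, and service name are placeholders; the service and port come from your own deployment):
# on your local machine, after installing gcloud and kubectl
gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
kubectl port-forward svc/<your-service> 9090:9090
# now http://localhost:9090 is reachable from your local browser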

kubernetes-dashboard: I get a 404 error if I try to access it externally

I'm trying to run the dashboard on Kubernetes v1.21 on AWS EKS.
The Kubernetes dashboard works well using kubectl proxy.
But through Istio I get a 404 error.
https://jazz.lnk/kubedash/dashboard/
Perhaps the URL ../dashboard/.. isn't correct? I may also have made a mistake in the configuration.
I run the dashboard with this configuration: https://gist.github.com/rmalenko/fc7c0f515cafa578f14dcafee1654f9c

Kubernetes/AWS - ELB is not deleted when the service is deleted

I'm using the --cloud-provider=aws flag to integrate Kubernetes with AWS. I'm able to create and expose a service via ELB using the following commands:
kubectl run sample-nginx --image=docker.io/nginx --port=80
kubectl expose deployment/sample-nginx --port=80 --target-port=80 \
--name=sample-nginx-service --type=LoadBalancer && \
kubectl annotate svc sample-nginx-service \
service.beta.kubernetes.io/aws-load-balancer-internal=0.0.0.0/0
This exposes the nginx service on an internal ELB in a private subnet. I'm able to access the service on the ELB as well.
Now, when I delete the service, the service is deleted, but the ELB is not. Here's how I deleted the service:
kubectl delete services sample-nginx-service
Any pointers to what could be going wrong? I did not see any errors in the kube-controller-manager log when I ran the deletion command. What other logs should I be checking?
Upgrading to etcd v3.0.17 from v3.0.10 fixed the issue. I found another log message in the controller logs which pointed to the issue here:
https://github.com/kubernetes/kubernetes/issues/41760
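If you want to check the same logs, a rough way to pull them (this assumes the controller manager runs as a pod in kube-system; on some setups it runs under systemd on the master instead):
kubectl -n kube-system get pods | grep controller-manager
kubectl -n kube-system logs <kube-controller-manager-pod-name> | grep -i loadbalancer
# if it runs under systemd on the master:
# journalctl -u kube-controller-manager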
You seem to have a typo in your service delete command.
It should be kubectl delete services sample-nginx-service.
I just tried this on my AWS cluster and confirmed that the ELB was deleted successfully after running this.
I'm running Kubernetes 1.6.2 with kubectl 1.6.3
Can you do the same process but with a config file and see if it's command or config related?
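For reference, a sketch of roughly what a config-file equivalent of the expose + annotate commands might look like (the run: sample-nginx selector assumes the default label that kubectl run applied; double-check it against your deployment's labels):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: sample-nginx-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  type: LoadBalancer
  selector:
    run: sample-nginx
  ports:
  - port: 80
    targetPort: 80
EOF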

How do I bind my kubernetes cluster master to an elastic ip with AWS?

I ran the install script:
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash
And it set up my cluster just fine. The problem is that the master was not on an elastic IP, so I went to the VPC settings in the AWS management console and bound it to one. This obviously changed the IP of the master, which I correspondingly changed in .kube/config.
Now whenever I try to do anything with kubectl (e.g. kubectl get pods) I get the error: error: couldn't read version from server: Get https://NEW_IP/api: x509: certificate is valid for OLD_IP, 10.0.0.1, not NEW_IP.
Is there a correct way to bind the master to an elastic IP? How do I fix this?
Solved by running kube-down, then export MASTER_RESERVED_IP=[NEW_IP], and then kube-up.
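Roughly, that sequence looks like the sketch below (the paths assume the kubernetes release directory that the get.k8s.io installer unpacked; MASTER_RESERVED_IP is read by the provider scripts when the cluster is brought up, and the Elastic IP placeholder is yours to fill in):
cd kubernetes/cluster
./kube-down.sh
export KUBERNETES_PROVIDER=aws
export MASTER_RESERVED_IP=<your-elastic-ip>
./kube-up.sh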