How do I make kubectl use a proxy server? - kubectl

I have an EKS cluster running with a private endpoint (set up using the steps mentioned here). I can run kubectl from the worker node.
I have an inbound endpoint (in Route 53) pointing to my worker, and from my laptop I am able to resolve the cluster endpoint with dig @<inbound_ip> <cluster_endpoint>.xyz.region.eks.amazonaws.com.
I wanted to know if there is a way to make kubectl use a proxy so that it resolves to the inbound_ip. I tried export HTTPS_PROXY=https://<inbound_ip>:443, but that did not work (I want to do this without setting up a DNS forwarder).
I looked at this question, kubectl behind a proxy, but that is more about minikube, and I couldn't find anything else related to this. Any help will be greatly appreciated. Thanks.
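For reference, newer kubectl versions (1.19+) also support a per-cluster proxy-url field in the kubeconfig, which might be worth trying as an alternative to the environment variable. A minimal sketch, with placeholder addresses:

clusters:
- cluster:
    server: https://<cluster_endpoint>.xyz.region.eks.amazonaws.com
    proxy-url: https://<inbound_ip>:443
  name: eks-private

Note that a proxy only forwards the connection; by itself it does not make the private name resolve to the inbound IP, so the proxy host still needs to be able to reach the private endpoint.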

Related

AWS LoadBalancer is not accessible

I have a solution (AnzoGraph DB) deployed on my AWS Kubernetes cluster (EC2 instances), and it was working totally fine.
Suddenly this solution stopped working and I could not access it via its DNS name anymore.
I tested the solution deployed on my cluster using the kubectl port-forward command and the pods and services are working fine, so I assume the problem is with the AWS LoadBalancer.
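(For reference, the port-forward test looked roughly like this; the service name and ports are hypothetical:
kubectl port-forward svc/anzograph 8443:443
and then browsing to https://localhost:8443.)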
To access the application we need to go through this path:
Request -> DNS -> AWS Load Balancer -> Services -> Pods.
The LoadBalancer is internal (classic), so it's only accessible to me or the company over VPN.
Every time I try to access the DNS name, I get no response.
Any idea how I can fix it, or where the exact issue is? How can I troubleshoot this and follow the traffic on AWS?
Thanks a lot for the help!
Sorry I missed your post earlier.
Let's start with a few questions...
You say you use k8s on AWS EC2; do you actually use EKS, or do you run a different k8s stack?
Also... you mentioned that you access the LB from your (DB) client/your software by DNS-resolving the LB and then accessing AnzoGraph DB.
I want to make sure that your solution is actually resolving the LB via DNS every time. If you have a long-running service, AWS changes the IP address of the LB, and your software has cached the old IP, you will not be able to connect to the LB.
On the system where you run the software accessing AnzoGraph DB (I assume CentOS 7):
make sure you have dig installed (yum install bind-utils)
dig {{ your DNS name of your LB }}
Is that actually the IP address your software is accessing?
Has the IP address of the client changed? Make sure the LB SG allows access
(https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-groups.html)
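For example, a quick check from the client box (the LB name below is hypothetical):

dig +short internal-anzo-lb-1234567890.us-east-1.elb.amazonaws.com
curl -vk https://internal-anzo-lb-1234567890.us-east-1.elb.amazonaws.com/

If dig returns an IP your software is not using, it has cached a stale address; if curl times out, the SG is the more likely culprit.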
I assume you access the AnzoGraph DB frontend pod via 443?
As you write
"I tested the solution deployed on my cluster using kubectl port-forward command and they are working fine (the pods and services)"
we probably do not have to look at pod logs.
(If that were not the case, the LB would appear to block traffic as well.)
So I agree that the most likely issue is (bad) DNS caching, or a different source IP being rejected by the classic LB's SG.
Also, for completeness... please tell us more about your environment:
AnzoGraph DB image
EKS/k8s version
helm chart / AnzoGraph operator used.
Best - Frank

Istio default installation - traffic blocked?

I'm quite new to Istio.
I just installed Istio on a k8s cluster in GCP. I have 2 services in my private cluster. One of them needs to talk to a Redis Memorystore (over an internal private IP, 10.x.x.x).
I'm seeing errors trying to connect to Redis. What am I missing in my Istio configuration?
Update: I have found that the Redis error is misleading. The real issue, it seems, is something else; see one of my comments below. I don't understand what that error means.
Some additional background: this is for a Tyk installation. The issue, it seems, is communication between the Tyk Dashboard and Tyk Gateway pods. I'm seeing the SSL error (see comments below) when trying to connect from the Gateway to the Dashboard (Dashboard to Gateway is fine). The error goes away if I rebuild everything without Istio. I must be doing something really silly. :( Both pods are in the same cluster, same namespace.
I managed to fix the issue. Redis wasn't the problem. The issue was that communication from the Tyk Gateway to the Tyk Dashboard was failing. The gateway talks to the dashboard to register its presence. The connection logs showed what looked like a TLS-origination issue with the Istio Envoy proxy when it routes the traffic. I configured a DestinationRule that explicitly turned off mTLS for the dashboard, and the problem went away.
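For anyone hitting the same thing, a minimal sketch of such a DestinationRule; the host and namespace here are hypothetical, so substitute your own Dashboard service:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tyk-dashboard-no-mtls
  namespace: tyk
spec:
  host: dashboard.tyk.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE

This tells the Gateway's Envoy sidecar to send plain traffic to the Dashboard instead of attempting TLS origination.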

AWS EKS The connection to the server ASFASF.da2.ap-northeast-1.eks.amazonaws.com was refused - did you specify the right host or port?

I'm trying to create a Kubernetes cluster, but whatever method I try, kubectl fails in the end with
The connection to the server ASFASF.da2.ap-northeast-1.eks.amazonaws.com was refused - did you specify the right host or port?
I already tried:
1) Install with Terraform (following all official docs)
2) Manual EKS installation through the UI
3) Install with the eksctl tool
It all ends with this error. I already tried tweaking all possible subnets, roles, users, routes, and DNS settings, basically everything that might help according to other SO threads and GitHub issues, but no success.
What am I missing?
inb4: yes, I tried updating the kubectl config and specifying the role there,
like here:
https://github.com/weaveworks/eksctl/issues/1510
https://medium.com/@savvythrough/aws-eks-auth-optimization-for-k8s-ae054be0a31b
VPN: turning it on might help.
I spent a lot of time on this issue, and it was all because of some DNS blocks in the area I was connecting from, lol.
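If you suspect the same, a quick check is to resolve the endpoint from your machine (the hostname below is the obfuscated one from the error message):

dig +short ASFASF.da2.ap-northeast-1.eks.amazonaws.com

If the lookup fails locally but succeeds over a VPN, it's a network/DNS block on your side rather than a cluster problem.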

Kubernetes DNS fails to resolve most of the time, but sometimes it works. What can I do to solve this?

Kubernetes is not able to resolve DNS. Containers/pods are not able to access the Internet.
I have a two-node Kubernetes cluster on separate AWS EC2 instances (t2.medium). Container networking was set up using:
Flannel version: flannel:v0.10.0-amd64 (image)
Kubernetes version: 1.15.3
(Screenshots omitted: DNS logs, nodes, and Kubernetes services.)
At times, when I delete the core-dns pods, the DNS issue gets resolved for a while, but it is not consistent. Please suggest what can be done. I think the flannel mapping may have something to do with this. Please let me know if any other information is needed.
Errors such as the one you get, nslookup: can't resolve 'kubernetes.default', indicate that you have a problem with the coredns/kube-dns add-on or the associated Services.
Please check if you followed these steps to debug DNS: coredns.
It also seems that DNS inside busybox does not work properly in newer images.
Try to use busybox images <= 1.28.4.
Change the pod configuration file:
containers:
- name: busybox-image
  image: busybox:1.28.3
Learn more about the most common Kubernetes DNS issues: kubernetes-dns.
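To run the basic checks quickly, a minimal sketch of the usual commands (the label and namespace are the Kubernetes defaults; adjust for your cluster):

kubectl run dnsutils --image=busybox:1.28.4 --restart=Never -- sleep 3600
kubectl exec dnsutils -- nslookup kubernetes.default
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns

If the nslookup fails while the coredns pods show Running, the coredns logs are the next place to look.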

Unable to validate Kubernetes cluster using Kops

I am new to Kubernetes. I am using Kops to deploy my Kubernetes application on AWS. I have already registered my domain on AWS, created a hosted zone, and attached it to my default VPC.
Creating my Kubernetes cluster through kops succeeds. However, when I try to validate my cluster using kops validate cluster, it fails with the following error:
unable to resolve Kubernetes cluster API URL dns: lookup api.ucla.dt-api-k8s.com on 149.142.35.46:53: no such host
I have tried debugging this error but failed. Can you please help me out? I am very frustrated now.
From what you describe, you created a Private Hosted Zone in Route 53. The validation is probably failing because Kops is trying to access the cluster API from your machine, which is outside the VPC, but private hosted zones only respond to requests coming from within the VPC. Specifically, the hostname api.ucla.dt-api-k8s.com is where the Kubernetes API lives, and is the means by which you can communicate and issue commands to the cluster from your computer. Private Hosted Zones wouldn't allow you to access this API from the outside world (your computer).
A way to resolve this is to make your hosted zone public. Kops will automatically create a VPC for you (unless configured otherwise), but you can still access the API from your computer.
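You can confirm this from your machine (outside the VPC); for a private hosted zone, a public lookup of the API name should fail:

dig +short api.ucla.dt-api-k8s.com

No output (or NXDOMAIN) means the record is not publicly resolvable, which is exactly what kops validate cluster runs into.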
I encountered this last night using a kops-based cluster-creation script that had worked previously. I thought maybe switching regions would help, but it didn't. This morning it is working again. This feels like intermittent behavior on the AWS side.
So the answer I'm suggesting is:
When this happens, you may need to give it a few hours to resolve itself. In my case, I rebuilt the cluster from scratch after waiting overnight. I don't know whether it was necessary to start from scratch; I hope not.
This is all I had to run:
kops export kubecfg (cluster name) --admin
This imports the "new" kubeconfig needed to access the kops cluster.
I came across this problem on an Ubuntu box. What I did was add the DNS record from the Route 53 hosted zone to /etc/hosts.
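A sketch of what that entry might look like (the IP below is a placeholder; use the address the Route 53 record actually points at):

203.0.113.10  api.ucla.dt-api-k8s.com

After that, kops validate cluster and kubectl can resolve the API name locally, without going through the private zone.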
Here is how I resolved the issue:
It looks like there is a bug in kops, in that it shows
Validation failed: unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api
when you try kops validate cluster even after waiting 10-15 minutes. Behind the scenes the Kubernetes cluster is up! You can verify this by SSHing into the master node of your cluster as below:
Go to the EC2 console page where your k8s instances are running.
Copy the "Public IPv4 address" of your master k8s node.
Then, from a command prompt, log in to the master node as below:
ssh ubuntu@<"Public IPv4 address" of your master k8s node>
Verify that you can see all nodes of the k8s cluster with the command below; it should list your master node and worker nodes:
kubectl get nodes