I am trying to get Bridge to Kubernetes to work with my AWS EKS cluster. The command-line kubectl commands work, and I can communicate with my EKS cluster in VS Code via the Kubernetes extension, so I believe my .kube/config is correct. However, when I hit "Kubernetes: Debug (Local Tunnel)" I get an error:
Failed to configure Bridge to Kubernetes: Failed to find any services running in the namespace <correct_namespace> of cluster <correct_cluster>
What am I missing? Everything I've seen shows that Bridge to Kubernetes should be able to connect. Is there an additional EKS security policy Bridge to Kubernetes requires to work?
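For reference, these are the kinds of checks that can be run against the same kubeconfig context to rule out RBAC problems or an empty namespace (same placeholder namespace as in the error; which checks Bridge to Kubernetes actually performs is an assumption here):
kubectl config current-context
kubectl get services -n <correct_namespace>              # Bridge to Kubernetes needs at least one service here
kubectl auth can-i list services -n <correct_namespace>  # confirms the current credentials may list services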
I am using a local setup of Jenkins.
I already have a running AWS k8s cluster.
I tried adding the kubeconfig file configuration to the Jenkins credentials.
But when I use Test Connection in Jenkins, it gives me the following error.
Then I tried to follow the steps mentioned in StackOverflow_Ticket, but even that gives me an UnknownHostException.
Any idea what is missing?
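One rough way to diagnose an UnknownHostException is to confirm that the Jenkins host can resolve and reach the API server endpoint named in the kubeconfig; the kubeconfig path and hostname below are placeholders, not values from the original setup:
# Print the API server URL from the kubeconfig stored in the Jenkins credential
kubectl --kubeconfig ./kubeconfig config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# Then, from the Jenkins host itself:
nslookup <api-server-hostname>          # UnknownHostException usually means this lookup fails
curl -k https://<api-server-hostname>   # a reachable API server normally answers, even if only with 401/403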
I'm following the kOps tutorial to set up a cluster on AWS. I am able to create a cluster with
kops create cluster
kops update cluster --yes
However, when validating whether my cluster is set up correctly with
kops validate cluster
I get stuck with error:
unexpected error during validation: error listing nodes: Unauthorized
The same error happens in many other kOps operations.
I checked my kOps/K8s version and it is 1.19:
> kops version
Version 1.19.1 (git-8589b4d157a9cb05c54e320c77b0724c4dd094b2)
> kubectl version
Client Version: version.Info{Major:"1", Minor:"20" ...
Server Version: version.Info{Major:"1", Minor:"19" ...
How can I fix this?
As of kOps 1.19 there are two reasons you will suddenly get this error:
1. If you delete a cluster and reprovision it, your old admin credential is not removed from the kubeconfig, and kOps/kubectl tries to reuse it.
2. The exported admin credentials have a TTL of 18h by default, so you need to re-export them about once a day.
Both issues above are fixed by running kops export kubecfg --admin.
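As a concrete sketch, assuming an S3 state store and substituting your own cluster name:
export KOPS_STATE_STORE=s3://<your-kops-state-bucket>
kops export kubecfg <cluster-name> --admin   # re-issues an admin credential into ~/.kube/config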
Note that using the default TLS credentials is discouraged. Consider things like using an OIDC provider instead.
Kubernetes v1.19 removed basic auth support, incidentally making the default kOps credentials unable to authorize. To work around this, we will update our cluster to use a Network Load Balancer (NLB) instead of the default Classic Load Balancer (CLB). The NLB can be accessed with non-deprecated AuthZ mechanisms.
After creating your cluster, but before updating cloud resources (before running with --yes), edit its configuration to use an NLB:
kops edit cluster
Then update your load balancer class to Network:
spec:
  api:
    loadBalancer:
      class: Network
Now update cloud resources with
kops update cluster --yes
And you'll be able to pass AuthZ with kOps on your cluster.
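It can take a few minutes for the new NLB and its DNS records to settle; something along these lines waits for that (the --wait flag is available in recent kOps releases):
kops validate cluster --wait 10m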
Note that there are several other advantages to using an NLB as well; check the AWS docs for a comparison.
If you have a pre-existing cluster you want to update to an NLB, there are more steps to follow to ensure clients don't start failing AuthZ, to delete old resources, etc. You'll find a better guide for that in the kOps v1.19 release notes.
We have a running Kubernetes cluster with a master and 3 worker nodes on Azure. Now we want to add a new node that runs on AWS. When we try to add this node to the existing cluster, we get the following error:
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
If the node is on the same cloud provider, it works fine.
Please let me know if anyone has faced the same issue.
As per the documentation here:
Please select one of the tabs to see installation instructions for the respective third-party Pod Network Provider.
The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed. kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).
So please verify the "status" of your cluster:
kubectl get nodes -o wide
kubectl get pods --all-namespaces
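As a rough guide to what to look for: the new node will stay NotReady and report the NetworkPluginNotReady condition until a pod network add-on that actually covers it is applied. The node name and manifest file below are placeholders; use the manifest URL from your chosen provider's docs:
kubectl describe node <new-aws-node> | grep -i ready   # shows the NetworkPluginNotReady condition on the new node
kubectl apply -f <your-cni-provider-manifest.yaml>     # placeholder; apply your chosen pod network provider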
For "Cross Cloud Kubernetes cluster" please take a look here
I am new to Kubernetes. I am using Kops to deploy my Kubernetes application on AWS. I have already registered my domain on AWS and also created a hosted zone and attached it to my default VPC.
Creating my Kubernetes cluster through kops succeeds. However, when I try to validate my cluster using kops validate cluster, it fails with the following error:
unable to resolve Kubernetes cluster API URL dns: lookup api.ucla.dt-api-k8s.com on 149.142.35.46:53: no such host
I have tried debugging this error but failed. Can you please help me out? I am very frustrated now.
From what you describe, you created a Private Hosted Zone in Route 53. The validation is probably failing because Kops is trying to access the cluster API from your machine, which is outside the VPC, but private hosted zones only respond to requests coming from within the VPC. Specifically, the hostname api.ucla.dt-api-k8s.com is where the Kubernetes API lives, and is the means by which you can communicate and issue commands to the cluster from your computer. Private Hosted Zones wouldn't allow you to access this API from the outside world (your computer).
A way to resolve this is to make your hosted zone public. Kops will automatically create a VPC for you (unless configured otherwise), but you can still access the API from your computer.
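A quick way to confirm this is the cause is to try resolving the API hostname from your own machine (hostname taken from the error above); a private hosted zone will return nothing outside the VPC, while the same lookup from an instance inside the VPC should return the API endpoint:
dig +short api.ucla.dt-api-k8s.com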
I encountered this last night using a kops-based cluster creation script that had worked previously. I thought maybe switching regions would help, but it didn't. This morning it is working again. This feels like an intermittency on the AWS side.
So the answer I'm suggesting is:
When this happens, you may need to give it a few hours to resolve itself. In my case, I rebuilt the cluster from scratch after waiting overnight. I don't know whether or not it was necessary to start from scratch -- I hope not.
This is all I had to run:
kops export kubecfg (cluster name) --admin
This imports the "new" kubeconfig needed to access the kops cluster.
I came across this problem on an Ubuntu box. What I did was add the DNS record from the Route 53 hosted zone to /etc/hosts.
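For reference, an /etc/hosts workaround of that kind would look roughly like this; the IP below is only a placeholder for whatever the Route 53 record for the API actually points at:
# /etc/hosts
203.0.113.10   api.ucla.dt-api-k8s.com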
Here is how I resolved the issue:
It looks like there is a bug in kOps: even though it shows
Validation failed: unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api
when you run kops validate cluster after waiting 10-15 minutes, behind the scenes the Kubernetes cluster is actually up. You can verify this by SSHing into the master node of your Kubernetes cluster as follows:
Go to the EC2 console page where your k8s instances are running.
Copy the "Public IPv4 address" of your master k8s node.
From a command prompt, log in to the master node:
ssh ubuntu@<<"Public IPv4 address" of your master k8s node>>
Verify that you can see all the nodes of the k8s cluster with the command below; it should list your master node and worker nodes:
kubectl get nodes
I have an app with several containers running just fine using Kubernetes on AWS. However, I now need to port this to an AWS Dedicated Host VPC where the cluster was previously created NOT using Kubernetes, so I am not able to execute kube-up.sh or its kops equivalent.
Is it possible to orchestrate my containers using Kubernetes on a pre-existing cluster? (i.e., have Kubernetes probe the parent AWS cluster and treat it as if Kubernetes had created it)
Of course, until this linkage is made between my calls to kubectl and the parent AWS Dedicated Host VPC, kubectl has no Kubernetes context and just times out:
kubectl create -f /my/app/goodie.yaml
Unable to connect to the server: dial tcp 34.199.89.247:443: i/o timeout
A possible alternative would be to call kube-up.sh or kops and demand that the new cluster live inside a specified AWS Dedicated Host... alas, it's not apparent that Kubernetes has this flexibility... yet!
Yes, definitely. kubectl is just a client application, and it can connect to any Kubernetes cluster and orchestrate it.
If you get an i/o timeout, you most likely have connectivity issues or some firewall/proxy in place. Did you try to access the Kubernetes API directly through curl or telnet?
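For example, roughly (IP and port taken from the error message above):
nc -vz 34.199.89.247 443                    # checks raw TCP reachability to the API server
curl -k https://34.199.89.247:443/version   # a reachable API server normally responds, even if only with 401/403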