Expose Kubernetes service to public inside Mesosphere's DCOS - amazon-web-services

Following https://www.mesosphere.com/amazon/ I created a DCOS cluster on Amazon AWS.
Then I followed http://kubernetes.io/v1.1/docs/getting-started-guides/dcos.html and installed Kubernetes on it.
Then I followed http://kubernetes.io/v1.1/docs/user-guide/quick-start.html
I was able to launch pods successfully.
Then I ran into a problem exposing the service to the public.
$ dcos kubectl expose rc my-nginx --port=80 --type=LoadBalancer
service "my-nginx" exposed
$ dcos kubectl get svc my-nginx
NAME       CLUSTER_IP    EXTERNAL_IP   PORT(S)   SELECTOR       AGE
my-nginx   10.10.10.32                 80/TCP    run=my-nginx   8s
The EXTERNAL_IP address does not exist. According to the tutorial, it should. So I'm thinking it has something to do with the fact that my Kubernetes is inside DCOS.
Please help. Thank you very much!

Kubernetes on Mesos/DCOS does not support automatic LoadBalancer creation yet.
As the quick start states:
Through integration with some cloud providers (for example Google Compute Engine and AWS EC2), Kubernetes enables you to request that it provision a public IP address for your application.
AFAIK, only GCE, GKE, and AWS support automatic LoadBalancer creation so far.
Another key difference about DCOS (compared to kubernetes) is that it comes by default with two zones: public and private. So nothing scheduled on the private nodes is externally accessible without a reverse-proxy on the public nodes.
Additionally, Kubernetes on DCOS does not yet support IP-per-container. Support for IP-per-container is under development with the DCOS/Calico integration. Some community members have also reportedly attempted using cluster-wide docker overlay networking.
For now, there are a few alternative options for reaching your pod externally:
Deploy your pod on all the public slaves (using resource role annotations) and hostPort:80. Then hit the address of the DCOS public slave AWS ELB.
Create your own load balancer nginx pod (e.g. service-loadbalancer) and schedule it on the public slaves with hostPort:80. Then hit the IP of the host node it's on.
It's definitely a priority of the Mesosphere Kubernetes Team to make this experience smoother on DCOS. Hopefully the solution will include automatic LoadBalancer creation.
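An added check, not part of the question or the answer above: the EXTERNAL_IP column in kubectl's table is rendered from status.loadBalancer.ingress of the Service object, so reading that field from `kubectl get svc my-nginx -o json` tells you unambiguously whether a cloud integration provisioned anything, and the nodePort that the API server allocates for every LoadBalancer service is what remains reachable when it did not. The two JSON documents below are constructed samples (the ELB hostname is a placeholder), not output from the poster's cluster.

```python
import json

# Constructed sample of `kubectl get svc my-nginx -o json`: a LoadBalancer service on a
# cluster whose cloud integration never provisions an external endpoint (the DCOS case
# in the answer), so status.loadBalancer.ingress stays empty and only a nodePort exists.
PENDING_SVC_JSON = """
{
  "kind": "Service",
  "metadata": {"name": "my-nginx"},
  "spec": {
    "type": "LoadBalancer",
    "clusterIP": "10.10.10.32",
    "ports": [{"port": 80, "targetPort": 80, "nodePort": 31234, "protocol": "TCP"}],
    "selector": {"run": "my-nginx"}
  },
  "status": {"loadBalancer": {}}
}
"""

# Constructed sample of the same service where the AWS integration has provisioned an
# ELB; the hostname is a placeholder, not a real load balancer.
READY_SVC_JSON = """
{
  "kind": "Service",
  "metadata": {"name": "my-nginx"},
  "spec": {
    "type": "LoadBalancer",
    "clusterIP": "10.10.10.32",
    "ports": [{"port": 80, "targetPort": 80, "nodePort": 31234, "protocol": "TCP"}],
    "selector": {"run": "my-nginx"}
  },
  "status": {"loadBalancer": {"ingress": [{"hostname": "example-elb.placeholder.invalid"}]}}
}
"""


def load_service(text):
    """Parse the JSON that `kubectl get svc <name> -o json` prints."""
    return json.loads(text)


def external_endpoints(svc):
    """Return the provisioned external endpoints (hostnames or IPs) of a Service.
    This is the field kubectl shows as EXTERNAL_IP; an empty list means nothing has
    been provisioned, either not yet or never if the cluster has no cloud integration."""
    ingress = svc.get("status", {}).get("loadBalancer", {}).get("ingress") or []
    return [entry.get("hostname") or entry.get("ip")
            for entry in ingress if entry.get("hostname") or entry.get("ip")]


def node_ports(svc):
    """Return (servicePort, nodePort) pairs. The API server allocates a nodePort for
    every LoadBalancer service, whether or not an external LB ever appears."""
    return [(p["port"], p["nodePort"])
            for p in svc["spec"].get("ports", []) if "nodePort" in p]


def exposure_status(svc):
    """Classify how a Service can be reached from outside the cluster:
    ("external", endpoints) when an LB endpoint exists,
    ("pending", nodePorts) for a LoadBalancer service without one,
    ("nodeport", nodePorts) for a plain NodePort service, else ("internal", [])."""
    svc_type = svc["spec"].get("type", "ClusterIP")
    endpoints = external_endpoints(svc)
    if endpoints:
        return ("external", endpoints)
    if svc_type == "LoadBalancer":
        return ("pending", node_ports(svc))
    if svc_type == "NodePort":
        return ("nodeport", node_ports(svc))
    return ("internal", [])


if __name__ == "__main__":
    for text in (PENDING_SVC_JSON, READY_SVC_JSON):
        svc = load_service(text)
        print(svc["metadata"]["name"], exposure_status(svc))
```

If the status stays "pending" for more than a few minutes on a platform without LoadBalancer support, it will never change; the nodePort (or the hostPort options the answer lists) is the only way in.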

Related

Istio configuration on GKE

I have some basic questions about Istio. I installed Istio for my Tyk API gateway. Then I found that simply installing Istio will cause all traffic between the Tyk pods to be blocked. Is this the default behaviour for Istio? The Tyk gateway cannot communicate with the Tyk dashboard.
When I rebuild my deployment without Istio, everything works fine.
I have also read that Istio can be configured with virtual services to perform traffic routing. Is this what I need to do for every default installing of Istio? Meaning, if I don't create any virtual services, then Istio will block all traffic by default?
Secondly, I understand a virtual service is created as a YAML file applied as a CRD. The host name defined in the virtual service rules - in a default Kubernetes cluster implementation on Google Cloud, how do I find out the host name of my application?
Lastly, if I install Tyk first, then later install Istio, and I have created the necessary label in Tyk's namespace for the proxy to be injected, can I just perform a rolling upgrade of my Tyk pods to have Istio start the injection?
For example, I have these labels in my Tyk dashboard service. Do I use the value called "app" in my virtual service YAML?
labels:
  app: dashboard-svc-tyk-pro
  app.kubernetes.io/managed-by: Helm
  chart: tyk-pro-0.8.1
  heritage: Helm
  release: tyk-pro
Sorry for all the basic questions!
For the question on why the Tyk gateway cannot communicate with the Tyk dashboard:
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, and thus the connection can't be established.
Istio runs an init container that configures the pod's route table so all traffic is routed through the sidecar. So if the sidecar isn't running and the other pod tries to connect to the db, no connection can be established. Example case: Application running in Kubernetes cron job does not connect to database in same Kubernetes cluster.
For question on Virtual Services
Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh.
By default, Istio configures the Envoy proxies to passthrough requests to unknown services. However, you can’t use Istio features to control the traffic to destinations that aren’t registered in the mesh.
For the question on the hostname, refer to this documentation.
The hosts field lists the virtual service’s hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.
Adding Istio on GKE to an existing cluster please refer to this documentation.
If you want to update a cluster with the add-on, you may need to first resize your cluster to ensure that you have enough resources for Istio. As when creating a new cluster, we suggest at least a 4 node cluster with the 2 vCPU machine type. If you have an existing application on the cluster, you can find out how to migrate it so it's managed by Istio as mentioned in the Istio documentation.
You can uninstall the add-on by following the document, which includes how to shift traffic away from the Istio ingress gateway. Please take a look at this doc for more details on installing and uninstalling Istio on GKE.
Also adding this document for installing Istio on GKE which also includes installing it to an existing cluster to quickly evaluate Istio.
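The sidecar race the answer describes (the application opens its database connection before Envoy has taken over the pod's traffic) is usually handled on the application side by retrying connection-level failures with backoff at startup. The block below is an added example, not code from the question, the answer, Tyk or Istio; `FlakyDependency` is a constructed stand-in that refuses the first few connections the way a not-yet-ready sidecar does, and `tcp_connector` shows where a real driver connect would plug in.

```python
import socket
import time


class DependencyUnavailable(RuntimeError):
    """Raised when a dependency stayed unreachable for the whole retry budget."""


def connect_with_retry(connect, attempts=8, base_delay=0.5, max_delay=8.0, sleep=time.sleep):
    """Call connect() until it returns a connection or the retry budget is spent.

    Connection-level failures (refused, reset, timed out: all OSError subclasses) are
    retried with exponential backoff, because during pod startup they usually mean the
    sidecar has not finished initialising rather than that the database is down.
    Any other exception (bad credentials, a bug) propagates immediately: retrying it
    would only hide a real problem.
    """
    delay = base_delay
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except OSError as exc:
            last_error = exc
            if attempt == attempts:
                break
            sleep(delay)
            delay = min(delay * 2, max_delay)
    raise DependencyUnavailable(f"gave up after {attempts} attempts: {last_error!r}")


def tcp_connector(host, port, timeout=3.0):
    """Return a zero-argument connect() for a plain TCP dependency such as a database
    port. A real application would call its database driver here instead."""
    def connect():
        return socket.create_connection((host, port), timeout=timeout)
    return connect


class FlakyDependency:
    """Constructed stand-in for a database behind a sidecar that is not ready yet:
    the first `failures` connection attempts are refused, later ones succeed."""

    def __init__(self, failures):
        self.failures = failures
        self.calls = 0

    def connect(self):
        self.calls += 1
        if self.calls <= self.failures:
            raise ConnectionRefusedError("sidecar not ready, traffic dropped")
        return "connected"


if __name__ == "__main__":
    dep = FlakyDependency(failures=3)
    # sleep is stubbed out here so the demonstration finishes instantly
    result = connect_with_retry(dep.connect, sleep=lambda seconds: None)
    print(result, "after", dep.calls, "attempts")
```

The retry budget bounds how long a pod waits before crashing (and being restarted by Kubernetes), which is preferable to a pod that sits in a half-initialised state forever.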

standalone network endpoint group (NEG) on GKE not working

I am running a minimal stateful database service on GKE, single node cluster. I've set up a database as a stateful set on a single pod as of now. The database has exposed a management console on a particular port along with the mandatory database port. I am attempting to do two things:
expose management port over a global HTTP(S) load balancer
expose database port outside of GKE to be consumed by the likes of Cloud Functions or App Engine Applications.
My stateful set is running fine and I can see from the container logs that the database is properly booted up and is listening on the required ports.
I am attempting to set up a standalone NEG (ref: https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg) using a simple ClusterIP service.
The cluster service comes up fine and I can see it using
kubectl get service service-name
but I don't see the NEG set up as such... the following command returns nothing
$ gcloud compute network-endpoint-groups list
Listed 0 items.
My pod exposes port 8080, my service maps 51000 to 8080, and I have provided the NEG annotation
cloud.google.com/neg: '{"exposed_ports": {"51000":{}}'
I don't see any errors as such but neither do I see a NEG created/listed.
Any suggestions on how I would go about debugging this?
As a followup question...
When exposing a NEG over a global load balancer, how do I enforce authn?
I'm OK with either service account roles or oauth/openid.
Would I be able to expose multiple ports using a single NEG? For e.g. if I wanted to expose one port to my global load balancer and another to local services, is this possible with a single NEG or should I expose each port using a dedicated ClusterIP service?
Where can I find documentation/specification for Google Kubernetes annotations? I tried to expose two ports on the NEG using the following annotation syntax. Is that even supported/meaningful?
cloud.google.com/neg: '{"exposed_ports": {"51000":{},"51010":{}}'
Thanks in advance!
In order to create the service that is backed by a network endpoint group, you need to be working on a GKE Cluster that is VPC Native:
https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg#before_you_begin
When you create a new cluster, this option is disabled by default and you must enable it upon creation. You can confirm if your cluster is VPC Native going to your Cluster details in GKE. It should appear like this:
VPC-native (alias IP) Enabled
If the cluster is not VPC Native, you won’t be able to use this feature as described on their restrictions:
https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#restrictions
In case you have VPC Native enabled, make sure as well that the pods have the same labels “purpose:” and “topic:” to make sure they are members of the service:
kubectl get pods --show-labels
You can also create multi-port services as it is described on Kubernetes documentation:
https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
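Two of the things the answer says to verify (the service's selector really matches the pod labels, and a multi-port service exposing more than one port in the NEG) can be checked from the manifests before looking at the cloud side. The Python below is an added example, not from the question, the answer or GKE; the service and pod dicts are constructed to mirror the question's layout (51000 to 8080 for the console, plus a second port 51010 to 8081 for the database) and are not the poster's objects. One thing worth noting: the `cloud.google.com/neg` value is JSON, and as it appears on this page the annotation in the question is missing a closing brace; whether that is a copy artifact or the real value cannot be told from the page, but it is exactly the kind of error a parse check catches before anything else is debugged. The VPC-native requirement cannot be checked from manifests and still has to be confirmed in the cluster details as the answer describes.

```python
import copy
import json

NEG_ANNOTATION = "cloud.google.com/neg"

# Constructed ClusterIP service (as `kubectl get svc -o json` would return it), not the
# poster's object: two ports, both exposed in one NEG through the annotation.
SERVICE = {
    "metadata": {
        "name": "db-svc",
        "annotations": {NEG_ANNOTATION: '{"exposed_ports": {"51000":{},"51010":{}}}'},
    },
    "spec": {
        "type": "ClusterIP",
        "selector": {"purpose": "db", "topic": "management"},
        "ports": [
            {"name": "console", "port": 51000, "targetPort": 8080},
            {"name": "db", "port": 51010, "targetPort": 8081},
        ],
    },
}

# Constructed pod list (as `kubectl get pods -o json` items); the labels carry the
# "purpose" / "topic" pair the answer mentions, so the selector above matches it.
PODS = [
    {"metadata": {"name": "db-0",
                  "labels": {"purpose": "db", "topic": "management",
                             "statefulset.kubernetes.io/pod-name": "db-0"}}},
]


def with_annotation(service, value):
    """Copy of `service` with a different NEG annotation value (used for the broken case)."""
    clone = copy.deepcopy(service)
    clone["metadata"]["annotations"][NEG_ANNOTATION] = value
    return clone


# Same service with the annotation exactly as it appears in the question: one brace short.
BROKEN_SERVICE = with_annotation(SERVICE, '{"exposed_ports": {"51000":{}}')


def parse_exposed_ports(annotation_value):
    """Parse the cloud.google.com/neg annotation and return the exposed port numbers."""
    try:
        data = json.loads(annotation_value)
    except json.JSONDecodeError as exc:
        raise ValueError(f"annotation is not valid JSON: {exc}") from None
    exposed = data.get("exposed_ports") if isinstance(data, dict) else None
    if not isinstance(exposed, dict) or not exposed:
        raise ValueError("annotation has no non-empty 'exposed_ports' object")
    return {int(port) for port in exposed}


def selector_matches(selector, labels):
    """A selector matches when every selector label is present with the same value."""
    return all(labels.get(key) == value for key, value in selector.items())


def check_neg_service(service, pods):
    """Return a list of findings; an empty list means the manifests are consistent."""
    annotations = service["metadata"].get("annotations", {})
    if NEG_ANNOTATION not in annotations:
        return [f"service lacks the {NEG_ANNOTATION} annotation"]
    try:
        exposed = parse_exposed_ports(annotations[NEG_ANNOTATION])
    except ValueError as exc:
        return [str(exc)]
    findings = []
    service_ports = {p["port"] for p in service["spec"].get("ports", [])}
    for port in sorted(exposed - service_ports):
        findings.append(f"exposed port {port} is not one of the service ports {sorted(service_ports)}")
    selector = service["spec"].get("selector", {})
    matched = [p["metadata"]["name"] for p in pods
               if selector_matches(selector, p["metadata"].get("labels", {}))]
    if not matched:
        findings.append(f"no pod carries all selector labels {selector}; the NEG would have no endpoints")
    return findings


if __name__ == "__main__":
    print("good service:", check_neg_service(SERVICE, PODS))
    print("broken annotation:", check_neg_service(BROKEN_SERVICE, PODS))
    print("no matching pods:", check_neg_service(SERVICE, []))
```

In practice the two dicts come from `kubectl get svc <name> -o json` and `kubectl get pods -o json` (the `items` list), and any finding is something to fix before expecting `gcloud compute network-endpoint-groups list` to show anything.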

Issue while adding node in existing cluster from different cloud provider in kubernetes?

We have a running Kubernetes cluster with a master and 3 worker nodes on Azure cloud. Now we want to add a new node which is running on AWS cloud. When we tried to add this node into the existing cluster we got the following error:
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
But if the node is on the same cloud provider then it works fine.
Please let me know if anyone faced the same issue.
As per the documentation here:
Please select one of the tabs to see installation instructions for the respective third-party Pod Network Provider.
The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed. kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).
So please verify the "status" of your cluster:
kubectl get nodes -o wide
kubectl get pods --all-namespaces
For "Cross Cloud Kubernetes cluster" please take a look here.

Unable to validate Kubernetes cluster using Kops

I am new to Kubernetes. I am using Kops to deploy my Kubernetes application on AWS. I have already registered my domain on AWS and also created a hosted zone and attached it to my default VPC.
Creating my Kubernetes cluster through kops succeeds. However, when I try to validate my cluster using kops validate cluster, it fails with the following error:
unable to resolve Kubernetes cluster API URL dns: lookup api.ucla.dt-api-k8s.com on 149.142.35.46:53: no such host
I have tried debugging this error but failed. Can you please help me out? I am very frustrated now.
From what you describe, you created a Private Hosted Zone in Route 53. The validation is probably failing because Kops is trying to access the cluster API from your machine, which is outside the VPC, but private hosted zones only respond to requests coming from within the VPC. Specifically, the hostname api.ucla.dt-api-k8s.com is where the Kubernetes API lives, and is the means by which you can communicate and issue commands to the cluster from your computer. Private Hosted Zones wouldn't allow you to access this API from the outside world (your computer).
A way to resolve this is to make your hosted zone public. Kops will automatically create a VPC for you (unless configured otherwise), but you can still access the API from your computer.
I encountered this last night using a kops-based cluster creation script that had worked previously. I thought maybe switching regions would help, but it didn't. This morning it is working again. This feels like an intermittency on the AWS side.
So the answer I'm suggesting is:
When this happens, you may need to give it a few hours to resolve itself. In my case, I rebuilt the cluster from scratch after waiting overnight. I don't know whether or not it was necessary to start from scratch -- I hope not.
This is all I had to run:
kops export kubecfg (cluster name) --admin
This imports the "new" kubeconfig needed to access the kops cluster.
I came across this problem with an ubuntu box. What I did was to add the dns record in the hosted zone in route 53 to /etc/hosts.
Here is how I resolved the issue:
Looks like there is a bug with the kops library, though it shows
Validation failed: unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api
when you try kops validate cluster after waiting for 10-15 mins. Behind the scenes the Kubernetes cluster is up! You can verify the same by doing ssh into the master node of your Kubernetes cluster as below:
Go to the page where you can see EC2 instances and your k8s instances running
Copy the "Public IPv4 address" of your master k8s node
After logging in to the EC2 instance, on the command prompt log in to the master node as below:
ssh ubuntu@<"Public IPv4 address" of your master k8s node>
Verify if you can see all nodes of the k8s cluster with the below command; it should show your master node and worker nodes listed there
kubectl get nodes

Kubernetes on AWS dedicated host - Can I use kubectl on an existing cluster?

I have an app with several containers running just fine using Kubernetes on AWS; however, now I need to port this to an AWS Dedicated Host VPC where the cluster has previously been created NOT using Kubernetes, so I am not able to execute kube-up.sh or its kops equivalent.
Is it possible to orchestrate my containers using Kubernetes on a pre-existing cluster? (i.e. have Kubernetes probe the parent AWS cluster and treat it as if it created it)
Of course, until this linkage is made between my calls to kubectl and the parent AWS Dedicated Host VPC, it has no Kubernetes context and just times out:
kubectl create -f /my/app/goodie.yaml
Unable to connect to the server: dial tcp 34.199.89.247:443: i/o timeout
A possible alternative would be to call kube-up.sh or kops and demand the new cluster live inside a specified AWS Dedicated Host ... alas it's not apparent Kubernetes has this flexibility ... yet!
Yes, definitely. kubectl is just a client application and it can connect to any kubernetes cluster and orchestrate it.
If you get i/o timeout, you most likely have connectivity issues and some firewall/proxy in place. Did you try to just access the kubernetes API through curl or telnet?
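Both this thread and the kops one above end at the same question: can this machine resolve and reach the API server at all, independent of kubectl? The probe below is an added example, not code from either thread, kops or kubectl. It separates the three failures the two posts show: a name that does not resolve (the kops private hosted zone case), a TCP connect that times out (this thread's `dial tcp ...:443: i/o timeout`, typically a security group, firewall or proxy dropping packets), and a host that answers but refuses or fails the TLS handshake. The `CONSTRUCTED_ERRORS` table holds hand-built exceptions so the classification runs offline; the hostname in `__main__` is a placeholder.

```python
import socket
import ssl

DIAGNOSIS = {
    "dns": "name does not resolve from this machine: for a kops cluster this usually means "
           "the Route 53 zone is private (answers only inside the VPC) or the record has not "
           "propagated yet",
    "timeout": "TCP connect timed out: packets are being dropped, so check security groups, "
               "firewalls, proxies and whether the address is reachable from your network at all",
    "refused": "connection refused: the host answered but nothing listens on that port",
    "tls": "TCP works but the TLS handshake failed: wrong CA in the kubeconfig, or a proxy "
           "intercepting the connection",
    "other": "unclassified socket error",
}


def classify(exc):
    """Map a socket-level exception to a DIAGNOSIS key. Order matters: gaierror,
    TimeoutError, ConnectionRefusedError and SSLError are all OSError subclasses."""
    if isinstance(exc, socket.gaierror):
        return "dns"
    if isinstance(exc, (socket.timeout, TimeoutError)):
        return "timeout"
    if isinstance(exc, ConnectionRefusedError):
        return "refused"
    if isinstance(exc, ssl.SSLError):
        return "tls"
    return "other"


def probe_api_server(host, port=443, timeout=5.0, verify=True):
    """Resolve, TCP-connect and TLS-handshake to a Kubernetes API endpoint and return
    (kind, explanation). verify=False skips certificate checks so a self-signed API
    certificate still reports 'ok' when the question is only reachability."""
    ctx = ssl.create_default_context()
    if not verify:
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as raw:
            # wrap_socket performs the handshake on an already-connected socket
            with ctx.wrap_socket(raw, server_hostname=host):
                pass
    except OSError as exc:
        kind = classify(exc)
        return kind, f"{DIAGNOSIS[kind]} ({exc})"
    return "ok", f"TLS endpoint {host}:{port} is reachable; if kubectl still fails, look at the kubeconfig next"


# Constructed exceptions of the kinds the probe can see, so the classification can be
# exercised offline without any real API server.
CONSTRUCTED_ERRORS = {
    "dns": socket.gaierror(-2, "Name or service not known"),
    "timeout": socket.timeout("timed out"),
    "refused": ConnectionRefusedError(111, "Connection refused"),
    "tls": ssl.SSLError(1, "certificate verify failed"),
}


def self_check():
    """Classify each constructed error; returns {name: kind}."""
    return {name: classify(exc) for name, exc in CONSTRUCTED_ERRORS.items()}


if __name__ == "__main__":
    import sys
    # placeholder host; pass the API hostname or IP from your kubeconfig as argv[1]
    target = sys.argv[1] if len(sys.argv) > 1 else "api.example.invalid"
    print(self_check())
    print(probe_api_server(target))
```

A "dns" result on a kops cluster is the private-hosted-zone situation from the thread above (the fix there was a public zone or an /etc/hosts entry); a "timeout" result means no amount of kubeconfig editing will help until the network path to port 443 is opened.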