I'm trying to wrap my head around Istio gateways and virtual services. I want to see the gateways and virtual services configured in the system. How do I view them? Is there a kubectl command for this?
I tried
kubectl describe pod istio-ingressgateway-id -n istio-system
But this does not give those details, or I don't know how to interpret them.
You are looking for the Gateway and VirtualService CRDs.
They are Kubernetes custom resources, and you can query them with
kubectl get Gateway
And
kubectl get VirtualService
For more information, use kubectl describe
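For example, to list both resource types across every namespace and then inspect one in detail (the resource name below is illustrative):
$ kubectl get gateway,virtualservice --all-namespaces
$ kubectl describe virtualservice my-virtualservice -n istio-system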
Related
I am trying to deploy the jpetstore app to EKS and have come across this variable that needs to be filled in. (https://github.com/IBM-Cloud/jpetstore-kubernetes/blob/master/jpetstore/jpetstore.yaml)
In the documentation it states that I can set it using the commands:
Edit jpetstore/jpetstore.yaml and jpetstore/jpetstore-mmssearch.yaml and replace all instances of:
<CLUSTER DOMAIN> with your Ingress Subdomain (ibmcloud ks cluster get --cluster CLUSTER_NAME)
(https://github.com/IBM-Cloud/jpetstore-kubernetes)
but this is for IBM Cloud and I need it for EKS.
Where can I get this <CLUSTER DOMAIN> value from?
You can define your own name in the Ingress spec, for example host: jpetstore.example.com. You need to install the AWS Load Balancer Controller, which will handle your Ingress resource and create an ALB based on your Ingress spec. You can then access it like: curl -H 'Host: jpetstore.example.com' http://<your new alb endpoint>
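A minimal sketch of such an Ingress, assuming the jpetstore Service is named jpetstore and listens on port 80 (the name, host, and annotation values are illustrative; the alb.ingress.kubernetes.io annotations belong to the AWS Load Balancer Controller):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jpetstore
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # create an internet-facing ALB
spec:
  ingressClassName: alb                # handled by the AWS Load Balancer Controller
  rules:
  - host: jpetstore.example.com        # your own domain name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jpetstore            # assumed Service name
            port:
              number: 80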
I'm working on Cloud Run for Anthos on GCP, hosted on a GKE cluster.
I am following this Qwiklabs hands-on lab to study Cloud Run for Anthos:
https://www.qwiklabs.com/focuses/5147?catalog_rank=%7B%22rank%22%3A6%2C%22num_filters%22%3A0%2C%22has_search%22%3Atrue%7D&parent=catalog&search_id=7054914
In the hands-on lab, they use the command below to check whether the service is working:
curl -H "Host: <URL>" <IP_CLUSTER>
I wonder about real-world use, though: nobody adds a Host header to every request just to make things work.
My question is: is there any way to solve this? I just want to invoke the service from a browser or any application, but I'm not sure whether that's possible.
I found the reference documentation about Istio ingress, which the Qwiklabs example also uses.
It is about VirtualService, and it looks like I need an Istio ingress in front to build this proxy.
Is that the correct way to troubleshoot this?
https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPRewrite
You can change the config-domain ConfigMap in the knative-serving namespace. You can see the current config like this:
kubectl describe configmap config-domain --namespace knative-serving
Then you can update it like this:
Create the configuration in a file, for example config-domain.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  gblaquiere.dev: ""
Apply the configuration:
kubectl apply -f config-domain.yaml
More detail is in the Knative documentation on setting up a custom domain.
With the new domain name, configure your DNS registrar to point your domain name at the load balancer's external IP, and your website will present the correct host on each request.
The curl -H "Host: ..." is a cheat to lie to the Istio controller and tell it "yes, I come from there". If you really do come from there (your own domain name), there is no need to cheat!
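For example, assuming you created a DNS record mapping myservice.gblaquiere.dev (a hypothetical service hostname under the domain above) to the load balancer IP:
$ # before DNS is configured: fake the Host header against the cluster IP
$ curl -H "Host: myservice.gblaquiere.dev" http://<IP_CLUSTER>
$ # after DNS is configured: curl or the browser sends the right Host automatically
$ curl http://myservice.gblaquiere.dev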
I am trying to deploy a microservices architecture on a Kubernetes cluster. Does anyone know how to create an Ingress for AWS?
I recommend you use the ALB Ingress Controller https://github.com/kubernetes-sigs/aws-alb-ingress-controller, as it is recommended by AWS and creates Application Load Balancers for each Ingress.
Alternatively, know that you can use any kind of Ingress, such as Nginx, in AWS. You will create the Nginx Service of type LoadBalancer, so that all requests to that address are redirected to Nginx. Nginx itself will then take care of routing the requests to the correct service inside Kubernetes.
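A minimal sketch of such a Service, assuming the Nginx ingress controller pods carry the label app: nginx-ingress (the name and label are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer          # AWS provisions an ELB for this Service
  selector:
    app: nginx-ingress        # assumed label on the Nginx controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443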
To create an Ingress resource, we first need to deploy an Ingress controller. The Ingress controller can be deployed very easily using Helm. Follow the steps below to install Helm and the Ingress controller:
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
$ helm install stable/nginx-ingress --name my-nginx --set rbac.create=true
Once the Ingress controller is installed, check it by running kubectl get pods; you should see 2 pods running. One is the Ingress controller and the second is the default backend.
Now if you go to your AWS Management Console, you should see an Elastic Load Balancer running that routes traffic to the Ingress controller, which in turn routes traffic to the appropriate services based on your Ingress rules.
To test the Ingress, follow steps 1 to 4 of this guide: Setting up HTTP Load Balancing with Ingress.
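As a quick sketch, a minimal test Ingress might look like this (the host and Service name are hypothetical; on older clusters the apiVersion was extensions/v1beta1):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - host: app.example.com          # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service       # assumed backend Service
            port:
              number: 80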
Hope this helps!
I'm trying to figure out how to get this setup to work:
I am using Kube 1.7 (no RBAC) spun up from kops in AWS
I have a single nginx ingress controller for my entire cluster, using a LoadBalancer Service in the kube-system namespace, installed via Helm
I have cert-manager set up in kube-system, installed via Helm, and using ClusterIssuers
I have external-dns set up in kube-system, installed via Helm
I have multiple applications, one per namespace, with associated Ingress objects in each namespace.
I am annotating the Ingresses with the appropriate annotations for both cert-manager (certmanager.k8s.io/cluster-issuer: letsencrypt-prod) and external-dns (dns.alpha.kubernetes.io/external: app.contoso.com), roughly as sketched below.
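For reference, one such annotated Ingress looks roughly like this (app is a hypothetical Service name; the apiVersion matches the Kube 1.7 era):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod    # cert-manager
    dns.alpha.kubernetes.io/external: app.contoso.com      # external-dns
spec:
  rules:
  - host: app.contoso.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app        # assumed backend Service
          servicePort: 80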
In this scenario, cert-manager is reacting appropriately to the Ingress object (modifying it to complete the ACME challenge), but external-dns is not doing anything (logs are saying all hostnames are up to date). If I manually add a Route53 record for the ELB associated with the LB service, everything works as expected. Inspecting the Ingress object, I see that the status block looks like so:
status:
  loadBalancer:
    ingress:
    - {}
which I suppose is why external-dns isn't reacting? How do I get this to work? Per the documentation
More troubleshooting information (pod definitions, ingress definitions, controller logs, etc.) can be found here: https://gist.github.com/DWSR/f6d596850346223393bec23b289c9731
I solved this myself. The nginx ingress controller has a --publish-service command line argument which will cause it to update the status fields on the ingress objects which, in turn, will cause external-dns to create the appropriate DNS records. When installing via Helm, simply set .Values.controller.publishService.enabled to true and this will take effect.
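With the Helm 2 era stable/nginx-ingress chart, that would look something like this (the release name is illustrative):
$ helm upgrade my-nginx stable/nginx-ingress --set controller.publishService.enabled=true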
Sources:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md
https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress#configuration
The CoreOS Multinode Cluster guide appears to have a problem. When I create a cluster and configure connectivity, everything appears fine -- however, I'm unable to create an ELB through service exposing:
$ kubectl expose rc my-nginx --port 80 --type=LoadBalancer
service "my-nginx" exposed
$ kubectl describe services
Name: my-nginx
Namespace: temp
Labels: run=my-nginx
Selector: run=my-nginx
Type: LoadBalancer
IP: 10.100.6.247
Port: <unnamed> 80/TCP
NodePort: <unnamed> 32224/TCP
Endpoints: 10.244.37.2:80,10.244.73.2:80
Session Affinity: None
No events.
The IP line that says 10.100.6.247 looks promising, but no ELB is actually created in my account. I can otherwise interact with the cluster just fine, so it seems bizarre. A "kubectl get services" listing is similar -- it shows the private IP (same as above) but the EXTERNAL_IP column is empty.
Ultimately, my goal is a solution that allows me to easily configure my VPC (i.e. private subnets with NAT instances), and if I can get this working, it'd be easy enough to drop into CloudFormation since it's based on user-data. The official kube-up method doesn't leave room for VPC-level customization in a repeatable way.
Unfortunately, that getting-started guide isn't nearly as up to date as the kube-up implementation. For instance, I don't see a --cloud-provider=aws flag anywhere, and the kube-controller-manager would need that in order to know to call the AWS APIs.
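As a sketch of where that flag would live in a static pod manifest for the controller manager (the image tag and other flags are illustrative, in the era-appropriate hyperkube style):
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: gcr.io/google_containers/hyperkube:v1.1.2   # illustrative tag
    command:
    - /hyperkube
    - controller-manager
    - --master=http://127.0.0.1:8080
    - --cloud-provider=aws      # lets the controller manager call the AWS APIs (e.g. to create ELBs)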
You may want to check out the official CoreOS on AWS guide:
https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html
If you hit a deadend or find a problem, I recommend asking in the AWS Special Interest Group forum:
https://groups.google.com/forum/#!forum/kubernetes-sig-aws