Envoy (Istio proxy) always intercepts requests regardless of the VirtualService setting - istio

I have 4 cases of Istio (VirtualService) configuration, and I'm curious how each case behaves. Assume the VirtualService is created in the namespace mynamespace.
It's obvious that in every case, traffic arriving at myapp.example.com is sent to the service named myapp. However, I'm unsure about traffic within the Kubernetes cluster. When I sent a cURL request from another pod (in another namespace) with $ curl myapp.mynamespace.svc.cluster.local:8080, all the cases below appeared to behave identically: the same log line showed up in the istio-proxy (Envoy) sidecar next to the myapp container.
I'm not sure why the traffic gets intercepted by Envoy in all cases. What, then, is the point of specifying the mesh gateway, or of listing the FQDN in hosts? I also tried sending the request from pods with and without the istio-proxy sidecar, and the results were the same.
Please correct me if I'm wrong! Thank you.
Case 1
Spec:
  Gateways:
    external
  Hosts:
    myapp.example.com
  Http:
    Route:
      Destination:
        Host: myapp
        Port:
          Number: 8080
Case 2
Spec:
  Gateways:
    external
    mesh
  Hosts:
    myapp.example.com
  Http:
    Route:
      Destination:
        Host: myapp
        Port:
          Number: 8080
Case 3
Spec:
  Gateways:
    external
  Hosts:
    myapp.example.com
    myapp.mynamespace.svc.cluster.local
  Http:
    Route:
      Destination:
        Host: myapp
        Port:
          Number: 8080
Case 4
Spec:
  Gateways:
    external
    mesh
  Hosts:
    myapp.example.com
    myapp.mynamespace.svc.cluster.local
  Http:
    Route:
      Destination:
        Host: myapp
        Port:
          Number: 8080
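For reference, one way to check whether a given VirtualService was actually programmed into a client pod's sidecar is to dump that sidecar's route configuration (a sketch; the pod and namespace are placeholders, and istioctl is assumed to be installed):

# Dump the HTTP route configuration for port 8080 from the client pod's Envoy sidecar.
# Routes contributed by a VirtualService show up here; without one, only the default
# route generated from the Kubernetes Service itself should be present.
istioctl proxy-config routes <client-pod>.<client-namespace> --name 8080 -o json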

Related

Istio virtual service spec host and destination rule host

I'm trying to understand the Istio configuration model, but the more I read the more confused I get, especially around the hosts and host fields. In the examples, they all use the same short name, and I'm not sure whether they mean the virtual service name, the Kubernetes service hostname, or the DNS service address.
Assuming I have the following configuration:
My Kubernetes project namespace is called poc-my-ns.
Inside poc-my-ns I have my pods (both version 1 and 2), a Kubernetes route, and a Kubernetes service.
The service hostname is poc-my-ns.svc.cluster.local and the route is https://poc-my-ns.orgdevcloudapps911.myorg.org.
Everything is up and running, and the service selector gets all pods from all versions as it should. (The Istio virtual service is supposed to do the final selection by version.)
The intended Istio configuration looks like that:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: poc-my-dr
spec:
  host: poc-my-ns.svc.cluster.local # ???
  subsets:
  - name: v1
    labels:
      version: 1.0
  - name: v2
    labels:
      version: 2.0
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: poc-my-vs
spec:
  hosts:
  - poc-my-ns.svc.cluster.local # ???
  http:
  - route:
    - destination:
        host: poc-my-dr # ???
        subset: v1
      weight: 70
    - destination:
        host: poc-my-dr # ???
        subset: v2
      weight: 30
My questions are:
Does the destination rule spec/host refer to the Kubernetes service hostname?
Does the virtual service spec/hosts refer to the Kubernetes service hostname, is it the route https://poc-my-ns.orgdevcloudapps911.myorg.org, or something else?
Does the virtual service spec/http/route/destination/host refer to the destination rule name, is it supposed to point to the Kubernetes service hostname, or should it be the virtual service metadata/name?
I would really appreciate clarification.
The VirtualService and DestinationRule basically configure the envoy-proxies of the Istio mesh. The VirtualService defines where to route the traffic, and the DestinationRule defines what to additionally do with that traffic.
For the VS, the spec.hosts list can contain Kubernetes-internal and external hosts.
Say you want to define how to route traffic for api.example.com coming from outside the Kubernetes cluster through the istio-ingressgateway my-gateway into the mesh. It should be routed to the rating app in the store namespace, so the VS would look like this:
spec:
  hosts:
  - api.example.com # external host
  gateways:
  - my-gateway # the ingress-gateway
  http:
  - [...]
    route:
    - destination:
        host: rating.store.svc.cluster.local # kubernetes service
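The Gateway referenced there is a separate resource; a minimal sketch of what my-gateway could look like (namespace and selector are illustrative and depend on how the ingressgateway was installed):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-system # illustrative; usually the namespace of the ingressgateway
spec:
  selector:
    istio: ingressgateway # binds to the default istio-ingressgateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - api.example.com # must cover the hosts listed in the VirtualService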
If you want to define how cluster/mesh-internal traffic is routed, you set rating.store.svc.cluster.local in the spec.hosts list, define the mesh gateway (or leave it out, as you did, because mesh is the default), and route it to the rating.store.svc.cluster.local service. You can also add a DR where you define subsets and route all mesh-internal traffic to subset v1.
# VS
[...]
spec:
  hosts:
  - rating.store.svc.cluster.local # cluster internal host
  gateways:
  - mesh # mesh internal gateway (default when omitted)
  http:
  - [...]
    route:
    - destination:
        host: rating.store.svc.cluster.local # cluster internal host
        subset: v1 # defined in destinationrule below
---
[...]
spec:
  host: rating.store.svc.cluster.local # cluster internal host
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
But it could also be that you want to route traffic to a cluster-external destination. In that case, destination.host would be an external FQDN, as in this example from the docs:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-wikipedia
spec:
  hosts:
  - wikipedia.org
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: example-http
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-wiki-rule
spec:
  hosts:
  - wikipedia.org
  http:
  - timeout: 5s
    route:
    - destination:
        host: wikipedia.org
Think about it as "I want to route traffic from HOST_FROM to HOST_TO", where
HOST_FROM is spec.host and spec.hosts
HOST_TO is destination.host
and both can be inside the kubernetes cluster or outside.
So to answer all your questions:
It depends: if you want to route from/to cluster-internal traffic, you'll use a Kubernetes service FQDN. For cluster-external traffic, you'll use the external target FQDN.
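Concretely, applied to the manifests in the question, that would look roughly like this (a sketch using the service host exactly as given above; the label values are quoted so they stay valid strings):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: poc-my-dr
spec:
  host: poc-my-ns.svc.cluster.local # the Kubernetes service host
  subsets:
  - name: v1
    labels:
      version: "1.0"
  - name: v2
    labels:
      version: "2.0"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: poc-my-vs
spec:
  hosts:
  - poc-my-ns.svc.cluster.local # what mesh clients request
  http:
  - route:
    - destination:
        host: poc-my-ns.svc.cluster.local # the service host again, not the DestinationRule name
        subset: v1
      weight: 70
    - destination:
        host: poc-my-ns.svc.cluster.local
        subset: v2
      weight: 30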
I highly recommend reading through the docs of VirtualService and DestinationRule where you can see several examples with explanations.

Kubernetes: How to setup TLS termination in the Load Balancer on AWS using Nginx Ingress Controller

In the documentation of Nginx Ingress for AWS it says:
By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer.
Link: https://kubernetes.github.io/ingress-nginx/deploy/#tls-termination-in-aws-load-balancer-nlb
So I follow the instructions: set the AWS ACM certificate, set the VPC CIDR, and deploy.
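For context, the part of that deployment that matters here is the ingress-nginx controller Service, which carries the ACM certificate ARN; a rough sketch of the relevant annotations (the ARN is a placeholder, the exact manifest comes from the linked deploy instructions, and the VPC CIDR goes into the controller ConfigMap as proxy-real-ip-cidr):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # ACM certificate for the NLB listener that terminates TLS
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:XXXXXXXXXXXX:certificate/XXXX
    # terminate TLS only on the "https" port of the load balancer
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
    # traffic from the NLB to the controller pods is plain HTTP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http # TLS is already terminated, so 443 also forwards to the plain-HTTP port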
Then I check the ingress-nginx service:
kubectl get service --namespace ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.100.124.56 adba41948av49484z55137c374e1e17d-09af54e014df8676.elb.us-east-1.amazonaws.com 80:31985/TCP,443:32330/TCP 17h
ingress-nginx-controller-admission ClusterIP 10.100.175.52 <none> 443/TCP 17h
In the AWS console, the Load Balancer has necessary certificate and all seems fine.
Next, I create Ingress rules and Service with type: ClusterIP
Service:
apiVersion: v1
kind: Service
metadata:
  name: test-app-service
spec:
  selector:
    name: test-app-pod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-app-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: foobar.com # forwards to LB
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: test-app-service
            port:
              number: 80
Check the Ingress:
kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
test-app-ingress nginx foobar.com adba41948av49484z55137c374e1e17d-09af54e014df8676.elb.us-east-1.amazonaws.com 80 29m
So I am just stuck here.
When I go to http://foobar.com -- it works perfectly fine.
But when I go to https://foobar.com - it says 'Could not resolve host: foobar.com'
And I would expect that when I go to https://foobar.com then it terminates TLS on LB and sends the traffic to the service.
I have also found an article that describes the same setup, and in the comments people ask the same questions I have, so I am not the only one :D : https://habeeb-umo.medium.com/using-nginx-ingress-in-eks-with-tls-termination-in-a-network-load-balancer-1783fc92935 (I followed those instructions too, with no luck either).
UPDATE:
as per #mdaniel
When I run curl -v http://foobar.com and curl -v https://foobar.com, both say Could not resolve host: foobar.com:
http:
* Could not resolve host: foobar.com
* Closing connection 0
curl: (6) Could not resolve host: foobar.com
https:
* Could not resolve host: foobar.com
* Closing connection 0
curl: (6) Could not resolve host: foobar.com
And in the browser when I go to http://foobar.com - it opens the page OK, BUT when I refresh the page it shows 'This site can’t be reached'.
UPDATE2:
I think I have found the issue.
I used an httpd container inside the pod and exposed port 8080:
spec:
  containers:
  - name: some-app
    image: httpd:2.4.54
    ports:
    - containerPort: 8080
So when I do port-forward
kubectl port-forward test-app-deployment-f59984d85-qckr9 8081:8080
The first GET request to http://127.0.0.1:8081 is fine, but the next one fails:
Forwarding from 127.0.0.1:8081 -> 8080
Forwarding from [::1]:8081 -> 8080
Handling connection for 8081
E1124 11:48:43.466682 94768 portforward.go:406] an error occurred forwarding 8081 -> 8080: error forwarding port 8080 to pod d79172ed802e00f93a834aab7b89a0da053dba00ad327d71fff85f582da9819e, uid : exit status 1: 2022/11/24 10:48:43 socat[15820] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
So I changed containerPort to 80 and it helped:
spec:
  containers:
  - name: some-app
    image: httpd:2.4.54
    ports:
    - containerPort: 80 # changed port to 80
Run port forwarding: kubectl port-forward test-app-deployment-f59984d85-qckr9 8081:80
Make 3 GET requests to http://127.0.0.1:8081:
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
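One follow-up worth noting: the Service above still has targetPort: 8080, so after moving httpd to port 80 it would also need to target the new container port, roughly:

apiVersion: v1
kind: Service
metadata:
  name: test-app-service
spec:
  selector:
    name: test-app-pod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80 # must match the port httpd actually listens on
  type: ClusterIP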

AWS Load Balancer Controller. Ingress with Custom Fixed response 503

maybe someone can help me with this.
I have deployed AWS Load Balancer Controller to EKS.
When I created an ingress like this:
ingressClassName: alb
rules:
- host: myhost.com
  http:
    paths:
    - backend:
        service:
          name: frontned
          port:
            number: 80
      path: /*
      pathType: ImplementationSpecific
In AWS it looks like this:
(screenshot: the Load Balancer rule in the AWS console)
The question is that I want to change this default fixed-response 503 message. How can I do it?
I know that I can create something like this in annotations:
alb.ingress.kubernetes.io/actions.response-503: |
  {"type":"fixed-response","fixedResponseConfig":{"contentType":"text/plain","statusCode":"503","messageBody":" ERROR ..."}}
And specify a rule for this:
- host: myhost.com
  http:
    paths:
    - backend:
        service:
          name: response-503
          port:
            name: use-annotation
      path: /*
      pathType: ImplementationSpecific
But in this case, all of the traffic will go to this rule. If I specify two rules, one that points to the annotation and another one that points to the correct service, only one of the rules will work.
So I'm looking for a solution that lets me create one rule with a condition: if the backend is healthy, route traffic to the pod; if not, show the custom 503 message.
I will be grateful for your help, thanks in advance!

Kubernetes Ingress Controller GCP GKE can't reach the site

Hi, this is the first time I am trying to deploy an application with Kubernetes. The problem I am facing is that I want to be able to link subdomains to my svc, but when I try to navigate to the links I get:
This site can’t be reached
I will explain the steps I took; probably something is wrong or missing.
I installed the ingress controller on Google Cloud Platform.
In GCP -> Networking Services -> Cloud DNS:
a. I pointed testcompany.com to Google DNS
b. I created an A record pointing to the public IP of the ingress-nginx-controller from the previous step
my svc manifest
apiVersion: v1
kind: Service
metadata:
  namespace: staging
  name: testcompany-svc
  labels:
    app: testcompany-svc
spec:
  type: NodePort
  ports:
  - name: test-http
    port: 80
    protocol: TCP
    targetPort: 3001
  selector:
    app: testcompany
my ingress manifest
apiVersion: networking.k8s.io/v1beta1
- host: api.testcompany.com
  http:
    paths:
    - backend:
        serviceName: testcompany-svc
        servicePort: test-http
Everything is green and it seems to be working, but when I try to reach the URL I get the 'This site can’t be reached' error.
Update 1
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: staging
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: front.stagingtestcompany.com
    http:
      paths:
      - backend:
          serviceName: testcompanyfront-svc
          servicePort: testcompanyfront-http
  - host: api.stagingtestcompanysrl.com
    http:
      paths:
      - backend:
          serviceName: testcompanynodeapi-svc
          servicePort: testcompanyapi-http
You should check this, in order:
your Service, Pod, and Ingress are in the same namespace: kubectl get all -n staging
your Pod is listening on port 3001: run it locally if you can, or use kubectl port-forward pods/[pod-name] -n staging 3001:3001 and try it locally with http://localhost:3001/...
your Service is reaching your Pod correctly: use kubectl port-forward service/testcompany-svc -n staging 3001:80 (the Service listens on port 80 and targets container port 3001) and try it locally with http://localhost:3001/...
check any other Ingress spec rules that come before the one you posted
check the firewall rules in your VPC network; they should allow traffic from Google load balancers
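As a side note, on newer clusters the same rule would be written against the stable networking.k8s.io/v1 Ingress API; a sketch based on the manifests in the question (the ingress class name nginx is an assumption about how the controller was installed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: staging
  name: ingress
spec:
  ingressClassName: nginx # assumes the installed controller registered the "nginx" class
  rules:
  - host: api.testcompany.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: testcompany-svc
            port:
              name: test-http # named port from the Service in the question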

Exposing the same service with same URL but two different ports with traefik?

Recently I have been trying to set up a CI/CD flow with Kubernetes v1.7.3 and Jenkins v2.73.2 on AWS in China (the GFW blocks Docker Hub).
Right now I can expose services with Traefik, but it seems I cannot expose the same service under the same URL on two different ports.
Ideally I would want to expose http://jenkins.mydomain.com as the Jenkins UI on port 80, as well as the jenkins-slave (jenkins-discovery) endpoint on port 50000.
For example, I'd want this to work:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
  namespace: default
spec:
  rules:
  - host: jenkins.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins-svc
          servicePort: 80
  - host: jenkins.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins-svc
          servicePort: 50000
and my jenkins-svc is defined as
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
  labels:
    run: jenkins
spec:
  selector:
    run: jenkins
  ports:
  - port: 80
    targetPort: 8080
    name: http
  - port: 50000
    targetPort: 50000
    name: slave
In reality the latter rule overwrites the former rule.
Furthermore, there are two plugins I have tried: kubernetes-cloud and kubernetes.
With the former I cannot configure the jenkins-tunnel URL, so the slave fails to connect to the master; with the latter I cannot pull from a private Docker registry such as AWS ECR (there is no place to provide credentials), so the slave cannot be created (imagePullError).
Lastly, I am really just trying to get Jenkins to work (create slaves from my custom image, build with the slaves, and delete the slaves after the jobs finish); any other solution is welcome.
If you want your Jenkins to be reachable from outside of your cluster, then you need to change your Service configuration.
The default Service type is ClusterIP:
Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
You want its type to be NodePort:
Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So your service should look like:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
  labels:
    run: jenkins
spec:
  selector:
    run: jenkins
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    name: http
  - port: 50000
    targetPort: 50000
    name: slave
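Once applied, the node ports that Kubernetes assigned can be read back from the Service, for example:

# The PORT(S) column shows the <port>:<nodePort> mapping for both the http and slave ports
kubectl get service jenkins-svc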