I've been reading documentation up and down all day and I can't seem to get this to work.
I have an unruly application that opens a new connection for every HTTP request. I would like to improve performance by forcing HTTP multiplexing over long-lived TCP connections.
I tried making a ServiceEntry and a DestinationRule, but I didn't see them have any effect; I still saw a large number of TCP connections being made.
I figured Istio would pool the connections via maxRequestsPerConnection from DestinationRule. Is that wrong?
I imagined I needed:
A ServiceEntry to make the external service known to the mesh.
A DestinationRule to govern how the service is accessed.
In general, though, I would just like the Envoy sidecar to pool and multiplex all egress connections for this service, no matter the destination.
Destination Rule
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api
spec:
  host: google.com
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
        connectTimeout: 30ms
        tcpKeepalive:
          time: 7200s
          interval: 75s
Service Entry
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: api
spec:
  hosts:
  - google.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
istioctl -i istio proxy-config cluster api.namespace
...
google.com 443 - outbound STRICT_DNS api.namespace
...
istioctl -i istio proxy-config cluster api.namespace --fqdn google.com -o json
"circuitBreakers": {
"thresholds": [
{
"maxConnections": 100,
"maxPendingRequests": 4294967295,
"maxRequests": 4294967295,
"maxRetries": 4294967295,
"trackRemaining": true
}
]
},
"upstreamConnectionOptions": {
"tcpKeepalive": {
"keepaliveTime": 7200,
"keepaliveInterval": 75
}
},
OK, so I think what I discovered above in my question is that maxConnections does not cause the Envoy proxy to pre-create a pool of 100 connections. Instead it sets a circuit-breaking threshold that prevents the application from opening more than 100 connections.
Perhaps I need to use an HTTP connection pool?
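If so, a rough sketch of what that could look like, based on the DestinationRule above (the field names are from the ConnectionPoolSettings reference, the values are purely illustrative, and as far as I understand the http settings only take effect when the sidecar treats the traffic as HTTP rather than opaque TLS/TCP):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api
spec:
  host: google.com
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 1024
        http2MaxRequests: 1024
        maxRequestsPerConnection: 0  # 0 = no limit on requests per upstream connection
        idleTimeout: 300s            # keep idle upstream connections around for reuse
        h2UpgradePolicy: UPGRADE     # assumption: ask Envoy to use HTTP/2 upstream so requests are multiplexed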
I have an HTTP/2 service deployed on EKS (AWS Kubernetes), and I am trying to expose it to the internet.
If I expose it without TLS (with the manifest below), everything works fine and I can access it.
apiVersion: v1
kind: Service
metadata:
  name: demoapp
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 5000
  selector:
    name: demoapp
If I add TLS, I get HTTP 502 (Bad Gateway).
apiVersion: v1
kind: Service
metadata:
  name: demoapp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: somearn
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 5000
  selector:
    name: demoapp
I have a guess (which could be wrong) that service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http for some reason assumes HTTP/1.1 (rather than HTTP/2) and breaks when one of the sides starts sending binary (rather than textual) data.
Additional note: I am not using any Ingress controller.
And a thought: I could potentially move TLS termination into the app (rather than doing it on the load balancer) and switch, for example, to an NLB. However, that adds a lot of complexity to the solution, and I would rather have the load balancer handle it.
Based on the annotations in your question: the TLS should terminate at the CLB, and you should remove service.beta.kubernetes.io/aws-load-balancer-backend-protocol (it defaults to tcp).
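A rough sketch of what that could look like, reusing the Service from the question (the certificate ARN is the same placeholder; with the backend-protocol annotation removed it defaults to tcp, so the CLB terminates TLS and passes the decrypted bytes straight to the pod, which should keep HTTP/2 intact):
apiVersion: v1
kind: Service
metadata:
  name: demoapp
  annotations:
    # no aws-load-balancer-backend-protocol annotation: the backend defaults to tcp
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: somearn
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 5000
  selector:
    name: demoapp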
I am fairly new to Istio. So far I have a k8s cluster (using kops) on AWS, behind an ELB.
All traffic is routed via TCP.
The ingress gateway service is configured as NodePort with the following config:
istio-system istio-ingressgateway NodePort 100.65.241.150 <none> 15020:31038/TCP,80:30205/TCP,31400:30204/TCP,15029:31714/TCP,15030:30016/TCP,15031:32508/TCP,15032:30110/TCP,15443:32730/TCP
I have used the 'demo' helm option to deploy Istio 1.4.0 and have created a Gateway, VirtualService and DestinationRule with the following config.
The Gateway is in the istio-system namespace; the VirtualService and DestinationRule are in the default namespace.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: webapp
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - ingress-gateway
  http:
  - route:
    - destination:
        host: webapp
        subset: original
      weight: 100
    - destination:
        host: webapp
        subset: v2
      weight: 0
---
kind: DestinationRule
apiVersion: networking.istio.io/v1alpha3
metadata:
name: webapp
namespace: default
spec:
host: webapp
subsets:
- labels:
version: original
name: original
- labels:
version: v2
name: v2
The service pods listen on port 80; I have tested via port forwarding and they are functioning as expected.
However, when I curl https://hostname externally, I get:
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
I have enabled debug logging in Envoy but don't see anything meaningful in the logs relating to the timeout.
Any suggestion on where I might be going wrong?
Do I need to add any service annotations relating to the ELB in the Istio ingress gateway?
Any other suggestions?
I found a few things which need to be fixed.
1. Connect with the LoadBalancer
As I mentioned in the comments, you need to fix your ingress gateway so it automatically gets an EXTERNAL-IP address, as in the Istio documentation. Right now your ingress gateway is a NodePort, so as far as I'm concerned it won't work as-is; you can configure things to work with a NodePort, but I assume you want the LoadBalancer.
The first step would be to change the istio-ingressgateway svc type from NodePort to LoadBalancer and check if you get an EXTERNAL-IP.
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service's node port.
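A quick, hedged way to try that (assuming the gateway service is named istio-ingressgateway in istio-system, as in your output):
kubectl -n istio-system patch svc istio-ingressgateway -p '{"spec": {"type": "LoadBalancer"}}'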
It should look like this:
kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 172.21.109.129 130.211.10.121 80:31380/TCP,443:31390/TCP,31400:31400/TCP 17h
And then everything goes through the EXTERNAL-IP address, which is 130.211.10.121.
2. Fix your yamls
Note that for TCP traffic like that, we must match on the incoming port, in this case port 31400.
Check this example from the Istio documentation, especially the part with the gateway, virtual service and destination rule.
You should add this to your virtual service.
tcp:
- match:
  - port: 31400
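Putting that together, a rough sketch of the full tcp section for your webapp VirtualService (assuming the service listens on port 80, as you mentioned; the subsets and weights mirror your DestinationRule):
tcp:
- match:
  - port: 31400
  route:
  - destination:
      host: webapp
      subset: original
      port:
        number: 80
    weight: 100
  - destination:
      host: webapp
      subset: v2
      port:
        number: 80
    weight: 0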
3. Remember about namespaces.
In your example it should work because everything is in the default namespace, but if you create another namespace, remember that when the gateway and the virtual service are in different namespaces, you need to tell the virtual service where the gateway is.
Example here, especially this part of the virtual service:
gateways:
- some-config-namespace/my-gateway
I hope this helps you with your issues. Let me know if you have any more questions.
I am looking at this example of Istio, where they are creating a ServiceEntry and a VirtualService to access the external service, but I don't understand why they are creating a VirtualService as well.
So, this is the ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: edition-cnn-com
spec:
  hosts:
  - edition.cnn.com
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: https-port
    protocol: HTTPS
  resolution: DNS
With just this object, if I try to curl edition.cnn.com, I get 200:
/ # curl edition.cnn.com -IL 2>/dev/null | grep HTTP
HTTP/1.1 301 Moved Permanently
HTTP/1.1 200 OK
While I can't access other services:
/ # curl google.com -IL
HTTP/1.1 502 Bad Gateway
location: http://google.com/
date: Fri, 10 Jan 2020 10:12:45 GMT
server: envoy
transfer-encoding: chunked
But in the example they create this VirtualService as well.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: edition-cnn-com
spec:
  hosts:
  - edition.cnn.com
  tls:
  - match:
    - port: 443
      sni_hosts:
      - edition.cnn.com
    route:
    - destination:
        host: edition.cnn.com
        port:
          number: 443
      weight: 100
What's the purpose of the VirtualService in this scenario?
The VirtualService object is basically an abstract Pilot resource that modifies the Envoy filter configuration.
So creating a VirtualService is a way of modifying Envoy, and its main purpose is answering the question: "for this name, how do I route to backends?"
A VirtualService can also be bound to a Gateway.
In your case, the lack of a VirtualService means Envoy is left with the default/global configuration, and that default configuration was enough for this case to work correctly.
So the gateway that was used was most likely the default one, with the same protocol and port that you requested with curl, which matched your ServiceEntry requirements for connectivity.
This is also mentioned in the Istio documentation:
Virtual services, along with destination rules, are the key building blocks of Istio's traffic routing functionality. A virtual service lets you configure how requests are routed to a service within an Istio service mesh, building on the basic connectivity and discovery provided by Istio and your platform. Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh. Your mesh can require multiple virtual services or none depending on your use case.
You can use a VirtualService to add things like a timeout to the connection, as in this example.
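For instance, a rough sketch of an http section that could be added to the VirtualService above to put a timeout on the plain-HTTP port of the same host (the 3s value is purely illustrative):
http:
- timeout: 3s
  route:
  - destination:
      host: edition.cnn.com
      port:
        number: 80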
You can check the routes for your service with the following command from the Istio documentation: istioctl proxy-config routes <pod-name[.namespace]>
For the bookinfo productpage demo app it is:
istioctl pc routes $(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}') --name 9080 -o json
This way you can check how the routes look without a VirtualService object.
Hope this helps you understand Istio.
The VirtualService is not really doing anything, but as the docs say:
creating a VirtualService with a default route for every service, right from the start, is generally considered a best practice in Istio
The ServiceEntry adds the CNN site as an entry to Istio’s internal service registry, so auto-discovered services in the mesh can route to these manually specified services.
Usually that's used to allow monitoring and other Istio features for external services from the start, whereas the VirtualService allows the proper routing of requests (basically traffic management).
This page in the docs gives a bit more background info on using ServiceEntries and VirtualServices, but basically the ServiceEntry makes sure your mesh knows about the service and can monitor it, and the VirtualService controls what traffic is going to the service, which in this case is all of it.
I am trying to run a bitcoin node on kubernetes. My stateful set is as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: bitcoin-stateful
  namespace: dev
spec:
  serviceName: bitcoinrpc-dev-service
  replicas: 1
  selector:
    matchLabels:
      app: bitcoin-node
  template:
    metadata:
      labels:
        app: bitcoin-node
    spec:
      containers:
      - name: bitcoin-node-mainnet
        image: myimage:v0.13.2-addrindex
        imagePullPolicy: Always
        ports:
        - containerPort: 8332
        volumeMounts:
        - name: bitcoin-chaindata
          mountPath: /root/.bitcoin
        livenessProbe:
          exec:
            command:
            - bitcoin-cli
            - getinfo
          initialDelaySeconds: 60 # wait this period after starting the first time
          periodSeconds: 15       # polling interval
          timeoutSeconds: 15      # wish to receive a response within this time period
        readinessProbe:
          exec:
            command:
            - bitcoin-cli
            - getinfo
          initialDelaySeconds: 60 # wait this period after starting the first time
          periodSeconds: 15       # polling interval
          timeoutSeconds: 15      # wish to receive a response within this time period
        command: ["/bin/bash"]
        args: ["-c", "service ntp start && \
          bitcoind -printtoconsole -conf=/root/.bitcoin/bitcoin.conf -reindex-chainstate -datadir=/root/.bitcoin/ -daemon=0 -bind=0.0.0.0"]
Since the bitcoin node doesn't serve any HTTP GET requests and can only serve POST requests, I am trying to use the bitcoin-cli command for the liveness and readiness probes.
My service is as follows:
kind: Service
apiVersion: v1
metadata:
  name: bitcoinrpc-dev-service
  namespace: dev
spec:
  selector:
    app: bitcoin-node
  ports:
  - name: mainnet
    protocol: TCP
    port: 80
    targetPort: 8332
When I describe the pods, they are running ok and all the health checks seem to be ok.
However, I am also using ingress controller with the following config:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: dev-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "dev-ingress"
spec:
  rules:
  - host: bitcoin.something.net
    http:
      paths:
      - path: /rpc
        backend:
          serviceName: bitcoinrpc-dev-service
          servicePort: 80
The health checks on the L7 load balancer seem to be failing. They are configured automatically, but these checks are not the same as the ones configured in the readiness probe. I tried deleting and recreating the ingress; however, it still behaves the same way.
I have the following questions:
1. Should I modify/delete this health check manually?
2. Even if the health check is failing (wrongly configured), since the containers and ingress are up, does that mean I should be able to access the service over HTTP?
What is missing is that you are performing the liveness and readiness probes as exec commands, therefore you need pods that include an exec liveness probe and an exec readiness probe; here and here it is described how to do that.
Another thing: to receive traffic through the GCE L7 load balancer controller you need:
At least one Kubernetes NodePort Service (this is the endpoint for your Ingress), so your Service is not configured correctly and you will not be able to access it.
The health check in the picture is for the default backend (your MIG uses it to check the health of the node); that means it is your nodes' health check, not the container's.
No, you don't have to delete the health check manually, as it will get recreated automatically even if you delete it.
No, you won't be able to access the service until the health checks pass, because traffic in GKE in this case is routed using NEGs, which depend on health checks to know where they can route traffic.
One possible solution could be to add a basic HTTP route to your application that returns 200; this can be used as the health check endpoint.
Other possible options include:
Creating a Service of type NodePort and using the load balancer to route traffic on the given port to the node pool/instance groups as the backend service, rather than using NEGs (see the sketch below).
Creating a Service of type LoadBalancer. This is the easiest option, but you need to ensure that the load balancer IP is protected using appropriate security policies (IAP, firewall rules, etc.).
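For the NodePort option, a rough sketch of what the Service from the question could look like (same selector and ports; the GCE ingress controller then routes to the allocated node port):
kind: Service
apiVersion: v1
metadata:
  name: bitcoinrpc-dev-service
  namespace: dev
spec:
  type: NodePort
  selector:
    app: bitcoin-node
  ports:
  - name: mainnet
    protocol: TCP
    port: 80
    targetPort: 8332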
I am using Istio 1.0.2 with istio-demo-auth.yaml. I have one MSSQL and one ActiveMQ instance deployed in the same namespace as other applications; both were injected by istioctl. The applications can connect to those two services inside the cluster. I changed those two services' type to NodePort, which succeeded, but I cannot access the node ports (52433, 51618, or 58161) from outside.
kubectl get svc -n $namespace
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                          AGE
amq-master-01   NodePort   10.254.176.151   <none>        61618:51618/TCP,8161:58161/TCP   4h
mssql-master    NodePort   10.254.209.36    <none>        2433:52433/TCP                   33m
kubectl get deployment -n $namespace
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
activemq 1 1 1 1 4h
mssql-master 1 1 1 1 44m
Then I tried using a Gateway and VirtualService with the ingressgateway TCP port 31400, and it works, as below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tcp-gateway
  namespace: multitenancy
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mssql-tcp
  namespace: multitenancy
spec:
  gateways:
  - tcp-gateway
  hosts:
  - "*"
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: mssql-master
        port:
          number: 2433
My questions are:
1. How do I configure another connection for 61618 or other TCP ports? Currently I can only use 31400 for one service (mssql, port 2433).
2. Why do the NodePorts stop working after I inject those applications into Istio, and how can I make them work?
Thanks.
Referring to the documentation:
Type NodePort
If you set the type field to NodePort, the Kubernetes master will allocate a port from a range specified by --service-node-port-range flag (default: 30000-32767), and each Node will proxy that port (the same port number on every Node) into your Service. That port will be reported in your Service’s .spec.ports[*].nodePort field.
Just update the configuration on all your masters and you will be able to allocate any port in the new range.
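A hedged sketch of what that could look like on a control plane run as static pods (the file path and the widened range are illustrative; adjust to however your masters are managed):
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=30000-60000  # widen the allowed NodePort range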
Regarding the second question:
I suggest you create an issue on GitHub, because it looks like a bug; there are no restrictions on using NodePort in the documentation.