We are using the GCP ingress gateway for our Kubernetes cluster to expose a public access URL. Is there any way, or any specific annotation, that we can use to restrict the number of requests per second (RPS) to our services?
With the help of the annotation below you can set an RPS limit in the NGINX ingress controller.
nginx.ingress.kubernetes.io/limit-rps
In the example below, the rate limit is set to 5 requests per second (the Ingress name and backend service are placeholders).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rate-limited-ingress   # name assumed for illustration
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/limit-rps: "5"
spec:
  backend:
    serviceName: my-service    # placeholder backend service
    servicePort: 80
Istio has a custom resource called DestinationRule, and this resource has a field called spec.host, as shown below.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-destination-rule
spec:
  host: my-svc
I want all traffic to this service to go through the DestinationRule.
If I put the name of the service in host, will all traffic go through the DestinationRule, and not only requests that call the service by its name via service discovery? For example, does the DestinationRule still apply when the service is reached through an external DNS name?
I have a GCP cluster with API services, and I was using Ambassador 1.9 for edge routing. We have now decided to upgrade Ambassador to 2.3.2, so I followed the steps in the Ambassador docs for upgrading by running both Ambassador versions in parallel. But after the process finished, the backend service is unhealthy, which takes the Ingress down.
Multiple Deployments with corresponding Services.
Ambassador Edge Stack as the API gateway.
An Ingress exposing the Edge Stack service.
I'm a beginner with both Ambassador and Stack Overflow, so please let me know if more details are needed.
The solution that worked for me is to add a BackendConfig:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: ambassador-hc-config
spec:
  # https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features
  timeoutSec: 30
  connectionDraining:
    drainingTimeoutSec: 30
  logging:
    enable: true
    sampleRate: 1.0
  healthCheck:
    checkIntervalSec: 10
    timeoutSec: 10
    port: 8877
    type: HTTP
    requestPath: /ambassador/v0/check_alive
Apply this YAML, then add the annotation
cloud.google.com/backend-config: '{"default": "ambassador-hc-config"}'
to the Ambassador/Edge Stack service.
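For illustration, a rough sketch of how that annotation could sit on the Edge Stack service (the service name, namespace, selector and ports below are assumptions and may differ in your installation):
apiVersion: v1
kind: Service
metadata:
  name: edge-stack                 # assumed service name; match your install
  namespace: ambassador            # assumed namespace
  annotations:
    cloud.google.com/backend-config: '{"default": "ambassador-hc-config"}'
spec:
  type: NodePort                   # GKE Ingress backends are typically NodePort (or NEG-backed)
  selector:
    service: edge-stack            # assumed label selector
  ports:
  - name: http
    port: 80
    targetPort: 8080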
I use aws-load-balancer-eip-allocations to assign a static IP to a LoadBalancer Service using Kubernetes on AWS. The EKS version is v1.16.13. The doc at https://github.com/kubernetes/kubernetes/blob/v1.16.0/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L208-L211 (lines 210 and 211) says "static IP addresses for the NLB. Only supported on elbv2 (NLB)". I do not know what elbv2 is. I use the code below, but I did not get a static IP. Is elbv2 the problem? How do I use elbv2? Please also refer to https://github.com/kubernetes/kubernetes/pull/69263.
apiVersion: v1
kind: Service
metadata:
  name: ingress-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-0187de53333555567"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
Keep in mind that you need one EIP per subnet/zone, and by default EKS uses a minimum of two zones.
This is a working example you may find useful:
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-xxxxxxxxxxxxxxxx,subnet-yyyyyyyyyyyyyyyyy"
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-wwwwwwwwwwwwwwwww,eipalloc-zzzzzzzzzzzzzzzz"
I hope this is useful to you.
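For completeness, a sketch of a full Service using those annotations might look like the following (the name, selector, ports, subnet IDs and EIP allocation IDs are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: ingress-service                     # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
    service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-xxxxxxxxxxxxxxxx,subnet-yyyyyyyyyyyyyyyyy"
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-wwwwwwwwwwwwwwwww,eipalloc-zzzzzzzzzzzzzzzz"
spec:
  type: LoadBalancer                        # the annotations only take effect on a LoadBalancer Service
  selector:
    app: my-app                             # placeholder selector
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080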
Following this task I can access external services by defining ServiceEntry configurations. And following another task I can limit traffic to a service; that works inside the cluster. But I failed to limit traffic from a service in the cluster to external URLs like www.google.com.
This is my adapter configuration:
apiVersion: "config.istio.io/v1alpha2"
kind: memquota
metadata:
name: handler
namespace: samples
spec:
quotas:
- name: requestcount.quota.istio-system
maxAmount: 15
validDuration: 10s
and this is my quota configuration:
apiVersion: "config.istio.io/v1alpha2"
kind: quota
metadata:
name: requestcount
namespace: samples
spec:
dimensions:
source: source.labels["app"] | source.labels["svc"] | "unknown"
destination: dnsName("www.google.com") | uri("https://www.google.com") | "unknown"
How do I enable rate limits for external URLs in Istio?
You should direct the traffic through an egress gateway, and then apply the rate limiting there. The issue is that in Istio, policy enforcement is performed by the destination. In the case of external services, the destination is represented by an egress gateway.
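As a rough sketch, following the Istio egress gateway examples (the gateway name, namespace and port below are assumptions), the external host would first be exposed on an egress Gateway, and a VirtualService would then route traffic for www.google.com through that gateway so the quota can be enforced there:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway          # assumed name; matches the default egress gateway deployment
  namespace: istio-system
spec:
  selector:
    istio: egressgateway             # selects the default egress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - www.google.com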
I have a question about Kubernetes Ingress.
I want to use Ingress with my Amazon account and/or a private cloud, and I want to assign an external IP.
It is possible to assign an external IP for Services:
Services documentation - chapter on external IPs
but I cannot find a way to do that for Ingress: Ingress documentation.
My question is directed especially to the Kubernetes team.
A similar question was asked by Simon in this topic: How to force SSL for Kubernetes Ingress on GKE 2
but he asked about GKE, while I am interested in a private cloud and AWS.
Thank you in advance.
[UPDATE]
Guys, I found that my question may have already been answered in this topic.
Actually, the answer that @anigosa put there is specific to GCloud.
His solution won't work in a private cloud, nor in AWS. In my opinion, the reason is that he uses type: LoadBalancer (which cannot be used in a private cloud) and the loadBalancerIP property, which works only on GCloud (on AWS it causes the error: "Failed to create load balancer for service default/nginx-ingress-svc: LoadBalancerIP cannot be specified for AWS ELB").
Looking at this issue, it seems you can define an annotation on your Service and map it to an existing Elastic IP.
Something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # EIP allocations are only supported on NLB (elbv2)
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: <>
spec:
  type: LoadBalancer
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
Please note this will create a load balancer for this Service, not an Ingress.
As an Ingress is simply one Service (= one load balancer) handling requests for many other services, it should be possible to do something similar for an Ingress, but I couldn't find any docs for it.
There are two main ways you can do this. One is using a static IP annotation, as shown in Omer's answer (which is cloud specific and normally relies on the external IP being set up beforehand); the other is using an ingress controller (which is generally cloud agnostic).
The ingress controller will obtain an external IP on its service and then pass that to your ingress which will then use that IP as its own.
Traffic will then come into the cluster via the controller's service and the controller will route to your ingress.
Here's an example of the ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: my-ingress-class
spec:
  tls:
  - hosts:
    - ssl.somehost.com
  rules:
  - host: ssl.somehost.com
    http:
      paths:
      - backend:
          serviceName: backend-service
          servicePort: 8080
The line
kubernetes.io/ingress.class: my-ingress-class
tells the cluster we want only an ingress controller that handles this "class" of ingress traffic. You can have multiple ingress controllers in the cluster, each declaring that it handles a different class of ingress traffic, so when you install an ingress controller you also need to declare which ingress class you want it to handle.
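As a sketch, assuming the NGINX ingress controller (which exposes an --ingress-class flag; other controllers have their own mechanism), the class is declared in the controller's container args:
# excerpt from the ingress controller Deployment
containers:
- name: nginx-ingress-controller
  image: k8s.gcr.io/ingress-nginx/controller:v0.47.0   # example image tag
  args:
  - /nginx-ingress-controller
  - --ingress-class=my-ingress-class   # handle only Ingresses marked with this class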
Caveat: if you do not declare the ingress class on an Ingress resource, ALL the ingress controllers in the cluster will attempt to route traffic to that Ingress.
Now, if you want an external IP that is private, you can do that via the controller. For AWS and GCP there are annotations that tell the cloud provider you want an internal-only IP; you add the annotation to the LoadBalancer Service of the ingress controller.
For AWS:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
For GCP:
networking.gke.io/load-balancer-type: "Internal"
or (< Kubernetes 1.17)
cloud.google.com/load-balancer-type: "Internal"
Your Ingress will inherit the IP obtained by the ingress controller's LoadBalancer Service.
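As a rough illustration (the controller name, namespace, selector and ports are assumptions, not taken from any particular controller's install), the internal-LB annotation goes on the ingress controller's Service:
apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller           # placeholder controller service name
  namespace: ingress-system             # placeholder namespace
  annotations:
    # AWS example: request an internal load balancer
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-ingress-controller          # placeholder selector
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443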