I have a GCP cluster with API services and I was using Ambassador 1.9 for edge routing. Now we have decided to upgrade Ambassador to 2.3.2, so I followed the steps in the Ambassador docs for upgrading by running both Ambassador versions in parallel. But after the process finished, the backend service is unhealthy, which takes the Ingress down.
Multiple deployments with corresponding services.
Ambassador Edge Stack as API gateway
Ingress for exposing the edge stack service
I'm a beginner with both Ambassador and Stack Overflow, so please let me know if more details are needed.
The solution that worked for me was to add a BackendConfig:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: ambassador-hc-config
spec:
  # https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features
  timeoutSec: 30
  connectionDraining:
    drainingTimeoutSec: 30
  logging:
    enable: true
    sampleRate: 1.0
  healthCheck:
    checkIntervalSec: 10
    timeoutSec: 10
    port: 8877
    type: HTTP
    requestPath: /ambassador/v0/check_alive
Apply this YAML and add the annotation
cloud.google.com/backend-config: '{"default": "ambassador-hc-config"}'
to the ambassador/edge-stack service.
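For reference, the annotated Service could look roughly like this (a minimal sketch; the name, namespace, selector and ports are assumptions, so match them to your actual edge-stack Service and only add the annotation):

apiVersion: v1
kind: Service
metadata:
  name: edge-stack                # assumed Service name
  namespace: ambassador           # assumed namespace
  annotations:
    cloud.google.com/backend-config: '{"default": "ambassador-hc-config"}'
spec:
  type: NodePort                  # assumed; GKE Ingress backends are typically NodePort or NEG-annotated Services
  selector:
    service: edge-stack           # assumed pod selector
  ports:
  - name: http
    port: 80
    targetPort: 8080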
Related
I have deployed the AWS Load Balancer Controller on AWS EKS and I have created a k8s Ingress resource.
I am deploying a Java web application with a k8s Deployment. I want to make sure sticky sessions hold so that my application works.
I have read that if I set the annotation below then sticky sessions will work:
alb.ingress.kubernetes.io/target-type: ip
But I am seeing that the Ingress routes requests to a different replica each time, so logins fail because the session cookies are not persisting.
What am I missing here?
alb.ingress.kubernetes.io/target-type: ip is required,
but the annotation that enables stickiness is:
alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
You can also set the cookie duration:
alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=300
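Put together, an Ingress using these annotations might look roughly like this (a sketch only; the Ingress name, host, backend service name and port are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress            # assumed name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=300
spec:
  rules:
  - host: app.example.com         # assumed hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-service  # assumed backend service
            port:
              number: 80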
If you want to manage the sticky session at the K8s level, you can use sessionAffinity: ClientIP:
kind: Service
apiVersion: v1
metadata:
  name: service
spec:
  selector:
    app: app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000
I have multiple deployments of an RDP application running, and they are all exposed with ClusterIP services. I have the nginx-ingress controller in my k8s cluster, and to allow TCP I added the --tcp-services-configmap flag to the nginx-ingress controller deployment and also created a ConfigMap for it, shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  3389: "demo/rdp-service1:3389"
This exposes the "rdp-service1" service. I have 10 more such services which need to be exposed on the same port number, but if I add more services to the same ConfigMap like this:
...
data:
  3389: "demo/rdp-service1:3389"
  3389: "demo/rdp-service2:3389"
then the second entry overrides the previous one, and since I have also deployed external-dns in k8s, all the records created by the Ingress using host: ... start pointing to the deployment attached to the newly added service in the ConfigMap.
My final requirement is that as soon as I append a rule for a newly created deployment (RDP application) in the Ingress, it should start allowing TCP connections for it. Is there any way to achieve this? Or is there any other ingress controller available that can handle this kind of use case and can also easily be integrated with external-dns?
Note: I am using an AWS EKS cluster and Route53 with external-dns.
Posting this answer as a community wiki to explain some of the topics in the question as well as hopefully point to the solution.
Feel free to expand/edit it.
NGINX Ingress' main responsibility is to forward HTTP/HTTPS traffic. With the addition of tcp-services/udp-services it can also forward TCP/UDP traffic to the respective endpoints:
Kubernetes.github.io: Ingress nginx: User guide: Exposing tcp udp services
The main issue is that host-based routing for an Ingress resource in Kubernetes specifically targets HTTP/HTTPS traffic, not TCP (RDP).
You could achieve the following scenario:
Ingress controller:
3389 - RDP Deployment #1
3390 - RDP Deployment #2
3391 - RDP Deployment #3
There would be no host-based routing; it would be more like port-forwarding.
A side note!
This setup would also depend on the ability of the LoadBalancer to allocate ports (which could be limited due to cloud provider specification)
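As an illustration of that port-per-service approach, the tcp-services ConfigMap could look roughly like this (a sketch only; the namespaces and service names are assumptions based on the question):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # one external port per RDP deployment; backend service names are assumed
  3389: "demo/rdp-service1:3389"
  3390: "demo/rdp-service2:3389"
  3391: "demo/rdp-service3:3389"

The matching ports also have to be opened on the ingress-nginx controller's LoadBalancer Service, which is where the port limits mentioned in the side note come into play.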
As for a possible solution, which might not be so straightforward, I would take a look at the following resources:
Stackoverflow.com: Questions: Nginx TCP forwarding based on hostname
Doc.traefik.io: Traefik: Routing: Routers: Configuring TCP routers
Github.com: Bolkedebruin: Rdpgw
I'd also check the following links:
Aws.amazon.com: Quickstart: Architecture: Rd gateway - AWS specific
Docs.konghq.com: Kubernetes ingress controller: 1.2.X: Guides: Using tcpingress
Haproxy:
Haproxy.com: Documentation: Aloha: 12-0: Deployment guides: Remote desktop: RDP gateway
Haproxy.com: Documentation: Aloha: 10-5: Deployment guides: Remote desktop
Haproxy.com: Blog: Microsoft remote desktop services rds load balancing and protection
Actually, I really don't know why you are using that ConfigMap.
To my knowledge, the nginx-ingress controller accepts traffic on the same port and routes it based on host. So if you want to expose your applications on the same port, try using this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Chart.Name }}-ingress
  namespace: your-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: your-hostname
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          serviceName: {{ .Chart.Name }}-service
          servicePort: {{ .Values.service.nodeport.port }}
Looking at your requirement, I feel that you need a LoadBalancer rather than an Ingress.
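For example, a separate Service of type LoadBalancer per RDP deployment would expose TCP 3389 directly without involving the Ingress (a sketch; the names and selector labels are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: rdp-service1-lb           # assumed name
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: rdp-app1                 # assumed pod labels of the RDP deployment
  ports:
  - name: rdp
    protocol: TCP
    port: 3389
    targetPort: 3389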
We are trying to set up an egress gateway in a multi-cluster/multi-primary mesh
configuration where the egress gateway is located in only one cluster but is used from both.
diagram of desired setup
The use case is that the clusters are in different network zones and we want to be able
to route traffic to one zone transparently to the clients in the other zone.
We followed this guide in one cluster and it worked fine. However, we have trouble setting up the VirtualService in the second cluster
to use the egress gateway in the first cluster.
When deploying the following VirtualService to the second cluster, we get a 503 with cluster_not_found.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-cnn-through-egress-gateway
spec:
  hosts:
  - edition.cnn.com
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: edition.cnn.com
        port:
          number: 80
      weight: 100
The endpoints proxy config on a pod in the second cluster is missing the istio-egressgateway.istio-gateways.svc.cluster.local
endpoints (all other services are discovered and directed to the east-west gateway of the other cluster).
We believe that this is the reason this VirtualService doesn't work in the second cluster.
As a workaround we could redirect the egress traffic to the ingress gateway of the first cluster, but this
has the disadvantage that the traffic leaves and re-enters the mesh, which probably has an impact on tracing and monitoring.
Is it currently possible to set up a single egress gateway that can be used by all clusters in the mesh, or do we have to go with the workaround?
According to the comments, the solution should work as below:
To create a multi-cluster deployment you can use this tutorial. In this situation, cross-cluster traffic for normal services works fine. However, there is a problem with getting the traffic to the egress gateway routed via the east-west gateway. This can be solved with this example.
You should also change kind: VirtualService to kind: ServiceEntry in both clusters.
As Tobias Henkel mentioned:
I got it to work fine with the service entry if I target the ingress gateway on ports 80/443 which then dispatches further to the mesh external services.
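A rough sketch of such a ServiceEntry, assuming the first cluster's gateway is reachable at a known address (the name, address and port names below are placeholders to adapt to your setup):

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: cnn-via-cluster1-gateway  # assumed name
spec:
  hosts:
  - edition.cnn.com
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: tls
    protocol: TLS
  endpoints:
  # assumed externally reachable address of the first cluster's gateway
  - address: gateway.cluster-1.example.com
    ports:
      http: 80
      tls: 443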
You can also use Admiral to automate traffic routing.
See also:
multi cluster mesh automation using Admiral
multi cluster service mesh on GKE
tutorial on GKE to create a similar situation
How do you make Cloud Run for Anthos forward incoming HTTP/2 requests to a Cloud Run service as HTTP/2 instead of HTTP/1.1?
I'm using GCP with Cloud Run for Anthos to deploy a Java application that runs a gRPC server. The Cloud Run app is exposed publicly. I have also configured Cloud Run for Anthos with an SSL cert. When I try to use a gRPC client to call my service, the client sends the request over HTTP/2, which the load balancer accepts, but when the request is forwarded to my Cloud Run service (a Java application running a gRPC server), it comes in as HTTP/1.1 and gets rejected by the gRPC server. I assume somewhere between the k8s load balancer and my k8s pod the request is being forwarded as HTTP/1.1, but I don't see how to fix this.
Bringing together @whlee's answer and his very important follow-up comment, here's exactly what I had to do to get it to work.
You must deploy using the gcloud CLI in order to change the named port. The UI does not allow you to configure the port name. Deploying from a service yaml is currently a beta feature. To deploy, run: gcloud beta run services replace /path/to/service.yaml
In my case, my service was initially deployed using the GCP cloud console UI, so here are the steps I ran to export and replace.
Export my existing service (named hermes-grpc) to yaml file:
gcloud beta run services describe hermes-grpc --format yaml > hermes-grpc.yaml
Edit my exported yaml and make the following edits:
replaced:
ports:
- containerPort: 6565
with:
ports:
- name: h2c
  containerPort: 6565
deleted the following lines:
tcpSocket:
  port: 0
Deleted the name: line from the section:
spec:
  template:
    metadata:
      ...
      name:
Finally, redeploy service from edited yaml:
gcloud beta run services replace hermes-grpc.yaml
In the end my edited service yaml looked like this:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  annotations:
    client.knative.dev/user-image: interledger4j/hermes-server:latest
    run.googleapis.com/client-name: cloud-console
  creationTimestamp: '2020-01-09T00:02:29Z'
  generation: 3
  name: hermes-grpc
  namespace: default
  selfLink: /apis/serving.knative.dev/v1alpha1/namespaces/default/services/hermes-grpc
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: '2'
        autoscaling.knative.dev/minScale: '1'
        run.googleapis.com/client-name: cloud-console
    spec:
      containerConcurrency: 80
      containers:
      - image: interledger4j/hermes-server:latest
        name: user-container
        ports:
        - name: h2c
          containerPort: 6565
        readinessProbe:
          successThreshold: 1
        resources:
          limits:
            cpu: 500m
            memory: 384Mi
      timeoutSeconds: 300
  traffic:
  - latestRevision: true
    percent: 100
https://github.com/knative/docs/blob/master/docs/serving/samples/grpc-ping-go/README.md
Describes how to configure the named port to make HTTP/2 work.
I'm trying to set up my Kubernetes service as external by using type: LoadBalancer on AWS. After I created my service using kubectl I can see the change, but no ELB is created, not even asynchronously. Any hints on what could cause this? The pod I'm trying to expose runs a Docker image which exposes a web server on port 8001.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    name: my-service
spec:
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 8001
  selector:
    name: my-service
This was answered by Jan Garaj in Google Container Engine: Kubernetes is not exposing external IP after creating container regarding a GCE deployment, and the answer for AWS is the same: you need to wait a few minutes for the reconciler to kick in; it will notice that an ELB should be created, talk to the AWS APIs, and create it for you.
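A quick way to watch for this, reusing the service name from the question:

kubectl get service my-service --watch    # the EXTERNAL-IP column shows the ELB hostname once it has been created
kubectl describe service my-service       # the Events section shows errors if the cloud provider could not create the ELB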