How to use multiple service paths in AWS EKS Ingress - kubectl

I deployed all my resources in an Amazon EKS cluster, and now I want to access each service using an Ingress. I have 3 microservices. When I added only one service to the ingress YAML file, it worked; please find that code below.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: dummy.us-east-2.elb.amazonaws.com
    http:
      paths:
      - path: /
        backend:
          serviceName: user-api-service
          servicePort: 80
The above code works for me. I then changed the ingress file to support multiple paths; the changed code is below.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/$2"
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: dummy.elb.amazonaws.com
    http:
      paths:
      - path: /user(/|$)(.*)
        backend:
          serviceName: user-api-service
          servicePort: 80
After this I tried to access the service using the below link in Postman:
http://dummy.us-east-2.elb.amazonaws.com/user/api/user/register
but Postman throws a 404 error.
Can anyone please help me with this issue? Please ask if you need more information.
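For reference, a single Ingress rule can list several paths, one per backend service. Here is a minimal sketch of what that could look like with the regex/rewrite annotations from the question (order-api-service is a hypothetical name for a second microservice; note also that the host must match the address actually being requested, which in the failing manifest above is dummy.elb.amazonaws.com while Postman calls dummy.us-east-2.elb.amazonaws.com):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/$2"
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: dummy.us-east-2.elb.amazonaws.com   # must match the URL used in Postman
    http:
      paths:
      - path: /user(/|$)(.*)
        backend:
          serviceName: user-api-service
          servicePort: 80
      - path: /order(/|$)(.*)                 # hypothetical second microservice
        backend:
          serviceName: order-api-service
          servicePort: 80
Also worth remembering: with rewrite-target: "/$2", a request to /user/api/user/register is forwarded to the service as /api/user/register, so the service must expose that path.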

Related

How can I create different auth-types for different targets in an ingress controller?

I am deploying an EKS cluster to AWS and using the ALB ingress controller to point to my K8S service. The ingress spec is shown below.
There are two targets: path: /* and path: /es/*. I have also configured alb.ingress.kubernetes.io/auth-type to use Cognito as the authentication method.
My question is: how can I configure a different auth-type for each target? I'd like to use Cognito for /* and none for /es/*. How can I achieve that?
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sidecar
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: sidecar
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.order: '1'
    alb.ingress.kubernetes.io/healthcheck-path: /health
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    # Auth
    alb.ingress.kubernetes.io/auth-type: cognito
    alb.ingress.kubernetes.io/auth-idp-cognito: '{"userPoolARN":"xxxx","userPoolClientID":"xxxx","userPoolDomain":"xxxx"}'
    alb.ingress.kubernetes.io/auth-scope: 'email openid aws.cognito.signin.user.admin'
    alb.ingress.kubernetes.io/certificate-arn: xxxx
spec:
  rules:
  - http:
      paths:
      - path: /es/*
        backend:
          serviceName: sidecar-entrypoint
          servicePort: 8080
      - path: /*
        backend:
          serviceName: server-entrypoint
          servicePort: 8081
This question comes up a lot, so I guess it needs to be PR-ed into their documentation.
Ingress resources are cumulative, so you can separate your paths into two separate Ingress resources in order to annotate each one differently. They will be combined with all other Ingress resources across the entire cluster to form the final config.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sidecar-star
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    # ... and the rest ...
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: server-entrypoint
          servicePort: 8081
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sidecar-es
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    # ... and the rest ...
spec:
  rules:
  - http:
      paths:
      - path: /es/*
        backend:
          serviceName: sidecar-entrypoint
          servicePort: 8080
The solution above didn't work for me. If you want, you can instead put each auth-related annotation on your Service manifests, which is more human-readable than writing more than one Ingress object and combining them all together. See the code below:
apiVersion: v1
kind: Service
metadata:
  name: admin-webapp
  annotations:
    alb.ingress.kubernetes.io/auth-type: cognito
    alb.ingress.kubernetes.io/auth-scope: openid
    alb.ingress.kubernetes.io/auth-session-timeout: '3600'
    alb.ingress.kubernetes.io/auth-session-cookie: AWSELBAuthSessionCookie
    alb.ingress.kubernetes.io/auth-idp-cognito: '{"UserPoolArn": "arn:aws:cognito-idp:us-east-1:xxx:userpool/xxxx","UserPoolClientId":"xxx","UserPoolDomain":"xxx"}'
    alb.ingress.kubernetes.io/auth-on-unauthenticated-request: authenticate
spec:
  selector:
    app: admin-webapp-deployment
  ports:
  - name: http
    port: 80
  type: NodePort
I had the same issue, and this code solved it :)

Redirecting traffic to an external URL

Updates based on comments:
Let's say there's an API hosted at hello.company1.com in another GCP project...
I would like that when someone visits the URL abc.company.com, they are served traffic from hello.company1.com, something similar to an API gateway...
It could easily be done with an API gateway; I am just trying to figure out whether it's possible with a K8S Service and Ingress.
I have created a Cloud DNS zone as abc.company.com.
When someone visits abc.company.com/google, I would like the request to be forwarded to an external URL, let's say google.com.
Could this be achieved by creating a Service of type ExternalName and an Ingress with host name abc.company.com?
kind: Service
apiVersion: v1
metadata:
  name: test-srv
spec:
  type: ExternalName
  externalName: google.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - host: abc.company.com
  - http:
      paths:
      - path: /google
        backend:
          serviceName: test-srv
It's possible to achieve what you want; however, you will need to use the Nginx Ingress to do it, as you will need a specific annotation: nginx.ingress.kubernetes.io/upstream-vhost.
This is well described in this GitHub issue based on storage.googleapis.com.
apiVersion: v1
kind: Service
metadata:
  name: google-storage-buckets
spec:
  type: ExternalName
  externalName: storage.googleapis.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proxy-assets-ingress
  annotations:
    kubernetes.io/ingress.class: nginx-ingress
    nginx.ingress.kubernetes.io/rewrite-target: /[BUCKET_NAME]/[BUILD_SHA]
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/upstream-vhost: "storage.googleapis.com"
spec:
  rules:
  - host: abc.company.com
    http:
      paths:
      - path: /your/path
        backend:
          serviceName: google-storage-buckets
          servicePort: 443
Depending on your needs, if you want to serve it over plain HTTP instead of HTTPS, you would need to change servicePort to 80 and remove the annotation nginx.ingress.kubernetes.io/backend-protocol: "HTTPS".
For additional details, you can check this other similar Stack Overflow question.
Please remember not to start both spec.rules.host and spec.rules.http with a - in the same manifest. You should use - only with http if you don't have host in your configuration.
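Applied to the manifest from the question, that means the host and http keys belong under a single list item. A hedged sketch of the corrected rules block (the servicePort value is an assumption, since the original manifest omitted it):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - host: abc.company.com
    http:
      paths:
      - path: /google
        backend:
          serviceName: test-srv
          servicePort: 80   # assumed; the original manifest did not specify a port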

Ingress always showing default backend - 404

I am trying to install the Cyclos mobile app on GCP using the demo given on YouTube at this URL: https://www.youtube.com/watch?v=LmfdhOXK_ik
Everything was set up exactly as in the video, but when I try to access the setup in a browser it always shows default backend - 404. I have tried everything suggested in several previous Stack Overflow questions, but I still get the same error.
Any idea how to fix this?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencypt-staging
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"cert-manager.io/cluster-issuer":"letsencypt-staging","kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/proxy-connect-timeout":"3600"},"name":"cyclos-ingress-nginx-https","namespace":"cyclos-name-space"},"spec":{"backend":{"serviceName":"default-http-backend","servicePort":80},"rules":[{"host":"ip-address.xip.io","http":{"paths":[{"backend":{"serviceName":"cyclos-app-stateful","servicePort":80},"path":"/*"}]}}],"tls":[{"hosts":["ip-address.xip.io"],"secretName":"ip-address.xip.io-tls-secret"}]}}
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2020-09-29T07:00:01Z"
  generation: 11
  name: cyclos-ingress-nginx-https
  namespace: cyclos-name-space
  resourceVersion: "643221534"
  selfLink: /apis/extensions/v1beta1/namespaces/cyclos-name-space/ingresses/cyclos-ingress-nginx-https
  uid: uid
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: ip-address.xip.io
    http:
      paths:
      - backend:
          serviceName: cyclos-app-stateful
          servicePort: 80
        path: /*
  tls:
  - hosts:
    - ip-address.xip.io
    secretName: ip-address.xip.io-tls-secret
status:
  loadBalancer:
    ingress:
    - ip: IP
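One possible cause, offered here as an assumption: path: /* is GCE-ingress-style syntax, but this Ingress is annotated with kubernetes.io/ingress.class: nginx, and the nginx controller matches paths as plain prefixes (or regexes only when explicitly enabled), so /* may match no requests and everything falls through to default-http-backend. A sketch of the rule rewritten for nginx:
spec:
  rules:
  - host: ip-address.xip.io
    http:
      paths:
      - path: /   # nginx-style prefix instead of GCE-style /*
        backend:
          serviceName: cyclos-app-stateful
          servicePort: 80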

Health check problem with setting up GKE with istio-gateway

Goal
I'm trying to set up a
Cloud LB -> GKE [istio-gateway -> my-service]
This was working before; however, I had to recreate the cluster 2 days ago and ran into this problem. Maybe some version change?
This is my ingress manifest file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "my-dev-ingress"
  namespace: "istio-system"
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-dev-gclb-ip"
    ingress.gcp.kubernetes.io/pre-shared-cert: "my-dev-cluster-cert-05"
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: "istio-ingressgateway"
    servicePort: 80
Problem
The health check performed by the Cloud LB fails. The backend service created by the Ingress creates a /:80 default health check.
What I have tried
1) I tried to set the health check generated by the GKE ingress to point to the istio-gateway status port 15020 in the backend config console. The health check passed for a bit, until the backend config reverted itself to the original /:80 health check it created. I even tried to delete that health check, and it just created another one.
2) I also tried using an Istio VirtualService to route the health check to port 15020, as shown here, without much success.
3) I also tried just routing everything in the VirtualService to the health check port:
hosts:
- "*"
gateways:
- my-web-gateway
http:
- match:
  - method:
      exact: GET
    uri:
      exact: /
  route:
  - destination:
      host: istio-ingress.gke-system.svc.cluster.local
      port:
        number: 15020
4) Most of the search results I found say that setting a readinessProbe in the Deployment (see the sketch below) should tell the ingress to set the proper health check. However, all of my services are behind the istio-gateway, so I can't really do the same.
I'm very lost right now and would really appreciate it if anyone could point me in the right direction. Thanks.
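For reference, the readinessProbe pattern from point 4 normally lives on an ordinary Deployment, as in the hedged sketch below (the workload name, image, and health path are hypothetical); the GKE ingress derives its health check from this probe, which is exactly what is not available when everything sits behind the istio-ingressgateway:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service            # hypothetical workload
spec:
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/my-service:latest   # hypothetical image
        ports:
        - containerPort: 8080
        readinessProbe:        # GKE ingress copies this into the LB health check
          httpGet:
            path: /healthz     # hypothetical health endpoint
            port: 8080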
I got it working with GKE 1.20.4-gke.2200 and Istio 1.9.2. The documentation for this is non-existent, or I have not found it. You have to add an annotation to the istio-ingressgateway Service to use a BackendConfig when using the "istioctl install -f values.yaml" command:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          cloud.google.com/backend-config: '{"default": "istio-ingressgateway-config"}'
Then you have to create the BackendConfig with the correct health check port:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: istio-ingressgateway-config
  namespace: istio-system
spec:
  healthCheck:
    checkIntervalSec: 30
    port: 15021
    type: HTTP
    requestPath: /healthz/ready
With this, the ingress should automatically change the load balancer health check configuration while pointing at Istio port 80:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.global-static-ip-name: web
    networking.gke.io/managed-certificates: "web"
spec:
  rules:
  - host: test.example.com
    http:
      paths:
      - path: "/*"
        pathType: Prefix
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 80
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-web
  namespace: istio-system
spec:
  hosts:
  - test.example.com
  gateways:
  - web
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        port:
          number: 8080 # internal service port
        host: "internal-service.service-namespace.svc.cluster.local"
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - test.example.com
You could also set hosts to "*" in the VirtualService and Gateway, as in the sketch below.
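A minimal sketch of that wildcard variant for the VirtualService (the Gateway's servers[].hosts entry changes the same way):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-web
  namespace: istio-system
spec:
  hosts:
  - "*"          # match any host instead of test.example.com
  gateways:
  - web
  # ... http routes as above ...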

Problems configuring Ingress with cookie affinity

I was looking for how to use cookie affinity in GKE, using an Ingress for that.
I found the following link explaining how to do it: https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service
I created a YAML file with the following:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-bsc-deployment
spec:
  selector:
    matchLabels:
      purpose: bsc-config-demo
  replicas: 3
  template:
    metadata:
      labels:
        purpose: bsc-config-demo
    spec:
      containers:
      - name: hello-app-container
        image: gcr.io/google-samples/hello-app:1.0
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-bsc-backendconfig
spec:
  timeoutSec: 40
  connectionDraining:
    drainingTimeoutSec: 60
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 50
---
apiVersion: v1
kind: Service
metadata:
  name: my-bsc-service
  labels:
    purpose: bsc-config-demo
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bsc-backendconfig"}}'
spec:
  type: NodePort
  selector:
    purpose: bsc-config-demo
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-bsc-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-bsc-service
          servicePort: 80
---
Everything seems to go well. When I inspect the created Ingress, I see 2 backend services. One of them has the cookie configured, but the other doesn't.
If I create the Deployment and then create the Service and Ingress from GCP's console, only one backend service appears.
Does somebody know why I get 2 using a YAML file, but only one doing it from the console?
Thanks in advance
Oscar
Your definition is good.
The reason you have two backends is that your Ingress does not define a default backend. GCE LBs require a default backend, so during LB creation a second backend is added to act as the default (this backend does nothing but serve 404 responses). The default backend does not use the BackendConfig.
This shouldn't be a problem, but if you want to ensure only your backend is used, define a default backend in your Ingress definition by adding spec.backend:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-bsc-ingress
spec:
  backend:
    serviceName: my-bsc-service
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-bsc-service
          servicePort: 80
But, like I said, you don't NEED to define this; the additional backend won't really come into play, and no session affinity is required for it (it is backed by only a single pod anyway). If you are curious, the default backend pod in question is called l7-default-backend-[replicaSet_hash]-[pod_hash] in the kube-system namespace.
You can enable the cookies on the Ingress like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-sticky
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: ingress.example.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /
You can create the Service like this:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: ClientIP
If you are using the Traefik ingress instead of Nginx or the default GKE ingress, you can write the Service like this:
apiVersion: v1
kind: Service
metadata:
  name: session-affinity
  labels:
    app: session-affinity
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
    traefik.ingress.kubernetes.io/session-cookie-name: "sticky"
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: session-affinity-demo