We want Istio to allow incoming traffic to a service only from a particular namespace. How can we do this with Istio? We are running Istio 1.1.3.
Update:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: test-app-ingress
namespace: test-ns
spec:
podSelector:
matchLabels:
app: testapp
ingress:
- ports:
- protocol: TCP
port: 80
from:
- podSelector:
matchLabels:
istio: ingress
This did not work; I am still able to access the service from other namespaces as well. Next I tried:
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
name: external-api-caller
namespace: test-ns
spec:
rules:
- services: ["testapp"]
methods: ["*"]
constraints:
- key: "destination.labels[version]"
values: ["v1", "v2"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
name: external-api-caller
namespace: test-ns
spec:
subjects:
- properties:
source.namespace: "default"
roleRef:
kind: ServiceRole
name: "external-api-caller"
I am able to access the service from all namespaces, whereas I expected it to be allowed only from the "default" namespace.
I'm not sure if this is possible for a particular namespace, but it does work with labels.
You can create a network policy in Istio; this is nicely explained in Traffic Routing in Kubernetes via Istio and Envoy Proxy.
...
ingress:
- from:
- podSelector:
matchLabels:
zone: trusted
...
In the example, only pods with the label zone: trusted will be allowed to make incoming connections to the pod.
You can read about Using Network Policy with Istio.
I would also recommend reading Security Concepts in Istio as well as Denials and White/Black Listing.
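For the denial side, here is a minimal sketch of a Mixer denier policy, adapted from the denial example in the Istio 1.1 docs; the handler, instance, and rule names are illustrative, and the match expression is an assumption you would adjust to your own labels and namespaces:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: denyhandler
spec:
  # The denier adapter rejects matching requests with the given status
  compiledAdapter: denier
  params:
    status:
      code: 7
      message: Not allowed
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: denyrequest
spec:
  # checknothing carries no data; the rule's match expression does the filtering
  compiledTemplate: checknothing
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: deny-from-other-namespaces
spec:
  # Illustrative match: deny calls to testapp that do not originate from the "default" namespace
  match: destination.labels["app"] == "testapp" && source.namespace != "default"
  actions:
  - handler: denyhandler
    instances: [ denyrequest ]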
Hope this helps you.
Using a k8s Network Policy: yes, it is possible. The example posted in the question does not restrict traffic from other namespaces. In the ingress rule you have to use a namespaceSelector to specify the namespace from which you want to allow traffic. In the example below, traffic from a namespace with the label ns-group: prod-ns will be allowed to reach pods with the label app: testapp on port 80 over TCP.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: test-app-ingress
namespace: test-ns
spec:
podSelector:
matchLabels:
app: testapp
ingress:
- ports:
- protocol: TCP
port: 80
from:
- namespaceSelector:
matchLabels:
ns-group: prod-ns
Using an Istio whitelisting policy: you can go through the whitelisting policy examples and the attribute vocabulary.
Below is an example:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
name: whitelist-namespace
spec:
compiledAdapter: listchecker
params:
overrides: ["prod-ns"]
blacklist: false
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
name: source-ns-instance
spec:
compiledTemplate: listentry
params:
value: source.namespace
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
name: rule-policy-1
spec:
match: destination.labels["app"] == "testapp" && destination.namespace == "test-ns"
actions:
- handler: whitelist-namespace
instances: [ source-ns-instance ]
Related
I need help connecting a private EKS cluster (from now on Cluster 1) through AWS App Mesh and Cloud Map to another public/private cluster (from now on Cluster 2).
I have managed to get Cluster 2 to connect to Cluster 1 through the mesh by running 'curl core.app.svc.cluster.local:8080', but I can't do it the other way around.
To clarify: if I run 'curl core.app.svc.cluster.local:9000' I get a connection error, because nothing is listening on that port.
I have created endpoints for the mesh on the private networks of Cluster 1, and the security group of Cluster 1 has access to Cluster 2 through port 8080.
I have also created a virtual router and a virtual service for Cluster 2.
In short, I've created the same thing for both clusters.
The problem is that if I run 'curl front.app.svc.cluster.local:8080' from inside a pod in Cluster 1, it does not connect at all. I have checked /etc/resolv.conf and the DNS servers are there, but the result is:
curl: (6) Could not resolve host: front.app.svc.cluster.local:8080
If I run 'traceroute front.app.svc.cluster.local:8080' it responds with:
traceroute: bad address 'front.app.svc.cluster.local:8080'
Here are my settings:
CLUSTER 1 (private)
apiVersion: appmesh.k8s.aws/v1beta2
kind: Mesh
metadata:
name: app
spec:
namespaceSelector:
matchLabels:
mesh: app
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
name: core
namespace: app
spec:
podSelector:
matchLabels:
app: core
version: v1
listeners:
- portMapping:
port: 8080
protocol: http
serviceDiscovery:
awsCloudMap:
namespaceName: app.pvt.aws.local
serviceName: core
backends:
- virtualService:
virtualServiceARN: arn:aws:appmesh:eu-west-2:238523995933:mesh/app/virtualService/front.app.svc.cluster.local
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
name: core
namespace: app
spec:
awsName: core.app.svc.cluster.local
provider:
virtualRouter:
virtualRouterRef:
name: core-router
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
namespace: app
name: core-router
spec:
listeners:
- portMapping:
port: 8080
protocol: http
routes:
- name: core-route
httpRoute:
match:
prefix: /
action:
weightedTargets:
- virtualNodeRef:
name: core
weight: 1
CLUSTER 2 (public/private)
apiVersion: appmesh.k8s.aws/v1beta2
kind: Mesh
metadata:
name: app
spec:
namespaceSelector:
matchLabels:
mesh: app
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
name: front
namespace: app
spec:
awsName: front.app.svc.cluster.local
provider:
virtualRouter:
virtualRouterRef:
name: front-router
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
namespace: app
name: front-router
spec:
listeners:
- portMapping:
port: 8080
protocol: http
routes:
- name: front-route
httpRoute:
match:
prefix: /
action:
weightedTargets:
- virtualNodeRef:
name: front
weight: 1
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
name: front
namespace: app
spec:
podSelector:
matchLabels:
app: front
listeners:
- portMapping:
port: 8080
protocol: http
serviceDiscovery:
awsCloudMap:
namespaceName: app.pvt.aws.local
serviceName: front
backends:
- virtualService:
virtualServiceARN: arn:aws:appmesh:eu-west-2:238523995933:mesh/app/virtualService/core.app.svc.cluster.local
Could you help me understand why it works in one direction but not the other?
Thanks in advance.
Here is my setup.
I have 2 AWS accounts.
Applications account
Monitoring account
The Applications account has EKS + Istio + application-related microservices + Promtail agents.
The Monitoring account has a centralized logging system within EKS + Istio (Grafana, Prometheus, and Loki pods running).
From the Applications account, I want to push logs to Loki in the Monitoring account. I tried exposing the Loki service outside the Monitoring account, but I am facing issues setting the Loki URL as https://<DNS_URL>/loki. I tried this change using the suggestions here and here, but it is not working for me. I have installed the loki-stack from this url.
The question is: how can I access the Loki URL from the Applications account so that it can be configured in Promtail in the Applications account?
Please note both accounts are running pods within EKS, not standalone Loki or Promtail.
Thanks and regards.
apiVersion: v1
kind: Service
metadata:
annotations:
meta.helm.sh/release-name: loki
meta.helm.sh/release-namespace: monitoring
creationTimestamp: "2021-10-25T14:59:20Z"
labels:
app: loki
app.kubernetes.io/managed-by: Helm
chart: loki-2.5.0
heritage: Helm
release: loki
name: loki
namespace: monitoring
resourceVersion: "18279654"
uid: 7eba14cb-41c9-445d-bedb-4b88647f1ebc
spec:
clusterIP: 172.20.217.122
clusterIPs:
- 172.20.217.122
ports:
- name: metrics
port: 80
protocol: TCP
targetPort: 3100
selector:
app: loki
release: loki
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
generation: 14
name: grafana-vs
namespace: monitoring
resourceVersion: "18256422"
uid: e8969da7-062c-49d6-9152-af8362c08016
spec:
gateways:
- my-gateway
hosts:
- '*'
http:
- match:
- uri:
prefix: /grafana/
name: grafana-ui
rewrite:
uri: /
route:
- destination:
host: prometheus-operator-grafana.monitoring.svc.cluster.local
port:
number: 80
- match:
- uri:
prefix: /loki
name: loki-ui
rewrite:
uri: /loki
route:
- destination:
host: loki.monitoring.svc.cluster.local
port:
number: 80
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"my-gateway","namespace":"monitoring"},"spec":{"selector":{"istio":"ingressgateway"},"servers":[{"hosts":["*"],"port":{"name":"http","number":80,"protocol":"HTTP"}}]}}
creationTimestamp: "2021-10-18T12:28:05Z"
generation: 1
name: my-gateway
namespace: monitoring
resourceVersion: "16618724"
uid: 9b254a22-958c-4cc4-b426-4e7447c03b87
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
---
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: ip
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"alb.ingress.kubernetes.io/scheme":"internal","alb.ingress.kubernetes.io/target-type":"ip","kubernetes.io/ingress.class":"alb"},"name":"ingress-alb","namespace":"istio-system"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"istio-ingressgateway","servicePort":80},"path":"/*"}]}}]}}
kubernetes.io/ingress.class: alb
finalizers:
- ingress.k8s.aws/resources
generation: 1
name: ingress-alb
namespace: istio-system
resourceVersion: "4447931"
uid: 74b31fba-0f03-41c6-a63f-6a10dee8780c
spec:
rules:
- http:
paths:
- backend:
service:
name: istio-ingressgateway
port:
number: 80
path: /*
pathType: ImplementationSpecific
status:
loadBalancer:
ingress:
- hostname: internal-k8s-istiosys-ingressa-25a256ef4d-1368971909.us-east-1.elb.amazonaws.com
kind: List
metadata:
resourceVersion: ""
selfLink: ""
The ingress is associated with AWS ALB.
I want to access Loki from ALB URL like http(s)://my-alb-url/loki
I hope I have provided the required details now.
Let me know. Thanks.
...how can I access loki URL from applications account so that it can be configured in promtail in applications a/c?
You didn't describe what issue you hit when using the external LB above, which should work. In any case, since that method goes over the Internet, the security risk is higher and there is egress cost, considering the volume of logging. You can use AWS PrivateLink in this case; see page 16, Shared Services. Your Promtail will then use the endpoint's ENI DNS name as the Loki target.
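For reference, a minimal sketch of what the Promtail side could look like once a PrivateLink (VPC endpoint) connection exists. The endpoint hostname below is a placeholder, not a real value; the path is Loki's standard push endpoint, and the scheme/port depend on what the endpoint's listener actually exposes:
# Promtail configuration snippet (clients section).
# The VPC endpoint DNS name is a placeholder - substitute the interface
# endpoint created for the Loki service in the Monitoring account.
clients:
  - url: http://vpce-xxxxxxxx.vpce-svc-xxxxxxxx.us-east-1.vpce.amazonaws.com/loki/api/v1/push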
I am trying to set up ExternalDNS on EKS from a manifest file.
I created an EKS cluster and created 3 Fargate profiles: default, kube-system, and dev.
The CoreDNS pods are up and running.
I then installed AWS Load Balancer Controller by following this doc.
https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
The load balancer controller came up in kube-system.
I then installed external-dns deployment using the following manifest file.
apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
namespace: kube-system
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxxxxx:role/eks-externaldnscontrollerRole-XST756O4A65B
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: external-dns
rules:
- apiGroups: [""]
resources: ["services"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
namespace: kube-system
spec:
selector:
matchLabels:
app: external-dns
strategy:
type: Recreate
template:
metadata:
labels:
app: external-dns
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: bitnami/external-dns:0.7.1
args:
- --source=service
- --source=ingress
#- --domain-filter=xxxxxxxxxx.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
- --provider=aws
#- --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
- --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
- --registry=txt
- --txt-owner-id=my-identifier
#securityContext:
# fsGroup: 65534
I used both namespaces, kube-system and dev, for external-dns; both came up fine.
I then deployed the application and ingress manifest files, again using both namespaces, kube-system and dev.
apiVersion: apps/v1
kind: Deployment
metadata:
name: app1-nginx-deployment
labels:
app: app1-nginx
namespace: kube-system
spec:
replicas: 2
selector:
matchLabels:
app: app1-nginx
template:
metadata:
labels:
app: app1-nginx
spec:
containers:
- name: app1-nginx
image: kube-nginxapp1:1.0.0
ports:
- containerPort: 80
resources:
requests:
memory: "128Mi"
cpu: "500m"
limits:
memory: "500Mi"
cpu: "1000m"
---
apiVersion: v1
kind: Service
metadata:
name: app1-nginx-nodeport-service
labels:
app: app1-nginx
namespace: kube-system
annotations:
#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer
alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html
spec:
type: NodePort
selector:
app: app1-nginx
ports:
- port: 80
targetPort: 80
---
# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-usermgmt-restapp-service
labels:
app: usermgmt-restapp
namespace: kube-system
annotations:
# Ingress Core Settings
alb.ingress.kubernetes.io/scheme: internet-facing
kubernetes.io/ingress.class: alb
# Health Check Settings
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer
#alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status
alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
alb.ingress.kubernetes.io/success-codes: '200'
alb.ingress.kubernetes.io/healthy-threshold-count: '2'
alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
## SSL Settings
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0xxxxxxxxxx:certificate/0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
#alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used)
# SSL Redirect Setting
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
# External DNS - For creating a Record Set in Route53
external-dns.alpha.kubernetes.io/hostname: palb.xxxxxxx.com
# For Fargate
alb.ingress.kubernetes.io/target-type: ip
spec:
rules:
- http:
paths:
- path: /* # SSL Redirect Setting
pathType: ImplementationSpecific
backend:
service:
name: ssl-redirect
port:
name: use-annotation
- path: /*
pathType: ImplementationSpecific
backend:
service:
name: app1-nginx-nodeport-service
port:
number: 80
All pods came up fine, but it is not dynamically registering the DNS alias for the ALB.
Can you please guide me on what I am doing wrong?
First, check that the ingress itself works. Check the AWS load balancers and load balancer target groups; the target group targets should be active.
If you do a kubectl get ingress, this should also output the DNS name of the load balancer that was created.
Use curl to check that this URL works!
The annotation external-dns.alpha.kubernetes.io/hostname: palb.xxxxxxx.com does not work for ingresses; it is only valid for services. But you don't need it. Just specify the host field in the ingress spec.rules. In your example there is no such property; specify it!
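A minimal sketch of that shape, reusing the names from your manifest and keeping only the annotations relevant here (in practice you would merge the host field into your existing Ingress rather than replacing it; the host value is whatever record you want ExternalDNS to create):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-usermgmt-restapp-service
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - host: palb.xxxxxxx.com   # ExternalDNS reads this host and creates the Route53 record
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80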
So, I followed the AWS docs on how to set up an EKS cluster with Fargate using the eksctl tool. That all went smoothly, but when I get to the part where I deploy my actual app, I get no endpoints and the ingress controller has no address associated with it. As seen here:
NAME              HOSTS   ADDRESS   PORTS   AGE
testapp-ingress   *                 80      129m
So, I can't hit it externally. But the test app (the 2048 game) did get an ELB address associated with its ingress. I thought it might be the subnet tags, as suggested here, and my subnets weren't tagged the right way, so I tagged them the way suggested in that article. Still no luck.
This is the initial article I followed to get set up. I've performed all the steps and only hit a wall with the alb: https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-next-steps
This is the alb article I've followed: https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
I followed the steps to deploy the sample app 2048 and that works just fine. I've made my configs very similar and it should work. I've followed all of the steps. Here are my old configs, new config below:
deployment yaml>>>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: "testapp-deployment"
namespace: "testapp-qa"
spec:
selector:
matchLabels:
app: "testapp"
replicas: 5
template:
metadata:
labels:
app: "testapp"
spec:
containers:
- image: xxxxxxxxxxxxxxxxxxxxxxxxtestapp:latest
imagePullPolicy: Always
name: "testapp"
ports:
- containerPort: 80
---
service yaml>>>
apiVersion: v1
kind: Service
metadata:
name: "testapp-service"
namespace: "testapp-qa"
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
type: NodePort
selector:
app: "testapp"
---
ingress yaml >>>
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "testapp-ingress"
namespace: "testapp-qa"
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
labels:
app: testapp-ingress
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: "testapp-service"
servicePort: 80
---
namespace yaml>>>
apiVersion: v1
kind: Namespace
metadata:
name: "testapp-qa"
Here are some of the logs from the ingress controller>>
E0316 22:32:39.776535 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxxx" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp-ingress"}
E0316 22:36:28.222391 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxxx" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp-ingress"}
Per the suggestion in the comments from @Michael Hausenblas, I've added an annotation to my service for the ALB ingress.
Now that my ingress controller is using the correct ELB, I checked the logs because I still can't hit my app's /healthcheck. The logs:
E0317 16:00:45.643937 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxx-3a7d-4794-95fb-a18835abe0d3" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp"}
I0317 16:00:47.868939 1 rules.go:82] testapp-qa/testapp-ingress: modifying rule 1 on arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxx:listener/app/xxxxxxxxxxx-testappqa-testappin-b879/xxxxxxxxxxx/6b41c0d3ce97ae6b
I0317 16:00:47.890674 1 rules.go:98] testapp-qa/testapp-ingress: rule 1 modified with conditions [{ Field: "path-pattern", Values: ["/*"] }]
Update
I've updated my config. I don't have any more errors, but I'm still unable to hit my endpoints to test whether my app is accepting traffic. It might be something to do with Fargate, or something on the AWS side that I'm not seeing. Here's my updated config:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: "testapp"
namespace: "testapp-qa"
spec:
selector:
matchLabels:
app: "testapp"
replicas: 5
template:
metadata:
labels:
app: "testapp"
spec:
containers:
- image: 673312057223.dkr.ecr.us-east-1.amazonaws.com/wood-testapp:latest
imagePullPolicy: Always
name: "testapp"
ports:
- containerPort: 9898
---
apiVersion: v1
kind: Service
metadata:
name: "testapp"
namespace: "testapp-qa"
annotations:
alb.ingress.kubernetes.io/target-type: ip
spec:
ports:
- port: 80
targetPort: 9898
protocol: TCP
name: http
type: NodePort
selector:
app: "testapp"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "testapp-ingress"
namespace: "testapp-qa"
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/healthcheck-path: /healthcheck
labels:
app: testapp
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: "testapp"
servicePort: 80
---
apiVersion: v1
kind: Namespace
metadata:
name: "testapp-qa"
In your service, try adding the following annotation:
annotations:
alb.ingress.kubernetes.io/target-type: ip
You'd also need to explicitly tell the Ingress resource, via the alb.ingress.kubernetes.io/healthcheck-path annotation, where and how to perform the health checks for the target group. See the ALB Ingress controller docs for the annotation semantics.
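Putting both suggestions together, a minimal sketch using the resource names from the question (the /healthcheck path is taken from your updated config; adjust it to whatever your app actually serves):
apiVersion: v1
kind: Service
metadata:
  name: "testapp"
  namespace: "testapp-qa"
  annotations:
    # Register the Fargate pod IPs directly in the ALB target group
    alb.ingress.kubernetes.io/target-type: ip
spec:
  type: NodePort
  selector:
    app: "testapp"
  ports:
    - name: http
      port: 80
      targetPort: 9898
      protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "testapp-ingress"
  namespace: "testapp-qa"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Tell the ALB target group where to probe the pods
    alb.ingress.kubernetes.io/healthcheck-path: /healthcheck
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: "testapp"
              servicePort: 80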
I was looking for how to use cookie affinity in GKE, using Ingress for that.
I've found the following link to do it: https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service
I've created a YAML manifest with the following:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-bsc-deployment
spec:
selector:
matchLabels:
purpose: bsc-config-demo
replicas: 3
template:
metadata:
labels:
purpose: bsc-config-demo
spec:
containers:
- name: hello-app-container
image: gcr.io/google-samples/hello-app:1.0
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
name: my-bsc-backendconfig
spec:
timeoutSec: 40
connectionDraining:
drainingTimeoutSec: 60
sessionAffinity:
affinityType: "GENERATED_COOKIE"
affinityCookieTtlSec: 50
---
apiVersion: v1
kind: Service
metadata:
name: my-bsc-service
labels:
purpose: bsc-config-demo
annotations:
beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bsc-backendconfig"}}'
spec:
type: NodePort
selector:
purpose: bsc-config-demo
ports:
- port: 80
protocol: TCP
targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-bsc-ingress
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: my-bsc-service
servicePort: 80
---
Everything seems to go well. When I inspect the created Ingress I see 2 backend services. One of them has the cookie configured, but the other doesn't.
If I create the deployment and then create the Service and Ingress from GCP's console, only one backend service appears.
Does anybody know why I get two backend services when using a YAML file, but only one when doing it from the console?
Thanks in advance
Oscar
Your definition is good.
The reason you have two backends is that your ingress does not define a default backend. GCE load balancers require a default backend, so during LB creation a second backend is added to the LB to act as the default (this backend does nothing but serve 404 responses). The default backend does not use the BackendConfig.
This shouldn't be a problem, but if you want to ensure only your backend is used, define a default backend in your ingress definition by adding spec.backend:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-bsc-ingress
spec:
backend:
serviceName: my-bsc-service
servicePort: 80
rules:
- http:
paths:
- path: /*
backend:
serviceName: my-bsc-service
servicePort: 80
But, like I said, you don't NEED to define this; the additional backend won't really come into play, and no session affinity is required for it (there is only a single pod anyway). If you are curious, the default backend pod in question is called l7-default-backend-[replicaSet_hash]-[pod_hash] in the kube-system namespace.
You can enable cookie-based affinity on the ingress (these annotations apply to the NGINX ingress controller) like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-sticky
annotations:
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
rules:
- host: ingress.example.com
http:
paths:
- backend:
serviceName: http-svc
servicePort: 80
path: /
You can create the service with client-IP session affinity like this:
kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: my-app
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
sessionAffinity: ClientIP
If you are using the Traefik ingress instead of the NGINX or default GKE ingress, you can write the service like this:
apiVersion: v1
kind: Service
metadata:
name: session-affinity
labels:
app: session-affinity
annotations:
traefik.ingress.kubernetes.io/affinity: "true"
traefik.ingress.kubernetes.io/session-cookie-name: "sticky"
spec:
type: NodePort
ports:
- port: 8080
targetPort: 8080
protocol: TCP
name: http
selector:
app: session-affinity-demo