I've created a global static IP address using Terraform in GCP. However, when I try to assign it to an ingress controller inside a GKE cluster, it gets ignored.
Here's my Kubernetes configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: homefully-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "homefully-ingress-root"
  labels:
    app: homefully-ingress
spec:
  # ...
The IP address referenced here looks like this:
NAME                    REGION        ADDRESS        STATUS
homefully-ingress-root  europe-west3  35.234.83.106  RESERVED
However, the Ingress does not use that IP address but another random one. That's quite a problem, as I am not using Google's DNS services, so I need to rely on a static IP.
Instead, this is what I am getting:
Name: homefully-ingress
Namespace: default
Address: 35.227.252.112
Default backend: auth-proxy-staging:4180 (10.4.0.7:4180)
Rules:
Host                       Path  Backends
----                       ----  --------
adminpanel.homefully.tech
                                 homefully-management-frontend-website-staging:80 (10.4.2.7:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"gce","kubernetes.io/ingress.global-static-ip-name":"homefully-ingress-root"},"labels":{"app":"homefully-ingress"},"name":"homefully-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"auth-proxy-staging","servicePort":4180},"rules":[{"host":"adminpanel.homefully.tech","http":{"paths":[{"backend":{"serviceName":"homefully-management-frontend-website-staging","servicePort":80}}]}}]}}
kubernetes.io/ingress.class: gce
kubernetes.io/ingress.global-static-ip-name: homefully-ingress-root
ingress.kubernetes.io/backends: {"k8s-be-31611--026ed6556721059b":"Unknown","k8s-be-32450--026ed6556721059b":"Unknown"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-homefully-ingress--026ed6556721059b
ingress.kubernetes.io/target-proxy: k8s-tp-default-homefully-ingress--026ed6556721059b
ingress.kubernetes.io/url-map: k8s-um-default-homefully-ingress--026ed6556721059b
Events:
Type    Reason  Age  From                     Message
----    ------  ---  ----                     -------
Normal  ADD     8m   loadbalancer-controller  default/homefully-ingress
Normal  CREATE  7m   loadbalancer-controller  ip: 35.227.252.112
I can't find an error message or any hint of what's wrong here. I would be very thankful for some suggestions.
Stupid me - it was not a global IP address, in which case a random new one just gets assigned.
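For anyone hitting the same thing: the output above shows the address reserved in a region (europe-west3), but the kubernetes.io/ingress.global-static-ip-name annotation only works with a global address. A rough sketch of the difference with gcloud (the names here are just examples):

# regional - not picked up by the GCE Ingress annotation
gcloud compute addresses create my-regional-ip --region europe-west3

# global - this is what the annotation expects
gcloud compute addresses create my-global-ip --global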
I am trying to get Let's Encrypt to work with a GKE load balancer. I know there are GCP managed certificates, but they will not work with an internal LB, as the challenge will not get passed. Let's Encrypt DNS validation using cert-manager is there and ready to be used.
❯ k get secrets letsencrypt-prod -o yaml
apiVersion: v1
data:
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdlVZTVhXdGNZZUJpMkdadzljRFRLNzY==
kind: Secret
metadata:
  creationTimestamp: "2021-01-24T15:03:39Z"
  name: letsencrypt-prod
  namespace: elastic-system
  resourceVersion: "3636289"
  selfLink: /api/v1/namespaces/elastic-system/secrets/letsencrypt-prod
  uid: f4bec5a9-d3b5-4f4a-9ec6-01a4ce3ba47c
type: Opaque
spec:
  tls:
  - hosts:
    - staging.example.com
    - staging2.example.com
    secretName: letsencrypt-prod
GCP is reporting this error: Error syncing to GCP: error running load balancer syncing routine: error getting secrets for Ingress: secret "letsencrypt-prod" does not specify cert as string data
Can anybody help me with what is missing?
As per this, you must provide the secret in a valid format for GCP, for example by recreating it from your already existing, valid Let's Encrypt certificate files:
kubectl create secret generic letsencrypt-prod --from-file=tls.crt="cert.pem" --from-file=tls.key="privkey.pem" --dry-run -o yaml > output
kubectl apply -f output
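For reference, the recreated secret should end up with both tls.crt and tls.key under data, roughly like this (the values below are placeholders, not real base64 data):

apiVersion: v1
kind: Secret
metadata:
  name: letsencrypt-prod
  namespace: elastic-system
type: Opaque
data:
  tls.crt: <base64 of cert.pem>
  tls.key: <base64 of privkey.pem>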
Also (it seems you are already doing this, but better safe than sorry), you must reference this secret in the tls section of your Ingress, as per this.
Actually, this is easy to miss in the docs (or I missed it), because the example uses the same name everywhere as the metadata name.
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: cert-example
  namespace: example
spec:
  secretName: REAL_NAME_OF_SECRET  # <-- this is the name you need to include in the Ingress
  issuerRef:
    name: letsencrypt-prod
  dnsNames:
  - 'staging.domain.com'
  - '*.staging.domain.com'
So REAL_NAME_OF_SECRET is what you should put in the Ingress, or anywhere else you want to use tls.crt or tls.key, as sketched below.
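A minimal sketch of the matching tls section in the Ingress, reusing the hostnames from the Certificate above:

spec:
  tls:
  - hosts:
    - staging.domain.com
    - '*.staging.domain.com'
    secretName: REAL_NAME_OF_SECRET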
I have a requirement wherein I would like to allow certain CIDR ranges to access my service; all the rest should be denied.
I have tried the Istio IP whitelisting/blacklisting as mentioned in the official Istio documentation.
For example, 10.0.0.2/16 should be allowed and the rest should be denied. This doesn't seem to work.
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelistip
spec:
  compiledAdapter: listchecker
  params:
    # providerUrl: ordinarily black and white lists are maintained
    # externally and fetched asynchronously using the providerUrl.
    overrides: ["10.0.0.2/16"]  # overrides provide a static list
    blacklist: true
    entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: sourceip
spec:
So basically Istio 1.5.0 was released a few days ago, and if we check the Istio docs, white/black lists are now deprecated.
Denials and White/Black Listing (Deprecated)
But there is actually good news, because there is a new example for authorization on the ingress gateway which should answer your question.
I am not able to get the real client IP, hence I am not able to block/allow using an authorization policy or IP-based whitelisting.
Based on this new example, which I tested myself, if you want to see your source IP you have to change the istio-ingressgateway externalTrafficPolicy from Cluster to Local.
Update the ingress gateway to set externalTrafficPolicy: Local to preserve the original client source IP on the ingress gateway using the following command:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
And here is the allow example:
The following example creates the authorization policy, ingress-policy, for the Istio ingress gateway. The policy sets the action field to ALLOW to allow the IP addresses specified in ipBlocks to access the ingress gateway. IP addresses not in the list will be denied. ipBlocks supports both single IP addresses and CIDR notation.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ingress-policy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["1.2.3.4", "5.6.7.0/24", "$CLIENT_IP"]
I hope this answers your question. Let me know if you have any more questions.
Another solution in Istio 1.5:
Configure external traffic:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
Or in a Helm installation:
--set gateways.istio-ingressgateway.externalTrafficPolicy=Local
And you can use it in any namespace like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist
  namespace: foo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["istio-system"]
    when:
    - key: request.headers[X-Envoy-External-Address]
      values: ["1.2.3.4/24"]  # IP to allow
  selector:
    matchLabels:
      app: httpbin
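To sanity-check which client address the gateway actually sees after switching externalTrafficPolicy, you can curl a header-echoing endpoint through the ingress; a sketch, assuming the httpbin app targeted by the policy above is exposed and $INGRESS_HOST points at the ingress gateway:

curl -s http://$INGRESS_HOST/headers

The X-Envoy-External-Address header in the echoed output should show your public IP, which is the value the when condition above matches against.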
I am fairly new to Istio - so far I have a k8s cluster (using kops) on AWS, behind an ELB.
All traffic is routed via TCP.
The ingress gateway service is configured as NodePort with the following config:
istio-system istio-ingressgateway NodePort 100.65.241.150 <none> 15020:31038/TCP,80:30205/TCP,31400:30204/TCP,15029:31714/TCP,15030:30016/TCP,15031:32508/TCP,15032:30110/TCP,15443:32730/TCP
I used the 'demo' Helm option to deploy Istio 1.4.0.
I have created a Gateway, VirtualService, and DestinationRule with the following config.
The Gateway is in the istio-system namespace; the VS and DR are in the default namespace.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: webapp
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - ingress-gateway
  http:
  - route:
    - destination:
        host: webapp
        subset: original
      weight: 100
    - destination:
        host: webapp
        subset: v2
      weight: 0
---
kind: DestinationRule
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: webapp
  namespace: default
spec:
  host: webapp
  subsets:
  - labels:
      version: original
    name: original
  - labels:
      version: v2
    name: v2
The service pods listen on port 80 - I have tested via port forwarding - and they are functioning as expected.
However, when I curl https://hostname externally, I get a
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
I have enabled debug logging in Envoy, but I don't see anything meaningful in the logs relating to the timeout.
Any suggestion on where I might be going wrong?
Do I need to add any service annotations relating to the ELB in the Istio ingress gateway?
Any other suggestions?
I found a few things which need to be fixed.
1. Connect with a LoadBalancer
As I mentioned in the comments, you need to fix your ingress gateway so that it automatically gets an EXTERNAL-IP address, as in the Istio documentation. For now your ingress is a NodePort, so as far as I'm concerned it won't work; you can configure it to be used with a NodePort, but I assume you want the load balancer.
The first step would be to change the istio-ingressgateway svc type from NodePort to LoadBalancer and check if you get an EXTERNAL-IP.
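One way to flip the type is to patch the Service directly (a sketch; alternatively set it via the Helm values used at install time):

kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"type":"LoadBalancer"}}'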
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service's node port.
It should look like this:
kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   172.21.109.129   130.211.10.121   80:31380/TCP,443:31390/TCP,31400:31400/TCP   17h
And then everything goes through the external IP address, which is 130.211.10.121.
2. Fix your yamls
Note: for TCP traffic like that, we must match on the incoming port, in this case port 31400.
Check this example from the Istio documentation,
especially the part with the gateway, virtual service and destination rule.
You should add this to your virtual service:
tcp:
- match:
  - port: 31400
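Applied to the VirtualService from the question, the TCP route might be sketched like this (the destination port 80 is an assumption based on where the service pods listen):

kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: webapp
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - ingress-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: webapp
        subset: original
        port:
          number: 80
      weight: 100
    - destination:
        host: webapp
        subset: v2
        port:
          number: 80
      weight: 0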
3. Remember about namespaces.
In your example, because it's default, it should work; but if you create another namespace, remember that if the gateway and virtual service are in different namespaces, you need to tell the virtual service where the gateway is.
Example here,
especially the part in the virtual service:
gateways:
- some-config-namespace/my-gateway
I hope this helps with your issues. Let me know if you have any more questions.
I want to connect a service from one GKE cluster to another one. I created the service as an internal load balancer and I would like to attach a static IP to it. I created my service.yml:
apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    kubernetes.io/ingress.global-static-ip-name: es-test
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
However, after kubectl apply -f, when I check the service, the load balancer ingress looks like this:
status:
  loadBalancer:
    ingress:
    - ip: 10.156.0.60
And I cannot connect using the static IP. How do I solve it?
EDIT:
After a suggestion I changed the YAML file to:
apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  loadBalancerIP: "xx.xxx.xxx.xxx"  # here my static IP
The service now looks like this:
spec:
  clusterIP: 11.11.1.111
  externalTrafficPolicy: Cluster
  loadBalancerIP: xx.xxx.xxx.xxx
  ports:
  - nodePort: 31894
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
And I still cannot connect.
November 2021 Update
It is possible to create a static internal IP and assign it to a LoadBalancer k8s service type.
Go to the VPC networks -> Select your VPC -> Static Internal IP Addresses
Click Reserve Static Address, then select a name for your IP and click Reserve. You can choose IP address manually here as well.
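If you prefer the CLI, the same reservation can be sketched with gcloud (the name, region, subnet and address below are placeholders):

gcloud compute addresses create my-ilb-ip \
  --region europe-west3 \
  --subnet my-subnet \
  --addresses 10.156.0.50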
In your Service YAML add the following annotation. Also make sure type is LoadBalancer and then assign the IP address.
...
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
...
  type: LoadBalancer
  loadBalancerIP: <your_static_internal_IP>
This will spin up an internal LB and assign your static IP to it. You can also check on the Static Internal IP Addresses screen that the new IP is now in use by the freshly created load balancer. You can assign a Cloud DNS record to it, if needed.
Also, you can choose IP address "shared" during the reservation process so it can be used by up to 50 internal load balancers.
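Putting the pieces together, a minimal sketch of such a Service (the name, selector, ports and the reserved IP are placeholders based on the question):

apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
  labels:
    app: hello
spec:
  type: LoadBalancer
  loadBalancerIP: 10.156.0.50
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP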
(Screenshots: Assigning Static IP to Internal LB; Enabling Shared IP)
I am trying to connect my Ingress to a static IP. I seem to be following all the tutorials, but I still cannot seem to attach my static IP to the Ingress. My ingress file is as follows (referring to the static IP "test-ip"):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-web
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "test-ip"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
  - http:
      paths:
      - path: /api/
        backend:
          serviceName: api-cluster-ip-service
          servicePort: 5005
      - path: /
        backend:
          serviceName: web-cluster-ip-service
          servicePort: 80
However, when I run
kubectl get ingress ingress-web
it returns
kubectl get ingress ingress-web
NAME          HOSTS   ADDRESS   PORTS   AGE
ingress-web   *                 80      4m
without giving the address. In the VPC network > External IP addresses page the static IP is there and it is global, but it keeps saying: In use by None.
gcloud compute addresses describe test-ip --global
gives
address: 34.240.xx.xxx
creationTimestamp: '2019-03-26T00:34:26.086-07:00'
description: ''
id: '536303927960423409'
kind: compute#address
name: test-ip
networkTier: PREMIUM
selfLink: https://www.googleapis.com/compute/v1/projects/my-project-adbc8/global/addresses/test-ip
status: RESERVED
What am I missing here?
I ran into this issue. I believe it has been fixed by this pull request.
Changing
kubernetes.io/ingress.global-static-ip-name
to
kubernetes.io/ingress.regional-static-ip-name
Worked for me.
I've spent hours trying to figure the issue out.
It simply seems like a bug with GKE.
What solved it was:
1. Starting the Ingress with no static IP.
2. Going to the cloud console on the web under VPC Network > External IP addresses.
3. Waiting for the Ingress IP to show up.
4. Setting it as static, and giving it a name.
5. Adding kubernetes.io/ingress.global-static-ip-name: <ip name> to the Ingress YAML and applying it.
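Step 4 can also be done from the CLI by promoting the ephemeral address the Ingress received to a static one; a sketch (the name and address are placeholders, and --global matches the global annotation used in step 5):

gcloud compute addresses create my-ingress-ip \
  --global \
  --addresses 35.227.xxx.xxx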
You have to make sure the IP you created in GCP is Global and not Regional in order to use the following annotation in your ingress:
kubernetes.io/ingress.global-static-ip-name
I had the same problem, but after some research and testing I managed to solve this issue. These are the steps I took:
First you need to create a Global static IP address on GCP.
I happened to use Terraform to do this, e.g. see the example below,
resource "google_compute_global_address" "static" {
name = "global-test-ip"
project = var.gcp_project_id
address_type = "EXTERNAL"
}
based on this documentation: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_global_address
You could however use the GCP console to do this.
Note: I created this Global Static IP in the same GCP project as my GKE cluster.
Once I had completed the creation of the Global Static IP, I then added the following annotation to the Kubernetes ingress YAML file and applied it (i.e. kubectl apply -f ingress.yaml):
annotations:
  kubernetes.io/ingress.global-static-ip-name: "global-test-ip"
Note: it took a few minutes for the Ingress and Google Load balancer to update after I applied this ingress change.
The first thing you should check is the status of the IP, e.g.
gcloud compute addresses describe traefik --global
You should see something along the lines of:
address: 34.111.200.XXX
addressType: EXTERNAL
creationTimestamp: '2022-07-25T14:06:48.827-07:00'
description: ''
id: '5625073968713218XXX'
ipVersion: IPV4
kind: compute#address
name: traefik
networkTier: PREMIUM
selfLink: https://www.googleapis.com/compute/v1/projects/contrawork/global/addresses/traefik
status: RESERVED
Your Ingress should look something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: 'gce'
    kubernetes.io/ingress.global-static-ip-name: 'traefik'
  name: secondary-ingress
spec:
  defaultBackend:
    service:
      name: 'traefik'
      port:
        number: 80
After this is deployed, within 5 minutes you should see the status change to IN_USE.
If not, I would attempt to delete and re-create the Ingress resource.
If it still does not happen, then I would check the documentation to confirm the cluster is properly configured, e.g. ensure that the GKE cluster has "HTTP Load Balancing" enabled.
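A quick way to check that add-on from the CLI, as a sketch (the cluster name and zone are placeholders):

gcloud container clusters describe my-cluster \
  --zone europe-west3-a \
  --format='value(addonsConfig.httpLoadBalancing.disabled)'

If this prints True, HTTP load balancing is disabled and the GCE Ingress controller will not provision a load balancer (or an IP) for the Ingress.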