I want to use regex for path of Ingress in Kubernetes to match strings starting with /app but not starting with /app/api.
My application lives under the /app prefix and its back end under /app/api; I want to match all requests for my application that do not belong to the back end (those should go to the front end).
I know the PCRE regex for this is /app(/|$)(?!api)(.*). But the Kubernetes Ingress documentation says it supports RE2 syntax, which does not support negative lookahead (among other features). How else can I specify this regex, or what other options do I have?
Just define both ingress objects - the one for the backend and the one for the frontend (you can even use prefix notation here). The path priority rules will order these entries accordingly.
Something like this should work:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress-1
spec:
  rules:
  - host: test.com
    http:
      paths:
      - path: /app/api
        backend:
          serviceName: backend-service
          servicePort: 80
      - path: /app
        backend:
          serviceName: frontend-service
          servicePort: 80
I am trying to deploy a Helm post-install, post-upgrade hook which will create a simple busybox pod and perform a wget on the app's application port to ensure the app is reachable.
I cannot get the hook to pass, even though I know the sample app is up and available.
Here is the manifest:
apiVersion: v1
kind: Pod
metadata:
  name: post-install-test
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  containers:
  - name: wget
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c"]
    args: ["sleep 15; wget {{ include "sampleapp.fullname" . }}:{{ .Values.service.applicationPort.port }}"]
  restartPolicy: Never
As you can see in the manifest in the args, the name of the container is in Helm's template syntax. A developer will input the desired name of their app in a Jenkins pipeline, so I can't hardcode it.
I see from kubectl logs -n namespace post-install-test, this result:
Connecting to sample-app:8080 (172.20.87.74:8080)
wget: server returned error: HTTP/1.1 404 Not Found
But when I check the EKS resources, the pod running the sample app I'm trying to test has an added suffix, which I've determined is the pod-template-hash.
sample-app-7fcbd52srj9
Is this suffix making my Helm hook fail? Is there a way I can account for this template hash?
I've tried different syntaxes on the command, but I can confirm with the kubectl logs the helm hook is attempting to connect but keeps getting a 404.
Regex doesn't work after Kong upgrade to version 3.x.
After upgrading from Kong 2.7 to 3.2, regex stopped working.
Regex pattern used in 2.7: /payment/(docs|health)
Regex pattern used in 3.2: /~payment/(docs|health)
I also tried ~/payment/(docs|health), but it gives an error (see screenshot).
pathType is ImplementationSpecific.
- apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    labels:
      app.kubernetes.io/name: payment-svc
    name: payment-without-auth
    namespace: payment
  spec:
    ingressClassName: kong
    rules:
    - host: abc.example.com
      http:
        paths:
        - backend:
            service:
              name: payment-svc
              port:
                number: 80
          path: /payment/(docs|health)
          pathType: ImplementationSpecific
I tried a couple of regex changes.
To complement kranthiveer-dontineni's answer, you'll need the /~ prefix in Kubernetes manifests to make this work:
Ingress paths that begin with /~ are now treated as regular expressions, and are translated into a Kong route path that begins with ~ instead of /~. To preserve the existing translation, set konghq.com/regex-prefix to some value. For example, if you set konghq.com/regex-prefix: /#, paths beginning with /~ will result in route paths beginning in /~, whereas paths beginning in /# will result in route paths beginning in ~. #2956
https://github.com/Kong/kubernetes-ingress-controller/blob/main/CHANGELOG.md#breaking-changes-1
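Applied to the manifest from the question, that means changing only the path; the host, service name, and pathType stay as they were. A sketch, assuming a controller version that uses the default /~ regex prefix:

```yaml
paths:
- backend:
    service:
      name: payment-svc
      port:
        number: 80
  # the /~ prefix tells the Kong ingress controller to treat this as a regex
  path: /~/payment/(docs|health)
  pathType: ImplementationSpecific
```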
If you have upgraded your Kong deployment from 2.8.x to 3.0, the prefix (~) is added to the routes in the database automatically, which results in configuration drift between the route database and the config file, as per the official documentation.
Before using your state files to update the database, convert them into the 3.0 format using the deck convert command.
Important: Don’t use deck sync with Kong Gateway 3.x before converting paths into the 3.0 format. This will break all regex routing in 3.x.
Run deck convert against your 2.x state file to turn it into a 3.x file:
deck convert --from kong-gateway-2.x --to kong-gateway-3.x --input-file kong.yaml --output-file new-kong.yaml
Note: This content is taken from official kong documentation. Refer to this link for more information.
I am evaluating Kustomize as a templating solution for my Project. I want an option to replace specific key-value pairs.
ports:
- containerPort: 8081
resources:
  limits:
    cpu: $CPU_LIMIT
    memory: $MEMORY_LIMIT
  requests:
    cpu: $CPU_REQUESTS
    memory: $MEMORY_REQUESTS
In the above example, I want to replace CPU_LIMIT with a config-driven value. What options do I have to do this with Kustomize?
Kustomize doesn't do direct variable replacement like a templating engine, but there are some solutions depending on which attributes you need to parameterize.
Workloads such as Deployments, StatefulSets, DaemonSets, Pods, and Jobs let you populate environment variables from a ConfigMap, so you don't necessarily need a variable at build time. However, this doesn't work for values like resource limits and requests, as those are processed before ConfigMaps would be mounted.
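A minimal sketch of that ConfigMap-driven pattern (the ConfigMap, pod, and key names here are hypothetical): the value resolves when the container starts, so it needs no templating at all, unlike resource limits.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config        # hypothetical name
data:
  LOG_LEVEL: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:  # value is read from the ConfigMap at runtime
          name: app-config
          key: LOG_LEVEL
```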
Kustomize isn't designed to be a templating engine; it's designed as a purely declarative approach to configuration management. This includes the ability to use patches for overlays (overrides) and to reference shared resources so you can stay DRY (Don't Repeat Yourself), which is especially useful when your configuration powers multiple Kubernetes clusters.
With Kustomize, consider whether patching might meet your needs. There are several different ways Kustomize can patch a file. If you need to change individual attributes, you can use patchesJSON6902, although when you have to change a lot of values in a deployment, changing them one at a time this way is cumbersome; in that case use something like patchesStrategicMerge.
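For a single attribute, a JSON 6902 patch could look like this: a sketch targeting the Pod named site used in this answer, with a made-up patch file name. Each op addresses exactly one field by its JSON pointer path.

```yaml
# cluster/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base/main
patchesJson6902:
- target:
    version: v1
    kind: Pod
    name: site
  path: cpu_patch.yaml
---
# cluster/cpu_patch.yaml -- one op per value to change
- op: replace
  path: /spec/containers/0/resources/limits/cpu
  value: 400m
```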
Consider the following way to use a patch (overlay):
.
├── base
│ └── main
│ ├── kustomization.yaml
│ └── resource.yaml
└── cluster
├── kustomization.yaml
└── pod_overlay.yaml
Contents of base/main/resource.yaml:
---
apiVersion: v1
kind: Pod
metadata:
  name: site
  labels:
    app: web
spec:
  containers:
  - name: front-end
    image: nginx
    ports:
    - containerPort: 8081
    resources:
      requests:
        cpu: 100m
        memory: 4Gi
      limits:
        cpu: 200m
        memory: 8Gi
Contents of cluster/pod_overlay.yaml:
---
apiVersion: v1
kind: Pod
metadata:
  name: site
spec:
  containers:
  - name: front-end
    resources:
      requests:
        cpu: 200m
        memory: 8Gi
      limits:
        cpu: 400m
        memory: 16Gi
Note that we only included the selectors (kind, metadata.name, spec.containers[0].name) and the values we wanted to replace, in this case the resource requests and limits. You don't have to duplicate the entire resource for the patch to apply.
Now to apply the patch with kustomize, the contents of cluster/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base/main
patchesStrategicMerge:
- pod_overlay.yaml
Another option to consider, if you really need templating power, is Helm.
Helm is a much more robust templating engine, and you can use a combination of Helm for templating and Kustomize for resource management, patches for specific configuration, and overlays.
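For comparison, the resource block from the question could be driven by Helm values; a minimal sketch using the toYaml helper that helm create scaffolds (the template fragment and value names are hypothetical):

```yaml
# templates/pod.yaml (fragment): the whole resources block is injected
# from values.yaml at render time
    containers:
    - name: front-end
      image: nginx
      resources:
        {{- toYaml .Values.resources | nindent 8 }}
---
# values.yaml
resources:
  requests:
    cpu: 100m
    memory: 4Gi
  limits:
    cpu: 200m
    memory: 8Gi
```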
A new feature of WSO2 IS 5.9.0 is the deployment.toml file, but I have not found its configuration options, nor how one can set XML config file values from this file.
For example, if I want to enable the EnableHTTPAdminConsole option in carbon.xml, what should one do?
[server]
hostname = "my.server.com"
node_ip = "127.0.0.1"
base_path = "https://$ref{server.hostname}:${carbon.management.port}"
enable_h_t_t_p_admin_console = true
enable_http_admin_console = true
EnableHTTPAdminConsole = true
None of these work.
I have also tried to modify, in my Docker image:
wso2is-5.9.0/repository/resources/conf/templates/repository/conf/carbon.xml.j2
or
wso2is-5.9.0/conf/carbon.xml
But all these files get overwritten.
My UseCase is to use WSO2IS in K8S without the port.
https://wso2is.my.domain/ > k8s nginx ingress : 443 (manages certificate) > wso2is-service > wso2is-pod : 9763 (plain http)
However, the question still remains: what configuration options are available in deployment.toml?
This seems not to be possible through deployment.toml. As a workaround, you can uncomment the property in
wso2is-5.9.0/repository/resources/conf/templates/repository/conf/carbon.xml.j2
Report this as an issue: https://github.com/wso2/product-is/issues
If the above fix is not getting applied, probably your Docker image is being overridden with the default configs. Can you try building a new Docker image with the requested changes? This link https://github.com/wso2/docker-is/tree/5.9.0/dockerfiles/ubuntu/is can help you build the image.
But I am not sure why you cannot access SSL (9443) from the Nginx ingress. Maybe you can try this sample Nginx ingress: https://github.com/wso2/kubernetes-is/blob/master/advanced/is-pattern-1/templates/identity-server-ingress.yaml
Buddhima answered the question of what can be configured using deployment.toml, so I will mark his answer as accepted.
One can look through the templates, e.g.
wso2is-5.9.0/repository/resources/conf/templates/repository/conf/carbon.xml.j2
and see all the options.
Regarding EnableHTTPAdminConsole, pulasthi7 answered that it was intentionally left out.
I found a workaround for the ingress to connect over SSL:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: wso2is-ingress
  namespace: wso2is
spec:
  tls:
  - hosts:
    - wso2is.k8s.mydomain.com
    secretName: tls-wso2is
  rules:
  - host: wso2is.k8s.mydomain.com
    http:
      paths:
      - backend:
          serviceName: wso2is-is-service
          servicePort: 9443
        path: /(.*)
The most important line:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
This way the connection to ingress-nginx is encrypted using its own certificate, and the connection from nginx to the pod is encrypted using the certificate in the pod.
The mysterious bug was causing two things to happen:
None of my VirtualServices were working despite being correctly formatted and having checked the fields several times.
On istioctl proxy-status the entire RDS column was STALE.
Upon looking at the istio-proxy logs -c discovery (grepping for RDS), I saw the following error:
2019-02-27T19:09:58.644652Z warn ads ADS:RDS: ACK ERROR ... ... ... "Only unique values for domains are permitted. Duplicate entry of domain 172.16.x.y"
How do I fix this?
Info
Istio version 1.0.6
Kubernetes version 1.10.x-gke
The key to solving this was the IP address in the log. After searching my configuration for that IP address, it turned out to be in my ServiceEntries.
One of my ServiceEntries looked like this:
spec:
  addresses:
  - 172.16.x.y
  hosts:
  - 172.16.x.y
  location: MESH_EXTERNAL
  ports:
  - name: http
    number: 80
    protocol: HTTP
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS
It turns out you cannot have multiple ports in there. I deleted the HTTPS block and, like magic, everything worked. The istioctl proxy-status command displayed everything in the RDS as SYNCED and all of my VirtualServices started working again.
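For reference, the ServiceEntry that worked after the fix, with only the single HTTP port left in place:

```yaml
spec:
  addresses:
  - 172.16.x.y
  hosts:
  - 172.16.x.y
  location: MESH_EXTERNAL
  ports:
  - name: http      # only one port entry; the HTTPS block was removed
    number: 80
    protocol: HTTP
  resolution: DNS
```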