ActiveMQ 5.15 - Change context root of web console - jetty

I'm using ActiveMQ 5.15, and I want to add a path element to the URL of the ActiveMQ web console so it works with my nginx ingress in a k8s cluster.
For example, it is currently 127.0.0.1:8161/admin, but I want to use 127.0.0.1:8161/activemq/admin/ instead.
I tried to change the settings in jetty.xml, but I couldn't find the correct place to set a context root of activemq.

Found it:
In conf/jetty.xml, just edit the contextPath to /activemq/admin:
<bean class="org.eclipse.jetty.webapp.WebAppContext">
    <property name="contextPath" value="/activemq/admin" />
    <property name="resourceBase" value="${activemq.home}/webapps/admin" />
    <property name="logUrlOnStart" value="true" />
And here is the matching ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: activemq
  name: activemq-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: someip.linode.com
      http:
        paths:
          - path: /activemq
            pathType: ImplementationSpecific
            backend:
              service:
                name: activemq-service
                port:
                  number: 8161

Related

Rewrite in virtualsvc not working when authpolicy implemented - Istio

I am following some of the instructions in https://github.com/istio/istio/issues/40579 to set up Istio with a custom OAuth2 provider using Keycloak.
I have a main ingress which sends all traffic for one host to the istio-ingressgateway:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: istio-ingress-main
  namespace: istio-system
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - mlp.prod
      secretName: mlp-tls
  rules:
    - host: mlp.prod # A FQDN that describes the host where that rule should be applied
      http:
        paths: # A list of paths and handlers for that host
          - path: /
            pathType: Prefix
            backend: # How the ingress will handle the requests
              service:
                name: istio-ingressgateway # Which service the request will be forwarded to
                port:
                  number: 80 # Which port in that service
My ingress gateway is defined as below
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: prod-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - 'mlp.prod'
One of my services is MLflow, which is installed in the mlflow namespace; its VirtualService is defined as below:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gateway-vs-mlflow
  namespace: mlflow
spec:
  hosts:
    - '*'
  gateways:
    - istio-system/prod-gateway
  http:
    - match:
        - uri:
            prefix: "/mlflow"
      rewrite:
        uri: " "
      route:
        - destination:
            host: mlflow-service.mlflow.svc.cluster.local
            port:
              number: 5000
Now when I try to access the host mlp.prod/mlflow/, I am able to access MLflow without any issues and the UI comes up correctly.
However, if I add an OAuth provider in an AuthorizationPolicy for the /mlflow route, I get a 404 "page not available" after the OAuth authentication is done.
The AuthorizationPolicy is as below:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: oauth-policy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: CUSTOM
  provider:
    name: "oauth2-proxy"
  rules:
    - to:
        - operation:
            paths: ["/mlflow"]
Please assist with this issue. Is the rewrite in the VirtualService supposed to work only without an AuthorizationPolicy that uses the oauth2-proxy provider?
Kindly help
Thanks,
Sujith.
Version
istioctl version
client version: 1.15.2
control plane version: 1.15.2
data plane version: 1.15.2 (8 proxies)
kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.2
Kustomize Version: v4.5.4
Server Version: v1.22.9
WARNING: version difference between client (1.24) and server (1.22) exceeds the supported minor version skew of +/-1
I was able to resolve this by setting the oauth2-proxy config with the new value below:
upstreams="static://200"
Once this was done, oauth2-proxy started returning 200 for authenticated users and everything worked fine.
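For context, here is a minimal sketch of where that value could live if oauth2-proxy runs as its own Deployment. The Deployment name, namespace, image and the other arguments are assumptions for illustration; only the --upstream=static://200 flag comes from the fix above.
# Hypothetical oauth2-proxy Deployment excerpt; only --upstream=static://200 is from the answer above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oauth2-proxy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
        - name: oauth2-proxy
          image: quay.io/oauth2-proxy/oauth2-proxy:v7.4.0
          args:
            - --provider=keycloak-oidc
            - --http-address=0.0.0.0:4180
            # Serve a static 200 instead of proxying to a real upstream,
            # so the CUSTOM ext_authz provider only handles authentication.
            - --upstream=static://200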

Can't expose Keycloak Server on AWS with Traefik Ingress Controller and AWS HTTPS Load Balancer

I have successfully exposed two microservices on AWS with Traefik Ingress Controller and AWS HTTPS Load Balancer on my registered domain.
Here is the source code:
https://github.com/skyglass-examples/user-management-keycloak
I can easily access both microservices via their HTTPS URLs:
https://users.skycomposer.net/usermgmt/swagger-ui/index.html
https://users.skycomposer.net/whoami
So, it seems that the Traefik Ingress Controller and the AWS HTTPS Load Balancer are configured correctly.
Unfortunately, Keycloak Server doesn't work in this environment.
When I try to access it by its HTTPS URL:
https://users.skycomposer.net/keycloak
I receive the following response:
404 page not found
Am I missing something in my configuration?
Here are some of the Keycloak Kubernetes manifests I use:
keycloak-config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: keycloak
data:
  KEYCLOAK_USER: admin#keycloak
  KEYCLOAK_MGMT_USER: mgmt#keycloak
  JAVA_OPTS_APPEND: '-Djboss.bind.address.management=0.0.0.0'
  PROXY_ADDRESS_FORWARDING: 'true'
  KEYCLOAK_LOGLEVEL: INFO
  ROOT_LOGLEVEL: INFO
  DB_VENDOR: H2
keycloak-deployment.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: jboss/keycloak:12.0.4
          imagePullPolicy: Always
          ports:
            - containerPort: 9990
              hostPort: 9990
          volumeMounts:
            - name: keycloak-data
              mountPath: /opt/jboss/keycloak/standalone/data
          env:
            - name: KEYCLOAK_USER
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: KEYCLOAK_USER
            - name: KEYCLOAK_MGMT_USER
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: KEYCLOAK_MGMT_USER
            - name: JAVA_OPTS_APPEND
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: JAVA_OPTS_APPEND
            - name: DB_VENDOR
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: DB_VENDOR
            - name: PROXY_ADDRESS_FORWARDING
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: PROXY_ADDRESS_FORWARDING
            - name: KEYCLOAK_LOGLEVEL
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: KEYCLOAK_LOGLEVEL
            - name: ROOT_LOGLEVEL
              valueFrom:
                configMapKeyRef:
                  name: keycloak
                  key: ROOT_LOGLEVEL
            - name: KEYCLOAK_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: keycloak
                  key: KEYCLOAK_PASSWORD
            - name: KEYCLOAK_MGMT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: keycloak
                  key: KEYCLOAK_MGMT_PASSWORD
      volumes:
        - name: keycloak-data
          persistentVolumeClaim:
            claimName: keycloak-pvc
keycloak-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
      targetPort: 9990
  selector:
    app: keycloak
traefik-ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: traefik-lb
spec:
  controller: traefik.io/ingress-controller
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
  name: "traefik-usermgmt-ingress"
spec:
  ingressClassName: "traefik-lb"
  rules:
    - host: "keycloak.skycomposer.net"
      http:
        paths:
          - path: "/usermgmt"
            backend:
              serviceName: "usermgmt"
              servicePort: 80
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
  name: "traefik-whoami-ingress"
spec:
  ingressClassName: "traefik-lb"
  rules:
    - host: "keycloak.skycomposer.net"
      http:
        paths:
          - path: "/whoami"
            backend:
              serviceName: "whoami"
              servicePort: 80
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
  name: "traefik-keycloak-ingress"
spec:
  ingressClassName: "traefik-lb"
  rules:
    - host: "keycloak.skycomposer.net"
      http:
        paths:
          - path: "/keycloak"
            backend:
              serviceName: "keycloak"
              servicePort: 80
See all other files on my github: https://github.com/skyglass-examples/user-management-keycloak
I also checked the logs for the keycloak pod running on my K3s Kubernetes cluster:
20:57:34,147 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: Keycloak 12.0.4 (WildFly Core 13.0.3.Final) started in 43054ms - Started 687 of 972 services (687 services are lazy, passive or on-demand)
20:57:34,153 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
20:57:34,153 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
Everything seems to be fine; the admin console is listening on http://127.0.0.1:9990.
I also tried using target port 9990 in the deployment and service manifests, instead of 8080, but still got the same result.
I have found one small workaround, but unfortunately, this is not the best solution for me.
I forwarded the port:
kubectl port-forward --address 0.0.0.0 service/keycloak 32080:http
Now Keycloak Server is available on:
http://localhost:32080/auth/
But how do I make it available externally at this URL?
https://keycloak.skycomposer.net/keycloak/auth
It is still not clear to me why Keycloak is not visible from the outside with my current configuration.
Finally solved the issue.
The following configuration is required to run Keycloak behind Traefik:
PROXY_ADDRESS_FORWARDING=true
KEYCLOAK_HOSTNAME=${YOUR_KEYCLOAK_HOSTNAME}
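As a sketch, these two values could be added to the keycloak ConfigMap shown in the question; the hostname below is taken from the ingress rules, so substitute your own.
# Additions to the existing keycloak ConfigMap (hostname is an example value).
apiVersion: v1
kind: ConfigMap
metadata:
  name: keycloak
data:
  PROXY_ADDRESS_FORWARDING: 'true'
  KEYCLOAK_HOSTNAME: 'keycloak.skycomposer.net'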
Also, I had to use the root path "/" for the ingress rule:
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: "traefik-keycloak-ingress"
spec:
ingressClassName: "traefik-lb"
rules:
- host: "keycloak.skycomposer.net"
http:
paths:
- path: "/"
backend:
serviceName: "keycloak"
servicePort: 80
Here, you can find other configuration properties, which you might find useful:
https://github.com/Artiume/docker/blob/master/traefik-SSO.yml
Believe it or not, this is the only resource on the internet that mentioned KEYCLOAK_HOSTNAME as the fix for my problem. Two days of searching for "keycloak traefik 404" and no results!
You can find the full fixed code, with correct configuration, on my github:
https://github.com/skyglass-examples/user-management-keycloak
Right - the admin console is listening on 127.0.0.1. This is not the outside world interface. This is "localhost".
You have two choices here. You can start Keycloak with a command line argument like:
bin/standalone.sh -Djboss.bind.address.management=0.0.0.0
This starts the management console on port 9990, but on the 0.0.0.0 interface, which is to say all interfaces. So you can still connect to it on localhost, but it will now also be listening on other (i.e. Ethernet) interfaces.
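In the Kubernetes manifests from the question, the same flag can be carried through the JAVA_OPTS_APPEND value that the keycloak ConfigMap already defines, roughly like this:
# Excerpt of the keycloak ConfigMap: the bind-address flag is appended to the JVM options.
apiVersion: v1
kind: ConfigMap
metadata:
  name: keycloak
data:
  JAVA_OPTS_APPEND: '-Djboss.bind.address.management=0.0.0.0'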
Another option is to modify the standalone/configuration/standalone.xml file and change:
<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:127.0.0.1}"/>
    </interface>
</interfaces>
to just be:
<interfaces>
    <interface name="management">
        <inet-address value="0.0.0.0"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:127.0.0.1}"/>
    </interface>
</interfaces>
or whatever address you'd like Keycloak to listen on. Of course, you can change the public address too if you'd like.
Note that the port is controlled in a different way. The standard way of controlling this is to run with something like:
bin/standalone.sh -Djboss.socket.binding.port-offset=1000
In this example all ports have 1000 added to them. So the management port went from 9990 to 10990 as 1000 was added to the base.
As a general statement I usually place a proxy (AJP or HTTP) in front of all of my Wildfly servers. That way none of this matters and your proxy connects to, for example, 127.0.0.1, port 9990. But, of course, that's up to you.

How to use multiple service paths in AWS EKS Ingress

I deployed all my resources in an Amazon EKS cluster, and now I want to access each service using an ingress. I have 3 microservices. When I added only one service to the ingress YAML file it worked; please find that code below.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: dummy.us-east-2.elb.amazonaws.com
      http:
        paths:
          - path: /
            backend:
              serviceName: user-api-service
              servicePort: 80
The above code works for me. I then changed the ingress file to support multiple paths; the changed code is below:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/$2"
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: dummy.elb.amazonaws.com
      http:
        paths:
          - path: /user(/|$)(.*)
            backend:
              serviceName: user-api-service
              servicePort: 80
After this I try to access the service using the below link in Postman:
http://dummy.us-east-2.elb.amazonaws.com/user/api/user/register
but Postman throws a 404 error.
Can anyone please help me with this issue? Please ask if you need more information.

Redirecting traffic to external url

Updates based on comments:
Let's say there's an API hosted at hello.company1.com in another GCP project...
I would like it so that when someone visits the URL abc.company.com, they are served traffic from hello.company1.com, something similar to an API gateway...
It could easily be done with an API gateway; I am just trying to figure out if it's possible with a K8s Service and Ingress.
I have created a Cloud DNS zone as abc.company.com
When someone visits abc.company.com/google, I would like the request to be forwarded to an external URL, let's say google.com.
Could this be achieved by creating a Service of type ExternalName and an Ingress with host name abc.company.com?
kind: Service
apiVersion: v1
metadata:
  name: test-srv
spec:
  type: ExternalName
  externalName: google.com

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
    - host: abc.company.com
    - http:
        paths:
          - path: /google
            backend:
              serviceName: test-srv
It's possible to achieve what you want; however, you will need to use the NGINX Ingress Controller to do that, as you will need the specific annotation nginx.ingress.kubernetes.io/upstream-vhost.
It was well described in this GitHub issue, based on storage.googleapis.com.
apiVersion: v1
kind: Service
metadata:
  name: google-storage-buckets
spec:
  type: ExternalName
  externalName: storage.googleapis.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proxy-assets-ingress
  annotations:
    kubernetes.io/ingress.class: nginx-ingress
    nginx.ingress.kubernetes.io/rewrite-target: /[BUCKET_NAME]/[BUILD_SHA]
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/upstream-vhost: "storage.googleapis.com"
spec:
  rules:
    - host: abc.company.com
      http:
        paths:
          - path: /your/path
            backend:
              serviceName: google-storage-buckets
              servicePort: 443
Depending on your needs, if you use it over plain HTTP you would need to change servicePort to 80 and remove the annotation nginx.ingress.kubernetes.io/backend-protocol: "HTTPS".
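A minimal sketch of that plain-HTTP variant, applying only the two changes described above to the Ingress shown earlier:
# Plain-HTTP variant: backend-protocol annotation removed, servicePort changed to 80.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proxy-assets-ingress
  annotations:
    kubernetes.io/ingress.class: nginx-ingress
    nginx.ingress.kubernetes.io/rewrite-target: /[BUCKET_NAME]/[BUILD_SHA]
    nginx.ingress.kubernetes.io/upstream-vhost: "storage.googleapis.com"
spec:
  rules:
    - host: abc.company.com
      http:
        paths:
          - path: /your/path
            backend:
              serviceName: google-storage-buckets
              servicePort: 80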
For additional details, you can check other similar Stackoverflow question.
Please remember not to use - on both spec.rules.host and spec.rules.http in the same manifest. You should use - on http only if you don't have a host in your configuration.
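In other words, the rules block from the question's test-ingress would be written like this (a sketch of the corrected layout only, otherwise unchanged):
# host and http belong to the same rule entry; only the rule itself starts with "-".
spec:
  rules:
    - host: abc.company.com
      http:
        paths:
          - path: /google
            backend:
              serviceName: test-srv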

Health check problem with setting up GKE with istio-gateway

Goal
I'm trying to set up:
Cloud LB -> GKE [istio-gateway -> my-service]
This was working before; however, I had to recreate the cluster 2 days ago and ran into this problem. Maybe some version change?
This is my ingress manifest file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "my-dev-ingress"
  namespace: "istio-system"
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-dev-gclb-ip"
    ingress.gcp.kubernetes.io/pre-shared-cert: "my-dev-cluster-cert-05"
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: "istio-ingressgateway"
    servicePort: 80
Problem
The health check by the Cloud LB failed. The backend service created by the Ingress creates a default /:80 health check.
What I have tried
1) I tried to set the health check generated by the GKE ingress to point to the istio-gateway status port 15020 in the backend config console. The health check then passed for a bit, until the backend config reverted itself to the original /:80 health check that it created. I even tried to delete the health check it created, and it just created another one.
2) I also tried using an Istio VirtualService to route the health check to port 15020, as shown here, without much success.
3) I also tried just routing everything in the VirtualService to the health check port:
hosts:
  - "*"
gateways:
  - my-web-gateway
http:
  - match:
      - method:
          exact: GET
        uri:
          exact: /
    route:
      - destination:
          host: istio-ingress.gke-system.svc.cluster.local
          port:
            number: 15020
4) Most of the search results I found say that setting a readinessProbe in the deployment should tell the ingress to set the proper health check. However, all of my services are behind the istio-gateway, so I can't really do the same.
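For reference, the advice being described usually looks roughly like this on an ordinary backend Deployment; all names and the image below are placeholders, and as noted it does not directly apply when everything sits behind the istio-gateway.
# Sketch of the usual GKE pattern: the ingress derives the LB health check from the pod's readinessProbe.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: example/my-service:latest  # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080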
I'm very lost right now and would really appreciate it if anyone could point me in the right direction. Thanks
I got it working with GKE 1.20.4-gke.2200 and Istio 1.9.2. The documentation for this is non-existent, or I have not found it. You have to add an annotation to the istio-ingressgateway service to use a BackendConfig when using the "istioctl install -f values.yaml" command:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          serviceAnnotations:
            cloud.google.com/backend-config: '{"default": "istio-ingressgateway-config"}'
Then you have to create the BackendConfig with the correct health check port:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: istio-ingressgateway-config
  namespace: istio-system
spec:
  healthCheck:
    checkIntervalSec: 30
    port: 15021
    type: HTTP
    requestPath: /healthz/ready
With this, the Ingress should automatically change the load balancer health check configuration while pointing to Istio port 80:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.global-static-ip-name: web
    networking.gke.io/managed-certificates: "web"
spec:
  rules:
    - host: test.example.com
      http:
        paths:
          - path: "/*"
            pathType: Prefix
            backend:
              service:
                name: istio-ingressgateway
                port:
                  number: 80
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-web
  namespace: istio-system
spec:
  hosts:
    - test.example.com
  gateways:
    - web
  http:
    - match:
        - uri:
            prefix: "/"
      route:
        - destination:
            port:
              number: 8080 # internal service port
            host: "internal-service.service-namespace.svc.cluster.local"
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - test.example.com
You could also set hosts to "*" in the VirtualService and Gateway.
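As a sketch, that variant keeps everything above unchanged and only replaces the hosts entries:
# Same Gateway and VirtualService as above, but accepting any host.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-web
  namespace: istio-system
spec:
  hosts:
    - "*"
  gateways:
    - web
  http:
    - match:
        - uri:
            prefix: "/"
      route:
        - destination:
            port:
              number: 8080 # internal service port
            host: "internal-service.service-namespace.svc.cluster.local"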