WSO2 Identity Server X509 Authentication Behind Proxy - wso2

I'm using WSO2 Identity Server 5.10.
I need to add support for X509 Authentication, and I was reading the documentation about X509 Certificate Authentication here.
In order to add the configuration, I had to modify the catalina-server.xml.j2 file, as suggested here.
Once configured it works, although the ports get hugely tangled... anyway, let's assume it works.
Now I have this issue: I need to deploy WSO2 Identity Server in a K8S cluster. So, basically, I have a nginx ingress controller that will manage all the traffic to the backend.
What I did locally is to put a simple nginx reverse proxy and configure WSO2 Identity Server in order to use this proxy. So in my deployment.toml I did the following
[custom_transport.x509.properties]
protocols="HTTP/1.1"
port="8443"
maxThreads="200"
scheme="https"
secure=true
SSLEnabled=true
keystoreFile="mykeystore.jks"
keystorePass="pwd"
truststoreFile="myclient_trust.jks"
truststorePass="myclient_trust_pwd"
bindOnInit=false
clientAuth="want"
sslProtocol = "TLS"
proxyPort="443"
[authentication.authenticator.x509_certificate.parameters]
name ="x509CertificateAuthenticator"
enable=true
AuthenticationEndpoint="https://$ref{server.hostname}:8443/x509-certificate-servlet"
username= "CN"
SearchAllUserStores="false"
EnforceSelfRegistration = "false"
SearchAndValidateUserLocalStores = "false"
[transport.https.properties]
proxyPort="443"
In this way, when I try to sign in using X509 Certificate Authentication, it asks for my certificate, but after I choose one it shows an error because it can't find the certificate in the browser request.
Moreover, I don't think I should leave AuthenticationEndpoint="https://$ref{server.hostname}:8443/x509-certificate-servlet", because this means the form will be submitted to port 8443, which is never exposed to the internet.
Did anyone solve this issue? Basically the question is: how can I configure the X509 Certificate Authentication behind a proxy (e.g. nginx)?
Any tip is very precious.
Thank you
Angelo

I think I solved the issue I was facing.
Basically, when there is a reverse proxy (e.g. nginx) in front, the certificate never reaches Tomcat as a request attribute.
What I did is configure the reverse proxy to put the certificate into an HTTP header without verifying it; verification happens on the server side.
So now my nginx configuration is:
server {
listen 443;
server_name wso2_iam;
ssl on;
ssl_certificate certificate_full_path.crt;
ssl_certificate_key full_path_to_kwy_no_pwd.key;
ssl_verify_client optional_no_ca;
location /x509-certificate-servlet/ {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;
proxy_read_timeout 5m;
proxy_send_timeout 5m;
proxy_pass https://127.0.0.1:8443/x509-certificate-servlet;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location / {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;
proxy_read_timeout 5m;
proxy_send_timeout 5m;
proxy_pass https://127.0.0.1:9443/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
error_log /var/log/nginx/wso2-error.log;
access_log /var/log/nginx/wso2-access.log;
}
The ssl_verify_client optional_no_ca; directive tells nginx to request the client certificate but not to validate it against a CA.
The proxy_set_header X-SSL-CERT $ssl_client_escaped_cert; instruction tells nginx to put the URL-escaped, PEM-encoded certificate into a request header called X-SSL-CERT.
Then I modified org.wso2.carbon.identity.authenticator.x509Certificate.X509CertificateAuthenticator so that it searches the request headers for the certificate when it is not found in the HTTP request attribute.
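For anyone wanting to replicate this, the header-to-certificate step looks roughly like the following. This is only a Python sketch of the idea (the actual WSO2 authenticator is Java), assuming the X-SSL-CERT header name configured above; nginx's $ssl_client_escaped_cert is the URL-escaped PEM of the client certificate.

```python
import ssl
from urllib.parse import unquote


def cert_from_header(header_value):
    """Recover the client certificate forwarded by nginx in X-SSL-CERT.

    Undoes nginx's URL-escaping and strips the PEM armor, returning the
    certificate as DER bytes, ready for a real X509 parser (on the WSO2
    side that would be java.security.cert.CertificateFactory).
    """
    pem = unquote(header_value)           # $ssl_client_escaped_cert is URL-escaped PEM
    return ssl.PEM_cert_to_DER_cert(pem)  # strip BEGIN/END markers, base64-decode
```

Since ssl_verify_client optional_no_ca skips CA validation in nginx, whatever parses this header must also verify the certificate chain itself; decoding alone is not authentication.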
It seems to be working.
Thank you to all
Angelo

Related

Django Channels doesn't detect WebSocket request with NGINX

I am deploying a website on AWS. Everything works fine for HTTP and HTTPS. I am passing all requests to Daphne. However, incoming WebSocket connections are treated as HTTP requests by Django. I am guessing there is some header that isn't set in Nginx, but I have copied a lot of my Nginx config from tutorials.
Nginx Config:
upstream django {
server 127.0.0.1:9000;
}
server {
listen 80;
server_name 18.130.130.126;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name 18.130.130.126;
ssl_certificate /etc/nginx/certificate/certificate.crt;
ssl_certificate_key /etc/nginx/certificate/private.key;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
location / {
include proxy_params;
proxy_pass http://django;
}
}
Daphne is bound to 0.0.0.0:9000. Channels has a very basic setup: a ProtocolTypeRouter with AuthMiddlewareStack and then a URLRouter, as shown in the Channels tutorial, and then a Consumer class. I am using Redis for the channel layer, but that doesn't seem to be a problem. Here is some data about the request and response from Fiddler: the request headers say Upgrade to websocket, but the server returns a 404 as for a plain HTTP request, as it doesn't see it as a WebSocket request.
Thanks for any help.
include proxy_params was the problem. Because proxy_params contains its own proxy_set_header directives, the server-level Upgrade and Connection headers were no longer inherited inside the location block.
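For context, Daphne only hands a connection to the Channels WebSocket stack when the HTTP upgrade handshake headers arrive intact. This is an illustrative sketch of that dispatch decision, not Daphne's actual code:

```python
def scope_type(headers):
    """Classify an incoming request the way an ASGI server roughly does.

    `headers` is a dict of lower-cased header names to values. A request
    becomes a WebSocket scope only if the client's upgrade handshake
    survived the proxy; otherwise it is treated as plain HTTP.
    """
    if headers.get("upgrade", "").lower() == "websocket" \
            and "upgrade" in headers.get("connection", "").lower():
        return "websocket"   # routed to the Channels consumer
    return "http"            # routed to a normal Django view


# With the handshake headers intact, the request reaches Channels:
print(scope_type({"upgrade": "websocket", "connection": "Upgrade"}))  # websocket
# If the proxy drops them, the WebSocket URL falls through to Django and 404s:
print(scope_type({}))                                                 # http
```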

Does Django Channels uses ws:// protocol prefix to route between Django view or Channels app?

I am running Django + Channels server using Daphne. Daphne server is behind Nginx. My Nginx config looks like as given at end.
When I try to connect to ws://example.com/ws/endpoint I get a NOT FOUND /ws/endpoint error.
To me, it looks like Daphne uses the protocol to route to either Django views or the Channels app: if it sees http it routes to a Django view, and when it sees ws it routes to the Channels app.
With the following Nginx proxy_pass configuration, the URL always has the http protocol prefix, so I get 404 or NOT FOUND in the logs. If I change the proxy_pass prefix to ws, the Nginx config fails.
What is the ideal way to set up Channels in this scenario?
server {
listen 443 ssl;
server_name example.com
location / {
# prevents 502 bad gateway error
proxy_buffers 8 32k;
proxy_buffer_size 64k;
# redirect all HTTP traffic to localhost:8088;
proxy_pass http://0.0.0.0:8000/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#proxy_set_header X-NginX-Proxy true;
# enables WS support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 999999999;
}
}
Yes, as stated in the question, Channels detects the route based on the protocol, ws versus http/https.
Using a ws prefix in proxy_pass http://0.0.0.0:8000/; is not possible. To forward the protocol information, the following config should be included:
proxy_set_header X-Forwarded-Proto $scheme;
This will forward the scheme/protocol (ws) information to the Channels app, and Channels routes according to this information.

Amazon ELB + Django HTTPS issues

I have been searching on SO, but none of the solutions seem to work for my case:
I have a Classic Elastic Load Balancer on AWS passing requests to my Nginx docker containers, which in turn proxy-pass to my Python Gunicorn containers.
Nginx config:
server {
listen 80;
listen [::]:80;
...
if ($http_x_forwarded_proto = 'http') {
return 301 https://$server_name$request_uri;
}
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Scheme $scheme;
proxy_pass http://app_server;
}
}
In my Django Settings I have :
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
SECURE_SSL_REDIRECT = False
The problem is, when a request is made to an endpoint, if I print(request.META.get('HTTP_X_FORWARDED_PROTO')) I get http instead of https. This causes my DRF auto-generated doc links to be generated with http instead of https.
Is there something wrong with my configurations?
How can I force https behind an ELB?
Just add
proxy_set_header X-Forwarded-Proto https;
to your nginx config. Your nginx will always be serving clients over https, since the ELB is configured to receive https traffic.
Also, the reason $scheme may not have worked is that your nginx itself is still speaking http, not https:
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
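As background, SECURE_PROXY_SSL_HEADER works because Django's request.is_secure() trusts the named proxy header when it exactly matches the configured value. A simplified illustration of that logic (not Django's actual implementation):

```python
# The Django setting: trust the X-Forwarded-Proto header set by the
# proxy/ELB; the WSGI environ key for it is HTTP_X_FORWARDED_PROTO.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")


def is_secure(meta):
    """`meta` plays the role of request.META (a WSGI environ-style dict)."""
    header, expected = SECURE_PROXY_SSL_HEADER
    return meta.get(header) == expected


print(is_secure({"HTTP_X_FORWARDED_PROTO": "https"}))  # True behind a TLS-terminating ELB
print(is_secure({"HTTP_X_FORWARDED_PROTO": "http"}))   # False: Django keeps building http URLs
```

This is why hard-coding proxy_set_header X-Forwarded-Proto https; fixes the generated links: the header must carry the exact string "https" by the time it reaches Django.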

clustering wso2 api manager gateway

I'm trying to cluster the WSO2 API Manager 1.10 gateway across 3 hosts using the tutorial here: https://docs.wso2.com/display/CLUSTER44x/Clustering+the+Gateway, but some of the steps are confusing.
As far as I know, WSO2 API Manager has two transports:
1.) the servlet transport (Tomcat), at ports 9443 (https) and 9763 (http), used to serve Carbon-related services
2.) the PTT/NIO transport (Axis2), at ports 8243 (https) and 8280 (http), used to serve requests to deployment artifacts.
What I don't understand from the tutorial is:
1.) Why should there be a port mapping in the clustering configuration (located in the axis2 configuration) of the gateway manager component?
<parameter name="properties">
<property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
<property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
<property name="subDomain" value="mgt"/>
<property name="port.mapping.80" value="9763"/>
<property name="port.mapping.443" value="9443"/>
</parameter>
Isn't it already defined in the load balancer (nginx) configuration
server {
listen 443;
server_name mgt.am.wso2.com;
ssl on;
ssl_certificate /etc/nginx/ssl/mgt.crt;
ssl_certificate_key /etc/nginx/ssl/mgt.key;
location / {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_read_timeout 5m;
proxy_send_timeout 5m;
proxy_pass https://xxx.xxx.xxx.xx3:9443/;
}
error_log /var/log/nginx/mgt-error.log ;
access_log /var/log/nginx/mgt-access.log;
}
and in the Tomcat configuration?
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9763" proxyPort="80" ... />
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9443" proxyPort="443"/>
2.) Why does the load balancer configuration for the gateway worker use the servlet port? Shouldn't it be the PTT/NIO port (since the gateway workers serve requests to deployment artifacts)?
upstream wso2.am.com {
sticky cookie JSESSIONID;
server xxx.xxx.xxx.xx4:9763;
server xxx.xxx.xxx.xx5:9763;
}
server {
listen 80;
server_name am.wso2.com;
location / {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_read_timeout 5m;
proxy_send_timeout 5m;
proxy_pass http://wso2.am.com;
}
}
upstream ssl.wso2.am.com {
sticky cookie JSESSIONID;
server xxx.xxx.xxx.xx4:9443;
server xxx.xxx.xxx.xx5:9443;
}
server {
listen 443;
server_name am.wso2.com;
ssl on;
ssl_certificate /etc/nginx/ssl/wrk.crt;
ssl_certificate_key /etc/nginx/ssl/wrk.key;
location / {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_read_timeout 5m;
proxy_send_timeout 5m;
proxy_pass https://ssl.wso2.am.com;
}
}
There are two transport ports in API Manager: PTT and servlet. When a request comes into API Manager, it always goes to the default transport, which is the PTT/NIO transport.
When the admin services are called (e.g. publishing an API), you send a servlet request. If you do not specify the port mapping parameter on the manager node, the request would hit the PTT/NIO transport and fail.
As for the load balancer, the tutorial only gives a common example config; you need to adapt it for the Gateway worker.

Nginx Share Cookies Between Subdomains without Access to Backend

TLDR: How can I share cookies between subdomains for a backend application server that I cannot "configure", using nginx (1.8.x) as a proxy? Some magical combination of proxy_* directives?
A Tornado web server that I cannot configure is running at "127.0.0.1:9999/ipython" (it's running as part of an IPython notebook server). I'm using nginx to proxy from "www.mysite.com" to 127.0.0.1:9999 successfully (for HTTP traffic, at least).
However, part of the backend application requires Websockets. Because I am using CloudFlare, I have to use a separate domain for Websockets ("Websockets are currently only available for Enterprise customers ... All other customers ... should create a subdomain for Websockets in their CloudFlare DNS and disable the CloudFlare proxy"). I'm using "ws.mysite.com".
When a user logs in at "https://www.mysite.com", a cookie is set by the Tornado web server for "www.mysite.com" (I can't seem to configure it, otherwise I would just set it to ".mysite.com"). When the websocket part of the application kicks in, it sends a request to "wss://ws.mysite.com" but fails to authenticate, because the cookie is set for a different domain ("www.mysite.com").
Is it possible for nginx to "spoof" the domain so the Tornado web server registers it for ".mysite.com"? proxy_cookie_domain doesn't seem to work as I'd expect... Should I hard-code "proxy_set_header Host"?
I was thinking a nginx conf similar to....
upstream ipython_server {
server 127.0.0.1:8888;
}
server {
listen 443;
server_name www.mysite.com;
ssl_certificate cert.crt;
ssl_certificate_key cert.key;
ssl on;
# **** THIS DOESN'T WORK ??? ****
proxy_cookie_domain www.mysite.com .mysite.com;
location /ipython/static {
proxy_pass https://ipython_server$request_uri;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /ipython/api/sessions {
proxy_pass https://ipython_server$request_uri;
proxy_set_header Host $host;
proxy_set_header Origin "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /ipython {
proxy_pass https://ipython_server$request_uri;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location / {
try_files $uri $uri/ =404;
}
}
server {
listen 443;
server_name ws.azampagl.com;
ssl_certificate cert.crt;
ssl_certificate_key cert.key;
ssl on;
# **** THIS DOESN'T WORK ??? ****
proxy_cookie_domain ws.mysite.com .mysite.com;
# This is the websocket location
location /ipython/api/kernels/ {
proxy_pass https://ipython_server$request_uri;
proxy_redirect off;
proxy_http_version 1.1;
chunked_transfer_encoding off;
proxy_buffering off;
proxy_read_timeout 86400;
proxy_set_header Host $host;
proxy_set_header Origin "";
proxy_set_header Upgrade websocket;
proxy_set_header Connection "upgrade";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
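One detail worth noting: nginx's proxy_cookie_domain only rewrites a Domain attribute that is already present in the upstream Set-Cookie header; a host-only cookie (one with no Domain attribute, which is likely what Tornado is sending here) passes through unchanged, which would explain why the directive appears to do nothing. A rough Python sketch of the rewrite rule:

```python
def rewrite_cookie_domain(set_cookie, old, new):
    """Mimic nginx's proxy_cookie_domain: rewrite the Domain attribute
    of a Set-Cookie header value, but only if the cookie carries one."""
    out = []
    for part in (p.strip() for p in set_cookie.split(";")):
        name, _, value = part.partition("=")
        if name.lower() == "domain" and value.lower() == old.lower():
            part = "Domain=" + new
        out.append(part)
    return "; ".join(out)


# A cookie with an explicit Domain attribute gets rewritten:
print(rewrite_cookie_domain("sid=abc; Path=/; Domain=www.mysite.com",
                            "www.mysite.com", ".mysite.com"))
# → sid=abc; Path=/; Domain=.mysite.com

# A host-only cookie (no Domain attribute) is passed through untouched:
print(rewrite_cookie_domain("sid=abc; Path=/",
                            "www.mysite.com", ".mysite.com"))
# → sid=abc; Path=/
```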
I've been looking at the nginx Lua module; it looks like you can set cookie domains there, but it feels hackish...
Thanks greatly in advance for your assistance!
(Side note: I do technically have access to the tornado configuration, but there is zero documentation on how to set the "cookie domain" for the server. i.e.
c.NotebookApp.tornado_settings = {'cookie_domain????':'.mysite.com'}
)