I'm trying to cluster the WSO2 API Manager 1.10 gateway across 3 hosts using the tutorial here: https://docs.wso2.com/display/CLUSTER44x/Clustering+the+Gateway, but some of the steps are confusing.
As I understand it, WSO2 API Manager has two transports:
1.) the servlet transport (Tomcat), on ports 9443 (HTTPS) and 9763 (HTTP), used to serve Carbon-related services
2.) the PTT/NIO transport (Axis2), on ports 8243 (HTTPS) and 8280 (HTTP), used to serve requests to deployment artifacts.
What I don't understand from the tutorial is:
1.) Why should there be a port mapping in the clustering configuration (in the axis2 configuration) of the gateway manager component?
<parameter name="properties">
    <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
    <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
    <property name="subDomain" value="mgt"/>
    <property name="port.mapping.80" value="9763"/>
    <property name="port.mapping.443" value="9443"/>
</parameter>
Isn't it already defined in the load balancer (nginx) configuration
server {
    listen 443;
    server_name mgt.am.wso2.com;
    ssl on;
    ssl_certificate /etc/nginx/ssl/mgt.crt;
    ssl_certificate_key /etc/nginx/ssl/mgt.key;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass https://xxx.xxx.xxx.xx3:9443/;
    }
    error_log /var/log/nginx/mgt-error.log;
    access_log /var/log/nginx/mgt-access.log;
}
and in the Tomcat configuration?
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9763" proxyPort="80" ... />
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="9443" proxyPort="443"/>
2.) Why does the load balancer configuration for the gateway worker use the servlet ports? Shouldn't it be the PTT/NIO ports (since the gateway workers are used to serve requests to deployment artifacts)?
upstream wso2.am.com {
    sticky cookie JSESSIONID;
    server xxx.xxx.xxx.xx4:9763;
    server xxx.xxx.xxx.xx5:9763;
}
server {
    listen 80;
    server_name am.wso2.com;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass http://wso2.am.com;
    }
}
upstream ssl.wso2.am.com {
    sticky cookie JSESSIONID;
    server xxx.xxx.xxx.xx4:9443;
    server xxx.xxx.xxx.xx5:9443;
}
server {
    listen 443;
    server_name am.wso2.com;
    ssl on;
    ssl_certificate /etc/nginx/ssl/wrk.crt;
    ssl_certificate_key /etc/nginx/ssl/wrk.key;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass https://ssl.wso2.am.com;
    }
}
There are two transports in API Manager: PTT/NIO and servlet. When a request comes into API Manager, it goes to the default transport, which is the PTT/NIO transport.
When the admin services are called (e.g. publishing an API), you send a servlet request. If you do not specify the port mapping parameters on the manager node, that request would hit the PTT/NIO transport and fail.
The tutorial only gives a generic example of a load balancer config; your config needs to be changed to suit the gateway worker, for example as sketched below.
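A minimal sketch (my assumption, not taken from the tutorial) of how the worker upstreams could point at the PTT/NIO transport instead of the servlet transport, reusing the xx4/xx5 hosts from the config above and the default 8280/8243 ports:
# HTTP API traffic goes to the NIO port 8280 on each gateway worker
upstream wso2.am.com {
    server xxx.xxx.xxx.xx4:8280;
    server xxx.xxx.xxx.xx5:8280;
}
# HTTPS API traffic goes to the NIO port 8243 on each gateway worker
upstream ssl.wso2.am.com {
    server xxx.xxx.xxx.xx4:8243;
    server xxx.xxx.xxx.xx5:8243;
}
The am.wso2.com server blocks can keep proxying to these upstreams as in your current config; API traffic is stateless, so the sticky JSESSIONID cookie is only needed if the same nodes also serve the management console.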
Related
I have a Spring Boot web application deployed in an Elastic Beanstalk single-instance environment using Amazon Linux 2. I have configured SSL in NGINX as per the documentation and all HTTPS requests are working fine.
However, the HTTP requests are not redirected to HTTPS.
Below is my conf file located at \PROJECT_ROOT\.platform\nginx\conf.d\https.conf
# HTTP server
server {
    listen 80;
    return 301 https://example.com$request_uri;
}
# HTTPS server
server {
    listen 443 ssl;
    ssl_certificate /etc/pki/tls/certs/server.crt;
    ssl_certificate_key /etc/pki/tls/certs/server.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
I have created an A record to map example.com to the EB environment URL.
However, when I try to hit http://example.com it simply loads the homepage over HTTP rather than redirecting to HTTPS.
Can someone please help me with this?
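One thing worth checking, though this is an assumption and not visible in the config above: on the Amazon Linux 2 platform, Elastic Beanstalk ships its own nginx.conf with a default server on port 80 that proxies straight to the application, and a server block in .platform/nginx/conf.d/ with no server_name may lose to that default when nginx picks a server for http://example.com. A sketch of a redirect block that names the host explicitly:
# Hypothetical variant: match example.com explicitly so this block wins over the platform's default server
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
If the platform's default port 80 server still takes the request, that listener may need to be overridden in the platform nginx configuration rather than only supplemented.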
I'm using WSO2 Identity Server 5.10.
I need to add support for X509 Authentication and I was reading the documentation about the X509 Certificate Authentication here
In order to add the configuration I had to modify the catalina-server.xml.j2 file as suggested here
Having done that, it works, but the ports get quite tangled... anyway, let's assume it works.
Now I have this issue: I need to deploy WSO2 Identity Server in a K8S cluster. So, basically, I have an nginx ingress controller that will manage all the traffic to the backend.
What I did locally is put a simple nginx reverse proxy in front and configure WSO2 Identity Server to use this proxy. So in my deployment.toml I did the following:
[custom_transport.x509.properties]
protocols="HTTP/1.1"
port="8443"
maxThreads="200"
scheme="https"
secure=true
SSLEnabled=true
keystoreFile="mykeystore.jks"
keystorePass="pwd"
truststoreFile="myclient_trust.jks"
truststorePass="myclient_trust_pwd"
bindOnInit=false
clientAuth="want"
sslProtocol = "TLS"
proxyPort="443"
[authentication.authenticator.x509_certificate.parameters]
name ="x509CertificateAuthenticator"
enable=true
AuthenticationEndpoint="https://$ref{server.hostname}:8443/x509-certificate-servlet"
username= "CN"
SearchAllUserStores="false"
EnforceSelfRegistration = "false"
SearchAndValidateUserLocalStores = "false"
[transport.https.properties]
proxyPort="443"
This way, when I want to sign in using X509 Certificate Authentication, it asks for my certificate, but then, when I choose the certificate, it shows an error because it can't find the certificate in the browser request.
Moreover, I don't think I should leave AuthenticationEndpoint="https://$ref{server.hostname}:8443/x509-certificate-servlet", because this means the submission goes to port 8443, which is never exposed on the internet.
Has anyone solved this issue? Basically the question is: how can I configure X509 Certificate Authentication behind a proxy (e.g. nginx)?
Any tip is very precious.
Thank you
Angelo
I think I solved the issue I was facing.
Basically, when there is a reverse proxy (e.g. nginx), the certificate doesn't reach Tomcat in the request attribute.
What I did is configure the reverse proxy to put the certificate in an HTTP header without verifying it; I verify it on the server side.
So now my nginx configuration is:
server {
    listen 443;
    server_name wso2_iam;
    ssl on;
    ssl_certificate certificate_full_path.crt;
    ssl_certificate_key full_path_to_kwy_no_pwd.key;
    ssl_verify_client optional_no_ca;
    location /x509-certificate-servlet/ {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass https://127.0.0.1:8443/x509-certificate-servlet;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;
        proxy_read_timeout 5m;
        proxy_send_timeout 5m;
        proxy_pass https://127.0.0.1:9443/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    error_log /var/log/nginx/wso2-error.log;
    access_log /var/log/nginx/wso2-access.log;
}
The ssl_verify_client optional_no_ca; directive tells nginx to request the client certificate but not to validate it.
The proxy_set_header X-SSL-CERT $ssl_client_escaped_cert; instruction tells nginx to put the PEM-encoded (URL-escaped) certificate in a request header called X-SSL-CERT.
Then I modified org.wso2.carbon.identity.authenticator.x509Certificate.X509CertificateAuthenticator so that it looks for the certificate among the request headers when it is not found in the HTTP request attribute.
It seems to be working.
Thank you to all
Angelo
I am deploying a website on AWS. Everything works fine for HTTP and HTTPS. I am passing all requests to Daphne. However, incoming WebSocket connections are treated as HTTP requests by Django. I am guessing there is some header that isn't set in Nginx, but I have copied a lot of my Nginx config from tutorials.
Nginx Config:
upstream django {
    server 127.0.0.1:9000;
}
server {
    listen 80;
    server_name 18.130.130.126;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name 18.130.130.126;
    ssl_certificate /etc/nginx/certificate/certificate.crt;
    ssl_certificate_key /etc/nginx/certificate/private.key;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
    location / {
        include proxy_params;
        proxy_pass http://django;
    }
}
Daphne is bound to 0.0.0.0:9000. Channels has a very basic setup: a ProtocolTypeRouter, with AuthMiddlewareStack and then URLRouter, as shown in the Channels tutorial, and then a Consumer class. I am using Redis for the channel layer, but that doesn't seem to be the problem. Looking at the request and response in Fiddler, the request headers say Upgrade to WebSocket, but it comes back as a 404 HTTP response, as Django doesn't see it as a WebSocket request.
Thanks for any help.
include proxy_params was the problem. It was overriding the headers set at the server level.
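For reference, a sketch of what the fixed location block might look like. The point (an inference from the answer above, assuming Ubuntu's stock proxy_params file) is that any proxy_set_header defined inside the location stops nginx from inheriting the server-level Upgrade/Connection headers, so either drop the include or repeat those headers inside the location:
location / {
    proxy_pass http://django;
    proxy_http_version 1.1;
    # once any proxy_set_header exists at this level, the WebSocket upgrade headers must be repeated here
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}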
I have two Docker containers running on AWS Elastic Beanstalk. One container has my web application (Django) and the other has my NGINX server. I have a PositiveSSL certificate verified for my domain name. After configuring NGINX to accept HTTPS, the website refuses to connect over HTTPS and only works over HTTP.
My AWS security groups are open to accept traffic on port 443 and my certificate is valid, so I can only assume I am not setting up my nginx correctly.
upstream app {
    server app:8000;
}
server {
    listen 443 ssl;
    server_name mysite.com www.mysite.com;
    ssl_certificate /app/ssl/mysite_chain.crt;
    ssl_certificate_key /app/ssl/mysite.key;
    location / {
        proxy_pass http://app;
        proxy_ssl_session_reuse on;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
    location /staticfiles/ {
        alias /app/staticfiles/;
    }
}
server {
    listen 80;
    location / {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
    location /staticfiles/ {
        alias /app/staticfiles/;
    }
}
Everything works fine when I use plain HTTP, and for some reason I don't get any logs from NGINX for HTTPS. The only message I get is from my browser saying the 'site can't be reached' and that the 'website refused the connection'. Is there something obvious here that I am missing?
I am using an nginx proxy to force all traffic through HTTPS. However, I have a page (/upload) which posts to /upload-downloadable, which then uploads the user's files as a stream to AWS (bucketname.s3.eu-west-1.amazonaws.com).
The upload works, as I can see the files in the S3 bucket, but no response comes back to tell the user.
It works perfectly without the proxy, but not with my current config. So Client -> AWS works, but AWS -> Server/Client doesn't.
Any ideas?
upstream site {
    server 127.0.0.1:1337;
}
upstream project {
    server localhost:27017;
}
# HTTP — redirect all traffic to HTTPS
server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    return 301 https://$host$request_uri;
}
# HTTPS — proxy all requests to the Node app
server {
    # Enable HTTP/2
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name tryhackme.com;
    error_page 502 /down.html;
    location /down.html {
        root /var/www/html;
    }
    #error_page 500 502 503 504 /var/www/html/down.html;
    # Use the Let’s Encrypt certificates
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    # Include the SSL configuration from cipherli.st
    include snippets/ssl-params.conf;
    location / {
        #proxy_pass http://127.0.0.1:28017;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_read_timeout 3600;
        proxy_pass http://localhost:1337/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
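One possibility, purely a guess given the config above: nginx's defaults around request buffering and body size (client_max_body_size defaults to 1m, and request bodies are buffered before being passed upstream) can get in the way of large or streamed uploads through a proxy. A hypothetical location for the upload endpoint that relaxes those limits (the /upload-downloadable path and the sizes are assumptions) might look like:
location /upload-downloadable {
    proxy_pass http://localhost:1337;
    # permit large upload bodies and stream them to the Node app instead of buffering them to disk first
    client_max_body_size 100m;
    proxy_request_buffering off;
    # pass the app's completion response straight back to the client
    proxy_buffering off;
    proxy_read_timeout 3600;
}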