Is there a mechanism built into Tomcat or Apache to differentiate between a local web application calling a local web service and a direct call to that web service?
For example, a single server runs both Apache and Tomcat, with a front-end web app deployed on Apache and back-end services deployed on Tomcat. The web app can call the service via port 80 using mod_proxy; however, when you make a direct call to the web service and examine Tomcat's logs, the two requests look identical. For example:
http://127.0.0.1/admin/tools
<Location /admin/tools>
Order Deny,Allow
Deny from all
Allow from 127.0.0.1
</Location>
ProxyPass /admin/tools http://localhost:8080/admin/tools
ProxyPassReverse /admin/tools http://localhost:8080/admin/tools
This only blocks (or allows, if you remove the Deny) all external requests, and both kinds of request still appear identical in Tomcat's log.
Is there a recommended mechanism to differentiate between, and limit, a direct remote service request vs the web application making a service request?
You need to use Tomcat's Remote IP Filter to extract the client IP provided by the proxy in the X-Forwarded-For HTTP header and use it to populate Tomcat's internal data structures. Then you will be able to correctly identify the source of the request.
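A minimal way to enable this, assuming Tomcat 7 or later (where the filter ships with Tomcat as org.apache.catalina.filters.RemoteIpFilter), is to declare it in the web application's web.xml:

```xml
<!-- Sketch: enable Tomcat's RemoteIpFilter so the client IP from the
     X-Forwarded-For header replaces the proxy's address in
     request.getRemoteAddr() and in the access logs -->
<filter>
    <filter-name>RemoteIpFilter</filter-name>
    <filter-class>org.apache.catalina.filters.RemoteIpFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>RemoteIpFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```

With this in place, requests proxied through Apache show the original client's address, while direct requests to port 8080 show the caller's own address, so the two can be told apart and filtered inside Tomcat.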
My Django application currently runs over HTTPS on the server. Recently I added new functionality for which it accesses another link, an HTTP link, to get a JSON object.
It works fine on localhost, but when I deploy it to the server, it shows the following error:
Site was loaded over HTTPS, but requested an insecure resource http link. This request has been blocked; the content must be served over HTTPS.
Can someone please suggest a workaround to bypass this so that the new functionality runs smoothly?
This error comes from the browser, so there is not much you can do on the server side.
The easiest thing would be to enable HTTPS on those external resources, if you have control over them.
The next workaround would be to add a proxy for your HTTP resources and serve that proxy over HTTPS. For example, you could add a simple nginx server with proxy_pass to your HTTP server and enable HTTPS on that proxying nginx.
Note that if the JSON you are talking about contains anything sensitive, security-wise you really should serve it via HTTPS and not via the proxy workaround described above. If nothing sensitive is served, the workaround might be OK.
Since you have control over your HTTP server, just enable SSL proxying in nginx, with a configuration that may look something like this:
server {
    listen 443 ssl;
    server_name my.host.name;
    ssl_certificate /path/to/cert;
    ssl_certificate_key /path/to/key;

    location / {
        proxy_pass http://localhost:80;
    }
}
Note that if you're using something like AWS / GCP / Azure, you can terminate HTTPS on the load balancer instead of nginx.
Otherwise, you can use Let's Encrypt to get an actual certificate; its tooling can even do some auto-configuration of nginx for you.
Like many others, we have been bitten by the lack of TLS and SHA-2 support in IBM Domino.
Our application relies heavily on consuming web services that require authentication using certificates, and everything worked fine until last week. Then one of the providers started requiring SHA-2 certificates for authentication, and another started requiring TLS instead of SSL v3.
Our current solution uses Java web service consumers, similar to this:
ServiceBinding stub = new ServiceLocator().getWebService(portAddress);
stub.setSSLOptions(PortTypeBase.NOTES_SSL_SEND_CLIENT_CERT + PortTypeBase.NOTES_SSL_ACCEPT_SITE_CERTS);
Certificates are kept in the server's keyring.
How can we use SHA-2 certificates and TLS with Domino web consumers?
I tried importing the certificates in Java truststore / keystore and using code like this:
System.setProperty("javax.net.ssl.keyStore", "/path/to/keystore");
System.setProperty("javax.net.ssl.keyStorePassword", "pwd);
System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore");
System.setProperty("javax.net.ssl.trustStorePassword", "pwd");
System.setProperty("java.protocol.handler.pkgs","com.sun.net.ssl.internal.www.protocol");
but it didn't seem to work. I am still debugging the code in order to find the exact cause.
But what to do about TLS? Is it possible to use Apache / nginx as some kind of proxy for web service authentication?
Or is our only option to write web service consumers as standalone Java applications and call them from Notes?
Thanks,
Sasa
We were able to solve both the SHA-2 and TLS issues by using an Apache reverse proxy. We first tried a forward proxy, but it didn't work.
In the working solution, our Domino web service consumer first contacts the Apache reverse proxy over SSL, but without any authentication. Apache then contacts the web service provider using the certificate that Domino used previously.
After Apache and the web service provider have finished the handshake and authentication, the web service consumer in Domino is free to do its stuff.
As it turns out, it was rather easy to set up. You'll need an Apache server (obviously); we installed ours in a CentOS virtual machine.
The configuration you need to do is quite simple and looks like this:
<VirtualHost *:8443>
    # Turn off forward proxying
    ProxyRequests Off

    # Communication with Domino uses SSL, so we need SSL support
    SSLEngine On
    SSLCertificateFile /etc/pki/tls/certs/localhost.crt
    SSLCertificateKeyFile /etc/pki/tls/private/localhost.key

    # This is necessary for authentication to work.
    SSLProxyEngine On
    # This is the Domino certificate, including the private key, saved as an unencrypted pem file.
    SSLProxyMachineCertificateFile /etc/httpd/certs/domino-cert.pem
    # This is the list of CA certificates necessary to authenticate the provider.
    SSLProxyCACertificateFile /etc/httpd/certs/provider-cert.pem

    # The redirection rules are in this case very simple - redirect everything that comes
    # to the proxy to the web service provider address.
    ProxyPass / https://ws.provider.com/
    ProxyPassReverse / https://ws.provider.com/

    # Allow only connections from the intranet.
    <Proxy *>
        Order deny,allow
        Deny from all
        Allow from 172.20.20.0/24
    </Proxy>
</VirtualHost>
Just a few things to mention here:
You should be able to use the certificate and key installed by default with Apache, as they are only used to secure communication between Domino and the proxy.
The Domino key and certificate must be in unencrypted pem format; use openssl to convert them if necessary. If you get an error message about a missing or encrypted private key, open your pem file and confirm that the private-key block reads -----BEGIN RSA PRIVATE KEY----- and -----END RSA PRIVATE KEY-----. openssl sometimes generates the key without the RSA marker, and then Apache won't be able to use it.
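The passphrase-stripping step can be sketched with openssl like this. The file names are made up for illustration, and the first command only generates a protected key so the example is self-contained; in practice you would start from the key exported from the Domino keyring:

```shell
# Stand-in for a passphrase-protected key exported from the Domino keyring
openssl genrsa -aes256 -passout pass:secret -out encrypted-key.pem 2048

# Strip the passphrase: the output is an unencrypted PEM private key.
# (For SSLProxyMachineCertificateFile, concatenate this key and the
# certificate into a single pem file.)
openssl rsa -in encrypted-key.pem -passin pass:secret -out domino-key.pem
```

Depending on the openssl version, the output header may read BEGIN RSA PRIVATE KEY or BEGIN PRIVATE KEY; if Apache complains about the latter, re-export in the traditional RSA format.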
That concludes the Apache configuration. The only thing that remains is to modify the web service consumer: find the line in your code where you set the endpoint address, something like
https://ws.provider.com/ws/getTemperature
and change it to
https://proxy.mycompany.com:8443/ws/getTemperature
And that's it. We now have a working solution for using Domino web services together with TLS and SHA-2 certificates, and we can calmly wait for IBM to implement support for this in Domino.
SHA-2 works, but for TLS on Windows and Unix see the tips below.
I guess that in the context of POODLE it is TLS, not SHA-2, that is critical, but anyway, here is how to get SHA-2 working with Domino 9 without IBM HTTP:
http://www.infoware.com/?p=1592
Note that this solves only SHA-2, NOT TLS.
For Windows, use IHS integration.
For Unix, look at this link: http://blog.darrenduke.net/darren/ddbz.nsf/dx/here-is-a-freely-available-vm-to-reverse-proxy-domino-shoot-the-poodle.htm
Regards
Mats
You can avoid having to change your addresses to use a different port.
The way I solved this was to use IBM HTTP Server (IHS), installed with the Domino 9 server (you have to choose IBM HTTP Server on the Custom installation screen). IHS is a version of Apache with a Domino HTTP handler. You can install your TLS certificates on the IHS/Apache server, and it will proxy to the Domino server on the fly, so you don't even have to change your URLs.
Here are some instructions from IBM:
http://www-01.ibm.com/support/docview.wss?uid=swg27039743&aid=1
It shows you how to create Certificate Signing Requests (CSRs) using IKEYMAN and store the certificate in Domino.
In the domino\ihs\conf\domino.conf file, uncomment the lines shown below and add the VirtualHost nodes:
# IPv4 support:
Listen 0.0.0.0:80
# Uncomment the following line for IPv6 support on Windows XP or Windows
# 2003 or later. Windows IPv6 networking must be configured first.
# Listen [::]:80
...
Listen 0.0.0.0:443
## IPv6 support:
#Listen [::]:443
#default vhost for Domino HTTP:
<VirtualHost *:80>
    ServerName "${DOMINO_SERVER_NAME}"
    DocumentRoot "${DOMINO_DOCUMENT_ROOT}"
</VirtualHost>
<VirtualHost *:443>
    ServerName "${DOMINO_SERVER_NAME}"
    DocumentRoot "${DOMINO_DOCUMENT_ROOT}"
    SSLEnable
    #SSLProtocolDisable SSLv2
    #SSLProtocolDisable SSLv3
</VirtualHost>
KeyFile d:/keys/myserver.kdb
SSLDisable
#
Remember to add HTTPIHSEnabled=1 to notes.ini when all the domino.conf modifications are done, then watch the Domino console for any errors during HTTP start caused by domino.conf. You can also add HTTPIHSDebugStartup=1 to notes.ini to get a bit of debug info during IHS startup.
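Collected in one place, the notes.ini additions described above would look like:

```
HTTPIHSEnabled=1
HTTPIHSDebugStartup=1
```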
I intend to run a Jetty server (for generating PDF files with PDFreactor) on a dedicated (virtual) machine; I don't want to have it on my web server.
According to the PDFreactor documentation, the Jetty server must run on localhost to be usable by the Python API; but a port and host can be given to the PDFreactor constructor, and apparently the restriction to listen on localhost only can be lifted.
Can Jetty be configured to accept requests from a whitelist of hosts only, or is it preferable to put it behind a VirtualHost and let Apache httpd do the work?
I have three web services exposing their APIs.
Internally they are running on different ports but on the same server.
Then I have an nginx instance mapping those services to "api.domain.com", so that they are accessible from the web.
Now I need to secure these services, and I was thinking about OAuth2.
Unfortunately I have no experience with OAuth2, so I'd like to know if there's a way to use one access token for all three web services without requiring separate authorization for each service.
What I want is to allow a consumer to obtain authorization once and then access all services under api.domain.com (which is just a reverse proxy forwarding requests to our internal services).
Then I'd need to create a simple interface to perform certain operations on those services.
It would allow my users to log in with their account info. Of course, this interface would itself be a consumer of those services; can I skip the authorization part and allow this app to work on behalf of the user by just having them log in? It will run on the same server as the services.
Can I do this with OAuth2, or am I better off looking for something else?
Yes, you can do this irrespective of whether you use OAuth or not.
You can have a reverse proxy that sits in front of your services/apps and requires authentication. If a request is unauthenticated, the proxy redirects to whatever authentication mechanism is being used. Once that happens, it sets an authenticated request header that contains the user name, and the request is passed on according to whatever rules are configured.
I'm not sure about nginx, but I have used Apache modules like mod_authnz_ldap and mod_auth_cas to make sure the requests passing through are authenticated. This is a sample of my Apache config that uses CAS for authentication and checks the user's LDAP group for authorization (the user should belong to the developers group):
# Modules to enable (a2enmod):
#   proxy_http
#   auth_cas
#   authnz_ldap
<VirtualHost *:80>
    ServerName servername
    LogLevel debug

    CASVersion 2
    CASDebug On
    CASValidateServer Off
    CASLoginURL cas <loginurl>
    CASValidateURL <casvalidateurl>

    <Location />
        AuthType CAS
        AuthLDAPUrl ldap
        AuthLDAPGroupAttribute memberUid
        AuthLDAPGroupAttributeIsDN off
        Require valid-user
        Require ldap-group cn=developers,ou=Groups,dc=company,dc=com
        Satisfy All
    </Location>

    ProxyPreserveHost On
    ProxyRequests Off
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
Whether OAuth is a good choice for you really depends on your use case. From what I can see, you don't really require it, but I don't have sufficient information either.
I want to read the client's IP address in Django. When I try to do so now with the HTTP_X_FORWARDED_FOR header, it fails; the key is not present.
Apparently this is related to the configuration of my Apache server (I'm deploying with Apache and mod_wsgi). Do I have to configure it as a reverse proxy? How do I do that, and are there security implications?
Thanks,
Brendan
Usually these headers are available in request.META, so you might try request.META['HTTP_X_FORWARDED_FOR'].
Are you using Apache as a reverse proxy as well? That doesn't seem right to me. Usually one uses a lighter-weight static server like nginx as the reverse proxy in front of Apache running the app server. Nginx can send any headers you like using the proxy_set_header config entry.
I'm not familiar with mod_wsgi, but usually the client IP address is available in the REMOTE_ADDR environment variable.
If the client is accessing the website through a proxy, or if your setup includes a reverse proxy, the proxy's address will be in the REMOTE_ADDR variable instead, and the proxy may copy the original client IP into HTTP_X_FORWARDED_FOR (depending on its configuration).
If you have a request object, you can access these environment variables like this:
request.environ.get('REMOTE_ADDR')
request.environ.get('HTTP_X_FORWARDED_FOR')
There should be no need to change your Apache configuration or configure a reverse proxy just to get the client's IP address.
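Putting the two lookups together, a small helper along these lines covers both the direct and the proxied case. This is a sketch: the name get_client_ip is made up, and it assumes the proxy prepends the original client to the X-Forwarded-For chain, as is conventional:

```python
def get_client_ip(environ):
    """Return the client IP from a WSGI environ (or request.META) dict.

    Prefers the first address in X-Forwarded-For, which a proxy sets
    when one is involved, and falls back to REMOTE_ADDR otherwise.
    """
    forwarded = environ.get('HTTP_X_FORWARDED_FOR')
    if forwarded:
        # The header may hold a comma-separated chain of addresses;
        # the original client is conventionally the first entry.
        return forwarded.split(',')[0].strip()
    return environ.get('REMOTE_ADDR')

# In a Django view this would be called as get_client_ip(request.META)
# or get_client_ip(request.environ).
```

Keep in mind that X-Forwarded-For is client-controllable unless a trusted proxy overwrites it, so it should not be used for access control without validating the proxy chain.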