OAuth2 - Securing multiple web services' APIs with one token

I have three web services exposing their APIs.
Internally they are running on different ports but on the same server.
Then I have a nginx instance mapping those services to "api.domain.com", so that they are accessible from the web.
Now I need to secure these services, and I was thinking about OAuth2.
Unfortunately I have no experience with OAuth2, so I'd like to know whether there's a way to use one access token for all three web services without requiring a separate authorization for each service.
What I want is to allow a consumer to obtain authorization once and then access all services under api.domain.com (which is just a reverse proxy forwarding requests to our internal services).
Then I'd need to create a simple interface to perform certain operations on those services.
It would allow my users to log in with their account info. Of course, this interface would itself be a consumer of those services. Can I skip the authorization step and let this app work on behalf of the user just by having them log in? It will run on the same server as the services.
Can I do this with OAuth2 or am I better looking for something else?

Yes, you can do this irrespective of whether you use OAuth or not.
You can have a reverse proxy that sits in front of your services/apps and requires authentication. If the request is unauthenticated, it redirects to whatever authentication mechanism is being used. Once that happens, it sets an authenticated request header containing the user name, and the requests are passed on according to whatever rules are configured.
I'm not sure about nginx, but I have used Apache modules like mod_authnz_ldap and mod_auth_cas to make sure the requests passing through are authenticated. Here is a sample of my Apache config that uses CAS for authentication and checks the user's LDAP group for authorization (the user must belong to the developers group):
# a2enmod
# proxy_http
# auth_cas
# authnz_ldap
<VirtualHost *:80>
    ServerName servername
    LogLevel debug

    CASVersion 2
    CASDebug On
    CASValidateServer Off
    CASLoginURL <loginurl>
    CASValidateURL <casvalidateurl>

    <Location />
        AuthType CAS
        AuthLDAPUrl <ldapurl>
        AuthLDAPGroupAttribute memberUid
        AuthLDAPGroupAttributeIsDN off
        Require valid-user
        Require ldap-group cn=developers,ou=Groups,dc=company,dc=com
        Satisfy All
    </Location>

    ProxyPreserveHost On
    ProxyRequests Off
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
Whether OAuth is a good choice really depends on your use case. From what I can see you don't strictly need it, but I don't have enough information to say for sure.
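If you want to keep nginx as the gateway, the same pattern can be expressed with its auth_request module (most distribution packages build it in). The sketch below is only an illustration under my own assumptions: the ports, the certificate paths, the /validate endpoint and the token/session-checking service behind it are placeholders, not parts of your actual setup.
server {
    listen 443 ssl;
    server_name api.domain.com;
    ssl_certificate     /etc/nginx/certs/api.domain.com.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/api.domain.com.key;

    # Internal subrequest target; the (hypothetical) auth service returns
    # 2xx to allow the original request or 401/403 to reject it.
    location = /_auth {
        internal;
        proxy_pass http://127.0.0.1:9000/validate;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }

    # Each protected service gets the same auth_request, so one token or
    # session validated by the auth service covers all of them.
    location /service-a/ {
        auth_request /_auth;
        proxy_pass http://127.0.0.1:8081/;
    }
    location /service-b/ {
        auth_request /_auth;
        proxy_pass http://127.0.0.1:8082/;
    }
}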

Related

Make a localhost web application available online

I have built a basic web application using HTML, CSS and PHP (it is a library with query, modify, etc. capabilities). I have built the databases containing the book information, subscriber information, etc. with phpMyAdmin from WampServer. On localhost (C:\wamp\www) everything works fine (I can add, modify, run queries, etc.).
Now I would like to make this web application available online, but I have no idea how this can be done. Access to the database must also be available online (for searches, queries, etc.).
Can somebody help me?
Access to your database can stay local, since the PHP files that use your database run on the same machine.
You only need to allow online access to your Apache server, if it isn't accessible yet, and make sure no firewall is blocking it. Then you should be able to connect to your server by IP. You'll also need a domain and a DNS record if you don't want to type the public IP to connect.
You need a public IP address, or you need to route outside web traffic to your own web server.
Most routers have an advanced section called IP/Port Forwarding: find yours. If your router doesn't have this, I'm afraid you cannot be reached from the outside.
Besides that, find your private IP with:
C:\>ipconfig
Take note of the IP address: that's your private address, which uniquely identifies your machine on your local network.
In httpd.conf change:
ServerName localhost:80
With:
ServerName <private IP>:80
Also find this line:
Require local
And change it to:
Require all granted
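For reference, on a default WAMP installation that Require line usually lives inside a Directory block roughly like the following (the path is an assumption; check your own httpd.conf):
<Directory "c:/wamp/www/">
    Options Indexes FollowSymLinks
    AllowOverride All
    # Was "Require local"; this allows requests from any client
    Require all granted
</Directory>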
Restart your web server. Find out what your current public IP address is (the public address of your router: https://www.whatismyip.com ) and visit:
http://<public IP>:<port>/
Or, in case you have not changed the default http port (80) just visit:
http://<public IP>/

How to use TLS and SHA-2 certificates in Domino Web Service Consumer

Like many others, we have been bitten by the lack of TLS and SHA-2 support in IBM Domino.
Our application relies heavily on consuming web services that require authentication using certificates, and everything worked fine until last week. Then one of the providers started requesting SHA-2 certificates for authentication, and the other started requesting TLS instead of SSL v3.
Our current solution uses Java web consumers, similar to this:
ServiceBinding stub = new ServiceLocator().getWebService(portAddress);
stub.setSSLOptions(PortTypeBase.NOTES_SSL_SEND_CLIENT_CERT + PortTypeBase.NOTES_SSL_ACCEPT_SITE_CERTS);
Certificates are kept in the server's keyring.
How can we use SHA-2 certificates and TLS with Domino web consumers?
I tried importing the certificates in Java truststore / keystore and using code like this:
System.setProperty("javax.net.ssl.keyStore", "/path/to/keystore");
System.setProperty("javax.net.ssl.keyStorePassword", "pwd);
System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore");
System.setProperty("javax.net.ssl.trustStorePassword", "pwd");
System.setProperty("java.protocol.handler.pkgs","com.sun.net.ssl.internal.www.protocol");
but it didn't seem to work. I am still debugging the code in order to find the exact cause.
But what to do with TLS? Is it possible to use Apache / Nginx as some kind of proxy for web service authentication?
Or is our only option to write web service consumers as standalone Java applications and call them from Notes?
Thanks,
Sasa
We were able to solve both the SHA-2 and TLS issues by using an Apache reverse proxy. We first tried a forward proxy, but it didn't work.
In the working solution, our Domino web service consumer first contacts the Apache reverse proxy using SSL, but without any authentication. Apache then contacts the web service provider using the certificate that Domino used previously.
After Apache and the web service provider finish the handshake and authentication, the web service consumer in Domino is free to do its work.
As it turns out, it was rather easy to set up. You'll need an Apache server (obviously); we installed ours in a CentOS virtual machine.
The configuration you need to do is quite simple and looks like this:
<VirtualHost *:8443>
    # Turn off forward proxy
    ProxyRequests Off

    # Communication with Domino uses SSL, so we need SSL support
    SSLEngine On
    SSLCertificateFile /etc/pki/tls/certs/localhost.crt
    SSLCertificateKeyFile /etc/pki/tls/private/localhost.key

    # This is necessary for authentication to work.
    SSLProxyEngine On
    # This is the Domino certificate, including the private key, saved as an unencrypted PEM file.
    SSLProxyMachineCertificateFile /etc/httpd/certs/domino-cert.pem
    # This is the list of CA certificates needed to authenticate the provider.
    SSLProxyCACertificateFile /etc/httpd/certs/provider-cert.pem

    # Redirection rules are in this case very simple - redirect everything that comes
    # to the proxy to the web service provider address.
    ProxyPass / https://ws.provider.com/
    ProxyPassReverse / https://ws.provider.com/

    # Allow only connections from the intranet.
    <Proxy *>
        Order deny,allow
        Deny from all
        Allow from 172.20.20.0/24
    </Proxy>
</VirtualHost>
Just a few things to mention here:
You should be able to use the certificate and key installed by default with Apache, as they are only used to secure communication between Domino and the proxy.
The Domino key and certificate must be in unencrypted PEM format. Use openssl to convert them if necessary. If you get an error message about a missing or encrypted private key, open your PEM file and confirm that the key block includes RSA in the lines -----BEGIN RSA PRIVATE KEY----- and -----END RSA PRIVATE KEY-----. openssl sometimes writes the key without the RSA marker, and then Apache won't be able to use it.
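As a hedged example of that conversion (file names are placeholders for wherever your Domino key material was exported): if you have the certificate and key in a PKCS#12 file, the following writes both into a single unencrypted PEM suitable for SSLProxyMachineCertificateFile. Afterwards, open the file and check the key headers as described above.
# Export the certificate and unencrypted private key from PKCS#12 to PEM
openssl pkcs12 -in domino-export.p12 -nodes -out /etc/httpd/certs/domino-cert.pem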
That concludes the Apache configuration. The only thing that remains is to modify the web service consumer: find the line in your code where you set the endpoint address, something like
https://ws.provider.com/ws/getTemperature
and change it to
https://proxy.mycompany.com:8443/ws/getTemperature
And that's it. We now have a working solution for using Domino web services together with TLS and SHA-2 certificates, and we can calmly wait for IBM to implement support for this in Domino.
SHA-2 works with this approach, but for TLS see the Windows and Unix tips below.
I guess that in the context of POODLE it is TLS, not SHA-2, that is critical, but anyway, here is how to get SHA-2 working with Domino 9 without IBM HTTP Server:
http://www.infoware.com/?p=1592
TLS is NOT solved by this, only SHA-2.
For Windows, use the IHS integration.
For Unix, look at this link: http://blog.darrenduke.net/darren/ddbz.nsf/dx/here-is-a-freely-available-vm-to-reverse-proxy-domino-shoot-the-poodle.htm
Regards
Mats
You can avoid having to change your addresses to use a different port.
The way I solved this was to use IBM HTTP Server (IHS) installed with Domino 9 Server (you have to choose IBM HTTP Server from the Custom installation screen). IHS is a version of Apache with a Domino HTTP handler. You can install your TLS certificates on the IHS/Apache server, and it will proxy to the Domino server on the fly, so you don't even have to change your URLs.
Here are some instructions from IBM:
http://www-01.ibm.com/support/docview.wss?uid=swg27039743&aid=1
It shows you how to create Certificate Signing Requests (CSRs) using IKEYMAN and store the certificate in Domino.
In the domino\ihs\conf\domino.conf file, uncomment the lines shown below and add the VirtualHost blocks:
# IPv4 support:
Listen 0.0.0.0:80
# Uncomment the following line for IPv6 support on Windows XP or Windows
# 2003 or later. Windows IPv6 networking must be configured first.
# Listen [::]:80
...
Listen 0.0.0.0:443
## IPv6 support:
#Listen [::]:443
#default vhost for Domino HTTP:
<VirtualHost *:80>
    ServerName "${DOMINO_SERVER_NAME}"
    DocumentRoot "${DOMINO_DOCUMENT_ROOT}"
</VirtualHost>

<VirtualHost *:443>
    ServerName "${DOMINO_SERVER_NAME}"
    DocumentRoot "${DOMINO_DOCUMENT_ROOT}"
    SSLEnable
    #SSLProtocolDisable SSLv2
    #SSLProtocolDisable SSLv3
</VirtualHost>
KeyFile d:/keys/myserver.kdb
SSLDisable
#
Remember to add HTTPIHSEnabled=1 to notes.ini when all the domino.conf modifications are done. Then watch the Domino console for any errors during HTTP Start due to domino.conf. You can also add HTTPIHSDebugStartup=1 to notes.ini to get a bit of debug info during HTTP IHS startup.
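For reference, the notes.ini entries mentioned above look like this (the second one is optional and only adds startup debug output):
HTTPIHSEnabled=1
HTTPIHSDebugStartup=1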

Can foo.example.com set a cookie for bar.example.com?

I'm setting these cookies for a single sign on solution where I have one app running at foo.example.com and a different app running at bar.example.com.
I know that I can set a cookie from foo.example.com for .example.com.
If I had control over bar.example.com I'd just have it recognize a cookie from .example.com, but I have very little control over it.
For what it's worth, the app at foo.example.com is in python and the app at bar.example.com is java.
You can certainly try. However, browsers should not honor this behavior, as it amounts to a cross-site cooking attack.
This is not possible. SSO is done using protocols such as OAuth or SAML, which involve sending signed messages between the endpoints and/or direct communication between them. There is no way to do this purely on the client side.

Restrict RESTful endpoint on tomcat to local webapp

Is there a mechanism built into Tomcat or Apache to differentiate between a local web application calling a local web service vs a direct call to a webservice?
For example, a single server has both Apache and Tomcat, with a front-end web app deployed on Apache and back-end services deployed on Tomcat. The web app can call the service via port 80 using mod_proxy; however, a direct call to the web service and a call made by the web app look identical in Tomcat's logs. For example:
http://127.0.0.1/admin/tools
<Location /admin/tools>
    Order Deny,Allow
    Deny from all
    Allow from 127.0.0.1
</Location>
ProxyPass /admin/tools http://localhost:8080/admin/tools
ProxyPassReverse /admin/tools http://localhost:8080/admin/tools
This only blocks (or, with the Deny removed, allows) all external requests, and both kinds of request still appear identical in Tomcat's log.
Is there a recommended mechanism to differentiate and limit a direct remote service request vs the web application making a service request?
You need to use Tomcat's Remote IP Filter to extract the client IP provided by the proxy in the X-Forwarded-For HTTP header and use it to populate Tomcat's internal data structures. Then you will be able to correctly identify the source of the request.
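A minimal web.xml sketch of that filter, assuming Apache on 127.0.0.1 is the only proxy in front of Tomcat (the internalProxies value here is an assumption; the header name defaults to x-forwarded-for, which mod_proxy_http sets automatically):
<!-- Rewrites the request's remote address from the X-Forwarded-For header
     added by the trusted proxy at 127.0.0.1 -->
<filter>
    <filter-name>RemoteIpFilter</filter-name>
    <filter-class>org.apache.catalina.filters.RemoteIpFilter</filter-class>
    <init-param>
        <param-name>internalProxies</param-name>
        <param-value>127\.0\.0\.1</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>RemoteIpFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
With the original client address restored, getRemoteAddr() (and any IP-based rules you apply in the service) can tell a request proxied from the local front-end app apart from a direct call by a remote client.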

Is it possible to run Apache and IIS on the same machine with one IP address (and different ports)?

The "main" one should be IIS. Is there an option to address the Apache without typing in the port-number
The reason for this is: I cannot get Django to work on IIS
Any ideas will be appreciated
You could set up Apache on a different port, then use redirects or proxying on IIS to get people to the Apache port without them having to type it.
The only way to avoid typing in the port number is to set up a proxy, which could be either one of the two webservers. That way, the proxy makes the connection on the alternate port and the client doesn't have to know where it is.
I don't know about IIS, but on Apache, you would have to load mod_proxy (and I think, mod_proxy_http) and then do something like this:
ProxyRequests Off
<Proxy *>
    Order deny,allow
    Allow from all
</Proxy>
ProxyPass /foo http://foo.example.com/bar
ProxyPassReverse /foo http://foo.example.com/bar
Also check the docs for mod_proxy online.
You might also want to look at lightweight webservers such as lighttpd, if you're going to have two running. It's a common setup to have a light webserver taking specific tasks away from the main one. (Apache for dynamic and lighttpd for static content is one typical example).
There are also other possibilities, ranging from getting fancier, such as:
Have a third webserver doing only the proxying and the other two on alternate ports
Have them running on the same port but two IPs and hide that fact via your network setup
to attacking the root cause by either
finding someone who knows how to get Django running on IIS
moving from IIS to another webserver
Of course, I have no clue what might be appropriate for your situation.
If this is a matter of running Django on a server that already needs IIS, you can run Django on IIS directly, thanks to efforts like Django-IIS and PyISAPIe. I think it would be preferable NOT to run a second web server when all it's going to be doing is proxying requests out to a third server, the Django code.