Authentication application layer to reach backend web services

I have two web applications, one running on port 8001 and the other on port 8002, plus a stand-alone auth application running on port 8090.
I want every request to first pass through auth-application:8090, which then decides whether the request should be processed by web-application:8001 or by web-application:8002.
There could be multiple auth-application instances, fronted by a load balancer, and several web-application clusters, with the auth application deciding which cluster to forward each request to.
By several web-application clusters I mean that one cluster is built on a Java application and another is composed of Django web applications. I want to choose the cluster based on a request header or request parameters.
What is the best way to achieve this?
I could think of calling a script from an nginx proxy_pass block, but I am not sure how this could work, or even whether it would work. There might be an existing implementation for this problem; might Google / Amazon use this kind of architecture?
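For what it's worth, this is the rough kind of nginx setup I was imagining (just a sketch; it assumes nginx is built with the auth_request module and that the auth application exposes a verify endpoint that answers 200/401 and sets an X-Upstream header naming the target cluster):

upstream java_cluster   { server 127.0.0.1:8001; }
upstream django_cluster { server 127.0.0.1:8002; }

server {
    listen 80;

    location / {
        auth_request /_auth;                                  # subrequest to the auth application
        auth_request_set $target $upstream_http_x_upstream;   # cluster name chosen by the auth application
        proxy_pass http://$target;                            # java_cluster or django_cluster
    }

    location = /_auth {
        internal;
        proxy_pass http://127.0.0.1:8090/verify;              # assumed verify endpoint on the auth application
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}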

Usually the authentication flow is initiated from the application (how else would the auth server know where you want to go after a successful auth?), so the flow should be:
1. the user reaches the app
2. the app checks whether the user is authenticated
3. if not, it redirects to the auth service
4. the auth service lets the user in (based on the success of the auth)
So the users should know first of all which app they want (8001 or 8002). If the two apps are the same, then what you need is a load balancer, but the auth flow still has to be initiated from the app.
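To make step 3 concrete, in a Django app it could look roughly like this (a sketch only; the decorator, session key, and auth endpoint are made-up names):

# Steps 2-4 from the app's side: the app, not the auth server, decides where
# the user lands afterwards, by passing a return URL along with the redirect.
from urllib.parse import urlencode
from django.shortcuts import redirect

AUTH_SERVICE = "http://auth.example.com:8090/login"  # assumed auth-application endpoint

def require_auth(view):
    def wrapper(request, *args, **kwargs):
        if not request.session.get("user"):                        # step 2: not authenticated
            query = urlencode({"next": request.build_absolute_uri()})
            return redirect(f"{AUTH_SERVICE}?{query}")             # step 3: redirect to the auth service
        return view(request, *args, **kwargs)                      # step 4: let the user in
    return wrapper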

Is it possible to dynamically change the URI that an asp.net core process is listening on?

I'd like to have the following architecture:
A factory web service that is running all the time under a general URI, say /service.
A set of specific self-hosted application services, each running under a specific URI, say /service/appA/db1, /service/appA/db2, /service/appB/. The application services are facades on top of legacy applications, and each application-database pair must be in a separate process; I cannot merge them. There may be thousands of combinations of these separate services, so I don't want to kick them all off up front. I'd like to kick them off on demand.
Now I could have the client ask the factory to kick off the needed service, and then connect to it, but this doesn't work cleanly in a load-balanced environment.
What I would really like is for the factory service to listen for requests, automatically kick off the application services as needed, and then stop listening on the portion of the URI that the newly kicked-off service is listening on.
For example, say that client wants to call "http://hostname/service/productapp/myproductdb/products/get/byname"
The first call would go to the factory service. It would kick off the application service "productapp-myproductdb", redirect the initial call to the same URL, and stop listening on the URI "/service/productapp/myproductdb".
Any subsequent calls would go directly to the "productapp-myproductdb" service, which would be listening on the "/service/productapp/myproductdb" URI.
I don't want to redirect calls from the factory service to the application service. I want the client to be able to connect to the application service directly after the initial request.
Is this something that is possible to do?
If it is possible, how can I do it in ASP.NET Core?
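To make the flow concrete, here is a rough sketch of the factory step as I picture it (minimal-API style, all names and ports made up); the part I can't figure out is how the factory would then stop listening on /service/productapp/myproductdb:

// Sketch only: assumes a .NET 6+ web project with implicit usings.
using System.Diagnostics;

var app = WebApplication.CreateBuilder(args).Build();
var running = new Dictionary<string, int>();   // "app/db" -> port of started service (not thread-safe, illustration only)

app.Map("/service/{appName}/{dbName}/{**rest}", (string appName, string dbName, string? rest) =>
{
    var key = $"{appName}/{dbName}";
    if (!running.TryGetValue(key, out var port))
    {
        port = 6000 + running.Count;                          // naive port assignment
        Process.Start("AppServiceHost.exe",                   // hypothetical self-hosted application service
            $"--app {appName} --db {dbName} --port {port}");
        running[key] = port;
    }
    // 307 so the client retries the same call against the dedicated service;
    // ideally the factory would now release this URI prefix entirely.
    return Results.Redirect($"http://localhost:{port}/service/{key}/{rest}",
        permanent: false, preserveMethod: true);
});

app.Run("http://localhost:8080");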
thanks,
Alex

Authenticate a Play 1.2.x application running on a separate server from another Play 1.2.x application implemented with the Secure module

I have developed a Play 1.2.5 application and implemented the Secure module for authentication. It's working fine. Now I have developed another Play 1.2.5 application which runs on a separate server. My first Play application contains an href link to the second application. On logging in through my first application, I want the username to be passed to the second application, because I use the logged-in username there. As soon as I log out of the first application, the session (username) should be removed from the second application too.
How can I achieve this? Please help!
If you run both servers on one domain (such as www.example.com), with a load balancer (like nginx) distributing requests to the two servers, you just need to make sure the config application.secret is the same for both.
If you run them on different sub-domains (recommended), you MUST do the following:
The servers should use sub-domains, for example the login server is login.example.com and the application server is app.example.com
Set application.defaultCookieDomain=.example.com for both servers, so they can share each other's cookies
Make sure both servers have the same application.secret (see the conf sketch below)
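In conf/application.conf that boils down to the same two lines on both Play applications (example domain; use your own long random secret):

# conf/application.conf on BOTH servers (example values)
application.defaultCookieDomain=.example.com
application.secret=use-the-exact-same-long-random-secret-on-both-servers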
If you really want to use two different domains, like example.com and example.net, you should implement OAuth on the login server and provide an API for the application server to call.

Webservice Endpoint - can someone externally scan all services available on a host?

Say we have hosted a few web services under https://mycompany.com/Service
e.g.
https://mycompany.com/Service/Service1
https://mycompany.com/Service/Service2
https://mycompany.com/Service/Service3
As you can see, on mycompany.com we have hosted three web services, each with its own distinct URL.
What we have is a JBoss instance with three different web WARs deployed in it. When someone hits a service, the request gets past our firewall, and then the load balancer forwards it to JBoss on port 8080 on the required path, where it gets serviced.
The three services are consumed by three different clients. My question: if, say, Client1 using Service1 is only given the URL corresponding to it, can they use some kind of scanner that would also tell them that Service2 and Service3 are available on mycompany.com/Service?
Irrespective of the clients: can anyone simply use some scanner tool to identify which service endpoints are exposed on the host?
Kindly note they are a mix of SOAP (WSDL) and REST-based services deployed on the same JBoss instance.
Yes, someone can scan for those endpoints. Their scanner would generate a bunch of 404s in your logs, because it would have to guess the other URLs. If you have some kind of rate limiting firewall, it might take them quite a long time. You should be checking the logs regularly anyway.
If you expose your URL to the public internet, relying on people not finding it is just security via obscurity. You should secure each URL using application-level security, and assume that the bad guys already have the URL.
You may want to consider adding subdomains for the separate applications (e.g. service1.mycompany.com, service2.mycompany.com) - this will make firewalling easier.

Nginx reverse proxy allow traffic based on cookie presence

I have a small Linux server acting as a reverse proxy running Nginx. The main server behind Nginx is running an ASP.NET website with a forms authentication login and an instance of ArcServer, which runs some REST services on port 6080.
Is it possible to only allow traffic to port 6080 on Nginx for people who have a session cookie from the ASP.NET login? Basically I only want logged-in users to be able to access those REST services and not the whole wide web.
If someone could point me in the right direction, I am running short on ideas.
Thanks.
The following works quite well, but naturally it is only a bit of obfuscation and doesn't replace proper security checks deeper in the app:
location /url/to/secret/ {
    # Only proxy the request upstream if the client sent the cookie at all.
    if ($cookie_secretCookieName) {
        proxy_pass http://serverhere;
    }
    # Requests without the cookie fall through to the local root (typically a 404).
}
This wouldn't prevent anyone who knows the cookie name from getting access (e.g. someone who used to be a user and isn't anymore), but it can be a nice extra step to reduce a bit of load on your servers.
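If you prefer to fail closed, a slightly stricter sketch (same placeholder upstream and cookie names) rejects cookie-less requests explicitly instead of letting them fall through:

location /url/to/secret/ {
    # Reject outright when the cookie is absent.
    if ($cookie_secretCookieName = "") {
        return 403;
    }
    proxy_pass http://serverhere;
}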

How do I set up an asmx web service in Azure that accepts a client certificate?

I apologize in advance if the question is ridiculous.
I have an asmx service running in Azure (HTTP - no SSL).
I have a WPF app that loads an X509Certificate2 and adds it to the request by doing the following:
X509Certificate2 cert = new X509Certificate2("...");
webRequest.ClientCertificates.Add(cert);
In the web service I get the certificate by
new X509Certificate2(this.Context.Request.ClientCertificate.Certificate)
And then I load a cert (that I have both uploaded to the Azure control panel and added to my service definition file) by using the following sample:
var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly | OpenFlags.OpenExistingOnly);
X509Certificate2Collection certs = store.Certificates.Find(X509FindType.FindBySubjectName, certName, true);
And then I validate by doing the following:
clientCert.Thumbprint == certs[0].Thumbprint
Now unfortunately I get an exception (System.Security.Cryptography.CryptographicException: m_safeCertContext is an invalid handle) as soon as I do
Request.ClientCertificate.Certificate
So I have a few questions. How do I avoid the exception? This answer states I need to modify an IIS setting, but how can I do that in Azure?
In any case is this even the proper way to do certificate authentication?
Thanks!
You can use command scripts to modify IIS, in combination with appcmd.exe.
For a quick example (disabling timeout in an application pool), take a look at this sample by Steve Marx.
In this example, you'd call DisableTimeout.cmd as a startup task. For more info on creating startup tasks, you can watch this episode of Cloud Cover Show. There should be a lab on startup tasks in the Platform Training Kit as well.
Just remember that any type of IIS configuration change should be made via an automated task at startup. If you manually change IIS via RDP, those changes won't propagate to all of your instances, and won't remain persistent in the event of hardware failure or OS update.
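As a rough illustration, such a startup script might look like the following (hypothetical file name; it assumes the IIS setting in question is client-certificate negotiation, which IIS only performs over SSL, so the role would also need an HTTPS endpoint):

REM EnableClientCerts.cmd - hypothetical startup task, registered as an elevated <Task> in ServiceDefinition.csdef.
REM Applies the setting at the server level so it covers the role's site.
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/security/access /sslFlags:"Ssl, SslNegotiateCert" /commit:apphost
exit /b 0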
You can remote into your Azure instances to manage IIS. As for a way to do it globally for all instances at once, I'm not sure. That would be an interesting side project, though.
http://learn.iis.net/page.aspx/979/managing-iis-on-windows-azure-via-remote-desktop/