Coherence - Cookie Session Sharing between Applications Hosted on Different Servers
I have some web applications on different servers and I need them to share a cookie
session in the browser.
I want to assign the same domain to all of them, with different URLs.
How can I implement this?
Is it actually going to work?
I want to do it with a virtual host on a proxy server.
The first way that comes to mind is to create a symbolic link in your DocumentRoot to a mounted directory that exists on another server. If you do this cross-server and for each application, then no matter which server people arrive at (due to load balancing, etc.), each server has a 'complete' set as far as Apache is concerned, while the actual data still lives in its respective place.
In your /html/ directory (example DocumentRoot) you would have:
application1/
application2 -> /mnt/application2/
application3 -> /mnt/application3/
Then you'd set up the mounts so that, for example, df would show:
192.168.1.2:/var/www/html/application2 ... /mnt/application2
192.168.1.3:/var/www/html/application3 ... /mnt/application3
Doing it this way keeps the visitor on the same site as far as Apache and the browser are concerned, and you are definitely using the same domain; you are essentially just splitting the file system between servers based on the URL.
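For example, the setup described above might look like this on the front-end server (IPs and paths taken from the example; working NFS exports on the other two servers are an assumption):
# Mount the remote application directories (assumes NFS exports already exist)
mount -t nfs 192.168.1.2:/var/www/html/application2 /mnt/application2
mount -t nfs 192.168.1.3:/var/www/html/application3 /mnt/application3
# Link them into the DocumentRoot so Apache sees a 'complete' set
ln -s /mnt/application2 /var/www/html/application2
ln -s /mnt/application3 /var/www/html/application3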
I have two spring boot applications.
module1 running on port 8080
module2 running on port 9090
I have set the ports using this property in the application.properties file:
server.port=${port:9090}
Both modules have /login and /signup, which are accessible without authentication, accomplished via the code below.
http.authorizeRequests()
.antMatchers("/signup", "/login").permitAll()
Any other request requires that the user be authenticated.
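For completeness, the whole security configuration looks roughly like this (only the two antMatchers lines above are my actual code; the rest is standard pre-5.x Spring Security boilerplate with WebSecurityConfigurerAdapter):
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                .antMatchers("/signup", "/login").permitAll() // public pages
                .anyRequest().authenticated()                 // everything else requires login
            .and()
            .formLogin()
                .loginPage("/login");
    }
}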
If I use one module at a time, there is no problem.
But if I try to use them back and forth at the same time, I have to log in again to the previous app every time I use the other one.
E.g.:
Go to the login page of module1 - (response header sets jsessionid=XX) ok
Log in to module1 - ok
Browse secured content on module1 - ok
Go to the sign-up page on module2 - (response header sets jsessionid=YY) ok
Try to browse other secured content on module1 - I have to log in again
I'm quite sure it's due to the jsessionid being reset by module2.
Are HTTP cookies port specific?
I have read this article which states that cookies are not port specific.
But there must be a solution so that I don't have to log in every time I switch apps.
You need to use different cookie names for the two applications.
There are different ways to do this; the simplest one, for a Spring Boot application with version >= 1.3, is just setting a property:
server.session.cookie.name = MYSESSIONID
Other ways are described in this post.
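For example (a sketch assuming Spring Boot 1.3+; the cookie names are arbitrary and the ports are the ones from the question):
# module1 - application.properties
server.port=8080
server.session.cookie.name=MODULE1SESSION
# module2 - application.properties
server.port=9090
server.session.cookie.name=MODULE2SESSION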
Map your applications to different context paths, so the JSESSIONID cookies will be independent; otherwise, the cookie is for the same context, so there is effectively one cookie for both applications. Another solution would be to use different hosts.
Please note that you don't change the cookie's context path directly here: if you change the context path of your application, the Servlet API implementation will adjust the cookie's path for you.
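In Spring Boot 1.x, for instance, that can be as simple as (the /app1 and /app2 paths here are just examples, matching the experiment below):
# module1 - application.properties
server.context-path=/app1
# module2 - application.properties
server.context-path=/app2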
I've experimented a bit with the same host and different context paths.
I'm currently having two applications launched, both have Servlet API as their base (and JSESSIONID cookie is defined by Servlet API's session mechanics). The applications both run on localhost, on different ports, and with different contexts (/app1 and /app2). I've logged into both applications, and in Chrome's dialog which lists cookies I can see two JSESSIONID cookies: one for localhost and /app1 and another for localhost and /app2.
Then I log out in /app2. Its JSESSIONID is destroyed and recreated with different content (because I've been redirected to the login page again). (Please note that to see this change I had to close the Cookies dialog and reopen it, as Chrome did not update it on the fly.) At the same time, the JSESSIONID cookie which belongs to /app1 is intact, and I can keep working in /app1 (so I was not logged out of it).
UPDATE
One more experiment. I've mapped both applications to the same context (/app1). They run on localhost:8084 and localhost:8085. I do the following:
I log into the first application (port 8084)
I log into the second application (port 8085)
I switch back to the first application tab, click any link and see that the session is destroyed (as I am being redirected to login page).
So even if applications run on different ports of the same host with the same context path, they use the same cookie. Basically, this is what was said in Are HTTP cookies port specific?: cookies do not provide isolation by port.
A little summary:
Different hosts: no problem, cookies are different
Different application contexts: no problem, cookies are different
Same host, same application context, different ports: there is only one cookie, and this causes a conflict.
So the recipe is the same as before:
Either use different hosts (that is, host names; the port is not part of the host)
Or use different context paths
Or (as another answer suggests) change cookie name to avoid the conflict
I have my production servers running behind a load balancer on AWS (they scale up based on an AMI). Some websites have cookies - for example, a restaurant with multiple locations, and each location is set in a cookie.
I noticed that a cookie wasn't being saved across multiple servers, so I remedied this by going into Load Balancers -> Port Configuration, clicking Enable Application Generated Cookie Stickiness, and inserting the name of the cookie.
As far as I know, this only allows one cookie name, and I have many - Google Analytics, for example. (Perhaps they can be comma-separated; I haven't checked yet.)
My port configuration now looks like this:
80 (HTTP) forwarding to 80 (HTTP)
Stickiness: AppCookieStickinessPolicy, cookieName='MY_COOKIE'
I was wondering if there was any way to allow ANY app generated cookie to be recognized, instead of having to name them individually.
Any input greatly appreciated. Thank you!
I think you're misunderstanding the use and purpose of session stickiness.
If you don't have a shared session store - e.g. memcached, Redis, or something that is available to ALL instances in your pool - then you're probably using a session mechanism that involves local storage. Saving sessions on the local file system is a common mechanism for PHP, while IIS will usually have a local session store.
If you're using a local session store, then you need to make sure that all subsequent requests come back to the node that has the session stored - because if they don't, whatever information your application has saved in the session is no longer available.
To do this, you have two choices: allow the ELB to set and manage the session affinity cookie itself, or have it key off the session cookie you set. In both cases the ELB creates a new cookie named AWSELB whose value lets it map the request back to the instance that originally created the session; the difference is that with application-generated stickiness, the ELB only issues a new AWSELB cookie when it sees a new value of the session cookie you named.
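With the classic ELB command line, the two choices look roughly like this (load balancer, policy, and cookie names are placeholders):
# Duration-based stickiness: the ELB sets and manages its own cookie
aws elb create-lb-cookie-stickiness-policy \
    --load-balancer-name my-load-balancer \
    --policy-name my-duration-policy \
    --cookie-expiration-period 300
# Application-generated stickiness: the ELB follows your session cookie
aws elb create-app-cookie-stickiness-policy \
    --load-balancer-name my-load-balancer \
    --policy-name my-app-policy \
    --cookie-name MY_COOKIE
# Either policy then has to be attached to the listener
aws elb set-load-balancer-policies-of-listener \
    --load-balancer-name my-load-balancer \
    --load-balancer-port 80 \
    --policy-names my-app-policy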
It sounds like the application problem could be because you're pulling the location from session, not from the cookie, but that's just a guess.
I have two instances of Odoo on a server in the cloud. If I perform the following steps, I get "Internal Server Error":
I log in to the first instance (http://111.222.33.44:3333)
I close the session
I load the address of the second instance in the same browser (http://111.222.33.44:4444)
If I want to work in the second instance (on another port), I need to remove the browser cookies first to access the other Odoo instance. If I do this, everything works fine.
If I load them in different browsers (Firefox and Chromium) at the same time, they also work fine.
It's not an Nginx issue, because I tried with and without it.
Is there a way to solve this permanently? Is this the expected behaviour?
If you have access to the source code, you can change the file below as shown and check whether the issue is solved.
In addons/web/controllers/main.py you will find this if statement:
if db != request.session.db:
    request.session.logout()
    request.session.db = db
    abort_and_redirect(request.httprequest.url)
Delete the line request.session.db = db inside it, so it becomes:
if db != request.session.db:
    request.session.logout()
    abort_and_redirect(request.httprequest.url)
Try the following changes in:
openerp/addons/base/ir/ir_http.py
In the method _handle_exception, somewhere around line 140, you will find this piece of code:
attach = self._serve_attachment()
if attach:
    return attach
Replace it with:
# Only fall back to serving an attachment for 404 errors
if isinstance(exception, werkzeug.exceptions.HTTPException) and exception.code == 404:
    attach = self._serve_attachment()
    if attach:
        return attach
You can perfectly well serve all the databases with a single OpenERP server on your machine. Unfortunately you did not mention what error you were seeing and what you expected as a result, which makes it a bit harder to help you ;-)
Anyway, here are some random ideas based on the information you provided:
If you have a problem with OpenERP not listening on all interfaces, try specifying 0.0.0.0 as the xmlrpc_interface in the configuration file; this should make OpenERP listen on port 8069 on all IPs.
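For example, in the server configuration file (option names as used by OpenERP 6.1/7.0; the port shown is the default):
[options]
xmlrpc_interface = 0.0.0.0
xmlrpc_port = 8069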
Note that Apache is not relevant if you're connecting to e.g. http://www.sample.com:8069/?db=openerp because you're directly connecting to OpenERP. If you want to go through Apache, you need to setup ReverseProxy rules in your vhost configs, and OpenERP does not need to listen to all public IPs then.
OpenERP 6.1 and later can autodetect the database name based on the virtual host name and filter the list of available databases: you need to start it with the --db-filter parameter, which is a pattern used to filter the list of available databases. %h represents the domain name and %d is the first component of that domain. So, for example, with --db-filter=^%d$ I will only see the test database if I reach the server via http://test.example.com:8069. If only one database matches, the list is not displayed and the user ends up directly on the right database. This works even behind Apache reverse proxies if you make sure that OpenERP sees the external hostname, e.g. by setting an X-Forwarded-Host header in your Apache proxy config and enabling the --proxy mode of OpenERP.
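A sketch of the Apache side (this assumes mod_proxy and mod_headers are enabled; the hostname and backend address are placeholders):
<VirtualHost *:80>
    ServerName test.example.com

    # Let OpenERP see the external hostname so --db-filter=^%d$ can match
    RequestHeader set X-Forwarded-Host "test.example.com"

    ProxyPass        / http://127.0.0.1:8069/
    ProxyPassReverse / http://127.0.0.1:8069/
</VirtualHost>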
The port reuse problem comes up because you are trying to start multiple OpenERP servers on the same interface/port combination. This is simply not possible unless you are careful to start just one server per IP, with the IP set in the xmlrpc_interface parameter, and I don't think you need that. The name-based virtual hosts that Apache supports are all handled by a single master process that listens on port 80 on all interfaces. If you want to do the same with OpenERP, you only need to start one OpenERP server for all your domains and make it listen on 0.0.0.0, port 8069, as explained above.
On top of that it's not clear what you would have set differently in the various config files. Running 40 different OpenERP servers on the same machine with identical code sounds like a lot of overkill. OpenERP is designed to be multi-tenant so that many (read: hundreds) of databases can be served from the same server.
Finally, I think this is the expected behaviour. The cookies of all websites are stored per website (per domain) in the web browser. So if I only change the port, the cookies of the first instance conflict with the cookies of the other instance, because they have the same domain (111.222.33.44 in my example).
So there are some workarounds:
Change Domain Locally
Create a couple of domain names on my laptop in /etc/hosts:
111.222.33.44 cloud01
111.222.33.44 cloud02
Then the cookies don't interfere with each other anymore. To access each instance:
http://cloud01:3333
http://cloud02:4444
Browser Extension: Multi-login or Multi-account
There is another workaround. If I use this Chromium extension, the problem disappears because the sessions are treated separately:
SessionBox
I understand that there are a number of ways/hacks to implement cross-domain cookies, such as iframes, redirects, etc. I believe those methods are necessary when different app servers are serving each domain.
Now, if both domains are served by the same app server, is there an efficient, best-practice method for handling these cookies? Could the app server in this case just keep track of the origin and determine which user each request is associated with, regardless of which target domain is being requested?
Any input would be greatly appreciated.
Bob
Cookies are how a server knows who's talking to it, so having both domains on the same server doesn't really help. When the request comes in, you have the source IP:port, user agent, cookies, and that's about it. IP isn't useful because of NAT (multiple users, one IP) and mobile (one user, multiple IPs--moving from cellular to wifi or vice versa). User agent has similar problems. The answers discussed in Cross-Domain Cookies are still the best options available.
Unfortunately, there's still no super-direct way to share user data across domains. I found that the iframe implementation was the most reusable.
To this end, I created an NPM module to simplify cross-domain sharing. It gives you a function to produce an iframe with a whitelist of your domains, and get/set functions that let you access that iframe from any whitelisted domain.
https://www.npmjs.com/package/cookie-toss
Hope this helps!
I'm trying to accomplish the following behaviour:
When the user accesses the site via:
http://example.com/
I want him to be redirected to:
https://example.com/
Via middleware, if the user is not logged in, the login template is rendered when accessing /. If the user is logged in, / is the main view. Once the user logs in, I want the site to work over plain HTTP.
To do so, I am running the same server on ports 80 and 443 (is this really necessary? I have the impression that I'm running two separate servers with the same application, while I want one server listening on two ports).
When the user navigates away from login, due to the redirection to the HTTP server, the data in request.session is not present (although it is present on HTTPS), so it looks as if no user is logged in. So, assuming the Apache setup is correct (running the same server on two different ports), I guess I have to pass the cookie from the server running on HTTPS over to HTTP.
Can anybody shed some light on this? Thank you
First off, make sure that the setting SESSION_COOKIE_SECURE is set to False. As long as the domains are the same, the cookies should be present in the browser, and so the session information should still be there.
Take a look at your cookies using a plugin. Search for the session cookie you have set. By default these cookies are named "sessionid" by Django. Make sure the domains and paths are in fact correct for both the secure session and regular session.
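For instance, in settings.py (these are standard Django settings; the values are just for illustration):
# Allow the session cookie to be sent over plain HTTP as well as HTTPS.
SESSION_COOKIE_SECURE = False
# Django's default session cookie name, shown here for reference.
SESSION_COOKIE_NAME = "sessionid"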
I want to warn against this, however. Recently, things like Firesheep have exploited an issue that people have known about but ignored for a long time: these cookies are not secured in any way. It would be easy for someone to sniff the cookie over the HTTP connection and gain access to the site as your logged-in user. This essentially eliminates the entire reason you set up a secure connection to log in in the first place.
Is there a reason you don't have a secure connection across the entire site? Traditional arguments about it being more intensive on the server really don't apply with modern CPUs any longer and the exploits that I refer to above are becoming so prevalent that the marginal (really marginal) cost of encrypting all of your traffic is well worth it.
Apache needs to have essentially two different servers running because a) it is listening on two different ports and b) one is adding some additional encryption logic. That said, this is a normal thing for Apache. I run servers with dozens of "servers" running on different ports and doing different logic. In the grand scheme of things, this shouldn't really weigh your server down.
That said, once you pass the request on to *WSGI or mod_python, you will have to add logic to make sure that no one tries to log in over your non-encrypted connection, because the only difference Django sees is the return value of request.is_secure(). All the URLs and views in your urlconf will be accessible either way.
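A minimal sketch of such logic (require_https is a hypothetical decorator of my own, not a Django API; request.is_secure() and build_absolute_uri() are real Django methods):
from django.http import HttpResponsePermanentRedirect

def require_https(view):
    # Refuse to serve the wrapped view over plain HTTP.
    def wrapped(request, *args, **kwargs):
        if not request.is_secure():
            # Send the browser to the HTTPS version of the same URL.
            secure_url = request.build_absolute_uri().replace("http://", "https://", 1)
            return HttpResponsePermanentRedirect(secure_url)
        return view(request, *args, **kwargs)
    return wrapped
Apply it to the login view (or any view that must not run over HTTP), e.g. login_view = require_https(login_view).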
Whew that is a lot. I hope that helps.