Application-Controlled Session Stickiness: enable stickiness on all cookies? - amazon-web-services

I have my production servers running behind a load balancer on AWS (they scale up based on an AMI). Some websites have cookies - for example, a restaurant with multiple locations, and each location is set in a cookie.
I noticed that a cookie wasn't being saved across multiple servers, so I remedied this by going into Load Balancers -> Port Configuration, clicking Enable Application Generated Cookie Stickiness, and inserting the name of the cookie.
As far as I know, this only allows one cookie name, and I have many - Google Analytics, for example. (Perhaps they can be comma-separated; I haven't checked yet.)
My port configuration now looks like this:
80 (HTTP) forwarding to 80 (HTTP)
Stickiness: AppCookieStickinessPolicy, cookieName='MY_COOKIE'
I was wondering if there was any way to allow ANY app generated cookie to be recognized, instead of having to name them individually.
Any input greatly appreciated. Thank you!

I think you're misunderstanding the use and purpose of session stickiness.
If you don't have a shared session store - e.g. Memcached, Redis, or something else that is available to ALL instances in your pool - then you're probably using a session mechanism that involves local storage: saving sessions on the local file system is a common mechanism for PHP, while IIS will usually have a local session store.
If you're using a local session store, then you need to make sure that all subsequent requests come back to the node that has the session stored - because if they don't, whatever information your application has saved in the session is no longer available.
To do this, you have two choices: let the ELB set and manage the session-affinity cookie itself (duration-based stickiness), or have it key off the session cookie your application sets (application-controlled stickiness). Note that in both cases the ELB creates a new cookie named AWSELB whose value lets it map each request back to the instance that originally created the session - but if you tie the policy to your session cookie, the ELB only generates the AWSELB cookie when it sees a new session cookie.
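For what it's worth, the same configuration can be scripted. Here is a minimal boto3 sketch against a Classic ELB (the load balancer and policy names are placeholders); it also shows that the API accepts exactly one CookieName per policy, which answers the comma-separation question - you name the one session cookie that matters, not every cookie your app sets:

# Hedged sketch: application-controlled stickiness on a Classic ELB via boto3.
# "my-elb" and the policy name are placeholders for illustration.
import boto3

elb = boto3.client("elb")

# A stickiness policy is keyed to exactly ONE application cookie; the ELB then
# issues its own AWSELB cookie whose lifetime follows that application cookie.
elb.create_app_cookie_stickiness_policy(
    LoadBalancerName="my-elb",
    PolicyName="my-app-cookie-policy",
    CookieName="MY_COOKIE",
)

# Attach the policy to the port-80 listener from the question.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="my-elb",
    LoadBalancerPort=80,
    PolicyNames=["my-app-cookie-policy"],
)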
It sounds like the application problem could be because you're pulling the location from the session rather than from the cookie, but that's just a guess.

Related

Sandbox Cookies between environments

I have a production environment and a staging environment. I am wondering if I can sandbox cookies between the environments. My setup looks like
Production
domain.com - frontend SPA
api.domain.com - backend Node
Staging
staging.domain.com - frontend SPA
api.staging.domain.com - backend Node
My staging cookies use the domain .staging.domain.com so everything is fine there. But my production cookies use the domain .domain.com so these cookies show up in the staging environment.
I've read one possible solution is to use a separate domain for staging like staging-domain.com but I would like to avoid this if possible. Are there any other solutions or am I missing something about how cookies work?
There are multiple alternatives:
Set your production domains to www.domain.com and api.www.domain.com, and scope your cookie to .www.domain.com.
This way, your production cookie will not be seen in the staging environment (see the sketch after this list).
or
Use .domain.com, but have your backend behave differently depending on which environment it receives the cookie in.
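A minimal sketch of the first alternative - the question's backend is Node, but the Set-Cookie Domain attribute works the same way everywhere, so Flask is used here purely for illustration (cookie name and value are made up):

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login")
def login():
    resp = make_response("ok")
    # Domain=.www.domain.com matches www.domain.com and api.www.domain.com,
    # but NOT staging.domain.com or api.staging.domain.com.
    resp.set_cookie("session_id", "abc123", domain=".www.domain.com",
                    secure=True, httponly=True)
    return resp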
One solution would be to change the passphrase used on the staging environment to encrypt cookies.
Doing so will render cookies coming from production invalid.
The method to do so is web server dependent, for example on Apache HTTP server:
http://httpd.apache.org/docs/current/mod/mod_session_crypto.html
Text from above link:
SessionCryptoPassphrase secret
The session will be encrypted with the given key. Different servers can be configured to share sessions by ensuring the same encryption key is used on each server.
If the encryption key is changed, sessions will be invalidated automatically.
So find out how to change the passphrase on your staging web server, and all cookies coming from production, along with all previously issued staging cookies, will be considered invalid on staging.
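To illustrate why this works, here is a small Python sketch using the cryptography library as a stand-in for mod_session_crypto: a cookie encrypted under the production key simply fails to decrypt under a different staging key.

from cryptography.fernet import Fernet, InvalidToken

production_key = Fernet.generate_key()
staging_key = Fernet.generate_key()  # the "changed passphrase"

cookie = Fernet(production_key).encrypt(b"user=42")

try:
    Fernet(staging_key).decrypt(cookie)
except InvalidToken:
    print("production cookie is rejected on staging")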
An alternative option, if you don't want to use a separate domain or a www subdomain: you can append the staging environment's name to the cookie name.
But personally, I would put an API gateway/proxy in front of the backend and the SPA to keep both services under a single domain (domain.com and domain.com/api).
For staging: staging.domain.com and staging.domain.com/api, or a completely separate domain to avoid exposing the staging address in the SSL certificate.
And I would prevent cookie sharing by omitting the domain attribute when setting the cookie. Probably, I would also set the cookie path to /api.
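A sketch of that last point, again in Flask for illustration only: omitting the domain argument yields a host-only cookie, and path="/api" keeps it off non-API requests.

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/api/login")
def login():
    resp = make_response("ok")
    # No domain= argument: the browser stores a host-only cookie sent back
    # only to the exact host that set it; path="/api" restricts it further.
    resp.set_cookie("session_id", "abc123", path="/api",
                    secure=True, httponly=True)
    return resp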

PGadmin4 on Kubernetes: Session invalidated when using ELB

I have a weird problem with PGAdmin4.
My setup
pgAdmin 4.1 deployed on Kubernetes using the chorss/docker-pgadmin4 image, with one pod only to simplify troubleshooting;
Nginx ingress controller as reverse proxy on the cluster;
Classic ELB in front to load balance incoming traffic on the cluster.
ELB <=> NGINX <=> PGADMIN
From a DNS point of view, the hostname of pgadmin is a CNAME towards the ELB.
The problem
The application is correctly reachable, users can log in, and everything works just fine. The problem is that after roughly 2-3 minutes the session is invalidated and users are asked to log in again. This happens regardless of whether pgadmin is actively being used.
After countless hours of troubleshooting, I found out that the problem happens when the DNS resolution of the ELB's CNAME switches to another IP address.
In fact, I tried:
connecting to the pod directly via the k8s service's node port => session doesn't expire;
connecting to nginx (bypassing the ELB) directly => session doesn't expire;
mapping one of the ELB's IP addresses in my hosts file => session doesn't expire.
Given the above tests, I'd conclude that the Flask app (PGAdmin4 is apparently a Python Flask application) considers my cookie invalid after the remote address for my hostname changes.
Any Flask developer that can help me fix this problem? Any other idea about something I might be missing?
PGadmin 4 seems to use Flask-Security for authentication:
pgAdmin utilised the Flask-Security module to manage application security and users, and provides options for self-service password reset and password changes etc.
https://www.pgadmin.org/docs/pgadmin4/dev/code_overview.html
Flask-Security seems to use Flask-Login:
Many of these features are made possible by integrating various Flask extensions and libraries. They include:
Flask-Login
...
https://pythonhosted.org/Flask-Security/
Flask-Login seems to have a feature called "session protection":
When session protection is active, each request, it generates an identifier for the user’s computer (basically, a secure hash of the IP address and user agent). If the session does not have an associated identifier, the one generated will be stored. If it has an identifier, and it matches the one generated, then the request is OK.
https://flask-login.readthedocs.io/en/latest/#session-protection
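Based on that description, the identifier is roughly a hash of the client IP and user agent, so something like this sketch (the real implementation lives in flask_login's utilities; this is only an approximation):

from hashlib import sha512

def session_identifier(remote_addr: str, user_agent: str) -> str:
    # A change in the perceived client IP (e.g. a different ELB node proxying
    # the request) produces a new hash, which makes the session look stale.
    return sha512(f"{remote_addr}|{user_agent}".encode("utf-8")).hexdigest()

print(session_identifier("10.0.0.1", "Mozilla/5.0"))
print(session_identifier("10.0.0.2", "Mozilla/5.0"))  # new IP => forced re-login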
I would assume setting login_manager.session_protection = None would solve the issue, but unfortunately I don't know how to set it in PGadmin. Hope it might help you somehow.
For those looking for a solution: you need to add the line below to config.py, config_distro.py, or config_local.py.
config_local.py
SESSION_PROTECTION = None
I faced a similar issue with a GKE load balancer. A cleaner solution that worked for me is disabling cookie protection based on IP address. Add the flag below to config_local.py:
# Disable cookie generation based on IP address
ENHANCED_COOKIE_PROTECTION = False

Coherence - Cookie Session Sharing between Applications Hosted on Different Servers

I have some web applications on different servers, and I need them to share a cookie
session in the browser.
I want to assign the same domain to all of them, with different URLs.
How can I implement this?
Is it actually going to work?
I want to do it with virtual hosts on a proxy server.
The first way that comes to mind is to create a symbolic link in your DocumentRoot to a mounted directory that exists on another server. If you do this cross-server and for each application, then no matter which server people arrive at (due to load balancing, etc.), each server has a 'complete' set as far as Apache is concerned, while each application's data actually still lives in its respective place.
In your /html/ directory (example DocumentRoot) you would have:
application1/
application2 -> /mnt/application2/
application3 -> /mnt/application3/
Then you'd set up the mount - for example - so a df would have:
192.168.1.2:/var/www/html/application2 ... /mnt/application2
192.168.1.3:/var/www/html/application3 ... /mnt/application3
Doing it this way keeps the user on the same site as far as Apache and the browser are concerned, and you are definitely using the same domain; essentially you are just splitting the file system between servers based on URL.

Website Forms (POST) On Multiple Instances (Servers) Website (Python Django / PHP)

Suppose I have a PHP / Python (Django) website.
The website is running on multiple instances servers.
Meaning the URL for the website is www.test.com, and from a load balancer, it can get the client to www.server1.com or www.server2.com and so on.
When there is a form on the website, and the processing of this form is located on the same page:
Can the following situation exist?
- The user goes to www.test.com; behind the scenes, through the load balancer, he gets to www.server1.com. He fills in a form.
- The form action (URL) points to www.test.com, so behind the scenes, through the load balancer, he gets to www.server2.com.
So here, will the needed form data - and, more important for my question, the 'request' data (like request.SOMETHING in Python Django) - be missing? Because maybe it was saved earlier in the session on www.server1.com, and now it is missing on www.server2.com?
The request will always have all of its data, as the request is forwarded whole to the edge server: request.POST and request.GET will contain everything from the request. The problem, however, is that the session data might not be available on that edge server. For example, if you started your session on server1 and then request another page from server2, server2 might assign a new session and deny you access to certain content.
To overcome this session problem, you can do one of two things:
Share sessions between servers (central session storage)
Always forward the user to the same edge server. Some load balancers store the forwarded-to edge server in a cookie; on subsequent requests, the user gets forwarded to the same edge node every time. That same edge node keeps the user's session, so there are no problems.
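For the first option, a hedged Django sketch of a central session store (the Redis location is a placeholder; the RedisCache backend used here ships with Django 4+):

# settings.py
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://redis.internal:6379/0",  # placeholder host
    }
}
# All servers read and write sessions in the shared cache, so it no longer
# matters which edge node serves a given request.
SESSION_ENGINE = "django.contrib.sessions.backends.cache"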
Yes, this is a valid concern. Due to the nature of the Web (HTTP), the other request might end up on the other server. This issue is called persistence or stickiness.
The solution here would be to save all this information on the client side (using cookies) and not rely on server-side sessions. So it would be up to you to implement it like this using Python/Django. Using the client-side approach gives the best performance, and should be the easiest to implement.
Keep in mind that this solution bears quite a significant risk of man-in-the-middle attacks unless you encrypt the connection with SSL/TLS (using HTTPS), as all of the client data is stored in cookies that could be intercepted.
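In Django, the client-side approach is a one-line settings change. Note that the built-in backend signs the session payload but does not encrypt it, so the Secure flag below matters for the interception risk just mentioned:

# settings.py
SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"
SESSION_COOKIE_SECURE = True    # only send the session cookie over HTTPS
SESSION_COOKIE_HTTPONLY = True  # keep it out of reach of JavaScript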

Django+apache: HTTPS only for login page

I'm trying to accomplish the following behaviour:
When the user access to the site by means of:
http://example.com/
I want him to be redirected to:
https://example.com/
Via middleware, if the user is not logged in, the login template is rendered when accessing /. If the user is logged in, / is the main view. Once the user logs in, I want the site to work over HTTP.
To do so, I am running the same server on ports 80 and 443 (is this really necessary? I have the impression that I'm running two separate servers with the same application, while I want one server listening on two ports).
When the user navigates away from login, due to the redirection to the HTTP server, the data in request.session is not present (although it is present on HTTPS), thus showing that there is no user logged in. So, assuming the Apache setup is correct (running the same server on two different ports), I guess I have to pass the cookie from the server running on HTTPS over to HTTP.
Can anybody shed some light on this? Thank you
First off, make sure that the setting SESSION_COOKIE_SECURE is set to False. As long as the domains are the same, the cookies should be present in the browser, and so the session information should still be there.
Take a look at your cookies using a plugin. Search for the session cookie you have set. By default these cookies are named "sessionid" by Django. Make sure the domains and paths are in fact correct for both the secure session and regular session.
I want to warn against this, however. Recently, tools like Firesheep have exploited an issue that people have known about but ignored for a long time: these cookies are not secured in any way. It would be easy for someone to 'sniff' the cookie over the HTTP connection and gain access to the site as your logged-in user. This essentially eliminates the entire reason you set up a secure connection to log in in the first place.
Is there a reason you don't have a secure connection across the entire site? Traditional arguments about it being more intensive on the server really don't apply with modern CPUs any longer and the exploits that I refer to above are becoming so prevalent that the marginal (really marginal) cost of encrypting all of your traffic is well worth it.
Apache needs to run what are essentially two different servers because a) it is listening on two different ports and b) one of them adds the encryption logic. That said, this is normal for Apache; I run servers with dozens of "servers" listening on different ports and doing different things. In the grand scheme of things, this shouldn't really weigh your server down.
That said, once you pass the request on to *WSGI or mod_python, you will have to add logic to make sure that no one tries to log in over your non-encrypted connection, because the only difference Django sees is the return value of request.is_secure(). All the URLs and views in your urlconf will be accessible over both.
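As a sketch of that last point, here is a hypothetical middleware that bounces any non-HTTPS request for the login URL (the /login/ path is an assumption; adapt it to your urlconf):

# Hypothetical Django middleware: force the login URL onto HTTPS.
from django.http import HttpResponsePermanentRedirect

class LoginRequiresHTTPSMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # request.is_secure() is the only hint Django gets about the scheme.
        if request.path == "/login/" and not request.is_secure():
            secure_url = "https://" + request.get_host() + request.get_full_path()
            return HttpResponsePermanentRedirect(secure_url)
        return self.get_response(request)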
Whew that is a lot. I hope that helps.