pgAdmin4 on Kubernetes: session invalidated when using ELB - Flask

I have a weird problem with pgAdmin4.
My setup
pgAdmin 4.1 deployed on Kubernetes using the chorss/docker-pgadmin4 image, one pod only to simplify troubleshooting;
Nginx ingress controller as a reverse proxy on the cluster;
a Classic ELB in front to load balance incoming traffic to the cluster.
ELB <=> NGINX <=> PGADMIN
From a DNS point of view, the hostname of pgAdmin is a CNAME pointing to the ELB.
The problem
The application is reachable and users can log in; everything works just fine. The problem is that after a couple of minutes (roughly 2-3) the session is invalidated and users are asked to log in again. This happens regardless of whether pgAdmin is actively being used or not.
After countless hours of troubleshooting, I found out that the problem occurs when the DNS resolution of the ELB's CNAME switches to another IP address.
In fact, I tried:
connecting to the pod directly via the k8s service's node port => session doesn't expire;
connecting to Nginx directly (bypassing the ELB) => session doesn't expire;
pinning one of the ELB's IP addresses in my hosts file => session doesn't expire.
Given the above tests, I'd conclude that the Flask app (pgAdmin4 is apparently a Python Flask application) considers my cookie invalid once the remote address behind my hostname changes.
Any Flask developers who can help me fix this problem? Any other ideas about something I might be missing?

pgAdmin 4 seems to use Flask-Security for authentication:
pgAdmin utilised the Flask-Security module to manage application security and users, and provides options for self-service password reset and password changes etc.
https://www.pgadmin.org/docs/pgadmin4/dev/code_overview.html
Flask-Security seems to use Flask-Login:
Many of these features are made possible by integrating various Flask extensions and libraries. They include:
Flask-Login
...
https://pythonhosted.org/Flask-Security/
Flask-Login seems to have a feature called "session protection":
When session protection is active, each request, it generates an identifier for the user’s computer (basically, a secure hash of the IP address and user agent). If the session does not have an associated identifier, the one generated will be stored. If it has an identifier, and it matches the one generated, then the request is OK.
https://flask-login.readthedocs.io/en/latest/#session-protection
I would assume setting login_manager.session_protection = None would solve the issue, but unfortunately I don't know how to set it in pgAdmin. Hope it might help you somehow.
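For reference, this is what that setting looks like in a plain Flask-Login application - a minimal sketch, not pgAdmin's actual startup code; the app and secret key are placeholders:

from flask import Flask
from flask_login import LoginManager

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder

login_manager = LoginManager()
# None disables session protection entirely; the default is "basic",
# while "strong" deletes the session whenever the IP/user-agent hash changes.
login_manager.session_protection = None
login_manager.init_app(app)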

For those looking for a solution: you need to add the setting below to config.py, config_distro.py, or config_local.py (preferably config_local.py, which is meant for local overrides):
config_local.py
SESSION_PROTECTION = None
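This works because pgAdmin's config.py is plain Python that pulls in the override files at the end, roughly like this (paraphrased, not the exact source):

try:
    from config_distro import *
except ImportError:
    pass

try:
    from config_local import *
except ImportError:
    pass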

I faced a similar issue behind a GKE load balancer. A cleaner solution, which worked for me, is disabling cookie protection based on IP address. Add the flag below to config_local.py:
# Disable cookie generation based on IP address
ENHANCED_COOKIE_PROTECTION = False

Related

FusionPBX: Creating a SIP trunk or gateway

I have created a FusionPBX instance on AWS and am able to make internal calls between the two extensions I created. Now I would like to make an external call to a VoIP server when a particular extension is dialed. To do this, I understand that we need to create a SIP trunk between the two machines, i.e. the FusionPBX server and the VoIP server.
So far I have created a gateway without a username and password and added the external VoIP server's IP address in the CIDR block. But I still can't start the gateway; the page just refreshes. No hostname was given while configuring.
I have referred to many documents available on the internet but couldn't find any proper reference. I'd appreciate it if anyone could help me here.
I finally figured this out, and I am posting it here so that it may help someone facing the same issue.
Changing the gateway's Profile value from external to internal resolved this issue for me.
Key points to note while configuring the gateway: keep Register set to false, and if it is false, don't give any username and password; leave those fields blank. Configure the proxy address as shown in the reference below, and don't forget to add and allow an access control entry (the CIDR is your VoIP server's address) under the advanced settings of FusionPBX.
Here is the screenshot of the gateway configuration for reference.

DNS_PROBE_FINISHED_NXDOMAIN for a single website

I created this question earlier but was told that it is a DNS issue as opposed to an issue with HSTS. Regardless, here is what I need help troubleshooting:
Issue:
A single site (one that I own) is showing "server DNS address could not be found. DNS_PROBE_FINISHED_NXDOMAIN" when I try to connect to it via Chrome, Firefox, or Safari. I can, however, connect to it via Tor Browser, and I can verify that the address resolves correctly using MxToolbox. I am also unable to connect from two other computers and two other phones, or via a different WiFi connection or a personal hotspot on my phone. curl and host on the command line also get no response.
What I've tried:
As I said above, I've tried different internet connections and computers. I've also tried flushing my DNS cache and pointing to another DNS server.
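A quick programmatic way to double-check the resolvers is a sketch like the one below, using the third-party dnspython package; the domain is a placeholder, not my site:

import dns.resolver

domain = "example.com"  # placeholder
for label, server in [("system resolver", None), ("google dns", "8.8.8.8")]:
    resolver = dns.resolver.Resolver()
    if server:
        resolver.nameservers = [server]
    try:
        ips = [rr.to_text() for rr in resolver.resolve(domain, "A")]
        print(label, "->", ips)
    except dns.resolver.NXDOMAIN:
        print(label, "-> NXDOMAIN")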
Having said that, I am not sure how else to troubleshoot this. The only change I made to the web app was to add HSTS headers, hence why I created the earlier post. Please let me know what other information I can provide. Otherwise, here are some details about the site itself:
Other information about my stack:
Django web app
Gunicorn / WSGI server
Hosted on Heroku - Cedar-14 stack
DNS setup with AWS route53
domain name registered through AWS
EDIT:
Possibly related: https://serverfault.com/questions/606880/how-can-i-troubleshoot-a-route-53-hosted-zone
I had a similar issue and was not able to open Facebook; all other sites were working fine. Initially, I thought Facebook had blocked me, as I had never faced this crappy issue before. Later, when I searched Google, I found an article which described the DNS_PROBE_FINISHED_NXDOMAIN issue on Chrome.
I just changed my DNS server addresses to 8.8.8.8 (preferred) and 8.8.4.4 (alternate) and I never faced that issue again.
Reference - https://www.mobipicker.com/dns_probe_finished_nxdomain/
So, from our discussion regarding the NS records: always make sure that the local NS records match the parent NS records.
In your case there were 2 extra NS records associated with your domain, and that was the reason why your domains and subdomains were acting unhealthy. Once you deleted those records, the domains and subdomains were back to normal.
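A minimal sketch (again with the third-party dnspython package) to list the NS records currently served for a zone, so they can be compared by eye against the parent delegation shown in your registrar or Route 53 console; "example.com" is a placeholder:

import dns.resolver

for record in dns.resolver.resolve("example.com", "NS"):
    print(record.target)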
You can also try opening an anonymous (incognito) window, accessing the URL, and using it in anonymous mode; or close it, and it will load OK.

Google: Permission denied to generate login hint for target domain NOT on localhost

I am trying to create a Google sign-in and getting the error:
Permission denied to generate login hint for target domain
Before you mark this as a duplicate: this is not the same as the question asked at "Google sign in website Error : Permission denied to generate login hint for target domain", because in that case the questioner was on localhost, whereas I am getting this error on the server.
Specifically, I have included the url of the server in the Authorized Javascript Origins, as in the following image:
and when I get the error, the request shows that the same url was sent, as in the following image:
Is there something else I should be putting in my Restrictions page? Is there any way to figure out what is going on here? Is there a log at the developer console that can tell me what is happening?
Okay, I figured this out. I was using an IP address (as in "http://175.132.64.120") for the redirect URI, as this was a test site on the live server, and Google only accepts URLs with actual domain names (as in "http://mycompany.com" or "http://localhost") as redirect URIs.
Which, you know, THEY COULD HAVE SAID SOMEWHERE IN THE DOCUMENTATION, but whatever.
I know this is an old question, but it's the first result when you look for the problem via Google, so I'll share my solution with you guys.
When deploying a Google OAuth service on a private network, i.e. at some IP that can't be accessed via the Internet, you should use a magic DNS service like xip.io, which gives you a URL that your browser will resolve to your internal IP. You see, Google needs to be able to reach your authorized origin via your browser; that's why setting localhost works if you're serving from your own computer, but it won't work when you're deploying outside the Internet, as in a VPN, an intranet, or behind a tunnel.
So, the steps:
get the IP address you're deploying at, which is not a public domain; let's say it's 10.0.0.1 as an example.
add http://10.0.0.1.xip.io to your Authorized Javascript Origins on the Google Developer Console.
open your site by visiting http://10.0.0.1.xip.io
clear your cache for the site, if necessary.
Log in with Google, and voilà.
I got to this solution using this answer in another question.
If you are using http://127.0.0.1/projects/testplateform, change it to http://localhost/projects/testplateform, and it will work just fine.
If you are testing on your machine (locally), then don't use the IP address (i.e. http://127.0.0.1:8888) in the client ID configuration; use localhost instead and it should work.
Example: http://localhost:8888
To allow an IP address to be used as a valid JavaScript origin, first add an entry in your /etc/hosts file:
10.0.0.1 mydevserver.com
and then add this domain mydevserver.com in Authorized JavaScript Origins. If you are using some non-standard port, then specify it with your domain in Authorized JavaScript Origins.
Note: clear your cache and it will work.
Just ran across this same issue on an external test server without a DNS entry yet. If you have permission on your local machine, just edit your /etc/hosts file:
175.132.64.120 www.jimboweb.com
and use http://www.jimboweb.com as an authorized domain.
I have a server on a private network, IP 172.16.X.X.
The problem was solved by SSH-forwarding the app's port to a port on my localhost.
Now I am able to use the deployed app with Google OAuth by browsing to localhost:
ssh -N -L 8081:localhost:8080 ${user}@${host}
I also added http://localhost:8081 to "Authorized redirect URIs" and "Authorized JavaScript origins" in console.developers.google.com.
After battling with it for a few hours, I found out that my config in the Google Cloud console was all correct and similar to the answers provided. Due to caching issues or something, I had to recreate an OAuth client ID, and then it suddenly started working.
It's a pretty old issue, but I encountered it and there wasn't any helpful resource, so I am posting my solution.
For me, the issue occurred when I hosted my web app locally and used google-auth for logging in.
The URL I was trying to hit was: http://127.0.0.1:8000/master
I just changed the IP to http://localhost:8000/master/
And it worked. I was able to log in to the website using Google Auth.
Hope this helps someone someday.
Install XAMPP and run the Apache server.
Put your files (index and co.) in a folder in the XAMPP dir (c:\xampp\htdocs\yourfolder).
Type this in your browser's URL bar: http://localhost/yourfolder/index.html

Application-Controlled Session Stickiness: enable stickiness on all cookies?

I have my production servers running behind a load balancer on AWS (they scale up based on an AMI). Some websites have cookies - for example, a restaurant with multiple locations, and each location is set in a cookie.
I noticed that a cookie wasn't being saved across multiple servers, so I remedied this by going into Load Balancers -> Port Configuration, clicking Enable Application Generated Cookie Stickiness, and inserting the name of the cookie.
As far as I know, this only allows one cookie name, and I have many - Google Analytics, for example. (Perhaps they can be comma-separated; I haven't checked yet.)
My port configuration now looks like this:
80 (HTTP) forwarding to 80 (HTTP)
Stickiness: AppCookieStickinessPolicy, cookieName='MY_COOKIE'
I was wondering if there was any way to allow ANY app generated cookie to be recognized, instead of having to name them individually.
Any input greatly appreciated. Thank you!
I think you're misunderstanding the use and purpose of session stickiness.
If you don't have a shared session store - e.g. memcached, redis, or something that is available to ALL instances in your pool - then you're probably using a session mechanism that involves local storage: saving sessions on the local file system is a common mechanism for PHP, while IIS will usually have a local session store.
If you're using a local session store, then you need to make sure that all subsequent requests come back to the node that has the session stored - because if they don't, whatever information your application has saved in the session is no longer available.
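As a sketch of the shared-store alternative, here is what server-side sessions backed by Redis can look like in a Python/Flask app, using the third-party Flask-Session extension; the hostname and route are placeholders, and the original question's stack may well be something else entirely:

import redis
from flask import Flask, session
from flask_session import Session

app = Flask(__name__)
app.config["SESSION_TYPE"] = "redis"
# Any instance behind the load balancer can read this store:
app.config["SESSION_REDIS"] = redis.Redis(host="redis.internal", port=6379)
Session(app)

@app.route("/")
def index():
    session["visits"] = session.get("visits", 0) + 1
    return f"visits: {session['visits']}"

With a setup like this, stickiness becomes unnecessary, because no node holds session state locally.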
To get requests back to the same node, you have two choices: let the ELB set and manage its own session-affinity cookie, or have it key affinity off the session cookie your application sets. Note that in both cases the ELB creates a cookie named AWSELB whose value allows it to map the request back to the instance that originally created the session; the difference is that with application-generated cookie stickiness, the ELB issues a new AWSELB cookie only when it sees a new application session cookie.
It sounds like the application problem could be because you're pulling the location from session, not from the cookie, but that's just a guess.
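For what it's worth, the one-cookie-per-policy limitation is baked into the Classic ELB API itself; a hedged boto3 sketch (the load balancer, policy, and cookie names are placeholders):

import boto3

elb = boto3.client("elb")  # Classic ELB API

# Each app-cookie stickiness policy is tied to exactly one cookie name:
elb.create_app_cookie_stickiness_policy(
    LoadBalancerName="my-load-balancer",
    PolicyName="my-app-cookie-policy",
    CookieName="MY_COOKIE",
)

# And only the policies attached to the listener take effect:
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="my-load-balancer",
    LoadBalancerPort=80,
    PolicyNames=["my-app-cookie-policy"],
)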

Why am I getting "Internal Server Error" running two Odoo instances (same domain but different ports)?

I have two instances of Odoo on a server in the cloud. If I perform the following steps, I get "Internal Server Error":
I log in to the first instance (http://111.222.33.44:3333)
I close the session
I load the address of the second instance in the same browser (http://111.222.33.44:4444)
If I want to work in the second instance (on another port), I need to remove the browser cookies first to access the other Odoo instance. If I do this, everything works fine.
If I load them in different browsers (Firefox and Chromium) at the same time, they work well too.
It's not an Nginx issue, because I tried with and without it.
Is there a way to solve this permanently? Is this the expected behaviour?
If you have access to the source code, you can change this file as shown below and check whether the issue is solved.
addons/web/controllers/main.py

if db != request.session.db:
    request.session.logout()
    request.session.db = db
    abort_and_redirect(request.httprequest.url)

Delete the line request.session.db = db, which sits inside this if block.
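For clarity, after deleting that line the block from the snippet above would read:

if db != request.session.db:
    request.session.logout()
    abort_and_redirect(request.httprequest.url)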
Try the following change in:
openerp/addons/base/ir/ir_http.py
In the method _handle_exception, somewhere around line 140, you will find this piece of code:

attach = self._serve_attachment()
if attach:
    return attach

Replace it with:

if isinstance(exception, werkzeug.exceptions.HTTPException) and exception.code == 404:
    attach = self._serve_attachment()
    if attach:
        return attach
You can perfectly well serve all the databases with a single OpenERP server on your machine. Unfortunately, you did not mention what error you were seeing and what you expected as a result, which makes it a bit harder to help you ;-)
Anyway, here are some random ideas based on the information you provided:
If you have a problem with OpenERP not listening on all interfaces, try specifying 0.0.0.0 as the xmlrpc_interface in the configuration file; this should make OpenERP listen on port 8069 on all IPs.
Note that Apache is not relevant if you're connecting to e.g. http://www.sample.com:8069/?db=openerp, because you're connecting directly to OpenERP. If you want to go through Apache, you need to set up ReverseProxy rules in your vhost configs, and OpenERP then does not need to listen on all public IPs.
OpenERP 6.1 and later can autodetect the database name based on the virtual host name and filter the list of available databases: you need to start it with the --db-filter parameter, which is a pattern used to filter the list of available databases. %h represents the whole domain name and %d is the first component of that domain. So for example, with --db-filter=^%d$ I will only see the test database if I reach the server via http://test.example.com:8069. If only one database matches, the list is not displayed and the user ends up directly on the right database. This works even behind Apache reverse proxies if you make sure OpenERP sees the external hostname, i.e. by setting an X-Forwarded-Host header in your Apache proxy config and enabling the --proxy mode of OpenERP.
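A toy illustration of how that %h/%d substitution behaves, mirroring the description above rather than OpenERP's actual source:

def db_filter(pattern: str, hostname: str) -> str:
    # %h is the whole hostname, %d its first component
    first = hostname.partition(".")[0]
    return pattern.replace("%h", hostname).replace("%d", first)

print(db_filter("^%d$", "test.example.com"))  # -> ^test$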
The port reuse problem comes from trying to start multiple OpenERP servers on the same interface/port combination. This is simply not possible unless you are careful to start just one server per IP, with the IP set in the xmlrpc_interface parameter, and I don't think you need that. The name-based virtual hosts that Apache supports are all handled by a single master process that listens on port 80 on all interfaces. If you want to do the same with OpenERP, you only need to start one OpenERP server for all your domains and make it listen on 0.0.0.0, port 8069, as explained above.
On top of that, it's not clear what you would have set differently in the various config files. Running 40 different OpenERP servers on the same machine with identical code sounds like massive overkill. OpenERP is designed to be multi-tenant, so that many (read: hundreds of) databases can be served from the same server.
Finally, I think this is the expected behaviour. Browsers store cookies per website, i.e. per domain, and the port is not part of that scope. So if I only change the port, the cookies of the first instance conflict with the cookies of the other instance, because they have the same domain (111.222.33.44 in my example).
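The port-blindness of cookies is easy to demonstrate with the third-party requests package; the host, ports, and cookie value below are the placeholders from my example:

import requests

s = requests.Session()
# Simulate the first Odoo instance setting its session cookie:
s.cookies.set("session_id", "abc123", domain="111.222.33.44", path="/")

# The jar attaches the same cookie on ANY port of that host:
for url in ("http://111.222.33.44:3333/", "http://111.222.33.44:4444/"):
    prepared = s.prepare_request(requests.Request("GET", url))
    print(url, "->", prepared.headers.get("Cookie"))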
So there are some workarounds:
Change Domain Locally
Creating a couple of domain names on my laptop in /etc/hosts:
111.222.33.44 cloud01
111.222.33.44 cloud02
Then the cookies don't interfere with each other anymore. To access each instance:
http://cloud01:3333
http://cloud02:4444
Browser extension: Multilogin or multi-account
There is another workaround: if I use this Chromium extension, the problem disappears because the sessions are treated separately:
SessionBox