Can PostgreSQL use md5 authentication for one IP range and certificate authentication for another IP range? - postgresql-11

I saw that in pg_hba.conf we have the option to set different authentication methods for different IPs. Can we set password (md5) authentication for one IP range and certificate authentication for another IP range?
Please help.
Please share any good links to examples of implementing this.

Yes, this is possible. I tried it myself and it worked.
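A minimal sketch of what the pg_hba.conf entries could look like (the address ranges, databases, and users are illustrative; entries are matched top to bottom, the first matching line wins, and the server needs a configuration reload, e.g. SELECT pg_reload_conf();, after editing):

# TYPE    DATABASE  USER  ADDRESS        METHOD
# md5 password authentication for one range
host      all       all   10.0.1.0/24    md5
# client-certificate authentication for another range (requires an SSL connection)
hostssl   all       all   10.0.2.0/24    cert

The cert method additionally needs ssl = on in postgresql.conf and a client certificate whose CN matches the database user name (or a user-name map in pg_ident.conf). The pg_hba.conf chapter of the PostgreSQL 11 documentation (https://www.postgresql.org/docs/11/auth-pg-hba-conf.html) walks through the format in detail.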

Related

using custom domain for a django page through traefik

I have my first custom domain (it's through GoDaddy).
I've hooked it up to Cloudflare.
I want to connect to it with Traefik.
I have a Django web page that works fine on port 8000, so I switched it over to 80 and no dice. Trying to connect to my custom domain just hangs, and the port gives me a 404 error.
The Traefik dashboard looks fine, and so do my records on Cloudflare (as far as I can tell; I've never done this before).
I was hoping someone could help me connect to my Django page through my custom domain. Is there anything I've done in the evidence provided below that looks wrong?
Is there anything else you would need to see?
Or any steps I've missed?
I receive this error from Traefik as the Docker container starts:
traefik2 | time="2023-02-13T14:08:29Z" level=error msg="Unable to obtain ACME certificate for domains \"tgmjack.com\": unable to generate a certificate for the domains [tgmjack.com]: error: one or more domains had a problem:\n[tgmjack.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: 2606:4700:3033::ac43:a864: Invalid response from http://tgmjack.com/.well-known/acme-challenge/PnsiuL5AtrJXM9UQNrLvhlGdm1MpJ8ZS6i_atIVWCA4: \"<!doctype html><html lang=\\\"en\\\"><head><meta http-equiv=\\\"content-type\\\" content=\\\"text/html;charset=utf-8\\\" /><meta name=\\\"viewport\\\" c\"\n" providerName=myhttpchallenge.acme ACME CA="https://acme-v02.api.letsencrypt.org/directory" routerName=frontend#docker rule="Host(`tgmjack.com`)"
According to ChatGPT:
the required file is an ACME challenge file, and it should be present at the URL specified in the log message: "http://tgmjack.com/.well-known/acme-challenge/qC1w4L8-pPVgXvXmWm55u6ETasZWK2iCqJUfZNArY5U".
Investigating, I believe the following few lines from my command line show that the only file on my computer called acme.json is here:
[ec2-user@ip-172-31-19-18 letsencrypt]$ sudo find / -name "acme.json"
/home/ec2-user/thing4/new_ui_51_fix_backend_for_8081/running_prices/TRAEFIK/letsencrypt/acme.json
And also there is no "acme-challenge" anywhere.
So is TRAEFIK/letsencrypt/acme.json the correct file? Because the path looks miles away from what it should be, and I didn't make it.
#####################################
extra info below
#################################
Below is a collection of screenshots of each thing I've stated above.
Do you have any advice or questions?
PS :)
This happens on my local machine and on Amazon Linux EC2 containers; I have all my ports open (on the AWS end of things).
Some considerations:
GoDaddy's DNS settings are ignored if you are using Cloudflare for your DNS, so we only look at Cloudflare's.
On Cloudflare you need to remove that random IP you found set as the A record.
You don't need to change your container port from 8000 to 80; instead, manage a proxy pass to localhost:8000 with an "ingress" or otherwise a web server (nginx, for example).
Traefik probably already has an "ingress" used to provision the certificate, which is why it returns an error on ".well-known/acme-challenge". This file is used to prove actual ownership of a domain, which is needed to generate a valid SSL/TLS certificate.
To make this work, you need to ensure that when your server is called at localhost:8000/.well-known/acme-challenge it returns the file with the unique key. You can find this information in the Traefik ACME documentation (https://doc.traefik.io/traefik/https/acme/); see also the configuration sketch below.
I recommend you start by checking the correct configuration of Cloudflare's DNS, removing anything that is not useful to you.
I hope I have been of some help to you!
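For reference, here is a minimal sketch of a Traefik v2 static configuration for the HTTP-01 challenge; the resolver name myhttpchallenge matches the one in your error log, while the e-mail address and paths are placeholders, and Cloudflare must let plain HTTP reach Traefik on port 80 for the challenge to be answered:

# traefik.yml (static configuration) – illustrative values only
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
certificatesResolvers:
  myhttpchallenge:
    acme:
      email: you@example.com          # placeholder e-mail
      storage: /letsencrypt/acme.json # must be writable and persisted, e.g. a mounted volume
      httpChallenge:
        entryPoint: web               # the challenge is answered on port 80

The router for the Django container then only needs to reference this resolver, e.g. with the label traefik.http.routers.frontend.tls.certresolver=myhttpchallenge, "frontend" being the router name that appears in your log.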

PGadmin4 on Kubernetes: Session invalidated when using ELB

I have a weird problem with PGAdmin4.
My setup
pgAdmin 4.1 deployed on Kubernetes using the chorss/docker-pgadmin4 image; one pod only, to simplify troubleshooting;
Nginx ingress controller as reverse proxy on the cluster;
Classic ELB in front to load balance incoming traffic on the cluster.
ELB <=> NGINX <=> PGADMIN
From a DNS point of view, the hostname of pgadmin is a CNAME towards the ELB.
The problem
The application is correctly reachable, users can log in, and everything works just fine. The problem is that after a couple of (roughly 2-3) minutes the session is invalidated and users are asked to log in again. This happens regardless of whether pgAdmin is actively being used or not.
After countless hours of troubleshooting, I found out that the problem happens when the DNS resolution of ELB's CNAME switches to another IP address.
In fact, I tried:
connecting to the pod directly through the k8s service's node port => session doesn't expire;
connecting to nginx directly (bypassing the ELB) => session doesn't expire;
mapping one of the ELB's IP addresses in my hosts file => session doesn't expire.
Given the above tests, I'd conclude that the Flask app (pgAdmin 4 is apparently a Python Flask application) considers my cookie invalid after the remote address behind my hostname changes.
Any Flask developer that can help me fix this problem? Any other idea about something I might be missing?
pgAdmin 4 seems to use Flask-Security for authentication:
pgAdmin utilised the Flask-Security module to manage application security and users, and provides options for self-service password reset and password changes etc.
https://www.pgadmin.org/docs/pgadmin4/dev/code_overview.html
Flask-Security seems to use Flask-Login:
Many of these features are made possible by integrating various Flask extensions and libraries. They include:
Flask-Login
...
https://pythonhosted.org/Flask-Security/
Flask-Login seems to have a feature called "session protection":
When session protection is active, each request, it generates an identifier for the user’s computer (basically, a secure hash of the IP address and user agent). If the session does not have an associated identifier, the one generated will be stored. If it has an identifier, and it matches the one generated, then the request is OK.
https://flask-login.readthedocs.io/en/latest/#session-protection
I would assume setting login_manager.session_protection = None would solve the issue, but unfortunately I don't know how to set it in pgAdmin. I hope this helps you somehow.
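For illustration, this is how session protection would be switched off in a plain Flask application (a sketch only; the real question is how to reach this setting inside pgAdmin, which the answers below cover):

from flask import Flask
from flask_login import LoginManager

app = Flask(__name__)
app.secret_key = "change-me"               # placeholder secret
login_manager = LoginManager(app)
login_manager.session_protection = None    # skip the IP / user-agent hash check on each request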
For those looking for a solution: you need to add the setting below to config.py, config_distro.py, or config_local.py.
config_local.py
SESSION_PROTECTION = None
I faced a similar issue behind a GKE load balancer. A cleaner solution that worked for me is disabling cookie protection based on IP address. Add the flag below to config_local.py:
# Disable cookie generation based on IP address
ENHANCED_COOKIE_PROTECTION = False
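Putting the two answers together, a config_local.py could look like the sketch below. Exactly which of config.py, config_distro.py, or config_local.py your image reads, and where the file lives, depends on how the image was built, so treat the location as an assumption:

# config_local.py – overrides merged on top of pgAdmin's config.py
# Stop Flask-Login from binding the session to the client IP / user agent
SESSION_PROTECTION = None
# Stop pgAdmin's own cookie binding to the client IP (newer releases)
ENHANCED_COOKIE_PROTECTION = False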

AWS-ELB -> GEOIP -> MAXMIND -> Laravel

We're trying to serve people from multiple countries the right language on our website. We have added GeoIP in Laravel and also the MaxMind package.
Whatever we try, we keep getting a 500 error:
The IP address '10.2.1.211' is a reserved IP address
We first tried to pass X_FORWARDED_FOR through in Apache, but it still isn't working.
Can someone assist us and tell us exactly how to solve it?
Our environment:
AWS: CloudFront, ELB, EC2, Laravel 5.5, MaxMind (for GeoIP)
It would appear that you can configure CloudFront to provide an HTTP header, CloudFront-Viewer-Country, which will contain the ISO country code of your visitor. This will be faster and simpler to use than MaxMind.
e.g. $visitorCountryCode = isset($_SERVER['HTTP_CLOUDFRONT_VIEWER_COUNTRY']) ? $_SERVER['HTTP_CLOUDFRONT_VIEWER_COUNTRY'] : ''; (PHP exposes the header in $_SERVER as HTTP_CLOUDFRONT_VIEWER_COUNTRY)
Is your error 500 occurring during testing only? If you are testing with devices directly connected to your site/intranet, then try accessing via a browser connected through an Internet Service Provider instead (a "direct" or intranet connection could well have a "reserved" IP address).
You should be able to get the public IP address of the traffic from the X_FORWARDED_FOR variable.
https://aws.amazon.com/premiumsupport/knowledge-center/log-client-ip-load-balancer-apache/
You should print out the variable from Apache and see if you receive the value correctly. Anything starting with 10.x is a private address.
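A sketch of how that could look in Apache 2.4 with mod_remoteip, so that Laravel/MaxMind see the real client address instead of the load balancer's private one (the trusted proxy range below is an assumption; use whatever range your ELB actually sits in):

# httpd.conf / virtual host – illustrative only; mod_remoteip must be enabled
RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 10.0.0.0/8                 # assumed private range of the ELB
LogFormat "%a %l %u %t \"%r\" %>s %b" combined   # %a now shows the forwarded client IP

With this in place, REMOTE_ADDR as seen by PHP should carry the forwarded address rather than the ELB's 10.x one. Since CloudFront sits in front of the ELB, its edge ranges may also need to be trusted (RemoteIPTrustedProxy) for the actual viewer's address, rather than a CloudFront edge IP, to surface.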

WSO2 AM (1.10.0) ip address redirect

The WSO2 AM site has been assigned a domain, puaki-uat.mpi.govt.nz. However, the site automatically redirects to the IP address after the domain name is typed, which results in a mismatch with the signed certificate.
Expected: always use the domain name so that it matches the security certificate.
Could anyone please tell me how to prevent the site from switching to the IP address?
Thanks, Sean
It looks like you haven't configured the hostname in carbon.xml. Go to wso2am-1.10.0/repository/conf/carbon.xml and change the following tags:
<HostName>puaki-uat.mpi.govt.nz</HostName>
<MgtHostName>puaki-uat.mpi.govt.nz</MgtHostName>

Google: Permission denied to generate login hint for target domain NOT on localhost

I am trying to create a Google sign-in and getting the error:
Permission denied to generate login hint for target domain
Before you mark this as a duplicate: this is not the same as the question asked at "Google sign in website Error: Permission denied to generate login hint for target domain", because in that case the questioner was on localhost, whereas I am getting this error on the server.
Specifically, I have included the URL of the server in the Authorized JavaScript Origins, as in the following image:
and when I get the error, the request shows that the same URL was sent, as in the following image:
Is there something else I should be putting in my Restrictions page? Is there any way to figure out what is going on here? Is there a log at the developer console that can tell me what is happening?
Okay, I figured this out. I was using an IP address (as in "http://175.132.64.120") for the redirect URI, as this was a test site on the live server, and Google only accepts actual URLs (as in "http://mycompany.com" or "http://localhost") as redirect URIs.
Which, you know, THEY COULD HAVE SAID SOMEWHERE IN THE DOCUMENTATION, but whatever.
I know this is an old question, but it's the first result when you look the problem up via Google, so I'll share my solution with you.
When deploying a Google OAuth service on a private network, namely some IP that can't be accessed via the Internet, you should use a magic DNS service, like xip.io, that will give you a URL your browser will resolve to your internal IP. You see, Google needs to be able to reach your authorized origin via your browser; that's why setting localhost works if you're serving it on your computer, but it won't work when you're deploying outside the Internet, as in a VPN, intranet, or with a tunnel.
So, the steps:
get your IP address, the one you're deploying at that isn't a public domain; let's say it's 10.0.0.1 as an example.
add http://10.0.0.1.xip.io to your Authorized JavaScript Origins on the Google Developer Console.
open your site by visiting http://10.0.0.1.xip.io
clear your cache for the site, if necessary.
Log in with Google, and voilà.
I got to this solution using this answer in another question.
If you are using http://127.0.0.1/projects/testplateform, change it to http://localhost/projects/testplateform and it will work just fine.
If you are testing on your machine (locally), then don't use the IP address (i.e. http://127.0.0.1:8888) in the Client ID configuration; use localhost instead and it should work.
Example: http://localhost:8888
To allow an IP address to be used as a valid JavaScript origin, first add an entry to your /etc/hosts file:
10.0.0.1 mydevserver.com
and then add the domain mydevserver.com to your Authorized JavaScript Origins. If you are using a nonstandard port, then specify it with your domain in Authorized JavaScript Origins.
Note: clear your cache and it will work.
Just ran across this same issue on an external test server, without a DNS entry yet. If you have permission on your local machine just edit your /etc/hosts file:
175.132.64.120 www.jimboweb.com
And use http://www.jimboweb.com as an authorized domain.
I have a server on a private network, IP 172.16.X.X.
The problem was solved by SSH-forwarding the app port to my localhost port.
Now I am able to use the deployed app with Google OAuth by browsing to localhost.
ssh -N -L8081:localhost:8080 ${user}@${host}
I also added localhost:8081 to "Authorized redirect URIs" and "Authorized JavaScript origins" in console.developers.google.com:
(screenshot: Google Developers Console)
After battling with it for a few hours, I found out that my config in the Google Cloud console was all correct and similar to the answers provided. Due to caching issues or something, I had to recreate an OAuth Client ID, and then it suddenly started working.
It's a pretty old issue, but I encountered it and there wasn't a helpful resource, so I am posting my solution.
For me the issue arose when I hosted my web app locally, using google-auth for logging in.
The URL I was trying to hit was: http://127.0.0.1:8000/master
I just changed from the IP to http://localhost:8000/master/
And it worked. I was able to log in to the website using Google Auth.
Hope this helps someone someday.
Install XAMPP and run the Apache server.
Put your files (index and co.) in a folder in the XAMPP dir (c:\xampp\htdocs\yourfolder).
Type this in your browser's URL bar: http://localhost/yourfolder/index.html