I created this question earlier but was told that it is a DNS issue as opposed to an issue with HSTS. Regardless, here is what I need help troubleshooting:
Issue:
A single site (one that I own) is showing "Server DNS address could not be found. DNS_PROBE_FINISHED_NXDOMAIN" when I try to connect to it via Chrome, Firefox, or Safari. I can, however, connect to it via Tor Browser, and I can verify that the address resolves correctly using MxToolbox. I am also not able to connect from two other computers or two other phones, nor over a different Wi-Fi connection or my phone's personal hotspot. curl and host on the command line also fail to get a response.
What I've tried:
As I said above, I've tried different internet connections and computers. I've also tried flushing my DNS cache and pointing to another DNS server.
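For what it's worth, one way to compare resolvers side by side is a small script like this (a rough sketch, assuming Python with the dnspython package installed, and example.com standing in for the real domain):

import dns.resolver  # pip install dnspython

DOMAIN = "example.com"  # stand-in for the affected domain

# Query the system resolver and Google's public resolver side by side;
# NXDOMAIN from one but not the other points at a broken resolver path.
for label, servers in [("system resolver", None), ("google 8.8.8.8", ["8.8.8.8"])]:
    resolver = dns.resolver.Resolver()
    if servers:
        resolver.nameservers = servers
    try:
        answers = resolver.resolve(DOMAIN, "A")
        print(label, "->", [rdata.to_text() for rdata in answers])
    except dns.resolver.NXDOMAIN:
        print(label, "-> NXDOMAIN")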
Having said that, I am not sure how else to troubleshoot this. The only change I made to the web app was to add HSTS headers, hence the earlier post. Please let me know what other information I can provide. Otherwise, here are some details about the site itself:
Other information about my stack:
Django web app
Gunicorn / WSGI server
Hosted on Heroku - Cedar-14 stack
DNS set up with AWS Route 53
Domain name registered through AWS
EDIT:
Possibly related: https://serverfault.com/questions/606880/how-can-i-troubleshoot-a-route-53-hosted-zone
I had a similar issue and was not able to open Facebook; all other sites were working fine. Initially, I thought Facebook had blocked me, as I had never faced this issue before. Later, when I searched Google, I found an article describing the DNS_PROBE_FINISHED_NXDOMAIN issue in Chrome.
I just changed my DNS server addresses to 8.8.8.8 (preferred) and 8.8.4.4 (alternate), and I have never faced the issue again.
Reference - https://www.mobipicker.com/dns_probe_finished_nxdomain/
So, from our discussion regarding the NS records: always make sure that the zone's local NS records match the parent NS records.
In your case, there were two extra NS records associated with your domain, which is why your domain and subdomains were acting unhealthy. Once you deleted those records, the domain and subdomains were back to normal.
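For anyone hitting the same thing, here is a rough way to compare the two sets programmatically (a sketch, assuming Python with the dnspython package; example.com stands in for the real domain):

import dns.message
import dns.name
import dns.query
import dns.rdatatype
import dns.resolver  # pip install dnspython

DOMAIN = "example.com"  # stand-in for the real domain

def ns_names(reply):
    """Collect NS target names from a reply's answer/authority sections."""
    return {
        rr.target.to_text()
        for section in (reply.answer, reply.authority)
        for rrset in section
        if rrset.rdtype == dns.rdatatype.NS
        for rr in rrset
    }

def query_ns(qname, server_name):
    """Ask the given nameserver (by name) for qname's NS records over UDP."""
    server_ip = dns.resolver.resolve(server_name, "A")[0].to_text()
    query = dns.message.make_query(qname, "NS")
    return ns_names(dns.query.udp(query, server_ip, timeout=5))

# Parent-side (delegation) NS set: ask one of the TLD's servers directly.
parent = dns.name.from_text(DOMAIN).parent()
tld_server = dns.resolver.resolve(parent, "NS")[0].target.to_text()
delegation = query_ns(DOMAIN, tld_server)

# Child-side NS set: ask one of the delegated servers for the zone's own records.
zone_ns = query_ns(DOMAIN, next(iter(delegation)))

print("parent delegation:", sorted(delegation))
print("zone NS records:  ", sorted(zone_ns))
print("OK" if delegation == zone_ns else "MISMATCH: fix extra or missing NS records")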
You can also try opening the URL in an incognito/private window; if it loads there, close your normal window, reopen it, and the site should load OK.
Related
I've followed the documentation for Amazon's S3 and Route 53 to host a static website.
It worked perfectly, and the next day my site was online. I kept updating my index.html afterward with small things like extra text here and there, and so far I had no issues; every update was reflected on the site. Then suddenly I visited my website, got a "server IP address could not be found" error, and could not reach it.
I checked dnschecker.org and internic.net to verify the DNS status of my site, and it showed everything green. I created an Availability test in the Route 53 dashboard and it returns 200 OK.
I also made sure the four name servers from the hosted zone match the ones Internic returns.
So apparently every service says that my site is reachable, but it's not. I have not changed any public access options since I initially set them up using the documentation.
I have also tried reaching the site from a different browser, a different PC, and from my phone; none of them can reach the website.
I absolutely have no idea what to do, to get my site back running. I would very much appreciate some insight.
Footnote: I am very new to this, so please let me know if I need to provide extra information
It turned out that I was on my college's network and somehow my website got delisted from their DNS server (I don't have an exact explanation, just an observation).
My website loaded using mobile data on my phone, and also if I turned on a VPN on my PC. So basically it's a DNS problem.
I have a weird problem with pgAdmin 4.
My setup
pgAdmin 4.1 deployed on Kubernetes using the chorss/docker-pgadmin4 image, one pod only to simplify troubleshooting;
Nginx ingress controller as reverse proxy on the cluster;
Classic ELB in front to load balance incoming traffic on the cluster.
ELB <=> NGINX <=> PGADMIN
From a DNS point of view, the hostname of pgadmin is a CNAME towards the ELB.
The problem
The application is correctly reachable, users can log in, and everything works just fine. The problem is that after roughly 2-3 minutes the session is invalidated and users are asked to log in again. This happens regardless of whether pgAdmin is actively being used.
After countless hours of troubleshooting, I found out that the problem happens when the DNS resolution of ELB's CNAME switches to another IP address.
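For reference, the rotation is easy to watch with a few repeated lookups (a sketch, assuming Python; pgadmin.example.com stands in for the real CNAME):

import socket
import time

HOST = "pgadmin.example.com"  # stand-in for the CNAME pointing at the ELB

# Print the A records behind the hostname once a minute; the set changing
# over time is the rotation that lines up with the session drops.
for _ in range(10):
    infos = socket.getaddrinfo(HOST, 443, proto=socket.IPPROTO_TCP)
    ips = sorted({info[4][0] for info in infos})
    print(time.strftime("%H:%M:%S"), ips)
    time.sleep(60)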
In fact, I tried:
connecting to the pod directly via the k8s service's node port => session doesn't expire;
connecting to nginx (bypassing the ELB) directly => session doesn't expire;
mapping one of the ELB's IP addresses in my hosts file => session doesn't expire.
Given the above tests, I'd conclude that the Flask app (pgAdmin 4 is apparently a Python Flask application) considers my cookie invalid after the remote address for my hostname changes.
Any Flask developer that can help me fix this problem? Any other idea about something I might be missing?
pgAdmin 4 seems to use Flask-Security for authentication:
pgAdmin utilised the Flask-Security module to manage application security and users, and provides options for self-service password reset and password changes etc.
https://www.pgadmin.org/docs/pgadmin4/dev/code_overview.html
Flask-Security seems to use Flask-Login:
Many of these features are made possible by integrating various Flask extensions and libraries. They include:
Flask-Login
...
https://pythonhosted.org/Flask-Security/
Flask-Login seems to have a feature called "session protection":
When session protection is active, each request, it generates an identifier for the user’s computer (basically, a secure hash of the IP address and user agent). If the session does not have an associated identifier, the one generated will be stored. If it has an identifier, and it matches the one generated, then the request is OK.
https://flask-login.readthedocs.io/en/latest/#session-protection
I would assume setting login_manager.session_protection = None would solve the issue, but unfortunately I don't know how to set it in pgAdmin. Hope it might help you somehow.
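For illustration, this is how the setting looks in a plain Flask-Login app (a minimal sketch, not pgAdmin's actual wiring; names like app are illustrative):

from flask import Flask
from flask_login import LoginManager  # pip install flask-login

app = Flask(__name__)
app.secret_key = "change-me"  # sessions require a secret key

login_manager = LoginManager()
login_manager.init_app(app)

# "basic" (the default) or "strong" ties the session to a hash of the
# client's IP address and user agent, so a changing remote address (e.g.
# behind a rotating ELB) invalidates the login. None disables the check.
login_manager.session_protection = None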
For those looking for a solution: you need to add the line below to config.py, config_distro.py, or config_local.py.
config_local.py
SESSION_PROTECTION = None
I faced a similar issue behind a GKE load balancer. The cleaner solution that worked for me was disabling cookie protection based on IP address. Add the flag below to config_local.py:
# Disable cookie generation based on IP address
ENHANCED_COOKIE_PROTECTION = False
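Putting the two answers above together, a minimal config_local.py might look like this (a sketch; which flags you need depends on your pgAdmin version, and pgAdmin reads config_local.py last, so values there override config.py):

# config_local.py -- minimal sketch combining both suggestions above

# Stop Flask-Login from tying the session to the client's IP/user agent
SESSION_PROTECTION = None

# Stop pgAdmin's IP-based cookie protection (the GKE/ELB case above)
ENHANCED_COOKIE_PROTECTION = False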
I am trying to create a Google sign-in and getting the error:
Permission denied to generate login hint for target domain
Before you mark this as a duplicate: this is not the same as the question asked at "Google sign in website Error : Permission denied to generate login hint for target domain", because in that case the questioner was on localhost, whereas I am getting this error on the server.
Specifically, I have included the URL of the server in the Authorized JavaScript Origins, and when I get the error, the request shows that the same URL was sent.
Is there something else I should be putting in my Restrictions page? Is there any way to figure out what is going on here? Is there a log at the developer console that can tell me what is happening?
Okay, I figured this out. I was using an IP address (as in "http://175.132.64.120") for the redirect URI, as this was a test site on the live server, and Google only accepts actual domain names (as in "http://mycompany.com" or "http://localhost") as redirect URIs.
Which, you know, THEY COULD HAVE SAID SOMEWHERE IN THE DOCUMENTATION, but whatever.
I know this is an old question, but it's the first result when you look for the problem via Google, so I'll share my solution with you guys.
When deploying a Google OAuth service on a private network, i.e. at some IP that can't be accessed via the Internet, you should use a magic DNS service like xip.io, which will give you a URL that your browser will resolve to your internal IP. You see, Google needs to be able to reach your authorized origin via your browser; that's why setting localhost works if you're serving from your own computer, but it won't work when you're deploying outside the Internet, as on a VPN, an intranet, or through a tunnel.
So, the steps:
Get the IP address you're deploying at (one that isn't a public domain); let's say it's 10.0.0.1 as an example.
Add http://10.0.0.1.xip.io to your Authorized JavaScript Origins on the Google Developer Console.
Open your site by visiting http://10.0.0.1.xip.io.
Clear your cache for the site, if necessary.
Log in with Google, and voilà.
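As a sanity check that the magic DNS name really resolves to your private IP, a one-liner is enough (a sketch in Python; note that xip.io has since been retired, so the example uses nip.io, which behaves the same way):

import socket

# Wildcard DNS services embed the IP in the hostname, so any resolver
# returns that IP back without needing a real DNS entry.
print(socket.gethostbyname("10.0.0.1.nip.io"))  # expected: 10.0.0.1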
I got to this solution using this answer in another question.
If you are using http://127.0.0.1/projects/testplateform, change it to http://localhost/projects/testplateform and it will work just fine.
If you are testing on your machine (locally), then don't use the IP address (i.e. http://127.0.0.1:8888) in the client ID configuration; use localhost instead and it should work.
Example: http://localhost:8888
To allow an IP address to be used as a valid JavaScript origin, first add an entry to your /etc/hosts file:
10.0.0.1 mydevserver.com
and then add the domain mydevserver.com to your Authorized JavaScript Origins. If you are using some non-standard port, then specify it along with your domain in Authorized JavaScript Origins.
Note: clear your cache and it will work.
Just ran across this same issue on an external test server, without a DNS entry yet. If you have permission on your local machine just edit your /etc/hosts file:
175.132.64.120 www.jimboweb.com
And use http://www.jimboweb.com as an authorized domain.
I have a server on a private net, IP 172.16.X.X.
The problem was solved by SSH-forwarding the app's port to a port on my localhost.
Now I am able to use the deployed app with Google OAuth by browsing to localhost.
ssh -N -L8081:localhost:8080 ${user}@${host}
I also added http://localhost:8081 to "Authorized redirect URIs" and "Authorized JavaScript origins" in console.developers.google.com.
After battling with it for a few hours, I found out that my config in the Google Cloud console was all correct and similar to the answers provided. Due to caching issues or something, I had to recreate an OAuth client ID, and then it suddenly started working.
It's a pretty old issue, but I encountered it and there wasn't any helpful resource, so I am posting my solution.
For me the issue arose when I hosted my web app locally and used google-auth for logging in.
The URL I was trying to hit was: http://127.0.0.1:8000/master
I just changed the IP to http://localhost:8000/master/
And it worked. I was able to log in to the website using Google Auth.
Hope this helps someone someday.
Install XAMPP and run the Apache server,
put your files (index and co.) in a folder in the XAMPP dir (c:\xampp\htdocs\yourfolder),
and type this in your browser's URL bar: http://localhost/yourfolder/index.html
I followed this tutorial in order to set up CloudFlare with DigitalOcean. However, I encountered the following problem:
Visiting my website from Chrome, I noticed there is no indication that CloudFlare is working for me: there are no CF-RAY or cloudflare-nginx headers on responses, and the Claire extension showed that CloudFlare is not active. However, when I test the website with WebPagetest, I can see that all the JavaScript files were served through CloudFlare's caching system.
How long did you wait before testing? DNS changes take time to propagate; it sounds like your DNS server is still using the old server address.
Check by running ping address.com: does it resolve to your DigitalOcean IP or something else?
CloudFlare is usually set up at the nameserver level (though CloudFlare Partners, along with those on Business/Enterprise plans, can choose to set up via CNAME). Nameserver changes usually take time to propagate; this can be 24-48 hours (whereas A records or CNAME records usually have a much lower TTL).
You can check which nameservers your computer is pointing to on a Mac/Linux box by:
dig NS example.com
On Windows you can do:
nslookup -type=NS example.com
This will tell you the nameservers your computer is pointing to, and whether you need to wait longer for them to propagate to CloudFlare.
Long story short, just be patient. :)
Use the Google DNS servers 8.8.8.8 and 8.8.4.4 in your client's IPv4 connection properties. It might be that your ISP is caching the records and you are not able to see the changes instantly.
I'm trying to get CFHTTP to talk to a domain that I created for testing purposes on my test server. The address of the domain is "mydomain.example.com". Every time I try to connect using CFHTTP I get an error stating:
Your requested host "mydomain.example.com" could not be resolved by DNS.
I have already added the entry to the Windows hosts file:
127.0.0.1 mydomain.example.com
I've also made sure that java.net.InetAddress can resolve the domain by doing the following in a ColdFusion page:
<cfset loc.javaInet = createObject("java","java.net.InetAddress")>
<cfset loc.dnsLookup = loc.javaInet.getByName("mydomain.example.com")>
for which I get back
mydomain.example.com/127.0.0.1
I've even tried restarting the ColdFusion service and changing the value of networkaddress.cache.ttl in runtime\jre\lib\security\java.security to 0.
I'm at a loss as to why everything seems to resolve at the JRE level but not at the CFHTTP level. Any ideas?
Why is it that after I post a question, I figure it out? Go fig.
The issue was that, for some reason, I still had an old proxy configuration set up on the java.args line in my runtime\bin\jvm.config.
After removing the old configuration setting and restarting the ColdFusion service, I'm back in business.
For those who want to know, you can set the proxy information for CFHTTP to use by adding the following arguments to the java.args line in the jvm.config file:
-Dhttp.proxyHost=<ip address>
-Dhttp.proxyPort=<portnumber>
-Dhttp.proxyUser=<username>
-Dhttp.proxyPassword=<password>
Your problem may have to do with the way DNS lookups are cached by ColdFusion: CFHTTP permanently keeps a copy of the DNS lookup. You could try flushing this by restarting ColdFusion.
Also, Windows won't easily pick up those changes to your hosts file; the easy way is to reboot the Windows machine.
I agree, the problem is a DNS one, and using a proxy just masks it. Try setting your DNS resolver on Windows to something stable and public, like 8.8.8.8, which is a Google DNS server.