Postman Monitoring request error "Error: NETERR: getaddrinfo ENOTFOUND localhost" - postman

I am trying to figure out how to get monitoring to work in Postman.
I have written tests in the desktop client for get/create/put and everything works fine. I'm using a localhost address and port 5004, which is the port for the API.
http://127.0.0.1:5004/bookings
I have tried changing the proxy in Settings to localhost and port 5004, I have tried changing it to 127.0.0.1:5004, and I have tried disabling SSL on the desktop client. I am running the monitor from the desktop client; running it from the browser doesn't work either.
I have also checked that my /etc/hosts file contains 127.0.0.1 localhost, and it does.
Not sure what else I can try, I would appreciate any help. :)

Accessible APIs:
Monitors require all URLs to be publicly available on the internet as
they run in the Postman cloud. A monitor cannot directly access your
localhost or run requests behind a firewall. However, to overcome this
issue, static IPs are available on Postman Business and Enterprise
plans.
https://learning.postman.com/docs/designing-and-developing-your-api/monitoring-your-api/intro-monitors/
You cannot use a monitor for in-house or localhost sites. You could upgrade to a Business or Enterprise plan and see if that helps.
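In practice, that means the URL a monitor hits has to resolve on the public internet. One way to keep a single collection working both locally and in a monitor is to put the host in an environment variable; the values below are purely illustrative (the public hostname is an assumption, not something from the original post):
Request URL in the collection:   GET {{baseUrl}}/bookings
Environment for local runs:      baseUrl = http://127.0.0.1:5004
Environment for the monitor:     baseUrl = https://bookings-api.example.com   (must be reachable from the internet)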

The issue may be that you configured the environment variable and passed the correct value in the URL, but didn't select the correct environment while running the collection.
Select the environment configured for that collection, as shown in the attached screenshot.

I faced this problem and the issue was my DNS address. After changing the DNS server it was solved.

I had the same problem: there was a space between the IP and the :port (0.0.0.0 :1111) in my environment variable. I deleted the space (0.0.0.0:1111) and could connect.

I faced the same issue and it got solved by removing the env variable

What you should try: sign out and sign in again, and make sure the environment variables are not empty.
Also, try logging the environment variable with console.log; it can be helpful (see the sketch below).
Make sure to click "Persist All" for the environment variables.
The key for me was clicking "Persist All" for the environment variables.
Also read the GitHub issue thread; some of the comments there helped me resolve the issue.
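As a rough sketch of that logging idea (the variable name "baseUrl" is only an example, not something from the original post), a Postman test script can print and check the value:
console.log(pm.environment.get("baseUrl"));   // shows up in the Postman console
pm.test("baseUrl is set", function () {
    pm.expect(pm.environment.get("baseUrl")).to.not.be.undefined;
});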

Related

DNS_PROBE_FINISHED_NXDOMAIN for single website

I created this question earlier but was told that it is a DNS issue as opposed to an issue with HSTS. Regardless, here is what I need help troubleshooting:
Issue:
A single site (one that I own) is showing "server DNS address could not be found. DNS_PROBE_FINISHED_NXDOMAIN" when I try to connect to it via Chrome, Firefox, or Safari. I can, however, connect to it via Tor Browser, and I can verify that the address resolves correctly using MXToolbox. I am also not able to connect from two other computers and two other phones, nor over a different WiFi connection or a personal hotspot on my phone. curl and host on the command line also fail to get a response.
What I've tried:
As I said above, I've tried different internet connections and computers. I've also tried flushing my DNS cache and pointing to another DNS server.
Having said that, I am not sure how else to troubleshoot this. The only change I made to the web app was to add HSTS headers, hence the earlier posting. Please let me know what other information I can provide. Otherwise, here are some details about the site itself:
Other information about my stack:
Django web app
Gunicorn / WSGI server
Hosted on Heroku - Cedar-14 stack
DNS setup with AWS route53
domain name registered through AWS
EDIT:
Possibly related: https://serverfault.com/questions/606880/how-can-i-troubleshoot-a-route-53-hosted-zone
I had a similar issue and was not able to open Facebook; all other sites were working fine. Initially I thought Facebook had blocked me, as I had never faced this issue before. Later, when I searched Google, I found an article describing the DNS_PROBE_FINISHED_NXDOMAIN issue in Chrome.
I just changed my DNS server addresses to 8.8.8.8 (preferred) and 8.8.4.4 (alternate) and never faced the issue again.
Reference - https://www.mobipicker.com/dns_probe_finished_nxdomain/
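If you want to confirm whether the record itself resolves before changing anything, you can query a public resolver directly (www.example.com below is a placeholder for the affected site):
nslookup www.example.com 8.8.8.8
dig @8.8.8.8 www.example.com A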
So, from our discussion regarding the NS records: always make sure that the zone's local NS records match the parent NS records.
In your case there were two extra NS records associated with your domain, which was the reason your domains and subdomains were acting unhealthy. Once you deleted those records, the domains and subdomains were back to normal.
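A quick way to compare the two sets of records with dig (example.com stands in for your domain; a.gtld-servers.net is one of the .com parent servers):
dig +short NS example.com @a.gtld-servers.net    # NS records the parent zone delegates to
dig +short NS example.com                        # NS records your resolver returns for the zone
The two lists should match exactly; extra or stale entries on either side can produce exactly this kind of unhealthy behavior.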
You can also try opening the URL in an incognito/private window and using it there, or closing the window and reopening it, after which it will load OK.

Google: Permission denied to generate login hint for target domain NOT on localhost

I am trying to create a Google sign-in and getting the error:
Permission denied to generate login hint for target domain
Before you mark this as a duplicate: this is not the same as the question asked at "Google sign in website Error: Permission denied to generate login hint for target domain", because in that case the questioner was on localhost, whereas I am getting this error on the server.
Specifically, I have included the URL of the server in the Authorized JavaScript Origins, as in the following image:
and when I get the error, the request shows that the same URL was sent, as in the following image:
Is there something else I should be putting in my Restrictions page? Is there any way to figure out what is going on here? Is there a log at the developer console that can tell me what is happening?
Okay, I figured this out. I was using an IP address (as in "http://175.132.64.120") for the redirect URI, as this was a test site on the live server, and Google only accepts actual URLs (as in "http://mycompany.com" or "http://localhost") as redirect URIs.
Which, you know, THEY COULD HAVE SAID SOMEWHERE IN THE DOCUMENTATION, but whatever.
I know this is an old question, but it's the first result when you look for the problem via Google, so I'll share my solution with you guys.
When deploying a Google OAuth service on a private network, i.e. on an IP that can't be accessed via the Internet, you should use a magic DNS service like xip.io, which gives you a URL that your browser will resolve to your internal IP. You see, Google needs to be able to reach your authorized origin via your browser; that's why setting localhost works if you're serving it on your own computer, but it won't work when you're deploying outside the Internet, as in a VPN, an intranet, or through a tunnel.
So, the steps:
get your IP address, the one you're deploying at that is not a public domain; let's say it's 10.0.0.1 as an example.
add http://10.0.0.1.xip.io to your Authorized Javascript Origins on the Google Developer Console.
open your site by visiting http://10.0.0.1.xip.io
clear your cache for the site, if necessary.
Log in with Google, and voilà.
I got to this solution using this answer in another question.
If you are using http://127.0.0.1/projects/testplateform, change it to http://localhost/projects/testplateform, and it will work just fine.
If you are testing on your machine (locally), then don't use the IP address (i.e. http://127.0.0.1:8888) in the Client ID configuration; use localhost instead and it should work
Example: http://localhost:8888
To allow an IP address to be used as a valid JavaScript origin, first add an entry to your /etc/hosts file
10.0.0.1 mydevserver.com
and then add the domain mydevserver.com to the Authorized JavaScript Origins. If you are using a nonstandard port, specify it along with your domain in the Authorized JavaScript Origins.
Note: clear your cache and it will work.
Just ran across this same issue on an external test server, without a DNS entry yet. If you have permission on your local machine just edit your /etc/hosts file:
175.132.64.120 www.jimboweb.com
And use http://www.jimboweb.com as an authorized domain.
I have a server on a private network, IP 172.16.X.X.
The problem was solved by SSH-forwarding the app's port to a port on my localhost.
Now I am able to use the deployed app with Google OAuth by browsing to localhost.
ssh -N -L8081:localhost:8080 ${user}@${host}
I also added localhost:8081 to "Authorized redirect URIs" and "Authorized JavaScript origins" in console.developers.google.com:
google developers console
After battling with it for a few hours, I found that my config in the Google Cloud console was all correct and similar to the answers provided. Due to caching issues or something, I had to recreate an OAuth Client ID, and then it suddenly started working.
It's a pretty old issue, but I encountered it and there wasn't any helpful resource, so I am posting my solution.
For me the issue occurred when I hosted my web app locally, using Google Auth for logging in.
The URL I was trying to hit was :- http://127.0.0.1:8000/master
I just changed from IP to http://localhost:8000/master/
And it worked. I was able to log in to the website using Google Auth.
Hope this helps someone someday.
Install XAMPP and run the Apache server,
put your files (index and co.) in a folder in the XAMPP dir (c:\xampp\htdocs\yourfolder),
and type this in your browser's URL bar: http://localhost/yourfolder/index.html

Connecting a DD-WRT router to a Squid proxy running on AWS

I am trying to get a Linksys router with the latest DD-WRT (v24-sp2) in my house connected, via Comcast, to an external Squid (v3) proxy that I am running on AWS. When I connect over the WiFi to the DD-WRT router, it connects to the Squid proxy, but I get the nasty message (abbreviated here to show relevant part):
While trying to retrieve the URL: /
Note the slash. I get this when I go to a root domain, like www.cnn.com. If I go to a page under a site, like www.cnn.com/today (fake link used for example only), it returns an error like:
While trying to retrieve the URL: /today
Again, notice the "/today", as if the domain has been removed and only the string to the right of the domain name is being requested.
For some background, I have installed Squid with as generic a configuration as possible, and have done so on two servers with the same results. I get this same error no matter what domain I go to. Also, if I switch the network settings on my Mac to use this Squid proxy, it works fine. Only connections from the DD-WRT give this error.
I have tried the instructions on the DD-WRT site with no luck. Others seem to have gotten this working well, so I assume I am making a configuration mistake.
Any clues for me? TIA...
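For what it's worth, that symptom (Squid reporting only a bare path like "/" or "/today") usually means the requests are arriving in origin form, i.e. the clients are not explicitly configured to use the proxy and the router is simply redirecting the traffic. A minimal squid.conf sketch of the two modes, assuming Squid 3.x and purely illustrative port numbers:
http_port 3128              # explicit forward proxy: clients send absolute URLs
http_port 3129 intercept    # for traffic redirected by a router/firewall; without a mode like this, redirected requests show up as bare paths
Note that interception relies on recovering the original destination address, which generally does not survive being forwarded across the internet to AWS, so configuring the DD-WRT clients to use the proxy explicitly is the more common setup.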

cfhttp dns resolution

I'm trying to get CFHTTP to talk to a domain that I have created for testing purposes on my test server. The address of the domain is "mydomain.example.com". Every time I try to connect using CFHTTP I get an error stating:
Your requested host "mydomain.example.com" could not be resolved by DNS.
I have already added the entry to the Windows hosts file:
127.0.0.1 mydomain.example.com
I've also made sure that java.net.InetAddress can resolve the domain by doing the following in a ColdFusion page:
<cfset loc.javaInet = createObject("java","java.net.InetAddress")>
<cfset loc.dnsLookup = loc.javaInet.getByName("mydomain.example.com")>
for which I get back
mydomain.example.com/127.0.0.1
I've even tried starting and stopping the ColdFusion service and changing the value of networkaddress.cache.ttl in runtime\jre\lib\security\java.security to 0.
I'm at a loss as to why everything seems to resolve at the JRE level but not at the CFHTTP level. Any ideas?
Why is it that after I post a question, I figure it out? Go fig.
The issue was that for some reason I still had an old proxy configuration setup on my java.args line in my runtime\bin\jvm.config.
After removing the old configuration setting and restarting the ColdFusion service, I'm back in business.
For those who want to know, you can set the proxy information for CFHTTP to use by adding the following arguments to your java.args line in the jvm.config file:
-Dhttp.proxyHost=<ip address>
-Dhttp.proxyPort=<portnumber>
-Dhttp.proxyUser=<username>
-Dhttp.proxyPassword=<password>
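Put together, the java.args line in runtime\bin\jvm.config ends up looking something like this (the host, port, credentials, and other flags below are placeholders; keep whatever arguments you already have):
java.args=-server -Xmx512m -Dhttp.proxyHost=10.0.0.5 -Dhttp.proxyPort=3128 -Dhttp.proxyUser=proxyuser -Dhttp.proxyPassword=secret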
Your problem may have to do with the way DNS look-ups are cached by ColdFusion: CFHTTP keeps a copy of the DNS look-up indefinitely. You could try flushing this by restarting ColdFusion.
Also, Windows won't easily pick up changes to your hosts file; the easy way is to reboot the Windows machine.
I agree, the problem is a DNS one, and using a proxy just masks the problem. Try setting your DNS resolver on Windows to something stable and public, like 8.8.8.8 which is a Google DNS server.

Facebook Connect not setting cookies

I'm trying to implement Facebook Connect on a website with .NET MVC using C#.
I've followed the instructions here: http://wiki.developers.facebook.com/index.php/Trying_Out_Facebook_Connect step by step. I can make the login work, in that when I log in through the site I'm also logged into Facebook.
In order to work with this on the server, I think I need to access the cookies Facebook is supposed to set, like:
APIKEY_user
APIKEY_session_key
...
as mentioned here http://wiki.developers.facebook.com/index.php/Verifying_The_Signature.
The thing is, I'm not getting any of these cookies. I've googled and it seems like I'm the only person with this problem. Any ideas as to what I could be doing wrong? Has this happened to anyone else?
The issue was that I was developing locally using localhost.
I resolved the problem by changing the settings for the application to point to a certain web address instead of localhost, and changing my hosts file to point that same web address to 127.0.0.1.
From the UI/client-side perspective, always ensure you have the correct path to the xd_receiver file in your FB.init() call (see the sketch below).
Firecookie is very useful for seeing which cookies are and aren't being set.
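For reference, the legacy Facebook Connect initialization from that era looked roughly like the sketch below; the API key and the xd_receiver path are placeholders, and the receiver file must be hosted on your own domain:
<script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script>
<script type="text/javascript">
    // placeholder API key; the second argument is the path to your cross-domain receiver file
    FB.init("YOUR_API_KEY", "/xd_receiver.htm");
</script>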