I have a hosted zone created in Route 53 and updated the NS records under the name server settings of the purchased domain.
Unfortunately, the DNS check does not return the new NS records; instead the domain still resolves to the old, previously existing records.
I waited more than 72 hours and I still get "This site can't be reached", failing with the error DNS_PROBE_FINISHED_NXDOMAIN in the browser.
Below is a screenshot from the DNS check provided by https://mxtoolbox.com/.
It shows that the old NS records (the first 4 rows, with a TTL of 48 hours) are listed at the parent but not locally, whereas the newly updated records (the last 4 rows) are listed locally but not at the parent.
Ping to the domain fails with Unknown host.
What are the next steps?
When you update the name servers for a domain, remove the old name server records.
Your TTL is set to 48 hours. That means any recursive resolver, such as dns.google, will not refresh for 48 hours after the last update. Resolvers that have not cached your resource records might update immediately, but they might also get stale data from an upstream resolver. Wait a few hours so that you do not force a new cache load with old data, and then check with an Internet tool such as dnschecker.org; change the selection box from A to NS to see the name server changes.
In general, expect 48 to 72 hours for authoritative name server changes to propagate around the world.
Google Public DNS supports flushing its cache. Wait an hour or two and then request that Google update its DNS cache with its Flush Cache tool.
Cloudflare also supports purging its cache with its Purge Cache tool.
Google and Cloudflare are very popular DNS resolvers.
Also, do not forget to flush your local computer's DNS cache:
Windows: ipconfig /flushdns
Linux: sudo service network-manager restart (Ubuntu) or sudo /etc/init.d/nscd restart
macOS: sudo dscacheutil -flushcache followed by sudo killall -HUP mDNSResponder
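You can also check the delegation directly and bypass resolver caches entirely. A small sketch using dig (yourdomain.com and the awsdns host are placeholders for your own domain and one of your Route 53 name servers):

dig NS yourdomain.com +trace                         # follow the delegation from the root down to the parent
dig NS yourdomain.com @ns-XXXX.awsdns-XX.org +short  # ask one of your Route 53 name servers directly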
I am using the following APIs for making an HTTP request.
QNetworkRequest Request (QUrl (QString (HTTP_PRF PING_URL)));
m_pNetworkReply = m_pNetAccesMgr->get (Request);
My resolv.conf has the following entries.
nameserver 8.8.8.8
nameserver 10.10.182.225
It seems that QNetworkAccessManager's get API uses the nameservers sequentially to resolve the given domain name, i.e. it tries 8.8.8.8 first, and if that fails it tries 10.10.182.225. Is there some way to make Qt do this name resolution in parallel?
I am no network expert, but it looks like a problem that would be better solved system-wide than by tweaking a single program.
According to Adjusting how long Linux takes to fail over to backup DNS server listed in resolv.conf, you can add this line to resolv.conf:
options timeout:1 attempts:1
This sets the timeout to 1 second and switches to the next DNS server after the first failed attempt.
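For reference, the resulting /etc/resolv.conf would look roughly like this (a sketch based on the entries above):

# Fail over to the next nameserver after a 1-second timeout and a single attempt.
options timeout:1 attempts:1
nameserver 8.8.8.8
nameserver 10.10.182.225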
I transferred my domain (simplifybits.com) using Route 53 and it transferred successfully.
However, my domain is not resolving anymore :(
This is what my setup looks like:
There are two buckets in S3
simplifybits.com
www.simplifybits.com
Route 53 configuration
simplifybits.com - A
s3-website.us-east-2.amazonaws.com.
simplifybits.com - NS
ns-1069.awsdns-05.org.
ns-31.awsdns-03.com.
ns-1556.awsdns-02.co.uk.
ns-535.awsdns-02.net.
simplifybits.com - SOA
ns-1069.awsdns-05.org. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
www.simplifybits.com - A
d3v4utl52t4eyk.cloudfront.net.
I had this same problem. Right now your domains still have Google as the name servers:
Tech Email: tech@simplifybits.com.whoisprivacyservice.org
Name Server: ns-cloud-d1.googledomains.com
Name Server: ns-cloud-d2.googledomains.com
Name Server: ns-cloud-d3.googledomains.com
Name Server: ns-cloud-d4.googledomains.com
It isn't obvious, but go to the "Hosted zones" tab, select your domain, and copy the "NS" records. Now go to "Registered domains" and select your domain. On the right you will likely see that the "Name servers" still list Google. Click on "Add or edit name servers" and enter the name servers you copied; the popup will keep adding lines for you.
It took me a while to get this right because, like you, I thought having the NS records in the hosted zone was enough, but it isn't.
As @steve-harris points out, you will still have to have S3 enabled to serve static content, but you'll want to get DNS working first.
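If you prefer the command line, the same change can be made with the AWS CLI; a rough sketch (assumes the CLI is configured, and Z123EXAMPLE is a placeholder for your hosted zone ID):

# List the NS records of the hosted zone so you can copy the four name servers.
aws route53 list-resource-record-sets --hosted-zone-id Z123EXAMPLE --query "ResourceRecordSets[?Type=='NS']"

# Point the registered domain at those name servers (the Route 53 Domains API lives in us-east-1).
aws route53domains update-domain-nameservers --region us-east-1 --domain-name simplifybits.com \
    --nameservers Name=ns-1069.awsdns-05.org Name=ns-31.awsdns-03.com Name=ns-1556.awsdns-02.co.uk Name=ns-535.awsdns-02.net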
I have configured HAProxy on a Red Hat server. The server is up and running without any issue, but I cannot access it through my browser. I have opened the firewall port for the bind address.
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 2080/haproxy
My haproxy.cfg is as below:
defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    option  redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend http-in
    bind *:80
    default_backend servers

backend servers
    option httpchk OPTIONS /
    option forwardfor
    stats enable
    stats refresh 10s
    stats hide-version
    stats scope .
    stats uri /admin?stats
    stats realm Haproxy\ Statistics
    stats auth admin:pass
    cookie JSESSIONID prefix
    server adempiere1 192.168.1.216:8085 cookie JSESSIONID_SERVER_1 check inter 5000
    server adempiere2 192.168.1.25:8085 cookie JSESSIONID_SERVER_2 check inter 5000
Any suggestions?
To view HAProxy stats in your browser, put these lines in your configuration file.
You will be able to see the HAProxy stats page at http://Hostname:9000
listen stats :9000
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option forwardfor
    retries 1          # number of times it will try to know if a server is up or down
    option redispatch  # if one server is down, requests are redispatched to another server which is up
    maxconn 2000
    contimeout 5       # you can increase these numbers according to your configuration
    clitimeout 50      # these are set to small numbers just for testing,
    srvtimeout 50      # so you can view the actual result right away

listen http-in IP_ADDRESS_OF_LOAD_BALANCER:PORT    # example: 192.168.1.1:8080
    mode http
    balance roundrobin
    maxconn 10000
    server adempiere1 192.168.1.216:8085 cookie JSESSIONID_SERVER_1 check inter 5000
    server adempiere2 192.168.1.25:8085 cookie JSESSIONID_SERVER_2 check inter 5000

# Try accessing, from your browser, the IP address and port mentioned in the listen configuration above,
# or try from the command line/terminal: curl http://192.168.1.1:8080
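Before reloading, it can also help to validate the file from the shell; a quick sketch (the config path is an assumption):

haproxy -c -f /etc/haproxy/haproxy.cfg    # check the configuration for syntax errors
sudo systemctl reload haproxy             # then reload (or: sudo service haproxy reload)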
I'm trying to put a set of EC2 instances behind a couple of Varnish servers. Our Varnish configuration very seldom changes (once or twice a year) but we are always adding/removing/replacing web backends for all kinds of reasons (updates, problems, load spikes). This creates problems because we always have to update our Varnish configuration, which has led to mistakes and heartbreak.
What I would like to do is manage the set of backend servers simply by adding or removing them from an Elastic Load Balancer. I've tried specifying the ELB endpoint as a backend, but I get this error:
Message from VCC-compiler:
Backend host "XXXXXXXXXXX-123456789.us-east-1.elb.amazonaws.com": resolves to multiple IPv4 addresses.
Only one address is allowed.
Please specify which exact address you want to use, we found these:
123.123.123.1
63.123.23.2
31.13.67.3
('input' Line 2 Pos 17)
.host = "XXXXXXXXXXX-123456789.us-east-1.elb.amazonaws.com";
The only consistent public interface ELB provides is its DNS name. The set of IP addresses this DNS name resolves to changes over time and with load.
In this case I would rather NOT specify one exact address - I would like to round-robin between whatever comes back from the DNS. Is this possible? Or could someone suggest another solution that would accomplish the same thing?
Thanks,
Sam
You could use an NGINX web server to deal with the CNAME resolution problem:
User -> Varnish -> NGINX -> ELB -> EC2 instances
(Varnish and NGINX are the cache section; the ELB and EC2 instances are the application section.)
You have a configuration example in this post: http://blog.domenech.org/2013/09/using-varnish-proxy-cache-with-amazon-web-services-elastic-load-balancer-elb.html
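Roughly, the NGINX side looks like this (a hedged sketch, not the exact configuration from that post; the listen port, resolver IP, and ELB hostname are placeholders). Using a variable in proxy_pass together with a resolver makes NGINX re-resolve the ELB name at runtime instead of only once at startup:

server {
    listen 127.0.0.1:8080;
    # Re-resolve the ELB name periodically instead of caching it forever.
    resolver 10.0.0.2 valid=30s;
    set $elb_backend "XXXXXXXXXXX-123456789.us-east-1.elb.amazonaws.com";
    location / {
        proxy_pass http://$elb_backend;
        proxy_set_header Host $host;
    }
}

Varnish then uses 127.0.0.1:8080 as its single, stable backend.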
Juan
I wouldn't recommend putting an ELB behind Varnish.
The problem lies in the fact that Varnish resolves the name assigned to the ELB and caches the IP addresses until the VCL gets reloaded. Because of the dynamic nature of the ELB, the IPs linked to the CNAME can change at any time, resulting in Varnish routing traffic to an IP which is not linked to the correct ELB anymore.
This is an interesting article you might like to read.
Yes, you can.
In your default.vcl, put:
include "/etc/varnish/backends.vcl";
and set the backend to:
set req.backend = default_director;
Then run this script to create backends.vcl:
#!/bin/bash
# Regenerate /etc/varnish/backends.vcl from the current set of IPs behind the ELB.

FILE_CURRENT_IPS='/tmp/elb_current_ips'
FILE_OLD_IPS='/tmp/elb_old_ips'
TMP_BACKEND_CONFIG='/tmp/tmp_backends.vcl'
BACKEND_CONFIG='/etc/varnish/backends.vcl'
ELB='XXXXXXXXXXXXXX.us-east-1.elb.amazonaws.com'

# Resolve the ELB name and sort the addresses so the diff below is stable.
IPS=($(dig +short $ELB | sort))

if [ ! -f $FILE_OLD_IPS ]; then
    touch $FILE_OLD_IPS
fi

echo ${IPS[@]} > $FILE_CURRENT_IPS
DIFF=`diff $FILE_CURRENT_IPS $FILE_OLD_IPS | wc -l`
cat /dev/null > $TMP_BACKEND_CONFIG

# Only rewrite the backend list if the set of IPs has changed.
if [ $DIFF -gt 0 ]; then
    # One backend definition per ELB address.
    COUNT=0
    for i in ${IPS[@]}; do
        let COUNT++
        IP=$i
        cat <<EOF >> $TMP_BACKEND_CONFIG
backend app_$COUNT {
    .host = "$IP";
    .port = "80";
    .connect_timeout = 10s;
    .first_byte_timeout = 35s;
    .between_bytes_timeout = 5s;
}
EOF
    done

    # Round-robin director over all of the backends defined above.
    COUNT=0
    echo 'director default_director round-robin {' >> $TMP_BACKEND_CONFIG
    for i in ${IPS[@]}; do
        let COUNT++
        cat <<EOF >> $TMP_BACKEND_CONFIG
    { .backend = app_$COUNT; }
EOF
    done
    echo '}' >> $TMP_BACKEND_CONFIG

    echo 'NEW BACKENDS'
    mv -f $TMP_BACKEND_CONFIG $BACKEND_CONFIG
fi

mv $FILE_CURRENT_IPS $FILE_OLD_IPS
I wrote this script to have a way to auto-update the VCL whenever an instance comes up or goes down.
It requires that your .vcl includes backends.vcl.
This script is just one part of the solution; the tasks should be:
1. Get the new server names and IPs (auto scaling); you can use AWS API commands to do that, also via bash.
2. Update the VCL (this script).
3. Reload Varnish (see the sketch below).
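A minimal sketch for step 3, assuming the default VCL path at /etc/varnish/default.vcl and that varnishadm can reach the running Varnish instance:

# Load the regenerated VCL under a unique name and switch traffic to it.
NOW=$(date +%s)
varnishadm vcl.load reload_$NOW /etc/varnish/default.vcl
varnishadm vcl.use reload_$NOW

Running the backend script plus these two commands from cron keeps the director in sync without restarting Varnish.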
The script is here
http://felipeferreira.net/?p=1358
Other people did it in different ways:
http://blog.cloudreach.co.uk/2013/01/varnish-and-autoscaling-love-story.html
You wouldn't get to 10K requests if Varnish had to resolve an IP for each one. Varnish resolves IPs at startup and does not refresh them unless it is restarted or reloaded. Indeed, Varnish refuses to start if it finds two IPs for a DNS name in a backend definition, which is what you get from a multi-AZ ELB.
So we solved a similar issue by placing Varnish in front of nginx. Nginx can define an ELB as a backend, so the Varnish backend is a local nginx and the nginx backend is the ELB.
But I don't feel comfy with this solution.
You could create the ELB in your private VPC so that it would have a local IP. This way you don't have to use any kind of DNS CNAMEs or anything else that Varnish does not support as easily.
Using an internal ELB does not help with the problem, because it usually has two internal IPs!
Backend host "internal-XXX.us-east-1.elb.amazonaws.com": resolves to multiple IPv4 addresses.
Only one address is allowed.
Please specify which exact address you want to use, we found these:
10.30.10.134
10.30.10.46
('input' Line 13 Pos 12)
What I am not sure about is whether these IPs will always remain the same or whether they can change. Anyone?
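One way to check empirically is to keep resolving the name and watch whether the address set changes over time (a small sketch; the hostname is the placeholder from the error above):

watch -n 60 dig +short internal-XXX.us-east-1.elb.amazonaws.com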
In my previous answer (more than three years ago) I hadn't solved this issue; my [nginx -> Varnish -> nginx] -> ELB solution worked only until the ELB changed its IPs.
But for some time now we have been using the same setup, but with nginx compiled with the jdomain plugin.
So the idea is to place an nginx on the same host as Varnish and configure the upstream there like this:
resolver 10.0.0.2; ## IP for the aws resolver on the subnet
upstream backend {
    jdomain internal-elb-dns-name port=80;
}
That upstream will automatically update its IPs if the ELB changes its addresses.
It might not be a solution using only Varnish, but it works as expected.
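For completeness, a hedged sketch of the nginx server block that consumes that upstream and that Varnish then points at (the listen port is an assumption):

server {
    listen 127.0.0.1:8080;
    location / {
        # 'backend' is the jdomain upstream defined above.
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
}

In the Varnish backend definition, .host is then simply "127.0.0.1" and .port "8080", which never change.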