I am using the following APIs to make an HTTP request.
QNetworkRequest Request (QUrl (QString (HTTP_PRF PING_URL)));
m_pNetworkReply = m_pNetAccesMgr->get (Request);
My resolv.conf has the following entries.
nameserver 8.8.8.8
nameserver 10.10.182.225
It seems that QNetworkAccessManager's get API uses the nameservers sequentially to resolve the given domain name, i.e. it tries 8.8.8.8 first, and if that fails it tries 10.10.182.225. Is there some way to make Qt do this name resolution in parallel?
I am no network expert, but this looks like a problem that would be better solved system-wide than by tweaking a single program.
According to "Adjusting how long Linux takes to fail over to backup DNS server listed in resolv.conf", you can add this line to resolv.conf:
options timeout:1 attempts:1
This sets the timeout to 1 second and switches DNS servers after the first failed attempt.
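Putting it together, the resulting resolv.conf would look like the sketch below (the nameserver entries are the ones from the question; `options rotate` is an additional glibc resolver option that spreads queries across the listed servers instead of always trying the first one first — note this is still sequential per query, not truly parallel):

```
nameserver 8.8.8.8
nameserver 10.10.182.225
# fail over after a 1-second timeout and a single attempt per server
options timeout:1 attempts:1
# optional: round-robin queries across the listed nameservers
options rotate
```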
I have a hosted zone created in Route53 and updated the NS records under the name servers of the purchased domain.
Unfortunately, the DNS check does not return or point to the new NS records; instead it resolves to the old/previously existing records.
I waited more than 72 hours and I still get "This site can't be reached", failing with error DNS_PROBE_FINISHED_NXDOMAIN in the browser.
Below is a screenshot from the DNS check provided by https://mxtoolbox.com/,
It shows that the old NS records (the first 4 rows, with TTL set to 48 hours) are present in the parent and not in the local, whereas the newly updated records (the last 4 rows) are present in the parent and not in the local.
Ping to the domain fails with Unknown host.
What are the next steps?
When you update the name servers for a domain, remove the old name server records.
Your TTL is set to 48 hours. That means any recursive resolver, such as dns.google, will not refresh for 48 hours after the last update. Resolvers that have not cached your resource records might update immediately, but they might also get stale data from an upstream resolver. Wait a few hours so that you do not force a new cache load with old data, and then check with an Internet tool such as dnschecker.org (change the selection box from A to NS to see the name server changes).
In general, expect authoritative name server changes to take 48 to 72 hours to propagate around the world.
Google Public DNS supports flushing its cache: wait an hour or two, then use its Flush Cache tool to request that Google refresh its records for your domain.
Cloudflare offers a similar Purge Cache tool.
Google and Cloudflare are very popular DNS resolvers.
Also, do not forget to flush your local computer's DNS cache:
Windows: ipconfig /flushdns
Linux: sudo service network-manager restart (Ubuntu) or sudo /etc/init.d/nscd restart
macOS: sudo dscacheutil -flushcache followed by sudo killall -HUP mDNSResponder
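Once caches are flushed, you can script the propagation check yourself by comparing the NS sets returned by two public resolvers; identical sorted lists mean both resolvers agree. This is a sketch: the sample data below stands in for the live output of `dig NS yourdomain.com @8.8.8.8 +short` and `dig NS yourdomain.com @1.1.1.1 +short`.

```shell
# Sample NS answers from two resolvers (placeholders for real dig output)
ns_google="ns-1.awsdns-01.org
ns-2.awsdns-02.com"
ns_cloudflare="ns-2.awsdns-02.com
ns-1.awsdns-01.org"

# Sort both lists so record order does not matter, then compare
a=$(printf '%s\n' "$ns_google" | sort)
b=$(printf '%s\n' "$ns_cloudflare" | sort)

if [ "$a" = "$b" ]; then
  echo "NS records consistent"
else
  echo "NS records differ"
fi
```

With real dig output substituted in, a persistent mismatch after the TTL window points at the parent zone still serving the old delegation.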
Context: I have a simple Next.js and KeystoneJS app. I've made duplicate deployments on 2 AWS EC2 instances. Each instance also has an Nginx reverse proxy routing port 80 to 3000 (my app's port). The 2 instances are also behind an application load balancer.
Problem: When routing to my default url, my application attempts to fetch the buildManifest for the nextjs application. This, however, 404s most of the time.
My Guess: Because the requests are coming in so close together, my load balancer is routing the second request for the buildManifest to the other instance. Since I did a separate yarn build on that instance, the build ids are different, and therefore it is not fetching the correct build. This request 404s and my site is broken.
My Question: Is there a way to ensure that all requests made from instance A get routed to instance A? Or is there a better way to do my builds on each instance such that their ids are the same? Is this a use case for Docker?
I have had a similar problem with our load balancer, and specifying a custom build id seems to have fixed it. Here's the dedicated issue, and this is how my next.config.js looks now:
const execSync = require("child_process").execSync;

// Use the latest git commit hash as the build id, so every instance
// built from the same commit serves the same asset paths.
const lastCommitCommand = "git rev-parse HEAD";

module.exports = {
  async generateBuildId() {
    return execSync(lastCommitCommand).toString().trim();
  },
};
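As a sanity check after deploying, you can compare what each instance actually built. This is a sketch under the assumption that `next build` records the generated id in `.next/BUILD_ID` in the project directory; the ssh targets shown in the comments are placeholders.

```shell
# Fetch the build id from each instance and compare them.
# In a real deployment these would come from the instances, e.g.:
#   id_a=$(ssh instance-a 'cat app/.next/BUILD_ID')
#   id_b=$(ssh instance-b 'cat app/.next/BUILD_ID')
id_a="0b8a2f1c"   # placeholder value
id_b="0b8a2f1c"   # placeholder value

if [ "$id_a" = "$id_b" ]; then
  echo "build ids match"
else
  echo "build ids differ: $id_a vs $id_b" >&2
fi
```

If the ids differ, one instance was built from a different commit (or outside the commit-based generateBuildId scheme) and will 404 the other's manifests.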
If you are using a custom build directory in your next.config.js file (e.g. a line like distDir: "build"), remove it and use the default build directory.
Credits: https://github.com/serverless-nextjs/serverless-next.js/issues/467#issuecomment-726227243
I have an issue with correctly failing over to the mirror database. When I am connected to the principal database (dbx) (mirroring is enabled and set up) and I fail over the principal database (shutting down SQL Server to simulate a crash), I can no longer send queries without a failure. This is expected since the previous connection is now lost.
I would like to simply close out my connections and handles and re-establish a new connection, using the same connection string, and re-connect to the mirror database (dby, which is now the principal database).
My connection string is as follows:
Driver={SQL Native Client};Server=dbx;Failover_Partner=dby;Database=db;Uid=uid;Pwd=pwd;Network=DBMSSOCN;
From doing research, I have learned that the Failover_Partner parameter in the connection is almost worthless. It is only used when the principal server is down and a new connection is being made for the first time. For some reason, the Failover_Partner is overwritten internally when a connection is established to the principal and the mirroring_partner_instance found in the sys.database_mirroring table is used instead. So when I specify the Failover_Partner to be dby, after I establish a connection, I query for what it thinks the failover partner is, and it returns the INSTANCE name of the failover partner and not the DNS name (dby).
Here is the issue, I cannot use the INSTANCE name as the failover partner. I am required to use the DNS name as the failover partner.
So my question(s) is/are this:
Is there a way to modify the sys.database_mirroring entry and change the mirroring_partner_instance?
Where does this field get its value from?
Is there any other way to force SQL Server to use the DNS name and NOT the INSTANCE name?
I found the answer to this question in case anyone has the same or a similar issue.
I had to modify the @@SERVERNAME property in SQL Server. It was internally set to the machine's WIN-... instance name, and I was able to drop it and add the server name I wanted with the following commands:
-- Get the current server name (returns WIN_NAME)
SELECT @@SERVERNAME
-- Drop the old name
EXEC sp_dropserver 'WIN_NAME'
GO
-- Register the DNS name instead
EXEC sp_addserver 'SERVER_NAME', local
GO
Restart SQL Server to see the changes take effect.
I set up Countly analytics on the free-tier AWS EC2, but stupidly did not set up an Elastic IP with it. Now the traffic is so great that I can't even log into the analytics, as the CPU is constantly running at 100%.
I am in the process of issuing app updates to change the analytics address to a private domain that forwards to the EC2 instance, so I can change the forwarding in future.
In the meantime, is it possible for me to set up a 2nd instance and forward all the traffic from the current one to the new one?
I found this: http://lastzactionhero.wordpress.com/2012/10/26/remote-port-forwarding-from-ec2/ Will this work from one EC2 instance to another?
Thanks
EDIT ---
Countly log
/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/server.js:529
throw err;
^ ReferenceError: liveApi is not defined
at processUserSession (/home/ubuntu/countlyinstall/countly/api/parts/data/usage.js:203:17)
at /home/ubuntu/countlyinstall/countly/api/parts/data/usage.js:32:13
at /home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/collection.js:1010:5
at Cursor.nextObject (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/cursor.js:653:5)
at commandHandler (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/cursor.js:635:14)
at null.<anonymous> (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/db.js:1709:18)
at g (events.js:175:14)
at EventEmitter.emit (events.js:106:17)
at Server.Base._callHandler (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/base.js:130:25)
at /home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/server.js:522:20
You can follow the steps described in the blog post to do the port forwarding. Just make sure not to forward it to localhost :)
Also, about the 100% CPU: it is probably caused by MongoDB. Did you have a chance to check the process? In case it is mongod, issue the mongotop command to see the most time-consuming collection accesses. We can go from there.
Yes, it is possible. I use nginx with a Node.js app. I wanted to redirect traffic from one instance to another. The instance was in a different region and not configured in the same VPC, as mentioned in the AWS documentation.
Step 1: Go to /etc/nginx/sites-enabled and open the default configuration file. Your configuration might be in a different file.
Step 2: Change proxy_pass to your chosen IP/domain/sub-domain:
server {
    listen 80;
    server_name your_domain.com;

    location / {
        ...
        proxy_pass your_ip; # You can put a domain or sub-domain with protocol (http/https)
    }
}
Step 3: Restart nginx:
sudo systemctl restart nginx
This works for external instances as well as instances in different VPCs.
I'm trying to put a set of EC2 instances behind a couple of Varnish servers. Our Varnish configuration very seldom changes (once or twice a year) but we are always adding/removing/replacing web backends for all kinds of reasons (updates, problems, load spikes). This creates problems because we always have to update our Varnish configuration, which has led to mistakes and heartbreak.
What I would like to do is manage the set of backend servers simply by adding or removing them from an Elastic Load Balancer. I've tried specifying the ELB endpoint as a backend, but I get this error:
Message from VCC-compiler:
Backend host "XXXXXXXXXXX-123456789.us-east-1.elb.amazonaws.com": resolves to multiple IPv4 addresses.
Only one address is allowed.
Please specify which exact address you want to use, we found these:
123.123.123.1
63.123.23.2
31.13.67.3
('input' Line 2 Pos 17)
.host = "XXXXXXXXXXX-123456789.us-east-1.elb.amazonaws.com";
The only consistent public interface ELB provides is its DNS name. The set of IP addresses this DNS name resolves to changes over time and with load.
In this case I would rather NOT specify one exact address - I would like to round-robin between whatever comes back from the DNS. Is this possible? Or could someone suggest another solution that would accomplish the same thing?
Thanks,
Sam
You could use an NGINX web server to deal with the CNAME resolution problem:
User -> Varnish -> NGINX -> ELB -> EC2 Instances
        (Cache Section)    (Application Section)
You have a configuration example in this post: http://blog.domenech.org/2013/09/using-varnish-proxy-cache-with-amazon-web-services-elastic-load-balancer-elb.html
Juan
I wouldn't recommend putting an ELB behind Varnish.
The problem lies in the fact that Varnish is resolving the name assigned to the ELB, and it's caching the IP addresses until the VCL gets reloaded. Because of the dynamic nature of the ELB, the IPs linked to the CNAME can change at any time, resulting in Varnish routing traffic to an IP which is not linked to the correct ELB anymore.
This is an interesting article you might like to read.
Yes, you can.
In your default.vcl, put:
include "/etc/varnish/backends.vcl";
and set the backend to:
set req.backend = default_director;
Then run this script to create backends.vcl:
#!/bin/bash
# Regenerate /etc/varnish/backends.vcl whenever the set of IPs behind
# the ELB's DNS name changes.

FILE_CURRENT_IPS='/tmp/elb_current_ips'
FILE_OLD_IPS='/tmp/elb_old_ips'
TMP_BACKEND_CONFIG='/tmp/tmp_backends.vcl'
BACKEND_CONFIG='/etc/varnish/backends.vcl'
ELB='XXXXXXXXXXXXXX.us-east-1.elb.amazonaws.com'

IPS=($(dig +short $ELB | sort))

if [ ! -f $FILE_OLD_IPS ]; then
    touch $FILE_OLD_IPS
fi

echo ${IPS[@]} > $FILE_CURRENT_IPS

DIFF=`diff $FILE_CURRENT_IPS $FILE_OLD_IPS | wc -l`

cat /dev/null > $TMP_BACKEND_CONFIG

if [ $DIFF -gt 0 ]; then
    # One backend definition per ELB IP
    COUNT=0
    for i in ${IPS[@]}; do
        let COUNT++
        IP=$i
        cat <<EOF >> $TMP_BACKEND_CONFIG
backend app_$COUNT {
    .host = "$IP";
    .port = "80";
    .connect_timeout = 10s;
    .first_byte_timeout = 35s;
    .between_bytes_timeout = 5s;
}
EOF
    done

    # Round-robin director over all of the backends above
    COUNT=0
    echo 'director default_director round-robin {' >> $TMP_BACKEND_CONFIG
    for i in ${IPS[@]}; do
        let COUNT++
        cat <<EOF >> $TMP_BACKEND_CONFIG
    { .backend = app_$COUNT; }
EOF
    done
    echo '}' >> $TMP_BACKEND_CONFIG

    echo 'NEW BACKENDS'
    mv -f $TMP_BACKEND_CONFIG $BACKEND_CONFIG
fi

mv $FILE_CURRENT_IPS $FILE_OLD_IPS
I wrote this script to have a way to auto-update the VCL once a new instance comes up or goes down. It requires that the .vcl has an include for backends.vcl.
This script is just a part of the solution; the tasks should be:
1. Get the new server name and IP (Auto Scaling); you can use the AWS API commands to do that, also via bash
2. Update the VCL (this script)
3. Reload Varnish
The script is here
http://felipeferreira.net/?p=1358
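To tie those three tasks together, the script can run from cron and Varnish can be reloaded only when it reports a change. This is a sketch: the script path and the reload command are assumptions, not part of the original answer, and it relies on the script printing 'NEW BACKENDS' when the IP set changed.

```
# /etc/cron.d/varnish-elb-backends (sketch)
# Every minute: refresh backends.vcl; reload Varnish only on change
* * * * * root /usr/local/bin/update_backends.sh | grep -q 'NEW BACKENDS' && service varnish reload
```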
Other people did it in different ways:
http://blog.cloudreach.co.uk/2013/01/varnish-and-autoscaling-love-story.html
You wouldn't get to 10K requests if Varnish had to resolve an IP for each one. Varnish resolves IPs at startup and does not refresh them unless it is restarted or reloaded. Indeed, Varnish refuses to start if it finds two IPs for a DNS name in a backend definition, like the IPs returned for multi-AZ ELBs.
So we solved a similar issue by placing Varnish in front of nginx. nginx can define an ELB as a backend, so the Varnish backend is a local nginx, and the nginx backend is the ELB.
But I don't feel comfortable with this solution.
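For the nginx hop in that setup, one way to avoid nginx itself caching a stale ELB address is to force runtime DNS resolution: when `proxy_pass` is given a variable instead of a literal hostname, nginx resolves the name per request through the configured `resolver` instead of resolving it once at startup. A sketch, where the resolver IP and ELB name are placeholders:

```
resolver 10.0.0.2 valid=30s;   # VPC DNS resolver for your subnet (assumption)

server {
    listen 8080;

    location / {
        # Using a variable forces nginx to re-resolve the ELB name at runtime
        set $elb_backend "internal-XXX.us-east-1.elb.amazonaws.com";
        proxy_pass http://$elb_backend;
    }
}
```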
You could create the ELB inside your VPC so that it has local IPs. That way you don't have to deal with DNS CNAMEs or anything else that Varnish does not support as easily.
Using an internal ELB does not help the problem, because it usually has two internal IPs:
Backend host "internal-XXX.us-east-1.elb.amazonaws.com": resolves to multiple IPv4 addresses.
Only one address is allowed.
Please specify which exact address you want to use, we found these:
10.30.10.134
10.30.10.46
('input' Line 13 Pos 12)
What I am not sure about is whether these IPs will always remain the same or whether they can change. Anyone?
In my previous answer (more than three years ago) I hadn't solved this issue; my [nginx -> varnish -> nginx] -> ELB solution worked until the ELB changed IPs.
But for some time now we have been using the same setup, with nginx compiled with the jdomain plugin.
The idea is to place an nginx on the same host as Varnish and configure the upstream like this:
resolver 10.0.0.2; ## IP of the AWS resolver on the subnet

upstream backend {
    jdomain internal-elb-dns-name port=80;
}
That upstream will automatically update its IPs if the ELB changes its addresses.
It might not be a pure-Varnish solution, but it works as expected.