It says larger websites have their server distributed across several machines running the same website, and depending on load a user is routed to one of the server machines. This happens without the user's knowledge and under the same domain name, as far as the user's interface is concerned.
Now the bit I don't get: even though you eliminate the bottleneck of a single server by distributing the servers, how would you eliminate the bottleneck of DNS?
It would require some kind of routing gateway that routes the user to one of the servers for the web page (resolving DNS), and now this gateway would be bombarded with requests.
So how would you reduce the routing gateway load?
Usually, you'll use several DNS servers for the zone. Even when you do that, though, there are techniques to do this load balancing at the network layer.
With a technique commonly referred to as "anycast", multiple hosts can share the same IP address on the internet. Normal internet routing can then be used to route users to a server on the best path.
For example, you could put DNS servers on all continents and assign them all the same address, 8.8.8.8. Users in Europe would most likely end up on the European DNS server.
There is quite a bit of investment and administrative overhead for this, which is one of the reasons why globally distributed DNS hosting providers charge a premium price for this feature.
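As a concrete illustration of the first point (several DNS servers for the zone), you can list a zone's authoritative name servers. A small sketch using the dnspython library (the zone name is just an example):

```python
# List the authoritative name servers for a zone using the dnspython
# library (pip install dnspython). "example.com" is only an example zone.
import dns.resolver

answers = dns.resolver.resolve("example.com", "NS")
for rdata in answers:
    # Each of these name server hosts can itself sit behind an anycast IP.
    print(rdata.target)
```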
I have an infrastructure consisting of a load balancer (nginx configuration) and two servers,
one for the UK and the other for the US.
Now the requirement is that I have to deploy the application at runtime to one of these servers based on the client IP; that part is done in the nginx conf with the geoip module.
The script will also add the server entry to the nginx upstream list if it is not already there.
Now the second part: these servers (e.g. UK, US) have IPs, and I want runtime DNS entries for them as well.
The servers can be on AWS, Azure, or GCP, and their domain providers may vary.
So, is it possible to create the DNS entry during the deployment stage? First the application would be deployed to the corresponding server, then that server should also create its entry in DNS and get a domain name (which should be provided by the user at runtime).
In short, there is a script which creates runtime domain entries like as.blabla.com in nginx,
but I need another parameter for a server such as 190.80.0.13 for Asia, and I want a DNS entry for this IP as well, whether it belongs to GCP, AWS, or any other DNS-related system.
The question may seem a lot twisted; that's okay, we can discuss further.
On AWS you will be better off with an AWS Elastic Load Balancer and Route 53, using Geolocation or Geoproximity as the routing policy.
For better performance you can add a CloudFront (CDN) distribution.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html
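If you take the Route 53 path, the runtime DNS entry from the question can be created from the deployment script with the AWS SDK. A minimal boto3 sketch, assuming a hosted zone already exists and using the question's example name and IP as placeholders (this only covers the Route 53/AWS case, not GCP or Azure):

```python
# Minimal sketch: create/update a geolocation-routed A record in Route 53
# from a deployment script with boto3. The hosted zone ID, record name,
# and IP address are placeholders; AWS credentials are assumed to be configured.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={
        "Comment": "runtime entry created during deployment",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "as.blabla.com",            # domain provided at runtime
                "Type": "A",
                "SetIdentifier": "asia",            # required for geolocation records
                "GeoLocation": {"ContinentCode": "AS"},
                "TTL": 60,
                "ResourceRecords": [{"Value": "190.80.0.13"}],
            },
        }],
    },
)
```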
Our current setup: the corporate network is connected to AWS via VPN; a Route 53 entry points to an ELB, which points to an ECS service (both inside a private VPC subnet).
=> When you request the URL (from inside the corporate network) you see the web application. ✅
Now, what we want is that when the ECS service is not running (maintenance, error, ...), the users are directly shown a maintenance page.
At the moment you will see the default AWS 503 error page. We want to provide a simple static HTML page with some maintenance information.
What we tried so far:
Using Route 53 with failover to a CloudFront distribution serving an S3 bucket with the HTML
This does work, but:
Route 53 will not fail over very fast => until it switches to CloudFront, the users will still see the default AWS 503 page.
as this is a DNS failover and browsers (and proxies, local DNS caches, ...) cache already-resolved entries, the users will still see the default AWS 503 page after Route 53 has switched, because of the caching. Only after the new IP address is resolved (which may take several minutes, or until a browser or OS restart) will the user see the maintenance page.
the same as the two points before, but the other way around: when the service is running again, the users will see the maintenance page much longer than they should.
As this is not what we were looking for, we next tried:
Using CloudFront with two origins (our ELB and the failover S3 bucket) with a custom error page for 503.
This is not working, as CloudFront needs the origins to be publicly available and our ELB is in a private VPC subnet ❌
We could reconfigure our complete network environment to make it public and restrict access to the CloudFront IPs. While this will probably work, we see the following drawbacks:
Security is decreased: someone else could set up a CloudFront distribution with our web application as the target and would have full access to it from outside our corporate network.
To overcome this security issue, we would have to implement a secret header (which would be sent from CloudFront to the application), which results in having security code inside our application (see the sketch after this list) => Why should our application handle that security? What if the code has a bug or anything?
Our current environment is already up and running. We would have to change a lot just for an error page, and it comes with reduced security overall!
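For illustration, the "security code inside the application" would amount to a check like the following on every request. A minimal sketch assuming a Flask application and a hypothetical X-Origin-Secret header that CloudFront is configured to add:

```python
# Minimal sketch: reject requests that did not come through CloudFront.
# Assumes CloudFront adds a custom origin header named "X-Origin-Secret";
# both the header name and the secret value are hypothetical placeholders.
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
ORIGIN_SECRET = os.environ["ORIGIN_SECRET"]  # shared with the CloudFront origin config

@app.before_request
def require_cloudfront_secret():
    supplied = request.headers.get("X-Origin-Secret", "")
    # Constant-time comparison so the secret cannot be probed via timing.
    if not hmac.compare_digest(supplied, ORIGIN_SECRET):
        abort(403)

@app.route("/")
def index():
    return "Hello from the private web application"
```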
Use a second ECS service (e.g. HAProxy, nginx, apache, ...) with our application as target and an errorfile for our maintenance page.
While this will work like expected, it also comes with some drawbacks:
The service is a single point of failure: when it is down, you cannot access the web application. To overcome this, you have to put it behind an ELB, run it in at least two AZs, and (optionally) make it horizontally scalable to handle larger request volumes.
The service will cost money! Maybe you only need one small instance with little memory and CPU, but it (probably) has to scale together with your web application when you have a lot of requests!
It feels like we are back in 2000s and not in a cloud environment.
So, long story short: Are there any other ways to implement a f*****g simple maintenance page while keeping our web application private and secure?
I just moved a simple, static website to GCP, and it's working fine. But I want to keep using a separate company as registrar, not the hosting company. So as a shortcut, I just set the www CNAME at the registrar's site to c.storage.googleapis.com, without using Google's DNS - and this works.
But is it good practice? If not, could someone recommend a step-by-step guide to setting up a public zone on GCP? Google's documentation is complicated, getting into private zones, authentication, and service accounts, which I probably don't need.
As long as the company providing your DNS services is reliable and has the DNS features you require, it really does not matter which DNS provider you use.
You bring up the point of good practice. There are lots of opinions: some prefer that the same cloud provider host DNS, others recommend separating these functions.
There are situations where you want the DNS servers in the same cloud. For example, AWS supports A ALIAS records, which are a logical fit for AWS load balancers. Take a look at your current DNS requirements and think ahead to what you may need next year, etc. Then pick a DNS provider that meets those requirements.
It is also very easy today to switch both registrars and DNS providers. It can be a pain for a couple of days while DNS switches over, but this just means hosting your records with two companies while the world synchronizes.
Our company's local network is connected to an AWS VPC via VPN - see the schema below:
[architecture diagram]
Now, we want to configure DNS servers in order to use host names instead of IPs all over the network.
What is the best solution?
Let Route 53 handle DNS for the entire network (even the local one)
Have a DNS server on our local network, and Route 53 on the Amazon VPC. And if so, how do we perform synchronization/replication between the local DNS server and Route 53?
Another solution :)
Thanks!
And have a nice day!
The problem with Route 53 is that it doesn't play well with other DNS servers. It is a completely self-contained solution. This means that if you used Route 53, your internal servers could only look up through the VPC into Route 53; you couldn't have a secondary nameserver on-site that took a zone transfer from Route 53 (they don't support them).
You could potentially have caching nameservers internally and set long expiry times on your host records, so that if there were any problem the records wouldn't go stale, but this brings its own set of problems.
This leaves you with a couple of solutions.
Use your internal network entirely: set up your internal name servers for internal.example.com and have a secondary name server located inside your VPC that AWS clients can refer to. This way, if there is a problem with the link, both sides still have working DNS.
Alternatively, you could configure internal.example.com in the same way, but then have aws.example.com running on Route 53 (or on a standalone server).
If Route 53 supported zone transfers and secondary servers, it would be largely irrelevant which way you went, but because it doesn't, any solution you build is going to mean rolling some sort of glue to sit in between everything. This is invariably a Very Bad Thing™.
We have the same architecture, network-wise, and have not found a reasonable way to unify both networks' DNS data into one set of DNS servers.
Here is what works for us.
Assuming you want to use a corporate domain such as example.com, you can get a unified naming scheme where all hosts are under the example.com domain. This is done via Zone Delegation. In this document it states:
Domain Name System (DNS) provides the option of dividing up the namespace into one or more zones, which can then be stored, distributed, and replicated to other DNS servers. When you are deciding whether to divide your DNS namespace to make additional zones, consider the following reasons to use additional zones:
So in your case:
Use the company network's DNS for servers/devices on the local network. server1.example.com resolves to the IP address on the local network.
Delegate a subdomain such as 'corp' or 'cloud' to Route 53 for all hosts on AWS. Also known as a subzone, this gives full DNS responsibility to another name server. An instance in EC2 would be referenced as server1.cloud.example.com (see the sketch below).
This gives you a logical naming scheme, with IP resolution for all hosts on the network.
See Creating a Subdomain That Uses Amazon Route 53 as the DNS Service without Migrating the Parent Domain
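As a rough illustration of the delegation step on the Route 53 side, here is a minimal boto3 sketch that creates the hosted zone for the subdomain and prints the name servers you would then add as NS records for cloud.example.com in the corporate DNS (the domain name is just the example used above):

```python
# Minimal sketch: create the delegated hosted zone in Route 53 with boto3.
# Assumes AWS credentials are configured; cloud.example.com is an example name.
import uuid

import boto3

route53 = boto3.client("route53")

response = route53.create_hosted_zone(
    Name="cloud.example.com",
    CallerReference=str(uuid.uuid4()),  # must be unique per request
)

# These are the Route 53 name servers; add them as NS records for "cloud"
# in the parent example.com zone on the corporate DNS server.
for name_server in response["DelegationSet"]["NameServers"]:
    print(name_server)
```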
There are some 3rd-party solutions that add features onto Route 53, such as easyRoute53 and Route53d. Route53d claims to offer some support for zone transfers (IXFR only).
I have created a LAMP-based service on a shared hosting provider. It has now grown and I want to move it to AWS EC2. I have already ported the code and the data, set up EBS for the persistent content, set up an AMI image that boots up fine, and tested the solution on EC2.
I now want to point my DNS at the EC2 instance(s) IP. I have asked my shared hosting provider if they can point the root domain record (domain.com) to this IP, but they say they can only safely redirect a subdomain (like www.domain.com) because cPanel breaks if the root domain is redirected. I'm not sure I follow the problem details, but it looks like I have to rent a DNS solution.
What alternatives do I have? I think DynDNS.com is one solution (or a similar service); what else? Or, among commercial DNS services, what are good choices in terms of reliability, quality of service, quality of support, etc.?
It seems you have one foot on the ship and one on the shore! But the good news is you're almost there!
I suggest leaving your shared host. If they are unable to support you with such a request, it's a good sign you've outgrown the service. There are a few options for you to consider.
First of all, when moving web hosting from shared to AWS, you also need to consider what to do with the DNS, email and cPanel services. I use AWS for web hosting, but separate providers for DNS and email. I don't use cPanel - I just configure DNS and the web server manually. This keeps things much simpler and much more flexible, and the only extra cost is a bit of time to configure DNS separately. cPanel, Plesk and similar systems add a lot of unnecessary complexity into Apache and I find this causes problems later.
Though, if you want to keep cPanel, you might consider installing it or some other web-based management system on AWS. (I'd bet you'd find a prebuilt AMI for this if you look around.)
I'm not sure about running a DNS server on AWS, but I think it would be much easier and more reliable to use a DNS service.
EasyDNS.com and No-IP.com are both great DNS hosts - I've used No-IP for my enterprise AWS web hosting for over 2 years. (It is particularly good because they offer monitoring, and automatic and manual DNS failover in case there's a problem. But, that may be more than you need.) I've used EasyDNS for 4 or 5 years. Both services have solid support and are very reliable.
If you want something free, MyDomain.com has been very reliable for me for almost 10 years, but support is very slow. MyDomain will host your DNS for free even if you didn't register the domain with them.
One last consideration in addition to these: Amazon also offers Elastic IPs, which are basically a static IP for your web server instance. Using one will make your DNS much simpler and give you the flexibility to easily switch to a new instance in the future, if you ever need to. I strongly recommend using an Elastic IP.
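If you go this way, allocating and attaching the Elastic IP can also be scripted. A minimal sketch with boto3, assuming a VPC-based instance and a placeholder instance ID:

```python
# Minimal sketch: allocate an Elastic IP and attach it to an instance with boto3.
# The instance ID is a placeholder; AWS credentials are assumed to be configured.
import boto3

ec2 = boto3.client("ec2")

# Allocate a new Elastic IP in the VPC scope.
allocation = ec2.allocate_address(Domain="vpc")
print("Allocated", allocation["PublicIp"])

# Associate it with the web server instance; afterwards, point the DNS
# A record at allocation["PublicIp"].
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    AllocationId=allocation["AllocationId"],
)
```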
I've used EasyDNS; their DNS rates are reasonable and they have always been fast to help.
https://dns.he.net/ - free for up to 50 domains. Supports IPv6 AAAA records, custom TTL and has convenient management interface.
The life cycle of a web request has many pieces. I will try to explain the individual pieces so you can fill in the blanks as you see fit.
Domain registration (could be your current host, or someone else)
Where does this domain point? i.e. which server answers requests sent to this domain? (This is determined by DNS records; in your case the A record should point to the server.) You most likely need to modify this.
Previously you were using your host, so most likely the A entry in DNS pointed to their server. As SaintSal mentioned, the easiest way is to change it to the Elastic IP you get from AWS. I don't know why your host does not allow modification of the root domain record, but it shouldn't break cPanel. (Perhaps, if you have been with them for more than 90 days, you can transfer your registration to another provider - I personally use Dreamhost. With Dreamhost, such a setup is a breeze. The only thing I keep with Dreamhost is the domains; the hosting itself is with Rackspace and AWS.)
At the end of it, you will still have domain registration (not hosting) with your current host, but web hosting on AWS.
If you want to make things more complicated, your DNS hosting could be another service. In this case, you will need to change the DNS servers with your domain registrar to a third party such as DynDNS or others.
The DNS servers will resolve a request for example.com into an IP such as 11.11.11.11. In your case, this should be the AWS Elastic IP. In order to make this work, your domain registrar will list DynDNS's servers as the DNS servers, and DynDNS will have an A record pointing to your Elastic IP.
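Once the records are in place, it is easy to verify that the name resolves to the Elastic IP. A small verification sketch using the dnspython library (the domain and expected IP are placeholders):

```python
# Verify the A record with the dnspython library (pip install dnspython).
# The domain and the expected Elastic IP below are placeholders.
import dns.resolver

EXPECTED_IP = "11.11.11.11"

answers = dns.resolver.resolve("example.com", "A")
resolved = [rdata.address for rdata in answers]
print("example.com resolves to:", resolved)

if EXPECTED_IP in resolved:
    print("A record already points at the Elastic IP.")
else:
    print("Still seeing old records - DNS may not have propagated yet.")
```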
I hope I am not confusing you.
Good luck. You are mostly there, just a few settings needed here and there :)