I currently have a GKE cluster and service set up at app.companyname.com. It lives in the prod GCP "project". I am now spinning up a dev project, and it's all good, except I don't know what to do with the domain name and certificate settings. I want the app available at dev.companyname.com
Do I have a public DNS zone per project, or should I have one in the prod environment that dev accesses somehow? Do they share nameserver settings? Does the prod project forward to the dev project?
Do I have separate SSL certificates, one per environment? Or one in the prod environment that dev accesses?
What is the general overview on how this should be set up with GCP?
Essentially the question is about separation between Prod and Dev. Following this approach ("spinning up" a separate Dev project in GCP), you should stay consistent and separate the public DNS as well. This keeps projects scalable and aligns GCP service boundaries with the administration, trust, management, and billing scopes managed by different teams.
Do I have a public DNS zone per project? - Broadly speaking, you should have a separate DNS zone per environment (which in the described case means "per project"), in particular because SOA settings such as REFRESH or TTL may be set differently between Prod and Dev. Also, since each managed zone you create is associated with a Google Cloud project, you can leverage IAM access controls to follow the principle of least privilege in DNS zone management.
Do they share nameserver settings? - With a managed service such as Cloud DNS, your projects will use name servers provided by the vendor. You can find them in the GCP Console or with gcloud:
$ gcloud config set core/project myworks-240610
Updated property [core/project].
$ gcloud dns managed-zones list
NAME DNS_NAME DESCRIPTION VISIBILITY
my-works myworks.example.com. public
$ gcloud dns managed-zones describe my-works --format="flattened(nameServers)"
nameServers[0]: ns-cloud-b1.googledomains.com.
nameServers[1]: ns-cloud-b2.googledomains.com.
nameServers[2]: ns-cloud-b3.googledomains.com.
nameServers[3]: ns-cloud-b4.googledomains.com.
$ dig ns-cloud-b1.googledomains.com +short
216.239.32.107
The assigned name servers differ from project to project (and therefore between Prod and Dev).
See Updating your domain's name servers
Do I have separate SSL certificates, one per environment? - Hosts in different DNS zones will have different CNs, so you will have separate certificates in Prod and Dev. You can use wildcard certificates to make things simpler, for example in the Dev environment.
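As a sketch of how the delegation could look (the zone contents and name servers below are hypothetical - use the four that Cloud DNS actually assigns to your Dev zone), the Prod project's companyname.com zone would carry an NS record set handing dev.companyname.com off to the Dev project's zone:

```
; in the companyname.com zone (Prod project)
dev.companyname.com.  IN NS  ns-cloud-b1.googledomains.com.
dev.companyname.com.  IN NS  ns-cloud-b2.googledomains.com.
dev.companyname.com.  IN NS  ns-cloud-b3.googledomains.com.
dev.companyname.com.  IN NS  ns-cloud-b4.googledomains.com.
```

With that delegation in place, resolvers end up at the Dev project's zone for anything under dev.companyname.com, and each team manages only its own records.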
I am not familiar with the specifics of GKE, but I can speak to how the domain system works in general.
Answers:
There is one DNS zone for companyname.com; the NS records are set for the overall domain. But there would be two A records, one for prod and one for dev. An A record points to an IP, so you can host them on any two servers, or on the same one, or in a subdirectory of the other, as long as Apache (or whatever serves the website) can match the full subdomain name to the directory where the subdomain's files live.
There are separate SSL certificates for each subdomain
Here is an example of the final DNS zone file to give you an idea of how to set up your DNS records:
; Name Servers
companyname.com. IN NS ns1.somehost.com.
companyname.com. IN NS ns2.somehost.com.
companyname.com. IN NS ns3.somehost.com.
; A records
@ IN A XXX.XXX.XXX.XXX ; IP for the main companyname.com
app IN A XXX.XXX.XXX.XXX ; IP for app.companyname.com
dev IN A XXX.XXX.XXX.XXX ; IP for dev.companyname.com
About URLs
A URL consists of protocol://Domain Name:port/path/file
https://app.companyname.com:443/index.html
Protocol is just http or https
The domain name is what traverses the entire internet; the Domain Name System's entire purpose is to map the domain name to the IP of the public-facing server that hosts that domain
The port is usually the protocol default (80 for HTTP, 443 for HTTPS) and is hidden from the user by the browser
The server (something like Apache, or maybe GKE takes it from there) then receives the request, maps the domain name to a root directory on the server, and traverses down the path to the correct files to serve up to the requester
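As a rough illustration of that anatomy (a minimal sketch, not a full URL parser - the example URL is made up), the pieces can be pulled apart in shell:

```shell
#!/bin/sh
# Minimal sketch: split a URL into protocol, host, port, and path.
# Not an RFC 3986 parser - just enough to illustrate the parts,
# and it assumes the URL contains an explicit port.
url="https://app.companyname.com:443/index.html"

proto="${url%%://*}"     # everything before "://"
rest="${url#*://}"       # strip the protocol
hostport="${rest%%/*}"   # everything before the first "/"
path="/${rest#*/}"       # everything from the first "/" on
host="${hostport%%:*}"   # strip the port
port="${hostport##*:}"   # strip the host

echo "$proto $host $port $path"
```

Running it prints `https app.companyname.com 443 /index.html` - the four parts DNS, the web server, and the browser each care about.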
I know it is a lot to take in, but that is the root of the problem. I can't speak to GKE, and someone else might jump in and explain how it might be configured to do what you are doing. But if you can get access to the DNS settings of the domain and set up a proper subdomain, this all gets a whole lot easier for you.
Related
I have followed the instructions at https://docs.openshift.com/container-platform/4.11/installing/installing_gcp/installing-gcp-account.html#installation-gcp-dns_installing-gcp-account for setting up an openshift trial.
I managed to get all of the steps working except for the DNS steps mentioned.
I created a zone my-new-zone in GCP for my subdomain clusters.mysite.com, pointed the domain's NS records to Google (ns-cloud-d[1-4].googledomains.com), and I am able to interact with my OpenShift cluster just fine.
However, in so doing, all of my other DNS entries for mysite.com no longer function.
I tried creating a second zone my-zone in GCP for mysite.com and added those preexisting entries there, but they came up with different GCP DNS NS servers (ns-cloud-a[1-4].googledomains.com).
How can I fix this so that I can access the openshift and also access my original sites?
Note: I can destroy and recreate the openshift cluster as needed at this point, but I need to know the correct steps for getting the DNS right.
Additional clarifications:
Note 1. I thought I had included above but apparently left out this detail: [mysite].com DNS entries were maintained at Dotster.com. When I got to step 6 in the linked instructions, I had to call Dotster.com because I could not understand how to proceed. I was told I could not use separate NS servers for the subdomain and they asked if I wanted to point the NS servers for my domain to the GCP servers indicated. I agreed and they repointed the NS servers. At that point I tried to add my DNS entries to GCP to restore access to my primary sites, and am not understanding how to do so. GCP will not allow me to change the zone name from clusters.[mysite].com to [mysite].com. It looked like all I needed to do was add another zone for [mysite].com, so I did so, not expecting the second zone would use totally different nameservers.
PROBLEM: DNS does not work for primary domain after setting up OpenShift on GCP. My website is down, my email is down, all of my sites are down.
Objective/Goal: Restore DNS service for primary domain entries AND have OpenShift working correctly.
Errors:
$ nslookup www.[mysite].com 8.8.8.8
Server: 8.8.8.8
Address: 8.8.8.8#53
** server can't find www.[mysite].com: SERVFAIL
As for why I created a subdomain, I already had my domain set up at dotster.com. I was following step 2 which says "2. Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com." And then I called dotster.com when I got to step 6 as I did not know how to proceed at that point. Please see note 1 above.
I suggest creating a zone for your root domain. Once created, add an A record using your root domain, then add a CNAME record for the subdomain. Once done, get the name servers and set them at your domain registrar. Make sure to add any other necessary records to the zone you created so that other services, such as email, keep working. Propagation will take 24 to 48 hours, depending on the DNS server.
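One way the final layout could look - a sketch using placeholder IPs and name servers, with the clusters subdomain delegated to the zone the OpenShift installer uses:

```
; mysite.com zone in GCP (IPs and name servers are placeholders)
mysite.com.           IN NS    ns-cloud-a1.googledomains.com.
mysite.com.           IN A     XXX.XXX.XXX.XXX                 ; main website
www.mysite.com.       IN CNAME mysite.com.
mysite.com.           IN MX    10 mail.mysite.com.             ; keep email working
clusters.mysite.com.  IN NS    ns-cloud-d1.googledomains.com.  ; delegate to the OpenShift zone
```

The registrar (Dotster) would point the domain at the root zone's name servers only; the NS record for clusters.mysite.com inside that zone hands the subdomain off to the OpenShift zone's name servers.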
I have an infrastructure consisting of a load balancer (nginx configuration) and two servers, one for the UK and the other for the US.
The requirement is to deploy a runtime application to one of these servers based on client IP. That part is done in the nginx conf with the GeoIP module, which will also add a server entry if one is not available in the nginx upstream list.
The second part: these servers (e.g., UK and US) have IPs, and I want runtime DNS entries for them as well. The servers can be on AWS, Azure, or GCP, and their domain providers may vary.
So is it possible to create the DNS entry during the deployment stage? First the application would be deployed to the corresponding server, then that server should create its own DNS entry and get a domain name (provided by the user at runtime).
In short, there is a script which creates runtime domain entries such as as.blabla.com in nginx, but I need another parameter for the server, such as 190.80.0.13 for Asia, and I want a DNS entry for that IP as well, whether it belongs to GCP, AWS, or any other DNS-related system.
The question may seem a lot twisted; that's okay, we can discuss further.
In AWS you would be better off with an AWS Elastic Load Balancer and Route 53, using Geolocation or Geoproximity as the routing policy.
For better performance you can add a CloudFront (CDN) distribution.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html
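As a sketch of what a geolocation record could look like (the hosted zone, record name, and IP below are placeholders), the change batch passed to aws route53 change-resource-record-sets would resemble:

```json
{
  "Comment": "Sketch: route UK clients to the UK endpoint (names/IPs are placeholders)",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "uk-endpoint",
        "GeoLocation": { "CountryCode": "GB" },
        "TTL": 60,
        "ResourceRecords": [ { "Value": "203.0.113.10" } ]
      }
    }
  ]
}
```

You would create one record set per region (each with its own SetIdentifier and GeoLocation), plus a default record for clients that match no location, and apply it with `aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch file://uk-record.json`.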
Even though I have a private domain configured in the Route 53 resolver for a VPC, new instances still get default names like:
ip-10-1-1-170.ap-southeast-2.compute.internal
Is there a way to configure things such that new instances will automatically have an FQDN of the (sub)domain I have configured like:
ip-10-1-1-170.green.example.com
I am hoping to ensure that instances in separate deployments (dev/green/blue) have FQDNs in separate subdomains (and different VPCs), so that I can configure my on-site DNS to know where any host is based on the subdomain in its name; but automatically getting the host name on start is the first step on that journey.
I can successfully create Route 53 records to achieve this too, one by one, but that seems a bit nuts for a compute cluster, so I'm hoping there's a way to achieve it with just the host name, with the Route 53 resolver still correctly handling DNS requests to those hosts somehow.
These domains are actually related to the domain controller that the instances are bound to.
When you create a VPC, the default DHCP configuration uses Amazon's DNS (AmazonProvidedDNS), which in your case is providing the ap-southeast-2.compute.internal domain names.
If you added a custom DHCP option set with the domain name green.example.com, then instances would become part of that domain and show the DNS names you expect, although you are limited to one DHCP option set per VPC.
AWS has the following services which can act as domain controllers, although you would need to ensure that your on-premises network can also forward requests to these name servers to resolve the domains:
Simple AD
Managed Microsoft AD
This is quite a bit of overhead just to get DNS names like those, so it might be simpler to use two private hosted zones, automate adding hosts to the domains, and use an inbound endpoint from your on-premises network instead.
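For the DHCP option set route, a sketch with the AWS CLI (the option-set and VPC IDs below are placeholders):

```
$ aws ec2 create-dhcp-options \
    --dhcp-configurations \
      "Key=domain-name,Values=green.example.com" \
      "Key=domain-name-servers,Values=AmazonProvidedDNS"
$ aws ec2 associate-dhcp-options \
    --dhcp-options-id dopt-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0
```

Instances launched after the association (or that renew their DHCP lease) pick up the new domain name; you would still need Route 53 records or a directory service for the names to actually resolve.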
A question about Google Cloud DNS: what happens when you create two Google Cloud projects (e.g., ProjectA and ProjectB), each with a public managed DNS zone with the same top-level domain (e.g., example.com)?
More precisely: will the sub-domains of both (e.g., a.example.com in ProjectA and b.example.com in ProjectB) both be resolvable by clients?
And more exotically: what would happen if both projects would define the same subdomain (e.g., an A record for overlapping.example.com)?
I've read Google's documentation on overlapping zones, but that does not seem to give an answer to these questions.
Any experiences?
If you have a public domain managed in one Project and you want to setup subdomain in a different Project then you can follow this:
Let's have Project A that contains Zone X for domain.com that is registered with Google's NS servers ns-cloud-a{1..4}.googledomains.com.
Then let's have Project B that contains Zone Y for dev.domain.com that is registered with Google's NS servers ns-cloud-b{1..4}.googledomains.com.
In order to make the domain names from Zone Y public, create an NS record for dev.domain.com in Zone X that points to ns-cloud-b{1..4}.googledomains.com.
TLDR
The parent domain's zone needs an NS record for the subdomain, pointing at the subdomain's name servers.
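As a sketch with gcloud (the project, zone name, and name servers below are placeholders; recent gcloud releases have `gcloud dns record-sets create`, while older ones use the `record-sets transaction` subcommands), the delegation record in Zone X could be created like this:

```
$ gcloud dns record-sets create dev.domain.com. \
    --project=project-a --zone=zone-x --type=NS --ttl=300 \
    --rrdatas=ns-cloud-b1.googledomains.com.,ns-cloud-b2.googledomains.com.,ns-cloud-b3.googledomains.com.,ns-cloud-b4.googledomains.com.
```

The rrdatas values must be the four name servers Cloud DNS assigned to Zone Y in Project B, which you can read off with `gcloud dns managed-zones describe` as shown earlier.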
Our company's local network is connected to an AWS VPC via VPN - see the schema below:
view architecture here
Now, we want to configure DNS servers in order to use host name instead of Ip all over the network.
What is the best solution ?
Let Route53 handle DNS for the entire network (even the local one)
Have a DNS server on our local network and Route 53 on the Amazon VPC. If so, how do we perform synchronization/replication between the local DNS server and Route 53?
Another solution :)
Thanks !
And have a nice day !
The problem with Route 53 is that it doesn't play well with other DNS servers. It is a completely self-contained solution. This means that if you used Route 53, your internal servers could only look up through the VPC into Route 53; you couldn't have a secondary name server on-site that took a zone transfer from Route 53 (they don't support them).
You could potentially have caching name servers internally, with long expiry times on your host records, so that if there were any problem the records wouldn't go stale, but this brings its own set of problems.
This leaves you with a couple of solutions.
Use your internal network entirely: set up your internal name servers for internal.example.com and have a secondary name server located inside your VPC that AWS clients can refer to. This way, if there is a problem with the link, both sides still have working DNS.
Alternatively, you could configure internal.example.com in the same way, but then have aws.example.com running on Route 53 (or on a standalone server).
If Route 53 supported Zone Transfers and secondary servers it would be largely irrelevant what you went with but because they don't any solution you build is going to mean rolling some sort of glue to sit in between everything. This is invariably a Very Bad Thing™
We have the same architecture, network wise, and have not found a reasonable way to unify both networks' DNS data into one set of DNS servers.
Here is what works for us.
Assuming you want to use a corporate domain such as example.com, you can get a unified naming scheme where all hosts are under the example.com domain. This is done via Zone Delegation. In this document it states:
Domain Name System (DNS) provides the option of dividing up the namespace into one or more zones, which can then be stored, distributed, and replicated to other DNS servers. When you are deciding whether to divide your DNS namespace to make additional zones, consider the following reasons to use additional zones:
So in your case:
Use the company network's DNS for servers/devices on the local network; server1.example.com resolves to its IP on the local network.
Delegate a subdomain such as 'corp' or 'cloud' to Route 53 for all hosts on AWS. Also known as a subzone, this gives full DNS responsibility to another name server. An instance in EC2 would be referenced as server1.cloud.example.com
This gives you a logical naming scheme, with IP resolution for all hosts on the network.
See Creating a Subdomain That Uses Amazon Route 53 as the DNS Service without Migrating the Parent Domain
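On the corporate DNS side, the delegation amounts to NS records for the subzone (the awsdns name servers below are placeholders - use the four assigned to your Route 53 hosted zone):

```
; in the example.com zone on the corporate DNS servers
cloud.example.com.  IN NS  ns-1234.awsdns-01.org.
cloud.example.com.  IN NS  ns-567.awsdns-02.co.uk.
cloud.example.com.  IN NS  ns-89.awsdns-03.com.
cloud.example.com.  IN NS  ns-1011.awsdns-04.net.
```

Queries for anything under cloud.example.com then follow the delegation to Route 53, while everything else stays on the corporate servers.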
There are some 3rd-party solutions that add features onto Route 53, such as easyRoute53 and Route53d. Route53d claims to offer some support for zone transfers (IXFR only).