We have a few Rancher hosts in a few different datacenters. The issue I am trying to solve is:
Get the DC Rancher app to resolve or connect to an AWS Rancher container. We have a VPN between them, so the network is pretty much wide open. I could potentially do everything through public interfaces, but I am more interested in isolating this to the private network between the DC and AWS.
Check out Working with Private Hosted Zones:
"If you have integrated your on-premises network with one or more Amazon VPC virtual networks and you want your on-premises network to resolve domain names in private hosted zones, you can create a Simple AD directory. Simple AD provides IP addresses that you can use to submit DNS queries from your on-premises network to your private hosted zone. For more information, see Getting Started with Simple AD in the AWS Directory Service Administration Guide."
See https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html
See the Custom DNS Servers section for how to use Simple AD to resolve DNS for your use case.
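A minimal sketch with the AWS CLI, assuming a placeholder zone name (rancher.internal), region, and VPC ID; associating the zone with a VPC is what makes it private:

    # Create a private hosted zone associated with the VPC that hosts the
    # Rancher containers (zone name, region, and VPC ID are placeholders).
    aws route53 create-hosted-zone \
        --name rancher.internal \
        --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0 \
        --caller-reference rancher-internal-$(date +%s)

On-premises resolvers would then forward queries for that zone to the Simple AD DNS addresses over the VPN, as the quoted documentation describes.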
Assume I have a custom VPC with the IP range 10.148.0.0/20.
This custom VPC has an allow-internal firewall rule so the services inside that range can communicate with each other.
As the system grew I needed to connect to an on-premises network using Classic Cloud VPN. I have already created the Cloud VPN (the on-premises side was configured by someone else) and the VPN tunnel is established (with green checkmarks).
I can also ping an on-premises IP right now (say 10.xxx.xxx.xxx, which is not a GCP internal/private IP but an on-premises private IP) from a Compute Engine instance created in the custom VPC network.
The problem is that all Compute Engine instances spawned in the custom VPC network can no longer reach the internet (e.g. sudo apt update) or even Google Cloud Storage (using gsutil), although they can still communicate over private IPs.
I also can't spawn a Dataproc cluster on that custom VPC (I guess because it can't connect to GCS, since Dataproc needs GCS for staging buckets).
Since I don't know much about networking and am relatively new to GCP, how can I get internet access on the instances I created inside the custom VPC?
After checking my custom VPC and Cloud VPN more in depth, I realized there was a misconfiguration when I established the Cloud VPN: I had chosen the route-based routing option and entered 0.0.0.0/0 in Remote network IP ranges. I guess this route was sending all traffic through the VPN, as @John Hanley said.
Solved it by using the policy-based routing option and adding only the specific on-premises ranges in Remote network IP ranges.
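For reference, a minimal sketch of the equivalent gcloud command for a policy-based Classic VPN tunnel; the tunnel name, region, gateway, peer address, shared secret, and remote range 10.200.0.0/16 are placeholders, while 10.148.0.0/20 is the VPC range from above:

    # Policy-based Classic VPN tunnel: only the listed remote range is sent
    # through the tunnel, so internet-bound traffic keeps using the default
    # internet gateway instead of 0.0.0.0/0 being pulled into the VPN.
    gcloud compute vpn-tunnels create on-prem-tunnel \
        --region=asia-southeast1 \
        --target-vpn-gateway=my-vpn-gateway \
        --peer-address=203.0.113.5 \
        --shared-secret=REPLACE_ME \
        --ike-version=2 \
        --local-traffic-selector=10.148.0.0/20 \
        --remote-traffic-selector=10.200.0.0/16

A static route for the remote range pointing at the tunnel (gcloud compute routes create ... --next-hop-vpn-tunnel) is still needed, but it only covers that specific range rather than 0.0.0.0/0.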
Thank you @John Hanley and @guillaume blaquiere for pointing this out.
I have a GCP VPC and it is connected to on-prem using Public Cloud Interconnect.
Traffic flow between on-prem and the VPC is OK. All routes and firewalls are configured correctly.
Now I would like to have the company DNS servers available for VMs in my VPC.
My 3 DNS servers are
10.17.121.30 dns-01.net.company.corp
10.17.122.10 dns-02.net.company.corp
10.17.122.170 dns-03.net.company.corp
Now I have done the below config in Cloud DNS in GCP.
The DNS name is company.corp
The "In use by" is referring my VPC.
The IPs 10.17.121.30, 10.17.122.10 and 10.17.122.170 are on-prem and are accessible from the VPC over port 53.
But after having done all the above, if I try to connect to any on-prem machine using its name, I get
telnet: could not resolve example-server.corp.sap/443: No address associated with hostname
The above request is being made from a VM inside the VPC.
This leads me to believe that my DNS servers might not be correctly configured. What have I missed here?
If you intend your VMs to be able to resolve hostnames within your on-premises network, then you will need to make use of DNS forwarding: configure your private zone as a forwarding zone. Once this is done, queries for that zone will be forwarded to your on-premises servers.
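A minimal sketch with gcloud, assuming placeholder zone and VPC names (corp-forward, my-vpc) and using the three on-premises servers listed above as forwarding targets:

    # Private forwarding zone: queries for company.corp from VMs in my-vpc
    # are forwarded to the on-premises DNS servers over the Interconnect.
    gcloud dns managed-zones create corp-forward \
        --description="Forward company.corp to on-prem DNS" \
        --dns-name="company.corp." \
        --visibility=private \
        --networks="my-vpc" \
        --forwarding-targets="10.17.121.30,10.17.122.10,10.17.122.170"

Note that the forwarded queries originate from Cloud DNS's 35.199.192.0/19 range, so the on-premises network must route responses for that range back over the Interconnect and allow them through its firewalls.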
I have successfully built a VPN connection between GCP and AWS using the following guide: https://cloud.google.com/solutions/automated-network-deployment-multicloud.
I can currently ping resources in the other cloud provider by their private IPs. However, I would like DNS resolution that resolves AWS resource DNS names to their private IPs. Can someone please help me with this? Using a DNS server policy may not be the best option for me, as it points only to the alternative name server and no longer to GCP's internal name servers. So how can I use forwarding zones in GCP for DNS names such as database-test.c34fdgt1ascxz.us-west-1.rds.amazonaws.com so that they resolve to the private IP? The example above is a database which I have not made public. Has someone done this already, or does anyone have an idea of how to go about it? Any help is much appreciated, thank you so much.
It is possible.
If your goal is to configure outbound forwarding to AWS, then you should remove this policy; you just need a Cloud DNS managed zone (a forwarding zone) to accomplish this.
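A minimal sketch, assuming a placeholder zone name (aws-rds-forward), VPC name (gcp-vpc), and 10.0.0.2 as the address of a resolver inside the AWS VPC that is reachable over the VPN (for example a Route 53 Resolver inbound endpoint):

    # Forwarding zone: queries for the RDS domain are sent to the AWS-side
    # resolver over the VPN instead of being answered by public DNS.
    gcloud dns managed-zones create aws-rds-forward \
        --description="Forward RDS names to a resolver in AWS" \
        --dns-name="us-west-1.rds.amazonaws.com." \
        --visibility=private \
        --networks="gcp-vpc" \
        --forwarding-targets="10.0.0.2"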
The DNS queries that are forwarded from GCP to AWS will come from the 35.199.192.0/19 address block.
The 35.199.192.0/19 traffic can be routed over a dynamic (BGP) VPN tunnel, so you would just need to modify your AWS VPN gateway or router by adding a route to reach 35.199.192.0/19 through the tunnel.
It looks like a public address block, but Google uses this block only for forwarding, and does not announce it on the public Internet.
And finally, AWS needs to be configured so that responses to DNS queries from 35.199.192.0/19 are routed back to GCP using the VPN tunnel configured between AWS and GCP.
In other words, this traffic needs to go through the VPN tunnel.
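A minimal sketch with the AWS CLI for the static-routing case; the route table and virtual private gateway IDs are placeholders (with dynamic routing you would advertise the prefix over BGP instead):

    # Send responses destined for Google's DNS forwarding range back through
    # the VPN connection to GCP.
    aws ec2 create-route \
        --route-table-id rtb-0123456789abcdef0 \
        --destination-cidr-block 35.199.192.0/19 \
        --gateway-id vgw-0123456789abcdef0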
To debug it, you can use Stackdriver Logging and also check network captures on both endpoints.
Check these documentation guides: Creating forwarding zones and DNS forwarding.
You can't resolve AWS private IP addresses by submitting the AWS public endpoint to GCP's DNS. That just won't work.
AWS has a service called Route 53 Resolver that will forward requests that can't be resolved internally to an external DNS server that you specify. We use this in our environments to resolve on-prem corporate IPs that are not part of Route 53. I have not tried this, but it's possible you could use it to point to GCP's DNS.
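A minimal sketch with the AWS CLI of that Route 53 Resolver setup (not tested against GCP, as noted above); the endpoint name, domain, subnet/security-group/rule IDs, and the target IP 10.148.0.2 of a DNS server reachable over the VPN are all placeholders:

    # Outbound Resolver endpoint in the AWS VPC (needs at least two subnets).
    aws route53resolver create-resolver-endpoint \
        --name gcp-outbound \
        --direction OUTBOUND \
        --creator-request-id gcp-outbound-1 \
        --security-group-ids sg-0123456789abcdef0 \
        --ip-addresses SubnetId=subnet-0123456789abcdef0 SubnetId=subnet-0fedcba9876543210

    # Forwarding rule: queries for the given domain go to the DNS server on
    # the GCP side of the VPN.
    aws route53resolver create-resolver-rule \
        --name forward-to-gcp \
        --creator-request-id forward-to-gcp-1 \
        --rule-type FORWARD \
        --domain-name gcp.internal \
        --resolver-endpoint-id rslvr-out-0123456789abcdef0 \
        --target-ips Ip=10.148.0.2,Port=53

The rule still has to be associated with the VPC (aws route53resolver associate-resolver-rule) before instances there start using it.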
I started my Kubernetes cluster on AWS EC2 with kops, using a private hosted zone in Route 53. Now when I do something like kubectl get nodes, the CLI says it can't connect to api.kops.test.com because it is unable to resolve it. I fixed this by manually adding a mapping for api.kops.test.com and its corresponding public IP (obtained from the record sets) to the /etc/hosts file.
I wanted to know if there is a cleaner way to do this (without modifying the system-wide /etc/hosts file), maybe programmatically or through the cli itself.
Pragmatically speaking, I would add the public IP as an IP SAN to the master's x509 cert, and then just use the public IP in your kubeconfig. Either that, or put the DNS record in a zone other than the private Route 53 one.
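For the kubeconfig half of that, a minimal sketch; the cluster entry name and the public IP are placeholders, and the API server's certificate must already contain that IP as a SAN or TLS verification will fail:

    # Point the existing kubeconfig cluster entry at the API server's public
    # IP instead of the unresolvable private Route 53 name.
    kubectl config set-cluster kops.test.com \
        --server=https://203.0.113.10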
You are in a situation where you purposefully made things private, so now they are.
Another option, depending on whether it would be worth the effort, is to run a VPN server in your VPC and then connect your machine to it; the VPN connection can add the EC2 DNS servers to your machine's config as a side effect of connecting. Our corporate Cisco AnyConnect client does something very similar to that.
I have two VNETs, VNET1 and VNET2, that are connected using a gateway. VNET2 has a VM which hosts a MongoDB instance. I have a WebJob running within an App Service Environment which is deployed into a subnet within VNET1. From this subnet I am able to access the VM in VNET2 by its DNS name, but I am unable to access the VM's internal IP. Any suggestions are welcome.
An internal IP address is internal to a VNET, and VNETs are isolated from one another by design. See this page for a good overview: https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-overview/. If you want to connect internally, you might want to consider having multiple subnets within the same VNET instead.
At present, connecting two VNETs using a gateway allows IP communication but doesn't allow DNS name resolution. In this scenario we recommend managing a local DNS server. This page shows the requirements for using your own DNS server in Azure.
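A minimal sketch with the current Azure CLI, assuming a placeholder resource group (my-rg) and a DNS VM at 10.1.0.4 that is reachable from both VNETs:

    # Point VNET1 at your own DNS server so VMs in it can resolve names
    # hosted on the other side of the gateway.
    az network vnet update \
        --resource-group my-rg \
        --name VNET1 \
        --dns-servers 10.1.0.4

VMs pick up the new setting after a reboot or DHCP lease renewal.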
Hth, Gareth