Kubectl: Access kubernetes cluster using route53 private hosted zone - amazon-web-services

I started my Kubernetes cluster on AWS EC2 with kops, using a private hosted zone in Route 53. Now when I run something like kubectl get nodes, the CLI says that it can't connect to api.kops.test.com because it is unable to resolve it. I fixed this by manually adding api.kops.test.com and its corresponding public IP (obtained from the record sets) to my /etc/hosts file.
I wanted to know if there is a cleaner way to do this (without modifying the system-wide /etc/hosts file), maybe programmatically or through the cli itself.

Pragmatically speaking, I would add the public IP as an IP SAN to the master's x509 cert, and then just use the public IP in your kubeconfig. Either that, or put the DNS record in a public Route 53 zone rather than the private one.
You are in a situation where you purposefully made things private, so now they are.
Another option, depending on whether it would be worth the effort, is to run a VPN server in your VPC and connect your machine to it; the VPN connection can then add the EC2 DNS servers to your machine's resolver configuration as a side effect of connecting. Our corporate Cisco AnyConnect client does something very similar to that.
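A sketch of the kubeconfig approach without touching /etc/hosts, assuming a hypothetical public IP (replace with the master's actual address); recent kubectl versions accept a --tls-server-name flag for exactly this case:

```shell
# Hypothetical public IP; replace with the master's actual public IP.
# --tls-server-name keeps certificate validation against the API
# server's DNS name while connecting to the raw IP, so no IP SAN is
# needed on the cert:
kubectl --server=https://203.0.113.10 \
        --tls-server-name=api.kops.test.com \
        get nodes
```

The same pair of settings can be written into the kubeconfig's cluster entry if you don't want to pass flags on every invocation.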

Related

Create a public endpoint to AWS ElasticSearch domain which is inside a VPC

I need to access an AWS Elasticsearch Service (AES) domain, which is inside a VPC, from the internet, so that I can do read/write testing from a local machine. Ultimately, the code will run on an EC2 instance inside the VPC, but for now I need direct access. It would be ideal if the same code ran both outside and inside the VPC (as we do with DynamoDB), but we may not be that lucky.
Thus, I want to create a public endpoint to access the AES domain that is inside the VPC.
Since I have the AES internal endpoint name and the ENI connected to it, I thought I could just attach an Elastic IP address to the ENI, but that's not allowed -- I assume it's because the internal IP address may change.
Alternatively, it would make sense that I could map a route in the route table from the IGW (Internet Gateway) to the internal address. But that would again be tied to the internal IP address, and that's bad.
I expect I could use Route53 to map an external facing domain name in to it. But that seems like overkill.
Is there a way to map an address from the internet to the AES domain name?
Sadly, there is no direct way. You have to set up a VPN connection between your home and your VPC, or some other type of proxy server. However, for testing and development purposes, an SSH tunnel is usually more than sufficient. Setting up the SSH tunnel is explained in Testing VPC Domains in the AWS docs.
There are also numerous other manuals and tutorials on how to do it, e.g.:
How can I use an SSH tunnel to access Kibana from outside of a VPC with Amazon Cognito authentication?
I want to use an SSH tunnel through AWS Systems Manager to access my private VPC resources. How can I do this?
Elasticsearch api secure using SSH tunneling
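A minimal sketch of such a tunnel, assuming a bastion host in the public subnet of the VPC; the key file, AES endpoint, and bastion address below are placeholders:

```shell
# Placeholder names: replace the key, AES endpoint, and bastion address.
# Forward local port 9200 through the bastion to the domain's HTTPS
# endpoint inside the VPC (-N: no remote command, just the tunnel):
ssh -i ~/.ssh/my-key.pem -N \
    -L 9200:vpc-my-domain-abc123.us-east-1.es.amazonaws.com:443 \
    ec2-user@bastion.example.com

# In another terminal, talk to the domain as if it were local.
# -k is needed because the certificate is issued for the AES hostname,
# not for localhost:
curl -k https://localhost:9200/_cluster/health
```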

Use on-prem DNS servers inside a VPC

I have a GCP VPC and it is connected to on-prem using Public Cloud Interconnect.
Traffic flows correctly between on-prem and the VPC. All routes and firewalls are configured correctly.
Now I would like to have the company DNS servers available for VMs in my VPC.
My 3 DNS servers are
10.17.121.30 dns-01.net.company.corp
10.17.122.10 dns-02.net.company.corp
10.17.122.170 dns-03.net.company.corp
Now I have done the below config in Cloud DNS in GCP.
The DNS name is company.corp
The "In use by" field refers to my VPC.
The IPs 10.17.121.30, 10.17.122.10 and 10.17.122.170 are on-prem and are accessible from the VPC over port 53.
But after having done all the above, if I try to connect to any on-prem machine using its name, I get
telnet: could not resolve example-server.corp.sap/443: No address associated with hostname
The above request is being made from a VM inside the VPC.
This leads me to believe that my DNS servers might not be correctly configured. What have I missed here?
If you intend to have your VMs resolve hostnames within your on-premises network, you will need to make use of DNS forwarding: configure your private zone as a forwarding zone. Once this is done, queries for that zone are forwarded to your on-premises servers.
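As a sketch with gcloud (the zone and network names are hypothetical; the forwarding targets are the three on-prem servers listed above):

```shell
# Hypothetical zone and network names; the targets are the on-prem DNS
# servers from the question. Creates a private forwarding zone so that
# queries for company.corp from VMs in the VPC are sent on-prem:
gcloud dns managed-zones create corp-forwarding-zone \
    --description="Forward company.corp to on-prem DNS" \
    --dns-name="company.corp." \
    --visibility=private \
    --networks=my-vpc \
    --forwarding-targets=10.17.121.30,10.17.122.10,10.17.122.170
```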

How do I SSH tunnel to a remote server whilst remaining on my machine?

I have a Kubernetes cluster to administer which is in its own private subnet on AWS. To allow us to administer it, we have a bastion server in our public subnet. Tunnelling directly through to our cluster is easy. However, we need our deployment machine to establish a tunnel and execute commands against the Kubernetes server, such as running Helm and kubectl. Does anyone know how to do this?
Many thanks,
John
In AWS
Scenario 1
By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC).
If that's the case, you can run kubectl commands from your Concourse server (which has internet access) using the kubeconfig file provided; if you don't have the kubeconfig file, follow these steps.
Scenario 2
When you have the private cluster endpoint enabled (which seems to be your case):
When you enable endpoint private access for your cluster, Amazon EKS creates a Route 53 private hosted zone on your behalf and associates it with your cluster's VPC. This private hosted zone is managed by Amazon EKS, and it doesn't appear in your account's Route 53 resources. In order for the private hosted zone to properly route traffic to your API server, your VPC must have enableDnsHostnames and enableDnsSupport set to true, and the DHCP options set for your VPC must include AmazonProvidedDNS in its domain name servers list. For more information, see Updating DNS Support for Your VPC in the Amazon VPC User Guide.
Either you can modify your private endpoint (steps here), or follow these steps.
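To check which scenario applies, you can inspect the cluster's endpoint access configuration (the cluster name and region below are placeholders):

```shell
# Placeholder cluster name and region. Shows whether the API server
# endpoint is reachable publicly, privately, or both:
aws eks describe-cluster --name my-cluster --region us-east-1 \
    --query "cluster.resourcesVpcConfig.{public:endpointPublicAccess,private:endpointPrivateAccess}"
```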
There are probably simpler ways to get this done, but the first solution that comes to mind is simple SSH port forwarding.
Assuming that you have SSH access to both machines (i.e. Concourse has SSH access to Bastion, and Bastion has SSH access to the cluster), it can be done as follows:
First, set up so-called local SSH port forwarding on Bastion (pretty well described here):
ssh -L <kube-api-server-port>:localhost:<kube-api-server-port> ssh-user@<kubernetes-cluster-ip-address-or-hostname>
Now you can access your kubernetes api from Bastion by:
curl localhost:<kube-api-server-port>
However, this still isn't what you need. Now you need to forward it to your Concourse machine. On Concourse, run:
ssh -L <kube-api-server-port>:localhost:<kube-api-server-port> ssh-user@<bastion-server-ip-address-or-hostname>
From now you have your kubernetes API available on localhost of your Concourse machine so you can e.g. access it with curl:
curl localhost:<kube-api-server-port>
or incorporate it into your .kube/config.
Let me know if it helps.
You can also make such tunnel more persistent. More on that you can find here.
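For example, a more persistent version of the same tunnel could use autossh (assumptions: autossh is installed on the Concourse machine, and the hostnames/ports are placeholders):

```shell
# Placeholder hosts and ports. autossh restarts the tunnel whenever it
# drops; -M 0 disables the extra monitoring port and relies on the
# ServerAlive options to detect a dead connection instead:
autossh -M 0 -f -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -L 6443:localhost:6443 ssh-user@bastion-host
```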

Why do the hostname and IP change after a restart of EC2?

After restarting an AWS EC2 instance, its hostname and public IP change.
Remote Docker clients are affected because they rely (export DOCKER_HOST) on these public names.
How can this dynamic public IP problem of EC2 be resolved?
By default, AWS-assigned public IP addresses and hostnames are ephemeral, meaning they are released back to the pool when you stop and start the instance. If you really need a persistent IP address, you can use Elastic IPs, but bear in mind there's a limit per region.
Note: I'd still recommend evaluating the need for a public IP from the IPv4 pool, as they are a scarce resource. Most of the time, you can get by with the right combination of security groups and private IPs, along with Route 53 hosted zones for friendly naming, assuming the instances are in the same VPC or can communicate via VPC peering.
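A sketch of attaching an Elastic IP with the AWS CLI (the instance and allocation IDs below are placeholders):

```shell
# Placeholder IDs. Allocate an Elastic IP and attach it to the
# instance, so the public address survives a stop/start cycle:
aws ec2 allocate-address --domain vpc
# Note the AllocationId in the output, then associate it:
aws ec2 associate-address \
    --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0abc1234def567890
```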

How to set up an alternate DNS server for AWS's China EC2 instance?

Currently, the DNS server for the EC2 instance in AWS China is 10.0.0.2, as shown below:
[root@ip-10-0-0-191 ec2-user]# cat /etc/resolv.conf
search cn-north-1.compute.internal
nameserver 10.0.0.2
If the DNS server goes down, the domain name of the EC2 instance cannot be resolved. Is there any way to set up an alternate or secondary DNS server to avoid this problem?
What are the solutions for the following two environments:
I have several EC2 instances running in an AWS US region. Can I set up a DNS server in the US as an alternate DNS server for the EC2 instance in China? If this method works, what are the specific steps, and what services are needed to connect the AWS networks in China and the US?
I only have instances in the AWS China region, and no instances in other AWS regions. How can I accomplish my goal?
There is no apparent problem to solve here.
First, this IP address does not represent a single DNS server.
The Amazon DNS server is actually a service provided by the network infrastructure -- not a dedicated machine.
Anything can theoretically fail, but a failure of this subsystem is unlikely unless the physical hardware where your VM is running, or its hypervisor, has failed or is failing... in which case the instance will fail its health checks and be offline anyway.
Second, this isn't quite accurate:
If the DNS server is down, the domain name of the EC2 instance will not be resolved
If the VPC DNS resolver service were to somehow fail, this would prevent the instance from resolving any names, but resolving names that point to the instance is not something this server does. This server is only used when the instance is doing the lookup -- not when something else is looking up the instance.
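You can see this from inside the instance: the .2 address of the VPC CIDR is just where the resolver service answers, and you can point dig at it explicitly (the hostname below is a placeholder):

```shell
# Placeholder hostname. Query the VPC resolver (the .2 address of the
# VPC CIDR) directly from inside the instance:
dig +short @10.0.0.2 ip-10-0-0-191.cn-north-1.compute.internal

# Lookups of this instance's name performed by *other* hosts never
# touch this resolver; they are answered by whatever DNS those hosts
# are configured to use.
```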