I've read through all the white papers for Route 53, Private Hosted Zones, and WorkSpaces and I'm at the point of banging my head against the wall. :p
I'm having trouble getting an EC2 instance and an Amazon WorkSpace within a private cloud to communicate using a fully qualified domain name. I need them to communicate with an FQDN instead of an IP address so that I can have an encrypted connection with an SSL certificate.
Here is my configuration:
Set up a VPC with two public subnets, a route table, and an internet gateway.
The VPC is set up with DNS resolution and DNS hostnames enabled.
Set up a Simple AD for the WorkSpace within the private VPC.
Set up an EC2 instance within the private VPC in a public subnet.
Set up the EC2 instance with a security group that opens ports 80, 443, and 5003 to 0.0.0.0/0.
Set up a WorkSpace within the private VPC with no security group.
Disabled the firewall within the EC2 instance and the WorkSpace.
Set up a hosted zone in Route 53, configured as private and linked to the VPC.
Set up an A record pointing to the private IP of the EC2 instance (see the sketch after this list).
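For reference, here are the last two steps as a minimal boto3 sketch; the domain name, region, VPC ID, and IP address are placeholders for my actual values:

```python
# Minimal sketch: a private hosted zone associated with the VPC, plus an
# A record for the EC2 instance's private IP. All identifiers are placeholders.
import uuid
import boto3

r53 = boto3.client("route53")

zone = r53.create_hosted_zone(
    Name="example.internal",                      # hypothetical private domain
    CallerReference=str(uuid.uuid4()),            # idempotency token
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},
    HostedZoneConfig={"Comment": "private zone", "PrivateZone": True},
)

r53.change_resource_record_sets(
    HostedZoneId=zone["HostedZone"]["Id"],
    ChangeBatch={"Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "app.example.internal",       # hypothetical FQDN
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "10.0.1.25"}],  # EC2 private IP
        },
    }]},
)
```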
If I run a ping from the WorkSpace to the DNS record that was set up in Route 53, I get a successful connection.
If I try to reach the EC2 server using a web browser on port 80 or port 443 using the DNS record, it fails.
If I try to reach the EC2 server using an application that runs on port 5003 using the DNS record, it fails.
If I try to reach the EC2 server with either the web browser or the application by referencing the IP, it is successful. So I know that my ports aren't being blocked.
Did I configure the Route 53 record incorrectly, or am I missing a particular IAM role permission?
Thanks and let me know if I need to elaborate on any of the configuration.
Simple AD DNS is being used instead of Route 53. If the zone is the same, then only one or the other can be used, I'm afraid.
For example, if you have the host.com DNS zone in Simple AD, then the WorkSpace won't use Route 53 for any *.host.com resolution. Try a different private zone in Route 53, and therefore a different FQDN, for the EC2 instance's private IP address.
https://forums.aws.amazon.com/thread.jspa?threadID=215126
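A quick way to see which resolver is winning is to resolve the record from the WorkSpace itself; a minimal sketch, with the FQDN as a placeholder:

```python
# If Simple AD owns the overlapping zone, this will not return the IP
# from the Route 53 A record.
import socket

print(socket.gethostbyname("app.example.internal"))  # hypothetical FQDN
```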
Related
I have one private hosted zone in my VPC using Route 53. I also have one Client VPN connection to that VPC, which is functioning normally.
I want the client to get access to my website, hosted using the private zone in a private subnet, through their browser when they are connected to the Client VPN.
I have enabled "DNS Configuration" in the Client VPN settings, but my client is not able to access the hostname of the website hosted in the private hosted zone. They are able to access the website over the Client VPN connection, but only by using the IP address. I want them to access it using the hostname.
I have tried defining the DNS IP in the Client VPN settings as:
1. AWS-provided DNS (VPC CIDR + 2)
2. Route 53 inbound endpoint IPs
Neither worked. Help me out on this.
Take a look at this guide, it might be useful for you, but as far as I understand you need to use Direct Connect or AWS VPN, because even if your client is inside the VPC via your custom VPN, it still does not use the same DNS resolver: https://aws.amazon.com/premiumsupport/knowledge-center/route53-resolve-with-inbound-endpoint/
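The guide's approach centers on a Route 53 Resolver inbound endpoint. A minimal boto3 sketch of creating one; the security group and subnet IDs are placeholders, and the security group must allow TCP and UDP on port 53:

```python
# Create a Route 53 Resolver *inbound* endpoint so resolvers outside the
# VPC can query the private hosted zone. All IDs below are placeholders.
import uuid
import boto3

resolver = boto3.client("route53resolver", region_name="us-east-1")

response = resolver.create_resolver_endpoint(
    CreatorRequestId=str(uuid.uuid4()),          # idempotency token
    Name="client-vpn-inbound",                   # hypothetical name
    Direction="INBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],   # must allow TCP/UDP 53
    IpAddresses=[                                # two subnets for availability
        {"SubnetId": "subnet-0aaa0aaa0aaa0aaa0"},
        {"SubnetId": "subnet-0bbb0bbb0bbb0bbb0"},
    ],
)
print(response["ResolverEndpoint"]["Id"])
```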
I have an AWS Landing Zone Setup.
My Shared Account contains an AWS Client VPN.
My Network Account contains a Transit Gateway; this is shared with the Production Account, and the Production Account's VPC is attached.
My Production Account contains a VPC which initially had a private subnet and one EC2 instance.
Initially my team wanted to log in to the EC2 instance and install some software. This setup was successful and my team completed their work.
Now, due to some requirement changes (they have configured a website on the instance), I changed my private subnet to a public subnet by introducing an IGW entry in the route table. I have also attached an Elastic IP.
One of the other teams wanted to connect to this URL/portal, so I added their corporate VPN public address to the security group. They are able to access the website/portal/URL easily.
Now my team wants to access the URL, but they are not able to, and I cannot make it public or open to the world.
Other Configurations:
The current security group contains an inbound all-traffic rule from the AWS Client VPN and the corporate VPN.
DNS entries and resolution are handled by another team; they have made an entry mapping the URL to the public IP it should resolve to.
What changes should I make so that my team can connect to the VPN and be able to access this portal/URL?
I added the entries for my URL to my local machine's hosts file and mapped them to my EC2 private IP.
Now I can connect to my VPN and access the URLs from my browser.
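For reference, a hosts-file entry is just an IP-to-name mapping line. A minimal sketch that appends one; the hostname and private IP are placeholders, the file is /etc/hosts on Linux/macOS (C:\Windows\System32\drivers\etc\hosts on Windows), and writing it needs admin rights:

```python
# Hypothetical values: replace with the portal's hostname and the EC2 private IP.
ENTRY = "10.0.1.25  portal.example.internal\n"

# Path shown for Linux/macOS; run with sufficient privileges.
with open("/etc/hosts", "a") as hosts:
    hosts.write(ENTRY)
```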
I have a VPC in my AWS account peered to a VPC in a partner's account. The partner account has Route 53 Resolvers to resolve DNS within domain.com to IPs in their peered VPC.
I've associated my VPC with their private hosted zone.
Within my VPC (for example, SSHed into an EC2 instance), the DNS resolution for foo.bar.domain.com works great - I'm resolving & connecting to the resources in their VPC as expected.
However, when I'm running an AWS Client VPN on my personal machine, I'm unable to resolve foo.bar.domain.com to the same private IP address through the VPN. So, for example, a development server on my machine that connects to the partner VPC URLs is failing.
I've tried hosting a DNS server in the VPC with a zone forwarding rule pointing to the Route 53 IPs.
I've tried setting the VPN DNS server IP to the Route 53 IPs.
But none of that has worked. Any help would be appreciated.
The answer was simpler than I thought: I just had to set the DNS server in the AWS Client VPN endpoint settings to the private IP address of my VPC's DNS resolver (which is always the VPC's base CIDR + 2).
From the AWS docs:
If you're unsure about which IP address to specify for the DNS servers, specify the VPC DNS resolver at the .2 IP address in your VPC.
Client VPN Endpoints > Modify Client VPN Endpoint > Other optional parameters > Enable DNS Servers > IP Address
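The same change expressed as a minimal boto3 sketch; the endpoint ID and VPC CIDR are placeholders:

```python
# Point the Client VPN endpoint at the VPC's built-in resolver,
# i.e. the VPC's base CIDR + 2.
import ipaddress
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc_cidr = ipaddress.ip_network("10.0.0.0/16")    # your VPC's CIDR
vpc_resolver = str(vpc_cidr.network_address + 2)  # e.g. "10.0.0.2"

ec2.modify_client_vpn_endpoint(
    ClientVpnEndpointId="cvpn-endpoint-0123456789abcdef0",  # hypothetical ID
    DnsServers={"CustomDnsServers": [vpc_resolver], "Enabled": True},
)
```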
I have a GCP VPC and it is connected to on-prem using Public Cloud Interconnect.
Traffic flow between on-prem and the VPC is OK. All routes and firewalls are configured correctly.
Now I would like to have the company DNS servers available for VMs in my VPC.
My 3 DNS servers are
10.17.121.30 dns-01.net.company.corp
10.17.122.10 dns-02.net.company.corp
10.17.122.170 dns-03.net.company.corp
Now I have done the below config in Cloud DNS in GCP.
The DNS name is company.corp
The "In use by" is referring my VPC.
The IPs 10.17.121.30, 10.17.122.10 and 10.17.122.170 are on-prem and are accessible from the VPC over port 53.
But after having done all the above, if I try to connect to any on-prem machine using its name, I get
telnet: could not resolve example-server.corp.sap/443: No address associated with hostname
The above request is being made from a VM inside the VPC.
Which leads me to believe that my DNS servers might not be correctly configured. What have I missed here?
If you intend to have your VMs able to resolve hostnames within your on-premises network, then you will need to make use of DNS forwarding. You would need to configure your private zone as a forwarding zone; once this is done, queries for that zone will be forwarded to your on-premises servers.
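A minimal sketch of creating such a forwarding zone through the Cloud DNS v1 API (google-api-python-client); the project ID, network name, and zone name are placeholders, and the target name servers are the three on-prem IPs from the question:

```python
# Create a private *forwarding* zone for company.corp that forwards
# queries to the on-prem DNS servers. Requires application default
# credentials with Cloud DNS permissions.
from googleapiclient import discovery

dns = discovery.build("dns", "v1")

zone_body = {
    "name": "company-corp-forwarding",   # hypothetical zone name
    "dnsName": "company.corp.",          # note the trailing dot
    "description": "Forward company.corp to on-prem DNS",
    "visibility": "private",
    "privateVisibilityConfig": {
        "networks": [{
            # hypothetical project and network
            "networkUrl": "https://www.googleapis.com/compute/v1/projects/my-project/global/networks/my-vpc"
        }]
    },
    "forwardingConfig": {
        "targetNameServers": [
            {"ipv4Address": "10.17.121.30"},
            {"ipv4Address": "10.17.122.10"},
            {"ipv4Address": "10.17.122.170"},
        ]
    },
}

dns.managedZones().create(project="my-project", body=zone_body).execute()
```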
I have to connect to an EC2 instance on the basis of its hostname. Can anyone please help me with how I can connect to an EC2 instance by hostname from outside the AWS domain?
Right now I am using the IP address to connect to the EC2 instance.
All help is highly appreciated.
If your EC2 instance is inside a VPC and it doesn't have a public DNS name, you probably need to enable DNS hostnames for your VPC. In the AWS console, go to the VPC screen under "Your VPCs". Select your VPC, click the Actions button, and select "Edit DNS Hostnames". Also make sure that DNS resolution is enabled.
Note from the AWS doc:
If you enable DNS hostnames and DNS support in a VPC that didn't previously support them, an instance that you already launched into that VPC gets a public DNS hostname if it has a public IP address or an Elastic IP address.
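The same two attributes can be set with a minimal boto3 sketch; the VPC ID is a placeholder, and each attribute needs its own call:

```python
# Enable DNS resolution and DNS hostnames on a VPC, mirroring the
# console steps above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0123456789abcdef0"  # hypothetical VPC ID

ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})
```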