Does Google Cloud provide public hostnames for their Compute instances?
AWS seems to generate public hostnames for their EC2 instances:
A public (external) DNS hostname takes the form ec2-public-ipv4-address.compute-1.amazonaws.com for the us-east-1 region, and ec2-public-ipv4-address.region.compute.amazonaws.com for other regions. We resolve a public DNS hostname to the public IPv4 address of the instance outside the network of the instance...
Similar question:
This seems like a similar question, but (1) setting up DNS seems like overkill, (2) it looks like I'd need to do something outside of Google Cloud anyway or it isn't public (not sure), and (3) it could be outdated (2014).
No, GCE doesn't offer public hostnames for instances. It does assign an external IP address to each instance. Associating a DNS record with that IP yourself is the only way to get a public hostname.
GCE does have built-in private hostnames within the same network. For example, two instances in the same VPC can reach each other by name:
Instance 'test-instance': start server on :8080
Instance 'second-instance': curl test-instance:8080
// Response 'Hello World'
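For instance, a minimal way to try this yourself (assuming python3 is available on the instance; the instance names are the ones used above):

# on test-instance: start a throwaway server on port 8080
test-instance$ python3 -m http.server 8080
# on second-instance, in the same VPC network, reach it by name
second-instance$ curl http://test-instance:8080/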
No. Source: FridayPush's answer (thanks! Judging from his profile, he seems trustworthy on Google Cloud matters :-)).
The reason I wrote a separate answer is to make it clear that you can't get a public hostname entirely within Google Cloud. You can either have an internal hostname entirely within Google Cloud, or you'll need to do something outside of Google Cloud (e.g., own a domain name) to get a public hostname.
GCE instances don't currently have a public DNS name for their external IP address. But there is now a gcloud compute config-ssh (docs) command that's a pretty good substitute.
This will insert Host blocks into your ~/.ssh/config file that contain the IP address and configuration for the host key.
Although this only helps with SSH (and SSH-based applications like Mosh and git+ssh), it does have a few advantages over DNS:
There is no caching/propagation delay as you might have with DNS
It pre-populates the correct host key, and host-key checking still works correctly even if the ephemeral IP address changes.
Example:
$ gcloud compute config-ssh
...
$ ssh myhost.us-west1-b.surly-koala-232
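For reference, the Host block it writes into ~/.ssh/config looks roughly like this (the exact options can vary by gcloud version; the IP address and host-key alias below are placeholders):

Host myhost.us-west1-b.surly-koala-232
    HostName 203.0.113.7
    IdentityFile /home/you/.ssh/google_compute_engine
    UserKnownHostsFile=/home/you/.ssh/google_compute_known_hosts
    HostKeyAlias=compute.1234567890
    IdentitiesOnly=yes
    CheckHostIP=no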
If your GCP instance has an external IP, ephemeral or static, then that IP address has a public DNS entry that you can easily find with a reverse DNS lookup.
Example:
# get your external IP
$ curl icanhazip.com
34.88.81.150
# do a reverse DNS lookup
$ dig +short -x 34.88.81.150
150.81.88.34.bc.googleusercontent.com.
A one-liner to get that public DNS entry:
# (sed removes the trailing dot)
$ dig +short -x $(curl -s icanhazip.com) | sed "s/.$//"
150.81.88.34.bc.googleusercontent.com
Related
I have one VPC with two Subnets (SubnetA and SubnetB).
My team wants multiple IPs assigned to the instance, one from each subnet.
The instance already had one private IP (from SubnetA, the primary one) when I launched it; then I attached another private IP from SubnetB via the console's Attach network interface option.
I can see both IPs in the console under the Manage IP Addresses option.
I rebooted the instance, expecting to see both IPs when I run ifconfig, but I can see only the primary one.
To cross-check whether the private IP is actually attached to the instance, I queried the instance metadata using the following commands:
curl -s http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:14:46:91:bc:34/local-ipv4s
curl -s http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:1d:2a:75:ax:04/local-ipv4s
I can see both of the IPs in the output for the above two commands respectively.
I checked the status of NetworkManager: systemctl status NetworkManager
It was stopped
I started the service and enabled NetworkManager to start automatically at boot time, using the following commands:
systemctl start NetworkManager
systemctl enable NetworkManager
Then I checked the output of ifconfig
This time it showed me both MAC addresses; the only difference was that for the second one there was no IP address. So the interface is up and the underlying device is found, but there is no IP address associated with it.
So I tried both options to associate an IP:
Assign an IP address manually:
sudo ifconfig ens6 w.x.y.z
Or contact the DHCP server, if one exists, and let it provide an IP address for the interface:
sudo dhclient -v ens6
Both of them worked, and I can see both IPs under inet.
The remaining problem was that I had to do this every time I rebooted the instance.
So I tried to add a permanent route using the following command:
sudo /sbin/route add default gw 1xx.xx.2xx.193
Here the IP is the second IP from SubnetB, but I am getting the error:
SIOCADDRT: Network is unreachable
To solve the above problem, I already had a file /etc/sysconfig/network-scripts/ifcfg-ens5 with the details for the primary IP, so I added another file, /etc/sysconfig/network-scripts/ifcfg-ens6, with the necessary details for the secondary IP.
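For illustration, a minimal sketch of what that second file can contain, assuming DHCP on the secondary interface (adapt to your setup):

# /etc/sysconfig/network-scripts/ifcfg-ens6 (sketch)
DEVICE=ens6
TYPE=Ethernet
BOOTPROTO=dhcp
ONBOOT=yes
# keep the primary interface's default route
DEFROUTE=no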
This is the guide I referred to.
I rebooted, and it is working.
But I am not able to ping the secondary IP.
I think I have to add another gateway from the second subnet, but I'm not sure about this.
What else needs to be done so that I can route traffic, ping, and SSH using the secondary IP?
Please refer to my VPC subnet CIDRs:
Subnet A: 1.7.2.128/26
Subnet B: 1.7.2.192/26
Output of ip route:
Update:
Today, when I started the server, I was able to ping the secondary IP (.200) but not the primary one (.136) from one of my test instances. Also, SSH works using the primary IP.
ip route add default via 1XX.XX.XXX.X9X dev ens6 table 2000;
ip route add 1XX.7X.2XX.X9X dev ens6 table 2000;
ip rule add from 1XX.7X.2XX.1XX lookup 2000;
The above commands resolved the issue, and I am able to ping my secondary IP.
To make this configuration persist across reboots, I added the same commands to rc.local.
In the first line, the IP is the gateway IP (the second IP in the subnet range).
The IP in the second and third lines is the actual secondary IP of my server.
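For reference, a generic sketch of those rc.local additions with the placeholders spelled out (substitute your own gateway and secondary IP; table 2000 is an arbitrary table number):

#!/bin/sh
# policy routing for the secondary interface (ens6)
# [SUBNET_B_GATEWAY] = second IP in SubnetB's range, [SECONDARY_IP] = the instance's secondary IP
ip route add default via [SUBNET_B_GATEWAY] dev ens6 table 2000
ip route add [SECONDARY_IP] dev ens6 table 2000
ip rule add from [SECONDARY_IP] lookup 2000
exit 0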
By default AWS EC2 instances are accessible using something like this
ssh -i "key.pem" ubuntu#ec2-00-00-00-00.us-east-2.compute.amazonaws.com
Is it possible to change that to something along the lines of:
ssh -i "key.pem" ubuntu#ec2-00-00-00-00.myowndomain.com
Sorry if this is a noob question; I just can't figure out what to even Google. I either get tutorials about SSHing in or tutorials about running websites on EC2, neither of which is helpful.
If you have your own domain name, then:
Assign an Elastic IP address to the instance, which is a static IP address (it won't change if the instance is stopped and started)
In the DNS system that controls your domain name, create an A record for the subdomain (e.g. app.mydomain.com) that points to the Elastic IP address (a CNAME can only point to another name, not to an IP address)
As long as both ec2-00-00-00-00.us-east-2.compute.amazonaws.com and ec2-00-00-00-00.myowndomain.com resolve to the same IP, you will have no accessibility issue with the underlying instance.
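For example, the record in your zone would look roughly like this (the IP below is a placeholder for your Elastic IP):

app.mydomain.com.    300    IN    A    203.0.113.25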
At our company we have three AWS accounts: the main one, used as the "root" account for IAM and hosting an OpenVPN Access Server, and two others, pro and stg. Each one has its own VPC with different IP ranges, and we have a VPC peering between the root and pro accounts, and another one between root and stg. IP routing is already set up and everything is under control on this side.
(I'm sorry I can't upload images yet, so here you have the link)
VPN+VPC-Peering
The problem comes with DNS resolution. The setup is this one:
I've installed BIND9 on the OpenVPN server to allow DNS forwarding for privately hosted domains, using a configuration like this in named.conf.local:
zone "stg-my-internal-domain.com" IN {
type forward;
forward only;
forwarders { 10.229.1.100;10.229.2.100; };
};
zone "pro-my-internal-domain.com" IN {
type forward;
forward only;
forwarders { 10.228.1.100;10.228.2.100; };
};
There are also two Route 53 inbound resolver endpoints (a simple BIND server running in each VPC also works) at 10.229.1.100 and 10.229.2.100 for the stg account, and 10.228.1.100 and 10.228.2.100 for the pro account.
VPN clients have OpenVPN profiles that use the Access Server as DNS resolver.
From my client, I can resolve both my-service-1.pro-my-internal-domain.com and my-service-2.stg-my-internal-domain.com perfectly, but the problem comes when I want to resolve internal domain names like the ones AWS generates inside each VPC, such as my-service-2.eu-west-1.compute.internal.
I know that this is an anti-pattern and I should use the private domain as much as I can, but in some cases, like EMR clusters, the YARN and Hadoop managers use links that reference the internal AWS names, making resolution impossible.
So my question is: Is there any way to configure DNS to delegate resolution to a secondary address if primary fails?
I could set up a forwarder for the eu-west-1.compute.internal zone using all the accounts' resolvers (sketched below), but the DNS specification says that a secondary nameserver is only used if the first one is unreachable; as long as the first one answers, even with an empty or NXDOMAIN response, that is still a valid response and the second one will not be queried.
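The forwarder I have in mind would look like this (a sketch only, reusing the resolver addresses above):

zone "eu-west-1.compute.internal" IN {
    type forward;
    forward only;
    forwarders { 10.228.1.100; 10.228.2.100; 10.229.1.100; 10.229.2.100; };
};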
Any help is really appreciated!
Why not just change the internal hostname to a public DNS name? Those services use the hostname assigned to them, of course, and you can change it.
See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-hostname.html
You may (or may not) need to assign fixed private IPs to each. In any case, publish the private IP in a public DNS zone. You should then be able to resolve these names properly. Note that you can also have a script run on each instance at startup to update the hostname and DNS record.
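A rough sketch of such a startup script (the hostname, domain, and hosted-zone ID below are placeholders; this assumes the AWS CLI is installed and the instance role can update Route 53):

#!/bin/bash
# set the hostname and publish the instance's private IP in a public Route 53 zone
NAME="app1.mydomain.com"     # placeholder hostname
ZONE_ID="Z0EXAMPLE"          # placeholder hosted-zone ID
PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
hostnamectl set-hostname "$NAME"
cat > /tmp/record.json <<EOF
{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"$NAME","Type":"A","TTL":60,"ResourceRecords":[{"Value":"$PRIVATE_IP"}]}}]}
EOF
aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" --change-batch file:///tmp/record.json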
For a good discussion on private IP addresses in public DNS, see https://serverfault.com/questions/4458/private-ip-address-in-public-dns
For reference, here is the best answer there:
Some people will say no public DNS records should ever disclose private IP addresses....with the thinking being that you are giving potential attackers a leg up on some information that might be required to exploit private systems. Personally, I think that obfuscation is a poor form of security, especially when we are talking about IP addresses because in general they are easy to guess anyway, so I don't see this as a realistic security compromise. The bigger consideration here is making sure your public users don't pick up this DNS record as part of the normal public services of your hosted application. i.e., external DNS lookups somehow start resolving to an address they can't get to. Aside from that, I see no fundamental reason why putting private address A records into the public space is a problem....especially when you have no alternate DNS server to host them on. If you do decide to put this record into the public DNS space, you might consider creating a separate zone on the same server to hold all the "private" records. This will make it clearer that they are intended to be private....however for just one A record, I probably wouldn't bother.
AWS only supports DNS resolution of these internal IPv4 DNS hostnames if your VPN is in the same region as your EMR cluster (or any other compute resource). I reached out to their support and they confirmed this.
For example, I have an AWS Client VPN endpoint set up in Frankfurt and an EMR cluster in Ireland. I push the VPC's private DNS server to my host (and all other related config is enabled in both VPCs) so that I can resolve private Route 53 DNS zone records.
While I am connected to the VPN,
I can't resolve this:
$ dig +short ip-10-11-x-x.eu-west-1.compute.internal
$
But I can resolve the following, which is an instance that's in the same region as the VPN endpoint:
$ dig +short ip-10-10-x-y.eu-central-1.compute.internal
10.10.x.y
How to solve this:
Either move your EMR clusters to the same region as your VPN, or the other way around.
But the simplest solution might be to just use a Chrome plugin (here's an example) that automatically redirects ip-x-y-z... URLs to x.y.z IPs.
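If you just need the address on the command line, note that the internal name already encodes the IP, so something like this works too (the hostname here is purely illustrative):

$ echo "ip-10-11-12-13.eu-west-1.compute.internal" | sed -E 's/^ip-([0-9]+)-([0-9]+)-([0-9]+)-([0-9]+)\..*/\1.\2.\3.\4/'
10.11.12.13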
One of the options to SSH into the instance is in-browser SSH. It only works if I allow SSH from the 0.0.0.0/0 IP range.
Is there a way to get the specific IP address range(s) from which GCP will establish the in-browser SSH connection?
P.S.: I am not talking about SSH from my laptop. I am talking about in-browser SSH.
The Handling "Unable to connect on port 22" error message documentation states that you can get Google's IP address range using the public SPF records.
Per the documentation, you'll need to run the three commands below from a Linux VM instance:
nslookup -q=TXT _netblocks.google.com 8.8.8.8
nslookup -q=TXT _netblocks2.google.com 8.8.8.8
nslookup -q=TXT _netblocks3.google.com 8.8.8.8
You may need to install dnsutils on the VM instance to be able to use nslookup.
I just tested it and got various ranges for IPv6 and IPv4. I believe these are the ranges you are looking for.
I also wanted to restrict SSH access to in-browser only and found this:
The client IP address in the SSH connection will be part of the range 35.235.240.0/20. This range is the pool of IP addresses used by IAP to proxy the connection from your browser to your instance. So, you can create a more restrictive VPC firewall rule allowing SSH connections only from this IP address range. As a result, only users allowed by IAP will be able to connect to VM using SSH.
(from https://cloud.google.com/community/tutorials/ssh-via-iap)
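Based on that, a firewall rule along these lines should do it (the rule name is just an example):

gcloud compute firewall-rules create allow-ssh-from-iap \
    --direction ingress \
    --action allow \
    --rules tcp:22 \
    --source-ranges 35.235.240.0/20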
In order to allow SSH access from the GCP Console in your laptop's browser, you need to find the public (external) IP address of your LAN. You can check it at this link. Once you have your external IP address, you need to create a firewall rule to allow SSH access only from that address.
Example:
gcloud compute firewall-rules create test-ssh-example \
    --action allow \
    --direction ingress \
    --target-tags=[TAG] \
    --rules tcp:22 \
    --source-ranges [YOUR_EXTERNAL_IP]/32 \
    --priority 1000
(--target-tags is optional: it applies the rule only to VMs carrying that tag. Replace [YOUR_EXTERNAL_IP] with the external IP address you found above.)
With this option, you will be able to SSH to a VM instance only from that IP address.
Why don't you run the command who in your SSH session to see where the connection is originating from?
Then search the excellent Cloud Platform documentation to see if an automation to allow this already exists. Otherwise, write one.
This is not the best place to do your firewalling from, as it's one more configuration to keep up to date. I would recommend an application firewall, or just adding a rate limit to your existing firewall. The chances of someone logging into your servers, if you use keys, are virtually zero, even with a 14-character random password and rate-limited new connections. If you use keys or a password manager (as you should), use fail2ban.
There is a topic in the EC2 documentation, Changing the System Hostname. Why does one need to change it? Just for fun? Just to have a nice shell prompt?
// change this
ubuntu@ip-123-12-1-231 ~ $
// to this?
ubuntu@my-beautiful-hostname ~ $
I'm learning how AWS DNS works, and where the DNS that resolves my EC2 instance's default public DNS name to its public IP address actually lives:
Public DNS: ec2-xx-xx-xxx-xx.ap-southeast-2.compute.amazonaws.com
Public IP: xx-xx-xxx-xx
And how can I host multiple apps with real domain names (example1.com, example2.com, and so on) on one EC2 instance, and how do I modify and manage DNS? I actually don't know what to read about this in the docs; I read everything related to hostnames and DNS, found the topic Changing the System Hostname, and don't understand why one would want to change a hostname, or whether that is valuable info for me.
UPD:
And now a real, practical question for those specimens who like closing questions quietly.
Where does the DNS live in an EC2 instance? How is Public DNS mapped to Public IP? Where is that record in my EC2 Ubuntu instance? Is Route53 involved in it?
Where does the DNS live in an EC2 instance?
It doesn't. DNS resolution used by the server is configured in /etc/resolv.conf and /etc/nsswitch.conf. The hostname/domain name for that server is set (on Red Hat derived systems) in /etc/sysconfig/network.
How is Public DNS mapped to Public IP?
With a DNS record
Where is that record in my EC2 Ubuntu instance?
In the DNS for the domain that you have attached it to
Is Route53 involved in it?
Only if you are using Route53 for DNS
EC2 DNS location (source):
In EC2-Classic, the Amazon DNS server is located at 172.16.0.23.
In EC2-VPC, the Amazon DNS server is located at the base of your VPC network range plus two.
For more information, see Amazon DNS Server in the Amazon VPC User Guide
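For example, assuming a VPC with CIDR 10.0.0.0/16 (a placeholder), the Amazon DNS server would be 10.0.0.2, which you can usually confirm on the instance (unless a local stub resolver such as systemd-resolved sits in front of it):

$ cat /etc/resolv.conf
nameserver 10.0.0.2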
Well, I had the same issue as you did, and someone replied to me with this:
It isn't a huge deal if you are just running a single server, mostly to help you identify a server with local networking. Some things like mail servers will use your hostname unless you specify otherwise.
This is an example of somewhere I saw that done
My original query
why do some people set a hostname and some don't? What's the use?
hostnamectl set-hostname
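For example, to get the prompt shown earlier (the name is illustrative; on cloud-init based images you may also need preserve_hostname so the change survives reboots):

$ sudo hostnamectl set-hostname my-beautiful-hostname
# optionally, keep cloud-init from resetting it at boot:
$ echo "preserve_hostname: true" | sudo tee -a /etc/cloud/cloud.cfg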